> How to turn Jira from a ticket-tracking system into a real engineering management tool — without magic, without expensive consultants, and without illusions.
---
Russian version [[KPI в Jira. практическое руководство для инженерных лидеров]]
## 1. Introduction
### Why KPI Systems in Software Development Usually Fail
Let me start with an uncomfortable truth: most KPI programs in engineering teams don't improve performance — they destroy it. Slowly, imperceptibly, but systemically.
The pattern is always the same. A new leader arrives with a mandate to "make processes transparent." Within 60 days, a KPI framework is announced. Teams start optimizing for metrics instead of building good software. Six months later, velocity is climbing on paper, quality is falling in reality, and your best engineers start leaving because they feel like they're under a microscope.
I've seen this in banking systems serving 10,000 users and in fintech startups with 15 developers. The mechanics are always the same.
Why does this happen? Three fundamental mistakes:
**First:** measuring activity instead of outcomes. Closed ticket counts, commit counts, velocity — all of these measure motion, not progress. A team can be extremely active and simultaneously be building a product that destroys the customer experience.
**Second:** measuring people instead of the system. This is the most destructive mistake. Deming was right: 85–95% of performance problems are systemic, not individual. An engineer working in a broken process, with a poor codebase, without a proper development environment will show terrible metrics regardless of their skill level. When you measure people instead of the system, you punish symptoms and leave the disease untreated.
**Third:** the wrong tool, or the right tool used incorrectly. This is exactly where Jira is simultaneously the solution and the problem.
### Jira as an Underused Management Tool
Jira exists in nearly every engineering organization. And in nearly every organization it's used at roughly 20% of its capacity: creating tickets, running sprints, occasionally glancing at the burndown. That's it.
Yet Jira already contains the data needed to track Lead Time, Cycle Time, Bug Escape Rate, reopen rate, WIP, sprint predictability, and much more. You don't need to collect this data — it's already there. You just need to learn how to read it.
The goal of this article is to show you exactly how. No generalities — concrete JQL queries, dashboard configurations, and examples from real fintech practice.
---
## 2. Native Jira Capabilities
Before buying plugins, you need to understand what Jira can already do. This matters for two reasons: first, many of the necessary metrics are available for free; second, without understanding the native data, plugins will be useless — garbage in, garbage out.
### Issue Lifecycle and Status Transitions
Every issue in Jira stores a complete history of status transitions with timestamps. This is the foundation for calculating Cycle Time and detecting process bottlenecks.
Natively available fields:
- `created` — issue creation date
- `updated` — date of last modification
- `resolutiondate` — the date the issue was resolved/closed
- `status` — current status
- `statusCategory` — status category (To Do / In Progress / Done)
These fields form the basis for core formulas:
- **Lead Time** = `resolutiondate` − `created`
- **Cycle Time** ≈ `resolutiondate` − (date of first transition to "In Progress")
An important caveat: native Jira doesn't store transition history in a format that's easily queryable through JQL. For accurate Cycle Time you'll need either a plugin (Time in Status) or an export to an external tool. But for rough estimates, native data is sufficient.
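For per-issue accuracy without a plugin, the transition history is reachable over REST. A minimal sketch (instance URL, credentials, and the issue key are placeholders; assumes a status literally named "In Progress"):

```python
# Sketch: approximate Cycle Time for one issue by reading its changelog
# over REST and finding the first transition into "In Progress".
from datetime import datetime
import requests

BASE = "https://yourcompany.atlassian.net"   # placeholder instance
AUTH = ("you@example.com", "API_TOKEN")      # e-mail + API token

issue = requests.get(
    f"{BASE}/rest/api/2/issue/PAY-123",
    params={"expand": "changelog"},   # changelog holds status transitions
    auth=AUTH,
).json()

# Earliest transition into "In Progress" (the changelog may be truncated
# for issues with very long histories).
first_in_progress = min(
    (history["created"]
     for history in issue["changelog"]["histories"]
     for item in history["items"]
     if item["field"] == "status" and item["toString"] == "In Progress"),
    default=None,
)
resolved = issue["fields"]["resolutiondate"]

if first_in_progress and resolved:
    fmt = "%Y-%m-%dT%H:%M:%S.%f%z"    # Jira's timestamp format
    cycle = datetime.strptime(resolved, fmt) - datetime.strptime(first_in_progress, fmt)
    print(f"Cycle Time for PAY-123: {cycle.days} days")
```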
### Sprint Data
If you're using Scrum boards, Jira stores:
- Which sprint an issue belongs to
- Whether an issue was added to the sprint after it started (sprint scope change)
- Whether an issue was completed in the sprint or carried over
- Velocity for each sprint
This is the foundation for predictability and team stability metrics.
### Components, Labels, and Versions
Frequently ignored but critically important attributes:
- **Components** — which part of the system an issue relates to (e.g., `payment-processing`, `atm-xfs-layer`, `reporting`)
- **Labels** — flexible tags for cross-functional classification
- **Fix Version** — which release an issue belongs to
- **Affects Version** — which version a bug was found in
Without these fields being properly populated, most useful KPIs become impossible to calculate. Standardizing these is the first thing to address during implementation.
### Issue Links
Jira natively supports these link types:
- `blocks / is blocked by`
- `duplicates / is duplicated by`
- `relates to`
- `is caused by / causes` (can be added as a custom link type)
Proper use of links allows you to build a traceable graph: Story → Bug → Incident, which is critical for quality metrics.
### Native Reports and Gadgets
|Report|What It Shows|What It's Useful For|
|---|---|---|
|Velocity Chart|Story points / issues per sprint|Team performance trend|
|Sprint Report|Completed vs. incomplete issues|Sprint predictability|
|Burndown Chart|Work remaining over time|Real-time sprint control|
|Control Chart|Cycle Time per issue|Identifying outliers|
|Cumulative Flow Diagram|Issue flow by status|Detecting WIP problems|
|Release Burndown|Progress toward a release|Delivery planning|
|Created vs. Resolved|Incoming vs. outgoing issue flow|Team load balance|
---
## 3. Core KPIs in Native Jira
### A. Delivery Metrics
#### Lead Time
**Definition:** The total elapsed time from issue creation to resolution. Includes waiting time in the backlog.
**Formula:**
```
Lead Time = resolutiondate - created
```
**JQL for analysis:**
```jql
project = "PAYMENTS"
AND issuetype = Story
AND status = Done
AND resolutiondate >= -30d
ORDER BY resolutiondate DESC
```
To calculate Lead Time, export results to CSV and compute the date difference. Native Jira has no built-in time aggregation — this is one of its key limitations.
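As a sketch of that export-and-compute step — assuming the standard CSV export with "Created" and "Resolved" columns (adjust the date parsing to your Jira locale):

```python
# Sketch: Lead Time statistics from a Jira CSV export of the JQL above.
# Jira date formats vary by locale; errors="coerce" drops unparseable rows.
import pandas as pd

df = pd.read_csv("payments_done_30d.csv")
created = pd.to_datetime(df["Created"], errors="coerce")
resolved = pd.to_datetime(df["Resolved"], errors="coerce")

lead_time_days = (resolved - created).dt.total_seconds() / 86400
print(f"median: {lead_time_days.median():.1f} d")
print(f"p85:    {lead_time_days.quantile(0.85):.1f} d")
print(f"p95:    {lead_time_days.quantile(0.95):.1f} d")
```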
**Gadget:** Control Chart configured across all statuses from "Open" to "Done."
**Interpretation:** If the average Lead Time exceeds 6 weeks for a medium-sized Story, you have a systemic problem: an oversized backlog queue, insufficient team capacity, or oversized issues. Look at the trend, not the absolute value.
---
#### Cycle Time
**Definition:** The time from when an issue is actively picked up to when it is closed. This is what the engineering team can directly influence.
**JQL for approximate Cycle Time:**
```jql
project = "PAYMENTS"
AND issuetype in (Story, Bug)
AND status changed to "In Progress" during (-60d, now())
AND status = Done
```
**Gadget:** Control Chart — the key tool for Cycle Time in Jira. It renders each issue as a point on a chart of "close date vs. duration." Outliers above the 85th percentile are your problem issues.
**Interpretation:**
- Rising median Cycle Time → accumulating technical debt or issues are growing in size
- High variability (wide scatter of points) → unpredictable process, unstable definition of done
- Stable median + rare outliers → a mature process
---
#### Throughput
**Definition:** The number of issues completed per time period, typically broken down by issue type.
**JQL:**
```jql
project = "PAYMENTS"
AND issuetype in (Story, Task)
AND status changed to Done during (-14d, now())
```
Breakdown by type:
```jql
project = "PAYMENTS"
AND issuetype = Bug
AND status changed to Done during (-14d, now())
AND labels = "production-defect"
```
**Gadget:** Created vs. Resolved Chart. If the Resolved line is consistently below the Created line, your backlog is growing — an early signal of team overload.
**Interpretation:** Never compare throughput between teams without normalizing for complexity. A team closing 8 issues per week may be delivering more value than one closing 25, if the first team is working on critical payment logic and the second on UI components.
---
#### Sprint Completion Rate
**Definition:** The percentage of issues committed at the sprint's start that are completed by the sprint's end — excluding issues added after the sprint began.
**JQL for issues added after sprint start:**
```jql
project = "PAYMENTS"
AND sprint = "PAYMENTS Sprint 42"
AND sprint not in openSprints()
AND created >= "2025-01-15"
```
Where `2025-01-15` is the sprint start date. Treat this as an approximation: it only catches issues *created* after the sprint started, so backlog items pulled into the running sprint won't match. The native Sprint Report marks mid-sprint additions with an asterisk and is the more reliable source.
**JQL for incomplete sprint issues:**
```jql
project = "PAYMENTS"
AND sprint = "PAYMENTS Sprint 42"
AND status != Done
AND sprint not in openSprints()
```
**Gadget:** Sprint Report (native). Look at "Completed Issues" vs. "Issues Not Completed."
**Target value:** 80–85% consistently is excellent. Below 70% consistently signals a systemic planning problem. 100% consistently is a red flag — the team is sandbagging commitments.
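If you'd rather script the number than read it off the Sprint Report, a rough sketch against the public Agile REST API (instance, credentials, and sprint id are placeholders; it counts issues whose status category is Done at query time, which approximates but doesn't replicate the official report):

```python
# Sketch: approximate Sprint Completion Rate via the Agile REST API.
import requests

BASE = "https://yourcompany.atlassian.net"   # placeholder instance
AUTH = ("you@example.com", "API_TOKEN")
SPRINT_ID = 42                               # hypothetical sprint id

issues, start_at = [], 0
while True:                                  # paginate through the sprint
    page = requests.get(
        f"{BASE}/rest/agile/1.0/sprint/{SPRINT_ID}/issue",
        params={"startAt": start_at, "maxResults": 50, "fields": "status"},
        auth=AUTH,
    ).json()
    issues += page["issues"]
    start_at += page["maxResults"]
    if start_at >= page["total"]:
        break

done = sum(1 for i in issues
           if i["fields"]["status"]["statusCategory"]["key"] == "done")
print(f"Sprint completion: {done}/{len(issues)} = {done / len(issues):.0%}")
```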
---
### B. Quality Metrics
#### Bug Count Trend
**JQL — open bugs by component:**
```jql
project = "PAYMENTS"
AND issuetype = Bug
AND status not in (Done, Closed, "Won't Fix")
AND component = "payment-processing"
ORDER BY priority DESC
```
**JQL — bugs created in the last 30 days:**
```jql
project = "PAYMENTS"
AND issuetype = Bug
AND created >= -30d
ORDER BY created DESC
```
**Gadget:** Two-Dimensional Filter Statistics — lets you break down bugs by priority and component simultaneously.
---
#### Bug Escape Rate
**Definition:** The proportion of bugs found in the production environment out of all bugs found in a given period.
**Data prerequisite:** You need a custom field `Found In Environment` with values: `Development`, `Testing/QA`, `Staging`, `Production`.
**JQL — production bugs:**
```jql
project = "PAYMENTS"
AND issuetype = Bug
AND "Found In Environment" = Production
AND created >= -30d
```
**JQL — all bugs in the period:**
```jql
project = "PAYMENTS"
AND issuetype = Bug
AND created >= -30d
```
**Calculation:**
```
Bug Escape Rate = (prod_bugs / total_bugs) × 100%
```
Export both queries to CSV and compute the ratio. Automating this calculation requires eazyBI or an external BI tool.
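Short of a BI tool, a small script over the export works too — a minimal sketch, assuming a single CSV export of the "all bugs" query with the custom field included as a "Found In Environment" column:

```python
# Sketch: Bug Escape Rate from one CSV export of all bugs in the period.
import pandas as pd

bugs = pd.read_csv("payments_bugs_30d.csv")
prod = (bugs["Found In Environment"] == "Production").sum()
rate = prod / len(bugs) * 100
print(f"Bug Escape Rate: {rate:.1f}% ({prod}/{len(bugs)} bugs)")
```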
**Target for fintech:** <10% for P1/P2 bugs. Above 25% means your QA process is broken.
---
#### Reopen Rate
**Definition:** The percentage of bugs that were closed and subsequently reopened due to an inadequate fix.
**JQL — bugs reopened in the last 30 days:**
```jql
project = "PAYMENTS"
AND issuetype = Bug
AND status changed to "Reopened" during (-30d, now())
```
**Alternative using status history:**
```jql
project = "PAYMENTS"
AND issuetype = Bug
AND status was "Done" before -7d
AND status changed from Done during (-7d, now())
```
**Interpretation:** A Reopen Rate above 15% signals insufficient test coverage during fixes, or deadline pressure causing bugs to be closed without adequate verification.
---
### C. Workflow Efficiency Metrics
#### WIP — Work in Progress
**Definition:** The number of issues currently in active development.
**JQL — current team WIP:**
```jql
project = "PAYMENTS"
AND status in ("In Progress", "In Review", "In Testing")
AND assignee in membersOf("payments-dev-team")
AND sprint in openSprints()
```
**JQL — issues with no movement for more than 3 days:**
```jql
project = "PAYMENTS"
AND status in ("In Progress", "In Review", "QA Testing")
AND updated <= -3d
AND sprint in openSprints()
```
**Gadget:** Cumulative Flow Diagram (CFD). If the band for "In Progress" or "In Review" starts widening, WIP is growing and a bottleneck is forming.
**Target value:** No more than 1–2 issues in WIP per developer. If an engineer has 5+ issues "In Progress," that's a sign of a dysfunctional process, not high productivity.
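A quick way to see the per-engineer picture is the search REST API with the WIP JQL above — a sketch with placeholder instance and credentials (use it to diagnose the system, not to rank people):

```python
# Sketch: current WIP per assignee via the Jira search API.
import requests
from collections import Counter

BASE = "https://yourcompany.atlassian.net"   # placeholder instance
AUTH = ("you@example.com", "API_TOKEN")
JQL = ('project = "PAYMENTS" AND status in ("In Progress", "In Review") '
       'AND sprint in openSprints()')

resp = requests.get(
    f"{BASE}/rest/api/2/search",
    params={"jql": JQL, "fields": "assignee", "maxResults": 100},
    auth=AUTH,
).json()

wip = Counter(
    (issue["fields"]["assignee"] or {}).get("displayName", "Unassigned")
    for issue in resp["issues"]
)
for name, count in wip.most_common():
    flag = "  <- above the 1-2 issue guideline" if count > 2 else ""
    print(f"{name}: {count}{flag}")
```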
---
#### Bottleneck Detection via Time in Status
Native Jira doesn't provide a direct "time in status" report. However, the CFD shows issue accumulation at specific statuses — that's your first signal.
**JQL — issues stuck in a status for more than 5 days:**
```jql
project = "PAYMENTS"
AND status = "In Review"
AND status changed to "In Review" before -5d
```
Replace `"In Review"` with any suspect status: `"Awaiting Deployment"`, `"QA Testing"`, `"Waiting for Info"`.
---
### KPI Summary Table: JQL, Gadgets, and Interpretation
|KPI|JQL (simplified)|Gadget / Report|Target Value|Red Flag|
|---|---|---|---|---|
|Lead Time|`status = Done AND resolutiondate >= -30d`|Control Chart|<4 wks for Story|>8 wks|
|Cycle Time|`status changed to "In Progress" during (-60d, now()) AND status = Done`|Control Chart|<5d p50|>14d p50|
|Throughput|`status changed to Done during (-14d, now()) AND issuetype != Sub-task`|Created vs. Resolved|Stable trend|Declining 2+ wks in a row|
|Sprint Completion Rate|Sprint Report (native)|Sprint Report|80–85%|<70%|
|Bug Count Trend|`issuetype = Bug AND status not in (Done, Closed)`|Pie Chart / 2D Stats|Trend ↓|Growth >20% per quarter|
|Bug Escape Rate|`"Found In" = Production AND created >= -30d`|2D Filter Stats|<10%|>25%|
|Reopen Rate|`status changed to "Reopened" during (-30d, now())`|Filter Stats|<10%|>20%|
|WIP|`status in ("In Progress","In Review") AND sprint in openSprints()`|CFD|<1.5× team size|>2× team size|
|Stale Issues|`status = "In Progress" AND updated <= -7d`|Filter Results|0 issues >7d|>3 issues|
---
## 4. Limitations of Native Jira
Let's be honest: native Jira is good for basic tracking but has fundamental limitations that make it insufficient for a mature KPI system.
### No Accurate Time in Status
This is the biggest weakness. Jira stores transition history but doesn't provide a convenient way to query "how long did an issue spend in status X" through JQL. You can see when a status changed, but you can't aggregate this across issues and teams natively.
The result: out of the box, Cycle Time is only approximate. The CFD shows trends but doesn't let you drill into a specific issue and say "it waited 12 days in QA."
### No DORA Metrics Out of the Box
Deployment Frequency, Change Failure Rate, MTTR — none of this exists in native Jira. Jira doesn't know about deployments unless you've set up a CI/CD integration. Getting these metrics requires either a plugin (Jira + GitLab integration) or an external system (Grafana, Datadog).
### Weak Cross-Project Analytics
In organizations with multiple Jira projects — a typical situation for a bank with teams like Core Banking, ATM Software, Mobile, DevOps — native reports operate within a single project. Getting a consolidated KPI view "across the entire engineering division" through the native interface is extremely difficult.
### Limited Historical Analytics
Native Jira reports work well for current state and the last 30–90 days. Historical trends beyond a year belong to BI tools. Standard gadgets don't let you compare Q1 2024 with Q1 2025 in a single view.
### No Time-Based Metric Aggregation Without Manual Work
Average Lead Time for a month, median Cycle Time for a quarter, percentile distributions — all of this requires an export to Excel or configuring eazyBI. In native Jira you work with issue lists, not aggregated metrics.
### Limitations Summary
|Need|Native Jira|Solution|
|---|---|---|
|Accurate Time in Status|No|Time in Status plugin|
|DORA metrics|No|Jira + GitLab/Jenkins integration|
|Cross-project analytics|Limited|eazyBI|
|Historical analytics >90 days|Limited|eazyBI / external BI|
|Percentile distributions|No|eazyBI / export to BI|
|Automated Bug Escape Rate|No|eazyBI + custom fields|
|PR metrics|No|Git integrations|
|Dashboards for non-technical stakeholders|Basic|Rich Filters / eazyBI|
---
## 5. Advanced KPI Tracking with Plugins
### A. Time in Status
**What it is:** A Jira plugin (Atlassian Marketplace) that adds detailed analysis of how long issues spend in each status.
**What it provides:**
- Accurate Cycle Time calculation: actual time spent in each status from "In Progress" to "Done," rather than the raw created-to-resolved calendar span
- A delay heat map by status — instantly shows that 60% of time is spent in "Awaiting QA"
- Filtering by issue type, component, and assignee (for systemic diagnosis, not for evaluating individuals)
- Percentile distributions: p50, p85, p95
**Real fintech use case:**
The payments processing team notices that average Cycle Time has grown from 6 to 14 days over the quarter. Native Jira shows only a Control Chart with outliers. Time in Status reveals: 8 of those 14 days are spent in "Security Review" status. The cause: one security engineer has become a bottleneck. The fix: add a second security reviewer and parallelize the reviews.
Without Time in Status, you'd spend months guessing why Cycle Time grew.
**Configuration for fintech:**
Create a report broken down by status:
```
To Do → Refinement → In Progress → In Review → Security Review → QA → Staging → Done
```
For each status: mean time, median, p85, issue count. Any status with a median >2 working days for standard Stories is a candidate for optimization.
---
### B. eazyBI
**What it is:** A powerful BI tool deeply integrated with Jira. Essentially MDX-based analytics on top of Jira data with polished dashboards.
**What it provides:**
- Custom calculated metrics (mean, median, percentiles, year-over-year comparison)
- Cross-project analytics in a single dashboard
- Historical analytics with no depth limitations
- MDX expressions for non-standard calculations
- Native integration with Git, Jenkins, and other data sources
**Key use cases for fintech:**
Bug Escape Rate calculated automatically via MDX:
```mdx
(
  [Measures].[Issues created],
  [Issue Type].[Bug],
  [Found In Environment].[Production]
)
```
Divide by the total number of bugs for the period — you get an automatically updated KPI without any manual Excel work.
Lead Time with percentiles: eazyBI lets you calculate median Lead Time and p85 Lead Time for any period, by any project or component. This is simply not achievable with native Jira.
Quarter-over-quarter comparison: a "Q1 2025 vs. Q1 2024" dashboard takes 15 minutes to set up in eazyBI. In native Jira — several hours of manual exporting and comparing in spreadsheets.
**Cost and justification:** eazyBI is a paid tool (~$200–500/month for a team of 20–50 on Cloud). For fintech teams with serious reporting requirements to regulators or a board of directors, it's a mandatory investment. For smaller teams (under 10 people), native tools plus Excel will suffice.
---
### C. Rich Filters for Jira Dashboards
**What it is:** A plugin that adds interactive filters and enhanced visualizations to standard Jira dashboards.
**What it provides:**
- Dropdown filters directly on the dashboard (without writing JQL)
- Interactive tables with sorting and grouping
- One-click Excel export directly from the dashboard
- Drill-down: click "15 bugs" → see the list of those exact bugs
**When you need it:** When leaders (CTO, Head of Risk, banking clients) need to look at dashboards independently without knowing JQL. Rich Filters transforms a Jira dashboard from a developer tool into a management tool.
---
### D. Git Integrations: GitHub, GitLab, Bitbucket
**What it is:** Atlassian provides native integrations (via Jira Software → Development Panel) as well as third-party plugins like GitLab for Jira and GitHub for Jira.
**What you get after linking a repository to a Jira project:**
- Linked branches and commits (by mentioning the issue key: `PAY-123`)
- Pull Requests / Merge Requests: status, who's reviewing, when it was created
- Pipeline status: did CI pass for this issue?
**Metrics available through integration:**
- PR Cycle Time ≈ time from PR creation to merge (visible in GitLab Analytics)
- Deployment status ≈ whether an issue reached a specific environment (if Deployment events are configured)
- PRs and commits not linked to a Jira issue — work not tied to a ticket (a process discipline violation)
**JQL using Git data:**
```jql
project = "PAYMENTS"
AND development[pullrequests].open > 0
AND status = "In Progress"
```
Shows issues in development with open PRs — useful for daily standups.
```jql
project = "PAYMENTS"
AND development[branches].count > 0
AND status = "To Do"
```
Issues "not yet started" in Jira that already have branches — a sign of work happening without tickets.
---
## 6. DevOps Metrics in the Jira Ecosystem
DORA metrics — Deployment Frequency, Lead Time for Changes, Change Failure Rate, MTTR — are the standard for DevOps organizations. Native Jira doesn't provide them, but they can be approximated through a combination of tools.
### Deployment Frequency
**Approach 1: Jira Release + Fix Version**
Each deployment = a Jira release: tag issues with a Fix Version and record the deployment date as the Release Date.
```jql
project = "PAYMENTS"
AND fixVersion in releasedVersions()
```
Native JQL has no function for filtering versions by release date, so narrow the list to your window on the project's Releases page or via the versions REST endpoint. The number of unique versions released per period approximates Deployment Frequency.
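The counting itself is scriptable via the project versions endpoint — a sketch with placeholder instance and window:

```python
# Sketch: approximate Deployment Frequency by counting Fix Versions
# released inside a window, via the project versions REST endpoint.
import requests
from datetime import date

BASE = "https://yourcompany.atlassian.net"   # placeholder instance
AUTH = ("you@example.com", "API_TOKEN")
START, END = date(2025, 1, 1), date(2025, 3, 31)

versions = requests.get(
    f"{BASE}/rest/api/2/project/PAYMENTS/versions", auth=AUTH
).json()

released = [
    v for v in versions
    if v.get("released") and "releaseDate" in v
    and START <= date.fromisoformat(v["releaseDate"]) <= END
]
print(f"Deployments in window (approx.): {len(released)}")
```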
**Approach 2: Jira + GitLab/Jenkins via webhooks**
Configure GitLab CI/CD or Jenkins to create a Jira issue of type `Deployment` for every deployment, with fields: environment, version, deployment status, and timestamp.
```jql
project = "DEVOPS"
AND issuetype = Deployment
AND "Environment" = Production
AND created >= -30d
```
---
### Change Failure Rate
The `Deployment` issue type gets a `Deployment Status` field (Success / Rollback Required / Hotfix Required).
```jql
project = "DEVOPS"
AND issuetype = Deployment
AND "Deployment Status" in ("Rollback Required", "Hotfix Required")
AND created >= -30d
```
Divide by total deployments for the period.
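The division is just two JQL counts, which the search API returns directly — a sketch (placeholders as before; `maxResults=0` asks only for the total):

```python
# Sketch: Change Failure Rate as a ratio of two JQL counts.
import requests

BASE = "https://yourcompany.atlassian.net"   # placeholder instance
AUTH = ("you@example.com", "API_TOKEN")

def count(jql: str) -> int:
    # maxResults=0 returns only the match count, not the issues
    resp = requests.get(
        f"{BASE}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},
        auth=AUTH,
    )
    return resp.json()["total"]

base_jql = 'project = "DEVOPS" AND issuetype = Deployment AND created >= -30d'
failed = count(base_jql + ' AND "Deployment Status" in '
                          '("Rollback Required", "Hotfix Required")')
total = count(base_jql)
print(f"Change Failure Rate: {failed / total:.0%} ({failed}/{total})")
```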
**Linking to incidents:**
```jql
project = "DEVOPS"
AND issuetype = Incident
AND "Caused by Deployment" is not EMPTY
AND created >= -30d
```
---
### MTTR (Mean Time to Restore)
Create an `Incident` issue type with these fields:
- `Incident Detected` (datetime) — time of detection
- `Incident Resolved` (datetime) — time of restoration
- `Incident Severity` (P1/P2/P3/P4)
- `Root Cause Category` (Software / Infrastructure / Deployment / External)
**JQL — P1 incidents in the last 90 days:**
```jql
project = "INCIDENTS"
AND issuetype = Incident
AND "Incident Severity" = P1
AND status = Resolved
AND resolutiondate >= -90d
ORDER BY resolutiondate DESC
```
MTTR = average(`Incident Resolved` − `Incident Detected`) across the result set. Calculate via eazyBI or export to Excel.
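The Excel route reduces to a few lines of script — a sketch assuming the two datetime custom fields export as "Incident Detected" and "Incident Resolved" columns:

```python
# Sketch: MTTR from a CSV export of the P1 incident query above.
import pandas as pd

inc = pd.read_csv("p1_incidents_90d.csv")
detected = pd.to_datetime(inc["Incident Detected"], errors="coerce")
resolved = pd.to_datetime(inc["Incident Resolved"], errors="coerce")

ttr_minutes = (resolved - detected).dt.total_seconds() / 60
print(f"MTTR (mean):   {ttr_minutes.mean():.0f} min")
print(f"MTTR (median): {ttr_minutes.median():.0f} min")
```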
**Target values for fintech ATM:**
- P1 (ATM offline): MTTR <30 min (software rollback), <4h (field visit required)
- P2 (partial degradation): MTTR <2h
- P3 (no transaction impact): MTTR <24h
### Integration Architecture: Jira + CI/CD + Incidents
```
GitLab CI/CD Pipeline
│
├── Deployment success → Jira: Deployment [Status: Success]
├── Deployment failed → Jira: Deployment [Status: Failed]
│ └── automatically creates Incident [P2]
└── Rollback executed → Jira: Deployment [Status: Rollback]
└── linked to original Incident
Jira Incident
│
├── Linked to: Deployment (caused by)
├── Linked to: Bug (root cause)
└── Fields: Detected, Resolved, MTTR (auto-calculated via Automation)
```
This scheme requires no external tools — GitLab Webhooks → Jira REST API → Jira Automation Rules is sufficient.
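To make the glue concrete, here's an illustrative sketch of the webhook leg: a tiny receiver that turns a GitLab pipeline event into a `Deployment` issue over the REST API. Jira Automation's incoming-webhook trigger can play the same role without custom code; the URLs, credentials, and custom field id are placeholders:

```python
# Sketch: GitLab Pipeline Hook -> Jira "Deployment" issue.
from flask import Flask, request
import requests

app = Flask(__name__)
JIRA = "https://yourcompany.atlassian.net"   # placeholder instance
AUTH = ("you@example.com", "API_TOKEN")

@app.post("/gitlab-webhook")
def on_pipeline_event():
    event = request.get_json()
    # GitLab pipeline payload: status is "success", "failed", etc.
    status = event["object_attributes"]["status"]
    payload = {
        "fields": {
            "project": {"key": "DEVOPS"},
            "issuetype": {"name": "Deployment"},
            "summary": (f"Deploy {event['project']['name']} "
                        f"ref {event['object_attributes']['ref']}"),
            # customfield_10042 is a placeholder id for "Deployment Status";
            # a real rule would map the full set of pipeline statuses
            "customfield_10042": {
                "value": "Success" if status == "success" else "Rollback Required"
            },
        }
    }
    requests.post(f"{JIRA}/rest/api/2/issue", json=payload, auth=AUTH)
    return "", 204
```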
---
## 7. Designing a Real KPI Dashboard
A complete dashboard example for a fintech engineering team of 15 people (Dev + QA + DevOps).
### The Three-Layer Dashboard Principle
- **Layer 1 — Leadership (CTO/Head of Engineering):** strategic view, updated weekly
- **Layer 2 — Team Managers:** operational view, updated daily
- **Layer 3 — The Team:** tactical view, real-time
---
### Layer 1 Dashboard: Engineering Leadership
**Widget 1: Throughput Trend (Created vs. Resolved — 12 weeks)**
- Type: Line Chart (Created vs. Resolved gadget)
- JQL Resolved: `project in (PAYMENTS, ATM, DEVOPS) AND status changed to Done during (-84d, now()) AND issuetype in (Story, Bug, Task)`
- Goal: Resolved line consistently at or above Created line
**Widget 2: Bug Escape Rate — monthly breakdown table**
- Type: Two Dimensional Filter Statistics
- X-axis: creation month; Y-axis: Found In Environment
- Updates: automatically via eazyBI
**Widget 3: Sprint Predictability — last 6 sprints**
- Type: Sprint Report or eazyBI
- Shows: % completion per sprint, broken down by team
**Widget 4: Open P1/P2 Bugs — current state**
- JQL: `issuetype = Bug AND priority in (P1, P2) AND status not in (Done, "Won't Fix") AND project in (PAYMENTS, ATM)`
- Zero = target. Any P1 older than 24 hours requires immediate action.
**Widget 5: Deployment Frequency**
- JQL: `issuetype = Deployment AND "Environment" = Production AND created >= -30d`
- Target: ≥4 deployments per month
---
### Layer 2 Dashboard: Engineering Manager
**Widget 6: Cumulative Flow Diagram — current sprint**
- A widening status band = bottleneck → manager action required
**Widget 7: Issues with no movement for more than 3 days**
- JQL: `project = PAYMENTS AND status in ("In Progress", "In Review", "In Testing") AND updated <= -3d AND sprint in openSprints()`
**Widget 8: WIP by status**
- JQL: `project = PAYMENTS AND status in ("In Progress", "In Review", "QA Testing", "Awaiting Deployment") AND sprint in openSprints()`
- If "Awaiting Deployment" is >30% of WIP — time to automate the deployment process
**Widget 9: Bug Reopen Rate — current sprint**
- JQL: `project = PAYMENTS AND issuetype = Bug AND status changed to "Reopened" during (-14d, now())` — substitute your sprint length or the sprint's actual start date; JQL has no sprint-start function
**Widget 10: Unassigned issues in the active sprint**
- JQL: `project = PAYMENTS AND sprint in openSprints() AND assignee is EMPTY AND status != Done`
---
### Layer 3 Dashboard: Team (Operational)
**Widget 11: My current WIP**
- JQL: `assignee = currentUser() AND status in ("In Progress", "In Review")`
**Widget 12: My issues waiting on others**
- JQL: `assignee = currentUser() AND status in ("Awaiting Approval", "Waiting for Info", "Blocked")`
**Widget 13: Open PRs older than 24h (via Git integration)**
- JQL: `project = PAYMENTS AND development[pullrequests].open > 0 AND status = "In Review" AND updated <= -1d`
---
### Full Dashboard Widget Reference Table
|#|Widget|KPI|Level|Gadget Type|Interpretation|
|---|---|---|---|---|---|
|1|Throughput Trend|Throughput|Leadership|Created vs. Resolved|Resolved > Created → healthy flow|
|2|Bug Escape Rate|QA quality|Leadership|2D Filter Stats / eazyBI|>25% → systemic QA failure|
|3|Sprint Predictability|Predictability|Leadership|Sprint Report / eazyBI|<70% → planning problem|
|4|Open P1/P2 Bugs|Critical defects|Leadership|Filter Results|Any P1 >24h → immediate action|
|5|Deployment Frequency|Delivery speed|Leadership|Filter Stats|<4/month → blocking the business|
|6|CFD|WIP and bottlenecks|Manager|CFD (native)|Widening band → bottleneck|
|7|Stale issues|Blockers|Manager|Filter Results|>3d no movement → blocked|
|8|WIP by status|Load distribution|Manager|Pie/Bar Chart|Imbalance → workload mismatch|
|9|Bug Reopen Rate|Fix quality|Manager|Filter Stats|>15% → inadequate fix testing|
|10|Unassigned sprint issues|Sprint organization|Manager|Filter Results|>0 → planning gap|
|11|My WIP|Personal control|Team|Filter Results|>2 → context switching|
|12|My blocked issues|Blockers|Team|Filter Results|>0 → escalation needed|
|13|Open PRs >24h|PR review|Team|Filter Results (Git)|>0 → review bottleneck|
---
## 8. Linking Tasks, Bugs, and Incidents
This is one of the most undervalued aspects of KPI work in Jira. Without proper links between entities, you can't answer the key questions: "Which Story caused this production bug?" or "Which deployment triggered this incident?"
### Link Strategy
**The core traceability chain:**
```
Epic → Story → Task/Sub-task
        └──→ Bug (found during development)
Story → Bug (found in QA: "is caused by")
Bug → Incident (found in Production: "caused by" / "relates to")
Deployment → Incident ("caused by")
```
**Team rules:**
1. Every bug found in QA must be linked to the Story that caused it (`Story PAY-100 causes Bug PAY-456`)
2. Every production incident must be linked to the deployment that triggered it
3. Every incident must have a linked bug as its root cause
### Custom Fields for Quality Metrics
**Required custom fields for Bug:**
|Field|Type|Values|Purpose|
|---|---|---|---|
|`Found In Environment`|Select|Dev / QA / Staging / Production|Bug Escape Rate|
|`Root Cause Category`|Select|Logic Error / Missing Test / Requirements Gap / Infra|Root cause analysis|
|`Introduced In Sprint`|Sprint Picker|(current sprint)|Traceability to delivery|
|`Caused By Story`|Issue Link|(link to Story)|Full traceability|
**Required custom fields for Incident:**
|Field|Type|Values|Purpose|
|---|---|---|---|
|`Incident Severity`|Select|P1 / P2 / P3 / P4|Prioritization, MTTR|
|`Incident Detected`|DateTime|—|MTTR calculation|
|`Incident Resolved`|DateTime|—|MTTR calculation|
|`Caused By Deployment`|Issue Link|(link to Deployment)|Change Failure Rate|
|`Customer Impact`|Number|(number of affected customers)|Business impact|
### Using Labels and Components
**Components** — architectural system components, set by the administrator, stable:
```
payment-processing / atm-xfs-layer / card-management / reporting / security / infrastructure
```
**Labels** — flexible tags the team applies situationally:
```
regression / performance / security-critical / customer-reported / tech-debt
```
**JQL using Components for quality analysis:**
```jql
project = PAYMENTS
AND issuetype = Bug
AND component = "payment-processing"
AND "Found In Environment" = Production
AND created >= -90d
ORDER BY priority DESC
```
This query shows production bugs for a specific component over a quarter. More than 10 such bugs is a signal for refactoring or increased test coverage in payment-processing.
### Automating Links via Jira Automation
Set up a rule: when an `Incident` issue is created with the label `deployment-related`, automatically create a link to the most recent deployment in the corresponding project.
```
Trigger: Issue Created
Condition: issuetype = Incident AND labels = "deployment-related"
Action: Link Issue → [last Deployment in DEVOPS project] → "caused by"
```
This eliminates human error when filling in links under the pressure of an active incident.
---
## 9. Implementation Plan
A realistic plan for a fintech team of 15–30 people. Total timeline: **3 months**.
### Step 1: Audit and Standardize Workflows (Weeks 1–2)
Before building KPIs, make sure your data is clean and consistent.
**What to do:**
- Audit current statuses across all projects. A typical problem: one project has "In Development," another has "In Progress," a third has "Development" — three different statuses for the same state.
- Standardize to a single global workflow for all Dev projects
- Document the Definition of Done for each status transition
**Risk:** Teams resist changing their workflow. **Mitigation:** Explain the reason — not "because management wants KPIs," but "because we can't see where issues are getting stuck."
---
### Step 2: Data Cleanup and Retroactive Population (Weeks 2–4)
**What to do:**
- Add required custom fields (Environment, Severity, Components) — start with new issues
- Retroactively populate `Found In Environment` for all open bugs
- Configure required field population at status transitions (Jira Workflow Conditions)
**JQL to audit empty required fields:**
```jql
project = PAYMENTS
AND issuetype = Bug
AND "Found In Environment" is EMPTY
AND created >= -90d
```
If this query returns hundreds of issues, you have a data discipline problem. That's normal at the start — the important thing is to document it and begin fixing it.
---
### Step 3: Add Custom Fields and Configure (Weeks 3–5)
**What to do:**
- Create all required custom fields (see Section 8)
- Configure issue screens: fields appear where they're needed, not everywhere
- Create baseline JQL filters and save them — they'll become the foundation for dashboards
- Configure Components for each project
**Important:** don't overload issue screens with dozens of fields. Rule: if a field isn't used in JQL or a metric, it isn't needed.
---
### Step 4: Build the JQL Filter Library (Weeks 5–7)
Create a library of saved filters:
```
[PAYMENTS] Open P1/P2 Bugs
[PAYMENTS] Bug Escape Rate - Last 30d
[PAYMENTS] Tasks stuck > 3 days
[PAYMENTS] Sprint Completion - Last 6 Sprints
[DEVOPS] Deployment Log - Production
[ALL PROJECTS] Open Incidents
```
Per filter: view permissions open to all in the organization, edit permissions only for the owner.
---
### Step 5: Build the Dashboards (Weeks 7–9)
Create all three dashboards (Layers 1–3 from Section 7). Start with Layer 2 for Engineering Managers — it's the most immediately useful and delivers value quickly.
**Widget creation order:**
1. Start simple — Filter Results (issue lists)
2. Then aggregates — Filter Statistics, Pie Charts
3. Finally the complex ones — CFD, Control Chart, Created vs. Resolved
---
### Step 6: Embed KPIs into Team Rhythm (Weeks 9–12)
**What to do:**
- Present dashboards to the team. Key message: "These metrics show how the system works, not how you personally perform."
- Add KPI review to the team rhythm: 5 minutes on the dashboard at sprint start, 10 minutes at retro
- For the first 4–6 weeks, don't draw conclusions — collect baselines
- After establishing a baseline, set initial targets together with the team
**What to absolutely avoid:**
- Don't publish individual metrics (how many tickets engineer X closed)
- Don't introduce KPI-based bonuses in the first 6 months
- Don't use dashboards as a pressure tool against specific individuals
### Implementation Timeline
|Period|Focus|Expected Output|
|---|---|---|
|Month 1, weeks 1–2|Workflow audit, standardization|Single unified workflow, documented DoD|
|Month 1, weeks 2–4|Data cleanup, retroactive population|90-day baseline data|
|Month 2, weeks 5–7|Custom fields, JQL filters|Library of 10–15 saved filters|
|Month 2, weeks 7–9|Dashboard build|3 working dashboards (Layers 1–3)|
|Month 3, weeks 9–12|Embedding KPIs into team rhythm|KPI review as part of retro and planning|
|Month 3+|Baseline → Targets → Iteration|First measurable improvements|
---
## 10. Anti-Patterns
### Anti-Pattern 1: Measuring Developers by Closed Ticket Count
**What it looks like:** A manager creates a "Top 10 Developers by Issues Closed This Month" widget and uses it in performance reviews.
**Why it's destructive:**
- Engineers start splitting tasks: one real piece of work becomes five tickets
- Complex work (refactoring critical systems, mentoring, architecture documentation) doesn't show up in the count and becomes devalued
- The most experienced engineers, doing systemic work, look "unproductive" — and leave
- In fintech: an engineer who closed 3 tickets but prevented a $500K incident looks worse than one who closed 20 trivial UI tasks
**The right approach:** Measure team-level throughput. For individuals — only qualitative indicators: code review participation, documentation contributions, on-call incident response quality.
---
### Anti-Pattern 2: Velocity as a KPI
**What it looks like:** "Our velocity needs to grow by 10% each quarter." Velocity ends up in OKRs or monthly executive reports.
**Why it's destructive:**
- Teams inflate story point estimates
- Planning poker consensus shifts upward
- Velocity stops being a calibration tool and becomes political
- Real throughput doesn't change; only the numbers do
**The right approach:** Use velocity for internal sprint planning only. Never for comparing teams or as an executive KPI.
---
### Anti-Pattern 3: Ignoring Issue Context
**What it looks like:** Cycle Time grew from 5 to 12 days. Immediate conclusion: "the team is working slower." Immediate pressure on the team.
**Why it's destructive:**
- There are dozens of reasons Cycle Time grows: issue complexity increased, dependencies emerged, security review became mandatory, the team is onboarding new members
- Without analyzing root causes, any action may make things worse
- Teams start splitting issues to show a "better" Cycle Time without changing the actual process
**The right approach:** When Cycle Time grows, ask "what changed?" — not "who is responsible?"
---
### Anti-Pattern 4: Dashboard Overload
**What it looks like:** A dashboard with 25+ widgets that nobody looks at. Or 15 metrics where only 2 are ever discussed.
**Why it's destructive:**
- Information noise kills focus
- Important signals are lost among irrelevant ones
- Teams stop trusting the dashboard
**The right approach:** Leadership — maximum 5–7 metrics. Teams — maximum 10. Rule: if a metric hasn't triggered an action in the last 3 months, remove it.
---
### Anti-Pattern 5: Jira as the Single Source of Truth on Quality
**What it looks like:** Bug count in Jira = the quality assessment of the product.
**Why it's destructive:**
- The number of bugs in Jira depends on the discipline of creating tickets, not on actual quality
- Teams stop filing bugs to "improve" the metric
- Bugs get fixed in Slack or verbally, bypassing Jira entirely
**The right approach:** Combine Jira metrics with automated sources: error rate from Sentry/Grafana, automated test results from CI, user-reported incidents from Service Desk.
---
## 11. Real-World Case Study
### Background: RegionalPay — A Fintech Company Developing ATM Software
_RegionalPay is a fictional but representative company. The situation reflects real patterns from practice._
45 people total, 28 in engineering (development, QA, DevOps). Three teams: ATM Core, Payment Gateway, Back Office. Jira in use for 3 years.
---
### Before: Chaotic Jira Usage
- Three teams, three different workflows. ATM Core had 12 statuses, Payment Gateway had 5, Back Office had 8. Transitions didn't reflect the real process.
- In 60% of bugs, the `Environment` field was empty. Bug Escape Rate was impossible to calculate.
- Velocity was the primary and only metric discussed at weeklies.
- No links between Stories and Bugs. The question "which Story caused this incident?" had no answer.
- Incidents tracked via email and Excel. No connection to Jira.
- Dashboard: velocity chart and burndown — and nothing else useful.
**Symptoms:**
- The CTO could not answer a banking client's question: "How many critical production bugs have you had in the last 90 days?"
- A regulatory review required an incident report — collecting the data manually took 5 full working days
- The ATM Core team complained about "constant rework," but had no data to prove it
---
### What Was Done (3 Months)
**Month 1:** Workflow standardized to a single unified workflow across all teams. Custom fields added: `Found In Environment`, `Root Cause Category`, `Incident Severity`, `Caused By Deployment`. Bugs from the past 90 days retroactively populated.
**Month 2:** 14 JQL filters configured. Layer 1 and Layer 2 dashboards built. GitLab → Jira integration set up (a `Deployment` issue type is now created automatically on every deploy). Incidents migrated from Excel to Jira.
**Month 3:** Dashboards presented to the teams. KPI review added to sprint retrospectives (10 minutes). Baseline recorded. First targets agreed upon with the teams.
---
### After: 6 Months In
|Metric|Before|After|Change|
|---|---|---|---|
|Bug Escape Rate|Unknown → Baseline 38%|17%|**–55%**|
|Average P1 response time|~4h (from email)|45 min|**–81%**|
|Time to compile regulatory report|5 working days|2 hours (JQL + export)|**–95%**|
|Sprint Completion Rate|Baseline 61%|78%|**+28%**|
|Stale issues (>7 days)|~12 issues constantly|~2–3 issues|**–75%**|
|Story → Bug traceability|0%|84%|**+84 p.p.**|
**What actually changed:**
**Bug Escape Rate.** Baseline measurement revealed 38% of bugs reaching production. Component-level analysis identified: 71% of these were in the `payment-processing` module, categorized as "Missing Test." The QA team focused on that module. Six months later: 17%.
**Response speed.** Moving from email/Excel to a Jira Incident type with `Detected` and `Resolved` fields revealed within 2 weeks: the mean time from detection to escalation was 47 minutes (incidents were logged in email, forwarded, then discussed). An automatic Slack alert triggered on P1 creation in Jira reduced escalation time to under 5 minutes.
**Sprint Completion Rate.** The dashboard showed that in 40% of sprints, more than 25% of issues were added after the sprint started. That was the root cause of unpredictability. A rule was introduced: any issue added to an active sprint requires explicit manager sign-off, recorded in a comment. Completion Rate grew from 61% to 78% within three sprints.
---
## 12. Conclusion
### Key Lessons
**First:** Jira is a database of your engineering process. KPI quality is determined by data quality. Before building metrics, clean up your workflows, fields, and data discipline. Garbage in — garbage out.
**Second:** Start small. Don't try to implement all 20 metrics at once. Begin with three: Bug Escape Rate, Sprint Completion Rate, stale issues. Get the data, verify it's trustworthy — then expand.
**Third:** KPIs only work when the team understands their purpose. Don't announce metrics — explain why they're needed and what will change when they improve. Show how the data helps eliminate the things that frustrate engineers themselves: waiting days for code review, unpredictable sprints, constant rework.
**Fourth:** Plugins are necessary eventually, not immediately. Start with native Jira — most basic KPIs are available at no extra cost. Buy Time in Status and eazyBI when you know exactly what they'll give you and when native capabilities are genuinely insufficient.
**Fifth:** Jira is a tool for managing the system, not the people. Metrics should answer "where is the process broken?" — not "who is performing poorly?" This distinction is the difference between a KPI system that improves engineering culture and one that destroys it.
In fintech and ATM development, the cost of failure is incomparably higher than in consumer software. A production bug in a payment processing system isn't "a degraded user experience" — it's financial loss and regulatory risk. This makes a quality metrics system not merely "best practice," but an operational necessity.
A properly configured Jira is not a surveillance tool. It's a visibility tool. And visibility is the first step toward managed improvement.
---
_Engineering Leadership Series · KPIs in Jira · Fintech & Enterprise Software · 2025_