Managers need metrics for a variety of reasons: measuring ‘success’ or status, performance reviews and analysis, and so on. The mistake I see too often is assuming a metric is useful just because it is easy to collect; in practice, the easier a metric is to collect, the more likely it is that it’s not measuring anything useful. Of course, the easiest metrics to collect or understand are also the most likely to be used.

Let’s take ‘bug tickets’ as an example. It is easy to count how many tickets get entered, but that is not a good measure of quality: how many of those tickets are user error or truly ‘features’? So managers often look to the next level of metric: ticket resolution rate (tickets closed per day or week or iteration or whatever). If you have ever dealt with a help desk that constantly closes tickets for things that aren’t actually fixed, causing a proliferation of tickets, you know what it’s like dealing with an organization driven by this metric! Instead of actually getting work done or helping the user (for example, leaving tickets open until the user accepts the resolution), the organization exists solely to open as many tickets as possible and then close them as quickly as possible, so it can get its resolution rate up.

A better number would be the hardest to measure: the ratio of true ‘bug tickets’ created relative to features deployed, changes made, or something similar. Needless to say, that is not an easy number to understand, or to collect and report on. The result is that organizations choose to make decisions based on the wrong metrics rather than the right ones, purely out of convenience.
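To make the contrast concrete, here is a minimal sketch of the two metrics side by side. All of the data, field names, and numbers are invented for illustration; the point is only that the easy metric (resolution rate) falls out of the ticket system for free, while the better one (true bugs per change deployed) requires someone to triage every ticket and track deployments.

```python
# Hypothetical ticket data, triaged by hand. The "true_bug" field is the
# expensive part: a human had to decide whether each ticket was a real defect.
tickets = [
    {"id": 1, "closed": True,  "true_bug": True},
    {"id": 2, "closed": True,  "true_bug": False},  # user error
    {"id": 3, "closed": True,  "true_bug": False},  # really a feature request
    {"id": 4, "closed": False, "true_bug": True},
]
changes_deployed = 10  # invented for illustration

# Easy to collect, easy to game: just count closures.
resolution_rate = sum(t["closed"] for t in tickets) / len(tickets)

# Harder to collect, harder to game: real defects per change shipped.
true_bugs = sum(t["true_bug"] for t in tickets)
defect_ratio = true_bugs / changes_deployed

print(f"resolution rate: {resolution_rate:.0%}")              # 75%
print(f"true bugs per change deployed: {defect_ratio:.2f}")   # 0.20
```

Notice that an organization can push the first number up simply by closing tickets faster, while the second number only improves if fewer real defects ship per change.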