This document lists some well-known software metrics, gives their standard definitions, and proposes identifiers for use in resource formats.
The metrics defined here have been selected based on input from the following estimation industry experts:
This input has been consolidated in Key Software Metrics. Where possible, we refer to industry standards. The candidate sources of industry standards are:
In alignment with Web architecture, we propose to define URIs to name important concepts such as software metrics. URIs are used for XML namespaces and the Semantic Web. The base URI for software metrics will be http://open-services.net/software-metrics.
To organize software metrics into related groups, we'll introduce categories. The initial categories are:
Each category will define a namespace for URIs. The above categories define the following namespaces:
See Issues About Metrics Categories
Within a category, specific metrics are defined with fragment identifiers. For example, with the size category, the metric KLOC becomes:
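The URI scheme described above can be sketched as follows. This is an illustrative sketch only: the helper function names are invented for this example, and it assumes the category namespace is formed by appending the category name to the base URI, with the metric name as the fragment identifier.

```python
# Sketch: composing metric URIs from the base URI, a category
# namespace, and a fragment identifier for the specific metric.

BASE_URI = "http://open-services.net/software-metrics"

def category_namespace(category: str) -> str:
    """Namespace URI for a metrics category, e.g. 'size'."""
    return f"{BASE_URI}/{category}"

def metric_uri(category: str, metric: str) -> str:
    """Full URI for a metric within a category, using a fragment id."""
    return f"{category_namespace(category)}#{metric}"

print(metric_uri("size", "KLOC"))
# http://open-services.net/software-metrics/size#KLOC
```

Under this scheme, each category namespace is itself a usable URI (for example as an XML namespace), and individual metrics are distinguished only by their fragment identifiers.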
The following topics have been discussed:
Add your comments here.
If the quality namespace is to measure reliability, why not rename it reliability? ASQ defines "Quality" to include Juran's goodness of fit (what we may call coverage of requirements), which would not affect reliability from Musa's perspective.
-- SkipPletcher - 19 Jun 2009
Skip, the proposed definition is not authoritative. We should improve it to include goodness of fit.
-- ArthurRyman - 22 Jun 2009
I attempted to combine the Galorath and QSM metrics categories into one common area (QSM for now because I'm TWiki challenged). I did leave metrics separate where we measure them differently, like SLOC and Defects.
-- LawrencePutnamJr - 21 Oct 2009
The measurement types we listed (scope, cost, schedule, and quality) are project management measures, which is good and common but not specifically software -- although we're using software-centric definitions for a few of those measures. We also have a slight difference in categorizing the metrics (size, quality, schedule, and effort), so we might estimate cost but measure effort or vice versa -- related, but not the same. What about metrics for delivered value (perhaps as an inverse of risk), and learning rate (how much faster do we deliver value as the solution matures)? Do we want to measure reuse (at the code or service level)? Do we want to have any measure of waste (dead or abandoned code, specified but dropped features, rework)? How about things like test coverage (code or reqts) and utility (how many delivered features are actually exercised after production delivery)? These would seem collectively to draw a bigger, perhaps more software-specific box.
-- SkipPletcher - 03 Dec 2009