Software Metrics Definitions
This document lists some well-known software metrics, gives their standard definitions, and proposes identifiers for use in resource formats.
Estimation Industry Experts
The metrics defined here have been selected based on input from the following estimation industry experts:
This input has been consolidated in Key Software Metrics.
Existing Standards
Where possible, we refer to industry standards. The candidate sources of industry standards are:
- IEEE SWEBOK - IEEE Software Engineering Body of Knowledge
- IFPUG - International Function Point Users Group
- ISO - International Organization for Standardization
- PMI PMBOK - Project Management Institute Project Management Body of Knowledge
- SEI CMMI - Software Engineering Institute Capability Maturity Model Integration
Web Identifiers
In alignment with Web architecture, we propose to define URIs to name important concepts such as software metrics. URIs are used for XML namespaces and the Semantic Web. The base URI for software metrics will be http://open-services.net/ns/ems.
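For example, the base URI can be declared as an XML namespace. Here is a minimal sketch using Python's standard library; the ems prefix and the measurement element are hypothetical, chosen only for illustration:

    # A minimal sketch: declaring the proposed base URI as an XML namespace.
    # The "ems" prefix and the "measurement" element are hypothetical,
    # not terms defined by this document.
    import xml.etree.ElementTree as ET

    EMS = "http://open-services.net/ns/ems"
    ET.register_namespace("ems", EMS)

    # Build an element qualified by the EMS namespace and serialize it.
    measurement = ET.Element("{%s}measurement" % EMS)
    measurement.text = "42.5"
    print(ET.tostring(measurement, encoding="unicode"))
    # -> <ems:measurement xmlns:ems="http://open-services.net/ns/ems">42.5</ems:measurement>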
To group related software metrics, we'll introduce categories. The initial categories are:
- size - to measure the size of software, e.g. lines of code, function points, story points, etc.
- quality - to measure the reliability of software, e.g. defect density, defect discovery rate, defect removal rate, etc.
- schedule - to measure the elapsed time of a project
- effort - to measure the total work spent by a team on a project, e.g. person-hours, number of full-time equivalents (perhaps by role)
Each category will define a namespace for URIs. The above categories define the following namespaces:
- http://open-services.net/ns/ems/size
- http://open-services.net/ns/ems/quality
- http://open-services.net/ns/ems/schedule
- http://open-services.net/ns/ems/effort
See Issues About Metrics Categories.
Within a category, specific metrics are defined with fragment identifiers. For example, within the size category, the metric KLOC becomes http://open-services.net/ns/ems/size#KLOC.
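To make the naming scheme concrete, the following sketch composes metric URIs from the base URI, a category, and a fragment identifier. The metric_uri helper is hypothetical, introduced only to illustrate the pattern:

    # A minimal sketch of the naming pattern described above:
    # base URI + "/" + category + "#" + metric.
    EMS_BASE = "http://open-services.net/ns/ems"

    def metric_uri(category, metric):
        """Return the URI that names a metric within a category namespace."""
        return "%s/%s#%s" % (EMS_BASE, category, metric)

    print(metric_uri("size", "KLOC"))
    # -> http://open-services.net/ns/ems/size#KLOC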
Standard Terms
The following topics have been discussed:
Comments
Add your comments here.
If the quality namespace is to measure reliability, why not rename it reliability? ASQ defines "Quality" to include Juran's goodness of fit (what we may call coverage of requirements), which would not affect reliability from Musa's perspective.
-- SkipPletcher - 19 Jun 2009
Skip, the proposed definition is not authoritative. We should improve it to include goodness of fit.
-- ArthurRyman - 22 Jun 2009
I attempted to combine the Galorath and QSM metrics categories into one common area (under QSM for now, because I'm TWiki challenged). I did leave metrics separate where we measure them differently, like SLOC and Defects.
-- LawrencePutnamJr - 21 Oct 2009
The measurement types we listed (scope, cost, schedule, and quality) are project management measures, which is good and common but not specifically software, although we're using software-centric definitions for a few of those measures. We also have a slight difference in categorizing the metrics (size, quality, schedule, and effort), so we might estimate cost but measure effort, or vice versa; related, but not the same. What about metrics for delivered value (perhaps as an inverse of risk) and learning rate (how much faster we deliver value as the solution matures)? Do we want to measure reuse (at the code or service level)? Do we want any measure of waste (dead or abandoned code, specified but dropped features, rework)? How about things like test coverage (of code or requirements) and utility (how many delivered features are actually exercised after production delivery)? These would seem collectively to draw a bigger, perhaps more software-specific box.
-- SkipPletcher - 03 Dec 2009