Estimation and Measurement Telecon, 2009-08-07
See Weekly Meeting Logistics for telecon information.
Attendees
ArthurRyman, CarsonHolmes, LawrencePutnamJr, LeeFischman
Regrets
AndyBerner, ScottBosworth, SteveAbrams
Minutes
1. Representation of Effort Estimates by Labor Category and Activity
Review sample data for MetricsEffortEstimateDetail provided by LeeFischman.
We reviewed the sample data contributed by LeeFischman. The data describes estimates for effort broken down along the dimensions of Activity, Labor Category, and Time Period. However, the reports shown did not contain the raw data; instead they contained values rolled up along one or two dimensions. There is an underlying three-dimensional data set.
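To make the three-dimensional structure concrete, here is a minimal Python sketch of raw effort facts and a roll-up along the Activity dimension; the field names and values are invented for illustration and are not taken from the sample data.

```python
from collections import defaultdict

# Hypothetical raw effort facts: one value per
# (activity, labor_category, time_period) cell.
facts = [
    ("Design",  "Developer", "2009-Q3", 120.0),
    ("Design",  "Manager",   "2009-Q3",  30.0),
    ("Testing", "Tester",    "2009-Q4",  80.0),
    ("Testing", "Manager",   "2009-Q4",  20.0),
]

# Roll up along one dimension (Activity), summing effort hours.
rollup_by_activity = defaultdict(float)
for activity, labor_category, time_period, effort in facts:
    rollup_by_activity[activity] += effort

for activity, total in rollup_by_activity.items():
    print(activity, total)
```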
The values given in the sample data are for a fixed probability, e.g. 80%. For example, an estimated cost of $1,000 at an 80% probability level means that there is an 80% probability that the actual cost will be less than or equal to $1,000.
Giving a single value for an estimate is, in general, not adequate for project and portfolio management, since it does not convey the degree of risk in the estimate. Instead, we need at least the expected value (i.e. the mean) and the variance. The variance is a measure of the risk and is therefore of interest to portfolio and project managers.
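Concretely, given samples c_1, ..., c_N of an estimated cost C (e.g. from a Monte Carlo run), the reported estimate would carry at least the usual sample statistics; these are the standard formulas, and how each vendor's tool actually computes them was not discussed.

```latex
E[C] \approx \bar{c} = \frac{1}{N}\sum_{i=1}^{N} c_i,
\qquad
\operatorname{Var}(C) \approx \frac{1}{N-1}\sum_{i=1}^{N} \left(c_i - \bar{c}\right)^2
```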
ArthurRyman showed a draft RDF/XML representation of the estimate data as a fact table.
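The draft itself was not captured in these minutes. As a rough illustration of the fact-table idea only, the following rdflib sketch emits one RDF resource per (Activity, Labor Category, Time Period) cell and serializes it as RDF/XML; the namespace, class, and property names are invented for this sketch and may differ from the actual draft.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical namespace and property names; the actual draft
# vocabulary shown in the meeting may differ.
EMS = Namespace("http://example.org/ems#")

g = Graph()
g.bind("ems", EMS)

# One fact resource per (activity, labor category, time period) cell.
fact = URIRef("http://example.org/estimates/1/fact/1")
g.add((fact, RDF.type, EMS.EffortEstimateDetail))
g.add((fact, EMS.activity, Literal("Design")))
g.add((fact, EMS.laborCategory, Literal("Developer")))
g.add((fact, EMS.timePeriod, Literal("2009-Q3")))
g.add((fact, EMS.effortHours, Literal(120.0, datatype=XSD.double)))
g.add((fact, EMS.probabilityLevel, Literal(0.80, datatype=XSD.double)))

print(g.serialize(format="xml"))
```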
LawrencePutnamJr then gave a live demo of SLIM-Estimate and showed similar data. Slightly different terms are used. Here's the dictionary:
Standard? | Galorath | QSM
WBS Element | Activity | Task
Role | Labor Category | Skill
Probability Level | Probability Level | Assurance Level
Both vendors generate probability distributions by Monte Carlo methods. Estimates are computed from input parameters, but there is some uncertainty in their values. Values are generated for the input parameters according to their probability distributions, and the models are applied to them to yield definite values for the effort, duration, quality, and other metrics. This calculation is repeated 500 to 1000 times to yield a probability distribution for the output estimates. The underlying probability distributions are therefore available in the estimation tools. These raw (not rolled-up) values should be sent to the EMS 1.0 service to allow users to analyze the data with reporting tools.
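A minimal Python sketch of the Monte Carlo procedure described above; the toy model, parameter distributions, and numbers are invented for illustration and are not either vendor's actual model.

```python
import random
import statistics

# Hypothetical input parameter distributions (illustrative only).
# size: estimated size in function points; productivity: FP per person-month.
def sample_inputs():
    size = random.gauss(mu=500, sigma=50)
    productivity = random.gauss(mu=10, sigma=2)
    return size, max(productivity, 0.1)

# Toy model: effort (person-months) = size / productivity.
def model(size, productivity):
    return size / productivity

# Repeat the calculation ~1000 times to build an output distribution.
efforts = []
for _ in range(1000):
    size, productivity = sample_inputs()
    efforts.append(model(size, productivity))

efforts.sort()
mean = statistics.mean(efforts)
variance = statistics.variance(efforts)
p80 = efforts[int(0.80 * len(efforts))]  # 80% probability level

print(f"mean={mean:.1f}, variance={variance:.1f}, 80% level={p80:.1f}")
```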
QSM applies factors to the effort estimates associated with tasks in order to break them down by skill. Given an estimate for a task, a certain portion can be allocated to managers, developers, testers, etc. The method by which effort is allocated to roles and time periods is part of the model and not part of the data interchange. Only the result of the computation is interchanged. Furthermore, the raw (not rolled-up) data should be interchanged to eliminate potential inconsistency between rolled-up values and to allow maximum flexibility in how users of the estimates analyze them.
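For illustration only, the allocation step might look like the following sketch; the role factors are invented, and the actual allocation method is part of the vendor's model and is not interchanged.

```python
# Hypothetical allocation factors per role for a single task;
# the factors sum to 1.0.
role_factors = {"Manager": 0.15, "Developer": 0.60, "Tester": 0.25}

task_effort_hours = 400.0  # total estimated effort for the task

# Allocate the task effort to roles by applying the factors.
effort_by_role = {
    role: task_effort_hours * factor
    for role, factor in role_factors.items()
}

print(effort_by_role)  # {'Manager': 60.0, 'Developer': 240.0, 'Tester': 100.0}
```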
2. Service-Factory-Instance Pattern - ArthurRyman
Review proposal for modelling resources as service, factory, or instance.
ArthurRyman briefly reviewed the proposed design pattern for resources, namely a three-level hierarchy of Service-Factory-Instance. The Service contains Factories. Factories create and list instances of a given type, e.g. Estimate.
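A minimal Python sketch of the three-level hierarchy; the class and method names are illustrative and are not the proposed resource vocabulary.

```python
class Estimate:
    """An instance resource, e.g. a single effort estimate."""
    def __init__(self, estimate_id, description):
        self.estimate_id = estimate_id
        self.description = description


class EstimateFactory:
    """A factory resource: creates and lists instances of one type."""
    def __init__(self):
        self._instances = []

    def create(self, description):
        instance = Estimate(len(self._instances) + 1, description)
        self._instances.append(instance)
        return instance

    def list(self):
        return list(self._instances)


class Service:
    """The service resource: contains the factories."""
    def __init__(self):
        self.estimate_factory = EstimateFactory()


service = Service()
service.estimate_factory.create("Release 2.0 effort estimate")
print([e.estimate_id for e in service.estimate_factory.list()])
```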