
QM Scenario: Traceability

STATUS: WORKING DRAFT This is a working draft of the scenario. Its contents are expected to change.

This scenario focuses on the needs of consumers to create and follow traceability links across multiple resources and OSLC domains.


Traceability links are established between CM and QM domains

  1. Developer creates a change request using CM tool
  2. Tester is notified about the change request
  3. Tester creates a test case using QM tool or finds an existing test case that can be reused
  4. Tester requests a change request selection dialog
  5. CM provider provides selection dialog
  6. Tester selects change request
  7. QM tool links the test case with the change request
  8. Developer reviews change request. The link to the test case is visible.
  9. Developer gets summary information about the test case using the link
  10. Developer navigates to further details about the test case using the link
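The linking step (7) above could be sketched as follows. This is an illustrative in-memory sketch only: the property name "oslc_qm:testsChangeRequest" echoes OSLC QM vocabulary but the URIs, field names, and function are hypothetical stand-ins, not the normative protocol.

```python
def link_test_case_to_change_request(test_case, change_request_uri):
    """Add an OSLC-style link from a test case to a change request."""
    links = test_case.setdefault("oslc_qm:testsChangeRequest", [])
    if change_request_uri not in links:  # re-selecting the same CR is a no-op
        links.append(change_request_uri)
    return test_case

test_case = {
    "rdf:about": "https://qm.example.com/testcases/42",
    "dcterms:title": "Verify login throttling",
}
cr_uri = "https://cm.example.com/changerequests/1001"
link_test_case_to_change_request(test_case, cr_uri)
link_test_case_to_change_request(test_case, cr_uri)  # duplicate selection ignored
print(test_case["oslc_qm:testsChangeRequest"])
```

In a real integration the change request URI would come from the CM provider's selection dialog (steps 4-6) rather than being hard-coded.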

Traceability links are established between RM and QM domains

  1. Architect creates a requirement using RM tool
  2. Tester is notified about the requirement
  3. Tester creates a test case using QM tool or finds an existing test case that can be reused
  4. Tester requests a requirement selection dialog
  5. RM provider provides selection dialog
  6. Tester selects requirement
  7. QM tool links the test case with the requirement
  8. Architect reviews requirement. The link to the test case is visible.
  9. Architect gets summary information about the test case using the link
  10. Architect navigates to further details about the test case using the link
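Steps 9-10 (summary first, full details on demand) could be sketched like this. The two-level view mirrors the compact/preview pattern OSLC providers use for links, but the URIs, field names, and store here are illustrative assumptions.

```python
# Hypothetical in-memory stand-in for the QM provider's resources.
TEST_CASES = {
    "https://qm.example.com/testcases/42": {
        "dcterms:title": "Verify login throttling",
        "oslc_qm:status": "Approved",
        "dcterms:description": "Exercise account lockout after repeated failures.",
    },
}

def summary(uri):
    """Return just the compact fields a link preview would show (step 9)."""
    tc = TEST_CASES[uri]
    return {"title": tc["dcterms:title"], "status": tc["oslc_qm:status"]}

def details(uri):
    """Return the full test case resource (step 10)."""
    return TEST_CASES[uri]

print(summary("https://qm.example.com/testcases/42"))
```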

Requirements validation

  1. Traceability links are established as described in previous scenarios
  2. Developer implements change request and notifies tester
  3. Tester creates execution record for the plan item
  4. Tester executes test and creates an execution result
  5. Architect searches for test cases associated with the requirement
  6. Architect navigates to a test case and examines its execution records
  7. Architect examines execution results associated with execution record

Regression test

  1. A defect is found during business as usual activities
  2. Developer delivers a fix for the defect and resolves it
  3. Developer is confident that the specific defect has been fixed, but is unsure whether the fix may have accidentally regressed some other part of the system as a side effect.
  4. Developer requests a test case selection dialog
  5. QM provider provides selection dialog
  6. Developer selects a regression test case from the appropriate category
  7. CM tool links the defect to the regression test
  8. Peer Developer executes the regression test
  9. Regression test succeeds
  10. Peer Developer links execution result to the defect
  11. Developer views the link to the regression test from the defect
-- alternate
  • Regression test fails
  • Peer Developer reopens defect
  • Go to step 2
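Steps 8-11 together with the alternate flow could be sketched as follows. The statuses and field names are illustrative assumptions; the point is that a failed execution result reopens the defect and sends the flow back to step 2.

```python
def record_regression_result(defect, result):
    """Link an execution result to the defect (steps 10-11); reopen on failure."""
    defect.setdefault("relatedExecutionResults", []).append(result)
    if result["verdict"] == "failed":
        defect["status"] = "Reopened"  # alternate flow: back to step 2
    return defect

defect = {"id": "DEF-17", "status": "Resolved"}

record_regression_result(defect, {"verdict": "passed"})
print(defect["status"])  # still Resolved: main flow completes

record_regression_result(defect, {"verdict": "failed"})
print(defect["status"])  # Reopened: alternate flow triggered
```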





Systems involved

  • QM system
  • RM system
  • CM system

wrt Traceability links: I think it is worth emphasizing that not all changes need an associated test (or at least not a new test). Two situations should be allowed for: (1) the developer change doesn't require a test - this might be a risk, so the RM tool needs to capture this assessment and allow for sign-off; (2) the developer change doesn't require the creation of a new test case (for example an internal change with the API fixed), so the QM tool must allow an existing test to be associated with the change.

-- NigelLawrence - 06 May 2010

wrt Requirements Validation - some extensions to the approach above: (1) the Architect asks for a summary view of all test cases associated with a requirement (or requirements) where the last execution result is in a certain state (normally a failure). (2) the Architect wishes to know which tests associated with a requirement have not been executed within the last period of time (iteration, milestone, calendar month, etc.). (3) a combination of (1) and (2) - for example, show me all test cases associated with these requirements where the last result is within the last week and it failed.

-- NigelLawrence - 06 May 2010
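The filtered summary views described above could be sketched like this. Everything here is an illustrative in-memory stand-in: the field names, the fixed "now", and the sample data are assumptions, not OSLC QM query syntax.

```python
from datetime import datetime, timedelta

now = datetime(2010, 5, 6)  # fixed reference time for the example

test_cases = [
    {"title": "Throttling", "requirement": "REQ-7",
     "results": [(datetime(2010, 5, 5), "failed")]},
    {"title": "Reset", "requirement": "REQ-7",
     "results": [(datetime(2010, 4, 1), "passed")]},
    {"title": "Audit", "requirement": "REQ-7", "results": []},
]

def recently_failed(cases, requirement, window):
    """View (1)+(3): test cases whose most recent result failed within the window."""
    out = []
    for tc in cases:
        if tc["requirement"] != requirement or not tc["results"]:
            continue
        when, verdict = max(tc["results"])  # most recent execution result
        if verdict == "failed" and now - when <= window:
            out.append(tc["title"])
    return out

def not_run_within(cases, requirement, window):
    """View (2): test cases with no execution inside the window."""
    return [tc["title"] for tc in cases
            if tc["requirement"] == requirement
            and not any(now - when <= window for when, _ in tc["results"])]

print(recently_failed(test_cases, "REQ-7", timedelta(days=7)))
print(not_run_within(test_cases, "REQ-7", timedelta(days=30)))
```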

wrt Regression test: Normally the tester is going to find the bug running a particular regression test - so to a certain extent verifying the fix to the bug is simple... re-execute the test (i.e. the dialog with QM to select the test is a no-op). But it might be helpful if the QM tool could provide other associated tests pertinent to the nature of the change, to check that the fix not only stops the test from failing but also provides confidence that it hasn't regressed anything else.

Another scenario that isn't really accounted for is updating the regression suites. What is the workflow for filling holes in a regression bucket? Suppose a customer problem or some ad-hoc investigation exposes a hole in your regression testing that must be fixed. The QM tool should allow for this eventuality and possibly link through to other resources employed by a service organisation to confirm that the hole has been plugged.

Finally I'm interested in the process that goes before step (1) above. How has the tester (in my case an automated test execution engine) executed the regression test that resulted in the failure? Is it blindly running through all the regression tests? Ideally the tester would be guided by correlations between the underlying developer changes and the regression tests, so that only relevant tests are executed, in order of relevance.

-- NigelLawrence - 06 May 2010

Nigel, thanks for the feedback. During the last meeting we agreed that the "establishing links" scenario made too many presumptions about how CM and RM tool integrate. We decided to break it into two scenarios, one for establishing links between CM/QM and another for links between RM/QM. After breaking it into two scenarios I incorporated your feedback by allowing the tester to either create a new test case or select an existing test case.

-- PaulMcMahan - 12 May 2010

re: Requirements validation, I agree that these are useful extensions. Since the final summary views span multiple domains and artifacts within those domains (testcase, TER, executionresult) I think we should consider these extensions in terms of reporting.

-- PaulMcMahan - 12 May 2010

re: Regression test. I think that the scenario as written was confusing and misleading because it implied that the scenario began by the tester running regression tests without any prompting or context. The scenario was actually supposed to begin with a developer making a change and feeling confident in that change but wondering if some other part of the system might have been adversely affected by it. This would prompt the developer to look for a regression test that he could assign to the defect. Once assigned to the defect, a tester would then be notified that a regression test is needed and execute it. I updated the scenario to reflect this better.

-- PaulMcMahan - 12 May 2010

re: Regression test, test suites are an area that we will probably need to defer for QM V2

-- PaulMcMahan - 12 May 2010

wrt Requirements validation - should we separate out a distinct Change validation scenario, in the same way we separate out the linkage of QM to RM and QM to CM? My change validation scenario would essentially be a subset of Requirements validation (steps 2-5), but the focus is on validating the function delivered in a single change, as opposed to validating the function delivered in satisfying a requirement (which might be satisfied by many changes).

-- NigelLawrence - 19 May 2010

wrt regression test - I agree that altering the words so that the scenario isn't role based (i.e. this developer does this, that tester does that) makes it clearer, but I still think the scenario needs to be refined. My main issue with the current flow is that the entry point is vague. I think the bug can arise from 2 classes of activity: (1) planned background regression testing and (2) other BAU activity (i.e. an issue from the field, unplanned test activity, a developer identifying a problem in one area while developing another). In the first case the association between defect and test case is potentially simple (we choose the test case that failed). But we might want to allow multiple test cases to be associated with the defect (i.e. the test case that failed and also test cases that exercise functionality that is 'likely' to be affected by the defect fix).

-- NigelLawrence - 19 May 2010

re: Requirements validation - I think it would be preferable for the QM provider to not make any assumptions, or even have any awareness, of associations between CM and RM providers. So I removed step 2 from the scenario since IMO it really deals with activities outside of the QM domain.

-- PaulMcMahan - 04 Jun 2010

re: Regression test - good point. What I really had in mind here was the BAU activity where a developer makes an ad-hoc change to the system and has insufficient knowledge or confidence about the downstream effects. I will clarify this as the actual entry point in the scenario.

-- PaulMcMahan - 04 Jun 2010

Topic revision: r7 - 04 Jun 2010 - 17:41:58 - PaulMcMahan