QM Scenario: Traceability
STATUS: WORKING DRAFT. This is a working draft of the scenario; its contents are expected to change.
This scenario focuses on the needs of consumers to create and follow traceability links across multiple resources and OSLC domains.
Scenario
Traceability links are established between CM and QM domains
- Developer creates a change request using CM tool
- Tester is notified about the change request
- Tester creates a test case using QM tool or finds an existing test case that can be reused
- Tester requests a change request selection dialog
- CM provider provides selection dialog
- Tester selects change request
- QM tool links the test case with the change request (see the sketch after this scenario)
- Developer reviews change request. The link to the test case is visible.
- Developer gets summary information about the test case using the link
- Developer navigates to further details about the test case using the link
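A minimal sketch of the linking step above, assuming an OSLC V2-style QM provider that serves the test case as RDF/XML and a property along the lines of oslc_qm:testsChangeRequest. The resource URIs, property name, and authentication/error handling are illustrative assumptions rather than part of the scenario.

    import requests
    from rdflib import Graph, Namespace, URIRef

    OSLC_QM = Namespace("http://open-services.net/ns/qm#")

    # Hypothetical URIs: the change request URI is whatever the tester picked
    # in the CM provider's delegated selection dialog.
    TEST_CASE_URI = "https://qm.example.com/testcases/42"
    CHANGE_REQUEST_URI = "https://cm.example.com/changerequests/314"

    # Fetch the test case resource from the QM provider as RDF/XML.
    resp = requests.get(TEST_CASE_URI, headers={"Accept": "application/rdf+xml"})
    resp.raise_for_status()

    graph = Graph()
    graph.parse(data=resp.text, format="xml")

    # Record the traceability link from the test case to the change request.
    # Property name assumed from the QM V2 draft vocabulary.
    graph.add((URIRef(TEST_CASE_URI), OSLC_QM.testsChangeRequest,
               URIRef(CHANGE_REQUEST_URI)))

    # Write the updated representation back, guarding against concurrent edits.
    requests.put(
        TEST_CASE_URI,
        data=graph.serialize(format="xml"),
        headers={"Content-Type": "application/rdf+xml",
                 "If-Match": resp.headers.get("ETag", "*")},
    ).raise_for_status()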
Traceability links are established between RM and QM domains
- Architect creates a requirement using RM tool
- Tester is notified about the requirement
- Tester creates a test case using QM tool or finds an existing test case that can be reused
- Tester requests a requirement selection dialog
- RM provider provides selection dialog
- Tester selects requirement
- QM tool links the test case with the requirement
- Architect reviews requirement. The link to the test case is visible.
- Architect gets summary information about the test case using the link (see the sketch after this scenario)
- Architect navigates to further details about the test case using the link
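A minimal sketch of the "summary information" step, assuming the RM tool uses the OSLC Core V2 resource preview (Compact) representation to show a hover summary for the linked test case. The test case URI and the handling of the returned document are illustrative assumptions.

    import requests

    # Hypothetical test case URI stored on the requirement by the linking step.
    TEST_CASE_URI = "https://qm.example.com/testcases/42"

    # Ask the QM provider for the Compact (preview) representation defined by
    # OSLC Core V2; it carries a title, an icon, and the URL of a small preview
    # page that the RM tool can embed when the architect hovers over the link.
    resp = requests.get(
        TEST_CASE_URI,
        headers={"Accept": "application/x-oslc-compact+xml"},
    )
    resp.raise_for_status()
    print(resp.text)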
Requirements validation
- Traceability links are established as described in previous scenarios
- Developer implements change request and notifies tester
- Tester creates execution record for the plan item
- Tester executes test and creates an execution result
- Architect searches for test cases associated with the requirement (see the sketch after this scenario)
- Architect navigates to a test case and examines its execution records
- Architect examines execution results associated with execution record
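A minimal sketch of the architect's search, assuming the QM provider exposes an OSLC V2 query capability and a validatesRequirement-style property. The query base URI, property name, and prefix handling are assumptions; in practice they would be discovered from the provider's service documents.

    import requests

    # Hypothetical query capability base and requirement URI.
    QUERY_BASE = "https://qm.example.com/qm/query/testcases"
    REQUIREMENT_URI = "https://rm.example.com/requirements/77"

    # OSLC-style query: test cases whose requirement link points at the
    # requirement under review.
    params = {
        "oslc.where": "oslc_qm:validatesRequirement=<%s>" % REQUIREMENT_URI,
        "oslc.select": "dcterms:title",
    }
    resp = requests.get(QUERY_BASE, params=params,
                        headers={"Accept": "application/rdf+xml"})
    resp.raise_for_status()
    print(resp.text)  # result set listing the matching test cases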
Regression test
- A defect is found during business as usual activities
- Developer delivers a fix for the defect and resolves it
- Developer is confident that the specific defect has been fixed, but is unsure whether the fix has accidentally regressed some other part of the system.
- Developer requests a test case selection dialog
- QM provider provides selection dialog
- Developer selects a regression test case from the appropriate category
- CM tool links the defect to the regression test case (see the sketch after this scenario)
- Peer Developer executes the regression test
- Regression test succeeds
- Peer Developer links execution result to the defect
- Developer views the link to the regression test from the defect
-- Alternate flow
- Regression test fails
- Peer Developer reopens defect
- Return to the step where the developer delivers a fix for the defect
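A minimal sketch of the step where the CM tool records the link from the defect to the chosen regression test case, following the same GET/modify/PUT pattern as the earlier sketch. The URIs and the oslc_cm:testedByTestCase property name are illustrative assumptions.

    import requests
    from rdflib import Graph, Namespace, URIRef

    OSLC_CM = Namespace("http://open-services.net/ns/cm#")

    # Hypothetical URIs: the test case URI is whatever the developer picked
    # in the QM provider's selection dialog.
    DEFECT_URI = "https://cm.example.com/defects/99"
    REGRESSION_TEST_URI = "https://qm.example.com/testcases/regression/7"

    resp = requests.get(DEFECT_URI, headers={"Accept": "application/rdf+xml"})
    resp.raise_for_status()

    graph = Graph()
    graph.parse(data=resp.text, format="xml")

    # Property name assumed from the CM V2 draft vocabulary.
    graph.add((URIRef(DEFECT_URI), OSLC_CM.testedByTestCase,
               URIRef(REGRESSION_TEST_URI)))

    requests.put(
        DEFECT_URI,
        data=graph.serialize(format="xml"),
        headers={"Content-Type": "application/rdf+xml",
                 "If-Match": resp.headers.get("ETag", "*")},
    ).raise_for_status()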
Pre-conditions:
Post-conditions:
Alternatives:
Resources
- QM system
- RM system
- CM system
wrt Traceability links: I think it is worth emphasizing that not all changes need an associated test (or at least not a new test). I think it is worth allowing for the possible situations: (1) a developer change doesn't require a test - this might be a risk, so the RM tool needs to capture this assessment and allow for sign-off; (2) a developer change doesn't require the creation of a new test case (for example, an internal change with the API fixed), so the QM tool must allow for the association of an existing test with the change.
--
NigelLawrence - 06 May 2010
wrt Requirements Validation - some extensions to the approach above:
(1) Architect asks for a summary view of all test cases associated with a requirement or requirements whose last execution result is in a certain state (normally a failure).
(2) Architect wishes to know which tests associated with a requirement have not been executed within the last period of time (iteration, milestone, calendar month, etc.)
(3) A combination of (1) & (2) - for example, show me all test cases associated with these requirements where the last result is from within the last week and it failed.
--
NigelLawrence - 06 May 2010
wrt Regression test:
Normally the tester is going to find the bug by running a particular regression test - so to a certain extent verifying the fix to the bug is simple... re-execute the test (i.e. the dialog with QM to select the test is a no-op). But it might be helpful if the QM tool could provide other associated tests pertinent to the nature of the change, to check that the fix not only stops the test from failing but also gives you confidence that it hasn't regressed anything else.
Another scenario that isn't really accounted for is updating the regression suites. What is the workflow for filling holes in a regression bucket? Suppose a customer problem or some ad hoc investigation exposes a hole in your regression testing that must be fixed. The QM tool should allow for this eventuality and possibly link through to other resources employed by a service organisation to confirm that the hole has been plugged.
Finally, I'm interested in the process that goes before step (1) above. How has the tester (in my case an automated test execution engine) executed the regression test that resulted in the failure? Is it blindly running through all the regression tests? Ideally it would be a nice scenario if the tester were guided by correlations between the underlying developer changes and the regression tests, so that only relevant tests are executed in order of relevance.
--
NigelLawrence - 06 May 2010
Nigel, thanks for the feedback. During the last meeting we agreed that the "establishing links" scenario made too many presumptions about how the CM and RM tools integrate. We decided to break it into two scenarios, one for establishing links between CM/QM and another for links between RM/QM. After breaking it into two scenarios I incorporated your feedback by allowing the tester to either create a new test case or select an existing test case.
--
PaulMcMahan - 12 May 2010
re: Requirements validation, I agree that these are useful extensions. Since the final summary views span multiple domains and artifacts within those domains (test case, TER, execution result), I think we should consider these extensions in terms of reporting.
--
PaulMcMahan - 12 May 2010
re: Regression test. I think that the scenario as written was confusing and misleading because it implied that the scenario began by the tester running regression tests without any prompting or context. The scenario was actually supposed to begin with a developer making a change and feeling confident in that change but wondering if some other part of the system might have been adversely affected by it. This would prompt the developer to look for a regression test that he could assign to the defect. Once assigned to the defect, a tester would then be notified that a regression test is needed and execute it. I updated the scenario to reflect this better.
--
PaulMcMahan - 12 May 2010
re: Regression test, test suites are an area that we will probably need to defer for QM V2
--
PaulMcMahan - 12 May 2010
wrt Requirements validation - should we separate this out to have a separate Change validation scenario, in the same way we separate out the linkage of QM to RM and QM to CM? My Change validation scenario would essentially be a subset of Requirements validation (steps 2-5), but the focus is on validating the function delivered in a single change as opposed to validating the function delivered in satisfying a requirement (which might be satisfied by many changes).
--
NigelLawrence - 19 May 2010
wrt regression test - I agree that altering the words so that it wasn't role based (i.e. developer does this, tester does that) makes it clearer, but I still think the scenario needs to be refined. My main issue with the current flow is that the entry point is vague. I think that the bug can arise from 2 classes of activity - (1) planned background regression testing and (2) other BAU activity (i.e. an issue from the field, unplanned test activity, a developer identifying a problem in one area as they are developing another). In the first case the association between defect and test case is potentially simple (we choose the test case that failed). But we might want to allow multiple test cases to be associated with the defect (i.e. the test case that failed and also test cases that exercise functionality that is 'likely' to be affected by the defect fix).
--
NigelLawrence - 19 May 2010
re: Requirements validation - I think it would be preferable for the QM provider to not make any assumptions, or even have any awareness, of associations between CM and RM providers. So I removed step 2 from the scenario since IMO it really deals with activities outside of the QM domain.
--
PaulMcMahan - 04 Jun 2010
re: Regression test - good point. What I really had in mind here was the BAU activity where a developer is making an ad-hoc change to the system and has insufficient knowledge or confidence about the downstream effects. I will clarify this as the actual entry point in the scenario.
--
PaulMcMahan - 04 Jun 2010