[Oslc-Automation] Temporary deployment solutions - tear-down plans - locating the plans in automated script construction

Martin P Pain martinpain at uk.ibm.com
Wed Aug 21 11:53:44 EDT 2013


Hi John,

Stephen's first few paragraphs were addressing your argument that putting 
the relationship on the plan was wrong, which you backed up by saying it 
doesn't match an OO analogy. Stephen opposed this by suggesting an OO 
analogy that it does fit (the Command pattern). Let's move away from 
discussing that (and your replies to that) to the other arguments against 
putting the relationship on the Plan, to avoid losing focus.

Your next argument was that the Request/Result knows the details of the 
entity to be torn down, not the Plan - and that we are not tearing down 
all instances created by the Plan. This is correct. However, by making the 
only parameter that is passed to the teardown Plan a URI that the provider 
is in complete control of, the provider has the ability to make sure 
that the correct information is passed on to the teardown plan. That 
"produced" URI could be the AutoResult or AutoRequest if the provider 
wished to do so (indeed in our case it would be), but it is left up to the 
provider.
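To make the shape of that interaction concrete, here is a minimal sketch (not spec text) of a provider minting a "produced" URI it fully controls and a client passing only that URI to the teardown Plan. All field names, parameter names, and URIs below are hypothetical stand-ins, not vocabulary from the spec.

```python
# Illustrative sketch: the provider controls the "produced" URI, so it can
# resolve whatever teardown context it needs behind that URI.

# Provider-side registry mapping produced URIs to teardown context.
produced_registry = {}

def complete_deploy_request(request_uri, deployed_env):
    """On deploy completion, mint a produced URI the provider controls."""
    produced_uri = f"http://provider.example/produced/{len(produced_registry) + 1}"
    # The provider could just as well use the AutoResult or AutoRequest URI here.
    produced_registry[produced_uri] = {
        "request": request_uri,
        "environment": deployed_env,
    }
    return produced_uri

def build_teardown_request(teardown_plan_uri, produced_uri):
    """The produced URI is the only parameter the client passes to teardown."""
    return {
        "executesAutomationPlan": teardown_plan_uri,
        "inputParameter": {"name": "produced", "value": produced_uri},
    }
```

The point of the sketch is that the client never interprets the produced URI; it only hands it back, so the provider stays in control of what teardown context it encodes.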

These are the only two arguments we see in your email (from 20th Aug) for 
putting the link on the Plan, but to us they do not carry as much weight 
as the desire to know in advance (at orchestration-plan design time) that 
teardown is possible, and how to perform it.

The reason for the teardown plan link being on the Plan is so that you 
can tell when adding this plan to something else (whether an orchestration 
plan, or as something to be run in the set-up phase of a test run) that 
there is the option - or requirement - to tear it down later. We could add 
a predicate saying "the result will tell you how to tear down", but this 
goes one step further and tells you how to perform it.
This gives the motivation for the "produced" link - to give the URI to 
pass to the teardown plan. (It could also be useful for knowing what 
details to pass to other Plans that are using the result of this 
deployment, but currently that would still require provider-specific 
knowledge, so I won't go into that any more here.)
The "produces" link is there to give more information about what will be 
in the "produced" link. Not having this won't break the scenario IMO, but 
it helps the teardown plan be more flexible with parameters (as it could 
be used to help select the correct parameter to use), and it might also 
help an orchestrator know what other Plans can be chained after this one.
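As a rough illustration of how an orchestrator could use these links at design time, here is a sketch with made-up predicate names ("teardownPlan", "produces"); the actual proposal may name and type these differently.

```python
# Hypothetical link layout on a deploy Plan. These names are illustrative,
# not the proposed vocabulary terms.
deploy_plan = {
    "uri": "http://provider.example/plans/deploy",
    # Known at orchestration-plan design time: teardown is possible, and how.
    "teardownPlan": "http://provider.example/plans/teardown",
    # Describes what the "produced" link on the Result will point at.
    "produces": "http://provider.example/types/VirtualService",
}

def can_teardown(plan):
    """At design time, check whether this Plan's output can be torn down."""
    return "teardownPlan" in plan

def chainable_after(plan, next_plan_input_type):
    """'produces' could help an orchestrator chain Plans by matching types."""
    return plan.get("produces") == next_plan_input_type
```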

I expect these requirements could be met by a predicate on the Plan which 
indicates "the results will tell you how to tear them down" (in our case 
we would want the added semantics of "and please do so if you know when 
you've finished with them"), then a link on the Result to the teardown 
Plan, Action or naked POST/DELETE endpoint. But I haven't thought it 
through fully. Would this be more acceptable to you John? Would this still 
be acceptable to you Stephen, Michael, etc? I might be able to put this 
into a proposal tomorrow.

Thanks,
Martin



From:   John Arwe <johnarwe at us.ibm.com>
To:     oslc-automation at open-services.net, 
Date:   21/08/2013 14:36
Subject:        Re: [Oslc-Automation] Temporary deployment solutions - 
tear-down plans - locating the plans in automated script construction
Sent by:        "Oslc-Automation" 
<oslc-automation-bounces at open-services.net>



>  I've heard it said on numerous occasions by various people on the calls 
that the Automation spec is about actions to create other things. 

Usually "what X is about" statements are shaped by how people are 
[thinking about] using it, which is in turn based on their experiences. 
(Good) specifications typically try to do just what's in scope, and 
otherwise stay out of the way.... thus they admit several (ideally: many) 
uses, perhaps even apparently conflicting ones, and certainly ones that 
the promulgators never imagined. 
The obvious question to a statement like "the Automation spec is about 
actions to create other things", in the context of OSLC which tries to 
color within the lines of REST and Linked Data, is "should we then 
deprecate POST for create?" ... and FWIW at the HTTP level in W3C's LDP 
WG, lots of people still want to allow other ways to create resources, 
like PUT and even PATCH.  I don't think anyone has actually advocated 
dumping POST or CreationCapabilities -- just an example of reductio ad 
absurdum.  We have to ask what the limits are. 

More generally, any HTTP operation can be encapsulated (via Automation, or 
some other spec) ... but I don't think many would say we want to dump REST 
and do everything via Automation.  If you have both, and the capabilities 
of one include the other, you have to establish some community guidance, 
rules, norms, whatever for when to use each.  I think we're starting to 
feel our way through that process now. 


For my money, what Automation adds that the HTTP community has no standard 
answer for is how to handle long-running requests without relying on a 
persistent connection.  Otherwise, POST would be all you need; the 
Automation vocabulary is a useful refinement of POST, but parameter 
passing via name-value pairs and having no standard Plan URIs are not 
substantively different than naked POSTs.  But monitoring, canceling, some 
standardization of current/final states ... clear net positive, and very 
general, for those long-running interactions. 
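The long-running interaction pattern described above can be sketched as a simple polling loop: create a request, then poll its state until terminal. The state names follow OSLC Automation 2.0's verdict/state vocabulary in spirit ("inProgress", "complete", "canceled"), but the fetch callable and URIs below are stand-ins for real HTTP calls, not spec-mandated API.

```python
# Sketch of what Automation standardizes beyond a naked POST: a monitorable
# request resource with defined current/final states.

TERMINAL_STATES = {"complete", "canceled"}

def wait_for_result(request_uri, fetch, max_polls=100):
    """Poll the AutoRequest resource until it reaches a terminal state.

    `fetch` stands in for an HTTP GET returning the request's representation.
    """
    for _ in range(max_polls):
        resource = fetch(request_uri)
        if resource["state"] in TERMINAL_STATES:
            return resource
    raise TimeoutError(f"{request_uri} did not finish after {max_polls} polls")
```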

The question coming up from some areas like our Provisioning and 
Scheduling products is really how to expose what they manage 
HTTP-RESTfully *and* allow those same operations to be "automated" 
(scheduled), whether they turn out to be long-running or not, without 
implementing "many" specs. 


> So given that I think the Automation spec is all about being a command 
pattern, it doesn't seem unreasonable to have 2 different commands (2 
Plans) which affect the system in different ways. Or do people not expect 
the Automation Spec to be used this way? I would welcome some documented 
clarity around this. 

IMO: Automation allows you to have as many Plans as you like, implementing 
whatever operations you like (including each HTTP operation, if the mood 
strikes you).  Regardless of which design pattern(s) you think it fits. 

If you want to define more constrained (yet still general) operations, 
like Start/Stop, than HTTP gives you (or is likely to), then you end up with 
vocabulary(ies) for those ... both Actions and AutoPlans can fit that 
bill; the net difference between the two so far is that if the provider 
uses AutoPlans, then the client interaction style is assumed to be aligned 
with long-running requests (fair disclosure: I did convince people in 
Automation 2.0 to allow short-running requests as well via the 200 flow, 
but general clients still must be prepared for the others).  If the 
provider sticks with straight HTTP, or Actions, the client interaction 
style is assumed to be aligned with short-running requests; HTTP provides 
202 for long-running, but leaves completely open what the "monitor 
resource" looks like and how the client interacts with it after the 202 is 
received.   
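A sketch of the client-side difference between the two styles: a plain-HTTP or Actions provider answers 200 (done now) or 202 plus a monitor resource whose shape HTTP leaves completely open. Response objects here are plain dicts standing in for real HTTP responses.

```python
# Sketch only: how a short-running-oriented client branches on the status.
def handle_action_response(response):
    if response["status"] == 200:
        # Short-running: the operation completed within the request.
        return ("done", response.get("body"))
    if response["status"] == 202:
        # Long-running: HTTP says "accepted", but what the monitor resource
        # looks like, and how to interact with it, HTTP leaves unspecified.
        return ("monitor", response["headers"].get("Location"))
    raise ValueError(f"unexpected status {response['status']}")
```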


> To comment on the idea of HTTP delete on the manufactured resource; I 
don't like that idea because we are straying into someone else's 
specification to define behaviour we require for Automation. The created 
resources may be completely custom and outside of OSLC's remit and I don't 
think we should be forcing them to support DELETE in that way to get our 
desired behaviour. 

I think that's our choice; i.e. I understand the issue, and it seems 
trivial in my head to spec it without that problem.  But I'm not married 
to either alternative at the moment myself. 


> With the current RQM to RTVS integration the automated "stop" step is 
not possible and relies on user interaction. At least in the short term 
this is the major pain point that I would want to address. RQM should be 
able to "stop" what it started, and it needs to be able to do that in a 
way that is not RTVS specific so that other automation providers could be 
used. 

I heard no objections to doing so last week, even via the proposed 
predicate.  If that were to turn out to be a "tactical" solution in the 
long run, with something like Actions being the eventual "strategic" one, 
fine ... this is linked data, it can be exposed both ways once the long 
term one settles out.  The discussion on the teardown predicate was only 
around where it's exposed, the Plan vs the Result. 

Best Regards, John

Voice US 845-435-9470  BluePages 
Tivoli OSLC Lead - Show me the Scenario 




From:        Stephen Rowles <stephen.rowles at uk.ibm.com> 
To:        oslc-automation at open-services.net, 
Date:        08/21/2013 05:07 AM 
Subject:        Re: [Oslc-Automation] Temporary deployment solutions - 
tear-down plans - locating the plans in automated script construction 
Sent by:        "Oslc-Automation" 
<oslc-automation-bounces at open-services.net> 



John, 

To pick up on your OO analogy. If that is correct then I would agree it 
seems odd. However I've heard it said on numerous occasions by various 
people on the calls that the Automation spec is about actions to create 
other things. Given this I'm not sure your constructor analogy holds true. 
From what I've understood people expect the Automation Plan/Request/Result 
to be more analogous to an OO "Command" pattern than an object 
construction pattern. 

In the command pattern various command objects exist which do things; you 
create a command to achieve what you want to do, which is then executed 
as/when appropriate to cause the desired effect. 

So given that I think the Automation spec is all about being a command 
pattern, it doesn't seem unreasonable to have 2 different commands (2 
Plans) which affect the system in different ways. Or do people not expect 
the Automation Spec to be used this way? I would welcome some documented 
clarity around this. 


To comment on the idea of HTTP delete on the manufactured resource; I 
don't like that idea because we are straying into someone else's 
specification to define behaviour we require for Automation. The created 
resources may be completely custom and outside of OSLC's remit and I don't 
think we should be forcing them to support DELETE in that way to get our 
desired behaviour. 


With the current RQM to RTVS integration the automated "stop" step is not 
possible and relies on user interaction. At least in the short term this 
is the major pain point that I would want to address. RQM should be able 
to "stop" what it started, and it needs to be able to do that in a way 
that is not RTVS specific so that other automation providers could be 
used. 


So in Summary: 

1) Is it correct that the Automation spec is intended to be a Command 
pattern style spec as I've described? 
2) Do people agree that we shouldn't be placing requirements on 
implementation details outside of the Automation spec? 


Stephen Rowles 



From:        John Arwe <johnarwe at us.ibm.com> 
To:        oslc-automation at open-services.net, 
Date:        20/08/2013 19:35 
Subject:        Re: [Oslc-Automation] Temporary deployment solutions - 
tear-down plans - locating the plans in automated script construction 
Sent by:        "Oslc-Automation" 
<oslc-automation-bounces at open-services.net> 



Catching up with this finally.   

There's a persistent underlying assumption that (REST hat on) feels 
misplaced, i.e. that the deploy Plan-A links to a teardown Plan-B, and 
Plan-B has a parameter (ick!) telling it what to tear down (which sounds a 
lot like destroy).  The equivalent OO statement is that I call a class's 
constructor (Plan-A) to manufacture an instance, and then I call some 
other class entirely (Plan-B) to destroy the instance.  [brows furrow] You 
will see a version of this point in the 8/15 minutes now that I fleshed 
out some things Michael was unable to minute. 

The deployed environment (virtual service) created as a result of creating 
an Automation Request (i.e. a constructor call, with parameters) against 
Plan A is the thing you want to tear down as I understand it, not all 
instances of Plan A output.  The 1:1 corresponding, already existing, 
Automation resource is the Automation Result.  So it seems perfectly 
natural that the Result (perhaps indirectly, via the "deployed env" it 
created) would tell a client how to tear it down.  I see no way the *Plan* 
can do so, because the *Plan* lacks knowledge of whatever parameters 
accompanied the Request (and in practical terms, any output parameters, 
which might also come into play in the general case).  The context needed 
to reverse the process is the original request; that has to be accessible 
to the teardown implementation somehow.  Smart implementations might need 
less than the full original request.  Since everything needed is 
accessible to the implementation, there is no need for (client-specified) 
parameters on teardown (goes my argument); if you want to take some 
-different- action, like "preserve env for later debug by humans", that's 
a -different action- with a different link. 

I do think there is room for debate on where that link gets placed, on the 
Result or on the "primary resulting resource", depending upon what 
semantics you attach to each. 
On the Result: you're depending on the Result to live on as long as the 
deployed environment.  Not clear that you need to introduce that 
dependency. 
On the "manufactured resource", linked to by the Result: Not clear to me 
that you need anything more than the link and HTTP DELETE on the 
"manufactured resource" to trigger teardown. 
Note that in both cases, implementations CAN use a "Plan-B with RPC-ish 
parameter" style by putting the parameter in the URI, if they so desire. 
Other implementations just have to ensure that whatever resource holds the 
teardown link has whatever subset of the Request parameters it needs to 
function properly.  That all seems like tasty loosely coupled goodness. 
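The "Plan-B with RPC-ish parameter in the URI" option reads roughly like this sketch: the teardown link itself carries whatever subset of the Request parameters the implementation needs, so the client just dereferences (or DELETEs) it blindly. The query-parameter names are made up for illustration.

```python
from urllib.parse import parse_qs, urlencode, urlparse

def make_teardown_link(base_uri, request_params):
    """Embed the needed subset of Request parameters in the teardown URI."""
    # This implementation only needs the environment id, not the full request.
    needed = {k: request_params[k] for k in ("envId",) if k in request_params}
    return f"{base_uri}?{urlencode(needed)}"

link = make_teardown_link("http://provider.example/teardown",
                          {"envId": "vm-42", "image": "ubuntu"})
```

The client never parses this URI; only the provider that minted it does, which is the loose coupling the paragraph above describes.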

As to the worries about the "right" cardinality of the proposed 
auto:produces/d predicate(s), containers are the obvious fix.  The 
cardinality is 0:1; if your Request produces >1 thing, then its output is a 
container of those things.  fin. 
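That cardinality rule can be sketched in a few lines; the container shape below is illustrative only.

```python
# Keep "produced" at cardinality 0:1 by wrapping multiple outputs
# in a single container resource.
def produced_link(outputs):
    """Return a single 'produced' value: the thing itself, or a container."""
    if not outputs:
        return None
    if len(outputs) == 1:
        return outputs[0]
    return {"type": "Container", "members": list(outputs)}
```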

For those who cry out "aha, but 'teardown' might just be deregistration of 
'interest' so the env is eligible for re-use, so DELETE goes too far", I 
say this is all tied up in the semantics of the manufactured resource 
(which Automation leaves open).  If the manufactured entity is just an 
"interest registration", then deleting it is exactly what the client 
wants.  If it is "the env", ditto.  It's all Automation provider 
implementation detail; if the provider wants to allow clients to see 
through the encapsulation, that's called a proprietary extension. 

> There's no way for the orchestrator to know that the two plans are 
linked in this way 
To be clear: I agree with this, modulo the "plan/Result/env" substitution 
above.   I.e. I do see a reasonable requirement for an "undo a previous 
Auto Request" 'action'. 

> "it can find the reference in the result contributions to the deployed 
environment" vs mult contributions 
Ditto, modulo the "where to place the link" discussion above.  If the 
inversion/teardown link is always on the Result, it's not obvious to me 
that we need produced/s.  It would be useful to be clear on each one why 
it's needed (exactly what fails without it), and if there are dependency 
relationships between the decisions then understand those. 

> a means to determine which input parameter that resource should be 
passed in 
So far, I've argued there is no parameter for teardown so this is moot. 
Best Regards, John

Voice US 845-435-9470  BluePages 
Tivoli OSLC Lead - Show me the Scenario 
_______________________________________________
Oslc-Automation mailing list
Oslc-Automation at open-services.net
http://open-services.net/mailman/listinfo/oslc-automation_open-services.net



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


