This wiki is locked. Future workgroup activity and specification development must take place at our new wiki. For more information, see this blog post about the new governance model and this post about changes to the website.
-- JimAmsden - 03 Sep 2009

Thoughts on MAD and the Semantic Web

"Architecture Management" could be considered to consist of planning, execution, evolution, measurement and governance of the principles, patterns and building blocks making up the assets that facilitate the construction of solutions that meet business objectives. Architecture for planning is that part of architecture management that determines how things should change, for what purpose, and for whom. Solution delivery is about how to instantiate and assemble architecture building blocks to construct solutions that adhere to architecture guiding principles. Enterprise architecture development methods help determine what enterprise assets are needed and how they should change over time to meet business needs.

The Architecture Management Resource Definitions (AMRD) define resource formats for defining model elements, their containment hierarchy, and links between them. These resource formats support extension and integration through the creation of a ModelElement? proxy for representing some other element, and the ability to create typed links between these elements through their proxies.

This is an effective strategy that has been proven in practice. For example, IBM RequisitePro? and Rational Software Architect integration is done using a similar approach, where ModelElement? is a proxy for the related element that can be used to establish the link information. This approach supports search, navigation and retrieval. But there is a trend toward enabling richer semantics and integration with other services.

Interoperability and integration require more than just search, navigation and retrieval. They require encoding more semantics in the data representation itself in order to support the development of new tools, products and services that use the data. The AMRD approach augments existing Jazz services (for describing services and handling requests and responses) through annotation of links between elements. But there may be an opportunity to build more agile workflow management tools that can deal with different levels of semantic annotation.

We are moving beyond the time when enterprise architecture, application design and construction, and lifecycle management tools were concerned mostly with data collection and presentation. There is an increasing emphasis on data manipulation and handling to make it actionable. This requires information to be placed in context and integrated with information from other sources in order to deliver business value. It involves accessing data, evaluating its relevance to a particular problem, and integrating it with other data in order to make informed decisions - transforming data into information and information into knowledge. RESTful web services and URL links provide effective data transfer, but they don't enable discovery, integration and assessment of data in context to address specific stakeholder concerns through specific viewpoints.

See SSWAP - a hybrid technology that adds rich, high-throughput semantic information to the web service model to enable semantic discovery and engagement. It is an OWL application providing web services and an OWL ontology that allow web resources to describe themselves, and allow clients to query those resources, integrate them, and semantically encode the result. SSWAP supports logical deductions from data, rather than the more lexical associations of ModelElement? containment and the heuristics associated with typed links supported by AMRD.

The problem we are trying to solve is to link semantically rich information across a wide range of enterprise architecture, solution delivery, asset and lifecycle management tools.

Product Integration Summary


This may be difficult to do with proxy elements and typed links due to the richness of the links involved, the extensions required in each of the data sources to support the meaning of the links, and the need to reason about the results beyond what might be supported by any of the existing tools. What AM needs to do is provide:

  1. The ability to identify the major components of a model – mapping each part to a value proposition, and a role that would be responsible for creating, maintaining, and/or leveraging that model component
    • Data model, Process model, Service model, etc.
    • Defining the relationships between the major model components
    • Defining a structured relationship between the levels of abstraction, and their use within strategy, planning, portfolio, architecture, solutions management and governance practices
  2. Model Management is concerned with four main areas:
    • “Team modeling” during model development
    • Asset Management, the sharing and managing of “models as assets” to be activated as seeds or constraints
    • “Model consistency management” across abstraction levels and domains
    • The formatting standards/conventions necessary to enable interoperation between tools and roles
  3. A successful model management solution requires:
    • Consistency in how the various tools support the various model artifacts used
    • Consistency in how those tools support team modeling, asset management and model consistency management
    • Support for the necessary flexibility in different team situations (including different organizational setups)
    • Well-documented best practices
    • Consistent lifecycles across domains and levels

With regard to AMRD, here are some thoughts on specific things we could discuss. These are just ideas that need further exploration. They may result in no changes to AMRD, or they may feed into some future evolution of AMRD.

1. Defining OSLC Elements

ModelElement? is used to represent discretely addressable elements out of a typically larger collection/model of elements for which a (possibly separate) Model Management Repository can provide additional information beyond that provided in the element itself, including additional resources, properties and links between resources.

It might be helpful to provide the RDFS for the oslc_am elements to define the XML representation more fully as RDF/XML. Alternatively, simple rdf:type and rdf:property assertions could be used and included in the definition of the MAD elements.

For example:

<rdfs:Class rdf:ID="ModelElement"/>

<rdf:Property rdf:about="dc:type">
  <rdfs:domain rdf:resource="oslc_am:ModelElement"/>
  <rdfs:range rdf:resource=""/>
</rdf:Property>

<rdf:Property rdf:about="oslc_am:representedBy">
  <rdfs:domain rdf:resource=""/>
  <rdfs:range rdf:resource="oslc_am:ModelElement"/>
</rdf:Property>

<rdfs:Class rdf:ID="ResourceLink"/> ...

2. Resource Context

All resources are identified by a unique URI. This includes resources that have multiple versions, possibly organized by streams, branches, baselines, configurations, workspaces, etc. A server will always (eventually) access a resource by its unique URI.

However, in practical situations, we find that a number of related resources exist in some common context. In the simplest case for XML, that context is the XML document itself which establishes a context for resolving relative URLs in that document. Namespaces can also contribute to context outside the document in order to disambiguate names.

When versioning is introduced, identity management and resolution become much harder. Usually a set of related elements will be modified together in some identified context - from a workspace to a committed stream to a captured baseline in Jazz, for example. The creator of the stream warrants that the specific versions in the stream are intended to work together, establishing version integrity between related elements. It is often very useful to refer to these related elements without including an absolute URI for the specific version in any given XML document. This makes it easy for a document that represents a resource in the state described by one stream to be switched and viewed in the context of a different stream. To do this, the document needs to identify the context that specifies the particular stream it comes from. This could include server, project, workspace, stream and/or baseline information.

OSLC AM has this concept in the oslc_am:context (I think). But this is associated with the ModelElement? and its representation of a particular resource. Another approach would be to associate this context with the resource as a whole allowing a set of ModelElements? to be addressed in the same context.

Resolution of the URLs in the document can be done on the client or the server. To resolve them on the client, the client would make a GET request after combining the document URL with information in the document context to create the absolute URI for the specific resource version. How this is done would be specified by OSLC-AM.

If the URL is resolved by the server, then the context information could be contained in some header interpreted by the server.

The resource can then switch to a different stream by just changing the context. This could be handled dynamically by the server by including the context information in the returned entity, or in a header for the client to retain for future use. Of course it is also possible to override the context in any link and refer to a specific version in some other context.
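As a sketch, the client-side resolution described above might be combining a relative reference from the document with the document's context to form an absolute, version-specific URI. Everything here is an illustrative assumption - the URL layout, the context keys, and the use of query parameters are not defined by any OSLC specification:

```python
from urllib.parse import urljoin, urlencode

def resolve(document_url, relative_ref, context):
    """Hypothetical sketch: combine a relative reference found in a resource
    document with the document's context (e.g. stream/baseline) to form an
    absolute URI for the specific resource version."""
    # Resolve the relative reference against the document's own URL.
    absolute = urljoin(document_url, relative_ref)
    # Encode the context as query parameters so the server can select the
    # right version. A real protocol might instead use a request header.
    return absolute + "?" + urlencode(context)

uri = resolve(
    "https://example.com/models/bmm/model.xml",   # invented document URL
    "elements/goal-17",                           # invented relative link
    {"stream": "integration", "baseline": "2009-09-03"},
)
# → https://example.com/models/bmm/elements/goal-17?stream=integration&baseline=2009-09-03
```

Switching the same document to a different stream then amounts to passing a different context dictionary, with no change to the links stored in the document.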

3. Extending Resources

The purpose of OSLC-AM is to enable tool integration and extension to provide additional information beyond that provided in the element itself, including additional properties and resources. Initially OSLC-AM might focus on providing C/ALM capabilities on these various model elements, and lifecycle management may require additional links between the elements.

For example, consider a tool and data source (BMM) for capturing business motivation. Elements of this data source could include Influencer, Vision, Goal, Objective, Strategy, Tactic, Policy, Rule, Assessment, Risk, PotentialReward?, etc. There would be relationships between these elements:

  • Goal amplifies Vision
  • Objective quantifies Goal
  • Strategy isAComponentOfThePlanFor Goal
  • Tactic implements Strategy
  • Strategy channelsEffortsTowards Goal
  • etc.

Consider another tool and data source (SOA) for modeling services. Elements of this data source could include Participant, Capability, Service, Request, ServiceInterface?, ServiceContract?, ServiceChannel?, MessageType?, Policy, etc. And there would be relationships between these elements:

  • Participant provides Service
  • Participant consumes Service
  • Service type ServiceInterface?
  • ServiceInterface? definesRoleIn ServiceContract?
  • etc.

We could envision lots of ways these two models might be integrated for various purposes including service identification (using SOMA Goal-Service modeling techniques) or impact analysis:

  • Service realizes Goal
  • Service supports Tactic
  • ServiceInterface? follows BMM:Policy
  • etc.
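To make the integration idea concrete, here is a minimal sketch in which both data sources and the cross-model links are represented uniformly as (subject, predicate, object) triples. All the URIs, element names and predicate names below are invented for illustration; they are not drawn from the BMM or SOA specifications:

```python
# Triples from the business motivation data source (invented examples).
bmm = {
    ("bmm:Goal/expand-markets", "bmm:amplifies", "bmm:Vision/global-reach"),
    ("bmm:Objective/10pct-growth", "bmm:quantifies", "bmm:Goal/expand-markets"),
    ("bmm:Tactic/self-service", "bmm:implements", "bmm:Strategy/online-channel"),
}

# Triples from the service modeling data source (invented examples).
soa = {
    ("soa:Participant/retail", "soa:provides", "soa:Service/order-entry"),
    ("soa:Service/order-entry", "soa:type", "soa:ServiceInterface/ordering"),
}

# Cross-model links use exactly the same representation as in-model links -
# no proxy elements or special link types are needed.
integration = {
    ("soa:Service/order-entry", "int:realizes", "bmm:Goal/expand-markets"),
    ("soa:Service/order-entry", "int:supports", "bmm:Tactic/self-service"),
}

# The merged graph is just the union of the three triple sets.
graph = bmm | soa | integration

# A trivial query over the merged graph: which services realize which goals?
realizations = [(s, o) for (s, p, o) in graph if p == "int:realizes"]
```

The point of the sketch is that a tool only ever deals with one abstraction - triples - whether a link stays inside one data source or crosses between them.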

These extensions could be exploited through extensions to either or both of the existing tools (if there are development plans to provide new features in them), or a new service portfolio planning or impact analysis tool could be created that manages the new relationships (mediator pattern), leaving the existing tools unaware of them.

Another possible scenario is that the new capability integrating these data sources needs to add properties to specific elements in the existing data sources that are required to support the new semantics. For example, the service portfolio tool might need to add an expected revenue impact property to the PotentialReward? element in order to support computation of a service litmus test to determine if a tactic should be supported by a service. The service portfolio tool might also need a new link between elements in the BMM tool. For example, the service litmus test might also need a new relationship between Risk and Strategy to evaluate the risks for a service supporting a tactic that implements a strategy.

So this would seem to imply that we might need a more flexible way to extend elements in data sources with additional properties and links to other data. This will be difficult to do with a ModelElement? that mirrors, represents, or is essentially a proxy for an element in a data source. ModelElement? also duplicates some information from the represented element (name and description), which may result in data redundancy and integrity issues. Rather, RDF could be used directly to add assertions that define new properties for anything. And in RDF there is no difference between properties and links, which may reduce overall tooling requirements.

4. Linking Between Things

The above example seems to indicate that links may need to be created between elements within a data source as well as between elements in different data sources. Ideally the same mechanism would be used for both. We can't predict how a tool/data source chooses to implement links - it could use MOF, foreign keys, ID references, hrefs, etc. However, OSLC-AM can influence the resource representations of those data sources that are to be available in the open Jazz platform. Effective integration will require separation of storage formats, information structure, semantics and views. If these resource representations were ontologies captured in RDF, then links within a resource representation could be the same as links between resource representations. If we did this, then tools would only need to deal with one abstraction of properties and links - RDF triples. Anything could link to, or add properties to, anything; there would be no constraint based on which element in the data source had an associated ModelElement? proxy. An additional benefit would be that these resource representations, and the links between them, could be immediately available on the semantic web, where inference engines and reasoners could be used to provide additional value.

For example, the impact analysis tool described above could use a reasoner to evaluate the impact of change between connected business motivation and service model elements.
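A hedged sketch of such an impact analysis follows, using a naive graph walk over plain triples in place of a real reasoner. All element URIs and predicates are invented for illustration; a production tool would query an RDF store and apply proper inference rules:

```python
from collections import deque

# Invented link triples spanning the BMM and SOA examples above.
graph = {
    ("bmm:Tactic/self-service", "bmm:implements", "bmm:Strategy/online-channel"),
    ("bmm:Strategy/online-channel", "bmm:channelsEffortsTowards", "bmm:Goal/expand-markets"),
    ("soa:Service/order-entry", "int:supports", "bmm:Tactic/self-service"),
}

def impacted_by(changed, triples):
    """Return every element that transitively links *to* `changed`,
    i.e. everything potentially affected if `changed` is modified."""
    affected, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for s, _p, o in triples:
            # Follow each link backwards: the subject depends on the object.
            if o == node and s not in affected:
                affected.add(s)
                queue.append(s)
    return affected

# Changing the goal ripples back through strategy and tactic to the service.
affected = impacted_by("bmm:Goal/expand-markets", graph)
```

Because cross-model links are ordinary triples, the walk crosses from the business motivation model into the service model without any special handling.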

This might also eliminate the need for link types and properties on links. Instead, RDF could be used to assert whatever information is needed, including what is a link (rdf:Property), assertions about properties, and even assertions about assertions using reification.
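For example, reifying the assertion "Service realizes Goal" might look like the following sketch. It uses the standard rdf:Statement reification vocabulary, but the element URIs and the dc:creator/dc:date annotations are invented for illustration:

```python
# The statement being reified becomes a resource in its own right,
# here named by a blank-node-style identifier.
stmt = "_:stmt1"

triples = {
    # The reified statement: "order-entry realizes expand-markets".
    (stmt, "rdf:type", "rdf:Statement"),
    (stmt, "rdf:subject", "soa:Service/order-entry"),
    (stmt, "rdf:predicate", "int:realizes"),
    (stmt, "rdf:object", "bmm:Goal/expand-markets"),
    # Assertions about the assertion: who recorded this link, and when.
    (stmt, "dc:creator", "JimAmsden"),
    (stmt, "dc:date", "2009-09-03"),
}
```

In this style, metadata that AMRD would model as properties on a typed link becomes ordinary triples about the reified statement, so no separate link construct is required.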

Topic revision: r1 - 03 Sep 2009 - 19:06:38 - JimAmsden