OSLC enables a RELM of possibilities

At the beginning of September IBM Rational announced Rational Engineering Lifecycle Manager (RELM) with a post on the Invisible Thread developerWorks blog. Meg Self, VP Complex & Embedded Systems for IBM Rational, explains RELM in the blog post as follows:

RELM can help engineering teams to visualize, analyze and organize engineering data and their relationships … It is designed to help engineering teams make effective and timely decisions, improve reuse of engineering data and maintain compliance with standards. RELM builds a near real-time index of the data and relationships from source tools … RELM delivers cross-domain views, impact analysis and the ability to group the data into product & system structures to support search and queries. RELM indexes linked lifecycle data that has been created to OSLC (Open Services for Lifecycle Collaboration) specifications.

(Emphasis added.)

“RELM indexes linked lifecycle data that has been created to OSLC specifications” … that seemed like something I should learn more about, and something that would be interesting for others in the OSLC community too. A few well-placed questions (if I do say so myself) led me right back to the OSLC community and Nick Crossley. With the stars aligned, or at least the scheduling daemons dealt with, I sat down with Nick for a virtual cup of Tea, Earl Grey: Hot, and had a chat about RELM and OSLC.


Sean Kennedy (Me): In keeping with social norms, my first order of business, as we enjoyed our tea, was to learn a bit more about you. So, who is Nick Crossley?

Nick Crossley (NC): My background is in tools and systems programming, and I was part of Telelogic when it was acquired by IBM. I am the architect for the Rational Change and Rational Synergy products, as well as RELM. With these responsibilities I have spent quite a bit of time thinking about, and working on, Linked Data, version and configuration management, product data management, and, of course, OSLC. I’ve been involved in the OSLC Core workgroup for several years, and I lead the OSLC Configuration Management workgroup. My experience with specifications and standards is not limited to OSLC: “back in the day” I contributed to SPARC-based standards, more recently I contributed to a couple of JSRs (JSR 147 (WVCM) and JSR 203 (New New IO)), and on a weekly basis I participate in many cross-Rational internal architecture teams.

Me: Now that I know that there is nothing I could ask you about RELM and OSLC that you cannot answer, let me jump right to the point: What does “RELM indexes linked lifecycle data that has been created to OSLC specifications” mean? Is RELM an OSLC consumer or provider of any specification?

NC: Yes, and no … RELM doesn’t, strictly speaking, run on OSLC. The main purpose of RELM is to help companies visualize, analyze, and organize their data from across many tools. To do this we have created something we call the Lifecycle Query Engine (LQE), which builds an index across many tools and types of data; that index is what makes it possible for RELM to achieve its main purpose. Now, LQE isn’t based on an OSLC specification, but it became achievable technology because it could access data from so many different tools using OSLC.

Me: I see, so RELM is a consumer of OSLC data.

NC: Well, yes, most of the data in the index is defined by OSLC (it exposes what OSLC describes), but it is also more than that, so no. Sometimes tools have additional data whose use cases are not yet handled by OSLC; in those cases the tools publish a superset of OSLC, and that is what RELM consumes. Even this additional data is formatted in the spirit of OSLC: the way OSLC describes and defines Linked Data is carried throughout all the IBM Rational tools.

Me: I’m starting to get your original “yes and no” answer … RELM uses OSLC where available, and extends it to cover scenarios that OSLC has not (yet) addressed.

NC: Exactly. OSLC Core is followed by all tools, even if the data is not part of a defined data domain. This makes the UI preview for rich hover “just work”, for example. It doesn’t matter where the data comes from, you can hover over the link in RELM and see the details from the tool. And, of course, you can click on that link to see the full resource in the source tool. By following the principles of Linked Data and OSLC Core, we’re able to support queries on disparate data without the effort of creating a grand unifying schema, as would be needed for a relational database.
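The rich-hover behavior Nick describes comes from the OSLC Core UI Preview mechanism, in which a tool returns a small “compact” representation of a resource on request. As a rough illustration (not taken from RELM; the resource URIs and values below are invented for this sketch), such a compact representation might look like:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:oslc="http://open-services.net/ns/core#">
  <!-- Compact representation of a (hypothetical) requirement resource -->
  <oslc:Compact rdf:about="http://example.com/requirements/42">
    <dcterms:title>Requirement 42: Braking distance</dcterms:title>
    <oslc:shortTitle>Req 42</oslc:shortTitle>
    <!-- The preview document is what the consumer shows in the hover -->
    <oslc:smallPreview>
      <oslc:Preview>
        <oslc:document rdf:resource="http://example.com/requirements/42/preview"/>
        <oslc:hintWidth>400px</oslc:hintWidth>
        <oslc:hintHeight>200px</oslc:hintHeight>
      </oslc:Preview>
    </oslc:smallPreview>
  </oslc:Compact>
</rdf:RDF>
```

Because every tool follows the same Core convention, a consumer such as RELM can render the hover without knowing anything domain-specific about the resource behind the link.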

Me: So LQE doesn’t just suck data into a relational database and then work with it there?

NC: No, RELM collects Linked Data from the various tools and keeps it, and works with it, in that form. Actually, it would be extremely difficult, if not impossible, to do what RELM does without the resource shapes and vocabularies defined by the various OSLC specifications. The resource shapes and vocabularies themselves are stored in the index and can be queried, which allows us to handle whatever data the tools provide without knowing the details of its structure in advance. A static schema is not assumed, or even necessary: we can construct queries and reports dynamically based on the OSLC resource shapes and vocabularies the tool has associated with its data.
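The idea of querying data and its vocabulary the same way, with no fixed schema, can be sketched in a few lines. This is a minimal illustration of triple-pattern matching over Linked Data, not RELM’s actual implementation; the resource names and predicates are invented for the example.

```python
# Minimal sketch of schema-free querying over Linked Data:
# everything, including vocabulary terms, is just triples,
# and queries are patterns matched against them dynamically.

RDF_TYPE = "rdf:type"

def match(triples, s=None, p=None, o=None):
    """Return all triples matching a (subject, predicate, object)
    pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A tiny index mixing data from two hypothetical tools, plus a
# vocabulary statement -- shapes and vocabularies are just more
# triples, so they can be queried the same way as the data.
index = [
    ("req:1",  RDF_TYPE,        "oslc_rm:Requirement"),
    ("req:1",  "dcterms:title", "Braking distance"),
    ("test:7", RDF_TYPE,        "oslc_qm:TestCase"),
    ("test:7", "oslc_qm:validatesRequirement", "req:1"),
    ("oslc_rm:Requirement", RDF_TYPE, "rdfs:Class"),
]

# Impact-analysis-style query: which resources link to req:1?
linked = match(index, o="req:1")

# Discover what types exist in the index without any static schema:
types = {t[2] for t in match(index, p=RDF_TYPE)}
```

A real engine would of course use an RDF store and a query language such as SPARQL, but the point stands: because the structure is carried in the data itself, new kinds of resources can be indexed and queried without changing the engine.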

Me: This is quite exciting! You’re starting with a base of OSLC and Linked Data and doing things that could not be done before. With RELM you’re really pushing the envelope.

NC: With RELM we are trying to tackle a common problem that remains generally unsolved. As Meg put in her announcement blog, RELM delivers “cross-domain views, impact analysis and the ability to group the data into product & system structures to support search and queries”.

Me: Part of me wishes that I were working with you on this exciting technology! Another part of me wonders about how other standardization(-like) efforts have suffered setbacks when large companies have employed an “embrace, extend, extinguish” strategy: Why shouldn’t the OSLC community worry about this?

NC: Extending OSLC is something the OSLC community has embraced since the beginning. Anyone can take an OSLC specification and build something else with it outside the OSLC community. Implementers can add, and have added, their own extensions to specifications – just as the specifications themselves allow: they are explicitly worded to permit extension – we embrace the ‘Open’ part of our name! Indeed, OSLC’s incremental and extensible approach is one of the reasons it is successful: we agree on increments of common functionality fairly quickly, implementers can pick those up and add whatever else they need, and that implementation work, especially the extensions, then helps drive the next increment of specification. There is a virtuous cycle between specifications and implementations, created by the scenario-based, incremental approach of OSLC specification development. When I consider the challenges faced by potential implementers of JSR 147 (WVCM), the biggest one is that the specification tried to cover everything, so implementers need to do a lot of work to cover scenarios that are not important to them.

Me: I can certainly understand the difficult decisions facing potential implementers of a 100% specification in an 80/20 world. Now, this virtuous cycle you mention, it would certainly be accelerated if the implementer brought his experiences and insights to the community and participated in the creation of “the next increment of specification”. Which workgroups and specifications do you think will be influenced by RELM?

NC: RELM includes the ability to manage product lines, and the ALM-PLM Interoperability workgroup, led by Rainer Ersch of Siemens, is drafting vocabularies for defining product data. It would be very good to see convergence between what RELM has done and what the ALM-PLM workgroup is producing. We don’t assume that RELM 1.0 got everything right, and we think future releases will benefit from the work done in the OSLC community. Another workgroup (incidentally of interest to many of the ALM-PLM workgroup members) that will be influenced by RELM is the Configuration Management workgroup that I lead. RELM uses a new service called VVC (Versions, Variants, and Configurations) that IBM designed as a configuration management system for linked lifecycle data, and we are currently working to extend VVC to provide Global Configurations that work across multiple providers (e.g. RELM, RTC, CC, svn, git, DOORS, RHAPSODY, RQM, Bugzilla) and allow you to create baselines across them all. I think VVC and these new Global Configurations are things the workgroup can look to, learn from, and improve upon. Finally, as I’ve mentioned before, we’ve followed OSLC Core even when the shape of the data is not defined for the domain. Some of our experience here could lead to improvements in OSLC Core too.

Me: Wow. It will be interesting to see how these workgroups assimilate what has been done in RELM. Well Nick, I am so glad that I’ve been able to have some time with you to discuss RELM and OSLC, where should people go to learn more about RELM?

NC: You can certainly start on the product page. There is also a recent developerWorks article that takes you from the concept of Linked Data through to its application in RELM to help systems engineers get more from their data. And if you are interested in shaping some of this work yourself, consider joining the OSLC Configuration Management workgroup – I would welcome new members!

Me: Thanks Nick! I really appreciate you taking the time to explain RELM; how it has used, and been inspired by OSLC; and how the work done in RELM may influence the future of OSLC specifications.


There you have it, folks! I hope you found reading this interview as rewarding as I found doing it. If you think there are questions I missed asking Nick, want more details on something Nick said, or just want to comment on this article, follow the link below to the forum thread for discussing this post.