Effective Use Of Model-Driven Development

The original promise of the Object Management Group’s Model-Driven Architecture (OMG MDA) initiative, and its realization in Model-Driven Development (MDD) methods and tools, was to separate conceptual business design from platform-independent logical solution design from platform-specific solution implementation, and to use model-to-model and model-to-code transforms to derive one from the other automatically. The goal was to raise the level of abstraction “programmers” use to analyze, design, and construct solutions so that solutions could be delivered faster, more reliably, and by less-skilled developers, while following architecture decisions and guiding principles to close the business-IT gap.

There has been tremendous progress in the development of methods and tools that support MDD. However, the results have perhaps fallen somewhat short of expectations. A number of forces challenged the realization of this vision. Summarizing the key ones briefly:

  1. Standards bodies are sometimes hard places to innovate effectively and efficiently.
  2. UML2, SysML, SPEM, BPMN, SoaML, MOF, and other related standards that form the foundation of OMG’s Model-Driven Architecture became quite complex in their own right, creating challenges for tool vendors and users.
  3. At the same time, the emergence of higher-level languages like Java, C#, and Ruby, the shift to Web- and mobile-based development with maturing APIs such as AWS, iOS, and Android, and the introduction of highly productive integrated development environments like Eclipse, Visual Studio, and Xcode made traditional programming easier and more productive, cutting into the MDA value proposition.
  4. The round-trip engineering required to keep models and code in sync proved more difficult to support with tools, harder to use in practice, and more difficult to manage in design and development projects than we had hoped.

However, these are relatively minor contributing factors. Perhaps the primary issue was the belief that models could be made sufficiently detailed, and developed efficiently enough, that they would essentially replace code and easily address operating system, API, and platform variability through automated transforms.

As a practical matter, this hasn’t worked out as envisioned. The biggest issue is that achieving the full MDA vision as practiced tended to commingle analysis, design, and implementation, attempting to use the same model to perform all these functions aided by various automated transforms. But this coupling is exactly what analysis, design, and implementation practices are trying to avoid. The hallmarks of software engineering are separation of concerns, addressing commonality and variability to support reuse, leveraging refactoring as a means of improving asset vitality, and managing change.

My views on MDD have evolved, and are continuing to evolve, in the approaches I recommend for MDA. On one hand, IBM has offerings like IBM Business Process Designer, which allows business analysts to develop BPMN models that can be executed directly by IBM Business Process Manager; no transforms are required at all. On the other hand, IBM provides many development capabilities using various programming languages with no modeling at all, including COBOL, C++, Java, Enterprise Generation Language, etc. In the middle are tools like IBM Integration Designer, which present a higher-level set of views and editing tools for visual Web Services development using XSD, WSDL, SCA, BPEL, and other W3C XML specifications. How do we reconcile all these different approaches? Clearly there is no one-size-fits-all solution. Rather, the context of the particular problem domain, existing assets, team organization and skills, existing methods and tools, etc. will have a big impact on the role models play in this continuum. However, there may be some practical guidelines for MDD that lead to more effective outcomes.

What I’m coming to realize is that approaches to MDA should be incorporated into a more holistic approach to Solution Delivery and Lifecycle Management (SDLC), which typically addresses the following facets:

[Figure: facets of Solution Delivery and Lifecycle Management]

Oddly enough – these facets are the subject of this blog! Here are a few guidelines for getting the most out of MDD in the context of full SDLC activities and work products.

Analysis and design should focus on design concerns, not implementation. Those concerns generally address the solution architecture as an instantiation of enterprise architecture building blocks used in a particular context to address project-specific requirements. Analysis and design models help inform project planning, guide development for and with reuse, enable effective change and impact analysis, support needs-driven test planning, and bridge between business requirements and delivered solutions. They also provide developers with the information they need to guide their work effectively and efficiently.

Design models should inform, but not be, the implementations. This is because models that are sufficiently detailed to support transforms to executable code are often not only tedious and expensive to develop (programming in pictures can be hard) but also unwieldy for their intended purpose. They become so complex and detailed in their own right that they are no longer useful for effective planning, change assessment, and impact analysis. And developers are generally more productive using text-based programming languages in modern IDEs. They don’t need models to be the implementation. They need the models to be sufficiently high level and comprehensible that they can inform the implementations, ensuring that design decisions are followed and providing a means of communicating implementation constraints and discoveries back to the analysts and designers in order to improve the designs.

Above all, design models should be seen as a means of mediating between the what (requirements) and the how (implementation). They do this by providing an effective means of capturing, communicating, validating, reasoning about, and acting on shared information about complex systems, helping close the gap between requirements and delivered solution. At the same time, design models provide the foundation for change and impact analysis and for project lifecycle management and governance. Tools like:

  • Rational DOORS-NG for requirements management
  • Rational Team Concert for change and configuration management
  • Rational Quality Manager for quality and test management, and
  • Rational Design Manager for design management

provide an integrated set of capabilities, leveraging OSLC and the Jazz platform common services, that use design models to link complex and rapidly changing artifacts for more effective lifecycle management. Together these tools help you do real-time planning, informed by lifecycle traceability, through in-context collaboration to support continuous process improvement. Design models provide the context in which to understand the links between all these artifacts and the implications of changes to them.

Avoid round-trip engineering and design/implementation synchronization problems by avoiding the coupling in the first place. Try to keep the models at a relatively high level so that they clearly address the business problem and identify the cohesion and coupling pain points that impact all projects to at least some extent. At the same time, try to strike a balance between design and implementation concerns by providing implementation guidance in the model documentation rather than in the model itself. Developers will be able to complete these implementations more easily in programming IDEs than analysts can using UML. Taking this approach, the design models and implementation code are linked, but not highly coupled. Round-trip engineering is unnecessary because the design and implementation aren’t just different copies or representations of the same thing; rather, they address different but related and richly linked concerns.

Use design models and MDD to create the implementation scaffold. The models and MDD can still be used to generate the overall solution architecture to speed up development and provide developers with a starting point that is directly derived from the design through an automated transform. But they don’t need to be so detailed that they generate the details of the implementation that are better managed with other development tools. The code generated from the models should be kept separate from the code developed by hand; techniques for doing this include the adapter, facade, or mediator patterns, or subclassing, as in the sketch below. Avoid using @generated markers to separate generated from non-generated code in the same resources, as this can be difficult to maintain. Keeping the enterprise architecture, analysis, design, and implementation in sync then becomes part of an overall approach to information linking, management, and governance, informed by the requirements to be fulfilled and the validating test cases.
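For example, with the subclassing approach the transform emits a base class that can be regenerated at any time, and developers complete the behavior in a separate hand-written subclass that the generator never touches. Here is a minimal sketch in Java, assuming a hypothetical OrderService derived from a design model (the names and the src-gen/src split are illustrative, not the output of any particular tool):

```java
// --- src-gen/OrderServiceBase.java: generated from the design model. ---
// Regenerated on every transform run and never edited by hand.
public abstract class OrderServiceBase {

    // The operation signature and its precondition come from the design model.
    public final String placeOrder(String orderId, int quantity) {
        if (quantity <= 0) {                      // invariant captured in the model
            throw new IllegalArgumentException("quantity must be positive");
        }
        return doPlaceOrder(orderId, quantity);   // behavior supplied by developers
    }

    // Extension point for the hand-written implementation.
    protected abstract String doPlaceOrder(String orderId, int quantity);
}

// --- src/OrderService.java: hand-written and kept under normal code review. ---
// Linked to the design, but never overwritten by the generator.
public class OrderService extends OrderServiceBase {
    @Override
    protected String doPlaceOrder(String orderId, int quantity) {
        // Implementation detail that never appears in the design model.
        return "receipt-" + orderId + "-" + quantity;
    }
}
```

Because the generated and hand-written code live in separate resources, the scaffold can be regenerated whenever the design changes without clobbering developer work.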

This approach may be a reasonable compromise, getting the most out of modeling for planning and change management while appropriately leveraging modern programming tools. It keeps the models high level enough that they are still useful for planning, impact analysis, and informing architectural decisions.

The value of the models comes more from their ability to close gaps between business plans, project requirements, solution architecture, and the delivered results than they do as a means of improving programmer productivity. And the models are much more useful for guiding the refactoring required to propagate design changes into existing implementations, and for harvesting implementation decisions and discoveries back into the designs.

I hope this compromise provides some ideas on leveraging MDD to maximize the value of the models while still providing the information developers need to do their work.


2 Responses to Effective Use Of Model-Driven Development

  1. jimamsden says:

    Bran Selic responded to this post through private emails, but graciously agreed that I could summarize his comments here. I really appreciate Bran’s insight and the mentoring he has given me over the years on MDD. That said, Bran disagreed with some of the observations in this post and I hope the summary that follows accurately reflects his view.

    1. Use separate models for different purposes

    The idea that a single model can serve architecture, analysis, design and implementation (understanding, design intent, implementation) was discovered to be impractical years ago and is not common MDD practice, especially for heterogeneous systems.

    2. Formally link different models through transforms and traceability links

    Different models are required for different purposes, but they should be formally linked through rich semantics rather than kept as separate models that are either generated one from another (CIM, PIM) or completely decoupled. The connection can be made through automated transforms or manually created traceability links.

    3. Treat MDD as a compiler with no editing of derived artifacts

    Round-trip engineering was a terrible idea and should never have been done. Rather, MDD should treat models just as compilers treat source code: the derived artifacts should not require further editing (although template contents might need to be provided). Either the model should be sufficiently specified to provide complete implementations (of part of the system), or manually created code should be integrated with generated code through subclassing, adapters, or mediators (see the adapter sketch at the end of this comment).

    4. Design is for humans, programming for machines

    Bran strongly disagreed with my characterization of “code-based” languages such as Java or C# as “high-level.” He claims that the difference in quality and productivity between a Java statement and a Fortran IV statement is negligible. These languages are essentially machine oriented; that is, they are based on the core premise that the human has to adapt to how the machine works. In his view this is the fundamental difference between programming and modeling languages: the latter are inherently for human consumption, and the problem of transforming them into something machines understand can be solved through automation (an oft-proven claim).

    5. Leverage UML profiles to create expressive domain-specific languages while retaining the ability to reuse existing libraries and frameworks

    Additional information and insight is available from this presentation by Bran, A Systematic Approach to Domain-Specific Language Design Using UML (https://cs.uwaterloo.ca/~jmatlee/teaching/846/Schedule/Mar19/Henry.pdf). He summarizes the challenges with DSLs and programming languages as:

    * Increasing knowledge of and demand for IT applications brings greater domain specialization
    * DSLs are specialized languages for specific domains, as opposed to attempts to capture domain concepts in more general-purpose programming languages
    * This supports more direct expression of problem-domain patterns and ideas
    * Nevertheless, the vast majority of current software is written in general-purpose languages
    * This is in part due to:
      * lack of good tool support for DSL development and use (tools are not well designed or engineered, and often lack valuable content)
      * high cost and low return, since the limited number of users curbs interest in developing DSLs
      * general-purpose languages having a readily trained user base
      * DSLs often lacking reusable content
      * the prevalence of open-source software
      * the difficulty of extending DSLs or integrating them across different domains
      * domain-specific APIs providing similar abstractions while leveraging common programming languages
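    To make point 3 above concrete, one way manually written code can be integrated with generated code without editing the generated artifacts is a hand-written adapter that wraps the generated class behind the interface the rest of the application uses. A minimal sketch in Java follows; the names (PaymentGateway, GeneratedPaymentClient) are hypothetical and not the output of any particular tool:

    ```java
    // --- PaymentGateway.java: hand-written interface the application programs against. ---
    public interface PaymentGateway {
        boolean charge(String accountId, long amountCents);
    }

    // --- GeneratedPaymentClient.java: generated from the model (e.g. a service contract); ---
    // regenerated as needed and never edited by hand.
    public class GeneratedPaymentClient {
        public int submitCharge(String account, long cents) {
            // ... generated transport and marshalling code would go here ...
            return 0; // 0 means success in this hypothetical generated API
        }
    }

    // --- PaymentGatewayAdapter.java: hand-written adapter integrating the generated artifact ---
    // without modifying it.
    public class PaymentGatewayAdapter implements PaymentGateway {
        private final GeneratedPaymentClient client = new GeneratedPaymentClient();

        @Override
        public boolean charge(String accountId, long amountCents) {
            // Translate between the application's interface and the generated API.
            return client.submitCharge(accountId, amountCents) == 0;
        }
    }
    ```

    When the model changes and the client is regenerated, only the adapter needs to be revisited; the rest of the application is insulated from the generated code.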

  2. jimamsden says:

    I re-read the post and made a few small updates based on Bran’s comments. I substantially agree with his observations and didn’t find them at odds with the main points of the post; rather, they are additional guidance that I hadn’t included. Rather than rework the post further, I thought it best to let readers see Bran’s view directly from his comments above.
