The original promise of the Object Management Group’s Model-Driven Architecture (OMG MDA) initiative, and its realization in Model-Driven Development (MDD) methods and tools, was to separate conceptual business design from platform-independent logical solution design from platform-specific solution implementation, and to use model-to-model and model-to-code transforms to derive one from the other automatically. The goal was to raise the level of abstraction “programmers” use to analyze, design, and construct solutions so that those solutions could be delivered faster and more reliably, by less-skilled developers, while following architecture decisions and guiding principles to close the business-IT gap.
There has been tremendous progress in the development of methods and tools that support MDD. However, the results have perhaps fallen a bit short of expectations. A number of forces challenged the realization of this vision. Summarizing the key ones briefly:
- Standards bodies are sometimes hard places to innovate effectively and efficiently.
- UML2, SysML, SPEM, BPMN, SoaML, MOF and other related standards that form the foundation of OMG’s Model Driven Architecture became quite complex in their own right, creating challenges for tool vendors and users.
- At the same time, the emergence of higher-level languages like Java, C#, and Ruby, the shift to Web and mobile development with maturing APIs like AWS, iOS, and Android, and the introduction of highly productive integrated development environments like Eclipse, Visual Studio, and Xcode made traditional programming easier and more productive, cutting into the MDA value proposition.
- Round-trip engineering, required to keep models and code in sync, proved more difficult to support with tools, to use in practice, and to manage in design and development projects than we had hoped.
However, these are relatively minor contributing factors. Perhaps the primary issue was the belief that models could be sufficiently detailed, and yet efficiently developed, that they would essentially replace code and easily address operating system, API, and platform variability through automated transforms.
As a practical matter, this perhaps hasn’t worked out as envisioned. The biggest issue is that achieving the full MDA vision as practiced tended to commingle analysis, design, and implementation, attempting to use the same model to perform all of these functions aided by various automated transforms. But this coupling is exactly what analysis, design, and implementation practices try to avoid. The hallmark of software engineering is separation of concerns: addressing commonality and variability for reuse, leveraging refactoring as a means of improving asset vitality, and managing change.
My views on MDD have evolved, and are continuing to evolve, along with recommended approaches to MDA. On the one hand, IBM has offerings like IBM Business Process Designer, which allows business analysts to develop BPMN models that can be executed directly by IBM Business Process Manager – no transforms are required at all. On the other hand, IBM provides many development capabilities using various programming languages with no modeling at all, including COBOL, C++, Java, Enterprise Generation Language, and others. In the middle are tools like IBM Integration Designer, which present a higher-level set of views and editing tools for visual Web Services development using XSD, WSDL, SCA, BPEL, and other W3C XML specifications. How do we reconcile all these different approaches? Clearly there’s no one-size-fits-all solution. Rather, the context of the particular problem domain, existing assets, team organization and skills, existing methods and tools, and so on will have a big impact on the role models play in this continuum. However, there may be some practical guidelines for MDD that lead to more effective outcomes.
What I’m coming to realize is that approaches to MDA should be incorporated into a more holistic approach to Solution Delivery and Lifecycle Management (SDLC), which typically addresses the following facets:
Oddly enough, these facets are the subject of this blog! Here are a few guidelines for getting the most out of MDD in the context of full SDLC activities and work products.
Analysis and design should focus on design concerns, not implementation. Those concerns generally address the solution architecture, as an instantiation of enterprise architecture building blocks used in a particular context to address project-specific requirements. Analysis and design models help inform project planning, guide development for and with reuse, enable effective change and impact analysis, support needs-driven test planning, and bridge between business requirements and delivered solutions. Analysis and design models also provide developers with the information they need to guide their work effectively and efficiently.
Design models should inform, but not be, the implementations. Models detailed enough to support transforms to executable code are not only tedious and expensive to develop (programming in pictures can be hard) but also unwieldy for their intended purpose. They become so complex and detailed in their own right that they are no longer useful for effective planning, change assessment, and impact analysis. And developers are generally more productive using text-based programming languages in modern IDEs. They don’t need models to be the implementation. They need the models to be sufficiently high level and comprehensible that they can inform the implementations, ensuring that design decisions are followed and providing a means of communicating implementation constraints and discoveries back to the analysts and designers in order to improve the designs.
Above all, design models should be seen as a means of mediating between the what (requirements) and the how (implementation). They do this by providing an effective means of capturing, communicating, validating, reasoning about, and acting on shared information about complex systems, helping close the gap between requirements and delivered solution. At the same time, the design models provide the foundation for change and impact analysis and for project lifecycle management and governance. Tools like:
- Rational DOORS-NG for requirements management
- Rational Team Concert for change and configuration management
- Rational Quality Manager, and
- Rational Design Manager
provide an integrated set of capabilities leveraging OSLC and the Jazz platform common services to effectively use design models to link complex and rapidly changing artifacts for more effective lifecycle management. Together these tools help you do real-time planning, informed by lifecycle traceability, through in-context collaboration to support continuous process improvement. Design models provide the context in which to understand the links between all these artifacts and the implications of change in them.
Avoid round-trip engineering and design/implementation synchronization problems by avoiding the coupling in the first place. Try to keep the models at a relatively high level so that they clearly address the business problem and identify the cohesion and coupling pain points that impact all projects to at least some extent. At the same time, try to strike a balance between design and implementation concerns by providing implementation guidance in the model documentation rather than in the model itself. Developers will be able to complete these implementations more easily in programming IDEs than the analyst can using UML. Taking this approach, the design models and implementation code are linked, but not tightly coupled. Round-trip engineering becomes unnecessary, since the design and implementation aren’t just different copies or representations of the same thing. Rather, they address different but related and richly linked concerns.
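To make this concrete, here is a minimal sketch of what “guidance in the model documentation rather than in the model itself” might look like once the design surfaces in code. The `OrderValidator` interface and its rules are hypothetical examples, not from any IBM product: the model contributes only the contract and the Javadoc guidance, while the body is written by hand in the IDE.

```java
/**
 * Contract derived from a (hypothetical) analysis model "Order Management".
 *
 * Design guidance carried in the model documentation, not the model itself:
 * - Validation must be stateless so it can be called from any channel.
 * - Totals that are not strictly positive must be rejected; currency
 *   checks are handled upstream and are out of scope here.
 */
interface OrderValidator {
    boolean isValid(double total, int itemCount);
}

// Hand-written in the IDE, guided by (but not generated from) the model.
class DefaultOrderValidator implements OrderValidator {
    @Override
    public boolean isValid(double total, int itemCount) {
        // Implements the documented guidance; the model never sees this body.
        return total > 0 && itemCount > 0;
    }
}
```

The design and the implementation stay linked through the shared contract and its documentation, but neither is a copy of the other, so there is nothing to round-trip.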
Use design models and MDD to create the implementation scaffold. The models and MDD can still be used to generate the overall solution architecture, speeding up development and providing developers with a starting point that is directly derived from the design through an automated transform. But they don’t need to be so detailed that they generate the details of the implementation that are better managed with other development tools. The code generated from the models should be kept separate from the code developed by hand. There are various techniques for doing this, including the adapter, facade, or mediator patterns, or subclassing. Avoid using @generated markers to separate generated from non-generated code in the same resources; this can be difficult to maintain. Keeping the enterprise architecture, analysis, design, and implementation in sync then becomes part of an overall approach to information linking, management, and governance, informed by the requirements to be fulfilled and the validating test cases.
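The subclassing technique above can be sketched as follows. This is an illustrative example with invented names, not output from any particular generator: the generated base class lives in its own source file that regeneration may overwrite freely, while the hand-written subclass lives in a separate file the transform never touches.

```java
// CustomerServiceBase.java -- GENERATED scaffold from the design model.
// Never edited by hand; regenerating the model overwrites this file only.
abstract class CustomerServiceBase {
    // The model defines the operation signature; the body is left to developers.
    public abstract String lookupStatus(String customerId);

    // Generated plumbing the hand-written code can reuse.
    protected String normalizeId(String customerId) {
        return customerId.trim().toUpperCase();
    }
}

// CustomerService.java -- HAND-WRITTEN, in a separate source file,
// so regenerating the scaffold never clobbers it.
class CustomerService extends CustomerServiceBase {
    @Override
    public String lookupStatus(String customerId) {
        // Implementation detail the model never needs to capture.
        String id = normalizeId(customerId);
        return id.startsWith("VIP") ? "priority" : "standard";
    }
}
```

Because the boundary between generated and hand-written code is a file boundary rather than an @generated comment inside one file, there is no merge step to maintain when the model changes.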
This approach may be a reasonable compromise, getting the most out of modeling for planning and change management while appropriately leveraging modern programming tools. It keeps the models high level enough so they are still useful for planning, impact analysis, and informing architectural decisions.
The value of the models comes more from their ability to close gaps between business plans, project requirements, solution architecture, and the delivered results than from any improvement in programmer productivity. And the models are much more useful for guiding the refactoring required to propagate design changes into existing implementations, and for harvesting implementation decisions and discoveries back into the designs.
I hope this compromise provides some ideas on leveraging MDD to maximize the value of the models while still providing the information developers need to do their work.