I recently worked on an engagement at a large bank in Australia to develop a new solution using SOA principles and Rational Software Architect, targeted for deployment to WebSphere Process Server. In the process of developing the engagement report, I thought I would provide a brief summary of some SOA guiding principles to use during the engagement, and any additional principles that might facilitate future development. To create this summary, I tried doing a Web search on “SOA Guiding Principles”. I got a lot of hits. But to my surprise, most of them were advice on how to sell SOA, transition to SOA, or govern SOA – but not that much on best practices for actually creating SOA models or solutions. When I searched for more specific information about WSDL and SCA best practices, I got lots of information on the standards themselves, but not much on best practices for using them, or many success stories of building solutions with SCA. Perhaps this sheds some light on the utility, popularity, or usability of these technologies or approaches.
I do believe that the underlying principles of SOA are sound and applicable to modern application development. They are mostly based on Object-Oriented Technology and Component-Based Development, extending them to abstract and encapsulate the relationships between components in order to better manage coupling in complex systems. But somehow these SOA concepts seem to be lost on many people who are still taking a more traditional functional decomposition view of their problem domain. This could be a result of the legacy systems they have to deal with, or the somewhat process-centric views of SOA that come from BPM Suites vendors and methods such as SOMA that tend to view services as a means of implementing activities used in business processes.
I thought I would capture some SOA guiding principles as a useful topic of discussion for Solution Architects, especially those who deal with enterprise and solution architecture. Here’s the list, without much explanation, as topics for future discussion. Let’s use the comments section to refine these guiding principles, understand their implications, and add some more in order to help our clients be more successful using SOA to deliver their solutions.
1. Start analysis with business value or supply chain
Starting the analysis with the business value or supply chain helps ensure the participants and services identified have significant business value. These services can then be managed in a portfolio, assessing their cost, risk, time to value, adherence to enterprise architecture guiding principles, use of a consolidated platform, etc. to help determine which services should be developed first.
2. End analysis with implementing business processes
These processes implement the participant services, often by using services provided by others. The architecture is specified by the participants and their interactions, not the decomposition of the processes.
Notice that 1 and 2 are inverted from the SOA Reference Architecture:
SOA principles should be applied to all aspects of the enterprise and solution architecture – including the business architecture in order to facilitate separation of concerns and manage cohesion and coupling.
3. Focus architecture on discovering the participants who collaborate to get work done
The participants abstract the consumers and providers of the services. They encapsulate the state and behavior of the consumer’s goals, needs and expectations separated from the provider’s value propositions, capabilities and commitments. SOA is about encapsulating the relationships between the consumers and providers in order to match the consumer needs with the provider capabilities.
4. SOA is more than RPC
A service is work done by one for another. A remote procedure call is a means of supporting late-bound, distributed functional calls, which can describe only part of that work.
SOA is about abstracting and encapsulating the relationships between participants/components in order to provide more effective management of coupling in complex systems. Organizing operations in an interface is only part of this abstraction. There can be multiple interfaces involved in a service, which can involve a long running protocol that describes the agreement for how the participants interact. Protocols describe the sequence of operation calls, not just the calls themselves, or the message exchange patterns.
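As a minimal sketch of this point (all names here are illustrative, not from any real system), a service can span more than one interface, and its contract can constrain the order of operation calls – a protocol, not just a set of remote procedures:

```java
// Hypothetical ordering service: the contract spans two interfaces and a
// protocol (requestQuote must precede confirmOrder), not just a single call.
interface OrderTaker {                    // provider-side interface
    String requestQuote(String item);     // step 1 of the protocol
    boolean confirmOrder(String quoteId); // step 2, valid only after step 1
}

interface OrderCallbacks {                // consumer-side interface in the same service
    void orderShipped(String quoteId);    // provider calls back when work completes
}

// A trivial provider that enforces the call sequence.
class SimpleOrderTaker implements OrderTaker {
    private final java.util.Set<String> quotes = new java.util.HashSet<>();
    public String requestQuote(String item) {
        String id = "Q-" + item;
        quotes.add(id);                   // remember the outstanding quote
        return id;
    }
    public boolean confirmOrder(String quoteId) {
        return quotes.remove(quoteId);    // fails if the protocol was violated
    }
}
```

The point of the sketch is that the agreement includes the sequencing (a confirm without a prior quote is invalid), which no single RPC signature can capture.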
5. All data and functionality will be exposed through service interfaces
This decouples specification from implementation, provides more stable interfaces, enables service virtualization for earlier testing, supports distributed development, and facilitates reuse.
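A small sketch of the virtualization benefit (the service and stub names are made up for illustration): because consumers depend only on the interface, a stub can stand in for the real provider before it exists, enabling earlier testing.

```java
// Hypothetical account service: consumers depend only on the interface.
interface AccountService {
    long balanceOf(String accountId);
}

// A service-virtualization stub used before the real provider is built.
class StubAccountService implements AccountService {
    public long balanceOf(String accountId) {
        return 100; // canned answer for early consumer testing
    }
}

// A consumer written against the interface, never the implementation.
class FeeCalculator {
    private final AccountService accounts;
    FeeCalculator(AccountService accounts) { this.accounts = accounts; }
    long monthlyFee(String accountId) {
        // low-balance accounts pay a fee in this illustrative rule
        return accounts.balanceOf(accountId) < 500 ? 5 : 0;
    }
}
```

Swapping the stub for the real provider later requires no change to FeeCalculator – that is the decoupling the principle is after.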
6. Teams communicate with each other through service interfaces; no other interprocess communication should be allowed
This ensures that all of the interactions between participants are encapsulated in the service contracts and interfaces. As a result, change and impact analysis is simpler and more reliable because there is less possibility of forgetting some hidden connections between parts.
7. Teams can use whatever implementation technology they want or need to implement their services
The implementation should never be exposed in the service interface. It should be encapsulated in the participant’s implementation.
This is similar to the approach to modeling business value chains. The participants in the value/supply chain expose their contracts and agreements, but not the private, internal business processes they use to implement and follow those agreements. That’s where they maintain their IP and competitive advantage.
This is subject to other enterprise architecture guiding principles and the ABBs of the technical architecture, which constrain the implementation variability.
8. All service interfaces must be externalizable
It can then be a business/reuse decision as to if, when, and to whom the capability is exposed as a service. If a service isn’t designed to be externalizable, then two things can occur. First, the interface may be incompletely or insufficiently defined. The possibility that an interface might become external often results in better design. Second, when a business need for the service arises, the service should not need to be redesigned, impacting all the current consumers, in order to make it external. If it was not designed to be external, then this would become a barrier to realizing the business need.
9. Refactor commonality up and variability down
This provides the greatest level of reuse, minimizes the chance of redundant implementations and accidental variability, while keeping coupling from variability low.
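One common way to realize this in code is the template-method shape sketched below (the payment names are purely illustrative): the shared processing is refactored up into a base class, and only the step that genuinely varies lives in each subclass.

```java
// Commonality up: validation and result formatting are shared by all handlers.
// Variability down: only the authorization step differs per payment type.
abstract class PaymentHandler {
    final String process(int cents) {
        if (cents <= 0) return "REJECTED";     // shared validation
        return "OK:" + authorize(cents);       // shared result formatting
    }
    protected abstract String authorize(int cents); // the variable step
}

class CardPayment extends PaymentHandler {
    protected String authorize(int cents) { return "CARD-" + cents; }
}

class TransferPayment extends PaymentHandler {
    protected String authorize(int cents) { return "XFER-" + cents; }
}
```

Pushing validation up avoids redundant implementations of it; pushing authorization down keeps the handlers from being coupled to each other’s variations.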
10. Practice relentless separation of concerns
This has all kinds of benefits including simplification, ease of maintenance, better reuse, easier project management, better connection to requirements and test cases, etc. But it also provides more flexibility for distributing work.
11. Use SOMA techniques as a means of identifying, specifying and realizing services
IBM’s Service-Oriented Modeling and Architecture (SOMA) is an excellent method for identifying and qualifying candidate services. Rational Software Architect includes a Rational Method Composer (RMC) instance of SOMA that provides detailed guidance on the method and supporting tools.
Here’s a list of the approaches SOMA describes for identifying candidate services:
- Goal-Service Modeling
- Domain Decomposition (Top Down Analysis)
- Process Decomposition
- Functional Area Analysis
- Information Analysis, Modeling, and Planning
- Rule and Policy Analysis
- Variation Oriented Analysis
- Existing Asset Analysis (Bottom up Analysis)
- Service Refactoring and Rationalization
- Service Litmus Tests
- Exposure Decisions, including Exposure Scope
12. Design models should inform, but not be the implementations
OMG Model-Driven Architecture (MDA) and tools like Rational Software Architect’s support for model-to-code transforms were admirable attempts to raise the level of abstraction of programming, make it easier to understand the analysis, design and implementation of programs through visualization, and enable lower skilled developers to be productive through rich code generation and testing through executable models.
Unfortunately this hasn’t worked out as well in practice as we might have hoped. MDA tends to commingle design and implementation making the models tedious to develop, hard to maintain, and too detailed to be useful for reasoning about complex systems and how they change over time. At the same time, rich programming environments and modern programming languages have addressed similar goals, making modeling less attractive to developers.
I prefer to separate analysis and design from implementation, use models for the former, and programming languages for the latter. This gives the best of both worlds. The models are easy to create, modeling only what is needed in order to understand the problem and guide the implementation. It’s still important to keep the models in sync with the code. But this should be treated as a change management and governance process, not something that requires the models and code to be isomorphic and always exactly in sync. That’s just doing the implementation twice.
Here are some goals for modeling:
- Ensure the implementation is consistent with the design decisions
- Speed up development through initial code generation
- Automate the creation of Java and XML details that can be tedious and error prone when done manually
- Provide new developers with an initial implementation to get them on-boarded faster
- Introduce new technologies to developers effectively through generated examples that have immediate business value
- Improve the productivity and utilization of less skilled developers
- Provide an initial implementation that meets design guidelines and is easier to manage and govern when utilizing off-shore developers, by maintaining more control of the implementation
- Support implementation evolution if development and testing discover that the initial design decisions or target platform architecture are insufficient to meet development schedules or quality of service commitments
- Keep design and implementation as separate concerns, but formalize the relationship between them through manageable coupling
13. Use IBM Process Designer for relatively simple, possibly changing, automation of existing manual business processes that require a lot of fairly simple user interaction (CRUD)
This guiding principle, and the next two, deal with the relationship between process modeling and implementation, and SOA modeling and implementation. As discussed in item 2, the SOA Reference Architecture, and common practice, is to start with business processes, and use SOA as a means of automating activities in the business processes. This is a good approach in many cases. But more complex systems of systems might benefit from taking more of a SOA approach across all the domains.
If you take this process-centric approach, then there are some useful guidelines for making the best use of the available tools and technologies. These recommendations are closely aligned with the SOA Reference Architecture.
IBM Process Designer is very useful for opportunistic or situational development of relatively simple business processes. It provides “direct-to-middleware” capability that supports easy development, deployment and change management of business processes. This is a very good way to automate high-level, paper-based processes that involve human tasks.
14. Use IBM Integration Designer when there is some assembly required
IBM Integration Designer can support richer processes and services as it provides additional capabilities for exploiting WSDL, XSD, and BPEL to develop and integrate processes and services.
Use modules and BPEL to assemble existing, well-designed services that require limited and relatively simple message mapping and choreography, perhaps including compensation. Integration Designer is also a good choice when monitoring is a key part of the application environment.
15. Use a modern programming language if the integration is complex, low-level, or has high performance requirements
If the problem domain is complex, utilizes a number of APIs in a common programming language, is low-level, or has high performance requirements, then direct coding might be faster and easier.
16. Apply SOA as an architectural style to any domain
SOA identifies the participants involved and the interactions between them, regardless of the domain, scope, time horizon, level of detail, etc.
SOA is an architectural style, not an approach to IT implementation. It shouldn’t be confused with Web Services (WSDL, XSD, BPEL), which is only one of many possible technical architectures that can be used to deploy SOA solutions. You can also use SOA principles to design RESTful Web Services, JEE session beans, or JMS messages.
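To sketch the style/technology separation (the greeting service and both bindings are invented for illustration): the same service contract can sit behind a direct in-process binding or a message-style binding, simulated here with a plain queue, without the contract changing.

```java
// One SOA contract, two bindings. Consumers see only GreetingService.
interface GreetingService {
    String greet(String name);
}

// A direct, in-process binding (e.g. a local session-bean-style call).
class DirectGreeting implements GreetingService {
    public String greet(String name) { return "Hello, " + name; }
}

// A message-style binding (a JMS-like send/receive, simulated with a queue).
class QueuedGreeting implements GreetingService {
    private final java.util.ArrayDeque<String> queue = new java.util.ArrayDeque<>();
    private final GreetingService target = new DirectGreeting();
    public String greet(String name) {
        queue.add(name);                   // "send" the request message
        return target.greet(queue.poll()); // "receive" it and dispatch
    }
}
```

Both bindings honor the same contract, which is the sense in which SOA is a style rather than a particular deployment technology.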
17. Separate the persistent data model from the service data model
Persistent data models describe the information used by the enterprise to implement its applications. Service data models describe the information exchanged between service consumers and providers. Service data is described as part of the service contracts and interfaces. These two data models are related, but they are usually not the same model. Service data modeling has some unique challenges, especially when it is coupled too tightly with the persistent data models:
- Define model service data items once and reuse them in different services
- Wrap request and response using reusable Service Data Objects (SDOs) with appropriate evolution/variability
- Extra fields are often required by one service but not used in subsequent services; when they become part of the common model, the result is fields that are never populated or used
- Consumers of SDOs can be confused by extra data that doesn’t describe the intent of the contract
- Often service requests only need an ID, but end up getting all the content defined with the ID
- Services end up having too much data. WSDL has large XSDs with lots of unused fields
Service data should be modeled separately from the persistent data. Service data should be informed by the persistent data schema, but it need not be the same. There should be limited need for automated mapping between service data and persistent data; rather, that mapping is the responsibility of the implementing service operations, not the service data itself. Good service data design can lead to better reuse of SDOs and simpler service interfaces. It can also lead to fewer data transforms between service consumers and providers, as well as fewer transform layers needed to overcome data impedance mismatches in the call stack required to implement a service.
A typical pattern might be that service data is interchanged by the service operations as they do the work. The implementation of the services involves processes that utilize persistent data as resources in their implementation. The two models are related, but they address different concerns and therefore require different data models.
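A minimal sketch of that pattern (the customer entity and fields are invented for illustration): the persistent model holds everything the enterprise stores, the service data object carries only what one contract needs, and the mapping lives in the service operation.

```java
// Persistent data model: everything the enterprise stores (illustrative).
class CustomerEntity {
    String id;
    String name;
    String taxFileNumber;      // sensitive; never part of the service contract
    String internalRiskRating; // internal-only field
}

// Service data model: only what this one contract exchanges.
class CustomerSummary {
    final String id;
    final String name;
    CustomerSummary(String id, String name) { this.id = id; this.name = name; }
}

class CustomerService {
    // The operation implementation owns the mapping between the two models,
    // so the service data stays decoupled from the persistent schema.
    CustomerSummary summarize(CustomerEntity e) {
        return new CustomerSummary(e.id, e.name);
    }
}
```

Because the mapping is in the operation rather than generated from the persistent schema, the contract does not drag unused or sensitive fields into every consumer.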