Switching from Evernote to Apple Notes

I take a lot of notes; I always have. For years I kept paper notebooks, each covering about four months. When a notebook filled up, I would quickly review its contents and create an index on the last page of key content I might need to locate in the future. This also gave me a nice chance to review what I’d been doing, note any actions I never completed (I used a check box to mark actions), and think about what’s next.

This worked OK. It was low budget, easy to transport to meetings, easy to mix sketches with text, and a low barrier to capturing notes. But it was hard to make updates or find anything quickly. And as I get older, my handwriting gets slower and worse, while demands on my time go up. I’m at the point where I can’t really read my own handwriting that well anymore. Plus, once lighter laptops and WiFi became accessible, transparencies on overhead projectors were replaced by shared PowerPoint presentations, and we all started carrying computers to meetings.

Now don’t get me wrong, there are lots of good reasons for having your computer with you in meetings. But Kent Beck once said “Once the computer turns on the communication turns off.” That’s especially true now that few of our meetings are face to face or even include video conferencing. But that’s a topic for another day.

Anyway, since I had to carry the computer anyway, why not use it for notes? After trying lots of things, I found Evernote, and my note-taking dreams were finally realized. It ran on every platform. I could create local notebooks for company-sensitive material. I could organize and search notes. I could include images and draw sketches (with some effort). It was everything I needed, and I used it literally for years.

But Evernote suffered a bit from functional bloat, had some quality issues, and its notebook stacks were not that flexible for organizing notes like a wiki. I made it work though, and there was enough UI configuration flexibility that I came up with a usage pattern that was convenient and made good use of screen real estate.

I tried the for-fee professional version for a year, and found it added more functional bloat and features I generally didn’t need. So I went back to the free version.

But Evernote has changed its pricing policy and now only allows free users to use it on two devices. You can still use the Web client on other devices, but that’s not a very efficient way to use Evernote.

So I thought it might be time to explore alternatives. I gave Apple Notes a try a while ago and found it too limiting. But the version in El Capitan was actually a significant improvement. And there was a relatively easy way to export content from Evernote and import it into Apple Notes. So I gave Apple Notes a serious try and found the UI (which is very similar to Apple Mail) to be pleasant, efficient, uncluttered and easy to use – i.e., good usability design, which is what Apple is known for.
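
If you’re curious what that migration involves, here’s a minimal sketch (Python; the file names are hypothetical) of the core of such a conversion. Evernote’s .enex export is XML, and each note’s ENML body is essentially XHTML, so it can be pulled out into per-note HTML files that can then be brought into Apple Notes. Attachments and metadata are ignored, so treat this as a starting point rather than a complete converter:

```python
# Minimal sketch: split an Evernote .enex export into per-note HTML files.
# Assumes the typical .enex layout (<en-export><note><title>/<content>).
import re
import xml.etree.ElementTree as ET
from pathlib import Path

def enex_to_html(enex_path, out_dir):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, note in enumerate(ET.parse(enex_path).getroot().iter("note")):
        title = note.findtext("title", default=f"note-{i}")
        body = note.findtext("content", default="")
        # ENML wraps XHTML in an <en-note> element; keep just the inner markup.
        match = re.search(r"<en-note[^>]*>(.*)</en-note>", body, re.DOTALL)
        inner = match.group(1) if match else body
        safe_name = re.sub(r"[^\w\- ]", "_", title)[:80] or f"note-{i}"
        (out / f"{safe_name}.html").write_text(
            f"<html><body><h1>{title}</h1>{inner}</body></html>",
            encoding="utf-8")

enex_to_html("MyNotebook.enex", "exported_notes")  # hypothetical file names
```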

If you’re considering using Apple Notes, here are a few remaining challenges I’ve discovered that might encourage you to stick with Evernote a bit longer.

  1. You cannot create note links – this significant missing feature prevents creating notes that organize other notes in different ways for different purposes, like a wiki.
  2. There’s no support for tables – Evernote tables get translated into tab-separated fields and don’t render well.
  3. Copying text and pasting it into another app like Lotus Notes does not retain the formatting; it converts it to tabbed text that does not wrap with a proper hanging indent.
  4. No back button when navigating notes. I’m surprised how often I used that in Evernote, just like with web pages in a browser.
  5. Search is always on all notes, and cannot be scoped by folders.
  6. There’s no tagging or note navigation by tag, so you can’t organize the same notes in different ways.
  7. Text highlighting (yellow highlighter) is not supported (at least I couldn’t find it).
  8. Creating a folder does not create it as a subfolder of the selected folder; you have to move it afterward.
  9. You cannot share notes with other Notes users the way you can with Evernote, or through Dropbox. iCloud doesn’t yet support flexible sharing.
  10. Can’t resize images.
  11. Can’t zoom a note to make it bigger for presentation purposes.
  12. Note windows do not remember their resized shape.
  13. When doing a search for notes, there’s no way to see what folder the note is in.
  14. When opening a note in a new window, there’s no way to display the toolbar to facilitate the editing of the note in that window.
  15. The rich client app is Apple only and not available on Windows, Linux, or Android platforms. However, the Web UI is quite good and works everywhere.
  16. There’s no easy way to share a note through email.

And Apple Notes has some nice advantages too:

  • It does a good job pasting content into a note and using the right font
  • The UI is simple, consistent, efficient, easy to use and effective
  • The Web UI is nearly as nice as the rich client UI and looks and functions well.
  • Like Apple Mail, Apple Notes can integrate notes from other accounts, like notes in a Google account.
  • You can have notes stored locally or in iCloud
  • Completely free on all Apple devices

Microsoft OneNote was another possible option. But the MacOS version has two fundamental showstoppers: you can’t have local notebooks (all notes have to be on OneDrive), and you can’t have more than one window open at a time. Not sure who thought these were good ideas, but they make OneNote impossible to use on MacOS.

 


SAFe and/or DAD?

Many enterprises are finding that RUP is too costly, prescriptive and heavyweight, doesn’t provide the expected improvements in quality or time to value, and as a result often isn’t really followed by project teams. At the other end of the spectrum, Scrum is probably too simple, or a poor fit for enterprise IT development, since it focuses primarily on the construction phase of largely independent projects with relatively small teams. Scrum is a good starting point, but needs to be scaled up to meet typical enterprise project needs.

There are a number of different approaches to scaling agile processes beyond individual projects to address enterprise concerns. Two of the more popular approaches are Scaled Agile Framework (SAFe) and Disciplined Agile Delivery (DAD).

SAFe and DAD both aim to scale Agile and Lean approaches, in particular Scrum, to support larger projects and cross project coordination at the enterprise level. But, they do it differently. DAD scales Agile within a delivery team to address inception and transition concerns. The inception phase helps prepare the project backlog needed to guide the project’s iterations. The transition phase includes additional testing and other activities required to move a release into production. For continuous delivery, the inception and transition phases of DAD may be quite short and potentially largely automated. DAD is also less prescriptive, providing a goal centered approach to tailoring the process to support earlier value delivery and risk reduction. There is a Rational Method Composer (RMC) plugin for DAD and a DAD Rational Team Concert (RTC) process template provided by IBM. However these assets are not being actively maintained as IBM is focused on SAFe support.

SAFe utilizes Scrum at the team level, and scales agile and lean across teams at the program and portfolio management level. Portfolio management helps drive Epics from enterprise investment strategies. Program management coordinates team activities to enact shared business direction and architectural vision, determine related groups of work items for cross-team dependencies, and coordinate with external team representatives. SAFe is also supported by IBM with an RTC process template and RMC method plugin, which are both provided free with a purchase of RTC. There are many sessions at IBM InterConnect 2015 that introduce these offerings. Be sure to check them out.

When conducting method adoption workshops or transition initiatives, you may find that your enterprise clients do need program and portfolio management capabilities. But stakeholders often don’t highlight these issues, as they are more focused on either siloed SDLC activities or individual project delivery challenges. Portfolio management, enterprise architecture, reusable asset management, and project cross-cutting concerns are important, but you may encounter situations where more core capabilities at the team and project delivery level need to be addressed first. Therefore SAFe might be seen as addressing cross-project and enterprise portfolio concerns that are less critical to your clients’ success, and not their immediate needs. SAFe does utilize Scrum at the team level, but that may not be sufficient to meet their initial enterprise project delivery needs.

DAD may be a better tactical fit in these situations since it addresses broader project management concerns than Scrum, without necessarily addressing full enterprise program and portfolio management issues your client might not be ready to consume. SAFe might be a better strategic solution in the future, but it may be beyond the scope of your client’s immediate needs.

One way to address the apparent conflict is to take a hybrid approach.

1. Recommend DAD to your client as a starting point for introducing agile in an enterprise context. Leverage DAD’s goal-centered approach to tailoring the process as needed. DAD may address your client’s immediate needs, may be more consumable by the organization, has rich community support, and may fit better with the specific method adoption initiative.

2. Recognize that project teams are not entirely independent in an enterprise context because of asset reuse, enterprise architecture building blocks and guiding principles, systems-of-systems dependencies across teams, etc., and that eventually the program and portfolio management concerns will need to be addressed.

3. When your client is ready to scale agile and lean across project teams, and to address program and portfolio management issues, you can position SAFe as a natural tailoring and extension of DAD to address these concerns. The tailoring of DAD would be to reduce the overlap between its inception and transition phases and the program management activities in SAFe, essentially replacing DAD with its Scrum subset and utilizing SAFe to provide scale.

Taking this approach allows you to start with DAD to address broader project delivery needs than are covered by Scrum, and then relatively seamlessly introduce SAFe to cover even broader program and portfolio management concerns. You may find this is an effective approach for scaled agile method adoption.


Requirements Delivery and Asset Management Project Lifecycles

Enterprise assets, including business architecture and BPM operational components, cannot simply be split along a strategic/tactical dimension – enterprise assets are usually involved in both. Elements of the enterprise architecture are building blocks that are instantiated and used in solution architectures. They play a strategic role in developing plans to close gaps between existing and desired business value propositions, capabilities and quality commitments. And they play a tactical role when instantiated in solutions that realize the plans.

Enterprise assets generally follow two parallel life cycles. The work done in these life cycles is usually organized into projects that allocate budget, time, work and resources to meet a set of requirements. Let’s characterize the projects for these two life cycles as Requirements Delivery Projects (RDPs), which address specific requirements for specific business initiatives, and Asset Management Projects (AMPs), which manage and govern reusable assets across the enterprise over time.

The characteristics of these projects are quite different. Requirements Delivery Projects:

  • Are often short lived
  • Address specific requirements to deliver specific business values/results
  • Can have requirements that conflict with those of other projects
  • Are on different release cycles with different milestones
  • May be critical to short, medium and/or long term business vitality
  • Use assets across enterprise business functional areas
  • Are often focused on deliverables over commonality/variability analysis required to support development for and with reuse
  • Have limited budgets and time horizons focused on the specific project deliverables
  • Are not necessarily funded sufficiently for enterprise asset management
  • Often have matrix organizations of teams accountable for functional areas as well as specific project deliverables

Asset Management Projects are often quite the opposite:

  • Have lifetimes that are tied to the enterprise itself
  • Are organized by business functional areas, not specific project deliverables
  • Address requirements of the enterprise for vitality and sustainability across projects
  • Must resolve requirements and component changes across projects over time
  • Are on more coordinated, and often much longer release cycles
  • Are critical to long term business vitality, business integration, sustainability and agility through reuse, but can have immediate impact on short term goals
  • Address commonality/variability for development for and with reuse, as well as separation of concerns and refactoring to manage cohesion and coupling
  • Need (but often don’t have) independent budgets and supporting organizational structures
  • Are funded for asset management and governance
  • Often have matrix organizations of teams accountable for different functional areas

It is important to have organizational structure, methods, and practices for harvesting, managing and governing change in enterprise assets across all ongoing projects. Otherwise it may be difficult to manage enterprise assets to avoid redundant development of overlapping functionality, poor adherence to architecture guiding principles, limited reuse, accidental variability, and collisions and incompatibilities when attempting to do enterprise wide business integration.

Enterprise architecture management involves understanding and categorizing the many business functions of the enterprise to establish a context for separation of concerns, minimizing coupling, and supporting asset management and governance. Many organizations realize this, and are familiar with the TOGAF ADM. However, some struggle to put an effective EA practice in place. They do have a topology of cross-cutting categorizations of business functions that reflects at least a good tacit understanding of their business. The question is how to use that knowledge to support more effective change and configuration management of enterprise assets, in the context of managing individual project requirements.

One possibility is to take a two-dimensional approach, separating changes for RDPs and AMPs. Each RDP has its own lifecycle project that organizes the teams, processes and work necessary to address its specific set of requirements. These projects result in changes to assets that meet project needs first, enterprise needs second. At the same time, enterprises should create a lifecycle project for each business functional area whose purpose is to manage the assets of that functional area.

Functional areas are chosen for enterprise asset management because functional cohesion is the strongest form of cohesion, will likely result in less coupling across projects and assets over time, and facilitates more efficient and effective change management. Some organizations want to create different project areas for use cases, design models, code, test cases, etc., because that is how their teams are organized. But the coupling between the use cases, realizing design models, implementing code and validating test cases within a business functional area is likely much greater than the coupling between use cases across functional areas. So these different abstractions of a single functional area are best managed together in the same lifecycle project. Artifact managers within the lifecycle project provide additional separation of concerns, relying on project associations to manage the coupling between artifact types.

Each RDP manages change through CCM project area streams. Each stream captures the changes to a set of artifacts contained in (loadable) components over time, for some purpose. An RDP will likely have a single production integration stream that represents the artifacts delivered from the project. Change sets flow up from developers completing work items to the integration stream, and down from the integration stream into ongoing developer streams to minimize the impact of change. The purpose of the RDP is to deliver the project requirements through the artifacts on the integration stream. Project management and governance controls the flow of change sets between the streams, usually based on push models – change sets are pushed, or delivered, from one stream to another by the team member making the changes.

The AMPs also manage change through CCM project area streams. Each stream captures the changes to a set of functionally cohesive enterprise assets, also contained in components over time, for the purpose of managing the lifecycle of the assets and improving their vitality over time. AMPs and RDPs can share the same components, depending on the affinity of the projects. Often most of the work associated with asset changes is actually done in the RDPs, with these changes harvested and hardened for asset lifecycle management in the AMPs by teams responsible for assets in specific business functional areas. Change sets flow up from the RDPs into the AMP streams – often more than one stream, at least QA and enterprise integration streams. Project management and governance controls the flow of change sets between the streams, usually based on pull models – change sets are pulled from the RDP streams into the AMP by the owners of the business functional areas. The AMP change sets are then pulled down into the RDP streams for reuse. A sketch of the two flow models follows.
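
Here’s an illustrative sketch (plain Python, not RTC’s actual API or terminology) of the two flow models: deliver is the push used within an RDP, where the change-set author initiates the flow, and accept is the pull an AMP owner uses to harvest changes from RDP streams:

```python
# Illustrative model of streams and change-set flow; names are invented.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ChangeSet:
    id: str
    component: str
    description: str

@dataclass
class Stream:
    name: str
    change_sets: list = field(default_factory=list)

    def deliver(self, change_set, target):
        # Push model: the change-set author initiates the flow to the target.
        if change_set in self.change_sets and change_set not in target.change_sets:
            target.change_sets.append(change_set)

    def accept(self, change_set, source):
        # Pull model: the owning team harvests a change set from the source.
        if change_set in source.change_sets and change_set not in self.change_sets:
            self.change_sets.append(change_set)

# An RDP developer delivers (pushes) a completed work item's changes up to
# the project's integration stream...
cs = ChangeSet("cs-1", "billing", "add invoice type")
dev = Stream("rdp-dev", [cs])
integration = Stream("rdp-integration")
dev.deliver(cs, integration)

# ...and an AMP functional-area owner later accepts (pulls) the hardened
# change set into an enterprise QA stream for asset lifecycle management.
amp_qa = Stream("amp-billing-qa")
amp_qa.accept(cs, integration)
print(amp_qa.change_sets)
```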

The separation of these two dimensions of change provides the project areas, team organizations and stream mechanisms to manage changes made in projects delivering business requirements, coordinated with changes made to improve asset vitality and quality. AMPs provide a target destination for harvesting appropriate change sets across possibly conflicting RDPs.


Collaborative Program Management

Large, multi-year projects like Intelligent Transportation Systems, Health Care Exchanges, new factory construction, road and bridge construction, etc., involve significant collaboration between teams. There are typically three primary roles that participate in these collaborations. The Client is the individual, public sector or private sector organization who has the goals, needs and expectations that are to be addressed. A client represents the demand side of an initiative, and is often the source of requirements for the initiative, and of the funds to pay for it. Another role is the Solution Provider, who actually builds components of the solution. Solution providers represent the supply side of the equation, delivering the outputs having the value propositions, capabilities and commitments that meet the goals, needs and expectations of the client. Solution providers often organize their work into one or more projects required to deliver the outputs, including systems and software, that meet the client requirements.

A large project may involve many solution providers. In some cases, the client may choose to work directly with the solution providers through their own Project Management Office (PMO). In other cases, the initiative may be sufficiently complex, requiring unique skills that the client does not need to establish and maintain in-house. In these cases, the client may engage a program management consulting organization to provide the necessary skills, and to act as a mediator between the client and the many solution providers. The solution providers are responsible for delivering outputs to the program manager, who is in turn responsible for ensuring the program outcomes are delivered to the client. The collaboration between the client, program manager and solution providers manages the information and material flow in a supply or value chain connecting supply and demand. In the rest of this post we’ll examine some sources of waste and inefficiency in this supply chain, in particular the challenges of document-centered communication and collaboration.

Large initiatives follow a fairly typical lifecycle. The client assesses business influencers, their goals and strategies, and envisions initiatives that will achieve the desired results. The initiatives are then elaborated into requirements that are collected into one or more RFPs issued to potential solution providers. The solution providers study the RFPs and develop proposals including detailed output descriptions and SOWs that estimate the cost and delivery schedules they are willing and able to commit to. The client examines the proposals, assessing them against some criteria and eventually awards the contracts to the winning bidders.

Then the projects begin. During their lifecycle, the solution providers deliver interim outputs, which the program management organization assesses against the requirements to determine the gaps. Additional work is then done to close the delivery gaps. This is repeated until the program management organization delivers an acceptable result to the client, who signs off on the deliverables, and the program is completed.

Now let’s look at how the collaboration and communication between the client, program management, and solution provider organizations typically takes place. The RFPs and SOWs are the primary means of communication and represent the formal agreements between the parties. In the past these documents were delivered on paper through the mail and were marked up through the review process, edited and republished until accepted. The determination of the requirements/solution gap then required the program management organization to assess deliverables against requirements in the final accepted documents. If the requirements changed, a change order process described how the documents were updated, reviewed and accepted by all the involved parties.

We’ve made some real improvements to this process by leveraging typical electronic office documents and email to reduce the development and delivery time for managing these documents. However, these improvements are relatively insignificant compared to the time and effort required to develop, maintain and use these documents to manage the overall projects and program. There has been little substantial change in how clients, partners and suppliers interact in the supply chain, even though there is extreme pressure on them all to do more with less.

There are many sources of waste and inefficiency in the supply chain. Lean value stream analysis is a way of determining these sources of waste and assessing different opportunities for process improvement. It is beyond the scope of this post to examine the whole value stream involved in these large initiatives. But we can focus on the particular challenges of document-centric collaboration and communication, and propose specific solutions that address them.

Document-centric collaboration and communication has a number of challenges that create latency in the value stream. Documents focus on presentation for human consumption, commingling the view with the underlying data. This makes the information in the document harder to reuse to support other stakeholder needs. Business analysts and solution architects have to spend time on presentation creation, making compromises on information organization, style and content in order to balance different stakeholder needs within a single document. In the meantime, these highly skilled resources are not spending their time eliciting, using, and managing requirements and other rich program information. Even in cases where documents are at least partially generated from other information, they can still take a long time to develop and maintain.

Documents tend to copy rather than link to reused information. This redundancy results in the same information being reviewed multiple times. Redundancy also increases document maintenance costs, since the copied information has to be updated in multiple places, and the updates also have to be reviewed multiple times.

Because of the production and maintenance costs, documents are often out of date shortly after they have been published, resulting in consumers working with incorrect information. Using email as the primary means of information sharing also increases the chances that stakeholders are working with incorrect or out-of-date versions of the documents.

But perhaps the most significant challenge is that it is not practical to support fine-grained linking between elements across documents; manual hyperlinks are just too difficult to maintain. As a result, it is difficult to process the information in documents through automation, and documents support only informal, coarse-grained change and impact analysis.

The fundamental issue is that information that should be shared between the stakeholders is being communicated indirectly through published documents instead of being used directly. Let’s explore a different approach. IBM Rational Collaborative Lifecycle Management (CLM) tools were designed to facilitate collaboration between stakeholders in typical Solution Delivery and Lifecycle Management (SDLC) or Systems and Software Engineering (SSE) processes. These tools provide support for requirements management, change and configuration management, and quality management. We can use these tools to reduce waste in the supply chain for large initiatives. Since the tools are used by many stakeholders across different organizations, they can be provided as SaaS through a hosting provider to make them accessible as needed, while also reducing tool installation and maintenance costs.

Let’s take another look at the initiative lifecycle, and see how it might be improved by utilizing CLM tools to decrease communication latency and errors. The client assesses business influencers, their goals and strategies, and envisions initiatives that will achieve the desired results. The initiatives are then elaborated into requirements that are captured directly in the CLM requirements management tool, DOORS-NG. Modules can be used to organize the same requirements in different ways to meet different stakeholder needs, but without copying the requirements (the toy sketch below shows the idea). Diagrams and text can be linked through semantically rich relationships that are navigated directly in the tool. Reviews can also be created in DOORS-NG, and the results of the reviews and review comments are stored directly in the database, attached to the applicable requirements. When the client is ready to issue RFPs, they simply publish a link to the appropriate requirements modules in the DOORS-NG database. There’s no need to create an RFP document – the solution providers can access the requirements directly.
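
To make the “organize without copying” point concrete, here’s a toy sketch (plain Python, not the DOORS-NG API): two module views hold references to the same underlying requirement, so an edit is immediately visible in both organizations of the content:

```python
# Toy illustration of "organize without copying": two module views reference
# the same underlying requirement, so there is one source of truth.
req = {"id": "REQ-42", "text": "The system shall ..."}

rfp_module = [req]        # the requirement as organized for the RFP
test_plan_module = [req]  # the same requirement organized for test planning

req["text"] = "The system shall respond within 2 seconds"
# The edit is visible in both views because nothing was copied.
assert rfp_module[0]["text"] == test_plan_module[0]["text"]
```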

The solution providers study the requirements and develop proposals, including detailed output descriptions, project plans and work items in Rational Team Concert (RTC), that estimate the cost and delivery schedules they are willing and able to commit to. The plans and work items are directly linked to the requirements they are intended to fulfill. And the work items and requirements can also be linked to the test cases in the quality plan that will be used to determine the requirements/delivery gap. Solution providers submit their bids as links to RTC project plans. Clients can then utilize portfolio management tools such as Rational Focal Point to objectively assess the bid proposals against initiative criteria, and easily visualize which bids have the lowest costs, risks, time to value, etc. This is because the SOWs are now in a form that can be processed by Focal Point instead of being trapped in documents that can only be read by humans. The sketch below illustrates the idea.
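
As a hedged illustration of what that enables (the criteria, weights and scores below are invented, and this is not Focal Point’s API), a weighted-scoring pass over structured bids becomes a few lines of code once SOWs are data:

```python
# Hypothetical weighted-scoring sketch of automated bid assessment.
criteria = {"cost": 0.5, "risk": 0.3, "time_to_value": 0.2}  # weights sum to 1

bids = {  # 0-10 scores extracted from structured proposals (invented data)
    "provider-a": {"cost": 7, "risk": 6, "time_to_value": 9},
    "provider-b": {"cost": 9, "risk": 4, "time_to_value": 6},
}

scores = {bid: sum(values[c] * w for c, w in criteria.items())
          for bid, values in bids.items()}
print(max(scores, key=scores.get), scores)  # highest weighted score wins
```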

During the program lifecycle, the solution providers deliver interim outputs, which the program management organization assesses against the requirements to determine the gaps. The IT outputs can be delivered directly in RTC using the source code management facility. RTC change and configuration management provides rich capabilities for managing change on multiple streams to support different needs. Rational Quality Manager (RQM) can be used by the program management organization to capture test results, and to connect the tests to the requirements and to the new work items required to close the delivery gaps. This is repeated until the program management organization delivers an acceptable result to the client, who signs off on the deliverables, and the program is completed.

In summary, Rational CLM can be used to eliminate the documents and enable collaboration and communication directly through shared information, stakeholder-specific dashboards and views, subscription and notification, and full lifecycle traceability and impact analysis. CLM capabilities can significantly reduce waste and risk in projects and programs. Providing these capabilities through SaaS reduces the infrastructure and startup costs that would normally have been borne by all of the clients, partners and suppliers creating their own lifecycle management solutions – solutions that would be siloed and fragmented, and would still limit access to shared information. All touch points between clients, partners and suppliers in the supply chain are through the CLM shared information. RFPs are replaced by requirements management. SOWs are replaced by project plans and work items that are directly connected to requirements and validating test plans. The PMO uses quality management as the primary means of assessing the gap between delivered solutions and requirements by connecting test results to new work items and requirements.

 


Effective Use Of Model-Driven Development

The original promise of the Object Management Group’s Model-Driven Architecture (OMG MDA) initiative, and its realization in Model-Driven Development (MDD) methods and tools, was to separate conceptual business design from platform-independent logical solution design from platform-specific solution implementation, and to use model-to-model and model-to-code transforms to automatically derive one from the other. The goal was to raise the level of abstraction “programmers” use to analyze, design, and construct solutions so that solutions could be delivered faster and more reliably, using less-skilled developers, while following architecture decisions and guiding principles to close the business-IT gap.

There has been tremendous progress in the development of methods and tools that support MDD. However, the results have perhaps been a bit below expectations. There are a number of potential forces that challenged the realization of this vision. Summarizing the key ones briefly:

  1. Standards bodies are sometimes hard places to innovate effectively and efficiently.
  2. UML2, SysML, SPEM, BPMN, SoaML, MOF and other related standards that form the foundation of OMG’s Model Driven Architecture became quite complex in their own right, creating challenges for tool vendors and users.
  3. At the same time, the emergence of higher-level languages like Java, C#, Ruby, etc., the shift to Web and mobile development with maturing APIs like AWS, iOS and Android, and the introduction of highly productive integrated development environments like Eclipse, Visual Studio, Xcode, etc. made traditional programming easier and more productive, cutting into the MDA value proposition.
  4. The round-trip engineering required to keep models and code in sync proved more difficult to support with tools, to use in practice, and to manage in design and development projects than we would have hoped.

However, these are relatively insignificant contributing factors. Perhaps the primary issue was the belief that models could be sufficiently detailed and efficiently developed that they would essentially replace code, and easily address operating system, API and platform variability through automated transforms.

As a practical matter, this perhaps hasn’t worked out as envisioned. The biggest issue is that achieving the full MDA vision as practiced actually tended to commingle analysis, design and implementation – attempting to use the same model to perform all these functions, aided by various automated transforms. But this coupling is exactly what analysis, design and implementation practices are trying to avoid. The hallmark of software engineering is separation of concerns: addressing commonality and variability for reuse, leveraging refactoring as a means of improving asset vitality, and managing change.

My views on MDD have evolved, and are continuing to evolve in recommended approaches to MDA. On the one hand, IBM has offerings like IBM Business Process Designer that allow business analysts to develop BPMN models that can be directly executed by IBM Business Process Manager – no transforms are actually required. On the other hand, IBM provides many development capabilities using various programming languages with no modeling at all, including COBOL, C++, Java, Enterprise Generation Language, etc. In the middle are tools like IBM Integration Designer that present a higher-level set of views and editing tools for visual Web Services development using XSD, WSDL, SCA, BPEL and other W3C XML specifications. How do we reconcile all these different approaches? Clearly there’s no one-size-fits-all solution. Rather, the context of the particular problem domain, existing assets, team organization and skills, existing methods and tools, etc. will have a big impact on the role models play in this continuum. However, there may be some practical guidelines for MDD that provide more effective outcomes.

What I’m coming to realize is that approaches to MDA should be incorporated into a more holistic approach to Solution Delivery and Lifecycle Management (SDLC), which typically addresses the following facets:

[Figure: the facets of Solution Delivery and Lifecycle Management]

Oddly enough – these facets are the subject of this blog! Here are a few guidelines for getting the most out of MDD in the context of full SDLC activities and work products.

Analysis and design should focus on design concerns, not implementation. Those concerns generally address the solution architecture, as an instantiation of enterprise architecture building blocks used in a particular context to address project-specific requirements. Analysis and design models help inform project planning, guide development for and with reuse, enable effective change and impact analysis, support needs-driven test planning, and bridge between business requirements and delivered solutions. Analysis and design models also provide developers with the information they need to know to guide their work effectively and efficiently.

Design models should inform, but not be the implementations. This is because models that are sufficiently detailed to support transforms to executable code are often not only very tedious and expensive to develop (programming in pictures can be hard) but become unwieldy for their intended purpose. They become so complex and detailed in their own right, that they are no longer as useful for effective planning, change assessment, and impact analysis. And developers are generally more productive using text-based programming languages in modern IDEs. They don’t need models to be the implementation. They need the models to be sufficiently high level and comprehensible that they can inform the implementations, ensuring that design decisions are followed, and providing a means of communicating implementation constraints and discoveries back to the analysts and designers in order to improve the designs.

Above all, design models should be seen as a means of mediating between the what (requirements) and the how (implementation). They do this by providing an effective means of capturing, communicating, validating, reasoning about and acting on shared information about complex systems that helps close the gap between requirements and delivered solution. At the same time, the design models provide the foundation for change and impact analysis and project lifecycle management and governance. Tools like:

  • Rational DOORS-NG for requirements management
  • Rational Team Concert for change and configuration management
  • Rational Quality Manager, and
  • Rational Design Manager

provide an integrated set of capabilities leveraging OSLC and the Jazz platform common services to effectively use design models to link complex and rapidly changing artifacts for more effective lifecycle management. Together these tools help you do real-time planning, informed by lifecycle traceability, through in-context collaboration to support continuous process improvement. Design models provide the context in which to understand the links between all these artifacts and the implications of change in them.

Avoid round-trip engineering and design/implementation synchronization problems by avoiding the coupling in the first place. Try to keep the models at a relatively high level so that they clearly address the business problem and identify the cohesion and coupling pain points that impact all projects to at least some extent. At the same time, try to strike a balance between design and implementation concerns by providing implementation guidance in the model documentation rather than in the model itself. Developers will be able to complete these implementations more easily in programming IDEs than the analyst can using UML. Taking this approach, the design models and implementation code are linked, but not highly coupled. It is not necessary to do round-trip engineering, since the design and implementation aren’t just different copies or representations of the same thing. Rather, they address different but related and richly linked concerns.

Use design models and MDD to create the implementation scaffold. The models and MDD can still be used to generate the overall solution architecture, to speed up development and provide developers with a starting point that is directly derived from the design through an automated transform. But they don’t need to be so detailed that they generate the details of the implementation that are better managed with other development tools. The code generated from the models should be kept separate from the code developed by hand. There are various techniques for doing this, including the adapter, facade, or mediator patterns, or subclassing; a sketch of the subclassing approach follows. Avoid using @generated markers to separate generated from non-generated code in the same resources – this can be difficult to maintain. Keeping the enterprise architecture, analysis, design and implementation in sync then becomes part of an overall approach to information linking, management and governance, informed by the requirements to be fulfilled and the validating test cases.
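
Here’s a hypothetical sketch of the subclassing technique (plain Python; the class and file names are invented, not output from any particular transform). The generated scaffold captures the design-mandated structure and can be regenerated freely; the hand-written subclass lives in a separate resource the transform never touches:

```python
from abc import ABC, abstractmethod

# --- generated/order_service_base.py: regenerated by the transform on every
# design change; never edited by hand.
class OrderServiceBase(ABC):
    """Scaffold derived from the design model: operations, signatures, and
    the design-mandated call sequence."""

    def place_order(self, order_id: str, amount: float) -> str:
        self._validate(order_id, amount)
        return self._persist(order_id, amount)

    @abstractmethod
    def _validate(self, order_id: str, amount: float) -> None: ...

    @abstractmethod
    def _persist(self, order_id: str, amount: float) -> str: ...

# --- src/order_service.py: hand-written in the IDE; survives regeneration
# because it lives in a separate resource from the generated scaffold.
class OrderService(OrderServiceBase):
    def _validate(self, order_id, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")

    def _persist(self, order_id, amount):
        return f"order {order_id} stored"

print(OrderService().place_order("A-100", 25.0))
```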

This approach may be a reasonable compromise, getting the most out of modeling for planning and change management while appropriately leveraging modern programming tools. It keeps the models high level enough so they are still useful for planning, impact analysis, and informing architectural decisions.

The value of the models comes more from their ability to close gaps between business plans, project requirements, solution architecture and the delivered results than from improving programmer productivity. And the models are much more useful for guiding the refactoring required to propagate design changes into existing implementations, and for harvesting implementation decisions and discoveries back into the designs.

I hope this compromise provides some ideas on leveraging MDD to maximize the value of the models while still providing the information developers need to do their work.


Requirements Management

The purpose of requirements management is to assure that each project documents, verifies, and meets the needs and expectations of its customers and internal or external stakeholders. A requirements management program providing processes and tools for effective and efficient elicitation, capture, elaboration, validation, review and approval of requirements, and for managing expectations, is considered essential for a successful SDLC.

Requirements management is central to SDLC, as a means of guiding planning, change management, and quality management to deliver solutions that meet business and client goals, needs and expectations. Requirements must be linked to the business goals and strategies that motivated them, to the work items that produce the solutions that address them, and to the test cases that validate their achievement. Requirements traceability enables impact analysis in order to address changing requirements, and to assess, and develop a plan for closing the gap between what is needed and what has been delivered.

Poor requirements practices can have a detrimental impact on solution delivery and pose significant challenges across teams. Organizations often struggle with limited budgets and cannot consistently meet customer needs. Analysts face impediments to accurately capturing requirements, gaining consensus, and ensuring requirements are accurately implemented and tested. Development teams encounter challenges identifying changes and evaluating which requirements are most important to implement. Economic pressures are driving organizations to achieve value from investments faster than ever, despite these common challenges.

The whole team creates and uses requirements. Requirements are inputs to other areas of the development lifecycle, so high-quality requirements are essential. Without them, common symptoms include unmet business needs, poorly coordinated team effort, difficulty incorporating or managing change, and possible cost overruns and schedule delays.

Some of these quality issues may be addressed by engaging a wide range of stakeholders in the requirements process and, more importantly, gaining concurrence on requirement content. Requirement content is not only textual but visual as well. There is a diverse set of artifacts that may be captured to help define a “to be” solution. Some examples include BPMN diagrams that describe the business process, use case diagrams that define actors and use cases for what the system must do, and visualizations of the user interfaces through sketches and storyboards.

Requirements Management challenges include:

  • Eliciting and collecting requirements
    • Determining how much requirements management is necessary
    • Clients and users often don’t know what they want until you show them something that isn’t it, implying that requirements may not be known or understood until late in the development lifecycle
    • Difficulty choosing an appropriate method: agile (manage requirements incrementally as they are discovered through iteration deliveries) or waterfall (analyze requirements up front to know what you want to build before you build the wrong thing)
  • Using Requirements
    • Difficulty using and reviewing document-centered approaches to requirement collection
    • Connecting requirements to the designs and work that implement them, and the test cases that validate them
    • Organizing and presenting requirements so they are meaningful to other stakeholders
    • Coping with requirements complexity
  • Managing Requirements
    • Understanding the impact of large collections of rapidly changing, loosely related requirements
    • Treating requirements as simple lists of items with insufficient relationships to support change and impact analysis
    • Quantifying and communicating the value of requirements management to stakeholders
    • Inability to determine if completed work produces the intended outcomes
    • No agreed upon or practiced governance management process
    • Ineffective use of requirements to manage outsource projects

Context determines the approach for how, when and how many requirements to collect. Both the agile and waterfall approaches to requirements engineering are appropriate in their own contexts. Projects with a lot of change that need to get to market quickly might be best served by high-level, low-ceremony requirements practices. Stable projects with safety-critical implications may be best served by a plan-driven, well-documented specification.

IBM recommends that clients use RRC to capture requirements directly instead of using traditional requirements documents. Capture requirements from existing office documents, and harvest those documents into RRC for further analysis to reduce duplicate effort. Document-centric collaboration across individuals, teams and organizations has many challenges, including:

  • Documents focus on presentation for human consumption, commingling the view with the underlying data, making it difficult to create different views for different stakeholder needs
  • This can cause business analysts and solution architects to spend a lot of time on presentation creation, not on eliciting, using, and managing requirements and other rich SDLC information
  • Documents can take a significant amount of time to develop, even documents that are automatically generated
  • Documents tend to copy reused information (rather than link) which results in the same information being reviewed multiple times
  • It is not practical to support fine-grained linking between elements across documents; manual hyperlinks are just too difficult to maintain
  • Documents can be costly to maintain, especially when the copied information has to be updated in multiple places
  • Document collaboration and version management can introduce challenges, especially when email is the primary means of information sharing
  • Documents are often out of date shortly after they have been published resulting in consumers working with incorrect information
  • It is difficult to process information in documents through automation, and documents provide poor change and impact analysis

Requirements should be collected as individual items in a database in order to provide more efficient and effective requirements management. Instead of using requirements documents, the business analysts, project managers and other stakeholders can enter their business requirements directly in a requirements management tool. Various dashboards and reports can then be used by the stakeholders to manage these requirements, and published reports and/or documents can be created using Rational Insight or Rational Publishing Engine as needed. Requirements reviews can be done directly in tools like DOORS-NG instead of reviewing potentially static and out-of-date requirements documents.

The use of DOORS-NG gives an organization the ability to elicit, capture, elaborate, validate and reach concurrence on a rich set of artifacts in a single repository. Analysts may organize and manage requirements through the use of attributes, and standardize content capture through artifact templates. Content can be reported and queried via out-of-the-box reports and dashboards. Through a collaborative environment, organizations may reach consensus on requirements faster, aided by the use of reviews, and gain deep visibility into team activity via CLM integrations with test and development.

The next section describes the key functional capabilities needed to support effective requirements management. Requirements management tools should provide both requirements definition and management capabilities.

Requirements Definition and Elaboration:

  • Organize requirement content via custom requirement types (such as stakeholder needs, wants, functional requirements, and non-functional requirements).
  • Categorize requirement types to capture content such as priority, complexity and status through the use of attributes.
  • There are many artifacts that are inputs to the requirements process. Use modules, collections, shared filters, tags, attributes, hyperlinks and advanced searches to help you find and organize requirements and related information. These features help improve productivity and increase reuse.
  • Better stakeholder involvement leads to higher quality requirements and better project outcomes. Involve different roles in the requirements process – business users, development, test, operations and production – through an online community. Make requirements and their related business context visible to the extended team.
  • Break down the information islands that exist among the various SDLC tools and data formats used to express requirements information. Establish relationships among related information using hyperlinks, collections, attributes and tags. Have group conversations in threaded comments; see what others are creating, changing and commenting on. Include in this web of requirements information the files created by the tools you use today: office documents, recordings of conference calls, and informal documentation (for example, snapshots of white boards).
  • Consolidate unstructured information (rich text, images, tables, links, etc.) with easy document creation. Embed artifacts (diagrams, sketches) to create concise and clear vision and specification documents. Using this editor, users capture the annotated information to support any project, business goal or requirement.
  • Analyze, organize and manage requirements and their changes efficiently using attributes, collections, tags, filters, and views.
  • Capture common terms from text and organize them in a glossary through the use of dynamic glossaries and Word documents.
  • Identify relationships between requirements and other artifacts through requirement links.

Requirements Visualization:

  • Business process models help teams capture “as is” and “to be” business process content. Business process diagrams help teams create, share and validate current and future state business processes, including roles, rules, tasks, and decision points. Link business process diagram elements, tasks, and decision points to use cases, UI sketches, and requirements. Allow stakeholders to conceptualize how various inputs, outputs, and roles can work together to execute processes that create value for the business. Then relate other requirements artifacts to this business process to create a web of requirements information.
  • Flesh out additional details for the solution by identifying requirements that are related to business process diagrams, sketches and storyboards.
  • Express user interfaces (UIs) with wire frame mock-ups through a simple UI sketching editor. Create mock-ups and workflow examples for web-based interactions that link any UI component to rich document descriptions and requirements. Visualize system transactions and interactions as the user would, and identify user experience issues prior to costly web development.
  • Storyboards are a common, proven technique in movie-making: a fast, inexpensive way of communicating ideas; finding points of consensus, disagreement, and ambiguity; then making decisions. Requirements definition and solution design are highly iterative and likewise benefit from this kind of visual expression of user scenarios. Both non-technical and technically minded stakeholders can readily grasp the relevance of the storyboarded scenarios, and this can raise the quality of the requirements elicitation and validation conversations beyond what typically happens during textual document reviews. Quickly assemble UI storyboards in Requirements Composer from sketches using reusable components and templates. Make fast, consistent changes, which are propagated automatically. Link any UI part to other documents and requirements. UI storyboards can be rendered in low visual fidelity when communicating general ideas, or in more life-like, higher fidelity when communicating notions of visual design.
  • Use case models aid teams in capturing use case diagrams that describe system behavior. Elaborate use cases with rich document descriptions. Link use case diagrams to use case specifications, user interface sketches, storyboards, process flows and requirements.

Requirements Validation and View:

  • Validation of requirement content can be both formal and informal in nature. The key goal is to ensure requirement content is clear and understood. The comment functionality lets teams pose questions and provide feedback related to requirements. This information is stored in the history of the artifacts.
  • A web-based review and approval workflow enables your teams to achieve consensus and validate requirements faster by shortening the review and approval cycles. The reviews help document the business decisions that are made. The customizable user dashboards and viewlets provide collective information about project membership, recent activity, recent requirements, collaboration, and reviews. Commenting capabilities now include directing comments to multiple users.
  • Project dashboards, customized with viewlets, provide project team members with a consistent view of project information. Let the important information find you. Web dashboards include customizable viewlets, which can pull information from multiple tools (Requirements Composer, Team Concert, Quality Manager). See the latest comments for you or for the whole team; see new and recent changes in the project artifacts.
  • Express sophisticated relationships among requirements, and view and analyze them. Define custom link types, use them when relating requirements, and filter views to display relationships of interest, including multi-level traceability views such as “stakeholder needs are satisfied by features” and “features are satisfied by user stories and supplementary (non-functional) requirements”. Requirements and their relationships can be displayed in tree views and in various row and column views. This helps analysts uncover traceability gaps (coverage analysis) and analyze the impact of proposed changes (see the sketch after this list).
  • More in-depth cross-discipline information with Collaborative Lifecycle Management (CLM) based filtering. With the enhanced filtering, it’s now easier to create, save, and share specific dynamic views that report on key project management concerns. For example, using CLM links it’s possible to see related development tasks, test plans, and test cases – and filter on their status – directly in the requirements management user interface.
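
As a rough illustration of the coverage analysis mentioned above (the requirement and test-case IDs are invented, and this is not a DOORS-NG API), finding traceability gaps over exported requirement-to-test links can be as simple as:

```python
# Find requirements that no test case validates, i.e., the traceability gaps.
requirements = {"REQ-1", "REQ-2", "REQ-3"}
validated_by = {           # requirement -> test-case links, e.g. as exported
    "REQ-1": ["TC-10"],
    "REQ-3": ["TC-11", "TC-12"],
}

coverage_gaps = sorted(r for r in requirements if not validated_by.get(r))
print(coverage_gaps)       # ['REQ-2'] -- a gap to close before sign-off
```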

Although requirements management tools facilitate the quick capture and elaboration of content, published content can be critical for some teams. Harvesting content from documents into a requirements management tool, and generating documents from it using templates, can be an effective means of reducing waste in your requirements management process, and can be a first step toward moving away from document-centered collaboration. Automate tasks for generating and publishing hand-offs, contracts, reports, traditional reviews, use case survey documents and requirements specifications using out-of-the-box reporting based on Rational Publishing Engine technology. Out-of-the-box report templates include UI Specification, Use Case Specification, Traceability Report, Review/Approval Summary, and Audit History. A reporting wizard guides you in creating these reports in common file formats, including Microsoft Word, PDF, HTML, and XSL Formatting Objects (XSL-FO).

The end result of effective requirements management is closing the demand/supply gap faster, at lower cost, and with lower risk. It provides an efficient and effective process for analyzing the impact of change, and for responding to changing business influencers and priorities as needed in a dynamic marketplace. In summary, use requirements management to:

  • Involve all stakeholders and facilitate active collaboration around the requirements
  • Increase efficiency and improve quality by optimizing requirements communication, collaboration and verification
  • Achieve shared vision and understanding of project and program requirements
  • Unify teams around a common vocabulary using the integrated glossary facility
  • Enable new team members to become productive faster by making available online your requirements, business context, and past discussions about them
Posted in Requirements Management

SOA Guiding Principles

I recently worked on an engagement at a large bank in Australia to develop a new solution using SOA principles and Rational Software Architect, targeted for deployment to WebSphere Process Server. While developing the engagement report, I thought I would provide a brief summary of some SOA guiding principles to use during the engagement, along with any additional principles that might facilitate future development. To create this summary, I tried doing a Web search on “SOA Guiding Principles”. I got a lot of hits. But to my surprise, most of them offered advice on how to sell SOA, transition to SOA, or govern SOA – not that much on best practices for actually creating SOA models or solutions. When I searched for more specific information about WSDL and SCA best practices, I got lots of information on the standards themselves, but not much on best practices for using them, or many success stories about building solutions with SCA. Perhaps this sheds some light on the utility, popularity, or usability of these technologies and approaches.

I do believe that the underlying principles of SOA are sound and applicable to modern application development. They are largely based on Object-Oriented Technology and Component-Based Development, extending them to abstract and encapsulate the relationships between components in order to better manage coupling in complex systems. But somehow these SOA concepts seem to be lost on many people who still take a more traditional functional-decomposition view of their problem domain. This could be a result of the legacy systems they have to deal with, or of the somewhat process-centric views of SOA that come from BPM Suites vendors and from methods such as SOMA that tend to view services as a means of implementing activities used in business processes.

I thought I would capture some SOA guiding principles as a useful topic of discussion for Solution Architects, especially those who deal with enterprise and solution architecture. Here's the list, without much explanation, as topics for future discussion. Let's use the comments section to refine these guiding principles, understand their implications, and add more in order to help our clients be more successful using SOA to deliver their solutions.

1. Start analysis with business value or supply chain

Starting the analysis with the business value or supply chain helps ensure that the participants and services identified have significant business value. These services can then be managed in a portfolio, assessing their cost, risk, time to value, adherence to enterprise architecture guiding principles, use of a consolidated platform, and so on, to help determine which services should be developed first.

2. End analysis with implementing business processes

These processes implement the participant services, often by using services provided by others. The architecture is specified by the participants and their interactions, not the decomposition of the processes.

Notice that 1 and 2 are inverted from the SOA Reference Architecture:

[Figure: SOA Reference Architecture]

SOA principles should be applied to all aspects of the enterprise and solution architecture – including the business architecture in order to facilitate separation of concerns and manage cohesion and coupling.

3. Focus architecture on discovering the participants who collaborate to get work done

The participants abstract the consumers and providers of the services. They encapsulate state and behavior, keeping the consumer's goals, needs, and expectations separate from the provider's value propositions, capabilities, and commitments. SOA is about encapsulating the relationships between the consumers and providers in order to match the consumer needs with the provider capabilities.
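
To make this more concrete, here's a minimal Java sketch. The names (OrderService, OrderParticipant, Billing, Shipping) are mine, purely for illustration, not from any product or standard. The participant's published contract is the architectural element; the process it runs internally, composed from other participants' services, stays private:

    // Sketch: the participant's contract is the architecture; the process
    // implementing it is private and composed from others' contracts.
    interface Billing  { void charge(String customer, double amount); }
    interface Shipping { void dispatch(String orderId); }

    // The participant's published contract.
    interface OrderService {
        void placeOrder(String customer, String orderId, double amount);
    }

    class OrderParticipant implements OrderService {
        private final Billing billing;    // relationships to other participants,
        private final Shipping shipping;  // expressed only through their contracts

        OrderParticipant(Billing billing, Shipping shipping) {
            this.billing = billing;
            this.shipping = shipping;
        }

        @Override
        public void placeOrder(String customer, String orderId, double amount) {
            // The internal process: invisible to consumers, free to change.
            billing.charge(customer, amount);
            shipping.dispatch(orderId);
        }
    }

Consumers see only OrderService; how the participant meets its commitment is its own business.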

4. SOA is more than RPC

A service is work done by one party for another. A remote procedure call is a means of supporting late-bound, distributed function calls, which can describe only part of that work.

SOA is about abstracting and encapsulating the relationships between participants/components in order to provide more effective management of coupling in complex systems. Organizing operations in an interface is only part of this abstraction. A service can involve multiple interfaces and a long-running protocol that describes the agreement for how the participants interact. Protocols describe the sequence of operation calls, not just the calls themselves or the message exchange patterns.
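
Here's a hedged Java sketch of that idea (QuoteProvider and QuoteRequestor are hypothetical names, not from any standard). The service is the whole conversation, a sequence of operations across two interfaces, rather than a single call:

    // Sketch: a service as a long-running protocol, not a single RPC.

    // Implemented by the consumer so the provider can respond later.
    interface QuoteRequestor {
        void quoteReady(String rfqId, double price);      // the provider's eventual answer
        void quoteRejected(String rfqId, String reason);  // or its refusal
    }

    // Implemented by the provider.
    interface QuoteProvider {
        // Step 1: the consumer opens the protocol and says where to reply.
        String requestQuote(String item, int quantity, QuoteRequestor replyTo);

        // Step 2 (optional): the consumer may amend the open request.
        void amendQuantity(String rfqId, int newQuantity);

        // Step 3: the consumer accepts or abandons the quote, closing the protocol.
        void accept(String rfqId);
        void cancel(String rfqId);
    }

No single operation here is the service; the agreement covers the allowed sequence of calls in both directions.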

5. All data and functionality will be exposed through service interfaces

This decouples specification from implementation, provides more stable interfaces, enables service virtualization for earlier testing, supports distributed development, and facilitates reuse.
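
For example, here's a minimal sketch of the service-virtualization benefit (CreditCheckService and the other names are made up). Because consumers see only the interface, a team can substitute an in-memory stub long before the real provider exists:

    // Sketch: exposing functionality only through a service interface
    // enables service virtualization for earlier testing.
    interface CreditCheckService {
        boolean isCreditworthy(String customerId, double amount);
    }

    // Stands in for the real provider during early development and testing.
    class StubCreditCheckService implements CreditCheckService {
        @Override
        public boolean isCreditworthy(String customerId, double amount) {
            return amount < 10_000.00;  // canned policy, good enough for early tests
        }
    }

    // The consumer is written and tested against the interface alone.
    class CheckoutService {
        private final CreditCheckService creditCheck;

        CheckoutService(CreditCheckService creditCheck) {
            this.creditCheck = creditCheck;
        }

        boolean placeOrder(String customerId, double total) {
            return creditCheck.isCreditworthy(customerId, total);
        }
    }

Swapping the stub for the production implementation is a wiring change, not a code change, which is what makes distributed development and earlier testing practical.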

6. Teams communicate with each other through service interfaces; no other interprocess communication should be allowed

This ensures that all of the interactions between participants are encapsulated in the service contracts and interfaces. As a result, change and impact analysis is simpler and more reliable because there is less possibility of forgetting some hidden connections between parts.

7. Teams can use whatever implementation technology they want or need to implement their services

The implementation should never be exposed in the service interface. It should be encapsulated in the participant’s implementation.

This is similar to the approach to modeling business value chains. The participants in the value/supply chain expose their contracts and agreements, but not the private, internal business processes they use to implement and follow those agreements. That’s where they maintain their IP and competitive advantage.

This is subject to other enterprise architecture guiding principles and the architecture building blocks (ABBs) of the technical architecture, which constrain implementation variability.

8. All service interfaces must be externalizable

It can then be a business/reuse decision if, when, and to whom the capability is exposed as a service. If a service isn't designed to be externalizable, two things can occur. First, the interface may be incompletely or insufficiently defined; the possibility that an interface might become external often results in better design. Second, when a business need for the service arises, the service should not need to be redesigned, impacting all the current consumers, in order to make it external. If it was not designed to be external, this becomes a barrier to realizing the business need.

9. Refactor commonality up and variability down

This provides the greatest level of reuse and minimizes the chance of redundant implementations and accidental variability, while keeping the coupling introduced by variability low.
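
At the object level this often takes the classic template-method shape. A minimal sketch (PaymentProcessor is a made-up example): the common sequence is refactored up into the base class, and the variability is pushed down into a single hook:

    // Sketch: commonality refactored up, variability pushed down.
    abstract class PaymentProcessor {
        // The common sequence lives once, in the base class.
        final boolean process(String account, double amount) {
            if (amount <= 0) {
                return false;               // common validation
            }
            boolean ok = authorize(account, amount);
            audit(account, amount, ok);     // common audit trail
            return ok;
        }

        // Variability is confined to this one hook.
        protected abstract boolean authorize(String account, double amount);

        private void audit(String account, double amount, boolean ok) {
            System.out.printf("%s %.2f -> %s%n", account, amount, ok ? "OK" : "DECLINED");
        }
    }

    class CardPaymentProcessor extends PaymentProcessor {
        @Override
        protected boolean authorize(String account, double amount) {
            return amount < 5_000.00;  // stand-in for a real card network call
        }
    }

The same idea applies at the service level: shared contract elements move up into common interfaces, while provider-specific behavior stays down in the implementations.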

10. Practice relentless separation of concerns

This has all kinds of benefits including simplification, ease of maintenance, better reuse, easier project management, better connection to requirements and test cases, etc. But it also provides more flexibility for distributing work.

11. Use SOMA techniques as a means of identifying, specifying and realizing services

IBM Service-Oriented Modeling and Architecture (SOMA) is an excellent method for identifying and qualifying candidate services. Rational Software Architect includes a Rational Method Composer (RMC) instance of SOMA that provides detailed guidance on the method and supporting tools.

[Figure: SOMA]

Here’s a list of the approaches SOMA describes for identifying candidate services:

    • Goal-Service Modeling
    • Domain Decomposition (Top Down Analysis)
      • Process Decomposition
      • Functional Area Analysis
      • Information Analysis, Modeling, and Planning
      • Rule and Policy Analysis
      • Variation Oriented Analysis
    • Existing Asset Analysis (Bottom Up Analysis)
    • Service Refactoring and Rationalization
      • Service Litmus Tests
      • Exposure Decisions, including Exposure Scope

12. Design models should inform, but not be the implementations

OMG Model-Driven Architecture (MDA), and tools like Rational Software Architect's support for model-to-code transforms, were an admirable attempt to raise the level of abstraction of programming, make the analysis, design, and implementation of programs easier to understand through visualization, and enable less skilled developers to be productive through rich code generation and testing through executable models.

Unfortunately this hasn't worked out as well in practice as we might have hoped. MDA tends to commingle design and implementation, making the models tedious to develop, hard to maintain, and too detailed to be useful for reasoning about complex systems and how they change over time. At the same time, rich programming environments and modern programming languages have addressed similar goals, making modeling less attractive to developers.

I prefer to separate analysis and design from implementation, using models for the former and programming languages for the latter. This gives the best of both worlds. The models are easy to create, modeling only what is needed to understand the problem and guide the implementation. It's still important to keep the models in sync with the code. But this should be treated as a change management and governance process, not something that requires the models and code to be isomorphic and always exactly in sync. That's just doing the implementation twice.

Here are some goals for modeling:

    • Ensure the implementation is consistent with the design decisions
    • Speed up development through initial code generation
    • Automate the creation of Java and XML details that can be tedious and error prone when done manually
    • Provide new developers with an initial implementation to get them on-boarded faster
    • Introduce new technologies to developers through generated examples that have immediate business value
    • Improve the productivity and utilization of less skilled developers
    • Provide an initial implementation that meets design guidelines, which is easier to manage and govern when utilizing off-shore developers because more control of the implementation is maintained
    • Support implementation evolution if development and testing discover that the initial design decisions or target platform architecture are insufficient to meet development schedules or quality of service commitments
    • Keep design and implementation as separate concerns, but formalize the relationship between them through manageable coupling

13. Use IBM Process Designer for relatively simple, possibly changing, automation of existing manual business processes that require a lot of fairly simple user interaction (CRUD)

This guiding principle, and the next two, deal with the relationship between process modeling and implementation, and SOA modeling and implementation. As discussed in item 2, common practice, reflected in the SOA Reference Architecture, is to start with business processes and use SOA as a means of automating activities in those processes. This is a good approach in many cases. But more complex systems of systems might benefit from taking more of a SOA approach across all the domains.

If you take this process-centric approach, then there are some useful guidelines for making the best use of the available tools and technologies. These recommendations are closely aligned with the SOA Reference Architecture.

IBM Process Designer is very useful for opportunistic or situational development of relatively simple business processes. It provides “direct-to-middleware” capability that supports easy development, deployment and change management of business processes. This is a very good way to automate high-level, paper-based processes that involve human tasks.

14. Use IBM Integration Designer when there is some assembly required

IBM Integration Designer can support richer processes and services as it provides additional capabilities for exploiting WSDL, XSD, and BPEL to develop and integrate processes and services.

Use modules and BPEL to assemble existing, well-designed services that require limited and relatively simple message mapping and choreography, perhaps including compensation. Integration Designer is also a good choice when monitoring is a key part of the application environment.

15. Use a modern programming language if the integration is complex, low-level, or has high performance requirements

If the problem domain is complex, utilizes a number of APIs in a common programming language, is low-level, or has high performance requirements, then direct coding might be faster and easier.

16. Apply SOA as an architectural style to any domain

SOA identifies the participants involved and the interactions between them, regardless of the domain, scope, time horizon, level of detail, etc.

SOA is an architectural style, not an approach to IT implementation. It shouldn't be confused with Web Services (WSDL, XSD, BPEL), which is only one of many possible technical architectures that can be used to deploy SOA solutions. You can also use SOA principles to design RESTful Web Services, JEE session beans, or JMS messages.

[Figure: SOA]
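
As a sketch of that point (using standard JAX-RS annotations; AccountService and AccountResource are names I made up), the same technology-neutral contract can be bound to a RESTful endpoint, and could just as well be wrapped by a session bean or a JMS listener:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // The technology-neutral service contract: this is the SOA part.
    interface AccountService {
        String balance(String accountId);
    }

    // One of several possible bindings for the same contract.
    @Path("accounts")
    public class AccountResource {
        private final AccountService service;

        public AccountResource(AccountService service) {
            this.service = service;
        }

        @GET
        @Path("{id}/balance")
        @Produces(MediaType.TEXT_PLAIN)
        public String balance(@PathParam("id") String id) {
            return service.balance(id);
        }
    }

The SOA decision is the AccountService contract; JAX-RS is just one deployment binding for it.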

17. Separate the persistent data model from the service data model

Persistent data models describe the information used by the enterprise to implement its applications. Service data models describe the information exchanged between service consumers and providers. Service data is described as part of the service contracts and interfaces. These two data models are related, but they are usually not the same model. Service data modeling has some unique guidelines and pitfalls, especially when it is coupled too tightly with the persistent data models:

    • Define service data items once and reuse them in different services
    • Wrap requests and responses using reusable Service Data Objects (SDOs) with appropriate evolution/variability
    • Extra fields are often required by one service but not used by subsequent services; once they become part of the common model, they result in fields that are never populated or used
    • Consumers of SDOs can be confused by extra data that doesn't describe the intent of the contract
    • Often service requests only need an ID, but end up getting all the content defined with the ID
    • Services end up carrying too much data; WSDLs have large XSDs with lots of unused fields

Service data should be modeled separately from the persistent data. Service data should be informed by the persistent data schema, but it need not be the same. There should be limited need for automated mapping between service data and persistent data; rather, that mapping is the responsibility of the implementing service operations, not the service data itself. Good service data design can lead to better reuse of SDOs and simpler service interfaces. It can also lead to fewer data transformations between service consumers and providers, as well as fewer transformation layers needed to overcome data impedance mismatches in the call stack that implements a service.

A typical pattern is that service data is interchanged by the service operations as they do the work, while the implementation of those services involves processes that use persistent data as resources. The two models address different concerns, and so require different designs.
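
Here's a minimal sketch of the separation (hypothetical names, standard JPA annotations): the persistent entity carries everything the enterprise stores, the service data object carries only what the contract promises, and the mapping between them lives in the service operation:

    import javax.persistence.Entity;
    import javax.persistence.Id;

    // Persistent data model: everything the enterprise stores about a customer.
    @Entity
    class Customer {
        @Id Long id;
        String name;
        String taxId;          // needed internally, not part of any contract
        String creditRating;   // likewise
        String internalNotes;  // likewise
    }

    // Service data model: only what the contract promises to exchange.
    class CustomerSummary {
        final Long id;
        final String name;

        CustomerSummary(Long id, String name) {
            this.id = id;
            this.name = name;
        }
    }

    // The service operation owns the mapping between the two models.
    class CustomerService {
        CustomerSummary summarize(Customer c) {
            return new CustomerSummary(c.id, c.name);
        }
    }

Adding fields like internalNotes to the entity then has no impact on the service contract, which is exactly the decoupling this principle is after.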

Posted in Solution Architecture