There has been a great deal of discussion on the need to better align business with IT in order to successfully implement service-oriented architectures (SOAs). While many developers agree an SOA should ultimately cater to the needs of the business, there are differing opinions on how exactly this should occur. Should a top-down, business-centric approach be employed, or a bottom-up approach in which the Business Unit (BU) is more reactive and sensitive to the realities of IT? In his Weblog, John Crupi, chief technology officer of the enterprise Web services practice at Santa Clara, Calif.-based Sun Microsystems Inc., said that SOA is a business-driven architectural style, and for it to be successful, it must employ a "top-down" approach. The BU should own the business drivers, use cases and processes, according to Crupi. It is then IT's job to implement the BU requirements and own the service definitions. Crupi advised against using a "bottom-up" approach to SOA development, in which existing systems are simply wrapped using Web services to create a service layer. Crupi, who has worked on large projects such as the re-architecting of the eBay 3.0 application, has had many discussions with customers who have failed in their attempts to weave their systems into an SOA by simply wrapping them in Web services. Taking an existing asset or system and making it a Web service creates an immediate mismatch between the new Web service interaction style and the existing system, Crupi said. Meanwhile, J2EE [Java 2 Enterprise Edition] architect Bill de hÓra argued in his Weblog that the probability of failure is higher with a big, top-down approach that has the ambition of spanning an enterprise. "The difficulty with a solely top-down approach is that there is no top," de hÓra said. "SOA systems in reality tend to be decentralized - there's no one point of architectural leverage or governance."
"The goal for any enterprise should be to wean off building big, centralized systems and focus on how to network smaller, more adaptable ones together," de hÓra said.
Evidence of this disconnect can be seen in efforts to implement Business Process Management (BPM) projects, according to a recent report from Cambridge, Mass.-based Forrester Research Inc. The success of a BPM project depends on how effectively the "top-down" and "bottom-up" cultures in an organization can be made to co-operate, according to the report. It said the "bottom-up" phenomenon occurs when implementers resist change out of fear of potential job losses. The "top-down" phenomenon, on the other hand, is the willingness of senior managers to forcefully drive process improvement among implementers, itself driven by fear of "loss of authority." As BPM tools continue to mature, organizations are leveraging them to enforce business processes in their SOAs. SOA provides a sufficient level of abstraction that BPM systems can leverage to enforce a company's process definitions. According to Forrester, integrated suites for enterprise application integration (EAI) and BPM are empowering business users with the tools to develop composite applications, potentially replacing the need for programmers.
SOA From the Bottom Up - The Best Approach to Service Oriented Architecture
The general attitude of business organizations towards Service Oriented Architecture, or SOA, has changed significantly over the course of the term's existence. When SOA first made its appearance as a buzzword in the early 2000s, enthusiasm for the new model quickly reached a fever pitch. Companies with big infrastructure problems were so sure that SOA was the fix they'd been waiting for that they were willing to pour millions of dollars into massive top-down SOA initiatives with long, hazy ROI timelines. By 2009, things had changed. Service Oriented Architecture was no longer the belle of the ball, to say the least. The vast majority of the sweeping top-down SOA initiatives that had been launched with such high hopes had failed miserably, leaving companies millions of dollars in the hole and years behind on architectural improvements. Some studies estimate that as few as 20% of the SOA initiatives launched at the peak of the model's popularity were ever fully realized. The backlash against SOA was so immediate and strong that one industry analyst went so far as to post a mock obituary for SOA on their blog in January of 2009.
Top Down SOA Wisdom: The SOA "Adoption Team" selects and purchases a proprietary SOA Governance product. Development teams will then learn and use this product, both to re-design all existing systems and to design future projects. Why It Doesn't Work: Massive expense combined with lack of developer input and vendor lock-in is a recipe for disaster. In the top-down SOA model, companies often sought to pass the complex task of SOA adoption to a single team. This team would then be responsible for driving all aspects of adoption. In the days when SOA was the hottest buzzword around, these departments were under heavy pressure to put an SOA in place as soon as possible, and SOA vendors were more than happy to prey on these fears. As a result, the first step in the SOA process for many companies was the purchase of a multimillion-dollar SOA Governance Framework. There are three problems with this approach. First, it virtually guarantees vendor lock-in. While vendor lock-in is sometimes tolerated by companies in application server products, where the loop of interoperability is fairly closed, it has absolutely no place in an integration architecture. It's hard enough to make accurate predictions about how your needs may shift in the future without having made a multimillion-dollar commitment to a single company's roadmap. SOA is about what YOUR organization needs - not what a vendor tells you that you need. Don't forget that your needs aren't just a list of systems that need to work together - your solution needs to make things easier for your developers and users, too. This brings us to the second problem with the top-down model - developer adoption. Your development team isn't sitting around waiting for the chance to implement SOA for you - in fact, in addition to their regular workload, they're probably also kept busy putting out the day-to-day fires that already plague your network. The effort required to switch to a new model is not trivial on its own.
When coupled with a mandate from on high to use a new tool simply because that's what the company has purchased, the task can become insurmountable. Just a few bugs or design flaws in the SOA tool can be enough to make a busy development team less than enthusiastic about the whole project. Finally, let's talk about the money. SOA is a big change. Making a huge initial investment in a single product is a sure-fire way to kick your entire organization into panic mode, when what you need is a clear, orderly plan that you can implement incrementally, with plenty of input from all arms of your organization. This allows you to ensure that each part works perfectly - integration, services, best practices, adoption - without interrupting your day-to-day operations or overloading your teams. Top Down SOA Wisdom: SOA means an organization-wide paradigm shift, and everyone's efforts rely on everyone else's. Thus, the whole shift must happen simultaneously. Why It Doesn't Work: The majority of organizations do not have the resources to drop everything and focus on SOA. SOA that falls from the sky is a pipe dream - well-planned incremental adoption is not. When dealing with a task as complicated as implementing SOA, the number of changes that need to be made can be daunting. It's tempting to think of the situation as a Catch-22 - we can't start using SOA without writing the services, and we can't write the services without understanding our SOA model. There's only one way out of this Catch-22 - the drop-everything, rip-and-replace SOA model, where everything, from development processes to hardware, is changed simultaneously.
The problem? This approach fails far more often than it succeeds. For most organizations, choosing this model is a surefire way to kill your SOA plans. Fortunately, SOA is not as much of a Catch-22 as it seems. From a top-down perspective, SOA can seem like an irreducibly complex initiative. But from the bottom up, SOA is a manageable, sensible proposition. We've seen this time and time again in the Mule user community. Good developers understand the value of service-oriented development. Open-source ESB technologies such as Mule allow teams to follow best practices for SOA without a heavyweight governance model, building out RESTful interfaces that can be re-used right away and will integrate seamlessly with any SOA governance model as the company moves forward. Sometimes a new Web service isn't even what you need - if you have a well-designed solution in place already, simply use Mule components to quickly connect it to the rest of your architecture, and move on to an area where the initial outlay associated with building a new service yields a bigger margin of value to your organization. Begin evangelizing your teams today, get them hooked, and then gradually introduce smart, lightweight SOA Governance at a pace that matches your actual available developer resources. Top Down SOA Wisdom: The SOA Service Repository saves developers time by giving them reusable components. That's why teams must keep all the information in the repository up to date. Why It Doesn't Work: The point of SOA is to make development easier, not load developers down with menial tasks. Your SOA solution should automate the cataloging of services. Like many assumptions made by top-down SOA advocates, this approach is based on the idea that developers are a resource, not teams of skilled professionals. Think of the switch to SOA as a sale you're making to your company. The value proposition is faster development, ease of management, and less time doing tedious integration work.
That's why it is a case of serious cognitive dissonance to make your developers responsible for keeping your repository up to date - you're basically saying they will become more productive and do less tedious work by doing additional tedious work. A good SOA Governance model always makes things easier and reduces complexity. Using Mule's open source components, our users have built some amazing, bottom-up SOA enabling tools - things like Java classes with metadata that automatically populate the repository with service information, or REST integration of the repository, placing all services directly in front of developers. When combined with an approach that does not require millions of lock-in dollars as its first step, this means you can add additional complexity-reducing tooling as SOA technology continues to mature or real-world pain points surface, future-proofing your architecture and continually improving your ROI. Top Down SOA Wisdom: The key to successful SOA is an organizational culture shift towards "virtuous" architecture decisions. Why It Doesn't Work: The key to successful SOA is a plan made up of clear, achievable goal sets with well-defined benefits. Yes, "virtuous" was really a word used by top-down SOA advocates to describe the adoption process. The idea was that SOA was so feel-good that everyone would adopt it not only as a timesaving technology but as an ideology.
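The "Java classes with metadata that automatically populate the repository" idea mentioned above can be sketched in plain Java. This is a minimal illustration, not Mule's actual API; the @ServiceInfo annotation, the class names, and the catalog structure are all hypothetical stand-ins for what such tooling might do.

```java
import java.lang.annotation.*;
import java.util.*;

// Hypothetical annotation a team might define to carry service metadata.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface ServiceInfo {
    String name();
    String version();
    String description();
}

// A service implementation declares its own repository entry inline.
@ServiceInfo(name = "CustomerLookup", version = "1.2",
             description = "Resolves customer records by ID")
class CustomerLookupService { /* business logic elided */ }

public class ServiceRepository {
    private final Map<String, String> catalog = new HashMap<>();

    // Scan a class for metadata and register it -- no manual data entry.
    public void register(Class<?> serviceClass) {
        ServiceInfo info = serviceClass.getAnnotation(ServiceInfo.class);
        if (info != null) {
            catalog.put(info.name(), info.version() + ": " + info.description());
        }
    }

    public String lookup(String name) { return catalog.get(name); }

    public static void main(String[] args) {
        ServiceRepository repo = new ServiceRepository();
        repo.register(CustomerLookupService.class);
        System.out.println(repo.lookup("CustomerLookup"));
    }
}
```

Because the metadata lives next to the code, the catalog can never drift out of date the way a manually maintained spreadsheet does.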
This is a nice way to think, but it's also a good way to sink your SOA effort by leaving your team in the dark. SOA is not about ideology. It's about doing things in the simplest, most efficient way. Teams are motivated by clear, achievable development goals that have proven, clearly defined benefits. Ditch the top-down SOA soft sales pitch, and show your teams how simple changes in the way they think about development will result in greater productivity down the line.
Figure 1: Initial business agility and functional maturity of various categories are shown here. It also shows the preferred state to be achieved through SOA. Figure 2: Relative ROI over time for four hypothetical but related changes is shown here. An enterprise under category SOA transition (i) leverages functional maturity to reap quick benefits early on but has relatively flat returns over time. On the other hand, an enterprise under category SOA embarkment (iii) has steep returns in the long run, leveraging predictable and shorter time-to-market potential and lower integration costs. Identifying the category helps set the focus and expectations. It also helps in narrowing down the service identification approaches.
enterprise. Regardless of the nature of the SOA initiative (highlighted above), this practice is vital.
contracts would then be used to address these scenarios through simulation, while assessing the impact on the system. Key metrics for assessment can include:
i) Service reuse ratio (number of services reused / total number of services in scenario)
ii) Service leverage ratio (number of services reused / total number of services in inventory)
iii) Service revision ratio (number of services revised / number of services reused)
iv) Service creation ratio (number of new services created / total number of services in scenario)
v) Service utilization ratio (for a given service, number of service consumers identified / total number of services in scenario)
These IT metrics can be normalized for complexity and analyzed together to assess the impact on time-to-market characteristics. This may result in further restructuring that affects the service inventory. An enterprise that has gone past the initial roadmap phases can incorporate historical data to generate even more meaningful business and financial metrics during BASS.
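As a minimal sketch of how ratios i) through iv) combine, the following Java class computes them from illustrative scenario counts. The class name and the sample numbers are invented for demonstration; the per-service utilization ratio (v) is omitted since it requires consumer data for each individual service.

```java
public class ServiceMetrics {
    // Counts gathered from a simulated business scenario (illustrative numbers).
    final int reused;     // services reused in the scenario
    final int created;    // new services created for the scenario
    final int revised;    // reused services that needed revision
    final int inventory;  // total services in the enterprise inventory

    ServiceMetrics(int reused, int created, int revised, int inventory) {
        this.reused = reused; this.created = created;
        this.revised = revised; this.inventory = inventory;
    }

    int inScenario()       { return reused + created; }
    double reuseRatio()    { return (double) reused  / inScenario(); } // i
    double leverageRatio() { return (double) reused  / inventory;    } // ii
    double revisionRatio() { return (double) revised / reused;       } // iii
    double creationRatio() { return (double) created / inScenario(); } // iv

    public static void main(String[] args) {
        // Scenario: 6 services reused (2 of them revised), 4 newly created,
        // against an inventory of 40 services.
        ServiceMetrics m = new ServiceMetrics(6, 4, 2, 40);
        System.out.printf("reuse=%.2f leverage=%.2f revision=%.2f creation=%.2f%n",
                m.reuseRatio(), m.leverageRatio(), m.revisionRatio(), m.creationRatio());
    }
}
```

Tracked across successive simulated scenarios, a rising reuse ratio and a falling creation ratio would indicate the inventory is stabilizing.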
Conclusion:
Composite applications assembled from an inventory of services enable business agility. Service identification yields this list of business and technical services. It is relatively easy to identify a set of services; however, the ROI is governed by the nature of the SOA initiative and the frequency and magnitude of changes. This article highlights key best practices for identifying, validating and verifying service inventory content well before implementation.
commonly requiring CRUD-type functions. This approach relies on the use of canonical data models (CDMs) that standardize information exchanged between services. Therefore, this approach can be considered supply-driven. Canonical services rely on technology resources that use their own particular data models. Data consistency is achieved by mapping between the applications' data models and the CDM. The data elements that comprise a single CDM object can hence be managed in different applications. Nearly all current ESB products offer support for CDMs. The real challenge lies in achieving consensus with regard to the exact definitions of common objects. A strong point of this approach is that the semantics of services receive attention in early modeling stages, thereby reducing the amount of undesirable design changes required when projects get closer to production phases. The main pitfall of this method is the need for standardized data models. Depending on the scope of the SOA project, this requirement can result in "analysis paralysis", despite the fact that only the business objects that play a role in exchanging data need to be modeled. A domain-based roll-out of services can help overcome concerns about having to establish global data models.
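The mapping between an application's own data model and the CDM can be illustrated with a trivial Java sketch. All class and field names here are hypothetical stand-ins for the kind of translation an ESB mediation layer performs.

```java
// Application-specific model used by a (hypothetical) CRM package.
class CrmCustomer {
    final String custNo;
    final String fullName;
    CrmCustomer(String custNo, String fullName) {
        this.custNo = custNo; this.fullName = fullName;
    }
}

// Canonical data model object agreed across the enterprise (names illustrative).
class CanonicalCustomer {
    final String customerId;
    final String displayName;
    CanonicalCustomer(String customerId, String displayName) {
        this.customerId = customerId; this.displayName = displayName;
    }
}

// Mediation logic: each application maps to and from the CDM exactly once,
// instead of mapping point-to-point to every other application.
public class CdmMapper {
    static CanonicalCustomer toCanonical(CrmCustomer c) {
        return new CanonicalCustomer(c.custNo, c.fullName);
    }

    public static void main(String[] args) {
        CanonicalCustomer cc = toCanonical(new CrmCustomer("C-042", "Ada Lovelace"));
        System.out.println(cc.customerId + " " + cc.displayName);
    }
}
```

The hard part, as noted above, is not this mechanical mapping but reaching consensus on what CanonicalCustomer should contain in the first place.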
Method 5: Goal-driven
With this approach a project team decomposes a company's goals down to the level of services. In this context, a service is regarded as a goal that can be executed through automated support [REF-5]. For example, a goal such as "increase customer retention" can result in a service called "register customer for loyalty program".
The advantage is the strong relationship forged between services and company strategy. However, there are two distinct problems with this method: goals tend to be subjective, and a fair amount of IT cannot be directly aligned to business goals. Subjectivity may well cause two business goals to be decomposed into two distinct services, even though the desired functionality is identical (which means that using a single service would have been preferable). Also, because many IT capabilities cannot be directly related to business goals, there is a constant risk that many potentially useful services will simply be overlooked.
Method 6: Component-based
The essence of using components is to divide IT functionality into units with maximal internal cohesion and minimal external coupling. Components are truly self-contained units of functionality. Various methods to identify components have already been introduced in the realm of component-based development. A guiding principle in these approaches is that each component has exactly one owner and that the responsibilities of each component have to be defined as precisely as possible. These responsibilities can be used as a starting point for identifying services. In theory, component-based development results in a functionally organized IT enterprise. Components can be custom-coded or purchased off-the-shelf. Additionally, a need arises to compose services offered by components into composite services. Currently, suppliers of large monolithic applications (such as ERP and CRM systems) tend to organize their applications in a more modular fashion and to make them available through services. These modules correspond roughly with components. The benefit of basing services on components is that the service identification process is greatly simplified. The bulk of the analysis work has already been carried out as part of the component-based development method. However, in reality, this can lead to several problems. Modern-day services and traditional components rarely share the same goals, requirements, and expectations. Creating a series of fine-grained services that mirror underlying components can severely inhibit an SOA initiative from attaining strategic goals that were never taken into consideration when the components were first designed.
cluster the functionality and remove functional overlaps by combining similar services into a single (often monolithic) service. The main advantage of bottom-up delivery is that it requires little time to reach a first definition of services. It is an appropriate approach if the functionality of the existing applications is urgently required and perhaps also sufficient to support both current and future business processes. A potentially positive side-effect of this method is that it can be used in a context where few process or function models are available. However, ultimately this is not a recommended approach for defining services in support of service-orientation. The Law of Conservation of Challenges will rear its ugly head in that badly designed applications that have been adapted to changing circumstances many times over (and are tightly coupled to the business processes) will make it very difficult to design reusable and future-proof services. In the end, this approach almost always leads to the creation of new application silos. It just happens that in this case, the silos are themselves services.
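To make the wrapping pitfall concrete, here is a hedged Java sketch of a thin facade over a legacy routine; all names and figures are invented. Note how the wrapper hides the legacy design rather than removing it - the opaque customer-type codes and hard-wired pricing rules survive behind the friendlier contract, which is exactly the silo risk described above.

```java
// A tightly coupled legacy billing routine (stand-in for an existing system).
class LegacyBilling {
    double calc(int custType, double amt, boolean rush) {
        double rate = (custType == 2) ? 0.15 : 0.20;   // opaque legacy type codes
        return amt * (1 + rate) + (rush ? 25.0 : 0.0); // hard-wired rush surcharge
    }
}

// Thin "wrapper" service exposing the legacy call with a friendlier contract.
public class BillingServiceFacade {
    private final LegacyBilling legacy = new LegacyBilling();

    // Readable names, but the semantics are still whatever the legacy code does.
    public double quoteBusinessCustomer(double amount, boolean expedited) {
        return legacy.calc(2, amount, expedited);
    }

    public static void main(String[] args) {
        BillingServiceFacade svc = new BillingServiceFacade();
        System.out.printf("%.2f%n", svc.quoteBusinessCustomer(100.0, false));
    }
}
```

A wrapper like this can be a pragmatic first step, but every consumer that binds to it inherits the legacy system's assumptions.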
Method 9: Infrastructure
Platform independence might well be an accepted architectural principle for services, but composite services in particular demand extra attention. This method acknowledges that services cannot always be identified independently of the technical infrastructure that is being used.
How convenient is it when a service composes two utility-centric services that run on separate platforms (e.g. a mainframe and a Unix machine)? Consider the required connectivity, execution and potential rollback of transactions, variations in availability, security and monitoring, network traffic, etc. Note that even though it is nearly always technically possible to solve these issues (using modern middleware), a cost-benefit analysis might indicate that an alternative solution is called for. Non-functional requirements also play a part in this analysis (see Method 10). When discussing the advantages of this approach we can be brief: it should only be used when absolutely necessary. The core idea of service-orientation is to hide and abstract the underlying application environment (and especially its supporting infrastructure layer!).
The Basic Service Design Pattern Language [REF-4] establishes the foundation for service identification, definition, and design by providing seven basic design patterns that form a fundamental service pattern language. This pattern language can be considered a primitive process that addresses only the most necessary steps for creating services. You can trace all ten of the methods explored in this article back to this fundamental process. Of course, each method has its own priorities and trade-offs that can affect the extent to which any given service design pattern is supported. But by understanding this basic design pattern language, you can better evaluate these methods as to how well they support service-orientation in general.
General pitfalls
As with the pursuit of anything worthwhile, the road to attaining a good service portfolio is lined with pitfalls. Here are some common examples that apply to service identification:
Services in name only - The terms "SOA" and "service" are used rather loosely in many IT environments. Project teams may choose to label their applications as "service-oriented" simply because it sounds more cutting edge or due to the common misperception that the use of Web services alone constitutes an SOA. Either way, when it comes to implementing the "services" in these types of initiatives, the programmers tend to run the show. They create a plethora of (mostly technology-centric) Web services, disregarding business/IT alignment, reusability or any other properties a service should have. The end result is an application that uses Web services but is not itself service-oriented. Ultimately, this pitfall leads to great disappointment when the expected benefits of SOA are never realized. Perfect non-existent services - On the other side of the spectrum lies the danger of having analysts and architects model wonderful services that simply cannot be built using today's technologies - or that can only be realized at murderous cost. This pitfall can be avoided by constantly ensuring that all modeling efforts are balanced with a dose of reality. And never shall they meet services - When different project teams within the same organization commit to radically different service definition and delivery methods (such as the opposing top-down and bottom-up approaches), collections of services can be created that will simply never be compatible. These can form natural silos that will eventually impose significant integration effort when cross-silo communication is required. Babel services - If an organization does not have a canonical data model (and therefore the definition of the service semantics is not clear), the services are automatically incompatible. The result is an environment that will depend on transformation and
bridging technologies for many, many years to come. This will ultimately inhibit every aspect of service-orientation. Spaghetti services - A problem that can occur when services are defined on multiple levels of granularity is that technical terminology and business terminology get so mixed up that the services themselves become unintuitive, confusing, and sometimes just unusable. There are some simple rules for avoiding all of these pitfalls:
1. Adhere to the principles of service-orientation. These are essential and fundamental to creating well-defined services that support the strategic goals of SOA.
2. Understand your options when it comes to service identification and definition approaches by studying the methods covered in this article.
3. Measure the effectiveness of the service identification approaches you are considering by mapping proposed service design processes to the fundamental service design pattern language.
Conclusion:
Using proven methods to tackle the issue of creating well-defined services is certainly recommended. It allows you to leverage the experience of those who have already been through this process many times. However, no one method is perfect. Each has its own benefits and trade-offs. It is always in your best interest to take proven methods as a starting point and then consider how they can be optimized in support of realizing the requirements that are specific to your SOA goals.
Issues in Service Identification
The identification of services can have a significant effect on the resulting IT landscape. In fact, most of the advantages a Service-Oriented Architecture can offer depend on what the system classifies as services. More specifically, the granularity of the services is very important in achieving flexibility and reuse of services. Some other issues are also related to this granularity; a couple of them are explained below. Note: the discussion here concerns automated services; of course, services exist which need human involvement, and how to deal with those is covered in the previous post. Flexibility: By employing an SOA one tries to establish a flexible IT landscape which is easily adaptable when changing business needs demand this. However, when all the needed functionality is defined in, for instance, three different services, not much differentiation is possible in orchestrations. By distinguishing many small services, many different orchestrations can be developed, which can themselves be reused as services. Roughly speaking: the higher the granularity, the higher the resulting flexibility. Performance: Using orchestration and services can affect the performance of the total system. Nowadays, BPEL (Business Process Execution Language) is mainly used for defining the orchestration, WSDL (Web Services Description Language) for defining the service interface, and SOAP (Simple Object Access Protocol) for defining the messages. All these standards are XML-based. This makes them human-readable, but also creates a lot of overhead for system-to-system communication. Imagine the difference in performance between a Java function call (which is compiled into byte code) and a service invocation sending a SOAP message over HTTP. Conclusion: the higher the granularity, the more SOAP calls are needed. More SOAP calls means lower performance. Reuse: The use of services, defined in a unified way, gives the opportunity to reuse these services in an easy way.
A service directory can be created and different process orchestrations can reuse the same service. But the granularity of the services also affects the possibilities you have. Again, when specifying just a few services, reuse is very hard or isn't possible at all. It is easy to see that using smaller services will give more opportunities for reuse. Complexity: Implementing an SOA for a big enterprise can result in a lot of services. The governance of all these services is a big challenge. A service directory is needed with good search capabilities. Furthermore, all services need to be specified in a clear, unified way. These metadata specifications are hot issues in the current market. But that's not all! Think about different versions of the same service due to further development, changing regulations, bug fixes, and so on. Services developed in different business units of a company which are just slightly different can also cause a lot of trouble. And when different business units use the same service, the question arises who is responsible for it. You can imagine that a higher granularity will push the complexity to a maximum. Figure 2 summarizes the issues in service identification. Do not try to read precise relations into the distances and curves, or look for scientific foundations; the figure just attempts to clarify how the issues relate. On the x-axis the granularity is shown. From left to right the granularity increases, meaning that the services become smaller. On the y-axis the four issues are shown. Low and high complexity, flexibility and performance are straightforward. High reuse means a high percentage of services that are used in more than one orchestration.
Figure 2 - Overview of issues in service identification How can we find the optimal decomposition of our business needs into services? First, some issues in component identification are explained; subsequently, a strategy for choosing the service landscape that best fits the enterprise architecture is proposed.
Issues in Component Identification
Which services and data should be in the same component? This question doesn't have one 'right' answer. When identifying the different components, many issues also play a role; a couple of them are explained in detail. Existing systems: When attempting to identify business components one always has to deal with existing systems. A green-field approach will (almost) never occur. This means that the existing IT landscape has to be analyzed to determine which components already exist and which services they deliver. These existing components can be best-of-breed or custom-made applications. The problem with these components is that they do not always fit exactly into the new service-oriented architecture. If they deliver more services than you need, those services should be disabled. However, not every application supports that. In some cases it is also difficult to make system-to-system connections with such applications. For example, when you need data from an existing application for use in another component, or when you would like to force an existing application to use a data source you provide. This article will not delve deeper into this field, called Enterprise Application Integration, but be aware of the difficulties influencing the identification of business components. For more information concerning this subject, a good starting point is Linthicum's book [4]. Performance: Which services and data are coupled in the same component can affect the performance of the whole system. In principle the following heuristic can be applied: "Choose the elements so that they are as independent as possible; that is, elements with low external complexity (low coupling) and high internal complexity (high cohesion)". This is not only a good heuristic for reducing the complexity in your system; it is easy to see that coupling elements with high cohesion reduces the component-to-component communication needed.
When components are deployed on different servers, or when communication protocols with some overhead are used, the performance advantages delivered by a good component identification process are huge. Maintainability: One of the most important issues in IT is maintainability. Maintainability can be defined as "the ease with which a software system or component can be modified or adapted to a changed environment" [5]. Using small, well-defined components satisfying the heuristic stated above can help a lot in increasing the maintainability of a system. Big, monolithic components often lead to so-called spaghetti code, meaning that they are horrible to maintain for anyone other than the developers who built the component. This leads to the same conclusion as for the previous issue: a good component identification process can increase maintainability significantly. So component identification is important, as is service identification. Can this be achieved optimally? Or do we have some best practices?
Strategy
Service identification can best be performed in a top-down manner. After defining the process architecture, the needed services can be determined. The issues mentioned before - complexity, flexibility, reuse and performance - should be kept in mind. As we have seen, using small services gives us high flexibility, and services can be reused extensively. On the other hand, complexity will become higher and higher, while performance decreases. An optimal size for each service doesn't exist. The best approach is to determine the processes within the process architecture of the enterprise with which the enterprise differentiates itself from competitors. For example, processes for accounting are mostly not differentiators, but processes describing the handling of client support could be. The services used in these processes should be kept small to achieve high adaptability. The alignment of business and IT should be as high as possible for these processes. For other processes, best-of-breed applications can be bought.
Top-down - Has the advantage that the services identified throughout the layers of the solution are aligned with the business processes that provided the scope for the solution. It is also attractive from a project management perspective, in that the business process under consideration provides a natural project scope for the development effort. However, the major drawback to this approach (and the reason the customer I mentioned earlier came unstuck) is that it becomes harder to ensure that you develop services for reuse (some thoughts on SOA and Reuse), as developers look to build services that support this particular process rather than ones that will contribute to an enterprise-wide service portfolio.
Bottom-up - A bottom-up approach has the potential to develop a set of services that can support a number of processes, addressing the concern above, since the developers are looking across a broad set of artifacts. The issue here is that where data is the focus of the artifact analysis, the tendency is either to generate CRUD services (which is bad) or to develop access operations that do not match the requirements of the processes well, and therefore require business services to make multiple calls into data-management services.
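The multiple-calls problem is easy to demonstrate. The sketch below is illustrative only (the entities, keys, and `place_order` operation are all invented): because the bottom-up analysis produced only generic CRUD-style data services, a single business step needs three round trips.

```python
# Hypothetical sketch of a business service forced into multiple calls
# against CRUD-style data-management services.

class CrudService:
    """Generic create/read/update service over one data entity."""
    def __init__(self):
        self.rows = {}
        self.calls = 0  # count round trips to highlight the overhead

    def create(self, key, data):
        self.calls += 1
        self.rows[key] = dict(data)

    def read(self, key):
        self.calls += 1
        return self.rows[key]

    def update(self, key, data):
        self.calls += 1
        self.rows[key].update(data)


def place_order(customers, orders, customer_id, item):
    """One business operation, yet three data-management round trips."""
    customers.read(customer_id)                                    # call 1: fetch profile
    orders.create("o-1", {"customer": customer_id, "item": item})  # call 2: store order
    customers.update(customer_id, {"last_order": "o-1"})           # call 3: update profile


customers, orders = CrudService(), CrudService()
customers.create("c-7", {"name": "Ada"})
before = customers.calls + orders.calls
place_order(customers, orders, "c-7", "router")
assert customers.calls + orders.calls - before == 3
```

A process-aligned service would expose `place_order` as a single operation and keep the data access behind it, which is exactly the mismatch the bottom-up approach risks creating.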
The best advice right now seems to be to lead with a top-down approach and manage development teams and projects accordingly, but, in parallel, to have an SOA architecture team that can review service specifications, propose existing services for reuse, and validate new services to ensure they fit into the enterprise service portfolio. This frees project teams from having to understand the entire existing portfolio, and allows the architecture team to capture reuse guidelines and act as an intermediary between different project teams. This does not imply that there is no bottom-up identification, but rather that it happens within the scope of a project that is itself top-down.
P.S. Beware any development process that dictates such a rigid set of techniques and an equally rigid step-by-step approach - for there you will find a process that has either been used on only a single project or never been used at all.
The difficulty with a solely top-down approach is that there is no top. SOA systems in reality tend to be decentralised - there's no one point of architectural leverage or governance, no one person who's going to be able to say and then enforce "a decision in ten minutes or the next one is free". This debate has been going on for years. At the end of the day, it seems that some tool vendors have chosen the bottom-up strategy. The advantage with a bottom-up approach is that you can use the exposed end-points as building blocks for functionality and integration tasks that you didn't even think of when you started out.
Figure 1 - A comparison of bottom-up and top-down delivery strategies. As the figure shows, each approach has its own benefits and consequences. While the bottom-up strategy avoids the extra cost, effort, and time required to deliver services via a top-down approach, it ends up imposing an increased governance burden, as services delivered bottom-up tend to have shorter lifespans and require more frequent maintenance, refactoring, and versioning. The top-down strategy demands more of an initial investment, because it introduces an up-front analysis stage focused on the creation of the service inventory blueprint. Service candidates are individually defined as part of this blueprint to ensure that subsequent service designs will be highly normalized, standardized, and aligned.