
Canonical Schema -- Common Data Definition Across Web Services

The company I work for is currently developing bottom-up Web Services. We have differing viewpoints on how to define the interface for these services. Our data modelers prefer we use a Canonical Schema (a subset of the Enterprise Canonical Data Model that represents the data we will pass between systems). In practice, the development teams are struggling with the amount of change needed because the Canonical Model and Canonical Schema are not mature. How are your companies handling common data definitions across Web Services? If you are using a shared definition, is it loosely or tightly bound to the Web Services? Thank you in advance for your input!

- This is probably one topic where you will get many opinions and suggestions that each have their own merits. We have achieved a workable model at our company that, judging from the previous posters' suggestions/patterns, most closely aligns with Leo Shuster's model(s).
1. We centrally manage the canonical model entities as base data entities and simple related entities, without any namespace, independent of the services that ultimately use them (Leo's central-management pattern).
2. At service interface design time we pull from the canonical model catalog of entities to build up the service model. This is where the canonical entities take on the semantics of the service function, and where we augment the service with service-only data elements (a hybrid of Leo's facade pattern).
3. We then generate our service contracts independent of any language to ensure interoperability, and the teams then construct the solution.
The reason we take this approach is to be more agile. The central model is always evolving and will never be static or done, which is one of the problems you are experiencing in your post. Decoupling the actual canonical model from the service semantic model allows your projects to move forward at a different rate than your canonical model, and to be flexible in taking the model changes that make sense. Yes, you may not have the latest canonical data element in a given service version, but you are a step closer to that shared semantic information model because it is 90-95% there.
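The facade idea in step 2 can be sketched in XML Schema terms: the centrally managed canonical entity lives in its own namespace-qualified schema, and each service contract imports it and augments it with service-only elements. All names, namespaces, and file paths below are illustrative, not taken from any poster's actual model.

```xml
<!-- Service contract schema (illustrative). The canonical Customer type
     is imported unchanged from the centrally managed canonical schema;
     the service wraps it with its own service-only elements. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:example:service:customer-lookup:v1"
           xmlns:c="urn:example:canonical:v2"
           elementFormDefault="qualified">
  <xs:import namespace="urn:example:canonical:v2"
             schemaLocation="canonical/Customer.xsd"/>
  <xs:element name="CustomerLookupResponse">
    <xs:complexType>
      <xs:sequence>
        <!-- canonical entity, reused as-is -->
        <xs:element name="Customer" type="c:Customer"/>
        <!-- service-only data element -->
        <xs:element name="RetrievedAt" type="xs:dateTime"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

This keeps the canonical entities free to evolve in their own schema while each service version pins the canonical version it imports.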
And the next time the service is augmented for a project, we attempt to sync it up to the latest canonical model versions. This model is not without its cons, since you have to keep track of multiple versions/representations of canonical entities across service implementations, but with some simple policy conventions it is doable. There is also a tool from a company called igniteXML which helps in this area. Separation of concerns is the key here, and knowing where to couple and decouple will help you achieve the right balance between architecture standards and project delivery.

- I recommend you use a top-down approach for defining service contracts. You should try to build a service inventory as big as possible within your organisation. All services in an inventory have to use a standardised service contract. When multiple domains can be identified in your organisation, you could define a number of domain-level service inventories. When communication between services from different service inventories is needed, a data model/format transformation is needed for these messages; you should try to use this design pattern as little as possible. You could protect yourself from changes in the canonical schemas by defining a layer that transforms the canonical schema to an internal schema used within the service. But remember that this introduces a transformation step, which also causes development overhead and increased runtime resource usage. I think business people should try to focus on a small part of the model in the organisation, perhaps starting with a number (preferably small) of domain-level service inventories (invoicing, accounting, order handling, ...). For each of these inventories a canonical model is created.
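The transformation layer described above (canonical schema mapped to a service-internal schema) can be sketched as follows. This is a minimal illustration, not anyone's production code: the element names and the canonical namespace are hypothetical, and the mapping covers only the fields this imagined service needs, which is precisely what insulates the service from unrelated canonical changes.

```python
# Sketch of a transformation layer between the canonical schema and a
# service-internal representation. Namespace and element names are
# hypothetical examples.
import xml.etree.ElementTree as ET

CANONICAL_NS = "urn:example:canonical:v2"  # hypothetical canonical namespace

def canonical_to_internal(xml_text: str) -> dict:
    """Translate a canonical Customer message into the internal model.

    Only the fields this service needs are mapped, so changes to other
    parts of the canonical schema do not ripple into the service.
    """
    ns = {"c": CANONICAL_NS}
    root = ET.fromstring(xml_text)
    return {
        "customer_id": root.findtext("c:CustomerId", namespaces=ns),
        "name": root.findtext("c:FullName", namespaces=ns),
    }

message = """<Customer xmlns="urn:example:canonical:v2">
  <CustomerId>42</CustomerId>
  <FullName>Ada Lovelace</FullName>
  <LoyaltyTier>Gold</LoyaltyTier>
</Customer>"""

print(canonical_to_internal(message))
# {'customer_id': '42', 'name': 'Ada Lovelace'}
```

Note that the unmapped `LoyaltyTier` element passes through harmlessly; the development overhead the poster warns about is this mapping code itself, which must be maintained per service.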

In a future step you can try to refactor these different canonical models/domain-level service inventories into one (or a smaller number). In this manner you decrease the need for transformation steps when exchanging messages. Hope this helps you...

- My general rules of thumb to take into consideration:
* How many distinct applications/client platforms will be consuming the service? If there are only 1 or 2, and you don't see that changing for at least 3-5 years, then don't invest in the additional overhead of maintaining a canonical message mapping layer.
* If the application is slated for replacement within 2-5 years, go ahead and invest in the canonical abstraction layer; it will (hopefully) save a lot of effort later when you are ready to migrate to the new application.

- Canonical models can create intense quarrels. And although they are great for in-house development, they can't be applied to interfaces that are part of a software package bought from a vendor. I believe the best approach is to have a metadata solution on your SOA infrastructure, where you can persist different concurrent canons, and versions of the same canon; that allows cross-referencing to logs, setting the conditions for fine auditing. I have seen such solutions implemented, and I believe they address the problem in a much more flexible way than having a design-time canon that developers must follow. Having a global data dictionary can help translate rebel implementations to the canon, and apply new versions of the canon to old data.

- Since you judge your legacy Canonical Models and Schemas not mature enough to make life easier for developers, you could, for instance, either extend or adapt them to the TMF SID and OSS/J standards.
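The idea of persisting several concurrent canons and versions of the same canon implies that services must dispatch on the canonical version a message declares. A minimal sketch of that dispatch, with entirely hypothetical version strings and element names:

```python
# Sketch of version-aware message handling: the service looks up the
# handler matching the canonical version declared in the message, so a
# new canonical version is supported by registering a new handler rather
# than rewriting the service. All names and versions are hypothetical.
import xml.etree.ElementTree as ET

def handle_v1(root):
    return {"id": root.findtext("CustomerId")}

def handle_v2(root):
    # in this imagined v2 canon, the element was renamed to Id
    return {"id": root.findtext("Id")}

HANDLERS = {"1.0": handle_v1, "2.0": handle_v2}

def dispatch(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)
    version = root.get("canonicalVersion", "1.0")  # default to oldest canon
    return HANDLERS[version](root)

print(dispatch('<Customer canonicalVersion="2.0"><Id>7</Id></Customer>'))
# {'id': '7'}
```

In a real metadata-driven infrastructure the handler registry would live in the repository alongside the canons themselves, which is what enables the cross-referencing and auditing the poster mentions.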
Kind regards,

- My 2 cents:
1. Define an extensible base canonical schema and try to achieve further extensions/modifications with the help of extensible nodes (data type *any). With this approach, existing services need not be changed for any extensions to the canonical schema (provided the base structure is untouched).
2. Use an XSLT-based approach (loading the XSLT dynamically) for XML validation, and keep the XSLT and the canonical schema in sync with each other for any changes to the canonical model.
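Point 1 above, an extension point built on the `xs:any` wildcard, can be sketched like this. The type and element names are illustrative; the key parts are the `xs:any` wildcard and `processContents="lax"`, which let new elements appear under the extension node without invalidating messages against existing services.

```xml
<!-- Hypothetical canonical base type with an extension point. New data
     elements can be added under <Extensions> in later canonical versions
     without touching the base structure, so existing services that
     validate against this type keep working. -->
<xs:complexType name="CustomerBase">
  <xs:sequence>
    <xs:element name="CustomerId" type="xs:string"/>
    <xs:element name="FullName" type="xs:string"/>
    <xs:element name="Extensions" minOccurs="0">
      <xs:complexType>
        <xs:sequence>
          <xs:any namespace="##any" processContents="lax"
                  minOccurs="0" maxOccurs="unbounded"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:sequence>
</xs:complexType>
```

The trade-off is that wildcard content is only loosely validated, which is why the poster pairs it with an XSLT layer kept in sync with the canonical model.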