

A TEXTUAL LANGUAGE FOR MODULAR METAMODEL SPECIFICATION

Author: Yunxiao Li
Supervisor: Dr. Dimitris Kolovos
Department: Computer Science, The University of York
September 2010


CONTENTS
Contents
Acknowledgement
Abstract
1. Introduction
2. Literature Review
2.1 The Model Driven Engineering (MDE)
2.2 Metamodelling
2.3 Eclipse
2.4 EMF
2.5 Emfatic
2.6 Xtext
2.7 Extended Backus-Naur Form Expressions (EBNF)
3. Motivation
4. Requirements
4.1 The stakeholders of the textual language
4.2 The core requirements of the textual language
4.1.1 The functional requirements
4.1.2 The non-functional requirements
5. Methodology and Infrastructure
5.1 The Iterative and Incremental Development methodology
6. Design
7. Implementation
7.1 Construction of the abstract syntax
7.2 Transformation From Xtext Model to Ecore Model
7.3 Separating annotation model to individual files
8. Evaluation
9. Conclusion
References


ACKNOWLEDGEMENT
I would like to show my deep gratitude to those who helped me during the project and the writing of this dissertation. First and foremost, I am particularly indebted to my supervisor, Dr. Dimitris Kolovos, for his constant encouragement and careful guidance through all stages of the dissertation. Without his patient supervision, the dissertation would not have reached its present form. Secondly, I wish to extend my sincere thanks to the other teachers and professors at the Department of Computer Science, who inspired and supported me in devoting myself to the study of computer science and also gave me illuminating instructions. Thirdly, I want to express my heartfelt appreciation to my beloved parents for their endless love and unconditional support. Last but not least, I thank my friends, who shared my worries and offered selfless help during the difficulties of my dissertation.


ABSTRACT

1. INTRODUCTION

2. LITERATURE REVIEW
2.1 The Model Driven Engineering (MDE)

At present, Model Driven Engineering (MDE) is an emerging development methodology that focuses on productivity, interoperability and portability. The MDE approach to software development encourages people to develop models of a system before turning it into a real application, and to use models at several different levels of abstraction, thereby raising the level of abstraction in software development [1]. Another related technology that is often confused with MDE is Model Driven Architecture (MDA). MDA can be seen as the Object Management Group's (OMG) vision of MDE. MDA focuses more on the transformation from platform-independent models (PIM) to platform-specific models (PSM). Similarly, as argued by B. Appukuttan et al. [2], in order to realise the full promise of rapid and consistent software development, the capability to control the transformation from platform-independent models (PIM) to platform-specific models (PSM) is essential. MDE can be seen as a shift from object-oriented technology to model engineering. Object-oriented technology is mainly about objects and classes and the relationships between them, such as instantiation. MDE, by contrast, is about models, metamodels, model transformations and the relationships between models and metamodels, such as representation and conformance. In other words, system and model are the two core concepts of MDE, whilst conformance and representation are its two basic relationships, and these features compose the basic set of Model Driven Engineering principles. According to J. Bezivin [3], a model is a complex structure that represents a design artifact such as a relational schema, an interface definition (API), an XML schema, a semantic network, a UML model or a hypermedia document. In model engineering, models are regarded as first-class citizens. Models are often used to represent real-world situations, that is, to represent a system. Models are supposed to be exchangeable, hence they need to conform to consensual standards. Many models may be written by agents outside the computer science sector; for this reason, these models might be written in domain-specific languages (DSLs), and one important feature of MDE is the mapping of such DSL models to operational technology. Also, according to Selic (2003) [4], an engineering model should exhibit the following five key features:

Abstraction. A model is always a reduced rendering of the system that it represents.
Understandability. It is not sufficient just to abstract away detail; we must also present what remains in a form (e.g., a notation) that most directly appeals to our intuition.
Accuracy. A model must provide a true-to-life representation of the modelled system's features of interest.
Predictiveness. We should be able to use a model to correctly predict the modelled system's interesting but non-obvious properties, either through experimentation (such as by executing a model on a computer) or through some type of formal analysis.
Inexpensiveness. A model must be significantly cheaper to construct and analyse than the modelled system.

Atkinson and Kuhne [5] suggested that a model driven development infrastructure should define:

The concepts that make the construction of models efficient and the principles governing their application.
The notation needed to describe models.
The relationship between models and the elements of the real world.
The concepts for supporting model mappings from customer-defined forms to other forms.

2.2 Metamodelling

Metamodelling plays a very important role in Model Driven Engineering. It is central to both MDE and MDA and is a significant facility in the MDA paradigm [6]. What, then, is a metamodel? One definition is that a metamodel is a precise definition of the constructs and rules needed for creating semantic models [7]; alternatively, a metamodel can be described as a model of a model, and it is an important concept for defining models. In computer science the term is used quite often and has slightly different meanings in different areas, for instance:

In conceptual modelling, a metamodel is a model of a data model, such as an Entity-Relationship (ER) model of an ER/relational model.
In databases, the metamodel specifies data about data, such as schemas and data dictionaries.

Figure 1 below shows an example of the relationship between a model and a metamodel. The upper part of the representation is the metamodel, a kind of schema that defines the student id, student name, department and average score of a student record. The lower part shows an instance of that schema and specifies the data in detail: it defines the exact id, name, department and average score of a student entity. Another, more detailed example is given below to explain the difference between models and metamodels. Suppose there is a database for a certain university with several tables, such as student, lecturer and staff. These tables represent a system in the real world. The student table has several columns called StudentId, Name, Gender, Age, Department and Address.


Figure 1 Example of models and metamodel

The student, lecturer and staff tables are defined to be models, and the schema information used to create such tables is referred to as the metamodel. For instance, the name of the table, the column names and the column types all belong to the metamodel. A metamodel is used to describe a model, and each model is defined in conformance to a metamodel. Metamodels also describe how their elements are related and constrained. A metamodelling framework relies on an architecture with four layers: meta-metamodel, metamodel, model and user objects. Figure 2 below shows an example representing the four-layer metamodel structure of UML. Level M0 is a runtime instance. Level M1 is the model of level M0. Level M2 is the model of the model on M1, which can be described as a metamodel. Level M3 holds the information about level M2 and is regarded as the meta-metamodel. According to the OMG, a metamodel is a language definition for the subject model [8]. The concept is also germane to OMG's Meta-Object Facility (MOF). MOF is a standard for MDE and a language for metamodelling. As mentioned by Stuart Kent, a strong point of using MOF is that languages defined with MOF, like XML, can be processed by machine. In particular, the MOF standard specifies the relationship between MOF models and models in XML format, and how other models can be retrieved from models defined in the MOF language [9].
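To relate the four modelling layers above to familiar programming terms, here is a small illustrative Java sketch (an addition for illustration only, not taken from the thesis; all names are hypothetical): the class definition plays the role of an M1 model, while an object created at runtime corresponds to an M0 instance.

public class MetamodelLayersSketch {

    // M1: the model, i.e. a class definition describing student records
    // (analogous to the schema in the Figure 1 example).
    static class Student {
        String studentId;
        String name;
        String department;
        double averageScore;
    }

    public static void main(String[] args) {
        // M0: a runtime instance that conforms to the M1 model.
        Student s = new Student();
        s.studentId = "S001";
        s.name = "Alice";
        s.department = "Computer Science";
        s.averageScore = 85.0;
        System.out.println(s.name + " (" + s.studentId + ")");
    }
}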

Figure 2 Example of the four-layer metamodel hierarchy

Regarding MDA within the OMG, the OMG's mission is to help computer users solve integration problems by supplying open, vendor-neutral interoperability specifications, and the Model Driven Architecture (MDA) is the OMG's next step in solving integration problems. As noted by John D. Poole [10], in MDA the platform-independent models (PIMs) are first expressed in a platform-independent modelling language such as UML. By mapping the platform-independent model to some other language such as Java, the PIM is later converted to a platform-specific model (PSM). When a language specification is written as a metamodel, the attention is focused on the abstract syntax. However, as noted by Anneke Kleppe [11], a metamodel cannot include all the information related to its concepts that is needed to present them to the language user. This can sometimes be addressed by an attributed metamodel, which holds attributes that specify the notation, as in the Graphical Modeling Project that is based on EMF and the Graphical Editing Framework (GEF) in Eclipse.


2.3 Eclipse

As mentioned above, the main development platform used in this project is Eclipse. The Eclipse organisation defines Eclipse as a community dedicated to establishing a customizable development platform and frameworks for creating and managing software through the entire software development lifecycle. From a technical perspective, Jim D'Anjou, Scott Fairbrother and Dan Kehn describe Eclipse as a platform for tools and all sorts of custom applications, and also as a development environment particularly suited to Java [12]. Similarly, Steven Holzner notes that Eclipse is not only an integrated development environment (IDE) for developing Java, but also a universal tool platform that supports various kinds of tools beyond the Java language. The following three subprojects build up the whole Eclipse project [13]:

The very core of Eclipse: the Eclipse platform itself.
The Java Development Tools (JDT).
The environment that allows users to create their own tools for Eclipse: the Plug-in Development Environment (PDE).

These subprojects are themselves divided into further subprojects, such as the JDT debug and core subprojects. More than 60 open source projects are held by the Eclipse community, and all of these projects can be divided into seven categories, such as Rich Client Platform, Embedded Development, Application Frameworks and Service Oriented Architecture (SOA). Eclipse is designed to run on various kinds of operating system, such as Windows and UNIX/Linux, and integrates efficiently with any of them. One of the advantages of Eclipse is rapid development on the basis of a plug-in model; a key architectural feature of Eclipse is that plug-ins are discovered and run dynamically. The Eclipse platform is made of subsystems built upon a runtime engine, and its main runtime components are themselves plug-ins. One is the runtime platform, which discovers plug-in models dynamically. Another is the workspace, or resource management, which specifies an API to control resources such as the file system. Other components such as the user interface (also called the workbench), the help system and the debug tools are of importance as well. The Eclipse platform is thus built closely around the plug-in concept. Figure 3 shows the platform structure and the plug-ins of Eclipse. The fundamental platform and the two main tools used for plug-in development are included in the Eclipse SDK; the Java Development Tools (JDT) and the Plug-in Development Environment (PDE) give a good example of how new tools can be integrated into the platform.

The whole project is developed under the Eclipse platform (version 3.5.1). The reason for choosing Eclipse as the development framework is not only that it is a Java tool and platform, but also the following reasons:

Eclipse is a free development tool; users can use the code of Eclipse under an open source licence called the Common Public Licence (CPL). This feature avoids many of the troubles that software licences may cause for developers and engineers.

Eclipse is a platform. Although Eclipse is well known as a Java development environment, it is primarily a platform designed for more than only the Java language. Many other languages and tools can easily be integrated into the platform, while Eclipse is already an excellent development environment for the Java programming language. Eclipse can also be used for development in other languages such as C, C++, Python, XML, HTML, JSP, etc. Thanks to Eclipse's open architecture, its plug-ins can handle almost everything, which a pure Java editor is not able to do.

Eclipse is quite easy to extend, for both individuals and organisations, because of its open architecture. Through the Plug-in Development Environment (PDE), Eclipse allows developers to extend the platform without restriction. A huge number of plug-in projects already exist in the Eclipse community, for instance the well-known JUnit, Ant, AntlrSupport, Emfatic and Xtext.

Eclipse has a mature modelling framework. As mentioned in the previous item, Eclipse provides modelling tools such as Xtext and Emfatic for building tools and other products on the basis of a structured data model.


Xtext and the Eclipse Modeling Framework are the basic facilities for this project, and this is another important reason why Eclipse was chosen as the development framework for this project.

Figure 3 Eclipse Platform structure. Source: http://www.eclipse.org/documentation/

Since this project is developed on top of the Xtext and EMF plug-ins of the Eclipse platform, it is necessary to find out what a plug-in actually is in Eclipse. Simply put, Eclipse is a combination of plug-ins or extensions. Different plug-ins can be integrated into it; however, they all share a common interface in spite of their various functions and purposes [14]. The Plug-in Development Environment (PDE) is used to extend Eclipse when users want to add some specific capability that Eclipse does not provide. PDE extends the Java Development Tools (JDT) and supplies wizards and features for developing plug-ins. The plug-in development perspective looks like the Java perspective but also includes some new views, such as the plug-in explorer view and the error log view, which shows the content of the error file. Since Eclipse is the container of plug-ins and a plug-in must rely on its host, a new instance of Eclipse, called the runtime instance, is started up when plug-ins are run for testing and debugging [15]. Within the Eclipse workbench, a plug-in is a component that supplies a kind of service. The Eclipse runtime supplies a facility that supports a bundle of plug-ins working together and provides a precise environment for development activities. In a running instance of Eclipse, a plug-in can be regarded as an instance of some plug-in runtime class, which provides management and configuration support for the plug-in instance. Every plug-in is installed under the folder called plugins, and basically the following files can be found in each plug-in's folder: *.jar, about.html, plugin.properties, plugin.xml, lib and icons. Among these files, the plugin.xml file plays the key role in a plug-in. This XML manifest file is used for describing the plug-in to Eclipse; as a matter of fact, this file is the only file a user needs in order to create a plug-in [13]. The content of the manifest file is accessible through the plug-in registry API. The file fragment below shows what the plug-in manifest file looks like:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin>
   <extension id="emf" name="emf*"
         point="org.eclipse.emf.ecore.generated_package">
      <package
            uri="http://www.googlecode.com/emfatix/Emfx"
            class="com.googlecode.emfatix.emfx.EmfxPackage"
            genModel="com/googlecode/emfatix/Emfx.genmodel"/>
   </extension>
</plugin>


The XML file above describes a plug-in that supplies the Emfatix infrastructure to the Eclipse workbench. Within Eclipse, two relationships describe how a plug-in may be related to other plug-ins: one is called dependency, the other is called extension.

Dependency. It is used to specify other plug-ins that are required for the operation of the current plug-in.
Extension. It is the mechanism by which a plug-in extends, or adds elements to, another (host) plug-in.

On Eclipse start-up, the plugin.xml and MANIFEST.MF files are scanned by the plug-in loader in order to find every plug-in and establish a structure holding this information. Although this process occupies some memory, it speeds up the loader when locating a required plug-in and requires much less space than loading all the plug-in code. However, plug-ins are said to be "loaded but not unloaded": in Eclipse 3.1, plug-ins are loaded in a lazy manner during a session but are not unloaded. This issue might be addressed by unloading plug-ins once they will not be used any more [16].

2.4 EMF

In this project a modelling framework based on Eclipse is used, called the Eclipse Modeling Framework (EMF). EMF is a modelling framework and code generation facility for building tools and other applications based on a structured model [17]. It is an open source code generation tool integrated within Eclipse, and it is also an excellent tool for supporting the OMG's Model Driven Architecture (MDA) approach of using models. EMF is able to create complex editors from different abstract models, but the notion of model in EMF is not as general as the commonly accepted explanation. EMF uses XMI (XML Metadata Interchange) to represent model definitions, and there are several ways to get a model into that form:

Create the XMI document directly, using an XML or text editor.
Export the XMI document from a modelling tool such as Rational Rose.
Annotate Java interfaces with model properties.
Use XML Schema to describe the form of a serialization of the model.


Furthermore, the EMF modelling concepts are directly related to their implementation, so there is a low entry cost for Eclipse Java developers. EMF is a framework and code generation facility that lets you define a model in forms such as Java, XML and UML, from which you can then generate the others, together with the corresponding implementation classes [18]. Although EMF is basically a framework that describes models and generates other forms from them, in EMF modelling and programming have the same importance, unlike in other tools. EMF integrates high-level modelling or engineering and low-level programming rather than separating one from the other, and it tries to strike a balance between modelling and programming. As a matter of fact, an EMF model is basically a subset of the UML class diagram, or the class part of a simple model, and surprising benefits can then be obtained within a standard Java development environment. With EMF, a developer or user does not need to understand the process of mapping a high-level modelling language to Java code: a Java developer can easily understand the mapping between the EMF model and Java. EMF resembles other Java binding frameworks, like JAXB or XMLBeans. Given a model, EMF is able to produce Java source code that allows you to create, query, update, serialize and deserialize instances of that model. A large portion of Java binding frameworks supports only one category of models; XML Schema, one of the sources of code generation that EMF supports, is one such example. Besides the model code, EMF has the ability to produce a complete application including a customizable editor. The EMF-generated code, which supports cross references, possesses a built-in modification-awareness feature. Like other frameworks, EMF provides an API to obtain model instances and to create users' own models dynamically; it also allows constraint checking of models. The code generation tools of EMF are able not only to regenerate models but also to integrate code written by the user. Moreover, EMF can be regarded as a Java implementation of a core set of the MOF (Meta Object Facility) API, and to avoid any confusion the metamodelling language used in EMF is called Ecore. Since Ecore is itself an EMF model, it is the metamodel of itself, or in other words a meta-metamodel.
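To make the code-generation point concrete, the following is a hedged sketch of the general shape of an EMF-generated model interface for a class with one attribute. The type and attribute names are illustrative assumptions, and real generated code also contains notification and reflective methods that are omitted here.

import org.eclipse.emf.ecore.EObject;

// Hedged sketch: roughly what EMF generates for a model class "Student"
// with a single string-typed attribute "name" (illustrative names only).
public interface Student extends EObject {
    String getName();
    void setName(String value);
}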


A complete class hierarchy of the Ecore model is shown in Figure 4. As we can see, it contains a small number of classes representing the EMF model, since EMF concentrates only on the class-modelling aspect of UML.

Figure 4 The Ecore class model hierarchy

In the Ecore API, a vital interface is EObject, which can be placed at the same level as java.lang.Object. All other elements implement this interface. As the diagram shows, an EPackage comprises the EClass and EDataType information. An EClass represents a model class, with zero or more attributes and zero or more references, while an EAttribute denotes basic data that has a name and a type defined by an EDataType. If there is an association between classes, an EReference (between EClasses) is used to represent it.
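The following minimal Java sketch is added here purely for illustration (it is not code from the thesis): it uses the Ecore API just described to build a tiny metamodel programmatically. The package, class and attribute names are illustrative assumptions.

import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class EcoreApiSketch {

    public static EPackage buildStudentPackage() {
        EcoreFactory factory = EcoreFactory.eINSTANCE;

        // An EPackage groups the EClass and EDataType information.
        EPackage pkg = factory.createEPackage();
        pkg.setName("university");
        pkg.setNsPrefix("uni");
        pkg.setNsURI("http://example.org/university");

        // An EClass represents a model class ...
        EClass student = factory.createEClass();
        student.setName("Student");

        // ... and an EAttribute denotes basic data with a name and an EDataType.
        EAttribute name = factory.createEAttribute();
        name.setName("name");
        name.setEType(EcorePackage.Literals.ESTRING);
        student.getEStructuralFeatures().add(name);

        pkg.getEClassifiers().add(student);
        return pkg;
    }
}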


2.5 Emfatic

Emfatic is a text editor used to define Ecore models that allows developers to navigate, edit and convert Ecore models. It uses a compact syntax similar to Java and also provides abilities such as EMF Generics, folding, displaying the EMF type hierarchy and auto-completion [19]. As explained by Vladimir Bacvanski and Petter Graff in [20], Emfatic is a Java-like language for representing Ecore models, and Emfatic code can be compiled to an Ecore model and vice versa. In fact, the Emfatic editor is also installed in Eclipse as a plug-in. The following code fragment shows what the Emfatic language looks like:
@namespace(uri="http://www.emftext.org/language/forms", prefix="forms")
package forms;

class Item {
  val ItemType[1] itemType;
  ref Option[*] dependentOf;
  attr String text;
  attr String explanation;
}

abstract class ItemType { }

class Choice extends ItemType {
  val Option options;
  attr boolean multiple;
}

datatype EInt : int;

Emfatic is made up of elements such as package, class, datatype, attribute, enum, etc. From a written Emfatic model, the translator supplied by the Emfatic plug-in can convert the Emfatic model into an Ecore model. Along with the models, the editor and test plug-ins will also be generated. Nevertheless, the Emfatic language also requires extra information, such as annotations, to be embedded in the model itself. This problem is the initial motivation for creating a new application that allows separate annotation files. The details of the motivation and the problem will be discussed in later chapters.

2.6 Xtext

In this project another technology is used: Xtext. Xtext is one of the Eclipse projects and provides a language development framework and tooling for external textual domain-specific languages (DSLs) or even general-purpose programming languages. The concept of a DSL needs to be explained briefly here.


A domain-specific language (DSL), in software development, is a specialized, problem-oriented programming language dedicated to a particular problem domain [21]. DSLs are often not large and are usually smaller than general-purpose languages (GPLs), focusing more on the constructs of the problem domain. In other words, DSLs aim to solve specific problems and to provide a brief but comprehensive language that can easily be learnt by a developer or a domain specialist. If some patterns in development or in models are found repeatedly when using a GPL (e.g. C++ or Java), then a DSL might be created in light of these patterns and logic. A DSL is usually created on the basis of a standard metamodelling infrastructure (e.g. EMF/Ecore) [22]. A huge number of DSLs exist today. Some of the most well-known examples are SQL, YACC, BNF, HTML and UNIX shell scripts.


In addition, there are several development tools for implementing DSLs, such as Eclipse Xtext, Eclipse GMF, AMMI, CodeFluent Entities (SoftFluent), DEViL (University of Paderborn), the Visual Studio Visualization and Modeling SDK (DSL Tools) and EMFText. In this project, Xtext is chosen to implement the textual language. As mentioned above, Xtext is able to describe a user's own DSL using Xtext's EBNF grammar language. A parser, an AST metamodel implemented in EMF and a text editor are then created by Xtext's generator. Xtext supplies a set of DSLs and efficient APIs covering the different parts of a user's programming language, for a complete language implementation that runs on the Java virtual machine. The compiler components of the user's language include the parser, the scoping framework and linking, the type-safe abstract syntax tree, the serializer and code formatter, compiler checks and static analysis (also known as validation), and also a code generator or interpreter [23]. Xtext provides powerful default functionality covering all aspects of DSLs and their APIs. If there are special requirements for a user's own implementation, Google Guice can be used to replace the default behaviour. The whole IDE infrastructure is based on the lightweight dependency injection framework Google Guice, and hence every small class can easily be exchanged accordingly.
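To illustrate the dependency-injection point above, here is a small, self-contained Google Guice sketch added for illustration only; the Formatter service and its implementations are hypothetical names and this is not Xtext's actual module code. A module binds a service interface to a custom implementation, which is analogous to how an Xtext language developer can replace one of the framework's default services.

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

public class GuiceBindingSketch {

    // A service interface and two implementations (hypothetical names).
    interface Formatter {
        String format(String input);
    }

    static class DefaultFormatter implements Formatter {
        public String format(String input) { return input; }
    }

    static class UpperCaseFormatter implements Formatter {
        public String format(String input) { return input.toUpperCase(); }
    }

    // A Guice module that binds the service interface to the custom
    // implementation, replacing the "default" behaviour.
    static class CustomModule extends AbstractModule {
        @Override
        protected void configure() {
            bind(Formatter.class).to(UpperCaseFormatter.class);
        }
    }

    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new CustomModule());
        Formatter formatter = injector.getInstance(Formatter.class);
        System.out.println(formatter.format("hello"));  // prints HELLO
    }
}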

2.7 Extended Backus-Naur Form Expressions (EBNF)

In Xtext, all the elements of the language rely on Extended Backus-Naur Form (EBNF) expressions. The initial version of EBNF was created by Niklaus Wirth. It is an extended formal, mathematical way of describing a language, built on the foundation of BNF. In BNF there is a limitation on expressing repeated options directly, without recursion: an intermediate rule has to be created to realize repetition. EBNF solves this problem by adding three extra operators [24]:

?: indicates that the element on the left of the operator is optional, i.e. it can appear zero times or exactly once.
*: means an element can be repeated any number of times, including zero.
+: shows that an element can appear one or more times.


Here is a real example from the project:


Package:
  (modelElements+=ModelElement)*;

ModelElement:
  NamedElement | Annotation;

NamedElement:
  Import | Packages | Classifier | TypedElement | Reference;

As shown in the example above, EBNF can define a repeated element directly without any additional intermediate rule. This does not mean that EBNF is more powerful than BNF, just more convenient: every EBNF grammar can be converted into a BNF grammar. For instance, the EBNF rule list = item ("," item)* corresponds to the BNF rule <list> ::= <item> | <item> "," <list>, where the repetition is expressed through an intermediate recursive alternative.

3. MOTIVATION
When it comes to metamodels in Model Driven Engineering, they are commonly used to define the abstract syntax of modelling languages, and one of the most crucial activities in an MDE process is constructing and maintaining metamodels. Besides specifying the abstract syntax, additional information can also be captured within a metamodel through annotations, for example constraint definitions or the graphical syntax of the language. However, current metamodelling tools require all such extra information to be embedded in the metamodel itself. The main purpose of this project is to design and create a text-based modular metamodelling language. This language should enable the metamodel designer to maintain such extra or additional information in separate physical files, so as to avoid polluting the core metamodel itself [25]. Although some modelling tools, such as Emfatic, already exist among the Eclipse projects, these tools all require the additional information, such as annotation definitions, to be in the same file that specifies the metamodel. Because of this limitation, the project for this thesis is expected to support individual annotation files, to combine them with the file that specifies the abstract syntax, and finally to convert these files into one Ecore file. The syntax should be similar to that of Emfatic and should have as many of the same elements as possible, so that the two languages resemble each other; in other words, the two languages should be similar from the structural and syntactic perspective. In order to explain the motivation in a more vivid way, below is a comparison of an Emfatic file and the expected language that allows separate annotation files:
Emfatic file (left-hand side of the first comparison):

package rootPackage;

@namespace(uri="http://www.emftext.org/eg/annotations", prefix="annotations")
package annotations {

  abstract class Annotable {
    @ddAnnotation(uri="attr", prefix="annotations")
    attr double dd;
    @refAnnotation(uri="ref", prefix="annotations")
    ref Reference [*] annotation;
    @oprationAnnotation(parameter="anno", type="Annotable", name="so", type="Sound")
    op int a(Annotable an, b.sub.subsub.Sound so);
  }
}

Expected Emfatix file (right-hand side of the first comparison):

package rootPackage;

@namespace(uri="http://www.emftext.org/eg/annotations", prefix="annotations")
package annotations {

  abstract class Annotable {
    @ddAnnotation(uri="attr", prefix="annotations")
    attr double dd;
    @refAnnotation(uri="ref", prefix="annotations")
    ref Reference [*] annotation;
    @oprationAnnotation(parameter="anno", type="Annotable", name="so", type="Sound")
    op int a(Annotable an, b.sub.subsub.Sound so);
  }
}

Emfatix metamodel file (left-hand side of the second comparison):

package rootPackage;

@namespace(uri="http://www.emftext.org/eg/annotations", prefix="annotations")
package annotations {

  abstract class Annotable {
    @ddAnnotation(uri="attr", prefix="annotations")
    attr double dd;
    @refAnnotation(uri="ref", prefix="annotations")
    ref Reference [*] annotation;
    @oprationAnnotation(parameter="anno", type="Annotable", name="so", type="Sound")
    op int a(Annotable an, b.sub.subsub.Sound so);
  }
}

Separate annotation file (right-hand side of the second comparison):

class Annotable
@AnnotableAnnoFromEmfa uri="OO", prefix="OO", sec="Od", third="Og"

package annotations
@annoFromEmfa uri="OO", prefix="OO"

attribute dd
@attriAnnotaionFromEmfa nam="sdfaf", value="sdfa"

operation operationA
@operationAnnotaionFromEmfa name="sdfaf", value="sdfa"

reference annoRef
@referenceAnnotationFromEmfa

The left-hand side of the first comparison shows the language file written in Emfatic, while the right-hand side shows the file as it is supposed to be written in Emfatix. These two files contain the same elements, such as package, class, attribute, reference and operation, and both allow annotation definitions, which start with the keyword @. The second comparison shows the files supposed to be written in Emfatix: the left-hand side is the original file, the same as in the first comparison, while the right-hand side is a separate file which contains only annotation elements. Both files of the second comparison shall be supported by Emfatix, and it shall be possible to generate from them a single Ecore file containing all the models from the two files.


Consequently, language designers who are already familiar with the syntax of Emfatic should be able to understand the new syntax without any difficulty; the only extra step for them is to separate the annotations into other physical files. Since the project is developed based on Xtext and its syntax is similar to the Emfatic language, the decision was made to name it Emfatix (Emfatic-like and Xtext-based). Firstly, a metamodel abstract syntax similar to Emfatic needs to be created and implemented. Then a facility for converting the Xtext file to an Ecore file needs to be established as well. This language is designed on a text-based component of Eclipse called Xtext, and its syntax can be specified for both a domain-specific language and a general-purpose language. It is very important to understand the requirements for this project and its context, so as to improve the quality and ensure that the project is closely connected with the functionality supplied by the language. The requirements presented in the following section focus on the related stakeholders, the functional requirements and the non-functional requirements.

4. REQUIREMENTS
When it comes to requirements, there are several typical types of requirements in the development process. From different perspectives, requirements can be classified as user requirements, software requirements, functional requirements, non-functional requirements, system requirements, technical requirements, product requirements or business requirements [26]. The requirements should not include implementation and development details, project schedule details or testing processes; distinguishing these parts from the requirements can help the developer focus on understanding what the project intends to establish. A reasonable requirements process throughout the project emphasises the collaborative manner of software development, involving various stakeholder relationships. Here is a list of some benefits of such requirements:

Fewer requirement deficiencies
Less rework of the software implementation
Avoidance of unneeded functions


Reduced cost of enhancement
Faster development
More satisfaction for clients and developers

However, only the major requirements will be discussed in this thesis, since this project is an initial version and might not be used for business purposes.

4.1 The stakeholders of the textual language

There are some common stakeholders for this textual language, as for DSLs in general:

Architects or software engineers, who are in charge of choosing or implementing a proper domain-specific language.
Clients, who are responsible for giving feedback after using a domain-specific language.
Developers, who are the persons establishing and maintaining the description of the domain-specific language.

4.2 The core requirements of the textual language

With regard to the requirements of the abstract language, two categories of requirements are used in this thesis: functional requirements and non-functional requirements. A functional requirement specifies the functionality of a software system and captures its expected behaviour. A non-functional requirement defines conditions by which to evaluate a system, rather than specific behaviours. Stellman Andrew and Greene Jennifer [27] explain what non-functional requirements are: users hold implied expectations about how well software works. These include how easy the software is to use, how fast it executes, how reliable it is and how well it handles exceptions when they occur; the non-functional requirements define these areas of the system. A typical way to specify non-functional requirements is quantitative analysis, for instance supplying specific measurements such as the maximum disk size of a database or the maximum number of concurrent users supported by the software.


4.1.1 The functional requirements

The following are the functional requirements for this project:

Requirement identifier: A textual abstract syntax
Functional requirement: The project needs to create a text-based abstract language.
Description: An abstract syntax similar to Emfatic should be designed and implemented first. It should define all the elements such as classes, attributes, operations, annotations, packages, references and all the types, etc. This is the basis and prerequisite for the textual language and is the metamodel for the whole project. The file extension is specified as .emfx. This syntax is implemented under the Xtext framework and hence needs the support of the Xtext plug-in.

Requirement identifier: Individual syntax for annotation
Functional requirement: An individual syntax shall be created for annotation specification.
Description: Besides the main abstract syntax file that specifies all the major elements such as packages and classes, there should be another syntax that only defines annotation elements. This syntax is used to specify annotations in a separate physical file, and the extension of such files is specified as .emfa. This syntax should also live in an individual Xtext project, in order to avoid conflicts with the major syntax.

Requirement identifier: Ability to create a concrete language
Functional requirement: A concrete language shall be created as an Eclipse plug-in based on the abstract syntax.
Description: After implementing the two syntaxes within Xtext, the project should be able to create concrete languages based on the abstract syntaxes. These two languages should also have corresponding editors, with the capability of showing errors and suggestions whenever an inappropriate statement is written by a user.

Requirement identifier: Syntax similar to Emfatic
Functional requirement: The concrete language shall have the same concrete syntax as Emfatic.
Description: The language based on the new syntax should behave similarly to Emfatic and have elements like package, class, abstract, attr, ref, val, op, double, etc. These elements of the language shall be highlighted as well.

Requirement identifier: Annotation validation
Functional requirement: The new plug-in language shall have a facility to validate the annotations within individual physical files.
Description: The new plug-in language needs to support validation for all the individual files that contain only annotation elements. It should be capable of checking whether the annotations defined in the emfa files are valid or not.

Requirement identifier: Model transformation
Functional requirement: The new plug-in language shall have the capability to convert the concrete language to an Ecore file.
Description: The application shall provide a function that allows the user to select one emfx file and one or more emfa files and, if there is no error in the emfx file and all the elements referenced in the emfa files can be found in the emfx file, convert them into a single Ecore file.


4.1.2 The non-functional requirements

Non-functional requirements are another key point in requirements development. According to Dimitrios S. Kolovos et al. [28], there is a set of core requirements for such a language, discussed below:

Requirement identifier: Language conformity
Non-functional requirement: Conformity
Description: The abstract languages should correspond to the domain concepts of Emfatic. In other words, the syntax should be defined in a way that conforms to the standards of Emfatic, so that the new plug-in language can easily be accepted by those who are familiar with the syntax of Emfatic, and the cost of learning the new language is reduced to a large extent.

Requirement identifier: Language supportability
Non-functional requirement: Supportability
Description: This language should be supportable via tools and programme management, covering editing, debugging and transforming. Simply put, it needs to be possible for the developer to manipulate this language for creation and other operations on the new textual language.

Requirement identifier: Language simplicity
Non-functional requirement: Simplicity
Description: This language should be as simple as possible, so as to concentrate on the interests of the language, and it should work in the way that users and stakeholders desire.

Requirement identifier: Language flexibility
Non-functional requirement: Flexibility
Description: If the users or stakeholders try to extend or add extra functions to the language, it should be possible to accomplish this. For instance, if a user or a stakeholder intends to add a new element, such as an interface, the new element should be integrated into the abstract language easily and without much effort.

Requirement identifier: Language robustness
Non-functional requirement: Robustness
Description: The new application should be able to handle errors or exceptions correctly and even gracefully. This should include tolerance of invalid elements defined in the language, annotation matching errors, invalid files being selected, no emfx file being included, and unexpected operation sequences.

Requirement identifier: Language usability
Non-functional requirement: Usability
Description: This language should be easy to use; that is to say, it should have the capacity to be understood, learnt and used by its potential users. The process of using the language should not cause serious trouble.

Requirement identifier: Language reliability
Non-functional requirement: Reliability
Description: The application should have the capability to perform and retain its function over time and under unexpected circumstances; it is not supposed to fail or exit often at runtime. It should be stable to use and handle unexpected conditions in a consistent manner; in other words, it should be able to perform its desired function under stated circumstances for a certain period of time.

Requirement identifier: Language reusability
Non-functional requirement: Reusability
Description: The new language is supposed to have the capability to be used across multiple products other than the one for which it was first created.

Requirement identifier: Language longevity
Non-functional requirement: Longevity
Description: This language should be usable for a long period of time, so as to ensure tool support, and should persist for a sufficient length of time to demonstrate the stability of its features.

5. METHODOLOGY AND INFRASTRUCTURE


5.1 The Iterative and Incremental Development methodology

During the development of this project, the Iterative and Incremental Development (IID) methodology was used; hence the project was developed incrementally and iteratively. The model graph in Figure 5 below illustrates the six phases of the iterative process of the project. The first step concerns the planning and the requirements of the application. At the very start, the new concepts of Model Driven Engineering, Model Driven Architecture, metamodelling, the Ecore model and the EMF framework of Eclipse had to be turned from strangeness into familiarity. The core concepts of the project are Model Driven Engineering and the metamodel. Before the design and implementation steps, all of these notions had to be understood in order to provide a better quality implementation of the software. The relevant techniques also had to be specified, such as the main development platform and the programming language, and the modelling framework had to be designated as well. Because of the maturity of its metamodelling technology, the Eclipse Modelling Framework was chosen as the main framework and Eclipse as the major development environment. This procedure can be regarded as part of the initial planning. Meanwhile, the initial requirements were put forward.


Since the initial requirement of the project is to create a textual language for modular metamodel specification, Xtext, the language development framework of the Eclipse community, is used for this purpose. Xtext can easily be integrated into Eclipse as a plug-in and provides the facilities for designing a user's own programming language or domain-specific language (DSL). The framework also includes compilers and interpreters and the efficient Xtext APIs. Such facilities meet the requirements for creating an abstract syntax and the metamodels, reduce the cost of development and smooth the learning curve as well. To be able to trace the progress and control the sources of the project, the requirement for a source control system was put forward. CVS and Subversion (SVN) were compared, and the Eclipse SVN client called Subclipse is used as the version control tool. The reason for selecting Subclipse is not just that it is an open source project; there are several reasons for selecting Subclipse/SVN rather than CVS:

Subclipse can be integrated into Eclipse easily as a plug-in. As mentioned earlier, the main project is developed under the Eclipse platform, so a source code control tool based on an Eclipse plug-in is the best choice.

Subversion provides atomic commits.
SVN does not treat branching as a heavyweight operation.
SVN makes it easier to move files and restructure the source tree.
SVN supports version control of all file types.

After choosing Subclipse as the source code control tool, the place for hosting the project also had to be decided. In order to allow multiple users to access the source of the project easily, the project hosting service Google Code was chosen. The main reason for choosing Google Code as the project host is that it supports synchronising the project via Subversion and Mercurial. This characteristic provides great convenience for keeping track of the project, which is the key item of concern. The next step is the design of the project. The project will go smoothly on condition that the establishment of the new abstract language is accomplished at the very beginning. This textual language is developed with the Xtext plug-in of Eclipse.


Figure 5 The iterative process of the project

The metamodels of the abstract language, however, are designed along the lines of a general-purpose language such as Java or C++. Fundamental to this project is the construction of a textual abstract syntax before the concrete syntax; that is to say, the abstract syntax does not rely on the building of the concrete syntax. On the contrary, the structure of the concrete syntax is based on the abstract syntax. Although the initial notion of the model design was still a little obscure, on account of the iterative and incremental development methodology the requirements and the design were modified soon after; the iterative and incremental redesign will be discussed later in this chapter. Then comes the testing part. Testing runs through the whole development and is also an iterative process. The abstract language should first pass through the Modeling Workflow Engine 2 (MWE2). As the MWE2 workflow is generated automatically by Xtext, the user just needs to focus on the implementation of the abstract language. The initial Xtext file should pass the testing first, and it should then be possible to transform it into Ecore model files. After that, another Xtext file that defines only the annotation models should also pass the testing.


As mentioned earlier, the initial requirements and design were not yet detailed all-round, and the design had to be changed after new requirements were proposed. This is why iterative and incremental development is used for this project. The project is divided into several small slices that are developed, implemented and integrated as they are finished. After the requirements were detailed enough, it became apparent that the abstract language should be redesigned according to the models of another language called Emfatic. The project development is constructed as iterations, where every iteration extends the application until the project is accomplished. Because the project is developed incrementally, each iteration contains all the steps: some new requirements, some design, some implementation and some testing. For this reason, the abstract syntax was redesigned easily and passed the testing as well. Such a development process allows the developer to make use of what has been learnt during development in an early and incremental manner. At each iteration, the new design requirements are decided according to the new functions added. This allows the structure and design of the project to be adjusted in time according to the additional needs. Iterative and incremental development allows smooth modular development, while each module can be modified and tested at the same time; it also reduces the development cost and increases flexibility. For this project, the implementation of each Xtext file can start from the basic model elements such as class, package, attribute, operation, etc., and testing can be performed during development; there is no need to define all the elements of the entire abstract language at once. If new demands come up, they can easily be added to the current development at any time without fully rebuilding the whole syntax. The combination of incremental and iterative development enables development, testing and assessment to be performed at the same time, which allows an initial version of the project to be completed earlier. In the iterative and incremental model, the software is developed as a series of increments of design, implementation, integration and testing. As the user's needs cannot be defined completely at the outset, they are usually refined continuously in the follow-up phases. Therefore, the iterative process of this model makes adaptation to changes in demand easier.


Upon completion of the abstract syntax, the transformation from the abstract language model to the Ecore model can be started immediately. At the same time, the existing abstract syntax can be modified and enhanced. Once the transformation from the abstract syntax is completed, a preliminary framework has been established. In this way, later improvements can be added continuously, such as separating the annotation definition out of the single xtext file, enabling the application to support the transformation of one emfx file and several emfa files instead of only one emfx file, supporting annotation validation, improving the error message handling and enriching the attributes of the Ecore model.

6. DESIGN
In this section, the design and the higher-level abstraction of the structure of the project are discussed. First of all, the main purpose of this project is to create a textual language for metamodel specification. To achieve this, Eclipse, the Eclipse Modeling Framework and Xtext are chosen as the development platform and development tools. In some cases, the structure of the concrete syntax affects the abstract syntax, and some approaches even derive the abstract syntax from the concrete syntax [29]. However, in this project, which is based on the EMF framework, the Ecore metamodel and the Emfatic language decide the structure of the textual syntax. Xtext defines a mixture of the abstract and textual syntax of a language, and it also generates the abstract syntax from the Xtext grammar. Xtext projects can therefore be used for the design of the textual language. Two Xtext projects need to be created: one called Emfx, which defines most model elements of the textual language, and another one called Emfa, which only specifies the annotation element in order to support multiple files and annotation validation. Besides these, an Eclipse plug-in project also needs to be implemented to provide the facility for model transformation. The structure of the project is illustrated by the following chart (Figure 6). As shown in the diagram of the project structure, when an Xtext project is created, three projects are generated automatically by Xtext: one for the model grammar, one for the generator, and one called UI. In the first project, the grammar of the textual language is defined. The second project is used for code generation and for making models executable, while the last project is used to define IDE-related aspects such as the editor and the views. In this project, the main attention is focused on the grammar definition.

Figure 6 Structure diagram for the project

With the three Xtext projects in place, the next process is the Xtext runtime. In this process, Xtext generates the Ecore model and the ANTLR parser according to the textual grammar, which conforms to the Eclipse Modeling Framework (EMF). The next process is the transformation from Xtext models to Ecore models. With Emfatix2Ecore and Xtext running on Eclipse as plug-ins, the newly spawned workbench shall be able to convert emfx and emfa files into one single Ecore file. The Emfatix2Ecore project is designed for model transformation and annotation validation, which is elaborated in the next chapter.

7. IMPLEMENTATION
7.1 Construction of the abstract syntax


Before development, the development tools and the development platform need to be set up as a prerequisite. Since the project development is based on Eclipse plug-ins, all the required plug-ins need to be integrated into the Eclipse platform. The plug-ins required for this project are Xtext, EMF, Epsilon and Subclipse. After the integration of the plug-ins comes the establishment of the abstract syntax. Firstly, an Xtext-based project named com.googlecode.emfatix has been created and the DSL file's extension has been defined as emfx. Meanwhile, a file named Emfx.xtext was created automatically, and some Xtext runtime aspects such as linking, scoping and validation were created in a default version by Xtext. Then the major concern became the establishment of the abstract syntax. The first step is to define the grammar in the first line of the Emfx.xtext file. In this project, the grammar declaration is as follows:
grammar com.googlecode.emfatix.Emfx with org.eclipse.xtext.common.Terminals

The second part of the with statement (org.eclipse.xtext.common.Terminals) is an Xtext library grammar which defines the most commonly used terminal rules, i.e. ID, STRING, ML_COMMENT and WS. Another statement, called generate, lets the parser know which EClasses to use when it builds the Abstract Syntax Tree. These two statements are the prerequisite before defining the syntax. In light of the models in Emfatic, the first element of the abstract syntax is the Package element, and this is the root element of the whole syntax. A package consists of one or more elements called ModelElement, so the Package rule delegates to another rule, ModelElement. Its cardinality is *, meaning a Package has an arbitrary number of ModelElements. So the first rule in the abstract syntax is:
Package : (modelElements+=ModelElement)*;
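For completeness, the generate statement mentioned above is placed directly under the grammar declaration. A minimal sketch is shown below; the derived package name and namespace URI follow the usual Xtext wizard defaults and are assumptions rather than the exact values in the project's Emfx.xtext:

// Assumed generate statement; the nsURI shown here is illustrative only.
generate emfx "http://www.googlecode.com/emfatix/Emfx"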

The ModelElement rule in turn delegates to two other rules: NamedElement and Annotation. A NamedElement contains five alternatives: Import, Packages, Classifier, TypedElement and Reference. The Classifier rule further delegates to three rules called Class, DataType and MapEntry, while the TypedElement rule delegates to StructuralFeature; any one of the alternatives is a possible element. In order to implement this, the alternative operator | is used, i.e.:
NamedElement : Import | Packages | Classifier | TypedElement | Reference ;

After that, the StructuralFeature rule is defined; it delegates to another two rules: Attribute and Operation. Next, the details of each rule and the terminal rules are described. For the Class rule, the first step is to define the keywords of the rule. The keywords are declared as string literals within the Class rule, such as abstract, class, interface and extends. It looks similar to Java, but there is no keyword like implements in Java. Since a class can have several super classes, the += operator is needed for the assignment in order to obtain a list of super classes; e.g. superTypes+= corresponds to getSuperTypes().add(), whereas superType= would correspond to setSuperType(). In this case, every class starts with the keyword class or interface, followed by the class name and extends superTypes if they exist. After these elements, an opening curly brace is defined, followed by several features, and the rule ends with a closing curly brace. A class can contain any of the features Attribute, Operation, Reference and Annotation. Below is the actual definition of the Class rule:
Class :
    (('abstract')? ('class'|'interface') name=ID
        ('extends' superTypes+=[Class|QualifiedName] (',' superTypes+=[Class|QualifiedName])*)?
    )
    (':' ID ('.' ID)*)?
    begin='{'
        (attributes+=Attribute
        |op+=Operation
        |ref+=Reference
        |anno+=Annotation
        )*
    end='}' ;
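To illustrate what this rule accepts, a small hypothetical fragment of a concrete .emfx file is shown below; the class and feature names are invented for illustration, and double and int are assumed to be among the basic types:

class Vehicle {
    attr double weight;
    op int getWheelCount();
}

abstract class Car extends Vehicle {
    ref Vehicle owner;
}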

Another key point worth mentioning here is the so-called cross-reference, the right-hand assignment of the extends clause (superTypes+=[Class|QualifiedName]). Cross-references are used to declare the crosslinks within the language. As with some other compilers, the crosslinks are not established during the parsing process but in a later linking phase; however, the developer can specify the crosslink information in Xtext [30]. In this case, a superType of a class can cross-reference another class within the same namespace. Only the name of the cross-reference is parsed, using the QualifiedName rule, and a link based on the cross-reference name is established later. The other model elements were then defined, such as DataType, Annotation, Package, Enum, MapEntry and Attribute. The following fragment shows how these elements are defined in the syntax:
DataType :
    ('transient')? 'datatype' name=ID ':' (typeName=ID ('.' ID)* | typeName=BasicType) ';'
    | Enum ;

Packages :
    ('package' name=ID ';')
    | ('package' name=QualifiedName begin='{'
        ((subpackages+=Packages)
        |(classes+=Class)
        |(datatype+=DataType)
        |(mapEntry+=MapEntry)
        |(anno+=Annotation)
        )*
      end='}') ;

Enum :
    'enum' name=ID '{' (ID '='? (INT)? ';' | anno+=Annotation)* '}' ;

MapEntry :
    'mapentry' name=ID ':' key=BasicType '->' value=BasicType ';' ;

Attribute :
    QualifiedModifier? 'attr' (type=[AttributeTypes|QualifiedName] | BasicType Multiple?)? name=ID
    ('=' (value=ID | STRING | INT))? ';' ;

Operation :
    'op' (type=[Class|QualifiedName] Multiple? | BasicType Multiple?) name=ID
    '(' ((valueTypes+=[Class|QualifiedName] | BasicType)? valueNames+=ID
        (',' (valueTypes+=[Class|QualifiedName] | BasicType) valueNames+=ID)*)? ')'
    ('throws' exceptionTypes+=[Class|QualifiedName] (',' exceptionTypes+=[Class|QualifiedName])*)? ML? ';' ;

Another model element that differs from Java is the Reference. In order to allow the definition of a reference and its opposite reference value, two rules called Reference and ValueReference were defined at first. However, this way of defining a Reference did not fit the models of Emfatic very well, so the two rules were integrated into a single rule as follows:
Reference :
    (QualifiedModifier? ('ref'|'val')
        (type=[RefType|QualifiedName] bound+=Multiple? | BasicType Multiple?)?
    | QualifiedName Multiple?)
    ('#' ref=[RefType|QualifiedName])? name=ID ';' ;

RefType :
    Class | MapEntry | Reference ;

In this case, unlike the other models, two keywords are defined for the reference: ref and val. An element starting with either the keyword ref or val is regarded as a Reference model. Another cross-reference is added in order to link to the opposite of a reference, class or mapentry. In order to delegate the different targets together, a new type called RefType was introduced, which delegates to any of the three types using the | operator. The Reference is now able to refer to any of these element types through a single cross-reference. A parser rule named QualifiedName was also created so as to allow names of the form package.subpackage.class. This rule is used not only in the Reference rule but also in any other parser rules that require a fully qualified name. The list below shows the QualifiedName rule and the other terminal rules. The ML terminal rule is used in the Operation model in order to allow user-defined comments.
QualifiedName :
    ID ('.' ID)* ;

terminal ID :
    '^'? ('a'..'z'|'A'..'Z'|'_'|'$'|'~') ('a'..'z'|'A'..'Z'|'_'|'0'..'9'|'$'|'~')* ;

terminal Multiple :
    '^'? ('*'|'+'|'..'|'['|']'|'?') ('*'|'+'|'..'|'?'|'['|']'|'0'..'9')* ;

terminal ML :
    '{[[' -> ']]}' ;
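As an illustration of the ML rule, a hypothetical operation carrying such a user-defined comment could be written as follows (the operation name, parameter and comment text are invented):

op int countAnnotations(Annotable an) {[[ Returns the number of annotations attached to an ]]};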

So far, the initial version of the syntax has been built. The next step is to test the syntax by running the MWE2 workflow. There is an MWE2 file called GenerateEmfx.mwe2 next to the grammar file Emfx.xtext. By clicking the mwe2 file and selecting Run as MWE2 workflow, the Xtext language generator is triggered and the parser and some other related code are generated as well. After making sure there is no error or wrong syntax in the metamodel specification file (Emfx.xtext) and passing the MWE2 workflow, a collection of code and files is generated under the src-gen folder. One file generated under this folder that needs to be mentioned here is Emfx.ecore. This file is generated by Xtext and has been transformed from the Xtext metamodel into the Ecore model. Figure 6 illustrates the structure of the Ecore file, representing all the elements in a tree view, and Figure 7 shows all the objects as a class model and the relationships between these models, which gives the user a more visual view. Since the code generation is successful, the new syntax can now be tested in the integrated IDE environment. The user can select the project and choose to run it as an Eclipse Application. This procedure spawns a new Eclipse workbench with the plug-ins of the selected project integrated. The user can then create a file with .emfx as the file extension; this opens the generated emfx editor.


Figure 6 The structure graph of the Emfx.ecore file

This generated editor should have the capability to define a concrete syntax using the grammar that has been defined in the Emfx syntax. The editor should also highlight keywords such as package and class, and report any syntactic error if the concrete syntax is not written according to the Emfx abstract syntax.


Figure 7 The Ecore model diagram and the relationships for Emfx.xtext

7.2 Transformation From Xtext Model to Ecore Model

The next step is to transform the Xtext models to Ecore models, which is also a key process in this project. There are two ways of achieving this: one is to create another code generator, and the other is to interact with the Xtext model dynamically. After comparing the two methods, the latter one was chosen for this project, since loading the model at runtime and using it programmatically is the more effective way.


In order to integrate the model transformation feature into Eclipse and provide a popup menu when the relevant files are clicked, a new Eclipse plug-in project called com.googlecode.emfatix.emfatix2Ecore has been created. The dependencies in the configuration file of the plug-in should include Emfatix and Xtext so as to access the Emfatix models and the Xtext API. To add a new menu to Eclipse, a popup menu has been added to the extensions in the plugin.xml file. A menu called Emfatix was defined as the top-level menu and a sub-menu called Generate Ecore was defined to perform the transformation function. A Java class under the plug-in project called Emfatix2EcoreAction performs the actual transformation from the Xtext model to the Ecore model. This Emfatix2EcoreAction class is the main entry point for the translation function, and it implements the IObjectActionDelegate interface. The prelude to the transformation is to load an xtext file and parse it into the required objects. This function is already implemented by Xtext: files are parsed and represented as object graphs in memory. These objects are regarded as the Abstract Syntax Tree (AST), and models in Xtext provide not only getter and setter methods but also a long list of advanced concepts and semantics [30]. The Xtext language implements the Resource interface, which is used for model persistence in EMF. The statements in the class for loading the xtext file and parsing it into a resource are listed below:
resourceSet = new ResourceSetImpl();
resource = resourceSet.createResource(URI.createFileURI(getFilePath()));
try {
    resource.load(null);
} catch (IOException e) {
    e.printStackTrace();
}
Package metamodel = (Package) resource.getContents().get(0);
emfatixResources.createEcoreResource();
Resource ecoreResource = resourceSet.createResource(
        URI.createPlatformResourceURI(getFilePath() + ".ecore", true));

The first line creates a ResourceSet and the second line creates the Resource through the resource set. A URI pointing to the file, such as /MyDSL/test.emfx, needs to be passed in. The load() method then loads the resource (the null argument is the options map). The resource.getContents() method is used to get the first element from the content of the resource; in fact there is only one root element in an Xtext Resource, and this root element contains all the models specified in the xtext file. The last two lines create a new resource for saving the transformed model into a different file. The new file that contains the transformed elements has the same name as the previous xtext file but with an additional .ecore suffix. That is the prelude before the model translation. After this preparation, the process moves on to searching the elements within the root element. The searching mechanism can be described as follows:

- Search all the elements within a file recursively.
- Compare the type of each element found with the various Xtext model types such as Class, Package, Attribute, Operation, Reference, etc.
- Add a new model element to the resource according to the type matched.

Below is a code fragment that, as an example, converts a Class element into an EClass element of Ecore and adds it to a package:
if (ob instanceof com.googlecode.emfatix.emfx.Class) {
    classes = (com.googlecode.emfatix.emfx.Class) ob;
    EClass eClass = EcoreFactory.eINSTANCE.createEClass();
    eClass.setName(classes.getName());
    epackage.getEClassifiers().add(eClass);
}
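The other element types are handled with the same pattern. As an illustration only, an Attribute could be converted along the following lines; the accessor names on the Emfx Attribute model and the owning eClass variable are assumptions based on the fragment above, and a default EString type is used as a placeholder:

// Hypothetical sketch of the analogous Attribute conversion; not the project's exact code.
if (ob instanceof com.googlecode.emfatix.emfx.Attribute) {
    com.googlecode.emfatix.emfx.Attribute attribute = (com.googlecode.emfatix.emfx.Attribute) ob;
    EAttribute eAttribute = EcoreFactory.eINSTANCE.createEAttribute();
    eAttribute.setName(attribute.getName());
    // Placeholder type; mapping the declared attribute type would be done in a later pass.
    eAttribute.setEType(EcorePackage.eINSTANCE.getEString());
    eClass.getEStructuralFeatures().add(eAttribute); // eClass: the EClass currently being built
}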

The statement EcoreFactory.eINSTANCE.createEClass() creates a new EClass object for the Ecore model so that it can be added to an EPackage as an EClassifier, just like the last statement in the example above. The transformation of the other elements follows a similar process, while some special elements need extra information, such as the cross-references of the Reference and the Operation. In order to achieve this, two HashMaps are used to store all the classes and references. In a later pass, a reference can, for instance, look up the class map and set the matching class as its type. The code fragment below shows the implementation:

public void setEReferenceEType(Map referencesMap, Map classesMap) {
    Iterator refIt = referencesMap.entrySet().iterator();
    // Set the EType of each EReference
    while (refIt.hasNext()) {
        Map.Entry entry = (Map.Entry) refIt.next();
        Object key = entry.getKey();
        Object value = entry.getValue();
        Reference tempRef = (Reference) key;
        if (tempRef.getType() != null) {
            EReference reference = (EReference) value;
            if (tempRef.getType() instanceof com.googlecode.emfatix.emfx.Class) {
                EClass eClass = (EClass) classesMap
                        .get((com.googlecode.emfatix.emfx.Class) tempRef.getType());
                reference.setEType(eClass);
            }
        }
    }
}
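For this lookup to work, the two maps have to be filled while the classes and references are first converted. A minimal sketch of that bookkeeping is shown below; the variable names are assumptions and the exact statements may differ in the project:

// Hypothetical bookkeeping during the first conversion pass.
classesMap.put(classes, eClass);               // Xtext Class model -> generated EClass
referencesMap.put(referenceModel, eReference); // Xtext Reference model -> generated EReference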

After searching and converting all the elements from the Xtext model to the Ecore model, a new file with the same name as the emfx file but followed by an additional .ecore suffix, e.g. dsl.emfx.ecore, should be generated under the current folder. This is performed by the resource.save() method. Below is the code fragment for saving the file:
// Save the Ecore resource when the transformation finished without matching errors;
// otherwise report the problem to the user.
if (emfxFile == null) {
    Status status = new Status(IStatus.ERROR, "Emfatix Plug-in ID", 0,
            "Emfx file not found", null);
    ErrorDialog.openError(shell, "Emfx File Error Message",
            "Please select one Emfx file first!", status);
} else if (statusInt == 0) {
    System.err.println("*******Save File********");
    try {
        ecoreResource.save(null);
    } catch (IOException e) {
        e.printStackTrace();
    }
} else {
    System.err.println("*******Matching Error********");
}

If the status returned is 0, there is no matching error in the transformation and hence an ecore file is saved. However, if no emfx file is included, an error window pops up to notify the user that the main emfx file has not been included. So far, an initial version of Emfatix has been established: the process from creating the abstract syntax to the transformation to Ecore models has more or less succeeded. Nevertheless, the goal of separating the annotations into an individual physical file, away from the language itself, has not been implemented yet. Thus the next step is to define another textual language containing only the annotation element and to integrate it into Eclipse as well.

7.3 Separating annotation model to individual files

This step separates additional information such as annotations out of the core metamodel file to avoid polluting the metamodel itself. Based on the textual syntax that has already been implemented, another Xtext project named com.googlecode.emfa is created. The grammar file is defined as Emfa.xtext. In this file, the main element defined is the annotation, which allows matching the models for class, package, datatype, enum, attribute, reference and operation. The listing below is the major specification of the syntax:
EmfaModel :
    contents+=AnnotatedElement* ;

AnnotatedElement :
    type=("class"|"package"|"dataType"|"enum"|"attribute"|"reference"|"operation")
    elementName=StringOrID annotations+=Annotation* ;

Annotation :
    anno+='@' name=StringOrID
    ('('? details+=AnnotationDetail ("," details+=AnnotationDetail)* ')'?)? ;

AnnotationDetail :
    key=StringOrID '=' value=STRING ;

StringOrID :
    STRING | ID ('.' ID)* ;

After defining the annotation syntax, iterative testing was performed again. This new syntax should also pass the MWE2 workflow, and the related code should be generated properly. It can then be integrated into the Eclipse IDE as another plug-in. The new syntax running in a new Eclipse workbench shall also have the capability to highlight the keywords, and it shall allow the annotation format starting with the character @ followed by a model name and the annotation details, such as:
class Annotable @AnnoFromEmfa (uri="OO", prefix="OO")


Only defining a different syntax is not enough for the original purpose of this project, which is to be able to validate whether the annotations defined with the new syntax are correct or not. In other words, the models referenced by the new syntax (the emfa file) should exist in the emfx file. Furthermore, a facility shall be provided for converting the annotation elements into the ecore file as well. Simply put, when a user creates one emfx file and one or more emfa files, Emfatix shall have the capability to convert all the models specified in these files to the Ecore model and save them in a single (*.ecore) file. In order to achieve this feature, the Eclipse plug-in project emfatix2Ecore needs to be enhanced. First, the plugin.xml needs to be modified to allow multiple file selection. That is to say, one emfx file and an arbitrary number of emfa files can be selected together by the user, and the Emfatix and Generate Ecore options should be displayed in the Eclipse menu as well. In the Extensions section of plugin.xml, the nameFilter option is set to *.emf*; thus both files with the emfx suffix and files with the emfa suffix are allowed for model transformation. Figure 8 shows the details of how to define the extension element in plugin.xml for model transformation, and a sketch of such a configuration is given after this paragraph.
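For illustration, a popup menu contribution of this kind can be sketched as follows. This is only an assumed outline based on the standard org.eclipse.ui.popupMenus extension point; the identifiers and the action class path are placeholders, not necessarily those of the real plugin.xml:

<!-- Hypothetical sketch of the Emfatix popup menu contribution -->
<extension point="org.eclipse.ui.popupMenus">
   <objectContribution
         id="com.googlecode.emfatix.emfatix2Ecore.contribution"
         objectClass="org.eclipse.core.resources.IFile"
         nameFilter="*.emf*">
      <menu id="com.googlecode.emfatix.menu" label="Emfatix" path="additions">
         <separator name="group1"/>
      </menu>
      <action
            id="com.googlecode.emfatix.generateEcore"
            label="Generate Ecore"
            class="com.googlecode.emfatix.emfatix2Ecore.Emfatix2EcoreAction"
            menubarPath="com.googlecode.emfatix.menu/group1"
            enablesFor="+"/>
   </objectContribution>
</extension>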

Figure 8 Configuration details of the Extensions of plugin.xml

Next, the Java class Emfatix2EcoreAction is modified for the runtime transformation. When the user selects emfx and emfa files and clicks Generate Ecore, all the file resources can be accessed in the class and are saved into a selection called IStructuredSelection. In the entry method of the class, the emfx file and the emfa files are distinguished and converted into different IFiles. All the emfa files are saved into an ArrayList if the file name contains the emfa keyword. The following statements explain how these files are distinguished:
IFile tempFile = null;
IFile emfxFile = null;
ArrayList emfaList = new ArrayList();
for (Iterator it = selection.iterator(); it.hasNext();) {
    tempFile = (IFile) it.next();
    if (tempFile.getName().contains(".emfa")) {
        IFile emfaFile = tempFile;
        emfaList.add(emfaFile);
    } else if ((tempFile.getName().contains(".emfx"))) {
        emfxFile = tempFile;
    }
}

Afterwards, for each emfa file another Ecore ResourceSet and Ecore Resource are created based on the file path. All the annotation metamodels can thus be obtained from the Ecore Resource. Moreover, the decision was made to create another method named searchAnnotation for searching and validating all the annotation models. The functionality of this method can be described as follows:

- Compare each annotation element's type with the types class, package, dataType, enum, attribute, reference and operation.
- After matching a certain type, e.g. an annotation element for a class, iterate over all the EClasses in the Ecore resource and compare the element name with the name of each EClass.
- If a match is found, create a new EAnnotation and add it to the element.

The reason that all the element models in the emfx file are available for annotation validation is that they share a common Ecore resource: before the annotation validation process, all the models have already been loaded into that resource. In addition, there should be a mechanism to notify the user when no element matches an annotation element. At first, a matching error message was simply popped up in an Eclipse error dialog. However, this way of showing matching errors was not satisfactory, since only one error could be displayed at a time; the user had to fix the current matching error before knowing whether any other matching errors existed. The error message could not indicate the exact line number of the error either. Hence a more reasonable way of showing errors was chosen: use the NodeAdapter from the Xtext API to obtain the line number of the current EObject, and use the IMarker provided by the Eclipse API to display errors in the editor of the plug-in. The NodeAdapter lives in the Xtext parsetree package and is used for acquiring the parsed node of an element from the parse tree; from that node the line number of the EObject can be accessed as well. The IMarker is the general mechanism for linking notes and metadata to resources. The following code fragment elaborates the mechanism of matching an annotation against a class and notifying errors:
if (annoEle.getType().equalsIgnoreCase("class")) {
    boolean classMatch = false;
    TreeIterator tit = ecoreResource.getAllContents();
    while (tit.hasNext()) {
        Object o = tit.next();
        if (o instanceof EClass) {
            EClass ecl = (EClass) o;
            if (ecl.getName().equalsIgnoreCase(annoEle.getElementName())) {
                classMatch = true;
                // Copy every annotation of the emfa element onto the matched EClass
                for (Object object : annoEle.getAnnotations()) {
                    EAnnotation eAnnotation = EcoreFactory.eINSTANCE.createEAnnotation();
                    com.googlecode.emfa.Annotation anno = (com.googlecode.emfa.Annotation) object;
                    eAnnotation.setSource(anno.getName());
                    EList details = anno.getDetails();
                    for (Object de : details) {
                        AnnotationDetail detail = (AnnotationDetail) de;
                        if (detail.getKey() != null && detail.getValue() != null) {
                            eAnnotation.getDetails().put(detail.getKey(), detail.getValue());
                        }
                    }
                    ecl.getEAnnotations().add(eAnnotation);
                }
            }
        }
    }
    // No EClass with a matching name was found: attach a problem marker to the emfa file
    if (!classMatch) {
        try {
            final NodeAdapter adapter = NodeUtil.getNodeAdapter(annoEle);
            if (adapter != null) {
                int line = adapter.getParserNode().getLine();
                IMarker m = iFile.createMarker(IMarker.PROBLEM);
                m.setAttribute(IMarker.LINE_NUMBER, line);
                m.setAttribute(IMarker.MESSAGE, "No Element Match for : " + annoEle.getElementName());
                m.setAttribute(IMarker.PRIORITY, IMarker.PRIORITY_HIGH);
                m.setAttribute(IMarker.SEVERITY, IMarker.SEVERITY_ERROR);
                return -1;
            }
        } catch (CoreException e) {
            e.printStackTrace();
        }
    }
}

Up to now, all the code implementation has been finished. However, all the code was written in one file, Emfatix2EcoreAction, which contained too many lines to read comfortably. Hence, to improve readability, the single file was separated into five different files: Emfatix2EcoreAction.java, EmfatixResources.java, ElementTypes.java, ModelElements.java and AnnotaitonElements.java. To give a comprehensive view of these classes and their relationships to each other, Figure 9 illustrates the five classes for model transformation with all their methods, and Figure 10 shows the sequence diagram for these classes:


Figure 9 Class diagram for model transformation


Figure 10 The sequence diagram for the five classes

What remains is the testing: another round of iterative testing needs to be performed to make sure that all the functions run smoothly, especially the validation of the independent annotation files and their transformation to Ecore models. When running Emfatix in a new Eclipse workbench with the two plug-ins (Emfx and Emfa), language files conforming to Emfx and Emfa should be created correctly. The keyword highlighting and cross-reference functions should behave properly. When a user selects one emfx file and one or more emfa files and right-clicks these files, a popup menu should display the Emfatix and Generate Ecore options. After selecting the Generate Ecore option, a new ecore file should be created under the current folder. More importantly, the ecore file should contain not only all the models within the emfx file but also the annotations defined in the emfa file(s).


8. EVALUATION
In order to evaluate this project comprehensively, a set of test files written in the textual language needs to be created. A proper way to test the language is to generate Ecore models and compare the result with the same file written in the Emfatic language. Through this procedure, differences in the results are easy to notice and assess. Furthermore, the emfa files also need to be tested on the basis of the previous results. The first of the two testing procedures makes sure that the textual language works properly and conforms to the format of the Emfatic language. Since there are no ready-made Emfatic files for complete syntaxes, but a number of languages are available as Ecore metamodels, the Emfatic files have to be obtained by generating them from Ecore models. Hence, ten Ecore files were found and translated into Emfatic files (*.emf) in the first instance. According to these Emfatic files, ten emfx files were created: abnf.emfx, c_sharp.emfx, efactory.emfx, feature.emfx, forms.emfx, FormularMM.emfx, java.emfx, regular.emfx, tcl.emfx, xml.emfx. Each file is a complete language: a Domain-Specific Language (e.g. regular.emfx, forms.emfx and FormularMM.emfx), a modelling language such as efactory.emfx, or a General-Purpose Language such as java.emfx and c_sharp.emfx. Most of these files passed the language grammar without any error, except one file, java.emfx. There were some errors in this file initially; the reason is that a class named Class was defined under the classifier package, and class is a keyword in the new textual language. The keyword is not yet allowed in the names of operation elements in the new language. In the following code, the operation name shows an error because of this problem.
class Class extends ConcreteClassifier, Implementor {
    op classifiers.Class getObjectClass();

This limitation is not covered by the new grammar yet, so as a workaround the Class name has to be changed to ~Class to solve the issue, as in the following example:
class ~Class extends ConcreteClassifier, Implementor {
    op classifiers.~Class getObjectClass();


By doing this, the names of the class and the operation are changed to ~Class and are no longer highlighted as keywords. After this amendment, all the files passed the definition of the new syntax and contained no errors. Next, the additional annotation integration feature was tested. Two files with the same contents were created: one is test.emfx and the other is test.emf. The purpose of these files is to compare the results after both of them are generated to Ecore models. The files cover all the elements of the textual language and the Emfatic language, such as package, class, interface, attribute, reference, valueReference, operation, dataType, enum and mapEntry. The contents of the two files are as follows:
package rootPackage;

package annotations {
    abstract class Annotable {
        @attrAnnotation(name="attr", prefix="annotations")
        attr double cs;
        @refAnnotation(name="ref", prefix="annotations")
        readonly transient ref Annotable#eSubpackages eSuperPackage;
        val Annotable[*]#eSuperPackage eSubpackages;
        @oprationAnnotation(name="op", type="Sound")
        op int operationA(Annotable an, b.sub.Sound so);
    }
}

@dt(name="anno", type="Annotable")
datatype dataType : Annotable;

@enumAnnotation(name="enum", prefix="annotations")
enum Enumeration {
}

package b {
    package sub {
        mapentry mapEntry22 : int -> double;
        interface Sound {
            op double operationB(int a, long b);
        }
    }
}

After generating Ecore models from both of them, two new files, test.emfx.ecore and test.ecore, were created. The comparison of the results is shown in Figure 11.


Ecore model generated from test.emfx (new grammar)

Ecore model generated from test.emf (Emfatic language)

Figure 11 Comparison graph for the two languages

As displayed in the graph above, the main elements generated by the two languages are the same. However, the Ecore model generated from test.emfx by Emfatix misses some properties of some models. Due to time limitations, some extra information for the models has not been implemented yet. The missing parts are as follows:

- The return type of operations.
- Support for types other than Class in operations, such as EInt and ELong.
- The cardinality of references and valueReferences.
- The instance name of MapEntry.
- The modifier properties for models such as abstract, transient and readonly.

These are the known missing parts of the model transformation of Emfatix. Apart from these issues, the main function of the transformation from Xtext models to Ecore models runs properly. The next step is to test the support for separate annotation files and their validation. Based on the previous test file test.emfx, an extra file for annotation definitions called anno.emfa was created. The details of the file are as follows:
class Annotable
@AnnotableAnnoFromEmfa(type="class", name="Annotable")
@AnnotableAnnoFromEmfa2(type="class", name="AnnotableSecond")

package annotations
@annotationsAnnoFromEmfa(type="package", name="annotations")

attribute cs
@csAnnotaionFromEmfa(type="attribute", name="cs")

operation operationA
@operationAnnotaionFromEmfa(type="operation", name="operationA")

reference eSuperPackage
@dataTypesAnnotaionFromEmfa(type="reference", name="eSuperPackage")

reference eSubpackages
@dataTypesAnnotaionFromEmfa(type="reference", name="eSubpackages")

dataType dataTypes
@dataTypesAnnotaionFromEmfa(type="dataType", name="dataTypes")

enum Enumeration
@EnumerationAnnotaionFromEmfa(type="enum", name="Enumeration")

class mapEntry22
@mapEntry22AnnotaionFromEmfa(type="mapEntry/class", name="mapEntry22")

The file contains only annotation elements, and the name of each annotation starts with the character @. An element can define one or more annotations, like the Annotable class, for which two annotations are defined. Each annotation has a key-value set such as type and name. The user should now be able to select both the test.emfx file and the anno.emfa file and generate them into a single ecore file. A file called test.emfx.ecore should be created which contains all the models from the two files. If the generation is correct, the result should be as shown in Figure 12. In the chart, we can see that the extra annotation elements have been added to the Ecore model. For instance, the enum element Enumeration now has two annotation elements: the enumAnnotation comes from the original file, while the EnumerationAnnotaionFromEmfa is the new one obtained from the emfa file. It contains the key-value set type="enum" and name="Enumeration".


Figure 12 Result for model generation with separate annotation file

The final part of the testing is the annotation validation. To test the error notification scenario, some element names in the anno.emfa file need to be changed so that no such elements exist in test.emfx. In this scenario, the annotation target for the class Annotable has been changed to AnnotableDummy and the one for the package annotations has been changed to annotationsDummy. When the two files are run again, all the matching errors are displayed in the editor, and a message such as "No Element Match for: annotationsDummy" pops up if the user moves the mouse pointer over the error. Figure 13 shows the exact result for this scenario.


Figure 13 Result for annotation validation

8.1 The assessment for functional requirements

The table below describes the results of the assessment of the functional requirements:
Requirement: A textual abstract syntax
Evaluation: An abstract syntax, similar to Emfatic, has been designed and implemented. It defines all the elements such as classes, attributes, annotations, packages, operations, references etc., and the file extension has been defined as .emfx.
Accomplishment: Completed

Requirement: Individual syntax for annotation
Evaluation: An individual syntax that only defines annotation elements has been implemented, and the file extension has been defined as .emfa.
Accomplishment: Completed

Requirement: Ability to create a concrete language
Evaluation: The project has the ability to create concrete languages based on the textual syntaxes. The editor also has the capability to show errors and suggestions whenever an inappropriate statement is written by a user.
Accomplishment: Completed

Requirement: Syntax similar to Emfatic
Evaluation: The language based on the new syntax behaves similarly to Emfatic and has elements such as package, class, abstract, attr, ref, val, op, double, etc. These elements of the language are highlighted as well.
Accomplishment: Completed

Requirement: Annotation validation
Evaluation: The new plug-in language supports validation for all the individual files that contain only annotation elements. It is able to check whether the annotations defined in the emfa files are valid or not.
Accomplishment: Completed

Requirement: Model transformation
Evaluation: The application allows the user to select one emfx file and one or more emfa files, and these files can be generated into Ecore models. However, some additional information has not been completed yet, such as the return type of operations, support for types such as EInt and ELong, the cardinality of references and valueReferences, the instance name of MapEntry, and the modifier properties for models such as abstract, transient and readonly.
Accomplishment: Basically completed

8.2 The assessment for non-functional requirements

In addition, the table below elaborates the evaluation of the non-functional requirements:

Requirement: Language conformity
Description: The abstract language has been designed to conform to the standards of Emfatic, so that the new plug-in language can easily be accepted by those who are familiar with the syntax of Emfatic, and the cost of learning the new language is reduced to a large extent.
Accomplishment: Completed

Requirement: Language supportability
Description: The language can be supported via tools and program management such as editing, debugging and transforming, and the tools are based on Eclipse plug-ins. Ten files have been tested for supportability. However, some keywords such as Class cannot be supported yet.
Accomplishment: Basically completed

Requirement: Language simplicity
Description: The language has been designed to be as simple as possible and concentrates on the interests of the language. A user can easily create a language file and generate it into Ecore models.
Accomplishment: Completed

Requirement: Language flexibility
Description: A user or a stakeholder is able to add new elements to the textual language, but the Xtext metamodel needs to be modified and re-run.
Accomplishment: Basically completed

Requirement: Language robustness
Description: The new application is able to handle errors and exceptions correctly based on the Eclipse platform. This also includes tolerance of invalid elements defined in the language, annotation matching errors, invalid file selections, the absence of an emfx file and some unexpected operation processes.
Accomplishment: Completed

Requirement: Language usability
Description: The language is easy to use and has the capacity to be understood, studied and used by its potential users, since it is designed to conform to the Emfatic language.
Accomplishment: Completed

Requirement: Language reliability
Description: The application has the capability to perform and retain its functions over time and under unexpected circumstances, which is also supported by the Eclipse development environment.
Accomplishment: Completed

Requirement: Language reusability
Description: Since the language has been designed mainly for model transformation and has only been tested on one project, its reusability has not been tested.
Accomplishment: Uncompleted

Requirement: Language longevity
Description: The language is used during development and is supposed to run for a long period of time, but this has not been tested.
Accomplishment: Basically completed

9. CONCLUSION


REFERENCES

[1] D. Gasevic, Model driven engineering and ontology development, 2nd ed. New York: Springer, 2009.

[2] B. Appukuttan, T. Clark, S. Reddy, L. Tratt, and R. Venkatesh, "A model driven approach to model transformations," Model Driven Architecture: Foundations and Applications, 2003.

[3] J. Bezivin, "Model driven engineering: An emerging technical space," Generative and Transformational Techniques in Software Engineering, vol. 4143, pp. 36-64, 2009.

[4] B. Selic, "The pragmatics of model-driven development," IEEE Software, vol. 20, pp. 19-25, Sep-Oct 2003.

[5] C. Atkinson and T. Kuhne, "Model-driven development: A metamodeling foundation," IEEE Software, vol. 20, pp. 36-41, Sep-Oct 2003.

[6] L. Favre, "Well-founded metamodeling for model-driven architecture," SOFSEM 2005: Theory and Practice of Computer Science, vol. 3381, pp. 364-367, 2005.

[7] InfoGrid. What is metamodeling, and what is it good for? Available: http://infogrid.org/wiki/Reference/WhatIsMetaModeling

[8] OMG. (2007). OMG Unified Modeling Language (OMG UML), Infrastructure, V2.1.2. OMG Document Number: formal/2007-11-.

[9] S. Kent, "Model Driven Engineering."

[10] J. D. Poole, "Model-Driven Architecture: Vision, Standards And Emerging Technologies," Workshop on Metamodeling and Adaptive Object Models, April 2001.

[11] A. Kleppe, "A Language Description is More than a Metamodel," in Proc. of the Fourth International Workshop on Software Language Engineering (ATEM 2007), 2007.

[12] J. D'Anjou and S. Shavor, The Java Developer's Guide to Eclipse, 2nd ed. Boston: Addison-Wesley, 2005.

[13] S. Holzner, Eclipse. Beijing; Sebastopol, CA: O'Reilly, 2004.

[14] E. Gamma and K. Beck, Contributing to Eclipse: Principles, Patterns, and Plug-ins. Boston: Addison-Wesley, 2004.

[15] C. Judd and H. Shittu, Pro Eclipse JST: Plug-ins for J2EE Development. New York: Apress, 2005.

[16] E. Clayberg and D. Rubel, Eclipse: Building Commercial-Quality Plug-ins, 2nd ed. Upper Saddle River, NJ: Addison-Wesley, 2006.

[17] EMF project home page. Available: http://www.eclipse.org/modeling/emf/

[18] D. Steinberg, EMF: Eclipse Modeling Framework, 2nd ed. Upper Saddle River, NJ: Addison-Wesley, 2009.

[19] Emfatic project home page. Available: http://wiki.eclipse.org/Emfatic

[20] V. Bacvanski and P. Graff, Mastering Eclipse Modeling Framework. InferData Ltd., 2005.

[21] E. Meij, D. Trieschnigg, M. de Rijke, and W. Kraaij, "Conceptual language models for domain-specific retrieval," Information Processing & Management, vol. 46, pp. 448-469, Jul 2010.

[22] D. S. Kolovos, S. Zschaler, N. Drivalos, R. F. Paige, and A. Rashid, "Domain-Specific Metamodelling Languages for Software Language Engineering," Software Language Engineering, vol. 5969, pp. 334-353, 2010.

[23] Eclipse Xtext project. Available: http://www.eclipse.org/Xtext/documentation/1_0_0/xtext.html#WhatisXtext

[24] L. M. Garshol. (2006). BNF and EBNF: What are they and how do they work? Available: http://www.garshol.priv.no/download/text/bnf.html

[25] D. S. Kolovos. 2010. Available: http://www-users.cs.york.ac.uk/~dkolovos/Research/StudentProjects

[26] K. E. Wiegers, Software Requirements, 2nd ed. Redmond, Wash.: Microsoft Press, 2006.

[27] A. Stellman and J. Greene, Applied Software Project Management. Sebastopol, CA: O'Reilly, 2006.

[28] D. S. Kolovos, R. F. Paige, T. Kelly, and F. A. C. Polack, "Requirements for Domain-Specific Languages."

[29] P. A. Muller, F. Fondement, F. Fleurey, M. Hassenforder, R. Schnekenburger, S. Gerard, and J. M. Jezequel, "Model-driven analysis and synthesis of textual concrete syntax," Software and Systems Modeling, vol. 7, pp. 423-441, Oct 2008.

[30] Xtext Project. Available: http://www.eclipse.org/Xtext/documentation/1_0_0/xtext.html#syntax
