
COMASSO BSW generation and validation


The COMASSO initiative provides a community-sourced AUTOSAR 4.x basic software stack together with the tooling to work with it. One of those tools is the BSWDT, which uses the Eclipse projects Xpand and Xtend for model checking and code generation. It provides not only the BSW in readable form; the code generators and validators are also there for you to inspect, learn from and modify. This blog article shows how to find your way around those files.

First of all, we need to create a simple Eclipse project (File->New Project…) and then make it a BSW configuration project by invoking the context menu on the project and setting the project configuration.

After that, we import the BSW and BSW code generators into the project:

You will notice that there is a directory for each of the BSW modules. You will find the Xpand/Xtend files under each module's sub-path scripts/templates/.

The relevant file types are:

  • .chk: These contain model validations and checks
  • .xpt: The actual code generators
  • .ext: Model-to-Model transformations and utility functions
  • .oaw: Workflow files (not discussed here)
  • .bamf: Build action manifests.

All files except the .bamf files come from the Eclipse projects Xpand/MWE. The .bamf files are an implementation of the AUTOSAR build action manifest. In short, the .bamf files invoke the .oaw files, which in turn invoke the .chk, .xpt and .ext files. The details will not be discussed here.

Checks

However, we will have a look at the .chk files. A .chk file consists of a number of context statements. A context statement for a specific type will be invoked for all occurrences of that type in your model. This makes even a check for the ComMChannel very simple: there is no need to loop through the model yourself, everything is done by the framework.

More complex checks can be factored into dedicated subroutines.

Depending on the .bamf configuration, a failed check can stop the build process.

Code

Code generation is done with Xpand. Xpand is a template-based language that allows you to specify the generators easily. The big advantage is the flexible type system, which derives the available types from the param-defs in your workspace. So if you have vendor-specific param-defs, all the types will be immediately available in the language as first-class types.

Note that in the templates everything outside the ‘<<’ and ‘>>’ delimiters is printed 1:1 to the output, while the statements between ‘<<’ and ‘>>’ are evaluated on the model. Complex queries are possible within these statements. Of course, Xpand provides a comfortable set of additional statements for outputting code.

Transformation

The files under “generateid” are used for automatically setting information prior to generation – e.g. by generating ids. One example of this can be found in LIN.

Note that Xtend/Xpand offer full content assist – no need to refer to the AUTOSAR spec to find out which elements are supported.

Profiling

The system comes with a profiling component that supports analysis of the time spent in checks and generation. Profiling will be introduced in another blog post.


Functional Architectures (EAST-ADL) and Managing Behavior Models


In the early phases of systems engineering, functional architectures are used to create a functional description of the system without specifying details about the implementation (e.g. the decision to implement in HW or SW). In EAST-ADL, the building blocks of functional architectures (Function Types) can be associated with so-called Function Behaviors, which support adding formal notations for specifying the behavior of a logical function.

In the research project IMES, we use the EAST-ADL functional architecture and combine it with block diagrams. Let’s consider a simplified functional model of a Start-Stop system.

In EAST-ADL, you can add any number of behaviors to a function. We already have a behavior activationControlBehavior1 attached to the function type activationControl, and we are adding another behavior called newBehavior.

A behavior itself does not specify anything yet in EAST-ADL. It is more like a container that allows you to refer to other artefacts actually containing the behavior specification. You can refer to any specification (like ML/SL or other tools). For our example, we are using the block diagram design and simulation tool DAMOS, which is available as open source at www.eclipse.org.

We can now invoke the “Create Block Diagram” action on the behavior.

The tool will then create a block diagram with input and output ports that are taken from the function type (in our example, two input ports and one output port). At this point, the architects and developers can specify the actual logic. Of course, it is also possible to use existing models and derive the function type from them (“reverse from existing”). That would work on a variety of existing specifications, like state machines, ML/SL etc.

The actual logic is then specified in a DAMOS block diagram model.

At this point of the engineering process, we would have:

  • A functional architecture
  • one or more function behaviors with behavior models attached to the function types

Obviously, the next thing that comes to mind is to combine all the single behavior models and create a simulation model for a (sub-)system. Since we might have more than one model attached to a function, we have to pick a specific model for each of the function types.

This is done by creating a system configuration. That will create a configuration file that associates a function type (on the left side) with a behavior model (on the right side). Of course, since this is built with Eclipse Xtext, it is not only a text file, but a full model with validation (are the associated models correct?) and fully type-aware content assist. After the configuration has been defined, a simulation model is generated that can then be run to combine the set of function models.

Support for vendor specific parameter / module definitions in COMASSO basic software configuration tool


During the 6th AUTOSAR Open Conference, one of the presenters pointed out that the integration of vendor-specific / custom parameter definitions into the tool chain can be problematic because of insufficient tool support. Some tools seem to have hard-coded user interfaces for the configuration or proprietary formats for customization.

The COMASSO basic software tooling chose a different approach: the user interface and the code generator framework are dynamically derived from the .arxml files in the workspace, and all configuration data is stored directly in .arxml (so no import/export of models is required, just drag and drop).

NVRAM Example

In one minimal example that we use for training purposes, we have a parameter definition for a NvRam module (stripped down from the official AUTOSAR definition). In the “file system” view, the param def can be seen in the workspace.

This results in the corresponding “Model view”.

Introducing a vendor specific module

Now assume the (contrived) scenario that we would like to provide a BSW/MCAL module for a Graphical Processing Unit (GPU). For the configuration tool to work, all we need is a param definition for it.

Although COMASSO does not require a specific project structure, for convenience we copy the NvM file structure and create a new param def called GpU_ParamDef.arxml. In XML, we define a new module:

<ECUC-MODULE-DEF UUID="ECUC:add28e00-89a6-45a7-91ec-b997291c0da6">
	<SHORT-NAME>GPU</SHORT-NAME>
	<DESC>
		<L-2 L="EN">Configuration of the Graphical Processing Unit.</L-2>
	</DESC>

	<LOWER-MULTIPLICITY>0</LOWER-MULTIPLICITY>
	<UPPER-MULTIPLICITY>1</UPPER-MULTIPLICITY>
	<SUPPORTED-CONFIG-VARIANTS>
		<SUPPORTED-CONFIG-VARIANT>VARIANT-LINK-TIME</SUPPORTED-CONFIG-VARIANT>
		<SUPPORTED-CONFIG-VARIANT>VARIANT-PRE-COMPILE
		</SUPPORTED-CONFIG-VARIANT>
	</SUPPORTED-CONFIG-VARIANTS>
	<CONTAINERS>

		<ECUC-PARAM-CONF-CONTAINER-DEF
			UUID="ECUC:592eb68c-8ec5-46e2-aa34-9d55999b105e">
			<SHORT-NAME>GpuDescriptor</SHORT-NAME>
			<DESC>
				<L-2 L="EN">Container for a management structure to configure the
					composition of a given GPU.</L-2>
			</DESC>
			<LOWER-MULTIPLICITY>1</LOWER-MULTIPLICITY>
			<UPPER-MULTIPLICITY>65536</UPPER-MULTIPLICITY>
			<MULTIPLE-CONFIGURATION-CONTAINER>false
			</MULTIPLE-CONFIGURATION-CONTAINER>
			<PARAMETERS>
				<!-- PARAMETER DEFINITION: GpuBlockLength -->
				<ECUC-INTEGER-PARAM-DEF UUID="ECUC:60ef00b3-517e-49f5-8942-16bc295ac588">
					<SHORT-NAME>GpuBlockLength</SHORT-NAME>
					<DESC>
						<L-2 L="EN">Defines the block data length in bytes.</L-2>
					</DESC>
					<INTRODUCTION>
						<P>
							<L-1 L="EN">Note: The address of the tool.</L-1>
						</P>
					</INTRODUCTION>
					<LOWER-MULTIPLICITY>1</LOWER-MULTIPLICITY>
					<UPPER-MULTIPLICITY>1</UPPER-MULTIPLICITY>

					<ORIGIN>AUTOSAR_ECUC</ORIGIN>
					<SYMBOLIC-NAME-VALUE>false</SYMBOLIC-NAME-VALUE>
					<MAX>65535</MAX>
					<MIN>1</MIN>
				</ECUC-INTEGER-PARAM-DEF>
			</PARAMETERS>
		</ECUC-PARAM-CONF-CONTAINER-DEF>
	</CONTAINERS>
</ECUC-MODULE-DEF>

After saving the new param def .arxml, the tool scans the workspace and rebuilds the user interface.

Without a single line of additional code, the tool knows about the vendor-specific parameter definition. The icon in front of the GPU is grey because we have not configured any data yet. Opening the editor shows that it knows about the VSMD now as well:

Any configuration here will be directly stored as values in a .arxml file.

Code generation

But it is not just the user interface that immediately knows about the vendor-specific module – so do the generation and validation framework. In the generator template, we can now directly access the GPU module configuration from the AUTOSAR root and then loop through the new GPU descriptors.

And the content assist system works with this information as well – no need to remember what we defined. Invoking content assist on a variable holding a GPU descriptor yields suggestions for all the attributes we defined.

Open Language

The code generation and validation framework is Eclipse Xtend/Xpand. It is publicly available and not a proprietary solution.

Parameter Description

In addition, when clicking on an element in the configuration editor, a documentation view shows the important information from the .arxml in a nicer form. Any documentation that is stored in the .arxml is thus easily accessible.


AUTOSAR reports with Artop and BIRT


Even in times of model-based development, artefacts like PDF and Word documents are still important for documentation. With Eclipse-based technologies, it is easy to produce documents in various formats from your AUTOSAR XML.

One approach would be to use Eclipse M2T (model-to-text) technologies to generate your own reports. Technologies for that would be Xtend2 or Xpand. An article can be found here.

However, with the Eclipse BIRT project, there is also a more WYSIWYG approach. Taking a very simple AUTOSAR model as an input, we can easily create reports in most formats.


After creating a BIRT project, we have to configure a so-called “Data Source” that simply takes our ARXML as input:

We then configure one or more “Data Sets” that actually pull the data from the data source:

From there it is mostly drag and drop and maybe a little scripting to create the report in the WYSIWYG editor – it also supports a lot of styling and templating, including images, custom headers etc.:

As you can see, we create a table with a row for each SWC-Type, showing the shortname, and a subtable for all the ports of that SWC-Type. The BIRT Report Viewer shows the report in HTML.


By clicking on “Export Report” we can export to a number of formats:


PDF:


Excel:

BIRT and ARTOP provide powerful reporting functionality.

Open Collaborations for Automotive Software – and Eclipse


On June 4th, itemis is hosting a conference on Open Collaborations for Automotive Software (in German) in Esslingen near Stuttgart. There are a lot of interesting talks, and – while not immediately obvious – all of them have some relation to Eclipse.

  • Open Source OSEK: Erika Enterprise is the first certified open source OSEK, and it uses Eclipse-based tooling for the configuration / development of OSEK-based software
  • COMASSO is a community-based AUTOSAR BSW implementation. It comes with BSWDT, an Eclipse-based AUTOSAR configuration tool that leverages technologies like EMF and Xpand
  • SAFE RTP Open Source Platform for Safety-Modeling and -Analysis: This tooling is completely based on Eclipse and features the EAST-ADL implementation EATOP, which is published within the Eclipse Automotive Industry Working Group
  • openMDM (Open Measured Data Management): Again, a community-based tooling for the management of measured data, based on Eclipse
  • Strategic Relevance of Open Source at ETAS GmbH: ETAS GmbH builds a lot of automotive tools on Eclipse and will talk about the business relevance of open source / Eclipse
  • Advanced Technology for SW Sharing – Steps towards an open source platform for automotive SW Build Systems: A new build system framework for automotive ECU software, based on Eclipse

Eclipse is a key technology in automotive software engineering and the conference will introduce many interesting aspects.

Travel Tips for EclipseCon France


EclipseCon France is one of my favourite conferences. If you are going, here are some tips:

  • I prefer to book a hotel in the city centre, e.g. near Jean Jaures (https://goo.gl/maps/RKkQJ). A lot of the restaurants are within walking distance and there are a lot of places that you can use to meet with other participants.
  • There is a bus shuttle between the airport and the city centre (Jean Jaures) that also stops right in front of the EclipseCon venue. And it is reasonably priced.
  • Toulouse has a dense network of public bicycle rental stations. So instead of using the metro, you might want to pedal your way to EclipseCon – it is within reasonable distance, and there are bike stations in front of the venue and in the near vicinity.
  • Although you will find WiFi at the venue and at hotels, I prefer a prepaid internet SIM by SFR or Orange. If you have a mobile WLAN hotspot or can use SIM cards in your notebook, that is my preferred option. Orange and SFR both have shops right in the city center. You can get a prepaid SIM easily – but bring a passport / ID card for registration.

Enjoy an interesting conference in a beautiful city in early summer!

 

Supporting Model-To-Transformation in Java with AspectJ


There are a number of model-to-model transformation frameworks provided under the Eclipse umbrella (e.g. QVTO). Some of these provide a lot of support. However, in some scenarios you need to either adapt them or implement your own M2M transformation in Java. For us, some of these cases are:

  • The framework does not support model loading/saving with Sphinx
  • The framework does not support EMF unsettable attributes (eIsSet)
  • Framework performance

However, one of the most annoying problems in writing transformations is caching already created elements. Consider the following abstract example:

  • Our source model has an element of type Package ‘p’
  • In the package, there are sub elements ‘a’ and ‘b’
  • ‘a’ is a typed element, its type is ‘b’

Usually, when mapping the contents of ‘p’, we would iterate over its contents and transform each element. However, when transforming ‘a’, the transformed element ‘A’ should have its type set to the transformed ‘b’ (‘B’) – but we have not transformed ‘b’ yet. If we transform it at this point in time, it will be transformed again when iterating further through the package, resulting in two different ‘B’s in the target model.

Obviously, we should keep a cache of mapping results and not create a new target instance if a source element has been transformed before. However, managing those caches will clutter our transformation rules.

But it is quite easy to factor this out by using AspectJ. First, we define an annotation for methods whose results should be cached:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Cached {
}

 

Then we define an aspect that intercepts all calls to methods annotated with @Cached:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect // ("perthis(execution(* *.*(..)) && @annotation(Cached))")
public class CachingAspect {

	 protected Map<List, Object> cache = new HashMap<List,Object>();
	 protected Transformation ctx = null;

	  @Around("execution(* *.*(..)) && @annotation(Cached)")
	     public Object doNothing(ProceedingJoinPoint thisJoinPoint) throws Throwable {
		  	System.out.println("@Cached "+thisJoinPoint);
		  	System.out.println(thisJoinPoint.getThis());
		  	System.out.println(thisJoinPoint.getArgs());
		  	System.out.println(thisJoinPoint.getSignature());

		  	List<Object> key = new ArrayList<Object>();
		  	key.add(thisJoinPoint.getThis());
		  	key.add(thisJoinPoint.getSignature());
		  	key.addAll(Arrays.asList(thisJoinPoint.getArgs()));

		  	if(cache.containsKey(key)) {
		  		System.out.println("Would be cached to "+cache.get(key));
		  		return cache.get(key);
		  	}
	       Object result = thisJoinPoint.proceed();
	       cache.put(key, result);
	       if(thisJoinPoint.getThis() instanceof AutoAdd) {
	    	   ((AutoAdd)thisJoinPoint.getThis()).autoAdd(result);
	       }
	       return result;
	     }
}

  • First, we build a cache key out of the current object, the method called and its arguments
  • If this combination is already cached, we return the cached entry
  • Otherwise we proceed to the original method and cache its result

The transformator then looks like this:

def EAPackage transformL(QudvLibrary l) {
		val p = Eastadl21Factory.eINSTANCE.createEAPackage
		p.setShortName(l.name)
		addContext = p
			val trafo = l.elements.map[transform(it)].filterNull.filter(typeof(EAPackageableElement))
			p.element.addAll(trafo)
		p
	}

	@Cached
	def  dispatch transform(EObject l ) {
		null
	}

	@Cached
	def  dispatch  org.eclipse.eatop.eastadl21.Unit transform( Unit l) {
		val n = Eastadl21Factory.eINSTANCE.createUnit
		n.shortName = l.name
		n.quantity = transform(l.quantityKinds.head as QuantityKind) as Quantity
		n
	}

	@Cached
	def dispatch org.eclipse.eatop.eastadl21.Quantity transform(QuantityKind l) {
		val n = Eastadl21Factory.eINSTANCE.createQuantity
		n.shortName = l.name
		return n as org.eclipse.eatop.eastadl21.Quantity

	}

(OK, so this is Xtend2 rather than Java, because the polymorphic dispatch comes in handy.)

Note that we are transforming EAST-ADL models here. The package contains Units and Quantities; Units refer to Quantities. The Quantity transformation is called twice: from the package transformation and from the Unit transformation. However, by using the caching, we get the correct structure.

In addition, we also solve a little annoying problem: elements have to be put into the containment hierarchy as well, but that would require additional code in the transformators and passing around the containers.

In the aspect above, we check whether the target object implements the AutoAdd interface and call that method if so. This way, the transformation can take care of the addition to the containment hierarchy at a defined place.
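The interface itself can be trivial; a version matching the aspect's call to autoAdd(result) could look like this (a sketch, since the original interface source is not shown here):

public interface AutoAdd {
	/** Called by the caching aspect with each newly created target element. */
	void autoAdd(Object result);
}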

The code above is an implementation prototype to show initial feasibility.

Sphinx is listening to Resources – Synchronizing Model Descriptors


The Eclipse Sphinx project provides a number of useful features for (EMF-based) model management. One of these features is reloading the model when the underlying resources change. Sphinx uses so-called ModelDescriptors to define which files belong to a model; on a change, these have to be updated as well.

One of the relevant code pieces is org.eclipse.sphinx.emf.Activator:

/**
		 * Starts automatic synchronization of models wrt resource changes in the workspace. Supports
		 * loading/unloading/reloading of complete models when underlying projects are
		 * created/opened/renamed/closed/deleted or their description or settings are changed as well as
		 * loading/unloading/reloading of individual model resources when underlying files are created/changed/deleted.
		 */
		public void startWorkspaceSynchronizing() {
			ModelDescriptorSynchronizer.INSTANCE.start();
			ResourceScopeMarkerSynchronizer.INSTANCE.start();
		}

 
The ModelDescriptorSynchronizer registers itself with standard means at the Eclipse workspace to listen for resource changes and adds a BasicModelDescriptorSynchronizerDelegate delegate:

public void start() {
		ResourcesPlugin.getWorkspace().addResourceChangeListener(
				this,
				IResourceChangeEvent.PRE_BUILD | IResourceChangeEvent.PRE_CLOSE | IResourceChangeEvent.PRE_DELETE | IResourceChangeEvent.POST_BUILD
						| IResourceChangeEvent.POST_CHANGE);
	}

	public void stop() {
		ResourcesPlugin.getWorkspace().removeResourceChangeListener(this);
	}

	/**
	 * Protected constructor for singleton pattern.
	 */
	protected ModelDescriptorSynchronizer() {
		addDelegate(BasicModelDescriptorSynchronizerDelegate.INSTANCE);
	}

        @Override
	protected IModelDescriptorSyncRequest createSyncRequest() {
		return new ModelDescriptorSyncRequest();
	}

 

The major task of that class is to map the IResourceChangeEvents from the Eclipse framework to Sphinx’ IModelDescriptorSyncRequests, which specify what to update in the model descriptors:

@Override
	public void handleFileAdded(int eventType, IFile file) {
		if (eventType == IResourceChangeEvent.POST_CHANGE) {
			syncRequest.addFileToAddModelDescriptorFor(file);
		}
	}

But it also does some handling of the model registry caches:

@Override
	public void handleFileMoved(int eventType, IFile oldFile, IFile newFile) {
		if (eventType == IResourceChangeEvent.POST_CHANGE) {
			// Remove entry for old file from meta-model descriptor cache and add an equivalent entry
			// for new file
			/*
			 * !! Important Note !! This should normally be the business of MetaModelDescriptorCacheUpdater. However, we
			 * have to do so here as well because we depend on that cached metamodel descriptors are up to date but
			 * cannot know which of both BasicModelDescriptorSynchronizerDelegate or MetaModelDescriptorCacheUpdater
			 * gets called first.
			 */
			InternalMetaModelDescriptorRegistry.INSTANCE.moveCachedDescriptor(oldFile, newFile);

			// Remove descriptor for model behind old file from ModelDescriptorRegistry if it is the
			// last file of the that model
			syncRequest.addFileToRemoveModelDescriptorFor(oldFile);
			syncRequest.addFileToAddModelDescriptorFor(newFile);
		}
	}

org.eclipse.sphinx.emf.internal.model.ModelDescriptorSynchronizer extends org.eclipse.sphinx.platform.resources.syncing.AbstractResourceSynchronizer, which implements IResourceChangeListener from Eclipse and is a base class for handling resource changes. It basically executes the syncRequests:

@Override
	public void resourceChanged(final IResourceChangeEvent event) {
		try {
			switch (event.getType()) {
			case IResourceChangeEvent.PRE_CLOSE:
				getSyncRequest().init();
				doPreClose(event);
				getSyncRequest().perform();
				break;
			case IResourceChangeEvent.PRE_DELETE:
				getSyncRequest().init();
				doPreDelete(event);
				getSyncRequest().perform();
				break;

In our case, the sync requests are of type ModelDescriptorSyncRequest, because that is how createSyncRequest() is overridden in org.eclipse.sphinx.emf.internal.model.ModelDescriptorSynchronizer.

Now the Model Descriptors are actually processed in this class:

@Override
	public void perform() {
		if (!canPerform()) {
			return;
		}
		if (projectsToMoveModelDescriptorsFor.size() > 0) {
			moveModelDescriptors(new HashMap<IProject, IProject>(projectsToMoveModelDescriptorsFor));
			projectsToMoveModelDescriptorsFor.clear();
		}
...
	private void moveModelDescriptors(final Map<IProject, IProject> projects) {
		Assert.isNotNull(projects);

		if (projects.size() > 0) {
			/*
			 * !! Important Note !! Perform as asynchronous operation with exclusive access to workspace root for the
			 * following two reasons: 1/ In order to avoid deadlocks. The workspace is locked while
			 * IResourceChangeListeners are processed (exclusive workspace access) and updating the model descriptor
			 * registry may involve creating transactions (exclusive model access). In cases where another thread is
			 * around while we are called here which already has exclusive model access but waits for exclusive
			 * workspace access we would end up in a deadlock otherwise. 2/ In order to make sure that the model
			 * descriptor registry gets updated only AFTER all other IResourceChangeListeners have been processed which
			 * may be present and rely on the model descriptor registry's state BEFORE the update.
			 */
			Job job = new Job(Messages.job_movingModelDescriptors) {
				@Override
				protected IStatus run(IProgressMonitor monitor) {
					try {
						SubMonitor progress = SubMonitor.convert(monitor, projects.size());
						if (progress.isCanceled()) {
							throw new OperationCanceledException();
						}

						for (IProject oldProject : projects.keySet()) {
							IProject newProject = projects.get(oldProject);
							ModelDescriptorRegistry.INSTANCE.moveModels(oldProject, newProject);

And we finally arrive at org.eclipse.sphinx.emf.model.ModelDescriptorRegistry, where the information about all the loaded models is actually kept (to be detailed in a further blog post).


Sphinx is listening – editingDomainFactoryListeners


One of the mechanisms that Sphinx uses to get notified about changes in your Sphinx-based models is registering listeners when the editing domain for your model is created. This mechanism can also be used for your own listeners.

Sphinx defines the extension point org.eclipse.sphinx.emf.editingDomainFactoryListeners and in org.eclipse.sphinx.emf it registers a number of its own listeners:

Listeners registered by Sphinx

The registry entries for that extension point are processed in EditingDomainFactoryListenerRegistry.readContributedEditingDomainFactoryListeners(). This creates an EditingDomainFactoryListenerRegistry.ListenerDescriptor, which stores the class name and registers it internally for the meta-models that are specified in the extension. The ListenerDescriptor also contains the code to actually load and instantiate the specified class (the base type is ITransactionalEditingDomainFactoryListener).

The method EditingDomainFactoryListenerRegistry.getListeners(IMetaModelDescriptor) can be used to get all the ITransactionalEditingDomainFactoryListener that are registered for a given IMetaModelDescriptor.

These are in turn invoked from ExtendedWorkspaceEditingDomainFactory.firePostCreateEditingDomain(Collection, TransactionalEditingDomain). ExtendedWorkspaceEditingDomainFactory is responsible for creating the editing domain, and through this feature we have a nice mechanism to register custom listeners each time an editing domain is created by Sphinx (e.g. after models have been loaded).
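To plug in your own listener, you contribute it via that extension point and implement the listener interface. A minimal sketch (the package and callback names are assumed from the firePostCreateEditingDomain counterpart above, so verify them against the Sphinx sources):

import org.eclipse.emf.transaction.TransactionalEditingDomain;
import org.eclipse.sphinx.emf.domain.factory.ITransactionalEditingDomainFactoryListener;

public class MyEditingDomainFactoryListener implements ITransactionalEditingDomainFactoryListener {

	@Override
	public void postCreateEditingDomain(TransactionalEditingDomain editingDomain) {
		// Called each time Sphinx creates an editing domain, e.g. after
		// models have been loaded - a good place to attach ResourceSetListeners.
	}

	@Override
	public void preDisposeEditingDomain(TransactionalEditingDomain editingDomain) {
		// Clean up anything registered above.
	}
}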

Sphinx is blacklisting your proxies


EMF proxy resolving is one area where Sphinx adds functionality. Sphinx was designed to support, amongst others, AUTOSAR models, and with AUTOSAR models references have some special traits:

  • References are based on fully qualified names
  • Any number of .arxml files can be combined to form a model
  • Models can be merged based on their names. So any number of resources can contain a package with a fully qualified name of “/AUTOSAR/p1/p2/p3/…pn”

Blacklisting

In scenarios like this, it would obviously be highly inefficient to try to resolve a proxy each time it is encountered in code. So Sphinx can “blacklist” the proxies.

The information about the proxy blacklist is used in org.eclipse.sphinx.emf.resource.ExtendedResourceSetImpl. At the end of the getEObject() methods we see that the proxy is added to the blacklist if it could not be resolved:

if (proxyHelper != null) {
			// Remember proxy as known unresolved proxy
			proxyHelper.getBlackList().addProxyURI(uri);
		}

And of course, at the beginning of the method, a null is returned immediately if the proxy has been blacklisted:

if (proxyHelper != null) {
			// If proxy URI references a known unresolved proxy then don't try to resolve it again
			if (proxyHelper.getBlackList().existsProxyURI(uri)) {
				return null;
			}

Getting de-listed

Now Sphinx has to find out when it makes sense to no longer blacklist a proxy but try to resolve it again. For this, it registers an org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.ModelIndexUpdater via the extension point described in the previous post.

So the ModelIndexUpdater will react to changes:

@Override
	public void resourceSetChanged(ResourceSetChangeEvent event) {
		ProxyHelper proxyHelper = ProxyHelperAdapterFactory.INSTANCE.adapt(event.getEditingDomain().getResourceSet());
		List<?> notifications = event.getNotifications();
		for (Object object : notifications) {
			if (object instanceof Notification) {
				Notification notification = (Notification) object;
				Object notifier = notification.getNotifier();
				if (notifier instanceof Resource) {
					Resource resource = (Resource) notifier;
					if (notification.getFeatureID(Resource.class) == Resource.RESOURCE__IS_LOADED) {
						if (resource.isLoaded()) {
							proxyHelper.getBlackList().updateIndexOnResourceLoaded(resource);
						} else {
							// FIXME when called on post commit, resource content is empty
							proxyHelper.getBlackList().updateIndexOnResourceUnloaded(resource);
						}
					}
				} else if (notifier instanceof EObject) {
					// Check if new model objects that are potential targets for black-listed proxy URIs have been added
					EStructuralFeature feature = (EStructuralFeature) notification.getFeature();
					if (feature instanceof EReference) {
						EReference reference = (EReference) feature;
						if (reference.isContainment()) {
							if (notification.getEventType() == Notification.SET || notification.getEventType() == Notification.ADD
									|| notification.getEventType() == Notification.ADD_MANY) {
								// Get black-listed proxy URI pointing at changed model object as well as all
								// black-listed proxy URIs pointing at model objects that are directly and indirectly
								// contained by the former removed
								proxyHelper.getBlackList().updateIndexOnResourceLoaded(((EObject) notifier).eResource());
							}
						}
					}

The class that actually implements the blacklist is org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.ModelIndex, which mostly delegates to org.eclipse.sphinx.emf.internal.ecore.proxymanagement.blacklist.MapModelIndex.

When a resource has changed or is being loaded, the MapModelIndex tries to see if the blacklisted proxies can now be resolved against that resource:

/**
	 * Try to resolve proxy objects against this resource.
	 */
	public void updateIndexOnResourceLoaded(Resource resource) {
		if (resource != null && !resource.getContents().isEmpty()) {
			for (URI proxyURI : new ArrayList<URI>(proxyURIs)) {
				// FIXME Potential EMF bug: NumberFormatExeption raised in XMIResourceImpl#getEObject(String) upon
				// unexpected URI fragment format.
				try {
					// If proxy URI is not fragment-based, i.e. includes segments pointing at the target resource, we
					// have to make sure that it matches URI of loaded resource
					if (proxyURI.segmentCount() == 0 || resource.getURI().equals(proxyURI.trimFragment().trimQuery())) {
						// See if loaded resource contains an object matching proxy URI fragment
						if (resource.getEObject(proxyURI.fragment()) != null) {
							removeProxyURI(proxyURI);
						}
					}
				} catch (Exception ex) {
					// Ignore exception
				}
			}
		}
	}

When a resource is actually removed from the model, all proxies from that resource are removed from the index:

public void updateIndexOnResourceUnloaded(Resource resource) {
		if (resource != null) {
			TreeIterator<EObject> iterator = resource.getAllContents();
			while (iterator.hasNext()) {
				EObject currentObject = iterator.next();
				if (currentObject.eIsProxy() && existsProxyURI(((InternalEObject) currentObject).eProxyURI())) {
					removeProxyURI(((InternalEObject) currentObject).eProxyURI());
				}
			}
		}
	}

So Sphinx avoids repeatedly re-resolving proxies that are known to be unresolvable.

Sphinx – how to access your models


When you work with Sphinx as a user in an Eclipse runtime, e.g. with the Sphinx model explorer, Sphinx does a lot of work in the background to update models, provide a single shared model to all editors, etc. But what do you do when you want to access Sphinx models from your own code?

EcorePlatformUtil

EcorePlatformUtil is one of the important classes with a lot of methods that help you access your models. Two important methods are

  • getResource(…)
  • loadResource(…)

They come in a variety of parameter variations. The important thing is that getResource(…) will not load your resource if it is not yet loaded. That is a little bit different from the standard ResourceSet.getResource(…) with its loadOnDemand parameter.

On the other hand, loadResource(…) will only load your resource if it is not loaded yet. If it is, there is no runtime overhead. Let’s have a look at the code:

 

public static Resource loadResource(IFile file, Map<?, ?> options) {
		TransactionalEditingDomain editingDomain = WorkspaceEditingDomainUtil.getEditingDomain(file);
		if (editingDomain != null) {
			return loadResource(editingDomain, file, options);
		}
		return null;
	}

Sphinx uses its internal registries to find the TransactionalEditingDomain that the file belongs to and then calls loadResource(…):

public static Resource loadResource(final TransactionalEditingDomain editingDomain, final IFile file, final Map<?, ?> options) {
		if (editingDomain != null && file != null) {
			try {
				return TransactionUtil.runExclusive(editingDomain, new RunnableWithResult.Impl<Resource>() {
					@Override
					public void run() {
						URI uri = createURI(file.getFullPath());
						setResult(EcoreResourceUtil.loadResource(editingDomain.getResourceSet(), uri, options));
					}
				});
			} catch (InterruptedException ex) {
				PlatformLogUtil.logAsError(Activator.getPlugin(), ex);
			}
		}
		return null;
	}

So we have to look at org.eclipse.sphinx.emf.util.EcoreResourceUtil to see what happens next. There is just a little fragment

public static Resource loadResource(ResourceSet resourceSet, URI uri, Map<?, ?> options) {
		Assert.isNotNull(uri);
		return loadResource(resourceSet, uri, options, true);
	}

that leads us to

private static Resource loadResource(ResourceSet resourceSet, URI uri, Map<?, ?> options, boolean loadOnDemand) {
		Assert.isNotNull(uri);

		// Create new ResourceSet if none has been provided
		if (resourceSet == null) {
			resourceSet = new ScopingResourceSetImpl();
		}

		// Try to convert given URI to platform:/resource URI if not yet so
		/*
		 * !! Important Note !! This is necessary in order to avoid that resources which are located inside the
		 * workspace get loaded multiple times just because they are referenced by URIs with different schemes. If given
		 * resource set were an instance of ResourceSetImpl this extra conversion wouldn't be necessary.
		 * org.eclipse.emf.ecore.resource.ResourceSet.getResource(URI, boolean) normalizes and compares given URI and to
		 * normalized copies of URIs of already present resources and thereby avoids multiple loading of same resources
		 * on its own. This is however not true when ExtendedResourceSetImpl or a subclass of it is used. Herein, URI
		 * normalization and comparison has been removed from
		 * org.eclipse.sphinx.emf.resource.ExtendedResourceSetImpl.getResource(URI, boolean) in order to increase
		 * runtime performance.
		 */
		if (!uri.isPlatform()) {
			uri = convertToPlatformResourceURI(uri);
		}

		// Just get model resource if it is already loaded
		Resource resource = resourceSet.getResource(uri.trimFragment().trimQuery(), false);

		// Load it using specified options if not done so yet and a demand load has been requested
		if ((resource == null || !resource.isLoaded()) && loadOnDemand) {
			if (exists(uri)) {
				if (resource == null) {
					String contentType = getContentTypeId(uri);
					resource = resourceSet.createResource(uri, contentType);
				}
				if (resource != null) {
					try {
						// Capture errors and warnings encountered during resource creation
						/*
						 * !! Important note !! This is necessary because the resource's errors and warnings are
						 * automatically cleared when the loading begins. Therefore, if we don't retrieve them at this
						 * point all previously encountered errors and warnings would be lost (see
						 * org.eclipse.emf.ecore.resource.impl.ResourceImpl.load(InputStream, Map<?, ?>) for details)
						 */
						List<Resource.Diagnostic> creationErrors = new ArrayList<Resource.Diagnostic>(resource.getErrors());
						List<Resource.Diagnostic> creationWarnings = new ArrayList<Resource.Diagnostic>(resource.getWarnings());

						// Load resource
						resource.load(options);

						// Make sure that no empty resources are kept in resource set
						if (resource.getContents().isEmpty()) {
							unloadResource(resource, true);
						}

						// Restore creation time errors and warnings
						resource.getErrors().addAll(creationErrors);
						resource.getWarnings().addAll(creationWarnings);
					} catch (Exception ex) {
						// Make sure that no empty resources are kept in resource set
						if (resource.getContents().isEmpty()) {
							// Capture errors and warnings encountered during resource load attempt
							/*
							 * !! Important note !! This is necessary because the resource's errors and warnings are
							 * automatically cleared when it gets unloaded. Therefore, if we didn't retrieve them at
							 * this point all errors and warnings encountered during loading would be lost (see
							 * org.eclipse.emf.ecore.resource.impl.ResourceImpl.doUnload() for details)
							 */
							List<Resource.Diagnostic> loadErrors = new ArrayList<Resource.Diagnostic>(resource.getErrors());
							List<Resource.Diagnostic> loadWarnings = new ArrayList<Resource.Diagnostic>(resource.getWarnings());

							// Make sure that resource gets unloaded and removed from resource set again
							try {
								unloadResource(resource, true);
							} catch (Exception e) {
								// Log unload problem in Error Log but don't let it go along as runtime exception. It is
								// most likely just a consequence of the load problems encountered before and therefore
								// should not prevent those from being restored as errors and warnings on resource.
								PlatformLogUtil.logAsError(Activator.getPlugin(), e);
							}

							// Restore load time errors and warnings on resource
							/*
							 * !! Important Note !! The main intention behind restoring recorded errors and warnings on
							 * the already unloaded resource is to enable these errors/warnings to be converted to
							 * problem markers by the resource problem handler later on (see
							 * org.eclipse.sphinx.emf.internal.resource.ResourceProblemHandler#resourceSetChanged(
							 * ResourceSetChangeEvent)) for details).
							 */
							resource.getErrors().addAll(loadErrors);
							resource.getWarnings().addAll(loadWarnings);
						}

						// Record exception as error on resource
						Throwable cause = ex.getCause();
						Exception exception = cause instanceof Exception ? (Exception) cause : ex;
						resource.getErrors().add(
								new XMIException(NLS.bind(Messages.error_problemOccurredWhenLoadingResource, uri.toString()), exception, uri
										.toString(), 1, 1));

						// Re-throw exception
						throw new WrappedException(ex);
					}
				}
			}
		}
		return resource;
	}

  • First, the standard EMF ResourceSet.getResource() is used to see if the resource is already there (note that loadOnDemand is false)
  • Otherwise the resource is actually created and loaded; if it does not have any content, it is immediately removed again
  • Information about loading errors / warnings is stored at the resource

EcorePlatformUtil.getResource(IFile)

This method will not load the resource as can be seen from the code:

public static Resource getResource(final IFile file) {
		final TransactionalEditingDomain editingDomain = WorkspaceEditingDomainUtil.getCurrentEditingDomain(file);
		if (editingDomain != null) {
			try {
				return TransactionUtil.runExclusive(editingDomain, new RunnableWithResult.Impl<Resource>() {
					@Override
					public void run() {
						URI uri = createURI(file.getFullPath());
						setResult(editingDomain.getResourceSet().getResource(uri, false));
					}
				});
			} catch (InterruptedException ex) {
				PlatformLogUtil.logAsError(Activator.getPlugin(), ex);
			}
		}
		return null;
	}
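Putting the two methods together, accessing a model from your own code could look like this (a sketch – the file path is made up and error handling is omitted):

import java.util.Collections;

import org.eclipse.core.resources.IFile;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.Path;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.sphinx.emf.util.EcorePlatformUtil;

public class SphinxModelAccess {

	public static Resource getOrLoadModel() {
		// Hypothetical workspace file - adapt project and file name.
		IFile file = ResourcesPlugin.getWorkspace().getRoot()
				.getFile(new Path("/MyProject/model.arxml"));

		// Does not load: returns null if the model is not loaded yet.
		Resource resource = EcorePlatformUtil.getResource(file);

		// Loads only if necessary; otherwise the already loaded
		// instance is returned without runtime overhead.
		if (resource == null) {
			resource = EcorePlatformUtil.loadResource(file, Collections.emptyMap());
		}
		return resource;
	}
}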

ModelLoadManager

In addition, the ModelLoadManager provides more integration with the workspace framework (Jobs, ProgressMonitor, etc.):

ModelLoadManager.loadFiles(Collection, IMetaModelDescriptor, boolean, IProgressMonitor) supports an “async” parameter. If it is true, the model loading will be executed within an org.eclipse.core.runtime.jobs.Job. The method in turn calls

ModelLoadManager.runDetectAndLoadModelFiles(Collection, IMetaModelDescriptor, IProgressMonitor), which sets up the progress monitor and performance statistics, itself calling detectFilesToLoad and runLoadModelFiles. detectFilesToLoad will be discussed in a dedicated posting.

runLoadModelFiles sets up the progress monitor and performance statistics and calls loadModelFilesInEditingDomain, finally delegating down to EcorePlatformUtil.loadResource(editingDomain, file, loadOptions), which we discussed above.
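From client code, a typical entry point could look like this (a sketch; the INSTANCE singleton and the descriptor argument are assumptions to be checked against your Sphinx/Artop version):

import java.util.Collections;

import org.eclipse.core.resources.IFile;
import org.eclipse.sphinx.emf.metamodel.IMetaModelDescriptor;
import org.eclipse.sphinx.emf.workspace.loading.ModelLoadManager;

public class ModelLoading {

	// async = true: the loading is scheduled in a background Job.
	public static void loadAsync(IFile file, IMetaModelDescriptor descriptor) {
		ModelLoadManager.INSTANCE.loadFiles(
				Collections.singleton(file), descriptor, true, null);
	}
}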

Towards a generic splitable framework implementation – Post #2


This blog post is the second in a series of posts detailing some concepts for a generic framework for the support of splitables. For this concept we introduce the term “slice”: a slice is one of the partial model elements that together form the merged splitable.

Effects of being splittable

If a model element (EObject) is splitable or has a splitable feature, this has two kinds of impact:

  1. Impact on the “behavior” of the class itself
  2. Impact on other classes that have features of a splitable type

Impact on the class itself

  • any feature call on such an object needs to join the features of all of this object’s slices. This includes primitive and more complex features.

Impact on other classes

  • By feature: any feature call on another class’ feature that could contain a splittable object needs to filter the result of the feature so that the slices are merged.
    This affects the entire model – all classes could be affected, since all of them could have a feature of a splitable type.
    It also affects any feature whose type is a super-class of a splittable object.
  • By containment/aggregation: if a class is splittable, then all possible classes in all containment trees that lead to a splittable class must also be splittable. Otherwise it would not really be possible to create a splittable tree, since the splittable element could only be in one file when the containment hierarchy is not splittable.
  • By inheritance: that also implies that if a class gains “splittable” through this containment rule, all of its subclasses must be splittable too, since they could hold the specific instance of the containment feature.

Components of a splittable tooling

  • ISplitKeyProvider: for a given object, identifies a “key” that is used to identify the slices that constitute a splitable. E.g., this could be the fully qualified name.
  • ISliceFinder: given an object, finds all objects that constitute that splittable.
  • ISplittingConfiguration: given an object or structural feature, decides whether it is splittable.
  • IPrimarySliceFinder: given an object, finds the object that represents the joined splitable.
  • IFeatureCalculator: creates a joint view of the splittables.
  • Arbiter: decides where to put new elements / how to move elements.

ISplitKeyProvider

For an object, this returns the key that is used to identify the slices that together make up a merged element. In a simple implementation, this could e.g. be the fully qualified name. For some AUTOSAR elements, it is the FQN plus additional variant information.
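As a Java sketch, the key provider could be as small as this (the interface shape is our proposal for the framework, not an existing API):

import org.eclipse.emf.ecore.EObject;

public interface ISplitKeyProvider {
	/**
	 * Returns the key identifying all slices that together make up one
	 * merged element, e.g. the fully qualified name of the element.
	 */
	Object getSplitKey(EObject object);
}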

Slice Finder

In a simple implementation, all the slices can be identified through their key, and the slice finder finds all objects with the same key. A more advanced implementation would add caching by key.

Primary Slice Finder

Returns the object that represents the fused element for a given split key. That could be a new object, in case a model is copied, or, if we inject behavior with AspectJ, one of the slices. Important: the result must always be unique, i.e. for a given split key this must _always_ return the same object.
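A sketch of how that uniqueness guarantee could be kept, assuming the key provider sketched above and a slices(...) lookup as used in the next post of this series (the class shape is our proposal):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.eclipse.emf.ecore.EObject;

public class CachingPrimarySliceFinder {

	/** Minimal stand-in for the ISliceFinder component described above. */
	public interface ISliceFinder {
		List<EObject> slices(EObject object);
	}

	private final Map<Object, EObject> primaryByKey = new HashMap<Object, EObject>();
	private final ISplitKeyProvider keyProvider;
	private final ISliceFinder sliceFinder;

	public CachingPrimarySliceFinder(ISplitKeyProvider keyProvider, ISliceFinder sliceFinder) {
		this.keyProvider = keyProvider;
		this.sliceFinder = sliceFinder;
	}

	/** Always returns the same representative object for a given split key. */
	public EObject primarySlice(EObject anySlice) {
		Object key = keyProvider.getSplitKey(anySlice);
		EObject primary = primaryByKey.get(key);
		if (primary == null) {
			// Pick one slice as the representative and remember it.
			primary = sliceFinder.slices(anySlice).get(0);
			primaryByKey.put(key, primary);
		}
		return primary;
	}
}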

IFeatureCalculator

Must be able to create a joint view of all the slices. This can happen e.g. on copy, when creating a shadow model, or dynamically, creating a merged view e.g. with AspectJ.

Joining Structural Features

  • For references / containments that could contain a splittable, only return the primary slice within the feature.
    • 0..1: intercept the call and replace the result with the primary slice
    • 0..n: return a new list that replaces the contents with their primary slices
  • Primitives / all features with 0..1 multiplicity (including references and containments):
    • Return the value when all slices have the same value or no value at all
    • Throw an exception otherwise (merge conflict)
  • This needs a list implementation that delegates to the contents of the original list and should always return the same list – the feature calculator is responsible for that. It could maintain its own cache or add a new value by means of AspectJ.

Towards a generic splitable framework implementation – Post #3


In the previous blog posts I have laid out the basic thoughts about a framework for a splitable implementation. As indicated, we are using AspectJ to intercept the method calls to the EMF model’s classes and change their behavior to present a merged model to the clients. If the AspectJ plugins are used in the client’s executable, it will see merged models.

So for a model com.mymodel we intercept all calls to the getters by declaring the pointcut

@Pointcut("execution (public * com.mymodel.*.get* ()) && this(t) ")
	public void allGetMethods(EObject t) {}

and in the actual advice code we want to do the following:

  • Check if the return type (either the direct return type or the type parameter of a parameterized type) is splitable. If so, we need to replace the returned object with the primary slice.
  • If the object that the method is invoked on is splitable, we need to find all slices and return the joined results.

This is preliminary code for illustration (not yet finished for all cases, since it is just used for a specific meta-model right now):

@Around("allGetMethods(t) && !aspectFlow() && !fromResourceSet() && !SphinxResourceUtil() && !ProxyResolution()")
	 public Object  
	 splitPackage(ProceedingJoinPoint thisJoinPoint, EObject t) throws Throwable {
		 boolean paramTypeIsAssignableFromSplittable = false ;
		 boolean featuresHaveToBeJoined = split.getSplittingConfiguration().isSplittable(t);
		 //	Important:
		 // Some Eclipse elements (navigator) etc. will eat  class cast exceptions that exist from returning wrong elements
		 // This could be from returning a wrong element here. If this aspect throws an exception, it should be logged here
		 // Otherwise it might not be shown anywhere
		 //
		 //		 	System.err.println("Override "+t.getClass()+"  getter "+thisJoinPoint.getStaticPart().getSignature().getName());
		 
		// Get information about the method that is being executed / the Join Point
		 //
		Method method = ((MethodSignature)thisJoinPoint.getStaticPart().getSignature()).getMethod();
		Type returnType = ((MethodSignature)thisJoinPoint.getStaticPart().getSignature()).getMethod().getGenericReturnType();
	
		try{
			
			// If the return Type is a parameterized Type, the check should actually work on the parameter
			//
			if (returnType instanceof ParameterizedType) {
				ParameterizedType pt = (ParameterizedType) returnType;
				paramTypeIsAssignableFromSplittable = split.getSplittingConfiguration().isAssignableFromSplittable((Class)(pt.getActualTypeArguments()[0]));
			}
		}catch(Exception e) {
			e.printStackTrace();
			throw e;
		}
		
		// Not all returned features are joined yet, since the current implementation just supports joining of 
		// packages. To be generalized in the future.
		
		Object proceed = thisJoinPoint.proceed();
		if(proceed instanceof EObject && split.getSplittingConfiguration().isSplittable((EObject)proceed)) {
			// System.err.println("Single Object");
			return split.getPrimarySliceProvider().primarySlice((EObject)proceed);
		} else if (proceed instanceof EList) {
			// System.err.println("Collection "+proceed);
			
			// When the object (this) itself is splittable, then we need to take all the other references into account
			//
			
			if(featuresHaveToBeJoined) {
				// Object itself is splittable. Return sum of all slices
				//
				// System.err.println("Invoke method");
				proceed = split.getFeatureCalculator().calculate(t, method);
			} 
			if(paramTypeIsAssignableFromSplittable )
			{
				// Filter results if one of the contents could be splittable
				//
				// System.err.println("Filter results if one of the contents could be splittable");
				proceed =  split.getPrimaryListFilter().filter((EList<EObject>)proceed);
			}
			// System.err.println("//" +proceed);
			return proceed;
		}
		return proceed;
		
	 }

If we look at the implementation of the FeatureCalculator, we see that internally we call the same method on all the slices:

public EList<? extends EObject> calculate_internal(EObject ob, Method method) {
		ArrayList<EObject> result = Lists.<EObject> newArrayList();

		List<EObject> slices = sliceFinder.slices(ob);
//		System.out.println("DBGM " + ob);
//		System.out.println(slices);
		for (EObject sl : slices) {
			try {
				result.addAll((List<EObject>) method.invoke(sl));
			} catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
				// TODO Auto-generated catch block
				e.printStackTrace();
			}
		}
//		System.out.println(result);

		EList<EObject> filter = primaryListFilter.filter(result);
		SplitEList<EObject> eList = new SplitEList<EObject>(ob,method,filter,sliceFinder);
		
		return eList;

	}

But why does this not cause an infinite loop, given that the get methods are intercepted? Because through the definition of the join points we make sure that our aspect is not executed when called from an aspect or from the splitable framework’s classes. That means that our implementation of the framework can safely operate on the source models.

We are almost set. But when we start a client with that AspectJ configuration, we will see that EMF calls are intercepted when loading a model. Obviously, the models should first be loaded as source models; only later accesses should be intercepted to present the joined model.

We can configure that through additional join points:

The following additional definitions are used to disable the aspect if the current call stack (control flow) originates from certain EMF / Sphinx infrastructure classes:

@Pointcut("cflowbelow(execution( *  org.eclipse.emf.ecore.resource.impl.ResourceSetImpl.* (..)))") 
	public void fromResourceSet() {}
	
	@Pointcut("cflow(execution(*  org.eclipse.sphinx.emf.util.EcoreResourceUtil.loadResource (..)))")
	public void SphinxResourceUtil() {}
	
	@Pointcut("cflow(execution(* org.eclipse.sphinx.emf.workspace.loading.ModelLoadManager.forceProxyResolution (..)))")
	public void ProxyResolution() {}

Result

With this configuration, our models load fine and the Sphinx model explorer shows the joined elements wherever a slice is in the model. Right now our implementation supports the joining of models and packages (i.e., it supports containment references), but it can be extended to do more.

Upcoming

The following features will be added to the framework:

  • Intercepting more of the necessary methods (e.g. eContainer)
  • Support for the modification of models (i.e., the intercepted set methods will modify the source models). The component “Arbiter” will be introduced to decide which source models to modify
  • (De-)Activation of the joined view based on transactions: a settable flag to indicate whether the client operates on the joined view or on the source view
  • Integration with IncQuery

Sphinx

We hope to release the framework as part of the Sphinx project.

Writing Fast tests for uniqueness in AUTOSAR (and other models) with Java 8


One of the recurring tasks when writing model checks for AUTOSAR (but also for other models) is to check for uniqueness according to some criterion. In AUTOSAR, this could be that all elements in a collection must be unique with respect to their shortName, or, e.g., that all ComMChannels must be unique with respect to their channel ids.

Such a test is very easily written, but many of the ad-hoc implementations that we see perform badly when the model grows – which is easily missed when testing with small models.

Assume that we want to test ComMChannels for uniqueness based on their channel ids. To illustrate the problem, consider the example from our COMASSO slides: a straightforward implementation with Xpand Check (in the COMASSO framework) checks, for each ComMChannel, whether any other ComMChannel in the same ComM has the same ID.
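Translated into plain Java, that naive pattern looks roughly like this (a sketch of the idea, using the model API from the Sphinx Check example at the end of this post, not the actual Xpand code from the slide):

import java.util.Objects;

void checkDuplicateIdsNaive(ComMConfigSet comM) {
	for (ComMChannel a : comM.getComMChannels()) {
		for (ComMChannel b : comM.getComMChannels()) {
			// For every channel, scan all channels again: O(n^2).
			if (a != b && Objects.equals(a.getComMChannelId(), b.getComMChannelId())) {
				// report a duplicate channel id for 'a'
			}
		}
	}
}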

This costs a lot of performance, since the effort is quadratic in the number of channels. The trick is to loop only once and use a Set to store the data. This can be done in Xpand and other languages. Java 8’s lambdas make it especially nice, since we can have a single method that collects the duplicates:

import java.util.function.Function;

import com.google.common.collect.HashMultimap;
import com.google.common.collect.Multimap;

public static <T,X> Multimap<T,X> groupBy(Iterable<X> l, Function<X,T> f) {
		Multimap<T, X> multiMap = HashMultimap.<T,X>create();
		for(X i : l) {
			// Group each element by its computed key; duplicates end up
			// in the same key's collection.
			multiMap.put(f.apply(i), i);
		}
		return multiMap;
	}

This code uses generics as well as function arguments to provide a very generic interface. We can use it anywhere to search for duplicates in a collection.
Looking for duplicate ComMChannel ids then looks like this:

Multimap<Integer, ComMChannel> r = groupBy(comM.getComMChannels(), 
x -> x.getComMChannelId());

And thanks to generics and Java 8, we can use the same groupBy to look for duplicate shortNames:

Multimap<String,Identifiable> r = groupBy(iterable, 
							(Identifiable x) -> x.getShortName() );

Similar code could also be written in the Xtend2 language.

With the new Sphinx Check framework, a check in an Artop based tool might look like this:

@Check(constraint="NAMENOTUNIQUE",categories="Basic")
	void checkDuplicate(ComMConfigSet comM) {
		Multimap<Integer, ComMChannel> r = groupBy(comM.getComMChannels(), x -> x.getComMChannelId());
		Iterable<Integer> problems = Iterables.filter(r.keySet(), 
				(x) -> r.get(x).size()  > 1 );
		
		for(Integer name : problems) {
			for(ComMChannel o : r.get(name)) {
				issue(....);
			}
		}
		
	}

Lightweight Product Line AUTOSAR BSW configuration with physical files and splitables


During AUTOSAR development, the configuration of the basic software is a major task.  Usually, some of the parameters are going to be the same for all your projects (company-wide, product line) and some of them will be specific to a given project / ECU. And you might have a combination of these within the same BSW configuration container.

One of the approaches uses the AUTOSAR Split(t)able concept, which allows you to split the contents of a model element over more than one physical ARXML file.

So let us assume we are going to configure the DemGeneral container. Assume further that we have a company-wide policy for all projects to set DemAvailabilitySupport to 1, and that the DemBswErrorBufferSize within the same container should be set specifically for each ECU.

So let us create an .arxml for the company-wide settings. That might be distributed through some general channel, e.g. when setting up a new project.

2015-02-18_10h11_58

Now we can place an ECU-specific .arxml next to it in the same project.

2015-02-18_10h11_58b

Now any tool that supports splitables (e.g. the COMASSO BSWDT tooling) will be able to create a merged model. The following screenshots are based on our own prototype of AspectJ-based splitable support for Artop. By simply using the context menu

2015-02-18_10h37_26

we will see the merged view:

2015-02-18_10h19_18

You can see a few things in that screenshot:

  • First, and most important, the two physically separated DemGenerals are now actually one. The tooling (code generators etc.) will now only see one model, one CONF package and one DemGeneral.

Our demonstrator shows additional information as decorators after the element (decorators can be deactivated through preferences in Eclipse).

  • The “[x slices]” decorator shows how many elements are actually used to make up the joined element. We have two physical DemGenerals, so it says “2 slices”. We have one DemAvailabilitySupport, so it shows “1 slices”.
  • The decorator at the end shows the physical files that are actually involved for the merging of a given element.

Now if, for some reason, a new version of the company-wide .arxml is provided, we only need to exchange / update that file – there is no need for a complex diff and merge.

There are some use cases that cannot be covered with this scenario, but it opens up a lot of possibilities for managing variations in BSW configuration on the file level.
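To illustrate the underlying idea, here is a minimal Java sketch of how a splitable-aware tool might join physical fragments of the same container. The types and method names are invented and deliberately simplified – this is not the Artop/BSWDT implementation:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// One "slice" of a container as found in a single physical .arxml file.
class Slice {
    final String qualifiedName;           // e.g. "/Conf/Dem/DemGeneral"
    final Map<String, String> parameters; // parameter definition -> value

    Slice(String qualifiedName, Map<String, String> parameters) {
        this.qualifiedName = qualifiedName;
        this.parameters = parameters;
    }
}

class SplitableMerger {
    // Joins all slices with the same qualified name into one logical container.
    Map<String, Map<String, String>> merge(List<Slice> slices) {
        Map<String, Map<String, String>> joined = new LinkedHashMap<>();
        for (Slice slice : slices) {
            joined.computeIfAbsent(slice.qualifiedName, k -> new LinkedHashMap<>())
                  .putAll(slice.parameters);
        }
        return joined;
    }
}

With the company-wide file contributing DemAvailabilitySupport and the ECU-specific file contributing DemBswErrorBufferSize, both values end up in the same joined DemGeneral – which is what the “2 slices” decorator in the screenshots reflects.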


Visualizing Build Action Manifests with COMASSO BSWDT


The AUTOSAR Build Action Manifest (BAMF) specifies a standardized exchange format / meta-model for the build steps required in building AUTOSAR software. In contrast to other build dependency tools such as make, the BAMFs support the modeling of dependencies on a model level. The dependencies can reference Ecuc Parameter definitions to indicate which parts of the BSW are going to be updated or consumed by a given build step. However, even for the standard basic software these dependencies can be very complex.

The COMASSO BSWDT tool provides a feature for visualizing these dependencies. In the example that we use for training the Xpand framework with the BSW code generators, there are three build steps: updating the model, validation and generation.

2015-03-14_17h46_06

The order of the build steps is inferred by the build framework from the modeled dependencies. But in the build control view we cannot really see the network of dependencies, only the final flat list.

With File->Export… we can export the BAMF dependencies as a GML graph file.

2015-03-14_17h51_31

The generated file can then be opened in any tool that can deal with the GML format. We use the free edition of yEd and see the dependencies:

2015-03-14_17h55_50

To optimize the visualization, you should choose the hierarchical layout.

2015-03-14_17h58_09

The solid lines indicate model dependencies, dotted lines explicit dependencies. If a real-life project yields a graph that is too complex, you can easily delete nodes from the list on the bottom left.
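For readers unfamiliar with GML: it is a simple text format of nested node and edge records. A minimal Java sketch of how such a dependency graph could be emitted (the build-step representation here is invented; the actual BSWDT exporter may differ):

import java.io.PrintWriter;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class GmlExporter {
    // Writes one GML node per build step and one directed edge per dependency.
    // Assumes every step referenced as a prerequisite is also a key of the map.
    void export(Map<String, List<String>> dependsOn, PrintWriter out) {
        out.println("graph [");
        out.println("  directed 1");
        Map<String, Integer> ids = new LinkedHashMap<>();
        int id = 0;
        for (String step : dependsOn.keySet()) {
            ids.put(step, ++id);
            out.printf("  node [ id %d label \"%s\" ]%n", ids.get(step), step);
        }
        for (Map.Entry<String, List<String>> entry : dependsOn.entrySet()) {
            for (String prerequisite : entry.getValue()) {
                out.printf("  edge [ source %d target %d ]%n",
                        ids.get(prerequisite), ids.get(entry.getKey()));
            }
        }
        out.println("]");
        out.flush();
    }
}

Each build step becomes a node and each modeled dependency a directed edge – exactly the structure that yEd's hierarchical layout handles well.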


Maven, Tycho, Surefire, AspectJ and Equinox Weaving


This is a very short technical post. But since it took us some time to find the solution (there is little information on the Web), we wanted to add another possible search hit for others.

Running JUnit tests from within Eclipse works well with AspectJ runtime weaving in Equinox. Finding information on how to activate the same in Maven Tycho surefire was more difficult.

You have to configure the surefire plugin to use the Equinox weaving hook and start the AspectJ weaving bundle:


<bundleStartLevel>
  <bundle>
    <id>org.eclipse.equinox.weaving.aspectj</id>
    <level>2</level>
    <autoStart>true</autoStart>
  </bundle>
</bundleStartLevel>
<frameworkExtensions>
  <frameworkExtension>
    <groupId>p2.osgi.bundle</groupId>
    <artifactId>org.eclipse.equinox.weaving.hook</artifactId>
    <version>1.1.100.weaving-hook-20140821</version>
  </frameworkExtension>
</frameworkExtensions>
In addition, you can pass debug information via command-line configuration. We have defined the property:

<aspectj.weavingargs>-Daj.weaving.verbose=true -Dorg.aspectj.weaver.showWeaveInfo=true -Dorg.aspectj.osgi.verbose=true</aspectj.weavingargs>

and use it later on:

<argLine>${tycho.surefire.vmargs} ${aspectj.weavingargs} ... </argLine>

Managing AUTOSAR Complexity with (abstract) Models


A certain number of developers in the automotive domain complain that the introduction of AUTOSAR and its tooling increases the development effort because of the complexity of the AUTOSAR standard. There are a number of approaches to solve this problem – one of the most common is the introduction of additional models.

The complexity of AUTOSAR

One often-heard comment about AUTOSAR is that the meta-model and the basic software configuration are rather complex and that it takes quite some time to get acquainted with them. One of the reasons is that AUTOSAR aspires to be an interchange standard for all software-engineering-related artefacts in automotive software design and implementation. As such, it has to address all potentially needed data. For the single project or developer, however, that also means being confronted with aspects they never had to consider before. Overall, one might argue that AUTOSAR does not increase the complexity, but exposes the inherent complexity of the industry for all to see. That, however, implies that a user might need more manageable views on AUTOSAR.

Example: Datatypes

The broad applicability of AUTOSAR leads to a meta-model that takes some analysis to understand. For application data types, the following meta-model extract shows the relevant elements for “specifying lower and upper ranges that constrain the applicable value interval.”

Extract from AUTOSAR SW Component Template

This is quite a flexible meta-model. However, all the compositions make it complex from the perspective of an ECU developer. The meta-model for enumerations is even more complex.

2015-04-01_22h30_31

Extract from AUTOSAR SW Component Template: Enumerations

If you come from C with its lean notation

enum cards {
    CLUBS    = 1,
    DIAMONDS = 2,
    HEARTS,
    SPADES
};

it seems quite obvious that the AUTOSAR model might be a bit scary. So a lot of projects and companies are looking for custom ways to deal with AUTOSAR.

Custom Tooling or COTS

In the definition of a software engineering tool chain, there will always be trade-offs between implementing custom tooling and deploying commercial tools. Especially in innovative projects, the teams will use innovative modeling and design methods and as such will need tooling specially designed for that. The design and implementation of custom tooling is a core competency of tooling / methodology departments.

A more abstract view on AUTOSAR

The general approach will be to provide a view on AUTOSAR (or a custom model) that is suited to the special needs of a given user group and then generate AUTOSAR from that. Some technical ideas would be:

“Customize” the AUTOSAR model

In this approach, the AUTOSAR model’s features are used to annotate / add custom data that is more manageable. This case can be subdivided into the BSW configuration and the other parts of the model.

BSW: Vendor specific parameter definitions

The ECUC configuration supports the definition of custom parameter definitions that can be used to abstract the complexity. Take the following example:

  • The standard DemEventParameter has various attributes (EventAvailable, CounterThreshold, PrestorageSupported).
  • Let's assume that in our setting only 3 different combinations of these might be valid. So it would be nice if we could just say “combination 1” instead of setting them manually.
  • So we could define a VSMD that is based on the official DEM definition and has a QXQYEventParameter with only one attribute, “combination”.
  • From that we could then generate the DemEventParameter's values based on the combination attribute (a sketch follows below).

This is actually done in real-life projects. Technologies to use could be:

  • COMASSO: Create an Xpand script and use the “update model” build step
  • Artop: Use the new workflow and ECUC accessors mechanism
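As a rough sketch of the derivation step, here is the mapping logic in plain Java (in a real project this would live in the Xpand script or the workflow; the parameter values behind the three combinations are invented for illustration):

import java.util.HashMap;
import java.util.Map;

class DemEventSettings {
    final boolean eventAvailable;
    final int counterThreshold;
    final boolean prestorageSupported;

    DemEventSettings(boolean available, int threshold, boolean prestorage) {
        this.eventAvailable = available;
        this.counterThreshold = threshold;
        this.prestorageSupported = prestorage;
    }
}

class CombinationMapper {
    // the three allowed combinations - values invented for this example
    private static final Map<Integer, DemEventSettings> COMBINATIONS = new HashMap<>();
    static {
        COMBINATIONS.put(1, new DemEventSettings(true, 0, false));
        COMBINATIONS.put(2, new DemEventSettings(true, 3, true));
        COMBINATIONS.put(3, new DemEventSettings(false, 0, false));
    }

    static DemEventSettings resolve(int combination) {
        DemEventSettings s = COMBINATIONS.get(combination);
        if (s == null) {
            throw new IllegalArgumentException("unknown combination: " + combination);
        }
        return s;
    }
}

The generator would then call resolve(combination) for each QXQYEventParameter and write the three standard parameter values into the corresponding DemEventParameter container.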

System / SW Templates

The generic structure template of AUTOSAR allows us to add “special data” to model elements. According to the standard, “Special data groups (Sdgs) provide a standardized mechanism to store arbitrary data for which no other element exists of the data model.” (from the Generic Structure specification). Similar to the BSW approach, we could add simple annotations to model elements and then derive more complex configuration from them (e.g. port communication attributes).

Adapt standard modeling languages

Another approach is to adapt standard (flexible) modeling languages like UML. Tools like Enterprise Architect provide powerful modeling features and the UML diagrams with their classes, ports and connections have striking similarities to the AUTOSAR component models (well, component models all have similarities, obviously). So a frequently seen approach is to customize the models through stereotypes / tagged values and maybe customize the tool.

The benefit of this approach is that it provides more flexibility in designing a custom meta-model by not being restricted by the AUTOSAR extension features. This approach is also useful for projects that already had a component-based modeling methodology prior to AUTOSAR and want to be able to continue using these models.

A transformation to AUTOSAR is easily written with Artop and Xtend.

Defining custom (i.e. domain-specific) models

With modern frameworks like Xtext or Sirius, it has become very easy to implement your own meta-models and tooling around them. Xtext is well suited for all use cases where a textual representation of the model is required. Amongst others, that addresses a technical audience that is used to programming and wants fast and comfortable keyboard-based editing of the models.

German readers will find a good industry success story in the 2014 special edition of “heise Developer Embedded” explaining the use of Xtext at ZF Friedrichshafen.

The future?

AUTOSAR is a huge standard that covers a broad spectrum. Specific user groups will need specific views on the standard to be able to work efficiently. Not all of those use cases will be covered by commercial tools and innovation will drive new approaches.

Custom models with transformations to and from AUTOSAR are one solution and the infrastructure required for that is provided in the community project Artop and the ecosystem around the Eclipse modeling framework EMF.


Using the Xtend language for M2M transformation


In the last few months, we have been developing a customer project that centers around model-to-model transformation, with the target model being AUTOSAR.

In the initial concept phase, we had two major candidates for the M2M-transformation language: Xtend and QVTO. After doing some evaluations, we decided that for the specific use case, Xtend was the technology of choice.


The main reasons, by topic:

  • Comfort: Xtend has a number of features that make writing model-to-model transformations very concise and comfortable. The most important is the concise syntax to navigate over models, which avoids the loops that would be required when implementing in Java:

val r = eAllContents.filter(EcucChoiceReferenceDef).findFirst[
    shortName == "DemMemoryDestinationRef"]

  • Traceability / One-Pass Transformation: Xtend provides so-called "create" methods for creating new target model elements in your transformation. Their main purpose is to enable efficient one-pass transformations: an internal cache returns the same target object whenever the method is invoked with the same input objects. However, the internally used caches can also be used to generate tracing information about the relationship from source to target model. We use that both for writing out trace information to a log file and for adding trace information about the source elements to the target elements. Both features have been added on top of "plain" Xtend, because standard Java mechanisms suffice to access the caches. In addition, we can run a static analysis to see which source/target metaclass combinations exist in our codebase.
  • Performance: Xtend compiles to plain Java. This gives higher performance than many interpreted transformation languages. In addition, you can use any Java profiler (such as YourKit or JProfiler) to find bottlenecks in your transformations.
  • Long-Term Support: Xtend compiles to plain Java. You can keep the compiled Java code for safety and be totally independent of the Xtend project itself.
  • Test Support: Xtend compiles to plain Java. You can use any testing tools (such as the JUnit integration in Eclipse or mvn/surefire). We have extensive test cases for the transformation, documented in reports that are generated with standard Java tooling.
  • Code Coverage: Xtend compiles to plain Java. You can use any code coverage tool (such as JaCoCo).
  • Debugging: Debugger integration is fully supported to step through your code.
  • Extensibility: Xtend is fully integrated with Java. It does not matter whether you write your code in Java or Xtend.
  • Documentation: You can use standard Javadoc in your Xtend transformations and the standard tooling to get reports.
  • Modularity: Xtend integrates with dependency injection. Systems like Google Guice can be used to configure combinations of model transformations.
  • Active Annotations: Xtend supports the customization of its mapping to Java with active annotations. That makes it possible to adapt and extend the transformation system to custom requirements.
  • Full EMF Support: The Xtend transformations operate on the generated EMF classes. That makes it easy to work with unsettable attributes etc.
  • IDE Integration: The Xtend editors support essential operations such as "Find References", "Go To Declaration" etc.
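The caching idea behind "create" methods can be sketched in plain Java – a simplified illustration of the concept, not the actual Xtend implementation:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

class Transformation<S, T> {
    // source -> target cache; doubles as the trace model
    private final Map<S, T> trace = new LinkedHashMap<>();

    // Returns the cached target if the source was transformed before,
    // otherwise creates it once - this is what makes one-pass possible.
    T create(S source, Function<S, T> factory) {
        return trace.computeIfAbsent(source, factory);
    }

    // The cache can be read back as a source-to-target trace log.
    Map<S, T> traceLog() {
        return trace;
    }
}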

The Xtend syntax, on the other hand, is not based on any standard. But its performance, modularity and maintenance features are a strong argument for adding it as a candidate for model transformations.

Integrating Rhapsody in your AUTOSAR toolchain


UML tools such as Enterprise Architect or Rhapsody (and others) are well established in the software development process. Sometimes the modeling guidelines follow a custom modeling approach, e.g. with specific profiles. So when you are modeling AUTOSAR systems, at some point you are faced with the problem of transforming your model to AUTOSAR.

For customer projects, we have analyzed and implemented different strategies.

Artop as an integration tool

First of all, if you are transforming to AUTOSAR, the recommendation is to transform to an Artop model and let Artop do all the serialization. Directly creating the AUTOSAR-XML (.arxml) is cumbersome, error-prone and generally “not-fun”.

Getting data out: Files or API

To access the data in Rhapsody, you could either read the stored files or access the data through the API of Rhapsody. This post describes aspects of the second approach.

Scenario 1: Accessing directly without intermediate storage

In this scenario, the transformation uses the “live” data from a running Rhapsody instance as its data source. Rhapsody provides a Java-based API (basically a wrapper around the Windows COM API). So it is very easy to write a transformation from “Rhapsody Java” to “Artop Java”. A recommended technology would be the open-source Xtend language, since it provides a lot of useful features for this use case (see the description in this blog post).
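A minimal sketch of this scenario: read model elements from the running Rhapsody and hand them to the transformation. The calls below are from the com.telelogic.rhapsody.core API as we know it, but treat the exact names as an assumption and verify them against your Rhapsody version:

import com.telelogic.rhapsody.core.IRPApplication;
import com.telelogic.rhapsody.core.IRPClass;
import com.telelogic.rhapsody.core.IRPCollection;
import com.telelogic.rhapsody.core.IRPModelElement;
import com.telelogic.rhapsody.core.IRPProject;
import com.telelogic.rhapsody.core.RhapsodyAppServer;

public class RhapsodyReader {
    public static void main(String[] args) {
        // attach to the running Rhapsody instance via the COM wrapper
        IRPApplication app = RhapsodyAppServer.getActiveRhapsodyApplication();
        IRPProject project = app.activeProject();
        IRPCollection elements = project.getNestedElementsRecursive();
        for (int i = 1; i <= elements.getCount(); i++) { // IRPCollection is 1-based
            IRPModelElement e = (IRPModelElement) elements.getItem(i);
            if (e instanceof IRPClass) {
                // this is where the corresponding Artop element would be created
                System.out.println("Class: " + e.getName());
            }
        }
    }
}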

Scenario 2: Storing the data from Rhapsody locally, transforming from that local representation

In this scenario, the data from Rhapsody is extracted via the Java API and stored locally. Further transformation steps can work on that stored copy. A feasible approach is to store the copied data in EMF. With reflection and other approaches, you can create the required .ecore definitions from the Java classes provided by Rhapsody. After that, you can also use transformation technologies that require an .ecore definition as a basis for the transformation (but you can still use Xtend). The stored data will be very close to the Rhapsody representation of UML.
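As a sketch of how such an .ecore definition can be created at runtime with the standard EMF API (the package, class and attribute names are invented for illustration):

import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class RhapsodyEcore {
    public static EPackage createPackage() {
        EcoreFactory f = EcoreFactory.eINSTANCE;
        EPackage pkg = f.createEPackage();
        pkg.setName("rhapsody");
        pkg.setNsPrefix("rh");
        pkg.setNsURI("http://example.org/rhapsody");

        EClass rhClass = f.createEClass();      // mirrors e.g. a Rhapsody class
        rhClass.setName("RhClass");
        pkg.getEClassifiers().add(rhClass);

        EAttribute name = f.createEAttribute(); // e.g. derived from getName()
        name.setName("name");
        name.setEType(EcorePackage.Literals.ESTRING);
        rhClass.getEStructuralFeatures().add(name);
        return pkg;
    }
}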

Scenario 3: Storing the data in “Eclipse UML” ecore, transforming from that local representation

In this scenario, the data is stored in the format of the Eclipse-provided UML .ecore files, which represent a UML meta-model that is true to the standard. That means that your outgoing transformation conforms more closely to the standard UML meta-model and you can use other integrations based on that meta-model. However, you would have to map to that UML meta-model first.

There are several technical approaches to that. You can even do the conversion on-the-fly, implementing a variant of Scenario 1.

Technology as Open Source

The base technologies for the scenarios are available as open source / community source:

  • Eclipse EMF
  • Eclipse Xtend, Qvto (or other transformation languages)
  • Artop (available to AUTOSAR members)
