Hamish King

Technical Business Analyst, currently based in London but originally from New Zealand.

Check out my personal blog, www.hamishking.com, for more EA tips, articles and analysis content.

Following on from my previous post on creating and simulating state machines in Enterprise Architect (EA), I will walk through the process of adding a prototype UI and further interactivity to your model.

If you recall, in the previous article I walked through the process of setting up ‘Triggers’ to run scenarios through your state machine, and setting simulation variables at state or sub-state level to better represent your application. All of this information was available via EA’s simulation variables or by recording to the console. We can go a step further and prototype a quick UI to represent our application and/or provide a ‘dashboard’ view of states and variables.

 

Setting up the state machine

image
the state machine from the previous post

Using the state machine I used last time, we need to create a place-holder ‘state machine’ element so we can reference it as a simulation start point.

Add a ‘User Interface’ diagram to the appropriate place in the package browser.

 

image
add a user interface to the diagram

 

This will be our place-holder to put the state machine and User Interface elements.

On the diagram create a new state machine element.

image
create a new state machine element

Then right-click the newly created element and select the existing state machine diagram we created in the previous example as its composite diagram.

image
select a composite state diagram

Locate the state machine diagram in the package browser.

image
Locate the state machine in the package browser

Now we can create a new User Interface diagram and add that as a frame on to the current diagram (keeps it neater and modular).

image
add another UI element and drag it onto the master canvas

And drag that on to the Model Default (or whatever you named the original) diagram canvas.

To complete the process of linking a UI to a simulation, we want to open the Execution Analyzer and create a script. If you don’t have the window open, select ‘Analyzer’ - ‘Execution Analyzer’ to bring it back up.

image
Show the Execution Analyzer

 

From the Execution Analyzer window, select the option to add a new script.

image
Add a new simulation script to the Execution Analyzer

And the key step for us is the last tab, ‘Simulation’, where we define the ‘Entry Point’ and input behaviour.

Select the ‘..’ icon to locate your state machine element for the Entry Point.

image
Define a state machine to be the simulation entry point

That sets the entry diagram we want to use for the simulation.

To connect the User Interface to the simulation, we need to define a behaviour (in JavaScript notation) to display the correct UI element.

The format of this command is as follows:

dialog.CreateOrder.Show = true;

where ‘CreateOrder’ is the name of the User Interface element we created earlier.

As you may have guessed from the above JavaScript, the model simulation script will start the simulation on the ‘Validate Example’ state machine and, as its first point of business, display the ‘CreateOrder’ User Interface we created.

Note: Another method of displaying the UI is to assign the same command (above) to the first transition of the state diagram (e.g. Initial –> Created in my example). Whichever method works for you!

 

Creating the UI and assigning actions to things!

So now we have a model simulation that will give us an actual rendered screen, we should probably put something on it!

Open up the CreateOrder User Interface we created earlier and let’s start adding elements.

Using the Toolbox you can add standard UI elements or use the Win32 elements to add things to your prototype. Here is a horridly rushed example of some text and a button (note the list of Win32 UI control types in the toolbox).

image
building a basic user interface

So now I’ve added a UI to the model and hooked it up using a simulation script – what happens if I run it? Well, let’s see.

Going back to the ‘Execution Analyzer’, right-click the script and select ‘Start Simulation’.

image
start the model simulation from the execution analyzer

What do we get?

image
example rendered UI element during a simulation

A ‘nicely’ rendered (I use the term loosely, of course!) dialog box that we mocked up – but the button doesn’t do anything. Let’s fix that!

 

Adding Signal behaviours to buttons and making our UI interactive

Head back to our CreateOrder user interface and view the properties of the ‘No More Items’ button. On the ‘Tagged Values’ tab we want to add a new tagged value.

image
creating an OnClick action for the button

The naming of this tagged value is very specific: it must always be named ‘OnClick’, with a value of ‘BroadcastSignal(“Signal Name”)’, where “Signal Name” is the name of the trigger you want to fire.

image
defining the trigger to use for the button

When the simulation is interpreted, EA will parse the above tagged value as a JavaScript function and attempt to fire the trigger “Signal Name”. So what we were doing manually before, using the ‘Simulation Events’ and ‘Waiting Triggers’ windows in EA, we can now build into our interactive prototype – something much more like our actual application!

It is important to note that the trigger type must be ‘Signal’ for this to work. To check this, return to our state machine and find an appropriate transition. In my example we are using a button called ‘No more items’, which corresponds to the following transition on my state machine.

image
locating the appropriate trigger

Bringing up the properties of this transition will allow me to view the Triggers associated with it.

image
checking the trigger type of the transition

Notice the type is ‘Signal’. Had this been one of the other trigger types our simulation would not register the event.
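To make the link explicit, here is a sketch of the pairing in plain JavaScript. In EA, BroadcastSignal() is provided by the simulation engine; it is stubbed here purely so the fragment can run standalone, and the signal name is the one from my example.

```javascript
// Stub of EA's BroadcastSignal(), recording fired signals so the
// sketch is self-contained. In a real simulation EA provides this.
const firedSignals = [];
function BroadcastSignal(name) { firedSignals.push(name); }

// The button's tagged value is parsed as JavaScript:
//   OnClick = BroadcastSignal("No more items")
// The string must exactly match the name of a Signal-type trigger on
// a transition, otherwise the simulation will not register the event.
BroadcastSignal("No more items");
```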

So when we run the simulation again, we can actually use the button on the UI to progress the simulation.

image
simulation before clicking the rendered button

 

Selecting the button progresses us further into the state machine as if we had used the ‘waiting triggers’ window in the bottom right of my example.

image
simulation progressed after selecting the rendered button

So you've now added some interesting tricks to your model, but at this point you might still be thinking: “that's all very well, but that isn't all that useful to me”. Well, the initial driver for me to head down this path wasn't to create a prototype application based on my state machine – it was in fact to provide a better view of the variables and states I was assigning throughout my model.

Introducing the Model Simulation Dashboard!

So to go along with my very basic prototype, I built an even simpler dashboard view which sits alongside the prototype and displays the variable states (as recorded in the EA ‘Locals’ window) and any additional info I want to record in my simulation.

To do this, all I do is create a second UI diagram and call it at the first transition (from Initial –> Created in my example):

image
linking a second UI for a 'dashboard' type view

dialog.Screen1.Show=true;
dialog.CreateOrder.Show=true;

In the above example, Screen1 (horribly named, I know!) is my dashboard view and CreateOrder is the first UI screen from our example above.

What does that look like?

image
example of dashboard view alongside the running prototype

I have my dashboard running alongside my UI screens to give an alternative view of things I need to track at any given time.

Protip: You can change the stereotype of a Screen element to be a Win32Dialog and get additional properties, such as setting the window to be in the centre – very useful when using multiple Screens.

image
Protip: adding a centre position property to the screen element

All done!

Well, that’s the basics covered of adding a prototype UI to your model simulations. If there is further interest I can share some tips on making a more dynamic prototype in EA – hiding elements until required – but I shall save that for another day.

Hope that was useful to someone!

Recently I have been reverse engineering a state transition model (officially a state machine, but I prefer the term state transition model) to understand the complex life-cycle of our application’s central object.

I had seen a demo on model simulation, but the focus was more on a model generated from source rather than a hand-fashioned UML model. This post is a walk-through of the process and, hopefully, some takeaways from what I’ve learned.

 

Understanding complex systems

I came into a project that is well down the road to completion but sorely lacking in any concrete documentation, which was becoming a pain point for everyone – and particularly for me, the new analyst tasked with helping out.

One of my first tasks was to document some ‘light’ form of a state model to get an idea of all of the possible scenarios and break down some of the complexity. I’ve iterated through several versions (or styles) of the state model trying to provide a simpler view of states and state transitions, but the model was just very complicated and the diagrams were too busy. Understanding it myself was hard enough, and trying to validate each of the scenarios was proving far too time-consuming.

 

States and multiple sub-states – too many paths!

My main problem was due to having states and multiple sub-states dependent on single transitions. For example, we had something like this:

OBJECT (super state life-cycle)

-- sub item1 (sub state life-cycle)

-- sub item2 (sub state life-cycle)

-- shipping item (sub state life-cycle)

-- other items (sub state life-cycle)

So for every transition, we could have movements of one or all of the sub-states, and depending on what states they were in, we could also have super-state changes.

image
example of super-state / sub-state model I quickly threw together to serve as an example

 

I threw together a quick example of what things (sort of!) looked like in our world – it’s a horribly rushed example but gets the point across.
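For anyone who thinks better in code than in diagrams, the shape of the problem can be sketched in plain JavaScript (all names and transition rules here are invented for illustration): a single trigger can move several sub-states at once, and the super-state is derived from the combination they end up in.

```javascript
// Illustrative only: a super-state object whose overall state depends
// on the states of its sub-items.
const order = {
  state: "Created",
  subItems: { item1: "Pending", item2: "Pending", shipping: "Not started" },
};

function applyTrigger(obj, trigger) {
  // One trigger may move one or more sub-states...
  if (trigger === "No more items") {
    obj.subItems.item1 = "Confirmed";
    obj.subItems.item2 = "Confirmed";
  }
  // ...and depending on what the sub-states end up in, the
  // super-state may also change.
  if (obj.subItems.item1 === "Confirmed" && obj.subItems.item2 === "Confirmed") {
    obj.state = "Validated";
  }
  return obj;
}
```

This is exactly why the diagrams got so busy: every transition potentially fans out into several sub-state movements plus a derived super-state change.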

 

Model simulation and simulation variables!

So I got to the point where I’d modelled enough states and sub-states that it was very difficult to validate the accuracy – so I decided to use a bit of smarts and try out model simulation.

I set up my workspace to get the simulation controls ready. It looks something like this.

image
the simulation workspace I setup in EA

With this setup we can kick off model simulations to run through transitions, select the required ‘trigger’ to choose paths and even save triggers to define pre-defined scenarios to run through. Note: I’ll post a separate article on how to do each step in more detail.

All of this is nothing new? Well then, let’s get into some simulation variables to track what’s going on at each point.

First of all, I ran through a scenario manually (‘interpreted’, in EA language) by selecting the triggers as they came up. You can then save these as a trigger set and select to signal the triggers automatically – now we can fly through our models after defining our criteria.

image
model simulation in progress

The above shows a simulation in action as it goes through the path I defined manually and you can see the trigger sequence waiting to be fired to progress the simulation further.

image
showing the simulation console and 'waiting triggers' panel

What I end up with is a list of states / decisions / elements that I passed through in the console log, and a list of triggers that were fired.

For me this wasn’t all that useful – just a flat list of states and sub-states at a single level wasn’t really enough – plus there was more detail I wanted to capture that would make these even more useful.

Enter simulation variables

I stumbled on simulation variables while trawling through some EA help files trying to get my simulations to work, and instantly thought of a use (more on that soon).

Given I am reverse engineering a model and not trying to do model-driven development, I feel I can use simulation in this way, but I certainly appreciate this is a hack job and not their intended purpose (sorry everyone!).

I added an operation to each state called ‘updateVars’ and added a behaviour to each to assign simple variables.

image
adding JavaScript simulation variables to add another layer of interactivity

EA uses JavaScript notation when performing UML model simulations, so we can just assign variables as we need them. Now you see why I had the variable window open before, right?
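As a sketch, the body of an ‘updateVars’ behaviour is just a handful of JavaScript assignments. The variable names below are my own inventions, and I’ve used let declarations so the fragment runs standalone; in EA you simply assign and watch the values appear in the variables window.

```javascript
// Behaviour body of 'updateVars' on a 'Created' state (names invented).
// Each assignment becomes a simulation variable you can watch.
let orderState = "Created";
let subItem1State = "Pending";
let shippingState = "Not started";
```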

Protip: To avoid cluttering up your diagram with all your updateVars on each state element – hide the operations from the diagram in the diagram properties. Un-tick the ‘Operations’ box under Show Compartments.

image
Protip - hide your operations from your state machine diagram to avoid clutter

 

So now that I have added simulation variables to my model, I can see what the states and sub-states are at any given point in the model – either by pausing or setting simulation break points (another great feature!).

image
example of paused simulation showing variables and waiting triggers

Writing to simulation console

Although simulation variables were interesting they didn't really offer me much over what I had before – just a different way of viewing the same state/substate information. What I really wanted was to be able to use this concept to show other important things that are going on at certain steps.

Another cool feature Sparx have included here is being able to output to the simulation console, so you can supplement the standard list of element names with your own text.

So at certain steps I have UI actions or account deductions that I wanted to bring into my model, to highlight to my stakeholders where and when these were actually occurring. Strictly speaking this has nothing to do with a state machine, but I’m not here to win UML medals – just to provide my stakeholders something they can understand!

So on one of the updateVars actions I can add another line like this:

image
adding extra information to the console log

The magic word here is Trace(), which allows you to output a string to the console log. What does it look like in the simulation console? I’m glad you asked – it looks something like this.

image
seeing the Trace() output onto the simulation console

I ended up adding custom Trace() events for each state so I could better represent state / sub-state movements and also record important events along the way.
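A sketch of what those additions might look like (the message text is invented; Trace() is provided by EA’s simulator, so it is stubbed here just to keep the fragment runnable on its own):

```javascript
// Stub of EA's Trace(), which writes a string to the simulation console.
const simulationConsole = [];
function Trace(message) { simulationConsole.push(message); }

// Extra lines inside an 'updateVars' behaviour, recording events that
// aren't strictly state-machine concerns but matter to stakeholders:
Trace("Sub-state change: shipping item -> Shipped");
Trace("UI: shipping confirmation displayed; account deduction applied");
```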

Hopefully you followed along with me on this. I’ve had some interest regarding doing a screencast on the topic, so please let me know in the comments if you’re interested and I’ll get one together.

This was an overview article to demonstrate the main features and I will come back and go through some of the steps I rushed through in more detail in the next post.