
Tuesday, 11 September 2018 11:51

Data Identity Security

Abstract

We live in an information paradigm. The classical, deterministic model of physics is increasingly viewed through a data-centric, probabilistic lens: in the light of advances in quantum theory, fractals and chaos mathematics, our deterministic material reality looks like a subjective human view of events.

Automated industrial and personal computing applications are now pervasive in the fabric of human society, in both the developed and developing world, and they are all built from blocks of data. We need to evolve our concept of data accordingly: information technology is now a virtualization of information services and of networked devices of all kinds. It consists merely of bits of information, independent of the wires, chips and electronic components that, over the next decade, will probably be replaced by quantum computers of a very different physical design.

Recent heightened public awareness of data privacy, triggered by electronic security failures, is an opportunity to redefine the view that data is merely an electronic representation of information. Security incidents have raised the alarm; consequently we must track the data lifecycle more effectively. The simplest solution may be to manage data holistically, from inception to end-of-life.

There is a clear requirement to standardise and categorise data in a way that lets our technology continue to evolve to meet the challenges of global information dissemination for the exchange of scientific, humanitarian and world trade data. Advanced UML models and modelling technology are already being used for structured data terminologies. An identity management domain incorporating blockchain as a class model demonstrates that a platform-independent model can easily extend any of the industry common information models to implement nexus identity management.

Data has a very specific meaning and lifecycle, in terms of creation, context, transformation and transportation of representation from one location to another. This entails residency, ownership and above all accessibility. Over the past decades, evolving legal frameworks around data custody, intellectual property, management responsibility and data-as-an-asset have been developed as part of an evolving set of data practices.

Yet we do not have a complete view of the data lifecycle. Today there is a patchwork of individual standards of governance, usage policies and structural definitions across international jurisdictions for what is now arguably the most valuable asset on the planet. At the same time, the misuse of data is now one of the most common crimes in every society. The key to facilitating an evolution of global data residency may be a revolutionary international approach to role-based data access control throughout the data lifecycle.

It seems possible that data itself requires an identity, a standard tagging of data elements with category, ownership, authorship, purpose, security classification and permitted jurisdiction and residency, that can be formed, encrypted, distributed, stored and updated with a secure practice in place of user and application credentials.  While this may seem like a complete shift, in reality it may not be so difficult with co-operation and collaboration amongst interested parties.

There is a proliferation of fraudulent use of data to the benefit of a few and the detriment of most people. The question must be asked, do we require a consolidated global standard for data lifecycle, with a transparent public audit trail of role-based data access, perhaps a blockchain approach to logging all data change transactions?


Timeline in Brief

As information technology went global in the 1980s, there were many initiatives to standardize data across industries, such as telecommunications, manufacturing and health, some of which were more successful than others.  Currently there are few precise agreed definitions of common terminologies and concepts for most fields of technology, science and industry, and a large scope for misinterpretation.

With the advent of Web 2.0 at the end of the 1990s, the focus shifted to user-generated content, and with the consequent proliferation of applications for everything by everyone, the momentum for standardization was lost, resulting in the heterogeneity, diversity and disparity of data terminologies that we experience today. While search capabilities have improved, and context is now an important element of data definition, the sheer volume of information collected by individuals, NGOs, corporations and governments means the signal is being lost in the white noise, for both structured and unstructured data elements and collections.

The security of data is falling and failing. It is true that, given sufficient motivation, financial or political, a workaround can be found for the current generation of security vulnerabilities to protect essential data. But the efficacy of new security measures is ephemeral: effective only until research by well-funded state-sponsored actors and criminal organizations develops a new exploit. This is because we have not addressed the fundamental problem, which is that the basic network and application protocols were designed without security in mind.

The success of fraudulent misuse of data has led to the current situation: a proliferation of security tools and technologies marketed as the answer to data protection, developed in response to security breaches and vulnerabilities. In the words of Symantec, ‘We are only one step ahead of the hackers’.

Identity and Access Management

The biggest weakness, the exploit vector bypassing data security, is identity fraud: counterfeiting either identity credentials or access tokens. As all systems of data accessibility depend on access privileges that can be compromised by persistent, carefully planned and patient interception attacks, the race against global fraud is in danger of becoming a lost cause. The success of these attacks is due not only to poor implementation of security standards, but also to the information protocols themselves, developed ad hoc and inherited from an electronic age in which networking and hardware were essentially deployed in a hub-and-spoke configuration. Today interconnectivity is decentralized, with potentially global communications across partner organizations. The current generation of security technology is no match for attackers whose funding rivals what large corporations and governments spend on cyber defence.

Identity and Access Management still depends largely on good working practices by responsible people establishing, maintaining and securing access privileges, making use of the excellent advances in cryptography. Given the paradigm of current work practices, where people work on and off site over network connections that are more or less secure, this is hardly sufficient. By analogy, pilot error is still the largest cause of air safety violations, and the same is true for identity management. Human failings aside, the corrupting power of the extraordinary profits from identity fraud is a considerable factor.

Currently security professionals acknowledge that there is no foolproof method of preventing security breaches, and that new variations of old attack methods are constantly surfacing. Even ‘Zero Trust’ measures, such as virtual variations on the physical isolation of servers, can be compromised over time by capturing identity details, understanding authorization mechanisms and spoofing the authentication credentials of people, roles and applications. At the network layer, forms of single-packet inspection to identify communications are innovative and successful within bounds. However, these methods are only as secure as the systems that collect the encapsulated identity data, providing a single point of failure. Once authorization (identity verification) has taken place, there is plenty of evidence of systems being compromised by exploits such as ‘golden tickets’ and ‘golden tokens’ that give intruders administrator privileges over networks and identity tokens.

If privileged access to highly sensitive or classified data is the basis for data security, violation of trust is bound to increase, as the incentives for malpractice grow.  There are huge profits involved. Global political uncertainties and provocations, aligned with growing international tensions can only lead to increases in attacks on essential infrastructure.

Journey to Data Privacy

All industries hold personal data, and for those to which the European GDPR regulation applies, legal protection is required. The European Commission defines personal data as any information that relates to an identified or identifiable living individual. Different pieces of information which, collected together, can lead to the identification of a person also constitute personal data. This includes technical information such as IP addresses and device identifiers.

GDPR EU legislation is applicable to organizations either processing personal data in the EU, or relating to EU citizens. The legislation applies to organizations inside and outside of the EU. Non-compliant organizations may find it more difficult to do business in Europe. GDPR EU legislation became law in 2016, and on 25th May 2018 the stringent penalties for non-compliance came into play. There is a wide range of personally identifiable information, including personal demographic, employment, financial, healthcare and social data, that must now be adequately protected under European law.

A better way to provide data assurance and governance, rather than closing off a vulnerability after the fact, may well be to develop a data security protocol that is secure by design from the outset, with a focus on protecting the data itself. Can we adopt a data identity standard that mandates practices, protocols and methods of non-repudiation focussed on the stored data representation? Currently the focus is on user applications, the Internet Protocol, and the various integration methods and protocols of network connectivity.

A new data identity protocol could address the entirety of the data lifecycle, including creation, acquisition, encryption, storage and disposal of the data set and its component elements. Standard cryptographic algorithms applied at the data source, distributed to a network of identity providers for non-repudiation, may be a cost-effective improvement in data protection. The current situation is a cycle of proliferating information security tools applied at every stage of application access, integration, network connection and data transportation. These tools and techniques have largely been developed in response to attack vectors that have already been exploited. The cost of securing data has increased dramatically over the past decade.

Data Identity Standard

It is time to rethink the paradigm of sensitive, classified data, to provide a distributed security context for the data itself, independent of the facilitating technology services. One innovation may be to provide collected information with an identity, a type of signature that records the registration, authorship, usage, persistence, access, update and disposal of data sets. This accompanying metadata could persist throughout the data's lifecycle, from creation through operational use, protected by a distributed chain of transactions that requires the consensus of the network ownership for changes not only to the data, but to the accompanying metadata.
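As a sketch of what such an identity tag might look like, the following Python fragment wraps a payload in a metadata record and digests both the data and the tag itself, so that tampering with either is detectable. The field names and function names are hypothetical illustrations under the assumptions above; no such standard exists yet.

```python
import hashlib
import json

def make_identity_tag(payload: bytes, **attributes) -> dict:
    """Build an illustrative data-identity tag around a payload."""
    tag = dict(attributes)
    tag["payload_digest"] = hashlib.sha256(payload).hexdigest()
    # Digest the canonical JSON form of the tag itself, so changes to the
    # metadata (not just the payload) are detectable downstream.
    canonical = json.dumps(tag, sort_keys=True).encode()
    tag["tag_digest"] = hashlib.sha256(canonical).hexdigest()
    return tag

def verify_identity_tag(payload: bytes, tag: dict) -> bool:
    """Check that neither the payload nor the metadata has been altered."""
    body = {k: v for k, v in tag.items() if k != "tag_digest"}
    canonical = json.dumps(body, sort_keys=True).encode()
    return (tag.get("payload_digest") == hashlib.sha256(payload).hexdigest()
            and tag.get("tag_digest") == hashlib.sha256(canonical).hexdigest())

# Example: tagging a record with the categories the text proposes.
data = b"2018-09-11,unit-7,41.2"
tag = make_identity_tag(data, category="telemetry", owner="acme-energy",
                        author="sensor-gateway", purpose="operations",
                        classification="sensitive", residency="EU")
```

In a real protocol the digests would be signed with the originator's key and registered with independent identity providers; the digest-only version shown here is just the skeleton of the idea.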

The technology is readily available, provided it is implemented and deployed as a well-designed public cloud collection and storage mechanism, with a careful use of the currently available set of security mechanisms, cryptography and key management, audited by logging and monitoring services that would be extremely difficult to corrupt, and virtually impossible if corroborated across more than one public cloud audit trail.

Industry specific terminologies have been developed over the past decades.  Telecommunications, Health and Energy industries have developed common models. The Telecommunications Information Framework (SID) provides a reference model and common vocabulary for all the information required to deploy network operations for fixed line and mobile service operations.  In electric power transmission and distribution, the Common Information Model (CIM) is a standard developed by electric power utilities to allow application software to exchange information about energy networks. Network power and telecommunications data elements are very sensitive from the point-of-view of security of public operations, and as such are obvious targets for disruption from hostile actors. OpenEHR is a specification that describes the management and storage, retrieval and exchange of electronic health records. Patient records contain some of the most important personal data to be protected from security vulnerabilities.

Starting with well-structured industry terminologies, data modelling standards groups could develop and provide recommendations on the classification of data elements, to which a consistent protocol for securing data identity could be applied. This could be a range of standard measures including cryptography, content validation and a blockchain of independent identity providers for non-repudiation, with an audit service backed by logging and monitoring capabilities replicated across public cloud providers.

Data Identity Technology

The most effective way to secure information is a combination of physical security, best-practice cryptography and multi-pass verification of identity credentials. Currently there are standards such as OAuth 2.0 and OpenID Connect applied to end users and applications for authorization and authentication. There is no real co-ordination of authentication across the network, transport and application layers, meaning that data integrity is only as good as the weakest security measure in the chain of protocols across networking endpoints, internet (TCP/IP) and applications (e.g. HTTP). End-to-end security is currently not secure by design; rather, it is the result of security measures patched onto an older paradigm of applications and data running on physical hardware and local area networks.

Blockchain was originally developed as a protocol to timestamp transactions for non-repudiation, and in 2008 it was adopted as the ledger underpinning the Bitcoin currency. A blockchain is an append-only set of blocks, each linked to its predecessor by a cryptographic hash, stored as a distributed ledger. By 2018, private (permissioned) blockchains had been adapted for a variety of business uses. Once recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks, which requires consensus from the members of the chain.
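A minimal hash-linked chain can be sketched in a few lines of Python. This illustrates only the linking and tamper-evidence mechanism; a real permissioned ledger adds distribution, signatures and consensus on top.

```python
import hashlib
import json
import time

def _digest(block: dict) -> str:
    # Hash the canonical JSON form of everything except the hash itself.
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    block = {
        "index": len(chain),
        "timestamp": time.time(),  # the timestamping role noted above
        "data": data,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = _digest(block)
    chain.append(block)

def verify_chain(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != _digest(block):
            return False  # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the predecessor is broken
    return True
```

Because each block's hash covers the previous block's hash, editing any historical block invalidates every block after it, which is what makes retroactive alteration detectable.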

Because blockchains use cryptographic algorithms they are compute intensive, which makes them suitable for low-volume transactions such as the storage of sensitive or classified data. Any data change transaction would require informed consent from the blockchain members. The distributed data store comprises all the databases maintained by the member network group, meaning there is no centralized distribution point for the sharing and replication of data. A data set could be stored as a distributed record of transactions, broadcast simultaneously to all participants in the network, making misuse of stored data much more difficult.

Identity Blockchain

Figure 1: Blockchain Identity Security Logical View

Such an initiative might be the only way to minimise the need for the proliferation of network security monitoring, from devices over virtual wide area networks to the connections between data centers and public clouds. The catalogue of prevention and detection tools accompanying application security, from mobile application registration and login through integration and application servers, distributed databases and third-party services, could be rationalized. An organization-wide security review could address the problem that currently all of these measures have known flaws and weaknesses, with new vulnerabilities exposed even as existing threat vectors are addressed. Currently the attack surface is too large for security assurance to be real.

In Summary

The current paradigm of data in the wild, protected by a patchwork of technology services, some secure, some inherently insecure, has no real future in addressing the global security of data.  Security is only as strong as the weakest link in the existing chain of application and network measures used to protect information. The global regulatory environment is rich in process, and poor in compliance and therefore security effectiveness.

Data has gone global, yet the definitions of use and abuse of information differ completely from one society to the next. Personal data is misused in every industry by machine learning algorithms trained on web behaviours, not only in marketing but also in finance, social security, defence, civil administration and national security.

Initiatives such as GDPR in the European context and PCI-DSS in the finance industry are a good start, although as yet we have not found an effective method of addressing the root of data misuse: networking and application technologies are inherently vulnerable, and when linked together, even more so. The standards for data accountability, while worthy, are not working in practice.

To continue to evolve the current generation of technology, a different paradigm is required to resolve this problem, as the level of financial misappropriation and vulnerability of essential infrastructure continues to grow.

All data originates from people in various roles – creator, author, publisher, distributor, manager, buyer, seller or end user of data.  People engaged in data collection are as diverse as members of the public, small business operators, employees and consultants in public and private organizations. A multitude of technology applications are proliferating around access to data in the form of identity management and federation, authorization and authentication of data at rest and in motion. Personal, sensitive and classified data persists and proliferates across networks and databases, with varying strength cryptography, and all too often in plain text.  

While we have many partial solutions to the problem of global data security, residency and accessibility, most technologies have known or potential security vulnerabilities, and when linked together into an end-to-end business technology solution they are insecure by nature. This situation can only intensify given the accelerating trend to network data across on-premises traditional data center infrastructure and private and public clouds using identity federation, while data is increasingly stored internationally.

Published in White Papers
Monday, 30 October 2017 06:00

HTML Report 2, Electric Boogaloo


As a sequel to my previous article on editing the CSS generated by Enterprise Architect to personalise your HTML report, I was prompted by Guillaume to create this follow-up, as he gave me an idea on how to make these changes prior to generation. Since we already identified the necessary changes in the previous article, we can create our own customised version of the CSS EA would use, and specify that it be used when we create our HTML report.


So how do we do this?

Keeping in mind the list of changes that we need to make:

  1. Change the attributes for .IndexHeader so that the logo fits within the header of the report.
  2. Change the attributes for .IndexBody to cater for the changes that we have made to the header.
  3. Change the attributes for #contentIFrame so that the content section of the report fills the correct amount of the screen.
  4. Change the attributes of #tocIFrame so the table of content is displayed correctly.

With these changes in mind we are ready to get to work on defining our template. To do this you will need to fire up EA. At the bottom of your project browser you will see a tab labelled Resources, click this…


Having clicked this your view will change to…


You will then need to click into the section for Document Generation…


Next, right click Web Style Templates and choose Create HTML Template. This will bring up the following dialog…


Enter a name for your new template and a new window will open…


You can select any of the options listed in the left hand pane for editing by simply clicking on them. As the changes we are looking to make are relatively simple the only option that we need to concern ourselves with is CSS – Main. Click this option and you will see the following…


You will then see the CSS displayed in the right hand pane. Navigate to the areas of interest by pressing Ctrl+F and entering the headings outlined earlier; clicking the Find Next button will take you to these areas in the CSS, where you can make the necessary changes. Once you have made the changes, click Save and then Close. With these actions complete, you are ready to generate a new HTML report that will automatically use your custom CSS.

As before right click on your project in the project browser and choose HTML Report…


Clicking this will display the familiar dialog…


Under the Style option you will now see a drop-down menu listing the names of any custom templates you have created. For this example, I have changed the style to the CUSTOM template created earlier in this article, which means that when we generate our report EA will use our predefined custom CSS rather than the standard set. We also need to specify our logo as before, then click Generate.

The result is an HTML report generated with no further changes to make, and a template we can re-use every time we need to create this kind of report…


Published in Tutorials

During his time as an Enterprise Architect consultant, Jan van Oort trained numerous SparxSystems Central Europe customers eager to improve their modeling skills and methodologies. When he co-founded the startup KIVU in 2016, he naturally introduced Enterprise Architect there as well, where it quickly became a key development tool. KIVU Technologies is a provider of scalable software for network analysis that is currently in demand, and not only in the security sector.

In times of increasing monitoring and collection of mass data, KIVU is an exceptional example. Using well-designed software, KIVU enables the analysis of networks (not just social networks) from known nodes. The software is designed to assist analysts in narrowing down their data and connections to relevant, manageable networks, enabling them to focus on pertinent content and behaviour at greater speed.  Jan van Oort, Chief Engineer of KIVU: "As a former Enterprise Architect Trainer, I recognized the potential of model-based development right from the start of our project. Enterprise Architect supports me mainly in three key areas: Requirements definition, communicating with investors and customers, and presenting our project at events.”

As a result, KIVU recently completed a seed financing round of EUR 1.8 million and is thus able to push ahead with its development. Hans Bartmann, Managing Director at SparxSystems Software Central Europe: "We congratulate KIVU on a successful financing round. At the same time, we are pleased that one of our former trainers is now leveraging the potential of model-based development to create a data-protection-friendly network analysis platform. This approach combines many positive aspects and has the best prerequisites for international success from Austria."

 

Requirements are easily defined in the model

“The KIVU platform consists of two parts: a graphical user interface (GUI) and a database (backend server) called TARIM. Right at the start of the development of TARIM, I realized that the requirements defined here had to be clearly understandable for every developer. A model is ideal for this purpose, because it allows requirements to be defined graphically, regardless of how the programming based on them is handled," explains van Oort. Based on the requirements shown in the model, a programmer creates source code, which is then stored in a version control system (GitHub). In this way, van Oort can always keep track of whether a requirement has been successfully completed or whether subsequent improvements are necessary.

While the Chief Engineer deliberately does not oblige the programmers to work with the model-based approach, they still see the benefits. “As our GUI has continued to grow over the past year, the developers recently asked me if they could work with Enterprise Architect, not least due to the fact that over 40,000 lines of code can very practically be handled in a single model.” The first model (database) has therefore now been merged with the second model (GUI). The GUI is created in JavaScript, has to run in every current browser and allows the display of different views. It has a connection to the database at any time in order to be able to display changes immediately.

Since the platform is designed for the throughput of large amounts of data (social networks, telephone, time or bank data, etc.), all analyses are carried out in the database. This relieves the GUI and ensures that the displays are always current. By using special filters, only highly relevant data is analyzed. “Our data processing and filtering must be very transparent in order to be able to disclose it at any time should we be requested to do so by the authorities. On the one hand, we must guarantee the required level of data protection, while providing a powerful network analysis tool on the other,” explains van Oort.

Due diligence mastered with modeling

These days, a ‘technical due diligence’ examination is usually required on the way to start-up financing. An external expert assesses whether the start-up can really perform the service as claimed. KIVU also had to take this step, but did not want to disclose its own source code. “I can only recommend to any software start-up to use a model for this purpose. Since our Bulgarian auditor works with Enterprise Architect himself, we were able to use shared model views to successfully and quickly complete the audit via the Internet,” van Oort emphasizes. Last but not least, the KIVU team uses the views from the model in lectures, most recently at the first VÖSI (Austrian Software Industry Association) Software Day in Vienna. “We usually show our approach at security conferences in front of developers who of course want to see something concrete and understand the interrelationships. With the help of model views, this is no problem.” The views can be varied according to the target group, which significantly increases the comprehensibility and effectiveness of the presentations.


Image 1: The KIVU team (from right to left): in front, Christian Weichselbaum, Daniela Klimpfinger, Julia Franciotti; in back, Robert Wesley, Jan van Oort (in a white t-shirt) and Frazer Kirkman.


Image 2: The TARIM database developed by KIVU

(All images ©KIVU Technologies)


Image 3: This image represents the top layer of the KIVU API in the form of UML / Java interfaces, as well as the "tip of the iceberg" with regard to the API's actual implementation. Concrete classes will often appear in one or more sequence diagrams; these diagrams (and the associated code) are what developers at KIVU work with. The interfaces are round-trip engineered against the source code: a modification by the Chief Engineer on one side (code or model) results in an update on the other side, and obliges the developers to implement it. All the while, the Chief Engineer does not need to look at implementation details, although he can reverse-engineer the implementation source code into the model at any time. Similar diagrams exist for protocol layers, specific parsing utilities, etc.

About KIVU Technologies

KIVU Technologies is a provider of scalable software for the analysis of networks in the security sector and beyond. The company was founded in 2016 in Vienna by Robert Wesley, Jan van Oort and Christian Weichselbaum, and recently received seed financing of EUR 1.8 million. Austrian aws Gründerfonds and btov Partners led the financing round with the participation of APEX Ventures. In addition, Ewald Hesse and Louis Curran are supporting the start-up as angel investors. The KIVU team consists of engineers, developers, data scientists, analysts and security experts.

http://kivu.tech/

About Sparx Systems

Sparx Systems was founded in Australia in 1996 and is the producer of Enterprise Architect, the world's premier UML modeling platform. Enterprise Architect is used to design and produce software systems, model business processes, and model any other process or system. Enterprise Architect has been adopted by over 650,000 users for its high performance at an unbeatable price. It is an easy-to-understand, team-based modeling environment that helps organizations analyze, design and create well-documented systems precisely and comprehensibly. It also allows companies to collect and present the often distributed knowledge of teams and departments.

In order to support customers in their own language and time zone, SparxSystems Software Central Europe was created in 2004 to provide the entire German-speaking region with software licenses, training and consulting.

You can find more information at www.sparxsystems.eu

Published in Case Studies
Tuesday, 17 October 2017 06:00

RAMI 4.0 Toolbox


The RAMI 4.0 Toolbox deals with the complexity that comes with Industrie 4.0. Implemented as an extension for the modeling tool Enterprise Architect, it provides a framework for modelling an architecture based on cyber-physical systems.


History

Building on the success of the SGAM Toolbox, we started creating a new concept for Industrie 4.0 in 2016. The first release of the toolbox is now available, including some adaptations and improvements.

 

Technology

The toolbox is based on the "Model Driven Generation" technology provided by Enterprise Architect, with the results stored in an XML-based file. The language of choice is C#, which allows us to make use of all the functionality that comes with it.

The RAMI 4.0 Toolbox has been made possible by, and in cooperation with, SparxSystems Central Europe (DE: www.sparxsystems.de, EN: www.sparxsystems.eu).

 

Outlook

The next step is to improve the toolbox's usability and scope of functions. To achieve this, the toolbox is continuously refined, integrating suggestions gathered through constant exchange with the community.

Complete documentation: https://www.en-trust.at/wp-content/uploads/Introduction-to-RAMI-Toolbox.pdf 

Press Release (German): https://www.pressebox.de/pressemitteilung/sparxsystems-software-gmbh/SparxSystems-CE-RAMI-40-modellbasiert-umsetzen/boxid/876469

 

Published in Community Resources

Editing an HTML report generated from Enterprise Architect using CSS

Introduction

This article will walk you through the process of making a couple of simple tweaks to your HTML Report generated from Sparx Systems Enterprise Architect.


So what's the challenge here?

If you have ever needed to create a quick and simple report to walk a colleague or stakeholder through certain aspects of your model, then by far the quickest and easiest route is to generate an HTML Report from EA.

This will create an HTML version of your project locally that can be navigated & drilled down into (but not updated). When creating this report, you have the option to include your own logo as a way of adding a little extra visual engagement to your publication.

The challenge you will likely run into is that there is a fixed size for the logo which EA does not tell you about, and whose effect you will not see until you view the report and find your logo cut off by the content.


How do I fix this?

To start, you will need to generate an HTML report from your model. If you are not sure how to do this, simply right-click the root node of your model in EA and choose “HTML Report” from the menu…

Editing an HTML report generated from Enterprise Architect using CSS

When you click this option you will be presented with the following dialog…

Editing an HTML report generated from Enterprise Architect using CSS

In this dialog, check all the options that you want to include in your report and specify your output destination folder and your logo image. When ready, click “Generate” and a progress bar will pop up momentarily while EA generates your report.

When this process has finished you can either click “View” or navigate to your output folder & open the file “Index.htm” (there will be other files & folders generated as well, but for now this is all you will need).

When opened you will see something like this…

Editing an HTML report generated from Enterprise Architect using CSS

As you can see, the logo is too big for the report and there is no way to address this issue inside of EA.

So what do we do?


The Solution!

The first thing we do is to open up the HTML report using Chrome. This posed its own challenge initially, as out of the box Chrome does not allow local pages to access other local files, but there is a workaround for this (thanks to Phil Chudley for showing me this).

Firstly, find your shortcut for Chrome, right click it and choose “Properties”. When the Properties window appears locate the section labelled “Target” and add the following to the end of the information there:

--allow-file-access-from-files

Make sure that you include a space between …chrome.exe” and the string shown above for this to work.

Editing an HTML report generated from Enterprise Architect using CSS

We now need to make some changes to the HTML report.

Earlier I mentioned that there are several files generated at your output destination when you create this HTML report. One of those folders is titled CSS and contains two files; you will need to open “ea.css”. Personally I use Notepad for this, but there are a host of tools you could use.

With your CSS file open and with Chrome displaying your report it’s time to start editing.

Hover your cursor over your logo, right click and choose the “Inspect” tool…

Editing an HTML report generated from Enterprise Architect using CSS

This will open up a new Chrome window displaying the developer tools…

Editing an HTML report generated from Enterprise Architect using CSS

The sections that we will need to pay attention to are:

  • IndexHeader; this will be apparent immediately if you choose to inspect the logo
  • IndexBody; you will see this below IndexHeader, but you will also need to expand this section by clicking the triangular icon to the left of it to expose the other areas we need:
    • tocIFrame; this is the section of the page containing the model tree in your report
    • contentIFrame; this is the main section of your report that displays your information

.IndexHeader

Editing an HTML report generated from Enterprise Architect using CSS

This is the CSS controlling the display of this section of the report. The important property here is height. As you can see, by default it is only 60 pixels tall, and in our example the logo is larger than this.

To adjust this, click into the area where it displays “60px”. You can overwrite this with your desired figure, or you can adjust it to fit your logo by using the up arrow to increase the height one pixel at a time. In our example I changed mine to 91px.

With your ea.css file open you will need to find .IndexHeader in there and adjust the height to your new value. Save, but don't close, your ea.css file.
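Putting this together, the .IndexHeader rule in ea.css will end up looking something like the following sketch. The 91px value is just the figure from this example; use whatever height suits your logo, and leave any other properties already present in the rule as they are:

```css
/* ea.css - header area of the HTML report */
.IndexHeader {
    /* default is 60px; increased here so the taller logo is not clipped */
    height: 91px;
}
```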

You will, for now, still see something similar to this…

Editing an HTML report generated from Enterprise Architect using CSS

What we now need to do is make the body of our report work with the changes that we have made to the header.

To do this we need to make some changes to the section of the CSS for .IndexBody

.IndexBody

Using the developer tools window you now need to pay attention to the CSS for this section…



The change we need to make here is to the position property: change position: absolute; to position: inherit;

You will see that this changes the position of the body of the report allowing the header section to be fully displayed along with our logo...

Editing an HTML report generated from Enterprise Architect using CSS

Make sure to adjust this section in your ea.css file & save.
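As a sketch, the adjusted .IndexBody rule in ea.css would then contain something like the following; only the position property changes, and the rest of the rule, whatever your generated file contains, stays as-is:

```css
/* ea.css - main body of the HTML report */
.IndexBody {
    /* was: position: absolute; which clipped the enlarged header */
    position: inherit;
}
```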

You will notice that this has created a new issue: the frame housing our table of contents has now shrunk. You can still expand and use this section as you normally would, but it quickly results in excessive scrolling. This is something we don't want, so we will need to make a couple more changes to fix it.

#contentIFrame

This is the frame that displays our content & in the developer tools it will look like this…

Editing an HTML report generated from Enterprise Architect using CSS

In the developer tools this section will appear greyed out and so cannot be edited there. Instead you will need to locate this section in the ea.css file and make the change there without previewing it.

The change you will need to make is to the height property: change height: 100%; to height: -webkit-fill-available;, as shown above. Then save your ea.css file. This will set the size of the main display frame.

Next we need to look at the frame for the table of contents.

#tocIFrame

To make our Iframe look correct we will need to make the same change as we have just made to the content frame…

Editing an HTML report generated from Enterprise Architect using CSS

Again, the change you will need to make is to the height property: change height: 100%; to height: -webkit-fill-available;, as shown above.

Then save your ea.css file.
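After both edits, the two frame rules in ea.css should contain something like the sketch below. Note that -webkit-fill-available is a WebKit/Chrome-specific value, which fits here since we are previewing the report in Chrome; any other properties in these rules are left untouched:

```css
/* ea.css - table-of-contents frame (left) and content frame (right) */
#tocIFrame {
    /* was: height: 100%; */
    height: -webkit-fill-available;
}

#contentIFrame {
    /* was: height: 100%; */
    height: -webkit-fill-available;
}
```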

You have now completed the changes necessary to have your HTML report display properly and with your own logo. From now on you will see something akin to this upon opening…

Editing an HTML report generated from Enterprise Architect using CSS

Everything is now exactly where you would expect it to be and in a useable fashion.

There is something to bear in mind should you choose to use this method: if you later need to make changes to your model and regenerate the HTML report to the same output destination, your changes to the CSS will be overwritten as well.

To get around this, simply rename the ea.css file to something else, e.g. ea – NEW.css, before you regenerate your HTML report.

You will now find another file called ea.css in the CSS folder of your output destination; this is the one the report will default to. Simply delete this file and rename the ea – NEW.css file back to ea.css. Having done this, your updated report will open & still use your modified CSS.
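If you prefer the command line, the same preserve-and-restore dance can be sketched as follows. This is a minimal illustration only: the output folder name out is a hypothetical stand-in for your report's destination, and the echo lines merely simulate EA writing its files.

```shell
# simulate an existing report whose ea.css has been customised
mkdir -p out/css
echo "custom styles" > out/css/ea.css

# 1. before regenerating, park your edited stylesheet under another name
mv out/css/ea.css "out/css/ea - NEW.css"

# 2. regenerate the HTML report in EA; it writes a fresh default ea.css
echo "default styles" > out/css/ea.css   # stand-in for EA's regeneration

# 3. discard the fresh default and restore your edited stylesheet
rm out/css/ea.css
mv "out/css/ea - NEW.css" out/css/ea.css
```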

Obviously there is a lot more styling you could apply using the CSS; this is just a simple fix for a particularly common bugbear with the HTML Report.

You can find more content like this on our YouTube channel, Facebook and Twitter.

Published in Tutorials

Enterprise Architect User Group

London 2017; 18th - 19th May

EA User Group - London 2017

The London 2017 meeting of the Enterprise Architect User Group sees a shake-up of the agenda in the form of an additional day added to the roster. In addition to the traditional presentation day of user stories, how-tos and the like, the extra day added to the event takes the form of a training day.

The training day adds to the event a selection of six three-hour training sessions on a variety of subjects, from BPMN to TOGAF and Model Curation.


Location

Code Node, 10 South Place, London, EC2M 7BT

Get Directions

EA User Group - London 2017

Agenda; Thursday 18th May

EA User Group - London 2017

You can find information on these training sessions over at the EA User Group website.


Agenda; Friday 19th May

EA User Group - London 2017

You can find a synopsis for each of these presentations over on the EA User Group website.


How to buy your tickets...

Tickets for the event are available directly from the EA User Group website and are priced as follows:

  • Full two day event ticket; £550.00 +VAT
  • Friday only ticket; £75.00 +VAT

EA User Group - London 2017

Published in News
Wednesday, 11 January 2017 09:51

London User Group; Call for Speakers

 
 

If you are a user of Sparx Systems’ Enterprise Architect, we’re inviting you to share your user stories with the EA community at the next User Group event in central London on May 19th 2017.

 

We are interested in just about everything you do with Enterprise Architect, from the organisation of your model to enhancements you have made using MDG or the automation API, or even just a project with which you are especially happy. That said, we are not just after the sunshine stories; we would also be interested in hearing about experiences learned the hard way.

Presentations of an obvious or purely commercial nature will not be accepted. View our speakers style guide for tips.
Join us at Code Node, London

This year's London event is returning to the fantastic Skills Matter venue, Code Node. But this time around we're adding a twist to the proceedings. London 2017 will see an additional day added to the event roster, on the 18th May, before the usual day of presentations and networking on the 19th. This extra day of content will be a training day with no fewer than six half-day training sessions running! Full details on the training day will be published on the EA User Group website, along with ticketing information, very soon.
 
 
Published in News
Friday, 02 December 2016 12:03

How to use the Relationship Matrix

Dunstan Thomas Consulting

One of the most recurring questions Dunstan Thomas Consulting has encountered from clients over the years is "How do we use the Relationship Matrix?"

With that in mind we've got a short clip on how you can start effectively putting the Relationship Matrix into use for yourself...

https://www.youtube.com/watch?v=miiWN5PBuk0&t=30s

 

Sam Nice
Online Training, Marketing & Product Specialist
Dunstan Thomas Consulting
@DTUML 

 

Published in Tutorials
Wednesday, 24 August 2016 11:21

New EA workshops from Dunstan Thomas

NEW: Sparx Systems Enterprise Architect Workshops from Dunstan Thomas Consulting


As an alternative to our traditional classroom style training Dunstan Thomas Consulting now offer a series of Sparx Systems Enterprise Architect workshops.

These workshops provide all the fundamental practical skills that are necessary in order for you to use Enterprise Architect efficiently and effectively. The emphasis is on the practical rather than the theoretical and we will work with you so that exercises can be tailored to meet your specific modelling requirements.

 


Available Workshops

Sparx Systems Authorised Training Partner - ArcGIS Geodatabase Modelling in EA

Our current offering of Sparx Systems Enterprise Architect Workshops includes:


All details are available on our website or call our sales team on +44 (0) 23 9282 2254.

Published in News

Re-Using Elements

In this latest instalment in the series Phil Chudley will be looking at how to re-use Elements from your repository in Enterprise Architect.

https://www.youtube.com/watch?v=_DIs2ROV8fM 

As always all of our videos are available right now via our YouTube channel ... and don't forget to subscribe!

Published in Tutorials