Doug Rosenberg

After running ICONIX for 35 years and writing 7 books on UML, use cases, and agile software development, Doug discovered a new way to improve productivity by leveragng parallel development, and founded Parallel Agile (www.parallelagile.com) in 2018 after 4 years of test projects at the USC Center for Software and Systems Engineering, where he's been working with Prof. Barry Boehm.   A new book "Parallel Agile - Faster Delivery, Fewer Defects, Lower Cost" is mostly written and will be released during 2019.   We're also developing a Parallel Agile Add-In for Enterprise Architect and are available for training and consulting.  
In his previous lifetime...
 
Doug Rosenberg founded ICONIX in his living room in 1984 and began training companies in object-oriented analysis and design around 1990. ICONIX specializes in JumpStart training for UML and SysML, and offers both onsite and open-enrollment courses.
Doug developed a Unified Booch/Rumbaugh/Jacobson approach to modeling in 1993, several years before the advent of UML, and began writing books around 1995. Design Driven Testing is his 6th book on software engineering. He’s also authored numerous multimedia tutorials (including Enterprise Architect for Power Users) and several eBooks, including Embedded Systems Development with SysML.
Doug has spent the last few years doing "deep dive" consulting into cutting-edge technology including cross-platform mobile app development, REST APIs, and NoSQL databases, and gaining first-hand experience on some "hardcore agile" projects of varying sizes.  He's also been working with dozens of graduate students at the University of Southern California Center for Systems and Software Engineering (USC CSSE), managing Directed Research projects and developing/piloting the Parallel Agile process.

 

We've just released a major upgrade to the Parallel Agile CodeBot. As you might already know, CodeBot generates a database schema, database access functions, and a REST API for accessing the database, all from a UML domain model. But you might not know specifically how the class diagram is interpreted during code generation.
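To give a rough idea of what that output looks like, here's a minimal sketch of the kind of data access layer generated for a single domain class. It's written in Scala with hypothetical names and signatures (the choice of Future, the trait name, and the route comments are all illustrative, not actual CodeBot output):

    import scala.concurrent.Future

    // Hypothetical domain class, roughly as it might be generated from a UML class named Video
    case class Video(id: String, title: String, durationSeconds: Int)

    // Hypothetical shape of the generated database access functions for that class;
    // the generated REST API exposes the same operations over HTTP
    trait VideoCollection {
      def create(video: Video): Future[Video]           // e.g. POST /videos
      def findById(id: String): Future[Option[Video]]   // e.g. GET /videos/{id}
      def update(video: Video): Future[Video]           // e.g. PUT /videos/{id}
      def delete(id: String): Future[Unit]              // e.g. DELETE /videos/{id}
    }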

Here’s a simple domain model for a video editor that allows you to annotate an image for machine learning.

[Image: video editor domain model]

This example only uses Aggregation, but several other relationships are possible.  The table below explains how CodeBot interprets what’s on the diagram.

[Table: how CodeBot interprets UML relationships]

You should find Composition useful for creating nested JSON structures, something which particularly lends itself to MongoDB collections, where the recommended "best practice" is that child elements with a strong ownership relationship are nested within each parent element, rather than being put into separate collections.
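As a quick illustration (with hypothetical classes, not taken from the example model), a composed child ends up embedded inside its parent document rather than referenced from a separate collection:

    // Illustrative only: Composition (strong ownership) modelled as nested structures,
    // which serialise naturally to nested JSON and to embedded MongoDB documents
    case class Annotation(label: String, x: Int, y: Int)               // hypothetical child class
    case class Frame(timestamp: Long, annotations: List[Annotation])   // the Frame owns its Annotations

    // A Frame document embeds its Annotations rather than referencing a separate collection:
    // { "timestamp": 1200, "annotations": [ { "label": "car", "x": 10, "y": 20 } ] }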

All other relationship types – Dependency, Realization, Responsibility, etc. (ad infinitum) – are treated as "non-semantic" by CodeBot, so they can safely be used to define more abstract concepts that document the model rather than drive it.

How Multiplicity affects what’s generated

If you define multiplicity in the relationships (e.g. 0..1, 1..*), CodeBot uses these wherever possible for validation checks. If you don’t define the multiplicity, it defaults to either “0..1” or “1”, depending on the context.

Multiplicity also affects whether fields are generated as a single item or a list of items. Additionally, in languages that support optional types (e.g. Option in Scala, or Optional in Java), a multiplicity of 0..1 will be generated as an optional type.

Let’s quickly illustrate that with some brief code examples:

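Here's a rough sketch of that mapping, assuming a Scala target and hypothetical class names (the real generated code will differ in detail):

    // Illustrative sketch only (not actual CodeBot output): how multiplicity on
    // relationships might map onto generated Scala fields
    object MultiplicityExample {
      case class Video(title: String)
      case class Annotation(label: String)

      case class Frame(
        video: Video,                   // multiplicity 1           -> required single field
        note: Option[String],           // multiplicity 0..1        -> optional type (Option in Scala)
        annotations: List[Annotation]   // multiplicity 0..* / 1..* -> list of items
      )

      // Multiplicities such as 1..* can also drive generated validation checks, for example:
      def validate(frame: Frame): Either[String, Frame] =
        if (frame.annotations.nonEmpty) Right(frame)
        else Left("Frame must have at least one Annotation (multiplicity 1..*)")
    }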

Try CodeBot for free at http://www.parallelagile.com/codebot.html

 

Thanks to my colleague Matt Stephens for writing this tutorial on the CodeBot that he's built.  You can download the CodeBot tutorial here, and also watch a CodeBot Tutorial video  here.
Parallel Agile CodeBot™ is accessible through the Parallel Agile Add-in for Enterprise Architect. The CodeBot generates a complete REST API from your UML domain model. This article illustrates step-by-step how to run CodeBot from within EA, and how to incorporate the instantly generated API into your domain-driven project.
Find out more about Parallel Agile and try out CodeBot for free.
By the way, we also offer training, consultancy, and support for projects using CodeBot and the Parallel Agile development process. If this sounds like just what your organization needs, please contact us.

This article will introduce you to both the Parallel Agile (PA) process and to the Parallel Agile Add-in for Enterprise Architect, which enables the PA process for Sparx customers.

Download the article in PDF here.

 

Parallel Agile Add-in for Enterprise Architect

 

The Parallel Agile Add-in generates database access code and REST APIs from domain models, and it works in conjunction with the ICONIX Agile DDT Add-in, which generates acceptance tests from requirements and use cases.   Both add-ins are free.

We’ll discuss:

  • What’s Parallel Agile?
  • Compressing schedules with parallel development
  • Improving quality while compressing schedule
  • Why did we build an Enterprise Architect Add-in?
  • What’s an Executable Architecture?
  • What’s a Parallel Agile CodeBot?
  • Using the CodeBot to generate code for your domain model
  • Using our Cloud-Based Hosting Service to Test Your Generated API
  • Use Case Complexity Analyzer
  • Parallel Agile MDG Technology - supports Sprint Plans

 

 

 

 

In Parts 1, 2, and 3 of this article series we introduced a student project that I’m managing at the University of Southern California Center for Systems and Software Engineering.  This article (Part 4) describes the results of our first semester’s effort.  We are currently getting started again on enhancements with a new group of students for the Spring 2017 semester.  USC’s location in downtown Los Angeles is at the epicenter of a lot of bad driving, so we’re attempting a “crowdsourced bad driver reporting system”.

Our system consists of a voice-activated "dashboard-cam" mobile app connected to a Mongo database in the cloud via a Node.js REST API, plus some AngularJS webpages to file, review, and query bad driver reports. This technology stack for our web app is sometimes referred to as the MEAN stack (Mongo, Express, Angular, Node). We developed a native Android mobile app in Java and a native iOS app in Swift. Following the Resilient Agile process, and using Enterprise Architect to model the project, we attempted to go from zero to a working system in about 12 weeks of class time by having students develop use cases in parallel with each other. Previous articles in the series have presented snippets of the UML model, but everybody knows that successful implementation is where the rubber meets the road. So this article will show you how far we got.

When the driver issues either the "Report Bad Driver" or "Emergency Alert" command, the mobile app triggers video upload and server-side creation of the Bad Driver Report or Emergency Alert Report, as appropriate. The server then sends an email to the driver's posting account with a link to a new report that's pre-populated with the video.

While submitting the report, the poster reviews the video, records the license plate number, and grabs a single video frame that most clearly captures the offending vehicle.  The report is then made available for independent reviewers to evaluate.  The system requires unanimous agreement from 3 independent reviewers that the report is accurate.  Once this consensus has been achieved, the report is entered into a database that is queryable by insurance companies.
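As a sketch of that consensus rule (the names and types below are purely illustrative, not taken from the student implementation):

    // Hypothetical review model: a report becomes queryable by insurance companies
    // only once three independent reviewers all agree that it is accurate
    case class Review(reviewerId: String, accurate: Boolean)

    object ReportConsensus {
      def approvedForDatabase(reviews: List[Review]): Boolean = {
        val independent = reviews.distinctBy(_.reviewerId)   // one vote per reviewer
        independent.size >= 3 && independent.forall(_.accurate)
      }
    }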

From a development standpoint, we were able to exploit parallelism among the students to complete this set of use cases (including defining requirements, UML design, and coding) in approximately 12 weeks of calendar time with 15 students each contributing 5 hours a week.  This calculates out to 900 student-hours or 22.5 equivalent full-time work weeks.  In other words, about half a person-year total effort.

 

In Part 1 and Part 2 of this article series we introduced a student project that I’m managing at the University of Southern California Center for Systems and Software Engineering (USC CSSE).  USC’s location in downtown Los Angeles is at the epicenter of a lot of bad driving, so we’re attempting a “crowdsourced bad driver reporting system” this semester, and because we need to be really productive, we’re using Enterprise Architect to model the project.

 

The statistics on the costs of bad driving to society and to insurance companies are staggering.  Among the high points from the article linked above:  

  • A trillion dollars a year in costs due to vehicle accidents.
  • Over 5 million reported accidents a year.
  • An estimated 10 million additional unreported accidents a year.

 

With these numbers there's a strong argument to be made for a crowdsourced approach – it’s the only viable way to get enough eyes on the road.  So this semester I’m working with a group of 15 Masters students to build a “proof of concept” system, following the Resilient Agile process and leveraging parallelism, with each student assigned to a different use case and all students working in parallel. 

Download the attachment to read the full article.

Read Part 1 of this Case Study

We’re attempting a “crowdsourced bad driver reporting system” this semester, and because we need to be really productive, we’re using Enterprise Architect to model the project, field-test the Resilient Agile process, and to coordinate all of the student homework.  Students communicate with each other and with me using a shared EA model.

This semester I’m working with a group of 15 Masters students and an aggregate effective time budget of 80 student hours per week.  We’ve got about 12 usable weeks of student time, so it works out to a time budget of roughly 1000 student hours (that’s about half-a-person-year at 40 hours a week) over a 3 month schedule.

Resilient Agile is a flexible process in that it can be employed with traditional Scrum/Kanban sprints and backlogs, or alternatively we can leverage parallelism, and each student can be assigned a use case and develop their use case independently. 

I’ve been a big fan of leveraging parallelism in software development since I was a programmer at NASA/JPL way back in the 80s when I rescued a late project using a “divide-and-conquer” coding strategy, so we’re trying to see how far we can push the limits on massively parallel development with student projects at USC.  Communication and well-defined interfaces are key when team members are working in parallel, so the shared EA model is critically important.

Parallel modeling and development has also been a theme of our ICONIX JumpStart classes for the last 20 years, where we go into industry and work a client’s real project by splitting the class up into “lab teams”.  Typically in ICONIX JumpStart classes we put 3 or 4 students on a package of use cases, whereas on this project each student got a single use case.

If you’re going to leverage parallelism in development you have to do things a little bit differently.  Here’s an overview of the process we’re following:

1. Plan for Parallelism (identify dependencies and architect for parallelism)
2. Build the Right System (discover requirements, prototype areas of technical risk, and agree on conceptual designs)
3. Build the System Right (carefully review detailed designs)
4. Integrate as often as necessary

Enterprise Architect is a key enabler of the above process.   I would never attempt this approach without a good solid modeling tool at the heart of it. This article will show how we’ve used EA to accomplish the 4 steps above. 

 Read Part 3 of this Case Study

 

Introduction (See attachment for full article)

 

For the past several years I’ve enjoyed a mostly informal association with the University of Southern California Center for Systems and Software Engineering (USC CSSE). I was on staff at USC a few years ago teaching SysML and Model Based Systems Engineering, but for the last few years I’ve been mentoring Computer Science grad students in two Masters courses: CS577 Software Engineering and CS590 Directed Research. The Directed Research (DR) course is basically a mechanism for students who are about to graduate from the Masters program but are one or two units short of the required number: they pick up those units by participating in a project with a mentor from industry (that would be me). Students are expected to work 5 hours per week per unit.

Teaching at USC is fun (I graduated from SC back in ancient times). It gives me an opportunity to work with a lot of bright young software engineers, to stay current on new technology (in particular cloud-connected mobile app development), and to work with Prof. Boehm (author of Balancing Agility and Discipline, among numerous other titles), who has happily taken an interest in some of my ideas for improving productivity by innovating better software processes and has allowed me to test those ideas out with USC grad students.

This process work has included the development of the Resilient Agile process, an attempt to develop a better agile methodology. It started out as an experiment called Massively Parallel Use Case Modeling that we ran with the CS577 class a few years ago, in which we developed a complete location-based advertising system by handing one use case to each of 47 grad students and having each student develop their use case independently.

This semester I’m working with a group of 15 Masters students, mostly taking a single unit of  DR.  One student is taking two units, so my team has an effective time budget of 80 student hours per week.  Although the semester at USC is 16 weeks long, by the time the student teams get formed, and with midterms and finals, we’ve got about 12 usable weeks of student time.  So it works out to a time budget of roughly 1000 student hours (that’s about half-a-person-year at 40 hours a week) over a 3 month schedule. 

Because I like challenges, we’re attempting a “crowdsourced bad driver reporting system” this semester, and because we need to be really productive, we’re using Enterprise Architect to coordinate all of the student homework. This is the first article in a series that will describe our progress.

Are we crazy to think that we can get this system built in 3 months with a total of half-a-person-year of developer time?  Stay tuned for our next article to see how we’re doing.

Read Part 2 of this Case Study

 

Overview

Your organization is committed to Agile, Scrum and TDD. That’s not going to change. But somehow things aren’t going as smoothly as you’d like. Your project is on the “underplanned” end of the spectrum, and you sense that everything could be working better.

If I’ve just described your current situation, then this article might be for you.  The Resilient Agile (RA) process combines Test Driven Development with Design Driven Testing, resulting in improved coverage for both unit and acceptance testing, and helps you plan your sprints better by introducing visual modeling of user stories, epics, and tasks.  

RA is agile on the project management side, and scenario driven on the technical side. Enterprise Architect can help you to build, manage and enhance your Agile development projects. This article shows how Enterprise Architect can be used to implement Resilient Agile in your project.

[Image: BetterAgileProcess.png]

Resilient Agile interfaces Design Driven Testing to Test Driven Development -- and it's only supported by Enterprise Architect

 

Resilient Agile is balanced between planning and feedback

The agile movement properly recognizes the value of getting to code early and not spending years on “Big Design Up Front” without demonstrating executable software early in the lifecycle. Agilists are correct in believing that a great deal of knowledge will be gained about system requirements by putting live software in the hands of users early. However, agile development as currently practiced across much of the industry has removed too much engineering from software engineering.

Agile/scrum and kanban put the focus on project management rather than engineering. Engineering tends to be left to the discretion of the developers, and there is an active mindset of “code first, refactor later” as opposed to systematic exploration of requirements and designs. Agile in practice is often used as an excuse for not doing architecture, not doing engineering and avoiding thinking about rainy day scenarios, leading to buggy, fragile (non-resilient) software. There is a general mindset on agile projects of achieving quality by testing rather than achieving quality by design. This mindset very often tends to be overly-focused on unit testing because acceptance testing requires that rainy-day scenarios be fully modeled and accounted for.

RA puts the engineering back but still fits with scrum and TDD

Complicating the problem, basic software engineering skills – developing quality use case models that fully explore sunny/rainy day behavior, properly decomposing a use case into models, views, and controllers (MVC), and modeling the problem domain as a set of conceptual objects – are often not taught adequately at the university level.

Resilient Agile (RA) is an attempt to put the necessary engineering back into software development without losing the “get to code early” focus that agile gets right. RA is based on a time-tested method (ICONIX/DDT) of developing an initial problem domain model, then decomposing a system into collaborating use cases and elaborating each use case by doing a conceptual MVC decomposition.

 

ICONIX Resilient Agile - A Better Agile Methodology

 

Friday, 22 July 2011 00:00

Design Driven Testing for Systems

Design Driven Testing (DDT) for software was first outlined in the book Use Case Driven Object Modeling with UML: Theory and Practice (by Doug Rosenberg and Matt Stephens), and then described in detail in Design Driven Testing: Test Smarter, Not Harder by the same authors.

DDT is a highly methodical approach to testing, allowing you to know when you’ve finished – i.e. when you’ve written enough tests to cover the design and the requirements. It helps you to “zoom in” and write algorithmic tests to cover intensive or mission‐critical sections of code, and to know when it’s safe to “zoom out” and write fewer tests for boilerplate or less critical code.

In this article, Doug extends the concept of DDT to hardware/software systems, allowing SysML-based designs to be tested in a highly rigorous, systematic way. It’s still Design Driven Testing, but now the design elements that need to be tested include all of the “four pillars of SysML”, whereas DDT for software focuses on testing behavior.

[Image: DDT for Systems]



ICONIX has collaborated with Sparx Systems to produce a uniquely powerful set of capabilities for driving testing from Enterprise Architect UML and SysML models. By combining the functionality of Sparx's structured scenario editor with that of the Agile/ICONIX add-in, it's now possible to automatically generate test cases at multiple levels of abstraction. These include two levels of developer testing, with automatic generation of unit test code for JUnit, NUnit and FlexUnit, and two levels of acceptance/QA testing: generation of test cases for requirements, and scenario-based testing using a "use case thread expander" which automatically generates test scripts covering all permutations of sunny-day/rainy-day scenarios.
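To give a flavour of the scenario-level output, here's a rough sketch of a generated thread test skeleton for a hypothetical use case, written as ScalaTest for illustration (the add-in itself targets JUnit, NUnit and FlexUnit):

    import org.scalatest.funsuite.AnyFunSuite

    // Hypothetical skeleton for a "Search For Hotel" use case: one generated test
    // per thread, i.e. per permutation of sunny-day and rainy-day scenario steps
    class SearchForHotelThreadTest extends AnyFunSuite {

      test("Basic course: map displayed, hotels found, hotel selected") {
        pending // generated stub: the developer fills in the scenario steps
      }

      test("Alternate course: no hotels match the search criteria") {
        pending
      }

      test("Alternate course: map service is unavailable") {
        pending
      }
    }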

We've published our approach in a book called Design Driven Testing (DDT), which I co-authored with Matt Stephens, and we proved our concepts on a real, commercially available product: an interactive mapping system built with ESRI's ArcGIS Server geospatial engine and in commercial use on the travel website VResorts.com.

The attached presentation, from the recent ESRI Developer Summit, presents the design of the hotel mapping application, and discusses bugs that were detected and fixed before product release following the DDT approach.
