agile@atrify: Agile product development at atrify - From our beginnings to the process today

23 Oct 2019

As befits an agile way of working, our product development process has evolved steadily at atrify over the last few years. In today's blog post, I would like to talk a little about this process and how it has matured from our first attempts at agile to our current, customer-centric product cycle.

Essentially, two major areas influence the content of our releases: the input from our strategic portfolio management and what we as product owners generate as ideas and feedback via the community and the active users of the applications.

Feedback is the key - quantitatively and qualitatively!

We don't know what our users want. At least not until we ask them. And this is exactly where we are now investing a lot of effort.

Regular surveys of our customers provide us with quantitative feedback on their satisfaction and on missing features. These usually take the form of questionnaires that combine scale-based answers with open-ended questions, for example about the features users miss most.

We have been measuring the Net Promoter Score and carrying out a system usability test for around two years. In simple terms, the Net Promoter Score is a recommendation rate whose value should ideally increase with each new survey. The system usability test takes the form of a standardized questionnaire that gives us quantitative feedback on the usability of the software.
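In concrete terms, the Net Promoter Score is the percentage of promoters (ratings of 9 or 10 on a 0-10 "would you recommend us?" scale) minus the percentage of detractors (ratings of 0 to 6). A minimal sketch of that calculation; the function name and sample ratings are illustrative, not taken from our tooling:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 recommendation ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    toward the total but neither bucket. The result ranges from
    -100 (all detractors) to +100 (all promoters).
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Example survey: 4 promoters, 3 passives, 3 detractors -> NPS of 10
print(net_promoter_score([10, 9, 9, 10, 8, 7, 7, 6, 5, 3]))
```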

Business analytics and feature measurement via Google Analytics give us further valuable quantitative feedback. Here we can see exactly how often certain features are used in the applications, or spot abandonments in the user journey.
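Abandonment along a user journey can be read straight off a funnel of step counts. A small sketch of that analysis; the step names and figures are invented for illustration:

```python
def abandonment_rates(funnel):
    """Percentage of users lost between consecutive journey steps.

    `funnel` is a list of (step name, users reaching that step)
    tuples, ordered along the journey.
    """
    return [
        (a[0], b[0], round(100 * (a[1] - b[1]) / a[1]))
        for a, b in zip(funnel, funnel[1:])
    ]

# Hypothetical publishing journey measured via analytics events:
journey = [("open editor", 1000), ("fill attributes", 640),
           ("validate", 480), ("publish", 410)]
for start, end, pct in abandonment_rates(journey):
    print(f"{start} -> {end}: {pct}% drop off")
```

A spike in one of these percentages is exactly the kind of signal that points us at a process worth simplifying.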

On the one hand, we receive valuable information on how to improve certain processes and features; on the other, we learn which features are not well received and can simplify and streamline the software if necessary. This pleases the users, who get a tidy interface with only the useful features, and the developers, because the software becomes less complex.

Simplicity - the art of maximizing the amount of work not done - is essential

Building on the surveys mentioned above, we often follow up on individual feedback with user interviews, seeking personal contact with individual users.

These user interviews are usually conducted on the basis of negative feedback. It is often worth asking the users concerned what caused the negative feedback and how the situation can be improved. Quite often, there are quick wins that can be implemented quickly and easily to bring about an improvement.

We Listen

As a 100% subsidiary of GS1 Germany, we also work with GS1 organizations all over the world. We operate our application as a service for them in their countries and under their auspices. These GS1 organizations act as multipliers in their markets: they know trade and industry there best, as well as the local processes and expectations.

Reason enough to dedicate a separate format to these key customers, which we have entitled "We listen". Here, these customers have the opportunity to talk to us about the activities in their markets and to inform us of any need for change and feature requests. It is not uncommon for similarities to emerge across the various organizations.

However, it is not only the direct but also the indirect line to the customer that provides important insights. This is why we also work closely with Sales and potential customers on the one hand, and with support and active customers on the other. Support in particular, which is closest to the customer and usually knows exactly how they work and how they use the application, is indispensable when it comes to discovering and clustering problem areas in the use of our software.

Strategic portfolio management

Anyone who has ever studied Dr. Klaus Leopold's Flight Levels model will know how important strategic portfolio management is, and that agile working methods must also be established in the upper echelons of a company.

Prioritization of initiatives, features and projects must be discussed there, and a holistic understanding of the work in progress must be created.

Goals that are supported by features in the applications, or that require major, holistic adjustments to the software, are just as much an input from this board as paid customer customizations or major initiatives that affect several products at once.

Pre-prioritization of product ideas

As you can imagine, the first area in particular, with its diverse survey methods, generates a lot of ideas and feature requests. So that we as product owners can spend our time on the most important things for the product, rather than discussing features that won't make it into the product in the near future anyway, we roughly prioritize these ideas in a first step.

A so-called MoSCoW matrix helps us at this point. It is actually very simple and offers four quadrants: Must, Should, Could and Won't. While the Must section represents the things that are absolutely essential for the product, the Should section contains valuable features that are also worth implementing, but not necessarily in the next release or the one after, because users won't reject the product if they are missing.

Then there is the Could section, which collects everything that makes sense if time and resources are available, and finally the Won't section. One may well ask why the Won't section is on the scale at all if there are no plans to implement the things in it. For us, it serves to visualize that these topics are currently not considered useful: because the board is reviewed regularly, they remain known to everyone and can be discussed again at any time.
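The board itself is little more than ideas grouped into those four buckets. A sketch of that structure; the idea names are invented for illustration:

```python
from collections import defaultdict

def moscow_board(ideas):
    """Group (idea, bucket) pairs into the four MoSCoW buckets,
    preserving the order ideas were added in."""
    board = defaultdict(list)
    for idea, bucket in ideas:
        board[bucket].append(idea)
    # Return buckets in priority order, including empty ones.
    return {b: board[b] for b in ("must", "should", "could", "wont")}

# Hypothetical pre-prioritized product ideas:
ideas = [
    ("bulk image upload", "must"),
    ("validation report export", "should"),
    ("dark mode", "could"),
    ("fax integration", "wont"),
    ("attribute model update", "must"),
]
print(moscow_board(ideas))
```

Keeping the Won't bucket in the output mirrors the point above: the topics stay visible on the board even though they are not planned.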


We also have several agile teams working in different areas. Some of them are organized by product, but they also like to rotate when there are major issues or initiatives to be implemented. A current example is the migration of the software from an Oracle persistence layer to PostgreSQL; another is a major data quality initiative at our parent company.

If you look at the visualization, the whole setup is a little reminiscent of LeSS (Large-Scale Scrum). We have a Chief Product Owner who coordinates with the POs of the individual teams in daily stand-ups as well as in more detailed meetings every two weeks.

Below that, our teams generally work according to pure Scrum, i.e. they have their daily Scrums, review meetings, refinements and planning meetings.

Some of the review meetings take place on a team basis, but it is also common for several teams to present their results at a joint review meeting. We post the dates for the reviews on a corresponding board in our employee lounge so that everyone who wants to can take part. As our meeting rooms do have capacity limits, the code for the video conference is directly included on the post-its.

Deliver often

We now deliver a release every six weeks. That makes a total of around eight releases per year, twice as many as in the past. Changes to the attribute model, code lists or validations are released even more frequently, and often on demand.

When we started out with quarterly releases, every delivery felt like an adventure. Far-reaching code changes spanning more than twelve weeks, touching various underlying technologies and coupled with adjustments to the data model, made every rollout a challenge.

The adjustments we now make within six weeks, on the other hand, are mostly manageable. The higher deployment frequency also ensures that deploying becomes a routine rather than a challenge.

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale

Six weeks has proven to be a good cadence for delivery. Even shorter release cycles would make little sense for us, as we often have features that cannot be fully completed within one or even two sprints. With six weeks, we also seem to be in prominent company: Google's Chrome browser and Mozilla's Firefox, for example, ship at the same frequency.

Development & release cycles

We generally develop over three sprints of two weeks each per release, then go into a two-week code freeze phase during which the release is thoroughly tested again, before customers are given a further two-week test phase. Only then does the release go into production with the new features.
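Laid out on a calendar, one release cycle looks like this. A sketch with example dates; the start date is arbitrary:

```python
from datetime import date, timedelta

def release_schedule(dev_start):
    """Milestones of one release cycle as described above:
    three two-week sprints of development, a two-week code
    freeze with internal testing, then a two-week customer
    test phase before go-live."""
    freeze_start = dev_start + timedelta(weeks=6)
    customer_test = freeze_start + timedelta(weeks=2)
    go_live = customer_test + timedelta(weeks=2)
    return {
        "code freeze": freeze_start,
        "customer test": customer_test,
        "go-live": go_live,
    }

# Example cycle starting on an arbitrary Monday:
print(release_schedule(date(2019, 9, 2)))
```

Since six weeks of development plus four weeks of freeze and customer testing add up to ten weeks, while releases ship every six, development of the next release presumably starts while the previous one is still in its test phases; that overlap is my own inference from the cadence described here.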

Pre-release notes and a final version of the document (there may well be last-minute changes, which are then explicitly retested) keep customers up to date.

Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage

Every two releases, the entire company is usually brought together and there is a product launch event. This is where the new features in the software releases are presented to a larger audience by the product owners. This can also be done live on the systems and with the support of marketing-ready slides! This helps everyone in the company to keep up to date with the further development of the products and also helps the support team in particular to prepare for the new features and potential customer requests.

That's all, folks

That's it for now on our software development process. At some point, I also started to visualize the process as such...

Of course, this will also continue to evolve and my head is already full of ideas. Drivers such as test automation and the future reorganization of our product management will have a particular influence on this.

But the regular exchange with a strong, agile Cologne community also provides constant inspiration and brings new ideas to light.

About the author

Daniel Haupt is a certified Product Owner at atrify and is responsible for the agile product development of solutions for industry and commerce. He always enjoys learning about new approaches and methods in an agile environment and only likes waterfalls in the great outdoors.

