Manufacturing plants have long relied on inspection: after a product is produced, it is run through a set of quality-check procedures. But experts in manufacturing have long argued that inspection can neither guarantee nor improve the quality of a product, and this holds for any kind of product. By the time a product is inspected, its quality, good or bad, is already in it. A product succeeds not because its quality was inspected, but because quality was built in. That is why built-in quality is one of the core values of SAFe.
SAFe's built-in quality ensures that every element and every increment of the solution reflects quality standards throughout the development lifecycle of the product. Built-in quality means that quality is not added after the product is produced; it is built into the product as it is produced.
Lean and flow practices emphasize the importance of building quality into a solution. Without built-in quality, organizations face more unvalidated and unverified work, and therefore more rework. The result is a loss of speed, as teams must revisit previously produced work again and again instead of focusing on producing new value.
SAFe focuses built-in quality on five particular aspects: release quality, system quality, code quality, design and architecture quality, and flow. The picture below illustrates these dimensions:
Source: Scaled Agile
Agile teams operate in a fast, flow-based system to develop and release high-quality business capabilities quickly. A strength of the Agile approach is that, rather than performing tests only at the end, teams carry out many evaluations early and throughout development. Built-in quality ensures that the frequent changes of the Agile development process do not introduce new errors, enabling faster and more dependable execution.
Agile teams create tests for everything: code, stories, and features. These tests follow the test-first approach, meaning they are written before, or at the same time as, the item they verify. Compared with the traditional V-model, the test-first approach creates tests early in the development cycle. The picture below contrasts the traditional approach with the test-first approach:
Source: Scaled Agile
Further, the test-first approach favors tests that are small and automated, because full-stack, UI-based, larger tests take much longer to run. This leads to a balanced testing pyramid: the idea is to motivate organizations to rely mostly on quick, inexpensive small tests and reserve the costly, time-consuming larger tests for where they add the most value.
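To make the base of that pyramid concrete, here is a minimal sketch in Python using pytest. The pricing rule and test names are hypothetical, invented purely for illustration; they are not from the SAFe guidance.

```python
# Sketch of the "base of the pyramid": small, automated unit tests.
# Each isolates one rule and runs in milliseconds, so hundreds can
# run on every change. Checking the same rule through a browser-
# driven UI test would take seconds per test and belongs near the
# top of the pyramid, in far smaller numbers.

def shipping_cost(weight_kg: float) -> float:
    """Hypothetical pricing rule used for illustration."""
    return 5.0 if weight_kg <= 1.0 else 5.0 + 2.0 * (weight_kg - 1.0)

def test_flat_rate_up_to_one_kilogram():
    assert shipping_cost(0.5) == 5.0

def test_linear_rate_above_one_kilogram():
    assert shipping_cost(2.0) == 7.0
```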
Built-in quality also supports the creation of a continuous delivery pipeline and the ability to release on demand. The picture below shows the continuous integration portion of the continuous delivery pipeline:
Source: Scaled Agile
As the picture shows, the continuous integration portion of the continuous delivery pipeline runs the changes to each component through a series of environments, testing them before the component ever reaches production. Test doubles are used to speed up this testing by replacing slow, expensive components with fast, inexpensive proxies.
When tests take a long time to run, they delay Agile teams. Complete test suites are slow both to build and to run, so SAFe suggests that teams create reduced test suites and reduced test data that verify the most crucial functionality before a change moves through the later stages of the production pipeline. Teams collaborate with the System Team to balance velocity and quality and so ensure flow, as in the figure below:
Source: Scaled Agile
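As a minimal sketch of such a reduced suite, the Python snippet below tags critical-path tests with a pytest marker so that only they gate the fast pipeline stages. The `smoke` marker name and the test scenarios are assumptions for illustration, not SAFe terminology.

```python
# Sketch: a reduced "smoke" suite that gates later pipeline stages.
# Run only the tagged tests in the fast gate, the full suite later:
#   pytest -m smoke     (before promoting a build)
#   pytest              (complete suite, run less frequently)
# Register the "smoke" marker in pytest.ini to avoid warnings.
import pytest

@pytest.mark.smoke
def test_service_starts_and_answers_health_check():
    # Critical-path check: the build is unusable if this fails.
    ...

def test_rare_locale_formatting_edge_cases():
    # Valuable coverage, but not worth blocking every stage on.
    ...
```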
A system's design and architecture determine how well it can support the present and future needs of the business. When they are of good quality, future needs are easier to implement, systems are easier to test, and teams can more readily satisfy non-functional requirements.
As the market changes, the requirements of the business change with it; requirements also evolve through development discoveries and other factors. When requirements change, the architecture and design must evolve at the same pace. Traditional processes demand early, quick decisions, which raises the chance of inappropriate choices and, in turn, of rework and inefficiency later. Finding the best decision requires knowledge gained through experimentation, prototyping, simulation, modeling, and other learning activities, along with a set-based design approach that keeps multiple options open until the best resolution can be made. Once the best choice is made, developers rely on the architectural runway to implement it. Agile architecture helps here by providing guidelines for implementation and for design synchronization across teams.
As system requirements evolve, the design must evolve to support them. Poor-quality designs are hard to understand and hard to change, which slows delivery and increases defects. Good cohesion and coupling, together with appropriate encapsulation and abstraction, make an implementation easier both to understand and to change. The SOLID principles applied in Agile development make systems flexible, so they can accommodate new requirements with ease.
Design patterns provide well-known ways of applying these principles, and they offer a common language that is easy to read and simple to understand. When an element is named a "service" or a "factory", the name itself signals how the element relates to the broader system. And to arrive at the best design, set-based design in SAFe evaluates multiple solutions rather than settling on the first one the team comes across.
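To make the "common language" point concrete, here is a minimal, hypothetical factory sketch in Python; the classes and the notification scenario are invented for illustration. The word "factory" in the function name alone tells a reader that this element creates objects for the broader system to use.

```python
# Sketch: the word "factory" in a name signals the element's role.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class EmailNotifier:
    address: str
    def send(self, msg: str) -> None:
        print(f"email to {self.address}: {msg}")

@dataclass
class SmsNotifier:
    number: str
    def send(self, msg: str) -> None:
        print(f"sms to {self.number}: {msg}")

def notifier_factory(channel: str, target: str):
    """Factory: callers ask for a notifier without knowing which
    concrete class the broader system wires in."""
    if channel == "email":
        return EmailNotifier(target)
    if channel == "sms":
        return SmsNotifier(target)
    raise ValueError(f"unknown channel: {channel}")

notifier_factory("email", "team@example.com").send("Build passed")
```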
Design and architecture also determine a system's testability. Well-defined interfaces between modular components create the seams that allow testers to replace slow, costly components with test doubles. You can understand this concept better from the picture below:
Source: Scaled Agile
In the picture above, the speed controller component needs the vehicle's current position from a GPS location component in order to adjust velocity. Testing the speed controller against the real GPS component would require appropriate signal generators and GPS hardware that emulates GPS satellites. Replacing that complexity with a test double greatly reduces the time and effort needed to develop and test the speed controller.
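The picture's scenario can be sketched in code. The rendering below is illustrative, not Scaled Agile's own example; the interface, class names, and speed-limit logic are assumptions. The key point is that the well-defined `GpsLocation` interface is the seam where a test double replaces the real hardware.

```python
# Sketch: a test double replaces GPS hardware at an interface seam.
from typing import Protocol

class GpsLocation(Protocol):
    def current_speed_kmh(self) -> float: ...

class SpeedController:
    """Adjusts velocity based on readings from a GPS component."""
    def __init__(self, gps: GpsLocation, limit_kmh: float = 100.0):
        self.gps = gps
        self.limit_kmh = limit_kmh

    def throttle_adjustment(self) -> float:
        # Positive -> accelerate, negative -> brake.
        return self.limit_kmh - self.gps.current_speed_kmh()

class FakeGps:
    """Test double: no satellites or signal generators needed."""
    def __init__(self, speed_kmh: float):
        self.speed_kmh = speed_kmh
    def current_speed_kmh(self) -> float:
        return self.speed_kmh

def test_controller_brakes_when_over_the_limit():
    controller = SpeedController(FakeGps(speed_kmh=120.0))
    assert controller.throttle_adjustment() < 0
```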
These design principles apply to cyber-physical systems as well. Engineers across domains use modeling and simulation to better understand their designs, and hardware design applies the test-double concept through models and simulations, or through a wooden mock-up before cutting metal. This often demands a change in mindset: like software, hardware components change over the lifecycle of the system, so planning for future change by building in quality produces better long-term results than optimizing a design only for the current requirement.
A system's capabilities are ultimately implemented in its code and components, so how easily and quickly new capabilities can be added depends on how dependably and swiftly developers can change the code. To achieve code quality, SAFe suggests the following practices:
Test-driven development (TDD) guides the creation of unit tests by writing the test for a change before the change itself is implemented. This forces developers to think through the problem more broadly, including boundary conditions and edge cases, before writing the implementation. That deeper understanding leads to faster development with less rework, because fewer errors are introduced in the first place.
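A minimal sketch of the TDD rhythm in Python follows; the `average` function and its edge case are hypothetical. The tests are written first and fail (red) until the implementation beneath them is added (green).

```python
# Sketch of the TDD rhythm: the tests are written first and drive
# out a boundary condition (empty input) before any code exists.

# Step 1 (red): write the tests, including the edge case.
def test_average_of_values():
    assert average([2, 4, 6]) == 4

def test_average_of_empty_input_is_zero():
    # Thinking test-first surfaces this boundary condition early.
    assert average([]) == 0

# Step 2 (green): write the simplest implementation that passes.
def average(values):
    return sum(values) / len(values) if values else 0
```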
Unit testing divides the code into parts and ensures that every part has automated tests that exercise it. These tests run automatically after every change, allowing developers to modify code quickly and with confidence that a change will not break some other part of the system. The tests also function as documentation and as executable examples, showing how an element is meant to be used and how it behaves at its interface.
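The "tests as executable documentation" idea is visible in Python's standard doctest module, sketched below under the assumption of a Python codebase; the `slugify` function is hypothetical. The examples in the docstring document the interface and re-run as tests after every change.

```python
# Sketch: unit tests embedded in the docstring double as
# documentation and as runnable examples of the interface.
def slugify(title: str) -> str:
    """Turn a title into a URL slug.

    >>> slugify("Built-In Quality")
    'built-in-quality'
    >>> slugify("  Flow  ")
    'flow'
    """
    return "-".join(title.lower().split())

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # re-runs the docstring examples as tests
```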
Pair work involves two developers working on the same change at the same workstation. One writes the code while the other navigates, reviewing and giving feedback on the spot; the pair are free to swap the writing and reviewing roles. This is a valuable practice because it applies the combined knowledge of both developers to the change, and it broadens the skill set of the entire team, since every member gets the opportunity to learn from the others.
Collective ownership reduces dependencies between individuals and teams and ensures that no single developer can block the fast flow of value delivery. Anyone on the team can add functionality, fix errors, improve designs, and refactor. Because no one person owns the code, collective ownership also reinforces coding standards and consistency, as everyone can gain knowledge and contribute to maintaining quality.
Not every hardware system involves code, but creating physical artifacts is still a collaborative process. For instance, the CAD tools used in hardware development provide the equivalent of unit tests through electronic design assertions, and mechanical designs are verified through analysis and simulation. Coding standards, collective ownership, and pairing bring similar advantages to hardware, producing designs that are easier to modify and maintain. Indeed, many hardware design artifacts are effectively code, with well-defined inputs and outputs, and these practices apply to them directly.
The next dimension of built-in quality is system quality. While design quality and code quality ensure that system artifacts are easy to understand and change, system quality ensures that the system behaves as expected and that everyone shares a clear understanding of the changes to be made. SAFe recommends the following practices for achieving system quality:
Alignment and shared knowledge reduce developer delays and the chances of rework, enabling fast flow. Behavior-Driven Development (BDD) defines collaborative practices in which the Product Owner and team members agree on the precise, unambiguous behavior of a specific story or feature. Applying BDD helps developers build the right behavior on the first attempt, reducing errors and rework. Model-Based Systems Engineering (MBSE) scales this alignment to the system as a whole: through a process of analysis and synthesis, it provides a complete, high-level view of all the intended functionality of a system.
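As a minimal sketch of how such agreed behavior might be captured, the Python test below uses plain pytest with Given/When/Then comments; teams often use dedicated BDD tools instead, and the cart feature, names, and threshold here are hypothetical.

```python
# Sketch: the behavior the Product Owner and team agreed on,
# expressed as a Given/When/Then test before implementation.

class Cart:
    """Hypothetical feature: orders over 100 ship for free."""
    def __init__(self):
        self.total = 0.0
    def add(self, price: float) -> None:
        self.total += price
    def shipping(self) -> float:
        return 0.0 if self.total > 100.0 else 9.99

def test_orders_over_one_hundred_ship_free():
    # Given a cart whose total exceeds 100
    cart = Cart()
    cart.add(150.0)
    # When shipping is calculated
    cost = cart.shipping()
    # Then the customer pays nothing for shipping
    assert cost == 0.0
```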
Scaling agility means many engineers making many small changes that must be continuously evaluated for conflicts and errors. Continuous integration and continuous deployment practices give developers fast feedback: changes are quickly integrated and tested at multiple levels, up to and including the deployment environment. CI/CD automates the process of moving changes through these stages and defines how to respond when a test fails. And while CI/CD aims to automate as many tests as possible, some tests, such as exploratory testing and certain NFR tests, cannot be automated.
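As a rough sketch of the staged automation described above (not an actual SAFe or vendor pipeline), the Python script below promotes a change through progressively broader stages and stops at the first failure; the stage commands are hypothetical placeholders.

```python
# Sketch: a pipeline that promotes a change through progressively
# broader test stages and halts promotion at the first failed gate.
# The stage commands are hypothetical placeholders.
import subprocess
import sys

STAGES = [
    ("unit tests",        ["pytest", "-m", "not slow"]),
    ("integration tests", ["pytest", "tests/integration"]),
    ("staging deploy",    ["./deploy.sh", "staging"]),
]

def run_pipeline() -> None:
    for name, cmd in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Respond to the failed gate: stop the promotion.
            sys.exit(f"{name} failed; change not promoted")
    print("all gates passed; change promoted")

if __name__ == "__main__":
    run_pipeline()
```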
Cyber-physical systems can support fast flow as well: models, simulations, previous versions of hardware components, and other proxies can stand in for the eventual system components. The picture below shows how a System Team can assemble these component proxies into a demonstrable platform for testing system-level behavior:
Source: Scaled Agile
As each component matures, the end-to-end integration platform matures with it. With this approach, component teams become accountable both for their part of the final product and for the maturing of the end-to-end testing platform.
Releasing lets an organization evaluate the benefit hypothesis of a feature. The more quickly an organization releases, the faster it learns, and the faster it can deliver value to customers. A modular architecture defines standard interfaces between components, allowing smaller, component-level changes to be released independently. Smaller changes mean quicker, more regular releases with less risk, but ensuring their quality requires an automated pipeline.
Unlike traditional server infrastructure, immutable infrastructure does not allow changes to be made directly and manually to production servers. Instead, changes are applied to server images, tested, and then rolled out to replace the currently running servers. This approach produces more consistent and predictable releases, and it enables automated recovery: if the operational environment detects an error in production, it can roll back the deployment simply by releasing the previous image to replace the faulty one.
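A rough sketch of that rollback idea follows, assuming an image-based deployment API; `deploy_image` and `health_check` are hypothetical placeholders for whatever the actual platform provides, not a real library.

```python
# Sketch: immutable, image-based releases with automatic rollback.
# deploy_image() and health_check() are hypothetical placeholders.

def deploy_image(image: str) -> None:
    print(f"replacing running servers with image {image}")

def health_check() -> bool:
    return True  # placeholder: probe the newly deployed servers

def release(new_image: str, previous_image: str) -> None:
    deploy_image(new_image)  # servers are replaced, never edited
    if not health_check():
        # Automatic recovery: re-release the previous good image.
        deploy_image(previous_image)
        raise RuntimeError(f"{new_image} failed; rolled back")

release("app:v42", previous_image="app:v41")
```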
For systems that must provide objective evidence for audit or regulatory compliance, releasing carries additional obligations: the organization must demonstrate that the system meets its intended purpose and has no unintended, harmful effects. A Lean Quality Management System (QMS) defines the approved policies, practices, and procedures that support a Lean-Agile, flow-based, continuously integrate-deploy-release process.
The Definition of Done (DoD) is a crucial technique for ensuring that each increment of value is genuinely complete. The continuous development of incremental system functionality requires a Definition of Done so that the right work is done at the right time. The table below gives an example. Each team, train, and enterprise should develop its own definition; while these vary across Agile Release Trains and teams, they usually share a common core:
Source: Scaled Agile
The spirit of release quality is not to let changes lie dormant waiting to be integrated. Rather, it is to move changes frequently and swiftly through successively larger parts of the system until they reach an environment where they can be validated. Some cyber-physical systems can only be validated in the customer's environment; others approximate that environment with one or more mock-ups to gain early feedback. As the end-to-end platform evolves, it provides ever higher fidelity, enabling early verification and validation (V&V) as well as compliance efforts. For most systems, this early V&V and compliance feedback is crucial for learning and for the ability to release and produce products.