Why does Agile work?


Summary

A discussion of the fundamental basis of Agile Software Development.

Software development is chaotic

Chaos here is closer to the mathematical definition of "chaos" than the psychological or sociological definition.

The initial conditions of a software project significantly impact the final project. Small changes in the project can have significant impacts on the probability of achieving the project goals. These kinds of observations are typical in mathematical chaos.

As a thought experiment, imagine a project with 4 developers in it. Say it's complete in 1 year.

Now, take the exact same team and write the same project. It will take less time and the final code base will be different from the previous version. Why? Because the state of the 4 developers' inner knowledge bases is different than it was a year ago. They have knowledge and insight that they did not have previously.

Now, take a new team of 4 developers and, again, write the same project. It will take more or less than 1 year, i.e. a different time frame than the first team's. And the code base will be substantially different than theirs as well. Why? Because those developers have a completely different knowledge base, and have different insights into software development and the project details than the first team.

Last one. Take a new team of 4 developers and write the same project, but one of those developers takes 4 months off to do another project and then comes back on this one.
Even though the team is the same 4 individuals, the code base will be different. Why? Because during that developer's hiatus, the rest of the team was making decisions and learning while he was not. When he comes back on the team, the social dynamics and knowledge base change.

Software development is chaotic.

Feedback

There have been many strategies in human history to deal with chaos. The only effective ones have had a common theme: feedback.

Why? Because feedback converts the situation into a Complex Adaptive System (CAS).

  • The entities in the system are independent.
  • They interact with other entities.
  • They adapt their behavior to what the other entities are doing.
  • All of this is in pursuit of their goal.

Agile development does exactly this. It increases the frequency and variety of the feedback produced during the project. That feedback is used by the development teams and other teams to adjust and accommodate their behaviors to achieve the project goals.

Why use feedback? Simply because people are fallible; they do make mistakes. Feedback gives us confirmation, in real time, that our beliefs and assumptions are correct. But that feedback has ramifications, and those need to be known and accepted, or at least discussed, again in real time.

An example of this in manufacturing is "Lean Manufacturing" which uses feedback to adjust workflows throughout an assembly line. Also see "Kanban".

Mitigating Project Risks

The intent of software project strategies like Agile is to mitigate project risks from these sources:

  • Software industry wide risks
  • Company/industry specific risks
  • Project specific risks

Having feedback helps identify which of these risks are occurring at the current time. We need to know:

  • something is failing
  • the extent of the failure: is it localized or is it in multiple areas
  • is there a correlation between a change or action that was done and the new failure?
  • what is the risk severity and possible impact on the project goals and deadlines?
  • how can that risk and similar risks be mitigated or, hopefully, completely avoided?
  • what is the data we need to identify failures, bugs, and other potential issues
  • what is the data we need to identify other possible project risks

Other information to share:

  • Which feature(s) are the most important?
  • Is this feature what the user really wanted?
  • Does this feature actually work?
  • What are the coding impacts of fixing that bug, and of not fixing it? What are the schedule impacts?

How does Agile mitigate those risks?

Agile uses broad spectrum feedback:

  • continual interaction with stakeholders
  • continual opportunities for stakeholders to try, hands-on, features as they are developed
  • automated unit tests (UTs) and other testing
  • continual tracking of issues and tasks

Also note that the stream of information from the feedback is valuable in itself: it shows trends and probabilities over a longer period of time. If that stream changes in any way, that could be an indication of a risk or a pending risk.

As well, the feedback from these sources is not static. If a risk does manifest itself, then Agile promotes changing the development process or adding feedback paths to detect such risks sooner.

The sources and destinations of that feedback are varied. From the highest perspective, software projects have many stakeholders. Each of these has information, insights and knowledge that is relevant to other teams and to themselves. Sharing that insight and knowledge increases the overall likelihood the project will be successful.

  • corporate/management team
  • users and user representatives
    • quality assurance team
    • marketing
    • legal
  • development teams
    • software
    • hardware
    • mechanical
  • test teams
    • automated test team
    • manual test team

Risks

There are many risks to be considered in any project.

Project level:

  • Schedule Flaws - the schedule is missing tasks or has unneeded tasks
  • Dependencies - on other teams, or on other resources that are not accounted for
  • Requirements Inflation - additional but unnecessary features are being added
  • Missing Requirements - un-implementable, unclear, or implied requirements
  • Specification Breakdown - the definition of what the users want is not clear or is misleading
  • Under Performance - some members are not meeting their stated expectations

App level:

  • Surprise bugs/problems at the end of the project
  • Coupled phases/stages; high internal coupling
  • Bad/incorrect design decisions or architecture(s)
  • Code base is stale, corrupted, unstable, etc.; Code components don’t work together.
  • Distractions to project focus or progress: documentation, status reporting, external stakeholder demands, field support, other/older version support, etc.

Example Feedback

In medical devices, there is a set of requirements that the device must meet. A Software Requirements Specification (SRS) document details these. A key goal is that all of these requirements must be declared, reviewed and tested before an FDA submission can be done.

Using Agile we can monitor these activities and get feedback to track the progress of that goal.

There are three possible scenarios: an automated test system tests the requirements, a manual test team does, or it is a combination of both. For this example, I'll use the third scenario.

Use iterations

In a greenfield project, choose the top 1 or 2 requirements that can be met during the Agile iteration. These will be happy path behaviors that the app/device must be able to perform for any user interactions. For example, the starting screen content and layout, logging in, etc.

review

Choose as few requirements as possible:

  • they can be achieved in the iteration, or at least significant progress towards them can be made.
  • the dev team does a design session to break these down into iteration stories, estimates are done.
  • the manual team is advised that these should be done by the end of the iteration
  • the automated team is given stories to create any system tests for these requirements
  • all three teams should review the requirements, wording, etc. to ensure they all understand and agree on what the expected behaviors are.

This review is a form of feedback. If there is any confusion or there are issues, then the requirements can be augmented or changed to clarify the expected behaviors.

tracking the testing

Both the manual and automated teams need a way to track which requirements have been tested and their current pass/fail/untested state.
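A minimal sketch of such tracking in Python (the `RequirementTracker` class and the SRS IDs are illustrative, not from any particular test-management tool):

```python
from enum import Enum


class TestState(Enum):
    UNTESTED = "untested"
    PASSED = "passed"
    FAILED = "failed"


class RequirementTracker:
    """Track the current test state of each requirement for a test team."""

    def __init__(self):
        self._states = {}

    def add(self, req_id):
        # New requirements start out untested.
        self._states[req_id] = TestState.UNTESTED

    def record(self, req_id, passed):
        # Record the latest pass/fail result for a requirement.
        self._states[req_id] = TestState.PASSED if passed else TestState.FAILED

    def summary(self):
        # Count how many requirements are in each state.
        counts = {state: 0 for state in TestState}
        for state in self._states.values():
            counts[state] += 1
        return counts


# Example usage: three requirements, two tested so far.
tracker = RequirementTracker()
for req in ["SRS-001", "SRS-002", "SRS-003"]:
    tracker.add(req)
tracker.record("SRS-001", passed=True)
tracker.record("SRS-002", passed=False)
```

Both teams updating a shared structure like this gives everyone the same view of what has been tested and what remains.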

do the development

The dev team does the stories and adds UTs as necessary.

One useful coding convention to introduce is to add a comment in all function/method blocks indicating which SRS/SDS requirements are implemented. When a requirement test fails then searching for it can help narrow the debug effort significantly.
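A hedged sketch of what that convention might look like (the function, the dose limits, and the SRS IDs are all hypothetical):

```python
# Convention: each function/method notes which SRS/SDS requirements
# it implements, so a failing requirement test can be traced to code
# with a simple search, e.g.:  grep -rn "SRS-042" src/

def validate_dose(dose_ml: float) -> bool:
    # SRS-042: dose must be within the allowed range (hypothetical)
    # SRS-043: non-positive doses are rejected (hypothetical)
    return 0.0 < dose_ml <= 10.0
```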

dev team testing

UTs should test implementation, but there can be additional "unit tests" that confirm at a higher, system behavioral level that the expected requirement behaviors are being met.

These may not be sufficient to fully test a requirement, but if they pass, then the likelihood is high that the requirement tests, automated or manual, are going to pass. And if these fail, then there is a high likelihood that a requirement test will fail too.
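A sketch of what such a behavior-level "unit test" might look like, in pytest style (the `login()` stand-in and requirement SRS-010 are hypothetical):

```python
def login(username: str, password: str) -> bool:
    """Toy stand-in for the real login logic under test."""
    return username == "admin" and password == "secret"


# Behavior-level "unit tests": they exercise the requirement's expected
# behavior rather than the implementation details.

def test_srs_010_valid_credentials_accepted():
    # SRS-010 (hypothetical): valid credentials are accepted
    assert login("admin", "secret")


def test_srs_010_invalid_credentials_rejected():
    # SRS-010 (hypothetical): invalid credentials are rejected
    assert not login("admin", "wrong")
```

Tests like these run in the dev team's normal test suite, well before the requirement tests run.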

This is feedback to the dev team. If the CI/CD unit tests fail, then they know in a very short time that their current changes have caused a regression for the other teams. The effect of this feedback is that these regressions become very, very rare.

manual testing

At the end of the iteration there is a release of the app to the manual team. A list of additional requirements that should pass can easily be given to them.

They then do a test of the entire set of requirements. It pays for them to start with these new requirements. Any failures will go back to the dev team for consideration to be fixed in their next iteration's stories. Putting a high priority on fixing these kinds of bugs is good for the dev and test teams.

automated testing

The automated team has already been writing tests for these requirements. They can use the app released for this iteration to run their tests and confirm both their tests and the app are working as expected.

This is also good feedback to pass on to the manual team. A failure in the automated tests that the manual team did not see, or a failure the manual team saw but the automated tests didn't find, should be a red flag. One or both of their test procedures needs to be double-checked.

overall progress feedback

The overall requirement testing status is a great way to check the overall progress of the team. When the dev team reports they have 20% of the requirements done, it has credence. The manual and automated testing confirms that 20%.
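That cross-check can be sketched as a set intersection: a requirement only counts as done when all three sources agree (the function and the SRS IDs are illustrative):

```python
def requirements_progress(dev_done, manual_passed, auto_passed, total):
    """Percent of requirements confirmed done by all three teams."""
    # A requirement counts only when the dev team claims it done AND
    # both the manual and automated teams have seen it pass.
    confirmed = dev_done & manual_passed & auto_passed
    return len(confirmed) / total * 100


dev = {"SRS-001", "SRS-002", "SRS-003", "SRS-004"}
manual = {"SRS-001", "SRS-002", "SRS-003"}
auto = {"SRS-001", "SRS-002"}
progress = requirements_progress(dev, manual, auto, total=10)  # 20.0
```

If the dev team's claimed number and this confirmed number diverge, that gap is itself useful feedback.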

are we done?

When all three teams state that 100% of the requirements are met, and they've been testing the app/device continually to double-check that status, then it is reasonable to conclude that, yes, the project development is complete and is ready for the formal requirements testing needed by the FDA.

  • The ongoing review of the requirements gives a good probability that there won't be any last minute surprises.
  • The test procedures used by the manual team are being run (and therefore "reviewed") per iteration, so again no surprises.
  • Ditto for the automated team.
  • Both teams, and the dev team are also reviewing the requirements themselves, and so yet again no surprises.
  • At the end of the project, the formal run of the requirements tests is identical to what those teams have been doing for the entire duration of the project. And so, again, no surprises.

The overall effect of the high feedback rates from these teams leads to a key goal: no surprises.

- John Arrizza