Software development friends: we need to talk. Our definition of high-quality software is broken, and it has been for a while. Most likely you knew this already, but we’ve all been busy and who has the time to make things more complex?
There’s a great opportunity here, but first we need to understand what’s not working and why.
Understanding how we measure systems quality
In systems engineering, it’s widely accepted that a system’s requirements break down into two categories:
- Functional components and requirements. These are what the system has to do or the purpose it serves.
- Non-functional requirements. These are characteristics of the system that make it fit for purpose, maintainable, and stable.
Within the potentially huge list of non-functional requirements sits a group that we know makes the difference between a system merely operating and being high-quality.
Because nearly all of them end in “-ility”, they’re sometimes known as the “-ilities”, and while there are many variations on what this group includes, the following are typically in the mix.
Performance* is the system’s ability to meet timing requirements within given constraints like speed, accuracy, and memory usage.
*The reason I said “nearly all” of them end in “-ility”.
Scalability is the system’s ability to respond gracefully to the demands placed on it, handling an increased workload without degrading performance, and to expand its architecture to accommodate more users, processes, or data.
Usability describes the software’s ability to provide a high-quality user experience.
Availability is the degree to which a system is accessible and usable when needed, especially after something goes wrong.
Extensibility is how easily a system can cater to future changes through flexible architecture, design, or implementation.
Supportability is the system’s ability to provide helpful information for identifying and solving problems.
If you’re curious to know more, check out Semi Koen’s primer on these baseline -ilities.
High-quality code needs to consider these -ilities
We use these non-functional requirements all the time, whether we realize it or not.
They’re the basis by which we extend our peer and code reviews to include more than just “does this work?”. We often review the code and its resulting functionality with a view to the quality of the implementation and the maintainability of the code as written. We think about whether the code is elegant or well-designed, not just that it gets the job done.
We’ve developed testing approaches that measurably prove whether these requirements are met, and our systems testing peers have been automating these checks so that all software can be verified before use.
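To make that concrete, here’s a minimal sketch of what an automated non-functional check might look like, using a performance budget as the example. Everything here is hypothetical and illustrative: `lookup` stands in for any system-under-test, and the one-millisecond budget is an invented threshold, not a standard.

```python
import time

# Hypothetical system-under-test: any function whose latency we want to bound.
def lookup(records, key):
    return records.get(key)

def measure_latency(fn, *args, runs=1000):
    """Return the average wall-clock time per call, in seconds."""
    start = time.perf_counter()
    for _ in range(runs):
        fn(*args)
    return (time.perf_counter() - start) / runs

records = {i: str(i) for i in range(10_000)}
avg = measure_latency(lookup, records, 1234)

# The non-functional requirement expressed as an executable check:
# average lookup latency must stay under an (illustrative) 1 ms budget.
assert avg < 0.001, f"latency budget exceeded: {avg:.6f}s per call"
```

The point isn’t the timing code itself; it’s that a requirement like performance stops being an opinion and becomes a check that can fail a build.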
Something’s missing from how we define high-quality software
The -ilities serve an important purpose, but we’re missing an essential one.
Defensibility is the system’s ability to reduce the risk posed by unexpected or malicious use, detect suspicious activity when it happens, and respond appropriately to minimize impact.
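Each part of that definition can show up directly in code. The following is a deliberately simplified sketch, not a real authentication library: the username policy, the failure threshold, and the `handle_login` function are all invented for illustration, but they map onto the three parts of defensibility: reduce risk, detect, respond.

```python
import logging
import re

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("defensibility-demo")

# Hypothetical policy values, chosen for illustration only.
USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")
MAX_FAILURES = 5
failures = {}  # username -> consecutive failed attempts

def handle_login(username, password_ok):
    # 1. Reduce risk: reject unexpected or malformed input outright.
    if not USERNAME_RE.match(username):
        log.warning("rejected malformed username: %r", username)
        return "rejected"

    # 2. Detect: record suspicious activity (repeated failures).
    if not password_ok:
        failures[username] = failures.get(username, 0) + 1
        if failures[username] >= MAX_FAILURES:
            # 3. Respond: lock the account to minimize impact.
            log.warning("locking %s after %d failures",
                        username, failures[username])
            return "locked"
        return "denied"

    failures.pop(username, None)
    return "ok"
```

Notice that none of this lives in a separate security tool. It’s ordinary application code, written by the engineers who understand the system, which is exactly where the rest of this piece argues it belongs.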
Making defensibility mandatory
At the moment, most of us are somewhat aware that software security is something we need to consider in our SDLC. The trouble is that defensibility stands apart from the overall quality of the software.
This happens in a few different ways, and maybe one or more of these scenarios is familiar to you.
- Defensibility is considered by an entirely separate team, one that reviews the code after it’s done or runs tooling outside the SDLC.
- Defensibility is considered only when required by audit or external stakeholders.
- Defensibility is considered only when something bad has happened, triggering a review of the incident and its cause.
When we allow ourselves to make any of the -ilities optional or standalone, we abdicate responsibility. We say it’s okay to compromise that element of quality, or to make it someone else’s problem.
We once did this with scalability, pushing it onto our operations team so they could throw more hardware at issues caused by poor resource management.
We would have ignored usability, not understanding the impact and friction we were causing by not considering “how” a system is used in the full sense of the word.
This wouldn’t be acceptable now. These shortcuts or oversights would (hopefully) not pass review. The code would be rejected.
So why is it okay for us to make defensibility optional?
My guess is that this behavior doesn’t come from a place of shortcuts and laziness, but from a place of not yet having the skills and confidence we need.
As a software development community, it’s taken us a long time to understand how the other -ilities impact our code and how to design and build systems with them in mind.
Now we need to upskill and have the same focus on defensibility, understanding how and why this non-functional requirement matters.
The impact of defensibility
Changing our approach and making defensibility part of our core non-functional systems requirements is critical if we want to secure the incredible software we are building today.
While it means more work, and few resources exist to make this easy right now, it also means that cyber security will finally become a core part of systems development.
It means that for code to be acceptable and production-ready, security (and defensibility) will become table stakes.
It brings ownership of one of the biggest challenges in application security to the right team. Instead of making defensibility the domain of ever-harder-to-find specialists, we put it in the hands of those who understand the system best: the engineers.
Bringing defensibility into the -ilities as a core part of software quality is the next step for software development worldwide. It turns the cyber security problem from an unknown challenge out of our control into an interesting problem we solve as part of building software.
And after all, isn’t problem-solving what software engineers do best?