Understanding the impact of insecure design on software security
As the dust settles on the release of the latest version of the OWASP Top 10, our team has been discussing the inclusion of insecure design on the list. Specifically, we’ve been thinking about what that means for everyone involved in delivering software products.
If you haven’t yet been involved in a collaborative threat modeling exercise, or you’re pretty sure a paved road is purely for riding your bicycle, read on as our COO Erica and Principal Developer Advocate Christian break down what insecure design means for practitioners at various levels.
The challenges of measuring a secure design
Laura has already touched on some of the challenges with the OWASP Top 10, notably the ongoing move from syntax-specific flaws to high-level themes and subjective design principles.
The Top 10 authors themselves acknowledge that the industry has historically used the list in ways that force it to wear many hats and fit into many unusual situations.
Is it a list of application security risks? Or a list of technical vulnerabilities? Can each of the Top 10 be tested with automated tools? Is the intended audience penetration testers, software engineers, or software architects? Adding insecure design as the new number 4 risk has been central to some of the contention about the Top 10’s focus, but we think it’s a great move — even if knowing whether you’ve addressed the issue may be a little confusing.
Raising the visibility and importance of secure design practices will help manage risks in your organization. With this latest iteration of the Top 10, its authors have done their best to clarify its purpose: "The OWASP Top 10 is primarily an awareness document."
You can also read it as “not an implementation document.” If you need to implement technical validation or testing automation, try the OWASP Application Security Verification Standard (ASVS) instead.
So, what is insecure design?
OWASP categorizes insecure design as weaknesses introduced by missing or ineffective control design. In particular, these are security risks that stem from design flaws rather than from insecure implementation.
Let’s say you’re building a software product to process financial receipts and extract information for reporting. This product handles untrusted data and has to parse various image and file types.
If the software is designed to run that parsing logic on the primary application server, and the logic turns out to be vulnerable to a remote code execution flaw, then exploiting the issue may compromise everything else running on that server.
A more secure design decision would have been to run this parsing logic on separate, isolated servers, or even in ephemeral cloud environments.
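To make that idea concrete, here’s a hedged Python sketch of one isolation technique: handing untrusted input to a short-lived child process, so a crash or exploit in the parser can’t take down the main application process. The `parse_untrusted` helper and the trivial stdin-reading “parser” are illustrative stand-ins, not a production sandbox; real isolation would use separate hosts, containers, or ephemeral cloud workers as described above.

```python
import json
import subprocess
import sys

# Illustrative stand-in for the real parser. In a production design this
# logic would live in an isolated environment, not just a subprocess.
PARSER_SNIPPET = """
import json, sys
data = sys.stdin.read()
# Pretend to "parse" a receipt: just report its length here.
print(json.dumps({"ok": True, "bytes": len(data)}))
"""

def parse_untrusted(payload: str, timeout_s: float = 5.0) -> dict:
    """Run parsing in a separate process so a parser failure (or compromise)
    is contained instead of affecting the main application."""
    proc = subprocess.run(
        [sys.executable, "-c", PARSER_SNIPPET],
        input=payload,
        capture_output=True,
        text=True,
        timeout=timeout_s,  # bound resource consumption
    )
    if proc.returncode != 0:
        return {"ok": False, "error": "parser failed"}
    return json.loads(proc.stdout)

result = parse_untrusted("fake receipt bytes")
print(result)
```

The key design point is the boundary: the application only ever receives structured results from the parser, never shares a process (or, ideally, a host) with it.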
The OWASP Insecure Design page includes a few more example attack scenarios. It’s important to note that each example is distinct, and the recommended prevention techniques differ for each. This is because we can design our systems in many different ways, using many different technologies, and each of these designs carries its own risks.
The goal of secure design is to consider these risks as early as possible in the development lifecycle so you can prevent them from showing up in the first place. If they do show up, you can at least have compensating controls in place, like increased logging and alerting.
Getting started with secure design
While the preventative techniques listed by OWASP include effective security principles, like limiting resource consumption, writing unit tests, and using segregation, we’re going to focus on the following:
Secure development lifecycle
Threat modeling
The paved road (also known as secure design patterns)
Securing the software development lifecycle
A secure development lifecycle is a framework to layer people, processes, and technology throughout a systems (or software) development lifecycle, also known as an SDLC.
One of the first publicly shared secure development lifecycle frameworks was released by Microsoft back in the 2000s, and was focused on finding and addressing software security flaws throughout a typical software project.
Over time, both software development methodologies and the means of securing software have evolved. Many organizations have embraced agile software development, which means delivering security processes in very different ways, often at a higher velocity.
Whether your organization follows agile or other methodologies, there are common themes in most secure development frameworks, including:
Defined responsibilities for software and product security
Security training
Security architecture and design requirements or patterns
Security risk assessment, or threat modeling
Security automation technology, such as static application security testing tools
Security testing, such as penetration testing
Security operations
Just as no two organizations are alike, no two secure development lifecycles will look the same from one company to the next.
Everyone develops and releases software slightly differently, and your requirements for documenting and performing security throughout your projects will also vary — and that’s totally okay.
This is why our Secure Development program is based on principles rather than technology or syntax. Every organization is different, and even different development teams in one organization may choose different languages, technologies, and tools.
If you understand how to assess the cyber security risk for your system, and the different ways you can model your system for threats, then you can apply those skills in any situation you encounter.
Understanding threat modeling
Let’s dig into one of those SDLC steps: threat modeling. Threat modeling (also called threat assessment) is a form of technical risk assessment that offers a systematic, consistent approach to identifying cyber security flaws in your software products from a design perspective.
A threat model typically has three phases:
Decompose and document your system and environment
Examine the in-scope system and brainstorm cyber security risks
Construct plans to mitigate these risks
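The three phases above can be recorded as plain data so the output of a modeling session doesn’t get lost. The sketch below is a hypothetical, minimal “threat register” in Python; the component names, threats, and mitigations are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    component: str            # from phase 1: system decomposition
    description: str          # from phase 2: brainstormed risk
    mitigation: str = "TBD"   # from phase 3: mitigation plan

@dataclass
class ThreatModel:
    system: str
    threats: list = field(default_factory=list)

    def unmitigated(self):
        # Threats that still need a plan from phase 3.
        return [t for t in self.threats if t.mitigation == "TBD"]

model = ThreatModel(system="receipt-processing-service")
model.threats.append(Threat(
    component="file parser",
    description="remote code execution via a malformed image upload",
    mitigation="run parsing in an isolated, short-lived environment",
))
model.threats.append(Threat(
    component="reporting API",
    description="access control flaw exposing other tenants' reports",
))
print(len(model.unmitigated()))
```

Even something this lightweight makes the third phase actionable: any entry still marked "TBD" is a visible gap in your mitigation plans.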
You’ll get many benefits from performing a threat assessment. Some common ones include:
Identifying cyber security issues earlier in the development lifecycle, potentially preventing rework or refactoring to address issues later in the process
Having a way to simplify your product’s design and understand the cyber security implications of architectural decisions
Increasing collaboration and shared understanding of risks amongst your teams, especially if the threat assessment is done collaboratively between multiple teams
Check out our Threat Assessment for Software Development course if you’d like to build or hone your threat assessment skills.
Using paved roads for software security
The concept of a paved road in software development is fairly new and originated at Netflix. The idea is to embed secure defaults and security automation into the same delivery pipeline that most software engineers already follow to deploy to production.
By lowering the friction of security processes and having them run as part of the pipeline teams already used to ship their products, engineers were transparently onboarded into aspects of the secure development lifecycle.
Another benefit of a paved-road approach is that the security team can implement security patterns directly into the tooling the other software engineering teams use. A recent blog post by Netflix discusses this exact situation, where they were able to embed single sign-on into the core gateway product. Over time, they could add security header validation, logging, denial of service protections, and other security benefits directly into the platform. If this all sounds a bit complex, don’t worry.
How Netflix approaches cyber security is probably different from how your organization does it, and that’s fine. The important aspects to consider are:
What security defaults you can use in your environments without introducing significant overhead or friction
What you can do to foster collaboration between software engineers and security folks
The value of starting small and iterating over time
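As one concrete illustration of a security default baked into shared tooling, here’s a hedged sketch of a WSGI middleware that adds baseline security headers to every response passing through a shared gateway. The header names and values are common baseline choices for illustration, not any particular company’s configuration.

```python
# Baseline defaults applied to every routed app; values are illustrative.
SECURITY_HEADERS = [
    ("Strict-Transport-Security", "max-age=63072000; includeSubDomains"),
    ("X-Content-Type-Options", "nosniff"),
    ("Content-Security-Policy", "default-src 'self'"),
]

class SecurityHeadersMiddleware:
    """Wrap any WSGI app so responses pick up secure defaults for free."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def wrapped_start_response(status, headers, exc_info=None):
            existing = {name.lower() for name, _ in headers}
            # Only add a default if the app didn't set that header itself,
            # so teams can still override when they have a real need.
            for name, value in SECURITY_HEADERS:
                if name.lower() not in existing:
                    headers.append((name, value))
            return start_response(status, headers, exc_info)
        return self.app(environ, wrapped_start_response)

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = SecurityHeadersMiddleware(demo_app)
```

The point of a paved road is exactly this shape: individual teams write `demo_app` and get the security defaults without doing anything extra.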
These are similar to the aspects we weigh when adopting new practices like DevSecOps. This term gets used a lot these days and can be defined as a set of practices that aims to integrate cyber security into how our teams develop and manage (or operate) software.
This integration can’t happen overnight, and that's why starting small, iterating over time, and working together is so important.
We also have a DevSecOps course planned for 2022, which will explain these ideas in more detail.
Next steps
If you and your team want to learn more about secure design, we’d love for you to check out our Secure Development program.
The currently available courses will help you lay the groundwork for understanding and applying the principles we’ve been talking about, and we deliver new content regularly.