How to improve software security outcomes by teaching good engineers to be bad people

You are a good person. You like to build things and solve problems. It's not your fault. You also follow the rules. That's not your fault either.

From our parents to our schooling, from our communities to the laws of the countries we live in, we are taught to behave from a very young age. While we are naturally inquisitive as children, we dial those behaviors down as we age. We remain curious and playful at our core, but we change our behavior in external situations, such as in the workplace, to fit the mold. And it doesn't stop there.

We judge others on whether or not they fit that mold. We promote workers based on whether they comply with rules and frameworks, call out those who step out of line, and label people who think differently as disruptive or annoying.

But, as software engineers, curiosity and playfulness are often exactly what we need. So how do we tune into our inner child to create our best work?

Creativity and play are important

Play has an important role in how our brains work. It's a way to replenish our energy, and it helps us solve problems. 

As engineers, we know that sometimes, to do our best work, we need to get away from a keyboard and play around with an idea first. We rapidly generate ideas and approaches, looking for those that seem strong enough to move into prototyping and beyond.

This is what we call constructive or fantasy play. It's the grown-up equivalent of pretending you are a superhero or building your dream castle out of Lego.

We even echo this positive focus in our software development terminology, with concepts like “happy paths”: the ideal routes through our application, free of issues, errors, or exceptions.

But sometimes, this instinct to focus on solutions blinds us to the unusual ways attackers can use our systems and the vulnerabilities this causes.

Wandering off the happy path

Superheroes can't always win. Sometimes Godzilla destroys our perfect Lego castle, and sometimes our systems are used in a way we never expected. 

This unexpected use (particularly if malicious) can trigger the same feelings in us as if another kid ruined our sand castle. When other people's behavior conflicts with our good behavior, we can feel surprised and affronted.

Like most negative responses, this reaction comes from a misalignment between our expectations of a situation and what actually happened. While it's normal to feel this way, that misalignment and surprise are bad for the quality and security of our software.

Sometimes we have to be a little bit naughty

As software developers, we should never be surprised by an outcome in our system. We built it. We know the pathways through our code and the potential permutations of its features.

Well, I mean, we should know, right?

Unlike our cousins in the software testing space, we don't often think about how something could go wrong. Software testers look at a system and ask, "What if?" We need that skill now more than ever, and earlier in the process than we're used to, because by the time we are testing, it's too late.

We must ask "what if?" from the start of the process, from the initial idea to when we start creatively playing with the potential design.

That "what if?" has to include the happy paths in our system and the use cases we never expected.

Let's take a look at a couple of examples.

Money laundering in games

A Vice investigation in 2019 found that criminal networks were using the marketplace feature of Counter-Strike: Global Offensive for money laundering, leading Valve to stop container keys from being traded or sold on the marketplace. Similar money-laundering schemes have been reported in games such as Fortnite.

Now imagine you were on that engineering team. The task: build a trading system that allows players to buy and sell in-game items and customize characters. Very few of us would naturally look at that model and ask, "What if someone wanted to use this to launder money?"

Crimes against puppies

Dating apps have become big business with a recent uptick in apps aimed at not just matching you with your perfect partner but also matching your pets. Doggy dating apps such as FetchaDate provide usable, cute ways to make your life a little more love-filled. 

But according to the American Kennel Club, dog theft is also on the rise.

Can you think about any "what if?" scenarios that would link together pet dating apps and puppy theft? 

Welcome to the world of Threat Assessment and Modeling

If these are new terms for you, that's okay. 

In simple language, Threat Assessment and Modeling is a repeatable way to think about the "what if?" situations in our system, particularly those involving malicious intent or misuse.

It gives us a framework for consistently asking the right questions about our systems and software so that we can understand the threats our systems face (the ways they could be misused) and why that might happen.

Key questions asked by threat models

There are many frameworks and books that introduce threat modeling, but they all focus on identifying three elements:

Who 

The individuals, groups, or entities that would behave in this way or choose to misuse your system in some way.

Why

Their motivation for doing so. For example, financial gain, political influence, revenge, or ego.

What

How they would behave or use the system in question to reach their goals.

Once identified, these elements are combined with an analysis of the likelihood and impact of these threats, allowing us to decide which ones we need to factor into our design and what controls we need to build to reduce the risk.
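As a rough illustration of that last step, the who/why/what of each threat can be paired with likelihood and impact scores and ranked. This is only a sketch: the threat descriptions, the 1–5 scales, and the simple likelihood-times-impact score are invented for this example, and real frameworks use richer rating schemes.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    who: str         # the actor: individual, group, or entity
    why: str         # their motivation: money, politics, revenge, ego
    what: str        # how they would misuse the system
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def risk(self) -> int:
        # A deliberately simple likelihood-times-impact score.
        return self.likelihood * self.impact

# Hypothetical entries for the in-game marketplace example above.
threats = [
    Threat("organised crime group", "financial gain",
           "launder money through in-game item trades", 3, 5),
    Threat("bored user", "ego",
           "flood the marketplace with junk listings", 4, 2),
]

# Rank threats so design effort goes to the riskiest first.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:>2}  {t.who}: {t.what}")
```

The output puts the money-laundering threat (risk 15) ahead of the junk-listing nuisance (risk 8), which is the decision the analysis exists to support: which threats get designed-in controls, and which are accepted or merely monitored.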

Thinking like a bad person doesn't make you a bad person

For those of us who have grown up as law-abiding, well-behaved people, it can feel strange to intentionally think about doing bad things. 

After all, you don't sit in a cafe planning how to steal people's wallets, right? But when building systems, this process of thinking like a bad person allows us to identify threats and turn them into known pathways through our system.

Once they are known, we can plan to stop them or identify and respond to them if that's not possible.
