How ChatGPT impacts cyber security and how to get your team safely started with it 

You’re hiring for a new engineering role within your team. Great. You’ve made a shortlist, interviewed a bunch of people, and sent them a coding exercise they need to complete and submit. Now let’s add ChatGPT into the mix. How can we be sure that the code we receive from prospective hires is actually written by them? We can’t. Unless someone writes the code in front of us, there’s no way of knowing.

That takes some getting used to. Until recently, we could trust that (unless candidates were getting someone else to do the exercise for them) the work we received was genuine proof of their skills and experience.

This is why we think a culture of “curious vigilance” is important. Does ChatGPT present us with challenges? Yes. Is it all doom and gloom? Nope. The next few years will likely see a combination of chaos, creativity, and unpredictability as we all get used to the direct impact of AI on our personal and work lives. So, how do you prepare yourself and your team?

Figure out how you feel about AI and ChatGPT first

Before you start thinking about AI in the context of your organization, have an honest conversation with yourself to process how you’re feeling about it. When we did this ourselves at SafeStack, it was an interesting realization to discover how strong some of our reactions to AI actually were. Like many other people around the world, we wondered what it meant if AI could replicate parts of our jobs, which is a confusing experience, to say the least. And we realized that not only is it okay to feel like that, it’s important to acknowledge those feelings and work through them.

Here are some questions you might want to consider:

  1. What parts of my role could be affected by AI?

  2. How does that make me feel, and how is that reaction changing how I think about AI in that context?

  3. What could go wrong if this did happen? What’s the risk?

  4. What if it worked and I didn’t need to do this particular task anymore?

Substantial change makes us feel vulnerable - even as security people. Your approach to policy, process, and managing this transition will be directly shaped by how you personally feel, and by how vulnerable the change makes you and your organization feel in turn.

Inform your team about the pros and cons

Everyone needs to understand the good and the bad of AI and ChatGPT. Many of us go through a moment of existential crisis when we realize the impact AI could have on our lives, careers, and communities. If your team members haven’t had the chance to work through that reaction, they’re not ready to talk about the pros and cons of AI yet.

When they are ready, start by thinking about where AI is likely to show up in your organization. Make a list, and write it all down. AI will also show up in places outside your organization - many of which you can’t control. What tools are you using right now that use AI? What organizational data is now stored in those tools? Do you have a good overview of all of this?

Make a list of third-party software using AI

A list of all the apps, tools, and third-party software where AI shows up is enormously helpful. It lets you be proactive instead of reactive: by knowing which tools use AI (even if your company has decided not to use AI directly), you’re one step ahead of the game. If you’re very strict and have a zero-AI policy, you’d likely need to eliminate almost all of the tools your company uses. But knowing which ones use AI creates space for curious vigilance and watching the trend.
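
If you want to take this a step further, the list can live in code so it’s easy to query and review. Here’s a minimal sketch in Python of what a third-party AI tool register could look like - the tools, fields, and review flag are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AITool:
    """One third-party tool that uses AI somewhere in its feature set."""
    name: str
    vendor: str
    ai_features: list[str]   # e.g. autocomplete, summarization
    data_shared: list[str]   # what organizational data flows into it
    approved: bool = False   # has your review process signed off?

# Illustrative entries only -- substitute your own tool inventory.
register = [
    AITool("GitHub Copilot", "GitHub", ["code suggestions"],
           ["source code"], approved=True),
    AITool("Grammarly", "Grammarly Inc.", ["writing suggestions"],
           ["documents", "emails"]),
]

# Flag anything that sees organizational data but hasn't been reviewed yet.
for tool in register:
    if not tool.approved:
        print(f"Review needed: {tool.name} receives {', '.join(tool.data_shared)}")
```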

Safety first

In the rapid adoption phase of new tech - which is what we’re in now with ChatGPT - we’ll see a lot of change quickly. There’s a low barrier to entry for people to use ChatGPT, which is creating a massive growth curve. If your team is already happily working with an AI-based system, that’s great, and you’ll likely see an increase in productivity.

The main concern for most businesses is where their data goes when they use AI, so make sure you’re not sharing any sensitive customer or commercial information with AI systems. There are currently no automatic controls in place that prevent mishaps like that, so it’s worth being careful.
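
There’s no silver bullet here, but even a crude pre-flight check can catch obvious slips before a prompt leaves your organization. Below is a minimal sketch, assuming a simple regex screen for a few common patterns; it’s a stopgap for illustration, not a substitute for proper data loss prevention tooling:

```python
import re

# Crude patterns for data that should never go into a third-party AI prompt.
# These are illustrative; tune them to your own data classification guide.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API key-like token": re.compile(r"\b[A-Za-z0-9_-]{32,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this ticket from jane.doe@example.com about her refund."
findings = check_prompt(prompt)
if findings:
    print("Hold on -- prompt may contain:", ", ".join(findings))
else:
    print("No obvious sensitive data found (this is not a guarantee).")
```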

Here are a few ways to stay safe out there:

  • Read the terms and conditions of any new tool you’re using (or any changes made to existing tools in your workflows)

  • Build a simple guide to allowed usage for the data held within your organization, one that spells out what each type of data can be used for and how

  • Create a register of AI usage so you know what tasks are being done, by whom, and using what data and system (see the sketch after this list). This enables you to respond quickly if something changes that might introduce risk.
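
That register doesn’t need to be sophisticated to be useful. Here’s a minimal sketch of an append-only usage log in Python; the fields mirror the questions above (what task, by whom, with what data and system), and the file name and example entry are placeholders:

```python
import csv
from datetime import date
from pathlib import Path

REGISTER = Path("ai_usage_register.csv")  # illustrative location
FIELDS = ["date", "task", "person", "data_used", "system"]

def record_usage(task: str, person: str, data_used: str, system: str) -> None:
    """Append one row to the AI usage register, creating it if needed."""
    new_file = not REGISTER.exists()
    with REGISTER.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "task": task,
            "person": person,
            "data_used": data_used,
            "system": system,
        })

# Example entry -- the details are placeholders.
record_usage("Draft release notes", "A. Engineer",
             "public changelog only", "ChatGPT")
```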

Train your people to work alongside AI

Focus your employee training on what AI can’t do. People who have been living and breathing software development and cyber security for decades still do a better job of critical thinking than artificial intelligence. But AI can speed our work up once we’ve set the course. So choose where AI can help your business, and embrace it while staying vigilant.

Wrapping it up

There’s a ton of cool stuff happening in the space of AI and ChatGPT - even as cyber security people here at SafeStack, we can’t deny that. As our CEO Laura said: “If you’re in tech and you don’t see the potential cool stuff coming out of this, then this is a bad time for you. You’re going to have a really, really sad few months.” Yes, we’ll all be learning some hard lessons about what we trust and how we verify it as we get a grip on AI’s rapid growth. But if you combine the excitement with a degree of caution, common sense, and vigilance around cyber security, your journey will be much smoother.
