A Scary Future in CISO-land: AI and Security
GPTs make hackers more efficient too. How do we prepare for the massive mega-attacks that GPTs in the wrong hands will fuel?
In the spirit of spooky Halloween, let me share my biggest fears when it comes to AI and its potential to wreak havoc on cybersecurity.
TL;DR
Programming languages are limited domains, and limited domains of “knowledge” are perfect for transformers (GPTs). Not only does the wave of GPTs make everyone a coder; malicious hobbyists can now find the right attacks faster because they can iterate faster. Attackers have an even bigger upper hand right now, and I don’t see the CISO community being well prepared for the scale of AI-generated attacks ahead. Remember, the attacker only has to be right once, while the defender has to be right every time. Speed beats accuracy in this case! There is a real risk of mega-attacks taking entire companies down, while we are still only discussing regulations…
With More Color
We’ve all learned that the dwell time of attacks is decreasing. However, the time from exploit to scalable attack is also dropping, and zero-day attacks hit a record high in September. Why aren’t more CISOs losing sleep over this? My data point that they are not is this year’s RSA agenda: I found many talks on cloud/API security and compliance, and few to none on how to prepare for massive AI attacks or on the impact of GPT models from the threat perspective. RSA has always been a forefront conference with a next-threat feel and a good finger on the cybersecurity pulse, which makes it a good litmus test of what keeps, or should keep, CISOs busy. Yet this year’s aggregation of what is “top of mind” for CISOs, as reflected in the agenda topics, remains largely the same, while the world has just changed by lightyears. Why isn’t the CISO mindset shifting more rapidly? I think it needs to, given how fast AI is progressing. The recent leaps in AI are dramatically changing the game, and the cybersecurity mindset needs to change with it or end up in the backwaters.
Some have expressed worry about AI along the lines of “is it a human or a bot attacking?” To that I say: it doesn’t really matter anymore. Most attacks today are already automated and merely governed by a human. There is no value, in my opinion, in figuring out whether it is “an AI” attacking you; we should assume all scaled attacks are created or augmented by AI anyway. Instead, focus on (a toy sketch of the first question follows the list):
Whether you are indeed under attack
Whether your organization is a potential target of a new attack
Whether a new attack is coming and, if so, what its variants could look like
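To make the first question concrete, here is a minimal, hypothetical Python sketch of “am I under attack right now?” using a z-score over failed-login counts. The data source, numbers, and threshold are all invented for illustration; in practice this signal would come from your SIEM or auth telemetry.

```python
# Toy sketch: flag a possible attack by detecting anomalous failed-login
# volume with a z-score against recent history. All numbers are hypothetical.
from statistics import mean, stdev

def is_anomalous(counts: list[int], threshold: float = 3.0) -> bool:
    """Return True if the latest count is an outlier versus its history."""
    *history, latest = counts
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return (latest - mu) / sigma > threshold

# Hypothetical failed logins per minute; the spike at the end is the kind
# of signal that should trigger an automated or human response.
failed_logins = [12, 9, 14, 11, 10, 13, 12, 240]
print(is_anomalous(failed_logins))  # True
```

The point is not the statistics; it is that the detection loop must be automated and fast, because the attack loop already is.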
You can’t treat AI as code: with AI, data is intertwined with the code. I fully agree that protecting data is key and that governing the output of models is important, both for the enterprise and for the individuals relying on those enterprises. However, organizations should spend only limited cycles worrying about regulations and trying to prepare for what’s coming. Yes, user rights and data privacy are important, but if your company is constantly struggling with basic availability under wave after wave of new, massive attacks, you may end up with no business at all.
I am not saying you should not worry about protecting your business data and your clients’ rights. What I want to highlight is that there are much bigger worries on a macro level, and that those threats will arrive faster than anyone can prepare for. Scary new possibilities are emerging for how AI in the wrong hands could sink entire companies. And those people or organizations will not waste any time worrying about how regulations or certification requirements may pan out!
A prominent tool in the arsenal of defenders and regulators is certification. Is there any research to back up that a certification does anything for your business today, other than being a checkmark that opens doors for business with certain entities? I could not find any. A certification does not prevent attacks! This is the old mindset I am referring to: “certifying code” to be “safe”. It is a false safety! A regulatory compliance certificate will not protect you from the rising wave of super-advanced AI attacks. CISOs are solving the wrong top problem if they think scanning and certifying is today’s highest priority.
The old mentality is useless against the threat of AI-fueled attacks at scale. A new paradigm is on the rise. Regulating AI will put more certifications in place so enterprises can feel “safe”, but aren’t we kidding ourselves thinking hostile groups will also comply? Or that they will somehow magically be deterred by a “certified infrastructure”?
So what SHOULD we worry about? Time to scale. How fast can you intercept (time to detect)? Vet whether your current solutions help you detect and intercept faster. If not, those solutions probably belong on the B-list of priorities, IMO. Prioritize finding a new solution that detects or intercepts faster. Maybe start by taking a look at SOCPrime?
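If you want to put a number on “how fast can you intercept”, here is a minimal sketch of computing mean time to detect (MTTD) from incident records. The record format and timestamps are hypothetical; adapt it to whatever your SIEM or incident tracker exports.

```python
# Minimal sketch: mean time to detect (MTTD) from (start, detected) pairs.
# All incident data below is invented for illustration.
from datetime import datetime, timedelta

incidents = [
    # (time attack started, time it was detected)
    (datetime(2023, 10, 1, 2, 14), datetime(2023, 10, 1, 9, 40)),
    (datetime(2023, 10, 7, 22, 5), datetime(2023, 10, 8, 1, 12)),
    (datetime(2023, 10, 19, 4, 33), datetime(2023, 10, 19, 4, 51)),
]

def mean_time_to_detect(records: list[tuple[datetime, datetime]]) -> timedelta:
    """Average gap between attack start and detection."""
    gaps = [detected - started for started, detected in records]
    return sum(gaps, timedelta()) / len(gaps)

print(mean_time_to_detect(incidents))  # 3:37:00 on this toy data
```

Track this number over time; if a new tool does not move it down, it is not solving the problem this post is about.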
Let me paint the picture for you. Exploits that a malicious actor previously had to think about, test, validate, code, scale, and deploy over a month or so can now be generated automatically in minutes or seconds. Further, a malicious program does not have to work perfectly; good enough can be very consequential if it is fast enough. What does this mean? Today someone with very mediocre coding skills and little exposure to ideas and creativity can suddenly generate pretty sophisticated malicious software. More people can generate attacks. More ideas can be turned into exploits in less time. The massive volume of attacks will only become more massive. Remember, attacks never get worse: they only get better.
See it now? One human can cause greater damage than before. Faster. With less skill. Generative AI available to everyone has enabled more humans to do more with less. More damage included.
Still not feeling the panic? How long would it have taken a non-English speaker to craft a convincing phishing email in English a few years ago? Now imagine the same task with a GPT handy: orders of magnitude faster. How long would it take a hacker to come up with a potential exploit or to find a vulnerability? How many exploits, with all their variants, do you think that person can implement and evolve in a day with a GPT? How long would it take to validate that an exploit caused actual damage and achieved its objective? Now you can iterate through hundreds if not thousands of variants in a day. And you can build systems that go from GPT output to action, automatically, streamlined, and at scale, and those systems will only get better over time.
In Summary
Volume of attacks. Variety of attacks. Scale of attacks. This is terrifying for a CISO of the right caliber. It is not about “whether you pass the next certification or not”. I still lie awake at night worrying about AI in the hands of malicious people or groups, hoping for the best, and I am not even a CISO. What keeps you awake at night?