The Cybersecurity Association of Maryland, Inc. (CAMI) has our state’s cybersecurity scene covered. As a nonprofit, CAMI’s mission is to strengthen Maryland’s cybersecurity industry by growing the talent pool and connecting business and government with the cybersecurity professionals, products, and services that best meet their needs. Networking is a big part of what CAMI does. Their website hosts a cybersecurity jobs board and vendor directory, and yes, they sponsor lots of great events throughout the state.
Why is CAMI interested in Mind Over Machines? We’re not a cybersecurity firm. No, but our Director of Emerging Technologies Tim Kulp has his finger on the pulse of a little phenomenon threatening to disrupt cybersecurity and pretty much every other major industry: Artificial Intelligence (AI). That’s why CAMI invited Tim to present at their July MD Cyber Breakfast Club this past Tuesday. He was tasked with exploring the interplay of AI and cybersecurity, and of course Tim brought along his signature storytelling chops.
AI Defined, Sans Hype
Our MINDs love doing events like this one. They are a great chance to meet smart, inquisitive, local entrepreneurs and business professionals and learn more about what they’re building and the obstacles they’re working to overcome.
AI is still largely a murky area for business today. People don't feel like they have a handle on what it is, but they know one thing: they're sick of the hype and the endless warnings about robots taking over. The tendency is to throw AI into a bin of "tech coming down the line" and push it to the back of your mind. But AI is happening right now, and we need to figure out how to make it work for us, because criminal hackers are already making it work for them.
Tim defines AI as “making computers more human by providing capabilities to reason, understand, and naturally interact with humans.” Basically, we’re training computers to do the same things we do all day every day without even thinking about it. We reason (analyze imperfect data to draw conclusions), understand (interpret the meaning of all kinds of different data), and communicate via language, voice, and gestures.
Now that's a really brief overview. If you want the deep dive, you'll have to invite Tim out for coffee and tell him to bring his slides.
AI in Cybersecurity: Guardian, Trickster, Snake Venom
Obviously, AI has the power to seriously up your cybersecurity game. Like the Norse god Heimdall guarding the gate of Asgard, your machines can be trained to keep watch in a constantly changing environment. AI injects greater speed, accuracy, and intelligence into your infosec efforts. It can identify breaches more quickly, mitigate false positives and negatives, and offer up appropriate courses of action. But in order to do all that, AI needs to know what "normal" is for your company and your network. So, if you're trying to figure out where to start with your AI adoption, that's a good place.
Do you know your Normal? Do you have the data to teach your AI guardian what Normal is?
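To make that concrete, here's a minimal sketch of what teaching a guardian your Normal can look like. It's illustrative only: scikit-learn's IsolationForest stands in for whatever anomaly detector your team actually deploys, and the traffic features (bytes sent, bytes received, session length) are hypothetical placeholders, not a prescription.

```python
# A minimal sketch of "learning Normal": train an anomaly detector on
# baseline traffic, then flag activity that deviates from it.
# Assumptions: the feature columns and numbers are hypothetical, and
# IsolationForest is a stand-in for your detector of choice.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in for a week of Normal: [bytes_sent, bytes_received, session_seconds]
normal_traffic = rng.normal(loc=[500, 2000, 30], scale=[50, 200, 5], size=(1000, 3))

# Teach the guardian what Normal looks like.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# New activity: one ordinary session, one that uploads far too much data.
new_sessions = np.array([
    [510, 2100, 29],      # looks like Normal
    [50000, 1900, 600],   # huge upload, long session: worth a look
])
print(detector.predict(new_sessions))  # 1 = normal, -1 = anomaly
```

The algorithm choice matters far less than the lesson: the guardian is only as good as the baseline data you feed it.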
A business technology consultancy like, oh, let's just say Mind Over Machines can help you define your Normal, and CAMI can hook you up with plenty of cybersecurity experts. But the real problem, the crux of the issue, is that the Black Hats can put AI to work just as handily as all of us White Hats. In fact, just like the Norse trickster god Loki, who shapeshifted into a mare to lure away a giant's prized stallion, attackers can shapeshift to steal our trusty steeds right out from under us. By watching how your defenses work, malicious AI agents can learn and adjust to attack weaknesses found in real time. This shape-shifting nature of machine learning-based attacks challenges cybersecurity experts to step up their defense game.
Like any tool, AI can be used for good or evil. One last time we’ll dip into the world of Norse mythology for an appropriate analogy. AI is like the Midgard serpent’s eitr venom; it’s deadly poison with the power to create life. How’s that for a dichotomy? “AI can make life easier, but also carries with it a whole host of new problems,” Tim explains.
AI Needs A Hero: Grab Your Cape
We know AI is vulnerable. The bad guys can sabotage the machine learning process. "Poisoning the well" is manipulating AI's training set so an attack goes unnoticed. It's like if all your elementary school teachers forced you to memorize your times tables wrong (2×2=5, 2×3=7…).
Similarly, if an attacker figures out how an algorithm is set up, they can poison the data AI collects, introducing misleading data that builds a counter-narrative about what content or traffic is legitimate versus malicious. For example, attackers may use bots to run campaigns on thousands of accounts to mark malicious messages or comments as “Not Spam” in an attempt to skew an algorithm’s perspective.
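Here's a toy sketch of that label-flipping play, under some loud assumptions: the data is synthetic, the 30% flip rate is invented for illustration, and a plain logistic-regression spam filter stands in for any production model.

```python
# A toy sketch of training-set poisoning via label flipping: an attacker
# who can mark spam as "Not Spam" shifts what the model learns is legitimate.
# The data, the 30% flip rate, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy features (think: counts of suspicious tokens); 1 = spam, 0 = legitimate.
X = rng.normal(size=(2000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(int)

clean_model = LogisticRegression(max_iter=1000).fit(X, y)

# The poisoning campaign: bots relabel 30% of spam examples as "Not Spam".
y_poisoned = y.copy()
spam_idx = np.where(y == 1)[0]
flipped = rng.choice(spam_idx, size=int(0.3 * len(spam_idx)), replace=False)
y_poisoned[flipped] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X, y_poisoned)

# Score both against the TRUE labels: the poisoned model lets more spam through.
print("clean accuracy:   ", clean_model.score(X, y))
print("poisoned accuracy:", poisoned_model.score(X, y))
```

Scored against the true labels, the poisoned model waves through spam the clean model would have caught, which is exactly the counter-narrative the attacker was building.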
It’s time to flip the narrative. AI isn’t going to save humanity. Humanity is going to save AI. You are the hero in this scenario. You and your cybersecurity team have the knowledge, experience and skill to keep your AI safe and apply it to the right problems to get the right solutions.
We need to stop thinking about AI as something that is going to replace people and start seeing it as a tool to enhance how people work. Think of the great things your team is accomplishing right now. AI should empower them to do those things faster and better with increased access to information and more accurate insights.
Okay, Let’s Do This.
Are you ready to cut through the hype to meet AI where it is right now and start putting it to use? The steps are going to sound pretty similar to every other innovation you’ve embraced to date:
- Recognize opportunities
- Get educated
- Define an investment plan
- Attack low-hanging fruit
If you need a partner to help you brainstorm the AI use case that makes the most sense for your business, our Emerging Technologies Practice is here to do exactly that. If you’re ready to add AI to your cybersecurity team, we’d like to introduce you to some great people we met Tuesday morning.