ChatGPT is about to revolutionize cybersecurity



Unless you’re deliberately avoiding social media or the internet altogether, you’ve probably heard of a new AI model called ChatGPT that’s currently open to the public for testing. This allows cybersecurity professionals like me to see how useful it can be in our industry.

The widespread use of machine learning/artificial intelligence (ML/AI) for cybersecurity practitioners is relatively new. One of the most common use cases has been endpoint detection and response (EDR), where ML/AI uses behavioral analysis to detect anomalous activity. It can use known good behavior to recognize outliers, then identify and close processes, lock accounts, trigger alerts, and more.

Whether it’s used to automate tasks or to help generate and refine new ideas, ML/AI can certainly help you bolster your security efforts or build a solid cybersecurity posture. Let’s look at a few possibilities.

Artificial intelligence and its potential in cybersecurity

When I started working in cybersecurity as a junior analyst, I was responsible for detecting fraud and security incidents using Splunk, a security information and event management (SIEM) tool. Splunk has its own language, Search Processing Language (SPL), which can increase in complexity as queries become more sophisticated.


This context helps illustrate the power of ChatGPT, which has already learned SPL and can turn a junior analyst’s prompt into a query in seconds, significantly lowering the barrier to entry. If I asked ChatGPT to write an Active Directory brute-force alert, it would create the alert and explain the query logic. Since this is closer to a standard SOC-type alert than an advanced Splunk search, it can be an excellent guide for a novice SOC analyst.
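As an illustration, here is a minimal sketch of what such a detection might look like when submitted through Splunk’s REST search API. This is not the exact query ChatGPT produced for me; the index, sourcetype, field names, credentials and failure threshold are all assumptions that will vary by environment.

```python
# Sketch of a brute-force detection search against Windows security logs,
# submitted through Splunk's REST search API. Index, sourcetype, field names
# and the failure threshold are assumptions, not a recommended baseline.
import requests

SPLUNK_HOST = "https://splunk.example.com:8089"   # hypothetical search head
AUTH = ("svc_soc_readonly", "changeme")           # placeholder credentials

# Failed logons (EventCode 4625) grouped by source and account,
# flagging sources with an unusually high failure count.
spl = """
search index=wineventlog sourcetype=WinEventLog:Security EventCode=4625 earliest=-15m
| stats count AS failures BY src_ip, user
| where failures > 20
"""

resp = requests.post(
    f"{SPLUNK_HOST}/services/search/jobs/export",
    auth=AUTH,
    data={"search": spl, "output_mode": "json"},
    verify=False,  # lab-only shortcut; use proper TLS verification in production
)
print(resp.text)
```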

Another attractive use case for ChatGPT is automating daily tasks for an overloaded IT team. In almost any environment, the number of outdated Active Directory accounts can range from dozens to hundreds. These accounts often have privileged permissions, and while a full privileged access management (PAM) strategy is recommended, companies may not be able to prioritize its implementation.

This leaves the IT team with the age-old do-it-yourself approach: system administrators write and schedule their own scripts to disable obsolete accounts.

The creation of these scripts can now be delegated to ChatGPT, which can build the logic to identify and disable accounts that have not been active in the last 90 days. A junior engineer can create and schedule the script while learning how its logic works, and ChatGPT can help senior engineers and admins free up time for more advanced work.
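Below is a rough sketch of that kind of clean-up script. The article does not prescribe a language or library, so this example assumes Python with the ldap3 package; the domain controller, bind account and base DN are placeholders.

```python
# Sketch: disable AD user accounts whose last logon is older than 90 days.
# Server, bind account and base DN are hypothetical placeholders.
from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection, MODIFY_REPLACE

DC = "dc01.example.com"
BASE_DN = "OU=Staff,DC=example,DC=com"
ACCOUNT_DISABLED = 0x2  # userAccountControl flag that disables an account

# lastLogonTimestamp is stored as 100-nanosecond intervals since 1601-01-01.
cutoff = datetime.now(timezone.utc) - timedelta(days=90)
epoch_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)
cutoff_filetime = int((cutoff - epoch_1601).total_seconds() * 10_000_000)

conn = Connection(Server(DC, use_ssl=True), user="EXAMPLE\\svc_cleanup",
                  password="changeme", auto_bind=True)

# Enabled user accounts whose last logon is older than the cutoff.
conn.search(
    BASE_DN,
    "(&(objectCategory=person)(objectClass=user)"
    "(!(userAccountControl:1.2.840.113556.1.4.803:=2))"
    f"(lastLogonTimestamp<={cutoff_filetime}))",
    attributes=["sAMAccountName", "userAccountControl"],
)

for entry in conn.entries:
    new_uac = int(entry.userAccountControl.value) | ACCOUNT_DISABLED
    conn.modify(entry.entry_dn, {"userAccountControl": [(MODIFY_REPLACE, [new_uac])]})
    print(f"Disabled {entry.sAMAccountName}")
```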

If you are looking for a force multiplier in a dynamic exercise, ChatGPT can also be used by a purple team, the collaboration of red and blue teams, to test and improve an organization’s security posture. It can create simple example scripts for a penetration tester, or debug scripts that are not working as expected.

One MITRE ATT&CK tactic that appears in almost every cyber incident is persistence. For example, a classic technique an analyst or threat hunter should look for is an attacker adding a script or command as a startup item on a Windows machine. With a simple request, ChatGPT can create a basic but functional script that lets the red team add this persistence mechanism to a target host. While the red team uses this for penetration testing, the blue team can use it to understand what these tools look like and build better alerting.
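As a concrete example, here is a minimal sketch of the registry “Run key” flavor of this technique (MITRE ATT&CK T1547.001), intended only for authorized lab testing; the value name and payload path are hypothetical.

```python
# Sketch of Run-key persistence (MITRE ATT&CK T1547.001) for authorized
# red team use in a lab. Value name and payload path are hypothetical.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"
VALUE_NAME = "UpdaterSvc"                     # hypothetical, attacker-chosen name
PAYLOAD = r"C:\Users\Public\updater.exe"      # hypothetical payload path

# Red team: register the payload so it launches at every user logon.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_SZ, PAYLOAD)

# Blue team: enumerate the same key to spot unexpected entries.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
    i = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, i)
            print(f"{name} -> {value}")
            i += 1
        except OSError:
            break
```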

The benefits are many, but there are also limitations

Of course, when a situation or research scenario needs analysis, AI is also an extremely useful aid for speeding up that analysis or suggesting alternative paths. In cybersecurity especially, whether for automating tasks or generating new ideas, AI can streamline the effort required to build a solid cybersecurity posture.

However, there are limits to this usefulness, namely the complex human cognition and real-world experience that often go into decision-making. We cannot program an AI tool to act like a human being; we can only use it as support, to analyze data and generate output based on the facts we feed it. Although artificial intelligence has made great strides in a short time, it still produces false positives that a human needs to identify.

Still, one of the biggest advantages of AI is the automation of everyday tasks, allowing people to focus on more creative or time-consuming work. For example, AI can be used to write scripts for cybersecurity engineers or system administrators, or to improve the performance of existing ones. I recently used ChatGPT to rewrite a dark web scraping tool I had created; the rewrite reduced turnaround times from days to hours.
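The article does not show that scraper, so the following is only a generic illustration of the kind of rewrite that cuts turnaround time: fetching pages concurrently with a thread pool instead of one at a time. The URLs and worker count are placeholders, and a real dark web scraper would route requests through a Tor proxy.

```python
# Generic illustration only: parallelizing page fetches with a thread pool,
# the sort of rewrite that can shrink a scraper's run time dramatically.
from concurrent.futures import ThreadPoolExecutor, as_completed
import requests

URLS = ["https://example.com/listing1", "https://example.com/listing2"]  # placeholders

def fetch(url: str) -> tuple[str, int]:
    # A real dark web scraper would send this through a Tor SOCKS proxy.
    resp = requests.get(url, timeout=30)
    return url, resp.status_code

with ThreadPoolExecutor(max_workers=16) as pool:
    futures = [pool.submit(fetch, url) for url in URLS]
    for future in as_completed(futures):
        try:
            url, status = future.result()
            print(url, status)
        except requests.RequestException as exc:
            print("fetch failed:", exc)
```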

Undoubtedly, artificial intelligence is an important tool for security professionals, but concerns remain

Beyond AI’s flaws in informing human decision-making, every time we use the word “automation” there is a tangible fear that the technology will evolve and eliminate the need for humans in their jobs. In the security sector, we also have real concerns that AI could be used in nefarious ways. Unfortunately, the latter fear has already been confirmed: cybercriminals are using these tools to create more convincing and effective phishing emails.

In terms of decision-making, I think it is still far too early to rely on AI to make final calls in practical, everyday situations. The human ability to apply subjective, contextual thinking is critical to decision-making, and so far AI has no way of mimicking those skills.

So while the various iterations of ChatGPT have generated quite a bit of buzz since last year’s preview, as with any new technology, we need to address the concerns it has created. I do not believe that artificial intelligence will eliminate jobs in IT or cybersecurity. On the contrary, AI is an important tool that security practitioners can use to offload repetitive and mundane tasks.

While we are witnessing the beginnings of AI technology and even its creators seem to have limited understanding of its power, we have barely glimpsed the possibilities of how ChatGPT and other ML/AI models will transform cybersecurity practices. I look forward to further innovations.

Thomas Aneiro is Senior Director of Technology Consulting Services at Moxfive.

