ChatGPT has taken the world by storm. Daily news reports give us insight into something new it can do that will automate our world and make our lives better. But there is a downside to ChatGPT as well. Some would call it a dark side. In this article, CyberHoot outlines five different ways in which ChatGPT can be harmful to society. Read on to learn more.
ChatGPT Dark Traits To Worry About
- Teach hackers how to exploit vulnerabilities.
- Write malware for you.
- Write grammatically correct spear-phishing emails in any language.
- Practice social engineering attacks in realistic simulations.
- Write convincing arguments built on patently false information.
ChatGPT helps Hackers Hack
ChatGPT has a dark side that enables otherwise unskilled hackers to hack. ChatGPT can create convincing phishing attack emails in the language of your target. It can write functional malware to breach a computer system or network. It can explain the tools and methods needed to exploit a vulnerability you might find with a vulnerability scan.
"Our experiment showed that AI could give detailed advice on exploiting any vulnerabilities we encounter." Checkpoint Security Researcher ("CSR")
These threats represent an expansion of the universe of potential hackers targeting your business. What follows is a detailed review of the threats AI represents to our businesses and some of the countermeasures you need to combat them.
Five Ways you can Hack with ChatGPT
Let’s look at the various ways ChatGPT opens hacking up to unskilled individuals, creating real-world exploitation risks for your company.
Dark Trait #1: Learn how to exploit vulnerabilities
As you can see in the screenshot above, ChatGPT will detail exactly how to exploit TLS 1.1 vulnerabilities in a website. It names real-world exploits (BEAST and CRIME) and the tools needed to carry them out. This opens hacking up to the world’s unskilled wannabe hackers, greatly expanding who can learn the skills, tools, and techniques needed to attack our companies and online properties. It’s like putting a loaded gun in vending machines all over the city you live in.
Dark Trait #2: Write malware for you
ChatGPT can write code given the right input request. That code can be malware such as ransomware, password crackers, and more. This puts more dangerous hacking tools in the hands of less experienced hackers.
"CSR's analysis of several major underground hacking communities shows cybercriminals using OpenAI to develop malicious tools... less experienced cybercriminals are using it to get into the game."
The result could be more attacks with less care for the potential negative side effects. One article details how a “ransomware for hire” group gifted the decryption key for ransomware used to infect a children’s hospital because the group was upset that its clients had targeted a children’s hospital with its malware. If any hacker can generate ransomware, will there be any code of conduct for attacks? Let’s move on to the most common attack out there, phishing, and see what ChatGPT can do.
Dark Trait #3: Write Spear-Phishing Emails in Any Language
Some telltale signs you’ve received a phishing attack are spelling, punctuation, and grammar mistakes. Cybersecurity professionals teach others to watch for these hallmarks to help spot and avoid phishing attacks. ChatGPT can not only create grammatically correct phishing attacks, it can also do so using inputs from social media. This turns a vanilla phishing attack into a high-octane spear-phishing attack that can more easily fool the target recipient. It can also do so in any language it supports. CyberHoot has tested translation from English into languages known to our staff, and ChatGPT did a better job than Google Translate. The language capabilities are incredible.
Dark Trait #4: Create Novel Social Engineering Attack Dialogue for Practice
ChatGPT can help you create novel social engineering dialogue (inquiries and responses) to conduct more realistic, convincing, well-researched, and well-rehearsed attacks. Knowing what to expect in an interaction with a help desk when asking for a password reset helps hackers role-play and prepare for the expected line of questioning. See the ChatGPT dialogue written below for reference.
Practice makes perfect. Unskilled hackers can practice making help desk requests and learn the types of information they will be asked to provide. This allows them to dream up counterarguments to circumvent protective measures. Does your company’s help desk effectively authenticate every password reset request, every time?
Dark Trait #5: ChatGPT writes confident arguments even when it’s wrong
ChatGPT’s meteoric rise was in part due to its ability to write university-quality papers. It summarizes complex ideas. It writes cogently on any topic you ask it to. However, in many documented cases, researchers have caught ChatGPT reporting patently false information. There is even a term for AI reporting bad information: an AI hallucination.
In artificial intelligence (AI), a hallucination or artificial hallucination is a confident response by an AI that does not seem to be justified by its training data.[1]
In natural language processing, a hallucination is often defined as “generated content that is nonsensical or unfaithful to the provided source content”.
ChatGPT is capable of making up facts and arguing them very convincingly. This can lead to misinformation spreading across the Internet under the guise of expert opinion.
So, with all these risks of ChatGPT causing problems, from misinformation, to writing malware, to providing the recipe for exploiting vulnerabilities in real time to unskilled and skilled hackers alike, what should companies do to prepare? What countermeasures would help protect your company from a compromise at the hands of ChatGPT?
Countermeasures:
Interestingly, the countermeasures that combat these ChatGPT risks are largely the same as before ChatGPT existed. They are simply becoming more urgent to implement at your company, because the number of hackers capable of executing complex, sophisticated, and believable attacks is increasing dramatically with ChatGPT, while the attacks themselves remain largely the same as before. Let’s look at each of the five dark traits ChatGPT enhances and its countermeasure.
Countermeasure #1: How to prevent Vulnerability Exploits in your Company
To counteract vulnerability exploitation in your Internet-facing and internal devices, you need to scan those devices on an ongoing basis and patch all critical risks identified. This is known as vulnerability scanning. You can do this externally with nearly any vulnerability scanner. Internally, you will need to deploy a local scanner and give it credentials. Doing this allows the scanner to interrogate not only the ports and protocols in use on the network but also the operating system of the device being scanned. This will allow you to identify missing patches, needed firmware upgrades, and even weak configurations. The correct scanning tool will help you prioritize your work so you can spend your time closing the most critical gaps in your infrastructure, hopefully before hackers exploit them.
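As a concrete illustration, here is a minimal sketch, in Python, of one narrow check a scanner performs: asking a web server whether it still negotiates the legacy TLS versions that attacks like BEAST target. The hostname below is a placeholder; only test systems you own or are authorized to scan. A full vulnerability scanner covers far more than this single weak configuration, and results depend on your local OpenSSL build, which may itself refuse legacy protocols on the client side.

```python
# Minimal sketch, assuming Python 3.7+ and the standard library only.
# "example.com" is a placeholder; substitute a host you are authorized to test.
import socket
import ssl

HOST = "example.com"  # placeholder hostname
PORT = 443


def accepts_tls_version(version: ssl.TLSVersion) -> bool:
    """Return True if the server completes a handshake at exactly this TLS version."""
    context = ssl.create_default_context()
    context.check_hostname = False        # we only care about protocol support,
    context.verify_mode = ssl.CERT_NONE   # not certificate validity
    context.minimum_version = version
    context.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=HOST):
                return True
    except (ssl.SSLError, OSError):
        return False


for tls_version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.TLSv1_2):
    supported = accepts_tls_version(tls_version)
    print(f"{tls_version.name}: {'accepted' if supported else 'rejected'}")

# If TLSv1 or TLSv1_1 shows "accepted", the legacy protocols are still enabled
# on the server and should be disabled.
```

In practice you would point a check like this (or your scheduled vulnerability scans) at your own hosts on a recurring basis. The point is that the fix, disabling TLS 1.0/1.1 and applying patches, is the same whether the attacker learned the exploit from ChatGPT or anywhere else.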
Countermeasure #2: Fighting the Onslaught of Malware Attacks
According to Verizon’s latest annual Data Breach Investigations Report, 82% of all breaches tie back to the human element. In other words, your employees are involved in 82% of breaches. The unfortunate truth is that it isn’t their fault. Would you blame an employee for being unable to name the presidents of 10 foreign countries? Yet we blame them for clicking on phishing emails, for reusing passwords, and for falling victim to social engineering attacks when we have never taught them these critical cybersecurity skills!
Awareness training, phishing testing, and governance policies should be on the top of every company’s list of countermeasures to combat the current downpour of hacker attacks. ChatGPT just opened the doors to even more hackers getting into the game. Will you and your staff be ready for the increase in attacks made possible by ChatGPT? These attacks will have higher clarity, stronger accuracy, and better persuasiveness than anything we’ve seen before. The time to govern, test, and train users is now, before a breach.
Countermeasure #3: Combatting Better, More Frequent Phishing Attacks
The countermeasures here are the same today as they were for the last decade, and perhaps will be for the decade to come. Companies need to augment traditional attack-based phishing emails with new, and arguably better, assignment-based phishing exercises designed to educate rather than punish users. This new breed of phish testing is far more positive for the MSPs who adopt it. Executives benefit from compliance metrics that report on every last user’s successful completion of the phishing test (never possible before). End-users benefit by building enough knowledge to confidently, efficiently, and effectively spot and avoid each phishing attack they face. MSPs benefit from not having to set up allow lists for phishing testing, and they stop getting those nuisance emails asking, “Is this a phishing attack email? I’m just not sure.”
Countermeasure #4: Make sure your SMB or MSP is prepared for Social Engineering Attacks
Train your help desk and front-line staff to authenticate every requester before fulfilling sensitive requests such as password resets. Document the verification steps in policy, rehearse them, and allow no exceptions, no matter how convincing or urgent the request sounds. Attackers who have rehearsed their pretext with ChatGPT will be smoother and better researched than ever, so consistent, every-time verification is your best defense.
Countermeasure #5: Countering incorrect Arguments Based upon Misinformation Stated as Fact
Misinformation has grown tremendously in recent years. Social media has spawned a thousand conspiracy theories. Anyone with a post-graduate education will remember that every fact has to be backed up with a citation. Uncited claims are folly in an academic paper.
Companies need to research their blog articles back to the source material. CyberHoot stopped citing a claim that is prevalent in cybersecurity circles, that “60% of businesses that are compromised go out of business within 6 months,” because we cannot find the original source for it. It has been passed down from blog article to blog article and is patently false information. We won’t cite it. You need to test your facts against citations and not blindly trust the answers ChatGPT gives you. Keep a healthy dose of skepticism. Use sources of truth that follow this approach to reporting. Many news organizations have a whole group of people called fact checkers whose job is to verify the facts in an article. This isn’t to say there aren’t biases in reporting that still make it through. Facts, like statistics, can often be manipulated to say exactly what your bias wants. Google “implicit bias” for more information on this topic.
ChatGPT Hacking Conclusions
The attacks that ChatGPT enables aren’t really new attacks. If anything, they are old attacks that competent, diligently self-trained hackers could already pull off. ChatGPT just opens the door to thousands more would-be hackers. Previously, less skilled hackers were unable to perform the five attacks ChatGPT now makes easier or possible. This means all of us have to prepare for yet another increase in the number of attacks. Not only that, companies that might have been off-limits for some attacks, such as a children’s hospital, might be attacked by less accountable hackers. The time to prepare your organization is now. Prepare with governance policies, awareness training, phish testing by assignment, and basic education. You’ll be glad you did.