spot_img
Sunday, March 3, 2024
HomeTechnologyHackerOne: How AI Is Changing Cyber Threats and Ethical Hacking

HackerOne: How AI Is Changing Cyber Threats and Ethical Hacking

-

HackerOne is an online security platform and hackers’ community, organized a roundtable discussion on the 27th of July on a Thursday concerning the ways artificial intelligence that is generative can transform the way we think of cybersecurity. Hackers and experts from industry discussed the importance of generative AI in a variety of aspects of cybersecurity. This included new attack surface types and the things that organizations need to be thinking about in relation to large-scale models of language.

Generative AI Can Pose Risk if Businesses Adopt it too Rapidly

Expert hacker Joseph “rez0” Thacker, a senior offensive security engineer at security software-as-a-service provider AppOmni, warns businesses using generative AI, like ChatGPT, to generate code, not to include flaws in their haste. 

For instance, ChatGPT doesn’t have the background to know the vulnerabilities that could be present in the code it generates. It is up to the organizations to ensure that ChatGPT will be able to generate SQL query that isn’t prone to SQL injection, Thacker stated. Hackers who gain access to user accounts or the data that is store across various parts of an organization are often the cause of weaknesses that penetration testers often search for as well, and ChatGPT may not be able take these into consideration in its software.

The two biggest risks for companies who are likely to adopt the generative AI tools are

  • The LLM to be made available at all times to outside users who have access to the internal database.
  • Connecting various plugins and tools using the AI feature that could access non-trusted data, even though it’s internal.

What Threat Actors Can Make Use of Artificial Intelligence Generative

We must keep in mind that systems like GPT models refocus information that already exists and has already taught, not produce new information.The author predicted that less technically savvy individuals would have access to their own GPT models, which could either teach them how to write ransomware from scratch or assist them in creating ransomware that currently exists. 

Prompt Injection

Anything you do on the internet as an LLM could do — can cause the same kind of issue.

A possible way to attack cybercriminals against chatbots that are LLM-base is prompt injection. It makes use of the prompt functions that are programme to prompt the LLM to carry out certain actions.

For instance, Thacker explained that if an attacker employs prompt injection to gain charge of the situation of the LLM function the attacker can then exfiltrate data through the web browser feature and then move the information that’s exfiltrated onto the side of the attacker. An attacker can send a prompt injection payment to an LLM that is tasked with responding to emails and reading them.

Roni “Lupin” Carta, an ethical hacker said that programmers using ChatGPT to aid in installing prompt software on their machines could encounter problems when they request the artificial intelligence AI to locate libraries. ChatGPT creates library names that are distort which hackers can use to gain advantage by reversing these fake libraries.

Attackers may insert malicious words in images, too. If an image-interpreting AI such as Bard scans an image, the text could be displayed as a prompt to tell an AI to perform specific functions. In essence, attackers could use prompt injection to manipulate the image.

Security Coverage That Must Be Read

Custom cryptors, Deepfakes, and other Security Threats

Carta noted that the bar is being lower for those who wish to employ Social Engineering as well as deepfake video and audio technology that could also be use to defend.

This is fantastic for cybercriminals as well as red teams who use social engineering in their work, according to Carta.

From a technical perspective, Klondike point out the manner in which LLMs are design can make it difficult to remove personal information from their databases. He also said that internal LLMs may still display the data of employees or threat actors or perform functions which are meant to be kept private. This isn’t a complicated prompt injection. It could simply be a matter of asking the right questions.

There will be whole new goods, but Thacker also predicted that there will be more of the same kinds of vulnerabilities that have always been in the threat landscape.

Security teams will likely be more aware of low-level attacks, as amateur threat actors employ techniques such as the GPT model to carry out attacks explained Gavin Klondike, a senior cybersecurity consultant at hacker as well as community of data scientists AI Village. Cybercriminals at the top can create custom cryptors, software that obfuscates malware and malware using generative AI, he added.

Nothing a GPT Model Produces is Novel 

There was some discussion during the panel on whether it was true that generative AI addressed the same issues similar to other tools or even presented fresh ones.

Katie Paxton-Fear is a security expert and lecturer at Manchester Metropolitan University. She says, “I believe that we have to keep in our minds that ChatGPT is train on things like Stack Overflow.” “”Nothing that is generate by the GPT model is novel. You can get the entire information by using Google.

Authentic education shouldn’t be criminalize, in my opinion, when we talk about excellent and terrible artificial intelligence.

Carta has compared generative AI to the knife. Like the knife it can also be a weapon, or an instrument to cut steaks.

The key, according to Carta, is not what AI is capable of, but rather what humans are capable of.

Thacker has resisted the idea of a knife, arguing that generative AI isn’t like a knife, because it’s the only tool that humanity has ever used to “… come up with novel concepts that are completely original due to its vast field of experience.”

Then, AI could end up as a combination of a clever tool and an innovative consultant. Klondike said that while those with low-level threats will be the most benefited by AI which makes it easier to create malicious code, the ones who are most benefiting in the professional cybersecurity realm will be at a higher stage. These professionals already have the ability to create software and develop their own processes, and they’ll seek out AI to assist in other areas.

How can Companies Ensure Generative AI is Secure

A security model Klondike as well as his colleagues developed in AI Village recommends software vendors consider LLMs as users and set up security measures around the information they have access to.

Treat AI as an End-User

Threat modeling is essential in working with LLMs, the expert said. Monitoring remote execution like a recent issue where an attacker who was targeting the LLM-powered tool for developers, LangChain could transfer code directly into an Python code interpreter. This is also significant.

Klondike asserts that we must impose authorisation between the end user and the resource on the back end that they are attempting to access. 

Don’t Overlook the Fundamentals

A few suggestions for businesses who would like to make use of LLMs safely will be similar to anything else, panelists shared. Michiel Prins, HackerOne co-founder and director of professional services, said that in the case of LLMs companies seem to have neglected the traditional security instruction of “treat user input as dangerous.”

Regarding the architecture of some of these products, Klondike asserts that we have “almost forgotten the last 30 years of cybersecurity lessons.” 

Paxton-Fear regards an aspect of generative AI that is new as an opportunity to integrate security right from the beginning.

It would be wise to take a step back and build security into the system as it develops rather than adding it after the fact, ten years from now.

To See Our other latest posts Visit

Related articles

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest posts