A new cyber-attack technique abusing the OpenAI language model ChatGPT has emerged, enabling attackers to spread malicious packages into developers' environments.
Vulcan Cyber's Voyager18 research team described the discovery in an advisory published today.
"We've seen ChatGPT generate URLs, references and even code libraries and functions that do not actually exist. These large language model (LLM) hallucinations have been reported before and may be the result of old training data," explains the technical write-up by researcher Bar Lanyado and contributors Ortal Keizman and Yair Divinsky.
By leveraging ChatGPT's code generation capabilities, attackers can exploit these fabricated library recommendations: by publishing a malicious package under a name the model hallucinates, they can reach victims who trust the suggestion, bypassing conventional techniques such as typosquatting or masquerading.
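The core of the technique is comparing LLM-suggested package names against what actually exists on a registry such as PyPI: any suggested name that is unregistered is a candidate an attacker could claim. The sketch below illustrates that comparison with a simulated registry snapshot; all package names and the function name are illustrative, not from the research.

```python
# Illustrative sketch: flag LLM-suggested package names that do not exist
# on a registry. A defender should treat such names as red flags before
# installing; an attacker could instead register them with malicious code.
# The registry snapshot and suggestions below are made-up examples.

def find_unregistered(suggested, registry):
    """Return suggested package names absent from the registry snapshot."""
    registry_lower = {name.lower() for name in registry}
    return [name for name in suggested if name.lower() not in registry_lower]

# Simulated registry contents and LLM output (hypothetical names).
registry = {"requests", "numpy", "flask"}
suggested = ["requests", "arangodb-graphql-client"]  # second name is hallucinated

print(find_unregistered(suggested, registry))  # ['arangodb-graphql-client']
```

In practice the registry check would query the package index directly (for PyPI, a lookup of the project page or its JSON metadata endpoint) rather than a local set, but the logic is the same.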