ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands

The newly released OpenAI ChatGPT Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit.
“The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent,” NeuralTrust said in a report

Leave a Reply

Your email address will not be published. Required fields are marked *