OpenAI opens bug bounty program for vulnerabilities in GPT APIs

ChatGPT developer OpenAI has opened a bug bounty program. Security researchers can earn up to $20,000 as a reward for reporting vulnerabilities in the company's artificial intelligence products. Information obtained through prompts is not in scope.

The bug bounty program covers the OpenAI APIs, which apply to the GPT models GPT-3.5 and GPT-4. Vulnerabilities in the underlying Azure infrastructure on which the GPT models run are also within the scope of the program. OpenAI specifically mentions ChatGPT as falling within the scope, but only for issues such as authentication bypasses or bugs in the payment methods.

OpenAI explicitly states that output from the ChatGPT chatbot itself is out of scope. For example, prompts can trick the bot into writing malware or saying other 'bad things' in violation of ChatGPT's policy. Such findings cannot be submitted through the bug bounty program. Other hallucinations by the bot cannot be reported as bugs either; for those, OpenAI has a separate policy.

OpenAI offers rewards between $200 and $20,000. The top amount only applies in exceptional cases; in the normal tiers, a maximum of $6,500 is paid out per vulnerability. The bug bounty program runs via the external platform Bugcrowd. Researchers must keep vulnerabilities secret until they receive permission from OpenAI to publish about them. The company says it aims to grant that permission within ninety days, but gives no guarantees.
