OpenAI opens bug bounty program for GPT API vulnerabilities
ChatGPT developer OpenAI has opened a bug bounty program. Security researchers can earn up to $20,000 for reporting vulnerabilities in its artificial intelligence services. Issues involving prompts and model output are not included in the scope.
The bug bounty program covers the OpenAI APIs, which apply to the models GPT-3.5 and GPT-4. Vulnerabilities in the underlying Azure infrastructure on which the GPT models run are also within the program's scope. OpenAI specifically mentions ChatGPT as in scope, but only for issues such as authentication bypasses or bugs in payment methods.
OpenAI explicitly says that the output of the ChatGPT chatbot is outside the program's scope. For example, the bot can be tricked via prompts into writing malware or otherwise 'saying bad things', even though that violates ChatGPT policy. Such findings cannot be submitted through the bug bounty program. Hallucinations by the bot cannot be reported as bugs either; OpenAI has a separate policy for those.
OpenAI offers rewards between 200 and 20,000 dollars, or 182 and 18,200 euros. The top amount only applies in exceptional cases; in the normal tiers, a maximum of $6,500 is paid out per vulnerability. The bug bounty program runs via the external platform Bugcrowd. Researchers must in principle keep vulnerabilities secret until they receive permission from OpenAI to publish about them. The company says it aims to grant this within ninety days, but gives no guarantees.