OpenAI is working on a system to detect ‘biorisks’ from AI models


OpenAI has published a study into potential ‘catastrophic risks’ of its AI models, looking at whether they could be used to manufacture bioweapons. The company says it is working on a system that identifies such risks at an early stage so that they can be limited.

Based on early testing, OpenAI concludes that its current GPT-4 model ‘might’ be able to provide information useful for manufacturing bioweapons. Several tests were carried out for this purpose. In one such test, two groups had to complete assignments related to designing a biological threat. One group had access to the internet and GPT-4, while the other group could only use the internet.

The results showed that the first group completed the assignments ‘slightly’ more accurately, although the research team notes that information about biological hazards is relatively easy to find even without AI. Last year, the American White House expressed concern that AI could potentially “significantly” lower the barrier to entry for the development, acquisition and use of bioweapons, but according to the early conclusions of OpenAI’s research, the situation is not that dire.

The GPT-4 model used for this research is a different version from the model currently available to consumers. All restrictions have been removed, so the model answered unsafe questions without requiring jailbreaks. This version of the model also did not have access to tools like Advanced Data Analysis and Browsing; the latter lets GPT-4 consult the internet in real time. The researchers say that using these tools could improve the usefulness of the AI model in this context ‘non-trivially’.

The team notes that this study presents early conclusions and that much more research is needed to reliably map the risks. These findings should serve as a starting point for further biohazard research. The team is also working on other studies to better understand how the GPT language models can be abused, Bloomberg writes. One such study should identify how likely it is that AI can be used for cybercrime.