Apple bans internal use of AI tools like ChatGPT for fear of leaks


Apple prohibits employees from using AI tools such as ChatGPT and GitHub Copilot over fears of confidential information being exposed, The Wall Street Journal reports, citing internal sources.

Image caption: ChatGPT explains why it can be useful to ban the bot in the workplace.

Apple fears that employees using generative AI tools such as Copilot and ChatGPT could disclose confidential data, the company reportedly stated in an internal document, according to The Wall Street Journal. The concern is that software code, among other things, could leak this way, because chatbots can be trained on conversations with users.

OpenAI added a feature to ChatGPT last month that lets users disable chat history so their conversations cannot be used for training purposes. However, those chats can still be viewed by OpenAI moderators for thirty days. On Thursday, OpenAI also released an iOS app for ChatGPT.

According to sources the WSJ spoke to, Apple is also currently working on its own large language models, although it is not clear what they will be used for. The New York Times previously reported that several Apple teams, including Siri engineers, test new language-generating concepts "on a weekly basis."

Samsung confirmed earlier this month that it also prohibits employees from using ChatGPT and other generative AI tools on company equipment, citing the security risk such tools pose. Internal source code of Samsung software has reportedly leaked in the past because employees used ChatGPT.
