Amid rumors that Apple may be working on its own generative AI, the iPhone maker is reportedly limiting how its employees use chatbots and AI writers such as ChatGPT, Bard and GitHub Copilot due to concerns about confidential data leakage.
Textual data such as appointments, notes and documents, along with the source code used to build software such as iOS, is closely guarded; if it falls into the wrong hands, the consequences for the company could be severe.
Of course, it comes as no surprise that such a famously secretive company would restrict these tools, given the confidentiality agreements and other measures it already uses to keep information under wraps.
Apple employees are not allowed to use ChatGPT
It’s not clear if the company has issued an outright ban on generative AI, or if it has imposed some restrictions, such as those set by rival phone maker Samsung, which uses character counts to prevent employees from revealing too much information.
OpenAI, the company behind ChatGPT, says: “As part of our commitment to safe and responsible AI, we review conversations to improve our systems and ensure that content complies with our policies and security requirements.”
OpenAI also uses conversations for training purposes, which is why they are sometimes reviewed by its staff, who could be exposed to Apple's confidential information if it were entered as part of a prompt.
Rumors have also spread in recent weeks that Apple is tinkering with its own generative AI and large language models, though many suggest the company is unlikely to build a direct rival to ChatGPT. Instead, such work could lay the groundwork for an improved Siri, which has fallen behind rival assistants from Google and Amazon.
TechRadar Pro asked Apple to confirm any restrictions placed on generative AI, but the company did not immediately respond.