Apple CEO Tim Cook recently shared the company’s vision for integrating artificial intelligence (AI) into its products during the question-and-answer portion of an earnings call. Although Cook declined to reveal specific product roadmaps, he emphasized the importance of taking a “deliberate and thoughtful” approach to incorporating AI technology. Cook called AI’s potential “huge” and said the company intends to continue weaving it into its products thoughtfully.

Apple’s cautious approach to generative AI may stem from the persistent challenge of bias in AI models. Bias occurs when a model produces unfair or inaccurate predictions because it was trained on incorrect or incomplete data, a central concern for the safe and ethical development of generative AI. A research paper slated for publication at the Interaction Design and Children conference in June explores a novel system for combating bias in the development of machine learning datasets.
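To make the idea of data-driven bias concrete, here is a minimal Python sketch, with groups, labels and counts invented purely for illustration (they are not drawn from the paper): a model trained on data in which one group is labeled mostly “approve” and another mostly “deny” will tend to reproduce that skew in its predictions.

```python
from collections import Counter

# Hypothetical toy dataset of (group, label) pairs; the group names, labels
# and proportions are illustrative assumptions, not data from the study.
samples = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "approve"),
    ("group_a", "deny"),
    ("group_b", "deny"), ("group_b", "deny"), ("group_b", "approve"),
]

def label_rates(data):
    """Return the share of each label within each group of the training data."""
    per_group = {}
    for group, label in data:
        per_group.setdefault(group, Counter())[label] += 1
    return {
        group: {label: count / sum(counts.values()) for label, count in counts.items()}
        for group, counts in per_group.items()
    }

# A model fit to this data inherits the skew it sees: group_a is mostly
# "approve", group_b mostly "deny", regardless of any individual example.
print(label_rates(samples))
```

Surfacing this kind of imbalance before training is one of the simplest checks a dataset audit can run.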

The proposed system has multiple users contribute to an AI system’s dataset with equal input, integrating human feedback at the early stages of model development. This “hands-on, collaborative approach” aims to create balanced datasets by democratizing the data selection process. Although the study was designed as an educational paradigm to encourage novice interest in machine learning development, it presents an alternative approach to addressing bias in AI.
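The paper’s exact protocol is not reproduced here, but a minimal Python sketch of the “equal input” idea, under the assumption that each contributor’s labeled examples are capped at the size of the smallest contribution so no single participant dominates the training set, might look like this (the build_balanced_dataset helper and the sample data are hypothetical):

```python
import random

def build_balanced_dataset(contributions, seed=0):
    """Combine labeled examples from several contributors with equal weight.

    `contributions` maps a contributor's name to their list of (text, label)
    pairs. Each contributor is down-sampled to the size of the smallest
    contribution, so every participant has the same influence on the dataset.
    This is an illustrative interpretation of "equal input", not the paper's
    actual method.
    """
    rng = random.Random(seed)
    cap = min(len(examples) for examples in contributions.values())
    dataset = []
    for contributor, examples in contributions.items():
        for text, label in rng.sample(examples, cap):
            dataset.append((contributor, text, label))
    rng.shuffle(dataset)
    return dataset

# Usage: three contributors label short texts; the larger contribution is
# trimmed so each person contributes exactly two examples.
contributions = {
    "alice": [("great product", "positive"), ("not worth it", "negative"),
              ("love it", "positive")],
    "bob":   [("terrible", "negative"), ("okay, I guess", "neutral")],
    "carol": [("would buy again", "positive"), ("broke in a week", "negative")],
}
print(build_balanced_dataset(contributions))
```

Equal-weight aggregation is only one possible reading of the approach; the broader point is that decisions about what enters the dataset are shared rather than made by a single developer.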

Scaling these techniques to the training of large language models (LLMs), such as those underpinning ChatGPT and Google Bard, may prove challenging. However, developing an LLM free of unwanted bias could mark a significant milestone on the path to human-level AI systems. Such systems have the potential to disrupt various technology sectors, including fintech, cryptocurrency trading, and blockchain. For instance, unbiased stock and crypto trading bots capable of human-level reasoning could revolutionize the global financial market by democratizing high-level trading knowledge.

Moreover, demonstrating an unbiased LLM could help address governments’ safety and ethical concerns about the generative AI industry. This is particularly relevant for Apple, as any generative AI product it develops or supports could benefit from the iPhone’s integrated AI chipset and its extensive user base of 1.5 billion.