In recent weeks, leaders in AI and machine learning worldwide have debated a call for an “immediate pause of the training of AI systems more powerful than GPT-4”. Within a span of two weeks, the Future of Life Institute’s open letter gathered over 27,000 signatures, including Elon Musk’s.
Having spoken at great length about AI’s potential dangers, Musk co-founded OpenAI as a non-profit research organisation in 2015 to mitigate those inherent threats. Ironically, OpenAI is now the primary focus of the open letter due to the rapid advances of its Large Language Model (LLM) GPT-4, which powers the latest version of ChatGPT.
On the opposing side are Bill Gates and other AI developers, who believe that any pause would be difficult to enforce and that the way forward is instead to tackle the “tricky areas” of AI. Yann LeCun, Meta’s chief AI scientist, and Andrew Ng, Google Brain co-founder, described the pause as a “terrible idea” and stressed the need to “balance the huge value AI is creating versus realistic risks”.
The decision to pause the development of new and more powerful LLMs will be complex, with implications for commercial competition, public policy and, potentially, national security.
The AI gold rush
Today’s digital economy is creating greater volumes and types of data from more sources than ever before. Businesses are turning to AI and machine learning technologies to extract deeper insights to drive revenue and efficiencies.
A recent global IBM study notes that AI-based technologies are expected to change the way business is conducted, with a third of surveyed global business leaders believing that generative AI will transform their enterprise.
Figure 1: Top 5 emerging technologies that are expected to change industries
In APAC, IDC expects AI spending to hit US$49.2 billion by 2026. Most countries in the region, Singapore included, have announced national AI strategies to propel AI adoption and use cases. For example, Singapore is investing considerable resources into its National AI Strategy to develop AI solutions that benefit businesses and citizens.
This growing interest in AI is what concerns the signatories of the open letter - the imminent gold rush could result in AI that “not even their creators can understand, predict, or reliably control”.
Before hopping on the AI bandwagon, businesses should thoroughly evaluate the potential risks of the technologies that they are deploying - particularly those built on third-party platforms, such as GPT-4.
While ChatGPT’s recent data breach was caused by a bug in its backend system, the incident has brought concerns about the security of such tools to the forefront.
With the recent furore surrounding Samsung Electronics engineers sharing sensitive corporate information with ChatGPT, we must be more aware of the data that we are feeding into these systems, and the potential fallout if that data were to be leaked.
We have moved from building systems that make decisions based on human-defined rules to systems trained on data. In developing these increasingly sophisticated, autonomous systems, we must also remember that we are ethically responsible for them.
Ethical AI seeks to design AI systems that are fair, accountable, and transparent to reduce bias and prejudice. In the absence of established regulatory frameworks, the onus is on businesses to approach AI systems with clear intentions and careful consideration of their impact.
Unlocking the power of AI through data governance
Fairness and ethics have driven the focus on interpretability and transparency within AI systems in recent years. That motivation is now evolving. It is no longer sufficient to understand and place guardrails around the behaviour of models to protect consumers.
Organisations must take a step back and look at what forms the foundation of AI - data. Humans possess many forms of ingrained biases, like confirmation bias, that will potentially be present in data. Using AI to automate business processes like hiring, loan approvals, and job dismissals will magnify any inherent biases in the data, which can result in unintended consequences and discrimination.
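One way to surface such inherited bias before a model is deployed is to compare selection rates across groups in the historical data. The sketch below is a minimal, hypothetical illustration using made-up hiring records and the common “disparate impact ratio” check; the data, group labels, and 0.8 threshold are assumptions for illustration, not a complete fairness audit.

```python
# Hypothetical check: do historical hiring outcomes differ sharply by group?
from collections import defaultdict

# Made-up historical records: (group, was_hired)
records = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

totals = defaultdict(int)
hires = defaultdict(int)
for group, hired in records:
    totals[group] += 1
    if hired:
        hires[group] += 1

# Selection rate per group, then the disparate impact ratio
# (lowest rate divided by highest rate).
rates = {g: hires[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)            # {'A': 0.75, 'B': 0.25}
print(round(ratio, 2))  # 0.33 - well below the common 0.8 rule of thumb
```

A model trained naively on this data would learn to reproduce the 3-to-1 gap between the groups, which is the magnification effect described above.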
The potential benefits of adopting AI are significant, with many exploring its application to solve problems, spark innovation and provide value to both organisations and consumers. However, very few organisations have the resources to train LLMs of the magnitude of GPT-4.
To develop ethical AI systems that can account for bias, distinguish correlation from causation, handle uncertainty, and support human oversight, it is imperative to maintain strong controls over data management and governance, along with the ability to reliably reproduce outcomes.
For organisations that possess a vast amount of customer data through digital touchpoints, these data sets can be fed into the LLM to further finetune its generative capabilities. Data of this nature will need to be organised and secured with an end-to-end data management and analytics platform.
Utilising a data management platform that supports data-sensitivity checks and data organisation can help an enterprise avoid these pitfalls and reap the benefits of AI by excluding personal and private data from training or fine-tuning datasets.
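As a rough illustration of such a sensitivity check, the sketch below screens text records for obvious personal data before they enter a fine-tuning set. The regular expressions and the `contains_pii` / `filter_training_records` helpers are hypothetical simplifications; production data platforms use far richer detectors (named-entity recognition, document classification, lineage tracking) than a few patterns.

```python
import re

# Hypothetical patterns for common PII; real platforms use far richer detectors.
PII_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),     # email addresses
    re.compile(r"\+?\d[\d\s-]{7,}\d"),           # phone-like number runs
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-style identifiers
]

def contains_pii(text: str) -> bool:
    """Return True if any sensitivity pattern matches the record."""
    return any(p.search(text) for p in PII_PATTERNS)

def filter_training_records(records):
    """Keep only records free of obvious PII before fine-tuning."""
    return [r for r in records if not contains_pii(r)]

docs = [
    "Customer praised the new checkout flow.",
    "Contact jane.doe@example.com for a refund.",
    "Ticket escalated, callback at +65 6123 4567.",
]
print(filter_training_records(docs))
# ['Customer praised the new checkout flow.']
```

In practice such a filter would sit inside the data platform’s ingestion pipeline, so that sensitive records are quarantined and audited rather than silently passed to model training.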