John McCarthy, credited as the Father of Artificial Intelligence, defined it as “The science and engineering of making intelligent machines, especially intelligent computer programs.”
However, through decades of romanticizing the technology by way of movies and novels, the lines have blurred between what we aspire it to be and what is possible today.
That said, with recent advances in machine learning, natural language processing and problem-solving, have these technologies sufficiently matured to warrant concerns among consumers and governments?
For example, COVID-19 and the ensuing track-and-trace methods used by some governments may be raising concerns about the ethical boundaries organisations need to respect when acting to protect public health and safety.
It should be noted that as enterprises return to work, some are also opting to implement measures to monitor staff movement in and around the work environment.
Watch this YouTube video detailing how an AI algorithm works in a banking use case. One of the most striking points in the video is the AI's dependence on its data model. At least for now, the technology lives up to the adage: garbage in, garbage out.
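That data dependence can be illustrated with a toy experiment. The sketch below (entirely hypothetical, and not the model from the video or the EY report) trains the same simple nearest-neighbour "approval" model twice on synthetic loan data: once on clean labels, and once with nearly half the labels corrupted. The noisy model faithfully reproduces the garbage it was fed.

```python
import random

random.seed(42)

# Toy "loan approval" data: one feature (an income score in [0, 1]) and a
# label (1 = approve, 0 = reject). Purely illustrative banking data.
def make_data(n, label_noise=0.0):
    data = []
    for _ in range(n):
        income = random.random()
        label = 1 if income > 0.5 else 0
        if random.random() < label_noise:   # "garbage in": corrupt the label
            label = 1 - label
        data.append((income, label))
    return data

def predict_1nn(train, x):
    # 1-nearest-neighbour: copy the label of the closest training example,
    # so the model reproduces exactly what is in its training data.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, test):
    return sum(predict_1nn(train, x) == y for x, y in test) / len(test)

clean_train = make_data(500)
dirty_train = make_data(500, label_noise=0.45)   # almost half the labels wrong
test = make_data(500)

print(f"model trained on clean data: {accuracy(clean_train, test):.2f}")
print(f"model trained on noisy data: {accuracy(dirty_train, test):.2f}")
```

The algorithm is identical in both runs; only the data quality differs, and the noisy-data model's accuracy collapses accordingly.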
EY published a report titled Bridging AI’s trust gaps, based on a survey conducted between late 2019 and early 2020 covering 71 policymakers and 284 companies across 55 countries.
FutureCIO spoke to Wai Keat Cheang, partner and head of consulting for EY Singapore, on his observations around the technology and some of the growing concerns around AI.
In this podchat, he answers questions aimed directly at a July 2020 EY report, Bridging AI’s trust gaps, which points to AI discrepancies in four key areas: fairness and avoiding bias; innovation; data access; and privacy and data rights.
Click on the podcast player to listen to Cheang’s observations around the development of AI, its adoption by large enterprises, the approaches smaller businesses can take to benefit from the same innovation, and how governments may consider approaching the technology to narrow the trust gap that currently exists among employers who may feel threatened by the technology.
Q1: The report noted that as AI adoption accelerates, this is triggering a ‘techlash’ from the public. What do we mean by a ‘techlash’, and what are consumers’ top concerns?
Q2: AI promises, among other things, a more intimate familiarity with customers’ needs – to a certain degree, a response to consumer research calling for better personalization of customer engagement. As companies deploy AI, are they also prioritizing ethical concerns? Or are they more concerned with other principles in developing AI applications?
Q3: Research by EY and The Future Society revealed widespread disagreement between corporates and policymakers on ethical principles. As policymakers move toward regulatory enforcement, what risks do corporates face if they ignore these misalignments?
Q4: The report warns that a lack of trust between policymakers and companies is one of the greatest risks. How can companies proactively address this risk?
Q5: What should companies do to address these risks and prioritize AI governance?
Q6: What are the challenges that companies will face in addressing these risks?
Q7: Who should lead this effort and who should contribute to the effort?
Q8: Given that SMEs do not have the same resources as large enterprises, how should they approach risk issues around AI?
Q9: How should policymakers in government approach the task of educating and encouraging AI development?
The EY report lists four implications for enterprises looking to tap into the AI opportunity:
- Focus on AI’s emerging ethical issues. GDPR was just the beginning. AI will raise a host of new ethical challenges.
  - Which AI ethical principles are most important in your sector or segment?
  - How will they affect your business?
  - How should your strategy respond?
- Engage with policymakers. If you’re not at the table, you’re on the menu. Policymakers are ready to move ahead — but, without industry input, blind spots could lead to unrealistic or onerous regulations.
  - How do policymakers view AI governance and regulation in your sector or segment?
  - What real-world issues are critical to understanding your business?
  - How will you be part of the conversation?
- Be proactive with soft and self-regulation. Stakeholders expect more now. If companies want to lead on AI innovation, they need to lead on AI ethics as well.
  - Have you developed a corporate code of conduct for AI — and does it have teeth?
  - How aligned is it with the ethical principles consumers and policymakers prioritize?
  - How are you working with your peers (e.g., through trade organizations) on these issues?
- Understand and mitigate risks. AI governance — and particularly the “hard regulation” variant — will create new challenges and risks. Companies’ misalignment with policymakers only increases those risks.
  - What risks might the move to AI governance/regulation create for your business?
  - How are you mitigating and preparing for these challenges?
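The first implication above names fairness and avoiding bias as an emerging ethical issue, and a code of conduct "with teeth" usually implies measurable checks. As an illustration only — not a method from the EY report — one of the simplest such checks is demographic parity: comparing a model's approval rates across applicant groups. All names and data below are hypothetical.

```python
# Demographic-parity check: does an automated decision approve one group of
# applicants at a markedly different rate than another? (Illustrative only;
# real fairness audits combine several metrics, not just this one.)
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs, e.g. ('A', True)."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions from an AI screening model for two applicant groups.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)

print("approval rates:", approval_rates(decisions))
print(f"demographic parity gap: {parity_gap(decisions):.2f}")
```

Here group A is approved 80% of the time and group B only 50%, a parity gap of 0.30; a governance process might flag any gap above an agreed threshold for human review.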