If you’re like most leaders we speak with, you’re still figuring out generative AI (genAI) and other AI solutions for employees. You’re also underestimating the amount of training and upskilling your employees need to productively use these tools.
As I wrote in a recent report, most companies are poised to underinvest, by an order of magnitude, in the training necessary to use AI successfully. You probably don’t know how ready your employees are, what gaps in readiness are impeding them, or how to improve their readiness. AIQ (the artificial intelligence quotient) can help.
AIQ measures the readiness of individuals, teams, and organisations to adapt to, collaborate with, trust, and generate business results from generative AI and other forms of AI. Low AIQ creates risks for your organisation:
Low AIQ undermines AI ambitions
Low-AIQ employees shouldn’t be deploying customer-facing applications. We don’t know for sure the AIQ of the employees at the following organisations, but we cite them as cautionary tales. The City of New York deployed a genAI-based chatbot that gave incorrect legal information to citizens and small businesses. TurboTax and H&R Block each deployed chatbots that answered inaccurately when customers asked questions while preparing their taxes.
While we can’t say for sure that low AIQ caused these missteps, we do believe that low-AIQ employees are more likely to make similar errors.
Low-AIQ employees won’t capture productivity benefits. Everyone seeks increased productivity with generative AI tools. Employees with higher AIQ will use genAI tools more effectively, yielding higher productivity.
Employees with low AIQ might not adopt the tools at all — or might misuse them in a way that drives negative productivity, because they’re forced to redo work that genAI did incorrectly the first time without their noticing.
Benchmark AIQ
AIQ fills a gap in traditional thinking about enterprise AI adoption, which tends to focus on vendor selection, technical skills, and data. In reality, two companies could acquire the same technology, hire comparable technical talent, use a similar data set … yet still generate radically different business results.
That’s because people — and the understanding, skills, and ethics that they possess — are crucial to the success of generative AI and other AI systems.
Readiness and perceptions
AIQ solves this problem. It employs 12 statements that evaluate how employees feel about their readiness and 12 parallel statements, reworded for leaders, that measure leaders’ perceptions of employees’ readiness. Your results — which you can track over time — anchor your analysis of next steps.
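To make the mechanics concrete, here is a minimal scoring sketch. Forrester does not publish the AIQ scoring rubric, so everything below is a hypothetical illustration: it assumes the 12 statements are rated on a 1–5 Likert scale, averaged, scaled to 0–100, and banded into the "low"/"medium"/"high" readiness levels mentioned later in this post.

```python
# Hypothetical AIQ scoring sketch -- NOT Forrester's actual rubric.
# Assumes 12 statements, each rated 1-5, averaged and scaled to 0-100.

def aiq_score(responses: list[int]) -> float:
    """Average of 12 Likert (1-5) ratings, scaled to a 0-100 score."""
    if len(responses) != 12 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 12 ratings between 1 and 5")
    return round(sum(responses) / len(responses) / 5 * 100, 1)

def aiq_band(score: float) -> str:
    """Illustrative cut-offs for readiness bands."""
    if score < 50:
        return "low"
    if score < 75:
        return "medium"
    return "high"

# The parallel employee/leader surveys let you compare perceptions:
employee = aiq_score([4, 3, 4, 5, 3, 4, 4, 3, 4, 5, 4, 3])  # self-assessment
leader = aiq_score([3, 2, 3, 4, 2, 3, 3, 2, 3, 4, 3, 2])    # leader's view
gap = employee - leader  # a large positive gap signals a perception mismatch
```

Tracking these scores and bands per team over successive survey waves is one simple way to anchor the "track over time" analysis described above.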
Example: Apply AIQ to calibrate the AI applications that you offer employees
Perhaps precision is particularly important to specific roles, such as contract negotiators, lawyers, or curriculum developers. Also imagine that you’ve deployed our AIQ survey to your team, and there are gaps: they’ve scored “medium” or “low” on AIQ. What now? Calibrate your choice of AI tool to the audience you have and its AIQ.
Practical steps
With those roles — and their AIQ scores — in mind, perhaps you can pilot a document-centric solution like Adobe Acrobat’s AI Assistant (currently in beta). Adobe’s AI Assistant specialises in interrogating PDFs (an estimated 3 trillion exist worldwide) but also works with other file types, such as Word documents.
Because the user is engaging with a specific document, the scope for hallucination and coherent nonsense can, in theory, be reduced. (Architecturally, this approach resembles retrieval-augmented generation, or RAG, which my colleagues discuss in this report.)
But if the employees scored “high” in AIQ, perhaps they’re ready for something that requires more skill to use, such as Microsoft Copilot for Microsoft 365.
You can adapt this quick example to all sorts of parameters. The key is to benchmark the AIQ of your user base and calibrate your choice of application to their AIQ scores, roles, and workflows.
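The calibration logic above can be sketched as a simple lookup. The tool names come from this post, but the mapping rules themselves are illustrative assumptions, not Forrester guidance; your own cut-offs and tool list would differ.

```python
# Illustrative calibration table: map an AIQ readiness band to a starting
# point. Tool names are from the article; the rules are hypothetical.

CALIBRATION = {
    "low": "structured training before any genAI rollout",
    "medium": "document-centric tool (e.g. Adobe Acrobat AI Assistant)",
    "high": "broader assistant (e.g. Microsoft Copilot for Microsoft 365)",
}

def recommend(band: str, precision_critical: bool) -> str:
    """Pick a starting tool from the AIQ band; precision-critical roles
    (negotiators, lawyers) stay on document-grounded tools even at high AIQ."""
    if precision_critical and band == "high":
        return CALIBRATION["medium"]
    return CALIBRATION.get(band, CALIBRATION["low"])
```

Encoding the decision this way forces you to state your cut-offs and role exceptions explicitly, which makes the calibration easy to revisit as survey scores change.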
Originally posted on Forrester