Lumenalta's latest whitepaper reveals that only 8% of companies achieve 'extreme success' with their AI initiatives, with data integrity and security challenges holding the rest back.
Major challenges
The study found that almost all IT leaders (98%) cite data quality and integrity as a significant roadblock to AI/ML adoption, while 86% of respondents are concerned about security and privacy in their AI/ML implementations.
All respondents reported adopting data cataloguing tools, but only 60% leverage data lineage tracking and 61% use dedicated governance platforms. Over half (53%) have not implemented bias mitigation safeguards for their AI systems.
Advancing AI success
Michael Hagler, president of Lumenalta, suggests implementing principles from trusted data governance frameworks that prioritise data integrity, security, and accessibility to drive success.
"Today, the vast majority of projects don't use or follow specific frameworks unless they have an established team dedicated to data governance. By ensuring high-quality, trusted data flows through every stage of their AI pipelines, companies can reduce risks and achieve more accurate insights and reliable outcomes from their AI investments," Hagler said.
Strengthening data integrity and security
Hagler believes that GenAI adoption can serve as both the goal of and the catalyst for strengthening data integrity and security.
"Through automated metadata generation, data quality monitoring, data cleansing and enrichment, data lineage and provenance, real-time data validation, and anomaly detection, companies can conservatively expect a 20-40% reduction in data management costs and a halving of deployment time. Clear governance practices not only help organisations address compliance requirements proactively but also foster stakeholder trust in AI systems," he said.
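Two of the practices Hagler lists, real-time data validation and anomaly detection, can be sketched in a few lines. The following is a minimal, hypothetical illustration (the field names and thresholds are assumptions for the example, not taken from the whitepaper): a schema check that rejects malformed records before they enter an AI pipeline, and a simple z-score test that flags statistical outliers.

```python
from statistics import mean, stdev

# Hypothetical record schema, chosen only for illustration.
REQUIRED_FIELDS = {"customer_id": str, "amount": float}

def validate_record(record: dict) -> list[str]:
    """Real-time validation: check that required fields exist and have the expected type."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"wrong type for field: {field}")
    return errors

def detect_anomalies(values: list[float], threshold: float = 3.0) -> list[int]:
    """Flag indices of values whose z-score exceeds the threshold."""
    if len(values) < 2:
        return []
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Usage: a clean record passes; an obvious outlier in a flat series is flagged.
assert validate_record({"customer_id": "c-101", "amount": 42.0}) == []
print(detect_anomalies([10.0] * 20 + [1000.0]))  # the index of the outlier
```

Production systems would typically use a dedicated validation framework rather than hand-rolled checks, but the principle is the same: catch bad data before it reaches model training or inference.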