The Gartner Hype Cycle for Emerging Technologies 2024, a crucial roadmap for CIOs and CTOs, highlights four dynamic areas: autonomous AI, developer productivity, total experience, and human-centric security and privacy programs, encompassing 25 groundbreaking technologies.
While some of these innovations are still in their infancy, often lacking proven use cases and the expertise to implement them, they present both a challenge and an opportunity for CIOs and CTOs. The question arises: why should leaders invest their attention and resources in these nascent technologies? How should CTOs and CIOs approach these technologies in their quest to drive innovation, growth, and profitability?
To navigate this landscape, decision-makers must develop a keen eye for potential. Evaluating the benefits and risks of these emerging innovations involves looking beyond the hype. CIOs and CTOs should consider how these technologies align with their strategic goals, assess their scalability, and explore pilot programs to gauge real-world impact.
By staying ahead of the curve and embracing these advancements with a critical mindset, organisations can position themselves to adapt and thrive in an increasingly digital future.
Autonomous AI
Arun Chandrasekaran, distinguished VP analyst at Gartner, acknowledges the unprecedented pace of innovation around artificial intelligence development in the last few years. "The focus is shifting towards increasing automation, beyond current chatbots requiring constant user prompts. Future AI models aim to function as agents that can operate autonomously on our behalf," he elaborated.
Key areas likely to be impacted include:
Customer Service: Initial applications will enhance internal customer service functions before moving to external roles.
Software Development: AI agents are being developed to automate the software development lifecycle.
“Autonomous AI has significant potential in industrial automation, utilising techniques like reinforcement learning and large language models. This could transform manufacturing, retail, oil and gas, agriculture, and automotive sectors,” posits Chandrasekaran. “The goal is to empower AI with greater agency, enabling it to perform tasks independently and effectively.”
Guidelines for the deployment of autonomous AI systems
Lately, concerns have arisen regarding the accelerated use of AI tools without established standards of practice. "The prospect of malicious actions enabled by AI-assisted tools is concerning risk leaders worldwide," said Gamika Takkar, director of research in the Gartner Risk & Audit Practice. She reminds us that the relative ease of use and quality of AI-assisted tools, such as voice and image generation, increase the ability to carry out malicious attacks with wide-ranging consequences.
Chandrasekaran offers four guidelines to ensure responsible and ethical use of the technology:
Explainability: Users must understand the reasoning behind AI agents' actions and suggestions. Explainability should be a fundamental aspect of any autonomous AI system.
Fairness: AI tools must avoid propagating biases or misinformation. Since models are often trained on biased internet data, organisations should implement tools and processes to identify and mitigate these biases.
Data privacy: As AI agents interact with multiple data sources and applications, protecting intellectual property and sensitive information is essential. Organisations should prioritise data privacy to safeguard their data.
Security risks: Be aware of potential security threats, such as hackers targeting AI agents. Emerging risks, like prompt injection attacks, could manipulate agents into unintended actions. Organisations must proactively address these vulnerabilities.
"By focusing on these aspects, organisations can better govern the deployment of autonomous AI systems and minimise associated risks," he continued.
Boosting developer productivity
One of AI's early use cases is application development. A recent Gartner survey found that 58% of respondents said their organisation is using or planning to use generative AI over the next 12 months to control or reduce costs.
"AI-augmented development tools integrate with a software engineer's development environment to produce application code, enable design-to-code transformation, and enhance application testing capabilities," Gartner VP analyst Joachim Herschmannsaid. "Investing in AI-augmented development will support software engineering leaders in boosting developer productivity and controlling costs and can also improve their teams' ability to deliver more value."
Chandrasekaran suggests CIOs adopt the following tools and practices:
Utilise cloud and platform tools: Leverage cloud infrastructure and platform tools to enhance developer access and capabilities. Establish platform engineering teams to improve collaboration and productivity.
Adopt modern development tools: Implement tools like containers, Kubernetes, and CI/CD (Continuous Integration/Continuous Deployment) systems. Developer portals can also facilitate developers' self-service provisioning.
Integrate AI across the development lifecycle: Encourage the use of AI tools for code generation, code completion, and integration into development environments. AI technology can improve quality assurance (e.g., automated testing, generating mock data) and documentation as it matures.
Support application modernisation: Use AI to help modernise legacy applications, migrating them to more current programming languages, which can address skill shortages in older languages.
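The quality-assurance point above mentions generating mock data for automated testing. A small sketch of that idea, using a seeded generator so tests stay reproducible (the record fields and the function under test are hypothetical):

```python
import random
import string

def mock_customer(rng: random.Random) -> dict:
    """Generate one synthetic customer record for tests (no real PII)."""
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "name": name.title(),
        "email": f"{name}@example.com",
        "credit_limit": rng.randrange(1_000, 50_000, 500),
    }

def total_exposure(customers: list[dict]) -> int:
    """Hypothetical code under test: sum of all credit limits."""
    return sum(c["credit_limit"] for c in customers)

# A seeded generator keeps the test deterministic across runs.
rng = random.Random(42)
batch = [mock_customer(rng) for _ in range(100)]
assert all(c["email"].endswith("@example.com") for c in batch)
assert total_exposure(batch) > 0
```

AI-assisted tools automate producing generators like this from a schema; the underlying testing pattern is the same.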
Total Experience
According to Chandrasekaran, organisations generally prioritise customer experience over partner and employee experience, but there has been a shift in focus, particularly in the knowledge and service industries. He lists four contributing factors:
Value of human capital: Companies recognise the importance of providing employees with the necessary tools for collaboration and productivity, especially in knowledge-based sectors.
Impact of COVID-19: The pandemic fundamentally changed work dynamics, leading to a rise in hybrid work models. Employees have resisted returning to the office full-time, prompting organisations to adapt.
Hybrid work reconciliation: Many organisations have accepted that employees will work a mix of in-office and remote days. This shift necessitates delivering a seamless experience, regardless of location.
Investment in tools: There has been increased investment in digital experience tools and security measures to support remote work. Some industries are exploring technologies like augmented and virtual reality for enhanced collaboration.
He concluded that the pandemic has significantly influenced how organisations approach employee and partner experiences, leading to more attention and investment.
Human-centric security and privacy programs
Few would dispute that humans are the weakest link in an organisation's security, in part because they are unpredictable and have consistently shown themselves to be easy targets. Chandrasekaran believes that to enhance their security posture while respecting user privacy, organisations may want to divide their technologies into two categories: AI-enabled and non-AI-enabled.
Non-AI-enabled technologies:
Advanced encryption: There's a growing focus on quantum-safe cryptography to protect against future quantum computing threats. Techniques like homomorphic encryption allow for data analysis without decryption.
Holistic cloud security: Organisations are adopting centralised security management to ensure consistent policy enforcement across various environments, including cloud, on-premises, and edge devices.
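The homomorphic encryption mentioned above can be illustrated with the textbook Paillier scheme, which lets you add two numbers while they remain encrypted. The toy key sizes below are for demonstration only and would be trivially breakable in practice:

```python
import math
import random

# Textbook Paillier with tiny demo primes (never use keys this small).
p, q = 10007, 10009
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts,
# so a third party can compute on data it cannot read.
c = (encrypt(12) * encrypt(30)) % n2
print(decrypt(c))  # 42
```

This is what "data analysis without decryption" means concretely: the party holding the ciphertexts can aggregate values without ever seeing them.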
AI-Enabled Technologies:
Data masking: New technologies can obfuscate data used in AI models, enhancing security and privacy.
Protection against attacks: Emerging solutions aim to defend AI systems from attacks, such as prompt injection.
Content moderation: With the rise of AI-generated misinformation, organisations need to implement measures to verify the authenticity of the content and identify AI-generated materials, similar to watermarking techniques but more subtle.
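Data masking of the kind described above is typically applied before text leaves a trust boundary, such as before a prompt is sent to an external model. The regexes below are a deliberately simplified sketch; real deployments use far more robust PII detectors:

```python
import re

# Illustrative patterns for two common PII types (not exhaustive).
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def mask_pii(text: str) -> str:
    """Replace detected PII with placeholder tokens before model calls."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Contact jane.doe@acme.com or 555-010-4477 about the renewal."
print(mask_pii(prompt))
# Contact [EMAIL] or [PHONE] about the renewal.
```

The placeholders preserve sentence structure, so the model can still reason over the text while the sensitive values never leave the organisation.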
Chandrasekaran concedes that organisations face significant challenges in balancing security and privacy, particularly as AI technology evolves.
Click on the PodChats player and listen to Chandrasekaran's observations and advice on how organisations can gain business value from emerging technologies like AI.
Autonomous AI:
- Are there specific business processes or functions where autonomous AI could have a significant impact? How do you map this to other efforts like autonomous finance?
- As concerns about AI ethics continue to rise (and mature), can you suggest guidelines for the organisation to guide/govern the deployment of autonomous AI systems?
Boosting Developer Productivity:
- What tools and practices can CIOs adopt to streamline software development and reduce time-to-market?
- How do we balance speed with quality in the development processes without dropping the security side of DevOps?
Total Experience:
- Do organisations pay attention to partner and employee experience?
- Are there any pain points in current interactions that need improvement?
Human-Centric Security and Privacy Programs:
- What technologies can enhance an organisation's security posture while respecting user privacy?
- Is it possible to build a security-aware culture across the organisation? How do you ensure this practice is sustainable as organisations move through generations of employees, customers, and partners?
Do you have any advice for CIOs, CTOs, and CFOs on approaching these emerging technologies to extract business value?