While bring your own AI (BYOAI) is an intriguing prospect, advancing our cybersecurity and risk management practices to support AI's innovative potential is crucial. This isn't about stifling innovation but about safeguarding our most valuable assets and ensuring business continuity. Before democratising AI across an organisation, several key checkpoints must be addressed.
The past few years have seen rapid advancements in AI, particularly with Generative AI (GenAI). As we enter 2025, Agentic AI is taking centre stage, bringing new business considerations. Despite predictions of an AI hype collapse, the momentum remains strong, with new models emerging quickly.

This means users will encounter a wave of new AI products, raising the question: is BYOAI the next trend after bring your own device (BYOD)? Unlike BYOD, however, BYOAI introduces significant cybersecurity, risk, and governance issues. A recent survey projects that 66% of cybersecurity vulnerabilities in 2025 will stem from AI and machine learning vectors.
Key questions for organisations:
- Where are you on your Zero Trust journey?
Zero Trust security is essential for organisations with critical assets and regulatory requirements. With advances in AI and quantum computing, Zero Trust principles must evolve to address new threats.
Implementing Zero Trust means treating every access request as untrusted until verified, protecting all internal assets, and adopting a least-privilege model. As AI continues to evolve, staying ahead of zero-day threats is a constant challenge, but adhering to fundamental security practices helps mitigate the risk.
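As a purely illustrative sketch, not tied to any particular identity platform or product, the snippet below shows the deny-by-default, least-privilege idea at the heart of Zero Trust: every access request is refused unless an explicit grant covers it. The roles, resources, and actions are hypothetical.

```python
# Minimal deny-by-default access check illustrating least privilege:
# a request is allowed only if an explicit grant exists for that exact
# (role, resource, action) combination. All names here are hypothetical.
from dataclasses import dataclass

# Explicit grants: anything not listed is denied by default.
GRANTS = {
    ("finance-analyst", "invoice-db", "read"),
    ("finance-analyst", "invoice-db", "export"),
    ("hr-admin", "employee-records", "read"),
}

@dataclass(frozen=True)
class AccessRequest:
    role: str
    resource: str
    action: str

def is_allowed(request: AccessRequest) -> bool:
    """Return True only if an explicit grant covers this exact request."""
    return (request.role, request.resource, request.action) in GRANTS

# A finance analyst can read invoices, but cannot delete them and has no
# implicit access to employee records.
print(is_allowed(AccessRequest("finance-analyst", "invoice-db", "read")))        # True
print(is_allowed(AccessRequest("finance-analyst", "invoice-db", "delete")))      # False
print(is_allowed(AccessRequest("finance-analyst", "employee-records", "read")))  # False
```

In practice this logic sits in an identity provider or policy engine rather than application code, but the principle of granting only what is explicitly required carries over directly to how BYOAI tools should be given access to organisational data.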
- How mature is your data privacy and governance model?
Many organisations are still developing mature governance models, and adopting multi-cloud solutions makes it harder to establish robust IT governance and data privacy practices.
BYOAI adds further risk to both security and privacy. Implementing robust information protection and data loss prevention (DLP) strategies and fostering a cybersecurity culture are necessary. Achieving maturity in governance requires continuous monitoring and a comprehensive framework.
- Are you sure BYOAI isn't happening without your knowledge?
With numerous free GenAI tools available, users may adopt them without oversight, leading to data leaks. Organisations must monitor data usage and revisit vendor contracts to ensure data privacy.
Protecting sensitive data from being ingested into AI models is crucial: the very ease of access to these tools means individuals can inadvertently expose sensitive information, which may then be used to train models outside the organisation's control.
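To make this concrete, here is a minimal, hedged sketch of a DLP-style outbound filter that redacts obvious sensitive patterns before a prompt leaves the organisation for an external GenAI tool. The patterns and the send_to_external_ai() stub are illustrative assumptions, not a complete information protection control.

```python
# DLP-style sketch: redact obvious sensitive patterns (email addresses and
# payment-card-like numbers) before text is sent to an external GenAI tool.
# Patterns and the send_to_external_ai() stub are illustrative only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each match of a sensitive pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

def send_to_external_ai(prompt: str) -> None:
    # Placeholder for a call to an external GenAI service.
    print("Outbound prompt:", prompt)

meeting_note = "Follow up with jane.doe@example.com; card on file 4111 1111 1111 1111."
send_to_external_ai(redact(meeting_note))
# Outbound prompt: Follow up with [REDACTED-EMAIL]; card on file [REDACTED-CARD_NUMBER].
```

Real information protection and DLP platforms rely on far richer classifiers and policy engines, but even a simple filter like this shows where an organisational control point can sit between users and external AI services.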
- How AI-literate are your cybersecurity, risk, and governance teams?
Cybersecurity, risk, and governance teams must understand AI to defend against new threats. Responsible AI use and an appreciation of AI's impact on decision-making are crucial, and teams must be prepared to assess AI adoption against relevant checkpoints and guardrails.
Adopting Generative AI introduces new security and risk vectors, such as poisoning and hallucination in large language models (LLMs), which require a solid understanding to manage effectively.
- Are you introducing new threats or governance and compliance risks through BYOAI?
Imagine a scenario where your B2B company or industry doesn't handle or process Personally Identifiable Information (PII) that falls under GDPR or similar data privacy regulations.
The prospect of using a cutting-edge AI note taker for meetings or a BYOAI product that translates your executive's speech into another language can be incredibly exciting. These tools promise to enhance productivity and communication.
However, it's important to recognise that using such tools involves capturing and potentially storing individuals' voices or video personas, which are themselves personal data. This reintroduces considerations around data privacy and security even for organisations that thought they fell outside such regulations.
Moreover, smaller organisations in creative fields and other industries should be particularly vigilant about potential intellectual property (IP) violations: however innovative and efficient these tools are, their output may inadvertently infringe on existing IP rights.
These organisations must implement robust policies and practices to safeguard their creative assets and ensure compliance with IP laws.
Balancing innovation with responsibility is key to protecting personal data and intellectual property, and a comprehensive cybersecurity culture is necessary to manage the risks associated with BYOAI.
As we embrace these advanced technologies, organisations must remain vigilant and proactive in their approach to cybersecurity and governance to fully leverage AI's benefits while mitigating the associated risks.