Over a third (36%) of employees upload sensitive company information into AI tools, including strategic plans (44%), technical data (40%), financial information (34%), and internal communications (28%), according to the Shadow AI Report 2025 by SaaS management platform Josys.
Shadow AI
The survey of 500 Australian technology decision-makers also found a surge in “shadow AI” — employee use of unauthorised AI platforms that bypass security protocols — exposing companies to serious data risks.

Jun Yokote, COO and president of Josys International, paints a sobering picture: “Shadow AI is no longer a fringe issue. It’s a looming full-scale governance failure unfolding in real time across Australian workplaces.”
Governance and compliance gaps
Alarmingly, the report found that only 33% of organisations are fully prepared to assess AI risks, with nearly 20% not prepared.
Moreover, 63% of professionals reported a lack of confidence in using AI securely. Full preparedness was lacking even in highly regulated sectors such as finance (52%), IT/telecom (55%), and healthcare (62%).
Forty-seven per cent of respondents cite upcoming AI model transparency requirements and Privacy Act amendments as top compliance hurdles.
Half of the respondents still rely on manual policy reviews, while 33% have no formal AI governance processes in place. Only 25% believe their current enforcement tools are highly effective.
Yokote underscores the need for a unified approach to AI governance: “While the nation is racing to harness AI for increased productivity, without governance, that momentum quickly turns into risk. What’s needed is a unified approach to AI governance which combines visibility, policy enforcement, and automation in a single, scalable framework.”