Why privacy matters when using AI
Most AI tools work by sending your input (text, data, documents) to remote servers, often located overseas, for processing. This means that when you paste a customer's email into ChatGPT, or upload a document containing personal details to an AI tool, that information may be stored and processed by a third party in another country.
For Australian businesses, this raises obligations under the Privacy Act 1988 and the Australian Privacy Principles (APPs).
Note: This guide provides general information only and is not legal advice. If you handle sensitive data at scale, consult a privacy professional.
The Australian Privacy Principles in plain English
The APPs apply to businesses with annual turnover over $3 million, as well as to health service providers and certain others regardless of size. If you're below this threshold, you may still want to follow good practice — it builds customer trust and may become mandatory in future reforms.
The key principles relevant to AI use:
- Only collect what you need — don't feed AI tools data that isn't necessary for the task
- Tell people how their data is used — your privacy policy should mention AI tool usage if it involves customer data
- Keep data secure — understand where AI tools store data and whether that meets your obligations
- Don't use data in ways people wouldn't expect — customers sharing their details for a quote don't expect that info to be processed by an overseas AI platform
What data is safe to use with AI tools?
Generally safe:
- Your own business information, processes, and documents
- Anonymised or aggregated data (no names, emails, or identifying details)
- Publicly available information
- Your own writing, ideas, and drafts
Handle with care:
- Customer names and contact details
- Employee information and performance records
- Financial records with identifying information
- Any health-related information
Practical rule: Before pasting anything into an AI tool, ask yourself — "Would my customer be comfortable knowing their information was sent to an overseas server for processing?" If the answer is no, anonymise it first or don't use AI for that task.
Understanding AI tool data policies
The major AI platforms have different approaches to data retention and training:
- ChatGPT (OpenAI) — by default, conversations may be used to improve their models. You can opt out in settings. The ChatGPT Team/Enterprise plans offer stronger data controls.
- Microsoft Copilot (business plans) — Microsoft commits to not using your data to train AI models, and data stays within your Microsoft 365 tenant.
- Google Workspace AI — similar to Microsoft; business plans offer data protection commitments.
Always check the privacy policy and data processing terms of any AI tool before using it with customer data.
Simple steps to protect your business
- Update your privacy policy to mention that you use AI tools in your operations, and what kinds of data may be processed
- Use business accounts rather than free personal accounts for AI tools — business plans typically have better data protections
- Anonymise before processing — replace customer names with "Customer A" when using AI to analyse feedback or draft responses
- Don't store sensitive data in AI chat history — clear conversation history or use tools that don't retain data
- Train your team — make sure anyone using AI tools understands what they can and can't share
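The "anonymise before processing" habit can even be partly automated. The sketch below is an illustrative example only, not a complete de-identification tool: the regex patterns, placeholder labels, and the example name "Jane Smith" are all assumptions for demonstration, and real customer data may contain identifiers these patterns miss.

```python
import re

def anonymise(text, known_names=None):
    """Replace obvious identifiers with placeholders before pasting text into an AI tool.

    Illustrative only: catches emails, Australian-style phone numbers, and a
    supplied list of known names. It will not catch every identifier.
    """
    # Redact email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Redact Australian-style phone numbers (e.g. 0412 345 678 or +61 ...)
    text = re.sub(r"(\+61|0)[\d\s-]{8,12}", "[PHONE]", text)
    # Replace known customer names with generic labels: Customer A, Customer B, ...
    for i, name in enumerate(known_names or []):
        label = f"Customer {chr(ord('A') + i)}"
        text = re.sub(re.escape(name), label, text, flags=re.IGNORECASE)
    return text

original = "Jane Smith (jane@example.com, 0412 345 678) asked about a refund."
print(anonymise(original, known_names=["Jane Smith"]))
# Customer A ([EMAIL], [PHONE]) asked about a refund.
```

A script like this is a useful first pass, but a person should still review the output before it goes anywhere, since names not on the known list (or identifiers in unusual formats) will slip through.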
The bottom line
Privacy compliance doesn't mean avoiding AI — it means using it thoughtfully. Most small businesses can use AI tools safely with a few simple habits: anonymise customer data, use business-grade tools, and update your privacy policy.
The businesses that get into trouble are usually those that paste sensitive customer data into free AI tools without thinking about where it goes.
Ready to go deeper?
Get your personalised 90-day AI roadmap
This guide gives you the foundation. Your roadmap gives you a step-by-step plan built around your specific business, tools, budget, and goals.
Start the Assessment — $247 AUD