Setting Boundaries: What Not to Do with AI Tools

Learning Objectives

By the end of this lesson, you will understand the common pitfalls of using Chat AI tools at work, be able to identify inappropriate uses, and recognise the negative outcomes that misuse can cause, including legal, ethical, and operational risks. You will also gain practical insight into setting boundaries for responsible AI tool use within your organisation.

  1. Recognise Inappropriate Uses: Learn to identify tasks or requests that should not be delegated to AI tools—such as generating confidential reports, making HR decisions, or answering legal queries without human oversight.
  2. Understand Organisational Policies: Review your company’s data handling and IT security policies to see how they apply to AI use.
  3. Assess Content Before Sharing: Double-check outputs from AI tools for accuracy and appropriateness before distributing them within or outside your organisation.
  4. Be Aware of Consequences: Familiarise yourself with potential risks, including breaches of confidentiality, spreading misinformation, or violating regulatory requirements.
  5. Promote Ethical Use: Consider the wider implications of AI responses, including bias, fairness, and honesty in communications generated by AI tools.

Overview

Chat AI tools like ChatGPT have revolutionised workplace productivity, offering quick responses and valuable insights. However, with these powerful capabilities comes the responsibility to use them wisely and within clear boundaries. Misusing such tools can lead to the spread of misinformation or inadvertent sharing of sensitive company data.

This lesson will help you recognise the lines that should not be crossed when using AI tools in your organisation. You’ll discover why certain uses are considered inappropriate and explore the risks and consequences for employees and companies that fail to put proper safeguards in place.

Commonly Used Terms

Here are some key terms you’ll encounter when considering boundaries with AI tools in the workplace:

  • Misinformation: False or inaccurate information generated or spread by AI, which can cause confusion or reputational damage.
  • Policy Breach: When a user does not adhere to their organisation’s rules, such as sharing confidential data with an unauthorised tool.
  • Confidential Data: Sensitive information that must be protected and should not be entered into public AI platforms.
  • Bias: When AI tools give unfair or unbalanced responses due to the way they are trained.
  • Regulatory Compliance: Following laws and guidelines (such as the UK GDPR) for data protection in the UK workplace.

Q&A

Can I use Chat AI tools to draft emails that contain client information?

Generally, you should not input any confidential or personally identifiable client information into AI tools unless your organisation has approved their use for this purpose and you are certain the tool complies with relevant data protection laws (such as the UK GDPR).
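As an illustration only, some teams add a redaction step so that obvious personal identifiers never leave the organisation in a prompt. The sketch below is hypothetical: the patterns and the redact helper are invented for this lesson, and a few regular expressions are no substitute for a proper data loss prevention tool or for organisational approval.

    import re

    # Hypothetical patterns for this sketch; real PII detection needs a
    # dedicated data loss prevention (DLP) tool, not a handful of regexes.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "UK_PHONE": re.compile(r"\b0\d{2,4} ?\d{3,4} ?\d{3,4}\b"),
        "NHS_NUMBER": re.compile(r"\b\d{3} ?\d{3} ?\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace likely personal identifiers with placeholder tags."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    draft = "Please email jane.doe@example.com or call 020 7946 0958."
    print(redact(draft))
    # -> Please email [EMAIL REDACTED] or call [UK_PHONE REDACTED].

Even with a step like this in place, approval remains the deciding factor: redaction reduces risk, but it does not make an unapproved tool compliant.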


What are the risks of relying solely on AI-generated information?

AI-generated content may contain factual errors, outdated data, or unintended bias. Relying on it without human oversight can lead to the spread of misinformation, poor decision-making, or reputational harm to your organisation.
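One way to make that oversight routine is to build it into the workflow itself. The sketch below uses hypothetical names and is only one way of modelling the idea: an AI draft cannot be published until a named colleague has signed it off.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AIDraft:
        """An AI-generated draft that must be human-reviewed before use."""
        content: str
        reviewed_by: Optional[str] = None

        def approve(self, reviewer: str) -> None:
            # Record who checked the draft for accuracy, currency, and bias.
            self.reviewed_by = reviewer

        def publish(self) -> str:
            if self.reviewed_by is None:
                raise RuntimeError("Draft has not been human-reviewed")
            return self.content

    draft = AIDraft("Summary of Q3 customer enquiries ...")
    draft.approve("A. Reviewer")  # a colleague verifies facts and sources
    print(draft.publish())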


How can I check if my use of an AI tool is within company policy?

Review your organisation’s IT and data security policies, and consult your manager or IT/security team if you’re unsure. When in doubt, avoid sharing sensitive information and always seek clarification from the appropriate department.

Case Study Example

Case Study: Healthcare Company Data Breach

A UK-based healthcare company adopted a Chat AI tool to help customer service representatives answer patient queries more efficiently. An employee, trying to resolve a complex patient request, entered sensitive patient information into the tool without checking company guidelines. Because the tool stored data on servers outside the UK, the entry amounted to an unauthorised international transfer of patient data, resulting in a data breach and non-compliance with NHS Digital standards.

This lapse triggered an internal investigation, and the employee faced disciplinary action for mishandling confidential data. The organisation also incurred costs for legal advice and had to issue an immediate update to its policy on the responsible use of AI tools. The incident illustrates the serious consequences of using AI tools inappropriately and underlines the importance of following established boundaries and organisational rules.

Key Takeaways

  • Not all tasks are suitable for Chat AI tools—never use them for sharing confidential or sensitive data.
  • Misinformation generated by AI can spread rapidly and damage your organisation’s reputation.
  • Using AI tools against company policy or regulatory rules can result in legal or employment consequences.
  • Always double-check AI-generated insights before sharing them externally or making business decisions based on them.
  • Establish clear boundaries and communicate them to all staff who may use AI tools.

Reflection Question

How do you determine when it is inappropriate to use AI tools in your day-to-day work, and what steps can you take to protect yourself and your organisation?

➡️ Module Navigator

Previous Module: What Chat AI Can and Can’t Do

Next Module: AI and Compliance: What to Know