ITGUYS Ethical AI Usage Policy
Last updated 23/02/2026

How we use AI — responsibly, transparently, and with common sense.
ITGUYS is a London-based managed IT support company and a certified B Corporation. We help purpose‑driven organisations work securely, sustainably, and effectively. When we use AI, we do it in a way that supports those goals — and never at the expense of client trust, data security, or human judgement.
We don’t build our own AI models. We use carefully selected third‑party tools, and we remain accountable for how those tools are used.

1) Compliance with UK law and good practice
We use AI in line with UK GDPR, the Data Protection Act 2018, and relevant UK regulatory expectations. Where needed, we carry out Data Protection Impact Assessments (DPIAs) and document the decisions behind how and why AI is used.
We monitor emerging UK guidance so that when rules evolve — and they will — we can update our practices quickly and responsibly.

2) Security and data privacy by design
We design our AI use around security and privacy from the start. This means:
  • Data minimisation: we share only the minimum information needed for each task.
  • Access controls: we apply least-privilege access, MFA, and secure handling processes at all times.
  • Client confidentiality: we treat client data as confidential and do not allow client information to be used to train public AI models.
  • Sensitive data controls: we avoid sharing unnecessary personal or sensitive information with AI tools and use privacy-respecting configurations where available.
  • No secrets in AI tools — ever: we never input passwords, API keys, internal logs, system architecture details, or anything else that could compromise security.
  • Approved environments only: staff may only use AI tools that have been reviewed and approved by ITGUYS.

3) Human oversight and accountability
AI assists — humans decide.
Any AI‑generated content used for client work (drafts, summaries, recommendations, or analysis) is reviewed by a qualified ITGUYS team member before it’s relied upon or shared.
AI output is treated as a draft, not a source of truth. We keep clear records of the human review applied, especially where the output is client‑facing or materially influences a decision.
We remain fully accountable for accuracy, professional judgement, and security — responsibility never shifts to the tool.

4) Fairness, inclusion, and avoiding harm
We don’t use AI in ways that could cause unfair, biased, or discriminatory outcomes.
When AI is used for documentation, user guides, or client communications, we check tone, accessibility, and unintended consequences before publishing. If something feels “off”, we fix it — or we don’t use it.

5) Ethical sourcing and continuous review
We choose AI tools from providers with strong security, transparent data handling, and clear commitments to responsible practice.
We review our AI tools monthly — or sooner if risks change, behaviour becomes unreliable, or standards drop. If a tool behaves unexpectedly, the issue is escalated to leadership for investigation.
We keep the ITGUYS team trained on safe, responsible use and the limitations of AI-generated content.
Environmental considerations
As a B Corp, sustainability is embedded in how we choose technology. We consider the environmental impact of AI tools and favour providers that optimise energy usage or demonstrate responsible infrastructure practices — reflecting the same principles we apply to sustainable IT more broadly.

6) Transparency with clients
We’re open about where and how AI contributes to client work. If AI plays a meaningful role in a deliverable, or if disclosure helps set expectations around accuracy, data handling, or turnaround times, we will tell you.
Clients are always welcome to ask how AI is being used within their service. We’ll explain clearly, in plain English.

7) Acceptable use for ITGUYS staff
To keep things simple and safe:
  • Staff use only AI tools approved by ITGUYS.
  • If you accidentally enter information into an AI tool that you shouldn’t have, report it immediately so we can act quickly. No blame — quick fixes beat slow cover‑ups.