AI Use Policy

Effective Date: 27 May 2025

Applies to: Users of WorkTransformers.ai

  1. Purpose
    This AI Use Policy outlines how Work Transformers Ltd uses artificial intelligence (AI) technologies on the Platform (WorkTransformers.ai). It aims to promote transparency, trust, and accountability in line with UK GDPR, the EU AI Act, ISO/IEC 42001, and global ethical AI standards. This policy supports informed use of AI tools, mitigates potential risks related to automated processing, and affirms our commitment to responsible innovation.
  2. Use of AI Models
    The Platform uses AI models for natural language processing and analysis. Personal data processed by these AI models is minimised and pseudonymised wherever possible to prevent exposure of directly identifiable personal information. Special categories of personal data are not processed.

Use cases include but are not limited to:

  • Text summarisation and transformation
  • Scenario-based workplace diagnostics
  • AI-generated strategy proposals
  • Semantic clustering of workplace dynamics
  • Sentiment and tone detection in organisational inputs
  • Early flagging of team risk patterns based on behavioural signals
  • Natural language understanding and context extraction for decision-support

Outputs are designed to support human-led decisions. No AI-generated content should be considered legally, clinically, or strategically binding without human oversight.

  3. Human Oversight
    All AI-generated results must be reviewed by users before being acted upon. Users remain responsible for interpreting and validating outputs. The Platform does not autonomously make decisions that produce legal or material effects.

Work Transformers Ltd explicitly disclaims responsibility for outcomes arising from unaudited reliance on AI-generated content. Human-in-the-loop validation is a condition of use. For enterprise clients, human review is integrated into workflow protocols by default. Supervisory checkpoints are enabled in workflows deemed high-impact.

  4. Limitations and Accuracy
    AI systems are probabilistic and may produce inaccurate, outdated, or biased content. Limitations include:
  • Sensitivity to ambiguous, incomplete, or inconsistent inputs
  • Potential reproduction of training biases or societal stereotypes
  • Hallucination of facts or statistics not present in source data
  • Reduced performance on atypical industry-specific cases
  • Failure to reflect evolving regulatory or market dynamics in real-time

Users should always supplement AI outputs with expert analysis, particularly in regulatory, legal, financial, or HR settings. Advisory disclaimers are embedded where high-risk use may occur.

  5. Ethical Commitments
  • AI will not be used for discriminatory profiling, surveillance, or manipulation.
  • The Platform does not support deepfake generation or deceptive automation.
  • We do not use client data to train public or shared models.
  • Data used to improve internal models is fully anonymised and aggregated.
  • Clients may opt out of internal model refinement entirely.
  • All internal evaluations consider fairness, transparency, and non-discrimination criteria.
  • Automated decisions are only permitted where fully reversible and explainable.
  • Fairness is defined and assessed based on demographic parity and outcome sensitivity metrics.

  6. Transparency and Explainability
    We strive for explainable AI. Where feasible, we:
  • Annotate outputs with source attribution
  • Provide summaries of model reasoning or input-output mapping
  • Inform users when a result is AI-generated
  • Label confidence scores or uncertainty ranges on selected outputs
  • Indicate if content was derived via classification, regression, or generative methods

Users are notified when outputs are derived through generative or inferential processes. Black-box models are limited to low-risk contexts and monitored via post-deployment performance evaluation. Explainability is validated through proxy measures such as feature attribution analysis and counterfactual consistency.

  7. Incident Handling
    Users may report problematic AI behaviour via the support interface. We commit to:
  • Acknowledging reports within 48 hours
  • Investigating high-risk or repeated issues
  • Disabling features or models that pose unresolved risks
  • Logging all reported AI incidents for continuous auditability
  • Issuing incident reports for enterprise clients upon request

Major incidents will be disclosed to affected enterprise clients with remediation steps and risk mitigation plans. An internal postmortem and RCA (root cause analysis) is triggered for any level 1 AI malfunction.

  8. Feedback and Continuous Improvement
    We welcome feedback on AI outputs and encourage users to report any inappropriate, offensive, or misleading results. Feedback is incorporated into model evaluations, retraining cycles, and roadmap planning. We review flagged outputs weekly and use structured QA audits to assess emerging risks. Feedback loops are anonymised unless the user explicitly consents to attribution.
  9. Governance and Risk Management
    Work Transformers Ltd maintains an internal AI Risk Register and applies a tiered risk classification to all AI use cases (minimal, limited, high-risk). Enterprise clients may request access to our internal AI Governance Framework, DPIA templates, and AI impact assessment summaries. Governance is overseen by the Data & AI Ethics Committee, which meets quarterly and is responsible for alignment with applicable law and policy. External audits may be supported upon request or as required by regulation.
  10. Updates to This Policy
    This policy may be updated periodically to reflect changes in technology, regulations, or business practices. Material changes will be communicated through the Platform or via email at least 14 days in advance. Version history will be maintained for transparency.
For questions regarding this AI Use Policy, please contact:
info@worktransformers.ai