Responsible AI use
Last reviewed: May 15, 2026
1. What we build
Loom builds production AI for enterprise and high-growth clients. The work takes roughly four shapes: applied AI for specific business problems, orchestration and harness frameworks for agentic workflows, forward-deployed engineering teams embedded alongside client engineering teams, and enterprise transformation programs that connect AI to operating-model change.
Every engagement is governed by a Master Services Agreement and a per-engagement Statement of Work (RFC 023, §5.1 and §5.2). For client data we sign a Data Processing Agreement (§5.4), with the client as controller and Loom as processor.
2. What we do not build
We decline engagements in the following categories:
- Consumer-facing chatbots that misrepresent themselves as human. Disclosure of AI involvement is non-negotiable.
- Surveillance or biometric-identification systems sold to law enforcement or other state actors without independent civil-rights oversight.
- Autonomous weapons systems and kinetic-effect systems. No lethal-effect work, no targeting systems, no battlefield decision support.
- Persuasion or manipulation systems aimed at political behaviour. No micro-targeting for political campaigns, no synthetic-media systems designed to deceive about a public figure.
- Systems whose primary function is to deceive their users. If the user does not benefit from the truthful operation of the system, we will not build it.
This list is non-exhaustive. Close cases get individual judgment: a Loom principal makes the call before an engagement is signed, and the rationale is recorded internally.
3. Training-data posture
Loom does not train foundation models. We fine-tune and adapt models on client-supplied data, and we use commercial frontier models (OpenAI, Anthropic, Google, others) as production inference layers.
Client data used for fine-tuning or evaluation is governed by the engagement DPA and the per-engagement SOW. Client data is never combined across engagements. Client data is never used to improve Loom's reusable IP without explicit, written, narrowly scoped consent.
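One way to read "never combined across engagements" as an engineering constraint rather than a policy sentence: key every dataset access to a single engagement, so a cross-engagement read cannot even be expressed. The following is a minimal sketch under that assumption; EngagementScope and DatasetRegistry are hypothetical names for illustration, not Loom's actual tooling.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class EngagementScope:
    """A capability token: holding it grants access to one engagement only."""
    engagement_id: str


class DatasetRegistry:
    def __init__(self) -> None:
        # (engagement_id, dataset_name) -> records
        self._datasets: dict[tuple[str, str], list[dict]] = {}

    def register(self, scope: EngagementScope, name: str, records: list[dict]) -> None:
        self._datasets[(scope.engagement_id, name)] = records

    def load(self, scope: EngagementScope, name: str) -> list[dict]:
        # No call takes two scopes, so cross-engagement joins are
        # unrepresentable rather than merely forbidden.
        key = (scope.engagement_id, name)
        if key not in self._datasets:
            raise KeyError(f"no dataset {name!r} for engagement {scope.engagement_id!r}")
        return self._datasets[key]
```

A caller holding the scope for one engagement can only ever see that engagement's data; there is no API that mixes two scopes.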
4. Production controls
Every production deployment includes, at minimum:
- An evaluation harness that runs against the deployed system continuously, not only at training time, so silent quality regressions surface (see SLOs for agents, and the sketch below).
- Refusal policies encoded into the orchestration layer, so the system declines when inputs are ambiguous or out of scope rather than guessing (also sketched below).
- Audit trails recording every decision, model call, retry, and refusal as primary evidence — sufficient to reproduce any outcome (see Orchestration is where projects die).
- Replay capability, so an incident can be re-run from any point with full state.
- Human-in-the-loop checkpoints on any decision with material consequences for a person — hiring, credit, healthcare, legal, safety.
These are not optional in our engagements. If a client wants to ship without them, we don't take the work.
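For illustration only, here is a minimal sketch of a continuous evaluation check. The names (Probe, run_system, QUALITY_SLO) are hypothetical, not our actual harness; a real harness would version its probe sets, track per-capability scores, and alert on sustained regressions rather than single failures.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Probe:
    """A fixed input with a scoring function, run against production."""
    input: str
    score: Callable[[str], float]  # 1.0 = fully correct, 0.0 = wrong


QUALITY_SLO = 0.95  # hypothetical target: mean probe score


def continuous_eval(run_system: Callable[[str], str], probes: list[Probe]) -> bool:
    """Run the probe set against the deployed system and check the SLO.

    Called on a schedule (e.g. hourly), so regressions surface in
    production, not only at training or release time.
    """
    scores = [p.score(run_system(p.input)) for p in probes]
    mean = sum(scores) / len(scores)
    # In practice a miss would page an operator, not just return False.
    return mean >= QUALITY_SLO
```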
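And a similarly hedged sketch of the refusal, audit-trail, and human-in-the-loop controls at the orchestration layer. Everything here (AuditTrail, run_step, the status strings) is illustrative, not our production framework.

```python
import json
import time
import uuid
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class AuditTrail:
    """Append-only log of every decision, model call, retry, and refusal."""
    records: list = field(default_factory=list)

    def record(self, kind: str, **detail: Any) -> None:
        self.records.append({
            "id": str(uuid.uuid4()),
            "ts": time.time(),
            "kind": kind,
            "detail": detail,
        })

    def dump(self) -> str:
        # Serialized trail: the primary evidence for reproducing an outcome.
        return json.dumps(self.records, indent=2, default=str)


@dataclass
class StepResult:
    status: str  # "ok", "refused", "needs_human", or "failed"
    output: Any = None


def run_step(
    task: dict,
    model_call: Callable[[dict], Any],
    in_scope: Callable[[dict], bool],
    has_material_consequences: Callable[[dict], bool],
    audit: AuditTrail,
    max_retries: int = 2,
) -> StepResult:
    # Refusal policy: decline ambiguous or out-of-scope inputs
    # instead of guessing.
    if not in_scope(task):
        audit.record("refusal", task=task, reason="out_of_scope")
        return StepResult(status="refused")

    for attempt in range(max_retries + 1):
        try:
            audit.record("model_call", task=task, attempt=attempt)
            output = model_call(task)
            break
        except Exception as exc:
            audit.record("retry", attempt=attempt, error=str(exc))
    else:
        # All attempts failed; the trail shows exactly what was tried.
        return StepResult(status="failed")

    # Human-in-the-loop checkpoint: decisions with material consequences
    # for a person are queued for review, never shipped automatically.
    if has_material_consequences(task):
        audit.record("hitl_checkpoint", task=task, output=output)
        return StepResult(status="needs_human", output=output)

    audit.record("decision", task=task, output=output)
    return StepResult(status="ok", output=output)
```

Because each step's inputs, outputs, and failures land in the append-only trail, replay amounts to feeding the recorded state back through run_step and diffing the result against the original record.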
5. Compliance posture
We design for the regulatory frameworks our enterprise clients operate under, including the EU AI Act, the NIST AI Risk Management Framework, and sector-specific rules (HIPAA, GLBA, FCRA, GDPR, CCPA). We are not currently SOC 2 or ISO 27001 certified; if an engagement requires a specific certification, we discuss it at scoping.
6. Raising a concern
If you have a concern about a specific engagement or about a Loom-built system you have encountered, please contact us at responsible-ai@loom.technology. Engagement-specific concerns are routed to the engagement lead and a Loom principal; we aim to acknowledge within two business days.
7. Internal governance
This statement is the public-facing summary of our internal AI ethics and safety policy (RFC 010). The internal document is more detailed and includes engagement decision criteria, escalation paths, and review cadences.