Thou shalt not use AI

The newest “boogeyman” clause in professional services contracts has arrived … and it is about … surprise … GenAI.

In the world of enterprise software, things move at breakneck speed. Yet, in our professional services work, we’ve recently noticed a fascinating and, frankly, regressive trend. A growing number of customers want to insert a brand-new clause into their contracts: a blanket ban on the use of generative AI (GenAI). They want it hard-coded into the legalese, a full-throttle guarantee that we won’t use tools like ChatGPT to help them implement our products. It’s a bit like a modern-day customer asking us to build their software implementation by hand, without a single automated tool, all because they’re afraid of the latest technology.

We can, of course, understand the hesitation. GenAI is a powerful and still-developing frontier. The concerns about data privacy, security, and the potential for a “hallucinated” output are not unfounded. But a blanket ban isn’t the solution. It’s a refusal to adapt, and in our business, standing still is the same as falling behind.

The True Value is in the Human, Not the Tool

When a customer hires a professional services team, they aren’t paying for us to type code or configure settings line by line. They’re paying for the intangible things: our expertise, our critical thinking, and our experience. GenAI is not a replacement for these core skills. Rather, it’s a powerful assistant, much like a modern IDE with autocomplete or a search engine that instantly pulls up documentation. It’s a tool that accelerates our process, allowing our human experts to focus on what actually matters:

  • Understanding your unique business challenges.
  • Designing a solution that truly solves your problems.
  • Navigating complex technical issues.
  • Thinking strategically and anticipating future needs.

A GenAI-powered workflow enhances our capabilities, allowing us to complete tasks like drafting documentation, creating boilerplate code, and identifying potential errors much faster. This means we can deliver your project more quickly, helping you see value sooner and giving us more time to focus on the high-value work.

A Better Approach: Responsible AI Governance

Instead of a blanket ban, the far more productive conversation is one about responsible AI governance. The goal is not to prohibit these tools but to ensure they are used safely and ethically, building on the policies already in place for data privacy and security. This is not some futuristic concept; it’s a practical and immediate conversation.

A responsible approach means:

  • Strict Security Protocols: We must ensure that no sensitive or proprietary customer data is ever used to train a GenAI model. Note that most customer contracts today already contain data-confidentiality language that likely covers this topic.
  • Human-in-the-Loop Safeguards: Every output from a GenAI tool must be reviewed and validated by a human expert. The AI is the first draft; our team provides the final, polished, and accurate version. And again, we’d do the same with a new or junior member on the team. It’s called quality control.
  • Transparency and Trust: Open communication is key. We should discuss how and where GenAI is being used to deliver value to you, building a partnership based on trust rather than a contract based on fear.
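To make the human-in-the-loop safeguard concrete, here is a minimal Python sketch of the idea: an AI-generated draft cannot be delivered until a named human reviewer has signed off. The `Draft` class, the `deliver` function, and the reviewer address are all hypothetical illustrations, not part of any real product or policy.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """An AI-generated first draft awaiting human review."""
    content: str
    source: str = "genai"                 # where the draft came from
    approved_by: Optional[str] = None     # set once a human expert signs off

def deliver(draft: Draft) -> str:
    """Release a draft to the customer only after human sign-off."""
    if draft.approved_by is None:
        raise PermissionError(
            "GenAI output must be reviewed by a human expert before delivery"
        )
    return draft.content

# A reviewer validates (and possibly edits) the AI draft, then signs off.
draft = Draft(content="Configuration guide v1 (AI-assisted)")
draft.content = draft.content.replace("v1", "v1.1")  # human edits the draft
draft.approved_by = "senior.consultant@example.com"  # hypothetical reviewer
```

The point of the sketch is the gate itself: the tool produces the first pass, but nothing leaves the team without an accountable human name attached to it.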

Ultimately, GenAI is not a replacement for human expertise; it’s a multiplier. By banning it, you are not protecting your business. You are denying it the benefits of efficiency, quality, and innovation that your competitors are already leveraging. The real power is in a partnership that embraces these powerful new tools while establishing the right safeguards to ensure they are used responsibly.
