This week, we weren’t just advising on AI governance; we were living it.
It’s easy to write about values on a corporate website. It’s another thing entirely to let them guide you when it becomes difficult, costly, or inconvenient. This week, for us, it became difficult.
Our team, like many others, relies on a suite of advanced automation tools to enhance our efficiency. One of these, an AI-powered platform that aggregates market and competitor data, has been instrumental in our research process. It’s fast, powerful, and has delivered significant value.
Then, on Tuesday morning, the news broke: the company behind the tool is facing a major lawsuit for alleged privacy law violations, specifically concerning how it scrapes and processes data without consent.
Suddenly, a tool that had been a symbol of our efficiency became the source of a profound dilemma.
The Easy Path vs. The Right Path
The easy path was clear: do nothing. The rationalizations came quickly and easily. “It’s their legal battle, not ours.” “We’ll wait for the verdict.” “Our use case is probably fine.” This path would cause zero disruption to our workflow.
But it felt wrong. It was wrong.
This is where abstract principles are forged into concrete action. We couldn’t, in good faith, advise clients on ethical AI while our own supply chain had a potential weak link. The next morning, our team—Dr. Allison Fisher, Dr. Johanna Farnhammer, and Shafira Noh—convened not just to solve this single problem, but to address the systemic risk it revealed.
The conversation was fascinating. We discussed the role intuition plays in these situations: that feeling in your gut that tells you a convenient path might be a compromised one. We mapped the issue against legal frameworks like GDPR and Malaysia’s PDPA, considering how regulators would view a company’s responsibility for the tools it uses.
A Local Reminder of a Global Problem
The timing felt particularly poignant. Just recently in Malaysia, a well-known F&B company went viral after a customer’s personal data was carelessly leaked between departments. It was a stark, local reminder that as we automate and integrate AI, our responsibility for data protection intensifies; it doesn’t diminish. The more we use AI, the more conscious and proactive our decisions must be.
This is why we believe this conversation is so critical. Your vendor’s ethics are now an extension of your own.
Drafting Our First Layer: The AI Tool Vetting Process
Our discussion is leading to the creation of a formal internal policy for managing third-party AI risk. We’re just at the beginning of this process, starting with internal and public-facing drafts for team circulation and feedback. In the spirit of transparency, we want to share the first layer of our thinking with you.
The core of our draft policy is a simple framework built on three critical questions we will now ask before adopting any new AI tool:
- Data Provenance: Where does your data come from? Can you provide clear documentation on your data acquisition methods and their compliance with privacy laws like PDPA and GDPR?
- Risk Assessment: What data will this tool access? Is it public data, confidential internal data, or client data? The level of scrutiny must match the level of risk.
- Ethical Alignment: Does this vendor’s public stance on ethics and privacy align with our own? Do they have a clear process for handling data responsibly?
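For teams that want to turn these three questions into something operational, here is a minimal, purely illustrative sketch in Python of how such a checklist might be encoded. Every name in it, the VendorAssessment structure, the vet() function, and the example vendor, is a hypothetical stand-in for our thinking, not a tool we use or a recommendation of any specific implementation.

```python
from dataclasses import dataclass, field

# Hypothetical tiers for the "Risk Assessment" question.
RISK_TIERS = ["public", "internal", "client"]


@dataclass
class VendorAssessment:
    """One vendor's answers to the three vetting questions (illustrative only)."""
    name: str
    provenance_documented: bool   # Data Provenance: documented, lawful data acquisition?
    data_accessed: str            # Risk Assessment: "public", "internal", or "client"
    ethics_aligned: bool          # Ethical Alignment: public stance matches our own?
    notes: list[str] = field(default_factory=list)


def vet(vendor: VendorAssessment) -> str:
    """Return a coarse recommendation: 'approve', 'review', or 'reject'."""
    if vendor.data_accessed not in RISK_TIERS:
        raise ValueError(f"Unknown data tier: {vendor.data_accessed}")
    # Undocumented provenance or misaligned ethics is a hard stop.
    if not vendor.provenance_documented or not vendor.ethics_aligned:
        return "reject"
    # Anything touching client data always goes to a deeper, human review.
    if vendor.data_accessed == "client":
        return "review"
    return "approve"


if __name__ == "__main__":
    tool = VendorAssessment(
        name="MarketScraperX",        # hypothetical vendor
        provenance_documented=False,  # cannot document how its data was acquired
        data_accessed="public",
        ethics_aligned=True,
    )
    print(tool.name, "->", vet(tool))  # MarketScraperX -> reject
```

The point isn’t the code; it’s that the three questions become explicit, reviewable criteria rather than ad-hoc judgment calls made under deadline pressure.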
This week’s challenge was a powerful reminder that true governance isn’t a certificate on a wall; it’s a live, active process of asking hard questions, especially when it’s inconvenient. It’s about choosing the right path, even when it’s the harder one.
We wanted to share this process transparently because we know we aren’t the only ones facing these new challenges.
Have you experienced a similar dilemma in your organization? How did you navigate it, and what questions did you ask internally to reach a decision?