
You Need Generative AI Guidance and Data Governance Policy


All the hype around artificial intelligence – including generative AI – really starts with “old school” machine learning. Even the grandest large language models are, at their core, algorithms trained on reams of data.

That means training data is key for any GenAI activity – and for any AI activity. The most accomplished GenAI tools are software agents that return outputs based on machine-learned rules. Hence, data governance rules that say who can access what data, and for which reasons, are the core basis for all AI solutions.
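
To make that idea concrete, here is a minimal sketch of a “who can access what, for which reasons” rule expressed as code. The roles, datasets, and purposes are hypothetical placeholders, not a recommendation of any particular tool:

```python
# A hypothetical data-access rule check: deny unless a rule explicitly
# grants this role access to this dataset for this purpose.
from dataclasses import dataclass


@dataclass(frozen=True)
class AccessRule:
    role: str                  # who
    dataset: str               # can access what
    purposes: frozenset[str]   # for which reasons


POLICY = [
    AccessRule("analyst", "client_surveys", frozenset({"program_evaluation"})),
    AccessRule("ml_engineer", "client_surveys", frozenset({"model_training_with_consent"})),
]


def may_access(role: str, dataset: str, purpose: str) -> bool:
    """Return True only if an explicit rule covers this exact combination."""
    return any(
        rule.role == role and rule.dataset == dataset and purpose in rule.purposes
        for rule in POLICY
    )


# Feeding client surveys into a GenAI training run needs its own explicit rule:
print(may_access("analyst", "client_surveys", "model_training_with_consent"))      # False
print(may_access("ml_engineer", "client_surveys", "model_training_with_consent"))  # True
```

The point is simply that access is denied unless a rule explicitly grants it – which is exactly the posture any GenAI data pipeline should inherit.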

That universal truth is how we started the recent Technology Salon asking: What is Your Generative AI & Data Policy? Our discussion leads guided a vibrant conversation centered on data governance, and how integral it is to have solid firm-level data governance rules in the age of generative AI.

You Have a Data Policy, Right?

Even before GenAI took over every digital development conversation, data governance was a key discussion point in every organization.

We all work with non-public data that is confidential to our companies. We may work with client data, entrusted to us by outside firms who would take immediate practical and legal action if we shared that data with a third party. This privileged information has always been governed by rules, regulations, even laws to keep it safe.

In the age of generative AI, we need to double down on data governance. Do you really trust Google, Microsoft, or Facebook(!) to keep your data separate and safe from GenAI ingestion? You definitely should not! They are wantonly pushing legal boundaries to get more training data. That leads to a few questions:

  • Does your organization have a data governance policy?
  • Do you know what it says? Does your CIO?
  • How do you know if you’re compliant? Or your partners?
  • Or do you just “feel” that you’re doing the right thing?

Generative AI Guidance, Not Policy

In the age of generative AI, the technology is moving too fast for a policy focused narrowly on GenAI. Salon participants agreed that flexible guidance is better – best of all if it is rooted in your company’s values.

Good guidance is based on organizational norms, not specific use cases, and gets at the basic idea of upholding the company’s values. Staff will then “feel” whether an action upholds or violates those values, making the guidance easy to follow.

One aspect of good GenAI guidance is to require human review and intervention. GenAI is best likened to an eager intern: it is excited to give you output and will always try, even when it’s clueless. Hence, the clearer and more detailed the directions you give it, the better your results.

However, you would never submit an intern’s work as your own, so don’t do that with GenAI either. Always, always check GenAI’s output yourself. Then take the next crucial step – share your result with another human for their input, just like you would any other valuable work product.

You need to own your work, regardless of how it’s created.

That brings us to workforce capacity, which is always an issue. By definition, half of the world falls below median intelligence, and half of any large organization’s staff will fall below that organization’s median. This is where we need to be careful with GenAI – or any other work product, for that matter.

There are tools to assess GenAI risk levels. Perplexity and other LLM-based tools include source links in their results. At a previous employer, I designed a system that also displayed link relevance, helping users gauge the risk that the GenAI was relying on erroneous data. Yet even then, we need humans in the loop.
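
As an illustration of that kind of relevance signal – the original system isn’t described in detail, so this sketch assumes a simple TF-IDF similarity between the generated answer and each cited source, with invented example data – scoring might look like this:

```python
# A hypothetical link-relevance score: how closely does each cited source's
# text match the generated answer? Low scores flag citations a human should
# check before trusting the output.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def score_source_relevance(answer: str, sources: dict[str, str]) -> dict[str, float]:
    """Return a 0-to-1 similarity score for each source URL against the answer."""
    urls = list(sources)
    texts = [answer] + [sources[url] for url in urls]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
    scores = cosine_similarity(tfidf[0:1], tfidf[1:])[0]
    return {url: float(score) for url, score in zip(urls, scores)}


if __name__ == "__main__":
    answer = "The 2023 grant funded solar microgrids in rural health clinics."
    sources = {  # invented example data
        "https://example.org/grant-report": "In 2023 the grant supported solar microgrid installations at rural health clinics.",
        "https://example.org/travel-policy": "Updated travel reimbursement policy for headquarters staff.",
    }
    for url, score in score_source_relevance(answer, sources).items():
        print(f"{score:.2f}  {url}")
```

A low score doesn’t prove a citation is wrong; it flags where a human reviewer should look first.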

Government GenAI Policy

The OECD has an AI Policy Observatory that highlights government AI policies. Many governments have efforts, like Rwanda’s National AI Policy, that focus on economic development opportunities. Others, like the EU AI Act, focus on potential harms.

The US Executive Order on AI seeks a government-wide approach to GenAI, but it gives each agency wide latitude to interpret and enact AI policies. It also addresses the AI of today, without looking ahead to agentic AI – when AI starts doing things for us, from buying a plane ticket, to driving a car, to… robbing a bank?

We Must Manage GenAI

How do we manage these outward-facing agents? Right now, we treat them like any other corporate representative and hold the company accountable for the bot’s actions. If a chatbot promises you a cheap plane ticket, you can sue and win, since the bot is acting as a literal agent of the company – as Air Canada learned when a tribunal held it to the bereavement fare its chatbot invented.

This brings us back to a strong data governance policy. What data is the chatbot (or human agent) relying on? How was that data obtained? How “good” is it? And how “good” is the underlying algorithm that the chatbot (or human agent) uses to make decisions?

Share Your GenAI Guidance

We should be sharing our data governance policies and AI guidance to learn from each other – before our humanitarian chatbots make errors that cost us more than discounted airfare. So… what is your data governance policy or GenAI guidance? Please share it with us now.

Here are a few ideas to get us started.
