
Happy birthday, ChatGPT! Celebrating 1 year with the top 10 milestones in AI governance

Vivian Susko

A year in review for ChatGPT (Chat Generative Pre-trained Transformer) — and a look ahead at the global response to AI and which governance “guardrails” are required.

On November 30, 2023, ChatGPT marks the one-year anniversary of its public debut. In recognition of 12 months that saw AI tech innovation and adoption come of age, we’re walking through 10 major market and regulatory milestones surrounding its release (in no particular order).

A global call for AI governance — years in the making

Long before ChatGPT’s debut, steps were already being taken to pass legislation on AI governance. In April 2021, the European Commission proposed the first EU regulatory framework for AI (#1). It has since revisited that proposal and made major moves to pass what could become the world’s first comprehensive legislation on AI governance, aiming to set a global standard for the social, ethical, and economic governance of AI-enabled technology.

But we’ll get to that. First, let’s take a quick look at some of the global regulatory activity pre-dating ChatGPT’s release.

#2 — Jan 2019 — Singapore’s Model AI Framework & AI Governance Testing Toolkit

The first edition of Singapore’s Model AI Governance Framework was released on January 23, 2019, at the World Economic Forum Annual Meeting in Davos, Switzerland.

Singapore has since remained a leading participant in AI governance activity: on May 25, 2022, the country launched the world’s first AI Governance Testing Framework and Toolkit, “AI Verify” (#3), intended for companies that wish to demonstrate responsible AI use in an objective and verifiable manner.

#4 — May 2019 — OECD Principles and Classification of AI Systems

The OECD AI Principles were adopted in May 2019. Building on them, the OECD later published its Framework for the Classification of AI Systems, which helps policymakers characterize AI systems against those pre-established principles. The framework defines a clear set of goals: establishing a common understanding of AI systems, informing data inventories, supporting sector-specific frameworks, and assisting with risk management and risk assessments.

#5 — June 2022 — Canada’s Draft AI Act

In June 2022, the Canadian government tabled the Artificial Intelligence and Data Act (AIDA) as part of the Digital Charter Implementation Act, 2022 (Bill C-27). Canada remains a strong collaborator with international partners like the European Union and the United States in the development and adoption of new AI governance standards.

2022: The year ChatGPT enters the evolving AI landscape

Nov 2022 — ChatGPT is released

OpenAI launched ChatGPT, a chatbot built on a large language model, on November 30, 2022, and it quickly took off. Within five days, the platform attracted over one million users (by comparison, it took Twitter two years to reach that mark).

Jan 2023 — ChatGPT hits 100 million users

ChatGPT became the fastest-growing consumer application in history when it hit an estimated 100 million users within two months of its release.

Large companies like Microsoft leaped at the opportunity to incorporate AI into their applications (although not everyone struck multiyear, multibillion-dollar deals with OpenAI like Microsoft did). More recently, Google invested $300 million for a roughly 10% stake in San Francisco-based generative AI startup Anthropic, a company founded in 2021 by former OpenAI researchers and maker of the Claude chatbot, according to the Financial Times.

#6 — Jan 2023 — NIST releases AI Risk Management Framework

The National Institute of Standards and Technology (NIST) released the NIST AI Risk Management Framework (AI RMF 1.0) on Thursday, January 26, 2023. Designed for voluntary use, the framework helps improve transparency and trust surrounding the design, development, use, and evaluation of AI products, services, and systems.

#7 — Feb 2023 — ISO/IEC 23894

ISO/IEC 23894, published by the International Organization for Standardization (ISO) on February 6, 2023, delivers guidance on artificial intelligence (AI) risk management for any organization developing, deploying, or using AI systems.


#8 — March 22, 2023 — A letter of AI concern

More than 33,000 technologists, researchers, and other signatories joined an open letter calling for a pause on training powerful AI systems, warning of societal-scale risks from generative AI. Though the letter had little tangible impact, it is believed to have helped inspire some of the recent regulatory activity we’re seeing globally (like the EU AI Act).

#9 — Oct 2023 — Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence

On October 30, 2023, President Joe Biden signed a first-of-its-kind AI executive order establishing new safety standards, security precautions, and future guidelines for the use of artificial intelligence.

Deemed “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust” in an official White House address, the executive order builds on voluntary AI safety commitments already made by 15 major American tech companies. “It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks,” said White House Deputy Chief of Staff Bruce Reed.

#10 — Nov 2023 — AI Summit in UK

Britain hosted the AI Safety Summit in November, bringing together international governments, leading AI companies, civil society groups, and research experts. British Prime Minister Rishi Sunak and his government presented “The Bletchley Declaration.” Signed by representatives of the 28 countries attending the Summit (including the U.S.), the declaration acknowledges the dangers and responsibilities of managing AI systems. The next AI Safety Summit is scheduled to take place in South Korea in six months, with a follow-up in France six months after that.

(Bonus) Nov 2023 — ChatGPT turns 1

This brings us to today — November 2023. ChatGPT is now a year old, and regulatory activity surrounding AI governance is still on the rise.

People keep finding more innovative ways to push the boundaries of AI, and with that comes the potential of opening their firms up to new risks. We predict more regulatory commentary in 2024, giving global firms the usual headache of meeting multiple regulatory agendas. That said, we also see firms recognizing the need to get ahead of AI’s risks and put appropriate guardrails in place: proactively getting a handle on the use of AI within their businesses, creating inventories of their AI applications, and starting to build out their AI governance frameworks.


A closer look at successful AI governance

What does it mean to have an AI governance policy in place that’s effective, measurable, and defensible?

Look for technology that offers:

  • A single, centralized inventory of AI and ML technology within your firm
  • Customizable, consistent risk rating of AI against your firm’s risk appetite
  • Full visibility into the validation and testing of AI
  • The ability to define the appropriate level of governance
  • Full version and change control, with transparency around peer review (particularly when AI is developed outside IT’s control)
  • Technical scanning capabilities to ensure the completeness of the AI inventory
  • Tools to document and manage any resulting risks or regulatory violations
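To make the inventory and risk-rating ideas above concrete, here is a minimal, hypothetical sketch in Python. The class names, fields, and risk levels are illustrative assumptions, not any vendor’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical risk levels, ordered from lowest to highest.
RISK_LEVELS = ["low", "medium", "high", "critical"]

@dataclass
class AIAsset:
    """One entry in a firm's centralized AI/ML inventory (illustrative only)."""
    name: str
    owner: str
    risk_level: str          # one of RISK_LEVELS
    last_validated: date     # supports visibility into validation and testing
    versions: list = field(default_factory=list)  # change-control trail

    def exceeds_appetite(self, appetite: str) -> bool:
        """True if this asset's risk rating is above the firm's risk appetite."""
        return RISK_LEVELS.index(self.risk_level) > RISK_LEVELS.index(appetite)

# Example: flag every inventoried model that exceeds a "medium" risk appetite.
inventory = [
    AIAsset("support-chatbot", "CX team", "high", date(2023, 9, 1)),
    AIAsset("spend-forecaster", "Finance", "low", date(2023, 10, 15)),
]
flagged = [a.name for a in inventory if a.exceeds_appetite("medium")]
```

Even a simple structure like this illustrates the point: once every AI application lives in one inventory with a consistent rating scale, checking usage against the firm’s risk appetite becomes a routine, repeatable query rather than a scramble.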

Want to learn more about automated GRC solutions for AI Governance? Get in touch with our team today, or explore our resources below.

Explore Mitratech’s Comprehensive GRC platform portfolio

Best-in-class, scalable solutions to help elevate your risk management, responsiveness, resilience, and reputation.