The Sam Altman saga reveals the need for transparency in AI
It’s been a rollercoaster week for Sam Altman, the once, briefly former, and now once-again CEO of artificial intelligence giant OpenAI. Last weekend, the company’s board of directors shocked the tech world by firing Altman with no clear explanation. The move left Microsoft, OpenAI’s largest investor, reeling. After failing to get Altman reinstated, Microsoft CEO Satya Nadella announced that Altman and his co-founder Greg Brockman would jump ship to lead a new AI research arm at Microsoft.
What followed was a near company-wide revolt, as most of OpenAI’s roughly 800 employees made clear that they wanted Altman back and were prepared to follow him to Microsoft if he didn’t return. And so, by midweek, Altman had been reinstated at OpenAI under a new board of directors that includes former Harvard president Lawrence Summers.
The whole affair has been dizzying, in both its speed and its apparent subterfuge. What were the real reasons for firing Altman, an enormously capable leader who, among other things, was steering a funding round that valued OpenAI at $86 billion? That’s a question, perhaps, for ChatGPT.
OpenAI began life as a non-profit organization dedicated to promoting responsible AI research, but has more recently transformed into a typical high-growth technology company. Some board members, including the company’s chief scientist and an AI ethicist, worried that Altman was drifting from the company’s founding altruistic principles. They feared that his focus on the bottom line, and on new AI products edging toward human-level intelligence, could put humanity at risk.
The Altman-OpenAI saga has left many industry observers with a case of Silicon Valley-style whiplash. It has also left considerable uncertainty hanging over the new-look OpenAI, both about its current stability and about its approach to the future growth of AI as a whole.
Will this week’s secret machinations further entrench incumbent tech giants like Microsoft? Or will fast-growing startups like OpenAI remain the stewards of AI’s future? Will governments stifle the growth of AI with burdensome new regulations? Or will the so-called AI “doomers” turn the public against the technology before it ever gets fully off the ground?
The truth is that none of these outcomes addresses AI’s biggest problem: the opacity around how new AI products are responsibly trained, built, and shipped. Solving that problem starts with doubling down on openness and transparency. Indeed, Microsoft’s Nadella called the appointment of a new OpenAI board a key first step toward “effective and well-informed governance.”
For AI to safely reach its potential at scale, we need improvements in transparency at every layer of the stack. We need to decentralize the existing AI stack so that it is governed by the many, not the few. Decentralized decision-making removes single points of failure, whether a disgruntled board of directors, a charismatic CEO, or an authoritarian regime.
As Walter Isaacson wrote: “Innovation occurs when ripe seeds fall on fertile soil.” The soil of AI technology is fertile; to cultivate it, we must plant new and more inclusive ideas.
Let’s start at the bottom of that stack, with the hardware. Today, three companies (Amazon Web Services, Microsoft, and Google) control three-quarters of the cloud-computing market and store much of the data that powers AI. One company, NVIDIA, makes most of the chips. Decentralization would allow smaller, user-owned networks to offset this hegemony while adding much-needed capacity to the industry. Altman was in the Middle East raising money for a new hardware venture to rival NVIDIA when he was fired. To truly dislodge the big players, he should adopt a decentralized model.
Next up are the so-called “foundation models,” the AI “brains” that generate language, create art, and write code (and silly jokes). Companies guard these models closely, with little oversight or transparency; OpenAI’s models, for example, are closed to public scrutiny. User-owned networks with input from multiple stakeholders would be better than Microsoft or OpenAI having total control of foundation models, which is where we’re headed.
Equally important is the data itself. Training a foundation model requires enormous amounts of it. Companies like Microsoft and Amazon have become rich and powerful by amassing mountains of user data; that’s one reason OpenAI partnered with Microsoft in the first place. Yet users have little visibility into how these AI companies exploit their personal data to train models. Decentralized data marketplaces, like Ocean Protocol, allow people and organizations to share their data with AI developers securely and on their own terms, making the tech giants’ data silos less important.
Finally, at the top of the stack are the applications. Imagine a chatbot for K-12 students that acts as their personal tutor, fitness instructor, and guidance counselor. We want transparency in AI products that target our children, and everyone else. We also want a say in what these apps collect and store about us, how they use and monetize that information, and when they delete it. OpenAI currently offers little of any of this.
AI could profoundly alter the destiny of humanity. But so far, only a select few (Altman and Nadella among them) are shaping its future behind closed doors. They claim to represent the interests of all humanity, but no one can really know.
We also don’t know exactly why OpenAI initially sent Altman packing last week, though his critics on the board cited a lack of “consistent candor,” also known as transparency. Back where it all began, Altman will likely emerge stronger than ever. He must now use that strength to champion the “openness” that OpenAI has always claimed to value.
Alex Tapscott is the author of “Web3: Charting the Internet’s Next Economic and Cultural Frontier” (Harper Collins, now available) and a portfolio manager at Ninepoint Partners.