In Silicon Valley, crypto and the metaverse are out. Generative A.I. is in.
That much became clear Monday night at the San Francisco Exploratorium, where Stability AI, the start-up behind the popular Stable Diffusion image-generating algorithm, gave a party that felt a lot like a return to prepandemic exuberance.
The event — which lured tech luminaries including the Google co-founder Sergey Brin, the AngelList founder Naval Ravikant and the venture capitalist Ron Conway out of their Zoom rooms — was billed as a launch party for Stability AI and a celebration of the company’s recent $101 million fund-raising round, which reportedly valued the company at $1 billion.
But it doubled as a coming-out bash for the entire field of generative A.I. — the wonky umbrella term for A.I. that doesn’t just analyze existing data but creates new text, images, videos, code snippets and more.
It’s been a banner year, in particular, for generative A.I. apps that turn text prompts into images — which, unlike NFTs or virtual reality metaverses, actually have the numbers to justify the hype they’ve received. DALL-E 2, the image generator that OpenAI released this spring, has more than 1.5 million users creating more than two million images every day, according to the company. Midjourney, another popular A.I. image generator released this year, has more than three million users in its official Discord server. (Google and Meta have built their own image generators but have not released them to the public.)
That kind of growth has set off a feeding frenzy among investors hoping to get in early on the next big thing. Jasper, a year-old A.I. copywriting app for marketers, recently raised $125 million at a $1.5 billion valuation. Start-ups have raised millions more to apply generative A.I. to areas like gaming, programming and advertising. Sequoia Capital, the venture capital firm, recently said in a blog post that it thought generative A.I. could create “trillions of dollars of economic value.”
But no generative A.I. project has created as much buzz — or as much controversy — as Stable Diffusion.
Partly, that’s because, unlike the many generative A.I. projects that are carefully guarded by their makers, Stable Diffusion is open-source and free to use, meaning that anyone can view the code or download it and run a modified version on a personal computer. More than 200,000 people have downloaded the code since it was released in August, according to the company, and millions of images have been created using tools built on top of Stable Diffusion’s algorithm.
That hands-off approach extends to the images themselves. In contrast to other A.I. image generators, which have strict rules in place to prevent users from creating violent, pornographic or copyright-infringing images, Stable Diffusion comes with only a basic safety filter, which can be easily disabled by any user who creates a modified version of the app.
That freedom has made Stable Diffusion a hit with underground artists and meme makers. But it has also led to widespread concern that the company’s lax rules could lead to a flood of violent imagery, nonconsensual nudity, and A.I.-generated propaganda and misinformation.
Already, Stable Diffusion and its open-source offshoots have been used to create plenty of offensive images (including, judging by a quick scan of Twitter, a truly astonishing amount of anime pornography). In recent days, several Reddit forums have been shut down after being inundated with nonconsensual nude images, largely made with Stable Diffusion. The company tried to rein in the chaos, telling users not to “generate anything you’d be ashamed to show your mother,” but has stopped short of setting up stricter filters.
Representative Anna Eshoo, Democrat of California, recently sent a letter to federal regulators warning that people had created graphic images of “violently beaten Asian women” using Stable Diffusion. Ms. Eshoo urged regulators to crack down on “unsafe” open-source A.I. models.
Emad Mostaque, the founder and chief executive of Stability AI, has pushed back on the idea of content restrictions. He argues that radical freedom is necessary to achieve his vision of a democratized A.I. that is untethered from corporate influence.
He reiterated that position in an interview with me this week, contrasting it with what he described as the heavy-handed, paternalistic approach to A.I. taken by tech giants.
“We trust people, and we trust the community,” he said, “as opposed to having a centralized, unelected entity controlling the most powerful technology in the world.”
Mr. Mostaque, 39, is an odd frontman for the generative A.I. industry.
He has no Ph.D. in artificial intelligence, nor has he worked at any of the big tech companies from which A.I. projects typically emerge, like Google or OpenAI. He is a British former hedge fund manager who spent much of the past decade trading oil and advising companies and governments on Middle East strategy and the threat of Islamic extremism. More recently, he organized an alliance of think tanks and technology groups that tried to use big data to help governments make better decisions about Covid-19.
Mr. Mostaque, who initially funded Stability AI himself, has quickly become a polarizing figure within the A.I. community. Researchers and executives at larger and more conventional A.I. organizations characterize his open-source approach as either naïve or reckless. Some worry that releasing open-source generative A.I. models without guardrails could provoke a backlash among regulators and the general public that could damage the entire industry.
But, on Monday night, Mr. Mostaque got a hero’s welcome from a crowd of several hundred A.I. researchers, social media executives and tech Twitter personalities.
He took plenty of veiled shots at tech giants like Google and OpenAI, which has received funding from Microsoft. He denounced targeted advertising, the core of Google’s and Facebook’s business models, as “manipulative technology,” and he said that, unlike those companies, Stability AI would not build a “panopticon” that spied on its users. (That one drew a groan from Mr. Brin.)
He also drew cheers by announcing that the computer the company uses to train its A.I. models, which has more than 5,000 high-powered graphics cards and is already one of the largest supercomputers in the world, would grow to five or 10 times its current size within the next year. That firepower would allow the company to expand beyond A.I.-generated images into video, audio and other formats, as well as make it easy for users around the world to operate their own localized versions of its algorithms.
Unlike some A.I. critics, who worry that the technology could cost artists and other creative workers their jobs, Mr. Mostaque believes that putting generative A.I. into the hands of billions of people will lead to an explosion of new opportunities.
“So much of the world is creatively constipated, and we’re going to make it so that they can poop rainbows,” he said.
If this all sounds eerily familiar, it’s because Mr. Mostaque’s pitch echoes the utopian dreams of an earlier generation of tech founders, like Mark Zuckerberg of Facebook and Jack Dorsey of Twitter. Those men also raced to put powerful new technology into the hands of billions of people, barely pausing to consider what harm might result.
When I asked Mr. Mostaque if he worried about unleashing generative A.I. on the world before it was safe, he said he didn’t. A.I. is progressing so quickly, he said, that the safest thing to do is to make it publicly available, so that communities — not big tech companies — can decide how it should be governed.
Ultimately, he said, transparency, not top-down control, is what will keep generative A.I. from becoming a dangerous force.
“You can interrogate the data sets. You can interrogate the model. You can interrogate the code of Stable Diffusion and the other things we’re doing,” he said. “And we’re seeing it being improved all the time.”
His vision of an open-source A.I. utopia might seem fantastical, but on Monday night, he found plenty of people who wanted to make it real.
“You can’t put the genie back in the bottle,” said Peter Wang, an Austin-based tech executive who was in town for the party. “But you can at least have everyone look at the genie.”