Musk and AI: Less culture wars, more Star Wars
With help from Derek Robertson
This weekend, Elon Musk unveiled the beta version of Grok, his new AI chatbot, with an announcement that drew deeply on science fiction: the bot’s name comes from Robert Heinlein’s novel “Stranger in a Strange Land,” where “grok” is a Martian word for deep, intuitive understanding.
Musk’s early pitches for the project emphasized how politically different it would be from existing chatbots like ChatGPT. He first announced it in an April interview with conservative pundit Tucker Carlson, in which he described his interest in AI as motivated by fears that existing models, like the ones created by OpenAI, were baking in lefty bias. “I’m worried about the fact that it’s being trained to be politically correct,” he said.
But his latest framing offers a vision of the future that is more, well, future-y. Per this weekend’s announcement, Grok has been released to a small number of users for beta testing, with plans to roll it out to premium subscribers of X, formerly known as Twitter.
In unveiling the bot on Saturday, Musk’s firm, xAI, cited a very different kind of inspiration: “Grok,” the announcement began, “is an AI modeled after the Hitchhiker’s Guide to the Galaxy.”
Tech industry watchers say there is a good reason for this evolution. While stoking culture war controversy can help generate buzz, many are skeptical that outrage alone can attract a large number of users.
“Sarcastic replies will entertain Musk superfans, but they’ll ultimately be measured by their accuracy,” said Nu Wexler, a former Senate staffer who has worked at Facebook and Twitter. “Like social media platforms and search engines, there’s not a big market for anti-woke AI, or any chatbots with a distinct political bent.”
Musk’s brand has long been wrapped up in the futuristic technologies produced by his companies, which he has tied to long-term goals for the future, like colonizing the solar system.
In recent years, though, his public persona has become more political. He’s become an outspoken critic of progressive mores, describing his acquisition of Twitter — since rebranded as “X” — last year as part of a quest to defeat “the woke mind virus.” Musk has tied his anti-“woke” quest to his hands-off approach to moderation, an approach that has also allowed him to cut costs and led to a European Commission investigation of X’s compliance with the continent’s digital content rules.
With Grok, Musk appears to have taken a new tack. He has pledged Grok will be “based” (the antonym of “woke” in online slang), but he’s leading his pitch with the vaguer offer that Grok will be “spicy.”
The bot is not yet widely available for testing, but so far “spicy” seems to be more PG-13 than politically explosive.
Asked for help making cocaine, in one exchange highlighted by Musk, Grok offered vague instructions like “obtain a chemistry degree” before adding “Just kidding” and a disclaimer disavowing illegal activity. Another Grok answer that Musk tweeted compared a computer programming challenge to “trying to keep up with a never-ending orgy.”
“Oh this is gonna be fun 🤣🤣” Musk remarked.
What about the technicals? One of the big selling points touted this weekend was the capability to use data from X to provide up-to-date responses. OpenAI’s ChatGPT does not incorporate information about events that have occurred in recent months. GPT-4 cited an April 2023 cutoff when explaining that it could not respond to DFD’s request to discuss the recent news about Grok.
But that would not exactly revolutionize the current state of AI. Microsoft’s Bing Copilot, for one, already offers responses that incorporate information about recent events. Asked by DFD Monday morning about the release, Copilot responded, “Grok is being trained by having ‘real-time access’ to information from the platform,” meaning X, before going on to cite more details of the rival chatbot’s release.
So for the moment, Musk’s promise for the future of AI is leaning heavily into zaniness and cinematic allusion. He’s also invoked “Terminator” and “2001: A Space Odyssey” in discussing his AI ambitions. “What if they just one day get a software update and they’re not so friendly anymore?” Musk mused in a conversation with British Prime Minister Rishi Sunak on Thursday. “Then we’ve got a [“Terminator” director] James Cameron movie on our hands.”
Given Musk’s sweeping ambitions for his AI, it’s probably not a bad thing that he’s mining sci-fi for scenarios to avoid: For one thing, he tweeted this weekend that he’d like to one day integrate Grok into his Tesla vehicles.
As AI policy fights take over Washington, K Street is cashing in.
POLITICO’s Hailey Fuchs and Brendan Bordelon reported over the weekend on how the growing field of AI policy has all the makings of a big payday for the lobbying industry — think the crypto lobbying blitz, but bigger.
Hailey and Brendan write that AI lobbyists say nearly every industry has realized it will have to reckon with AI, with groups from Nike to the Mayo Clinic looking for an information advantage. But there’s just one problem, at least for now: institutional Washington isn’t exactly crawling with AI experts.
“Every lobbying firm in town is trying to make themselves out to be an expert in everything to try and lure in clients, so AI is just one of them,” one lobbyist said. “I’d be hard-pressed to name you an AI expert downtown. It’s hard enough to pick the AI experts in policymaking positions.” — Derek Robertson
Now that last week’s AI Safety Summit is over, what’s going to actually happen?
POLITICO’s Morning Tech U.K. newsletter recapped some of the actionables this morning, from international research collaborations to the important topics that didn’t get covered during the summit. A few of their takeaways:
Setting up a research network. The Bletchley Declaration called for a research group meant to complement all involved countries’ AI safety programs. That’s a difficult enough task in its own right, made harder by the group’s notable inclusion of China, with which there are already plenty of barriers to that level of cooperation.
Pulling together a report. …And that research partnership also calls for a “state of the science” report on frontier model capabilities and risks. Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace and one of the key advocates for this push, called for such reports every one to six months.
Oh yeah, and killer robots. The military use of AI went strangely undiscussed at a summit ostensibly all about “safety.” Also last week, the United States made a declaration on responsible military use of AI that 31 countries have already endorsed.
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Mohar Chatterjee ([email protected]); Steve Heuser ([email protected]); Nate Robson ([email protected]) and Daniella Cheslow ([email protected]).