“Robots committing art heist.” - generated by Midjourney just for you.
Hello you,
Full version as usual is here on Puck. Keep reading!
My whirlwind tour of the United States continues. I started this week at home in LA, rolled down to San Diego to deliver a keynote address at the PBS Annual Meeting (yo, PBS is lowkey crushing it with upcoming programming!), swung through Vegas to speak about transformation at a conference there, and at this very moment I’m in the Salt Lake City, Utah, airport. I’m on my way to Miami to join the Summit at Sea boat, where I’ll be participating in a number of conversations about democracy, citizenship, and probably A.I. Say hi if you’ll be on board!
I’ve also been feeling the feels for a few reasons. We just had Mother’s Day, and my own mother passed away from colon cancer at the young age of 65 back in 2005. So I’m thinking of her and anyone else who’s lost their mom. I’m also finding myself thinking a lot about Jordan Neely, the New York City resident killed on the subway by a fellow passenger, Daniel Penny, who put him in a chokehold after Neely allegedly made threats, scared passengers, and behaved erratically on the train. Neely’s own mother was murdered 16 years ago. I wasn’t on that train, so it’s hard to speak to the specifics of that incident, but I know that last year New York City set an all-time record for deaths of unhoused people in public spaces, as did Los Angeles.
As we’ve learned from too many instances of police violence, there are other ways to handle people in distress. If we see someone at our door we don’t know, we don’t have to shoot them. We can ask if they’re lost. If we see someone acting out on the New York City subway (as they’ve done for decades), we don’t have to subdue them with deadly force. We can ask if they need help. I hope we remember the other choices available to us in moments of fear and discomfort. We all have a role to play in ending homelessness and treating our neighbors as human beings. If you want to do more about it than read my words, check out the Built for Zero campaign from Community Solutions. They’ve got a great track record and specific opportunities for action by business leaders, citizens, elected officials, investors, journalists and more.
In today’s dispatch, my thoughts and reflections on perhaps the most important and controversial dimension of the writers’ strike: the battle over the future of A.I. in Hollywood, a topic that’s badly in need of more nuance. This is now my fourth essay on A.I. in Puck since December (first, second, third). If you’ve been resisting the urge to subscribe, just succumb to it already! It’s worth it for this series on A.I. alone, and then you also get insights and reporting from my colleagues covering D.C. politics, Wall Street shenanigans, Silicon Valley comeuppance, and Hollywood strategies. Also, we added fashion!
I’ll share some highlights and bullet points from the extended thoughts on Twitter and the A.I. battle over livelihoods for creative artists, but the full version is the best version. So unlock that free sample or subscribe, both right here.
Elon’s “Velvet Hammer”
Last week, Elon Musk surprised allies and critics alike by announcing that he’s hired a C.E.O. to run Twitter. Sort of. More accurately, he hired a C.E.O. to run X Corp., the new parent company he formed after taking control of Twitter. Linda Yaccarino, the former Turner Entertainment exec turned NBCU chair of ad sales, will be stepping in to undo his self-inflicted wounds and, presumably, to grow the business Elon sloppily bought and has chaotically run for the past several months.
I’ve been transparent about my frustration with the new Twitter, Elon’s “free speech” nonsense, his callous firings and the needlessly confusing Twitter Blue rollout. Nevertheless, I connected with my friend Steven Wolfe Pereira, a veteran media and marketing executive who is the co-founder of Encantos PBC, an award-winning children’s entertainment company, and also chief business officer of the global entertainment company, 3Pas Studios. Steven is also friends with Yaccarino, and had real insight into how she might run the company. Below, in lightly edited form, our conversation about her background, why she would leave NBCU to work for Elon, and what she might do to stabilize or even grow the platform.
——
Baratunde Thurston: When I first saw the news about Linda, I had a bit of an eyeroll. Here we go again with a woman being asked to serve as “the adult in the room” for someone who is a grown ass man. What do you think she brings to Twitter that is most useful or helpful given her capabilities and style?
Steven Wolfe Pereira: Linda is pretty unique in the industry, and I think this makes her a great choice for this role (which objectively will be hard for a myriad of reasons that you’ve cited, especially Elon). She wasn’t just driving NBCU ad sales, growing it to over $13 billion over the past decade. Yes, she’s beloved and has wonderful relationships with brand marketers and media agencies (who control the ad dollars on behalf of their brand clients). She has also been one of the key executives driving industry change across many fronts.
… (There’s lots more discussion in the full piece about Linda’s history and experience, the possibility of that elusive Twitter “super app,” and how she’s a no-nonsense person unlikely to put up with Elon’s b.s. I’ll skip to a closing thought from Steven.)
The biggest question, of course, is whether Elon will let Linda be the C.E.O. and build out the team, products, and services that she will need in order to be successful. If he truly empowers her, she will be wildly successful. Many marketers have short-term memories, and they will want to be a part of the “cool new thing.” Remember people “banning” Facebook or YouTube? How long did that last? For better or worse, brands need to grow, and marketers are measured every day on how they are impacting the business. If a platform becomes “too big to ignore” because it has audience and drives revenue, it will be difficult for marketers not to spend with it. This will be the ultimate test for Linda: whether she can turn Twitter from a “nice to have” ad buy/platform partner into a “must have.” If anyone can do it, it’s Linda.
And now for some highlights of the A.I. piece
Hollywood’s A.I. Art Heist Problem
With the Hollywood writers’ strike showing no signs of immediate resolution, I’ve found myself increasingly concerned about the rights and roles of artists in this emerging world of generative artificial intelligence tools. Namely, how can we build and deploy these tools with much more robust systems of consent, control, and compensation for human creators? Despite calls for a pause, signed by a growing list of more than 30,000 of the world’s leading business figures and academics, the industry is not slowing down.
In fact, the opposite seems to be true. Consider Anthropic, one of the leading large language model-based companies, which recently boasted that the “context window” in its model can handle twice as much as its well-known rival, OpenAI. The result? “Claude” (why are we giving these things human names?) can ingest and process a novel in seconds and can maintain the thread of a chat conversation for much longer without “hallucinating.” This will make it much easier to interrogate large sets of documents, or analyze and summarize data sets and long texts. It also means these systems can increase the size of their outputs, so they can write novel-length texts, too. A machine that can devour or even generate a full novel in mere minutes. Is that impressive, terrifying, or utterly silly? The answer is yes.
Meanwhile, Google increasingly wants in on the game. At its I/O developer conference last week, the company announced long-expected deeper integration of A.I. into its Google Workspace productivity suite via Duet AI (similar to Microsoft’s Copilot AI for Office apps). A.I.-generated music and search are on the horizon, and its chatbot, Bard, is now fully public. I gave Bard a little spin, testing it on its knowledge of me. It got the broad strokes right, but completely invented “facts” about me that were utterly untrue—or maybe the bot just believes deeply in the act of manifesting, and hopes that by declaring I’m the co-founder of organizations I didn’t found, a writer for publications I don’t write for, and host of TV shows that don’t exist, it might inspire me to do those things!
In the past few months, we’ve witnessed radically new ways to make music, words, and images that require exponentially less human effort than they did only one year ago. We’ve heard a lot from the machines themselves, as well as from the people programming them and experimenting with their capabilities. But now the chorus of human artists who will be impacted by these changing norms is growing louder, and they want to draw a clear line in the shifting sands as these technologies settle into our reality, upending not only creative practices but livelihoods in the process.
A.I. usage is one of the key sticking points in negotiations between the striking Writers Guild of America (disclosure: I’m a member of WGA East) and the Alliance of Motion Picture and Television Producers. Writer, show creator, and WGA Negotiating Committee member Adam Conover shared a summary of the union’s proposals and the AMPTP’s responses. The union demanded limits on how artificial intelligence could be used, saying, “A.I. can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train A.I.” According to the union, the AMPTP’s counter was to offer “annual meetings to discuss advancements in technology.” (“MBA” is short for “Minimum Basic Agreement” and refers to the collective bargaining agreement that covers most work done by WGA members.) We haven’t heard about many instances of A.I. displacing human scriptwriters yet, but it’s clearly increasingly possible, so I’m glad the Guild is trying to get a handle on it sooner rather than later. Given how quickly we’ve gone from spellcheck and autocomplete to self-writing emails, I don’t think an annual “meeting” to discuss vague advancements in technology is enough.
Meanwhile, visual artists are also trying to get ahead of the A.I. tidal wave. Artist Molly Crabapple is a friend whom I’ve cited in these pages before on her opposition to A.I. art, and even to the use of the word “art” to describe images created by generative systems. She and the Center for Artistic Inquiry and Reporting published an open letter, signed by over 3,000 artists, actors, writers, and academics, calling for publishers not to use A.I.-generated art in their publications. Imagine newspaper cartoons, book and magazine cover art, and those human-made portraits certain media outlets use to portray interview subjects, all replaced by images created by a system like DALL-E or Midjourney. The letter isn’t opposed to A.I. illustrations simply for the sake of maintaining a nostalgic creative enterprise: The signatories make a twofold economic and justice-focused argument to preserve the livelihoods of artists, striking at the very foundation of these oft-dubbed “foundation models.”
The letter says what the founders and funders of technology companies generally don’t: “A.I.-art generators are trained on enormous datasets, containing millions upon millions of copyrighted images, harvested without their creator’s knowledge, let alone compensation or consent. This is effectively the greatest art heist in history. Perpetrated by respectable-seeming corporate entities backed by Silicon Valley venture capital. It’s daylight robbery.” To put a finer point on it, the letter goes on to describe A.I. art as “vampirical, feasting on past generations of artwork even as it sucks the lifeblood from living artists.” That’s good human writing right there.
While I witness and sometimes join this opposition, I’m also experimenting with, and thinking about, positive use cases for A.I. We are in the chaotic early days of a technology that will fundamentally alter how we tell our stories. Now is the time to get serious about organizing our laws, economies, and norms to provide something that feels like fairness and a life of opportunity for more than the handful of folks making and financing these tools. In that spirit, here are a few thoughts-in-progress about where this all goes.
I go much further in this piece than I have before. A few select moments:
There needs to be clarification around source material, and around where the underlying ability of these generative tools comes from, in order to manage the confusion around representation once their creations are out in the world. As the CAIR letter makes clear (as do several lawsuits), these generative systems have hoovered up a tremendous amount of “training data”—which is coded language for saying they’ve ingested and copied from vast troves of existing work and intellectual property. I’m sure OpenAI didn’t get signed release forms from all the writers and artists that inform its model. They will argue it’s “fair use,” but in practice, the program can unfairly commercialize someone else’s work. A prompter like me can say, “Make me an image of Joe Biden in the style of Molly Crabapple,” without my having to learn her style at all, and while Molly is still very much alive and might never agree to make that work herself. What I might intend as flattery can also be exploitation.
As my friend Dr. A.D. Carson, a professor of hip hop at the University of Virginia, told me recently about these tools, “It perfects the trend of making Black art without Black bodies.” Basically, what Elvis did to Black music can now be accomplished at scale. You can “generate” hip hop without all the fuss of engaging with humans and history, and with the lived experience of humans born of that history. It’s possible to make the A.I.-generated hip hop sound Black by using the existing voices of real Black artists without their consent, harkening back to the old days of shady record deals that denied Black artists rights and ownership over their words or voices. And those days of course harken back even further, to an era when Black people lacked rights over their very selves. We are going backwards even as we go forwards.
There’s something disturbing about asking a writer not to write, but to instead rewrite or punch up a “first draft” generated by an A.I.—which was only able to make that first draft by regurgitating the work of human writers. It reminds me of feeding bacon to a pig. If your book or TV episode is an ingredient in the large language model being used to replace you, then you are due for something like, wait for it… residuals, similar in spirit to the residuals you would have gotten if someone resold or re-monetized a content catalog you’d contributed to.
As always, there’s more on my mind than I have space for in a single dispatch. You could say my context window has reached its limit for now. But I think these emergent problems should encourage us to think actively about what parts of being human we value. Sam Altman at OpenAI has been open about the company’s desire to create machines that are more human, an “artificial general intelligence,” capable of learning and accomplishing any intellectual human task. Yes, the driving force behind A.I.’s progress is an economic system that values output, efficiency, and profit. There’s much more to the human experience than these. But as we interact with these technologies, born of that economic impetus, and they become more pervasive throughout our lives, they’ll make us more like machines, rather than the other way around.
Full piece here. Much love!
How many battles can we fight at once?