Me, Myself, and A.I.
New Puck Piece, Back in Florida
me about to ride with the Buffalo Soldiers Motorcycle Club in North Florida
As usual, you can get the full version of this piece in Puck. Subscribe! It’s good!
I’m writing this week from Alachua, Florida, where I’ve seen a lot of “Ron DeSantis: Keep Florida Free” signs and where we’re filming another episode of my PBS series, America Outdoors. Thanks to my time here, I’m no longer afraid of alligators. Instead, I think of them as adorable, tiny dinosaurs. And speaking of small relics from an ancient past, a Florida Republican state senator has introduced a bill forcing bloggers to register with the state if they make money from their blog and write about the governor, his cabinet, or members of the legislature. I’m not sure what’s more embarrassing: attempting to stifle speech in “free” Florida, or thinking that the best way to stifle it is to clamp down on bloggers. Florida man strikes again, it seems.
For more on DeSantis, be sure to read my colleague Tina Nguyen’s latest appraisal of his wide-ranging endorsements from across the Republican Party. For hints of what a world looks like without DeSantis or any attention-hungry elected official, listen to the latest episode of my How To Citizen podcast, which we titled “Democracy without Politicians.” I talk with Claudia Chwalisz about citizens’ assemblies, an expression of democracy in which members of society are selected, by lottery, to serve in a deliberative body.
Meanwhile, it’s been three months since I wrote about ChatGPT and the implications of generative A.I. models. In the intervening time, I’ve experimented, been impressed, been scammed, and now I’m back with a set of updated observations on this trend that might never go away. Herewith, henceforth, and other words I don’t use to simply indicate a segue, five things I need to say about A.I. right now.
Me, Myself, & A.I.
Billions of dollars are rushing into another high-tech hype cycle, this time around ChatGPT and other large language models. This time, it’s different.
I. This Time Is Different
We’ve experienced a lot of technology hype over the past few years: crypto, Web3, the metaverse, et cetera. In each of those cases, proponents made bold claims about how the technology would change everything. In some cases, it started to: venture capitalists invested billions in dubious startups; tech workers fled traditional tech for Web3 concepts; your relatives and rideshare drivers all started talking to you about crypto.
But in all cases, the momentum waned. The government is coming for crypto after the collapse of various coins and exchanges. Web3 is still hard to define. Mark Zuckerberg changed the name of his entire company just to ride the trending topic of the metaverse, but in his latest earnings call he barely mentioned the technology, instead suggesting that 2023 would be “the year of efficiency.” I’m surprised he didn’t change the company name to Net Profits.
As we find ourselves in another technological hype cycle, this time circling ChatGPT and other large language models (L.L.M.s), it feels fair to ask: Will this cycle be any different? Will anyone be investing in, talking about, or using chatbots two or three years from now? I’m betting the answer is yes.
ChatGPT is already having a meaningful impact on our culture. It reached 100 million active users only two months after its launch, reportedly making it the fastest-growing consumer application in history. Microsoft deployed the thing in Bing (which got the entire world to realize that Bing is still a thing!), and Google and Meta are racing to catch up. Generative A.I. is dynamic and pervasive enough that it will find its way into multiple areas of our lives beyond what we already see—and we see it everywhere. Your filters on TikTok, Snapchat, and Instagram? That’s A.I. The autocomplete in Google Docs? A.I. The auto-framing feature in Adobe Premiere? A.I. That annoying chat with customer service? Probably A.I. This is just the beginning, not a fad.
II. Be Careful What You Ask For
Despite my own awareness of the limitations of large language models, and my explicit reference to the bullshit text they can generate, I was recently fooled. In my last piece, The Black Liberation Paradox, I wrote at length about the conundrum I’ve faced in defining what freedom for Black Americans really means. I had outlined the piece and worked on it for literally months. As I neared the end, I knew that I wanted to make a point about the power of fiction and imagination, and decided to try out ChatGPT as a research assistant. My prompt: “Please share examples of Black writers, artists, and intellectuals who believe in the value of imagination in the effort to achieve Black Liberation.” I was fishing for a reference or quote I didn’t already know; Octavia Butler always comes to mind, but ChatGPT “informed” me that James Baldwin had written about this very topic.
With the unabashed confidence of a university student who definitely didn’t do the reading, yet eagerly volunteers to answer the professor’s question, ChatGPT said, “In his essay The Creative Process, Baldwin wrote that ‘the imagination creates the space for us to dream beyond what is immediately visible, to be more than what our circumstances might suggest.’” This sounded great, but there was a catch: Baldwin did write an essay called “The Creative Process,” but he did not write those specific words in that essay or any essay. In fact, based on my online searches, no one wrote those words. ChatGPT invented them.
Thankfully, my still-human editors fact-checked the piece, saving us all some embarrassment, and some old-fashioned manual research led me to an alternative Baldwin reference which had the benefit of being real. When I shared this story with a software developer friend, he told me I can avoid this by instructing ChatGPT not to invent things. I wouldn’t have to say that to a human research assistant, but these bots need explicit guidance on the whole misinformation thing. And sure enough, I got better, accurate results after I resubmitted the request with this addendum: “Do not invent quotes. Provide only examples and quotes you can support with clear attribution.” I think I’m also going to start adding “...and please don’t kill all the humans” to each prompt from now on, just to be safe. You’re welcome.
As more of these generative tools are tested publicly, it’s becoming clear that how you ask for a result is just as important as what you ask. I had joked on a recent episode of Puck’s podcast, The Powers That Be, that software engineers will soon be replaced by prompt engineers. Within hours of saying that, I learned there really is such a thing, and prompt engineering is a fast-growing field! There are online marketplaces where people share and even sell prompts, and leaders in A.I. are saying things like, “The hottest new programming language is English.”
III. A.I. Spammers and Scammers
Because my YouTube algorithm is quite good at surveilling me, I’ve been getting a bunch of video recommendations related to A.I., and one type in particular keeps coming up. It’s a man—always a man—explaining to me how some finite number of A.I. tools, always fewer than 10, can be used to grow my business. These A.I. evangelists aren’t interested in helping me be a better writer or artist or citizen. They ostensibly want to help me track down sales leads and automate outbound messages. Essentially, they are next-gen spam artists. And the spam—and far more malicious uses for A.I.—is about to get turned up to 11.
Imagine your parents or in-laws getting a call from someone who sounds like you, asking for their bank or other personal information. Imagine our social media feeds primarily filled with A.I.-assisted or A.I.-generated content. In the near future, I could simply tell Instagram to post a video every day, in my voice, in which I comment on the weather and news headlines for that day. Industrial-scale content farming and constant growth hacking will follow. Bots will scrape LinkedIn for any possible sales leads, then bombard them with entreaties.
To counter this, researchers are developing watermarks that can be embedded in the output of large language models, making it easier for us to distinguish synthetic text from text truly written by humans. This will help teachers, employers, and others, but we’ll need a lot more. We are in for an avalanche of bullshit not only in text, but in audio and video. Ironically, the best way to police fraudulent use of A.I. tools may be more A.I. This might be the true “A.I. arms race”: not one big tech company versus another, but A.I. truth detectors versus misinformation spreaders. As in traditional wars, the true winners will be those supplying the arms.
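To make the watermark idea concrete: in the “green list” schemes researchers have proposed, the model is nudged at generation time toward a pseudorandom subset of tokens keyed to the previous token, and a detector later checks whether suspiciously many tokens landed in that subset. Here’s a toy sketch of the detection side, loosely modeled on those published proposals; every name, parameter, and the hash-bucket trick here is my illustration, not a real production detector.

```python
import hashlib
import math

def is_green(prev_token: int, token: int, green_ratio: float = 0.5) -> bool:
    """A token is 'green' if its hash, keyed to the previous token, falls
    below the green cutoff. Deterministic, so a detector can recompute the
    same partition later without access to the model itself."""
    digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") / 2**32
    return bucket < green_ratio

def detect_z(tokens: list[int], green_ratio: float = 0.5) -> float:
    """z-score of the observed green fraction versus chance. Large positive
    values suggest watermarked (machine-generated) text; ordinary human
    text should score near zero."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    hits = sum(is_green(p, t, green_ratio) for p, t in pairs)
    n = len(pairs)
    return (hits / n - green_ratio) * math.sqrt(n / (green_ratio * (1 - green_ratio)))
```

A watermarking sampler would pick green tokens more often than chance, so its output scores high; text written by a human, which ignores the hidden partition, hovers around zero. That asymmetry is the whole trick.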
And I’m saving the rest of this for the full paid version, which includes some of the following highlights:
My Lenovo Late Night I.T. series. In a recent episode, I spoke with a chief automation officer and a digital transformation leader, and both made the point that I.T. departments are understaffed and unable to support all the technology that employees use. Everyone is looking to increase efficiency, and the best people to design tech tools aren’t those in the “tech” department; they’re the people actually using them day-to-day throughout the organization.
My friend Ron J. Williams, an entrepreneur, investor, and partner at a venture studio, recently wrote about his belief that we’re entering an era of “Radical Comprehensibility” in business, as opposed to an era of hiding your terms or pricing and betting your customers won’t find out. As he put it, “in a generative A.I.-everywhere world, it will be tough to bury the implications of choices. Hiding where and how you make money will be impossible over time… because consumers won’t need a calculator and a lawyer to understand traps in the fine print.” I literally used ChatGPT to make sense of my healthcare plan!
Another friend built me a chatbot interface to the How To Citizen podcast catalog, so I can literally have a conversation with an “expert” on citizening based on the transcripts of our conversations.
A.I. will be like a performance-enhancing drug, and we have to decide if we’re OK using it and under what conditions. There will always be a niche community of resistant old-schoolers, landliners, and vinyl fans, but the vast majority will go where the momentum is.