Radio is facing a number of hard choices. Where do you stand?
Last week we published a huge number of promotional ideas for Christmas and New Year’s, authored by our new editorial assistant, Ann Ingram. One alert reader—and only one—Erica Farber of the RAB emailed me, “Curious minds want to know: would your new editorial assistant Ann Ingram be AI generated?”
First, kudos to Erica for resurrecting a very old tagline from the National Enquirer ("Enquiring minds want to know"). And Erica wins the "Winner Winner, Tofurkey Dinner" award for figuring out that "Ann" is a completely made-up person: her initials are AI, and her name is an homage to my late friend, New York super-jock Dan Ingram. Ann's image was generated by a program called CrAIyon, where I asked for "a 26-year-old brunette professional woman" and chose from among the generated images. (That's how Erica figured it out; she told me the picture "didn't look right.")
But here's the astonishing thing: I gave the following assignment to ChatGPT and Google's Bard AI engines: "Give me 100 Christmas and New Year's radio promotions." Ten seconds later, I had what you saw in last week's newsletter. Sure, some of the ideas are a little far-fetched, and I had to weed out a handful of duplicates, but in general, the results were staggeringly good.
I pump out about 4000 words a week and I want those 4000 words to be as relevant as possible to your business, so being able to issue such a command and get the results in 10 seconds gave me about 2500 words of solid content. If I were to go digging for that content, it would take me the better part of a day, at least. (As it was, the most time-consuming part of the project was formatting the headlines for each idea.)
While we are confessing stuff, let me also say that for about a month now I have been using AI to write different versions of the news stories we present on the first page or two of every issue. Sure, the result is not the usual sparkling prose for which I wish to be known, but it is certainly the quality one would expect from, well, a 26-year-old editorial assistant with average to above-average writing skills.
What are you thinking right now, knowing that a significant part of our newsletter for the past couple of weeks has been generated by an artificial intelligence engine? Does that devalue the content? I would be interested in your thoughts.
But I’m just a newsletter, kind of a side road on the broadcasting map, and I don’t face a lot of the same issues that you do in your broadcasting business around artificial intelligence.
I hear from many broadcasters in small markets that at this point, as one put it, “We all use AI for copy.” Many broadcasters, including our friend Larry Fuss, are using AI-generated jocks, at least to fill out-of-the-way dayparts, like nights and weekends. Larry has taken it one step further by asking RadioGPT to clone his Pago Pago general manager and popular air personality, Joey Cummings. When I talked to Larry the other day, he told me he hasn’t put anything on the air yet, because he and Joey are still tweaking it to get it to sound more natural.
Larry sent me a spot, however, which sounded pretty darn good. In fact, the bulk of the spot was done by an AI voice, with a tag done by a local announcer, and it did not make the local announcer sound very good by comparison. In other words, AI is of particular interest to small market broadcasters because the level of quality is quite a bit higher in general than you’re going to get from your real live people.
Here's the spot: [audio embedded in the original newsletter]
The Ethics of AI in Radio
In navigating the road ahead—not my side road, but your thoroughfare—there are two comparisons that come to mind: one is payola and the other is outsourcing manufacturing. Let me explain.
Do you realize that the entire payola scandal of the 1950s, '60s, and '70s could have been completely avoided with a short announcement in reasonable rotation, saying, "Some of the records heard on WXXX have been paid for"? By disclosing that to your audience, a lot of scandal and heartache and destroyed careers could have been avoided. (Or they just didn't have to take the goodies in the first place.)
It may come to this with AI as well: morally, if not at this point legally, you would be in the clear if you ran such an announcement regarding your air staff, or, better still, if you made your AI jock a big deal. If the "personality" is good enough—and right now we don't get, or expect, much more than what we used to call "Triple-T" (time, temp, title) jocks—the AI voice will, I'm afraid to say, sound a lot better than some kid fresh out of high school or, worse still, broadcasting school (if those still exist at all).
So what's this about outsourcing? What does that have to do with anything? In the world of manufacturing, business owners face a choice: have your widgets made domestically, and pay a fair wage to your workers, knowing that you have contributed to the economic well-being of the individual and our domestic economy, but at the cost of slimmer margins. The other choice is to outsource to China, or the Philippines, or wherever, pay a fraction of the money, and end up with fundamentally the same widget, at the expense of displacing American workers and contributing to domestic unemployment.

I'm not taking sides on this issue, because both methods have their merits, but it is a useful and appropriate analogy to what a small market station owner (in other words, a small business owner) faces: it's hard enough to make a profit nowadays in the radio business, so why not take advantage of AI technology to give yourself more of an edge? After all, your AI voices are going to cost something, but nothing like what a real person might command, even at the crap-wage level. . .and the AI voice always shows up, is always up, never gets drunk, and never swears on the radio or says anything inappropriate. (That last part has to be monitored carefully, though, because if left unchecked, AI engines will pull things from the vast storehouse of knowledge we call the Internet that are just flat-out not true or, in some cases, simply make stuff up.)
Quick (albeit self-congratulatory) story: at the first radio station we bought, we walked into a situation where they had six full-time live jocks plus a news person, each of whom was making a barely livable wage. Over time, through the use of automation and syndicated programming, we reduced the air staff to three full-time people, but each of those people was making a decent amount of money, to the extent that they could hold their heads high in the community and live like real people, with families and houses and everything. (I'm proud of the fact that two of the three slots were occupied by people who stayed with the station throughout their careers, retired well from the station, and continue to live in the community.)
So, did our slow-moving RIF (reduction in force) do good, or not? On paper, we reduced our staff by over 50%, but the fewer positions allowed us to pay more money, get better people, and, more importantly, keep those people throughout their careers.
Those of you reading this whose stations are part of a large group have probably received a memo from Corporate outlining the permitted use of artificial intelligence in your operation. I’ve seen some of those memos; some are well-thought-out and reasonable, and some are knee-jerk obstructions to the inevitable. Those of us who own and operate independent stations and groups can make our own policies.
I’d very much like to get your thoughts on the matter, either at firstname.lastname@example.org or as a comment on this article. We may not be able to avert a Terminator scenario, but we should be able to cover it.