Ah, the hype.
There really isn’t anything quite like a Silicon Valley-generated hypefest, and the introduction of Artificial Intelligence technology – most famously in its ChatGPT and (for images) Lensa AI iterations – is, as one Silicon Valley pundit put it, “a Netscape moment”.
It is a dramatic departure that points digital communication and information storage, processing and distribution in new directions.
But we don’t know what those directions will be quite yet. Just think, if you’re old enough to remember the first time you saw the Netscape browser, did you honestly believe it would evolve into Facebook?
And, of course, the importance of what Netscape started – the ability for anyone to use the Internet, not just a bunch of folks who understood the ins and outs of Pine and Berkeley Unix – was not understated at the time.
Marc Andreessen, the guy who created Netscape, is now a Silicon Valley legend. Smarter people than the one typing this newsletter hang on his every word. His venture capital firm, Andreessen Horowitz (a16z to insiders – “a,” 16 letters, “z,” the same little engineer’s shorthand that turns “internationalization” into “i18n”), backed Facebook, Airbnb, Lyft and a host of firms you’ve never heard of but which have changed how you conduct business.
In 2011, Andreessen wrote “Why Software Is Eating the World,” an essay with a lot of smart predictions and observations that have been borne out – many of them by companies his firm has funded. Lately Andreessen’s been writing about AI.
But in a twist that shows just how much things have changed since 1995, Wired magazine editor-in-chief Gideon Lichfield took Andreessen’s latest essay apart and came up with some smart ideas about how AI hype serves a purpose that isn’t obvious to those outside Silicon Valley’s insider investment circles. For starters, check out the list of a16z’s investments in that tech.
Lichfield’s essay, “Marc Andreessen Is (Mostly) Wrong This Time,” is worth your attention because it does a fantastic job of explaining AI in terms that are understandable to folks who don’t know the ins and outs of Java, C# and Python.
Here’s Lichfield on how ChatGPT (a Large Language Model) works:
“Large language models (LLMs) are statistical inference algorithms. They predict the next likeliest thing in a sequence of things, such as words in a sentence. They produce what looks very much like human writing because they’ve been trained on vast quantities of human writing to predict what a human would write.”
Which gives you an idea of why political folks have taken to AI a whole lot faster than they did to almost any other digital technology. It’s a machine that apes the behavior it has observed. And AI has observed far, far more than any one person, and it can find patterns in those observations that living, breathing humans can’t spot as readily or as quickly. But it’s horrible at nuance, it’s as creative as your toaster – if your toaster could generate text – and it’s got no sense of irony.
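To make Lichfield’s point concrete, here is a toy version of “predict the next likeliest thing”: a bigram model that counts which word follows which in a training text, then always guesses the most frequent follower. This is a deliberately tiny sketch in Python – real LLMs use neural networks trained on vastly more text – and the training sentence here is made up for illustration.

```python
# A toy "predict the next likeliest thing" model: a bigram counter.
# Illustrative only -- production LLMs are neural networks, not lookup
# tables -- but the statistical idea Lichfield describes is the same.
from collections import Counter, defaultdict

# Made-up training text.
corpus = "the voters want jobs and the voters want answers and the voters vote".split()

# Count, for each word, which words tend to follow it.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training text."""
    candidates = follows.get(word)
    if not candidates:
        return None  # never seen this word; real models back off more gracefully
    return candidates.most_common(1)[0][0]

print(predict_next("voters"))  # -> "want" (seen twice, vs. "vote" once)
```

Notice what the model is doing: it isn’t reasoning about voters, it’s repeating the likeliest pattern in what it has already seen. Scale that up a few billion times and you get prose that looks human – and the same blind spots.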
So, let’s get this out of the way. There will be AI-generated campaign ads, AI-generated social media posts, AI-generated music, speech and who knows what else. Some of it will fool people and much of it will be published via automated channels where AI’s fellow machines rule the roost. So, it’s inevitable that there will be a big to-do about political campaign disinformation and platform credibility between now and November 5, 2024. This is likely to raise – again – the questions of how political ads are bought, sold and reviewed in the digital world – a conversation that has been neglected for far too long.
Lichfield points out one overlooked fact: AI machines are built by people. They can be trained and restrained by the people who build them. And that’s worth remembering.
Because AI is going to be used. It will come in real handy for finding stock images – and manipulating them. It’ll make it a lot easier to come up with talking points – and it will pull a few of its own out of thin air. It will build field and media plans and suggest buys and neighborhoods that you didn’t consider – and that might include The Daily Planet and a list of the Fabulous Five‘s home addresses.
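One boring-but-useful defense against those invented talking points and phantom addresses: never let an AI suggestion into a plan without checking it against a list a human already trusts. Here’s a minimal sketch of the idea in Python; the outlet names and the trusted list are made-up examples, not real data.

```python
# Sketch: verify AI-suggested media buys against a human-vetted list.
# Anything the AI proposes that we can't match gets flagged for review
# rather than bought -- the cheap insurance against hallucinated outlets.
trusted_outlets = {"Springfield Gazette", "County Register", "KXYZ Radio"}

ai_suggested_buys = ["County Register", "The Daily Planet", "KXYZ Radio"]

verified, flagged = [], []
for outlet in ai_suggested_buys:
    (verified if outlet in trusted_outlets else flagged).append(outlet)

print("Buy:", verified)    # outlets a human has already vetted
print("Check:", flagged)   # ["The Daily Planet"] -- possibly invented
```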
So the limitations of what AI can – or should – do will be fairly obvious. But obvious limitations don’t enforce themselves.
One good tool for political folks worried about “deep fake” AI-generated reels and images is this very nerdy consortium created to authenticate images circulating in public. You know, like campaign ads. It’s something the American Association of Political Consultants might find useful as a way to authenticate the images its members create.
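The core idea behind that kind of authentication is old-fashioned cryptography: the creator signs the image bytes with a private key, and anyone with the matching public key can verify the file hasn’t been altered. The real provenance standards embed signed manifests inside the file and chain edits together; the sketch below shows only the cryptographic core, using Python’s cryptography package, with placeholder image bytes standing in for a real ad.

```python
# Sketch of the cryptographic core of image authentication: sign at
# creation, verify on delivery. Not any consortium's actual protocol,
# just the underlying sign/verify idea. Requires the `cryptography` package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

image_bytes = b"...raw bytes of the campaign ad image..."  # placeholder

# The campaign signs the image once, at creation time.
signing_key = Ed25519PrivateKey.generate()
signature = signing_key.sign(image_bytes)

# A platform or reporter verifies with the campaign's published public key.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, image_bytes)
    print("Image matches the campaign's signature.")
except InvalidSignature:
    print("Image was altered or didn't come from this campaign.")
```

Change a single byte of the image and verification fails, which is the whole point: the math, not the platform’s judgment, says whether the ad is the one the campaign actually made.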
Other ideas include clear and stated policies like the ones Lichfield has outlined for the magazine he runs.
At Spot-On, we’re spending our time thinking about how those processes can be corrupted, misfire or create havoc, and what we think needs to be done to make sure that the ads we deliver aren’t machine-generated nonsense. But we’re going to look – and look long and hard – at other uses because we think it’s the boring backroom stuff that will, over time, make political AI both a boon and, if it’s not carefully managed, a curse.
Our first step: Walk away from the hype.