I’m not afraid of AI (Artificial Intelligence). I am afraid of human greed and exploitation and violence and lies and xenophobia, but I have trouble understanding how I could be afraid of intelligence. I had always hoped to have a chat with a self-aware non-human intelligence, whether a so-called artificial intelligence or an alien, to get a different perspective on my own intelligence, as a kind of reality check. At any rate, intelligence, artificial or otherwise, is pretty far down on the list of things that I think contribute to global catastrophe; the things I said I am afraid of are way up at the top of that list. And to look at it the other way, on the list of things that might help provide a way forward out of the Anthropocene Extinction, I would put intelligence at the top. Love would be up there too, but I personally have difficulty distinguishing between the two, love and intelligence. That’s one of the things I would want to talk to an AI or an alien about, to see if they reach the same conclusion. So intelligence is not the problem. The misuse of some of the fruits of intelligence, that definitely could be a problem. That’s why I get so vehement about clickbait.
Clickbait is a text or thumbnail link designed to attract attention and entice users to follow the link and read, view, or listen to the linked piece of online content; it is typically deceptive, sensationalized, or otherwise misleading.
AI is trained on clickbait, so no wonder it writes good social media posts and advertising copy. Clickbait is just the filler between the advertisements. Its sole purpose is to keep you clicking so you’ll see more advertisements; it has no other purpose than to waste your time in order to make money by showing you advertisements. You are better off staring at the wall or at your big toe than reading clickbait. Well, I could explain that, I guess: staring at the wall, at least the possibility arises that you might have a thought of your own. Clickbait is worse than porn, which at least stimulates your reptile brain. Unfortunately, clickbait is not just empty drivel. It has a more nefarious purpose, which is not only to keep you around for the advertising but to prepare you to receive it in a susceptible way. So, in my humble opinion, a world with less clickbait would be a better world. I can explain some of my vehemence pretty easily, I think. The reason AI won’t do anything of real interest to me anytime soon is that ChatGPT and other Large Language Models are trained by predicting the next expected word, billions and billions of times. I do love it that nobody seems to know exactly what a large language model like ChatGPT is actually doing, that is, how it produces what it produces. I won’t be a bit surprised, though, if it turns out to be like a cat’s cradle or, let’s say, like a rope tied in slip knots. So then, AI is trained to predict the expected next word, but I am a philosopher and poet, and what philosophers and poets are interested in is the unexpected next word.
I’m not playing word games here. LLMs are about imitating what has been done in the past, and philosophy and poetry are about doing something new. I think that’s a pretty significant difference between the unexpected next word and the expected next word. One opens the world to new value and meaning, and one doesn’t. I could stop here, but I hope I am communicating why I feel so strongly about it, because the LLMs are not even imitating the best of the past; rather, they are being trained on the most mediocre of the mediocre, internet clickbait, which is designed to be unremarkable, just to fill up space between the advertisements without giving you any reason to think, because if you were thinking you wouldn’t be in a receptive state for the advertisements. So, as a philosopher and poet, I find AI no threat to my creative activities. I’m also a photographer, and as a visual artist AI is not a threat to me either. Artists are concerned with the process of making art. Visual artists are interested in doing the process and making something visually interesting. AI is irrelevant to that. Artists don’t want something that will do it for them. Making art and marketing art are two separate processes. Vincent made all his art and sold one piece before he died. It didn’t stop him that he wasn’t selling anything, though he did for sure complain about not having any money. But the lack of financial success didn’t stop him, any more than his suspicion that his art was destroying his reason stopped him. “That’s all right,” he said. I’m no Vincent, but I can understand some little bit of that, because I started photography because I was interested in the philosophy of aesthetic experience, and I thought involvement in the process was helpful or even necessary to that investigation. And I have found that it is worth it in and of itself.
(Look, I’m sure you see what I’m doing by now, arguing that AI is not a threat to the processes that I care about, and I’m fully aware that AI could absolutely destroy the wordsmith and visual artist job markets. If you will just bear with me for a bit, I’ll be right back to that topic. Okay?) Further, not only is AI not a threat to my involvement with the creative process, but in analyzing it I discover that it actually illustrates some of the most important things about the experience of art. AI may very well make some visual images that people find interesting. For one thing, that illustrates that what’s in the artist’s mind when they make an image has nothing to do with whether a viewer finds it aesthetically interesting. The viewer either experiences it as interesting or doesn’t. That is a very hard thought for some people. They always ask, “But what does it mean?” People who market art prey upon this misconception that one should understand what it means. They want you to give them your money to let you into the club of people who understand what it means. But that’s just marketing. The valuable thing aesthetically is that people should be able to describe what they see, the relationships among all the different components that constitute the work. So AI also helps us get clear about a second very difficult thought about art. An artist may well have something in mind when they are working. An artist may even believe that they are communicating something on some level in some way through the work, such that the message helps them make the work. That’s all fine. It’s just that whatever makes the piece an artistically interesting work to the viewer is not whatever the message might happen to be. Understanding the message is not what leads the viewer to say WOW! A viewer could misunderstand the message and still say WOW! Or a viewer could have no clue that there even is a message and still say WOW! Yes, it’s the WOW! that we’re interested in. Looking at a work by an AI or by a human or, for that matter, by an elephant is all the same. It either WOW!’s us or not.
In some way that we don’t understand, the works by StarryAI shown here purport to be somehow based on the quotation I gave it from this article: “People who market art prey upon this misconception that one should understand what it means.” Additionally, some philosophers, like me, are interested in the relation between language and thinking, and I’ve already found that conversing with AIs can be very useful in exploring that relationship. That’s one of the things I’ll explore in subsequent articles in this series. Another will be why intelligence is one of the few things that I am not afraid of. I’ll also investigate the hugest question I know: will an AI become the evil psychopathic insane monster villain portrayed in 1950s sci-fi? All in all, then, in my humble opinion a world with less clickbait would be a better world and, further, a world with fewer human beings wasting their lives creating more clickbait would also be a better world. But for practical reasons, putting all the clickbait writers out of work all at once would be a catastrophic dislocation; there are already far too many people living on the streets. You also hear it said that AI is going to put artists out of business. To this I would say that artists are in no danger, because they are used to starving. But the people who sell art are in danger. Just as with the clickbait writers, AI could very well take over the jobs of commercial artists who create the graphics and images that decorate the clickbait. And just as with the clickbait writers, in the long term that would be no loss if AI put the whole clickbait art industry out of business. But in the short term that’s another big job dislocation, and it deserves our consideration.
It’s always delightful to see the unexpected, and the unexpected sight of the CEO of OpenAI, the company behind ChatGPT, begging for governmental regulation is a classic! We are so used to the opposite that he surprised the pants off everybody. Because of the jobs issue, I think we ought to listen to him. The big problem is that I doubt anybody in the House or Senate, or in the government in general, has the experience or wisdom to know even the first thing about what kind of regulation would be effective, or how to regulate such a thing as AI. But you have to “dance with who brung you,” and maybe they’ll rise to the occasion. Yesterday I saw a brief mention of an academic or some kind of expert who was eager to nominate himself as a regulator. I think the same about that as about anyone who wants to hold political office: wanting it should be a disqualification! I don’t think that any kind of absolute ban or prohibition would be effective, any more than it was in the case of drugs, and for similar reasons. It’s easy to make meth or grow weed, and the absolute prohibition just gave people incentive to do it. By the same token, it is now extremely easy to grow your own AI. I was checking the cost recently, and my guess is that $100,000 would do it. You just need a whole lot of computer, and, well, that’s really all you need. And a person who knows how to use it, but pretty soon there will be the kids, and the open wikis about how to build your own. If, however, the problem is simply to make the jobs transition less catastrophic, we ought to have people who could figure that out, and it might all be for the better, because we need to do some collective cultural thinking about what our jobs are like anyway. It’s just a carrot-and-stick thing: we can discourage people from using AI commercially and give them incentive to write things other than traditionally calorie-free clickbait. The issue of regulation provides another example of something that even this simple discussion of AI has helped us clarify.
As the CEO of OpenAI speaks publicly about the necessity and desirability of regulating AI, maybe we ought to think about some of the regulations we had in the past that were working, because we can learn from those, and we can also learn from the fact that we created a lot of our own current problems by abolishing regulations that were working. Think of the Fairness Doctrine, and the regulations that prevented dark corporate money from corrupting politics.
The Fairness Doctrine, the principle that provided for fair use of the media, was a U.S. communications policy that required licensed radio and television broadcasters to present fair and balanced coverage of controversial issues of interest to their communities.
It seems only fair, doesn’t it, that since the broadcast airwaves are a public resource, they ought to serve the public interest. The abolition of the Fairness Doctrine led directly to the current state of toxic partisanship, by way of the creation of propaganda channels that serve the interests of conspiracy theorists and other con men who will say anything that increases contributions, rather than serving the public interest in honest political dialogue.
The Supreme Court’s 2010 decision in Citizens United v. Federal Election Commission allowed corporations and other outside groups to spend unlimited funds on elections in total secrecy, overturning century-old campaign finance restrictions.
The quoted summaries, by the way, were written by an AI named Summarizer, a feature of the Brave Browser’s search engine. Please know that the Fairness Doctrine had been basically in effect since the creation of the mass media a hundred years ago, and the regulation of corporate money in politics goes back a hundred years as well. In your opinion, was there a big problem that required changing things that had been working so well, or well enough at any rate? Do you agree with me that we were better off before those two regulations were done away with? At any rate, they indicate that the United States has a long and successful history of regulation in the best interest of the people and their communities, so if regulation is necessary now, we are capable of it, or we were in the past anyway, and we could use these examples of successful regulation as blueprints in creating a regulatory structure for AI.
(This essay was written by Larry Short, a longtime teacher of the humanities, philosophy, the arts, world music, jazz, world religions, and semiotics. His doctoral degree is in the phenomenology of aesthetic and religious experience. He has academic publications in those areas in The Journal of the American Academy of Religion, The British Journal of Aesthetics, and Recherches Sémiotiques/Semiotic Inquiry.)