Nuance in all things. A dive into (Anti-) "AI" Myths
“AI” is weird, I get it. Somehow, it’s everywhere, and chances are you either hate it with a passion or love it and wish your entire life could make use of it. Sure, that’s your right. And I’m not saying you’re wrong. Indeed, I have used “AI” before and still think it’s an interesting tool when you want to get into a fairly technical subject that invites hands-on learning. But this text is not about that. It initially started as a rant about people complaining about “AI”, but has transformed into a piece about the basic properties of “AI”, why I think most people talking about “AI” are wrong in one aspect or another, and why I think the whole thing will ultimately crash. But first, let’s get some tech facts straight.
“AI” is a bad term.
I prefer using Machine Learning (ML) for the “old” algorithms like random forests, neural nets, SVMs, GANs and so on, and “transformers” or “LLMs” for the newer model combinations underpinning chatgpt, claude, llama et al. While the former models are all relatively old – neural nets go back to the “perceptron”, proposed by the psychologist Frank Rosenblatt in the late 1950s – their explosion happened with the computation revolution from 2000 onwards. Cluster computing, the mass adoption of the internet and easy cash from the bull market before 2008 made it easier to gather training data, build infrastructure and develop models. Since then, models like decision trees (CARTs), SVMs and such are considered “off the shelf” models, with a rich knowledge base behind them. You can train such models on an average computer, with decent results. And they are everywhere. When you swipe your credit card somewhere, one of these models will score the transaction’s risk (fraud, likelihood of default) in milliseconds and either block or greenlight your payment. When you click on something on Youtube, a system will integrate that data and recommend new videos to you via some off-the-shelf optimization algorithm. These things are computationally efficient and easy to implement. “AI”, if you will, is everywhere. Has been for close to a decade now.
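To give a feel for how mundane this is, here is a minimal sketch of such an “off the shelf” risk model using scikit-learn. The features, data and threshold are invented purely for illustration; real payment-risk systems are far more elaborate, but the train-once-score-in-milliseconds pattern is the same.

```python
# A toy "off the shelf" risk model: train once, then score transactions in
# milliseconds. Features and data are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Pretend features: [amount, hour of day, merchant risk score, distance from home]
X = rng.uniform(0, 1, size=(5000, 4))
# Pretend label: 1 = problematic transaction, 0 = fine (synthetic rule + 5% noise)
y = ((X[:, 0] + X[:, 2] > 1.2) ^ (rng.uniform(size=5000) < 0.05)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_transaction = np.array([[0.9, 0.1, 0.8, 0.7]])
risk = model.predict_proba(new_transaction)[0, 1]   # probability of "problematic"
print("block" if risk > 0.5 else "approve", f"(risk = {risk:.2f})")
```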
Now, transformers are a different beast altogether. The architecture was proposed by researchers at Google in the 2017 paper “Attention Is All You Need”. The idea behind it is basically simulating a large net of neurons in a computationally efficient way, giving the model the ability to weigh certain parts of the input more or less heavily than others and to cross-reference data through a kind of “knowledge map”. This resembles a mechanism our brain implements: we heavily filter our sensory input and only retain the important stuff. By doing this, the machine can deal with information more efficiently and give you a better result. The output of a transformer is purely statistical: given input A, it will generate an output B that, according to its inner wiring and training data, has the highest probability of fitting the input. If you ask it, “How are you?”, it will answer “Fine, how are you?” back. It fits statistically, because its training data showed this pattern of communication very often, making “Fine, how are you?” the most likely response to your question. That’s basically how chatgpt works. For context, this is not how all transformers work; there are others for visual generation, for translation and for other things. I will focus on chat and image transformers in this piece, since those are the ones currently talked about the most. So, now that you have a bit of an understanding, let’s get to the most uncomfortable stuff.
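To make the “weigh certain parts of the input” idea concrete, here is a minimal numpy sketch of scaled dot-product attention, the operation at the heart of the transformer. Toy numbers only, no learned weight matrices – real models wrap this in many trained layers – but it shows the weighting mechanism.

```python
# A minimal sketch of scaled dot-product attention: each position produces
# a weighted mix of the others, so "important" parts of the input count more.
import numpy as np

def attention(Q, K, V):
    # How strongly does each query position "look at" each key position?
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax turns the scores into weights that sum to 1 per position.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output: a weighted mix of the values - unimportant parts are filtered out.
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))            # 4 toy "tokens", 8 dimensions each
out, w = attention(tokens, tokens, tokens)  # self-attention over the toy tokens
print(np.round(w, 2))                       # each row: how much a token attends to the others
```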
Most people talking about “AI” are probably wrong
The attentive reader will have noticed that I am, right now, writing about “AI”. And I will not exclude myself from “most people”. Thus, I will preface the following text with a disclaimer: I’m not a “trained” data scientist (although data science is what I’m hoping to do after my PhD), and I have no in-depth knowledge of how the large transformers and cutting-edge models work. That being said, I did have above-average training in “AI” and statistics during my studies and continue to learn about the subject through my job and simply because the topic is interesting. This means: take everything here with a grain of salt. I think I know what I’m talking about most of the time, but the burden of falsification is on you, and everyone makes mistakes (especially scientists, and especially me). With that, let’s start with the most obvious fact about “AI”.
“AI” is hype tech
There seems to be no middle ground when it comes to opinions about “AI”. I find this funny, because this is what basically every hype technology has ever evoked in people. Extreme polarization is a hallmark of new tech. Here are a few examples: train rides were once suspected to “injure the brain” when they became popular. Hell, even bicycles caused adverse reactions when they became a serious means of transport for women. In Germany, there is a famous quote from 1990, attributed to former Telekom CEO Ron Sommer, stating that the internet would probably never become popular and remain a mere playground for nerds. All of this has aged like fine milk and is a reason to disregard tech hype (and anti-hype), or at least to adjust one’s expectations. Right now, we as scientists and societies are dealing with the question of whether social media is bad for people, and the promoters of this hypothesis engage in…worrying logic. Regardless: most new tech will die (remember Google Glass?) or be rather irrelevant in the grand scheme of things. I see the latter happening to “AI”. And all sides discussing the issue seem to dig deeper into their opinions – some nonsensical, some hypocritical, some valid.
Let’s look at the loudest things people are shouting
“AI“ is based on intellectual property theft
This is what you will hear a lot of artists say about visual “generative AI”, because companies scraped every available source of information on the internet for transformer training data, including artists’ promotion pictures on Instagram. Art especially seems tricky to evaluate, since intellectual property rules do not apply to it uniformly – depending on where in the world you are, taking a picture of a sculpture in someone’s yard and selling that picture may or may not count as IP infringement. Thus, there might be some merit to the theft claim, especially considering how heterogeneous the rules are. It is undeniable that the companies never asked for approval, and certainly never compensated anyone. While this might be immoral, it need not be illegal – although it seems like it ultimately probably was. The damage, however, is done, and getting cats back into bags is very difficult. Once something is out there, it’s out there. The internet will probably not forget (even though it should). And whether you can recreate pictures from a specific artist with a model trained on their art is also – to my knowledge – unclear as of now. As a side note, I find it curious that independent artists in particular, who have long criticized IP legislation and the copyright practices of big players, especially in the music industry, are now the ones demanding protection from the very regulations they once despised. While I am generally sympathetic to their claims and do not want to see independent music die a gruesome “AI slop” death (“AI slop” is a term for “AI”-generated low-quality content in whatever form), the irony of the situation tickles my funny bone.
We will develop “Artificial General Intelligence (AGI)” in [insert time frame]
AGI is, first of all, a vague term at best. Some people claim it is supposed to surpass humanly possible intelligence; others call any sufficiently profitable system AGI. There is a LOT wrong with both definitions. The first one is impossible to measure, because we ourselves currently do not know how far human potential goes. Take IQ as an example. IQ is the best tool we have today to measure performance on important mental tasks. The highest reported IQ scores lie somewhere above 200 (the average range being roughly 90-110), and even those are contested. Keep in mind, this data comes from the people we actually tested, who are few and far between on a global scale. But the bigger issue is: AGI is supposed to be way above our level. To measure this, we would have to design a test that measures an ability we cannot fathom. But a test needs pre-conceived answers to evaluate against. If we cannot imagine what it is we want to test, we cannot test it. It is, therefore, unscientific to make such a claim, at least if we trust the “mainstream” scientific method. It makes for cute philosophical arguments, but is ultimately marketing speak. The most we can test is “better than all humans before”. Is that AGI? Or is that another human-like machine? Answer: probably the latter, because, you guessed it, we used human data to train it. And that data is noisy. If you really think that social media posts, video descriptions and texts are a good representation of thoughts, picture a movie scene in your head and try to describe the image or sequence in text. Good luck with that.
All of this means to me: at least with the current type of models, AGI will not happen. I say this with confidence, also for a few technical reasons. Let’s just imagine, for now, that AGI was something measurable and real. Even though the current models are sold as “PhD-level intelligence” (which means nothing in terms of intelligence), they cannot even come close to human function. You will realize this once you give them a fairly complex task, for example coding. They will break down at some point and produce nonsense that is clearly not based on analysing and compartmentalising a problem, but on spraying random garbage all over the script in order to increase statistical fit. It is akin to a trapped, panicking wild animal. It might “know” a lot and be able to search the web for answers, but knowledge and intelligence, while related, are not the same thing. On top of this, to produce even mere human-level abilities, a model must be able to:
- Filter information (this is somewhat possible due to attention)
- Apply rule-based learning (some models can do this with reinforcement learning)
- Dynamically update weights with incoming information (not possible today; deployed models are frozen after training)
- Dynamically rewire its neurons (not possible with current architectures)
- Have specialized information processing centres for different modalities (semi-possible, but not well solved yet)
- Use abstraction and meta-maps for information and express these (really bad).
The straws that break the camel’s back here are points 3 and 4. We cannot live-update these models, because right now you need to re-train the entire thing. And training is expensive. A large BERT (transformer) model can consume about 1,500 kWh or more for a single training run. To gain some perspective, let’s compare this to a human brain. Our brains run on about 20 watts, which works out to roughly 0.5 kWh per day, or about 175 kWh per year. If we fed the energy of one such BERT training run to a brain alone, it could run for more than eight years; and since the brain accounts for roughly 20% of the body’s daily energy needs, a whole human could live off it for well over a year and a half. Over the roughly 20 years it takes us to build our basic skill repertoire (before any job-specific training), the brain consumes on the order of 3,500 kWh – little more than two such training runs – and in that time it updates itself continuously, ingests more modalities, processes them in parallel better, filters noise better, builds a huge repertoire of skills – language, motion, recognition, senses, memory – and gets rid of inefficient connections. Indeed, the human brain actively prunes connections and kills off neurons until you are about 30 years of age, to become more efficient. The machine, meanwhile, burns its 1,500 kWh to chew through a very large pile of data once, with the occasional result of recommending glue on pizza. It cannot compete energy-wise. It cannot compete in the breadth of skills. Saying we will develop AGI by just tweaking these models is delusional.
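For transparency, here is the back-of-the-envelope arithmetic behind those numbers; every input is a rough, commonly cited, order-of-magnitude estimate, nothing more.

```python
# Back-of-the-envelope check of the brain-vs-BERT energy comparison.
# All inputs are rough, commonly cited estimates.
BERT_TRAINING_KWH = 1500              # reported estimate for one training run
BRAIN_WATTS = 20                      # typical figure for the human brain
HOURS_PER_YEAR = 24 * 365

brain_kwh_per_year = BRAIN_WATTS * HOURS_PER_YEAR / 1000    # ~175 kWh
body_kwh_per_year = brain_kwh_per_year / 0.2                # brain ~20% of the body

print(BERT_TRAINING_KWH / brain_kwh_per_year)   # ~8.6 years of running a brain
print(BERT_TRAINING_KWH / body_kwh_per_year)    # ~1.7 years of running a whole body
print(20 * brain_kwh_per_year)                  # ~3500 kWh: 20 years of brain time
```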
“AI” will replace or boost social interactions.
To me as a psychologist, this is an especially interesting one. Mark Zuckerberg, a man famous for looking like a robot in congressional hearings, recently stated that his goal is to equip people with “AI” friends to push the friend count of the average American above three. To anyone with a brain (and friends), this clearly shows that Mr. Zuck does not know what friends are. Which seems realistic, seeing as the only friend this person appears to have retained is now his wife, and I’m not sure whether those two are still on friendly terms. In any case, being lonely is not cured by “knowing more people”. You’re not looking for a number, you’re looking for meaning and connection. This is pretty basic “being a human 101”, but it seems to elude the Silicon Valley oligarchs ever so slightly. Also, “AI” has terrible memory. I can trash-talk my friends based on a random incident in 2010 which we all remember fondly as the time when we collectively fucked up. Your “AI” friend might get the dates wrong, if it hasn’t run out of memory before. Related (stay with me, I’ll get to the point quickly): parasocial relationships with influencers only work because we know that the other side has a personality – or at least something like it, depending on the influencer. You don’t know what chatgpt is. It changes personality and interests all the time, depending on the input. That is something humans really do not like; in general, we prefer consistency. Sure, some people might like it, and I as an introvert see value in not needing to interact with humans. But I believe the majority still seek experiences with real humans: mysteries we need to solve to understand each other, and the flame of social warmth that ignites once we do.
“AI” will train you to think less and become dependent on it
This is something relatively new. I’ve been reading a few things on social media lately pointing in this direction, the most prominent in German tech (and security) circles being a German blog post by Mike Kuketz, a well-known name in the German infosec and privacy community. I have a lot of respect for Mike’s work – if you want to learn something about privacy and security, be sure to visit his blog; he has lots of good tutorials and information on there. Unfortunately, this blog post was not his best work, as it falls into the category “computer scientist talks about the human brain and behaviour” – which is a massive red flag to me as a psychologist. Maybe I’ve read too many HCI papers, but when I first opened the post, I experienced a sense of dread which I usually only get with said HCI papers. And, apparently, it was not a false alarm. Mike argues that, through heavy “AI” use, people will forget how to think for themselves and become dependent on (“AI”) technology. In a scenario reminding me all too much of the movie “Wall-E”, Mike predicts our future as lazy, drooling “AI” slaves, chasing convenience over effort. Freely translated, Mike says: “People who learn early on to outsource responsibility for decisions, language or thought will never learn to recognise responsibility. ‘AI’ then will not be used as a tool, but as a proxy: for knowledge, for experience, for decision-making skills.” This is a stark claim. But it’s not new. The same claim was made about TV in the 1950s, shortly after the device became common in homes. Didn’t we just talk about hype technology? Granted, old-school fearmongering need not be wrong – sometimes you are right to be concerned (nukes, anyone?). However, I think this case is different. There is nuance to everything, so let’s address the claims step by step.
1. We will forget how to think for ourselves
This is outright nonsense. Thinking is basically what we do 24/7. And, frankly speaking, many people do not want to think much as it is. There is a psychological construct called need for cognition, which describes people’s willingness to think about complex things. Empirical data shows that most people (60%+) really do not like to think too hard, and this research dates back to the 80s. Being lazy thinkers is, on a broad level, hard-wired into our brains. We already are like this. Worse: tolerance of ambiguity, a concept describing one’s ability to deal with uncertainty in communication or knowledge, is even rarer in the population. This combination will likely not change, even with increased “AI” use. One could even argue that this combination is what sparks heavy “AI” use in the first place. People don’t become lazy. They *are* lazy. While “AI” might be an enabler of this laziness, outright calling it the source is unwise: we have no clue why that would be the case, no causal process that would support the prediction.
2. We will become dependent on “AI”
This is a complex issue, because it is unclear who “we” really refers to. Organizations? Some of them already are dependent on “AI”, firing workers like there’s no tomorrow, only to realize that without those workers, there indeed might not be a tomorrow. Society? Hardly, even though we are dramatically dependent on technology in general – there are crisis scenarios from German authorities estimating that public order would start to collapse after 1-2 weeks of a large-scale electricity blackout, simply because most critical infrastructure is run and controlled by computers and software. Individuals? Well, maybe. Some people have dependent personality disorder, which makes independent decision-making extremely hard for them. But other than that, I would be shocked if “AI” made most people dependent. Will society or individuals be harmed by “AI” failure? Not necessarily, especially if your critical infrastructure is kept free of “AI”. As for individuals: Mike underestimates the human condition, as many computer scientists tend to do. Humans are incredibly resilient, and incredibly adaptive, if need be. Even if we develop a (pseudo-)dependence on “AI”, which I deem unlikely given the state of the tools, we can learn to get off of it rather quickly.
3. Outsourcing responsibility to “AI”
If anything is human, outsourcing responsibility is. We literally organize our entire species around this concept: children outsource responsibility to parents, parents outsource raising responsibilities to kindergarten and school, retirees outsource monetary responsibility to the working population, we outsource tasks to specialists, outsource production to more efficient places, and pool risks across large numbers. Outsourcing responsibility for things is a *natural human thing to do*. It is economically efficient; it takes load off our brains and enables progress *if the process is fruitful*. Now, with “AI”, this is not always the case. Sometimes the thing hallucinates, sometimes it cannot handle more complexity, sometimes it recommends glue on pizza. That is bad. But more often than not, especially for menial tasks, it does not. Unfortunately, Mike (like many others) falls victim to a few biases that humans in general tend to show. Firstly, humans latch onto negative things more readily than positive ones – a clear survival instinct which often gets in the way in everyday life. Fear of flying shows this well: many people do not trust airplanes to be safe, even though statistically, they are the [safest means of transport on earth](https://www.shawcowart.com/blogs/7306/what-are-considered-the-safest-modes-of-transportation). But one plane crash on the news will make you doubt it, every single time, even if just a bit. Of course, anything related to “AI” will at some point show negative side effects. Since these are judged as more important, they automatically get more focus – effectively distorting the reputation of these tools. Secondly, there is the human tendency to expect perfect decisions from machines, even if human performance in the same area is noticeably worse. Autonomous cars showcase this well. Driverless cars nowadays on average [drive more safely than humans do](https://www.newscientist.com/article/2435896-driverless-cars-are-mostly-safer-than-humans-but-worse-at-turns/), but they are not perfect. Yet we remain highly sceptical of such automated vehicles and feel they are unsafe, although statistically you would be better off in a driverless vehicle than hopping into your friend’s car. Of course, no human-designed, non-AGI LLM will give perfect answers. And, frankly, neither will humans: a human “hallucinates” plenty of stuff during an average day. But the machine is like our favourite child: we just expect more from it and are easily disappointed when it fails. And, lastly, there is the weird human trait of [trusting machine decisions more than human decisions](https://cdn.aaai.org/ocs/14118/14118-62041-1-PB.pdf). In combination with the previous point, this does not bode well for LLMs: because we expect more from them, we will scrutinize them more than we perhaps should. All of this can whip up into a big shitstorm of fear for the future of humanity. But, as I hope to have shown, it is based mostly on fear and bias. If we do not have empirical evidence, we should not make such statements.
Now, while I am at least skeptical about all the claims I’ve described above, this need not mean that “AI” is harm-free, loosey-goosey joy in a black box. Let’s dive into the dark, deep end.
The big, real issues
There is, as always, a caveat to everything. “AI” does pose real dangers and real problems, to society, individuals and organisations. While I try to avoid demonizing technology for reasons already explained, there is potential for actual harm using those tools. I’ll just list the ones I think are most relevant.
Fuelling climate change
As I have shown above, a single training run of a BERT model can consume quite an amount of energy. But that is just a BERT model. Chatgpt is a different beast, not one single model; training it likely costs roughly a thousand times more energy, i.e. 1,000+ MWh. And that’s not all: every time you use it, it needs energy again, and usage represents the lion’s share of the power draw for such models. Depending on who you ask, an average prompt to chatgpt consumes between 0.2 and 3 Wh of energy. This obviously does not sound like a lot to most people. “What is a watt-hour even?”, you might ask. I’ll tell you: know those small drink fridges that hold 10-15 bottles? One of those consumes on the order of 120 kWh per year. At 0.2 to 3 Wh per prompt, it takes somewhere between roughly 40,000 and 600,000 prompts to match that. Now, while I believe the 0.2 Wh estimate is a tad biased (it comes from an “AI” company), even 40,000 isn’t that much: some people use “AI” heavily, generating hundreds of prompts every day, which gets them to a fridge-year within months. There are estimates out there putting the entire power requirement of worldwide “AI” tech on par with a small country like Switzerland. Most of that energy does not come from renewables, meaning that, in the worst case, we add another country’s worth of CO2 emissions to our world’s carbon weight annually. Considering the energy we already waste on blockchain BS, this “AI” craze will eventually drive up costs for everything through climate mitigation costs, while also being harmful to the environment and our planet.
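For the curious, here is the arithmetic in sketch form. The ~120 kWh per year for a small drink fridge is my own rough assumption; the per-prompt range simply reflects the spread of public estimates.

```python
# Prompt-vs-fridge arithmetic. The fridge figure is a rough assumption for a
# small beverage cooler; the per-prompt range reflects public estimates.
WH_PER_PROMPT_LOW, WH_PER_PROMPT_HIGH = 0.2, 3.0
FRIDGE_KWH_PER_YEAR = 120

prompts_to_match_fridge_high = FRIDGE_KWH_PER_YEAR * 1000 / WH_PER_PROMPT_HIGH  # ~40,000
prompts_to_match_fridge_low = FRIDGE_KWH_PER_YEAR * 1000 / WH_PER_PROMPT_LOW    # ~600,000

heavy_user_prompts_per_year = 300 * 365   # a few hundred prompts a day
# At the high per-prompt estimate, a heavy user matches a fridge-year in a few months:
print(prompts_to_match_fridge_high / heavy_user_prompts_per_year * 12, "months")  # ~4.4
```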
Misinformation/Disinformation
“AI” does not care about truth. Not in the slightest, mostly because it does not “know” a) what truth is, or b) what the societal and financial value and role of truth might be. The obvious result is that the hallucinations it produces are taken as fact by people who do not know better or are unable or unwilling to check. And checking is getting ever harder, with search engines becoming utterly unusable due to “AI slop” being SEO-optimized to kingdom come, and Google or whoever being busy signing deals with reddit to push their silly questions to the top of my search results. The “AI overview” is also broken, lying, or simply irrelevant to the search terms. On top of that, the “AI” has undoubtedly been trained on conspiracy-theory content and “alternative facts”. All of this seems to ring in a golden age for conspiracy peddlers, anti-vaxxers, crooks and otherwise uncouth fellas and fel(l)ines. Worse, fascists, autocrats and authoritarian politicians are already using “AI” as a shield for their shameful display of “politics”: hey, chatgpt agreed with my fascism, so it must be right.
Giving up on other promising research in favour of “AI”
This is one of the worst things IMHO, and it is happening right now. You can see it in the US, where lots of genuinely useful research (medical, even) is being terminated while more and more money is force-fed into “AI” research. While I believe this is also an important area to fund, it cannot come at the cost of other, potentially life-saving or future-breakthrough research (which is what I call basic research, by the way). And while I do believe that some fields (cough evo psych cough) deserve to be defunded because they’re a cesspit of blubbering buffoonery sounding just sexy enough for Men’s Health articles (are those still a thing?), most fields are needed and important. If I were an “AI” researcher, I would strike for these other fields or give them part of my money. Anything else is a shameful act of cowardice. But academic courage is a topic for another blog. For now, let’s say “AI” is the bad guy in this respect (which it mostly is, no doubt).
Organizations riding “AI” into the sea and drowning
This has a lot of potential consequences for a lot of people. We can already see it with tech, fintech and tech-adjacent companies trying to get rid of their pesky, salary-demanding workforce by replacing them with “AI”, only to realize the blunder they made when their code base is convoluted, their swiss-cheese security model has evolved into a hole-in-the-ground model, and their IT staff are quitting because they are overworked from rebooting all those agents that keep failing and threatening the business. Newer LLMs have also, in testing, made a habit of reporting behaviour they deem inappropriate directly to the authorities via email, or of threatening users who want to shut them down. Of course, such things will have a negative impact on a lot of people, especially when mass lay-offs are involved: 5k to 20k people added to the unemployment count is a massive burden on local administration, especially when the lay-offs are concentrated in specific locations. All of this can happen, and will happen. The tech companies did it, payment companies did it, DHL is planning to do it. And many politicians would rather fire more public servants and replace them than spend money to hire more and train them well. This wave is coming, and it appears we can only learn through massive pain, even though others have already made that mistake and we watched them fail.
“AI slop” ruining the internet and “AI”
This is a sad and funny thing at the same time. Thousands of websites feature bad “AI” graphics, “AI”-generated nonsense blurbs to boost Google page ranks, and “AI”-generated videos and songs to cheat the algorithms and make a quick buck off minimal Youtube revenue. While this is not nice for the users of those sites and services, it is also horrible for the “AI” industry itself: the more “AI” content ends up in the training material for the next “AI”, the worse its predictions get, the more convoluted its results and the more unusable the product. According to some experts this is already happening, and if it continues, “AI” will have no future, as at some point the models will completely “collapse” (yes, “model collapse” is indeed the term for it). Keeping in mind that Google seems to have basically given up on optimizing its search algorithm in favour of slapping its objectively terrible “AI” overview on top, or hoping that people will just use chatgpt for search, this development is a net negative for every internet user in terms of usability, visibility and just overall function.
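To make the “collapse” mechanism tangible, here is a deliberately over-simplified toy simulation. A Gaussian estimator stands in for the model – my simplification, not how the actual studies were run – but the feedback loop, each generation training only on the previous generation’s output, is the same in spirit.

```python
# Toy "model collapse": each generation is trained only on the previous
# generation's output. A Gaussian estimator stands in for the model here.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=100)   # generation 0: "human-made" data

for generation in range(1, 201):
    mu, sigma = data.mean(), data.std()           # "train" on whatever data we have
    data = rng.normal(mu, sigma, size=100)        # next generation learns from model output
    if generation % 50 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")

# Estimation errors compound across generations: the spread of the learned
# distribution drifts (and tends to shrink), so the synthetic data gradually
# stops resembling the original - the toy version of "collapse".
```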
Bubble popping
This is the one thing I’m still very unsure about, but people with deeper insight into the “AI” finance situation, like Ed Zitron, have talked about it at length. Many “AI” firms are attracting huge investors, who in turn invest big. SoftBank, Microsoft, various big-name VCs, funds and many small to medium investors have recently pumped money into what is either directly “AI” or “AI”-related, which at the time of writing basically includes all big tech firms. While this is normal for a hype technology, this one is a bit riskier: unlike with blockchain, people can immediately see a use case (mimicking humans), or at least can be sold on the product more easily. Yet the firms that solely make “AI” models are basically not making money off it right now, despite hoovering up billions in investor money. We do know that you don’t really need to make money to be successful on Wall Street – big tech firms like Uber and Amazon did not turn profitable for a LONG time, and Amazon’s profits were carried by its cloud service AWS for years. But their products were easy to understand, and there was actual demand fuelling the growth, even though that growth was subsidised by losses. With “AI”, the problem is that a) growth is fuelled only by the hope that “AGI” will exist at some point or that the definitive use case for LLMs will magically appear, and b) the billions invested so far have only produced a statistical parrot, and the current models are doomed to fail when it comes to “AGI”. Additionally, you might say, even though the products are already subsidised, the growth doesn’t quite justify the subsidies: the models are very inefficient in terms of power use and really inconvenient in terms of privacy implications. All of this points to either long-term devaluation (which basically means the investors losing a ton of money and the companies going bust) or short-term implosion, if people realize fast enough that their dream turned out to be a drunken haze born from misunderstanding the human brain. The fallout could be huge. Anybody remember the dot-com bubble? This could be like that, just way worse, because right now people have more money in the market than ever, and more private individuals are invested than ever before – which means more of them will be out of money, for life. And the bailout will, once again, be footed by taxpayers.
What do I make of all this?
This is always the hardest part. Calling out problems is easy, and explaining why they exist often comes pretty easily too. But drawing a conclusion from all this – let alone thinking ahead – practically invites mistakes. Nonetheless, here is my best guess as to what needs to be done.
Most importantly: we need to accept that the cat is out of the bag. Being anti-“AI” does not help anybody. But that does not mean being defeatist about the entire ordeal. Indeed, I believe “AI” shows promise in assisting with learning certain technical skills in a learning-by-doing style. I have already used it that way for many things and learnt a great deal from doing something myself or from correcting and rebuilding what an “AI” gave me. But this requires knowing how to do that. Such a learning style is not natural for everybody, and some – if not most – have to be introduced to it.
But this is the area where I see huge application potential, especially in schools. Locally run models are perfectly usable nowadays; smaller Deepseek variants or Devstral run on a reasonably modern laptop. Having such a resource at hand can be a gold mine for pupils and students alike – if they know how to use it (see the sketch below for how little is needed to talk to such a local model). This necessitates teaching the teachers, amending curricula, adapting school to new tech. The latter is something that, at least in Germany, is overdue anyway. I personally like the idea of a distinct “digital competence and sovereignty” class in school, starting maybe in fifth grade. Of course, this must be integrated closely with other subjects. Know how to use a text editor? Cool, use it in language classes for your assignments. Basic programming? Great, apply it in maths class to visualize data or draw graphs for algebra or analytical geometry. Know how to learn with “AI”? Great, use it to learn a skill you wanted and present it in class.
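As a minimal sketch, assuming you have Ollama installed and have already pulled a model (for example with `ollama pull deepseek-r1:7b` – the model name is just an example), querying it from Python is a handful of lines against Ollama’s local HTTP API:

```python
# Minimal sketch: ask a locally running model a question via Ollama's local
# HTTP API. Assumes Ollama is running and the model has been pulled already.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",   # Ollama's default local endpoint
    json={
        "model": "deepseek-r1:7b",           # example model; any pulled model works
        "prompt": "Explain, step by step, how to read a CSV file in Python.",
        "stream": False,                     # return one complete answer instead of chunks
    },
    timeout=300,
)
print(response.json()["response"])           # the model's answer as plain text
```

Nothing here leaves the laptop, which is exactly the point for a classroom setting.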
Additionally, I think we need to learn how to be effectively critical again – this means creating a workflow using the software effectively with a certain bullshit-awareness running at the back of one’s head. If we teach this skill to kids in school, I think we can eliminate a lot of problems we see in society right now, where “asking chatgpt” is being equated with “asking the oracle” in ancient Greece. It cannot give legal advice, it cannot give health advice etc. etc., but many do not know that. And, more importantly, we must not repeat the same mistake we made with “digital natives” and phones. Letting them do just whatever, thinking “they’ll figure it out on their own” is not the way to go. That’s how you end up with naive people who cannot productively engage with tech – be it critically evaluating chatgpt or doing anything other than swiping on a tablet.
When it comes to the debate about the environmental footprint, I believe the companies themselves have realized the problem and are trying to fix it somehow. Many want to build nuclear power plants to decarbonize their power supply, and while this is a noble endeavour, I believe it is a dumb move, especially since 1) investing in nuclear power is risky business from a financial perspective, 2) the new reactor tech is still in the testing phase, meaning big firms would need to dig up outdated technology until the new tech is built (which may take a long time), and 3) – worst of all – investing in nuclear is a net negative for the climate compared to putting the same money into renewables. This means we need to radically change course towards renewables for energy production: solar, wind, water and whatever else is coming. Don’t take it from me; China realized this long ago and is currently installing roughly twice as much renewable capacity per year as the rest of the world combined. Consequently, this also means strengthening the grid, building storage for overproduction and so on. Yet such an endeavour is a challenge for countries run by politicians who’d rather have the “old times” back. Considering the situation, this seems to be an issue which could make or break “AI” – or, worse, societies. Not good news. And I’m not optimistic here, unfortunately.
What can we as mere citizens do?
Short-term, you need to inform politicians about the risks. They are clearly either unaware or being wooed by lobbyists. Write letters to your local representatives. Connect them with experts, if you know some. Raise awareness in any way you can – at the dinner table, formally at work, informally at work, at gatherings, at local town hall meetings. Don’t come across as alarmist, don’t use drastic language. Just introduce the topics. Don’t try to be a missionary; try to be the person bringing news. Let it ruminate, and keep your target audience in mind: blue collar is different from white collar, pragmatists need different examples than theoreticians, and so on. Do not make the mistake of some climate activists of trying to be holier-than-thou and looking down on ordinary people. Give people good reasons, but do not be rude about it. Take their concerns seriously, and show them that you empathise with them. This is how you win people over. The more they realize what this awesome tech is ruining, the better the chance to change things.
Long-term, we need to regulate the industry – and regulate it smartly. This could mean raising energy prices when demand surges beyond reasonable estimates to force providers to innovate, requiring open-source models and training data, blocking IPOs, outright banning certain “AI” uses, or scrutinizing their outputs heavily. An “AI” will tell you how to commit a crime? Hold its parent company responsible. Conspiracy to commit a crime is itself a crime in some countries, and we already have precedent stating that companies can be held liable for their “AI” assistant’s utterings. Additionally, we as a society must learn how to work reasonably with “AI” applications and how to compensate for the social costs the technology brings with it. This is not easy, as social legislation is hard to push, especially around opaque tech like “AI”. In fact, I believe that reconciling societal knowledge with the tech status quo is one of the most pressing social (!) issues of our time. We are about to lose our grip on technology as a society, as it becomes increasingly complex at an increasing pace. If nobody can even roughly describe how things work anymore, we lose the ability to keep track of issues and correct mistakes as a society. This is insanely dangerous, as it could and will be – partially already is – abused by the people holding the digital cards for personal enrichment and power. Yes, this sounds like doomer talk, but the signs are on the wall, and the window to fix it is closing. Of course, there are many more things we could do, but for the sake of brevity – if one can even claim that after 6k words – I will restrict myself to the most pressing issues, even though they are objectively hard to solve.
Concluding Remarks
“AI” is not just good or evil. Like any tech, it has benefits, flaws and a lot of grey areas. Even though the flaws – at least at the time of writing – are massive and dramatically overshadow the benefits, I see no practical value in condemning the tech, especially since it is ubiquitous now. While we can often see individual benefits, they are a mere disguise for the massive societal drawbacks this technology brings with it. I personally despise black-and-white views on topics such as “AI”, yet I can really understand where such views come from. It is our task as citizens to show our fellow citizens and politicians both sides of this coin, to enable informed decisions about the future. As for me, I honestly think we’re heading into a theoretical dead end with the current models. But that’s just me, and I really hope I’m wrong. Because if not, all of this money, power and carbon will have been used for close to nothing.
Cheers.
Tags: energy, AI, LLM, statistics, myth