AI is probably the most controversial subject right now, and I’m not going to get into the ethics of it — that has been discussed more than almost anything else lately. Instead, I just want to point out how AI works and what you actually end up with.
In this case, I’m talking about LLMs — text generators. These models produce a string of text by predicting the next word: at each step, they statistically estimate which word is most likely to come next and pick it. That’s why the result is sometimes imprecise — the model can be wrong, or fill in gaps with made-up facts. In some use cases, LLMs are great; for example, processing large batches of data.
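To make the "predict the next word" idea concrete, here's a toy sketch in Python. This is not how real LLMs work internally — they use neural networks over tokens, not word-frequency tables — but it shows the core mechanic: generate text by repeatedly sampling a statistically likely next word. The tiny corpus and function names are my own illustration.

```python
import random
from collections import defaultdict

# A tiny "training corpus" (purely illustrative).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which words followed which: our stand-in for learned probabilities.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Build text by sampling each next word in proportion to how
    often it followed the current word in the corpus."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # no observed continuation: stop
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

Every sentence it produces is locally plausible (each word really did follow the previous one somewhere in the corpus), yet nothing guarantees the whole is true or original — which is exactly the failure mode described above, just at a much smaller scale.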
However, more and more I see people using it to express themselves. Sure, it helps you form an expanded thought — it’s quick and easy! Yet there’s a big “but.” Is it really your thought? Or is it an average? What you end up with is an average opinion, nothing more. For some, being average is enough. With so many brains shaped by the same thought processes and environmental influences, originality is hard to come by as it is. But outsourcing the process to AI-generated text drowns those tiny threads of originality in a tub of average.
