In 2024, artificial intelligence will make the world even less real: illusion will be impossible to distinguish from truth
In 2024, images created by artificial intelligence will continue to erode our understanding of visual truth. And they will do it more efficiently than any such innovation has ever managed before.
"All of the advances in the field," TechRadar reports, "from pen-and-ink illustration to the capabilities of Adobe Photoshop and computer graphics, pale in comparison to images from Midjourney, DALL-E, and Adobe Firefly. These generative image systems make the literal out of the imaginary, effortlessly and in minutes."
And when such a neural network creates another masterpiece, you will never be able to identify its sources: the intermediaries or artists whose work it drew on.
Each pixel will look as imaginary or real as you want it to. And if you let these images (or videos) travel the world, the question of their origin will remain a mystery to everyone.
Is the truth dead?
We can no longer believe in what we see. The images of generative AI have unintentionally but finally destroyed the concept of "Seeing is believing."
And this turning point in image-making could not have come at a worse time: the real and the authentic are under increasing attack, no one believes the government, the media, or the church anymore, and officials, who until recently were considered the backbone of society, have finally discredited themselves.
People increasingly want their own "world," one that confirms the ideas and beliefs they already hold. Into this vacuum of truth, generative AI, with its capabilities and relentless self-improvement, has stepped "as fittingly as possible". Now, if you don't have an image or video to support the conclusion you need, you can easily create one with a few prompts in Midjourney or a similar neural network.
Image generation with AI at this level is both fascinating and dangerous. It is a tool like any other, and like any tool, it reflects the person using it.
People with good intentions will use the power of AI's increased productivity for good. And those with bad intentions will actively spread disinformation and lies.
The world has become even more illusory
Humans are not designed to search for truth. We react to what we see, hear, touch, and feel. Most information comes to us through sight. What we see is what we believe. If this were not the case, such a wonderful idea as cinema would have failed at the moment of its birth.
A movie is a series of still frames, and our brain fuses what we see into a believable whole. Nothing actually moves smoothly in front of us when we watch TV or a film; the higher the frame rate (FPS), the smoother the action appears. But it is not real motion.
We watch movies knowing that the dinosaurs are computer-generated, but that doesn't stop some part of us from feeling emotion and worrying about their fate. Unfortunately, we are very easy to manipulate, and we happily pay for the privilege.
What is the problem with AI in this context?
Images generated by neural networks exploit this human vulnerability without asking permission. They present pictures that, however implausible, look realistic enough to become part of our mental picture of the world, making us believe in things that are not real.
Dangerous workarounds
Most image-creation tools have weak safeguards. Yes, they prevent you from creating explicit adult content or depicting public figures in compromising situations, but these prohibitions are quite easy to circumvent. For example, someone aptly pointed out that if you can't depict certain politicians covered in blood, you can ask Midjourney to pour red syrup on them, and it will immediately comply.
There's more to come. In 2024, we are no longer talking only about static images: Midjourney is now hard at work teaching its AI to generate video. Creating believable video with generative artificial intelligence is several orders of magnitude harder than generating still images, but within the next six years we will probably see 10-second clips that are virtually indistinguishable from real life.
Seeing, therefore, no longer means believing. What appears on the screen may or may not reflect the truth. And if you see something that confirms your assumptions, look again. Examine the image for anomalies, or for too much perfection. Look for pores on the skin (or the lack of them). Count the fingers. Study the background. If nothing helps, start your own research, become a truth hunter, and question everything.
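Part of that research can be automated. As a minimal illustration (not a reliable detector), the stdlib-only Python sketch below walks a JPEG's segment list and reports whether it carries an EXIF metadata block. Many camera photos include one, while many generator exports and re-encoded copies do not. This is only a weak hint: metadata is trivial to strip or forge, so its absence proves nothing, and its presence proves even less. Provenance standards such as C2PA "Content Credentials" are the more serious approach to the problem.

```python
def has_exif_marker(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an APP1/Exif segment.

    A weak heuristic only: absence of camera metadata merely *hints*
    that an image may be synthetic or re-encoded, and metadata can be
    forged or stripped at will.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        return False  # not a JPEG at all
    i = 2
    # Walk the JPEG segment list: each segment is 0xFF, a marker byte,
    # then a 2-byte big-endian length that includes the length field.
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:
            break  # start-of-scan: no more metadata segments follow
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        # APP1 (0xE1) segments holding EXIF start with the "Exif" header.
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 8] == b"Exif":
            return True
        i += 2 + length
    return False
```

For example, `has_exif_marker(open("photo.jpg", "rb").read())` returns True for a typical smartphone photo and False for most freshly generated images, but treat the answer as one clue among many, never as a verdict.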