Henry Jobe, Contributing Writer
In the mid-20th century, rapid advances in technology and growing access to it gave rise to the term “Information Age.”
This historical period was characterized by access to incredible quantities of knowledge. Anyone from almost anywhere in the world could look up information on the internet, and that reach only grew as more data was uploaded and technologies like cell phones and personal computers became cheaper and more widely available.
We have reached the definitive end of the Information Age.
With the rise of misinformation, amplified by social media bots and generative artificial intelligence programs like the newly released Sora 2, it’s becoming increasingly hard to believe anything you see on the internet.
There has always been a level of awareness required to navigate the internet effectively. It isn’t particularly difficult to create a mildly convincing Photoshop edit or to improperly alter information on openly editable websites like Wikipedia. However, these methods require skill with editing programs or the ability to sneak past attentive moderation. AI programs, by contrast, are not only free and accessible but also incredibly easy to use — and abuse.
These programs have improved so rapidly that many of the tells once relied upon to spot fake images — too many fingers or inconsistent backgrounds, for example — no longer work. It has become increasingly difficult to determine whether videos and images are AI-generated without careful inspection.
Just look at websites like Facebook or X, which have become flooded with provocative videos and photos that are passed off as real.
The VCU community has already gotten a taste of the issues AI social media posts create. In October 2024, Rooz Dadabhoy, a member of VCU’s Board of Visitors, posted a fake image of a girl in a lifejacket holding a puppy during a hurricane, accompanied by a politically charged and aggressive caption.
Using fake images or videos to make a political statement is grossly irresponsible and has become concerningly common.
Several government agencies, including the Department of Homeland Security, have posted numerous AI-generated images and videos online. These come across as exceptionally sinister, presenting entirely fabricated events as fact.
Video evidence is also an area of concern. For the longest time, footage from dashcams and security cameras served as irrefutable evidence in court, unmistakable proof of who committed a crime. But AI has become very good at mimicking security camera footage, and unless courts can consistently verify video evidence as legitimate, it will likely lose much of its power.
With easy access to this new technology, what is to stop someone from modifying security camera footage to replace a burglar with someone else, or creating entirely new dashcam footage that frames the perpetrator as faultless?
Taken together, these elements have led to a necessary rise in distrust of what we see online, as the internet is flooded with false information, both textual and visual.
With tech companies like Google and Meta uninterested in stopping the flood, it’s up to each of us to navigate the flow and stay properly informed in the Misinformation Age we find ourselves in.
Two methods can help protect you from unknowingly consuming falsified information.
First, find news and social media accounts that you trust to not use AI — even better if they have staunchly refused to. While this won’t prevent you from seeing AI-generated posts entirely, you will have confidence that the posts from these accounts are real.
Second, improve your own ability to recognize AI posts — there are still ways to spot them if you know what to look for. Instagram accounts such as @showtoolsAI have been essential in keeping me up to date on the latest tells. While many of the older AI glitches aren’t as common these days, telltale signs remain, and developing that skill set will only become more important as generative AI gets harder to detect.
Most importantly, remember that nothing exists in a vacuum — leaving a like on an AI video of raccoons bouncing on a trampoline or Stephen Hawking hitting “sick” tricks at the skatepark tells bot farms and corporations there is a market for this content. Using or viewing AI content helps it improve, and attention is its reward.
Use this as an opportunity to improve your media literacy — your future self will thank you.
