Navigating the AI Challenge: Growing Distrust in Online Videos and Images

As artificial intelligence continues to expand its capabilities in generating videos and images, the authenticity of all visual content on the internet and social media is increasingly being called into question.
My own skepticism toward clips and images on social networks has intensified to the point of obsession, becoming an almost automatic truth-detection reflex. It now kicks in even when I am viewing content that is evidently real, making me pause to verify its authenticity. I call this phenomenon the ‘truth-checking syndrome’: an irritating yet useful defense against deception.
You may have experienced something similar: a compulsion to doubt the legitimacy of online photos or videos.
The key question is whether there are ways to distinguish AI-generated content from authentic visuals.
Fortunately, there are numerous methods for identifying fake images and videos produced by AI, although none are 100% reliable. AI has advanced so rapidly that it now challenges human intelligence, and it is set to become even more formidable in the future.
Personally, I use several tools and software applications to detect falsified content. However, due to my general distrust of internet-based tools, I refrain from recommending them to others. There’s always the risk that some may contain malware, potentially causing harm.
Beyond technological tools, certain experiential and visual cues can help identify fake images and videos, and they are especially telling in video. Common indicators include:
– Unnatural details in hands and fingers
– Repetitive and uniform patterns in backgrounds
– Illogical shadows and lighting
– Blurred or incomplete objects
– Micro-jitters during motion
– Sudden or inexplicable object appearances
– Incorrect text or numbers on signs and clothing
– Unnatural appearance of hair, jewelry, teeth, and ears
– Audio that doesn’t sync with visuals
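Alongside visual inspection, one quick programmatic check is sometimes possible without relying on third-party online tools: several AI image generators write their prompt and settings into PNG text chunks (some Stable Diffusion front-ends, for example, use a ‘parameters’ keyword). The sketch below is a minimal, standard-library-only parser for those chunks; the function name and the sample keyword are my own illustration, not part of any standard detection tool, and the absence of such metadata proves nothing, since it is trivially stripped when a file is re-saved or re-uploaded.

```python
import struct

def png_text_chunks(data: bytes) -> dict:
    """Return keyword -> value pairs from the tEXt chunks of raw PNG bytes.

    Some AI image generators embed their prompt and settings here
    (e.g. under a 'parameters' keyword), so finding such a chunk can
    be a hint that an image was machine-generated. Absence proves
    nothing: metadata is easily stripped.
    """
    if not data.startswith(b"\x89PNG\r\n\x1a\n"):
        raise ValueError("not a PNG file")
    pos, found = 8, {}  # skip the 8-byte PNG signature
    while pos < len(data):
        # Each chunk: 4-byte length, 4-byte type, payload, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        payload = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt payload is keyword, NUL separator, Latin-1 text.
            key, _, value = payload.partition(b"\x00")
            found[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # advance past length + type + payload + CRC
    return found
```

Checking a suspicious download is then one call: `png_text_chunks(open("image.png", "rb").read())`, with any ‘parameters’ or similar generator keyword in the result treated as a clue rather than proof.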
While these techniques are not foolproof, they can be helpful. The most important tool in spotting fake visual content is our own intelligence and general knowledge of the world. The more aware and insightful we are, the less likely we are to be deceived by AI-generated material. However, it’s important to recognize that advancing technology will make detecting deepfakes increasingly difficult.
In the past, images were manipulated with Photoshop and videos were edited manually. Today, we face deepfakes that can fool even seasoned professionals. The images attached to this article, for example, were created by AI, yet they do not appear fake at first glance.
While we should embrace technological progress, the misuse of AI not only disrupts everyday life but also threatens professions such as judicial and investigative work, which rely heavily on visual evidence. We may reach a point where videos and images no longer serve as credible proof in legal proceedings.
At the same time, as the technology advances, cybercrime is also on the rise, posing a direct threat to people's lives. In this digital landscape, we may be forced to adopt a default stance that every image and video online is fake—until proven otherwise.
Mohammad Moradi




