Recently, Forbes detailed “an important and dangerous new phenomenon in AI: deepfakes. Deepfake technology enables anyone with a computer and an Internet connection to create realistic-looking photos and videos of people saying and doing things that they did not actually say or do.”[i]
Deepfakes often feature famous people such as celebrities or politicians. They are produced by using neural networks to map points on a face and then using those points of imagery to swap faces between two people, bringing an individual who was never actually there into the final product.[ii] This type of trickery was once available only to Hollywood’s best, with equipment and software only filmmakers could access. However, thanks to new strides in software development, deepfaking is now easily available to anybody across the Internet via numerous downloadable apps. Furthermore, the digital realism attainable with this software is more convincing, and the software simpler to use, than ever; results that once required extensive training and expertise (not to mention time and money) can now be achieved by anyone with an Internet connection.[iii]
While the newest deepfakes often involve an actual swapping of faces between an actor and a “target” (the unsuspecting person who is about to star in a production without his or her knowledge or consent), tampering with imagery to skew people’s perception of an individual is not necessarily new, and has not always involved trading people’s actual faces. It can be as simple as subtly altering lighting or speed—as illustrated by a recently doctored video of House Speaker Nancy Pelosi, wherein the speed was slowed just enough to make it appear that she was under the influence of some type of substance, slurring her words as her eyes dragged dully across the screen.[iv] During the weeks following the release of this altered video, headlines and social media assertions regarding her mental capacities during the address raised questions about her competence in general.[v]
The untampered video, played at regular speed, showed the speaker giving a simple address.[vi] Other pranksters have used the same technology to superimpose Nicolas Cage’s face into movies he didn’t star in, such as The Disaster Artist, The Sound of Music, and an entire scene of Friends,[vii] wherein each of the six friends’ faces was morphed into his. Other efforts went so far as to plug his face into the role of Lois Lane in the Superman movie Man of Steel.[viii]
Reddit is a social website where users post material, usually via hyperlink, that others can view and rate based on their own opinions. The term “deepfakes” originally showed up on this site, where a user by that name began to create and publish pornographic videos in which the faces of certain celebrities were imposed over those of the original performers.[ix] Before long, the notion caught on, and freelance software writers began to create similar programs, many of which were soon released on the Internet for free. Despite early attempts to contain and eliminate the production of such films, the means had been released, and the trend gained momentum. Now, new, often harmless or silly versions of such deepfake creations emerge almost daily—usually targeting celebrities or politicians—and the technology is here to stay.[x]
How does a deepfake work?
Many people are familiar with whimsical websites like JibJab that allow users to upload and paste an image of someone’s face into an animation—typically a dance routine. This (now outdated) software often misses the mark when it comes to authenticity; the result is typically silly-looking, with a visible disconnect between the head and body of the image, creating a caricature-like effect. But the technology available today to produce a high-quality deepfake has progressed far beyond this simple cut-and-paste technique.
In order to understand how a computer program can convincingly interchange the images of human faces, it helps to have a basic understanding of how the technology in this area has evolved over recent years. The software that generates deepfakes is widely available; whether you realize it or not, you’ve probably had your own interactions with it. For example, you or someone you know may have taken a selfie that morphs a person’s face into, say, a puppy’s, or makes one look much older or younger. Or you may have experimented with filters that add such features as horns, haloes, or beautiful, glowing flowers to the top of a person’s head. If you’ve done this, then you’ve tinkered with the very technology that makes these sometimes-deviant, deepfake videos possible.[xi]
Recall the first time you saw this type of app, likely on your smartphone or a social media account: as a first step in the setup, the interface likely said it needed to create a “mask,” or digital image, of your face. This springs from facial-recognition technologies that have been in operation since the end of the 1990s.[xii] Since its initial development, however, the technology’s effectiveness has greatly improved, producing much more accurate depictions of superimposed faces.[xiii] Essentially, this new and improved software not only reads the whole face like its predecessors, but can now detect key, pinpointed locations on the face from which expressions are generated, such as the borders of the lips, the areas around the eyes and nose, the borders of the jaw, and the continuous line that indicates the shape across both brows.[xiv] Then, using the textures, colors, and measurements of the facial structure, the system creates a “point-mask,”[xv] similar to a digital, three-dimensional map of the face. By tracking these interconnections across motions, the user is able to move his or her face on camera while the program traces the movements and keeps the point-mask in place.[xvi] This causes the digital “add-ons” (such as the flowers) to appear to remain attached to the image, even when the person is in motion.
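The mechanics of a point-mask can be sketched in a few lines of Python. The example below is purely illustrative: the landmark names and coordinates are invented, and real trackers follow dozens of points in three dimensions. It represents a handful of tracked points, moves the whole face rigidly (as a tracker would re-estimate it each frame), and shows that an overlay defined relative to a landmark stays attached as the face moves.

```python
import numpy as np

# A toy 2-D "point-mask": a few of the key facial landmarks described in
# the text. Names and coordinates are invented for illustration.
point_mask = {
    "left_eye":    np.array([-30.0, 20.0]),
    "right_eye":   np.array([ 30.0, 20.0]),
    "nose_tip":    np.array([  0.0,  0.0]),
    "mouth_left":  np.array([-20.0, -25.0]),
    "mouth_right": np.array([ 20.0, -25.0]),
    "forehead":    np.array([  0.0, 45.0]),
}

def move_face(mask, angle_deg, shift):
    """Simulate the face moving on camera: every tracked point is rotated
    and translated rigidly, as a tracker would re-estimate it each frame."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    return {name: R @ p + np.asarray(shift, float) for name, p in mask.items()}

# A digital "add-on" (say, a flower) is defined RELATIVE to a landmark,
# so it stays attached no matter how the head turns or shifts.
flower_offset = np.array([0.0, 10.0])      # ten units above the forehead point

frame2 = move_face(point_mask, angle_deg=15, shift=(5.0, -3.0))
flower_position = frame2["forehead"] + flower_offset
```

Because the overlay is anchored to a tracked point rather than to a fixed screen position, it follows the face through any rigid motion, which is the effect the filters described above produce.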
Understanding this basic technique, it becomes more apparent how a computer can effectively superimpose one face onto another. All it takes is for both faces to have a point-mask created (which can be done effectively via multiple still-frame images if live video is not available), and then for one person’s key point locations to be fused to the correlating locations on the other person’s image. This is why the result doesn’t look like a dated, cut-and-paste process; rather, one’s face appears to literally become the other’s. Similarly, this is why, when meshing together the facial images of two people whose bone structures vary greatly, the result is undeniably the target’s face, yet with something appearing “off.”
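The fusing of one face’s key points to the correlating locations on another can be framed as a least-squares alignment problem. The Python sketch below is a standard Procrustes-style fit, not any particular deepfake app’s code, and the demo landmark coordinates are invented: it estimates the scale, rotation, and translation that best map one set of corresponding landmarks onto another.

```python
import numpy as np

def fit_similarity(source, target):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping one set of landmarks onto corresponding points on another face.
    Inputs are (N, 2) arrays of matching points (eye corners, nose tip, ...)."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    X, Y = source - mu_s, target - mu_t
    U, S, Vt = np.linalg.svd(X.T @ Y)       # cross-covariance of the point sets
    if np.linalg.det((U @ Vt).T) < 0:       # forbid mirror-image solutions
        Vt[-1] *= -1
        S[-1] *= -1
    R = (U @ Vt).T                          # optimal rotation
    s = S.sum() / (X ** 2).sum()            # optimal isotropic scale
    t = mu_t - s * mu_s @ R.T
    return s, R, t

def warp(points, s, R, t):
    """Apply the fitted transform to the source points."""
    return s * points @ R.T + t

# Demo with invented landmark sets: the "actor" points and a rotated,
# scaled, shifted copy standing in for the "target."
actor = np.array([[0.0, 0.0], [30.0, 0.0], [15.0, 20.0],
                  [5.0, 40.0], [25.0, 40.0], [15.0, 55.0]])
angle = np.deg2rad(12)
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
target = 1.2 * actor @ R_true.T + np.array([40.0, -10.0])

s, R, t = fit_similarity(actor, target)
fitted = warp(actor, s, R, t)
```

This also hints at why mismatched bone structures look “off”: a similarity transform can scale, rotate, and shift the whole point set, but it cannot reshape a wide jaw into a narrow one, so the residual differences survive the swap.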
Many of the deepfakes currently circulated are produced good-naturedly, like the collection of laughable Nicolas Cage movies. However, someone wanting to create a convincing deepfake for malevolent purposes would begin by making a live-action video of the desired content, then fuse the target’s face into place by simply uploading a variety of images of the unsuspecting person into the desired software. This is why celebrities and politicians can be particularly vulnerable to this type of forgery: their pictures are easily accessible, and the more images of a person one is able to upload, the more convincing the swap will be, because the point-mask has plenty of facial angles to map. Because the software maps the features during movement, it follows the points of motion during speech, causing the incoming face to follow the expressions of the originally filmed individual. Once the face-swap has been made, the computer then checks and rechecks itself, correcting signs of counterfeiting until it no longer detects flaws in the video.[xvii] All it really takes is a simple app and an actor or actress “of similar build [and body language] and most of the work is done for you by the algorithms.”[xviii]
Furthermore, the target’s voice can be copied as well, using software with an adaptive algorithm that analyzes the fluctuations, pitch, and tones of a person’s voice. Once this analysis is completed, all one need do is type a desired phrase into a computer, and the audio will “speak” the words, digitally matching the voice of the man or woman analyzed.[xix] Again, this is one reason celebrities and politicians are easier targets for deepfake efforts: abundant audio clips are readily accessible.
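The pitch analysis mentioned above can be illustrated with a toy autocorrelation estimator. This Python sketch is far simpler than real voice-cloning software, which models timbre, cadence, and phonemes as well, but it shows the basic idea of extracting a vocal characteristic from raw audio; the synthetic “voice” here is just a tone with one harmonic.

```python
import numpy as np

def estimate_pitch(signal, sample_rate):
    """Crude fundamental-frequency estimate via autocorrelation: find the
    lag (beyond lag zero) at which the signal best matches a shifted copy
    of itself, then convert that lag to a frequency."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    min_lag = int(sample_rate / 500)        # ignore implausibly high pitches
    best_lag = min_lag + int(np.argmax(corr[min_lag:]))
    return sample_rate / best_lag

# A synthetic "voice": a 220 Hz fundamental plus one harmonic, a quarter
# second of audio at a 16 kHz sample rate.
sr = 16000
t = np.arange(sr // 4) / sr
voice = np.sin(2 * np.pi * 220 * t) + 0.4 * np.sin(2 * np.pi * 440 * t)
pitch = estimate_pitch(voice, sr)           # close to 220 Hz
```

A cloning system builds on many such measurements, fitting a model of how the speaker’s pitch and timbre vary across phrases so that typed text can be rendered in the analyzed voice.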
Modern software, unlike the older cut-and-paste technology, is the digital equivalent of attaching one person’s skin and soft tissue to another’s skeleton. This said, it is understandable that some deepfakes are more easily identified than others. As mentioned earlier, if the two facial images being fused don’t have similar bone structure, the result is the image of a face that is almost identical to the target’s, but that still appears slightly off.
Take, for example, the deepfake that morphed James Franco’s face into that of Nicolas Cage, making Cage the “replacement” star of the counterfeit version of The Disaster Artist. Because Franco’s face has a wider bone structure than Cage’s, it’s fairly easy to see that, despite the fact that it’s Cage’s face, something isn’t quite right about it: The cheekbones are wider set and the jaw is broader than Cage’s.[xx] By contrast, the image of Keanu Reeves’ face after it is placed over that of Tom Hanks in Forrest Gump is an absolutely convincing replacement.[xxi] In yet another example, it’s easy to see that President Trump’s face has been superimposed over Alec Baldwin’s in a creative revamp of a Saturday Night Live impersonation.[xxii]
Another way to spot a deepfake is through body language and build. Since the footage is acted first by an impersonator, mistakes in mimicking the target’s body language, or even differences in build, can at times give away the switch. However, as mentioned previously, a good impersonator or look-alike can overcome much of this variance. Further, artificial intelligence is being used to perform checks and rechecks via a perfection technique called generative adversarial networks (GAN), which will soon make it nearly impossible even for a computer to distinguish the real from the fake. Using GAN, computer software “compete[s] with itself,”[xxiii] essentially checking the fake for telltale flaws, then correcting them until the software itself is fooled.[xxiv] It is said of this technique: “The image-generating software will keep improving until it finds a way to beat the network that is spotting fakes, producing images that are statistically precise, pure computational hallucinations.”[xxv]
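The adversarial idea of software competing with itself can be illustrated with a deliberately simplified Python toy. Here the “footage” is just numbers, the discriminator’s only test is the gap between statistical means, and the generator corrects exactly the flaw detected, iterating until the discriminator is fooled; real GANs learn both sides with neural networks, and everything below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
real = rng.normal(5.0, 1.0, size=1000)     # statistics of "authentic" footage

def detector_accuracy(real, fake):
    """A simple discriminator: call a sample 'real' if it lies closer to
    the real mean than to the fake mean. 0.5 accuracy == completely fooled."""
    mu_r, mu_f = real.mean(), fake.mean()
    hits = (np.abs(real - mu_r) < np.abs(real - mu_f)).sum() \
         + (np.abs(fake - mu_f) < np.abs(fake - mu_r)).sum()
    return hits / (len(real) + len(fake))

theta = 0.0                                 # generator parameter: fake mean
acc_before = detector_accuracy(real, rng.normal(theta, 1.0, 1000))

for step in range(200):
    fake = rng.normal(theta, 1.0, size=1000)
    flaw = real.mean() - fake.mean()        # the telltale flaw detected
    theta += 0.1 * flaw                     # generator corrects that flaw

acc_after = detector_accuracy(real, rng.normal(theta, 1.0, 1000))
```

After the loop, the detector’s accuracy has collapsed toward chance: the generator has erased the one statistical flaw the detector could exploit, which is the same dynamic, at a vastly larger scale, that makes GAN-polished fakes hard to flag.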
Who can or would make a deepfake?
Remember back in the late 1990s, when producing photos went beyond developing rolls of film, and became digital? Perhaps you recall those early days of mainstream digital photography, when Photoshop had recently become available to the general public, and it was an exciting new capability to be able to edit your own pictures. Women were able to easily “shed” pictorial pounds, men’s hair became suspiciously fuller, and whimsical projects—such as pasting your Aunt Sally’s face over that of the Statue of Liberty—became a satisfying pastime for those looking to add some innocent spice to their vacation images. Such alterations are easily spotted by the naked eye and are now dubbed “shallowfakes.”[xxvi]
It certainly seemed like innocent fun as long as three elements could be counted upon: (1) the alterations were easily spotted due to rudimentary software, (2) augmentations were harmless or complimentary (such as trimming a few inches off the hips), or (3) the changes were imposed over a still-frame pic. Why is this? The answer is simple: People see no harm in making innocent adjustments to pictures, and even if an adjustment does seem malicious or questionable, people like to believe that they can always go to the moving footage for a true, trustworthy documentation of actual events. But with deepfake, the public no longer has any such certainty.
In the 1990s TV series Babylon 5, Captain Sheridan of the space station the series was named for had, in a political altercation, been betrayed by a friend and turned over to adversarial forces. These captors tortured, starved, intimidated, and attempted to bribe the steadfast captain in an effort to get him to confess to false charges, but he held his ground. In the face of impending execution, Sheridan’s captor explained that even if he refused to meet their requests of confession unto the death, he would still confess, but posthumously:
“The best way out for everyone is for you to confess…whether… [the confession is] true or not, it doesn’t matter. Truth is immaterial, they can sell it, and [if you cooperate] they will let you live. Note: I said, it [confession] was the best way; I didn’t say it was the only way. The other way, Captain, is a posthumous confession. Your signature is not a problem. They have your image on file—they can create you reading the confession… I’m told that, as of this morning… [posthumous confession] is an acceptable option.”[xxvii]
When this scene was produced and aired in 1997, the average viewer would have never dreamed that software already in existence would rapidly morph into the elements necessary to make such a threat possible. This scene, a portrayal of futuristic sci-fi at its finest, caused gripping suspense because the implications of the interrogator’s suggestion were a complete violation of a person’s code of ethics, conduct, and rights to make his or her own choice about what to (or not to) engage in. There remained a certain sense of safety, however, because of the (at that time) knowledge that one would never really be subjected to such a violation. One could not just create footage of other people doing something they had not—or would never have—done…right?
However, advancements in technologies (such as Photoshop) which, as noted, in earlier days were easily identifiable and largely used for innocent fun, have taken a malevolent and potentially disastrous turn. As these digital capabilities improve for both still-frame and video technology, it is quickly becoming simpler to create fraudulent images and videos that are nearly—or completely—impossible for the naked eye to spot. “Before deepfakes, a powerful computer and a good chunk of a university degree were needed to produce a realistic fake video of someone. Now some photos and an internet connection are all that is required.”[xxviii] Software currently exists that literally decodes the movement of a person’s lips in relation to his or her words and then helps generate an image making it believable that the inserted words actually come from the subject’s face.[xxix] This new face can then be seamlessly fixed over another person’s face to appear to be a part of the con man’s moving body.[xxx] The extraordinarily frightening element of this type of technology is that a person can be (and many already have been) depicted as taking part in activities that are, at the very least, intensely personal, and at worst, depraved and even heinous—and worst of all, without his or her consent.
Will Deepfakes Impact the 2024 Election?
US government officials are increasingly concerned that this trend could impact the election of 2024 as well as those in subsequent years. Since these works can be so convincing, the worry remains that even those who carefully scrutinize news sources for fakes will be unable to spot them—and this says nothing of those who are unaware of the many types of deception that permeate the media and thus may carelessly cast misinformed votes. The pressing question for all who look ahead to the next election is what would happen if a damaging deepfake were released just in time to sway the election’s outcome but were not exposed as fraudulent until after the votes were counted.
Deepfake videos could even “cause worldwide chaos and pull society apart,” according to experts.[xxxi]
Eline Cheviot, EU tech policy analyst at the Centre for Data Innovation, warned that there is a growing imbalance between the technologies for producing and detecting deepfakes, which means there is a lack of tools needed to efficiently tackle the problem. She also warned that humans are no longer able to spot the difference between deepfakes and the real thing, and so are unable to stop weaponized fake news from spreading.[xxxii]
In a recent article discussing such a possibility, Karl Stephan of Texas State University stated: “It takes time and expertise to determine whether a video or audio record has been faked…by the time a video that influences an election has been revealed as a fake, the election could be over.”[xxxiii] Surely, even as efforts are made to perfect software that could identify fakes, the fakers are upping their own game, feeding a strange cat-and-mouse contest that many authorities remain unsure they can win. Even now, a recent fake featuring Obama was perceived to be real, even by those who know him personally.[xxxiv]
On the other hand, some have expressed concern that deepfakes will only fortify the beliefs of those who have already made up their minds, whether right or wrong. To explain further, some people hear or see what they want, and anything that reinforces their position is welcome—fake or not.[xxxv] And, unfortunately, once a video has been viewed and accepted by the public, experts say many are resistant to newly emerging truths regarding the fake: “Once the doubt has been sowed…[about details later revealed to be misinformed] a non-trivial portion of viewers will never forget that detail and suspect it might be true.”[xxxvi]
Particularly vulnerable to such fakery are voters who remain on the fence during the final days preceding an election. For some voters, simply knowing that fraudulent videos indeed circulate will suffice to keep their votes headed in the right direction. These individuals will investigate before changing their decisions (however, this still does not alleviate the issue of a deepfake being nearly impossible to verify). Those who watch the pre-election fiascos casually, however, and who are not wholeheartedly invested in any one candidate, will be easier targets for a late-timed, incriminating deepfake to persuade them with deceit,[xxxvii] essentially stealing their vote.
The threat to our nation from these deviant productions goes farther than the possibility of unjustly swaying the outcome of an election. Republican Senator Marco Rubio of Florida has explained that, whereas previously national security would likely only be threatened by a physical attack, modern technology has left us vulnerable to a strike against our “internet system…banking systems…electrical grid and infrastructure, and increasingly…the ability to produce a very realistic fake video,”[xxxviii] a capability that adds to this problem. For example, imagine the crisis that would ensue if someone were to create a fake alerting the public to a national emergency, a declaration of war, the announcement of a pandemic, or worse.[xxxix]
For these reasons, the US government is currently working with artificial intelligence (AI) experts across the country to tighten security regarding these fakeries and to ramp up the software that exists for detecting fraudulent releases. The House Intelligence Committee has been holding hearings with AI experts to strategize what can be done to stop the situation, while Congress deliberates how to design legislation that could likewise regulate the creation and spread of such productions—a phenomenon occurring in such sheer volume that those attempting to stop it are overwhelmingly outnumbered. In response to the enormity of the task, computer-science professor Hany Farid of the University of California flatly stated: “We are outgunned.”[xl] Creating legislation against deepfakes is more complicated than it may seem at first glance: Government officials face obstacles barring “governmental overreach and the perceived threat to the First Amendment.”[xli]
Involved in the prevention of electoral disturbance due to fakery is DARPA (the Defense Advanced Research Projects Agency), which recently announced that it will be working to create resources for identifying and containing harmful deepfakes. The agency is currently building new tools for verifying videos, as older ones “are quickly becoming insufficient.”[xlii] Precautions now being taken to safeguard the upcoming election from fraudulent videos include banking copious amounts of footage of key personalities expected to be involved in the 2024 election in order to compile a clear digital study of how their faces and bodies move while they talk: a signature set of unique movements, involving distinct individual body language and facial expression, that they call “fingerprinting.”[xliii] Essentially, DARPA and the government are working together, utilizing these personalized characteristic databanks to produce a website where new videos involving these people can be uploaded and verified as true or false before they are released to the mainstream media. The goal is to have all key 2024 election players effectively fingerprinted as a safeguard against fakes.[xliv]
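In a highly simplified form, a behavioral “fingerprint” comparison of the kind described might reduce to comparing signature vectors. The Python sketch below is purely hypothetical (DARPA’s actual methods are not public, and every number here is invented): it scores an uploaded clip’s movement signature against a banked one using cosine similarity.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two signature vectors: 1.0 = identical direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical "fingerprint": averaged per-frame measurements of a
# speaker's characteristic motion (e.g., brow raise, head tilt, lip
# spread). All numbers below are invented for illustration.
banked_signature = np.array([0.82, 0.10, 0.55, 0.31])  # from archived footage

genuine_clip  = np.array([0.80, 0.12, 0.53, 0.30])     # same mannerisms
impostor_clip = np.array([0.35, 0.60, 0.20, 0.75])     # different mannerisms

score_genuine  = cosine_similarity(banked_signature, genuine_clip)
score_impostor = cosine_similarity(banked_signature, impostor_clip)
# A clip scoring below some tuned threshold would be flagged for human review.
```

The weakness the next paragraph raises is visible even in this toy: a skilled impersonator who reproduces the target’s mannerisms would shift the impostor vector toward the banked one, raising its score and weakening the test.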
However, this does not alleviate the concern that a good impersonator could mimic the facial expressions and body language of a targeted person. Furthermore, experts state that “detection techniques that rely on statistical fingerprints can often be fooled with limited additional resources.”[xlv] Once uploaded, a video in question would first be sorted by AI for what DARPA calls “semantic errors,”[xlvi] potential flaws that point to mistakes in editing, such as “mismatched earrings.”[xlvii] Videos would then be arranged by category: whether the production was made for a malevolent reason or otherwise. Based on these results, a video would be flagged for “human review.”[xlviii]
The hope behind DARPA’s endeavor is that those in the media will upload videos to verify authenticity before running stories based on them. However, one has to wonder if this approach will be successful. First of all, should a video be flagged for “human review,” what would the time frame for such scrutiny be? Surely, those within the media who have come across a story will want to be the first to release it. It is difficult to imagine members of the press being willing to wait for verification of a submitted video before running a story. Certainly, for at least some, the fear will be too pressing that another news agency might “get the jump” on the news while they await additional review. And, unfortunately, for such an undertaking to be successful, it will require the cooperation of everyone. As it stands, stories are often told in the media before agencies or individuals have taken the time to be certain that all of their facts are straight. And this is only accounting for stories from the news agencies, with no involvement from individuals on public outlets who share articles and information without considering accountability of any kind. A person who really wants to hurt a candidate’s reputation could simply post a deepfake on social media, bypassing news agencies completely. All it takes is one well-timed, convincing fake, and an entire election has the potential to spiral in a direction it may not otherwise have taken. Furthermore, software often takes time to perfect. Can we indeed trust that what is “verified”—even by agencies such as DARPA—is truly authentic? What could happen if a deepfake video were mistakenly (or not so mistakenly…) affirmed as real? How many careers would be ruined by the malicious elements in a fraudulent release, or—just as frightening—how many people who have done wrong could be exonerated by false videos that somehow act as proof of their nobility?
As experts work furiously to build a verification system using AI, the aforementioned GAN technique (generative adversarial networks) could just as quickly sabotage these efforts to determine what is authentic. Without a surefire way of digitally validating videos, the 2024 election holds the undertone of “Buyer, beware.”
To understand how damaging deepfakes can be to our modern society, it helps to understand a concept known as gaslighting. Gaslighting is a manipulation technique wherein one person makes another doubt his or her own reality until that person is easily controlled, having lost trust in his or her own senses. The term originated from the 1944 movie Gaslight, in which a husband, in an effort to make his wife doubt her own sanity, repeatedly changed the light level in their home while pretending he noticed no difference. This made the wife doubt her own senses, until the man’s true intentions came under scrutiny. In modern arenas, this tactic is said by Psychology Today to be employed commonly by “abusers, dictators, narcissists, and cult leaders.”[xlix]
The issue, in relation to deepfakes (especially considering the upcoming 2024 election), is that as people lose trust in their own perception, they often become less determined to closely follow the actions of those around them. This lack of willingness to place effort in holding others accountable is a byproduct of underlying suspicion that they are only about to—once again—find that they have been duped. A certain danger is attached to the demoralization of a public that no longer knows what is real and what is not, which sources to believe and which cannot be trusted. By and large, people begin to succumb to a certain fatigue from the mental rollercoaster of the continual extension of and then removal of hope. It is demoralizing enough, without deepfakes, for a population to trust political figures who fail to follow through on pre-election promises. Consider how much worse it would be if, amidst pre- and post-election political adaptations, candidates began to adamantly deny the very content of videos on which individuals based their voting decisions. Not only would this hold devastating ramifications for the 2024 elections, but it’s likely such activity would severely damage future voter interest and turnout.
Deepfakes could easily be used to sabotage the elections of 2024 and subsequent years, not only by discouraging voter involvement, but (obviously) through the false information they could spread, which a trusting public would likely receive as true. Good candidates could easily be portrayed in pornographic or otherwise compromising settings, and unfortunately, even after such a video was finally verified as “fake,” the damage to the candidate’s reputation would have been done. Very few candidates, after such a blow, would see their campaigns revitalize after the fact.
Not only can deepfakes be used to destroy the good reputations of innocent people, but they can likewise serve as an excuse for someone who has done wrong. In other words, someone caught—with videographic evidence—committing a legally or morally incriminating act could attempt to escape censure by claiming that the evidence is a deepfake. And, since these are becoming so difficult to verify, a claim of fakery could soon be as difficult to confirm as an assertion of validity. This loss of confidence in what an individual sees and hears with his or her own eyes and ears amounts to the ultimate “gaslighting” of a populace. With such doubt placed on what we experience via our own senses (sight and sound), how can we trust the results of our own elections or know how to vote? Many media and political observers agree that this is a concern. Claire Wardle is an expert in online manipulation, social media, and the tracking of mis- and disinformation, as well as cofounder of First Draft.[l] She stated: “As public trust in institutions like the media, education, and elections dwindles, then democracy itself becomes unsustainable.”[li]
What’s Really Scary about Deepfakes
Particularly concerning about deepfakes is their ability to override a person’s consent. It has been mentioned already that countless videos have been made portraying certain celebrities in pornographic situations. Vengeful exes sometimes create such damaging images as a form of retribution. Those who may want to undermine high-profile CEOs, or individuals whose jobs would be placed in jeopardy should they be found in a compromising situation—such as ministers, coaches, or teachers—may likewise take such deviant actions against these unsuspecting individuals.
When people choose not to engage in certain activities and are then portrayed as participating in them, the effect can be highly destructive. Being depicted publicly in a sexually compromising situation, for example, inflicts a violation comparable to what one would feel had one’s own body been subjected to this override of consent. Furthermore, reputations destroyed by a video that goes viral may not be easily repaired once the truth about a deepfake is revealed, should its fakery ever be verified and brought into the open.
Additionally, the potential this technology holds to undermine the innocence in culture as a whole is alarming. Consider the beautiful, wholesome works of times gone by that could be corrupted. It has been mentioned previously that Nicolas Cage’s face has been swapped in for many leading actors and actresses in a variety of movies, not the least whimsical being the swap with Julie Andrews in The Sound of Music, an example of this technology being used for harmless and silly fun. But consider other capabilities of these types of face-swaps: What kind of corruption could be injected into an old movie starring Doris Day with Rock Hudson, Vivien Leigh with Clark Gable, or Audrey Hepburn with Gregory Peck? The potential threat this new technology poses to previously innocent works adds to the overall negative impact these fakeries could have upon modern society.
In addition to the need for new legislation and image-verification systems to counter the emerging menace presented by deepfake technology, some agencies—such as the Social Science Research Center—along with other authorities are suggesting a new type of precaution: what they call “immutable life logs.”[lii] What is such a thing? It is a digital trail of a person’s movements, locations, and actions, making it “possible for a victim of a deepfake to produce a certified alibi, credibly proving that he or she did not do or say the thing depicted.”[liii]
Let’s get this straight: To prevent a predatory, digital violation of ourselves, we must embrace the 24-7 tracking of a digital counter-bully to act as a guardian over our real circumstances. Does this sound like a prelude to the mark of the beast to you? Even for minds less conspiratorial than what is required to jump to such conclusions, the huge invasion of privacy is ominous. Furthermore, once we relinquish our solitude and consent to the monitoring invasion of having each moment of our lives recorded digitally, we have no guarantee of where the data will be stored, who will have access to it, how it could be used against us, or to what outside agencies this data could be sold.[liv] Furthermore, there is speculation that coupling such material with geo-location technology would likely be highly accurate at predictive tracking. This means that not only will anyone with access to such personal information know where we have been, but would likely analyze algorithms within the data that would also reveal where we are going.[lv] In other words, we will never be alone or have privacy again.
How do we keep ourselves from being duped?
Data & Society researcher Britt Paris explains that, while legislation and image-verification software are a good start, the battle against deepfakes must also take place in the public forum; platforms that propagate or tolerate the spread of deepfakes must be held accountable.[lvi] These entities must be exposed and victims vindicated—publicly. Furthermore, Paris recommends that information outlets hiding behind the excuse that such fakery is too voluminous to address should dial back their output and hire more employees until they can effectively manage their content.[lvii] Certainly, many companies will balk at such a solution, but Paris argues that this is part of the due diligence that should be required of any organization.
Claire Wardle, mentioned previously, has another method of countering deepfakes in mind, one that, if put to use, would be the best solution: Check facts before you act, vote, or post a video on social media.[lviii] She states that the responsibility rests equally upon outlets and individuals. Social media platforms and press hubs, Wardle says, need to be held responsible for their content, but the general public has a duty to verify what they share: “The way we respond to this serious issue is critical…if you don’t know 100%…don’t share, because it’s not worth the risk.”[lix]
OCEANIA’S “DEEPFAKE” YEAR-ZERO END GAME
To achieve its ultimate goals, Orwell’s Oceania normalizes the careful obfuscation of history, in which the details of the nation’s past and the rise of Big Brother’s totalitarianism are deliberately obscured through chronological muddying. Whether through deepfake storytelling or historical revisionism, monuments of previous wars and administrations are smashed, histories are reimagined, and insults to the government’s status quo are demonized to diminish any lasting resistance to authoritarian rule.
In political theory, this also relates to the term “Year Zero” reflected in such historical events as the 1975 takeover of Cambodia by the Khmer Rouge and to the “Year One” of the French Revolutionary calendar.[lx]
During the French Revolution, after the abolition of the French monarchy (Sept. 20, 1792), the National Convention instituted a new calendar and declared the beginning of the Year I. The Khmer Rouge takeover of Phnom Penh was rapidly followed by a series of drastic revolutionary de-industrialization policies resulting in a death toll that vastly exceeded that of the French Reign of Terror.
The main idea behind Year Zero “is that all culture and traditions within a society must be completely destroyed or discarded and a new revolutionary culture must replace it, starting from scratch. All history of a nation or people before Year Zero is deemed largely irrelevant, as it will ideally be purged and replaced from the ground up.”
This kind of history purge is happening all across America today, from rewriting the role of religion in this nation and removing paintings of George Washington at San Francisco’s George Washington High School[lxi] to the elimination of Confederate statues and monuments nationwide.[lxii]
Also reminiscent of Orwell’s dystopian vision, alongside their Year Zero stratagem, is the suffocating hypocrisy of the left.
Remember when Democrat “bastions of liberty” bemoaned ISIS bulldozing the antiquities of the ancient Syrian city of Palmyra into rubble? Or how the social radicals cursed the incivility and loss of historical artifacts? Indeed, when a new simulacrum of the pagan Archway of Baal that was destroyed at Palmyra was recreated, it was immediately sent globe-trotting to be erected wherever the world’s liberal elites gather. This in-your-face reconstruction was widely celebrated by so-called champions of freedom, who conveniently failed to mention how children in olden times were carried through such archways and sacrificed to the Baals of the ancient world—something far worse than anything Confederate soldiers did.
Yet now, the same liberals who cried to high heaven over the destruction of Baal’s bloodstained gateway have done an about-face and pulled out their Oceania playbook. Suddenly, like ISIS before them, they, too, want historical monuments that offend their sensitivities and contradict their political posturing crushed, including Civil War markers and statues on public grounds connected to the slave-owning side of our past. It is immaterial how contradictory such doublethink and newspeak are; the end game is all that matters. Maintaining power and establishing uninterrupted cross-party monopolies require manufacturing whatever deflection and tortured logic is needed until citizens accept as substantive those “viewpoints” Big Brother wants maintained on the streets and in the echo chambers of fake news outlets, especially if such can be used to infer wrongdoing or failure to act on the part of the Trump administration.
Thus, a new French Revolution Year One mindset is on the march to erase America’s politically incorrect antiquity, despite the fact that it was the Revolutionary War, and then the Civil War, on this continent that ultimately led to the abolition of slavery here and eventually around the world.
Students of history have often looked with interest at the French Revolution, asking what dynamics caused it to end in the Year One horror of death and torture under Robespierre, while the Revolutionary War in America resulted in unprecedented freedom and monetary success. While citizens in this country were rejoicing in newfound liberty, in Paris more than twenty thousand people were beheaded at the guillotine. The years that followed in France were marked by a reign of terror leading up to Orwellian totalitarianism and Napoleon (whose name actually means “Apollo incarnate,” the same spirit that will inhabit Antichrist, according to the book of Revelation).

Why were the American and French Revolutions followed by such contrasting conclusions? The difference was that the American Revolution was fought on Christian principles of liberty, while the French Revolution—as many of today’s statue-smashing leftists and neofascist agitators reflect—was anti-God. The forces behind the French Revolution were out to eliminate people of faith as the enemies of France and to shut the mouths of God-fearing dissenters. They even placed a nude statue of a woman on the altar in the church of Notre Dame and proclaimed the God of Christianity dead. Soon thereafter, the French government collapsed. (By the way, this is why I placed a so-called “Easter egg” reference on the cover of my book Blood on the Altar—a book about the coming genocide of true Christianity—in the form of a gargoyle from the Church of Notre Dame, a silent gesture nobody seemed to catch.)
And make no mistake about this, either: Many of the people involved in revolutions such as the French one—like those occultists in Washington, DC, briefly mentioned at the start of this chapter—are aware that their politics can be assisted or “energized” by powerful supernaturalism, with which they seek to make covenants and which, under the right circumstances, can take on a social vivacity of its own (see Ephesians 6:12).
For instance, concerning the French Revolution specifically, some scholars note how practitioners of occultism commingled with evil nonhuman energies that emanated from their actions, symbols, and incantations and that, once summoned, were released upon a gullible society to encourage a destructive collective group mind. As people passed these “thoughtforms” or memes from one to another and the ideas went viral, the power and reach of “the entity” spread with them until it became an unimaginably destructive force. Writing about the Masonic involvement in the French Revolution, Gary Lachman makes an extraordinary and important observation about such immaterial destructive forces—which had unseen plans of their own—released as a result of occult politics:
Cazotte himself was aware of the dangerous energies unleashed by the Revolution.… Although Cazotte didn’t use the term, he would no doubt have agreed that, whatever started it, the Revolution soon took on a life of its own, coming under the power of an egregore, Greek for “watcher,” a kind of immaterial entity that is created by and presides over a human activity or collective. According to the anonymous author of the fascinating Meditations on the Tarot, there are no “good” egregores, only “negative” ones.… True or not, egregores can nevertheless be “engendered by the collective will and imagination of nations.” As Joscelyn Godwin points out, “an egregore is augmented by human belief, ritual and especially by sacrifice. If it is sufficiently nourished by such energies, the egregore can take on a life of its own and appear to be an independent, personal divinity, with a limited power on behalf of its devotees and an unlimited appetite for their future devotion.” If, as some esotericists believe, human conflicts are the result of spiritual forces for spiritual ends, and these forces are not all “good,” then collective catastrophes like the French Revolution take on a different significance.[lxiii]
Fast-forward to today, and anyone who thinks the eradication of public knowledge about the role God played in American history and even our Civil War is divorced from supernaturalism, or will stop with a few monuments being pulled down, is in for a history lesson of their own. Behind the nonstop deployment of chaos magic and/or meme magic by fake news outlets and sufferers of Trump-Derangement Syndrome are deceiving spirits, called egregores above and “archons” and “kosmokrators” in the book of Ephesians. These are rulers of darkness who work in and through human political counterparts, commanding spirits of lesser rank until every level of earthly government can be touched by their influence. Their currency includes propaganda (or “deception” and “lies,” as the New Testament describes it), and just as surely as Orwell’s protagonist Winston Smith spent his time at the Ministry of Truth modifying the past by correcting “errors” in old newspapers that embarrassed the Party, today’s revisionists, empowered by deceptive spirits, will not be satisfied until their versions of doublethink and newspeak control everything the masses—especially Bible-believing Christians—can know, think, and say. This is why, by the way, California lawmakers recently proposed Resolution 99 to govern what church leaders and religious counselors will be allowed to preach or say in the future.[lxiv]
In this regard, America and the world could be entering a prophetic, ruthless form of censorship that will undergird a time when all, both small and great, will bow before “a king of fierce countenance” (Daniel 8:23). With imperious decree, this Man of Sin will facilitate an Orwellian one-world government, universal religion, and global socialism. Those who refuse his Oceaniac empire will inevitably be imprisoned or destroyed until at last he exalts himself “above all that is called God, or that is worshiped, so that he, as God, sitteth in the temple of God, showing himself that he is God” (2 Thessalonians 2:4).
[ii] ColdFusion. “Deepfakes—Real Consequences.” April 28, 2018. YouTube Video, 13:12. Accessed September 6, 2019. https://www.youtube.com/watch?v=dMF2i3A9Lzw.
[iii] Stephan, Karl. “Seeing may not be believing: AI deepfakes and trust in media.” Mercatornet. October 15, 2018. Accessed September 6, 2019. https://www.mercatornet.com/connecting/view/seeing-may-not-be-believing-ai-deepfakes-and-trust-in-media/21827.
[iv] Kiely, Kathy. “Facebook refusal to curb fake Nancy Pelosi drunk video highlights need for responsibility.” USA Today. May 28, 2019. Accessed September 10, 2019. https://www.usatoday.com/story/opinion/2019/05/28/facebook-fake-video-nancy-pelosi-drunk-responsibility-column/1249830001/.
[v] Harwell, Drew. “Faked Pelosi videos, slowed to make her appear drunk, spread across social media.” Washington Post. May 24, 2019. Accessed September 10, 2019. https://www.washingtonpost.com/technology/2019/05/23/faked-pelosi-videos-slowed-make-her-appear-drunk-spread-across-social-media/.
[vi] “Why it’s getting harder to spot a deepfake video.” CNN Online. Accessed September 7, 2019. https://www.cnn.com/videos/business/2019/06/11/deepfake-videos-2020-election.cnn.
[vii] Derpfakes. “Nicholas Cage: Mega Mix Two.” February 2, 2019. YouTube: 2:05. Accessed September 10, 2019. https://www.youtube.com/watch?v=_Kuf1DLcXeo.
[viii] Usersub. “Nick Cage DeepFakes Movie Compilation.” January 31, 2018. YouTube: 2:17. Accessed September 10, 2019. https://www.youtube.com/watch?v=BU9YAHigNx8.
[ix] “What is a deepfake?” The Economist. August 7, 2019. Accessed September 6, 2019. https://www.economist.com/the-economist-explains/2019/08/07/what-is-a-deepfake.
[xi] ColdFusion. “Deepfakes—Real Consequences.” April 28, 2018. YouTube Video, 13:12. Accessed September 6, 2019. https://www.youtube.com/watch?v=dMF2i3A9Lzw.
[xxi] TheFakening. “Keanu Reeves as Forest Gump Deepfake – It’s Breathtaking!” July 24, 2019. YouTube: 3:12. Accessed September 10, 2019. https://www.youtube.com/watch?v=cVljNVV5VPw&t=72s.
[xxiii] “What is a deepfake?” The Economist. August 7, 2019. Accessed September 6, 2019. https://www.economist.com/the-economist-explains/2019/08/07/what-is-a-deepfake.
[xxvi] The New York Times. “Deepfakes: Is This Video Even Real? NYT Opinion.” August 14, 2019. YouTube Video: 3:38. Accessed September 10, 2019. https://www.youtube.com/watch?v=1OqFY_2JE1c.
[xxvii] LaFia, John. “Intersections in Real Time.” Babylon 5: Season 4, Episode 18. 1997; Burbank, CA: Warner Brothers, 1997. DVD.
[xxviii] “What is a deepfake?” The Economist. August 7, 2019. Accessed September 6, 2019. https://www.economist.com/the-economist-explains/2019/08/07/what-is-a-deepfake.
[xxxiii] Stephan, Karl. “Seeing may not be believing: AI deepfakes and trust in media.” October 15, 2018. Accessed September 10, 2019. https://www.mercatornet.com/mobile/view/seeing-may-not-be-believing-ai-deepfakes-and-trust-in-media/21827.
[xxxvi] Porup, J.M. “How and why deepfake videos work-and what is at risk.” CSO US Online. April 10, 2019. Accessed September 10, 2019. https://www.csoonline.com/article/3293002/deepfake-videos-how-and-why-they-work.html.
[xxxvii] Stephan, Karl. “Seeing may not be believing: AI deepfakes and trust in media.” October 15, 2018. Accessed September 10, 2019. https://www.mercatornet.com/mobile/view/seeing-may-not-be-believing-ai-deepfakes-and-trust-in-media/21827.
[xxxviii] Porup, J.M. “How and why deepfake videos work-and what is at risk.” CSO US Online. April 10, 2019. Accessed September 10, 2019. https://www.csoonline.com/article/3293002/deepfake-videos-how-and-why-they-work.html.
[xl] Tangermann, Victor. “Congress Is Officially Freaking Out about Deepfakes.” Futurism Online. June 13, 2019. Accessed September 10, 2019. https://futurism.com/congress-deepfakes-threat.
[xlii] Corrigan, Jack. “DARPA Is Taking On the Deepfake Problem.” Nextgov. August 6, 2019. Accessed September 10, 2019. https://www.nextgov.com/emerging-tech/2019/08/darpa-taking-deepfake-problem/158980/.
[xliii] CNN Business. “Why it’s getting harder to spot Deepfake videos.” June 12, 2019. YouTube: 2:45. Accessed September 10, 2019. https://www.youtube.com/watch?v=wCZSMIwOG-o.
[xlv] Corrigan, Jack. “DARPA Is Taking On the Deepfake Problem.” Nextgov. August 6, 2019. Accessed September 10, 2019. https://www.nextgov.com/emerging-tech/2019/08/darpa-taking-deepfake-problem/158980/.
[xlix] Sarkis, Stephanie. “11 Warning Signs of Gaslighting.” Psychology Today. January 22, 2017. Accessed September 10, 2019. https://www.psychologytoday.com/us/blog/here-there-and-everywhere/201701/11-warning-signs-gaslighting.
[l] “Dr. Claire Wardle: Co-Founder and Leader of First Draft.” Cyber Harvard. September 25, 2018. Accessed September 10, 2019. https://cyber.harvard.edu/people/dr-claire-wardle.
[li] New York Times. “Deepfakes: Is This Video Even Real? NYT Opinion.” August 14, 2019. YouTube Video: 3:38. Accessed September 10, 2019. https://www.youtube.com/watch?v=1OqFY_2JE1c.
[lii] Powers, Benjamin. “‘Deep fake’ video can ruin reputations. Can life logs prevent that?” Public Security Today Online. November 28, 2018. Accessed September 10, 2019. https://publicsecurity.today/deep-fake-video-can-ruin-reputations-can-life-logs-prevent-that/.
[lviii] New York Times. “Deepfakes: Is This Video Even Real? NYT Opinion.” August 14, 2019. YouTube Video: 3:38. Accessed September 10, 2019. https://www.youtube.com/watch?v=1OqFY_2JE1c.
[lxiii] Lachman, Gary. Politics and the Occult: The Left, the Right, and the Radically Unseen. Quest Books, 2008. 97–98.