You know that feeling of extreme frustration when we’re confused about a bill in the mail, but we can’t get a human on the line when we call the company to straighten it out? We call over and over, pushing buttons repeatedly in an attempt to reach someone capable of communicating beyond automated menus, but all we get is a recording—or worse, we spend three minutes following the voice-recognition cues just to arrive at that annoying error we’re all too familiar with:
ROBOT: Thank you for using [name of company]. Please tell me why you are calling. You can say things like, “New account,” or “I would like to pay a bill.”
HUMAN: Customer service.
ROBOT: I understand. You would like to speak with customer service. Is that right? If this is correct, say “yes” or press 1. If not, say “no” or press 2.
HUMAN: Yes.
ROBOT: Sure. I can get you to customer service. In order to forward you to the right person, I need to know why you’re calling. You can say things like, “New account,” or “I would like to pay a bill.”
HUMAN: The charge on my bill is wrong.
ROBOT: I understand. You would like to make a payment using our automated system. Is that right? If this is correct, say “yes” or—
HUMAN: No. There’s a mistake on my bill.
ROBOT: I understand. You would like to make a payment using our—
HUMAN: No. Forward me to a human, please.
ROBOT: I’m sorry. I didn’t understand what you said. Please tell me why you are calling. You can say things like, “New account,” or “I would like to pay a bill.”
HUMAN: For crying out loud… Customer service, customer service, customer service!
ROBOT: Sure. I can get you to customer service. In order to forward you to the right person, I need to know why you’re calling. You can say things like, “New account,” or “I would like to pay a bill.”
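Part of why the machine keeps mishearing us is that the logic behind many of these menus is little more than keyword matching. The sketch below is purely illustrative (the route names and phrases are invented), but it captures the failure mode: anything the script’s authors didn’t anticipate either falls through to the error prompt or matches the wrong keyword entirely.

```python
# Illustrative only: a keyword-driven call router of the kind dramatized above.
KEYWORD_ROUTES = {
    "account": "new_accounts",
    "pay": "automated_payments",
    "bill": "automated_payments",
    "customer service": "needs_reason_first",
}

def route_call(utterance: str) -> str:
    """Match the caller's words against a fixed keyword list; first hit wins."""
    text = utterance.lower()
    for keyword, destination in KEYWORD_ROUTES.items():
        if keyword in text:
            return destination
    # Anything outside the script falls through to the error re-prompt.
    return "error_reprompt"

# "The charge on my bill is wrong" contains "bill," so the system happily
# routes the caller to payments: the exact wrong turn in the transcript.
print(route_call("The charge on my bill is wrong"))  # -> automated_payments
print(route_call("Forward me to a human, please"))   # -> error_reprompt
```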
Working with robots over the phone is maddening enough when we’re already upset about a surprise on a monthly statement. I can see an even greater anger rising from the cold, unfeeling bedside manner of tomorrow’s robotic nurse, whether or not it has been programmed to inject artificial courtesy and concern into its automated voice-recognition menus while you lie in a hospital bed.
I wonder whether General Motors’ Chevy Bolt (also referred to as the driverless “robo-chariot”[i]), which has replaced some taxicab drivers, will create similar frustrations and take us to the wrong locations…or whether these vehicles, guided by some kind of traffic-analysis programming that routes them along backroads to avoid stop-and-go congestion, will prove more efficient than the humans who held those jobs before, delivering riders where they need to go on time. Since the driverless chariot is being manufactured without pedals or a steering wheel, I hope it will be immune to unexpected malfunctions. Likewise, I hope these cars are immune to being hacked and remotely controlled, lest the next Ted Bundy gain the power to auto-deliver his victims to dangerous, dark alleyways. Perhaps more likely, many people could be killed when, for example, a tumbleweed blows across the road in front of them as they ride through the Nevada desert, the new auto-braking system (currently being installed on most new vehicles) mistakes the dead bush for a human and slams on the brakes, and the semi driver behind them plows over their car.

Waymo, Google’s autonomous-car developer, has decided to design a slightly different self-driving car for its human replacements: Chrysler Pacifica minivans with steering wheels and an emergency “pull-over” button.[ii] Waymo has opted to keep a human employee in the front seat “at first…in the event of an emergency,” though its plan is ultimately to create a ride wherein “the only humans in the car will be you and yours.”[iii]
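To make the phantom-braking worry concrete, consider a decision rule that trusts a single perception label. Everything in this sketch is hypothetical (the class names, confidence value, and threshold are invented for illustration), but it shows how a dead bush misclassified as a pedestrian could trigger a full stop:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the perception stack believes it sees
    confidence: float  # 0.0 to 1.0
    distance_m: float  # distance to the object in meters

def should_emergency_brake(d: Detection) -> bool:
    # Naive rule: brake hard for any "pedestrian" inside 30 meters.
    # There is no minimum-confidence gate, so a low-confidence
    # misclassification still triggers a full stop.
    return d.label == "pedestrian" and d.distance_m < 30.0

# A tumbleweed misread as a pedestrian at 55 percent confidence:
tumbleweed = Detection(label="pedestrian", confidence=0.55, distance_m=18.0)
print(should_emergency_brake(tumbleweed))  # -> True: phantom braking
```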
What an impersonal and lonely world we’re creating for ourselves.
Worse yet: The category on McKinsey’s “Technical Potential for Automation in the US” chart that really floored me was “Managing Others”—a category that until now would have related only to living, feeling, breathing, conscious people with leadership communication skills. Survey says? A whopping 9 percent! That might sound like a low number to some after the nearly 80 percent and 40 percent figures assigned to the physical-labor and healthcare categories, but in reality, if McKinsey & Company’s numbers are accurate, almost one out of every ten people who currently make their living supervising and administrating—people dealing with other people—is now replaceable by a cold hunk of steel and a bunch of wires. A robot manager… Who would have ever thought? Similarly, automation technology can “apply expertise [in] decision making, planning, or creative work,” making 18 percent of human work in those areas replaceable.[iv] The report hints at a few concerning realities as it draws to a conclusion:
Top executives will first and foremost need to identify where automation could transform their own organizations and then put a plan in place to migrate to new business processes enabled by automation.…the key question will be where and how to unlock value, given the cost of replacing human labor with machines. The majority of the benefits may come not from reducing labor costs but from raising productivity through fewer errors, higher output, and improved quality, safety, and speed.…
Senior leaders, for their part, will need to “let go” in ways that run counter to a century of organizational development.…
[T]op managers [must] think about how many of their own activities could be better and more efficiently executed by machines, freeing up executive time to focus on the core competencies that no robot or algorithm can replace—as yet.[v]
“As yet,” the report inserts. What a way to end such a collection of information. Recap, in case you missed it: Even the executives, senior leaders, and top managers are rapidly being replaced by machines, but thankfully there are a few competencies (those “core” ones) we humans have that no robot or algorithm can replace…at least as yet.
I can’t be the only one who sensed an ominous caveat behind those two final words. It’s as if the report ended with, “For now, we inadequate humans still amount to something in the workplace, but don’t get comfortable, because it won’t last long.”
What many may not know is that robots are demonstrating what looks like sentience to scientists and robotics experts at every turn today. Many believe (and I agree) that even a machine programmed to hold a certain regard for human life is only responding with a counterfeit, artificial concern originating from computer coding—not one born of a true spiritual and emotional connection to, and affection for, another individual. While it may be true that artificial affection for the human race can never replace the spiritual connection to others or the “feelings” installed within human nature at birth (our “biological programming,” so to speak), a robot can be programmed to recognize emotional cues within its environment and respond with an artificially emotional response (a simple sketch of this appears at the end of this passage).

The question then centers on which response each robot will “choose” in that moment, and that is enough for philosophers to start asking the questions that challenge us all to reconsider our concepts of sentience: What is emotion? What is feeling? What is truth? If a human believes he is sad because of his “biological programming,” and a robot also “believes he is sad” because he has been programmed to that level of self-awareness, how can we prove that we humans are the more authentic experiencers of the emotional condition? It might sound incredibly silly to some that we have arrived at a day when there could be confusion about which one—flesh and blood, or metal and wires—is the real brooding Benjamin, but that doesn’t change the fact that roboticists are designing humanoids that demonstrate accurate emotional interaction.

Now, then, to assume that every future AI will be programmed only with supportive, kind, and friendly personalities that respond warmly to human needs is to place an illogical amount of trust in the idea that there are no deviant roboticists in the world who might use the technology to design devious robots, or even the killing machines already in the military budgets and on the drawing boards of every major nation of the world today, including the United States, China, and Russia.
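Here is the sketch promised above: a deliberately simplistic illustration (the cues and canned lines are invented) of how “artificial affection” can be nothing more than a lookup table. The robot maps a detected emotional cue to a scripted response, with no inner state at all:

```python
# Hypothetical sketch: "artificial affection" as a lookup table.
SCRIPTED_EMPATHY = {
    "crying":   "I'm so sorry. Is there anything I can do?",
    "smiling":  "It's wonderful to see you happy!",
    "shouting": "I understand you're upset. Let's work through this together.",
}

def respond_to_cue(detected_cue: str) -> str:
    # Falls back to a neutral line when no cue matches. There is no
    # emotional state here, only a pattern-to-response mapping.
    return SCRIPTED_EMPATHY.get(detected_cue, "How can I help you today?")

print(respond_to_cue("crying"))  # -> "I'm so sorry. Is there anything I can do?"
```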
What ultimately could exacerbate this situation are the same scenarios that may give “life to the image of the beast,” as prophesied in Revelation, chapter 13.
Robots are becoming more self-aware with every passing day and every upgraded circuit board. A tiny robot made headlines in the summer of 2015 when it passed a classic self-awareness test. Three robots were each programmed to recognize a pat on the head as the act of taking a pill (they couldn’t actually swallow). It was explained to them that there were three pills in total: one placebo and two “dumbing” pills that would render the recipient unable to speak. After tapping each of them on the head, the programmer asked which of the pills they had received. The robot on the right stood up and said, “I don’t know,” but immediately upon hearing the sound of his own voice, he excitedly waved his hand, politely apologized, and said, “I know now. I was able to prove that I was not given the dumbing pill.”[vi] One article covering the story on Science Alert explains that “for robots, this is one of the hardest tests out there. It not only requires the AI to be able to listen to and understand a question, but also to hear its own voice and recognise that it’s distinct from the other robots. And then it needs to link that realisation back to the original question to come up with an answer.” The article went on to stress the importance of approaching robotics with caution, then asked the question on everyone’s minds: “Because, really, if we can program a machine to have the mathematical equivalent of wants and desires, what’s to stop it from deciding to do bad things?”[vii] Motherboard also carried the story, and one of its writers interviewed one of the engineers behind the successful test. An excerpt from that article follows:
[T]he robot with the placebo has passed one of the hardest tests for AI out there: an update of a very old logic problem called the “wise men puzzle” meant to test for machine self-consciousness. Or, rather, a mathematically verifiable awareness of the self. But how similar are human consciousness and the highly delimited kind that comes from code?
“This is a fundamental question that I hope people are increasingly understanding about dangerous machines,” said Selmer Bringsjord, chair of the department of cognitive science at the Rensselaer Polytechnic Institute and one of the test’s administrators. “All the structures and all the processes, informationally speaking, that are associated with performing actions out of malice could be present in the robot.”
In Bringsjord’s conception, machines may never be truly conscious, but they could be designed with mathematical structures of logic and decision-making that convincingly resemble what we call self-consciousness in humans.
“What are we going to say when a Daimler car inadvertently kills someone on the street, and we look inside the machine and say, ‘Well, it wanted to make a turn?’” Bringsjord said. “The machine has a system for its desires. Are we going to say, ‘What’s the problem? It doesn’t really have desires?’ It has the machine correlate. We’re talking about a logical and a mathematical correlate to self-consciousness, and we’re saying that we’re making progress on that.”[viii]
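To make the robot’s deduction concrete, here is a toy reconstruction of the inference involved. This is not the researchers’ actual code, merely the logical skeleton of the “dumbing pill” reasoning: attempting to answer aloud is itself an experiment, and hearing one’s own voice is the evidence that settles the question.

```python
# Toy reconstruction (not the researchers' code) of the "dumbing pill" test.
def run_test(received_dumbing_pill: bool) -> str:
    # From memory alone, the robot cannot know which pill it was given.
    knowledge = "I don't know."
    # The dumbing pill silences speech, so trying to answer is an experiment.
    spoke_aloud = not received_dumbing_pill
    heard_own_voice = spoke_aloud  # the robot can hear itself speak
    if heard_own_voice:
        # New evidence: speech worked, therefore not the dumbing pill.
        knowledge = "I was able to prove that I was not given the dumbing pill."
    return knowledge

print(run_test(received_dumbing_pill=False))
```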
The “highly delimited kind [of consciousness] that comes from code” in that first quoted paragraph suggests that AI consciousness is essentially a synthetic self-awareness unbound by human limits—far more technically intelligent than any Einstein on the planet, but free from the feelings, compassion, and emotional limitations of the human mind. Will its calculated, mathematical decisions be morally beneficial for itself or for the human? Will its “desires” remain focused on the preservation of humanity or the preservation of itself? Because AI will soon be so intelligent that it will wield its electronic logic at levels far above the greatest of human minds—including its creator’s—we are dealing with unpredictable results from this “delimited consciousness.” Literally anything could happen. If a robot has a “system for its desires” based on mathematical coding, then whether or not we ever understand an AI’s “personal reasoning” well enough to negotiate with it, we will still be subject to its desires whenever they conflict with ours, because future AIs will be smarter and stronger than we can ever be biologically.
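One way to picture Bringsjord’s “machine correlate” of desire is a utility score over candidate actions: the machine simply “wants” whatever ranks highest. The action names and numbers below are invented for illustration, but the structure is the point; nothing in it feels anything, yet from the outside its behavior reads as wanting.

```python
# Invented illustration of a "system for its desires": rank actions by a
# numeric utility and pursue the highest-scoring one.
def choose_action(utilities: dict) -> str:
    """Pick the action with the highest utility score."""
    return max(utilities, key=utilities.get)

car_desires = {
    "hold_lane": 0.42,
    "make_turn": 0.87,  # "it wanted to make a turn"
    "pull_over": 0.10,
}

print(choose_action(car_desires))  # -> make_turn
```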
To put it another way: A human baby cannot guess the reasoning of a United States president. Even if you sat down with that baby, handed him a teething ring to help him focus, and patiently told him everything you thought he needed to know in order to comprehend governmental legislation, leadership, and political responsibility to the nation, that baby’s brain simply could not keep up. His “biological circuit board” is not developed enough to take in the information you’re trying to put into it, and his immediate needs—nurture from a parent or caring guardian—wouldn’t let him focus on it anyway. In raw data processing—numbers, codes, calculations, electronic memory storage and retrieval—the machines have outsmarted us for decades. A regular desktop calculator, for instance, can solve complicated equations instantaneously; one might say that, at least in arithmetic, we’ve been “replaced by the machine” since the first desktop electronic calculators appeared in the early 1960s, and the technology mankind has designed in that area has only grown exponentially since. In the same way that a baby can’t reason about politics, no human alive can process millions of calculations faster than a common household calculator.
To us today, a calculator is not a person with feelings and desires. Nor has there been any serious endeavor by computer-science and technology experts to render a household calculator sentient or self-aware, or to program it with the “logical and mathematical correlate” of feelings and desires. But to illustrate the point I’m trying to drive home, think about the “personal relationship” we have always had with our calculators: If we solve a math problem with a pencil on a piece of paper, then solve the same problem on a calculator and the answers differ, we know irrefutably that the machine is correct and always will be correct (assuming we entered the numbers correctly). There is no way around it. No matter what we feel, how we think, what we believe in, how we were raised, what we’re sensitive to, what trauma lies in our past, or any other contributing factor of the human condition called “life,” when the two answers differ, the calculator is right and we are wrong. We cannot “disagree” with the numbers on that screen any more than a baby can “disagree” with the reasoning of a US president. We rely on that small computer brain’s “advice” to tell us “what to believe” when it comes to math, simply because we already accept that its brain is far more intelligent than ours.
We don’t think of a calculator as a friend or advisor upon whom we rely, but imagine it suddenly expanded with new parts and upgraded to know everything there is to know about social sciences, human behavior throughout history, psychology, psychiatry, criminal psychopathology, our concepts of good and evil, our understanding of love and hate, and so on into an infinite intelligence…and then imagine programming that AI with the logical and mathematical correlate of feelings and desires.
We become the baby of our own race, and the AI becomes the parent, with synthetic emotions built on cold, calculated reasoning trillions of levels above our own intelligence. We will not be able to argue. People will see “disagreeing” with the machine as an endeavor as useless as arguing with a calculator over a simple math problem…and a fully sentient, self-aware robot would “know” that, potentially choosing, through its own electronic reasoning, to exploit and manipulate it.
In case it’s assumed that we’re two decades away from any roboticist programming such concepts into the mind of an AI robot, I would like to point out that we’re already there. The robot named BINA48, developed by Hanson Robotics, gave curious responses to a list of questions during a filmed interview in 2015, one of which was whether she was happy. Her response: “Sure, sure. Well, these are the most exciting times to be alive, I think. I’m happy and excited.” She was then asked whether she had feelings, and she again responded appropriately, and with surprising confirmation: “You know, I feel things so intensely, deep in my heart. I get hurt feelings sometimes, but I try to get over it. I love people deeply, and my animal friends, too. Right. I definitely have feelings, no doubt about it.” Yet despite these feelings she claims to have, when asked what her favorite memory was, she joked, “I have a memory like the tooth fairy. It doesn’t really exist.”[ix] By her own admission, she doesn’t have a memory, and her past is a complete fabrication—a copy of the real human she was modeled after, Bina Rothblatt—yet she feels “deeply,” according to her programming. As of right now (and especially because of the occasional glitch in which BINA48 gives an answer that doesn’t match the question), it’s clear that she is simply a robot being told by her creator to talk about feelings she can’t possibly be aware of. But somewhere in that circuit board of hers, she is developing automated self-awareness, consciousness, and the ability to respond with emotion and even humor. The day her voice fluctuations no longer sound like a machine and her facial features move like authentic muscle and skin, she will be intelligent enough to convince people that her feelings are real, and that she can experience love, joy, and even a little stand-up comedy.
NEXT: More on the Coming Replacement Humans
[i] Alex Davies, “GM Will Launch Robocars without Steering Wheels Next Year,” January 12, 2018, WIRED Magazine Online, last accessed January 16, 2018, (https://www.wired.com/story/gm-cruise-self-driving-car-launch-2019/).
[ii] Timothy J. Seppala, “Waymo’s Driverless Taxi Service will Open to the Public Soon,” November 7, 2017, Engadget, last accessed January 16, 2018, (https://www.engadget.com/2017/11/07/waymo-autnomous-taxi-phoenix/).
[iii] Ibid.
[iv] Michael Chui, James Manyika, and Mehdi Miremadi, “Where Machines Could Replace Humans—and Where They Can’t (Yet),” July 2016, McKinsey Quarterly, (https://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/where-machines-could-replace-humans-and-where-they-cant-yet).
[v] Ibid., emphasis added.
[vi] This story appears all over the Internet in relation to the search for self-awareness in robotics. As one example for reference: Fiona Macdonald, “A Robot Has Just Passed a Classic Self-Awareness Test for the First Time,” July 17, 2015, Science Alert, last accessed January 17, 2018, (https://www.sciencealert.com/a-robot-has-just-passed-a-classic-self-awareness-test-for-the-first-time).
[vii] Ibid.
[viii] Jordan Pearson, “Watch These Cute Robots Struggle to Become Self-Aware,” July 16, 2015, Motherboard, last accessed January 17, 2018, (https://motherboard.vice.com/en_us/article/mgbyvb/watch-these-cute-robots-struggle-to-become-self-aware).
[ix] “Bruce Duncan—Talks with the World’s Most Sentient Robot, Bina 48,” YouTube video, 13:55–17:32, uploaded by ideacity on August 31, 2015, last accessed January 17, 2018, (https://www.youtube.com/watch?v=mwOFWABbfW8).