There are two reasons that concerns about AGI have suddenly become more plausible, and more pressing. The first is the unexpected speed of recent AI advances. “Look at how it was five years ago and how it is now,” Hinton told the New York Times. “Take the difference and propagate it forwards. That’s scary.”

The second is uncertainty. When CNN asked Stuart Russell, a computer science professor at the University of California, Berkeley, and co-author of Artificial Intelligence: A Modern Approach, to explain the inner workings of today’s LLMs, he couldn’t. “That sounds weird,” Russell admitted, because “I can tell you how to make one.” But “how they work, we don’t know. We don’t know if they know things. We don’t know if they reason; we don’t know if they have their own internal goals that they’ve learned or what they might be.” And that, in turn, means no one has any real idea where AI goes from here.

Many researchers believe that AI will tip over into AGI at some point. Some think AGI won’t arrive for a long time, if ever, and that overhyping it distracts from more immediate issues, like AI-fueled misinformation or job loss. Others suspect that this evolution may already be taking place. And a smaller group fears that it could escalate exponentially. As the New Yorker recently explained, “a computer system [that] can write code — as ChatGPT already can — … might eventually learn to improve itself over and over again until computing technology reaches what’s known as ‘the singularity’: a point at which it escapes our control…”