It seems that many of us are looking for AI to eventually reach “human-level intelligence,” but is that really the right goal? It sounds good on the surface, but it may not mean much. I think when people say they want human-level intelligence from AI, they’re really just asking when it will be able to trick us into thinking it’s human.
This came up while reading Yuval Noah Harari’s book “Nexus,” where he compared the “human-level intelligence” of AI to “bird-level flight” for planes:
“As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien. It should also be noted that people often define and evaluate AI through the metric of “human-level intelligence,” and there is much debate about when we can expect AIs to reach “human-level intelligence.” The use of this metric, however, is deeply confusing. It is like defining and evaluating airplanes through the metric of “bird-level flight.” AI isn’t progressing toward human-level intelligence. It is evolving an entirely different type of intelligence.”
AI is already eons past human skills in so many areas: in speed, in combining information, in changing the tone of a large piece of content, in creating new works of art in seconds, and much more.
There are certainly areas where humans are still better, and I hope we’re able to keep those to ourselves for a long time to come. However, even if and when AI catches up in those areas, it won’t be humanlike at all. As Harari says, it’s evolving into an entirely different type of intelligence.