
A new algorithm allows computers to learn the way we do.
A research team has developed a new algorithm that allows computers to learn in much the same way humans do, and for people like physicist Stephen Hawking, that is potentially a problem down the road.
As we recently reported, scientists created the Bayesian Program Learning framework, which teaches computers to both identify and reproduce handwritten characters from a single example of each character. In a test, both computers and humans were asked to reproduce a character after seeing it only once, and a panel of human judges could not tell the difference between a human's reproductions and a computer's.
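To make the idea of learning from one example more concrete, here is a minimal sketch of one-shot classification. It is not the Bayesian Program Learning method itself, just the simplest possible stand-in: each character class is stored as a single example image, and a new drawing is assigned to whichever stored example it most closely matches (a nearest-neighbor comparison). The function name, image sizes, and data are illustrative assumptions, not details from the study.

```python
# Toy one-shot classification: one stored example per character class,
# new drawings matched to the nearest stored example.
# This is NOT the Bayesian Program Learning framework, only a simple
# stand-in for the general idea of learning from a single example.
import numpy as np

def classify_one_shot(query, exemplars):
    """Return the label of the single stored example closest to `query`.

    query     -- 2D NumPy array (a binarized character image)
    exemplars -- dict mapping label -> 2D NumPy array of the same shape
    """
    best_label, best_dist = None, float("inf")
    for label, image in exemplars.items():
        # Squared pixel-wise distance between the new drawing and the example
        dist = np.sum((query.astype(float) - image.astype(float)) ** 2)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Usage with made-up 8x8 "characters": one example per class.
rng = np.random.default_rng(0)
exemplars = {"alpha": rng.integers(0, 2, (8, 8)),
             "beta": rng.integers(0, 2, (8, 8))}
noisy_alpha = exemplars["alpha"].copy()
noisy_alpha[0, 0] ^= 1  # flip one pixel to simulate a new drawing
print(classify_one_shot(noisy_alpha, exemplars))  # -> "alpha"
```

The real system goes far beyond this by modeling characters as programs of pen strokes, which is what lets it both recognize and convincingly reproduce them.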
It's an exciting breakthrough that could have major ramifications for how computers take on tasks that would normally require a person. But for some, it is also a concerning development.
Stephen Hawking is one person who has been sounding the alarm on artificial intelligence. The famed physicist is on record saying that we need to be extremely careful with AI. He is part of a growing group of scientists concerned about strong artificial intelligence that could exceed human intelligence, pursue its own aims, and slip out of human control, according to a Live Science report.
Hawking even went so far as to say in an interview with the BBC last year that it could result in the end of the human race.
And he's not the only famous figure voicing this concern. Elon Musk, the CEO of SpaceX and Tesla Motors, has called AI humanity's biggest existential threat. That doesn't mean humanity should avoid AI, only that it should proceed carefully and understand the risks before creating a stronger version of it.
Still, others argue that humanity is nowhere near developing AI strong enough to pose such a threat, and that the concern lies many years down the road.