Whether it's Hollywood movies or speeches by Tesla's Elon Musk, there is a growing cultural debate around the power of artificial intelligence. "AI" is the ultimate future of Big Data: a world where machines think for themselves, drawing on a vast pool of information.
But much of the current debate is fear-based: What happens when the machines become too smart and turn on us? It is the Terminator future that alternately fascinates and terrifies people.
Beyond the sci-fi thrills, this creates a practical problem in the present. Any step toward gathering and processing more information can easily spook people who worry that it's leading to that inevitable machine takeover.
To counteract that fear, there is a movement to research and consider the ethical implications of AI and data. The latest entry into the field was announced recently, with the creation of the Leverhulme Centre for the Future of Intelligence at Cambridge University.
Backed by a $15 million grant, the new center will bring together technologists with people from the humanities: philosophers and sociologists. Its goal: "Examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century."
According to the announcement, the rate of advances in machine learning suggests we could achieve human-level intelligence in machines in the foreseeable future.
"While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century," the release says. "Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us?"
Of course, it was just this concern that prompted Musk to famously warn of the dangers of AI last year in a talk at MIT: "I think we should be very careful about artificial intelligence," he said. "If I had to guess at what our biggest existential threat is, it's probably that. I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."
Musk was worried enough about AI to lead an $11 million donation back in July to the Future of Life Institute in Cambridge, MA. The goal of the new program is to keep "AI robust and beneficial."
"Building advanced AI is like launching a rocket," said Skype and FLI founder Jaan Tallinn, in a statement at the time of the donation. "The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering."
The possibility that AI could become a not-too-distant reality is intriguing. Perhaps the right framework is to think of it through the lens of human relationships. Relationships have always been at the core of business and society, and today data and technology enable them at a depth and scale never before possible. How can AI be developed in ways that enhance relationships rather than detract from them?
Predictions are funny things. They involve a combination of data, analysis and a healthy dose of crystal-ball guesswork. And behind that deductive process sits human collaboration and brainwork. It's fair to say that the progress we make with the power of data and AI will continue to require the human element.
Indeed, demonstrating that there is still a human behind the wheel, that someone has thought through the consequences, could go a long way toward reassuring people that the potential benefits of AI and Big Data will far outweigh the risks.