10 Surprising Things I Learned About AI From a Talk by Two Great Minds

CXOTalk: The Future of Cognitive Computing with Stephen Wolfram and Anthony Scriffignano

What happens when you watch two brilliant people with over 80 years of combined experience in computer science, data science, linguistics, physics, and mathematics (among other things) get together to chat about the future of artificial intelligence (AI) and cognitive computing?

I can tell you what happens to me. I hang on every word and open my mind to understand and learn concepts I know little about.

The CXOTalk episode The Future of Cognitive Computing brings together Dr. Stephen Wolfram, the creator of Mathematica, Wolfram|Alpha, and the Wolfram Language and the founder and CEO of Wolfram Research, with Anthony Scriffignano, PhD, SVP and Chief Data Scientist at Dun & Bradstreet. It perfectly fits the phrase “a meeting of the minds.” Over 2,600 people listened online as these two men, giants in their fields yet humble at heart, pondered how humans and machines are getting along in our world today, what’s possible, and what should give us pause.


Here are my ten surprising “ahas!” from this fascinating conversation:

1 – AI has been around for decades. What’s unique about today is that we can finally use AI to solve a wide range of real-world problems.

“I think, artificial intelligence, as now discussed, and I’ve watched its evolution over the course of nearly 40 years now, it’s really an extension of a long-running story of technology now, which is, ‘How do we take things which we humans have known how to do, and make machines do them for us?’ And typically, the pattern is, in the end, we humans define the goals that we’re trying to achieve, and then we want to automate as much as possible getting those goals done.” - Stephen Wolfram

2 – Let’s not fool ourselves; an Internet search doesn’t have all the answers.

“Orchestrating the answer in an amount of time that we’re comfortable with is no small feat. None of us would believe that the truth is out there on the Internet all the time, and yet sometimes we behave that way. And so, just adjudicating the truth is a challenge... As you unpack this thing, you get more and more surprises, and it becomes a more curiouser and curiouser world. And so, part of it is making it (AI) look more intelligent, and part of it is having it give an intelligent, empirical answer that you can scale and reproduce and learn from.” - Anthony Scriffignano

3 – To work well, AI needs not just an understanding of natural language, but also an understanding of the world.

“The key extra ingredient that one needs to do good natural language understanding is not just being able to pick apart pieces of English or whatever language one's dealing with, but also having a lot of actual knowledge about the world, because that's what allows one to determine when somebody says ‘Springfield,’ for example. You have to realize, ‘Well, which Springfield are they probably talking about?’ If you know about all the different Springfields, and you know what their populations are, how popular they are on Wikipedia, where they are relative to the person asking the question, then you can figure out what they're talking about. But without that underlying knowledge, you can't really do that.” - Stephen Wolfram
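Wolfram’s “Springfield” example can be sketched as a simple candidate-scoring heuristic. Everything below is hypothetical, invented to illustrate the idea of combining world knowledge (population, page popularity, distance to the asker) into one score; the tiny knowledge base, the weights, and the scoring function are not from the talk:

```python
import math

# Hypothetical knowledge base: a few U.S. Springfields with
# (population, Wikipedia page views, latitude, longitude).
SPRINGFIELDS = {
    "Springfield, IL": (112_000, 95_000, 39.78, -89.65),
    "Springfield, MA": (155_000, 80_000, 42.10, -72.59),
    "Springfield, MO": (169_000, 70_000, 37.21, -93.29),
}

def distance_km(lat1, lon1, lat2, lon2):
    """Rough great-circle distance in kilometers (haversine formula)."""
    r = 6371
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def disambiguate(user_lat, user_lon):
    """Pick the likeliest Springfield: bigger and more popular scores
    higher, farther from the asker scores lower. Weights are arbitrary."""
    def score(item):
        name, (pop, views, lat, lon) = item
        return pop / 1e5 + views / 1e5 - distance_km(user_lat, user_lon, lat, lon) / 500
    return max(SPRINGFIELDS.items(), key=score)[0]

# An asker near Chicago most plausibly means the Illinois one.
print(disambiguate(41.88, -87.63))  # → Springfield, IL
```

The point of the sketch is Wolfram’s: without the underlying knowledge table, no amount of parsing the word “Springfield” tells you which one is meant.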

4 – The confounding characteristics of human language are a problem for AI.

“I do a lot of work in computational linguistics across languages, and within the sphere of two or more languages, something that you think is pretty obvious to say can’t easily be transformed or translated into a language. In a legal context, there’s a very vast distinction between 'You should' and 'You must.' It’s critical in English. There are multiple words for 'should' and 'must' in Chinese and some of them are used synonymously. So, you have to understand the context to understand if it’s a 'should' or a 'must' and sometimes native speakers disagree as to the interpretation of that. Think about what that does to a [legal] contract. Now, try to imagine talking to a machine that sets its goals at a level of ambiguity…and we can’t even get 'should' and 'must' right. That’s a very big problem.” - Anthony Scriffignano

5 – We need to start thinking about standards and guidelines to frame AI development and use.

“I’ve spent a huge amount of time developing very high-level computer languages that let one express things at the highest possible level for humans and what they want to do, in a way that can also be understood by machines. As we look towards the smart contract world of telling our machines in general terms, ‘What do you want to achieve?’ we need a language to do that. We want to give some overall guidelines for how the AIs should behave with respect to us. I’ve been interested in this problem of what is the language that we can use to write the constitution for the AIs. And the next question is, ‘What should the constitution actually say?’” - Stephen Wolfram

6 – We are entering the age of “computational everything.” Eventually most human knowledge could be transformed into computable objects.

“What's happening is there is this way of thinking about things in computational terms that really is extremely powerful. There either is, now, or soon will be a field called computational X, and that will be the future of that field. Some of those fields already exist: computational biology, computational linguistics, others are just emerging. I think it’s the defining idea of the century.” - Stephen Wolfram

7 – Emerging technologies present tremendous monetization opportunities, but we must be cognizant of unintended consequences.

“I was at a conference last week where someone was talking about the autonomous self-driving vehicles, and the importance, obviously, of object recognition and having an algorithm drive a car…Apparently, youths have found an amusing pastime of making homemade stop signs and holding them up in front of the autonomous self-driving vehicles and making them stop in the middle of traffic. A human being wouldn't be confused by a kid holding up a fake stop sign, but an algorithm designed to recognize a stop sign, until it realizes it's being tricked that way because someone told it, will stop every time. Sometimes in this rush to get to market, these sort of unintended consequences don’t get thought about; sometimes, they’re funny like the stop sign; sometimes, they’re not so funny, and people find a way to take down half the internet with a denial of service attack on security cameras because people don’t update their software. I think that people could do a better job of thinking about unintended consequence.” - Anthony Scriffignano

8 – It is doubtful that AI will ever replace the need for human expertise.

“There’s this whole question about how do you put knowledge into computable form? How do you go from what’s out there in the world, to something where knowledge is organized, where you know systematically about companies and their characteristics? And one of the questions is always, ‘Can you just go to the web…have a machine intelligence figure out what the right answer is?’ And the basic answer is you can get 85% of the way there. The problem is, you don’t know which 15% are completely wrong. And to know that is where one needs a process of curation that is a mixture of really good automation with human activity. My observation has been, you’ve gotten a certain fraction of the way there. But in the end, us humans aren’t useless after all, and you actually need to inject that sort of moment of human expertise into the thing.” - Stephen Wolfram
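Wolfram’s “automation plus human curation” process can be sketched as a confidence-threshold triage. The records, the confidence scores, and the 0.90 threshold below are all invented for illustration; the only idea taken from the talk is that the machine handles what it is sure about and routes the rest to a person:

```python
# Hypothetical human-in-the-loop curation: automation accepts what it is
# confident about; anything below the threshold goes to a human curator.
REVIEW_THRESHOLD = 0.90

machine_extractions = [
    {"company": "Acme Corp", "employees": 5200, "confidence": 0.97},
    {"company": "Globex", "employees": 120, "confidence": 0.62},
    {"company": "Initech", "employees": 340, "confidence": 0.93},
]

def triage(records, threshold=REVIEW_THRESHOLD):
    """Split records into an auto-accepted list and a human-review queue."""
    accepted = [r for r in records if r["confidence"] >= threshold]
    needs_review = [r for r in records if r["confidence"] < threshold]
    return accepted, needs_review

accepted, needs_review = triage(machine_extractions)
print(len(accepted), "auto-accepted;", len(needs_review), "sent to a human curator")
# → 2 auto-accepted; 1 sent to a human curator
```

The hard part Wolfram highlights is not in this sketch at all: knowing which 15% are wrong, which is exactly what the confidence score cannot fully capture and why the human queue exists.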

9 – A good model for deploying AI is an interplay between humans, who identify problems and set goals, and machines, which help us solve those problems and achieve those goals.

“There’s never going to be a substitute for understanding the problem, for humans to continue to advance the art. The machines can help convince the art. But for the foreseeable future, I think we still get to conduct the orchestra.” - Anthony Scriffignano

“I think that the main question is, what can be automated in the world? And, the fundamental thing to realize is that what can’t be automated is what you’re trying to do. That is the definition of the goal. There is no abstract sort of ultimate, automatic goal. The goal is something that’s defined by us humans on the basis of our history, and culture, and characteristics, and so on.” - Stephen Wolfram

10 – The promise of discovery using our brains and technology is that we’ll solve bigger and harder problems, some we haven’t yet anticipated.

“Being able to curate, to understand, to put like with like, to triangulate, to test for veracity, to have some experience; things get new and things change; when the environment changes, to understand how it’s changing. These are the critical moments where I think there’s still hope for the need for our human brains, and I think we’re not going to program ourselves out of business here. I think that we’re going to get to solve bigger and better problems. And these technologies will help get everything else out of the way, if we let it.” - Anthony Scriffignano

There are so many more thought-provoking threads to Anthony and Stephen’s discussion. Their parting words in particular give me pause as I think about what AI will mean to the future of the human race. But I don’t want to spoil the surprise!

So check out the recording – I think you’ll find the dialogue riveting for some time to come.

CXOTalk: The Future of Cognitive Computing with Stephen Wolfram and Anthony Scriffignano