AI Has a Barrier of Meaning?

November 6, 2018

The NY Times ran an opinion piece with the catchy title Artificial Intelligence Hits the Barrier of Meaning. It mostly makes some decent points about deep learning systems not being robust, followed by the claim that studying human cognition is the key to developing trustworthy AI systems.

Today’s A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning. The mathematician and philosopher Gian-Carlo Rota famously asked, “I wonder whether or when A.I. will ever crash the barrier of meaning.” To me, this is still the most important question.

I was unaware of the Rota quote until I read the article, but I guess it makes sense?

But ultimately, the goal of developing trustworthy A.I. will require a deeper investigation into our own remarkable abilities and new insights into the cognitive mechanisms we ourselves use to reliably and robustly understand the world. Unlocking A.I.’s barrier of meaning is likely to require a step backward for the field, away from ever bigger networks and data collections, and back to the field’s roots as an interdisciplinary science studying the most challenging of scientific problems: the nature of intelligence.

The challenge with this rationale is that it leans too heavily on an anthropomorphic view of AI. For me, it's a tough sell that studying human cognition, rather than improving data quality, data abundance, and the surrounding algorithms, is what will unlock better AI systems. Despite my disagreements, I'm glad a researcher is putting these ideas in front of the public and moving the conversation beyond Skynet and whatever nonsense marketing buzz IBM comes up with.
