Cognitive computing and AI begin to grow together

By Sue Feldman

When we first attempted to define cognitive computing, we found clear differences between it and AI. We posited that for software to be considered a new type of computing - “cognitive” - it must solve problems that were previously insoluble. This new class of problem has no precise answer. Instead, it is open to interpretation: it is ambiguous, or it has no single right answer that is amenable to computation.

Five years ago, we proposed that, to be considered “cognitive,” an application must be adaptive, interactive, iterative, stateful, and, above all, contextual.

In the five years since cognitive computing appeared, however, our understanding of the missing pieces that this new kind of computing might provide has evolved. We have found that different depths of analysis are required for different purposes. This means that potential users and buyers must define their purpose in investing in a new technology.

While understanding the full meaning of a text might be attractive, it is often possible to understand enough of a subset that meaning can be inferred. For example, emotion analysis might require a more in-depth understanding of the difference between “like” and “love” than sentiment analysis, which might simply label both words “positive.”
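To make that distinction in depth concrete, here is a minimal Python sketch contrasting the two levels of analysis. The lexicons and scores in it are invented for illustration and are not drawn from any published resource:

```python
# Shallow sentiment: both words collapse to the same coarse label.
# (Illustrative toy lexicon, not a real sentiment resource.)
SENTIMENT = {"like": "positive", "love": "positive", "hate": "negative"}

# Deeper emotion analysis: the same words carry different
# emotions and different intensities. (Also illustrative.)
EMOTION = {
    "like": ("affection", 0.4),
    "love": ("affection", 0.9),
    "hate": ("anger", 0.9),
}

def shallow(word: str) -> str:
    return SENTIMENT.get(word, "neutral")

def deep(word: str) -> tuple[str, float]:
    return EMOTION.get(word, ("none", 0.0))

for w in ("like", "love"):
    print(w, shallow(w), deep(w))
# like positive ('affection', 0.4)
# love positive ('affection', 0.9)
```

For many purchase-intent applications, the shallow label is all a buyer needs; the deeper scores only pay for themselves when the purpose demands them.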

This gradual adjustment in requirements makes it easier for developers to address only the needs of their target market. More nuanced, more in-depth products can be developed at a slower pace while their features are explored.

Developing suitable expectations

How do we manage the hype and promise of new inventions while making sure that they represent a realistic opportunity? Can we invent self-driving cars or a Boeing 737 MAX without exposing ourselves to the risks these innovations can pose to our lives?

New technologies rarely spring full-blown to the marketplace. Instead, they evolve incrementally, building on a myriad of designs, experiences, and market demands that have come before.

When our inventions were software only, or were minor devices with small impact, we could afford to let designers experiment and correct their mistakes. But when lives are imperilled, we are at the brink of disaster. Whom do we trust to make the decisions that release new inventions to unsuspecting customers? Who should be trusted?

The truth is, we don’t know yet, and technology alone can’t give us sufficient answers. All we know is that trust and reliability are imperative, even if we don’t know yet what impacts to expect.

That is certainly the case today with cognitive computing. Cognitive applications and platforms have seen a marked evolution that has given us a mixture of features that solve problems rather than insisting solely on purity of design. They are purpose-built experiments, mixing cognitive computing, AI, machine learning, and new kinds of interfaces in order to address a specific purpose or need.

Developers and innovators face a dilemma: Predicting the next big thing is nearly impossible. What will grab the market or fulfil a need? Part of the design process is to select features that solve a problem and that have been proven to work for a specific purpose.

For instance, does a search application require deep parsing in order to determine meaning, or will shallow parsing work in the majority of cases? How can we work with potential buyers of technology to help them understand the trade-offs and choices that need to be made?
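As an illustration of that trade-off, the sketch below assumes the spaCy library and its small English model (neither is mentioned in this article; install with `pip install spacy` and `python -m spacy download en_core_web_sm`). Shallow noun-chunk extraction is often enough for indexing and retrieval, while a full dependency parse is needed to recover who did what to whom:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The regulator grounded the plane after two crashes.")

# Shallow parsing: noun chunks as candidate index terms for search.
print([chunk.text for chunk in doc.noun_chunks])
# e.g. ['The regulator', 'the plane', 'two crashes']

# Deep parsing: walk the dependency tree for subject/object relations.
for token in doc:
    if token.dep_ in ("nsubj", "dobj"):
        print(token.text, token.dep_, "->", token.head.text)
# e.g. regulator nsubj -> grounded
#      plane dobj -> grounded
```

If most queries are satisfied by matching the noun chunks, the deeper parse may add cost and fragility without adding value; the buyer's purpose decides.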

Each degree of depth must be integrated with the software application and the interface as a whole. Which features that seem logical (such as truly nuanced language understanding) are so difficult to achieve that they would delay the launch of a valuable, market-breaking application?

To complicate matters further, features that seem elementary catch on - often for the wrong reasons. For instance, suppose that all I want to know is when my spouse has left the office and what the traffic volume is today on his way home. It’s pretty obvious which data points need to be combined for this simple bit of information.

And yet, all kinds of imponderables need to be factored into the equation: How fast does my spouse drive? Is there a repair truck in the way? To make things more difficult, the salient factors for a successful new application are rarely known and may depend largely on the user and her needs.
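A hypothetical sketch of the same point, combining the two data points the example names (a departure event and current traffic); every function and value below is invented for illustration, and no real API is implied:

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    left_office: bool
    eta_minutes: float

def estimate_arrival(departed: bool, route_minutes: float,
                     traffic_factor: float) -> Estimate:
    """Naive ETA: free-flow drive time scaled by observed congestion."""
    if not departed:
        return Estimate(False, float("inf"))
    return Estimate(True, route_minutes * traffic_factor)

# The imponderables (driving style, a repair truck blocking a lane)
# all hide inside traffic_factor, which is exactly what makes this
# "simple" feature hard to get right.
print(estimate_arrival(True, route_minutes=30, traffic_factor=1.4))
# Estimate(left_office=True, eta_minutes=42.0)
```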

For instance, one of the most useful features (for me) of Alexa is having the device tune in or change radio channels while my hands are dirty. Its voice interface is a life (or dinner) saver for me.

How much is “good enough”?

Several factors go into our use of technology, and each is both a threat and a promise. For instance, a device that promises to behave to perfection, augmenting our lives without intruding on them, is welcome.

Assuming that the device remains quietly in the background, volunteering only when summoned, it is deemed useful. But if it intrudes (volume drowns out human activity) or, worse, threatens an activity or a human interaction, then it must be adjusted. In the worst situations, a device or a technology may actually threaten human welfare.

The Boeing 737 MAX is a prominent example. By relying solely on its automated design, with no opportunity for pilots to modify the outcome, the plane took lives because there were no humans in the loop.

This is an extreme example, but it is also a good lesson about how humans and machines must both contribute to the functioning of safe human-technology environments. And that’s the problem.

What are the boundaries in device design and human reactions that must be built into an innovation? This is no mean determination to make.

How do we mitigate risk in self-driving cars? How readily should we be able to override the next technology if it intrudes on human activities? How much should we risk relying on smart machines in unpredictable conditions?

These are questions that we should ask now, before preventable disasters intrude on our lives. We need guidance on how to perform the risk-benefit analysis.

Sue Feldman is president of Synthexis and co-founder of the Cognitive Computing Consortium, e-mail sue@synthexis.com.