You’ve heard about machine learning and its broader, fast-developing cousin, artificial intelligence. Machine learning is a well-established practice in computing science, and artificial intelligence is a quickly developing field that promises to produce super-smart computers – potentially machines smarter than the combined intelligence of the whole human race.

But how will we get there?

Cognitive computing is one potential way of getting there. It combines cognitive science with advanced computing techniques to create computers that mimic the way the human brain works.

It creates potential not only for smarter computers but for smarter, more natural ways for humans to interact with them.

What is cognitive computing, and how is it already changing our lives? We’ll show you.

Cognitive Computing: Theory and Practice

Cognitive computing is an attempt to create code that simulates the human brain and thought process.

It requires the combination of:

  • Self-learning algorithms
  • Data mining
  • Natural language processing
  • Pattern recognition
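Self-learning is the least familiar of these ingredients. As a rough illustration only (the task, data, and learning rate below are invented for this sketch), here is a perceptron – one of the oldest self-learning algorithms – adjusting its own weights from labeled examples instead of following hand-written rules:

```python
# A minimal "self-learning" algorithm: a perceptron that adjusts its own
# weights from labeled examples rather than following hand-written rules.
# The toy data and parameters here are invented for illustration.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (feature_vector, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in samples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction          # 0 when the guess was right
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy task: classify 2-D points by whether their coordinates are large.
data = [([0.0, 0.1], 0), ([0.2, 0.1], 0), ([0.9, 0.8], 1), ([1.0, 0.9], 1)]
w, b = train_perceptron(data)
print(predict(w, b, [0.95, 0.85]))  # → 1, a point like the "1" examples
```

The point is that nothing in the code encodes the rule itself; the rule emerges from the examples.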

Cognitive computing models are an attempt to go beyond conventional smart algorithms. While computers can do math or process data faster than humans, there are still plenty of areas in which humans remain superior.

For example, a computer can recognize patterns in images, but it struggles to recognize and name an object it has never seen before.

It might recognize a series of triangles and place them in context, but it won’t recognize a parallelogram if it has no frame of reference. More importantly, it won’t know what a parallelogram means. The computer will only know that something is different, and it will often get stuck.
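This “closed world” problem can be made concrete with a toy classifier (the features and labels below are invented, and no real vision system works this simply). Trained only on triangles and squares, it is still forced to answer when shown a parallelogram:

```python
# Toy illustration of the "closed world" problem: a classifier trained only
# on triangles and squares must still pick a label when shown a parallelogram.
# Features and data are invented for this sketch.

import math

# Hand-made features per class: (number of sides, 1.0 if all angles equal else 0.0)
TRAINING = {
    "triangle": (3.0, 1.0),   # equilateral triangles in the training set
    "square":   (4.0, 1.0),
}

def classify(features):
    """Nearest-centroid: forced to choose one of the known labels."""
    return min(TRAINING, key=lambda label: math.dist(TRAINING[label], features))

parallelogram = (4.0, 0.0)      # four sides, but the angles are not all equal
print(classify(parallelogram))  # → "square", the closest known concept
```

The classifier cannot say “I don’t know what this is”; it can only map the unfamiliar onto the familiar, which is exactly the gap cognitive computing aims to close.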

To break it down into simpler terms: cognitive computing could help machines perform human tasks more like a human would by using the human brain as an example.

The Basic Components of a Cognitive System

Cognitive computing systems are distinguishable from machine learning or basic computational systems because they exhibit four important features:

  • Interactive
  • Contextual
  • Adaptive
  • Iterative

Cognitive systems interact with users and with other components of the system, like devices, processors, and cloud technologies. Without straightforward interaction, neither side can communicate its needs, and the system is a bust.

These systems are also adept at working with information in context. In fact, their ability to use contextual information is what sets them far apart from previous computing systems. Like humans, cognitive computing systems can understand information not only on its own terms but also according to time and place, as well as the underlying task and goal the information is part of.

The ability to adapt is also important because it’s what the human brain does. The code needs to adapt to new surroundings and work dynamically with data, tasks, and obstacles to achieve goals.

The Human Brain Stands in the Way


There are still a few things standing in the way of cognitive computing reaching its full potential in replicating the human decision-making process in an algorithm.

The biggest roadblock to reaching cognitive computing’s full potential is that we don’t yet understand the human brain.

Cognitive computing rests on the premise that human knowledge consists of words the brain organizes into descriptions of rules and patterns. That premise guides the development of these computer programs, and it has a long history in cognitive science.

In many ways, knowledge is the equivalent of a stored model, which can be replicated by code.

But there’s still more to learn. For example, there’s evidence that viewing the brain as a set of rules and pattern recognition systems is a very limited view of our mental processes.

Thus, cognitive science in both biology and computing remains limited to contemporary understandings of the brain.

The result: cognitive computing focuses on the ways it resembles the brain rather than on the many ways it may differ from it.

Who’s Leading the Way in Cognitive Computing?

Let’s start with the origins of cognitive computing and AI. There’s no place better to start than with IBM.

IBM & Watson

IBM’s contribution to cognitive computing is manifested in the IBM Watson product.

Watson is a cognitive computing system able to answer questions asked not in code but in natural language. It was developed as part of IBM’s DeepQA project by a team led by David Ferrucci, but its name came from the first CEO of IBM – Thomas J. Watson. According to IBM:

“more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence, and merge and rank hypotheses.”
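The generate-hypotheses, score-evidence, rank-answers loop in that quote can be sketched in miniature. The candidates, evidence passages, and word-overlap scoring below are invented for illustration; IBM’s actual pipeline combines over a hundred far more sophisticated techniques:

```python
# A toy sketch of the generate / score / rank loop the IBM quote describes.
# Candidate answers, evidence passages, and the overlap-based scoring are all
# invented; this is not IBM's pipeline, only the shape of the idea.

def score(candidate, evidence_passages):
    """Score a candidate answer by word overlap with the evidence."""
    words = set(candidate.lower().split())
    return sum(
        len(words & set(passage.lower().split()))
        for passage in evidence_passages
    )

def answer(candidates, evidence_passages):
    # Rank hypotheses by merged evidence score, highest first.
    ranked = sorted(candidates, key=lambda c: score(c, evidence_passages),
                    reverse=True)
    return ranked[0]

evidence = [
    "thomas j watson was the first ceo of ibm",
    "ibm named its question answering system watson",
]
candidates = ["thomas j watson", "alan turing", "charles babbage"]
print(answer(candidates, evidence))  # → "thomas j watson"
```

Each candidate is a hypothesis; the evidence scoring stands in for Watson’s hundred-plus analysis techniques.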

Watson started out as something of a joke: it was supposed to do little more than answer trivia questions on the TV program Jeopardy!. The punchline arrived when Watson became a contestant on Jeopardy! in 2011 and beat former champions Ken Jennings and Brad Rutter to score IBM the top prize of $1 million.

Although it began life as the kind of joke only an engineer could love, the potential of systems like Watson was quickly recognized – and indeed had been foreseen well before development began.

Watson became available for commercial use in February 2013 within the medical sector. Memorial Sloan Kettering Cancer Center was the first hospital to employ it to help manage decisions made across a patient’s course of treatment for lung cancer. It was a hit: IBM reported that, of the nurses who encountered Watson in the field, 90% followed its suggestions.

It’s worth noting that it may make hospital workers’ lives easier, but there is not yet any data available on patient outcomes. In other words, it helps make decisions, but it’s still unclear whether it saves lives.

As it stands, Watson offers endless applications. It tackles not only natural language but vast amounts of unstructured data: combining human skills with those only a computer can perform.

Google DeepMind

DeepMind is now synonymous with Google, but it isn’t a Google creation; rather, Google acquired the company in 2014 as an investment in AI.

DeepMind was founded in London in 2010 with the purpose of creating neural networks that learn to play games the way a human does – or better. In 2016, DeepMind’s AlphaGo beat a professional player at the board game Go. The win wasn’t a fluke: the computer beat world champion Lee Sedol four games to one in a five-game match.

Go players aren’t the only ones who should be worried about their titles: DeepMind’s successor program, AlphaZero, now beats the best chess and shogi players in the world.

The key differentiation between DeepMind’s programs and previous computer programs is self-learning. While previous computing systems had to be taught a game by example before mastering it, the later AlphaGo Zero learned how to play entirely on its own via self-play.
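The spirit of self-play – a program discovering winning strategy with no human examples – can be shown on a vastly smaller game. The sketch below plays out one-pile Nim against itself by exhaustive search; the real AlphaGo Zero used deep neural networks and Monte Carlo tree search on a game far too large for this approach:

```python
# A minimal illustration of learning by self-play: the program discovers a
# winning strategy for one-pile Nim purely by playing out games against
# itself (exhaustive search here; AlphaGo Zero used neural networks and
# Monte Carlo tree search on a vastly larger game).

from functools import lru_cache

ACTIONS = (1, 2, 3)  # stones a player may take; taking the last stone wins

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move can force a win from this position."""
    return any(
        a == stones or not can_win(stones - a)
        for a in ACTIONS if a <= stones
    )

def best_move(stones):
    """Pick a move that leaves the opponent in a losing position."""
    for a in ACTIONS:
        if a <= stones and (a == stones or not can_win(stones - a)):
            return a
    return 1  # losing position: every move is equally bad

print(best_move(10))  # → 2 (leaves 8, a losing position for the opponent)
```

No winning strategy is coded in; the program derives it entirely from playing out both sides of the game, which is the core idea behind self-play.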

Google hasn’t used DeepMind for much – or, at least, the company doesn’t talk about many applications. The primary one is using DeepMind algorithms to cool its data centers. Efficiency helps here, too: the latest version of AlphaGo – AlphaGo Zero – runs on only four of Google’s TPU processors compared to the 48 required by an earlier version. That makes it a valuable tool for Google, whose processing requirements are practically unbounded.

Like IBM Watson, DeepMind has been deployed to healthcare settings in the UK.

Microsoft Cognitive Services

The Microsoft Azure platform offers what it calls Microsoft Cognitive Services, a platform using AI to solve enterprise problems.

Its cognitive services are available for websites, apps, and even bots as a way of communicating with users through natural language. Some of its applications include:

  • Image-processing algorithms
  • Conversion of audio into text
  • Voice verification algorithms
  • Sorting and mapping data
  • Engaging with customers with chatbots

You can see a full list of capabilities on Microsoft’s Azure website.
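To make the offering concrete, here is a hedged sketch of calling one capability – image analysis – over REST. The resource name, key, and image URL are placeholders, and the endpoint path reflects the Computer Vision v3.2 API; check Microsoft’s current documentation before relying on it:

```python
# A hedged sketch of one Cognitive Services capability (image analysis) via
# REST. Resource name, key, and image URL are placeholders; the v3.2 path is
# an assumption to verify against Microsoft's current documentation.

import json
import urllib.request

def build_analyze_request(resource_name, api_key, image_url):
    """Construct (but do not send) an image-analysis request."""
    endpoint = (
        f"https://{resource_name}.cognitiveservices.azure.com"
        "/vision/v3.2/analyze?visualFeatures=Description,Tags"
    )
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Ocp-Apim-Subscription-Key": api_key,
            "Content-Type": "application/json",
        },
    )

req = build_analyze_request("my-resource", "YOUR_KEY",
                            "https://example.com/photo.jpg")
print(req.full_url)
# Sending it with urllib.request.urlopen(req) would return JSON containing
# image tags and a natural-language caption.
```

The response pairs machine output (tags, confidence scores) with a human-readable description – the natural-language interaction the section above describes.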

Additionally, if you want to learn more about how Microsoft envisions AI and what it means, consider getting involved in the entry-level software and AI development courses the company recently launched.

Cognitive Computing in the Real World

Most of the world’s biggest tech firms are making major investments in cognitive computing and other forms of AI. But what’s the point? Where do they aim to go?

Like AI, cognitive computing is set to change the way we perform basic and complex functions both at home and, more importantly, at work.

In fact, outside of the enterprise offerings described above, cognitive computing is already being widely adopted by various industries that are plagued with vast amounts of data and no time to process it.

Here are some examples of cognitive computing in the real world:

  • Vantage Software

Vantage Software is a software product for the finance industry, providing reporting and analytics to small hedge funds and private equity firms. It allows financial managers to make faster, smarter decisions based on data.

  • Lifelearn

Lifelearn is a veterinary care product that helps veterinarians better diagnose illnesses in animals and find better treatments. The software looks at thousands of resources and produces a list of evidence-based options based on its scan.

  • Wayblazer

Wayblazer takes the search out of roaming by acting as a ‘cognitive-powered personal travel concierge.’ Travellers ask Wayblazer about their trips, and the tool produces whatever relevant data they need.

Cognitive Science Wins Computing

Machine learning and artificial intelligence sound like foreign concepts, and in some ways, they are foreign to humans. Cognitive science blurs the line between machines and humans by attempting to replicate your thought processes through code to help you make better decisions.

Have you used any cognitive computing programs? Did you come to the same conclusion as the program? Share your thoughts in the comments below.
