
Always Learning, Always Growing: How Neural Networks Do The Hard Work

Intel AI

The year 1958: Dwight D. Eisenhower was in the White House, Buddy Holly’s “Peggy Sue” ruled the airwaves, and a psychologist named Frank Rosenblatt was putting the finishing touches on a machine called the Mark 1 Perceptron. Not that he was overly excited about it: Rosenblatt told The New Yorker that he thought the machine was “of no practical use.”

Sixty years later we can safely say Rosenblatt underestimated his invention. True to its name, the Mark 1 in fact marked the first artificial neural network. The way it worked, simply, was this: The Mark 1 Perceptron used 400 randomly connected photocells to “see” a triangle—that is, not just to capture its image the way a camera might, but to “recognize” it for future reference.

Today, neural networks and neurocomputing have revolutionized artificial intelligence (AI) and made great advances in deep learning possible. Their use also extends into natural language processing, speech recognition and computer vision—the very purpose of the original Perceptron.

Here’s a look at the science, architecture and advantages of neural networks, all of which begins with how they emulate the human brain. 

Modeled On The Brain

Neural networks (also known as artificial neural networks or neural nets) are computer systems modeled on the human brain and nervous system. Their solid foundation in computer science owes a lot to brain science and the burgeoning field of neuroplasticity in particular. Popularized by books such as The Brain That Changes Itself by Dr. Norman Doidge, neuroplasticity centers on the theory that brains can change as a person’s circumstances do—from stroke trauma or repeated exposure to a stimulus, for example.

Researchers in neuroplasticity now contend that the concept of a “hard-wired” brain—one fixed and unchanging in its makeup—is inaccurate. “Neurons that fire together wire together” is a phrase they coined to explain how we learn and retain knowledge over time. Like the neuroplastic human brain, neural networks challenge the idea of hard-wired computing, since they can learn without any previous knowledge of how to perform a task.

Think of an office worker who learns to perform a repetitive task through exposure to patterns, practice and an increasing repertoire of efficient procedures. All of this over time becomes imprinted on her firing neurons. Now, imagine a computer that can recognize patterns through what are called training examples—for example, thousands of images fed into a neural network that learns to distinguish among a dog, a mop and a coffee cup. The network then refines its knowledge while it performs its assigned tasks in real time, a procedure known as deep learning.

Building The Simulated Brain

Neural networks function as software simulations of the brain. Thus, an artificial neural network (ANN) isn’t a structure but a computer program or algorithm that organizes the billions of transistors in a computer to operate as though they were interconnected neurons in a human brain. But how exactly does that work?

In the simplest terms, artificial neurons—also referred to as “nodes” or “units”—are mathematical functions, carried out by transistors, that take their directions from the program. These neurons are organized into three types of layers: input, hidden and output. The key to understanding neural networks is to follow the flow of information as they process it. As an example, let’s use a network programmed to recognize a handwritten number from zero to nine.

1. Input layers

Input layers receive the information from an external source—in this case, the visual image. Imagine plugging a video camera into a computer programmed as a neural network: The camera’s image would first hit the input layer. No actual computation is taking place here: The input units are simply receiving information. That’s it. So if the image of a number is broken down into 784 pixels (28 by 28), the input layer is simply telling the network which pixels are lit up, and which ones aren’t.  

2. Hidden layers

That information is then passed on to hidden layers—so called because they aren’t connected to the outside world, the way inputs and outputs are. Here, the input information becomes more precisely defined with every hidden layer it passes through. Just like neurons in the human brain, neurons in the hidden layer “fire,” and based on how they fire, neurons in the next layer fire in turn.

In our example, hidden layer one might determine whether the lit pixels are organized into edges; layer two, whether the edges are organized into patterns; layer three, whether those patterns are straight or looped; and so on.

3. Output layers

Finally, the output layer is where the network gives us the final result: a digit from zero to nine.
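
To make that flow concrete, here is a minimal sketch of the feed-forward pass just described, written in Python. The layer sizes, the sigmoid “firing” function and the random starting weights are illustrative assumptions rather than any particular system’s implementation; a real network would learn its weights from thousands of training examples.

```python
# A minimal, illustrative sketch of the feed-forward pass described above:
# 784 input pixels flow through one hidden layer and emerge as 10 output
# scores, one per digit. Layer sizes and activations are assumptions.
import numpy as np

def sigmoid(z):
    """Squash each neuron's weighted sum into the 0-to-1 "firing" range."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Weights and biases are normally learned from data; random values stand in here.
w_hidden = rng.standard_normal((16, 784))   # hidden layer: 16 neurons
b_hidden = rng.standard_normal(16)
w_output = rng.standard_normal((10, 16))    # output layer: 10 digits
b_output = rng.standard_normal(10)

def feed_forward(pixels):
    """pixels: 784 values (a flattened 28-by-28 image), each between 0 and 1."""
    hidden = sigmoid(w_hidden @ pixels + b_hidden)   # hidden neurons "fire"
    output = sigmoid(w_output @ hidden + b_output)   # one score per digit
    return output

image = rng.random(784)                      # stand-in for a handwritten digit
scores = feed_forward(image)
print("Network's guess:", scores.argmax())   # index of the highest score
```

Fed a real 28-by-28 image and trained weights instead of random ones, those same few lines would return the network’s best guess at the digit.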

This process—which you can visualize as a left-to-right flow—is known as a “feed forward” network. But what if you want your neural network to “learn”: that is, to keep refining its outputs until it gets faster and more accurate at what it does? That involves taking the network’s output, comparing it with an ideal result, and feeding the resulting error back through the network so its connections can be adjusted: a feedback-loop process known as “backpropagation.”

The breakthrough discovery of backpropagation came in 1986, when Geoffrey E. Hinton, a professor at Carnegie Mellon University, became one of the first researchers to describe what he called “learning procedures”—ways, as his seminal paper put it, that computers could learn by performing a task over and over, with the computer’s neural network “then adjusted in the direction that decreases the error.”

So, a computer receives an input, processes it through hidden layers, produces an output, and through a backpropagation algorithm feeds the error back to refine its performance and update its knowledge.
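
For readers who want to see the arithmetic, here is a minimal sketch of that error-correcting loop. To stay short it uses a single sigmoid neuron and made-up data, so it shows the weight adjustment “in the direction that decreases the error” that backpropagation extends across many layers, rather than Hinton’s full procedure; the learning rate, the squared-error measure and the toy dataset are all illustrative assumptions.

```python
# A minimal, illustrative sketch of learning by error correction:
# forward pass, compare with the ideal result, adjust the weights.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
inputs = rng.random((100, 3))                        # 100 examples, 3 features each
targets = (inputs.sum(axis=1) > 1.5).astype(float)   # the "ideal result" per example

weights = rng.standard_normal(3)
bias = 0.0
learning_rate = 0.5

for epoch in range(200):
    prediction = sigmoid(inputs @ weights + bias)    # forward pass
    error = prediction - targets                     # compare with the ideal result
    # Error gradient (the one-neuron case of backpropagation): how much each
    # weight contributed to the squared error, used to adjust the weights
    # in the direction that decreases that error.
    gradient = error * prediction * (1 - prediction)
    weights -= learning_rate * (inputs.T @ gradient) / len(targets)
    bias -= learning_rate * gradient.mean()

accuracy = ((sigmoid(inputs @ weights + bias) > 0.5) == targets).mean()
print(f"Training accuracy after 200 passes: {accuracy:.0%}")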

Built For Speed And More

The one-two-three punch of speed, scale and accuracy represents the key advantage that neural networks provide. Consider financial services, where thousands of credit card transactions per minute pass through a computer system. A neural network can not only keep up, but it can also flag potentially fraudulent transactions, based on a number of input variables—and with fewer “false positives.”

The variables might range from the number of transactions in a short period to whether the card is being used in an unusual location. The case would then be passed on to a human operator, with the card temporarily frozen to protect the consumer, who is then alerted.
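
As a purely hypothetical illustration, those variables might reach a trained network as a handful of scaled numbers; the feature names, weights and alert threshold below are invented for the sketch and do not describe any card issuer’s real model.

```python
# A hypothetical sketch of how transaction features like those above might
# feed a trained fraud classifier. Feature names, weights and the 0.9
# alert threshold are illustrative assumptions, not any bank's real model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Inputs (scaled to 0-1): transactions in the last hour, distance from the
# cardholder's usual location, and purchase amount relative to their norm.
transaction = np.array([0.8, 0.95, 0.6])

# Stand-ins for weights a real network would learn from labeled history.
weights = np.array([1.2, 2.0, 0.7])
bias = -0.5

fraud_score = sigmoid(weights @ transaction + bias)
if fraud_score > 0.9:
    print("Freeze the card and alert a human operator.")
else:
    print(f"Score {fraud_score:.2f}: allow the transaction, keep monitoring.")
```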

Neural networks can also learn and calibrate fast enough to handle a host of ever-changing conditions; classic cases include the airplane autopilot that applies course corrections as needed and the ongoing development of self-driving cars.

Putting It All Together: Big Brain Gains

At present, neural networks cannot rival the number of connections in the human brain, which some estimate at 100 trillion or more. But Moore’s Law tells us something about where neural network technology is headed: Even in the short term, magnificent leaps in processing power, speed and learning ability are probable, not merely possible.

Of course, Rosenblatt’s Mark 1 Perceptron could barely see a triangle, let alone the future. But in terms of foreshadowing an era when computers would learn like never before, it indeed offered a very rare glimpse.

Learn more about how companies are leveraging AI today.

CREDITS: Akrain/iStock