Neural networks cover photo

Neural Networks explained to a 5-year-old.

The term ‘Neural Network’ is often tossed around in conversations about Artificial Intelligence and Machine Learning (we’ll cover those terms in another article). This article will break down what neural networks are and why they are important.

Traditional code is great at doing relatively simple and monotonous jobs that require low skill and high repeatability. That’s why you often see large-scale robots in factories doing simple, repeatable jobs, while the more complex tasks are still handled by humans. The reason for this is that traditional code is weak at making difficult or complex decisions.

As code and robotics move towards becoming smarter and using artificial intelligence, they need to learn how to think more like humans. Neural networks bridge this gap by programming and implementing decision trees similar to the ones humans use.

Neural Networks vs Traditional Code

Traditional programming/code & its weaknesses

If you have ever learned programming or seen any basic code, you know that traditional code mostly looks like a flowchart (if you haven’t learned programming or coded before, I will explain what a traditional code flowchart looks like).

Traditional Code Flowchart

Let us take the example of a Facebook fake profile filter. Our filter is going to use 3 criteria to determine whether someone’s profile is real or whether someone (or some robot) created multiple fake profiles.

Criteria 1:  If someone has more than 10 friends, they are probably a real person.

Criteria 2: If someone posts often, they are probably real.

Criteria 3: If someone has a last name (in addition to their first name), they are probably real.

A flowchart for such a situation would look something like this:

Traditional code flowchart for a Facebook real profile detector
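If you’d rather read it as code, here is a rough Python sketch of what that traditional flowchart might boil down to. It assumes the flowchart marks a profile as real as soon as any single criteria checks out, and the “posts often” threshold is just an illustrative guess:

```python
# A traditional, flowchart-style filter: every criteria is a hard yes/no
# check, and (as assumed here) passing any single one marks the profile real.
def is_real_profile(num_friends, posts_per_week, has_last_name):
    if num_friends > 10:      # Criteria 1: more than 10 friends
        return True
    if posts_per_week >= 3:   # Criteria 2: "posts often" (threshold is a guess)
        return True
    if has_last_name:         # Criteria 3: has a last name
        return True
    return False

# A spam account with nothing but a made-up last name slips straight through:
print(is_real_profile(num_friends=2, posts_per_week=0, has_last_name=True))  # True
```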

Issues with the traditional code flowchart

The issue: as the number of criteria grows, any one of them can classify the profile as real or fake. For example, it is extremely easy for a spammer or robot to give their fake Facebook profile a fake last name, making that criteria evaluate to true. Nonetheless, we can’t remove this criteria, because a missing last name is one of the easiest signs that a profile might be fake.

The solution to this issue is giving a different weight to each criteria (a weight is simply a value that says how important a criteria is).

Neural Networks: Weighted criteria

Here is where neural networks shine. Remember the criteria we made earlier (number of friends, has a last name, post frequency)? In a neural network, these criteria are called neurons.

Advantage 1: Each criteria/neuron returns a value from 0-9 vs true or false (1 or 0)

When each criteria/neuron is tested, it gives a value from 0-9 instead of a yes or no (1 or 0).

For example, in the ‘number of friends’ criteria of our flowchart, our traditional code will give a ‘yes’ for more than 10 friends and a ‘no’ for 10 or fewer friends.

However, in a neural network, we will get a value from 0-9. A 0 might mean 0-2 friends, a value of 1 might mean 3-5 friends, a value of 8 might mean 80-100 friends, and so on.
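In code, a single neuron like this could be a small function that turns a raw friend count into a 0-9 score. Here is a rough Python sketch; the exact bucket boundaries are only illustrative guesses, not something fixed by the article:

```python
# One neuron on its own: turn a raw friend count into a 0-9 score.
# The bucket boundaries below are illustrative assumptions only.
def friends_score(num_friends):
    buckets = [2, 5, 8, 12, 18, 25, 33, 45, 100]  # upper bounds for scores 0-8
    for score, upper in enumerate(buckets):
        if num_friends <= upper:
            return score
    return 9  # more than 100 friends

print(friends_score(1))   # 0 -> 0-2 friends
print(friends_score(4))   # 1 -> 3-5 friends
print(friends_score(40))  # 7 -> the 40-friend profile scored later in the article
```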

Advantage 2: Each neuron has a different weight/importance

This 0-9 value will be provided by each of the criteria/neuron blocks. However, here is where neural networks get really interesting. Remember how we mentioned that it’s really easy to fake a profile’s last name compared to faking other criteria, such as the number of friends? Well, with neural networks we can give criteria such as ‘last name’ a lower importance/weight than other criteria. We achieve this by multiplying each criteria’s score by a number between 0 and 1.

For example, let’s give the ‘last name’ criteria/neuron a weight of 0.2 and the ‘number of friends’ criteria/neuron a weight of 0.8. Now, if a tested Facebook profile has a good last name, the criteria will return a value of 9 (from 0-9), and we multiply that 9 by the weight of 0.2, giving us 1.8. Meanwhile, if our tested Facebook profile has 40 friends, our criteria/neuron might give us a value of 7 (from 0-9), and we multiply that 7 by the weight of 0.8, giving us 5.6.

Although the ‘number of friends’ neuron gave us a smaller initial value of 7 (from 0-9) than the 9 we got from the ‘last name’ neuron, it is still considered a more important criteria than ‘last name’ and hence was multiplied by a bigger weight, giving us a final score of 5.6 (number of friends) vs a score of 1.8 (last name).

If that’s confusing, refer to the picture below.

Neural network version of our Facebook fake profile classifier
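The weighting step from the worked example is really just two multiplications. Here is a tiny Python sketch using the exact scores and weights from above:

```python
# Each neuron's raw 0-9 score is multiplied by its weight before anything
# gets added up. Scores and weights are the ones from the worked example.
last_name_score, last_name_weight = 9, 0.2   # easy to fake -> low weight
friends_score, friends_weight = 7, 0.8       # hard to fake -> high weight

print(last_name_score * last_name_weight)  # 1.8
print(friends_score * friends_weight)      # 5.6 -> 'number of friends' still wins
```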

Advantage 3: Neuron Threshold

Finally, we reach the end of the decision tree. Based on all these results from the neurons, should our Facebook spam neural network consider this to be a real or a fake profile?

The first step is to sum all of our weighted inputs/neurons. The maximum number we can get is 9×0.2 = 1.8 (1st neuron/input) + 9×0.8 = 7.2 (2nd neuron/input) + 9×0.6 = 5.4 (3rd neuron/input), i.e. 1.8 + 7.2 + 5.4 = 14.4. Now we know that the maximum sum we can get from our inputs is 14.4. The minimum sum will be 0, since 0 multiplied by any weight is still 0.

Here is where thresholds kick in. In the picture above we chose a final threshold of 12: a high value, since we want our neural network to be very strict about which profiles it accepts as real. If the weighted scores add up to 12 or more, the neural network will classify the Facebook profile as real!
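Putting advantages 1-3 together, here is a rough Python sketch of the whole toy classifier. The weights and the threshold of 12 come from our example; the profile scores being tested are made up purely for illustration:

```python
# The whole toy network: three neurons, each scored 0-9, each with a weight,
# summed and compared against the threshold of 12 from the example above.
WEIGHTS = {
    "last_name": 0.2,       # easy to fake, so low importance
    "num_friends": 0.8,     # harder to fake, so high importance
    "post_frequency": 0.6,
}
THRESHOLD = 12  # max possible sum is 9*0.2 + 9*0.8 + 9*0.6 = 14.4

def classify(scores):
    """scores maps each neuron name to its 0-9 value."""
    weighted_sum = sum(scores[name] * weight for name, weight in WEIGHTS.items())
    return "real" if weighted_sum >= THRESHOLD else "fake"

# Good last name but hardly any friends or posts -> weighted sum 5.2 -> fake
print(classify({"last_name": 9, "num_friends": 2, "post_frequency": 3}))
# Strong scores across the board -> weighted sum 13.6 -> real
print(classify({"last_name": 9, "num_friends": 8, "post_frequency": 9}))
```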

Downsides of Neural networks

So far so good. Neural networks seem amazing. However, they have downsides too.

Assigning values to our neurons

Firstly, we need to figure out what data corresponds to each value from 0-9. Taking the example of the ‘number of friends’ neuron/criteria, we have to decide what a value of 1 might mean (maybe 0-2 friends), what a value of 5 means (maybe 20-30 friends), what a value of 9 means (maybe 100+ friends), and so on.

Calculating weights/multipliers for our neurons

Next, we need to calculate the optimal multipliers/weights to give each neuron/input. I selected the arbitrary values of 0.2, 0.6 and 0.8 for our neurons/inputs, but it is very hard to arrive at accurate numbers by hand.

Intertwined neural networks

Lastly, a neural network can become incredibly intertwined. Take the ‘number of posts’ neuron: it is very easy for a spam/bot profile to create posts with gibberish text and pictures stolen from Google Images. The same problem exists for the neuron that tells us how many friends a profile has; perhaps all the friends of a fake profile are also fake profiles.

In such a case we have to employ more neural networks that feed into that input neuron. In the case of ‘number of posts’, we might have a neural network that runs each post through a readability test, a plagiarism test, and so on, to see if the post is real.

This can create incredibly long and intertwined neural networks, as seen in the photo below.

A complex, intertwined neural network
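To make the layering idea concrete, here is a rough Python sketch (my own illustration, not taken from the picture) in which the ‘number of posts’ score is itself produced by a small sub-network of post checks before it feeds into the main classifier. The sub-scores and sub-weights are made up:

```python
# Rough illustration: the 'number of posts' input is itself a small weighted
# sub-network of checks on the posts. All scores and weights here are made up.
def post_quality_score(readability, originality):
    """Combine two 0-9 sub-scores into one 0-9 'number of posts' input."""
    sub_weights = {"readability": 0.5, "originality": 0.5}  # assumed sub-weights
    return (readability * sub_weights["readability"]
            + originality * sub_weights["originality"])

# Gibberish posts with stolen pictures score low on both checks...
posts_input = post_quality_score(readability=2, originality=1)
print(posts_input)  # 1.5 -> this low value then feeds into the main classifier
```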

Neural Networks and Machine Learning

Machine learning can greatly increase the usefulness of neural networks. With machine learning, we can feed data into a neural network and let the computer automatically calibrate the values of the thresholds and the multipliers/weights. More on machine learning in another post!
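As a small teaser, here is a very rough Python sketch of what “letting the computer calibrate the weights” could look like. This is purely illustrative and not how any particular library does it: whenever the network gets a labelled example wrong, each weight is nudged up or down a little:

```python
# Purely illustrative: adjust the weights from labelled example profiles.
# 'examples' is a list of (scores, is_real) pairs, where scores maps each
# neuron name to its 0-9 value and is_real is the known answer.
def calibrate(examples, weights, threshold, lr=0.05, epochs=100):
    for _ in range(epochs):
        for scores, is_real in examples:
            total = sum(scores[name] * w for name, w in weights.items())
            predicted_real = total >= threshold
            if predicted_real != is_real:
                # Nudge every weight towards the correct answer
                direction = 1 if is_real else -1
                for name in weights:
                    weights[name] += lr * direction * scores[name]
    return weights
```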

Further Reading

Link 1: https://www.digitaltrends.com/cool-tech/what-is-an-artificial-neural-network/

Link 2: https://towardsdatascience.com/how-to-build-your-own-neural-network-from-scratch-in-python-68998a08e4f6

Link 3: https://en.wikipedia.org/wiki/Artificial_neural_network

Link 4: http://neuralnetworksanddeeplearning.com/chap1.html
