Have you ever wondered how computers can recognize faces, predict stock prices, or even write stories?
Behind many of these magic tricks is something called a Neural Network.
In this blog, I’ll explain what a neural network is in very simple English, and then we’ll build a small one in Python step by step.
What is a Neural Network?
Think of a neural network as a mini-brain made of math.
- Our brain has billions of neurons connected together.
- A neural network is the same idea, but built from math:
  - Inputs → numbers we give to the network.
  - Hidden layers → where the network learns patterns.
  - Output → the answer the network gives.
Example:
If you show the network a picture of a cat:
- Inputs = pixel values of the image
- Hidden layers = math that learns shapes, ears, eyes
- Output = “Cat”
Structure of a Neural Network
A very simple neural network looks like this:
- Input layer: The raw data (e.g., numbers).
- Hidden layer(s): Middle layers that learn patterns.
- Output layer: The prediction (e.g., 1 = Cat, 0 = Not Cat).
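To make those three layers concrete, here is a tiny forward pass in numpy. All the weight values below are made up purely for illustration; a real network would learn them during training.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Made-up weights for a 2-input -> 3-hidden -> 1-output network.
x = np.array([0.5, 0.8])                    # input layer: the raw numbers
W_hidden = np.array([[0.1, 0.4, -0.2],
                     [0.3, -0.1, 0.5]])     # input -> hidden connections
W_output = np.array([[0.7],
                     [-0.3],
                     [0.2]])                # hidden -> output connections

hidden = sigmoid(np.dot(x, W_hidden))       # hidden layer activity
output = sigmoid(np.dot(hidden, W_output))  # the prediction, between 0 and 1
print(output)
```

Whatever numbers go in, the sigmoid keeps the final prediction between 0 and 1, which is handy for yes/no answers like "Cat" vs "Not Cat".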
How Does It Learn?
The learning process is called training.
- Feedforward: Send data from input → hidden → output.
- Compare: Check how wrong the answer was (error).
- Backpropagation: Adjust the connections (weights) to make the network smarter.
- Repeat many times until it learns.
Step-by-Step Python Implementation
Let’s build the simplest neural network using just numpy (no fancy libraries like TensorFlow or PyTorch).
We’ll create a network that learns the OR function:
Input → Output
0,0 → 0
0,1 → 1
1,0 → 1
1,1 → 1
Step 1: Import Libraries
import numpy as np
We use numpy for matrix math (fast calculations).
Step 2: Define Inputs and Outputs
# Input data (X)
X = np.array([[0,0],
              [0,1],
              [1,0],
              [1,1]])

# Output data (y)
y = np.array([[0],
              [1],
              [1],
              [1]])
This is our training data: the OR truth table.
Step 3: Activation Function
We use sigmoid to squash numbers between 0 and 1.
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # Note: x here is assumed to already be a sigmoid output,
    # which is why the derivative simplifies to x * (1 - x).
    return x * (1 - x)
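A quick sanity check of what sigmoid does to large, small, and zero inputs:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

print(sigmoid(-10))  # very close to 0
print(sigmoid(0))    # exactly 0.5
print(sigmoid(10))   # very close to 1
```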
Step 4: Initialize Weights
np.random.seed(1) # to get same random results each time
weights = 2 * np.random.random((2,1)) - 1
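Why `2 * ... - 1`? `np.random.random` gives numbers in [0, 1), so multiplying by 2 and subtracting 1 shifts them into [-1, 1). A quick check of the shape and range (same seed as above):

```python
import numpy as np

np.random.seed(1)
weights = 2 * np.random.random((2, 1)) - 1

print(weights.shape)        # (2, 1): one weight per input, one output neuron
print(weights.min() >= -1)  # True: every weight is at least -1
print(weights.max() < 1)    # True: every weight is below 1
```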
Step 5: Training the Network
for epoch in range(10000):  # train for 10,000 iterations
    # Step 1: Forward pass
    input_layer = X
    outputs = sigmoid(np.dot(input_layer, weights))

    # Step 2: Error
    error = y - outputs

    # Step 3: Adjust weights (backpropagation)
    adjustments = error * sigmoid_derivative(outputs)
    weights += np.dot(input_layer.T, adjustments)
- Feedforward → get a prediction.
- Compare → measure the error.
- Backpropagation → adjust the weights.
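To watch the learning happen, you can print the average error every couple of thousand epochs. Below is the same loop with that one extra print added (the 2,000-epoch interval is just an arbitrary choice). The error shrinks quickly but never reaches zero here, because this network has no bias term, so the (0,0) row is stuck at an output of 0.5.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    return x * (1 - x)

X = np.array([[0,0], [0,1], [1,0], [1,1]])
y = np.array([[0], [1], [1], [1]])

np.random.seed(1)
weights = 2 * np.random.random((2, 1)) - 1

for epoch in range(10000):
    outputs = sigmoid(np.dot(X, weights))
    error = y - outputs
    if epoch % 2000 == 0:
        print(epoch, np.mean(np.abs(error)))  # watch this number fall
    adjustments = error * sigmoid_derivative(outputs)
    weights += np.dot(X.T, adjustments)
```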
Step 6: Test the Network
print("Final outputs after training:")
print(outputs)
Output
After training, the last three outputs get close to 1, but the first one is stuck at exactly 0.5, roughly:
[[0.5 ]
 [0.99]
 [0.99]
 [1.  ]]
Why 0.5? This network has no bias term, so the input (0,0) always produces np.dot([0,0], weights) = 0, and sigmoid(0) = 0.5 no matter what the weights are. Still, the network has learned the three "1" cases of the OR function, and adding a bias (an extra input that is always 1) would let it push that first output toward 0 as well.
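One subtlety worth knowing: as written, the network has no bias term, so the (0,0) input always yields sigmoid(0) = 0.5. A common fix is to append a constant 1 to every input row and learn one extra weight for it. Here is a sketch of the same loop with that one change; everything else is identical.

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    return x * (1 - x)

# Same OR data, but every row gets a constant 1 as a third "bias" input.
X = np.array([[0,0,1],
              [0,1,1],
              [1,0,1],
              [1,1,1]])
y = np.array([[0], [1], [1], [1]])

np.random.seed(1)
weights = 2 * np.random.random((3, 1)) - 1  # one extra weight for the bias

for epoch in range(10000):
    outputs = sigmoid(np.dot(X, weights))
    error = y - outputs
    weights += np.dot(X.T, error * sigmoid_derivative(outputs))

print(outputs.round(2))  # now the first output can drop toward 0 as well
```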
Final Thoughts
- Neural networks are just math that learn patterns.
- They work by adjusting weights to reduce error.
- Even a simple 2-layer network can learn basic logic like OR.
In real life, we use deeper networks (with many hidden layers) for complex tasks like image recognition, chatbots, and self-driving cars.
This was your first step into Neural Networks. From here, you can move on to libraries like TensorFlow or PyTorch to build more advanced models.