MLP Visualization

(Visualization panels: Network Structure, Input Layer, Regularization & Normalization)

How the MLP Works

The Multi-Layer Perceptron (MLP) is a feedforward neural network that transforms input data through multiple layers using weights, biases, and activation functions.

Each layer computes a weighted sum of its inputs, adds a bias, and passes the result through a non-linear activation function, so a layer's output can be written as a = f(Wx + b). Regularization techniques such as dropout and batch normalization help the network generalize and prevent overfitting.
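As a concrete sketch of that computation (NumPy and a ReLU activation are assumptions here, and the layer sizes are chosen only for illustration; the visualization's actual settings may differ):

```python
import numpy as np

def relu(z):
    """Non-linear activation: element-wise max(0, z)."""
    return np.maximum(0.0, z)

def dense_layer(x, W, b):
    """One MLP layer: weighted sum of the inputs, plus a bias, through the activation."""
    z = W @ x + b      # weighted sum + bias
    return relu(z)     # non-linearity

# Illustrative shapes: 3 inputs feeding 4 hidden units.
rng = np.random.default_rng(0)
x = rng.normal(size=3)           # input vector
W = rng.normal(size=(4, 3))      # weight matrix
b = np.zeros(4)                  # bias vector
print(dense_layer(x, W, b))
```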

Activation Function

The activation function introduces non-linearity, letting the network model relationships that a purely linear map cannot. The chart below plots the currently selected activation function's curve.
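The text does not say which activation functions the visualization offers; as an assumption, here are three commonly used ones sketched in NumPy:

```python
import numpy as np

def relu(z):
    """Rectified linear unit: keeps positives, zeroes out negatives."""
    return np.maximum(0.0, z)

def sigmoid(z):
    """Squashes values into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    """Squashes values into the range (-1, 1), centered at zero."""
    return np.tanh(z)

z = np.linspace(-3.0, 3.0, 7)
for fn in (relu, sigmoid, tanh):
    print(f"{fn.__name__}: {np.round(fn(z), 3)}")
```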

Forward Pass

Data flows from the input layer through the hidden layers to the output layer, with each layer transforming the previous layer's output; the visualization updates these values in real time.
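A minimal forward-pass sketch, again assuming NumPy, ReLU hidden layers, and a linear output layer; the layer sizes below are illustrative, not taken from the visualization:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, layers):
    """Propagate x through each (W, b) pair: ReLU on hidden layers, linear output."""
    activations = [x]
    for i, (W, b) in enumerate(layers):
        z = W @ x + b
        x = z if i == len(layers) - 1 else relu(z)  # output layer left linear
        activations.append(x)
    return activations

rng = np.random.default_rng(1)
sizes = [3, 5, 4, 2]   # input -> two hidden layers -> output (illustrative only)
layers = [(rng.normal(size=(m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
for i, a in enumerate(forward(rng.normal(size=sizes[0]), layers)):
    print(f"layer {i}: {np.round(a, 3)}")
```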

Weight Initialization & Regularization

Weights are initialized with strategies such as Xavier/Glorot or He initialization to speed up convergence. During training, dropout randomly deactivates a fraction of neurons, while batch normalization standardizes each layer's inputs to zero mean and unit variance.
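A sketch of these ideas follows. He initialization, inverted dropout, and per-feature batch normalization are assumptions about which variants are meant (the text does not specify), and batch normalization's learned scale and shift parameters are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)

def he_init(fan_in, fan_out):
    """He initialization: zero-mean weights with variance 2 / fan_in, suited to ReLU layers."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

def dropout(a, rate=0.5):
    """Inverted dropout: zero activations with probability `rate`, rescale the survivors."""
    mask = rng.random(a.shape) >= rate
    return a * mask / (1.0 - rate)

def batch_norm(X, eps=1e-5):
    """Standardize each feature across the batch to zero mean and unit variance."""
    return (X - X.mean(axis=0)) / np.sqrt(X.var(axis=0) + eps)

W = he_init(fan_in=3, fan_out=4)
X = rng.normal(size=(8, 3))                 # a batch of 8 inputs
H = np.maximum(0.0, X @ W.T)                # hidden activations for the batch
print(dropout(H).shape)                     # (8, 4), with roughly half the entries zeroed
print(batch_norm(H).mean(axis=0).round(6))  # ~0 per feature after standardization
```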