- Hands-On Neural Networks with Keras
- Niloy Purkait
Output
In our simple perceptron model, we denote the actual output class as y, and the predicted output class as ŷ. The output classes simply refer to the different classes in our data that we are trying to predict. To elaborate, we use the input features (xn), such as temperature (x1) and air pressure (x2) on a given day, to predict whether that specific day is a sunny or a rainy one (ŷ). We can then compare our model's prediction with the actual output class of that day, denoting whether that day was indeed rainy or sunny. We can denote this simple comparison as (ŷ - y), which allows us to observe by how much our perceptron missed the mark, on average. But more on that later. For now, we can represent our entire prediction model using all that we have learned so far, in a mathematical manner:
ŷ = f(w1x1 + w2x2 + ... + wnxn + b)
The following diagram displays an example of the preceding formula:

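The preceding formula can be sketched in code. The following is a minimal illustration, not the book's implementation: the weights (w), bias (b), and input values are hypothetical, hand-set numbers, whereas a trained perceptron would learn them from data.

```python
import numpy as np

def predict(x, w, b):
    # Weighted sum of the inputs plus the bias, passed through a step
    # function: output 1 (sunny) if the sum is positive, 0 (rainy) otherwise.
    return int(np.dot(w, x) + b > 0)

w = np.array([0.8, -0.4])   # hypothetical weights for x1 and x2
b = -0.1                    # hypothetical bias term
x = np.array([0.9, 0.3])    # one day's (scaled) temperature and air pressure

y_hat = predict(x, w, b)    # the model's predicted class, ŷ, for that day
```

Comparing `y_hat` with the known label `y` for that day gives exactly the (ŷ - y) comparison described above.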
If we graphically plot our prediction line (ŷ) as shown previously, we can visualize the decision boundary separating our entire feature space into two subspaces. In essence, plotting the prediction line simply gives us an idea of what the model has learned, or how the model chooses to separate the feature space containing all our data points into the various output classes that interest us. By plotting this line, we can also see how well our model does: we simply place observations of sunny and rainy days on this feature space, and then check whether our decision boundary ideally separates the two output classes, as follows:

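To make the decision boundary concrete, the following sketch (again using hypothetical, hand-set weights and bias) solves w1·x1 + w2·x2 + b = 0 for x2, which gives the boundary line that would be drawn on the plot, and verifies that example points on either side of it receive different predicted classes.

```python
import numpy as np

# Hypothetical weights and bias for the two features (temperature, pressure)
w = np.array([0.8, -0.4])
b = -0.1

def boundary_x2(x1):
    # The x2 coordinate of the decision boundary at a given x1, obtained by
    # setting w1*x1 + w2*x2 + b = 0 and solving for x2.
    return -(w[0] * x1 + b) / w[1]

def predict(x):
    # Step-function classification: 1 (sunny) above the boundary's threshold,
    # 0 (rainy) below it.
    return int(np.dot(w, x) + b > 0)

# Two illustrative observations, one on each side of the boundary
sunny = np.array([0.9, 0.3])
rainy = np.array([0.1, 0.9])
```

Plotting `boundary_x2` over a range of x1 values (for example with matplotlib), together with a scatter of observed sunny and rainy days, reproduces the kind of visualization described above.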