Compiling the model
Now we can compile our freshly built model, as is deep learning tradition. Recall that compilation involves two key decisions: the choice of loss function and the choice of optimizer. The loss function measures how far our model's predictions are from the actual labels at each iteration, whereas the optimizer determines how we converge toward the ideal predictive weights for our model. In Chapter 10, Contemplating Present and Future Developments, we will review advanced optimizers and their relevance to various data processing tasks. For now, we will show how you can manually adjust the learning rate of an optimizer.
We have chosen a very small learning rate of 0.001 for the RMSprop (Root Mean Square Propagation) optimizer for demonstrative purposes. Recall that the learning rate simply determines the size of the step we want our network to take in the direction of the correct output at each training iteration. As we mentioned previously, too big a step can cause our network to step over the global minimum in the loss hyperspace, whereas too small a learning rate can cause our model to take ages to converge to a minimum loss value:
from keras import optimizers
# Compile with RMSprop and a manually specified learning rate
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
              loss='binary_crossentropy',
              metrics=['accuracy'])
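Hand-picking a single learning rate is not the only option: Keras also provides a ReduceLROnPlateau callback that lowers the rate automatically when the validation loss stops improving, which softens the trade-off described above. The following is a minimal sketch; the x_train, y_train, x_val, and y_val arrays, along with the epoch and batch-size values, are illustrative placeholders rather than part of the original example:

from keras.callbacks import ReduceLROnPlateau

# Halve the learning rate whenever the validation loss has not improved
# for three consecutive epochs, but never let it fall below 1e-6
reduce_lr = ReduceLROnPlateau(monitor='val_loss',
                              factor=0.5,
                              patience=3,
                              min_lr=1e-6)

# x_train, y_train, x_val, and y_val are placeholders for your own data
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),
                    epochs=20,
                    batch_size=128,
                    callbacks=[reduce_lr])

Because the callback only reacts when progress stalls, you can start with a comparatively generous learning rate and let the schedule shrink it, rather than committing to a single small value up front.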