Learning rate scheduler in TensorFlow

tf.keras.callbacks.LearningRateScheduler(
    schedule, verbose=0)

import numpy as np
import tensorflow as tf

# This function keeps the initial learning rate for the first ten epochs
# and decays it exponentially after that.

def scheduler(epoch, lr):
  if epoch < 10:
    return lr
  else:
    return lr * tf.math.exp(-0.1)

model = tf.keras.models.Sequential([tf.keras.layers.Dense(10)])
model.compile(tf.keras.optimizers.SGD(), loss='mse')

callback = tf.keras.callbacks.LearningRateScheduler(scheduler)
history = model.fit(np.arange(100).reshape(5, 20), np.zeros(5),
                    epochs=15, callbacks=[callback], verbose=0)
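
With SGD's default learning rate of 0.01 and five decay steps (epochs 10 through 14), the final learning rate works out to 0.01 * exp(-0.5), roughly 0.00607. A quick sanity check (note: on older TensorFlow versions the attribute is model.optimizer.lr rather than model.optimizer.learning_rate):

# Epochs 10-14 each multiply the rate by exp(-0.1), so the final
# value should be about 0.00607.
print(round(float(model.optimizer.learning_rate.numpy()), 5))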

Here is what the above code is doing:
1. The LearningRateScheduler callback takes a schedule function as its argument.
2. This function takes the current epoch index and the current learning rate as arguments and returns the learning rate to use for that epoch.
3. At the start of each epoch, the callback calls the function, passing in the epoch number and the optimizer's current learning rate.
4. The callback then writes the returned value back to the optimizer, so it is in effect for the whole epoch.
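
For intuition, here is a minimal sketch of roughly what the callback does internally, assuming a modern TF 2.x / Keras optimizer whose learning_rate is an assignable variable (the real implementation also handles the verbose flag and records the rate in the training logs):

class SimpleLRScheduler(tf.keras.callbacks.Callback):
  # Simplified stand-in for tf.keras.callbacks.LearningRateScheduler.
  def __init__(self, schedule):
    super().__init__()
    self.schedule = schedule

  def on_epoch_begin(self, epoch, logs=None):
    # Read the current rate, ask the schedule for a new one, and
    # write it back before any batches run this epoch.
    lr = float(self.model.optimizer.learning_rate.numpy())
    new_lr = float(self.schedule(epoch, lr))
    self.model.optimizer.learning_rate.assign(new_lr)

Using it is a drop-in swap for the built-in callback: callbacks=[SimpleLRScheduler(scheduler)].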