Introduction to TensorFlow in Python
Isaiah Hull
Visiting Associate Professor of Finance, BI Norwegian Business School
Stochastic gradient descent (SGD) optimizer
tf.keras.optimizers.SGD()
learning_rate
Simple and easy to interpret
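The SGD update is easy to see on a toy problem. Below is a minimal sketch (not from the slides) that uses `tf.keras.optimizers.SGD` with a `learning_rate` to minimize the quadratic `(x - 3)^2`; the variable `x` and its starting value are illustrative assumptions.

```python
import tensorflow as tf

# Illustrative setup (assumed): a single trainable scalar far from the minimum.
x = tf.Variable(10.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(100):
    with tf.GradientTape() as tape:
        loss = (x - 3.0) ** 2  # minimized at x = 3
    grads = tape.gradient(loss, [x])
    opt.apply_gradients(zip(grads, [x]))
```

After these steps, `x` has converged close to 3.0; with plain SGD, each update is just the gradient scaled by the fixed learning rate, which is what makes it simple to interpret.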
Root mean squared (RMS) propagation optimizer
tf.keras.optimizers.RMSprop()
learning_rate
momentum
decay
Allows for momentum to both build and decay
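A hedged sketch of the same toy problem with `tf.keras.optimizers.RMSprop`, passing the `learning_rate` and `momentum` arguments named above; the variable, learning rate, and step count are illustrative choices, not values from the course.

```python
import tensorflow as tf

# Illustrative setup (assumed values): RMSprop rescales each gradient by a
# running RMS of past gradients, and the momentum term lets updates build up.
x = tf.Variable(10.0)
opt = tf.keras.optimizers.RMSprop(learning_rate=0.01, momentum=0.9)

for _ in range(500):
    with tf.GradientTape() as tape:
        loss = (x - 3.0) ** 2
    grads = tape.gradient(loss, [x])
    opt.apply_gradients(zip(grads, [x]))
```

Because the gradient is normalized by its RMS, the step size depends less on the raw gradient magnitude than it does for plain SGD.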
Adaptive moment estimation (Adam) optimizer
tf.keras.optimizers.Adam()
learning_rate
beta_1
Performs well with default parameter values
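Adam can be sketched the same way; here `beta_1` is spelled out even though it matches the default of 0.9, and the learning rate and iteration count are illustrative assumptions rather than course values.

```python
import tensorflow as tf

# Illustrative setup (assumed values): Adam keeps exponential moving averages
# of the gradient (controlled by beta_1) and of its square (beta_2).
x = tf.Variable(10.0)
opt = tf.keras.optimizers.Adam(learning_rate=0.1, beta_1=0.9)

for _ in range(500):
    with tf.GradientTape() as tape:
        loss = (x - 3.0) ** 2
    grads = tape.gradient(loss, [x])
    opt.apply_gradients(zip(grads, [x]))
```

In practice the defaults (`learning_rate=0.001`, `beta_1=0.9`, `beta_2=0.999`) are a common starting point, which is why Adam is often described as performing well out of the box.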
import tensorflow as tf
# Define the model function
def model(bias, weights, features=borrower_features):
    product = tf.matmul(features, weights)
    return tf.keras.activations.sigmoid(product + bias)
# Compute the predicted values and loss
def loss_function(bias, weights, targets=default, features=borrower_features):
    predictions = model(bias, weights)
    return tf.keras.losses.binary_crossentropy(targets, predictions)
# Minimize the loss function with RMS propagation
opt = tf.keras.optimizers.RMSprop(learning_rate=0.01, momentum=0.9)
opt.minimize(lambda: loss_function(bias, weights), var_list=[bias, weights])
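The example above assumes `borrower_features`, `default`, `bias`, and `weights` are already defined (the course loads them from a credit-default dataset). A self-contained sketch with synthetic stand-ins for those objects, and a `tf.GradientTape` training loop in place of the single `minimize` call:

```python
import tensorflow as tf

# Synthetic stand-ins (assumed shapes) for the course's credit-default data.
tf.random.set_seed(1)
borrower_features = tf.random.uniform([100, 3])
default = tf.cast(tf.random.uniform([100, 1]) > 0.7, tf.float32)
bias = tf.Variable(0.0)
weights = tf.Variable(tf.ones([3, 1]))

# Model and loss mirror the slide's definitions.
def model(bias, weights, features=borrower_features):
    product = tf.matmul(features, weights)
    return tf.keras.activations.sigmoid(product + bias)

def loss_function(bias, weights, targets=default, features=borrower_features):
    predictions = model(bias, weights, features)
    return tf.keras.losses.binary_crossentropy(targets, predictions)

initial_loss = float(tf.reduce_mean(loss_function(bias, weights)))

# Minimize the loss with RMS propagation, one gradient step per iteration.
opt = tf.keras.optimizers.RMSprop(learning_rate=0.01, momentum=0.9)
for _ in range(100):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(loss_function(bias, weights))
    grads = tape.gradient(loss, [bias, weights])
    opt.apply_gradients(zip(grads, [bias, weights]))
```

After training, the mean binary cross-entropy is below its starting value; on the real dataset the same loop would be run for more iterations and monitored on held-out data.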