Q-learning

Reinforcement Learning with Gymnasium in Python

Fouad Trad

Machine Learning Engineer

Introduction to Q-learning

  • The "Q" stands for "quality"
  • Model-free technique
  • Learns the optimal Q-table through interaction with the environment

Diagram showing the steps involved in Q-learning including initializing a Q-table, choosing an action to perform, receiving a reward from the environment, and updating the table. The agent continues this loop until convergence is achieved after a certain number of episodes.
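The loop in the diagram can be sketched in a few lines. The snippet below uses a hypothetical two-state, two-action toy environment (`step()` is made up for illustration, not a Gymnasium API) so the loop is self-contained; the full Gymnasium implementation follows on the next slides.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))           # 1. initialize the Q-table
alpha, gamma = 0.1, 0.9

def step(state, action):
    # Toy dynamics (assumption): action 1 moves to state 1, which pays reward 1
    new_state = action
    reward = 1.0 if new_state == 1 else 0.0
    return new_state, reward

for episode in range(500):
    state = 0
    for _ in range(10):
        action = rng.integers(n_actions)      # 2. choose an action
        new_state, reward = step(state, action)  # 3. receive a reward
        Q[state, action] += alpha * (         # 4. update the table
            reward + gamma * Q[new_state].max() - Q[state, action]
        )
        state = new_state
```

After enough episodes, the greedy action in each state converges to the rewarding one.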


Q-learning vs. SARSA

SARSA

Image showing the mathematical formula of the SARSA update rule.

  • Updates based on the next action actually taken
  • On-policy learner
Q-learning

Image showing the mathematical formula of the Q-learning update rule.

  • Updates using the best next action, independent of the action actually taken
  • Off-policy learner
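The contrast between the two update rules can be shown side by side. This is a minimal sketch on a toy Q-table (sizes and hyperparameters are illustrative): SARSA bootstraps from the next action the agent actually takes, while Q-learning bootstraps from the greedy (max) next action.

```python
import numpy as np

alpha, gamma = 0.1, 1.0
Q = np.zeros((4, 2))  # toy table: 4 states, 2 actions

def sarsa_update(state, action, reward, new_state, new_action):
    # On-policy: uses Q of the action actually taken in the next state
    Q[state, action] += alpha * (
        reward + gamma * Q[new_state, new_action] - Q[state, action]
    )

def q_learning_update(state, action, reward, new_state):
    # Off-policy: uses the maximum Q-value over next-state actions
    Q[state, action] += alpha * (
        reward + gamma * Q[new_state].max() - Q[state, action]
    )
```

The only difference is the bootstrap target: `Q[new_state, new_action]` versus `Q[new_state].max()`.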

Q-learning implementation

import gymnasium as gym
import numpy as np

env = gym.make("FrozenLake-v1", is_slippery=True)

num_episodes = 1000
alpha = 0.1
gamma = 1

num_states, num_actions = env.observation_space.n, env.action_space.n
Q = np.zeros((num_states, num_actions))
reward_per_random_episode = []

Q-learning implementation

for episode in range(num_episodes):
    state, info = env.reset()
    terminated = False
    episode_reward = 0

    while not terminated:
        # Random action selection
        action = env.action_space.sample()
        # Take action and observe new state and reward
        new_state, reward, terminated, truncated, info = env.step(action)
        # Update Q-table
        update_q_table(state, action, reward, new_state)
        episode_reward += reward
        state = new_state

    reward_per_random_episode.append(episode_reward)

Q-learning update

Image showing the mathematical formula of the Q-learning update rule.

def update_q_table(state, action, reward, new_state):
    old_value = Q[state, action]
    next_max = max(Q[new_state])
    Q[state, action] = (1 - alpha) * old_value + alpha * (reward + gamma * next_max)
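A quick numeric check of the update rule helps make it concrete. With a hypothetical old value of 0.5, a reward of 1, a best next-state value of 0.6, `alpha = 0.1`, and `gamma = 1`, the new value is (1 - 0.1) * 0.5 + 0.1 * (1 + 1 * 0.6) = 0.45 + 0.16 = 0.61:

```python
import numpy as np

alpha, gamma = 0.1, 1.0
Q = np.zeros((2, 2))
Q[0, 0] = 0.5          # old value of the (state, action) pair
Q[1, :] = [0.2, 0.6]   # next-state values; the max is 0.6

def update_q_table(state, action, reward, new_state):
    old_value = Q[state, action]
    next_max = max(Q[new_state])
    Q[state, action] = (1 - alpha) * old_value + alpha * (reward + gamma * next_max)

update_q_table(0, 0, 1.0, 1)
# Q[0, 0] is now 0.9 * 0.5 + 0.1 * (1 + 1.0 * 0.6) = 0.61
```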

Using the policy

reward_per_learned_episode = []
policy = get_policy()

for episode in range(num_episodes):
    state, info = env.reset()
    terminated = False
    episode_reward = 0

    while not terminated:
        # Select the best action based on learned Q-table
        action = policy[state]
        # Take action and observe new state
        new_state, reward, terminated, truncated, info = env.step(action)
        episode_reward += reward
        state = new_state

    reward_per_learned_episode.append(episode_reward)
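The `get_policy()` helper is not defined on the slides; a minimal sketch, assuming it simply returns the greedy (argmax) action for each state of the learned Q-table:

```python
import numpy as np

# Hypothetical toy Q-table: 2 states, 2 actions
Q = np.array([[0.1, 0.9],
              [0.7, 0.2]])

def get_policy():
    # Greedy policy: pick the action with the highest Q-value in each state
    return {state: int(np.argmax(Q[state])) for state in range(Q.shape[0])}
```

For the toy table above, the policy maps state 0 to action 1 and state 1 to action 0.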

Q-learning evaluation

import numpy as np
import matplotlib.pyplot as plt

avg_random_reward = np.mean(reward_per_random_episode)
avg_learned_reward = np.mean(reward_per_learned_episode)

plt.bar(['Random Policy', 'Learned Policy'],
        [avg_random_reward, avg_learned_reward],
        color=['blue', 'green'])
plt.title('Average Reward per Episode')
plt.ylabel('Average Reward')
plt.show()

Image of a bar plot showing that the learned policy yields a much higher average reward (around 0.26) than the random policy (around 0.01).


Let's practice!
