Proximal policy optimization

Deep Reinforcement Learning in Python

Timothée Carayol

Principal Machine Learning Engineer, Komment

A2C

  • A2C policy updates:
    • Based on volatile estimates
    • Can be large and unstable
  • May harm performance

A Mars rover lies broken after experiencing an accident on rough terrain

PPO

  • PPO sets limits on the size of each policy update
  • Improves stability

A Mars rover is happily cruising on the martian surface


The probability ratio

  • PPO main innovation: a new objective function
  • At its core:

Probability ratio $r_t(\theta)$: the ratio between the probability of an action under the new policy and its probability under the old policy:

$$r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{old}}(a_t \mid s_t)}$$

  • How much more likely is action $a_t$ with $\theta$ than with $\theta_{old}$?

 

  ratio = (action_log_prob.exp() /
           old_action_log_prob.exp().detach())

  # Or equivalently:
  # ratio = torch.exp(action_log_prob - old_action_log_prob.detach())
  • detach the denominator to prevent gradient propagation
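
As a quick illustration of that last point, here is a minimal sketch (the log-probability values are made up for the example, not from the slides) showing that gradients flow only through the numerator once the denominator is detached:

import torch

# Illustrative log probabilities with gradients enabled (values assumed for the example)
action_log_prob = torch.tensor(-1.3863, requires_grad=True)      # log(0.25), new policy
old_action_log_prob = torch.tensor(-1.6094, requires_grad=True)  # log(0.20), old policy

ratio = action_log_prob.exp() / old_action_log_prob.exp().detach()
ratio.backward()

print(action_log_prob.grad)      # tensor(1.2500): gradient flows through the new policy
print(old_action_log_prob.grad)  # None: detach() blocks gradients to the old policy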

Clipping the probability ratio

 

  • Clip function:

A graph of clip(x, 0.8, 1.2) for x between 0.6 and 1.4: the function equals 0.8 for x below 0.8, equals x for x between 0.8 and 1.2, and equals 1.2 for x above 1.2.

The clipped probability ratio is $\mathrm{clip}(r_t(\theta), 1-\varepsilon, 1+\varepsilon)$

 

 

clipped_ratio = torch.clamp(ratio,
                            1-epsilon, 
                            1+epsilon)
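
As a quick check of the clamp behaviour (the ratio values below are illustrative, with epsilon = 0.2 as in the figure):

import torch

epsilon = 0.2
ratios = torch.tensor([0.7, 1.0, 1.3])
print(torch.clamp(ratios, 1-epsilon, 1+epsilon))
# tensor([0.8000, 1.0000, 1.2000])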

The calculate_ratios function

 

def calculate_ratios(action_log_prob, action_log_prob_old, epsilon):
    prob = action_log_prob.exp()
    prob_old = action_log_prob_old.exp()
    prob_old_detached = prob_old.detach()
    ratio = prob / prob_old_detached
    clipped_ratio = torch.clamp(ratio, 1-epsilon, 1+epsilon)
    return (ratio, clipped_ratio)
Example with epsilon = 0.2:

Ratio: tensor(1.25)
Clipped ratio: tensor(1.20)
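
One pair of inputs that would produce this output (the underlying probabilities 0.25 and 0.20 are assumptions for illustration):

import torch

action_log_prob = torch.tensor(0.25).log()      # new policy assigns probability 0.25 to a_t
action_log_prob_old = torch.tensor(0.20).log()  # old policy assigned probability 0.20

ratio, clipped_ratio = calculate_ratios(action_log_prob, action_log_prob_old, epsilon=0.2)
print("Ratio:", ratio)                  # Ratio: tensor(1.2500)
print("Clipped ratio:", clipped_ratio)  # Clipped ratio: tensor(1.2000)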

The PPO objective function

 

  • Surrogate objective:

$$J^{\mathrm{surr}} = \hat{\mathbb{E}}_t\left[r_t(\theta)\,\hat{A}_t\right]$$

surr1 = ratio * td_error.detach()

surr2 = clipped_ratio * td_error.detach()
objective = torch.min(surr1, surr2)

 

  • Surrogate with clipped ratio:

$$\mathrm{clip}(r_t(\theta),1-\varepsilon,1+\varepsilon)\,\hat{A}_t$$

  • PPO clipped surrogate objective function:

The expected value of the minimum between the ratio times the advantage and the clipped ratio times the advantage:

$$J^{\mathrm{clip}} = \hat{\mathbb{E}}_t\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}(r_t(\theta),1-\varepsilon,1+\varepsilon)\,\hat{A}_t\right)\right]$$

  • More stable than A2C
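
A small numeric sketch of how the min caps the update incentive (the epsilon, ratio, and advantage values below are assumptions for illustration; the advantage plays the role of the detached TD error in the code above):

import torch

epsilon = 0.2
advantage = torch.tensor(2.0)  # positive advantage estimate
ratio = torch.tensor(1.5)      # new policy already 50% more likely to pick a_t

clipped_ratio = torch.clamp(ratio, 1-epsilon, 1+epsilon)  # 1.2
surr1 = ratio * advantage                                 # 3.0
surr2 = clipped_ratio * advantage                         # 2.4
print(torch.min(surr1, surr2))  # tensor(2.4000): no incentive to push the ratio past 1 + epsilon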

PPO loss calculation

 

def calculate_losses(critic_network, 
                     action_log_prob,                                        
                     action_log_prob_old,
                     reward, state, next_state,
                     done
                     ):

    # calculate TD error (same as A2C)
    value = critic_network(state)
    next_value = critic_network(next_state)
    td_target = (reward + 
                 gamma * next_value * (1-done))
    td_error = td_target - value
    ratio, clipped_ratio = calculate_ratios(action_log_prob,
                                            action_log_prob_old,
                                            epsilon)

    # PPO clipped surrogate objective
    surr1 = ratio * td_error.detach()
    surr2 = clipped_ratio * td_error.detach()
    objective = torch.min(surr1, surr2)

    # the actor maximizes the objective, so its loss is the negative
    actor_loss = -objective
    critic_loss = td_error ** 2
    return actor_loss, critic_loss
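
To show how these losses might be plugged into a training step, here is a minimal, hypothetical sketch; the networks, optimizers, dimensions, and hyperparameters below are assumptions not shown on the slides, and a full PPO implementation would typically reuse each batch of experience for several update epochs:

import torch
import torch.nn as nn

# Hypothetical networks and hyperparameters (assumptions, not from the slides)
obs_dim, n_actions = 4, 2
actor_network = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                              nn.Linear(64, n_actions), nn.Softmax(dim=-1))
critic_network = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                               nn.Linear(64, 1))
actor_optimizer = torch.optim.Adam(actor_network.parameters(), lr=1e-4)
critic_optimizer = torch.optim.Adam(critic_network.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.2

def training_step(state, action, reward, next_state, done, action_log_prob_old):
    # log probability of the stored action under the current policy
    action_log_prob = torch.distributions.Categorical(
        actor_network(state)).log_prob(action)

    actor_loss, critic_loss = calculate_losses(
        critic_network, action_log_prob, action_log_prob_old,
        reward, state, next_state, done)

    actor_optimizer.zero_grad()
    actor_loss.backward()
    actor_optimizer.step()

    critic_optimizer.zero_grad()
    critic_loss.backward()
    critic_optimizer.step()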

Let's practice!
