Actor Critic Learning for Cartpole V1
Actor Critic method
Actor Critic methods are a popular choice for on-policy learning that combine a policy-gradient update with a learned value function. Policy gradients are a way to modify the policy so that it maps a state directly to an action, without needing a value estimate to choose that action. The critic critiques a state by estimating its value; the actor proposes an action for that state. The actor uses the estimate from the critic to determine the direction in which to move its action probabilities. This is done through the TD (temporal-difference) error, expressed as \[\delta_t = r_{t+1} + \gamma V(s_{t+1}) - V(s_t)\]
One can think of $r_{t+1} + \gamma V(s_{t+1})$ as an updated estimate of the value at the current state, which is compared against the current estimate $V(s_t)$. A positive TD error means that the action taken was a good one and the agent should do more of that action. A negative TD error means that the agent should steer away from the action it just took. The critic learns from whatever the actor does, so this is an example of on-policy learning.
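To make this quantity concrete, here is a minimal sketch of the TD-error computation for a single transition; `value_fn` is a placeholder for the critic's value estimate, and the discount factor of 0.99 is an assumption rather than a value taken from the original experiments:

```python
def td_error(value_fn, r, s, s_next, done, gamma=0.99):
    """TD error delta_t for one transition (s, a, r, s_next).

    `value_fn(state)` stands in for the critic's scalar estimate V(state).
    When the episode has terminated (`done`), the bootstrap term is dropped.
    """
    v_next = 0.0 if done else value_fn(s_next)
    return r + gamma * v_next - value_fn(s)
```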
In order to update the critic, I collect experience tuples of (s, a, r, s') and store them in separate experience buffers: one for the actor and one for the critic. When the agent is exploring, I save the experience tuple in both the critic and actor buffers; otherwise, I save it only in the critic buffer. The intuition is that the actor can then learn from the actions it has attempted before. Additionally, the problem is small enough that the critic can use all of the experience tuples, rather than training only on the tuples gathered outside of exploration. These tuples are later extracted from the experience buffers in batches for training.
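A minimal sketch of this two-buffer bookkeeping is shown below; the `Transition` layout, the buffer capacity, and the batch size are assumptions made for illustration rather than details from the original implementation:

```python
import random
from collections import deque, namedtuple

Transition = namedtuple("Transition", ["s", "a", "r", "s_next", "done"])

critic_buffer = deque(maxlen=100_000)  # receives every transition
actor_buffer = deque(maxlen=100_000)   # additionally receives exploratory transitions

def store(transition, explored):
    """Every transition goes into the critic buffer; transitions where the
    action was chosen randomly (exploration) also go into the actor buffer,
    so the actor can learn from actions it has attempted."""
    critic_buffer.append(transition)
    if explored:
        actor_buffer.append(transition)

def sample(buffer, batch_size=32):
    """Draw a random training batch from a buffer."""
    return random.sample(buffer, min(batch_size, len(buffer)))
```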
Because I use neural networks as function approximators, with parameters for both the state-value function and the policy, I update those parameters through gradient descent: the critic is given TD targets for its value estimates, and the actor is given the taken actions as targets. The critic updates itself through TD targets. The actor essentially queries the critic to determine whether or not to move its output towards the action $a_t$ taken at state $s_t$. I use positive temporal differencing to decide whether the actor should update itself at all; that is, the actor does not weight the update by the magnitude of the TD error, but treats every positively rated action the same.
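The text above does not pin down the exact network architecture, so the following is only a plausible sketch; the choice of tf.keras, the single 64-unit hidden layer, and the Adam optimizer are all assumptions:

```python
from tensorflow import keras

n_state = 4    # CartPole-v1 observation size
n_actions = 2  # push left / push right

# Critic: maps a state to a scalar value estimate V(s), trained on TD targets.
critic = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(n_state,)),
    keras.layers.Dense(1),
])
critic.compile(optimizer="adam", loss="mse")

# Actor: maps a state to a softmax distribution over the discrete actions,
# trained with categorical cross entropy towards the taken action.
actor = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(n_state,)),
    keras.layers.Dense(n_actions, activation="softmax"),
])
actor.compile(optimizer="adam", loss="categorical_crossentropy")
```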
Exploration
Actor exploration is also implemented in an $\epsilon$-greedy fashion. Because the actions are discrete, with probability $\epsilon$ the actor chooses a random discrete action. In a continuous action space, exploration could instead be handled by an algorithm called CACLA (Continuous Actor Critic Learning Automaton), which essentially searches the action space. The output of the discrete actor is a softmax probability distribution over the actions. The softmax is expressed as: \[ \sigma (z)_j = \frac{e^{z_j}}{\sum_k e^{z_k}} \] where $z_j$ is the network's output for action $j$ and $k$ iterates over all possible actions. We can choose the action stochastically (by sampling from this distribution) or deterministically (by taking the max), depending on how we want to evaluate the actor. The actor is then trained using a categorical cross-entropy loss, which simply nudges the probability of choosing the target action higher. Exploration is annealed linearly as the number of experiences grows.
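Putting this together, a sketch of the action-selection step might look like the following; the annealing constants and the `predict` interface (matching the tf.keras sketch above) are assumptions:

```python
import numpy as np

def select_action(actor, state, step, n_actions=2,
                  eps_start=1.0, eps_end=0.05, anneal_steps=10_000):
    """Epsilon-greedy exploration on top of the actor's softmax output.

    Epsilon is annealed linearly with the number of experiences `step`.
    Returns (action, explored) so the caller can route the transition
    into the appropriate experience buffer(s)."""
    eps = max(eps_end, eps_start - (eps_start - eps_end) * step / anneal_steps)
    if np.random.rand() < eps:
        return np.random.randint(n_actions), True   # random exploratory action
    probs = actor.predict(np.asarray(state)[None, :], verbose=0)[0]
    # Sample from the softmax for stochastic behaviour, or take the argmax
    # for deterministic (evaluation-style) behaviour; argmax is shown here.
    return int(np.argmax(probs)), False
```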
Training
As noted previously, training is done by periodically selecting random batches from the experience buffers and presenting them to the critic and actor networks. The actor only updates itself when there is a positive TD error from the critic; otherwise, it ignores the experience altogether.
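A sketch of one such training step, reusing the buffers, `sample` helper, and networks sketched earlier, is shown below; the batch size, the handling of terminal states, and the one-hot target construction are assumptions:

```python
import numpy as np

def train_step(actor, critic, gamma=0.99, batch_size=32, n_actions=2):
    """One periodic update: fit the critic on TD targets, then fit the actor
    only on transitions whose TD error is positive."""
    if not critic_buffer or not actor_buffer:
        return

    # --- Critic update: regress V(s) toward the TD target r + gamma * V(s') ---
    batch = sample(critic_buffer, batch_size)
    s = np.array([t.s for t in batch])
    s_next = np.array([t.s_next for t in batch])
    r = np.array([t.r for t in batch])
    done = np.array([t.done for t in batch], dtype=float)

    v_next = critic.predict(s_next, verbose=0).flatten()
    td_target = r + gamma * (1.0 - done) * v_next
    critic.fit(s, td_target.reshape(-1, 1), verbose=0)

    # --- Actor update: keep only transitions with a positive TD error ---
    batch = sample(actor_buffer, batch_size)
    s = np.array([t.s for t in batch])
    a = np.array([t.a for t in batch])
    s_next = np.array([t.s_next for t in batch])
    r = np.array([t.r for t in batch])
    done = np.array([t.done for t in batch], dtype=float)

    v_s = critic.predict(s, verbose=0).flatten()
    v_next = critic.predict(s_next, verbose=0).flatten()
    delta = r + gamma * (1.0 - done) * v_next - v_s

    keep = delta > 0  # positive temporal differencing: the magnitude is ignored
    if keep.any():
        # One-hot targets for categorical cross entropy: push the probability
        # of the action that was actually taken higher.
        targets = np.eye(n_actions)[a[keep]]
        actor.fit(s[keep], targets, verbose=0)
```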
The update for the critic amounts to minimizing the squared TD error; written as a loss on the value parameters, it looks like this:
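\[ L_{\text{critic}} = \big(r_{t+1} + \gamma V(s_{t+1}) - V(s_t)\big)^2 = \delta_t^2 \]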
The update for the actor is the categorical cross-entropy loss towards the taken action $a_t$, applied only when the TD error is positive; it looks like this:
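\[ L_{\text{actor}} = \begin{cases} -\log \pi(a_t \mid s_t) & \text{if } \delta_t > 0 \\ 0 & \text{otherwise,} \end{cases} \] where $\pi(a \mid s)$ is the softmax output of the actor network.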
I evaluate the agent every 50 episodes to check its performance. Once the performance has reached the maximum goal, I set the agent to always take the greedy (most probable) action at its current state. An error graph looks like this: