You've encountered a common issue in Q-Learning implementations: state-action values growing too high. Let's explore this problem and provide a solution.
Understanding the Issue
Your agent tries to maximize the expected total reward. However, your reward function returns a positive reward (0.5) merely for the game continuing. This makes prolonging the game indefinitely the optimal policy, so the expected total reward — and with it the Q-values — grows without bound.
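To see this concretely, assume a discount factor $\gamma$ (if your update is undiscounted, this corresponds to the limit $\gamma \to 1$). A policy that keeps the game going forever collects

$$\sum_{t=0}^{\infty} \gamma^{t} \cdot 0.5 \;=\; \frac{0.5}{1-\gamma},$$

which blows up as $\gamma \to 1$ and diverges outright in the undiscounted case, so no finite Q-value is ever "high enough."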
Solution: Adjusting the Reward Function
To resolve this, change your reward function to return a small negative reward for every non-terminal time step. This penalizes the agent for prolonging the game and pushes it toward ending games in its favor instead.
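One possible scheme, as a sketch — the outcome labels and magnitudes below are illustrative, not taken from your code:

```go
// Illustrative reward scheme: a small per-move penalty discourages
// stalling, while terminal outcomes dominate the total return.
func reward(outcome string) float64 {
	switch outcome {
	case "win":
		return 1.0 // best terminal outcome
	case "loss":
		return -1.0 // worst terminal outcome
	case "draw":
		return 0.0 // neutral terminal outcome
	default:
		return -0.1 // game still in progress: small step cost
	}
}
```

With this shape of reward, the only way to accumulate positive return is to actually win, and every wasted move costs the agent something.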
Implementation Considerations
In your code, agent.prevScore — a stored Q-value — is being used as the reward for the previous state-action. The update should use the actual reward received instead:
```go
// Use the actual reward received, not the stored Q-value (prevScore):
agent.values[mState] = oldVal + agent.LearningRate*(reward-oldVal)
```
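For context, here is a sketch of the full tabular update that line belongs to, assuming a discount factor and a max-over-successors helper — the struct fields and the maxNextValue helper are illustrative, not taken from your code:

```go
// Illustrative types; adapt the field names to your own Agent.
type Agent struct {
	values         map[string]float64
	LearningRate   float64
	DiscountFactor float64
}

// maxNextValue returns the best stored value reachable from
// nextState. Stubbed here as a plain lookup; in a real agent you
// would enumerate the legal moves from nextState and take the max.
func (agent *Agent) maxNextValue(nextState string) float64 {
	return agent.values[nextState] // zero for unseen states
}

// update applies the standard tabular TD update: nudge the stored
// value toward the target (actual reward plus the discounted best
// next value) at the learning rate.
func (agent *Agent) update(mState, nextState string, reward float64) {
	oldVal := agent.values[mState]
	target := reward + agent.DiscountFactor*agent.maxNextValue(nextState)
	agent.values[mState] = oldVal + agent.LearningRate*(target-oldVal)
}
```

Keeping DiscountFactor strictly below 1 also bounds the values on its own, since even an infinite stream of rewards then sums to a finite number.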
Expected Behavior
After implementing these changes, the Q-values should stabilize within the range implied by your reward scheme rather than growing without bound, and the agent should learn to end games in its favor instead of stalling.
Keep in mind that reinforcement learning algorithms sometimes exhibit non-intuitive behaviors, and understanding the underlying principles is crucial for developing effective solutions.