Time horizon T in policy gradients (actor-critic)
I am currently going through the Berkeley lectures on Reinforcement Learning. Specifically, I am at slide 5 of this lecture.
At the bottom of that slide, the gradient of the expected sum of rewards is given by
$$
\nabla J(\theta) = \frac{1}{N} \sum_{i=1}^N \sum_{t=1}^T \nabla_\theta \log \pi_\theta(a_{i,t} \vert s_{i,t}) \left( Q(s_{i,t},a_{i,t}) - V(s_{i,t}) \right)
$$
The Q-value function is defined as
$$Q(s_t,a_t) = \sum_{t'=t}^T \mathbb{E}_{\pi_\theta}\left[ r(s_{t'},a_{t'}) \vert s_t,a_t \right]$$
At first glance this makes sense: I compare the value of taking the chosen action $a_{i,t}$ to the average value of being in state $s_{i,t}$ at time step $t$, which tells me how good my action was.
My question is: a specific state $s_{spec}$ can occur at any timestep, for example $s_1 = s_{spec} = s_{10}$. But isn't there a difference in value depending on whether I hit $s_{spec}$ at timestep 1 or at timestep 10 when $T$ is fixed? Does this mean that every state has a different Q-value for each possible $t \in \{0,\ldots,T\}$? I somehow doubt that this is the case, but I don't quite understand how the time horizon $T$ fits in.
Or is $T$ not fixed? Perhaps it is defined as the time step at which the trajectory reaches a terminal state, but that would mean that, during trajectory sampling, each simulation takes a different number of timesteps.
machine-learning deep-learning reinforcement-learning policy-gradients actor-critic
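For concreteness, here is a small numerical sketch (not from the lecture; the MDP, policy, and numbers are made up) that confirms the intuition in the question: with a fixed horizon $T$ and no discounting, the Q-value of the same state under the same policy does depend on the timestep, because fewer rewards remain to be collected near the end of the episode.

```python
import numpy as np

# A made-up 2-state, 2-action MDP (purely illustrative).
# P[s, a, s'] = transition probability, R[s, a] = expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
pi = np.array([[0.5, 0.5],    # a made-up fixed policy pi(a|s)
               [0.3, 0.7]])

T = 10                        # fixed horizon

# Backward induction: Q[t, s, a] is the expected sum of rewards from
# timestep t to T-1 when taking action a in state s at time t and then
# following pi. With a fixed horizon, Q necessarily carries a time index.
V = np.zeros(2)               # value after the final step is zero
Q = np.zeros((T, 2, 2))
for t in reversed(range(T)):
    Q[t] = R + P @ V                 # Q_t(s,a) = r(s,a) + E_{s'}[V_{t+1}(s')]
    V = (pi * Q[t]).sum(axis=1)      # V_t(s) = E_{a ~ pi}[Q_t(s,a)]

print(Q[0, 0])      # Q-values of state 0 at the first timestep
print(Q[T - 1, 0])  # Q-values of the same state at the last timestep
```

Printing `Q[0, 0]` and `Q[T - 1, 0]` gives different values for the same state, which is exactly the time dependence the question is asking about.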
I think I have found something helpful: John Schulman notes this in his thesis [screenshot]. So it sounds like either I sample trajectories of variable length (each one ending when a terminal state is reached), or I encode the time step into the state, so that the state-value function also takes the time step into consideration?
– Dummie Variable
Sep 15 '18 at 13:49
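The second option mentioned in the comment, encoding the time step into the state, can be sketched as a small wrapper. This is a hypothetical sketch, not code from the lecture or the thesis; it assumes a gym-style environment exposing `reset()` and `step(action)` with the classic `(obs, reward, done, info)` signature and 1-D array observations.

```python
import numpy as np

class TimeAwareEnv:
    """Wrap a gym-style env so the observation also contains t / T."""

    def __init__(self, env, horizon):
        self.env = env
        self.horizon = horizon
        self.t = 0

    def _augment(self, obs):
        # Append the normalized timestep so the critic can learn a
        # time-dependent value for otherwise identical states.
        return np.append(obs, self.t / self.horizon)

    def reset(self):
        self.t = 0
        return self._augment(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.t += 1
        done = done or self.t >= self.horizon  # also cut off at the fixed horizon
        return self._augment(obs), reward, done, info
```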
1 Answer
In this case, I think it doesn't matter when you reach $s_{spec}$; what matters is how the Q-value gets updated as a result of taking an action in that state.
Therefore there shouldn't be different Q-values for each possible $t \in \{0, \ldots, T\}$, only Q-values for each possible action.
I'm sure being in a state at a specific timestep does make a difference, but it's the agent's job to learn this through the RL algorithm (like the policy gradient method in the lecture).
As for whether $T$ is fixed or not: the horizon $T$ can be infinite or fixed to a finite number.
For example, if $T$ is fixed to $10$, the agent should learn a policy that maximizes the total discounted reward within that finite amount of time, but that policy may not be optimal in general. When $T$ is infinite, there is more time to explore and to find an optimal policy.
The closest method I know of that takes note of when a state-action pair was encountered is experience replay, as used in DQN.
I'm also learning reinforcement learning right now! I recommend the Deep RL Bootcamp, since it comes with Python labs that are really intuitive.
answered Sep 18 '18 at 15:00, edited Sep 18 '18 at 15:10
– haruishi
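To connect the answer back to the estimator in the question: when trajectories end in a terminal state, each sampled trajectory has its own length $T_i$, and the $(Q - V)$ term can be estimated per trajectory as reward-to-go minus a learned baseline. Below is a minimal sketch under those assumptions; the Monte Carlo reward-to-go stands in for $Q$, and `value_fn` is an assumed, already-trained critic, not something from the lecture.

```python
import numpy as np

def rewards_to_go(rewards, gamma=1.0):
    """G_t = sum_{t'>=t} gamma^(t'-t) r_{t'}; gamma=1.0 matches the undiscounted sum in the question."""
    g = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        g[t] = running
    return g

def advantages(trajectories, value_fn, gamma=1.0):
    """One advantage estimate per visited (s_t, a_t); trajectories may differ in length.

    `trajectories` is a list of (states, rewards) pairs; `value_fn` is an assumed
    critic mapping a state to a scalar baseline V(s).
    """
    out = []
    for states, rewards in trajectories:
        q_hat = rewards_to_go(rewards, gamma)        # Monte Carlo estimate of Q(s_t, a_t)
        v = np.array([value_fn(s) for s in states])  # baseline V(s_t)
        out.append(q_hat - v)                        # the (Q - V) term in the gradient
    return out

# Toy usage with made-up data: two episodes of different length.
trajs = [(np.zeros((5, 3)), np.ones(5)),
         (np.zeros((8, 3)), np.ones(8))]
adv = advantages(trajs, value_fn=lambda s: 2.0)
print([a.shape for a in adv])   # [(5,), (8,)] -- per-trajectory lengths differ
```

With variable-length episodes nothing special is needed: the inner sum in the estimator simply runs up to each trajectory's own $T_i$.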