Keras/TF: Making sure image training data shape is accurate for Time Distributed CNN+LSTM
The data shape that makes sense to me is:
(9186, 120, 120, 1)
This means 9186 entries of 120 × 120-pixel greyscale images. I learnt that using TimeDistributed to design a CNN combined with an LSTM could help the model learn more from the images, given that they are sequenced.
Following some tutorials, I found out that I should add another dimension describing the moving frames. In my case I have one image per hour, so 6 hours would make a good moving sequence (I should probably try other lengths, because the time-series images are not regularly spaced). For example:
8:00 img1 target1
9:00 img2 target2
10:00 img3 target3
11:00 img4 target4
12:00 img5 target5
13:00 img6 target6
19:00 img7 target7
...
My question is: when I add another dimension of, say, length 6, how can such a model output exactly one prediction per entry, while using a frame of length 6 to learn the next one?
train_images.reshape((1531, 6, 120, 120, 1)).shape
This gives me the impression that such a model would output one prediction per sequence of 6 frames, which means 1531 results.
Am I understanding it wrong?
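To make the question concrete, here is a rough sketch of what I imagine the windowing would look like (make_windows is just a placeholder helper I made up, not something from my actual pipeline):

import numpy as np

def make_windows(images, targets, window=6):
    """Pair each overlapping window of `window` frames with one target:
    the target of the frame right after the window (placeholder helper)."""
    x, y = [], []
    for i in range(len(images) - window):
        x.append(images[i:i + window])   # (window, 120, 120, 1)
        y.append(targets[i + window])    # single target for the next hour
    return np.array(x), np.array(y)

# With images of shape (9186, 120, 120, 1), this would give
# x of shape (9180, 6, 120, 120, 1) and one target per window in y.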
[Edit following wind's answer]
I think my problem persists, but I was not aware of it before. I declared the first layer as
TimeDistributed(
Conv2D(64, (3, 3), activation='relu'),
input_shape=(6, 120, 120, 1)
)
with x_train reshaped as
train_images_ = train_images.reshape((1531, 6, 120, 120, 1))
and y_train reshaped as
y_train_ = y_train.reshape((1531, 6, 8, 1))
As you can see, x_train and y_train have the same length, with a frame of 6 and an image size of 120, while y_train consists of 8 target columns.
To my understanding, this is the way to do it in TensorFlow; unfortunately, this is the error I am getting:
ValueError: Error when checking target: expected dense_2 to have 2
dimensions, but got array with shape (1531, 6, 8, 1)
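For context, the rest of my model roughly follows the tutorials; this is only a sketch of what I am building (everything after the first layer is a guess, the exact sizes may differ), together with a check of what target shape Keras expects:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import TimeDistributed, Conv2D, MaxPooling2D, Flatten, LSTM, Dense

# Sketch: CNN applied per frame, then an LSTM over the 6 frames, then 8 outputs.
model = Sequential([
    TimeDistributed(Conv2D(64, (3, 3), activation='relu'),
                    input_shape=(6, 120, 120, 1)),
    TimeDistributed(MaxPooling2D()),
    TimeDistributed(Flatten()),
    LSTM(32),        # return_sequences=False by default: the time axis is collapsed
    Dense(8),
])
model.compile(optimizer='adam', loss='mse')

print(model.output_shape)  # shows the y shape Keras expects for this structure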
keras tensorflow lstm cnn rnn
asked 12 hours ago by bacloud14, edited 2 hours ago
1 Answer
If I understand your question correctly, you worry that changing the input shape (adding a time dimension) will affect your output shape. If so, don't worry: the output shape is independent of the input; it depends only on the structure of the neural network.
– wind, answered 9 hours ago
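To illustrate with the layer from the question (the LSTM and Dense sizes below are assumptions, not the asker's exact code): the tail of the network, not the input, decides whether you get one prediction per window or one per frame, and therefore which y shape Keras expects.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import TimeDistributed, Conv2D, MaxPooling2D, Flatten, LSTM, Dense

def build(per_frame):
    """Same CNN front end; only the tail changes the output (and target) shape."""
    m = Sequential([
        TimeDistributed(Conv2D(16, (3, 3), activation='relu'),
                        input_shape=(6, 120, 120, 1)),
        TimeDistributed(MaxPooling2D()),
        TimeDistributed(Flatten()),
        LSTM(32, return_sequences=per_frame),
    ])
    m.add(TimeDistributed(Dense(8)) if per_frame else Dense(8))
    return m

print(build(False).output_shape)  # (None, 8)    -> y of shape (1531, 8): one prediction per window
print(build(True).output_shape)   # (None, 6, 8) -> y of shape (1531, 6, 8): one prediction per frame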
This clarifies things for me. Could you please check my edit for what follows? – bacloud14, 2 hours ago