Multiple Keras models in parallel - time efficient
I am trying to run two different Keras models in parallel. I tried combining them with the functional API:
input1 = Input(inputShapeOfModel1)
input2 = Input(inputShapeOfModel2)
output1 = model1(input1)
output2 = model2(input2)
parallelModel = Model([input1, input2], [output1, output2])
This works, but it does not actually run in parallel: the combined inference time is simply the sum of the two models' individual inference times.
My question is: should this run concurrently?
I also tried loading the models in separate .py files with GPU memory options set (see the sketch below), but I still did not get parallelism: each model's inference time rose to about 1.5x its standalone time.
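By "GPU memory options" I mean capping each process's share of GPU memory so that two processes can share one card. A minimal sketch, assuming the TF1-era Keras backend; the 0.45 fraction is an arbitrary placeholder, not a recommendation:
import tensorflow as tf
from keras import backend as K

# Let this process claim only part of the GPU, leaving room for a second process.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.45
K.set_session(tf.Session(config=config))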
Is there any way to get the combined inference time of the two models close to a single model's inference time?
Is the only solution to add a second GPU?
UPDATE: when launched from separate scripts, the models do seem to run in parallel, so there must be a way to run them efficiently from a single Python/Keras process as well.
keras tensorflow computer-vision gpu parallel
asked Sep 7 '18 at 4:19 by Lara Larsen; last edited Sep 24 '18 at 5:57
This might help: stackoverflow.com/questions/7207309/…
– Erik van de Ven
Sep 24 '18 at 8:46
Have you got the answer?
– Lion
Mar 27 at 10:18
1 Answer
As Erik van de Ven suggested, it sounds like running each model in a separate process should provide the parallelism you are after.
You could either run the fit (or predict) function for each model in a different process (a sketch of this follows the code below), or you could place the two models on different devices, e.g. one on the CPU and one on the GPU:
import tensorflow as tf

# Pin each sub-model to its own device so their ops can execute side by side.
with tf.device('/cpu:0'):
    input1 = Input(inputShapeOfModel1)
    output1 = model1(input1)
with tf.device('/gpu:0'):
    input2 = Input(inputShapeOfModel2)
    output2 = model2(input2)
model = Model([input1, input2], [output1, output2])
I haven't tried either of these myself, though, so I'm not sure which would give the best result.
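For the first suggestion, here is a minimal sketch of the one-process-per-model route. The model file names and input shapes are hypothetical placeholders, not from the original post; importing Keras inside the worker gives each process its own TensorFlow session:
import numpy as np
from multiprocessing import Process

def run_inference(model_path, x):
    # Import inside the worker so each process builds its own TF session
    # (and, if capped as in the question, its own slice of GPU memory).
    from keras.models import load_model
    model = load_model(model_path)
    print(model_path, model.predict(x).shape)

if __name__ == '__main__':
    # Placeholder files and shapes, for illustration only.
    x1 = np.random.rand(32, 224, 224, 3)
    x2 = np.random.rand(32, 128)
    p1 = Process(target=run_inference, args=('model1.h5', x1))
    p2 = Process(target=run_inference, args=('model2.h5', x2))
    p1.start(); p2.start()
    p1.join(); p2.join()
Combined with the per-process memory fraction shown in the question, both workers can share a single GPU.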
answered Nov 16 '18 at 20:37 by Gal Avineri