Why is model.fit_generator in Keras taking so much time even before picking up the data?
In Keras, I am training a model as shown below:



model.fit_generator(data_generator(),
                    samples_per_epoch=count,
                    validation_data=(x_val, y_val),
                    nb_epoch=50,
                    callbacks=getCallBacks(),
                    verbose=1)


In the data_generator function, I print a few debugging statements.



When I run the fit_generator call above, it takes a really long time before the output of those statements in data_generator() appears.



Is there a series of steps that Keras performs before picking up the first data batch for training that makes the process so slow, or is there some other caveat?
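
For reference, fit_generator expects data_generator() to be an infinite Python generator that yields (x, y) batches. A minimal hypothetical sketch of such a generator, with a debug print where each batch is produced (the batch size, shapes and data-loading step are placeholders, not the actual code), might look like:

import numpy as np

def data_generator(batch_size=32):
    # Hypothetical stand-in: fit_generator expects this to yield batches indefinitely.
    while True:
        print("data_generator: producing a batch")  # the kind of debug statement in question
        # In the real code, a batch would be read and preprocessed here.
        x_batch = np.random.rand(batch_size, 10)
        y_batch = np.random.randint(0, 2, size=(batch_size, 1))
        yield x_batch, y_batch
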
keras
asked Jun 19 '18 at 9:23 by Divyanshu Shekhar
Did you put a debug statement at the top of your fit_generator function? – Imran, Dec 17 '18 at 21:48
1 Answer
I have noticed this when training on one or more GPUs. I think it is due to TensorFlow having to acquire resources (in the background it reserves the entire GPU memory). Perhaps some amount of data, e.g. the first batch, is also copied over to the GPU.



I didn't find a way to reduce the waiting time dramatically, but you could try the TensorFlow options to allocate less memory up front and allow memory growth as required. Check out the official docs and this issue for the Keras version.



Here is the latest recommendation from that issue: set the TensorFlow options before importing Keras.



# -------------------------- set gpu using tf ---------------------------
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grab GPU memory only as needed
session = tf.Session(config=config)

# ------------------- start importing keras module ----------------------
import keras.backend.tensorflow_backend as K
import keras  # ... remaining Keras imports
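
For the "allocate less memory" option mentioned above, a possible variant (a sketch assuming TensorFlow 1.x and the standalone keras package; the 0.4 fraction is an arbitrary example) is to cap the up-front allocation and register the session with the Keras backend:

# Sketch: cap how much GPU memory TensorFlow grabs up front (TF 1.x API)
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4  # use at most ~40% of GPU memory
session = tf.Session(config=config)

import keras.backend.tensorflow_backend as K
K.set_session(session)  # have Keras reuse this session instead of creating its own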
answered Jun 19 '18 at 10:16 by n1k31t4