400 positive and 13000 negative: how to split dataset up (train, test, validation)














I'm working on a medical diagnostic convolutional neural network problem, and it's not obvious (to me) how the dataset should be split up.



Do I have enough data to split it into three sets (train, validation, test), or should I just have train and validation?



What proportion of images should I put in each?



(research article links appreciated :])






























      neural-network deep-learning dataset convnet training






      asked Jun 18 '18 at 10:40









A T

2 Answers







For the problem of an imbalanced dataset, you can look into stratified sampling or stratified cross-validation (as mentioned here). One idea might be to create stratified batches from the data.



I would make every attempt to keep separate train/val/test splits, because you will otherwise face issues when claiming a final test accuracy, as the model may effectively have seen your entire dataset.
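As a sketch of such a split (assuming scikit-learn; the 70/15/15 proportions and the label array are my own illustration, not something the answer prescribes):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical labels matching the question: 400 positive, 13000 negative
y = np.array([1] * 400 + [0] * 13000)
X = np.arange(len(y)).reshape(-1, 1)  # placeholder features

# First carve out a 15% test set; stratify=y preserves the
# 400:13000 class ratio inside every split.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=0)

# Then split the remainder into train (~70% overall) and val (~15% overall).
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.15 / 0.85,
    stratify=y_trainval, random_state=0)
```

The key point is the `stratify` argument: without it, a random 15% test split could easily contain far fewer than the expected 60 positives.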



One could imagine splitting to have e.g. 300/9750 (pos/neg) in the training dataset, and during training you create stratified batches from those 10,050 images, so that each batch of e.g. 50 images contains 10 positives and 40 negatives. This is still somewhat imbalanced, but you are pushing the balance in a more favourable direction, in that the model should be able to learn more efficiently.
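A minimal sketch of such a stratified batch generator (pure NumPy; the `stratified_batches` helper and its arguments are hypothetical names, and the minority class is simply reshuffled and recycled once exhausted):

```python
import numpy as np

def stratified_batches(pos_idx, neg_idx, n_pos=10, n_neg=40, seed=0):
    """Yield index batches with a fixed positive/negative mix.

    Negatives are consumed once; positives (the minority class)
    are recycled so every batch keeps the same n_pos/n_neg mix.
    """
    rng = np.random.default_rng(seed)
    pos = rng.permutation(pos_idx)
    neg = rng.permutation(neg_idx)
    p = 0
    for start in range(0, len(neg) - n_neg + 1, n_neg):
        if p + n_pos > len(pos):          # minority class exhausted: recycle
            pos = rng.permutation(pos_idx)
            p = 0
        batch = np.concatenate([pos[p:p + n_pos], neg[start:start + n_neg]])
        p += n_pos
        yield rng.permutation(batch)      # shuffle within the batch
```

Note that recycling means each positive is seen roughly 32 times more often per epoch than each negative, which is a deliberate trade-off, not a bug.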



In medical research it is often the case that there are too few samples (in addition to class imbalances), and so a huge effort usually goes into data augmentation, which you might also be able to make use of. Here is some related literature (extremely fresh - edited one week ago!).



Here is another approach, whereby the authors (William Fithian &amp; Trevor Hastie) devise a subsampling method that uses the features of the samples to accept/reject them. They design it for the simplest case (logistic regression), but perhaps it might give you ideas:




          ... using a pilot estimate to preferentially select examples whose responses are conditionally rare given their features.




          Something to be especially aware of when using the ideas I mentioned above is overfitting. Cross-validation is probably what can best help you out in this respect.
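On the cross-validation point, scikit-learn's `StratifiedKFold` keeps the class ratio in every fold; a small sketch with labels matching the question's counts (the arrays are placeholders of my own):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

y = np.array([1] * 400 + [0] * 13000)  # hypothetical label array
X = np.zeros((len(y), 1))              # placeholder features

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
# Count positives per validation fold: 400 / 5 = 80 in each.
fold_pos = [int(y[val_idx].sum()) for _, val_idx in skf.split(X, y)]
```

With only 400 positives, this per-fold guarantee matters: a plain `KFold` could by chance leave a fold with very few positives, making its validation metric nearly meaningless.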






answered Jun 18 '18 at 10:58 by n1k31t4 (edited Aug 17 '18 at 19:35)






















A simple thing to do is to use stratified sampling, as suggested by @n1k31t4. Another thing people usually do with images is image augmentation. You can rotate, tilt, and mirror your positive data set so that it grows towards 13k. You can take a look at it here.
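A minimal sketch of such flip/rotate augmentation with NumPy (the `augment` helper is a hypothetical name; whether mirroring is label-preserving depends on the imaging modality, since laterality can matter in medical scans):

```python
import numpy as np

def augment(image):
    """Return simple geometric variants of a 2-D image array:
    the original, two mirrors, and three 90-degree rotations."""
    return [image,
            np.fliplr(image),      # mirror left-right
            np.flipud(image),      # mirror up-down
            np.rot90(image, 1),    # rotate 90 degrees
            np.rot90(image, 2),    # rotate 180 degrees
            np.rot90(image, 3)]    # rotate 270 degrees

# Each positive image yields 6 variants, so 400 positives become 2400
# before adding intensity shifts, crops, or elastic deformations.
```

This alone will not reach 13k, so in practice it is combined with randomized transforms (small rotations, crops, brightness jitter) applied on the fly at training time.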






answered Sep 16 '18 at 22:14 by InAFlash













