Adding recommendations to the output of a classification model














I have built a binary classification model using:




  • logit

  • decision trees

  • random forest

  • bagging classifier

  • gradientboost

  • xgboost

  • adaboost


I have evaluated the above models and chose xgboost based on training/test and validation metrics (accuracy, precision, recall, F1 and AUC).



I now want to productionize it and share the output with the business. The output would be a list of items with their predicted classes, which could be filtered based on business needs.



However, instead of simply giving the business the predicted classes, I want to add insights/recommendations as to why a specific item was predicted as class X, and how one could go about working on the item to change its class from, say, X to Y.
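To illustrate the kind of "what to change" output I have in mind, here is a naive single-feature probe (purely an illustrative sketch: `model` is assumed to be a fitted classifier exposing a scikit-learn-style `predict_proba`, and `x_row`, `feature_idx`, `candidate_values` are placeholder names):

    import numpy as np

    def single_feature_probe(model, x_row, feature_idx, candidate_values):
        """Re-score one item while sweeping one feature over candidate values.

        A brute-force illustration only: a real counterfactual search would
        also need to respect feature correlations and business feasibility.
        """
        trials = np.tile(x_row, (len(candidate_values), 1))
        trials[:, feature_idx] = candidate_values
        return model.predict_proba(trials)[:, 1]  # P(class Y) per candidate value

Candidate values where the returned probability crosses the decision threshold would be the "work on this to move the item from X to Y" suggestions.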



How do I go about this? I thought of using feature importance, but my input data shape is 800,000 rows × 1,050 columns, and I am not sure it would be the best way to proceed.
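For reference, pulling global importances out of the fitted model is straightforward; a minimal sketch, assuming a fitted xgboost classifier named `model` and a list `feature_names` holding the 1,050 column names (both placeholder names):

    import numpy as np

    # Global importance scores from the fitted booster, one per feature.
    importances = model.feature_importances_

    top = np.argsort(importances)[::-1][:20]  # indices of the 20 strongest features
    for i in top:
        print(f"{feature_names[i]}: {importances[i]:.4f}")

Note that global importances only rank features over the whole dataset; they do not explain why an individual item received its predicted class.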



Are there any existing industry-standard methodologies that can add interpretability to such models and convert them from black-box models into prescriptive models?
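For per-item explanations of tree ensembles like xgboost, one widely used option is SHAP values; a minimal sketch, again with `model`, `X` and `feature_names` as placeholder names:

    import shap

    # TreeExplainer computes exact SHAP values for tree ensembles
    # (xgboost, LightGBM, ...) efficiently.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_rows, n_features)

    # For one item, the largest-magnitude SHAP values are the features
    # pushing the prediction toward (+) or away from (-) the positive class.
    i = 0
    contribs = sorted(zip(feature_names, shap_values[i]),
                      key=lambda t: abs(t[1]), reverse=True)
    print(contribs[:10])

This answers "why class X" per item; turning that into "what to change to reach class Y" is a separate counterfactual search, as the comment below points out.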










machine-learning python classification data-science-model






asked Nov 15 '18 at 12:19 by praveen; edited Nov 16 '18 at 10:23





  • If the features are sparse, you might want to simply use standardized coefficients from logistic regression, or select some individual decision trees. Explaining how the label can change is, however, a lot more problematic, and an optimization problem of its own. Generally speaking, if you need explanatory ability, you'll have to move away from black-box models.
    – anymous.asker, Nov 17 '18 at 15:05
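To make the first suggestion concrete, a minimal sketch of ranking standardized logistic-regression coefficients with scikit-learn (`X`, `y` and `feature_names` are placeholder names):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    # Standardizing first puts the coefficients on a comparable scale,
    # so their magnitudes can be ranked as rough importance scores.
    # with_mean=False avoids densifying a sparse feature matrix.
    X_std = StandardScaler(with_mean=False).fit_transform(X)

    clf = LogisticRegression(max_iter=1000).fit(X_std, y)
    coefs = clf.coef_.ravel()  # shape (n_features,) for a binary problem

    top = np.argsort(np.abs(coefs))[::-1][:20]
    for i in top:
        print(f"{feature_names[i]}: {coefs[i]:+.3f}")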


















1 Answer







a link. This is where someone has answered a similar question to yours; have a read to see if it helps.






answered Nov 17 '18 at 22:25 by Sudhi





























