Minimization algorithm that can consider gradient close to solution

I want to minimize a function that has sharp gradients close to each local minimum. Because of process tolerances, I want to find solutions that meet some minimum criterion (e.g. lower than x) but also have a shallow gradient near the found minimum.

My question is: is this simply a case of assigning a penalty factor to minima with sharp nearby gradients, or is there a class of algorithm that can handle this sort of constraint as part of its optimization routine?
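A minimal sketch of the "penalty factor" idea, assuming a SciPy-style objective; the objective `f`, the tolerance radius, and the penalty weight below are illustrative placeholders, not part of the original problem. The augmented cost adds the worst finite-difference gradient magnitude found at a few offsets within the tolerance radius, so steep-walled minima score worse than shallow ones; because the result is more expensive and less smooth than the original objective, a derivative-free local method is used for the outer search.

```python
import numpy as np
from scipy.optimize import minimize, approx_fprime

def f(x):
    # Placeholder objective over 6 parameters in [0, 0.5]; replace with the real one.
    return np.sum((x - 0.3) ** 2) + 0.05 * np.sum(np.cos(40 * x))

def nearby_steepness(x, radius=0.01, eps=1e-6):
    # Worst gradient magnitude at +/- radius offsets along each axis.
    worst = 0.0
    for i in range(x.size):
        for sign in (-1.0, 1.0):
            xp = x.copy()
            xp[i] = np.clip(xp[i] + sign * radius, 0.0, 0.5)
            worst = max(worst, np.linalg.norm(approx_fprime(xp, f, eps)))
    return worst

def penalized(x, weight=0.1):
    # Original objective plus a penalty for being steep within the tolerance radius.
    return f(x) + weight * nearby_steepness(x)

res = minimize(penalized, x0=np.full(6, 0.25), method="Nelder-Mead")
print(res.x, penalized(res.x), nearby_steepness(res.x))
```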










optimization

asked Nov 8 '18 at 16:55 by Sean












  • What do you mean by "sharp"?
    – anymous.asker, Nov 8 '18 at 19:51

  • @anymous.asker: large gradient.
    – Sean, Nov 8 '18 at 21:27

  • Well, every local optimum should have a small gradient around it that goes to zero as you approach it, otherwise it wouldn't be a local optimum. If the gradients are not Lipschitz, then gradient descent might not be the right tool for the job. If you are up to coding something yourself, you might want to use something like guided local search, perhaps by adding log-barriers or something at those minima. Or you might want to instead try global optimization techniques not based on gradients. It would be helpful if you could provide an example.
    – anymous.asker, Nov 9 '18 at 7:45

  • @anymous.asker: I mean that, for a good solution, the approach to the minimum should be shallow, i.e. the rate of change of the function should be less than some parameter. My question is essentially about whether this parameter is something I have to define in my cost function, or whether there are algorithms that can consider this property already. Currently I'm trying to find existing algorithms before attempting to code something myself; right now I just don't know what to search for.
    – Sean, Nov 9 '18 at 8:10

  • @anymous.asker: I don't have a problem providing an example, except that it's a large piece of code that I don't think is relevant. My question is more about classes of algorithm than about the intricacies of my particular problem right now.
    – Sean, Nov 9 '18 at 8:11
















2 Answers


















Well, that’s a tricky question. Do the gradients become large because they are discontinuous? (Gradient descent will likely not work then.) Is it because the variables are on very different scales (Newton and related methods would help)? Is there some sort of barrier that limits the domain (e.g. log(x))? Does the function have some flat uniform area, e.g. __/? Are these saddle points? Is your problem constrained, and are these solutions at the boundaries? In some of these cases, switching to something different like subgradients might help, but this depends heavily on why your problem has these optima that you want to avoid.

Generally speaking, gradient-based techniques converge to whatever local optimum they find first, and if you are not happy with that, you’ll have to use other metaheuristics (e.g. add restarts, incorporate penalties at known optima, use derivative-free methods). Perhaps techniques that carry momentum, like Adam, might also allow you to dodge these.
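One way to read the "incorporate penalties at known optima" suggestion is a deflation-style loop: after each local search, add a bump at the minimum just found so that subsequent searches are pushed into other basins. A rough sketch, not from the answer itself; the objective, bump height, and bump width are illustrative placeholders:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Placeholder objective; replace with the real function.
    return np.sum((x - 0.3) ** 2) + 0.05 * np.sum(np.cos(40 * x))

def deflated(x, known_minima, height=0.5, width=0.02):
    # Original objective plus Gaussian bumps at already-found minima,
    # so repeated searches are pushed towards different basins.
    bumps = sum(height * np.exp(-np.sum((x - m) ** 2) / (2 * width ** 2))
                for m in known_minima)
    return f(x) + bumps

def collect_minima(n_rounds=10, seed=0):
    rng = np.random.default_rng(seed)
    found = []
    for _ in range(n_rounds):
        x0 = rng.uniform(0.0, 0.5, size=6)
        res = minimize(deflated, x0, args=(found,),
                       method="L-BFGS-B", bounds=[(0.0, 0.5)] * 6)
        found.append(res.x)
    # Rank the collected minima by their true objective value; a separate
    # steepness check can then pick the shallowest acceptable one.
    return sorted(found, key=f)

for m in collect_minima():
    print(m, f(m))
```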






answered Nov 9 '18 at 8:40 by anymous.asker
  • My function is continuous. I don't think there are saddle points, just local minima of different depths. There are 6 parameters given to the function, each with the same bounds (they can be between 0 and 0.5). Thank you for your help. I think I will investigate gradient-free methods. I tried a basin-hopping algorithm in SciPy, which seems to jump around the parameter space and then find the local minimum using gradient descent. It worked reasonably well as far as I can tell, but it didn't have an option to look for "shallow" minima.
    – Sean, Nov 9 '18 at 9:03
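SciPy's basinhopping does accept a user-supplied accept_test callback, which could be used to reject hops that land in steep basins. A rough sketch of that idea, not from the original discussion; the objective, tolerance radius, and slope limit are illustrative placeholders:

```python
import numpy as np
from scipy.optimize import basinhopping, approx_fprime

def f(x):
    # Placeholder objective; replace with the real function.
    return np.sum((x - 0.3) ** 2) + 0.05 * np.sum(np.cos(40 * x))

def max_nearby_slope(x, radius=0.01, n_samples=20, eps=1e-6):
    # Worst gradient magnitude over random points within the tolerance radius.
    rng = np.random.default_rng(0)
    xs = np.clip(x + rng.uniform(-radius, radius, size=(n_samples, x.size)), 0.0, 0.5)
    return max(np.linalg.norm(approx_fprime(xi, f, eps)) for xi in xs)

def accept_shallow(f_new, x_new, f_old, x_old):
    # Accept a hop only if it improves the value and the new basin is shallow nearby.
    return f_new <= f_old and max_nearby_slope(np.asarray(x_new)) < 5.0

result = basinhopping(
    f, x0=np.full(6, 0.25), niter=50,
    minimizer_kwargs={"method": "L-BFGS-B", "bounds": [(0.0, 0.5)] * 6},
    accept_test=accept_shallow,
)
print(result.x, result.fun)
```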





















You can try backtracking gradient descent (as well as backtracking versions of Momentum and NAG). More details can be found in my answer in this link (the cited paper and the linked GitHub repository contain source code).
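"Backtracking" here refers to an Armijo-style line search that shrinks the step until a sufficient-decrease condition holds, which keeps plain gradient descent stable even where gradients become steep. A minimal sketch of that idea (not the code from the linked answer; the toy objective is a placeholder):

```python
import numpy as np

def backtracking_gd(f, grad, x0, alpha0=1.0, beta=0.5, c=1e-4,
                    max_iter=1000, tol=1e-8):
    """Gradient descent with Armijo backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        fx, alpha = f(x), alpha0
        # Shrink the step until the sufficient-decrease (Armijo) condition holds.
        while f(x - alpha * g) > fx - c * alpha * np.dot(g, g) and alpha > 1e-16:
            alpha *= beta
        x = x - alpha * g
    return x

# Toy usage on a simple quadratic (a placeholder for the real objective).
f = lambda x: np.sum((x - 0.2) ** 2)
grad = lambda x: 2.0 * (x - 0.2)
print(backtracking_gd(f, grad, np.full(6, 0.4)))
```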






answered 2 hours ago by Tuyen (edited by Stephen Rauch)