My PPO implementation for CartPole: is code review allowed here?

I implemented the clipped-objective variant of PPO (PPO-clip) as explained here: https://spinningup.openai.com/en/latest/algorithms/ppo.html
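
For reference, the clipped surrogate objective from that page, for state $s$, action $a$, old policy parameters $\theta_k$, and new parameters $\theta$, is

$$L(s,a,\theta_k,\theta) = \min\!\left(\frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)}\, A^{\pi_{\theta_k}}(s,a),\ \mathrm{clip}\!\left(\frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)},\, 1-\epsilon,\, 1+\epsilon\right) A^{\pi_{\theta_k}}(s,a)\right)$$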



Basically, I use a temporary copy of the actor network to get the new action probability without updating the actual actor network until after the clip operation.



"""use temp_actor to get new prob so we don't update the actual actor until
we do the clip op"""
curr_weights = self.actor.get_weights()
self.temp_actor.set_weights(curr_weights)
self.temp_actor.fit(state, advantages, epochs=1, verbose=0)
new_policy = self.temp_actor.predict(state, batch_size=1).flatten()
new_aprob = new_policy[action]
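
For reference, old_aprob in the next snippet is the probability the pre-update actor assigned to the sampled action, captured before the temp-actor step along these lines (a minimal sketch; the variable names are illustrative):

old_policy = self.actor.predict(state, batch_size=1).flatten()  # pi_old(.|s) before any update
old_aprob = old_policy[action]                                  # pi_old(a|s) for the sampled action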


Then I worked out the ratio of probabilities and implemented the PPO clipping parts of the algorithm:



ratio = new_aprob / old_aprob                                 # r = pi_new(a|s) / pi_old(a|s)
no_clip = ratio * advantages                                  # unclipped surrogate
clipped = np.clip(ratio, 1 - self.epsilon, 1 + self.epsilon) * advantages  # clipped surrogate

self.actor.fit(state, np.minimum(no_clip, clipped), epochs=1, verbose=0)
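
For comparison, the clipped surrogate is more commonly implemented as a custom loss, so the ratio is differentiated through directly and no temp actor is needed. Below is a minimal sketch of that formulation, not what my script above does; the network shape, the names, and the trick of packing the action, advantage, and old probability into y_true are illustrative assumptions:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

N_ACTIONS = 2   # CartPole has two discrete actions
EPSILON = 0.2   # PPO clip range (illustrative value)

def ppo_clip_loss(y_true, y_pred):
    # y_true packs [one-hot action | advantage | old action probability]
    action_onehot = y_true[:, :N_ACTIONS]
    advantage = y_true[:, N_ACTIONS]
    old_prob = y_true[:, N_ACTIONS + 1]
    new_prob = tf.reduce_sum(y_pred * action_onehot, axis=-1)  # pi_new(a|s)
    ratio = new_prob / (old_prob + 1e-10)
    unclipped = ratio * advantage
    clipped = tf.clip_by_value(ratio, 1.0 - EPSILON, 1.0 + EPSILON) * advantage
    # Maximize the clipped surrogate, i.e. minimize its negative
    return -tf.reduce_mean(tf.minimum(unclipped, clipped))

# Hypothetical actor; the real network in the linked script may differ.
actor = keras.Sequential([
    layers.Dense(24, activation="relu", input_shape=(4,)),
    layers.Dense(N_ACTIONS, activation="softmax"),
])
actor.compile(optimizer=keras.optimizers.Adam(1e-3), loss=ppo_clip_loss)

A single actor.fit(states, np.column_stack([onehot_actions, advantages, old_probs]), ...) call then performs the clipped update.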


The full code is here (please excuse some coarse language in the comments): https://github.com/nyck33/openai_my_implements/blob/master/cartpole/my_ppo_cartpole.py



It seems to work, but on CartPole it learns slightly more slowly than my DQN and VPG implementations here: https://github.com/nyck33/openai_my_implements/tree/master/cartpole



The standard deviation of returns seems a bit lower than with VPG or DQN, so I'm guessing the clipping is stabilizing learning somewhat. However, if I change the learning rates to anything other than their current values, it stops learning, i.e., it's very brittle.



Thus, I'm looking for any advice to make it more resilient. If this kind of question is inappropriate here, please let me know and I will delete it promptly.

Tags: deep-learning, reinforcement-learning, openai-gym

asked by mLstudent33