Solving an ODE using neural networks (via TensorFlow)

I'm very new to deep learning (coming from a math/PDE background), but I'm trying to solve some ODEs using a neural network (via TensorFlow). I've solved some simple ones like $u'(x) = f(x)$ with no problem, but I'm trying something a bit harder now: $u''(x) - xu(x) = 0$ with the initial conditions $u(0)=A$ and $u'(0)=B$.



I'm mostly following this paper, and my solution is written as $u_{N}(x) = A + Bx + x^2 N(x,w)$, where $N(x,w)$ is the output of the neural net. The loss function I'm using is just the residual of the ODE in a mean-square sense, so it's pretty crude: $\ell(x,w) = \sum_{j=1}^{N} \left(u_{N}''(x_j) - x_j u_{N}(x_j)\right)^{2}$. I'm having a lot of trouble getting a good numerical solution to this particular equation. You can see a typical result below. [Figure: orange line is the exact solution, blue line is the numerical solution.]
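For reference, differentiating the trial solution twice gives $u_{N}''(x) = x^{2}N''(x,w) + 4xN'(x,w) + 2N(x,w)$, which is the identity the `dudx2` line in the code below is meant to implement (there it is written out as $x^2 N'' + 2xN' + 2xN' + 2N$), so the residual only involves derivatives of the network output with respect to its input.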



My current setup is just 2 hidden layers of 400 nodes each (one leaky ReLU and one ReLU) followed by a linear activation layer. My input data is an evenly spaced discretization of the domain. I'm using the Adam optimizer with a batch size of 32, running for 400 epochs. I can't seem to capture the oscillating behavior on the left side of the domain properly, no matter what parameters I tweak.
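For concreteness, that architecture corresponds to something like the following tf.keras sketch (purely illustrative, not what I actually ran; the full low-level TensorFlow 1.x code I used is below):

# Illustrative tf.keras equivalent of the architecture described above:
# Dense(400, leaky ReLU) -> Dense(400, ReLU) -> Dense(1, linear output)
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(400, activation=tf.nn.leaky_relu, input_shape=(1,)),
    tf.keras.layers.Dense(400, activation='relu'),
    tf.keras.layers.Dense(1)  # linear activation on the output layer
])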



Does anyone have any suggestions for how to improve the result? I'm very new to deep learning and neural networks. If it helps, my code is included below. I should also give credit to Emm and vijay m, whose code for 1D approximation was the basis for my code.



# Load modules
import tensorflow as tf
import numpy as np
import math, random
import matplotlib.pyplot as plt
from scipy import special

######################################################################
# Routine to solve u''(x) - x*u(x) = f(x), u(0)=A, u'(0)=B in the form
# u(x) = A + B*x + x^2*N(x,w)
# where N(x,w) is the output of the neural network.
######################################################################

# Create the arrays x and y, where x is a discretization of the domain (a,b) and y is the source term f(x)
N = 200
a = -6.0
b = 2.0
x = np.arange(a, b, (b-a)/N).reshape((N,1))
y = np.zeros(N)

# Initial conditions
A = 1.0
B = 0.0

# Define the number of neurons in each layer
n_nodes_hl1 = 400
n_nodes_hl2 = 400

# Define the number of outputs and the learning rate
n_classes = 1
learn_rate = 0.00003

# Define input / output placeholders
x_ph = tf.placeholder('float', [None, 1],name='input')
y_ph = tf.placeholder('float')

# Define standard deviation for the weights and biases
hl_sigma = 0.02

# Routine to compute the neural network (2 hidden layers)
def neural_network_model(data):
    hidden_1_layer = {'weights': tf.Variable(name='w_h1', initial_value=tf.random_normal([1, n_nodes_hl1], stddev=hl_sigma)),
                      'biases': tf.Variable(name='b_h1', initial_value=tf.random_normal([n_nodes_hl1], stddev=hl_sigma))}

    hidden_2_layer = {'weights': tf.Variable(name='w_h2', initial_value=tf.random_normal([n_nodes_hl1, n_nodes_hl2], stddev=hl_sigma)),
                      'biases': tf.Variable(name='b_h2', initial_value=tf.random_normal([n_nodes_hl2], stddev=hl_sigma))}

    output_layer = {'weights': tf.Variable(name='w_o', initial_value=tf.random_normal([n_nodes_hl2, n_classes], stddev=hl_sigma)),
                    'biases': tf.Variable(name='b_o', initial_value=tf.random_normal([n_classes], stddev=hl_sigma))}

    # (input_data * weights) + biases
    l1 = tf.add(tf.matmul(data, hidden_1_layer['weights']), hidden_1_layer['biases'])
    l1 = tf.nn.leaky_relu(l1)

    l2 = tf.add(tf.matmul(l1, hidden_2_layer['weights']), hidden_2_layer['biases'])
    l2 = tf.nn.relu(l2)

    output = tf.add(tf.matmul(l2, output_layer['weights']), output_layer['biases'], name='output')

    return output


batch_size = 32

# Feed batch data
def get_batch(inputX, inputY, batch_size):
    duration = len(inputX)
    for i in range(0, duration//batch_size):
        idx = i*batch_size + np.random.randint(0, 10, (1))[0]

        yield inputX[idx:idx+batch_size], inputY[idx:idx+batch_size]


# Routine to train the neural network
def train_neural_network_batch(x_ph, predict=False):
    prediction = neural_network_model(x_ph)
    pred_dx = tf.gradients(prediction, x_ph)
    pred_dx2 = tf.gradients(tf.gradients(prediction, x_ph), x_ph)

    # Compute u and its second derivative
    u = A + B*x_ph + (x_ph*x_ph)*prediction
    dudx2 = (x_ph*x_ph)*pred_dx2 + 2.0*x_ph*pred_dx + 2.0*x_ph*pred_dx + 2.0*prediction

    # The cost function is just the residual of u''(x) - x*u(x) = 0, i.e. residual = u''(x) - x*u(x)
    cost = tf.reduce_mean(tf.square(dudx2 - x_ph*u - y_ph))
    optimizer = tf.train.AdamOptimizer(learn_rate).minimize(cost)

    # cycles feed forward + backprop
    hm_epochs = 400

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        # Train in each epoch with the whole data
        for epoch in range(hm_epochs):

            epoch_loss = 0
            for step in range(N//batch_size):
                for inputX, inputY in get_batch(x, y, batch_size):
                    _, l = sess.run([optimizer, cost], feed_dict={x_ph: inputX, y_ph: inputY})
                    epoch_loss += l
            if epoch % 10 == 0:
                print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        # Predict a new input by adding a random number, to check whether the network has actually learned
        x_valid = x + 0.0*np.random.normal(scale=0.1, size=(1))
        return sess.run(tf.squeeze(prediction), {x_ph: x_valid}), x_valid


# Train network
tf.set_random_seed(42)
pred, time = train_neural_network_batch(x_ph)


mypred = pred.reshape(N,1)

# Compute Airy functions for exact solution
ai, aip, bi, bip = special.airy(time)

# Numerical solution vs. exact solution
fig = plt.figure()
plt.plot(time, A + B*time + (time*time)*mypred)
plt.plot(time, 0.5*(3.0**(1/6))*special.gamma(2/3)*(3**(1/2)*ai + bi))
plt.show()
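
As a quick sanity check on that last plot line (not part of the original script): the constant $\frac{1}{2}\,3^{1/6}\,\Gamma(2/3)$ multiplying $\sqrt{3}\,\mathrm{Ai}(x) + \mathrm{Bi}(x)$ should make the exact solution satisfy $u(0)=1$ and $u'(0)=0$, which is easy to verify numerically:

# Illustrative check: the plotted combination of Airy functions should
# satisfy u(0) = A = 1 and u'(0) = B = 0.
import numpy as np
from scipy import special

c = 0.5 * 3.0**(1/6) * special.gamma(2/3)   # normalization constant used in the plot
ai0, aip0, bi0, bip0 = special.airy(0.0)    # Ai(0), Ai'(0), Bi(0), Bi'(0)
print(c * (np.sqrt(3.0)*ai0 + bi0))         # u(0), expect ~1.0
print(c * (np.sqrt(3.0)*aip0 + bip0))       # u'(0), expect ~0.0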







