
2D Oscillator instead of 1D Oscillator #2

Open
ShaikhaTheGreen opened this issue Dec 3, 2021 · 3 comments

@ShaikhaTheGreen
Hello and great contribution!

I want to make sure I understand how to expand the current example from 1D to 2D (and even further, to 2D + 1D for time). I assume that in the code, I need to change:

  1. The network input
  2. The x_physics structure
  3. The gradients equations

It does not seem to be straightforward, as I faced errors when doing this. A 2D example would be hugely helpful.

Thanks!

@benmoseley
Owner

Yes, that is exactly what needs changing: the network input becomes two-dimensional (x, t), x_physics becomes a 2D grid of sample points over the whole domain, and the gradients are taken column-wise with respect to each input. Here is a minimal 2D example I wrote, this time solving the time-dependent Burgers equation:

"""Solves the time-dependent 1D viscous Burgers equation
    du       du        d^2 u
    -- + u * -- = nu * -----
    dt       dx        dx^2

    for -1.0 < x < +1.0, and 0 < t

    Boundary conditions:
    u(x,0) = - sin(pi*x)
    u(-1,t) = u(+1,t) = 0
"""

import numpy as np
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

    
# get boundary training points
# left boundary: x = -1, t in [0, 1], u = 0
x1 = torch.stack([-torch.ones(40), torch.linspace(0,1,40)],-1)
u1 = torch.zeros_like(x1[:,0:1])
print(x1.shape, u1.shape)

# right boundary: x = +1, t in [0, 1], u = 0
x2 = torch.stack([torch.ones(40), torch.linspace(0,1,40)],-1)
u2 = torch.zeros_like(x2[:,0:1])
print(x2.shape, u2.shape)

# initial condition: t = 0, u(x, 0) = -sin(pi*x)
x3 = torch.stack([torch.linspace(-1,1,40), torch.zeros(40)],-1)
u3 = -torch.sin(np.pi*x3[:,0:1])
print(x3.shape, u3.shape)

# get physics loss sample points over the full domain; columns are (x, t)
xs = [torch.linspace(-1,1,40), torch.linspace(0,1,40)]
x_physics = torch.stack(torch.meshgrid(*xs, indexing='ij'), -1).view(-1, 2).requires_grad_(True)
print(x_physics.shape)

# get testing locations
xs = [torch.linspace(-1,1,100), torch.linspace(0,1,100)]
x_test = torch.stack(torch.meshgrid(*xs, indexing='ij'), -1).view(-1, 2)
print(x_test.shape)
    

# define NN
class FCN(nn.Module):
    "Defines a fully-connected network"
    
    def __init__(self, N_INPUT, N_OUTPUT, N_HIDDEN, N_LAYERS):
        super().__init__()
        activation = nn.Tanh
        self.fcs = nn.Sequential(*[
                        nn.Linear(N_INPUT, N_HIDDEN),
                        activation()])
        self.fch = nn.Sequential(*[
                        nn.Sequential(*[
                            nn.Linear(N_HIDDEN, N_HIDDEN),
                            activation()]) for _ in range(N_LAYERS-1)])
        self.fce = nn.Linear(N_HIDDEN, N_OUTPUT)
        
    def forward(self, x):
        x = self.fcs(x)
        x = self.fch(x)
        x = self.fce(x)
        return x
    

nu=0.01/np.pi
torch.manual_seed(123)
model = FCN(2,1,64,4)
optimizer = torch.optim.Adam(model.parameters(),lr=1e-4)
for i in range(10000):
    optimizer.zero_grad()
    
    # compute the "data loss"
    y1, y2, y3 = model(x1), model(x2), model(x3)
    loss1 = torch.mean((y1-u1)**2) + torch.mean((y2-u2)**2) + torch.mean((y3-u3)**2)
    
    # compute the "physics loss"
    yp = model(x_physics)
    dx  = torch.autograd.grad(yp, x_physics, torch.ones_like(yp), create_graph=True)[0]# gradient of u w.r.t. both inputs (x, t)
    dx1, dx2 = dx[:,0:1], dx[:,1:2]# du/dx and du/dt
    dx1dx = torch.autograd.grad(dx1, x_physics, torch.ones_like(dx1), create_graph=True)[0]# gradient of du/dx w.r.t. (x, t)
    dx1dx1 = dx1dx[:,0:1]# d^2u/dx^2
    physics = (dx2[:,0] + yp[:,0] * dx1[:,0]) - (nu * dx1dx1[:,0])# residual of the Burgers equation
    loss2 = 0.1*torch.mean(physics**2)
    
    # backpropagate joint loss
    loss = loss1 + loss2# add two loss terms together
    loss.backward()
    optimizer.step()
    
    
    # plot the result as training progresses
    if (i+1) % 1000 == 0: 
        y = model(x_test)
        plt.figure()
        plt.imshow(y.reshape(100,100).detach(), vmin=-1, vmax=1)
        plt.xticks(np.linspace(0,100,5).astype(int), np.linspace(0,1,5))
        plt.yticks(np.linspace(0,100,5).astype(int), np.linspace(-1,1,5))
        plt.xlabel("t"); plt.ylabel("x")
        plt.title("Training step: %i"%(i+1))
        plt.show()
Output:

torch.Size([40, 2]) torch.Size([40, 1])
torch.Size([40, 2]) torch.Size([40, 1])
torch.Size([40, 2]) torch.Size([40, 1])
torch.Size([1600, 2])
torch.Size([10000, 2])

[Figure: the learned solution u(x, t) plotted with imshow as training progresses, x on the vertical axis and t on the horizontal axis]

This problem is taken from the original PINN paper: https://maziarraissi.github.io/PINNs/
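
The only genuinely 2D-specific piece is the column-wise gradient handling: torch.autograd.grad returns one column per input dimension, so you slice out du/dx and du/dt separately. Here is a minimal standalone sketch of just that pattern (the toy function is purely illustrative, not part of the repo):

import torch

x = torch.randn(5, 2, requires_grad=True)  # columns are (x, t)
u = x[:, 0:1]**2 * x[:, 1:2]               # toy u(x, t) = x^2 * t
du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
dudx, dudt = du[:, 0:1], du[:, 1:2]        # du/dx = 2*x*t, du/dt = x^2
d2udx2 = torch.autograd.grad(dudx, x, torch.ones_like(dudx))[0][:, 0:1]  # d^2u/dx^2 = 2*t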

@ShaikhaTheGreen
Author

ShaikhaTheGreen commented Feb 9, 2022

Hi,

I have a question about specifying the boundary conditions in terms of time. Is it possible to model:


    Boundary conditions:
    u(0,t) = - sin(pi*t)
    u(-1,t) = u(+1,t) = 0

Instead of:


    Boundary conditions:
    u(x,0) = - sin(pi*x)
    u(-1,t) = u(+1,t) = 0

I guess what I'm trying to say is: what if I had a source term in my equation, and the source function depended on time? Is it possible to model such a setup with certain modifications to the code?

@benmoseley
Owner

Hi @engsbk, sure, that should be as simple as changing the above lines

x3 = torch.stack([torch.linspace(-1,1,40), torch.zeros(40)],-1)
u3 = -torch.sin(np.pi*x3[:,0:1])
print(x3.shape, u3.shape)

to

x3 = torch.stack([torch.zeros(40), torch.linspace(0,1,40)],-1)
u3 = -torch.sin(np.pi*x3[:,1:2])
print(x3.shape, u3.shape)

so that x3 sits on the x = 0 line with t in [0, 1], and u3 is evaluated on the time column of x3.
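
As for the time-dependent source term: yes, you would just include it in the physics residual. Here is a minimal sketch, assuming a purely illustrative source f(x, t) = sin(pi*t) (the source function below is hypothetical, not something from the repo):

# hypothetical time-dependent source term f(x, t) = sin(pi*t), for illustration only
def source(x):
    return torch.sin(np.pi * x[:, 1:2])  # column 1 of x_physics is t

# the PDE residual inside the training loop then becomes
# du/dt + u*du/dx - nu*d^2u/dx^2 - f(x, t) = 0
yp = model(x_physics)
grads = torch.autograd.grad(yp, x_physics, torch.ones_like(yp), create_graph=True)[0]
dudx, dudt = grads[:, 0:1], grads[:, 1:2]
d2udx2 = torch.autograd.grad(dudx, x_physics, torch.ones_like(dudx), create_graph=True)[0][:, 0:1]
physics = dudt + yp * dudx - nu * d2udx2 - source(x_physics)
loss2 = 0.1 * torch.mean(physics**2)

Everything else in the training loop stays the same; only the residual changes.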
