Layers

Docs

NeuralGraphPDE.ExplicitEdgeConv (Type)
ExplicitEdgeConv(ϕ; initialgraph = initialgraph, aggr = mean)

Edge convolutional layer.

\[\mathbf{h}_i' = \square_{j \in N(i)}\, \phi([\mathbf{h}_i, \mathbf{h}_j; \mathbf{x}_j - \mathbf{x}_i])\]

Arguments

  • ϕ: A neural network.
  • initialgraph: GNNGraph or a function that returns a GNNGraph.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Inputs

  • h: Trainable node embeddings, NamedTuple or Array.

Returns

  • NamedTuple or Array with the same structure as h but with a different channel size.

Parameters

  • Parameters of ϕ.

States

  • graph: GNNGraph where graph.ndata.x represents the spatial coordinates of the nodes. You can also store other nontrainable node features in graph.ndata under arbitrary keys; they will be concatenated into the input of ϕ in the same way as the node embeddings.

Examples

using Lux, GraphNeuralNetworks, NeuralGraphPDE, Random

s = [1, 1, 2, 3]
t = [2, 3, 1, 1]
g = GNNGraph(s, t)

u = randn(4, g.num_nodes)                             # trainable node embeddings
g = GNNGraph(g, ndata = (; x = rand(3, g.num_nodes))) # attach spatial coordinates
nn = Dense(4 + 4 + 3 => 5)  # input width: h_i (4) + h_j (4) + x_j - x_i (3)
l = ExplicitEdgeConv(nn, initialgraph = g)

rng = Random.default_rng()
ps, st = Lux.setup(rng, l)
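
As with the other layers on this page, the forward pass returns the output together with the (possibly updated) layer state; a minimal continuation of the setup above:

y, st = l(u, ps, st)   # y has size 5 × g.num_nodes, the output width of ϕ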
NeuralGraphPDE.GCNConv (Type)
GCNConv(in_chs::Int, out_chs::Int, activation = identity;
        initialgraph = initialgraph, init_weight = glorot_normal,
        init_bias = zeros32)

Same as the one in GraphNeuralNetworks.jl but with explicit parameters.

Arguments

  • initialgraph: GNNGraph or a function that returns a GNNGraph.

Examples

using Lux, GraphNeuralNetworks, NeuralGraphPDE, Random

# create data
s = [1, 1, 2, 3]
t = [2, 3, 1, 1]
g = GNNGraph(s, t)
x = randn(3, g.num_nodes)

# create layer
l = GCNConv(3 => 5, initialgraph = g)

# setup layer
rng = Random.default_rng()
Random.seed!(rng, 0)

ps, st = Lux.setup(rng, l)

# forward pass
y, st = l(x, ps, st)   # size(y): 5 × num_nodes
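
Because the graph lives in the layer state st rather than in the parameters, the same ps can be reused on a different graph by updating the state with updategraph (used the same way in the GNOConv example below). A minimal sketch, with a hypothetical replacement graph g2:

g2 = rand_graph(4, 6)           # hypothetical new graph: 4 nodes, 6 edges
st = updategraph(st, g2)
y, st = l(randn(3, 4), ps, st)  # same parameters, new graph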
NeuralGraphPDE.GNOConv (Type)
GNOConv(in_chs => out_chs, ϕ; initialgraph = initialgraph, aggr = mean, bias = true)

Convolutional layer from "Neural Operator: Graph Kernel Network for Partial Differential Equations".

\[\begin{aligned} \mathbf{m}_i &= \square_{j \in N(i)}\, \phi(\mathbf{a}_i, \mathbf{a}_j, \mathbf{x}_i, \mathbf{x}_j)\mathbf{h}_j\\ \mathbf{h}_i' &= \sigma\left(\mathbf{W}\mathbf{h}_i + \mathbf{m}_i + \mathbf{b}\right) \end{aligned}\]

Arguments

  • in_chs: Number of input channels.
  • out_chs: Number of output channels.
  • ϕ: Neural network for the message function. The output size of ϕ should be in_chs * out_chs; per the formula above, its output acts as a matrix multiplying h_j.
  • initialgraph: GNNGraph or a function that returns a GNNGraph.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • bias: Whether to add bias to the output.

Inputs

  • h: Array of size (in_chs, num_nodes).

Returns

  • Array of size (out_chs, num_nodes).

Parameters

  • Parameters of ϕ.
  • W.
  • b.

States

  • graph: GNNGraph. All features are stored in either graph.ndata or graph.edata. They will be concatenated and then fed into ϕ.

Examples

using Lux, GraphNeuralNetworks, NeuralGraphPDE, Random

g = rand_graph(10, 6)

g = GNNGraph(g, ndata = (; a = rand(2, 10), x = rand(3, 10)))
in_chs, out_chs = 5, 7
h = randn(in_chs, 10)
ϕ = Dense(2 + 2 + 3 + 3 => in_chs * out_chs)  # input width: a_i (2) + a_j (2) + x_i (3) + x_j (3)
l = GNOConv(5 => 7, ϕ, initialgraph = g)

rng = Random.default_rng()
ps, st = Lux.setup(rng, l)

y, st = l(h, ps, st)

# precomputed edge features: once graph.edata is set, it is fed directly
# into ϕ, so its size must match ϕ's input width
e = rand(2 + 2 + 3 + 3, 6)
g = GNNGraph(g, edata = e)
st = updategraph(st, g)
y, st = l(h, ps, st)
NeuralGraphPDE.MPPDEConv (Type)
MPPDEConv(ϕ, ψ; initialgraph = initialgraph, aggr = mean, local_features = (:u, :x))

Convolutional layer from "Message Passing Neural PDE Solvers", without the temporal bundling trick.

\[\begin{aligned} \mathbf{m}_i &= \square_{j \in N(i)}\, \phi(\mathbf{h}_i, \mathbf{h}_j; \mathbf{u}_i - \mathbf{u}_j; \mathbf{x}_i - \mathbf{x}_j; \theta)\\ \mathbf{h}_i' &= \psi(\mathbf{h}_i, \mathbf{m}_i, \theta) \end{aligned}\]

Arguments

  • ϕ: The neural network for the message function.
  • ψ: The neural network for the update function.
  • initialgraph: GNNGraph or a function that returns a GNNGraph.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).
  • local_features: Keys of the node features in graph.ndata that enter ϕ as pairwise differences; defaults to (:u, :x), matching the formula above.

Inputs

  • h: Trainable node embeddings, Array.

Returns

  • NamedTuple or Array with the same structure as h but with a different channel size.

Parameters

  • Parameters of ϕ.
  • Parameters of ψ.

States

  • graph: GNNGraph whose graph.gdata holds the graph-level features of the underlying PDE. All features in graph.gdata should be matrices of size (num_feats, num_graphs). You can store u and x in graph.ndata, or precompute the differences u_j - u_i and x_j - x_i and store them in graph.edata. If g is a batched graph, then currently all graphs must have the same structure. Note that t is included in graph.gdata in the original paper.

Examples

using Lux, GraphNeuralNetworks, NeuralGraphPDE, Random

g = rand_graph(10, 6)
g = GNNGraph(g, ndata = (; u = rand(2, 10), x = rand(3, 10)), gdata = (; θ = rand(4)))
h = randn(5, 10)
ϕ = Dense(5 + 5 + 2 + 3 + 4 => 5)  # input width: h_i + h_j + (u_i - u_j) + (x_i - x_j) + θ
ψ = Dense(5 + 5 + 4 => 7)          # input width: h_i + m_i + θ
l = MPPDEConv(ϕ, ψ, initialgraph = g)
rng = Random.default_rng()
ps, st = Lux.setup(rng, l)
y, st = l(h, ps, st)
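
Per the formula, the output width is set by the update network ψ, so a quick shape check for the setup above:

size(y)  # (7, 10): ψ outputs 7 channels for each of the 10 nodes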
NeuralGraphPDE.SpectralConv (Type)
SpectralConv(n::Int)

Compute the Fourier differentiation of a 1D periodic function evenly sampled on $[0,2π]$, excluding one of the endpoints. This is only a toy function and not the most efficient approach.

\[ u_i' = \frac{1}{2} \sum_{j}{\cos \left(\frac{\left(x_{i}-x_{j}\right) n}{2}\right) \cot \left(\frac{x_{i}-x_{j}}{2}\right) u_{j}}\]
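
Since the sample points are $x_j = 2jπ/n$, the cosine factor collapses to an alternating sign, so this is the classical trigonometric differentiation matrix for even $n$:

\[\cos\left(\frac{(x_i - x_j)\, n}{2}\right) = \cos\big(\pi(i - j)\big) = (-1)^{i-j}, \qquad u_i' = \frac{1}{2} \sum_{j \ne i}{(-1)^{i-j} \cot\left(\frac{x_i - x_j}{2}\right) u_{j}}.\]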

Arguments

  • n: The number of sampled points.

Inputs

  • u: Discrete function values at $x_j = 2jπ/n$, for $j = 1, 2, \dots, n$.

Returns

  • The derivative of u.

Parameters

  • None.

States

  • graph: A complete graph g of the type GNNGraph, where g.edata.e holds the differences $x_i - x_j$.

Examples

julia> using Lux, Random

julia> s = SpectralConv(100);

julia> rng = Random.default_rng();
julia> ps, st = Lux.setup(rng, s);

julia> x = LinRange(0, 2π, 101)[2:end];
julia> s(sin.(x), ps, st)[1] .- cos.(x)
100-element Vector{Float64}:
 -2.9976021664879227e-15
  4.440892098500626e-15
 -3.885780586188048e-15
  4.9960036108132044e-15
 -1.1102230246251565e-15
 -6.328271240363392e-15
  6.994405055138486e-15
  5.551115123125783e-16
  0.0
  ⋮
 -1.892930256985892e-13
  1.8640644583456378e-13
 -1.2012613126444194e-13
  8.526512829121202e-14
 -6.405986852087153e-14
  4.451994328746878e-14
 -2.631228568361621e-14
  1.509903313490213e-14

julia> s(cos.(x), ps, st)[1] .+ sin.(x)
100-element Vector{Float64}:
  1.9442780718748054e-14
 -3.552713678800501e-14
  4.246603069191224e-15
 -8.715250743307479e-15
  1.1934897514720433e-14
 -2.7533531010703882e-14
  2.6867397195928788e-14
 -1.176836406102666e-14
  6.5503158452884236e-15
  ⋮
  4.048983370807946e-13
 -4.0362158060247566e-13
  2.742805982336449e-13
 -2.53408405370692e-13
  2.479405569744131e-13
 -2.366440376988521e-13
  2.0448920334814602e-13
 -6.064106189943799e-14
NeuralGraphPDE.VMHConv (Type)
VMHConv(ϕ, γ; initialgraph = initialgraph, aggr = mean)

Convolutional layer from "Learning continuous-time PDEs from sparse data with graph neural networks".

\[\begin{aligned} \mathbf{m}_i &= \square_{j \in N(i)}\, \phi(\mathbf{h}_i, \mathbf{h}_j - \mathbf{h}_i; \mathbf{x}_j - \mathbf{x}_i)\\ \mathbf{h}_i' &= \gamma(\mathbf{h}_i ,\mathbf{m}_i) \end{aligned}\]

Arguments

  • ϕ: The neural network for the message function.
  • γ: The neural network for the update function.
  • initialgraph: GNNGraph or a function that returns a GNNGraph.
  • aggr: Aggregation operator for the incoming messages (e.g. +, *, max, min, and mean).

Inputs

  • h: Trainable node embeddings, NamedTuple or Array.

Returns

  • NamedTuple or Array with the same structure as h but with a different channel size.

Parameters

  • Parameters of ϕ.
  • Parameters of γ.

States

  • graph: GNNGraph where graph.ndata.x represents the spatial coordinates of nodes.

Examples

using Lux, GraphNeuralNetworks, NeuralGraphPDE, Random

s = [1, 1, 2, 3]
t = [2, 3, 1, 1]
g = GNNGraph(s, t)

u = randn(4, g.num_nodes)
g = GNNGraph(g, ndata = (; x = rand(3, g.num_nodes)))
ϕ = Dense(4 + 4 + 3 => 5)  # input width: h_i (4) + (h_j - h_i) (4) + (x_j - x_i) (3)
γ = Dense(5 + 4 => 7)      # input width: m_i (5) + h_i (4)
l = VMHConv(ϕ, γ, initialgraph = g)

rng = Random.default_rng()
ps, st = Lux.setup(rng, l)

y, st = l(u, ps, st)