Schizophrenia PPI network
If the eigenvalues of the graph Laplacian are $0 = \lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$ and the graph has $k$ connected components, then
$$0 = \lambda_1 = \lambda_2 = \cdots = \lambda_k \lt \lambda_{k+1} \le \cdots \le \lambda_n$$
Graph signals provide a suitable form for encoding structural information within signal processing.
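This multiplicity property can be checked numerically. A minimal sketch, assuming a toy graph of two disjoint edges (so $k = 2$):

```julia
using LinearAlgebra

A = [0 1 0 0;      # adjacency matrix of edges (1,2) and (3,4): two components
     1 0 0 0;
     0 0 0 1;
     0 0 1 0]
D = Diagonal(vec(sum(A, dims=2)))   # degree matrix
L = D - A                           # combinatorial Laplacian
λ = eigvals(Symmetric(float.(L)))   # eigenvalues in ascending order
# The multiplicity of eigenvalue 0 equals the number of connected components.
count(x -> isapprox(x, 0; atol=1e-10), λ)  # → 2
```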
Wikipedia - Fourier transform
Graph Fourier Transform: $F(\lambda_l) = U^T f$
Graph Spectral Filtering: $G(\lambda_l)F(\lambda_l) = G(\Lambda)U^T f$
Inverse Graph Fourier Transform applied to the filtered signal gives the convolution: $f *_G g = UG(\Lambda)U^T f$
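The three transforms above can be tied together on a toy example. A sketch, assuming a hypothetical 3-vertex path graph, signal $f$, and low-pass filter $g(\lambda) = e^{-\lambda}$:

```julia
using LinearAlgebra

A = [0 1 0; 1 0 1; 0 1 0]                  # path graph on 3 vertices
L = Diagonal(vec(sum(A, dims=2))) - A
Λ, U = eigen(Symmetric(float.(L)))         # L = U * Diagonal(Λ) * U'
f = [1.0, 2.0, 3.0]                        # a signal on the vertices
F = U' * f                                 # forward GFT
@assert U * F ≈ f                          # inverse GFT recovers f
g(λ) = exp(-λ)                             # an example low-pass filter G(λ)
filtered = U * Diagonal(g.(Λ)) * U' * f    # f * g via spectral filtering
```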
Shuman, D. I., Narang, S. K., Frossard, P., Ortega, A., & Vandergheynst, P. (2013). The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3), 83–98. doi: 10.1109/msp.2012.2235192
$W$: adjacency (or weighted) matrix, $X$: input node features, $l$: node/edge labels
An input signal $x$ convolved with a filter $g \in \mathbb{R}^N$ is defined as
$$ x *_G g = \mathcal{F}^{-1}\{\mathcal{F}\{x\} \odot \mathcal{F}\{g\}\} = U(U^Tx \odot U^Tg) $$
For efficiency, filters are defined as Chebyshev polynomials.
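A sketch of the Chebyshev parameterization, with hypothetical coefficients $\theta$. $T_k(\tilde{L})x$ is built with the recurrence $T_k = 2\tilde{L}T_{k-1} - T_{k-2}$ on the rescaled Laplacian, which avoids an explicit eigendecomposition:

```julia
using LinearAlgebra

A = [0 1 0; 1 0 1; 0 1 0]                 # toy path graph
L = Diagonal(vec(sum(A, dims=2))) - float.(A)
λmax = maximum(eigvals(Symmetric(L)))
Ls = 2L / λmax - I                        # rescale spectrum into [-1, 1]

x = [1.0, 2.0, 3.0]
θ = [0.5, 0.3, 0.2]                       # hypothetical filter coefficients, K = 3
T = [x, Ls * x]                           # T₀(L̃)x and T₁(L̃)x
for k in 3:length(θ)
    push!(T, 2Ls * T[end] - T[end-1])     # Chebyshev recurrence
end
y = sum(θ[k] * T[k] for k in eachindex(θ))
```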
Apply the same graph convolution layer to update hidden representations.
Apply different graph convolution layers to update hidden representations.
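The contrast between the two schemes can be sketched with a toy linear graph convolution; the weights `W` and `Ws` here are hypothetical:

```julia
using LinearAlgebra

A = [0 1; 1 0] + I                # toy adjacency with self-loops, 2 vertices
H0 = [1.0 0.0; 0.0 1.0]           # initial node features (2 nodes × 2 features)
σ(x) = max(x, 0.0)                # ReLU

conv(H, W) = σ.(A * H * W)        # one linear graph-convolution step

W  = 0.1 * ones(2, 2)                       # one shared weight matrix
Ws = [0.1 * ones(2, 2), 0.2 * ones(2, 2)]   # distinct weights per layer

H_same = conv(conv(H0, W), W)          # the same layer applied twice
H_diff = conv(conv(H0, Ws[1]), Ws[2])  # a different layer at each step
```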
Scarselli, F., Gori, M., Tsoi, A. C., Hagenbuchner, M., & Monfardini, G. (2009). The Graph Neural Network Model. IEEE Transactions on Neural Networks, 20(1), 61–80. doi: 10.1109/tnn.2008.2005605
$h_v^t$: vertex features, $e_{uv}$: edge features, $N(v)$: neighbors of vertex $v$
Reducing the number of vertices of a graph.
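One common way to do this is cluster-based coarsening. A sketch with a hypothetical assignment matrix `S` that merges vertex pairs, giving $A_c = S^T A S$ and pooled features $X_c = S^T X$:

```julia
using LinearAlgebra

A = [0 1 0 0; 1 0 1 0; 0 1 0 1; 0 0 1 0]   # path graph on 4 vertices
X = reshape(collect(1.0:4.0), 4, 1)         # one feature per vertex
S = [1 0; 1 0; 0 1; 0 1]                    # clusters {1,2} and {3,4}
Ac = S' * A * S                             # coarsened adjacency (2 × 2)
Xc = S' * X                                 # features summed per cluster
```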
diffusion convolution + sequence to sequence architecture + scheduled sampling technique
$\theta \in \mathbb{R}^{K \times 2}$: (training) parameter for filter
$D_O^{-1}A, D_I^{-1}A^T$: transition matrix for diffusion process and reverse one
$X \in \mathbb{R}^{N \times P}$: input features
$H \in \mathbb{R}^{N \times Q}$: output
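With these definitions, one diffusion-convolution step can be sketched as $X \mapsto \sum_k \theta_{k,1}(D_O^{-1}A)^k X + \theta_{k,2}(D_I^{-1}A^T)^k X$; the directed 3-cycle and the parameter values in `θ` below are made up for illustration:

```julia
using LinearAlgebra

A = [0.0 1.0 0.0; 0.0 0.0 1.0; 1.0 0.0 0.0]  # toy directed 3-cycle
DO = Diagonal(vec(sum(A, dims=2)))           # out-degree matrix
DI = Diagonal(vec(sum(A, dims=1)))           # in-degree matrix
Pf = inv(DO) * A                             # forward transition matrix
Pb = inv(DI) * A'                            # reverse transition matrix

X = reshape([1.0, 2.0, 3.0], 3, 1)           # N × P input features (P = 1)
K = 2
θ = [0.5 0.5; 0.2 0.1]                       # hypothetical K × 2 parameters
H = sum(θ[k, 1] * Pf^(k-1) * X + θ[k, 2] * Pb^(k-1) * X for k in 1:K)
```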
Extremely sparse matrix (Netflix: 480k users × 18k movies with only ~1.1% of entries known)
Use collected product ratings from users to offer recommendations.
Use similarity between products and users to recommend products.
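Such a ratings matrix is naturally stored sparsely. A toy sketch with SparseArrays (the entries here are invented; the real matrix is roughly 480k × 18k):

```julia
using SparseArrays

users  = [1, 1, 2, 4]          # hypothetical observed (user, movie, rating) triples
movies = [2, 3, 1, 3]
rating = [5.0, 3.0, 4.0, 2.0]
R = sparse(users, movies, rating, 4, 3)  # 4 users × 3 movies
nnz(R) / length(R)             # fraction of known entries
```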
$G=(\mathcal{V},\mathcal{E})$, with $|\mathcal{V}|, |\mathcal{E}| \rightarrow \infty$; $\phi: \mathcal{V} \rightarrow \mathbb{R}$, with $\phi_i$ the heat at vertex $i$
$\phi$: heat distribution across the whole graph
$k$: heat capacity
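A sketch of heat diffusion on a toy path graph, using the closed-form solution $\phi(t) = e^{-ktL}\phi(0)$ of the graph heat equation $\dot{\phi} = -kL\phi$:

```julia
using LinearAlgebra

A = [0 1 0; 1 0 1; 0 1 0]                # path graph on 3 vertices
L = Diagonal(vec(sum(A, dims=2))) - float.(A)
k = 1.0                                  # the diffusion constant k from above
φ0 = [1.0, 0.0, 0.0]                     # all heat starts at vertex 1
φt = exp(-k * 1.0 * L) * φ0              # matrix exponential: distribution at t = 1
sum(φt) ≈ sum(φ0)                        # total heat is conserved
```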
I worked on SparseArrays and CuArrays.CUSPARSE
## Model
using Flux, Statistics
using Flux: crossentropy, onecold, throttle
using GeometricFlux  # assuming GCNConv comes from GeometricFlux.jl
model = Chain(GCNConv(g, num_features=>1000, relu),
GCNConv(g, 1000=>500, relu),
Dense(500, 7),
softmax)
## Loss
loss(x, y) = crossentropy(model(x), y)
accuracy(x, y) = mean(onecold(model(x)) .== onecold(y))
## Training
ps = Flux.params(model)
train_data = [(train_X, train_y)]
opt = ADAM(0.01)
evalcb() = @show(accuracy(train_X, train_y))
Flux.train!(loss, ps, train_data, opt, cb=throttle(evalcb, 10))
using LightGraphs  # SimpleGraph and add_edge! come from LightGraphs.jl
g = SimpleGraph(5)
add_edge!(g, 1, 2); add_edge!(g, 3, 4)
GCNConv(g, num_features=>1000, relu)
The automatic differentiation (AD) engine in the Julia language.
$h_v^t$: vertex features, $e_{uv}$: edge features, $N(v)$: neighbors of vertex $v$