<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<!-- Meta tags for social media banners; these should be filled in appropriately, as they are your "business card" -->
<!-- Replace the content tag with appropriate information -->
<meta name="description" content="DESCRIPTION META TAG">
<meta property="og:title" content="SPARKS"/>
<meta property="og:description" content="Official website for SPARKS."/>
<meta property="og:url" content="https://franciscrickinstitute.github.io/sparks-ai/"/>
<!-- Path to banner image, which should be in the path listed below. Optimal dimensions are 1200x630 -->
<meta property="og:image" content="static/images/your_banner_image.png" />
<meta property="og:image:width" content="1200"/>
<meta property="og:image:height" content="630"/>
<meta name="twitter:title" content="TWITTER BANNER TITLE META TAG">
<meta name="twitter:description" content="TWITTER BANNER DESCRIPTION META TAG">
<!-- Path to banner image, which should be in the path listed below. Optimal dimensions are 1200x600 -->
<meta name="twitter:image" content="static/images/your_twitter_banner_image.png">
<meta name="twitter:card" content="summary_large_image">
<!-- Keywords for your paper to be indexed by-->
<meta name="keywords" content="KEYWORDS SHOULD BE PLACED HERE">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>SPARKS</title>
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
<link rel="manifest" href="/site.webmanifest">
<link rel="mask-icon" href="/safari-pinned-tab.svg" color="#5bbad5">
<meta name="msapplication-TileColor" content="#da532c">
<meta name="theme-color" content="#ffffff">
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro"
rel="stylesheet">
<link rel="stylesheet" href="static/css/bulma.min.css">
<link rel="stylesheet" href="static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="static/css/bulma-slider.min.css">
<link rel="stylesheet" href="static/css/fontawesome.all.min.css">
<link rel="stylesheet"
href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="static/css/index.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script src="https://documentcloud.adobe.com/view-sdk/main.js"></script>
<script defer src="static/js/fontawesome.all.min.js"></script>
<script src="static/js/bulma-carousel.min.js"></script>
<script src="static/js/bulma-slider.min.js"></script>
<script src="static/js/index.js"></script>
<style>
.container {
display: flex;
flex-wrap: wrap;
justify-content: space-around;
width: 100%;
}
.item {
max-width: calc(20% - 20px);
height: auto;
margin: 10px;
flex-grow: 1;
flex-basis: calc(20% - 20px); /* This sets the initial size of the items to approximately one fifth of the container width, minus the margin */
object-fit: contain; /* This maintains the aspect ratio of the elements */
}
video, img {
width: 100%;
height: 100%;
}
</style>
</head>
<body>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column has-text-centered">
<h1 class="title is-1 publication-title">SPARKS: A Biologically Inspired Neural Attention Model for the Analysis of Sequential Spiking Patterns</h1>
<div class="is-size-5 publication-authors">
<!-- Paper authors -->
<span class="author-block">
<a href="https://scholar.google.com/citations?user=KB0fO_MAAAAJ&hl=en" target="_blank">Nicolas Skatchkovsky</a><sup>1</sup>,</span>
<span class="author-block">
Natalia Glazman<sup>2</sup>,</span>
<span class="author-block">
<a href="https://www.imperial.ac.uk/people/s.sadeh" target="_blank">Sadra Sadeh</a><sup>1, 2, *</sup>,</span>
<span class="author-block">
<a href="https://www.crick.ac.uk/research/labs/flor-iacaruso" target="_blank">Florencia Iacaruso</a><sup>1, *</sup>,</span>
</div>
<div class="is-size-5 publication-authors">
<span class="author-block"><sup>1</sup>The Francis Crick Institute, <sup>2</sup>Imperial College London <br>Cosyne 2024</span>
<span class="eql-cntrb"><small><br><sup>*</sup>Indicates Equal Contribution</small></span>
</div>
<div class="column has-text-centered">
<div class="publication-links">
<!-- bioRxiv preprint link -->
<span class="link-block">
<a href="https://www.biorxiv.org/content/10.1101/2024.08.13.607787v1" target="_blank"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="ai ai-arxiv"></i>
</span>
<span>bioRxiv</span>
</a>
</span>
<!-- Arxiv PDF link -->
<!-- E<span class="link-block">
<a href="Coming soon!" target="_blank"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-file-pdf"></i>
</span>
<span>Paper</span>
</a>
</span> -->
<!-- Github link -->
<span class="link-block">
<a href="https://github.com/franciscrickinstitute/sparks" target="_blank"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-github"></i>
</span>
<span>Code</span>
</a>
</span>
<span class="link-block">
<a href="https://franciscrickinstitute.github.io/sparks-ai/docs#" target="_blank"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-info-circle"></i>
</span>
<span>Documentation</span>
</a>
</span>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<div class="container" style="width: 20%; margin-bottom: 20px;">
<img width="20%" src="static/images/logo_2.png" alt="Image description">
</div>
<div class="container">
<video class="item" poster="" id="tree" autoplay muted loop>
<source src="static/videos/monkey_reaching_spikes.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<img class="item" src="static/images/sparks-encoder.png" alt="Image description">
<video class="item" poster="" id="tree" autoplay muted loop>
<source src="static/videos/monkey_reaching_latent.mp4" type="video/mp4">
</video>
<img class="item" src="static/images/sparks-decoder.png" alt="Image description">
<video class="item" poster="" id="tree" autoplay muted loop>
<source src="static/videos/monkey_reaching_targets.mp4" type="video/mp4">
</video>
</div>
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<div class="content has-text-justified">
<p>Two-dimensional embeddings obtained during supervised prediction of hand position in monkeys performing a centre-out reaching task (Chowdhury et al., 2020).</p>
</div>
</div>
</div>
</div>
<!-- End teaser video -->
<!-- Paper abstract -->
<section class="section hero is-light">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Abstract</h2>
<div class="content has-text-justified">
<p>
Understanding how the brain represents sensory information and triggers behavioural responses is a fundamental goal in neuroscience. Despite advances in neuronal recording techniques, linking the resulting high-dimensional responses to relevant variables remains challenging. Inspired by recent progress in machine learning, we propose a novel self-attention mechanism that generates reliable latent representations by sequentially extracting information from the precise timing of single spikes through Hebbian learning. We train a variational autoencoder encompassing the proposed attention layer using an information-theoretic criterion inspired by predictive coding to enforce temporal coherence in the latent representations. The resulting model, SPARKS, produces interpretable embeddings from just tens of neurons, demonstrating robustness across animals and sessions. Through unsupervised and supervised learning, SPARKS generates meaningful low-dimensional representations of high-dimensional recordings and offers state-of-the-art prediction capabilities for behavioural variables on diverse electrophysiology and calcium imaging datasets. Notably, we capture oscillatory sequences from the medial entorhinal cortex (MEC) with unprecedented precision, compare latent representations of natural scenes across sessions and animals, and reveal the hierarchical organisation of the mouse visual cortex from simple datasets. Combining machine learning models with biologically inspired mechanisms, SPARKS provides a promising solution for revealing large-scale network dynamics. Its capacity to generalise across animals and behavioural states suggests the potential of SPARKS to estimate the animal’s latent generative model of the world.
</p>
</div>
</div>
</div>
</div>
</section>
<!-- End paper abstract -->
<section class="section" id="Overview">
<h2 class="title">Overview</h2>
<div class="container" style="width: 70%; margin-bottom: 20px;">
<img width="80%" src="static/images/sparks-overview.png" alt="Image description">
</div>
To obtain consistent latent embeddings that capture most of the variance from high-dimensional neuronal responses, we have developed a Sequential Predictive Autoencoder for the Representation of Spiking Signals (SPARKS), combining a variational autoencoder with the proposed Hebbian attention layer and predictive learning rule (see details below). The encoder comprises several attention blocks, each composed of an attention layer followed by a fully connected feedforward network with residual connections and batch normalisation. The first attention block implements our Hebbian self-attention mechanism, and the subsequent blocks implement conventional dot-product attention. The decoder is a fully connected feedforward neural network, tasked with either reconstructing the input signal (unsupervised learning) or predicting a desired reference signal (supervised learning).
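<p style="margin-top: 20px;">As a rough illustration of this pipeline, the PyTorch sketch below wires stacked attention blocks, each with a feedforward network, residual connections and batch normalisation, into a variational encoder with a fully connected decoder. All module names, sizes and the plain first attention block are illustrative assumptions rather than the released implementation; see the Code link above for the actual model.</p>
<pre><code># Illustrative sketch only: names and sizes are assumptions, not the
# released SPARKS implementation.
import torch
import torch.nn as nn

class AttentionBlock(nn.Module):
    """Attention followed by a feedforward network, with residual
    connections and batch normalisation, as described above."""
    def __init__(self, dim, n_heads=1):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                nn.Linear(dim, dim))
        self.norm = nn.BatchNorm1d(dim)

    def forward(self, x):                      # x: (batch, tokens, dim)
        h = x + self.attn(x, x, x)[0]          # dot-product attention + residual
        h = h + self.ff(h)                     # feedforward + residual
        return self.norm(h.transpose(1, 2)).transpose(1, 2)

class SparksLikeVAE(nn.Module):
    """Stacked attention blocks (the first standing in for the Hebbian
    layer sketched below) and a fully connected decoder."""
    def __init__(self, n_neurons, dim=64, latent_dim=2, n_blocks=2):
        super().__init__()
        self.embed = nn.Linear(n_neurons, dim)
        self.blocks = nn.Sequential(*[AttentionBlock(dim)
                                      for _ in range(n_blocks)])
        self.to_stats = nn.Linear(dim, 2 * latent_dim)  # mean, log-variance
        self.decoder = nn.Sequential(nn.Linear(latent_dim, dim), nn.ReLU(),
                                     nn.Linear(dim, n_neurons))

    def forward(self, spikes):                 # spikes: (batch, T, n_neurons)
        h = self.blocks(self.embed(spikes)).mean(dim=1)  # pool over time
        mu, logvar = self.to_stats(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        return self.decoder(z), mu, logvar
</code></pre>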
</section>
<section class="section" id="Hebbian">
<h2 class="title">The Hebbian Attention layer</h2>
<div class="container" style="width: 80%; margin-bottom: 20px;">
<img width="80%" src="static/images/encoder.png" alt="Image description">
</div>
At the heart of SPARKS is the Hebbian attention layer, a biologically motivated adaptation of the conventional attention mechanism used in Transformers. Rather than relying solely on learned query-key products, it extracts information from the precise timing of single spikes, allowing the model to focus on the most informative parts of the neural input, much as biological systems prioritise certain signals.
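<p style="margin-top: 20px;">One way to picture such a mechanism is sketched below: attention coefficients are accumulated online from pairs of co-active neurons with an exponential decay, so that precise spike timing, rather than a learned query-key product, shapes the attention matrix. The decay constant and update rule are hypothetical simplifications for illustration, not the exact rule from the paper.</p>
<pre><code># Minimal, hypothetical sketch of a Hebbian attention layer; the
# update rule and decay constant are illustrative assumptions.
import torch
import torch.nn as nn

class HebbianAttention(nn.Module):
    def __init__(self, n_neurons, dim, decay=0.9):
        super().__init__()
        self.values = nn.Linear(n_neurons, dim)  # learned value projection
        self.decay = decay                       # assumed decay constant

    def forward(self, spikes):                   # spikes: (batch, T, n_neurons)
        batch, T, n = spikes.shape
        attn = spikes.new_zeros(batch, n, n)     # running Hebbian coefficients
        outputs = []
        for t in range(T):
            s = spikes[:, t, :]                  # spikes at this time step
            # Hebbian update: potentiate pairs of co-active neurons and
            # decay older coefficients, so precise timing is retained.
            attn = self.decay * attn + torch.einsum('bi,bj->bij', s, s)
            weights = torch.softmax(attn, dim=-1)
            outputs.append(self.values(torch.einsum('bij,bj->bi', weights, s)))
        return torch.stack(outputs, dim=1)       # (batch, T, dim)
</code></pre>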
</section>
<section class="section" id="Learning">
<h2 class="title">Learning and Optimization</h2>
<div class="container" style="width: 40%; margin-bottom: 20px;">
<img width="40%" src="static/images/optimisation.png" alt="Image description">
</div>
SPARKS utilises a variational approach to optimise the encoder and decoder networks. By adopting a predictive, causally conditioned distribution inspired by predictive coding theories in neuroscience, the model learns to generate accurate predictions from neural data. This framework also supports training across different sessions and even different animals, allowing for robust learning despite variability in data collection conditions.
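<p style="margin-top: 20px;">Schematically, one optimisation step could look like the sketch below: a standard evidence lower bound in which the reconstruction target is shifted one step ahead, as a simple stand-in for the predictive, causally conditioned objective. The <code>training_step</code> helper and the one-step-ahead target are assumptions made for illustration, and <code>model</code> is any encoder/decoder pair returning a prediction together with the latent mean and log-variance, e.g. the sketch above.</p>
<pre><code># Schematic training step: a plain ELBO with a one-step-ahead target
# standing in for the paper's predictive objective (an assumption).
import torch
import torch.nn.functional as F

def training_step(model, optimiser, spikes):
    """One gradient step; spikes: (batch, T, n_neurons) spike trains."""
    past, future = spikes[:, :-1, :], spikes[:, 1:, :]
    pred, mu, logvar = model(past)    # encode past activity, decode a prediction
    # Reconstruction term: how well the decoder predicts upcoming
    # activity (summarised here as the mean future spike probability).
    recon = F.binary_cross_entropy_with_logits(pred, future.mean(dim=1))
    # KL term: keep the latent posterior close to the standard normal prior.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon + kl
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
</code></pre>
<p>In practice, a weighting on the KL term or session-specific encoder inputs would be natural extensions for the cross-session and cross-animal training described above.</p>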
</section>
<section class="section" id="Applications">
<h2 class="title">Applications and Insights</h2>
<div class="container">
<div style="display: inline-block; width: 45%; margin-bottom: 150px;">
<video width="100%" autoplay loop muted preload="auto">
<source src="static/videos/mec_encs.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<figcaption>Two-dimensional embeddings obtained from unsupervised learning of calcium recordings in the medial entorhinal cortex (MEC) of passive mice (Gonzalo Cogno et al., 2024). Without any pre-processing, SPARKS uncovers a ring topology in the recording and recovers the phase of the underlying oscillation in the signal.</figcaption>
</div>
<div style="display: inline-block; width: 45%; margin-bottom: 150px;">
<video width="100%" autoplay loop muted preload="auto">
<source src="static/videos/allen_visual.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
<figcaption>Reconstruction of a natural movie shown to passive mice using 100 neurons from the primary visual cortex recorded with a Neuropixels probe (de Vries et al., 2020).</figcaption>
</div>
</div>
SPARKS has been successfully applied to decode neural signals and predict sensory inputs, such as visual stimuli, with high accuracy. It demonstrates the potential to uncover the temporal dynamics of neural signals across different brain regions, offering valuable insights into brain function. Additionally, the model's ability to handle unsupervised and supervised learning tasks makes it versatile for various neuroscience research applications.
</section>
<!--BibTex citation -->
<section class="section" id="BibTeX">
<div class="container is-max-desktop content">
<h2 class="title">BibTeX</h2>
<pre><code>@article{skatchkovsky24sparks,
author = {Skatchkovsky, Nicolas and Glazman, Natalia and Sadeh, Sadra and Iacaruso, Florencia},
title = {A Biologically Inspired Attention Model for Neural Signal Analysis},
elocation-id = {2024.08.13.607787},
year = {2024},
doi = {10.1101/2024.08.13.607787},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/early/2024/08/16/2024.08.13.607787},
eprint = {https://www.biorxiv.org/content/early/2024/08/16/2024.08.13.607787.full.pdf},
journal = {bioRxiv}
}
</code></pre>
</div>
</section>
<!--End BibTex citation -->
<footer class="footer">
<div class="container">
<div class="columns is-centered">
<div class="column is-8">
<div class="content">
<p>
Author: <a href="https://scholar.google.com/citations?user=KB0fO_MAAAAJ&hl=en" target="_blank">Nicolas Skatchkovsky</a>.
This page was built using the <a href="https://github.com/eliahuhorwitz/Academic-project-page-template" target="_blank">Academic Project Page Template</a> which was adopted from the <a href="https://nerfies.github.io" target="_blank">Nerfies</a> project page.
<br> This website is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/" target="_blank">Creative
Commons Attribution-ShareAlike 4.0 International License</a>.
</p>
</div>
</div>
</div>
</div>
</footer>
<!-- Statcounter tracking code -->
<!-- You can add a tracker to track page visits by creating an account at statcounter.com -->
<!-- End of Statcounter Code -->
</body>
</html>