
Commit

Merge branch 'main' into exponents
E-Rum committed Jan 8, 2025
2 parents 54b6fd6 + 04edb22 commit 401e65d
Showing 8 changed files with 135 additions and 6 deletions.
85 changes: 85 additions & 0 deletions CITATION.cff
@@ -0,0 +1,85 @@
+cff-version: 1.2.0
+message: "If you use torch-pme for your work, please read and cite it as below."
+title: >-
+  Fast and flexible range-separated models for atomistic machine learning
+abstract: |
+  Most atomistic machine learning (ML) models rely on a locality ansatz, and decompose the energy into a sum of short-ranged, atom-centered contributions. This leads to clear limitations when trying to describe problems that are dominated by long-range physical effects - most notably electrostatics. Many approaches have been proposed to overcome these limitations, but efforts to make them efficient and widely available are hampered by the need to incorporate an ad hoc implementation of methods to treat long-range interactions. We develop a framework aiming to bring some of the established algorithms to evaluate non-bonded interactions - including Ewald summation, classical particle-mesh Ewald (PME), and particle-particle/particle-mesh (P3M) Ewald - into atomistic ML. We provide a reference implementation for PyTorch as well as an experimental one for JAX. Beyond Coulomb and more general long-range potentials, we introduce purified descriptors which disregard the immediate neighborhood of each atom, and are more suitable for general long-ranged ML applications. Our implementations are fast, feature-rich, and modular: They provide an accurate evaluation of physical long-range forces that can be used in the construction of (semi)empirical baseline potentials; they exploit the availability of automatic differentiation to seamlessly combine long-range models with conventional, local ML schemes; and they are sufficiently flexible to implement more complex architectures that use physical interactions as building blocks. We benchmark and demonstrate our torch-pme and jax-pme libraries to perform molecular dynamics simulations, to train range-separated ML potentials, and to evaluate long-range equivariant descriptors of atomic structures.
+type: preprint
+database: arXiv.org
+date-accessed: 2024-12-05T12:43:16Z
+repository: arXiv
+url: http://arxiv.org/abs/2412.03281
+keywords:
+  - Physics - Chemical Physics
+authors:
+  - family-names: Loche
+    given-names: Philip
+  - family-names: Huguenin-Dumittan
+    given-names: Kevin K.
+  - family-names: Honarmand
+    given-names: Melika
+  - family-names: Xu
+    given-names: Qianjun
+  - family-names: Rumiantsev
+    given-names: Egor
+  - family-names: How
+    given-names: Wei Bin
+  - family-names: Langer
+    given-names: Marcel F.
+  - family-names: Ceriotti
+    given-names: Michele
+editors:
+  - family-names: Loche
+    given-names: Philip
+  - family-names: Huguenin-Dumittan
+    given-names: Kevin K.
+  - family-names: Honarmand
+    given-names: Melika
+  - family-names: Xu
+    given-names: Qianjun
+  - family-names: Rumiantsev
+    given-names: Egor
+  - family-names: How
+    given-names: Wei Bin
+  - family-names: Langer
+    given-names: Marcel F.
+  - family-names: Ceriotti
+    given-names: Michele
+recipients:
+  - family-names: Loche
+    given-names: Philip
+  - family-names: Huguenin-Dumittan
+    given-names: Kevin K.
+  - family-names: Honarmand
+    given-names: Melika
+  - family-names: Xu
+    given-names: Qianjun
+  - family-names: Rumiantsev
+    given-names: Egor
+  - family-names: How
+    given-names: Wei Bin
+  - family-names: Langer
+    given-names: Marcel F.
+  - family-names: Ceriotti
+    given-names: Michele
+translators:
+  - family-names: Loche
+    given-names: Philip
+  - family-names: Huguenin-Dumittan
+    given-names: Kevin K.
+  - family-names: Honarmand
+    given-names: Melika
+  - family-names: Xu
+    given-names: Qianjun
+  - family-names: Rumiantsev
+    given-names: Egor
+  - family-names: How
+    given-names: Wei Bin
+  - family-names: Langer
+    given-names: Marcel F.
+  - family-names: Ceriotti
+    given-names: Michele
+date-published: 2024-12-04
+identifiers:
+  - type: doi
+    value: 10.48550/arXiv.2412.03281
1 change: 1 addition & 0 deletions MANIFEST.in
@@ -1,6 +1,7 @@
 graft src
 
 include LICENSE
+include CITATION.cff
 include README.rst
 
 prune docs
2 changes: 1 addition & 1 deletion README.rst
@@ -67,7 +67,7 @@ can optionally be installed together and used as ``torchpme.metatensor`` via
 Quickstart
 ----------
 
-Here is a simple example get you started with *torch-pme*:
+Here is a simple example to get started with *torch-pme*:
 
 .. code-block:: python
2 changes: 2 additions & 0 deletions docs/src/references/changelog.rst
@@ -27,6 +27,8 @@ changelog <https://keepachangelog.com/en/1.1.0/>`_ format. This project follows
 Fixed
 #####
 
+* Fixed consistency of ``dtype`` and ``device`` in the ``SplinePotential`` class
+* Fix inconsistent ``cutoff`` in neighbor list example
 * All calculators now check if the cell is zero if the potential is range-separated
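The last changelog entry describes a guard that rejects an all-zero cell whenever the potential is range-separated. A minimal pure-Python sketch of such a validation step (the name ``validate_cell`` and its signature are hypothetical, not torch-pme's actual API):

```python
def validate_cell(cell, smearing):
    # Hypothetical sketch: a range-separated potential (signalled here by a
    # finite `smearing`) needs periodic boundary conditions, so a cell made
    # entirely of zeros cannot be meaningful and is rejected up front.
    if smearing is not None and all(v == 0.0 for row in cell for v in row):
        raise ValueError(
            "provide a periodic cell when using a range-separated potential"
        )
    return cell

# A cubic box passes the check; a zero cell with a finite smearing raises.
box = validate_cell([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]], smearing=1.0)
```

Short-ranged potentials (``smearing=None``) are unaffected, matching the "if the potential is range-separated" qualifier in the entry.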
2 changes: 1 addition & 1 deletion examples/2-neighbor-lists-usage.py
@@ -224,7 +224,7 @@ def distances(
 #
 # and create new distances in a similar manner as above.
 
-nl = vesin.torch.NeighborList(cutoff=1.0, full_list=False)
+nl = vesin.torch.NeighborList(cutoff=cutoff, full_list=False)
 neighbor_indices_new, d = nl.compute(
     points=positions_new, box=cell, periodic=True, quantities="Pd"
 )
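The fix above makes the example reuse its ``cutoff`` variable instead of a hard-coded ``1.0``. Conceptually, the half list that ``vesin.torch.NeighborList(..., full_list=False)`` returns contains each pair once; a naive stdlib sketch for an orthorhombic periodic box (illustrative only, not vesin's actual cell-list algorithm):

```python
import math

def neighbor_pairs(points, cell, cutoff):
    # Naive O(N^2) half neighbor list for an orthorhombic periodic box:
    # every pair (i, j) with i < j appears at most once, using the
    # minimum-image convention along each axis.
    pairs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d2 = 0.0
            for k in range(3):
                dx = points[j][k] - points[i][k]
                dx -= cell[k] * round(dx / cell[k])  # minimum image
                d2 += dx * dx
            if d2 <= cutoff * cutoff:
                pairs.append((i, j, math.sqrt(d2)))
    return pairs

cutoff = 1.0
points = [[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [9.9, 0.0, 0.0]]
pairs = neighbor_pairs(points, [10.0, 10.0, 10.0], cutoff)
```

Note how the third point, at 9.9 along x in a box of length 10, is only 0.1 away from the first once the minimum-image convention is applied.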
15 changes: 12 additions & 3 deletions src/torchpme/potentials/spline.py
@@ -74,6 +74,9 @@ def __init__(
         if len(y_grid) != len(r_grid):
             raise ValueError("Length of radial grid and value array mismatch.")
 
+        r_grid = r_grid.to(dtype=dtype, device=device)
+        y_grid = y_grid.to(dtype=dtype, device=device)
+
         if reciprocal:
             if torch.min(r_grid) <= 0.0:
                 raise ValueError(
@@ -89,6 +92,8 @@
                 k_grid = torch.pi * 2 * torch.reciprocal(r_grid).flip(dims=[0])
             else:
                 k_grid = r_grid.clone()
+        else:
+            k_grid = k_grid.to(dtype=dtype, device=device)
 
         if yhat_grid is None:
             # computes automatically!
@@ -98,6 +103,8 @@
                 y_grid,
                 compute_second_derivatives(r_grid, y_grid),
             )
+        else:
+            yhat_grid = yhat_grid.to(dtype=dtype, device=device)
 
         # the function is defined for k**2, so we define the grid accordingly
         if reciprocal:
@@ -108,12 +115,14 @@
         self._krn_spline = CubicSpline(k_grid**2, yhat_grid)
 
         if y_at_zero is None:
-            self._y_at_zero = self._spline(torch.tensor([0.0]))
+            self._y_at_zero = self._spline(torch.zeros(1, dtype=dtype, device=device))
         else:
             self._y_at_zero = y_at_zero
 
         if yhat_at_zero is None:
-            self._yhat_at_zero = self._krn_spline(torch.tensor([0.0]))
+            self._yhat_at_zero = self._krn_spline(
+                torch.zeros(1, dtype=dtype, device=device)
+            )
         else:
             self._yhat_at_zero = yhat_at_zero
 
@@ -140,7 +149,7 @@ def self_contribution(self) -> torch.Tensor:
         return self._y_at_zero
 
     def background_correction(self) -> torch.Tensor:
-        return torch.tensor([0.0])
+        return torch.zeros(1)
 
     from_dist.__doc__ = Potential.from_dist.__doc__
     lr_from_dist.__doc__ = Potential.lr_from_dist.__doc__
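The hunk above only moves the input grids onto a common ``dtype`` and ``device`` before the splines are built; the interpolation itself is unchanged. For reference, evaluating a natural cubic spline from knots and precomputed second derivatives looks roughly like this stdlib sketch (the textbook formula, not torch-pme's ``CubicSpline`` class):

```python
import bisect

def cubic_spline_eval(x, y, d2y, t):
    # Textbook natural-cubic-spline evaluation: locate the knot interval
    # [x[i-1], x[i]] containing t, then blend the knot values y and the
    # precomputed second derivatives d2y with the cubic basis weights.
    i = max(1, min(len(x) - 1, bisect.bisect_right(x, t)))
    h = x[i] - x[i - 1]
    a = (x[i] - t) / h
    b = (t - x[i - 1]) / h
    return (
        a * y[i - 1]
        + b * y[i]
        + ((a**3 - a) * d2y[i - 1] + (b**3 - b) * d2y[i]) * h * h / 6.0
    )

# With all second derivatives zero the spline reduces to linear interpolation.
value = cubic_spline_eval([0.0, 1.0, 2.0], [0.0, 1.0, 2.0], [0.0, 0.0, 0.0], 0.5)
```

Since every term mixes grid values and derivative values, a single stray ``float32`` or CPU tensor in a ``float64``/CUDA computation would fail in torch, which is what the coercion in the patch prevents.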
2 changes: 1 addition & 1 deletion src/torchpme/utils/splines.py
@@ -198,7 +198,7 @@ def compute_second_derivatives(
     d2y = _solve_tridiagonal(a, b, c, d)
 
     # Converts back to the original dtype
-    return d2y.to(x_points.dtype)
+    return d2y.to(dtype=x_points.dtype, device=x_points.device)
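``compute_second_derivatives`` obtains the spline's second derivatives by solving a tridiagonal linear system via ``_solve_tridiagonal``. A self-contained sketch of such a solver (the standard Thomas algorithm; not necessarily torch-pme's exact implementation):

```python
def solve_tridiagonal(a, b, c, d):
    # Thomas algorithm for A x = d with tridiagonal A:
    # a = sub-diagonal (a[0] unused), b = main diagonal,
    # c = super-diagonal (c[-1] unused), d = right-hand side.
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):  # forward sweep
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 2x1 + x2 = 4;  x1 + 2x2 + x3 = 8;  x2 + 2x3 = 8  has solution [1, 2, 3]
solution = solve_tridiagonal([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [4.0, 8.0, 8.0])
```

The forward sweep and back substitution are O(n), which is why spline setup stays cheap even for fine radial grids.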
32 changes: 32 additions & 0 deletions tests/test_potentials.py
@@ -573,3 +573,35 @@ def test_combined_potential_learnable_weights():
     loss.backward()
     optimizer.step()
     assert torch.allclose(combined.weights, weights - 0.1)
+
+
+@pytest.mark.parametrize("device", ["cpu", "cuda"])
+@pytest.mark.parametrize("dtype", [torch.float32, torch.float64])
+@pytest.mark.parametrize(
+    "potential_class", [CoulombPotential, InversePowerLawPotential, SplinePotential]
+)
+def test_potential_device_dtype(potential_class, device, dtype):
+    if device == "cuda" and not torch.cuda.is_available():
+        pytest.skip("CUDA is not available")
+
+    smearing = 1.0
+    exponent = 1.0
+
+    if potential_class is InversePowerLawPotential:
+        potential = potential_class(
+            exponent=exponent, smearing=smearing, dtype=dtype, device=device
+        )
+    elif potential_class is SplinePotential:
+        x_grid = torch.linspace(0, 20, 100, device=device, dtype=dtype)
+        y_grid = torch.exp(-(x_grid**2) * 0.5)
+        potential = potential_class(
+            r_grid=x_grid, y_grid=y_grid, reciprocal=False, dtype=dtype, device=device
+        )
+    else:
+        potential = potential_class(smearing=smearing, dtype=dtype, device=device)
+
+    dists = torch.linspace(0.1, 10.0, 100, device=device, dtype=dtype)
+    potential_lr = potential.lr_from_dist(dists)
+
+    assert potential_lr.device.type == device
+    assert potential_lr.dtype == dtype
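The three stacked ``@pytest.mark.parametrize`` decorators in the new test multiply: every combination of device, dtype, and potential class becomes its own test case. A stdlib illustration of the resulting matrix (class names abbreviated to strings here):

```python
import itertools

devices = ["cpu", "cuda"]
dtypes = ["float32", "float64"]
potentials = ["CoulombPotential", "InversePowerLawPotential", "SplinePotential"]

# pytest builds the Cartesian product of stacked parametrize decorators,
# so the test body runs 2 * 2 * 3 = 12 times (CUDA cases skip without a GPU).
cases = list(itertools.product(devices, dtypes, potentials))
```

This is what makes a single short test cover the full dtype/device grid fixed by this commit.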
