API reference · DifferentiationInterfaceTest.jl

API reference

Entry points

DifferentiationInterfaceTest.test_differentiationFunction
test_differentiation(
     backends::Vector{<:ADTypes.AbstractADType};
     ...
 )

Cross-test a list of backends on a list of scenarios, running a variety of different tests.

Default arguments

Keyword arguments

Testing:

  • correctness=true: whether to compare the differentiation results with the theoretical values specified in each scenario
  • type_stability=false: whether to check type stability with JET.jl (thanks to JET.@test_opt)
  • sparsity: whether to check sparsity of the jacobian / hessian
  • detailed=false: whether to print a detailed or condensed test log

Filtering:

  • input_type=Any, output_type=Any: restrict scenario inputs / outputs to subtypes of this
  • first_order=true, second_order=true: include first order / second order operators
  • onearg=true, twoarg=true: include one-argument / two-argument functions
  • inplace=true, outofplace=true: include in-place / out-of-place operators

Options:

  • logging=false: whether to log progress
  • isequal=isequal: function used to compare objects exactly, with the standard signature isequal(x, y)
  • isapprox=isapprox: function used to compare objects approximately, with the standard signature isapprox(x, y; atol, rtol)
  • atol=0: absolute precision for correctness testing (when comparing to the reference outputs)
  • rtol=1e-3: relative precision for correctness testing (when comparing to the reference outputs)
source
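To make the keyword arguments above concrete, here is a hedged usage sketch (the `default_scenarios()` helper and the exact backend choice are assumptions about the package's exports, not guaranteed by this page):

```julia
using DifferentiationInterfaceTest
using ADTypes: AutoForwardDiff

# assumed helper returning a Vector{<:Scenario}; any scenario list works here
scenarios = default_scenarios()

test_differentiation(
    [AutoForwardDiff()],
    scenarios;
    correctness=true,      # compare results to the scenarios' reference values
    type_stability=false,  # skip JET.jl checks
    second_order=false,    # only test first-order operators
    logging=true,
    rtol=1e-3,
)
```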
test_differentiation(
     backend::ADTypes.AbstractADType,
     args...;
     kwargs...
 )

Shortcut for a single backend.

source
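As a minimal sketch (same assumptions as in the example above), the shortcut simply lets you drop the vector wrapper around the backend:

```julia
using DifferentiationInterfaceTest
using ADTypes: AutoForwardDiff

# equivalent to test_differentiation([AutoForwardDiff()], scenarios; kwargs...)
test_differentiation(AutoForwardDiff(), default_scenarios(); logging=true)
```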
DifferentiationInterfaceTest.benchmark_differentiationFunction
benchmark_differentiation(
     backends::Vector{<:ADTypes.AbstractADType},
     scenarios::Vector{<:Scenario};
     input_type,
    ...
     excluded,
     logging
 ) -> DataFrames.DataFrame

Benchmark a list of backends for a list of operators on a list of scenarios.

The object returned is a DataFrames.DataFrame where each column corresponds to a field of DifferentiationBenchmarkDataRow.

The keyword arguments available here have the same meaning as those in test_differentiation.

source
DifferentiationInterfaceTest.DifferentiationBenchmarkDataRowType
DifferentiationBenchmarkDataRow

Ad-hoc storage type for differentiation benchmarking results.

If you have a vector rows::Vector{DifferentiationBenchmarkDataRow}, you can turn it into a DataFrame as follows:

using DataFrames
 
df = DataFrame(rows)

The resulting DataFrame will have one column for each of the following fields.

Fields

  • backend::ADTypes.AbstractADType: backend used for benchmarking

  • scenario::Scenario: scenario used for benchmarking

  • operator::Symbol: differentiation operator used for benchmarking, e.g. :gradient or :hessian

  • calls::Int64: number of calls to the differentiated function for one call to the operator

  • samples::Int64: number of benchmarking samples taken

  • evals::Int64: number of evaluations used for averaging in each sample

  • time::Float64: minimum runtime over all samples, in seconds

  • allocs::Float64: minimum number of allocations over all samples

  • bytes::Float64: minimum memory allocated over all samples, in bytes

  • gc_fraction::Float64: minimum fraction of time spent in garbage collection over all samples, between 0.0 and 1.0

  • compile_fraction::Float64: minimum fraction of time spent compiling over all samples, between 0.0 and 1.0

See the documentation of Chairmarks.jl for more details on the measurement fields.

source

Pre-made scenario lists

The precise contents of the scenario lists are not part of the API, only their existence.

Scenario types

DifferentiationInterfaceTest.ScenarioType
Scenario{op,args,pl}

Store a testing scenario composed of a function and its input + output for a given operator.

This generic type should never be used directly: use the specific constructor corresponding to the operator you want to test, or a predefined list of scenarios.

Constructors

Type parameters

  • op: one of :pushforward, :pullback, :derivative, :gradient, :jacobian, :second_derivative, :hvp, :hessian
  • args: either 1 (for f(x) = y) or 2 (for f!(y, x) = nothing)
  • pl: either :inplace or :outofplace

Fields

  • f::Any: function f (if args==1) or f! (if args==2) to apply

  • x::Any: primal input

  • y::Any: primal output

  • seed::Any: seed for pushforward, pullback or HVP

  • res1::Any: first-order result of the operator

  • res2::Any: second-order result of the operator (when it makes sense)

Note that the res1 and res2 fields are given more meaningful names in the keyword arguments of each specialized constructor. For example:

  • the keyword grad of GradientScenario becomes res1
  • the keyword hess of HessianScenario becomes res2, and the keyword grad becomes res1
source
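As a hedged sketch (the exact keyword signature below is an assumption; consult the specialized constructors' docstrings for the authoritative form), a gradient scenario might be built like this:

```julia
using DifferentiationInterfaceTest

f(x) = sum(abs2, x)    # y = f(x) is a scalar, so a gradient makes sense
x = [1.0, 2.0, 3.0]

# `grad` is the meaningful name for res1 in GradientScenario, as noted above;
# the keyword names x and y are assumptions about the constructor
scen = GradientScenario(f; x=x, y=f(x), grad=2 .* x)
```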

Internals

This is not part of the public API.

DifferentiationInterfaceTest.flux_scenariosFunction
flux_scenarios(rng=Random.default_rng())

Create a vector of Scenarios with neural networks from Flux.jl.

Warning

This function requires FiniteDifferences.jl and Flux.jl to be loaded (it is implemented in a package extension).

Danger

These scenarios are still experimental and not part of the public API. Their ground truth values are computed with finite differences, and thus subject to imprecision.

source
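Under the stated loading requirements, a usage sketch might look like this:

```julia
using DifferentiationInterfaceTest
using FiniteDifferences, Flux   # required to trigger the package extension
using Random

# experimental: ground truth values come from finite differences
scens = flux_scenarios(Random.default_rng())
# the resulting Vector{<:Scenario} can then be fed to test_differentiation
```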
DifferentiationInterfaceTest.lux_scenariosFunction
lux_scenarios(rng=Random.default_rng())

Create a vector of Scenarios with neural networks from Lux.jl.

Warning

This function requires ComponentArrays.jl, FiniteDiff.jl, Lux.jl and LuxTestUtils.jl to be loaded (it is implemented in a package extension).

Danger

These scenarios are still experimental and not part of the public API. Their ground truth values are computed with finite differences, and thus subject to imprecision.

source
Pkg.add(
    url="https://github.com/gdalle/DifferentiationInterface.jl",
    subdir="DifferentiationInterfaceTest"
)

    type_stability=false, # checks type stability with JET.jl
    detailed=true,        # prints a detailed test set
)

Test Summary:                                                                       | Pass  Total   Time
Testing correctness                                                                 |   68     68  18.1s
  AutoForwardDiff()                                                                 |   34     34   5.3s
    gradient                                                                        |   34     34   5.3s
      Scenario{:gradient,1,:inplace} f : Vector{Float32} -> Float32                 |   17     17   3.0s
      Scenario{:gradient,1,:inplace} f : Matrix{Float64} -> Float64                 |   17     17   2.2s
  AutoEnzyme(mode=EnzymeCore.ReverseMode{false, EnzymeCore.FFIABI, false, false}()) |   34     34  12.8s
    gradient                                                                        |   34     34  12.8s
      Scenario{:gradient,1,:inplace} f : Vector{Float32} -> Float32                 |   17     17  10.9s
      Scenario{:gradient,1,:inplace} f : Matrix{Float64} -> Float64                 |   17     17   1.9s

If you are too lazy to manually specify the reference, you can also provide an AD backend as the ref_backend keyword argument, which will serve as the ground truth for comparison.

Benchmarking

Once you are confident that your backends give the correct answers, you probably want to compare their performance. This is made easy by the benchmark_differentiation function, whose syntax should feel familiar:

df = benchmark_differentiation(backends, scenarios);
12×11 DataFrame
| Row | backend | scenario | operator | calls | samples | evals | time | allocs | bytes | gc_fraction | compile_fraction |
|----:|:--------|:---------|:---------|------:|--------:|------:|---------:|------:|-------:|------------:|-----------------:|
| 1 | AutoForwardDiff() | Scenario{:gradient,1,:inplace} f : Vector{Float32} -> Float32 | prepare_gradient | 0 | 1 | 1 | 4.699e-6 | 11.0 | 528.0 | 0.0 | 0.0 |
| 2 | AutoForwardDiff() | Scenario{:gradient,1,:inplace} f : Vector{Float32} -> Float32 | value_and_gradient! | 1 | 39628 | 1 | 6.9e-8 | 1.0 | 32.0 | 0.0 | 0.0 |
| 3 | AutoForwardDiff() | Scenario{:gradient,1,:inplace} f : Vector{Float32} -> Float32 | gradient! | 1 | 45735 | 1 | 6.0e-8 | 0.0 | 0.0 | 0.0 | 0.0 |
| 4 | AutoForwardDiff() | Scenario{:gradient,1,:inplace} f : Matrix{Float64} -> Float64 | prepare_gradient | 0 | 1 | 1 | 4.87e-6 | 11.0 | 1776.0 | 0.0 | 0.0 |
| 5 | AutoForwardDiff() | Scenario{:gradient,1,:inplace} f : Matrix{Float64} -> Float64 | value_and_gradient! | 1 | 37298 | 1 | 2.0e-7 | 5.0 | 192.0 | 0.0 | 0.0 |
| 6 | AutoForwardDiff() | Scenario{:gradient,1,:inplace} f : Matrix{Float64} -> Float64 | gradient! | 1 | 40021 | 1 | 1.9e-7 | 4.0 | 160.0 | 0.0 | 0.0 |
| 7 | AutoEnzyme(mode=ReverseMode{false, FFIABI, false, false}()) | Scenario{:gradient,1,:inplace} f : Vector{Float32} -> Float32 | prepare_gradient | 0 | 1 | 1 | 1.4e-7 | 0.0 | 0.0 | 0.0 | 0.0 |
| 8 | AutoEnzyme(mode=ReverseMode{false, FFIABI, false, false}()) | Scenario{:gradient,1,:inplace} f : Vector{Float32} -> Float32 | value_and_gradient! | 1 | 31061 | 1 | 6.91e-7 | 10.0 | 208.0 | 0.0 | 0.0 |
| 9 | AutoEnzyme(mode=ReverseMode{false, FFIABI, false, false}()) | Scenario{:gradient,1,:inplace} f : Vector{Float32} -> Float32 | gradient! | 1 | 154919 | 1 | 6.9e-8 | 0.0 | 0.0 | 0.0 | 0.0 |
| 10 | AutoEnzyme(mode=ReverseMode{false, FFIABI, false, false}()) | Scenario{:gradient,1,:inplace} f : Matrix{Float64} -> Float64 | prepare_gradient | 0 | 1 | 1 | 1.31e-7 | 0.0 | 0.0 | 0.0 | 0.0 |
| 11 | AutoEnzyme(mode=ReverseMode{false, FFIABI, false, false}()) | Scenario{:gradient,1,:inplace} f : Matrix{Float64} -> Float64 | value_and_gradient! | 1 | 65989 | 1 | 7.51e-7 | 10.0 | 208.0 | 0.0 | 0.0 |
| 12 | AutoEnzyme(mode=ReverseMode{false, FFIABI, false, false}()) | Scenario{:gradient,1,:inplace} f : Matrix{Float64} -> Float64 | gradient! | 1 | 148840 | 1 | 1.0e-7 | 0.0 | 0.0 | 0.0 | 0.0 |

The resulting object is a DataFrame from DataFrames.jl, whose columns correspond to the fields of DifferentiationBenchmarkDataRow:
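Because df is a plain DataFrame, the standard DataFrames.jl operations apply; for instance, one might summarize the best runtime per backend and operator (a sketch assuming the df obtained above):

```julia
using DataFrames

# group by backend and operator, keep the minimum of the `time` column
best = combine(groupby(df, [:backend, :operator]), :time => minimum => :best_time)
sort!(best, :best_time)
```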