KernelDistributions.jl
KernelDistributions.KernelDistributions — Module

Until now I have used MeasureTheory.jl because of its nicer interface, but its types are not isbits and cannot be used on the GPU. Distributions.jl comes close, but is not a perfect fit for execution on the GPU:
- Mostly type stable
- Mixtures are quirky
- Uniform is not strongly typed, which results in Float64 calculations all the time
Here, I provide stripped-down distributions which are isbitstype and strongly typed, and thus support execution on the GPU. KernelDistributions offers the following interface functions (a usage sketch follows the list):
- DensityInterface.logdensityof(dist::KernelDistribution, x)
- Random.rand!(rng, dist::KernelDistribution, A)
- Base.rand(rng, dist::KernelDistribution, dims...)
- Base.eltype(::Type{<:AbstractKernelDistribution}): Number format of the distribution, e.g. Float16
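To make the interface concrete, here is a short usage sketch. KernelNormal is assumed here as a stand-in for a concrete subtype, and its two-argument constructor is my assumption:

```julia
using KernelDistributions # assumed to export KernelNormal and the interface
using DensityInterface, Random

# Hypothetical concrete distribution with Float32 parameters.
dist = KernelNormal(0.0f0, 1.0f0)

# Normalized log density at a point.
ℓ = logdensityof(dist, 0.5f0)

# Draw 100 samples; the array type follows the RNG (Array for the CPU default).
rng = Random.default_rng()
x = rand(rng, dist, 100)

# Fill a preallocated array in place.
A = Array{Float32}(undef, 100)
rand!(rng, dist, A)

# The number format is encoded in the type.
eltype(typeof(dist)) == Float32 # true
```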
The interface requires the following to be implemented (a sketch follows this list):
- Bijectors.bijector(d): Bijector of the distribution
- rand_kernel(rng, dist::MyKernelDistribution{T})::T: generate a single random number from the distribution
- Distributions.logpdf(dist::MyKernelDistribution{T}, x)::T: evaluate the normalized logdensity
- Base.maximum(d), Base.minimum(d), Distributions.insupport(d): determine the support of the distribution
- Distributions.logcdf(d, x), Distributions.invlogcdf(d, x): support for Truncated{D}
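As an illustration, the following is a minimal sketch of a custom exponential distribution implementing this interface. The type name MyKernelExponential is hypothetical, and I assume rand_kernel is extended by qualifying it with the package module:

```julia
using KernelDistributions
using Bijectors, Distributions, Random

# Hypothetical isbits exponential distribution with rate λ.
struct MyKernelExponential{T<:Real} <: AbstractKernelDistribution{T,Continuous}
    λ::T
end

# Single draw via inverse-transform sampling, strongly typed to T.
KernelDistributions.rand_kernel(rng::AbstractRNG, d::MyKernelExponential{T}) where {T} =
    -log(one(T) - rand(rng, T)) / d.λ

# Normalized logdensity: log(λ) - λ * x on the support.
Distributions.logpdf(d::MyKernelExponential{T}, x) where {T} =
    insupport(d, x) ? log(d.λ) - d.λ * T(x) : typemin(T)

# Support of the distribution.
Base.minimum(::MyKernelExponential{T}) where {T} = zero(T)
Base.maximum(::MyKernelExponential{T}) where {T} = typemax(T)
Distributions.insupport(d::MyKernelExponential, x) = x >= 0

# Reuse the bijector of the analogous Distributions.jl type.
Bijectors.bijector(::MyKernelExponential) = bijector(Exponential())

# logcdf and its inverse enable Truncated{MyKernelExponential}.
Distributions.logcdf(d::MyKernelExponential{T}, x) where {T} = log1p(-exp(-d.λ * T(x)))
Distributions.invlogcdf(d::MyKernelExponential{T}, ℓ) where {T} = -log1p(-exp(T(ℓ))) / d.λ
```

Since the only field is a plain number, instances are isbitstype and can be passed to GPU kernels directly.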
Most of the time Float64 precision is not required, especially for GPU computations. Thus, I default to Float32, mostly for memory capacity reasons.
KernelDistributions.AbstractKernelDistribution — Type

AbstractKernelDistribution{T,S<:ValueSupport} <: UnivariateDistribution{S}
Overrides the following behaviors of Distributions.jl (see the example after this list):
- logdensityof broadcasts logpdf
- bijector for an array of distributions broadcasts bijector
- Arrays are generated RNG specific (default: Array, CUDA.RNG: CuArray) and filled via broadcasting
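A sketch of these overrides, again using the hypothetical KernelNormal from above; the exact argument shapes of the broadcasting overrides are my reading of the description, and the last line requires a functional CUDA device:

```julia
using KernelDistributions
using Bijectors, CUDA, DensityInterface, Random

dist = KernelNormal(0.0f0, 1.0f0)

# logdensityof broadcasts logpdf over an array of values (assumed shape).
ℓ = logdensityof(dist, [0.5f0, 1.0f0, 1.5f0])

# bijector for an array of distributions broadcasts bijector (assumed shape).
dists = [KernelNormal(Float32(μ), 1.0f0) for μ in 1:3]
bs = bijector(dists)

# The RNG determines the generated array type and where sampling runs.
x_cpu = rand(Random.default_rng(), dist, 100) # Array{Float32}
x_gpu = rand(CUDA.RNG(), dist, 100)           # CuArray{Float32}
```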
License: BSD-3-Clause (https://opensource.org/licenses/BSD-3-Clause)
Copyright (c) 2022, Institute of Automatic Control - RWTH Aachen University. All rights reserved.