- About
- Prerequisites
- Plugins
- Device Plugins Operator
- Demos
- Developers
- Running e2e Tests
- Supported Kubernetes versions
- Related code
This repository contains a framework for developing plugins for the Kubernetes device plugins framework, along with a number of device plugin implementations utilising that framework.
Prerequisites for building and running these device plugins include:
- Appropriate hardware
- A fully configured Kubernetes cluster
- A working Go environment, version 1.13 or later
The below sections detail existing plugins developed using the framework.
The GPU device plugin supports Intel GVT-d device passthrough and acceleration using GPUs of the following hardware families:
- Integrated GPUs within Intel Core processors
- Intel Xeon processors
- Intel Visual Compute Accelerator (Intel VCA)
The demo subdirectory contains both a GPU plugin demo video and code for an OpenCL FFT demo.
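To consume the resources the GPU plugin advertises, a workload requests them like any other Kubernetes extended resource. A minimal sketch (the `gpu.intel.com/i915` resource name is the one commonly advertised by this plugin; check the GPU plugin's own documentation for the authoritative name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo
spec:
  containers:
  - name: gpu-demo
    image: ubuntu:20.04
    command: ["sleep", "infinity"]
    resources:
      limits:
        gpu.intel.com/i915: 1   # assumed resource name advertised by the GPU plugin
```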
The FPGA device plugin supports FPGA passthrough for the following hardware:
- Intel Arria 10
- Intel Stratix 10
The FPGA plugin comes in three parts:
- the device plugin
- the admission controller
- the CRI-O prestart hook
Refer to each individual sub-component's documentation for more details. Brief overviews of the sub-components are below.
The demo subdirectory contains a video showing deployment and use of the FPGA plugin. Sources relating to the demo can be found in the opae-nlb-demo subdirectory.
The FPGA device plugin is responsible for discovering and reporting FPGA devices to kubelet.
The FPGA admission controller webhook is responsible for performing mapping from user-friendly function IDs to the Interface ID and Bitstream ID that are required for FPGA programming. It also implements access control by namespacing FPGA configuration information.
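For illustration, such a mapping is typically expressed as a namespaced custom resource along these lines (a hedged sketch: the API version, kind, field names, and ID values here are illustrative assumptions; see the admission controller's documentation for the real schema):

```yaml
apiVersion: fpga.intel.com/v2
kind: AcceleratorFunction
metadata:
  name: arria10-nlb0
  namespace: mynamespace
spec:
  afuId: d8424dc4a4a3c413f89e433683f9040b       # illustrative function/bitstream ID
  interfaceId: 69528db6eb31577a8c3668f9faa081f6 # illustrative interface ID
  mode: af
```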
The FPGA prestart CRI-O hook performs discovery of the requested FPGA function bitstream and programs FPGA devices based on the environment variables in the workload description.
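Putting the pieces together, a workload simply requests an FPGA function resource; the webhook and prestart hook handle the ID mapping and programming behind the scenes. A sketch, with an assumed resource and image name:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fpga-demo
spec:
  containers:
  - name: fpga-demo
    image: opae-nlb-demo   # hypothetical image built from the demo sources
    resources:
      limits:
        fpga.intel.com/arria10-nlb0: 1   # assumed user-friendly function resource name
```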
QAT device plugin
The QAT device plugin supports Intel QuickAssist Technology (QAT) adapters, and includes code showing deployment via DPDK.
The demo subdirectory includes details of both a QAT DPDK demo and a QAT OpenSSL demo. Source for the OpenSSL demo can be found in the relevant subdirectory.
Details for integrating the QAT device plugin into Kata Containers can be found in the Kata Containers documentation repository.
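For reference, a workload consumes QAT devices by requesting the plugin's extended resource. A sketch (the `qat.intel.com/generic` resource name is an assumption based on the plugin's usual default; verify against the QAT plugin's documentation):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qat-dpdk-demo
spec:
  containers:
  - name: qat-dpdk-demo
    image: crypto-perf   # hypothetical image name from the DPDK demo
    resources:
      limits:
        qat.intel.com/generic: 1
```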
The VPU device plugin supports the Intel VCAC-A card (https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/media-analytics-vcac-a-accelerator-card-by-celestica-datasheet.pdf). The card has:
- 1 Intel Core i3-7100U processor
- 12 MyriadX VPUs
- 8GB DDR4 memory
The demo subdirectory includes details of an OpenVINO deployment and use of the VPU plugin. Sources can be found in the openvino-demo subdirectory.
Currently the operator has limited support for the QAT and GPU device plugins: it validates container image references and extends reported statuses.
To run an operator instance in a container, deploy cert-manager first and then the operator:
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.0/cert-manager.yaml
$ make deploy-operator
Then deploy your device plugin by applying its custom resource, e.g. GpuDevicePlugin, with
$ kubectl apply -f ./deployments/operator/samples/deviceplugin_v1_gpudeviceplugin.yaml
Observe it is up and running:
$ kubectl get GpuDevicePlugin
NAME                     DESIRED   READY   NODE SELECTOR   AGE
gpudeviceplugin-sample   1         1                       5s
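For context, the applied sample custom resource looks roughly like the following (a sketch: the exact fields and image tag come from the file under deployments/operator/samples and may differ):

```yaml
apiVersion: deviceplugin.intel.com/v1
kind: GpuDevicePlugin
metadata:
  name: gpudeviceplugin-sample
spec:
  image: intel/intel-gpu-plugin:0.18.0   # assumed tag; use the one from the sample file
  sharedDevNum: 1
  logLevel: 4
```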
The demo subdirectory contains a number of demonstrations for a variety of the available plugins.
For information on how to develop a new plugin using the framework, see the Developers Guide and the code in the device plugins pkg directory.
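At its core, a plugin built on the framework implements a scanner that discovers devices and reports them to the plugin manager. The following self-contained Go sketch illustrates that pattern; the `DeviceInfo` and `Notifier` types here are simplified stand-ins for the framework's actual API in the pkg/deviceplugin directory, not the real interfaces:

```go
package main

import "fmt"

// DeviceInfo is a simplified stand-in for the framework's device
// description: a health state plus the device nodes to expose.
type DeviceInfo struct {
	State string
	Nodes []string
}

// Notifier is a simplified stand-in for the framework's update channel:
// the manager receives device sets keyed by resource name, then device ID.
type Notifier interface {
	Notify(devices map[string]map[string]DeviceInfo)
}

// examplePlugin "discovers" devices from a fixed path list, standing in
// for a real sysfs/devfs scan, and reports them through the notifier.
type examplePlugin struct {
	devicePaths []string
}

func (p *examplePlugin) Scan(n Notifier) error {
	devices := make(map[string]DeviceInfo)
	for i, path := range p.devicePaths {
		id := fmt.Sprintf("card%d", i)
		devices[id] = DeviceInfo{State: "Healthy", Nodes: []string{path}}
	}
	n.Notify(map[string]map[string]DeviceInfo{"example.com/device": devices})
	return nil
}

// mapNotifier records the last reported device set, standing in for the
// framework's manager, which would forward updates to kubelet.
type mapNotifier struct {
	got map[string]map[string]DeviceInfo
}

func (m *mapNotifier) Notify(devices map[string]map[string]DeviceInfo) {
	m.got = devices
}

func main() {
	p := &examplePlugin{devicePaths: []string{"/dev/dri/card0", "/dev/dri/card1"}}
	n := &mapNotifier{}
	if err := p.Scan(n); err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	for resource, devs := range n.got {
		fmt.Printf("%s: %d device(s)\n", resource, len(devs))
	}
	// prints "example.com/device: 2 device(s)"
}
```

A real plugin would replace the fixed path list with actual device discovery and let the framework's manager serve the kubelet device plugin gRPC API.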
Currently the e2e tests require a Kubernetes cluster already configured on nodes with the hardware required by the device plugins. All the container images with the executables under test must also be available in the cluster. Once these two conditions are satisfied, run the tests with
$ go test -v ./test/e2e/...
If you want to run only certain tests, e.g. the QAT ones, run
$ go test -v ./test/e2e/... -args -ginkgo.focus "QAT"
If you need to specify a path to your custom kubeconfig containing embedded authentication info, then add the -kubeconfig argument:
$ go test -v ./test/e2e/... -args -kubeconfig /path/to/kubeconfig
The full list of available options can be obtained with
$ go test ./test/e2e/... -args -help
It is also possible to run the tests that don't depend on hardware without a pre-configured Kubernetes cluster. Just make sure you have Kind installed on your host and run
$ make test-with-kind
The controller-runtime library provides a package for integration testing by starting a local control plane. The package is called envtest. The operator uses this package for its integration testing.
Please have a look at envtest's documentation to set it up properly. Basically, you just need the etcd and kube-apiserver binaries available on your host. By default they are expected to be located at /usr/local/kubebuilder/bin, but you can store them anywhere by setting the KUBEBUILDER_ASSETS environment variable. So, given you have the binaries copied to ${HOME}/work/kubebuilder-assets, to run the tests just enter
$ KUBEBUILDER_ASSETS=${HOME}/work/kubebuilder-assets make envtest
Releases are made under the GitHub releases area. Supported releases and matching Kubernetes versions are listed below:
| Branch | Kubernetes branch/version |
|---|---|
| release-0.18 | Kubernetes 1.18 branch v1.18.x |
| release-0.17 | Kubernetes 1.17 branch v1.17.x |
| release-0.15 | Kubernetes 1.15 branch v1.15.x |
| release-0.11 | Kubernetes 1.11 branch v1.11.x |
A related Intel SRIOV network device plugin can be found in this repository.