diff --git a/doc/ref_arch/kubernetes/chapters/chapter01.rst b/doc/ref_arch/kubernetes/chapters/chapter01.rst
index d034641..ea985ff 100644
--- a/doc/ref_arch/kubernetes/chapters/chapter01.rst
+++ b/doc/ref_arch/kubernetes/chapters/chapter01.rst
@@ -4,30 +4,28 @@ Introduction
 Overview
 --------
 
-The objective of this Reference Architecture (RA) is to develop a usable Kubernetes-based platform for the Telco
-industry. The RA will be based on the standard Kubernetes platform wherever possible. This Reference Architecture
-for Kubernetes will describe the high-level system components and their interactions, taking the goals and requirements
+The objective of this Reference Architecture (RA) is to develop a usable Kubernetes-based platform for the Telecom
+industry. The RA is based on the standard Kubernetes platform wherever possible. This Reference Architecture
+for Kubernetes describes the high-level system components and their interactions, taking the goals and requirements
 from the Cloud Infrastructure Reference Model :cite:p:`refmodel` (RM) and mapping them to Kubernetes (and related)
 components. This document needs to be sufficiently detailed and robust such that it can be used to guide the production
-deployment of Kubernetes within an operator, whilst being flexible enough to evolve with and remain aligned with the
-wider Kubernetes ecosystem outside of Telco.
+deployment of Kubernetes within a network operator, whilst being flexible enough to evolve with and remain aligned with
+the wider Kubernetes ecosystem outside of Telecom.
 
 To set this in context, it makes sense to start with the high-level definition and understanding of Kubernetes.
-Kubernetes :cite:p:`kubernetes` is a "portable, extendable, open-source platform for managing containerised
-workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing
-ecosystem. 
Kubernetes services, support, and tools are widely available" :cite:p:`whatiskubernetes`.
-Kubernetes is developed as an
-open source project in the `kubernetes` :cite:p:`k8srepo` repository of GitHub.
-
-To assist with the goal of creating a reference architecture that will support Telco workloads, but at the same time
-leverage the work that already has been completed in the Kubernetes community, RA2 will take an
-"RA2 Razor" approach to build the foundation. This can be
-explained along the lines of "if something is useful for non-Telco workloads, we will not include it only for Telco
-workloads". For example, start the Reference Architecture from a vanilla Kubernetes (say, v1.16) feature set, then
-provide clear evidence that a functional requirement cannot be met by that system (say, multi-NIC support), only then
-the RA would add the least invasive, Kubernetes-community aligned extension (say, Multus) to fill the gap. If there are
-still gaps that cannot be filled by standard Kubernetes community technologies or extensions then the RA will concisely
-document the requirement in the
+Kubernetes :cite:p:`kubernetes` is a "portable, extensible, open source platform for managing containerised
+workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly
+growing ecosystem. Kubernetes services, support, and tools are widely available" :cite:p:`whatiskubernetes`.
+Kubernetes is developed as an open source project in the `kubernetes` :cite:p:`k8srepo` repository on GitHub.
+
+To assist with the goal of creating a reference architecture that will support Telecom workloads, but at the same time
+leverage the work that already has been completed in the Kubernetes community, RA2 will take an "RA2 Razor" approach to
+build the foundation. This can be explained along the lines of "if something is useful for non-Telecom workloads, we
+will not include it only for Telecom workloads". 
For example, start the Reference Architecture from a vanilla Kubernetes +(say, v1.31) feature set, then provide clear evidence that a functional requirement cannot be met by that system +(say, multi-NIC support), only then the RA would add the least invasive, Kubernetes-community aligned extension +(say, Multus) to fill the gap. If there are still gaps that cannot be filled by standard Kubernetes community +technologies or extensions then the RA will concisely record the requirement in the :ref:`chapters/chapter07:introduction to gaps, innovation, and development` chapter of this document and approach the relevant project maintainers with a request to add this functionality into the feature set. @@ -37,7 +35,7 @@ Kubernetes-based Network Function workloads, and lifecycle management of Kuberne community. The intention is to expand as much of the existing test frameworks to be used for the verification and conformance testing of Kubernetes-based workloads, and Kubernetes cluster lifecycle management. -Required component versions +Required Component Versions ~~~~~~~~~~~~~~~~~~~~~~~~~~~ ========== =================== @@ -49,32 +47,24 @@ Kubernetes 1.31 Principles ~~~~~~~~~~ -Architectural principles +Architectural Principles ^^^^^^^^^^^^^^^^^^^^^^^^ This Reference Architecture conforms with the Anuket principles: -1. **Open-source preference:** for building Cloud Infrastructure - solutions, components and tools, using open-source technology. -2. **Open APIs:** to enable interoperability, component - substitution, and minimise integration efforts. -3. **Separation of concerns:** to promote lifecycle independence of - different architectural layers and modules (e.g., disaggregation of - software from hardware). -4. **Automated lifecycle management:** to minimise the - end-to-end lifecycle costs, maintenance downtime (target zero +1. **Open source preference:** for building Cloud Infrastructure solutions, components and tools, using open source + technology. +2. 
**Open APIs:** to enable interoperability and component substitution, and to minimise integration efforts.
+3. **Separation of concerns:** to promote lifecycle independence of different architectural layers and modules
+   (e.g., disaggregation of software from hardware).
+4. **Automated lifecycle management:** to minimise the end-to-end lifecycle costs, maintenance downtime (target zero
    downtime), and errors resulting from manual processes.
-5. **Automated scalability:** of workloads to minimise costs and
-   operational impacts.
-6. **Automated closed loop assurance:** for fault resolution,
-   simplification, and cost reduction of cloud operations.
-7. **Cloud nativeness:** to optimise the utilisation of resources
-   and enable operational efficiencies.
-8. **Security compliance:** to ensure the architecture follows
-   the industry best security practices and is at all levels compliant
-   to relevant security regulations.
-9. **Resilience and Availability:** to withstand
-   Single Point of Failure.
+5. **Automated scalability:** of workloads to minimise costs and operational impacts.
+6. **Automated closed loop assurance:** for fault resolution, simplification, and cost reduction of cloud operations.
+7. **Cloud nativeness:** to optimise the utilisation of resources and enable operational efficiencies.
+8. **Security compliance:** to ensure the architecture follows industry best security practices and is compliant at
+   all levels with relevant security regulations.
+9. **Resilience and Availability:** to withstand a single point of failure.
 
 Cloud Native Principles
 ^^^^^^^^^^^^^^^^^^^^^^^
@@ -232,10 +222,11 @@ Definitions
      of systems to focus attention on topics of greater importance or general concepts. It can be the result of
      decoupling.
    * - Anuket
-     - A LFN open-source project developing open reference infrastructure models, architectures, tools, and programs.
+     - A Linux Foundation Networking (LFN) open source project developing open reference infrastructure models,
+       architectures, tools, and programs.
   * - CaaS
       Containers as a Service
-     - A Platform suitable to host and run Containerised workloads, such as Kubernetes.
-       Instances of CaaS Platforms are known as **CaaS Clusters**.
+     - A Platform suitable for hosting and running Containerised workloads, such as Kubernetes. Instances of CaaS
+       Platforms are known as **CaaS Clusters**.
   * - CaaS Manager
     - A management plane function that manages the lifecycle (instantiation, scaling, healing, etc.) of one or more
       CaaS instances, including communication with VIM for control plane and node lifecycle management.
@@ -243,7 +234,7 @@ Definitions
     - A generic term covering **NFVI**, **IaaS** and **CaaS** capabilities - essentially the infrastructure on which
       a **Workload** can be executed. **NFVI**, **IaaS** and **CaaS** layers can be built on top of each other. In
       case of CaaS some cloud infrastructure features (e.g.: HW management or multitenancy) are implemented by using an
-      underlying *IaaS** layer.
+      underlying **IaaS** layer.
   * - Cloud Infrastructure Hardware Profile
     - Defines the behaviour, capabilities, configuration, and metrics provided by a cloud infrastructure hardware
       layer resources available for the workloads.
@@ -293,10 +284,10 @@ Definitions
     - External networks provide network connectivity for a cloud infrastructure tenant to resources outside of the
       tenant space.
   * - Fluentd
-    - An open-source data collector for unified logging layer, which allows data collection and consumption for better
+    - An open source data collector for a unified logging layer, which allows data collection and consumption for better
      use and understanding of data. **Fluentd** is a CNCF graduated project.
   * - Functest
-    - An open-source project part of Anuket LFN project.
+    - An open source project that is part of the Anuket LFN project. 
It addresses functional testing with a collection of state-of-the-art virtual infrastructure test suites, including automatic VNF testing. * - Hardware resources - Compute, storage and network hardware resources on which the cloud infrastructure platform software, virtual @@ -316,9 +307,9 @@ Definitions - Is a virtual compute resource, in a known state such as running or suspended, that can be used like a physical server. It can be used to specify VM Instance or Container Instance. * - Kibana - - An open-source data visualisation system. + - An open source data visualisation system. * - Kubernetes - - An open-source system for automating deployment, scaling, and management of containerised applications. + - An open source system for automating deployment, scaling, and management of containerised applications. * - Kubernetes Cluster - A set of machines, called nodes (either *workers* or *control plane*), that run containerised applications managed by Kubernetes. @@ -365,8 +356,8 @@ Definitions * - Open Platform for NFV (OPNFV) - A collaborative project under the Linux Foundation. OPNFV is now part of the LFN Anuket project. It aims to implement, test, and deploy tools for conformance and performance of NFV infrastructure. - * - OPNFV Verification Program (OVP) - - An open-source, community-led compliance and verification program aiming to demonstrate the readiness and + * - OPNFV Verification Program (OVP) / Anuket Assured + - An open source, community-led compliance and verification program aiming to demonstrate the readiness and availability of commercial NFV products and services using OPNFV and ONAP components. * - Platform - A cloud capabilities type in which the cloud service user can deploy, manage and run customer-created or @@ -379,7 +370,7 @@ Definitions typically set up to run a single primary container. It can also run optional sidecar containers that add supplementary features like logging. 
* - Prometheus - - An open-source monitoring and alerting system. + - An open source monitoring and alerting system. * - Quota - An imposed upper limit on specific types of resources, usually used to prevent excessive resource consumption by a given consumer (tenant, VM, container). diff --git a/doc/ref_arch/kubernetes/chapters/chapter02.rst b/doc/ref_arch/kubernetes/chapters/chapter02.rst index a38f6c1..369faf7 100644 --- a/doc/ref_arch/kubernetes/chapters/chapter02.rst +++ b/doc/ref_arch/kubernetes/chapters/chapter02.rst @@ -8,7 +8,7 @@ This chapter will specialise the requirements defined in the overall Reference M requirements. Additional, RA2-specific, entries are included in section :ref:`chapters/chapter02:kubernetes architecture requirements`. -Key word definitions +Key Word Definitions -------------------- The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and @@ -45,31 +45,31 @@ Cloud Infrastructure Software Profile Capabilities - Specification Reference * - Exposed Infrastructure Capabilities - e.cap.001 - - Max number of vCPU that can be assigned to a single pod by the Cloud Infrastructure + - Max number of vCPU that can be assigned to a single Pod by the Cloud Infrastructure - At least 16 - At least 16 - ra2.ch.011 * - Exposed Infrastructure Capabilities - e.cap.002 - - Max memory in MB that can be assigned to a single pod by the Cloud Infrastructure + - Max memory in MB that can be assigned to a single Pod by the Cloud Infrastructure - at least 32 GB - at least 32 GB - ra2.ch.012 * - Exposed Infrastructure Capabilities - e.cap.003 - - Max storage in GB that can be assigned to a single pod by the Cloud Infrastructure + - Max storage in GB that can be assigned to a single Pod by the Cloud Infrastructure - at least 320 GB - at least 320 GB - ra2.ch.010 * - Exposed Infrastructure Capabilities - e.cap.004 - - Max number of connection points that can be assigned to a single pod by the Cloud 
Infrastructure + - Max number of connection points that can be assigned to a single Pod by the Cloud Infrastructure - 6 - 6 - ra2.ntw.003 * - Exposed Infrastructure Capabilities - e.cap.005 - - Max storage in GB that can be attached / mounted to pod by the Cloud Infrastructure + - Max storage in GB that can be attached / mounted to Pod by the Cloud Infrastructure - Up to 16TB (1) - Up to 16TB (1) - N/A @@ -196,7 +196,7 @@ Cloud Infrastructure Software Profile Capabilities - N/A * - Internal Infrastructure Capabilities - i.pm.002 - - Monitor pod CPU usage, per nanosecond + - Monitor Pod CPU usage, per nanosecond - Must support - Must support - N/A @@ -208,7 +208,7 @@ Cloud Infrastructure Software Profile Capabilities - N/A * - Internal Infrastructure Capabilities - i.pm.004 - - Monitor pod CPU utilisation + - Monitor Pod CPU utilisation - Must support - Must support - N/A @@ -244,8 +244,8 @@ Cloud Infrastructure Software Profile Capabilities Virtual Network Interface Specifications ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Note: The required number of connection points to a pod is described in ``e.cap.004`` above. This section describes the - required bandwidth of those connection points. + Note: The required number of connection points to a Pod is described in ``e.cap.004`` above. This section describes + the required bandwidth of those connection points. .. list-table:: Reference Model Requirements: Network Interface Specifications :widths: 10 30 30 10 10 10 @@ -422,7 +422,7 @@ Cloud Infrastructure Software Profile Requirements Virtual Networking -**(1)** Might have other interfaces (such as SR-IOV VFs to be directly passed to a VM or a pod) or NIC-specific drivers +**(1)** Might have other interfaces (such as SR-IOV VFs to be directly passed to a VM or a Pod) or NIC-specific drivers on Kubernetes nodes. 
**(2)** In Kubernetes based infrastructures network separation is possible without an overlay (e.g.: with IPVLAN) @@ -539,7 +539,7 @@ Cloud Infrastructure Hardware Profile Requirements Edge Cloud Infrastructure Hardware Profile Requirements ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the case of Telco Edge Cloud Deployments, hardware requirements can differ from the above to account for +In the case of Telecom Edge Cloud Deployments, hardware requirements can differ from the above to account for environmental and other constraints. The Reference Model :cite:p:`refmodel` includes considerations specific to deployments at the edge of the network. The infrastructure profiles "Basic" and @@ -556,26 +556,26 @@ of requirements of the above table are relaxed as follows: - Requirement for Basic Profile - Requirement for High-Performance Profile - Specification Reference - * - Telco Edge Cloud: Infrastructure Profiles + * - Telecom Edge Cloud: Infrastructure Profiles - infra.hw.cpu.cfg.001 - sockets - - - - * - Telco Edge Cloud: Infrastructure Profiles + * - Telecom Edge Cloud: Infrastructure Profiles - infra.hw.cpu.cfg.002 - Minimum number of Cores per CPU - 1 - 1 - ra2.ch.008 - * - Telco Edge Cloud: Infrastructure Profiles + * - Telecom Edge Cloud: Infrastructure Profiles - infra.hw.cpu.cfg.003 - NUMA alignment - N - Y (1) - ra2.ch.008 -Telco Edge Cloud: Infrastructure Profiles. +Telecom Edge Cloud: Infrastructure Profiles. **(1)** immaterial if the number of CPU sockets (infra.hw.cpu.cfg.001) is 1. 
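As an illustrative aside, the relaxed Telecom Edge rows above can be sketched as a small validation helper. This is not part of the RA2 specification: the helper name and dictionary keys below are assumptions made for the example; only the requirement IDs and their relaxations come from the table.

```python
# Illustrative sketch only (not part of RA2): checking an edge node's hardware
# facts against the relaxed Telecom Edge profile rows above. The dictionary
# keys and the helper name are hypothetical.

def check_edge_profile(hw, profile):
    """Return the requirement IDs that the given edge node violates."""
    violations = []
    # infra.hw.cpu.cfg.002: minimum number of cores per CPU is relaxed to 1
    if hw["cores_per_cpu"] < 1:
        violations.append("infra.hw.cpu.cfg.002")
    # infra.hw.cpu.cfg.003: NUMA alignment is required only for the
    # high-performance profile, and is immaterial with a single CPU socket (1)
    if (profile == "high-performance" and hw["sockets"] > 1
            and not hw["numa_aligned"]):
        violations.append("infra.hw.cpu.cfg.003")
    return violations

single_socket_node = {"sockets": 1, "cores_per_cpu": 4, "numa_aligned": False}
print(check_edge_profile(single_socket_node, "high-performance"))  # → []
```

With a single socket the NUMA-alignment row never fires, which mirrors footnote (1) above.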
@@ -658,7 +658,7 @@ Cloud Infrastructure Monitoring Capabilities
      - N/A
    * - Internal Performance Measurement Capabilities
      - i.pm.002
-     - Capability to monitor per pod CPU (Virtual compute resource) usage (in ns)
+     - Capability to monitor per Pod CPU (Virtual compute resource) usage (in ns)
      - Must support
      - N/A
    * - Internal Performance Measurement Capabilities
@@ -668,7 +668,7 @@ Cloud Infrastructure Monitoring Capabilities
      - N/A
    * - Internal Performance Measurement Capabilities
      - i.pm.004
-     - Capability to monitor per pod CPU (Virtual compute resource) usage (in percentage)
+     - Capability to monitor per Pod CPU (Virtual compute resource) usage (in percentage)
      - Must support
      - N/A
    * - Internal Performance Measurement Capabilities
@@ -938,11 +938,13 @@ Cloud Infrastructure Security Requirements
      -
    * - Platform and Access
      - sec.sys.020
-     - The Cloud Infrastructure architecture **should** rely on Zero Trust principles to build a secure by design environment.
+     - The Cloud Infrastructure architecture **should** rely on Zero Trust principles to build a secure-by-design
+       environment.
      -
    * - Confidentiality and Integrity
      - sec.ci.001
-     - The Platform **must** support Confidentiality and Integrity of data at rest and in-transit. by design environment.
+     - The Platform **must** support Confidentiality and Integrity of data at rest and
+       in transit.
      - :ref:`chapters/chapter05:securing the kubernetes orchestrator`
    * - Confidentiality and Integrity
      - sec.ci.002
@@ -1256,13 +1258,13 @@ Cloud Infrastructure Security Requirements
      -
    * - IaaC - Secure Code Stage Requirements
      - sec.code.001
-     - SAST -Static Application Security Testing **must** be applied during Secure Coding stage triggered by Pull,
+     - Static Application Security Testing (SAST) **must** be applied during Secure Coding stage triggered by Pull,
       Clone or Comment trigger. Security testing that analyses application source code for software vulnerabilities
       and gaps against best practices. 
Example: open source OWASP range of tools. - * - IaaC - Secure Code Stage Requirements - sec.code.002 - - SCA - Software Composition Analysis **should** be applied during Secure Coding stage triggered by Pull, + - Software Composition Analysis (SCA) **should** be applied during Secure Coding stage triggered by Pull, Clone or Comment trigger. Security testing that analyses application source code or compiled code for software components with known vulnerabilities. Example: open source OWASP range of tools. - @@ -1278,12 +1280,12 @@ Cloud Infrastructure Security Requirements - * - IaaC - Secure Code Stage Requirements - sec.code.005 - - SAST of Source Code Repo **should** be performed during Secure Coding stage triggered by Developer Code trigger. - Continuous delivery pre-deployment: scanning prior to deployment. + - SAST of Source Code Repository **should** be performed during Secure Coding stage triggered by Developer Code + trigger. Continuous delivery pre-deployment: scanning prior to deployment. - * - IaaC - Continuous Build, Integration and Testing Stage Requirements - sec.bld.001 - - SAST -Static Application Security Testing **should** be applied during the Continuous Build, Integration and + - Static Application Security Testing (SAST) **should** be applied during the Continuous Build, Integration and Testing stage triggered by Build and Integrate trigger. Example: open source OWASP range of tools. - * - IaaC - Continuous Build, Integration and Testing Stage Requirements @@ -1299,7 +1301,7 @@ Cloud Infrastructure Security Requirements - * - IaaC - Continuous Build, Integration and Testing Stage Requirements - sec.bld.004 - - DAST - Dynamic Application Security Testing **should** be applied during the Continuous Build, Integration + - Dynamic Application Security Testing (DAST) **should** be applied during the Continuous Build, Integration and Testing stage triggered by Stage & Test trigger. 
Security testing that analyses a running application by exercising application functionality and detecting vulnerabilities based on application behaviour and response. Example: OWASP ZAP. @@ -1313,7 +1315,7 @@ Cloud Infrastructure Security Requirements - * - IaaC - Continuous Build, Integration and Testing Stage Requirements - sec.bld.006 - - IAST - Interactive Application Security Testing **should** be applied during the Continuous Build, Integration + - Interactive Application Security Testing (IAST) **should** be applied during the Continuous Build, Integration and Testing stage triggered by Stage & Test trigger. Software component deployed with an application that assesses application behaviour and detects presence of vulnerabilities on an application being exercised in realistic testing scenarios. Example: Contrast Community Edition. @@ -1321,7 +1323,7 @@ Cloud Infrastructure Security Requirements * - IaaC - Continuous Delivery and Deployment Stage Requirements - sec.del.001 - Image Scan **must** be applied during the Continuous Delivery and Deployment stage triggered by - Publish to Artifact and Image Repository trigger. Example: GitLab uses the open-source Clair engine for + Publish to Artifact and Image Repository trigger. Example: GitLab uses the open source Clair engine for container image scanning. - * - IaaC - Continuous Delivery and Deployment Stage Requirements @@ -1351,7 +1353,7 @@ Cloud Infrastructure Security Requirements - * - IaaC - Runtime Defence and Monitoring Requirements - sec.run.002 - - RASP - Runtime Application Self- Protection **should** be continuously applied during the Runtime Defence + - Runtime Application Self-Protection (RASP) **should** be continuously applied during the Runtime Defence and Monitoring stage. Security technology deployed within the target application in production for detecting, alerting, and blocking attacks. - @@ -1514,13 +1516,13 @@ machines or containers. 
* - inf.com.01 - Infrastructure - Compute - - The Architecture must provide compute resources for pods. + - The Architecture must provide compute resources for Pods. - ra2.k8s.004 * - inf.stg.01 - Infrastructure - Storage - The Architecture must support the ability for an operator to choose whether or - not to deploy persistent storage for pods. + not to deploy persistent storage for Pods. - ra2.stg.004 * - inf.ntw.01 - Infrastructure @@ -1587,7 +1589,7 @@ machines or containers. * - inf.ntw.14 - Infrastructure - Network - - The platform must allow NAT-less traffic (i.e., exposing the pod IP address directly to the + - The platform must allow NAT-less traffic (i.e., exposing the Pod IP address directly to the outside), allowing source and destination IP addresses to be preserved in the traffic headers from workloads to external networks. This is needed e.g. for signalling applications, using SIP and Diameter protocols. diff --git a/doc/ref_arch/kubernetes/chapters/chapter03.rst b/doc/ref_arch/kubernetes/chapters/chapter03.rst index b4b1b3b..7615282 100644 --- a/doc/ref_arch/kubernetes/chapters/chapter03.rst +++ b/doc/ref_arch/kubernetes/chapters/chapter03.rst @@ -5,15 +5,15 @@ Introduction to High Level Architecture --------------------------------------- The Anuket Reference Architecture (RA2) for Kubernetes based cloud infrastructure is intended to be an industry -standard-independent Kubernetes reference architecture that is not tied to any specific offering or distribution. +standard and independent Kubernetes reference architecture that is not tied to any specific offering or distribution. No vendor-specific enhancements are required to achieve conformance with the Anuket specifications. Conformance to these specifications can be achieved by using upstream components or features that are developed by the open source community, and conformance is ensured by successfully running the RC2 conformance testing suite. 
By using the Reference Architecture (RA2) for Kubernetes based cloud infrastructure specifications, operators can
 deploy infrastructure that will run any VNF or CNF that has successfully run on an RA2-conformant infrastructure. The
-purpose of this chapter is to outline all the components required to provide Telco-grade Kubernetes in a consistent and
-reliable way. The specification of how to setup these components is detailed in the
+purpose of this chapter is to outline all the components required to provide Telecom-grade Kubernetes in a consistent
+and reliable way. The specification of how to set up these components is detailed in the
 :ref:`chapters/chapter04:component level architecture` chapter.
 
 Kubernetes is already a well-documented and widely deployed open source project of the Cloud Native
@@ -24,8 +24,8 @@ The following chapters describe the specific features required by the Anuket Ref
 expected to be implemented.
 
 While this reference architecture provides options for modular components, such as service mesh, the focus of the
-Reference Architecture is on the abstracted interfaces and features that are required for Telco workload management and
-execution.
+Reference Architecture is on the abstracted interfaces and features that are required for Telecom workload management
+and execution.
 
 Chapter 4 of the Reference Model (RM) :cite:p:`refmodel` describes the hardware and software profiles that reflect the
 capabilities and features that the types of Cloud Infrastructure provide to the workloads.
@@ -37,7 +37,7 @@ high-performance).
 the local folder during the GSMA transformation work.
 
 .. 
figure:: ../figures/RM-ch05-sw-profile.png
-   :alt: (from RM): NFVI softwareprofiles
+   :alt: (from RM): NFVI software profiles
    :name: (from RM): NFVI software profiles
 
    (from RM): NFVI software profiles
@@ -122,14 +122,14 @@ Where low-level runtimes are used for the execution of a container within an ope
 complete high-level container runtimes are used for the general management of container images - moving them to where
 they need to be executed, unpacking them, and then passing them to the low-level runtime, which then executes the
 container. These high-level runtimes also include a comprehensive API that other components, such as Kubernetes, can
-use to interact and manage the containers. An example of this type of runtime is containerd, which provides the
+use to interact and manage the containers. An example of this type of runtime is Containerd, which provides the
 features described above, and depends on runc for execution.
 
 For Kubernetes, the important interface to consider for container management is the Kubernetes Container Runtime
 Interface (CRI). This is an interface specification for any container runtime to integrate with the control plane
 (kubelet) of a Kubernetes node. The CRI allows to decouple the kubelet from the runtime that is running in the node OS,
 allowing to
-swap container runtime if it is compliant with CRI. Examples CRI-compliant runtimes include containerd
+swap container runtime if it is compliant with CRI. Examples of CRI-compliant runtimes include Containerd
 and cri-o, which are built specifically to work with Kubernetes.
 
 To fulfill ``inf.vir.01``, the architecture should support a container runtime which provides the isolation of the
@@ -344,9 +344,9 @@ TLS Certificate Management
 
 Network functions (NFs) running in Kubernetes may require PKI TLS certificates for multiple purposes. For example,
 3GPP TS 33.501 describes how Inter-NF communications must be secured using mutual TLS and OAuth. 
-`cert-manager` :cite:p:`cert-manager` can automatically provision and manage TLS certificates in Kubernetes, in order for CNFs to use them for -TLS communications. It can request PKI certificates from issuers, ensure the certificates are valid and up-to-date, -and can renew them before their expiry. Network Functions that are deployed on Kubernetes clusters +`cert-manager` :cite:p:`cert-manager` can automatically provision and manage TLS certificates in Kubernetes, in order +for CNFs to use them for TLS communications. It can request PKI certificates from issuers, ensure the certificates are +valid and up-to-date, and can renew them before their expiry. Network Functions that are deployed on Kubernetes clusters can delegate the lifecycle management of their certificates to `cert-manager`. Example lifecycle steps are listed below: @@ -359,10 +359,10 @@ Example lifecycle steps are listed below: of “how” the certificate is obtained, since this is delegated to cert-manager. The certificate request can originate from any container in the CNF Pod- either the NFc “application”, or the service mesh (e.g. where deployed as a sidecar). -2. When it receives the certificate request, cert-manager will generate a new private key, then send a Certificate Signing - Request (CSR) to the relevant issuing CA. The CA returns the signed certificate. One of the benefits of cert-manager - is its “pluggable” architecture. It comes with built-in support for a number of issuing CA types and protocols, and - developers can easily add support for new ones. +2. When it receives the certificate request, cert-manager will generate a new private key, then send a Certificate + Signing Request (CSR) to the relevant issuing CA. The CA returns the signed certificate. One of the benefits of + cert-manager is its “pluggable” architecture. It comes with built-in support for a number of issuing CA types and + protocols, and developers can easily add support for new ones. 3. 
Once the certificate is returned by the relevant issuing CA, cert-manager stores the private key and certificate as a K8s Secret (specifically using the built-in “kubernetes.io/tls” Secret type). The Secret name is taken from the Certificate CRD. @@ -450,7 +450,8 @@ applications. This allows resources such as “GPUs, high-performance NICs, FPGA and other similar computing resources that may require vendor-specific initialization and setup” to be managed and consumed via standard interfaces. -The figure `Kubernetes Networking Architecture` below shows the main building blocks of a Kubernetes networking solution: +The figure `Kubernetes Networking Architecture` below shows the main building blocks of a Kubernetes networking +solution: - **Kubernetes Control Plane**: this is the core of a Kubernetes Cluster: the apiserver, the etcd cluster, the kube-scheduler, and the various controller-managers. The control plane (in particular the apiserver) @@ -504,7 +505,7 @@ The figure `Kubernetes Networking Architecture` below shows the main building bl they are coordinated (as required by ``inf.ntw.10``). - **Service Mesh**: The well-known service meshes are "application service meshes" - that address and interact with the application layer 7 protocols (eg.: HTTP) + that address and interact with the application layer 7 protocols (e.g.: HTTP) only. Therefore, their support is not required, as these service meshes are outside the scope of the infrastructure layer of this architecture. @@ -538,8 +539,8 @@ Kubernetes, including: Custom Resource Definitions) or an external management plane (e.g., dynamic address assignment from a VPN server). -There are several types of low latency and high throughput networks required by telco workloads: for example signalling -traffic workloads and user plane traffic workloads. 
+There are several types of low latency and high throughput networks required by Telecom workloads: for example +signalling traffic workloads and user plane traffic workloads. Networks used for signalling traffic are more demanding than what a standard overlay network can handle, but still do not need the use of user space networking. Due to the nature of the signalling protocols used, these type of networks require NAT-less communication documented in ``infra.net.cfg.003`` and will need to be served by a CNI plugin
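The NAT-less signalling networks described above are typically delivered as secondary network attachments via a CNI meta-plugin such as Multus. A minimal sketch, built as a plain data structure: the `NetworkAttachmentDefinition` kind and the `ipvlan` CNI plugin type are real upstream artefacts, but the network name, parent interface, and address below are placeholder assumptions, not RA2 values.

```python
import json

# Illustrative sketch only: the shape of a secondary-network attachment that
# a CNI meta-plugin such as Multus could provide for NAT-less signalling
# traffic (cf. inf.ntw.14 and infra.net.cfg.003). Names and addresses are
# placeholders.
nad = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "signalling-net"},
    "spec": {
        # IPVLAN puts the Pod directly on the parent interface's network,
        # so source and destination IP addresses are preserved (no NAT).
        "config": json.dumps({
            "cniVersion": "0.3.1",
            "type": "ipvlan",
            "master": "eth1",
            "ipam": {
                "type": "static",
                "addresses": [{"address": "192.0.2.10/24"}],
            },
        })
    },
}

print(nad["metadata"]["name"])  # → signalling-net
```

A Pod would then reference such an attachment through the Multus `k8s.v1.cni.cncf.io/networks` annotation, receiving an additional interface alongside its primary cluster network.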