Commit: Merge pull request #47 from aws-observability/42-existing-single-cluster-observability-pattern-with-aws-mixed-approach-services

Existing single Cluster Observability Pattern with AWS Mixed approach services
Showing 3 changed files with 149 additions and 0 deletions.
@@ -0,0 +1,8 @@
import ExistingEksMixedConstruct from '../lib/existing-eks-mixed-observability-construct';
import { configureApp, errorHandler } from '../lib/common/construct-utils';

const app = configureApp();

// Build the existing-cluster pattern; report missing cluster context through the shared error handler.
new ExistingEksMixedConstruct().buildAsync(app, 'existing-eks-mixed').catch((error) => {
    errorHandler(app, "Existing Cluster Pattern is missing information about the existing cluster: " + error);
});

...rns/existing-eks-observability-accelerators/existing-eks-mixed-observability.md (78 additions, 0 deletions)
@@ -0,0 +1,78 @@
# Existing EKS Cluster AWS Mixed Observability Accelerator

## Architecture

The following figure illustrates the architecture of the Existing EKS Cluster AWS Mixed Observability pattern, which uses AWS-native tools such as CloudWatch and X-Ray alongside open source tools such as AWS Distro for OpenTelemetry (ADOT) and Prometheus Node Exporter.

![Architecture](../images/mixed-diagram.png)

This example uses CloudWatch as the metric and log aggregation layer, while X-Ray serves as the trace aggregation layer. The open source ADOT collector gathers the metrics and traces, and Fluent Bit exports the logs to CloudWatch Logs.

In this architecture, AWS X-Ray provides a complete view of requests as they travel through your application and filters visual data across payloads, functions, traces, services, and APIs. X-Ray also lets you run analytics to gain insights from your distributed trace data.

Using CloudWatch and X-Ray as the aggregation layer provides a fully managed, scalable telemetry backend, while the open source collection tools retain their flexibility and rapid pace of development.
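
Once the pattern is deployed (see the deployment steps below), you can confirm this routing from the command line. The commands below are a minimal sketch; they assume the `/aws/eks` log group prefix used by this pattern and a GNU `date` for the timestamp arithmetic:

```bash
# Logs: Fluent Bit ships container logs to CloudWatch Logs under /aws/eks/...
aws logs describe-log-groups --log-group-name-prefix /aws/eks

# Traces: the ADOT collector forwards spans to X-Ray
aws xray get-trace-summaries \
  --start-time "$(date -d '10 minutes ago' +%s)" \
  --end-time "$(date +%s)"
```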

## Objective

This pattern adds observability on top of an existing EKS cluster, using a mix of AWS-native and open source tooling delivered as managed AWS services.

## Prerequisites

Ensure that you have installed the following tools on your machine (you can verify them with the version checks after this list):

1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
2. [kubectl](https://kubernetes.io/docs/tasks/tools/)
3. [cdk](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html#getting_started_install)
4. [npm](https://docs.npmjs.com/cli/v8/commands/npm-install)
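
A minimal sketch to confirm the tools are available on your path:

```bash
aws --version
kubectl version --client
cdk --version
npm --version
```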

You will also need:

1. Either an existing EKS cluster, or you can set up a new one with the [Single New EKS Cluster Observability Accelerator](../single-new-eks-observability-accelerators/single-new-eks-cluster.md)
2. An OpenID Connect (OIDC) provider associated with the above EKS cluster (the Single New EKS Cluster pattern takes care of this for you); you can confirm it with the check after this list
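
To confirm the OIDC provider is in place, compare the cluster's OIDC issuer with the IAM OIDC providers registered in your account. This is a minimal sketch; `my-cluster` is a placeholder for your cluster name:

```bash
# Issuer URL of the cluster's OIDC identity provider
aws eks describe-cluster --name my-cluster \
  --query "cluster.identity.oidc.issuer" --output text

# IAM OIDC providers in the account; one entry should match the issuer above
aws iam list-open-id-connect-providers
```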

## Deploying

1. Edit `~/.cdk.json` by setting the name of your existing cluster:

    ```json
    "context": {
        ...
        "existing.cluster.name": "...",
        ...
    }
    ```

2. Edit `~/.cdk.json` by setting the kubectl role name; if you used the Single New EKS Cluster Observability Accelerator to set up your cluster, the kubectl role name is printed in the deployment output on your command-line interface (CLI); see the lookup sketch after these steps:

    ```json
    "context": {
        ...
        "existing.kubectl.rolename": "...",
        ...
    }
    ```

3. Run the following command from the root of this repository to deploy the stack (you can follow its progress with the commands after these steps):

    ```bash
    make build
    make pattern existing-eks-mixed-observability deploy
    ```
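
If you created your cluster with the Single New EKS Cluster Observability Accelerator, the kubectl role name appears in that stack's CloudFormation outputs, and this pattern's own deployment can also be followed in CloudFormation. The commands below are a minimal sketch; the stack names are assumptions based on the pattern names and construct id used above, so adjust them to your environment:

```bash
# Look up the kubectl role name from the cluster stack's outputs
# (stack name "single-new-eks-cluster" is an assumption; adjust to your deployment)
aws cloudformation describe-stacks --stack-name single-new-eks-cluster \
  --query "Stacks[0].Outputs" --output table

# Follow this pattern's stack while "make pattern ... deploy" runs
# (the construct builds the stack id as "<id>-observability-accelerator")
aws cloudformation describe-stacks --stack-name existing-eks-mixed-observability-accelerator \
  --query "Stacks[0].StackStatus" --output text
```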

## Verify the resources

Please see [Single New EKS Cluster AWS Mixed Observability Accelerator](../single-new-eks-observability-accelerators/single-new-eks-mixed-observability.md).
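
For a quick check from the command line before following that guide, you can confirm the collector and log forwarding pods are running. This is a minimal sketch; `my-cluster` and the region are placeholders, and the ADOT collector name and namespace come from this pattern's construct (`adot-collector-cloudwatch` in `default`):

```bash
# Point kubectl at the existing cluster (replace the placeholder name and region)
aws eks update-kubeconfig --name my-cluster --region us-east-1

# ADOT collector pods created by the pattern
kubectl get pods -n default | grep -i adot

# Fluent Bit pods shipping logs to CloudWatch Logs
kubectl get pods -A | grep -i fluent
```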

## Teardown

You can tear down the whole CDK stack with the following command:

```bash
make pattern existing-eks-mixed-observability destroy
```

If you set up your cluster with the Single New EKS Cluster Observability Accelerator, you also need to run:

```bash
make pattern single-new-eks-cluster destroy
```
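
To confirm that teardown has completed, you can check that the corresponding CloudFormation stack is gone; a minimal sketch, assuming the stack id derived from this pattern's construct:

```bash
# This should fail with "Stack ... does not exist" once the teardown has finished
aws cloudformation describe-stacks --stack-name existing-eks-mixed-observability-accelerator
```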
@@ -0,0 +1,63 @@
import { ImportClusterProvider, utils } from '@aws-quickstart/eks-blueprints';
import * as blueprints from '@aws-quickstart/eks-blueprints';
import { cloudWatchDeploymentMode } from '@aws-quickstart/eks-blueprints';
import { ObservabilityBuilder } from '../common/observability-builder';
import * as cdk from "aws-cdk-lib";
import * as eks from 'aws-cdk-lib/aws-eks';

export default class ExistingEksMixedObservabilityConstruct {
    async buildAsync(scope: cdk.App, id: string) {
        const stackId = `${id}-observability-accelerator`;

        // Cluster name and kubectl role name are read from CDK context (see cdk.json).
        const clusterName = utils.valueFromContext(scope, "existing.cluster.name", undefined);
        const kubectlRoleName = utils.valueFromContext(scope, "existing.kubectl.rolename", undefined);

        const account = process.env.COA_ACCOUNT_ID! || process.env.CDK_DEFAULT_ACCOUNT!;
        const region = process.env.COA_AWS_REGION! || process.env.CDK_DEFAULT_REGION!;

        const sdkCluster = await blueprints.describeCluster(clusterName, region); // get cluster information using EKS APIs
        const vpcId = sdkCluster.resourcesVpcConfig?.vpcId;

        /**
         * Assumes the supplied role is registered in the target cluster for kubectl access.
         */
        const importClusterProvider = new ImportClusterProvider({
            clusterName: sdkCluster.name!,
            version: eks.KubernetesVersion.of(sdkCluster.version!),
            clusterEndpoint: sdkCluster.endpoint,
            openIdConnectProvider: blueprints.getResource(context =>
                new blueprints.LookupOpenIdConnectProvider(sdkCluster.identity!.oidc!.issuer!).provide(context)),
            clusterCertificateAuthorityData: sdkCluster.certificateAuthority?.data,
            kubectlRoleArn: blueprints.getResource(context => new blueprints.LookupRoleProvider(kubectlRoleName).provide(context)).roleArn,
            clusterSecurityGroupId: sdkCluster.resourcesVpcConfig?.clusterSecurityGroupId
        });

        // ADOT collector that ships the selected metrics to CloudWatch.
        const cloudWatchAdotAddOn = new blueprints.addons.CloudWatchAdotAddOn({
            deploymentMode: cloudWatchDeploymentMode.DEPLOYMENT,
            namespace: 'default',
            name: 'adot-collector-cloudwatch',
            metricsNameSelectors: ['apiserver_request_.*', 'container_memory_.*', 'container_threads', 'otelcol_process_.*'],
        });

        // Add-ons for the existing cluster: logs to CloudWatch Logs, metrics to CloudWatch, traces to X-Ray.
        const addOns: Array<blueprints.ClusterAddOn> = [
            new blueprints.addons.CloudWatchLogsAddon({
                logGroupPrefix: `/aws/eks/${stackId}`,
                logRetentionDays: 30
            }),
            new blueprints.addons.AdotCollectorAddOn(),
            cloudWatchAdotAddOn,
            new blueprints.addons.XrayAdotAddOn(),
        ];

        ObservabilityBuilder.builder()
            .account(account)
            .region(region)
            .addExistingClusterObservabilityBuilderAddOns()
            .clusterProvider(importClusterProvider)
            .resourceProvider(blueprints.GlobalResources.Vpc, new blueprints.VpcProvider(vpcId)) // this is required with import cluster provider
            .addOns(...addOns)
            .build(scope, stackId);
    }
}