ipv6 cluster creation issue in testsys #954

Open
wants to merge 1 commit into develop

Conversation

@zongzhidanielhu commented Dec 2, 2024

Issue number:
ipv6 variants always have an ipFamily of ipv4

Description of changes:
Changed the way clusters are created so that ipv6 variants get ipFamily=ipv6 and ipv4 variants get ipFamily=ipv4.

Testing done:

  1. After changing the codebase, built a new Docker image for bottlerocket-test-system
  2. Pushed the image to ECR
  3. Triggered the changes in BoB; the applied variant is aws-k8s-1.24
  4. Checked the cluster status in CloudFormation and confirmed the aforementioned issue is resolved

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

@zongzhidanielhu changed the title from "Added ipFamily conditions" to "ipv6 cluster creation issue in testsys" on Dec 11, 2024
@zongzhidanielhu marked this pull request as ready for review December 11, 2024 19:37
@gthao313 (Member) commented:

LGTM. Can you fix the checks by following the error messages? thanks!

… fix previously wrong ipFamily issue

This is the body of the commit: this fix addresses the issue where, when
creating clusters, an ipv6 cluster always got an ipFamily of ipv4, which is
wrong. This commit adds a new way of creating a cluster: writing a generated
YAML config file to a known path. That way we can define the ipFamily as
either ipv6 or ipv4.
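
For orientation, here is a minimal sketch of the approach the commit describes, reconstructed from the snippets in this diff (the real create_yaml returns a ProviderResult and serializes structs with serde_yaml rather than formatting strings by hand):

```rust
use std::fs;

// Sketch: derive the IP family from the cluster name and write an eksctl
// ClusterConfig YAML to a fixed path that eksctl is later invoked with.
fn create_yaml(cluster_name: &str, region: &str, version: &str) -> std::io::Result<()> {
    // The PR infers the family from the cluster name suffix;
    // eksctl's documented values are "IPv4" and "IPv6".
    let ip_family = if cluster_name.ends_with("ipv6") { "IPv6" } else { "IPv4" };
    let config = [
        "apiVersion: eksctl.io/v1alpha5".to_string(),
        "kind: ClusterConfig".to_string(),
        "metadata:".to_string(),
        format!("  name: {cluster_name}"),
        format!("  region: {region}"),
        format!("  version: \"{version}\""),
        "kubernetesNetworkConfig:".to_string(),
        format!("  ipFamily: {ip_family}"),
    ]
    .join("\n");
    fs::write("/local/cluster_config.yaml", config)
}
```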
@@ -116,6 +121,7 @@ impl AwsClients {
}
}

#[allow(dead_code)]
Contributor:

What dead code are we allowing?

}

fn create_yaml(cluster_name: &str, region: &str, version: &str) -> ProviderResult<()> {
let ip_family = if cluster_name.ends_with("ipv6") {
Contributor:

This would probably be better solved with an Enum since we know there are two valid values and only two valid values.
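
A sketch of the reviewer's suggestion; the IpFamily name and constructor below are illustrative, not code from the PR:

```rust
/// The two IP families a cluster can be created with.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum IpFamily {
    Ipv4,
    Ipv6,
}

impl IpFamily {
    /// Mirror of the `ends_with("ipv6")` check above, but producing a
    /// closed set of values instead of a free-form string.
    fn from_cluster_name(name: &str) -> Self {
        if name.ends_with("ipv6") {
            IpFamily::Ipv6
        } else {
            IpFamily::Ipv4
        }
    }
}
```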

@@ -129,6 +135,103 @@ enum ClusterConfig {
},
}

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct Config {
Contributor:

Can we get doc strings for all these structs? I think this is specifically an eksctl configuration struct, but a docstring would help. I think EksctlConfig might be a better name as well, to signal that we don't own this format; it's intended to be used specifically as input to eksctl.
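
A sketch of what the reviewer is asking for, using the suggested EksctlConfig name and illustrative doc comments (fields abbreviated to those visible in this diff):

```rust
use serde::Serialize;

/// Input configuration for `eksctl create cluster`. We don't own this
/// format: it mirrors eksctl's ClusterConfig schema and is serialized
/// to YAML before being handed to eksctl.
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct EksctlConfig {
    /// The `kubernetesNetworkConfig` section, which carries the IP family.
    kubernetes_network_config: KubernetesNetworkConfig,
}

/// The networking section of an eksctl ClusterConfig.
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct KubernetesNetworkConfig {
    /// "ipv4" or "ipv6".
    ip_family: String,
}
```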

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct KubernetesNetworkConfig {
ip_family: String,
Contributor:

If you made this type an Enum, you could use the type system to protect your values against unknown/invalid ones.
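
Combining this with the enum sketched above, the field could carry the type directly and let serde emit the exact string eksctl expects (the rename values are an assumption to verify against the eksctl schema):

```rust
use serde::Serialize;

#[derive(Serialize)]
enum IpFamily {
    // Assumed spellings; check what eksctl actually accepts here.
    #[serde(rename = "IPv4")]
    Ipv4,
    #[serde(rename = "IPv6")]
    Ipv6,
}

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct KubernetesNetworkConfig {
    ip_family: IpFamily,
}
```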

}],
};

let yaml = serde_yaml::to_string(&cluster).expect("Failed to serialize YAML");
Contributor:

Can we use .context() here to stay consistent with the rest of the codebase on error handling?
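
Concretely, following the pattern visible in the @@ -144 hunk further down (and assuming the same context extension trait and Resources type are in scope), the suggestion would look roughly like:

```rust
// Propagate the error with context instead of panicking via .expect(),
// matching the error handling used elsewhere in this file.
let yaml = serde_yaml::to_string(&cluster)
    .context(Resources::Clear, "Unable to serialize eksctl config.")?;
```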

/// The default region for the cluster.
const DEFAULT_REGION: &str = "us-west-2";
/// The default cluster version.
const DEFAULT_VERSION: &str = "1.24";
const TEST_CLUSTER_CONFIG_PATH: &str = "/local/eksctl_config.yaml";
const CLUSTER_CONFIG_PATH: &str = "/local/cluster_config.yaml";
Contributor:

Why did you create your own cluster_config.yaml file here? Do we need both this and TEST_CLUSTER_CONFIG_PATH?

@@ -194,53 +288,35 @@ impl ClusterConfig {
}

/// Create a cluster with the given config.
#[allow(clippy::let_and_return)]
Contributor:

Why did you add this? I think we want to fix the code, not disable the lint.
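
For reference, clippy::let_and_return fires when a value is bound only to be returned on the next line; the fix is to return the expression directly. A generic illustration, not the PR's code:

```rust
// Clippy's let_and_return lint flags this pattern:
fn config_path() -> String {
    let path = String::from("/local/cluster_config.yaml");
    path
}

// The fix is to return the expression directly:
fn config_path_fixed() -> String {
    String::from("/local/cluster_config.yaml")
}
```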

Self::Args {
cluster_name,
region,
version,
zones,
zones: _,
Contributor:

I don't follow this change; why did you need it? Are we no longer handling zones? Won't this break code where we have to be very explicit about which zone we are launching in when there is capacity only in a certain zone?

@@ -144,15 +247,6 @@ impl ClusterConfig {
)?)
.context(Resources::Clear, "Unable to serialize eksctl config.")?;

Contributor:

It seems like you just broke the ability to pass in a file for ClusterConfig? Can you explain why you removed this?

Author:

I didn't change this part of the code. I had added some code here earlier, but later reverted it to its original state.

};

trace!("Calling eksctl create cluster with config file");
Contributor:

nit: We probably want to indicate which configuration file we are calling with. This could be very confusing if it's using the one you generate vs. one passed in, and this log line wouldn't help the user discern which is happening.
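
A sketch of the nit, where config_file_path is a hypothetical variable standing in for whichever config file is actually used:

```rust
trace!(
    "Calling eksctl create cluster with config file {}",
    config_file_path // hypothetical: the generated or user-supplied path
);
```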
