[Feature Request] Implement tests for scs-0210-v2, scs-0214-v1 #481
We integrated sonobuoy yesterday, but for the […]
For conformance testing, the check consists of two stages:
Stage 1 has multiple strategies:
A downside of strategy B is the cost of providing these otherwise unused clusters and the manual labour required whenever the provider switches the […]. The stages can be implemented independently of each other. Furthermore, the design allows us to add other strategies later on, if needed.
I've asked the members of the Team Container Matrix chat for feedback. Independent of that, I'll continue to implement the second stage of the described design.
So for the first part, I think every testing strategy must be as distribution-agnostic as it can be, so that different k8s deployment/distribution variants can be tested, as long as they are CNCF-certified compliant k8s variants. There is still a lot of movement in k8s distribution/installer development, so it seems wise to stay agile and not be dependent on a single blessed cluster installer. So imho "Strategy A" is already out.

Another comment I have: notice that Kubernetes doesn't necessarily support "only" the last three releases, even if its release schedule says so. Currently there are in fact 4 supported releases listed at https://kubernetes.io/releases/, even though version 1.26 will be EOL on 2024-02-28 (so next week). I know this is pretty weird, but well, it is what it is. In April, according to the schedule, k8s 1.30 will be released, bumping the number of supported releases to 4 again. So keep that in mind while writing tests 😉

So I'm strongly in favor of option "B", because we can really only do meaningful testing when we exercise the actual code, that is, connect to a running cluster and see which version it is. There's imho not much value gained from just checking a manually provided YAML file or something.

However, let me also add some more notes. Please bear with me, but I think the standard mandates as a MUST:
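The "connect to a running cluster and see which version it is" idea above can be sketched roughly as follows. This is a hypothetical illustration, not the actual SCS test code: the `SUPPORTED` list is an assumed input that would have to be maintained externally (e.g., derived from https://kubernetes.io/releases/), and the function names are made up for this example.

```python
def minor_of(server_version: str) -> str:
    """Extract 'MAJOR.MINOR' from a version string like 'v1.28.3'."""
    parts = server_version.lstrip("v").split(".")
    return ".".join(parts[:2])


def is_supported(server_version: str, supported_minors: list) -> bool:
    """Check whether the cluster's minor release is still upstream-supported."""
    return minor_of(server_version) in supported_minors


# As noted in the comment above, four minors can be supported at once:
SUPPORTED = ["1.26", "1.27", "1.28", "1.29"]

print(is_supported("v1.28.3", SUPPORTED))  # True
print(is_supported("v1.24.9", SUPPORTED))  # False
```

In the real test, `server_version` would come from the API server of a live cluster rather than a hard-coded string.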
Afaik we currently only test whether all the versions provided by a CSP are still supported; we do not test, on the day before a release goes EOL, whether it is still listed/possible to create at the CSP.

This also makes me realize that it is not really defined what "supported" means here, because arguably you may not want customers to create a new cluster on a version that will be EOL tomorrow. A more reasonable definition of "supported" might include providing security updates for cluster versions until they are EOL and being able to reinstall the "same cluster" until it is EOL, but restricting new installations of a soon-to-be-EOL version to, e.g., one month prior to EOL.

I realize we will probably not be able to extend the tests even further, but I think it is still worth mentioning, so we can maybe fix this later by either changing the standard or extending the tests.

PS: I notice the standard also has a missing link for […]
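The stricter notion of "supported" proposed above (no new clusters on versions within a grace window before EOL) could be sketched like this. This is an assumption-laden illustration: the EOL table, the grace period of 30 days, and the function name are all invented for the example; only the 1.26 EOL date comes from the discussion above.

```python
from datetime import date, timedelta

# Illustrative EOL table; in practice this would be kept up to date
# from the upstream release schedule.
EOL = {
    "1.26": date(2024, 2, 28),  # EOL date mentioned in the comment above
}


def ok_to_provision(minor: str, today: date, grace_days: int = 30) -> bool:
    """Reject new clusters on versions that go EOL within grace_days."""
    eol = EOL.get(minor)
    if eol is None:
        return False  # unknown version: be conservative
    return today <= eol - timedelta(days=grace_days)


print(ok_to_provision("1.26", date(2024, 1, 15)))  # True: >30 days left
print(ok_to_provision("1.26", date(2024, 2, 20)))  # False: EOL in 8 days
```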
Thanks @artificial-intelligence for your feedback and perspective.
That's a fair point, I agree.
I'll focus on this (to be honest it is also my favorite approach).
Still, it's at least useful for me as a test implementer, for testing the tests ;-)
Ack. Because testing only three "profiles" is not enough, I will extend the number of profiles with another […]
I agree with you. At the moment, I think "supported" in the standard roughly means "a minor version of k8s which is not EOL". Using this definition might not be the best choice for a customer, for the reasons you pointed out. However, I also think this is a change to the standard itself that needs to be discussed in a separate issue.
The main requirement here is to get providers to deliver patches (and minor releases) fast enough, so that would be my primary concern for testing. A provider disabling the provisioning of an old version a bit too early is not nice, but not nearly as likely, and also not nearly as problematic for the user. So the testing strategy would, from my perspective, be:
There is one tiny challenge: step 2 is implementation-dependent. We can implement a test for cluster stacks (and we could also cover KaaS v1 if we wanted to).
Thanks @garloff for the feedback! I agree with the primary goals you described. However, steps 2 and 3 of the test method would deviate a lot from the current implementation for scs-0210-v1, which merely checks the versions of an existing cluster. My proposal for v2 tests is an extension of this for multiple clusters. To summarize and categorize the currently proposed approaches for scs-0210-v2 testing:
Because we're in a hurry, I think we need to settle for one of the simpler approaches, but leave the implementation open. @mbuechse, what do you think?
After meeting with @jschoone, we concluded that it makes the most sense if I stick to option 1b, i.e., "Get metadata by connecting to multiple existing clusters", because it is the simplest one to implement. It also mimics the way we connect to OpenStack for the IaaS conformance tests (we use a central […]).

As I've stated before, other options/strategies can be added later on. However, in this context I also want to note that creating K8s clusters during the conformance tests would add delays and an additional source of (transient) errors. For the other standard that needs to be tested (scs-0214-v1): […]
Unfortunately, we would need to revamp the standard to mandate that these labels be set. To my knowledge, this is not the default case.
That sounds like a good idea. I'll try to implement this tomorrow (no time today). Sadly, like you said, this also kinda mandates the creation of a […]
Had a short exchange with @mbuechse yesterday; we probably won't do a […]
This provides a basic node distribution test that operates solely on the labels of the provided Kubernetes cluster. The test comes back negative if no distribution can be detected from the labels. Signed-off-by: Hannes Baum <hannes.baum@cloudandheat.com>
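The label-based distribution check described in this commit could look roughly like the sketch below. It is not the merged test code: the helper names and the threshold of two zones are assumptions for illustration; only the well-known `topology.kubernetes.io/zone` label is a real Kubernetes convention.

```python
ZONE_LABEL = "topology.kubernetes.io/zone"


def distinct_zones(node_labels: list) -> int:
    """Count distinct zone values among the nodes' label dicts."""
    return len({labels[ZONE_LABEL] for labels in node_labels
                if ZONE_LABEL in labels})


def distribution_detected(node_labels: list) -> bool:
    """Negative result if no distribution can be derived from the labels."""
    return distinct_zones(node_labels) >= 2


# Illustrative node labels as the test might read them from the API.
nodes = [
    {ZONE_LABEL: "az-1"},
    {ZONE_LABEL: "az-2"},
    {},  # node without the label cannot contribute to the result
]
print(distribution_detected(nodes))  # True: nodes span two zones
```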
The test section of the standard also needs to be updated, even if the standard is already stabilized. Signed-off-by: Hannes Baum <hannes.baum@cloudandheat.com>
PR created: #486
* Provide node distribution test (#481)

  This provides a basic node distribution test only operating based on the labels of the kubernetes cluster provided. A test will come back negative, if no distribution according to the labels can be detected.

* Update the test section in the standard (#481)

  The test section of the standard also needs to be updated, even if the standard is already stabilized.

Signed-off-by: Hannes Baum <hannes.baum@cloudandheat.com>
resolves #481 Signed-off-by: Matthias Büchse <matthias.buechse@cloudandheat.com>
This adds the missing conformance tests for the version policy in scs-0210-v2, reusing the CVE collection and parsing code from the scs-0210-v1 Python script written by @cah-hbaum.

Refs: #481
Closes: SovereignCloudStack/issues#505
Co-authored-by: Hannes Baum <hannes.baum@cloudandheat.com>
Co-authored-by: Matthias Büchse <matthias.buechse@cloudandheat.com>
Signed-off-by: Martin Morgenstern <martin.morgenstern@cloudandheat.com>
This is a high-priority crash operation: find a way to implement reasonable tests by 2024-02-28. Request assistance from whatever parties necessary!