Merge branch 'main' into stabilize_scs-0214-v2
garloff authored Nov 25, 2024
2 parents da439b1 + c3dd463 commit 64f2d7a
Showing 20 changed files with 1,190 additions and 124 deletions.
@@ -4,7 +4,7 @@ type: Supplement
track: IaaS
status: Draft
supplements:
- scs-XXXX-v1-security-of-iaas-service-software.md
- scs-0124-v1-security-of-iaas-service-software.md
---

## Testing or Detecting security updates in software
277 changes: 277 additions & 0 deletions Standards/scs-0125-v1-secure-connections.md

Large diffs are not rendered by default.

37 changes: 0 additions & 37 deletions Standards/scs-0214-v1-k8s-node-distribution.md
@@ -80,44 +80,7 @@ If the standard is used by a provider, the following decisions are binding and v
can also be scaled vertically first before scaling horizontally.
- Worker node distribution MUST be indicated to the user through some kind of labeling
in order to enable (anti)-affinity for workloads over "failure zones".
- To provide metadata about the node distribution, which also enables testing of this standard,
providers MUST label their K8s nodes with the labels listed below (an illustrative example follows the list).
- `topology.kubernetes.io/zone`

Corresponds with the label described in [K8s labels documentation][k8s-labels-docs].
It provides a logical failure zone on the provider side, e.g. a server rack on the same
electrical circuit or multiple machines connected to the internet through a single
network path. How this is defined exactly is up to the provider.
In most cases, the field is populated automatically by either the kubelet or an external
mechanism such as the cloud controller.

- `topology.kubernetes.io/region`

Corresponds with the label described in [K8s labels documentation][k8s-labels-docs].
It describes the combination of one or more failure zones into a region or domain, i.e. a
larger logical unit of failure. An example would be a building containing the racks of
several zones: all of them are prone to failure if, e.g., the power for the building is cut.
How this is defined exactly is also up to the provider.
In most cases, the field is populated automatically by either the kubelet or an external
mechanism such as the cloud controller.

- `topology.scs.community/host-id`

This is an SCS-specific label; it MUST contain the hostID of the physical machine running
the hypervisor (NOT: the hostID of a virtual machine). Here, the hostID is an arbitrary identifier,
which need not contain the actual hostname, but it should nonetheless be unique to the host.
This helps identify the distribution over underlying physical machines,
which would be masked if VM hostIDs were used.
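
For illustration, a worker node carrying these labels might look like this (the node name and all label values are hypothetical examples):

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1                                 # hypothetical node name
  labels:
    topology.kubernetes.io/zone: zone-a          # logical failure zone, e.g. one rack
    topology.kubernetes.io/region: region-1      # region grouping one or more zones
    topology.scs.community/host-id: host-0042    # ID of the physical hypervisor host
```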

## Conformance Tests

The script `k8s-node-distribution-check.py` checks the nodes available with a user-provided
kubeconfig file. Based on the labels `kubernetes.io/hostname`, `topology.kubernetes.io/zone`,
`topology.kubernetes.io/region` and `node-role.kubernetes.io/control-plane`, it then determines
whether the available nodes are distributed as required. If this isn't the case, the script produces an error.
It also produces warnings and informational output, e.g. if labels don't seem to be set.
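
The following sketch is not the conformance script itself; it merely illustrates, under the assumption that the `kubernetes` Python client and a valid kubeconfig are available, how such a distribution check can be performed:

```python
# Sketch only: count distinct failure zones for control-plane and worker nodes.
from collections import Counter
from kubernetes import client, config

config.load_kube_config()  # uses KUBECONFIG or ~/.kube/config
nodes = client.CoreV1Api().list_node().items

control_plane_zones, worker_zones = Counter(), Counter()
for node in nodes:
    labels = node.metadata.labels or {}
    zone = labels.get("topology.kubernetes.io/zone", "unknown")
    if "node-role.kubernetes.io/control-plane" in labels:
        control_plane_zones[zone] += 1
    else:
        worker_zones[zone] += 1

# Example criterion: nodes should span more than one failure zone.
if len(control_plane_zones) < 2 or len(worker_zones) < 2:
    print("ERROR: nodes do not appear to be distributed over multiple failure zones")
else:
    print("INFO: node distribution over failure zones detected")
```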

[k8s-ha]: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
[k8s-large-clusters]: https://kubernetes.io/docs/setup/best-practices/cluster-large/
[scs-0213-v1]: https://github.com/SovereignCloudStack/standards/blob/main/Standards/scs-0213-v1-k8s-nodes-anti-affinity.md
[k8s-labels-docs]: https://kubernetes.io/docs/reference/labels-annotations-taints/#topologykubernetesiozone
3 changes: 2 additions & 1 deletion Standards/scs-0219-v1-kaas-networking.md
@@ -1,7 +1,8 @@
---
title: KaaS Networking Standard
type: Standard
status: Draft
status: Stable
stabilized_at: 2024-11-21
track: KaaS
---

122 changes: 50 additions & 72 deletions Tests/iaas/mandatory-services/mandatory-iaas-services.py
100644 → 100755
@@ -1,3 +1,4 @@
#!/usr/bin/env python3
"""Mandatory APIs checker
This script retrieves the endpoint catalog from Keystone using the OpenStack
SDK and checks whether all mandatory API endpoints are present.
@@ -26,54 +27,30 @@
block_storage_service = ["volume", "volumev3", "block-storage"]


def connect(cloud_name: str) -> openstack.connection.Connection:
"""Create a connection to an OpenStack cloud
:param string cloud_name:
The name of the configuration to load from clouds.yaml.
:returns: openstack.connnection.Connection
"""
return openstack.connect(
cloud=cloud_name,
)


def check_presence_of_mandatory_services(cloud_name: str, s3_credentials=None):
try:
connection = connect(cloud_name)
services = connection.service_catalog
except Exception as e:
print(str(e))
raise Exception(
f"Connection to cloud '{cloud_name}' was not successfully. "
f"The Catalog endpoint could not be accessed. "
f"Please check your cloud connection and authorization."
)
def check_presence_of_mandatory_services(conn: openstack.connection.Connection, s3_credentials=None):
services = conn.service_catalog

if s3_credentials:
mandatory_services.remove("object-store")
for svc in services:
svc_type = svc['type']
if svc_type in mandatory_services:
mandatory_services.remove(svc_type)
continue
if svc_type in block_storage_service:
elif svc_type in block_storage_service:
block_storage_service.remove(svc_type)

bs_service_not_present = 0
if len(block_storage_service) == 3:
# neither block-storage nor volume nor volumev3 is present
# we must assume that there is no volume service
logger.error("FAIL: No block-storage (volume) endpoint found.")
logger.error("No block-storage (volume) endpoint found.")
mandatory_services.append(block_storage_service[0])
bs_service_not_present = 1
if not mandatory_services:
# every mandatory service API had an endpoint
return 0 + bs_service_not_present
else:
# there were multiple mandatory APIs not found
logger.error(f"FAIL: The following endpoints are missing: "
f"{mandatory_services}")
return len(mandatory_services) + bs_service_not_present
if mandatory_services:
# some mandatory APIs were not found
logger.error(f"The following endpoints are missing: "
f"{', '.join(mandatory_services)}.")
return len(mandatory_services) + bs_service_not_present


def list_containers(conn):
@@ -167,8 +144,8 @@ def s3_from_ostack(creds, conn, endpoint)
# pass


def check_for_s3_and_swift(cloud_name: str, s3_credentials=None):
# If we get credentials we assume, that there is no Swift and only test s3
def check_for_s3_and_swift(conn: openstack.connection.Connection, s3_credentials=None):
# If we get credentials, we assume that there is no Swift and only test s3
if s3_credentials:
try:
s3 = s3_conn(s3_credentials)
@@ -183,58 +160,46 @@ def check_for_s3_and_swift(cloud_name: str, s3_credentials=None):
if s3_buckets == [TESTCONTNAME]:
del_bucket(s3, TESTCONTNAME)
# everything worked, and we don't need to test for Swift:
print("SUCCESS: S3 exists")
logger.info("SUCCESS: S3 exists")
return 0
# there were no credentials given, so we assume s3 is accessible via
# the service catalog and Swift might exist too
try:
connection = connect(cloud_name)
connection.authorize()
except Exception as e:
print(str(e))
raise Exception(
f"Connection to cloud '{cloud_name}' was not successfully. "
f"The Catalog endpoint could not be accessed. "
f"Please check your cloud connection and authorization."
)
s3_creds = {}
try:
endpoint = connection.object_store.get_endpoint()
except Exception as e:
logger.error(
f"FAIL: No object store endpoint found in cloud "
f"'{cloud_name}'. No testing for the s3 service possible. "
f"Details: %s", e
endpoint = conn.object_store.get_endpoint()
except Exception:
logger.exception(
"No object store endpoint found. No testing for the s3 service possible."
)
return 1
# Get S3 endpoint (swift) and ec2 creds from OpenStack (keystone)
s3_from_ostack(s3_creds, connection, endpoint)
s3_from_ostack(s3_creds, conn, endpoint)
# Overrides (var names are from libs3, in case you wonder)
s3_from_env(s3_creds, "HOST", "S3_HOSTNAME", "https://")
s3_from_env(s3_creds, "AK", "S3_ACCESS_KEY_ID")
s3_from_env(s3_creds, "SK", "S3_SECRET_ACCESS_KEY")

s3 = s3_conn(s3_creds, connection)
s3 = s3_conn(s3_creds, conn)
s3_buckets = list_s3_buckets(s3)
if not s3_buckets:
s3_buckets = create_bucket(s3, TESTCONTNAME)
assert s3_buckets

# If we got till here, s3 is working, now swift
swift_containers = list_containers(connection)
swift_containers = list_containers(conn)
# if not swift_containers:
# swift_containers = create_container(connection, TESTCONTNAME)
# swift_containers = create_container(conn, TESTCONTNAME)
result = 0
if Counter(s3_buckets) != Counter(swift_containers):
print("WARNING: S3 buckets and Swift Containers differ:\n"
f"S3: {sorted(s3_buckets)}\nSW: {sorted(swift_containers)}")
logger.warning("S3 buckets and Swift Containers differ:\n"
f"S3: {sorted(s3_buckets)}\nSW: {sorted(swift_containers)}")
result = 1
else:
print("SUCCESS: S3 and Swift exist and agree")
logger.info("SUCCESS: S3 and Swift exist and agree")
# Clean up
# FIXME: Cleanup created EC2 credential
# if swift_containers == [TESTCONTNAME]:
# del_container(connection, TESTCONTNAME)
# del_container(conn, TESTCONTNAME)
# Cleanup created S3 bucket
if s3_buckets == [TESTCONTNAME]:
del_bucket(s3, TESTCONTNAME)
@@ -266,34 +231,47 @@ def main():
help="Enable OpenStack SDK debug logging"
)
args = parser.parse_args()
logging.basicConfig(
format="%(levelname)s: %(message)s",
level=logging.DEBUG if args.debug else logging.INFO,
)
openstack.enable_logging(debug=args.debug)

# parse cloud name for lookup in clouds.yaml
cloud = os.environ.get("OS_CLOUD", None)
if args.os_cloud:
cloud = args.os_cloud
assert cloud, (
"You need to have the OS_CLOUD environment variable set to your cloud "
"name or pass it via --os-cloud"
)
cloud = args.os_cloud or os.environ.get("OS_CLOUD", None)
if not cloud:
raise RuntimeError(
"You need to have the OS_CLOUD environment variable set to your "
"cloud name or pass it via --os-cloud"
)

s3_credentials = None
if args.s3_endpoint:
if (not args.s3_access) or (not args.s3_access_secret):
print("WARNING: test for external s3 needs access key and access secret.")
logger.warning("test for external s3 needs access key and access secret.")
s3_credentials = {
"AK": args.s3_access,
"SK": args.s3_access_secret,
"HOST": args.s3_endpoint
}
elif args.s3_access or args.s3_access_secret:
print("WARNING: access to s3 was given, but no endpoint provided.")
logger.warning("access to s3 was given, but no endpoint provided.")

result = check_presence_of_mandatory_services(cloud, s3_credentials)
result = result + check_for_s3_and_swift(cloud, s3_credentials)
with openstack.connect(cloud) as conn:
result = check_presence_of_mandatory_services(conn, s3_credentials)
result += check_for_s3_and_swift(conn, s3_credentials)

print('service-apis-check: ' + ('PASS', 'FAIL')[min(1, result)])

return result


if __name__ == "__main__":
main()
try:
sys.exit(main())
except SystemExit:
raise
except BaseException as exc:
logging.debug("traceback", exc_info=True)
logging.critical(str(exc))
sys.exit(1)
61 changes: 61 additions & 0 deletions Tests/iaas/secure-connections/README.md
@@ -0,0 +1,61 @@
# Secure Connections Standard Test Suite

## Test Environment Setup

> **NOTE:** The test execution procedure does not require cloud admin rights.
A valid cloud configuration for the OpenStack SDK in the shape of "`clouds.yaml`" is mandatory[^1].
**This file is expected to be located in the current working directory where the test script is executed unless configured otherwise.**

[^1]: [OpenStack Documentation: Configuring OpenStack SDK Applications](https://docs.openstack.org/openstacksdk/latest/user/config/configuration.html)

The test execution environment can be located on any system outside of the cloud infrastructure that has OpenStack API access.
Make sure that the API access is configured properly in "`clouds.yaml`".

It is recommended to use a Python virtual environment[^2].
Next, install the libraries required by the test suite:

```bash
pip3 install openstacksdk sslyze
```
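
Putting both steps together, a fresh environment could be prepared like this (a sketch; the directory name `./venv` is arbitrary):

```bash
python3 -m venv ./venv
source ./venv/bin/activate
pip3 install openstacksdk sslyze
```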

> Note: the version of the sslyze library determines the [version of the Mozilla TLS recommendation JSON](https://wiki.mozilla.org/Security/Server_Side_TLS#JSON_version_of_the_recommendations) that it checks against.

Within this environment, execute the test suite.

[^2]: [Python 3 Documentation: Virtual Environments and Packages](https://docs.python.org/3/tutorial/venv.html)

## Test Execution

The test suite is executed as follows:

```bash
python3 tls-checker.py --os-cloud mycloud
```

As an alternative to "`--os-cloud`", the "`OS_CLOUD`" environment variable may be specified instead.
The parameter is used to look up the correct cloud configuration in "`clouds.yaml`".
For the example command above, this file should contain a `clouds.mycloud` section like this:

```yaml
---
clouds:
  mycloud:
    auth:
      auth_url: ...
      ...
    ...
```
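
As mentioned above, the cloud name can equivalently be supplied via the environment, for example:

```bash
export OS_CLOUD=mycloud
python3 tls-checker.py
```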

For any further options consult the output of "`python3 tls-checker.py --help`".

### Script Behavior & Test Results

The script will print all actions and passed tests to `stdout`.

If all tests pass, the script will return with an exit code of `0`.

If any test fails, the script will halt, print the exact error to `stderr` and return with a non-zero exit code.

Any test that indicates a recommendation of the standard is not met will print a warning message under the corresponding endpoint output.
However, unmet recommendations will not count as errors.