Use nftables mode but netfilter rules not taking effect #5384

Open · 4 tasks done
coyzeng opened this issue Dec 23, 2024 · 1 comment
Labels: bug Something isn't working

coyzeng commented Dec 23, 2024

Before creating an issue, make sure you've checked the following:

  • You are running the latest released version of k0s
  • Make sure you've searched for existing issues, both open and closed
  • Make sure you've searched for PRs too, a fix might've been merged already
  • You're looking at docs for the released version; "main" branch docs are usually ahead of released versions.

Platform

Linux/AMD64, CentOS Stream 10

Version

Latest version

Sysinfo

`k0s sysinfo`
Total memory: 62.4 GiB (pass)
File system of /var/lib: xfs (pass)
Disk space available for /var/lib/k0s: 3.6 TiB (pass)
Relative disk space available for /var/lib/k0s: 97% (pass)
Name resolution: localhost: [::1 127.0.0.1] (pass)
Operating system: Linux (pass)
  Linux kernel release: 6.12.0-35.el10.x86_64 (pass)
  Max. file descriptors per process: current: 524288 / max: 524288 (pass)
  AppArmor: unavailable (pass)
  Executable in PATH: modprobe: /usr/sbin/modprobe (pass)
  Executable in PATH: mount: /usr/bin/mount (pass)
  Executable in PATH: umount: /usr/bin/umount (pass)
  /proc file system: mounted (0x9fa0) (pass)
  Control Groups: version 2 (pass)
    cgroup controller "cpu": available (is a listed root controller) (pass)
    cgroup controller "cpuacct": available (via cpu in version 2) (pass)
    cgroup controller "cpuset": available (is a listed root controller) (pass)
    cgroup controller "memory": available (is a listed root controller) (pass)
    cgroup controller "devices": available (device filters attachable) (pass)
    cgroup controller "freezer": available (cgroup.freeze exists) (pass)
    cgroup controller "pids": available (is a listed root controller) (pass)
    cgroup controller "hugetlb": available (is a listed root controller) (pass)
    cgroup controller "blkio": available (via io in version 2) (pass)
  CONFIG_CGROUPS: Control Group support: built-in (pass)
    CONFIG_CGROUP_FREEZER: Freezer cgroup subsystem: built-in (pass)
    CONFIG_CGROUP_PIDS: PIDs cgroup subsystem: built-in (pass)
    CONFIG_CGROUP_DEVICE: Device controller for cgroups: built-in (pass)
    CONFIG_CPUSETS: Cpuset support: built-in (pass)
    CONFIG_CGROUP_CPUACCT: Simple CPU accounting cgroup subsystem: built-in (pass)
    CONFIG_MEMCG: Memory Resource Controller for Control Groups: built-in (pass)
    CONFIG_CGROUP_HUGETLB: HugeTLB Resource Controller for Control Groups: built-in (pass)
    CONFIG_CGROUP_SCHED: Group CPU scheduler: built-in (pass)
      CONFIG_FAIR_GROUP_SCHED: Group scheduling for SCHED_OTHER: built-in (pass)
        CONFIG_CFS_BANDWIDTH: CPU bandwidth provisioning for FAIR_GROUP_SCHED: built-in (pass)
    CONFIG_BLK_CGROUP: Block IO controller: built-in (pass)
  CONFIG_NAMESPACES: Namespaces support: built-in (pass)
    CONFIG_UTS_NS: UTS namespace: built-in (pass)
    CONFIG_IPC_NS: IPC namespace: built-in (pass)
    CONFIG_PID_NS: PID namespace: built-in (pass)
    CONFIG_NET_NS: Network namespace: built-in (pass)
  CONFIG_NET: Networking support: built-in (pass)
    CONFIG_INET: TCP/IP networking: built-in (pass)
      CONFIG_IPV6: The IPv6 protocol: built-in (pass)
    CONFIG_NETFILTER: Network packet filtering framework (Netfilter): built-in (pass)
      CONFIG_NETFILTER_ADVANCED: Advanced netfilter configuration: built-in (pass)
      CONFIG_NF_CONNTRACK: Netfilter connection tracking support: module (pass)
      CONFIG_NETFILTER_XTABLES: Netfilter Xtables support: built-in (pass)
        CONFIG_NETFILTER_XT_TARGET_REDIRECT: REDIRECT target support: module (pass)
        CONFIG_NETFILTER_XT_MATCH_COMMENT: "comment" match support: module (pass)
        CONFIG_NETFILTER_XT_MARK: nfmark target and match support: module (pass)
        CONFIG_NETFILTER_XT_SET: set target and match support: module (pass)
        CONFIG_NETFILTER_XT_TARGET_MASQUERADE: MASQUERADE target support: module (pass)
        CONFIG_NETFILTER_XT_NAT: "SNAT and DNAT" targets support: module (pass)
        CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: "addrtype" address type match support: module (pass)
        CONFIG_NETFILTER_XT_MATCH_CONNTRACK: "conntrack" connection tracking match support: module (pass)
        CONFIG_NETFILTER_XT_MATCH_MULTIPORT: "multiport" Multiple port match support: module (pass)
        CONFIG_NETFILTER_XT_MATCH_RECENT: "recent" match support: module (pass)
        CONFIG_NETFILTER_XT_MATCH_STATISTIC: "statistic" match support: module (pass)
      CONFIG_NETFILTER_NETLINK: module (pass)
      CONFIG_NF_NAT: module (pass)
      CONFIG_IP_SET: IP set support: module (pass)
        CONFIG_IP_SET_HASH_IP: hash:ip set support: module (pass)
        CONFIG_IP_SET_HASH_NET: hash:net set support: module (pass)
      CONFIG_IP_VS: IP virtual server support: module (pass)
        CONFIG_IP_VS_NFCT: Netfilter connection tracking: built-in (pass)
        CONFIG_IP_VS_SH: Source hashing scheduling: module (pass)
        CONFIG_IP_VS_RR: Round-robin scheduling: module (pass)
        CONFIG_IP_VS_WRR: Weighted round-robin scheduling: module (pass)
      CONFIG_NF_CONNTRACK_IPV4: IPv4 connetion tracking support (required for NAT): unknown (warning)
      CONFIG_NF_REJECT_IPV4: IPv4 packet rejection: module (pass)
      CONFIG_NF_NAT_IPV4: IPv4 NAT: unknown (warning)
      CONFIG_IP_NF_IPTABLES: IP tables support: module (pass)
        CONFIG_IP_NF_FILTER: Packet filtering: module (pass)
          CONFIG_IP_NF_TARGET_REJECT: REJECT target support: module (pass)
        CONFIG_IP_NF_NAT: iptables NAT support: module (pass)
        CONFIG_IP_NF_MANGLE: Packet mangling: module (pass)
      CONFIG_NF_DEFRAG_IPV4: module (pass)
      CONFIG_NF_CONNTRACK_IPV6: IPv6 connetion tracking support (required for NAT): unknown (warning)
      CONFIG_NF_NAT_IPV6: IPv6 NAT: unknown (warning)
      CONFIG_IP6_NF_IPTABLES: IP6 tables support: module (pass)
        CONFIG_IP6_NF_FILTER: Packet filtering: module (pass)
        CONFIG_IP6_NF_MANGLE: Packet mangling: module (pass)
        CONFIG_IP6_NF_NAT: ip6tables NAT support: module (pass)
      CONFIG_NF_DEFRAG_IPV6: module (pass)
    CONFIG_BRIDGE: 802.1d Ethernet Bridging: module (pass)
      CONFIG_LLC: module (pass)
      CONFIG_STP: module (pass)
  CONFIG_EXT4_FS: The Extended 4 (ext4) filesystem: module (pass)
  CONFIG_PROC_FS: /proc file system support: built-in (pass)

What happened?

apiVersion: v1
items:
- apiVersion: k0s.k0sproject.io/v1beta1
  kind: ClusterConfig
  metadata:
    creationTimestamp: "2024-12-23T06:58:26Z"
    generation: 1
    name: k0s
    namespace: kube-system
    resourceVersion: "211"
    uid: 171223b0-86fc-45c0-a32d-cd4d35f943e1
  spec:
    extensions:
      helm:
        concurrencyLevel: 5
    network:
      clusterDomain: cluster.local
      dualStack:
        enabled: false
      kubeProxy:
        iptables:
          minSyncPeriod: 0s
          syncPeriod: 0s
        ipvs:
          minSyncPeriod: 0s
          syncPeriod: 0s
          tcpFinTimeout: 0s
          tcpTimeout: 0s
          udpTimeout: 0s
        metricsBindAddress: 0.0.0.0:10249
        mode: nftables
        nftables:
          minSyncPeriod: 0s
          syncPeriod: 0s
      kuberouter:
        autoMTU: true
        hairpin: Enabled
        metricsPort: 8080
      nodeLocalLoadBalancing:
        enabled: false
        envoyProxy:
          apiServerBindPort: 7443
          image:
            image: quay.io/k0sproject/envoy-distroless
            version: v1.31.3
          konnectivityServerBindPort: 7132
        type: EnvoyProxy
      podCIDR: 10.9.0.0/16
      provider: kuberouter
      serviceCIDR: 10.96.0.0/12
kind: List
metadata:
  resourceVersion: ""

Steps to reproduce

Expected behavior

k0s should start the k8s cluster without any other manual steps.

Or:

Configure the firewalld service appropriately so that k0s still starts successfully.

Actual behavior

firewalld.service must be stopped to make k8s start successfully:

systemctl stop firewalld

If firewalld is not stopped, k8s prints many errors like the one below.

panic: unable to load configmap based request-header-client-ca-file: Get "https://10.8.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 10.8.0.1:443: connect: no route to host
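
One way to check whether kube-proxy actually programmed any nftables rules is to inspect the ruleset with nft (a sketch; the table name kube-proxy is what upstream kube-proxy's nftables backend creates, and is an assumption here rather than something confirmed for this setup):

# List all nftables tables; kube-proxy's nftables backend creates its own table
nft list tables
# Dump that table if it exists, to see whether any service rules were written
nft list table ip kube-proxy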

The firewalld configuration below did not work either.

firewall-cmd --permanent --zone=trusted --add-source=10.8.0.0/16 # pods
firewall-cmd --permanent --zone=trusted --add-source=10.9.0.0/16 # services
firewall-cmd --permanent --zone=trusted --add-source=10.96.0.0/12 # services
firewall-cmd --reload
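
A firewalld setup that only trusts the pod and service CIDRs may still block node-level traffic to the API server. A more complete sketch would also open the control-plane ports and enable masquerading; the port list below is an assumption based on typical Kubernetes defaults, not taken from k0s docs:

firewall-cmd --permanent --add-port=6443/tcp    # kube-apiserver (assumed default port)
firewall-cmd --permanent --add-port=10250/tcp   # kubelet API (assumed)
firewall-cmd --permanent --add-masquerade       # NAT for pod egress
firewall-cmd --reload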

There is another problem: if I set

serviceCIDR: 10.8.0.0/16

the actual value stored in the clusterconfigs.k0s.k0sproject.io CRD in kube-system is

10.96.0.0/12

but the actual pod IP is

Name:                 metrics-server-78c4ccbc7f-z2hx6
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      metrics-server
Node:                 ducer/192.168.1.3
Start Time:           Mon, 23 Dec 2024 15:44:17 +0800
Labels:               k8s-app=metrics-server
                      pod-template-hash=78c4ccbc7f
Annotations:          <none>
Status:               Running
IP:                   10.9.0.7
IPs:
  IP:           10.9.0.7
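
Note that 10.9.0.7 falls inside the podCIDR 10.9.0.0/16; pod IPs are allocated from podCIDR, while serviceCIDR only governs ClusterIP addresses of Services. To compare the CIDRs actually stored in the CRD, something like the following should work (the jsonpath expressions assume the spec layout shown above):

kubectl -n kube-system get clusterconfigs.k0s.k0sproject.io k0s -o jsonpath='{.spec.network.podCIDR}'
kubectl -n kube-system get clusterconfigs.k0s.k0sproject.io k0s -o jsonpath='{.spec.network.serviceCIDR}'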

Screenshots and logs

No response

Additional context

No response

coyzeng added the bug Something isn't working label Dec 23, 2024
juanluisvaladas self-assigned this Dec 23, 2024
juanluisvaladas (Contributor) commented:

Hi, I imagine it's a kubelet problem. Can you please:

1. Attach the logs of k0sworker so that we can see the kubelet logs.
2. Check whether the node is created in the apiserver to begin with: is it listed by kubectl get nodes? If it is listed but not Ready, attach the output of kubectl get node <node name> -o yaml (see the command sketch below).
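
Concretely, something like the following (the k0sworker unit name assumes k0s was installed as a systemd service via k0s install worker; a controller+worker node may use a different unit name):

journalctl -u k0sworker --no-pager > k0sworker.log   # worker logs, including kubelet
kubectl get nodes                                    # is the node registered at all?
kubectl get node <node name> -o yaml                 # details if it is NotReady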
