
Can't get openIDConnect to work in 3.0.0-rc.13 and no logs output in UI pods #633

Open
sarg3nt opened this issue Dec 13, 2024 · 7 comments


@sarg3nt

sarg3nt commented Dec 13, 2024

I'm trying to get openIDConnect to work in the new 3.0.0-rc.13.

When I turn it on and configure it, the UI pods crash loop due to the health checks failing.

There are zero log lines being output, which makes debugging this kind of hard.

Usually I can get OIDC working with enough log reading and head banging, and we do have it working in several other apps in our cluster (ArgoCD, Grafana, OAuth2 Proxy, Headlamp, etc.).

I've tried changing or removing just about every value I can think of in the openIDConnect settings, but I get the same issue every time.
With openIDConnect turned off, the UI works fine, so I know the ingress and everything else is set up correctly.

I did read through: https://github.com/kyverno/policy-reporter/blob/2c98eac298d94cd8d5ef71bb6db16d6c8c82b863/docs/UI_AUTH.md

Values

This is a subset of the values.

  openIDConnect:
    # -- Enable openID Connect authentication
    enabled: true
    # -- OpenID Connect Discovery URL
    discoveryUrl: "${issuer_url}"
    # -- OpenID Connect Callback URL
    callbackUrl: "https://kyverno.${cluster_fqdn}/callback"
    #callbackUrl: http://localhost:8082/callback
    # -- OpenID Connect ClientID
    clientId: "${client_id}"
    # -- OpenID Connect ClientSecret
    clientSecret: "${client_secret}"
    # -- Optional Group Claim to map user groups to the profile
    # groups can be used to define access control for clusters, boards and custom boards.
    groupClaim: "Group.Read.All"
    # -- OpenID Connect allowed Scopes
    scopes: 
      - openid
      - profile
      - email
    # -- Provide OpenID Connect configuration via Secret
    # supported keys: `discoveryUrl`, `clientId`, `clientSecret`
    secretRef: ""

I've tried removing the groupClaim and the scopes with no luck. I doubt that's the cause, as I don't see why it would produce the failure we are seeing.
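Not from the thread, but a common first sanity check when an OIDC setup fails silently is to confirm the discovery URL actually serves a configuration document. A minimal sketch, assuming a placeholder Azure AD issuer:

```shell
#!/bin/sh
# Placeholder issuer; substitute your real discoveryUrl value.
ISSUER="https://login.microsoftonline.com/<tenant-id>/v2.0"

# OIDC providers publish their configuration at this well-known path.
WELLKNOWN="${ISSUER}/.well-known/openid-configuration"
echo "${WELLKNOWN}"

# Then fetch it and confirm it returns valid JSON, e.g.:
# curl -fsS "${WELLKNOWN}" | jq .token_endpoint
```

If the fetch fails or returns HTML instead of JSON, the UI's OIDC client can fail during setup before it ever serves `/healthz`.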

Events:

                                                                                                                                                        
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  27s               default-scheduler  Successfully assigned kyverno-policy-reporter/policy-reporter-ui-5595c9cf7-6nk7d to lpul-vault-k8s-agent-0.vault.ad.selinc.com
  Normal   Pulling    27s               kubelet            Pulling image "sel-docker.artifactory.metro.ad.selinc.com/kyverno/policy-reporter-ui:2.0.0-rc.4"
  Normal   Pulled     25s               kubelet            Successfully pulled image "sel-docker.artifactory.metro.ad.selinc.com/kyverno/policy-reporter-ui:2.0.0-rc.4" in 2.692s (2.692s including waiting). Image size: 32694854 bytes.
  Normal   Created    24s               kubelet            Created container policy-reporter-ui
  Normal   Started    24s               kubelet            Started container policy-reporter-ui
  Warning  Unhealthy  7s (x4 over 24s)  kubelet            Readiness probe failed: Get "http://192.168.1.94:8080/healthz": dial tcp 192.168.1.94:8080: connect: connection refused
  Warning  Unhealthy  7s (x2 over 17s)  kubelet            Liveness probe failed: Get "http://192.168.1.94:8080/healthz": dial tcp 192.168.1.94:8080: connect: connection refused

UI Secret

Below has sensitive values redacted and is from a run with no scopes or groupClaim.

  config.yaml: |
    namespace: kyverno-policy-reporter

    tempDir: /tmp

    logging:
      api: false
      server: false
      encoding: console
      logLevel: 0

    server:
      port: 8080
      cors: true
      overwriteHost: true

    ui:
      displayMode:
      banner:

    clusters:
      - name: Default
        secretRef: policy-reporter-ui-default-cluster

    sources:
      - name: kyverno
        chartType: result
        exceptions: false
        excludes:
          results:
            - warn
            - error

    openIDConnect:
      callbackUrl: https://kyverno.k8s.vault.ad.selinc.com/callback
      clientId: <redacted>
      clientSecret: <redacted>
      discoveryUrl: https://login.microsoftonline.com/<redacted>/v2.0
      enabled: true
      groupClaim: ""
      scopes: []
      secretRef: ""

    oauth:
      callbackUrl: ""
      clientId: ""
      clientSecret: ""
      enabled: false
      provider: ""
      scopes: []
      secretRef: ""
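For anyone reproducing this, the rendered config can be pulled straight out of the Secret. A sketch, with the Secret name assumed to match the chart defaults rather than taken from the thread:

```shell
#!/bin/sh
# Assumed names; adjust to your release.
NAMESPACE="kyverno-policy-reporter"
SECRET="policy-reporter-ui"

# Secret data is base64-encoded; decoding config.yaml shows exactly what
# the UI pod receives:
# kubectl -n "$NAMESPACE" get secret "$SECRET" \
#   -o jsonpath='{.data.config\.yaml}' | base64 -d

# The decode step on its own, with a sample payload:
printf '%s' 'bmFtZXNwYWNlOiBreXZlcm5vLXBvbGljeS1yZXBvcnRlcgo=' | base64 -d
# → namespace: kyverno-policy-reporter
```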

Let me know if you need to see the rest of the helm values.yaml file.

Is this a bug, or am I doing something wrong?

@fjogeleit
Member

Will take a look and see if I can improve the logging.

@fjogeleit
Member

@sarg3nt do you see no logs at all, or just no logs related to OpenIDConnect? Does Group.Read.All represent a nested structure?

@fjogeleit
Member

fjogeleit commented Dec 13, 2024

Can you set the UI image tag (ui.image.tag) to 2.0.0-rc.5?

Your unhealthy events suggest that it is not able to start the server and already fails during the setup process.

Does kubectl logs deploy/policy-reporter-ui show anything?
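One addition worth noting (not something the thread confirms): with a crash-looping pod, the current container may exit before it logs anything, so the previous container's output is often where the startup error lives. A sketch, with names assumed from the chart defaults:

```shell
#!/bin/sh
# Assumed names; adjust to your release.
NAMESPACE="kyverno-policy-reporter"
DEPLOY="policy-reporter-ui"

# Logs of the currently running container:
CMD_CURRENT="kubectl -n $NAMESPACE logs deploy/$DEPLOY"

# Logs of the previously crashed container, which usually holds the
# actual startup error in a crash loop:
CMD_PREVIOUS="$CMD_CURRENT --previous"

echo "$CMD_PREVIOUS"
```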

@sarg3nt
Author

sarg3nt commented Dec 13, 2024

@fjogeleit
Set the image tag to 2.0.0-rc.5; same result.

No, Group.Read.All is simply the name of the group claim in our Azure app for some reason; it's what we have to map from.
For example, in the ArgoCD Dex config I have to set:

            scopes:
              - openid
              - profile
              - email
            claimMapping:
              groups: "Group.Read.All"

Correct, zero logs whatsoever.
(screenshot: empty log output)

@fjogeleit
Member

Could you set ui.logging.logLevel to -1 to show debug logs and check whether any debug log occurs?

https://kyverno.github.io/policy-reporter-docs/policy-reporter-ui/configuration.html#logging

@sarg3nt
Author

sarg3nt commented Dec 19, 2024

@fjogeleit sorry for the delay.
Tried setting ui.logging.logLevel to -1; still zero logs in policy-reporter-ui.

@fjogeleit
Member

fjogeleit commented Dec 23, 2024

Hm not really sure why it shows no logs at all.

I released a new chart version that adds the readiness and liveness probes to the UI values. So you could try to disable both by overwriting ui.livenessProbe and ui.readinessProbe:

ui:
  livenessProbe:
    httpGet: null
    port: null
  readinessProbe:
    httpGet: null
    port: null

Hopefully this leads to an actual error and logs in the pod to investigate.
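Applying that override without touching the rest of the values could look like the sketch below; the release and chart names are assumptions, so adjust them to the actual install:

```shell
#!/bin/sh
# Write the probe override to a temporary values file.
cat > /tmp/probe-override.yaml <<'EOF'
ui:
  livenessProbe:
    httpGet: null
    port: null
  readinessProbe:
    httpGet: null
    port: null
EOF

# Assumed release/chart names; --reuse-values keeps all other settings.
# helm upgrade policy-reporter policy-reporter/policy-reporter \
#   -n kyverno-policy-reporter --reuse-values -f /tmp/probe-override.yaml

cat /tmp/probe-override.yaml
```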
