Strange replication behaviour #219
Okay, it does seem to work with 2 replicas; however, no custom schemas are applied.
Okay, honestly I just patched
Runtime LDAP calls seem to replicate just fine with this; of course, upgrading has to be done more carefully.
Hi @ZeChArtiahSaher, I don't think you are using the latest release of the chart, as this has been fixed.
@jp-gouin I'm on
I'm asking because this is already implemented; see https://github.com/jp-gouin/helm-openldap/blob/master/templates/statefulset.yaml#L63
That's true, and I'm referring to this line: `helm-openldap/templates/statefulset.yaml`, line 52 at commit c22e62e,
with which, in my environment, for some reason I get this behavior:
I validated each stage via the ldapi socket inside the pods. My assumption is that the newly deployed pod has greater priority due to recency? I'm not versed in syncprov. And I'm assuming this also applies to helm upgrades. It seems to me the most reasonable thing to do is to scale to 0 first, unless one wants to risk having replication come swinging in during the upgrade if you've got, say, a few pods.
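For what it's worth, the scale-to-zero approach I mean could be sketched like this (the statefulset, release, and chart names are assumptions; adjust them to your deployment):

```shell
# Scale down before upgrading so replication cannot kick in mid-upgrade,
# then upgrade and scale back up (all names here are hypothetical)
kubectl scale statefulset openldap --replicas=0
helm upgrade openldap helm-openldap/openldap-stack-ha -f values.yaml
kubectl scale statefulset openldap --replicas=3
```

Whether the scale-back step is needed depends on whether the chart manages the replica count itself; I haven't verified that.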
It seems it might be best to externally load schemas/ldifs, and even subsequent ACL mods in general, with replication enabled. The docs should probably be updated to mention that fact.
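As a sketch of what I mean by loading externally (the pod name, bind DN, and password variable are assumptions, not something the chart defines): apply the LDIF once against a single node after startup, and let syncrepl propagate it to the other replicas:

```shell
# Hypothetical external load: add an entry on one node only;
# syncrepl then replicates it to the other pods.
kubectl exec -i openldap-0 -- \
  ldapadd -x -H ldapi:/// \
    -D "cn=admin,dc=example,dc=org" -w "$LDAP_ADMIN_PASSWORD" <<'EOF'
dn: ou=people,dc=example,dc=org
objectClass: organizationalUnit
ou: people
EOF
```

Note that this only holds for the data tree; cn=config changes replicate only if the config database itself is set up for replication.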
To be honest, I think it's not a schema you're trying to change; I would assume this data would appear in customLdif. That said, even if it applies, you're changing olcPasswordHash, and this could be the reason replication fails in one direction.

Also: you're sending an LDIF in "ldapmodify" format; is that what is used in the chart? And it does a modify: what is the behaviour of a modify on a non-existent key? Is it added? (I'm not sure here; I would have to test that out.) But I'm pretty sure the key (olcPasswordHash) does not exist by default in the LDAP server created by the stack.

One thing I noticed, and it could be up for a nice feature update one day, would be to do the replication with another user than the admin.
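On the non-existent key question: per RFC 4511, a modify with either `add` or `replace` creates the attribute if it does not exist yet (an `add` only fails if the exact value is already present), so a hypothetical LDIF like the following should apply even when olcPasswordHash is absent:

```ldif
# Hypothetical LDIF, not taken from the chart; "replace" sets the attribute,
# creating it if it did not previously exist (RFC 4511, section 4.6).
dn: olcDatabase={-1}frontend,cn=config
changetype: modify
replace: olcPasswordHash
olcPasswordHash: {SSHA}
```

So the replication failure is more likely about the change itself (or where it is applied) than about the modify-on-missing-key semantics.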
When I toggle replication on (which does work, as I verified in terms of replicating the schemas that are in the container), there is an error during the initialization phase that seemingly prevents fully successful initialization, and maybe prevents custom ldifs? Unless, of course, those are completely ignored; I don't know. It would be nice to know that precisely; the docs don't specify.
P.S.: Without replication, ldifs/schemas initialize just fine. Also, I will start looking into the replication definitions in the chart in the meantime.
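One way to inspect those definitions on a running pod (a sketch; the pod name is an assumption) is to dump the syncrepl-related attributes from cn=config over the ldapi socket:

```shell
# Show the replication stanzas the chart actually rendered
kubectl exec openldap-0 -- \
  ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config -LLL \
    olcSyncrepl olcMirrorMode olcServerID
```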
Cert (created prior to deploy):
Possible log entry in question?:
log dump: