In deploying 389-ds-portal to a container, I noticed that lib389 fails with "unable to locate defaults.inf".
This means we need some way to "proceed" sanely when we can't find defaults.inf.
I'm glad this issue is finally acknowledged. In the past the answer was to backport defaults.inf.
Metadata Update from @vashirov:
- Custom field origin adjusted to None
- Custom field reviewstatus adjusted to None
I think in the past we just assumed lib389 was always on a 389-ds host, which was a wrong assumption to make. Saying this, I think we need to more seriously consider the allocate api as a problem here, because of the local vs remote setup, and whether we need the paths module at all.
So there are a few ways to possibly approach this: one is to simply have remote_simple_allocate not create a ds_paths instance, but this will cause NoneType errors in callers, and some remote code could have a legitimate need to know the paths or to get them via cn=config.
Another option is having Paths take a local vs remote flag that switches whether we read defaults.inf, and then we rely on cn=config to provide these values remotely if we have permission to read them.
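As a rough sketch of that flag idea (all names here are illustrative, not the real lib389 API; the defaults.inf text is inlined and the cn=config lookup is stubbed via a callback, since a live server isn't assumed):

```python
import configparser
import io

# Example defaults.inf content; the real file ships with 389-ds.
DEFAULTS_INF_EXAMPLE = """
[slapd]
sysconfdir = /etc
localstatedir = /var
"""

class PathsSketch:
    """Hypothetical Paths-like class with a local/remote switch."""

    def __init__(self, local=True, inf_text=None, remote_reader=None):
        self._local = local
        self._inf_text = inf_text          # stands in for defaults.inf on disk
        self._remote_reader = remote_reader  # stands in for a cn=config read
        self._config = None

    def _load(self):
        if self._config is not None:
            return
        if self._local:
            # Local: parse defaults.inf as today.
            if self._inf_text is None:
                raise IOError("unable to locate defaults.inf")
            cp = configparser.ConfigParser()
            cp.read_file(io.StringIO(self._inf_text))
            self._config = dict(cp["slapd"])
        else:
            # Remote: fall back to cn=config, which only works online
            # and with permission to read it.
            if self._remote_reader is None:
                raise RuntimeError("remote instance offline; cannot read cn=config")
            self._config = self._remote_reader()

    def get(self, key):
        self._load()
        return self._config[key]
```

The key behaviour being sketched: a remote Paths never touches the filesystem, and raises rather than returning garbage when it is offline.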
Finally, a more extreme option is to really separate out the concept of a local vs remote instance by having a RemoteDirSrv type that DirSrv subclasses - DirSrv would have ds_paths and all the local file manipulation types, and RemoteDirSrv would not have any of it. We could then have things like the Config or Account modules "take" the subclass required for their work, i.e. NssSsl would require a DirSrv type due to needing local actions, where Accounts would just need RemoteDirSrv as a superclass, etc. Anything that uses "paths" or has fs interactions would then be a "DirSrv" type.
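To make the subclass idea concrete, here is a tiny sketch. Only the names DirSrv, Accounts and NssSsl come from lib389; RemoteDirSrv and the constructor shapes are hypothetical:

```python
class RemoteDirSrv:
    """An instance we can only reach over LDAP: no filesystem access."""
    def __init__(self, uri):
        self.uri = uri

class DirSrv(RemoteDirSrv):
    """A local instance: adds ds_paths and local file manipulation."""
    def __init__(self, uri, ds_paths):
        super().__init__(uri)
        self.ds_paths = ds_paths

class Accounts:
    """Pure LDAP work, so any RemoteDirSrv (or subclass) will do."""
    def __init__(self, instance: RemoteDirSrv):
        self._instance = instance

class NssSsl:
    """Touches certificate databases on disk, so it demands the
    local subclass."""
    def __init__(self, instance: DirSrv):
        if not isinstance(instance, DirSrv):
            raise TypeError("NssSsl requires a local DirSrv instance")
        self._instance = instance
```

With this split, passing a remote-only instance to a filesystem-using module fails loudly at construction time instead of with a confusing path error later.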
Thoughts @spichugi and @vashirov? I think this whole issue stems from the huge amount of confusion in DirSrv historically as a kitchen sink of local and remote actions, and from having such a swathe of tech debt. But I'm also a bit afraid of sweeping changes breaking the test suites, CLI, etc. Saying this, if we're going to look at it, I think we should really think about the "next 5 yrs" of lib389 and how it could be used by us and others, so maybe this is the time for some changes.
Metadata Update from @mreynolds:
- Issue set to the milestone: 1.4.2
Ping @spichugi and @vashirov for your thoughts here :)
I'm probably going to go down the path of making Paths aware of whether the instance is local or remote; if remote, it will try to read from cn=config if online, else it will raise an exception.
Hey @firstyear, I apologize for my silence, was OOO last week.
Surprisingly, this is a fairly complex topic. I have some ideas, but I need to know your use cases, your user stories on how you want to use lib389 and how to interact with remote instances using CLI. Your suggested approach looks reasonable to me.
You're right about lib389 being everything and a kitchen sink. So, speaking of the next 5 years, I'd like to step back a bit, define what lib389 is (a 389ds administration library) and what it is not (a 389ds testing library), and define a roadmap. Just off the top of my head:
All good mate, I just wanted to get your input before I did anything here, because I think you would have some thoughts.
You are completely right - it's complex.
lib389 isn't so much a kitchen sink as "it was written by people who were java/c developers". And they did a lot for the time, but the scope of what we wanted to achieve, and the models we apply, have since evolved.
I think that it's not just this list though:
Honestly, lib389 is an administration library, but it turns out that's also what you need for testing - to test every facet of the code, we end up being able to administer all aspects of it too. So having them in the same library is fine; it's the specialist testing parts that can move to conftest like you say (such as topologies).
I guess the challenge here is: Do I do it the "right" way which is a bigger refactor (dirsrv vs dirsrvlocal) or do I do it the "quick" way for right now and get it done, with the aim of a larger refactor soon? The eternal struggle.
For now I'm tending towards "quick", if only because without mypy such a refactor would be hard to gain confidence in ...
Hit a pretty major snag here - it looks like we have remote functions (like MonitorLDBM) that use ds_is_older, and ds_is_older relies on paths to get the version as we compiled it. Honestly not sure how to get around this; I may take a short break and think about it.
Okay, so there are three ways to progress:
The second option is the best, but it's pretty invasive I think.
The root cause is that ds_is_older was intended for test-only purposes, but sneaked into lib389. For tests we need to identify whether DS has certain features/fixes, so we can skip/xfail tests instead of just failing. It needs to be done before we run the test, so it has to work offline. Initially I was getting the version string from ns-slapd itself, but later it was changed to use defaults.inf.
So I suggest option 4: for MonitorLDBM we can modify this function (or create a new one) to do an online check and get version from nsslapd-versionstring in cn=config:
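As a sketch of what that online check could look like, assuming nsslapd-versionstring has its usual "389-Directory/X.Y.Z ..." shape (the LDAP fetch from cn=config is omitted; only the parse/compare step is shown, and the function names are hypothetical):

```python
import re

def parse_versionstring(vstr):
    """Extract a comparable version tuple from nsslapd-versionstring,
    e.g. '389-Directory/1.4.2.1 B2019.240.155' -> (1, 4, 2, 1)."""
    m = re.search(r'389-Directory/([\d.]+)', vstr)
    if m is None:
        raise ValueError("unrecognised version string: %r" % vstr)
    # Filter out empty parts in case of a trailing dot.
    return tuple(int(p) for p in m.group(1).split('.') if p)

def online_is_older(vstr, threshold):
    """True if the server is older than threshold, e.g. (1, 4, 2).
    The caller would fetch vstr from nsslapd-versionstring in cn=config."""
    return parse_versionstring(vstr) < threshold
```

Tuple comparison gives the right lexicographic ordering for free, so MonitorLDBM (or a new helper) could gate its behaviour on `online_is_older(vstr, (1, 4, 2))` without ever touching defaults.inf.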
The version check has real value for determining if an api or feature server side is possible to consume.
Paths already has full online processing capability, btw, so I think the best option is to make the version check work online in Paths, and to move add_brookers into def open() so that we know the connection has been made.
With this in mind, I really do hate the add_brookers concept ... that's another piece of legacy debt I want removed but cannot, because it would break so much.
It passes basic tests but I'd want to see how this goes through CI to be really sure.
With this in mind, I honestly wonder about the feasibility of making lib389-but-better in parallel, fixing these issues and porting tests over one at a time, but that's a huge amount of work for what return? This PR really has highlighted some of the serious issues in the library, where I think the original vision is really different to the "current vision" of the tooling.