#49652 directory server returns an entry to the client whose ip address is not permitted in ACI
Closed: wontfix 7 years ago Opened 7 years ago by mreynolds.

Ticket was cloned from Red Hat Bugzilla (product Red Hat Enterprise Linux 7): Bug 1569365

Deny ACIs are not properly stored in the cached results, and deny ACIs are not properly evaluated if there are no allow ACIs on the same resource.
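For context, the failing scenario involves an entry protected only by a deny ACI with an IP-based bind rule, with no allow ACI on the same resource. An illustrative (not taken from the original bug report) ACI of this shape, in standard 389-ds/RHDS syntax:

```ldif
# Hypothetical example: deny all access to clients outside 192.168.1.*,
# with no accompanying allow ACI on the same entry.
aci: (targetattr = "*")(version 3.0; acl "Deny off-net clients";
 deny (all)(ip != "192.168.1.*");)
```

Per the bug description, an entry could still be returned to a client whose IP address this rule should have blocked, because the deny result was mishandled in the ACL result cache.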


Metadata Update from @mreynolds:
- Custom field rhbz adjusted to https://bugzilla.redhat.com/show_bug.cgi?id=1569365

7 years ago

Metadata Update from @mreynolds:
- Issue assigned to mreynolds

7 years ago

Metadata Update from @mreynolds:
- Custom field component adjusted to None
- Custom field origin adjusted to None
- Custom field reviewstatus adjusted to None
- Custom field type adjusted to None
- Custom field version adjusted to None

7 years ago

Metadata Update from @mreynolds:
- Issue set to the milestone: 1.3.7.0 (was: 0.0 NEEDS_TRIAGE)

7 years ago

Metadata Update from @mreynolds:
- Custom field reviewstatus adjusted to review (was: None)
- Issue set to the milestone: 0.0 NEEDS_TRIAGE (was: 1.3.7.0)

7 years ago

So for part one of the fix we loop over all the attribute "caches" and set the result to failure. This originally seemed inefficient, so I tried improving it by setting a flag and then checking that flag later, instead of reprocessing all the attributes again. I did get this working, but it feels hacky, and I don't know how much we gained from the approach. While TET testing still showed no regressions with the single-flag approach, I just don't trust it; it feels risky. The issue with setting a single flag in the aclpb is knowing when to reset it, especially if there are two different binds on the same connection. While this is rare, it is legal, so we must account for it. There are just too many variables to handle if we go with this approach.

I feel that the original solution (the currently attached patch) is actually the most reliable and stable, and it is less intrusive to the existing ACL cache design. I was originally worried about a performance hit, since it loops over all of the entry's attributes, but it only does this if the operation has already failed. Do we care?

Anyway, I welcome feedback on this, as there are different ways to fix part one.

Metadata Update from @lkrispen:
- Custom field reviewstatus adjusted to ack (was: review)

7 years ago

Yes, I think your patch is ok; any attempt to make it more efficient would introduce new complexity.

d214765..d77c7f0 master -> master

183bacd..eb08d43 389-ds-base-1.3.8 -> 389-ds-base-1.3.8

9b1ad54..8bdcfa4 389-ds-base-1.3.7 -> 389-ds-base-1.3.7

02f502a..31ba1e7 389-ds-base-1.3.6 -> 389-ds-base-1.3.6

03a1935..f4a76bb 389-ds-base-1.2.11 -> 389-ds-base-1.2.11

Metadata Update from @mreynolds:
- Issue close_status updated to: fixed
- Issue set to the milestone: 1.3.7.0 (was: 0.0 NEEDS_TRIAGE)
- Issue status updated to: Closed (was: Open)

7 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/2711

If you want to receive further updates on the issue, please navigate to the GitHub issue
and click the Subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: fixed)

4 years ago
