#50888 Server should reduce page_size, defined in pageResultCtrl, to the value it supports
Closed: wontfix 3 years ago by spichugi. Opened 4 years ago by tbordaz.

Issue Description

A server protects itself with nsslapd-pagesizelimit and nsslapd-sizelimit.
A client sending a simple paged results request with a page_size > nsslapd-pagesizelimit gets err=SIZE_LIMIT_EXCEEDED.
A client does not know these server parameters; it wants the server to adapt the request to its own configuration and process it without error.

https://tools.ietf.org/html/rfc2696 says:

Server implementations may enforce an overriding sizelimit, to
prevent the retrieval of large portions of a publicly-accessible
directory.

The purpose of this ticket is for the server to override the page_size of the request with its own tuning values (nsslapd-pagesizelimit, nsslapd-sizelimit) and process the request without error.

Package Version and Platform

All versions.

Steps to reproduce

  1. set nsslapd-pagesizelimit=30
  2. create 100 entries
  3. ldapsearch -E pr=50 "(filter matching the 100 entries)"

Actual results

SIZE_LIMIT_EXCEEDED at the first SEARCH_RESULT_DONE

Expected results

Four SEARCH_RESULT_DONE responses with 30/30/30/10 entries, and no SIZE_LIMIT_EXCEEDED.
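The expected paging can be sketched as follows. This is a minimal Python illustration of the proposed behavior, not actual 389-ds-base code: the server would clamp the client's requested page_size to nsslapd-pagesizelimit instead of returning SIZE_LIMIT_EXCEEDED, then page through the full result set. The function name and parameters are hypothetical.

```python
def paged_result_sizes(total_entries, requested_page_size, pagesizelimit):
    """Return the entry count of each page the server would send.

    Proposed override: honor the smaller of the client's requested
    page size and the server's nsslapd-pagesizelimit, rather than
    failing the search with SIZE_LIMIT_EXCEEDED.
    """
    effective = min(requested_page_size, pagesizelimit)
    pages = []
    remaining = total_entries
    while remaining > 0:
        # Last page may be partial.
        pages.append(min(effective, remaining))
        remaining -= pages[-1]
    return pages

# Scenario from the ticket: 100 entries, client asks PR=50,
# nsslapd-pagesizelimit=30.
print(paged_result_sizes(100, 50, 30))  # → [30, 30, 30, 10]
```

With the override in place, the search completes in four pages of 30/30/30/10 entries instead of failing on the first SEARCH_RESULT_DONE.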


Metadata Update from @mreynolds:
- Custom field origin adjusted to None
- Custom field reviewstatus adjusted to None
- Issue set to the milestone: 1.4.3

4 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/3941

If you want to receive further updates on the issue, please navigate to the github issue
and click the Subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix
- Issue status updated to: Closed (was: Open)

3 years ago
