c9d0b6c Ticket #47606 - replica init/bulk import errors should be more verbose

Authored and Committed by nhosoi 10 years ago
    Ticket #47606 - replica init/bulk import errors should be more verbose
    
    Description:
    1. maxbersize: If the size of an entry is larger than the consumer's
       maxbersize, the following error used to be logged:
         Incoming BER Element was too long, max allowable is ### bytes.
         Change the nsslapd-maxbersize attribute in cn=config to increase.
       This message does not indicate how large the maxbersize needs to be.
       This patch adds the code to retrieve the failed ber size.
       Revised message:
          Incoming BER Element was @@@ bytes, max allowable is ### bytes.
          Change the nsslapd-maxbersize attribute in cn=config to increase.
        Note: There is no lber API that returns the ber size when it fails
        to handle the ber.  This patch borrows the internal structure of
        the ber and reads the size from it.  This could be risky, since
        the size or layout of the ber structure could change in the
        openldap/mozldap lber library.
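        The workaround in item 1 can be sketched as follows.  This is an
        illustrative assumption, not the actual lber internals: the struct
        layout, field names, and helper names below are invented, and the
        real (opaque) BerElement layout may differ.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical mirror of the leading fields of the opaque BerElement;
 * the real layout is private to the openldap/mozldap lber library and
 * may change between versions -- exactly the risk noted above. */
struct lber_overlay {
    unsigned long ber_tag;   /* assumed: outermost tag */
    unsigned long ber_len;   /* assumed: total element length */
};

/* Recover the length of a ber that lber refused to decode by peeking
 * at the mirrored layout; 'ber' stands in for a BerElement pointer. */
static unsigned long
peek_ber_len(const void *ber)
{
    return ((const struct lber_overlay *)ber)->ber_len;
}

/* Format the revised, more verbose error message into 'buf'. */
static void
format_too_long_msg(const void *ber, unsigned long maxbersize,
                    char *buf, size_t buflen)
{
    snprintf(buf, buflen,
             "Incoming BER Element was %lu bytes, max allowable is %lu "
             "bytes. Change the nsslapd-maxbersize attribute in "
             "cn=config to increase.",
             peek_ber_len(ber), maxbersize);
}
```

        Because the overlay only mirrors the fields it needs, a change to
        the library's private layout would silently return garbage, which
        is why the commit message calls this approach risky.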
    2. cache size: The bulk import depends upon the nsslapd-cachememsize
       value in the backend instance entry (e.g., cn=userRoot,cn=ldbm
       database,cn=plugins,cn=config).  If an entry size is larger than
       the cachememsize, the bulk import used to fail with this message:
          import userRoot: REASON: entry too large (@@@ bytes) for the
          import buffer size (### bytes).  Try increasing nsslapd-
          cachememsize.
        Also, the message follows the skipping-entry message:
          import userRoot: WARNING: skipping entry "<DN>"
        but in fact the import did NOT skip the entry and continue; it
        failed at that point and completely wiped out the backend
        database.
       This patch modifies the message as follows:
          import userRoot: REASON: entry too large (@@@ bytes) for the
          effective import buffer size (### bytes). Try increasing nsslapd-
          cachememsize for the backend instance "userRoot".
       and as the message mentions, it just skips the failed entry and
       continues the bulk import.
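        The skip-and-continue behavior described above can be sketched like
        this.  It is a simplified stand-in, not the actual import code; the
        entry struct and function names are invented for illustration.

```c
#include <assert.h>
#include <stdio.h>

struct entry {
    const char *dn;
    size_t size;          /* flattened entry size in bytes */
};

/* Import every entry that fits in the effective buffer; log and skip
 * (rather than abort) any entry that does not.  Returns the number of
 * entries imported. */
static int
bulk_import(const struct entry *entries, int n, size_t buffer_size,
            const char *instance)
{
    int imported = 0;
    for (int i = 0; i < n; i++) {
        if (entries[i].size > buffer_size) {
            fprintf(stderr,
                    "import %s: WARNING: skipping entry \"%s\"\n"
                    "import %s: REASON: entry too large (%zu bytes) for "
                    "the effective import buffer size (%zu bytes). Try "
                    "increasing nsslapd-cachememsize for the backend "
                    "instance \"%s\".\n",
                    instance, entries[i].dn, instance, entries[i].size,
                    buffer_size, instance);
            continue;     /* the fix: skip this entry, keep importing */
        }
        imported++;       /* stand-in for the real add-to-backend work */
    }
    return imported;
}
```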
     3. In repl5_tot_result_threadmain, when conn_read_result_ex returns
        nonzero (non-SUCCESS), it sets abort but does not set any error
        code in rc (the return code).  That is not considered "finished"
        in repl5_tot_waitfor_async_results, which continues waiting until
        the code reaches the max loop count (about 5 minutes).  This patch
        sets the return code to LDAP_CONNECT_ERROR along with setting
        abort when conn_read_result_ex returns CONN_NOT_CONNECTED, so a
        failed bulk import finishes quickly.
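        A minimal sketch of that interaction, with struct and constant
        names assumed for illustration (LDAP_CONNECT_ERROR = 91 is a real
        LDAP result code; the rest is invented):

```c
#include <assert.h>

#define LDAP_SUCCESS        0
#define LDAP_CONNECT_ERROR  91    /* real LDAP result code */
#define CONN_NOT_CONNECTED  1     /* assumed constant */
#define MAX_LOOPS           300   /* ~5 minutes at 1s per iteration */

struct cb_data {
    int abort_flag;
    int rc;
};

/* Result thread: on a broken connection, record abort AND a real
 * error code in rc -- the second assignment is the fix. */
static void
on_read_result(struct cb_data *cb, int conn_status)
{
    if (conn_status == CONN_NOT_CONNECTED) {
        cb->abort_flag = 1;
        cb->rc = LDAP_CONNECT_ERROR;  /* marks the import "finished" */
    }
}

/* Waiter: a nonzero rc means "finished" (possibly failed), so the
 * loop exits immediately instead of spinning to MAX_LOOPS.  Returns
 * the number of iterations spent waiting. */
static int
waitfor_async_results(const struct cb_data *cb)
{
    int loops = 0;
    while (loops < MAX_LOOPS) {
        if (cb->rc != LDAP_SUCCESS)
            break;
        loops++;          /* the real code sleeps ~1s here */
    }
    return loops;
}
```

        Before the fix, only abort_flag was set, so the waiter saw
        rc == LDAP_SUCCESS on every pass and spun for the full loop count.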
    
    https://fedorahosted.org/389/ticket/47606
    
    Reviewed by rmeggins@redhat.com (Thank you, Rich!!)
    (cherry picked from commit 1119083d3d99993421609783efcb8962d78724fc)
    (cherry picked from commit fde9ed5bf74b4ea1fff875bcb421137c78af1227)