#49444 Heap use after free in task_log
Closed: wontfix 6 years ago Opened 6 years ago by firstyear.

Issue Description

In a busy task, the task log can be used after free, because update notifications were issued without correct locking.

=================================================================
==19525==AddressSanitizer: while reporting a bug found another one. Ignoring.
==19525==ERROR: AddressSanitizer: heap-use-after-free on address 0x61d00025f880 at pc 0x7f16fd58b36e bp 0x7f16b505da30 sp 0x7f16b505d1d8
READ of size 3 at 0x61d00025f880 thread T72
    #0 0x7f16fd58b36d in __interceptor_fopen64 ??:?
    #1 0x7f16fd58b36d in ?? ??:0
    #2 0x7f16fcfec23c in normalize_mods2bvals /home/william/development/389ds/ds/ldap/servers/slapd/util.c:774:0
    #3 0x7f16fcef0a2b in modify_internal_pb /home/william/development/389ds/ds/ldap/servers/slapd/modify.c:540:0
    #4 0x7f16fcef0412 in slapi_modify_internal_pb /home/william/development/389ds/ds/ldap/servers/slapd/modify.c:448:0
    #5 0x7f16fcfcc0b0 in modify_internal_entry /home/william/development/389ds/ds/ldap/servers/slapd/task.c:748:0
    #6 0x7f16fcfc9f71 in slapi_task_status_changed /home/william/development/389ds/ds/ldap/servers/slapd/task.c:366:0
    #7 0x7f16fcfc8e77 in slapi_task_log_notice /home/william/development/389ds/ds/ldap/servers/slapd/task.c:324:0
    #8 0x7f16eee40dcd in import_log_notice /home/william/development/389ds/ds/ldap/servers/slapd/back-ldbm/import.c:223:0
    #9 0x7f16eee40349 in import_fifo_fetch /home/william/development/389ds/ds/ldap/servers/slapd/back-ldbm/import.c:133:0
    #10 0x7f16eee64d91 in import_worker /home/william/development/389ds/ds/ldap/servers/slapd/back-ldbm/import-threads.c:2881:0
    #11 0x7f16fa7550ea in PR_Select ??:?
    #12 0x7f16fa7550ea in ?? ??:0
    #13 0x7f16fa51436c in start_thread ??:0:0
    #14 0x7f16f9decbbe in __GI___clone ??:0:0

0x61d00025f880 is located 0 bytes inside of 2392-byte region [0x61d00025f880,0x61d0002601d8)
freed by thread T74 here:
    #0 0x7f16fd618c40 in __interceptor_realloc ??:0:0
    #1 0x7f16fce33544 in slapi_ch_realloc /home/william/development/389ds/ds/ldap/servers/slapd/ch_malloc.c:150:0
    #2 0x7f16fcfc8d09 in slapi_task_log_notice /home/william/development/389ds/ds/ldap/servers/slapd/task.c:313:0
    #3 0x7f16eee40dcd in import_log_notice /home/william/development/389ds/ds/ldap/servers/slapd/back-ldbm/import.c:223:0
    #4 0x7f16eee40349 in import_fifo_fetch /home/william/development/389ds/ds/ldap/servers/slapd/back-ldbm/import.c:133:0
    #5 0x7f16eee64d91 in import_worker /home/william/development/389ds/ds/ldap/servers/slapd/back-ldbm/import-threads.c:2881:0
    #6 0x7f16fa7550ea in PR_Select ??:?
    #7 0x7f16fa7550ea in ?? ??:0

previously allocated by thread T72 here:
    #0 0x7f16fd618c40 in __interceptor_realloc ??:0:0
    #1 0x7f16fce33544 in slapi_ch_realloc /home/william/development/389ds/ds/ldap/servers/slapd/ch_malloc.c:150:0
    #2 0x7f16fcfc8d09 in slapi_task_log_notice /home/william/development/389ds/ds/ldap/servers/slapd/task.c:313:0
    #3 0x7f16eee40dcd in import_log_notice /home/william/development/389ds/ds/ldap/servers/slapd/back-ldbm/import.c:223:0
    #4 0x7f16eee40349 in import_fifo_fetch /home/william/development/389ds/ds/ldap/servers/slapd/back-ldbm/import.c:133:0
    #5 0x7f16eee64d91 in import_worker /home/william/development/389ds/ds/ldap/servers/slapd/back-ldbm/import-threads.c:2881:0
    #6 0x7f16fa7550ea in PR_Select ??:?
    #7 0x7f16fa7550ea in ?? ??:0

Metadata Update from @firstyear:
- Issue assigned to firstyear

6 years ago

Metadata Update from @firstyear:
- Custom field component adjusted to None
- Custom field origin adjusted to None
- Custom field reviewstatus adjusted to review
- Custom field type adjusted to None
- Custom field version adjusted to None

6 years ago

The patch replaces the if (task_log_lock) check with PR_ASSERT.

The only reason task_log_lock could be NULL is that PR_NewLock() failed in new_task(). But in that case the task should be terminated and cleaned up immediately, not left to wait until an assert fails.

@lkrispen This is correct. But I wanted to be "safe", and given the previous "if" statements, I wanted to keep the checks in place.

Perhaps we should add PR_ASSERT to the lock creation too, but for now I'd feel safer if we kept the asserts in place, especially since the if statements were "there for a reason".

I have no problem with leaving the asserts, but at lock creation I would explicitly check that the lock was created, log an error, and abort the task creation. There is no need to crash there. Maybe this missing check was the reason for the later if statements.

Hmmm okay. I'll try this, and re-run the tests :)

I don't understand why you do not want to handle the failure of the lock creation properly. The assert will just abort the process (in a debug build), but if we know that the creation of the lock failed there is no use in continuing; it is an error, and the error should be returned.

As discussed today, this was because if the lock fails to create, we have much bigger problems and will likely burn and crash anyway.

But you are correct that we should still do the "right thing", so I will leave the ASSERT for debug purposes, and I will also cleanly handle the error case.

Thanks :)

Metadata Update from @lkrispen:
- Custom field reviewstatus adjusted to ack (was: review)

6 years ago

commit a406205
To ssh://git@pagure.io/389-ds-base.git
22a11d9..a406205 master -> master

Thank you @lkrispen, really appreciate your help!

Metadata Update from @firstyear:
- Issue close_status updated to: fixed
- Issue status updated to: Closed (was: Open)

6 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/2503

If you want to receive further updates on the issue, please navigate to the github issue
and click on subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: fixed)

3 years ago