#49693 A DB_DEADLOCK while adding a tombstone (RUV) leads to access of an already freed entry
Closed: wontfix 5 years ago. Opened 5 years ago by tbordaz.

Issue Description

The error is reported during an ASAN test run, while a topology is being initialized.
It is probably not systematic: it happens during an ldbm_add retry after a DB_DEADLOCK, so there is unlikely to be a reproducible testcase.

The stack is:

==23623==ERROR: AddressSanitizer: heap-use-after-free on address 0x610000580650 at pc 0x7f1c15d6f6c5 bp 0x7f1bd4909300 sp 0x7f1bd49092f0
READ of size 8 at 0x610000580650 thread T78
#0 0x7f1c15d6f6c4 in slapi_sdn_get_dn <ws>/ldap/servers/slapd/dn.c:2347
#1 0x7f1c15d8c19f in slapi_entry_get_dn_const <ws>/ldap/servers/slapd/entry.c:2109
#2 0x7f1c15d8a2c0 in entry2str_internal <ws>/ldap/servers/slapd/entry.c:1635
#3 0x7f1c15d8adac in entry2str_internal_ext <ws>/ldap/servers/slapd/entry.c:1770
#4 0x7f1c15d8af13 in slapi_entry2str_with_options <ws>/ldap/servers/slapd/entry.c:1809
#5 0x7f1c0590eff5 in ldbm_back_add <ws>/ldap/servers/slapd/back-ldbm/ldbm_add.c:911
#6 0x7f1c15d344e2 in op_shared_add <ws>/ldap/servers/slapd/add.c:679
#7 0x7f1c15d3282c in add_internal_pb <ws>/ldap/servers/slapd/add.c:407
#8 0x7f1c15d321cc in slapi_add_internal_pb <ws>/ldap/servers/slapd/add.c:332
#9 0x7f1c070040b7 in replica_create_ruv_tombstone <ws>/ldap/servers/plugins/replication/repl5_replica.c:3382
#10 0x7f1c06fff45f in _replica_configure_ruv <ws>/ldap/servers/plugins/replication/repl5_replica.c:2556
#11 0x7f1c06ff889b in replica_reload_ruv <ws>/ldap/servers/plugins/replication/repl5_replica.c:1500
#12 0x7f1c07006f7c in replica_enable_replication <ws>/ldap/servers/plugins/replication/repl5_replica.c:3816
#13 0x7f1c06fea3a7 in multimaster_be_state_change <ws>/ldap/servers/plugins/replication/repl5_plugins.c:1475
#14 0x7f1c15df937a in mtn_be_state_change <ws>/ldap/servers/slapd/mapping_tree.c:209
#15 0x7f1c15e0b6fa in mtn_internal_be_set_state <ws>/ldap/servers/slapd/mapping_tree.c:3534
#16 0x7f1c15e0b7f5 in slapi_mtn_be_enable <ws>/ldap/servers/slapd/mapping_tree.c:3584
#17 0x7f1c058cf418 in import_all_done <ws>/ldap/servers/slapd/back-ldbm/import.c:1196
#18 0x7f1c058d23a8 in import_main_offline <ws>/ldap/servers/slapd/back-ldbm/import.c:1605
#19 0x7f1c058d25cf in import_main <ws>/ldap/servers/slapd/back-ldbm/import.c:1629
#20 0x7f1c13ca207a  (/lib64/libnspr4.so+0x2907a)
#21 0x7f1c1363e36c in start_thread (/lib64/libpthread.so.0+0x736c)
#22 0x7f1c12f12b4e in __clone (/lib64/libc.so.6+0x110b4e)

0x610000580650 is located 16 bytes inside of 184-byte region [0x610000580640,0x6100005806f8)
freed by thread T78 here:
#0 0x7f1c165394b8 in __interceptor_free (/lib64/libasan.so.4+0xde4b8)
#1 0x7f1c15d53731 in slapi_ch_free <ws>/ldap/servers/slapd/ch_malloc.c:270
#2 0x7f1c15d8b60a in slapi_entry_free <ws>/ldap/servers/slapd/entry.c:1916
#3 0x7f1c0586bb4e in backentry_free <ws>/ldap/servers/slapd/back-ldbm/backentry.c:28
#4 0x7f1c058714ea in entrycache_return <ws>/ldap/servers/slapd/back-ldbm/cache.c:1157
#5 0x7f1c05871021 in cache_return <ws>/ldap/servers/slapd/back-ldbm/cache.c:1118
#6 0x7f1c05909a98 in ldbm_back_add <ws>/ldap/servers/slapd/back-ldbm/ldbm_add.c:227
#7 0x7f1c15d344e2 in op_shared_add <ws>/ldap/servers/slapd/add.c:679
#8 0x7f1c15d3282c in add_internal_pb <ws>/ldap/servers/slapd/add.c:407
#9 0x7f1c15d321cc in slapi_add_internal_pb <ws>/ldap/servers/slapd/add.c:332
#10 0x7f1c070040b7 in replica_create_ruv_tombstone <ws>/ldap/servers/plugins/replication/repl5_replica.c:3382
#11 0x7f1c06fff45f in _replica_configure_ruv <ws>/ldap/servers/plugins/replication/repl5_replica.c:2556
#12 0x7f1c06ff889b in replica_reload_ruv <ws>/ldap/servers/plugins/replication/repl5_replica.c:1500
#13 0x7f1c07006f7c in replica_enable_replication <ws>/ldap/servers/plugins/replication/repl5_replica.c:3816
#14 0x7f1c06fea3a7 in multimaster_be_state_change <ws>/ldap/servers/plugins/replication/repl5_plugins.c:1475
#15 0x7f1c15df937a in mtn_be_state_change <ws>/ldap/servers/slapd/mapping_tree.c:209
#16 0x7f1c15e0b6fa in mtn_internal_be_set_state <ws>/ldap/servers/slapd/mapping_tree.c:3534
#17 0x7f1c15e0b7f5 in slapi_mtn_be_enable <ws>/ldap/servers/slapd/mapping_tree.c:3584
#18 0x7f1c058cf418 in import_all_done <ws>/ldap/servers/slapd/back-ldbm/import.c:1196
#19 0x7f1c058d23a8 in import_main_offline <ws>/ldap/servers/slapd/back-ldbm/import.c:1605
#20 0x7f1c058d25cf in import_main <ws>/ldap/servers/slapd/back-ldbm/import.c:1629
#21 0x7f1c13ca207a  (/lib64/libnspr4.so+0x2907a)

Package Version and Platform

Observed on 1.3.7. At first look it applies to all versions.

Steps to reproduce

  1. No reproducible testcase identified

Actual results

ASAN reports a heap-use-after-free and aborts.

Expected results

No ASAN report: the retried add should not access the freed entry.


Metadata Update from @tbordaz:
- Issue assigned to tbordaz

5 years ago

Metadata Update from @tbordaz:
- Custom field component adjusted to None
- Custom field origin adjusted to None
- Custom field reviewstatus adjusted to None
- Custom field type adjusted to None
- Custom field version adjusted to None

5 years ago

Metadata Update from @tbordaz:
- Custom field origin adjusted to Dev (was: None)
- Custom field reviewstatus adjusted to review (was: None)
- Custom field type adjusted to defect (was: None)

5 years ago

Metadata Update from @lkrispen:
- Custom field reviewstatus adjusted to ack (was: review)

5 years ago

The bug applies to all releases, but the risk is low and the signature is direct (a crash when 'e' is accessed), so IMHO it is enough to fix it in master only.

To ssh://pagure.io/389-ds-base.git
0282ef2..6157c6a master -> master
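
For context only, a general defensive shape for this class of bug is to hold a single owning pointer, take a fresh reference on each retry, and guard every use after the loop. The sketch below reuses the stand-ins from the sketch in the issue description and is hypothetical; it does not reproduce the actual 0282ef2..6157c6a patch:

/* Hypothetical defensive rewrite of the retry loop (NOT the actual patch).
 * Reuses struct entry, cache_return_sim, txn_add_sim and SIM_DB_DEADLOCK
 * from the sketch in the issue description. */
static struct entry *cache_acquire_sim(void)
{
    struct entry *e = malloc(sizeof(*e));
    e->dn = strdup("cn=ruv tombstone,dc=example,dc=com");
    return e;
}

static void add_with_retry(void)
{
    struct entry *e = NULL;

    for (int attempt = 0; attempt < 2; attempt++) {
        e = cache_acquire_sim();      /* fresh reference on every attempt */
        if (txn_add_sim(attempt) == SIM_DB_DEADLOCK) {
            cache_return_sim(&e);     /* frees and NULLs the only pointer */
            continue;                 /* no stale alias survives the retry */
        }
        break;                        /* success: e is live here */
    }

    if (e != NULL) {                  /* guard any use after the loop */
        printf("dn: %s\n", e->dn);
        cache_return_sim(&e);
    }
}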

Metadata Update from @tbordaz:
- Issue close_status updated to: fixed
- Issue set to the milestone: 1.4.0
- Issue status updated to: Closed (was: Open)

5 years ago

389-ds-base is moving from Pagure to Github. This means that new issues and pull requests
will be accepted only in 389-ds-base's github repository.

This issue has been cloned to Github and is available here:
- https://github.com/389ds/389-ds-base/issues/2752

If you want to receive further updates on the issue, please navigate to the Github issue
and click on the subscribe button.

Thank you for understanding. We apologize for any inconvenience.

Metadata Update from @spichugi:
- Issue close_status updated to: wontfix (was: fixed)

3 years ago
