#4031 sssd-kcm: talloc_abort call via schedule_fd_processing
Closed: duplicate 4 years ago by atikhonov. Opened 4 years ago by yrro.

After installing sssd-kcm and setting default_ccache_name = KCM: in
krb5.conf, running kinit and klist a few times started triggering the
following crash:

#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1  0x00007f7ce3493535 in __GI_abort () at abort.c:79
#2  0x00007f7ce3a8b621 in talloc_abort (reason=0x7f7ce3a99070 "Bad talloc magic value - unknown value")
    at ../talloc.c:500
#3  0x00007f7ce3a8b591 in talloc_abort_unknown_value () at ../talloc.c:529
#4  talloc_chunk_from_ptr (ptr=0x55db7c89de20) at ../talloc.c:529
#5  _talloc_free (ptr=0x55db7c89de20, location=0x55db7b05b27a "../src/util/tev_curl.c:449") at ../talloc.c:1747
#6  0x000055db7b0378fb in schedule_fd_processing (multi=<optimized out>, timeout_ms=0, userp=<optimized out>)
    at ../src/util/tev_curl.c:449
#7  0x00007f7ce3af78cc in update_timer (multi=multi@entry=0x55db7c89d4a0) at multi.c:2941
#8  0x00007f7ce3af8f76 in curl_multi_add_handle (data=0x55db7dcc6b80, multi=0x55db7c89d4a0) at multi.c:500
#9  curl_multi_add_handle (multi=0x55db7c89d4a0, data=0x55db7dcc6b80) at multi.c:376
#10 0x000055db7b037fa9 in tcurl_request_send (mem_ctx=mem_ctx@entry=0x55db7c8aade0, ev=ev@entry=0x55db7c839830,
    tcurl_ctx=tcurl_ctx@entry=0x55db7c89d480, tcurl_req=tcurl_req@entry=0x55db7dcc91d0, timeout=timeout@entry=5)
    at ../src/util/tev_curl.c:700
#11 0x000055db7b038a98 in tcurl_http_send (mem_ctx=0x55db7c8aade0, ev=ev@entry=0x55db7c839830,
    tcurl_ctx=0x55db7c89d480, method=method@entry=TCURL_HTTP_GET,
    socket_path=socket_path@entry=0x55db7b0534a3 "/var/run/secrets.socket", url=<optimized out>,
    headers=0x55db7b0759b0 <sec_headers>, body=0x0, timeout=5) at ../src/util/tev_curl.c:1017
#12 0x000055db7b02d659 in sec_list_send (mem_ctx=<optimized out>, ev=ev@entry=0x55db7c839830,
    client=client@entry=0x55db7c89a7d0, secdb=<optimized out>) at ../src/responder/kcm/kcmsrv_ccache_secrets.c:163
#13 0x000055db7b02dc4e in sec_get_ccache_send (mem_ctx=<optimized out>, ev=ev@entry=0x55db7c839830,
    secdb=secdb@entry=0x55db7c89a440, client=client@entry=0x55db7c89a7d0, name=name@entry=0x55db7c89d300 "1000",
    uuid=uuid@entry=0x7fff37adda30 "") at ../src/responder/kcm/kcmsrv_ccache_secrets.c:482
#14 0x000055db7b02e16d in ccdb_sec_getbyname_send (mem_ctx=<optimized out>, ev=0x55db7c839830, db=<optimized out>,
    client=0x55db7c89a7d0, name=0x55db7c89d300 "1000") at ../src/responder/kcm/kcmsrv_ccache_secrets.c:1275
#15 0x000055db7b025e39 in kcm_ccdb_getbyname_send (mem_ctx=<optimized out>, ev=ev@entry=0x55db7c839830,
    db=0x55db7c89a220, client=0x55db7c89a7d0, name=0x55db7c89d300 "1000")
    at ../src/responder/kcm/kcmsrv_ccache.c:692
#16 0x000055db7b02fa6e in kcm_op_get_cred_by_uuid_send (mem_ctx=<optimized out>, ev=0x55db7c839830,
    op_ctx=0x55db7dcc9110) at ../src/responder/kcm/kcmsrv_ops.c:1099
#17 0x000055db7b02e9f3 in kcm_cmd_queue_done (subreq=0x0) at ../src/responder/kcm/kcmsrv_ops.c:196
#18 0x00007f7ce3aa7479 in tevent_common_invoke_immediate_handler (im=0x55db7c838460, removed=removed@entry=0x0)
    at ../tevent_immediate.c:165
#19 0x00007f7ce3aa74a3 in tevent_common_loop_immediate (ev=ev@entry=0x55db7c839830) at ../tevent_immediate.c:202
#20 0x00007f7ce3aace5b in epoll_event_loop_once (ev=0x55db7c839830, location=<optimized out>)
    at ../tevent_epoll.c:917
#21 0x00007f7ce3aab2d7 in std_event_loop_once (ev=0x55db7c839830,
    location=0x7f7ce36e8178 "../src/util/server.c:725") at ../tevent_standard.c:110
#22 0x00007f7ce3aa67e4 in _tevent_loop_once (ev=ev@entry=0x55db7c839830,
    location=location@entry=0x7f7ce36e8178 "../src/util/server.c:725") at ../tevent.c:772
#23 0x00007f7ce3aa6a2b in tevent_common_loop_wait (ev=0x55db7c839830,
    location=0x7f7ce36e8178 "../src/util/server.c:725") at ../tevent.c:895
#24 0x00007f7ce3aab277 in std_event_loop_wait (ev=0x55db7c839830,
    location=0x7f7ce36e8178 "../src/util/server.c:725") at ../tevent_standard.c:141
#25 0x00007f7ce36c38e3 in server_loop () from /usr/lib/x86_64-linux-gnu/sssd/libsss_util.so
#26 0x000055db7b022c70 in main (argc=<optimized out>, argv=<optimized out>) at ../src/responder/kcm/kcm.c:318
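
Frames #2 to #6 show talloc_free() being called from schedule_fd_processing() on a pointer whose talloc header no longer carries a valid magic value, which is talloc's signature for a dangling (already freed, and possibly reused) chunk. Below is a minimal, self-contained sketch of that general failure mode, assuming only stock libtalloc; it is illustrative, not the actual SSSD code path.

/* Illustrative sketch only, not the actual SSSD code: talloc prepends a
 * header with a magic value to every allocation, and talloc_free() on a
 * dangling pointer finds a clobbered header and calls talloc_abort(), as
 * in frames #2 to #5 above.
 *
 * Build (assumed toolchain): gcc demo.c $(pkg-config --cflags --libs talloc)
 */
#include <talloc.h>

int main(void)
{
    TALLOC_CTX *ctx = talloc_new(NULL);

    /* Hypothetical stand-in for the timer object that
     * schedule_fd_processing() frees in frame #6. */
    char *timer = talloc_strdup(ctx, "pending timer");

    talloc_free(timer);   /* first free: chunk released, header invalidated */

    /* Freeing the same, now dangling, pointer again makes
     * talloc_chunk_from_ptr() fail its magic check and abort the process.
     * talloc reports "access after free" if the freed chunk is still
     * intact, or "unknown value", as in this ticket, once the memory
     * has been reused. */
    talloc_free(timer);

    return 0;             /* never reached */
}

This only illustrates why the single log line below and the abort in the backtrace go together; the actual lifetime bug in tev_curl.c is the subject of the fix discussed further down.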

In sssd_kcm.log we have only the following message:

(Fri May 24 09:13:18 2019) [sssd[kcm]] [talloc_log_fn] (0x0010): Bad talloc magic value - unknown value

Putting debug_level=9 in the [kcm] section of sssd.conf doesn't
cause any more detailed messages to be logged.
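
For reference, the setup described above boils down to the following two stanzas. This is a reconstruction of the reporter's configuration, not a verbatim copy of their files:

# /etc/krb5.conf
[libdefaults]
    default_ccache_name = KCM:

# /etc/sssd/sssd.conf
[kcm]
debug_level = 9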

[forwarded from https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=929473]
sssd 1.16.3 on Debian testing/unstable.


@yrro, I think this is the same issue as the one fixed in https://github.com/SSSD/sssd/pull/724

For some reason it was only fixed in the master (2.x) branch.

@jhrozek, is there any reason this patch can't be backported to 1.16?

Metadata Update from @atikhonov:
- Issue assigned to atikhonov

4 years ago

No reason; I guess I forgot or didn't realise it was a good candidate. I pushed the patch into sssd-1-16 as well, as 37718f8.

btw is there a reason Debian is using 1.16.3 and not 1.16.4?

Sadly, the next release ("buster") has been in soft freeze (small, targeted fixes only) since February 12. Once I get back to my Debian machine I'll test this fix and see if I can persuade the maintainers to apply it to the package in unstable.

BTW, thanks for pointing out the fix so quickly! :)

> Once I get back to my Debian machine I'll test this fix

Hi @yrro,

Did you have the opportunity to test the fix / can this issue be closed?

@atikhonov The fix works fine. I have requested that it be
applied in the next point release of Debian. Thanks again!

Thanks for the confirmation.

Metadata Update from @atikhonov:
- Issue close_status updated to: duplicate
- Issue status updated to: Closed (was: Open)

4 years ago

SSSD is moving from Pagure to GitHub. This means that new issues and pull requests
will be accepted only in SSSD's GitHub repository.

This issue has been cloned to GitHub and is available here:
- https://github.com/SSSD/sssd/issues/5001

If you want to receive further updates on the issue, please navigate to the GitHub issue
and click on the subscribe button.

Thank you for understanding. We apologize for any inconvenience.
