Bug imported from https://bugs.centos.org/view.php?id=16969 and requested by @amoralej.
When building openvswitch on CentOS 8, the test suite executed in x86_64 builds is failing and taking very long to run: https://cbs.centos.org/kojifiles/work/tasks/9381/1119381/build.log

Extract from the `## Detailed failed tests. ##` section:

```
1. completion.at:21: testing appctl-bashcomp - basic verification ...
./completion.at:23: ovsdb-tool create conf.db $abs_top_srcdir/vswitchd/vswitch.ovsschema
./completion.at:23: ovsdb-server --detach --no-chdir --pidfile --log-file --remote=punix:$OVS_RUNDIR/db.sock
stderr:
2020-01-23T00:50:49Z|00001|vlog|INFO|opened log file /builddir/build/BUILD/openvswitch-2.12.0/tests/testsuite.dir/0001/ovsdb-server.log
./completion.at:23: sed < stderr '
/vlog|INFO|opened log file/d
/ovsdb_server|INFO|ovsdb-server (Open vSwitch)/d'
./completion.at:23: ovs-vsctl --no-wait init
./completion.at:23: ovs-vswitchd --enable-dummy --disable-system --disable-system-route --detach --no-chdir --pidfile --log-file -vvconn -vofproto_dpif -vunixctl
stderr:
2020-01-23T00:50:49Z|00001|vlog|INFO|opened log file /builddir/build/BUILD/openvswitch-2.12.0/tests/testsuite.dir/0001/ovs-vswitchd.log
2020-01-23T00:50:49Z|00002|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 0
2020-01-23T00:50:49Z|00003|ovs_numa|INFO|Discovered 16 CPU cores on NUMA node 1
2020-01-23T00:50:49Z|00004|ovs_numa|INFO|Discovered 2 NUMA nodes and 32 CPU cores
2020-01-23T00:50:49Z|00005|reconnect|INFO|unix:/builddir/build/BUILD/openvswitch-2.12.0/tests/testsuite.dir/0001/db.sock: connecting...
2020-01-23T00:50:49Z|00006|netlink_socket|INFO|netlink: could not enable listening to all nsid (Operation not permitted)
2020-01-23T00:50:49Z|00007|reconnect|INFO|unix:/builddir/build/BUILD/openvswitch-2.12.0/tests/testsuite.dir/0001/db.sock: connected
2020-01-23T00:50:49Z|00008|dpdk|INFO|DPDK Disabled - Use other_config:dpdk-init to enable
./completion.at:23: sed < stderr '
/ovs_numa|INFO|Discovered /d
/vlog|INFO|opened log file/d
/vswitchd|INFO|ovs-vswitchd (Open vSwitch)/d
/reconnect|INFO|/d
/dpif_netlink|INFO|Generic Netlink family .ovs_datapath. does not exist/d
/ofproto|INFO|using datapath ID/d
/netdev_linux|INFO|.*device has unknown hardware address family/d
/ofproto|INFO|datapath ID changed to fedcba9876543210/d
/dpdk|INFO|DPDK Disabled - Use other_config:dpdk-init to enable/d
/netlink_socket|INFO|netlink: could not enable listening to all nsid/d
/probe tc:/d
/tc: Using policy/d'
./completion.at:23: add_of_br 0
./completion.at:29: echo "${INPUT}" | sed -e '1,/Available/d' | tail -n+2
./completion.at:29: echo "${INPUT}" | sed -e '1,/Available/d' | tail -n+2
./completion.at:29: echo "${INPUT}" | sed -e '1,/Available/d' | tail -n+2
./completion.at:29: echo "${INPUT}" | sed -e '1,/Available/d' | tail -n+2
./completion.at:47: echo "${INPUT}" | sed -e '1,/Available/d' | tail -n+2
./completion.at:55: echo "${INPUT}" | sed -e '1,/Available/d' | tail -n+2
./completion.at:66: ovs-ofctl monitor br0 --detach --no-chdir --pidfile
./completion.at:67: echo "${INPUT}" | sed -e '1,/Available/d' | tail -n+2
./completion.at:67: echo "${INPUT}" | sed -e '1,/Available/d' | tail -n+2
./completion.at:67: echo "${INPUT}" | sed -e '1,/Available/d' | tail -n+2
./completion.at:80: test -e $OVS_RUNDIR/ovs-ofctl.pid
./completion.at:80: ovs-appctl --timeout=10 -t ovs-ofctl exit
completion.at:80: waiting while kill -0 $TMPPID 2>/dev/null...
completion.at:80: wait failed after 10 seconds
./ovs-macros.at:241: hard failure
1. completion.at:21: 1. appctl-bashcomp - basic verification (completion.at:21): FAILED (ovs-macros.at:241)
# -*- compilation -*-
```

All failures seem to follow the same pattern: a timeout while waiting for an OVS process to die after asking it to exit:

```
...
echo "${INPUT}" | sed -e '1,/Available/d' | tail -n+2
./completion.at:131: echo "$INPUT" | sed -e '/./,$!d'
./completion.at:135: check_logs
./completion.at:135: test -e $OVS_RUNDIR/ovs-vswitchd.pid
./completion.at:135: ovs-appctl --timeout=10 -t ovs-vswitchd exit --cleanup
completion.at:135: waiting while kill -0 $TMPPID 2>/dev/null...
completion.at:135: wait failed after 10 seconds
./ovs-macros.at:241: hard failure
...
echo "${INPUT}" | sed -e '1,/Available/d' | tail -n+2
./completion.at:165: echo "$INPUT" | sed -e '/./,$!d'
./completion.at:169: check_logs
./completion.at:169: test -e $OVS_RUNDIR/ovs-vswitchd.pid
./completion.at:169: ovs-appctl --timeout=10 -t ovs-vswitchd exit --cleanup
completion.at:169: waiting while kill -0 $TMPPID 2>/dev/null...
completion.at:169: wait failed after 10 seconds
./ovs-macros.at:241: hard failure
```

Note that these tests actually launch OVS processes (they are functional tests).
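For context, the "waiting while kill -0 ..." lines come from the test harness polling the daemon's PID after sending it an exit command, and giving up after a fixed timeout. A minimal sketch of that pattern is below; the function name and timing are illustrative, not the actual `ovs-macros.at` macro:

```shell
#!/bin/sh
# Sketch of the test-harness wait pattern: ask the daemon to exit
# (in OVS, via `ovs-appctl -t <daemon> exit`), then poll with
# `kill -0` until the PID disappears or a timeout expires.
# `wait_for_pid_gone` is a hypothetical helper, not an OVS macro.

wait_for_pid_gone() {
    pid=$1
    timeout=${2:-10}
    i=0
    # kill -0 sends no signal; it only checks that the process exists.
    while kill -0 "$pid" 2>/dev/null; do
        i=$((i + 1))
        if [ "$i" -ge "$timeout" ]; then
            echo "wait failed after $timeout seconds" >&2
            return 1
        fi
        sleep 1
    done
    return 0
}

# Example: a short-lived background process is reaped well within the timeout.
sleep 1 &
pid=$!
if wait_for_pid_gone "$pid" 5; then
    echo "process exited"
fi
```

In the failing builds, the polled process never went away, so the loop hit the 10-second limit and the harness reported a hard failure.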
Metadata Update from @arrfab:
- Issue assigned to alphacc
- Issue tagged with: cbs
Metadata Update from @arrfab: - Issue tagged with: groomed
Metadata Update from @arrfab: - Issue priority set to: Waiting on Reporter (was: Needs Review)
Just quickly triaging some items in the backlog, and curious whether you still see this issue and whether it needs to be prioritized somehow. Can you give some feedback, @amoralej? Thanks.
Metadata Update from @arrfab:
- Issue close_status updated to: Insufficient Data
- Issue status updated to: Closed (was: Open)
For the record, I've just verified that this issue is fixed and I can now run the openvswitch tests successfully in CBS. Thanks, and sorry for the delay.