make -C tools/testing/selftests TARGETS=net/forwarding TEST_PROGS=tc_actions.sh TEST_GEN_PROGS="" run_tests
make: Entering directory '/home/virtme/testing-4/tools/testing/selftests'
make[1]: Entering directory '/home/virtme/testing-4/tools/testing/selftests/net/forwarding'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/home/virtme/testing-4/tools/testing/selftests/net/forwarding'
make[1]: Entering directory '/home/virtme/testing-4/tools/testing/selftests/net/forwarding'
TAP version 13
1..1
# timeout set to 0
# selftests: net/forwarding: tc_actions.sh
[ 31.559473][ T327] GACT probability NOT on
# TEST: gact drop and ok (skip_hw)                                    [ OK ]
[ 35.722778][ T400] Mirror/redirect action on
# TEST: mirred egress flower redirect (skip_hw)                       [ OK ]
# TEST: mirred egress flower mirror (skip_hw)                         [ OK ]
# TEST: mirred egress matchall mirror (skip_hw)                       [ OK ]
[ 41.103083][ T486] tc (486) used greatest stack depth: 24184 bytes left
[ 41.495537][ T492] mausezahn (492) used greatest stack depth: 24136 bytes left
[ 44.207429][ T529] ping (529) used greatest stack depth: 23352 bytes left
# TEST: mirred_egress_to_ingress (skip_hw)                            [ OK ]
# [ 51.995801][ C3]
[ 51.995968][ C3] ============================================
[ 51.996306][ C3] WARNING: possible recursive locking detected
[ 51.996638][ C3] 6.8.0-rc1-virtme #1 Not tainted
[ 51.996930][ C3] --------------------------------------------
[ 51.997271][ C3] ksoftirqd/3/32 is trying to acquire lock:
[ 51.997588][ C3] ffff8880063b9b70 (slock-AF_INET/1){+.-.}-{2:2}, at: tcp_v4_rcv+0x2159/0x29b0
[ 51.998080][ C3]
[ 51.998080][ C3] but task is already holding lock:
[ 51.998474][ C3] ffff8880063b8e30 (slock-AF_INET/1){+.-.}-{2:2}, at: tcp_v4_rcv+0x2159/0x29b0
[ 51.998958][ C3]
[ 51.998958][ C3] other info that might help us debug this:
[ 51.999391][ C3]  Possible unsafe locking scenario:
[ 51.999391][ C3]
[ 51.999792][ C3]        CPU0
[ 51.999996][ C3]        ----
[ 52.000191][ C3]   lock(slock-AF_INET/1);
[ 52.000435][ C3]   lock(slock-AF_INET/1);
[ 52.000676][ C3]
[ 52.000676][ C3]  *** DEADLOCK ***
[ 52.000676][ C3]
[ 52.001119][ C3]  May be due to missing lock nesting notation
[ 52.001119][ C3]
[ 52.001568][ C3] 8 locks held by ksoftirqd/3/32:
[ 52.001840][ C3]  #0: ffffffffb4b447e0 (rcu_read_lock){....}-{1:2}, at: process_backlog+0x1ed/0x5e0
[ 52.002355][ C3]  #1: ffffffffb4b447e0 (rcu_read_lock){....}-{1:2}, at: ip_local_deliver_finish+0x1f5/0x450
[ 52.002904][ C3]  #2: ffff8880063b8e30 (slock-AF_INET/1){+.-.}-{2:2}, at: tcp_v4_rcv+0x2159/0x29b0
[ 52.003405][ C3]  #3: ffffffffb4b447e0 (rcu_read_lock){....}-{1:2}, at: __ip_queue_xmit+0x65/0x1910
[ 52.003921][ C3]  #4: ffffffffb4b447e0 (rcu_read_lock){....}-{1:2}, at: ip_finish_output2+0x262/0x18e0
[ 52.004444][ C3]  #5: ffffffffb4b44780 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x1c1/0x1ca0
[ 52.004992][ C3]  #6: ffffffffb4b447e0 (rcu_read_lock){....}-{1:2}, at: netif_receive_skb_internal+0x84/0x300
[ 52.005562][ C3]  #7: ffffffffb4b447e0 (rcu_read_lock){....}-{1:2}, at: ip_local_deliver_finish+0x1f5/0x450
[ 52.006109][ C3]
[ 52.006109][ C3] stack backtrace:
[ 52.006427][ C3] CPU: 3 PID: 32 Comm: ksoftirqd/3 Not tainted 6.8.0-rc1-virtme #1
[ 52.006842][ C3] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
[ 52.007511][ C3] Call Trace:
[ 52.007703][ C3]  <TASK>
[ 52.007873][ C3]  dump_stack_lvl+0x64/0xb0
[ 52.008148][ C3]  validate_chain+0x525/0xa00
[ 52.008409][ C3]  ? __pfx_validate_chain+0x10/0x10
[ 52.008707][ C3]  ? hlock_class+0x4e/0x130
[ 52.008956][ C3]  ? mark_lock+0x38/0x3e0
[ 52.009187][ C3]  __lock_acquire+0xb67/0x1610
[ 52.009461][ C3]  ? lock_downgrade+0x90/0x110
[ 52.009733][ C3]  ? mark_lock+0x38/0x3e0
[ 52.009972][ C3]  lock_acquire.part.0+0xe5/0x330
[ 52.010240][ C3]  ? tcp_v4_rcv+0x2159/0x29b0
[ 52.010506][ C3]  ? __pfx_lock_acquire.part.0+0x10/0x10
[ 52.010810][ C3]  ? __pfx_sk_filter_trim_cap+0x10/0x10
[ 52.011129][ C3]  ? lock_acquire+0x1c1/0x220
[ 52.011387][ C3]  ? tcp_v4_rcv+0x2159/0x29b0
[ 52.011643][ C3]  _raw_spin_lock_nested+0x33/0x80
[ 52.011936][ C3]  ? tcp_v4_rcv+0x2159/0x29b0
[ 52.012201][ C3]  tcp_v4_rcv+0x2159/0x29b0
[ 52.012450][ C3]  ? __pfx_tcp_v4_rcv+0x10/0x10
[ 52.012726][ C3]  ? __pfx_raw_v4_input+0x10/0x10
[ 52.013007][ C3]  ? __pfx_lock_acquire.part.0+0x10/0x10
[ 52.013308][ C3]  ip_protocol_deliver_rcu+0x93/0x360
[ 52.013615][ C3]  ip_local_deliver_finish+0x2ae/0x450
[ 52.013916][ C3]  ip_local_deliver+0x19d/0x480
[ 52.014192][ C3]  ? __pfx_ip_local_deliver+0x10/0x10
[ 52.014485][ C3]  ? ip_rcv_finish_core.constprop.0+0x522/0x1300
[ 52.014823][ C3]  ip_rcv+0x564/0x740
[ 52.015070][ C3]  ? __pfx_ip_rcv+0x10/0x10
[ 52.015329][ C3]  ? lock_acquire.part.0+0xe5/0x330
[ 52.015610][ C3]  ? netif_receive_skb_internal+0x84/0x300
[ 52.015922][ C3]  ? __pfx_ip_rcv+0x10/0x10
[ 52.016186][ C3]  __netif_receive_skb_one_core+0x166/0x1b0
[ 52.016506][ C3]  ? __pfx___netif_receive_skb_one_core+0x10/0x10
[ 52.016863][ C3]  ? mark_held_locks+0xa5/0xf0
[ 52.017131][ C3]  ? lock_acquire+0x1c1/0x220
[ 52.017397][ C3]  ? netif_receive_skb_internal+0x84/0x300
[ 52.017710][ C3]  netif_receive_skb_internal+0xb0/0x300
[ 52.018032][ C3]  ? __pfx_netif_receive_skb_internal+0x10/0x10
[ 52.018369][ C3]  ? __copy_skb_header+0xaf/0x490
[ 52.018654][ C3]  ? __skb_clone+0x57a/0x760
[ 52.018912][ C3]  netif_receive_skb+0x55/0x280
[ 52.019191][ C3]  tcf_mirred_to_dev+0x444/0xd70 [act_mirred]
[ 52.019531][ C3]  ? __pfx_tcf_skbedit_act+0x10/0x10 [act_skbedit]
[ 52.019891][ C3]  tcf_mirred_act+0x338/0x780 [act_mirred]
[ 52.020212][ C3]  tcf_action_exec.part.0+0x115/0x3d0
[ 52.020504][ C3]  fl_classify+0x4dc/0x650 [cls_flower]
[ 52.020826][ C3]  ? __pfx_fl_classify+0x10/0x10 [cls_flower]
[ 52.021162][ C3]  ? __pfx_usage_match+0x10/0x10
[ 52.021431][ C3]  ? check_irq_usage+0x27e/0x850
[ 52.021698][ C3]  ? __pfx_check_irq_usage+0x10/0x10
[ 52.022000][ C3]  ? __bfs+0x24a/0x650
[ 52.022233][ C3]  ? __pfx_hlock_conflict+0x10/0x10
[ 52.022517][ C3]  ? __bfs+0x24a/0x650
[ 52.022739][ C3]  ? __pfx_usage_match+0x10/0x10
[ 52.023025][ C3]  ? check_path.constprop.0+0x24/0x50
[ 52.023328][ C3]  ? check_noncircular+0x14e/0x3e0
[ 52.023609][ C3]  ? __pfx_check_noncircular+0x10/0x10
[ 52.023907][ C3]  __tcf_classify+0x32c/0x7d0
[ 52.024183][ C3]  tcf_classify+0x283/0x930
[ 52.024440][ C3]  ? __pfx_tcf_classify+0x10/0x10
[ 52.024710][ C3]  ? lock_acquire.part.0+0xe5/0x330
[ 52.024994][ C3]  ? __dev_queue_xmit+0x1c1/0x1ca0
[ 52.025282][ C3]  tc_run+0x2e4/0x5d0
[ 52.025502][ C3]  ? __pfx_tc_run+0x10/0x10
[ 52.025767][ C3]  ? lock_acquire+0x1c1/0x220
[ 52.026028][ C3]  ? __dev_queue_xmit+0x1c1/0x1ca0
[ 52.026311][ C3]  __dev_queue_xmit+0x8eb/0x1ca0
[ 52.026574][ C3]  ? hlock_class+0x4e/0x130
[ 52.026836][ C3]  ? mark_lock+0x38/0x3e0
[ 52.027085][ C3]  ? __pfx___dev_queue_xmit+0x10/0x10
[ 52.027383][ C3]  ? lockdep_hardirqs_on_prepare.part.0+0x151/0x370
[ 52.027745][ C3]  ? neigh_hh_output+0x348/0x590
[ 52.028023][ C3]  ip_finish_output2+0x786/0x18e0
[ 52.028304][ C3]  ? __pfx_ip_finish_output2+0x10/0x10
[ 52.028600][ C3]  ? __ip_finish_output+0x3dd/0x770
[ 52.028891][ C3]  ip_output+0x16b/0x4f0
[ 52.029127][ C3]  ? __pfx_ip_output+0x10/0x10
[ 52.029405][ C3]  ? __pfx_lock_acquire.part.0+0x10/0x10
[ 52.029716][ C3]  ? ip_local_out+0x114/0x3b0
[ 52.029990][ C3]  __ip_queue_xmit+0x672/0x1910
[ 52.030254][ C3]  ? __skb_clone+0x57a/0x760
[ 52.030497][ C3]  __tcp_transmit_skb+0x22b1/0x2d20
[ 52.030800][ C3]  ? __pfx___tcp_transmit_skb+0x10/0x10
[ 52.031106][ C3]  ? tcp_small_queue_check.isra.0+0xe9/0x380
[ 52.031427][ C3]  tcp_write_xmit+0xe42/0x24c0
[ 52.031682][ C3]  ? ipv4_mtu+0x37/0x360
[ 52.031932][ C3]  ? __pfx_tcp_write_xmit+0x10/0x10
[ 52.032226][ C3]  ? __pfx_tcp_current_mss+0x10/0x10
[ 52.032520][ C3]  __tcp_push_pending_frames+0x96/0x320
[ 52.032826][ C3]  tcp_rcv_state_process+0x81e/0x1fd0
[ 52.033115][ C3]  ? tcp_v4_rcv+0x2159/0x29b0
[ 52.033363][ C3]  ? hlock_class+0x4e/0x130
[ 52.033625][ C3]  ? __lock_acquired+0x18a/0x6b0
[ 52.033894][ C3]  ? __pfx_tcp_rcv_state_process+0x10/0x10
[ 52.034201][ C3]  ? __pfx___lock_acquired+0x10/0x10
[ 52.034503][ C3]  ? __pfx_do_raw_spin_trylock+0x10/0x10
[ 52.034818][ C3]  tcp_v4_do_rcv+0x154/0x850
[ 52.035073][ C3]  tcp_v4_rcv+0x235a/0x29b0
[ 52.035322][ C3]  ? __pfx_tcp_v4_rcv+0x10/0x10
[ 52.035599][ C3]  ? __pfx_raw_v4_input+0x10/0x10
[ 52.035879][ C3]  ? __pfx_lock_acquire.part.0+0x10/0x10
[ 52.036185][ C3]  ip_protocol_deliver_rcu+0x93/0x360
[ 52.036487][ C3]  ip_local_deliver_finish+0x2ae/0x450
[ 52.036785][ C3]  ip_local_deliver+0x19d/0x480
[ 52.037044][ C3]  ? __pfx_ip_local_deliver+0x10/0x10
[ 52.037351][ C3]  ? ip_rcv_finish_core.constprop.0+0x522/0x1300
[ 52.037696][ C3]  ip_rcv+0x564/0x740
[ 52.037927][ C3]  ? __pfx_ip_rcv+0x10/0x10
[ 52.038178][ C3]  ? lock_acquire.part.0+0xe5/0x330
[ 52.038456][ C3]  ? process_backlog+0x1ed/0x5e0
[ 52.038748][ C3]  ? __pfx_ip_rcv+0x10/0x10
[ 52.039000][ C3]  __netif_receive_skb_one_core+0x166/0x1b0
[ 52.039317][ C3]  ? __pfx___netif_receive_skb_one_core+0x10/0x10
[ 52.039654][ C3]  ? __pfx_do_raw_spin_trylock+0x10/0x10
[ 52.039972][ C3]  ? lock_acquire+0x1c1/0x220
[ 52.040236][ C3]  ? process_backlog+0x1ed/0x5e0
[ 52.040506][ C3]  process_backlog+0xd3/0x5e0
[ 52.040760][ C3]  __napi_poll.constprop.0+0xa5/0x450
[ 52.041066][ C3]  net_rx_action+0x440/0xb40
[ 52.041325][ C3]  ? lockdep_hardirqs_on_prepare.part.0+0x151/0x370
[ 52.041685][ C3]  ? __pfx_net_rx_action+0x10/0x10
[ 52.041976][ C3]  __do_softirq+0x1bc/0x7ff
[ 52.042227][ C3]  ? __pfx_run_ksoftirqd+0x10/0x10
[ 52.042523][ C3]  run_ksoftirqd+0x2e/0x60
[ 52.042773][ C3]  smpboot_thread_fn+0x30e/0x840
[ 52.043046][ C3]  ? __pfx_smpboot_thread_fn+0x10/0x10
[ 52.043356][ C3]  ? __pfx_smpboot_thread_fn+0x10/0x10
[ 52.043660][ C3]  kthread+0x292/0x360
[ 52.043887][ C3]  ? __pfx_kthread+0x10/0x10
[ 52.044150][ C3]  ret_from_fork+0x34/0x70
[ 52.044395][ C3]  ? __pfx_kthread+0x10/0x10
[ 52.044640][ C3]  ret_from_fork_asm+0x1b/0x30
[ 52.044937][ C3]  </TASK>
[ 70.615283][ T578] ncat (578) used greatest stack depth: 21168 bytes left
TEST: mirred_egress_to_ingress_tcp (skip_hw)                          [ OK ]
# INFO: Could not test offloaded functionality
ok 1 selftests: net/forwarding: tc_actions.sh
make[1]: Leaving directory '/home/virtme/testing-4/tools/testing/selftests/net/forwarding'
make: Leaving directory '/home/virtme/testing-4/tools/testing/selftests'
xx__-> echo $?
0
xx__->