selftests/bpf: fix array_size.cocci warning #120
Conversation
Master branch: e878ae2
Master branch: 44e9a74
Force-pushed from 3467178 to d6ef19e
At least one diff in series https://patchwork.kernel.org/project/netdevbpf/list/?series=621395 expired. Closing PR.
Master branch: 3399dd9
Force-pushed from d6ef19e to b9d49fc
Master branch: de55c9a
Force-pushed from b9d49fc to 0bd96dd
Master branch: eecbfd9
Force-pushed from 0bd96dd to 29d4d59
Master branch: 743bec1
Force-pushed from 29d4d59 to 6f189a1
Master branch: 5861701
Fix the array_size.cocci warning in tools/testing/selftests/bpf/. Use `ARRAY_SIZE(arr)` in bpf_util.h instead of forms like `sizeof(arr)/sizeof(arr[0])`.
Signed-off-by: Guo Zhengkui <[email protected]>
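To illustrate the pattern being replaced, here is a minimal standalone sketch (not the selftests source; the array name and the local ARRAY_SIZE definition are assumptions, since bpf_util.h carries its own definition):

    /* Minimal sketch of the form array_size.cocci prefers; ARRAY_SIZE here is a
     * local stand-in for the macro provided by bpf_util.h. */
    #include <stdio.h>

    #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

    int main(void)
    {
            int prog_fds[8];   /* hypothetical array */

            /* before: sizeof(prog_fds) / sizeof(prog_fds[0]) open-coded at each use */
            /* after:  ARRAY_SIZE(prog_fds), as suggested by the cocci warning       */
            printf("%zu entries\n", ARRAY_SIZE(prog_fds));
            return 0;
    }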
Force-pushed from 6f189a1 to 85a50fd
At least one diff in series https://patchwork.kernel.org/project/netdevbpf/list/?series=621741 expired. Closing PR.
BPF STX/LDX instructions use an offset relative to the FP to address
stack space. Since BPF_FP sits at the top of the frame, the offset is
usually negative. However, the arm64 str/ldr immediate form requires the
offset to be positive. Therefore, this patch converts the offsets.
The method is to first find the negative offset furthest from the FP, add
it to the FP to compute a bottom position, called FPB, and then adjust the
offsets of the other STX/LDX instructions so that they are relative to FPB.
FPB is kept in the arm64 callee-saved register x27, which is not otherwise
used.
Before adjusting any offset, the patch checks every instruction to ensure
that the FP does not change at run time. If the FP may change, no offsets
are adjusted.
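To make the rebasing concrete, here is a small standalone sketch of the arithmetic described above (hypothetical offsets, not the JIT implementation): every access at FP+off becomes FPB+(off - min_off), where min_off is the furthest negative offset.

    #include <stdio.h>

    int main(void)
    {
            /* hypothetical FP-relative offsets, like those in the listing below */
            int off[] = { -136, -72, -8 };
            int n = sizeof(off) / sizeof(off[0]);
            int min_off = 0;

            for (int i = 0; i < n; i++)
                    if (off[i] < min_off)
                            min_off = off[i];

            /* FPB = FP + min_off; each [FP + off] becomes [FPB + (off - min_off)] */
            for (int i = 0; i < n; i++)
                    printf("FP%d -> FPB+%d\n", off[i], off[i] - min_off);
            return 0;
    }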
For example, for the following bpftrace command:
bpftrace -e 'kprobe:do_sys_open { printf("opening: %s\n", str(arg1)); }'
Without this patch, the JITed code (fragment) is:
0: bti c
4: stp x29, x30, [sp, #-16]!
8: mov x29, sp
c: stp x19, x20, [sp, #-16]!
10: stp x21, x22, [sp, #-16]!
14: stp x25, x26, [sp, #-16]!
18: mov x25, sp
1c: mov x26, #0x0 // #0
20: bti j
24: sub sp, sp, #0x90
28: add x19, x0, #0x0
2c: mov x0, #0x0 // #0
30: mov x10, #0xffffffffffffff78 // #-136
34: str x0, [x25, x10]
38: mov x10, #0xffffffffffffff80 // #-128
3c: str x0, [x25, x10]
40: mov x10, #0xffffffffffffff88 // #-120
44: str x0, [x25, x10]
48: mov x10, #0xffffffffffffff90 // #-112
4c: str x0, [x25, x10]
50: mov x10, #0xffffffffffffff98 // #-104
54: str x0, [x25, x10]
58: mov x10, #0xffffffffffffffa0 // #-96
5c: str x0, [x25, x10]
60: mov x10, #0xffffffffffffffa8 // #-88
64: str x0, [x25, x10]
68: mov x10, #0xffffffffffffffb0 // #-80
6c: str x0, [x25, x10]
70: mov x10, #0xffffffffffffffb8 // #-72
74: str x0, [x25, x10]
78: mov x10, #0xffffffffffffffc0 // #-64
7c: str x0, [x25, x10]
80: mov x10, #0xffffffffffffffc8 // #-56
84: str x0, [x25, x10]
88: mov x10, #0xffffffffffffffd0 // #-48
8c: str x0, [x25, x10]
90: mov x10, #0xffffffffffffffd8 // #-40
94: str x0, [x25, x10]
98: mov x10, #0xffffffffffffffe0 // #-32
9c: str x0, [x25, x10]
a0: mov x10, #0xffffffffffffffe8 // #-24
a4: str x0, [x25, x10]
a8: mov x10, #0xfffffffffffffff0 // #-16
ac: str x0, [x25, x10]
b0: mov x10, #0xfffffffffffffff8 // #-8
b4: str x0, [x25, x10]
b8: mov x10, #0x8 // #8
bc: ldr x2, [x19, x10]
[...]
With this patch, the JITed code (fragment) is:
0: bti c
4: stp x29, x30, [sp, #-16]!
8: mov x29, sp
c: stp x19, x20, [sp, #-16]!
10: stp x21, x22, [sp, #-16]!
14: stp x25, x26, [sp, #-16]!
18: stp x27, x28, [sp, #-16]!
1c: mov x25, sp
20: sub x27, x25, #0x88
24: mov x26, #0x0 // #0
28: bti j
2c: sub sp, sp, #0x90
30: add x19, x0, #0x0
34: mov x0, #0x0 // #0
38: str x0, [x27]
3c: str x0, [x27, #8]
40: str x0, [x27, #16]
44: str x0, [x27, #24]
48: str x0, [x27, #32]
4c: str x0, [x27, #40]
50: str x0, [x27, #48]
54: str x0, [x27, #56]
58: str x0, [x27, #64]
5c: str x0, [x27, #72]
60: str x0, [x27, #80]
64: str x0, [x27, #88]
68: str x0, [x27, #96]
6c: str x0, [x27, #104]
70: str x0, [x27, #112]
74: str x0, [x27, #120]
78: str x0, [x27, #128]
7c: ldr x2, [x19, #8]
[...]
Signed-off-by: Xu Kuohai <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]
The inline assembly for arm64's cmpxchg_double*() implementations uses a +Q constraint to hazard against other accesses to the memory location being exchanged. However, the pointer passed to the constraint is a pointer to unsigned long, and thus the hazard only applies to the first 8 bytes of the location. GCC can take advantage of this, assuming that other portions of the location are unchanged, leading to a number of potential problems.
This is similar to what we fixed back in commit:
fee960b ("arm64: xchg: hazard against entire exchange variable")
... but we forgot to adjust cmpxchg_double*() similarly at the same time.
The same problem applies, as demonstrated with the following test:
| struct big {
|         u64 lo, hi;
| } __aligned(128);
|
| unsigned long foo(struct big *b)
| {
|         u64 hi_old, hi_new;
|
|         hi_old = b->hi;
|         cmpxchg_double_local(&b->lo, &b->hi, 0x12, 0x34, 0x56, 0x78);
|         hi_new = b->hi;
|
|         return hi_old ^ hi_new;
| }
... which GCC 12.1.0 compiles as:
| 0000000000000000 <foo>:
|    0:   d503233f   paciasp
|    4:   aa0003e4   mov     x4, x0
|    8:   1400000e   b       40 <foo+0x40>
|    c:   d2800240   mov     x0, #0x12      // #18
|   10:   d2800681   mov     x1, #0x34      // #52
|   14:   aa0003e5   mov     x5, x0
|   18:   aa0103e6   mov     x6, x1
|   1c:   d2800ac2   mov     x2, #0x56      // #86
|   20:   d2800f03   mov     x3, #0x78      // #120
|   24:   48207c82   casp    x0, x1, x2, x3, [x4]
|   28:   ca050000   eor     x0, x0, x5
|   2c:   ca060021   eor     x1, x1, x6
|   30:   aa010000   orr     x0, x0, x1
|   34:   d2800000   mov     x0, #0x0       // #0 <--- BANG
|   38:   d50323bf   autiasp
|   3c:   d65f03c0   ret
|   40:   d2800240   mov     x0, #0x12      // #18
|   44:   d2800681   mov     x1, #0x34      // #52
|   48:   d2800ac2   mov     x2, #0x56      // #86
|   4c:   d2800f03   mov     x3, #0x78      // #120
|   50:   f9800091   prfm    pstl1strm, [x4]
|   54:   c87f1885   ldxp    x5, x6, [x4]
|   58:   ca0000a5   eor     x5, x5, x0
|   5c:   ca0100c6   eor     x6, x6, x1
|   60:   aa0600a6   orr     x6, x5, x6
|   64:   b5000066   cbnz    x6, 70 <foo+0x70>
|   68:   c8250c82   stxp    w5, x2, x3, [x4]
|   6c:   35ffff45   cbnz    w5, 54 <foo+0x54>
|   70:   d2800000   mov     x0, #0x0       // #0 <--- BANG
|   74:   d50323bf   autiasp
|   78:   d65f03c0   ret
Notice that at the lines with "BANG" comments, GCC has assumed that the higher 8 bytes are unchanged by the cmpxchg_double() call, and that `hi_old ^ hi_new` can be reduced to a constant zero, for both LSE and LL/SC versions of cmpxchg_double().
This patch fixes the issue by passing a pointer to __uint128_t into the +Q constraint, ensuring that the compiler hazards against the entire 16 bytes being modified.
With this change, GCC 12.1.0 compiles the above test as:
| 0000000000000000 <foo>:
|    0:   f9400407   ldr     x7, [x0, #8]
|    4:   d503233f   paciasp
|    8:   aa0003e4   mov     x4, x0
|    c:   1400000f   b       48 <foo+0x48>
|   10:   d2800240   mov     x0, #0x12      // #18
|   14:   d2800681   mov     x1, #0x34      // #52
|   18:   aa0003e5   mov     x5, x0
|   1c:   aa0103e6   mov     x6, x1
|   20:   d2800ac2   mov     x2, #0x56      // #86
|   24:   d2800f03   mov     x3, #0x78      // #120
|   28:   48207c82   casp    x0, x1, x2, x3, [x4]
|   2c:   ca050000   eor     x0, x0, x5
|   30:   ca060021   eor     x1, x1, x6
|   34:   aa010000   orr     x0, x0, x1
|   38:   f9400480   ldr     x0, [x4, #8]
|   3c:   d50323bf   autiasp
|   40:   ca0000e0   eor     x0, x7, x0
|   44:   d65f03c0   ret
|   48:   d2800240   mov     x0, #0x12      // #18
|   4c:   d2800681   mov     x1, #0x34      // #52
|   50:   d2800ac2   mov     x2, #0x56      // #86
|   54:   d2800f03   mov     x3, #0x78      // #120
|   58:   f9800091   prfm    pstl1strm, [x4]
|   5c:   c87f1885   ldxp    x5, x6, [x4]
|   60:   ca0000a5   eor     x5, x5, x0
|   64:   ca0100c6   eor     x6, x6, x1
|   68:   aa0600a6   orr     x6, x5, x6
|   6c:   b5000066   cbnz    x6, 78 <foo+0x78>
|   70:   c8250c82   stxp    w5, x2, x3, [x4]
|   74:   35ffff45   cbnz    w5, 5c <foo+0x5c>
|   78:   f9400480   ldr     x0, [x4, #8]
|   7c:   d50323bf   autiasp
|   80:   ca0000e0   eor     x0, x7, x0
|   84:   d65f03c0   ret
... sampling the high 8 bytes before and after the cmpxchg, and performing an EOR, as we'd expect.
For backporting, I've tested this atop linux-4.9.y with GCC 5.5.0. Note that linux-4.9.y is the oldest currently supported stable release, and mandates GCC 5.1+. Unfortunately I couldn't get a GCC 5.1 binary to run on my machines due to library incompatibilities.
I've also used a standalone test to check that we can use a __uint128_t pointer in a +Q constraint at least as far back as GCC 4.8.5 and LLVM 3.9.1.
Fixes: 5284e1b ("arm64: xchg: Implement cmpxchg_double")
Fixes: e9a4b79 ("arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU")
Reported-by: Boqun Feng <[email protected]>
Link: https://lore.kernel.org/lkml/Y6DEfQXymYVgL3oJ@boqun-archlinux/
Reported-by: Peter Zijlstra <[email protected]>
Link: https://lore.kernel.org/lkml/[email protected]/
Signed-off-by: Mark Rutland <[email protected]>
Cc: [email protected]
Cc: Arnd Bergmann <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Steve Capper <[email protected]>
Cc: Will Deacon <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
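As a rough standalone illustration for aarch64 (an assumption-laden sketch, not the kernel's cmpxchg_double*() macros; the struct and helper names are made up), the difference is whether the "+Q" memory operand the compiler sees covers 8 bytes or the full 16:

    #include <stdint.h>

    struct pair { uint64_t lo, hi; } __attribute__((aligned(16)));

    static inline void hazard_first8(struct pair *p)
    {
            /* only the first 8 bytes are visible to the compiler as modified */
            asm volatile("" : "+Q" (*(unsigned long *)p));
    }

    static inline void hazard_all16(struct pair *p)
    {
            /* the full 16 bytes are covered, as in the fix described above */
            asm volatile("" : "+Q" (*(__uint128_t *)p));
    }

    int main(void)
    {
            struct pair p = { 1, 2 };

            hazard_first8(&p);
            hazard_all16(&p);
            return 0;
    }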
Like commit 1cf3bfc ("bpf: Support 64-bit pointers to kfuncs") for s390x, add support for 64-bit pointers to kfuncs for LoongArch. Since the infrastructure is already implemented in BPF core, the only thing that needs to be done is to override bpf_jit_supports_far_kfunc_call().
Before this change, several test_verifier tests failed:
# ./test_verifier | grep # | grep FAIL
#119/p calls: invalid kfunc call: ptr_to_mem to struct with non-scalar FAIL
#120/p calls: invalid kfunc call: ptr_to_mem to struct with nesting depth > 4 FAIL
#121/p calls: invalid kfunc call: ptr_to_mem to struct with FAM FAIL
#122/p calls: invalid kfunc call: reg->type != PTR_TO_CTX FAIL
#123/p calls: invalid kfunc call: void * not allowed in func proto without mem size arg FAIL
#124/p calls: trigger reg2btf_ids[reg->type] for reg->type > __BPF_REG_TYPE_MAX FAIL
#125/p calls: invalid kfunc call: reg->off must be zero when passed to release kfunc FAIL
#126/p calls: invalid kfunc call: don't match first member type when passed to release kfunc FAIL
#127/p calls: invalid kfunc call: PTR_TO_BTF_ID with negative offset FAIL
#128/p calls: invalid kfunc call: PTR_TO_BTF_ID with variable offset FAIL
#129/p calls: invalid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID FAIL
#130/p calls: valid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID FAIL
#486/p map_kptr: ref: reference state created and released on xchg FAIL
This is because the kfuncs in the loaded module are far away from __bpf_call_base:
ffff800002009440 t bpf_kfunc_call_test_fail1 [bpf_testmod]
9000000002e128d8 T __bpf_call_base
The offset relative to __bpf_call_base does NOT fit in s32, which breaks the assumption in BPF core. Enabling bpf_jit_supports_far_kfunc_call() lifts this limit.
Note that to reproduce the above result, tools/testing/selftests/bpf/config should be applied, and the test should be run with the JIT enabled and unprivileged BPF enabled.
With this change, the test_verifier tests now all pass:
# ./test_verifier
...
Summary: 777 PASSED, 0 SKIPPED, 0 FAILED
Tested-by: Tiezhu Yang <[email protected]>
Signed-off-by: Hengqi Chen <[email protected]>
Signed-off-by: Huacai Chen <[email protected]>
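To make the "does NOT fit in s32" point concrete, here is a small standalone check using the two addresses quoted above (an illustration only, not part of the patch):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* addresses taken from the commit message above */
            uint64_t kfunc = 0xffff800002009440ULL; /* bpf_kfunc_call_test_fail1 */
            uint64_t base  = 0x9000000002e128d8ULL; /* __bpf_call_base */
            int64_t off = (int64_t)(kfunc - base);

            /* BPF core encodes the kfunc target as a 32-bit offset from
             * __bpf_call_base unless far kfunc calls are supported; this
             * offset clearly overflows s32 */
            printf("offset = %lld, fits in s32: %s\n", (long long)off,
                   off == (int32_t)off ? "yes" : "no");
            return 0;
    }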
Add 3 test cases for skb dynptr used in tp_btf:
- test_dynptr_skb_tp_btf: use skb dynptr in tp_btf and make sure it is read-only.
- skb_invalid_ctx_fentry/skb_invalid_ctx_fexit: bpf_dynptr_from_skb should fail in fentry/fexit.
In test_dynptr_skb_tp_btf, to trigger the tracepoint in kfree_skb, test_pkt_access is used for its test_run, as in kfree_skb.c. Because the test process is different from others, a new setup type is defined, i.e., SETUP_SKB_PROG_TP.
The result is like:
$ ./test_progs -t 'dynptr/test_dynptr_skb_tp_btf'
#77/14 dynptr/test_dynptr_skb_tp_btf:OK
#77 dynptr:OK
#120 kfunc_dynptr_param:OK
Summary: 2/1 PASSED, 0 SKIPPED, 0 FAILED
$ ./test_progs -t 'dynptr/skb_invalid_ctx_f'
#77/83 dynptr/skb_invalid_ctx_fentry:OK
#77/84 dynptr/skb_invalid_ctx_fexit:OK
#77 dynptr:OK
#120 kfunc_dynptr_param:OK
Summary: 2/2 PASSED, 0 SKIPPED, 0 FAILED
Also fix two coding style nits (change spaces to tabs).
Signed-off-by: Philo Lu <[email protected]>
…nter dereference
There is a critical race condition in kprobe initialization that can lead to NULL pointer dereference and kernel crash.
[1135630.084782] Unable to handle kernel paging request at virtual address 0000710a04630000
...
[1135630.260314] pstate: 404003c9 (nZcv DAIF +PAN -UAO)
[1135630.269239] pc : kprobe_perf_func+0x30/0x260
[1135630.277643] lr : kprobe_dispatcher+0x44/0x60
[1135630.286041] sp : ffffaeff4977fa40
[1135630.293441] x29: ffffaeff4977fa40 x28: ffffaf015340e400
[1135630.302837] x27: 0000000000000000 x26: 0000000000000000
[1135630.312257] x25: ffffaf029ed108a8 x24: ffffaf015340e528
[1135630.321705] x23: ffffaeff4977fc50 x22: ffffaeff4977fc50
[1135630.331154] x21: 0000000000000000 x20: ffffaeff4977fc50
[1135630.340586] x19: ffffaf015340e400 x18: 0000000000000000
[1135630.349985] x17: 0000000000000000 x16: 0000000000000000
[1135630.359285] x15: 0000000000000000 x14: 0000000000000000
[1135630.368445] x13: 0000000000000000 x12: 0000000000000000
[1135630.377473] x11: 0000000000000000 x10: 0000000000000000
[1135630.386411] x9 : 0000000000000000 x8 : 0000000000000000
[1135630.395252] x7 : 0000000000000000 x6 : 0000000000000000
[1135630.403963] x5 : 0000000000000000 x4 : 0000000000000000
[1135630.412545] x3 : 0000710a04630000 x2 : 0000000000000006
[1135630.421021] x1 : ffffaeff4977fc50 x0 : 0000710a04630000
[1135630.429410] Call trace:
[1135630.434828]  kprobe_perf_func+0x30/0x260
[1135630.441661]  kprobe_dispatcher+0x44/0x60
[1135630.448396]  aggr_pre_handler+0x70/0xc8
[1135630.454959]  kprobe_breakpoint_handler+0x140/0x1e0
[1135630.462435]  brk_handler+0xbc/0xd8
[1135630.468437]  do_debug_exception+0x84/0x138
[1135630.475074]  el1_dbg+0x18/0x8c
[1135630.480582]  security_file_permission+0x0/0xd0
[1135630.487426]  vfs_write+0x70/0x1c0
[1135630.493059]  ksys_write+0x5c/0xc8
[1135630.498638]  __arm64_sys_write+0x24/0x30
[1135630.504821]  el0_svc_common+0x78/0x130
[1135630.510838]  el0_svc_handler+0x38/0x78
[1135630.516834]  el0_svc+0x8/0x1b0
kernel/trace/trace_kprobe.c: 1308
0xffff3df8995039ec <kprobe_perf_func+0x2c>: ldr x21, [x24,#120]
include/linux/compiler.h: 294
0xffff3df8995039f0 <kprobe_perf_func+0x30>: ldr x1, [x21,x0]
kernel/trace/trace_kprobe.c
1308: head = this_cpu_ptr(call->perf_events);
1309: if (hlist_empty(head))
1310:     return 0;
crash> struct trace_event_call -o
struct trace_event_call {
  ...
  [120] struct hlist_head *perf_events; //(call->perf_event)
  ...
}
crash> struct trace_event_call ffffaf015340e528
struct trace_event_call {
  ...
  perf_events = 0xffff0ad5fa89f088, //this value is correct, but x21 = 0
  ...
}
Race Condition Analysis:
The race occurs between kprobe activation and perf_events initialization:
  CPU0                                    CPU1
  ====                                    ====
  perf_kprobe_init
    perf_trace_event_init
      tp_event->perf_events = list;(1)
      tp_event->class->reg (2)← KPROBE ACTIVE
                                          Debug exception triggers
                                          ...
                                          kprobe_dispatcher
                                            kprobe_perf_func (tk->tp.flags & TP_FLAG_PROFILE)
                                              head = this_cpu_ptr(call->perf_events)(3)
                                              (perf_events is still NULL)
Problem:
1. CPU0 executes (1) assigning tp_event->perf_events = list
2. CPU0 executes (2) enabling kprobe functionality via class->reg()
3. CPU1 triggers and reaches kprobe_dispatcher
4. CPU1 checks TP_FLAG_PROFILE - condition passes (step 2 completed)
5. CPU1 calls kprobe_perf_func() and crashes at (3) because call->perf_events is still NULL
CPU1 sees that kprobe functionality is enabled but does not see that perf_events has been assigned.
Add pairing read and write memory barriers to guarantee that if CPU1 sees that kprobe functionality is enabled, it must also see that perf_events has been assigned.
Link: https://lore.kernel.org/all/[email protected]/
Fixes: 50d7805 ("tracing/kprobes: Add probe handler dispatcher to support perf and ftrace concurrent use")
Cc: [email protected]
Signed-off-by: Yuan Chen <[email protected]>
Signed-off-by: Masami Hiramatsu (Google) <[email protected]>
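A userspace analogue of the publish/consume pairing described above (a hedged sketch using C11 fences, not the kernel patch itself; in kernel code the pairing would be the kernel's read/write barriers, and the names below are illustrative):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static int *perf_events;            /* data published by the writer (CPU0) */
    static atomic_int profile_enabled;  /* stands in for TP_FLAG_PROFILE       */

    static void *writer(void *arg)
    {
            static int list = 42;
            perf_events = &list;                        /* (1) publish the data  */
            atomic_thread_fence(memory_order_release);  /* write barrier         */
            atomic_store_explicit(&profile_enabled, 1,  /* (2) enable the probe  */
                                  memory_order_relaxed);
            return NULL;
    }

    static void *reader(void *arg)
    {
            while (!atomic_load_explicit(&profile_enabled, memory_order_relaxed))
                    ;                                   /* wait for the flag     */
            atomic_thread_fence(memory_order_acquire);  /* read barrier, pairs
                                                           with the fence above  */
            printf("perf_events = %d\n", *perf_events); /* (3) safe to dereference */
            return NULL;
    }

    int main(void)
    {
            pthread_t w, r;

            pthread_create(&r, NULL, reader, NULL);
            pthread_create(&w, NULL, writer, NULL);
            pthread_join(w, NULL);
            pthread_join(r, NULL);
            return 0;
    }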
Pull request for series with
subject: selftests/bpf: fix array_size.cocci warning
version: 1
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=621395