93 Commits

Author SHA1 Message Date
ef128f3752 Merge pull request #79 from pythonbpf/dependabot/github_actions/actions-c2e7f7cad0
Bump the actions group with 2 updates
2025-12-25 17:40:36 +05:30
b92208ed0d Bump the actions group with 2 updates
Bumps the actions group with 2 updates: [actions/upload-artifact](https://github.com/actions/upload-artifact) and [actions/download-artifact](https://github.com/actions/download-artifact).


Updates `actions/upload-artifact` from 5 to 6
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v5...v6)

Updates `actions/download-artifact` from 6 to 7
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v6...v7)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
- dependency-name: actions/download-artifact
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-15 10:24:00 +00:00
2bd8e73724 Add symbolization example 2025-12-09 00:36:50 +05:30
641f8bacbe Merge pull request #78 from pythonbpf/kfunc
move examples to the correct directory
2025-12-08 21:41:52 +05:30
749b06020d move examples to examples folder 2025-12-08 21:39:50 +05:30
0ce5add39b compact the disksnoop.py example
Signed-off-by: varun-r-mallya <varunrmallya@gmail.com>
2025-12-08 16:23:44 +05:30
d0e2360f46 Add anomaly-detection example 2025-12-02 04:01:41 +05:30
049ec55e85 bump version
Signed-off-by: varun-r-mallya <varunrmallya@gmail.com>
2025-11-30 05:55:22 +05:30
77901accf2 fix the C form of the xdp test
Signed-off-by: varun-r-mallya <varunrmallya@gmail.com>
2025-11-30 05:53:32 +05:30
0616a2fccb Add requirements.txt for BCC-Examples 2025-11-30 05:39:58 +05:30
526425a267 Add command to copy vmlinux.py for container-monitor 2025-11-30 05:37:15 +05:30
466ecdb6a4 Update Python package installation in README 2025-11-30 05:36:16 +05:30
752a10fa5f Add installation instructions to README
Added installation instructions and dependencies section to README.
2025-11-30 05:35:50 +05:30
3602b502f4 Add steps to run container-example in BCC-Examples/README.md 2025-11-30 05:34:08 +05:30
808db2722d Add instructions to run examples in README 2025-11-30 05:27:12 +05:30
99fc5d75cc Refactor disksnoop.py to remove logging and main block
Removed logging setup and main execution block from disksnoop.py.
2025-11-30 04:53:53 +05:30
c91e69e2f7 Add bpftool to installation dependencies 2025-11-30 04:35:15 +05:30
dc995a1448 Fix variable initialization in BPF tracepoint example 2025-11-30 04:31:39 +05:30
0fd6bea211 Fix return value in README example 2025-11-30 04:29:31 +05:30
01d234ac86 Merge pull request #77 from pythonbpf/fix-vmlinux-ir-gen
Add a web dashboard to container monitor
2025-11-28 22:12:47 +05:30
c97efb2570 change web version 2025-11-28 22:11:41 +05:30
76c982e15e Add a web dashboard 2025-11-28 21:30:41 +05:30
2543826e85 Merge pull request #75 from pythonbpf/fix-vmlinux-ir-gen
add container-monitor example
2025-11-28 21:12:45 +05:30
650744f843 beautify container monitor 2025-11-28 21:10:39 +05:30
d73c793989 format chore 2025-11-28 21:02:29 +05:30
bbe4990878 add container-monitor example 2025-11-28 21:02:29 +05:30
600993f626 add syscall monitor 2025-11-28 21:02:28 +05:30
6c55d56ef0 add file io and network stats in container monitor example 2025-11-28 21:02:28 +05:30
704b0d8cd3 Fix debug info generation of PerfEventArray maps 2025-11-28 21:02:27 +05:30
0e50079d88 Add passing test struct_pylib.py 2025-11-28 21:02:27 +05:30
d457f87410 Bump actions/checkout from 5 to 6 in the actions group
Bumps the actions group with 1 update: [actions/checkout](https://github.com/actions/checkout).


Updates `actions/checkout` from 5 to 6
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-28 21:02:26 +05:30
4ea02745b3 Janitorial: Remove useless comments 2025-11-28 21:02:26 +05:30
84edddb685 Fix passing test hash_map_struct to include char array test 2025-11-28 21:02:25 +05:30
6f017a9176 Unify struct and pointer to struct handling, abstract null check in ir_ops 2025-11-28 21:02:25 +05:30
24e5829b80 format chore 2025-11-28 14:52:45 +05:30
2daedc5882 Fix debug info generation of PerfEventArray maps 2025-11-28 14:50:40 +05:30
14af7ec4dd add file_io.bpf.py to make container-monitor 2025-11-28 14:47:29 +05:30
536ea4855e add cgroup helper 2025-11-28 14:47:01 +05:30
5ba29db362 fix the c form of the xdp program 2025-11-27 23:44:44 +05:30
0ca835079d Merge pull request #74 from pythonbpf/fix-vmlinux-ir-gen
Fix some issues with vmlinux struct usage in syntax
TODO: this does NOT fix the XDP example.
The issue there is that the value is spilled to and re-loaded from the stack multiple times, so the verifier loses the range it had proven for that value.
2025-11-27 23:03:45 +05:30
de8c486461 fix: remove deref_to_depth on single depth pointers 2025-11-27 22:59:34 +05:30
f135cdbcc0 format chore 2025-11-27 14:03:12 +05:30
a8595ff1d2 feat: allocate tmp variable for pointer to vmlinux struct field access. 2025-11-27 14:02:00 +05:30
d43d3ad637 clear disksnoop output 2025-11-27 12:45:48 +05:30
9becee8f77 add expected type check 2025-11-27 12:44:12 +05:30
189526d5ca format chore 2025-11-27 12:42:10 +05:30
1593b7bcfe feat: user-defined struct casting 2025-11-27 12:41:57 +05:30
127852ee9f Add passing test struct_pylib.py 2025-11-27 04:45:34 +05:30
4905649700 feat: non-struct field values can be cast 2025-11-26 14:18:40 +05:30
7b7b00dbe7 add disksnoop ipynb 2025-11-25 22:51:24 +05:30
102e4ca78c add disksnoop example 2025-11-24 22:50:39 +05:30
2fd4fefbcc Merge pull request #73 from pythonbpf/dependabot/github_actions/actions-76468cb07f
Bump actions/checkout from 5 to 6 in the actions group
2025-11-24 17:15:55 +05:30
016fd5de5c Bump actions/checkout from 5 to 6 in the actions group
Bumps the actions group with 1 update: [actions/checkout](https://github.com/actions/checkout).


Updates `actions/checkout` from 5 to 6
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-24 11:44:07 +00:00
8ad5fb8a3a Janitorial: Remove useless comments 2025-11-23 06:29:44 +05:30
bf9635e324 Fix passing test hash_map_struct to include char array test 2025-11-23 06:27:01 +05:30
cbe365d760 Unify struct and pointer to struct handling, abstract null check in ir_ops 2025-11-23 06:26:09 +05:30
fed6af1ed6 bump version and prepare for release 2025-11-22 13:54:41 +05:30
18886816fb Merge pull request #68 from pythonbpf/request-struct
Support enough machinery to make request struct work
2025-11-22 13:48:06 +05:30
a2de15fb1e add c_int to type_deducer.py 2025-11-22 13:36:21 +05:30
9def969592 Make map val struct type allocation work by fixing pointer deref and debuginfogen: WIP 2025-11-22 13:20:09 +05:30
081ee5cb4c move requests.py to passing tests 2025-11-22 13:19:55 +05:30
a91c3158ad sort fields in debug info by offset order 2025-11-22 12:35:47 +05:30
2b3635fe20 format chore 2025-11-22 01:48:44 +05:30
6f25c554a9 fix CO-RE read for cast structs 2025-11-22 01:47:25 +05:30
84507b8b98 add btf probe read kernel helper 2025-11-22 00:57:12 +05:30
a42a75179d format chore 2025-11-22 00:37:39 +05:30
377fa4041d add regular struct field access handling in vmlinux_registry.py 2025-11-22 00:36:59 +05:30
99321c7669 add a failing C test 2025-11-21 23:01:08 +05:30
11850d16d3 field check in allocation pass 2025-11-21 21:47:58 +05:30
9ee821c7f6 make pointer allocation feasible by subverting LLC 2025-11-21 21:47:55 +05:30
25394059a6 allow casting 2025-11-21 21:47:10 +05:30
fde8eab775 allow allocation pass on vmlinux cast 2025-11-21 21:47:07 +05:30
42b8865a56 Merge branch 'master' into request-struct 2025-11-21 02:10:52 +05:30
144d9b0ab4 change c-file test structure 2025-11-20 17:24:02 +05:30
902a52a07d remove debug print statements 2025-11-20 14:39:13 +05:30
306570953b format chore 2025-11-20 14:18:45 +05:30
740eed45e1 add placeholder debug info to shut llvmlite up about NoneType 2025-11-20 14:17:57 +05:30
c8801f4c3e nonetype not parsed 2025-11-19 23:35:10 +05:30
e5b3b001ce Minor fix for PTR_TO_MAP_VALUE_OR_NULL target 2025-11-19 04:29:35 +05:30
19b42b9a19 Allocate hashmap lookup return vars based on the value type of said hashmap 2025-11-19 04:09:51 +05:30
9f5ec62383 Add get_uint8_type to DebugInfoGenerator 2025-11-19 03:24:40 +05:30
7af54df7c0 Add passing test hash_map_struct.py for using structs as hashmap key/val types 2025-11-19 00:17:01 +05:30
573bbb350e Allow structs to be key/val type for hashmaps 2025-11-19 00:08:15 +05:30
64679f8072 Add skeleton _get_key_val_dbg_type in maps_debug_info.py 2025-11-18 05:00:00 +05:30
5667facf23 Pass down structs_sym_tab to maps_debug_info, allow vmlinux enums to be used in an indexed format for map declaration 2025-11-18 04:34:51 +05:30
4f8af16a17 Pass structs_sym_tab to maps_proc 2025-11-18 04:34:42 +05:30
b84884162d Merge pull request #69 from pythonbpf/symex
Add support for userspace+kernelspace stack trace example using blazesym
2025-11-17 01:47:35 +05:30
49740598ea format chore 2025-11-13 09:31:10 +05:30
73bbf00e7c add tests 2025-11-13 09:29:53 +05:30
f7dee329cb fix nested pointers issue in array generation and also fix zero length array IR generation 2025-11-10 20:29:28 +05:30
5031f90377 fix stacked vmlinux struct parsing issue 2025-11-10 20:06:04 +05:30
95a624044a fix type error 2025-11-08 20:28:56 +05:30
c5bef26b88 support multiple imports on a single import line. 2025-11-08 18:08:04 +05:30
56 changed files with 5700 additions and 246 deletions

View File

@@ -12,7 +12,7 @@ jobs:
     name: Format
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
       - uses: actions/setup-python@v6
         with:
           python-version: "3.x"

View File

@@ -20,7 +20,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
       - uses: actions/setup-python@v6
         with:
@@ -33,7 +33,7 @@ jobs:
           python -m build
       - name: Upload distributions
-        uses: actions/upload-artifact@v5
+        uses: actions/upload-artifact@v6
         with:
          name: release-dists
          path: dist/
@@ -59,7 +59,7 @@ jobs:
     steps:
       - name: Retrieve release distributions
-        uses: actions/download-artifact@v6
+        uses: actions/download-artifact@v7
         with:
           name: release-dists
           path: dist/

View File

@@ -7,14 +7,25 @@ This folder contains BCC tutorial examples that have been ported to
- You will also need `matplotlib` for the vfsreadlat.py example.
- You will also need `rich` for the vfsreadlat_rich.py example.
- You will also need `plotly` and `dash` for the vfsreadlat_plotly.py example.
- All of these are listed in the `requirements.txt` file. You can install them with:
```bash
pip install -r requirements.txt
```
## Usage
- You'll need root privileges to run these examples. If you are using a virtualenv, use the following command to run the scripts:
```bash
sudo <path_to_virtualenv>/bin/python3 <script_name>.py
```
- For the disksnoop and container-monitor examples, you need to generate the vmlinux.py file first. Follow the instructions in the [main README](https://github.com/pythonbpf/Python-BPF/tree/master?tab=readme-ov-file#first-generate-the-vmlinuxpy-file-for-your-kernel) to generate the vmlinux.py file.
- For vfsreadlat_plotly.py, run the following command to start the Dash server:
```bash
sudo <path_to_virtualenv>/bin/python3 vfsreadlat_plotly/bpf_program.py
```
Then open your web browser and navigate to the given URL.
- For container-monitor, first copy vmlinux.py to the `container-monitor/` directory, then run:
```bash
cp vmlinux.py container-monitor/
sudo <path_to_virtualenv>/bin/python3 container-monitor/container_monitor.py
```

View File

@@ -0,0 +1,122 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "c3520e58-e50f-4bc1-8f9d-a6fecbf6e9f0",
"metadata": {},
"outputs": [],
"source": [
"from vmlinux import struct_request, struct_pt_regs\n",
"from pythonbpf import bpf, section, bpfglobal, map, BPF\n",
"from pythonbpf.helper import ktime\n",
"from pythonbpf.maps import HashMap\n",
"from ctypes import c_int64, c_uint64, c_int32\n",
"\n",
"REQ_WRITE = 1\n",
"\n",
"\n",
"@bpf\n",
"@map\n",
"def start() -> HashMap:\n",
" return HashMap(key=c_uint64, value=c_uint64, max_entries=10240)\n",
"\n",
"\n",
"@bpf\n",
"@section(\"kprobe/blk_mq_end_request\")\n",
"def trace_completion(ctx: struct_pt_regs) -> c_int64:\n",
" # Get request pointer from first argument\n",
" req_ptr = ctx.di\n",
" req = struct_request(ctx.di)\n",
" # Print: data_len, cmd_flags, latency_us\n",
" data_len = req.__data_len\n",
" cmd_flags = req.cmd_flags\n",
" # Lookup start timestamp\n",
" req_tsp = start.lookup(req_ptr)\n",
" if req_tsp:\n",
" # Calculate delta in nanoseconds\n",
" delta = ktime() - req_tsp\n",
"\n",
" # Convert to microseconds for printing\n",
" delta_us = delta // 1000\n",
"\n",
" print(f\"{data_len} {cmd_flags:x} {delta_us}\\n\")\n",
"\n",
" # Delete the entry\n",
" start.delete(req_ptr)\n",
"\n",
" return c_int64(0)\n",
"\n",
"\n",
"@bpf\n",
"@section(\"kprobe/blk_mq_start_request\")\n",
"def trace_start(ctx1: struct_pt_regs) -> c_int32:\n",
" req = ctx1.di\n",
" ts = ktime()\n",
" start.update(req, ts)\n",
" return c_int32(0)\n",
"\n",
"\n",
"@bpf\n",
"@bpfglobal\n",
"def LICENSE() -> str:\n",
" return \"GPL\"\n",
"\n",
"\n",
"b = BPF()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "97040f73-98e0-4993-94c6-125d1b42d931",
"metadata": {},
"outputs": [],
"source": [
"b.load()\n",
"b.attach_all()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b1bd4f51-fa25-42e1-877c-e48a2605189f",
"metadata": {},
"outputs": [],
"source": [
"from pythonbpf import trace_pipe"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "96b4b59b-b0db-4952-9534-7a714f685089",
"metadata": {},
"outputs": [],
"source": [
"trace_pipe()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

BCC-Examples/disksnoop.py (new file, 48 lines)
View File

@@ -0,0 +1,48 @@
from ctypes import c_int32, c_int64, c_uint64

from vmlinux import struct_pt_regs, struct_request

from pythonbpf import bpf, bpfglobal, compile, map, section
from pythonbpf.helper import ktime
from pythonbpf.maps import HashMap


@bpf
@map
def start() -> HashMap:
    return HashMap(key=c_uint64, value=c_uint64, max_entries=10240)


@bpf
@section("kprobe/blk_mq_end_request")
def trace_completion(ctx: struct_pt_regs) -> c_int64:
    req_ptr = ctx.di
    req = struct_request(ctx.di)
    data_len = req.__data_len
    cmd_flags = req.cmd_flags
    req_tsp = start.lookup(req_ptr)
    if req_tsp:
        delta = ktime() - req_tsp
        delta_us = delta // 1000
        print(f"{data_len} {cmd_flags:x} {delta_us}\n")
        start.delete(req_ptr)
    return c_int64(0)


@bpf
@section("kprobe/blk_mq_start_request")
def trace_start(ctx1: struct_pt_regs) -> c_int32:
    req = ctx1.di
    ts = ktime()
    start.update(req, ts)
    return c_int32(0)


@bpf
@bpfglobal
def LICENSE() -> str:
    return "GPL"


compile()
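
The disksnoop notebook earlier in this diff drives the same program interactively instead of calling `compile()`. A minimal sketch of that alternative, replacing the final `compile()` with the object API the notebook itself uses (run as root):

```python
from pythonbpf import BPF, trace_pipe

b = BPF()        # constructed after the @bpf definitions, as in the notebook
b.load()         # load the compiled program into the kernel
b.attach_all()   # attach both kprobes
trace_pipe()     # stream the printed events until interrupted
```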

View File

@@ -0,0 +1,9 @@
# =============================================================================
# Requirements for PythonBPF BCC-Examples
# =============================================================================
dash
matplotlib
numpy
plotly
rich

View File

@@ -40,16 +40,11 @@ Python-BPF is an LLVM IR generator for eBPF programs written in Python. It uses
---
## Try It Out!
Run
```bash
curl -s https://raw.githubusercontent.com/pythonbpf/Python-BPF/refs/heads/master/tools/setup.sh | sudo bash
```
## Installation
Dependencies:
* `bpftool`
* `clang`
* Python ≥ 3.8
@@ -61,6 +56,38 @@ pip install pythonbpf pylibbpf
---
## Try It Out!
#### First, generate the vmlinux.py file for your kernel:
- Install the required dependencies:
- On Ubuntu:
```bash
sudo apt-get install bpftool clang
pip install pythonbpf pylibbpf ctypeslib2
```
- Generate the `vmlinux.py` using:
```bash
sudo tools/vmlinux-gen.py
```
- Copy this file to `BCC-Examples/`
#### Next, install requirements for BCC-Examples:
- These are only needed for the Python notebooks and for the vfsreadlat and container-monitor examples.
```bash
pip install -r BCC-Examples/requirements.txt
```
- Now, follow the instructions in the [BCC-Examples/README.md](https://github.com/pythonbpf/Python-BPF/blob/master/BCC-Examples/README.md) to run the examples.
#### To spin up the Jupyter notebook examples:
- Run the following and follow the on-screen instructions:
```bash
curl -s https://raw.githubusercontent.com/pythonbpf/Python-BPF/refs/heads/master/tools/setup.sh | sudo bash
```
- Open the Jupyter server in your web browser and run the notebooks in the `BCC-Examples/` folder.
---
## Example Usage
```python
@@ -88,16 +115,15 @@ def hist() -> HashMap:
 @section("tracepoint/syscalls/sys_enter_clone")
 def hello(ctx: c_void_p) -> c_int64:
     process_id = pid()
-    one = 1
     prev = hist.lookup(process_id)
     if prev:
         previous_value = prev + 1
         print(f"count: {previous_value} with {process_id}")
         hist.update(process_id, previous_value)
-        return c_int64(0)
+        return 0
     else:
-        hist.update(process_id, one)
-        return c_int64(0)
+        hist.update(process_id, 1)
+        return 0
 @bpf

blazesym-example/Cargo.lock (generated, new file, 405 lines)
View File

@@ -0,0 +1,405 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 4
[[package]]
name = "adler2"
version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "320119579fcad9c21884f5c4861d16174d0e06250625266f50fe6898340abefa"
[[package]]
name = "anstream"
version = "0.6.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "43d5b281e737544384e969a5ccad3f1cdd24b48086a0fc1b2a5262a26b8f4f4a"
dependencies = [
"anstyle",
"anstyle-parse",
"anstyle-query",
"anstyle-wincon",
"colorchoice",
"is_terminal_polyfill",
"utf8parse",
]
[[package]]
name = "anstyle"
version = "1.0.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5192cca8006f1fd4f7237516f40fa183bb07f8fbdfedaa0036de5ea9b0b45e78"
[[package]]
name = "anstyle-parse"
version = "0.2.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4e7644824f0aa2c7b9384579234ef10eb7efb6a0deb83f9630a49594dd9c15c2"
dependencies = [
"utf8parse",
]
[[package]]
name = "anstyle-query"
version = "1.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "40c48f72fd53cd289104fc64099abca73db4166ad86ea0b4341abe65af83dadc"
dependencies = [
"windows-sys",
]
[[package]]
name = "anstyle-wincon"
version = "3.0.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "291e6a250ff86cd4a820112fb8898808a366d8f9f58ce16d1f538353ad55747d"
dependencies = [
"anstyle",
"once_cell_polyfill",
"windows-sys",
]
[[package]]
name = "anyhow"
version = "1.0.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a23eb6b1614318a8071c9b2521f36b424b2c83db5eb3a0fead4a6c0809af6e61"
[[package]]
name = "bitflags"
version = "2.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "812e12b5285cc515a9c72a5c1d3b6d46a19dac5acfef5265968c166106e31dd3"
[[package]]
name = "blazesym"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ace0ab71bbe9a25cb82f6d0e513ae11aebd1a38787664475bb2ed5cbe2329736"
dependencies = [
"cpp_demangle",
"gimli",
"libc",
"memmap2",
"miniz_oxide",
"rustc-demangle",
]
[[package]]
name = "cc"
version = "1.2.46"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b97463e1064cb1b1c1384ad0a0b9c8abd0988e2a91f52606c80ef14aadb63e36"
dependencies = [
"find-msvc-tools",
"shlex",
]
[[package]]
name = "cfg-if"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9330f8b2ff13f34540b44e946ef35111825727b38d33286ef986142615121801"
[[package]]
name = "cfg_aliases"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
[[package]]
name = "clap"
version = "4.5.51"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4c26d721170e0295f191a69bd9a1f93efcdb0aff38684b61ab5750468972e5f5"
dependencies = [
"clap_builder",
"clap_derive",
]
[[package]]
name = "clap_builder"
version = "4.5.51"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75835f0c7bf681bfd05abe44e965760fea999a5286c6eb2d59883634fd02011a"
dependencies = [
"anstream",
"anstyle",
"clap_lex",
"strsim",
]
[[package]]
name = "clap_derive"
version = "4.5.49"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2a0b5487afeab2deb2ff4e03a807ad1a03ac532ff5a2cee5d86884440c7f7671"
dependencies = [
"heck",
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "clap_lex"
version = "0.7.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1d728cc89cf3aee9ff92b05e62b19ee65a02b5702cff7d5a377e32c6ae29d8d"
[[package]]
name = "colorchoice"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b05b61dc5112cbb17e4b6cd61790d9845d13888356391624cbe7e41efeac1e75"
[[package]]
name = "cpp_demangle"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0667304c32ea56cb4cd6d2d7c0cfe9a2f8041229db8c033af7f8d69492429def"
dependencies = [
"cfg-if",
]
[[package]]
name = "equivalent"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "877a4ace8713b0bcf2a4e7eec82529c029f1d0619886d18145fea96c3ffe5c0f"
[[package]]
name = "fallible-iterator"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2acce4a10f12dc2fb14a218589d4f1f62ef011b2d0cc4b3cb1bba8e94da14649"
[[package]]
name = "find-msvc-tools"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3a3076410a55c90011c298b04d0cfa770b00fa04e1e3c97d3f6c9de105a03844"
[[package]]
name = "gimli"
version = "0.32.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e629b9b98ef3dd8afe6ca2bd0f89306cec16d43d907889945bc5d6687f2f13c7"
dependencies = [
"fallible-iterator",
"indexmap",
"stable_deref_trait",
]
[[package]]
name = "hashbrown"
version = "0.16.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5419bdc4f6a9207fbeba6d11b604d481addf78ecd10c11ad51e76c2f6482748d"
[[package]]
name = "heck"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
[[package]]
name = "indexmap"
version = "2.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6717a8d2a5a929a1a2eb43a12812498ed141a0bcfb7e8f7844fbdbe4303bba9f"
dependencies = [
"equivalent",
"hashbrown",
]
[[package]]
name = "is_terminal_polyfill"
version = "1.70.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a6cb138bb79a146c1bd460005623e142ef0181e3d0219cb493e02f7d08a35695"
[[package]]
name = "libbpf-rs"
version = "0.24.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "93edd9cd673087fa7518fd63ad6c87be2cd9b4e35034b1873f3e3258c018275b"
dependencies = [
"bitflags",
"libbpf-sys",
"libc",
"vsprintf",
]
[[package]]
name = "libbpf-sys"
version = "1.6.2+v1.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ba0346fc595fa2c8e274903e8a0e3ed5e6a29183af167567f6289fd3b116881b"
dependencies = [
"cc",
"nix",
"pkg-config",
]
[[package]]
name = "libc"
version = "0.2.177"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2874a2af47a2325c2001a6e6fad9b16a53b802102b528163885171cf92b15976"
[[package]]
name = "memmap2"
version = "0.9.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "744133e4a0e0a658e1374cf3bf8e415c4052a15a111acd372764c55b4177d490"
dependencies = [
"libc",
]
[[package]]
name = "miniz_oxide"
version = "0.8.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1fa76a2c86f704bdb222d66965fb3d63269ce38518b83cb0575fca855ebb6316"
dependencies = [
"adler2",
"simd-adler32",
]
[[package]]
name = "nix"
version = "0.30.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "74523f3a35e05aba87a1d978330aef40f67b0304ac79c1c00b294c9830543db6"
dependencies = [
"bitflags",
"cfg-if",
"cfg_aliases",
"libc",
]
[[package]]
name = "once_cell_polyfill"
version = "1.70.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "384b8ab6d37215f3c5301a95a4accb5d64aa607f1fcb26a11b5303878451b4fe"
[[package]]
name = "pkg-config"
version = "0.3.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7edddbd0b52d732b21ad9a5fab5c704c14cd949e5e9a1ec5929a24fded1b904c"
[[package]]
name = "plain"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4596b6d070b27117e987119b4dac604f3c58cfb0b191112e24771b2faeac1a6"
[[package]]
name = "proc-macro2"
version = "1.0.103"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5ee95bc4ef87b8d5ba32e8b7714ccc834865276eab0aed5c9958d00ec45f49e8"
dependencies = [
"unicode-ident",
]
[[package]]
name = "blazesym-example"
version = "0.1.0"
dependencies = [
"anyhow",
"blazesym",
"clap",
"libbpf-rs",
"libc",
"plain",
]
[[package]]
name = "quote"
version = "1.0.42"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a338cc41d27e6cc6dce6cefc13a0729dfbb81c262b1f519331575dd80ef3067f"
dependencies = [
"proc-macro2",
]
[[package]]
name = "rustc-demangle"
version = "0.1.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "56f7d92ca342cea22a06f2121d944b4fd82af56988c270852495420f961d4ace"
[[package]]
name = "shlex"
version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fda2ff0d084019ba4d7c6f371c95d8fd75ce3524c3cb8fb653a3023f6323e64"
[[package]]
name = "simd-adler32"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d66dc143e6b11c1eddc06d5c423cfc97062865baf299914ab64caa38182078fe"
[[package]]
name = "stable_deref_trait"
version = "1.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6ce2be8dc25455e1f91df71bfa12ad37d7af1092ae736f3a6cd0e37bc7810596"
[[package]]
name = "strsim"
version = "0.11.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f"
[[package]]
name = "syn"
version = "2.0.110"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a99801b5bd34ede4cf3fc688c5919368fea4e4814a4664359503e6015b280aea"
dependencies = [
"proc-macro2",
"quote",
"unicode-ident",
]
[[package]]
name = "unicode-ident"
version = "1.0.22"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9312f7c4f6ff9069b165498234ce8be658059c6728633667c526e27dc2cf1df5"
[[package]]
name = "utf8parse"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
[[package]]
name = "vsprintf"
version = "2.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aec2f81b75ca063294776b4f7e8da71d1d5ae81c2b1b149c8d89969230265d63"
dependencies = [
"cc",
"libc",
]
[[package]]
name = "windows-link"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5"
[[package]]
name = "windows-sys"
version = "0.61.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ae137229bcbd6cdf0f7b80a31df61766145077ddf49416a728b02cb3921ff3fc"
dependencies = [
"windows-link",
]

View File

@@ -0,0 +1,14 @@
[package]
name = "blazesym-example"
version = "0.1.0"
edition = "2024"
[dependencies]
libbpf-rs = "0.24"
blazesym = "0.2.0-rc.4"
anyhow = "1.0"
clap = { version = "4.5", features = ["derive"] }
libc = "0.2"
plain = "0.2"
[build-dependencies]

View File

@@ -0,0 +1,333 @@
// src/main.rs - Fixed imports and error handling
use std::mem;
use std::path::PathBuf;
use std::time::Duration;
use anyhow::{anyhow, Context, Result};
use blazesym::symbolize::{CodeInfo, Input, Symbolized, Symbolizer};
use blazesym::symbolize::source::{Source, Kernel, Process};
use clap::Parser;
use libbpf_rs::{MapCore, ObjectBuilder, RingBufferBuilder}; // Added MapCore
// Match your Python struct exactly
#[repr(C)]
#[derive(Debug, Copy, Clone)]
struct ExecEvent {
pid: i64,
cpu: i32,
timestamp: i64,
comm: [u8; 16],
kstack_sz: i64,
ustack_sz: i64,
kstack: [u8; 128], // str(128) in Python
ustack: [u8; 128], // str(128) in Python
}
unsafe impl plain::Plain for ExecEvent {}
// Define perf_event constants (not in libc on all platforms)
const PERF_TYPE_HARDWARE: u32 = 0;
const PERF_TYPE_SOFTWARE: u32 = 1;
const PERF_COUNT_HW_CPU_CYCLES: u64 = 0;
const PERF_COUNT_SW_CPU_CLOCK: u64 = 0;
#[repr(C)]
struct PerfEventAttr {
type_: u32,
size: u32,
config: u64,
sample_period_or_freq: u64,
sample_type: u64,
read_format: u64,
flags: u64,
// ... rest can be zeroed
_padding: [u64; 64],
}
#[derive(Parser, Debug)]
struct Args {
/// Path to the BPF object file
#[arg(default_value = "stack_traces.o")]
object_file: PathBuf,
/// Sampling frequency
#[arg(short, long, default_value_t = 50)]
freq: u64,
/// Use software events
#[arg(long)]
sw_event: bool,
/// Verbose output
#[arg(short, long)]
verbose: bool,
}
fn open_perf_event(cpu: i32, freq: u64, sw_event: bool) -> Result<i32> {
let mut attr: PerfEventAttr = unsafe { mem::zeroed() };
attr.size = mem::size_of::<PerfEventAttr>() as u32;
attr.type_ = if sw_event {
PERF_TYPE_SOFTWARE
} else {
PERF_TYPE_HARDWARE
};
attr.config = if sw_event {
PERF_COUNT_SW_CPU_CLOCK
} else {
PERF_COUNT_HW_CPU_CYCLES
};
// Use frequency-based sampling
attr.sample_period_or_freq = freq;
attr.flags = 1 << 10; // freq = 1, disabled = 1
let fd = unsafe {
libc::syscall(
libc::SYS_perf_event_open,
&attr as *const _,
-1, // pid = -1 (all processes)
cpu, // cpu
-1, // group_fd
0, // flags
)
};
if fd < 0 {
Err(anyhow!("Failed to open perf event on CPU {}: {}", cpu,
std::io::Error::last_os_error()))
} else {
Ok(fd as i32)
}
}
fn print_stack_trace(
addrs: &[u64],
symbolizer: &Symbolizer,
pid: u32,
is_kernel: bool,
) {
if addrs.is_empty() {
return;
}
let src = if is_kernel {
Source::Kernel(Kernel::default())
} else {
Source::Process(Process::new(pid.into()))
};
let syms = match symbolizer.symbolize(&src, Input::AbsAddr(addrs)) {
Ok(syms) => syms,
Err(e) => {
eprintln!(" Failed to symbolize: {}", e);
for addr in addrs {
println!("0x{:016x}: <no-symbol>", addr);
}
return;
}
};
for (addr, sym) in addrs.iter().zip(syms.iter()) {
match sym {
Symbolized::Sym(sym_info) => {
print!("0x{:016x}: {} @ 0x{:x}+0x{:x}",
addr, sym_info.name, sym_info.addr, sym_info.offset);
if let Some(ref code_info) = sym_info.code_info {
print_code_info(code_info);
}
println!();
// Print inlined frames
for inlined in &sym_info.inlined {
print!(" {} (inlined)", inlined.name);
if let Some(ref code_info) = inlined.code_info {
print_code_info(code_info);
}
println!();
}
}
Symbolized::Unknown(..) => {
println!("0x{:016x}: <no-symbol>", addr);
}
}
}
}
fn print_code_info(code_info: &CodeInfo) {
let path = code_info.to_path();
let path_str = path.display();
match (code_info.line, code_info.column) {
(Some(line), Some(col)) => print!(" {}:{}:{}", path_str, line, col),
(Some(line), None) => print!(" {}:{}", path_str, line),
(None, _) => print!(" {}", path_str),
}
}
fn handle_event(symbolizer: &Symbolizer, data: &[u8]) -> i32 {
let event = plain::from_bytes::<ExecEvent>(data).expect("Invalid event data");
// Extract comm string
let comm = std::str::from_utf8(&event.comm)
.unwrap_or("<unknown>")
.trim_end_matches('\0');
println!("[{:.9}] COMM: {} (pid={}) @ CPU {}",
event.timestamp as f64 / 1_000_000_000.0,
comm,
event.pid,
event.cpu);
// Handle kernel stack
if event.kstack_sz > 0 {
println!("Kernel:");
let num_frames = (event.kstack_sz / 8) as usize;
let kstack_u64 = unsafe {
std::slice::from_raw_parts(
event.kstack.as_ptr() as *const u64,
num_frames.min(16),
)
};
// Filter out zero addresses
let kstack: Vec<u64> = kstack_u64.iter()
.copied()
.take_while(|&addr| addr != 0)
.collect();
print_stack_trace(&kstack, symbolizer, 0, true);
} else {
println!("No Kernel Stack");
}
// Handle user stack
if event.ustack_sz > 0 {
println!("Userspace:");
let num_frames = (event.ustack_sz / 8) as usize;
let ustack_u64 = unsafe {
std::slice::from_raw_parts(
event.ustack.as_ptr() as *const u64,
num_frames.min(16),
)
};
// Filter out zero addresses
let ustack: Vec<u64> = ustack_u64.iter()
.copied()
.take_while(|&addr| addr != 0)
.collect();
print_stack_trace(&ustack, symbolizer, event.pid as u32, false);
} else {
println!("No Userspace Stack");
}
println!();
0
}
fn main() -> Result<()> {
let args = Args::parse();
if !args.object_file.exists() {
return Err(anyhow!("Object file not found: {:?}", args.object_file));
}
println!("Loading BPF object: {:?}", args.object_file);
// Load BPF object
let mut obj_builder = ObjectBuilder::default();
obj_builder.debug(args.verbose);
let open_obj = obj_builder
.open_file(&args.object_file)
.context("Failed to open BPF object")?;
let mut obj = open_obj.load().context("Failed to load BPF object")?;
println!("✓ BPF object loaded");
// Find the program
let prog = obj
.progs_mut()
.find(|p| p.name() == "trace_exec_enter")
.ok_or_else(|| anyhow!("Program 'trace_exec_enter' not found"))?;
println!("✓ Found program: trace_exec_enter");
// Find the map
let map = obj
.maps()
.find(|m| m.name() == "exec_events")
.ok_or_else(|| anyhow!("Map 'exec_events' not found"))?;
println!("✓ Found map: exec_events");
// Get number of CPUs
let num_cpus = libbpf_rs::num_possible_cpus()?;
println!("✓ Detected {} CPUs\n", num_cpus);
// Open perf events and attach BPF program
println!("Setting up perf events...");
let mut links = Vec::new();
for cpu in 0..num_cpus {
match open_perf_event(cpu as i32, args.freq, args.sw_event) {
Ok(perf_fd) => {
match prog.attach_perf_event(perf_fd) {
Ok(link) => {
links.push(link);
if args.verbose {
println!(" ✓ Attached to CPU {}", cpu);
}
}
Err(e) => {
eprintln!(" ✗ Failed to attach to CPU {}: {}", cpu, e);
unsafe { libc::close(perf_fd); }
}
}
}
Err(e) => {
if args.verbose {
eprintln!(" ✗ Failed to open perf event on CPU {}: {}", cpu, e);
}
}
}
}
println!("✓ Attached to {} CPUs\n", links.len());
if links.is_empty() {
return Err(anyhow!("Failed to attach to any CPU"));
}
// Initialize symbolizer
let symbolizer = Symbolizer::new();
// Set up ring buffer
let mut builder = RingBufferBuilder::new();
builder.add(&map, move |data: &[u8]| -> i32 {
handle_event(&symbolizer, data)
})?;
let ringbuf = builder.build()?;
println!("========================================");
println!("Profiling started. Press Ctrl+C to stop.");
println!("========================================\n");
// Poll for events - just keep polling until error
loop {
if let Err(e) = ringbuf.poll(Duration::from_millis(100)) {
// Any error breaks the loop (including Ctrl+C)
eprintln!("\nStopping: {}", e);
break;
}
}
println!("Done.");
Ok(())
}

View File

@@ -0,0 +1,49 @@
# tests/passing_tests/ringbuf_advanced.py
from pythonbpf import bpf, map, section, bpfglobal, struct, compile
from pythonbpf.maps import RingBuffer
from pythonbpf.helper import ktime, pid, smp_processor_id, comm, get_stack
from ctypes import c_void_p, c_int32, c_int64
import logging


@bpf
@struct
class exec_event:
    pid: c_int64
    cpu: c_int32
    timestamp: c_int64
    comm: str(16)  # type: ignore [valid-type]
    kstack_sz: c_int64
    ustack_sz: c_int64
    kstack: str(128)  # type: ignore [valid-type]
    ustack: str(128)  # type: ignore [valid-type]


@bpf
@map
def exec_events() -> RingBuffer:
    return RingBuffer(max_entries=1048576)


@bpf
@section("perf_event")
def trace_exec_enter(ctx: c_void_p) -> c_int64:
    evt = exec_event()
    evt.pid = pid()
    evt.cpu = smp_processor_id()
    evt.timestamp = ktime()
    comm(evt.comm)
    evt.kstack_sz = get_stack(evt.kstack)
    evt.ustack_sz = get_stack(evt.ustack, 256)
    exec_events.output(evt)
    print(f"Submitted exec_event for pid: {evt.pid}, cpu: {evt.cpu}")
    return 0  # type: ignore [return-value]


@bpf
@bpfglobal
def LICENSE() -> str:
    return "GPL"


compile(logging.INFO)

View File

@@ -0,0 +1,22 @@
"""
Process Anomaly Detection - Constants and Utilities
"""

import logging

logger = logging.getLogger(__name__)

MAX_SYSCALLS = 548


def comm_for_pid(pid: int) -> bytes | None:
    """Get process name from /proc."""
    try:
        with open(f"/proc/{pid}/comm", "rb") as f:
            return f.read().strip()
    except FileNotFoundError:
        logger.warning(f"Process with PID {pid} not found.")
    except PermissionError:
        logger.warning(f"Permission denied when accessing /proc/{pid}/comm.")
    except Exception as e:
        logger.warning(f"Error reading /proc/{pid}/comm: {e}")
    return None
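
A quick, hypothetical sanity check for `comm_for_pid`, resolving the interpreter's own PID (the package export matches the `from lib import ... comm_for_pid` used by probe.py below):

```python
import os

from lib import comm_for_pid

print(comm_for_pid(os.getpid()))   # e.g. b'python3'
print(comm_for_pid(99999999))      # None for a PID that does not exist (logs a warning)
```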

View File

@@ -0,0 +1,173 @@
"""
Autoencoder for Process Behavior Anomaly Detection

Uses Keras/TensorFlow to train an autoencoder on syscall patterns.
Anomalies are detected when reconstruction error exceeds threshold.
"""

import logging
import os

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow import keras

from lib import MAX_SYSCALLS

logger = logging.getLogger(__name__)


def create_autoencoder(n_inputs: int = MAX_SYSCALLS) -> keras.Model:
    """
    Create the autoencoder architecture.

    Architecture: input → encoder → bottleneck → decoder → output
    """
    inp = keras.Input(shape=(n_inputs,))

    # Encoder
    encoder = keras.layers.Dense(n_inputs)(inp)
    encoder = keras.layers.ReLU()(encoder)

    # Bottleneck (compressed representation)
    bottleneck = keras.layers.Dense(n_inputs // 2)(encoder)

    # Decoder
    decoder = keras.layers.Dense(n_inputs)(bottleneck)
    decoder = keras.layers.ReLU()(decoder)
    output = keras.layers.Dense(n_inputs, activation="linear")(decoder)

    model = keras.Model(inp, output)
    model.compile(optimizer="adam", loss="mse")
    return model


class AutoEncoder:
    """
    Autoencoder for syscall pattern anomaly detection.

    Usage:
        # Training
        ae = AutoEncoder('model.keras')
        model, threshold = ae.train('data.csv', epochs=200)

        # Inference
        ae = AutoEncoder('model.keras', load=True)
        _, errors, total_error = ae.predict([features])
    """

    def __init__(self, filename: str, load: bool = False):
        self.filename = filename
        self.model = None
        if load:
            self._load_model()

    def _load_model(self) -> None:
        """Load a trained model from disk."""
        if not os.path.exists(self.filename):
            raise FileNotFoundError(f"Model file not found: {self.filename}")
        logger.info(f"Loading model from {self.filename}")
        self.model = keras.models.load_model(self.filename)

    def train(
        self,
        datafile: str,
        epochs: int,
        batch_size: int,
        test_size: float = 0.1,
    ) -> tuple[keras.Model, float]:
        """
        Train the autoencoder on collected data.

        Args:
            datafile: Path to CSV file with training data
            epochs: Number of training epochs
            batch_size: Training batch size
            test_size: Fraction of data to use for validation

        Returns:
            Tuple of (trained model, error threshold)
        """
        if not os.path.exists(datafile):
            raise FileNotFoundError(f"Data file not found: {datafile}")

        logger.info(f"Loading training data from {datafile}")

        # Load and prepare data
        df = pd.read_csv(datafile)
        features = df.drop(["sample_time"], axis=1).values
        logger.info(f"Loaded {len(features)} samples with {features.shape[1]} features")

        # Split train/test
        train_data, test_data = train_test_split(
            features,
            test_size=test_size,
            random_state=42,
        )
        logger.info(f"Training set: {len(train_data)} samples")
        logger.info(f"Test set: {len(test_data)} samples")

        # Create and train model
        self.model = create_autoencoder()
        if self.model is None:
            raise RuntimeError("Failed to create the autoencoder model.")
        logger.info("Training autoencoder...")
        self.model.fit(
            train_data,
            train_data,
            validation_data=(test_data, test_data),
            epochs=epochs,
            batch_size=batch_size,
            verbose=1,
        )

        # Save model (use .keras format for Keras 3.x compatibility)
        self.model.save(self.filename)
        logger.info(f"Model saved to {self.filename}")

        # Calculate error threshold from test data
        threshold = self._calculate_threshold(test_data)
        return self.model, threshold

    def _calculate_threshold(self, test_data: np.ndarray) -> float:
        """Calculate error threshold from test data."""
        logger.info(f"Calculating error threshold from {len(test_data)} test samples")
        if self.model is None:
            raise RuntimeError("Model not loaded. Use load=True or train first.")
        predictions = self.model.predict(test_data, verbose=0)
        errors = np.abs(test_data - predictions).sum(axis=1)
        return float(errors.max())

    def predict(self, X: list | np.ndarray) -> tuple[np.ndarray, np.ndarray, float]:
        """
        Run prediction and return reconstruction error.

        Args:
            X: Input data (list of feature vectors)

        Returns:
            Tuple of (reconstructed, per_feature_errors, total_error)
        """
        if self.model is None:
            raise RuntimeError("Model not loaded. Use load=True or train first.")
        X = np.asarray(X, dtype=np.float32)
        y = self.model.predict(X, verbose=0)

        # Per-feature reconstruction error
        errors = np.abs(X[0] - y[0])
        total_error = float(errors.sum())
        return y, errors, total_error
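
A short usage sketch for `AutoEncoder`, following its own docstring (file names and hyperparameters are illustrative; note that `train` also requires `batch_size`, which the docstring's example omits):

```python
from lib import MAX_SYSCALLS
from lib.ml import AutoEncoder

# Training: fit on captured histogram deltas and get an error threshold
ae = AutoEncoder("model.keras")
model, threshold = ae.train("normal.csv", epochs=200, batch_size=32)

# Inference: total reconstruction error above the threshold flags an anomaly
ae = AutoEncoder("model.keras", load=True)
_, per_feature_errors, total_error = ae.predict([[0.0] * MAX_SYSCALLS])
print(total_error > threshold)
```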

View File

@@ -0,0 +1,448 @@
# Copyright 2017 Sasha Goldshtein
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
syscall.py contains functions useful for mapping between syscall names and numbers
"""
# Syscall table for Linux x86_64, not very recent. Automatically generated from
# https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/x86/entry/syscalls/syscall_64.tbl?h=linux-6.17.y
# using the following command:
#
# cat arch/x86/entry/syscalls/syscall_64.tbl \
# | awk 'BEGIN { print "syscalls = {" }
# /^[0-9]/ { print " "$1": b\""$3"\"," }
# END { print "}" }'
SYSCALLS = {
0: b"read",
1: b"write",
2: b"open",
3: b"close",
4: b"stat",
5: b"fstat",
6: b"lstat",
7: b"poll",
8: b"lseek",
9: b"mmap",
10: b"mprotect",
11: b"munmap",
12: b"brk",
13: b"rt_sigaction",
14: b"rt_sigprocmask",
15: b"rt_sigreturn",
16: b"ioctl",
17: b"pread64",
18: b"pwrite64",
19: b"readv",
20: b"writev",
21: b"access",
22: b"pipe",
23: b"select",
24: b"sched_yield",
25: b"mremap",
26: b"msync",
27: b"mincore",
28: b"madvise",
29: b"shmget",
30: b"shmat",
31: b"shmctl",
32: b"dup",
33: b"dup2",
34: b"pause",
35: b"nanosleep",
36: b"getitimer",
37: b"alarm",
38: b"setitimer",
39: b"getpid",
40: b"sendfile",
41: b"socket",
42: b"connect",
43: b"accept",
44: b"sendto",
45: b"recvfrom",
46: b"sendmsg",
47: b"recvmsg",
48: b"shutdown",
49: b"bind",
50: b"listen",
51: b"getsockname",
52: b"getpeername",
53: b"socketpair",
54: b"setsockopt",
55: b"getsockopt",
56: b"clone",
57: b"fork",
58: b"vfork",
59: b"execve",
60: b"exit",
61: b"wait4",
62: b"kill",
63: b"uname",
64: b"semget",
65: b"semop",
66: b"semctl",
67: b"shmdt",
68: b"msgget",
69: b"msgsnd",
70: b"msgrcv",
71: b"msgctl",
72: b"fcntl",
73: b"flock",
74: b"fsync",
75: b"fdatasync",
76: b"truncate",
77: b"ftruncate",
78: b"getdents",
79: b"getcwd",
80: b"chdir",
81: b"fchdir",
82: b"rename",
83: b"mkdir",
84: b"rmdir",
85: b"creat",
86: b"link",
87: b"unlink",
88: b"symlink",
89: b"readlink",
90: b"chmod",
91: b"fchmod",
92: b"chown",
93: b"fchown",
94: b"lchown",
95: b"umask",
96: b"gettimeofday",
97: b"getrlimit",
98: b"getrusage",
99: b"sysinfo",
100: b"times",
101: b"ptrace",
102: b"getuid",
103: b"syslog",
104: b"getgid",
105: b"setuid",
106: b"setgid",
107: b"geteuid",
108: b"getegid",
109: b"setpgid",
110: b"getppid",
111: b"getpgrp",
112: b"setsid",
113: b"setreuid",
114: b"setregid",
115: b"getgroups",
116: b"setgroups",
117: b"setresuid",
118: b"getresuid",
119: b"setresgid",
120: b"getresgid",
121: b"getpgid",
122: b"setfsuid",
123: b"setfsgid",
124: b"getsid",
125: b"capget",
126: b"capset",
127: b"rt_sigpending",
128: b"rt_sigtimedwait",
129: b"rt_sigqueueinfo",
130: b"rt_sigsuspend",
131: b"sigaltstack",
132: b"utime",
133: b"mknod",
134: b"uselib",
135: b"personality",
136: b"ustat",
137: b"statfs",
138: b"fstatfs",
139: b"sysfs",
140: b"getpriority",
141: b"setpriority",
142: b"sched_setparam",
143: b"sched_getparam",
144: b"sched_setscheduler",
145: b"sched_getscheduler",
146: b"sched_get_priority_max",
147: b"sched_get_priority_min",
148: b"sched_rr_get_interval",
149: b"mlock",
150: b"munlock",
151: b"mlockall",
152: b"munlockall",
153: b"vhangup",
154: b"modify_ldt",
155: b"pivot_root",
156: b"_sysctl",
157: b"prctl",
158: b"arch_prctl",
159: b"adjtimex",
160: b"setrlimit",
161: b"chroot",
162: b"sync",
163: b"acct",
164: b"settimeofday",
165: b"mount",
166: b"umount2",
167: b"swapon",
168: b"swapoff",
169: b"reboot",
170: b"sethostname",
171: b"setdomainname",
172: b"iopl",
173: b"ioperm",
174: b"create_module",
175: b"init_module",
176: b"delete_module",
177: b"get_kernel_syms",
178: b"query_module",
179: b"quotactl",
180: b"nfsservctl",
181: b"getpmsg",
182: b"putpmsg",
183: b"afs_syscall",
184: b"tuxcall",
185: b"security",
186: b"gettid",
187: b"readahead",
188: b"setxattr",
189: b"lsetxattr",
190: b"fsetxattr",
191: b"getxattr",
192: b"lgetxattr",
193: b"fgetxattr",
194: b"listxattr",
195: b"llistxattr",
196: b"flistxattr",
197: b"removexattr",
198: b"lremovexattr",
199: b"fremovexattr",
200: b"tkill",
201: b"time",
202: b"futex",
203: b"sched_setaffinity",
204: b"sched_getaffinity",
205: b"set_thread_area",
206: b"io_setup",
207: b"io_destroy",
208: b"io_getevents",
209: b"io_submit",
210: b"io_cancel",
211: b"get_thread_area",
212: b"lookup_dcookie",
213: b"epoll_create",
214: b"epoll_ctl_old",
215: b"epoll_wait_old",
216: b"remap_file_pages",
217: b"getdents64",
218: b"set_tid_address",
219: b"restart_syscall",
220: b"semtimedop",
221: b"fadvise64",
222: b"timer_create",
223: b"timer_settime",
224: b"timer_gettime",
225: b"timer_getoverrun",
226: b"timer_delete",
227: b"clock_settime",
228: b"clock_gettime",
229: b"clock_getres",
230: b"clock_nanosleep",
231: b"exit_group",
232: b"epoll_wait",
233: b"epoll_ctl",
234: b"tgkill",
235: b"utimes",
236: b"vserver",
237: b"mbind",
238: b"set_mempolicy",
239: b"get_mempolicy",
240: b"mq_open",
241: b"mq_unlink",
242: b"mq_timedsend",
243: b"mq_timedreceive",
244: b"mq_notify",
245: b"mq_getsetattr",
246: b"kexec_load",
247: b"waitid",
248: b"add_key",
249: b"request_key",
250: b"keyctl",
251: b"ioprio_set",
252: b"ioprio_get",
253: b"inotify_init",
254: b"inotify_add_watch",
255: b"inotify_rm_watch",
256: b"migrate_pages",
257: b"openat",
258: b"mkdirat",
259: b"mknodat",
260: b"fchownat",
261: b"futimesat",
262: b"newfstatat",
263: b"unlinkat",
264: b"renameat",
265: b"linkat",
266: b"symlinkat",
267: b"readlinkat",
268: b"fchmodat",
269: b"faccessat",
270: b"pselect6",
271: b"ppoll",
272: b"unshare",
273: b"set_robust_list",
274: b"get_robust_list",
275: b"splice",
276: b"tee",
277: b"sync_file_range",
278: b"vmsplice",
279: b"move_pages",
280: b"utimensat",
281: b"epoll_pwait",
282: b"signalfd",
283: b"timerfd_create",
284: b"eventfd",
285: b"fallocate",
286: b"timerfd_settime",
287: b"timerfd_gettime",
288: b"accept4",
289: b"signalfd4",
290: b"eventfd2",
291: b"epoll_create1",
292: b"dup3",
293: b"pipe2",
294: b"inotify_init1",
295: b"preadv",
296: b"pwritev",
297: b"rt_tgsigqueueinfo",
298: b"perf_event_open",
299: b"recvmmsg",
300: b"fanotify_init",
301: b"fanotify_mark",
302: b"prlimit64",
303: b"name_to_handle_at",
304: b"open_by_handle_at",
305: b"clock_adjtime",
306: b"syncfs",
307: b"sendmmsg",
308: b"setns",
309: b"getcpu",
310: b"process_vm_readv",
311: b"process_vm_writev",
312: b"kcmp",
313: b"finit_module",
314: b"sched_setattr",
315: b"sched_getattr",
316: b"renameat2",
317: b"seccomp",
318: b"getrandom",
319: b"memfd_create",
320: b"kexec_file_load",
321: b"bpf",
322: b"execveat",
323: b"userfaultfd",
324: b"membarrier",
325: b"mlock2",
326: b"copy_file_range",
327: b"preadv2",
328: b"pwritev2",
329: b"pkey_mprotect",
330: b"pkey_alloc",
331: b"pkey_free",
332: b"statx",
333: b"io_pgetevents",
334: b"rseq",
335: b"uretprobe",
424: b"pidfd_send_signal",
425: b"io_uring_setup",
426: b"io_uring_enter",
427: b"io_uring_register",
428: b"open_tree",
429: b"move_mount",
430: b"fsopen",
431: b"fsconfig",
432: b"fsmount",
433: b"fspick",
434: b"pidfd_open",
435: b"clone3",
436: b"close_range",
437: b"openat2",
438: b"pidfd_getfd",
439: b"faccessat2",
440: b"process_madvise",
441: b"epoll_pwait2",
442: b"mount_setattr",
443: b"quotactl_fd",
444: b"landlock_create_ruleset",
445: b"landlock_add_rule",
446: b"landlock_restrict_self",
447: b"memfd_secret",
448: b"process_mrelease",
449: b"futex_waitv",
450: b"set_mempolicy_home_node",
451: b"cachestat",
452: b"fchmodat2",
453: b"map_shadow_stack",
454: b"futex_wake",
455: b"futex_wait",
456: b"futex_requeue",
457: b"statmount",
458: b"listmount",
459: b"lsm_get_self_attr",
460: b"lsm_set_self_attr",
461: b"lsm_list_modules",
462: b"mseal",
463: b"setxattrat",
464: b"getxattrat",
465: b"listxattrat",
466: b"removexattrat",
467: b"open_tree_attr",
468: b"file_getattr",
469: b"file_setattr",
512: b"rt_sigaction",
513: b"rt_sigreturn",
514: b"ioctl",
515: b"readv",
516: b"writev",
517: b"recvfrom",
518: b"sendmsg",
519: b"recvmsg",
520: b"execve",
521: b"ptrace",
522: b"rt_sigpending",
523: b"rt_sigtimedwait",
524: b"rt_sigqueueinfo",
525: b"sigaltstack",
526: b"timer_create",
527: b"mq_notify",
528: b"kexec_load",
529: b"waitid",
530: b"set_robust_list",
531: b"get_robust_list",
532: b"vmsplice",
533: b"move_pages",
534: b"preadv",
535: b"pwritev",
536: b"rt_tgsigqueueinfo",
537: b"recvmmsg",
538: b"sendmmsg",
539: b"process_vm_readv",
540: b"process_vm_writev",
541: b"setsockopt",
542: b"getsockopt",
543: b"io_setup",
544: b"io_submit",
545: b"execveat",
546: b"preadv2",
547: b"pwritev2",
}
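
A quick, hypothetical check of the table above (a plain dict keyed by syscall number, with byte-string names):

```python
from lib.platform import SYSCALLS

assert SYSCALLS[59] == b"execve"
# Reverse lookup, name -> number; the first match wins where x32 entries repeat
execve_nr = next(nr for nr, name in SYSCALLS.items() if name == b"execve")
assert execve_nr == 59
```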

View File

@@ -0,0 +1,117 @@
"""
PythonBPF eBPF Probe for Syscall Histogram Collection
"""

from vmlinux import struct_trace_event_raw_sys_enter
from pythonbpf import bpf, map, section, bpfglobal, BPF
from pythonbpf.helper import pid
from pythonbpf.maps import HashMap
from ctypes import c_int64

from lib import MAX_SYSCALLS, comm_for_pid


@bpf
@map
def histogram() -> HashMap:
    return HashMap(key=c_int64, value=c_int64, max_entries=1024)


@bpf
@map
def target_pid_map() -> HashMap:
    return HashMap(key=c_int64, value=c_int64, max_entries=1)


@bpf
@section("tracepoint/raw_syscalls/sys_enter")
def trace_syscall(ctx: struct_trace_event_raw_sys_enter) -> c_int64:
    syscall_id = ctx.id
    current_pid = pid()
    target = target_pid_map.lookup(0)
    if target:
        if current_pid != target:
            return 0  # type: ignore
    if syscall_id < 0 or syscall_id >= 548:
        return 0  # type: ignore
    count = histogram.lookup(syscall_id)
    if count:
        histogram.update(syscall_id, count + 1)
    else:
        histogram.update(syscall_id, 1)
    return 0  # type: ignore


@bpf
@bpfglobal
def LICENSE() -> str:
    return "GPL"


ebpf_prog = BPF()


class Probe:
    """
    Syscall histogram probe for a target process.

    Usage:
        probe = Probe(target_pid=1234)
        probe.start()
        histogram = probe.get_histogram()
    """

    def __init__(self, target_pid: int, max_syscalls: int = MAX_SYSCALLS):
        self.target_pid = target_pid
        self.max_syscalls = max_syscalls
        self.comm = comm_for_pid(target_pid)
        if self.comm is None:
            raise ValueError(f"Cannot find process with PID {target_pid}")
        self._bpf = None
        self._histogram_map = None
        self._target_map = None

    def start(self):
        """Compile, load, and attach the BPF probe."""
        # Compile and load
        self._bpf = ebpf_prog
        self._bpf.load()
        self._bpf.attach_all()

        # Get map references
        self._histogram_map = self._bpf["histogram"]
        self._target_map = self._bpf["target_pid_map"]

        # Set target PID in the map
        self._target_map.update(0, self.target_pid)
        return self

    def get_histogram(self) -> list:
        """Read current histogram values as a list."""
        if self._histogram_map is None:
            raise RuntimeError("Probe not started. Call start() first.")
        result = [0] * self.max_syscalls
        for syscall_id in range(self.max_syscalls):
            try:
                count = self._histogram_map.lookup(syscall_id)
                if count is not None:
                    result[syscall_id] = int(count)
            except Exception:
                pass
        return result

    def __getitem__(self, syscall_id: int) -> int:
        """Allow indexing: probe[syscall_id]"""
        if self._histogram_map is None:
            raise RuntimeError("Probe not started")
        try:
            count = self._histogram_map.lookup(syscall_id)
            return int(count) if count is not None else 0
        except Exception:
            return 0
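
A usage sketch for `Probe`, pairing its histogram with the `SYSCALLS` table from `lib.platform` (the PID is illustrative; loading the BPF program requires root):

```python
from lib.platform import SYSCALLS
from lib.probe import Probe

probe = Probe(target_pid=1234).start()  # start() returns self
counts = probe.get_histogram()
for syscall_id, count in enumerate(counts):
    if count:
        name = SYSCALLS.get(syscall_id, b"?").decode()
        print(f"{name}: {count}")
```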

View File

@ -0,0 +1,335 @@
#!/usr/bin/env python3
"""
Process Behavior Anomaly Detection using PythonBPF and Autoencoders
Ported from evilsocket's BCC implementation to PythonBPF.
https://github.com/evilsocket/ebpf-process-anomaly-detection
Usage:
# 1.Learn normal behavior from a process
sudo python main.py --learn --pid 1234 --data normal.csv
# 2.Train the autoencoder (no sudo needed)
python main.py --train --data normal.csv --model model.h5
# 3.Monitor for anomalies
sudo python main.py --run --pid 1234 --model model.h5
"""
import argparse
import logging
import os
import sys
import time
from collections import Counter
from lib import MAX_SYSCALLS
from lib.ml import AutoEncoder
from lib.platform import SYSCALLS
from lib.probe import Probe
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
logger = logging.getLogger(__name__)
def learn(pid: int, data_path: str, poll_interval_ms: int) -> None:
"""
Capture syscall patterns from target process.
Args:
pid: Target process ID
data_path: Path to save CSV data
poll_interval_ms: Polling interval in milliseconds
"""
if os.path.exists(data_path):
logger.error(
f"{data_path} already exists.Delete it or use a different filename."
)
sys.exit(1)
try:
probe = Probe(pid)
except ValueError as e:
logger.error(str(e))
sys.exit(1)
probe_comm = probe.comm.decode() if probe.comm else "unknown"
print(f"📊 Learning from process {pid} ({probe_comm})")
print(f"📁 Saving data to {data_path}")
print(f"⏱️ Polling interval: {poll_interval_ms}ms")
print("Press Ctrl+C to stop...\n")
probe.start()
prev_histogram = [0.0] * MAX_SYSCALLS
prev_report_time = time.time()
sample_count = 0
poll_interval_sec = poll_interval_ms / 1000.0
header = "sample_time," + ",".join(f"sys_{i}" for i in range(MAX_SYSCALLS))
with open(data_path, "w") as fp:
fp.write(header + "\n")
try:
while True:
histogram = [float(x) for x in probe.get_histogram()]
if histogram != prev_histogram:
deltas = _compute_deltas(prev_histogram, histogram)
prev_histogram = histogram.copy()
row = f"{time.time()},{','.join(map(str, deltas))}"
fp.write(row + "\n")
fp.flush()
sample_count += 1
now = time.time()
if now - prev_report_time >= 1.0:
print(f" {sample_count} samples saved...")
prev_report_time = now
time.sleep(poll_interval_sec)
except KeyboardInterrupt:
print(f"\n✅ Stopped. Saved {sample_count} samples to {data_path}")
def train(data_path: str, model_path: str, epochs: int, batch_size: int) -> None:
"""
Train autoencoder on captured data.
Args:
data_path: Path to training CSV data
model_path: Path to save trained model
epochs: Number of training epochs
batch_size: Training batch size
"""
if not os.path.exists(data_path):
logger.error(f"Data file {data_path} not found.Run --learn first.")
sys.exit(1)
print(f"🧠 Training autoencoder on {data_path}")
print(f" Epochs: {epochs}")
print(f" Batch size: {batch_size}")
print()
ae = AutoEncoder(model_path)
_, threshold = ae.train(data_path, epochs, batch_size)
print()
print("=" * 50)
print("✅ Training complete!")
print(f" Model saved to: {model_path}")
print(f" Error threshold: {threshold:.6f}")
print()
print(f"💡 Use --max-error {threshold:.4f} when running detection")
print("=" * 50)
def run(pid: int, model_path: str, max_error: float, poll_interval_ms: int) -> None:
"""
Monitor process and detect anomalies.
Args:
pid: Target process ID
model_path: Path to trained model
max_error: Anomaly detection threshold
poll_interval_ms: Polling interval in milliseconds
"""
if not os.path.exists(model_path):
logger.error(f"Model file {model_path} not found. Run --train first.")
sys.exit(1)
try:
probe = Probe(pid)
except ValueError as e:
logger.error(str(e))
sys.exit(1)
ae = AutoEncoder(model_path, load=True)
probe_comm = probe.comm.decode() if probe.comm else "unknown"
print(f"🔍 Monitoring process {pid} ({probe_comm}) for anomalies")
print(f" Error threshold: {max_error}")
print(f" Polling interval: {poll_interval_ms}ms")
print("Press Ctrl+C to stop...\n")
probe.start()
prev_histogram = [0.0] * MAX_SYSCALLS
anomaly_count = 0
check_count = 0
poll_interval_sec = poll_interval_ms / 1000.0
try:
while True:
histogram = [float(x) for x in probe.get_histogram()]
if histogram != prev_histogram:
deltas = _compute_deltas(prev_histogram, histogram)
prev_histogram = histogram.copy()
check_count += 1
_, feat_errors, total_error = ae.predict([deltas])
if total_error > max_error:
anomaly_count += 1
_report_anomaly(anomaly_count, total_error, max_error, feat_errors)
time.sleep(poll_interval_sec)
except KeyboardInterrupt:
print("\n✅ Stopped.")
print(f" Checks performed: {check_count}")
print(f" Anomalies detected: {anomaly_count}")
def _compute_deltas(prev: list[float], current: list[float]) -> list[float]:
"""Compute rate of change between two histograms."""
deltas = []
for p, c in zip(prev, current):
if c != 0.0:
delta = 1.0 - (p / c)
else:
delta = 0.0
deltas.append(delta)
return deltas
def _report_anomaly(
count: int,
total_error: float,
threshold: float,
feat_errors: list[float],
) -> None:
"""Print anomaly report with top offending syscalls."""
print(f"🚨 ANOMALY #{count} detected!")
print(f" Total error: {total_error:.4f} (threshold: {threshold})")
errors_by_syscall = {idx: err for idx, err in enumerate(feat_errors)}
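    # collections.Counter accepts a mapping, so most_common(3) yields the three
    # syscalls with the largest reconstruction error.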
top3 = Counter(errors_by_syscall).most_common(3)
print(" Top anomalous syscalls:")
for idx, err in top3:
name = SYSCALLS.get(idx, f"syscall_{idx}")
print(f"{name!r}: {err:.4f}")
print()
def parse_args() -> argparse.Namespace:
"""Parse command line arguments."""
parser = argparse.ArgumentParser(
description="Process anomaly detection with PythonBPF and Autoencoders",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Learn from a process (e.g., Firefox) for a few minutes
sudo python main.py --learn --pid $(pgrep -o firefox) --data firefox.csv
# Train the model (no sudo needed)
python main.py --train --data firefox.csv --model firefox.h5
# Monitor the same process for anomalies
sudo python main.py --run --pid $(pgrep -o firefox) --model firefox.h5
# Full workflow for nginx:
sudo python main.py --learn --pid $(pgrep -o nginx) --data nginx_normal.csv
python main.py --train --data nginx_normal.csv --model nginx.h5 --epochs 100
sudo python main.py --run --pid $(pgrep -o nginx) --model nginx.h5 --max-error 0.05
""",
)
actions = parser.add_mutually_exclusive_group()
actions.add_argument(
"--learn",
action="store_true",
help="Capture syscall patterns from a process",
)
actions.add_argument(
"--train",
action="store_true",
help="Train autoencoder on captured data",
)
actions.add_argument(
"--run",
action="store_true",
help="Monitor process for anomalies",
)
parser.add_argument(
"--pid",
type=int,
default=0,
help="Target process ID",
)
parser.add_argument(
"--data",
default="data.csv",
help="CSV file for training data (default: data.csv)",
)
parser.add_argument(
"--model",
default="model.keras",
help="Model file path (default: model.h5)",
)
parser.add_argument(
"--time",
type=int,
default=100,
help="Polling interval in milliseconds (default: 100)",
)
parser.add_argument(
"--epochs",
type=int,
default=200,
help="Training epochs (default: 200)",
)
parser.add_argument(
"--batch-size",
type=int,
default=16,
help="Training batch size (default: 16)",
)
parser.add_argument(
"--max-error",
type=float,
default=0.09,
help="Anomaly detection threshold (default: 0.09)",
)
return parser.parse_args()
def main() -> None:
"""Main entry point."""
args = parse_args()
if not any([args.learn, args.train, args.run]):
print("No action specified.Use --learn, --train, or --run.")
print("Run with --help for usage information.")
sys.exit(0)
if args.learn:
if args.pid == 0:
logger.error("--pid required for --learn")
sys.exit(1)
learn(args.pid, args.data, args.time)
elif args.train:
train(args.data, args.model, args.epochs, args.batch_size)
elif args.run:
if args.pid == 0:
logger.error("--pid required for --run")
sys.exit(1)
run(args.pid, args.model, args.max_error, args.time)
if __name__ == "__main__":
main()

View File

@ -0,0 +1,49 @@
# Container Monitor TUI
A beautiful terminal-based container monitoring tool that combines syscall tracking, file I/O monitoring, and network traffic analysis using eBPF.
## Features
- 🎯 **Interactive Cgroup Selection** - Navigate and select cgroups with arrow keys
- 📊 **Real-time Monitoring** - Live graphs and statistics
- 🔥 **Syscall Tracking** - Total syscall count per cgroup
- 💾 **File I/O Monitoring** - Read/write operations and bytes with graphs
- 🌐 **Network Traffic** - RX/TX packets and bytes with live graphs
- **Efficient Caching** - Reduced /proc lookups for better performance
- 🎨 **Beautiful TUI** - Clean, colorful terminal interface
## Requirements
- Python 3.7+
- pythonbpf
- Root privileges (for eBPF)
## Installation
```bash
# Ensure you have pythonbpf installed
pip install pythonbpf
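# Note: container_monitor.py does `from vmlinux import ...`, so a vmlinux.py
# generated for your running kernel must be importable from this directory.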
# Run the monitor
sudo $(which python) container_monitor.py
```
## Usage
1. **Selection Screen**: Use ↑↓ arrow keys to navigate through cgroups, press ENTER to select
2. **Monitoring Screen**: View real-time graphs and statistics, press ESC or 'b' to go back
3. **Exit**: Press 'q' at any time to quit
## Architecture
- `container_monitor.py` - Main BPF program combining all three tracers
- `data_collection.py` - Data collection, caching, and history management
- `tui.py` - Terminal user interface with selection and monitoring screens
- `web_dashboard.py` - Optional Plotly Dash web dashboard (press 'w' in the TUI)
## BPF Programs
- **vfs_read/vfs_write** - Track file I/O operations
- **__netif_receive_skb/__dev_queue_xmit** - Track network traffic
- **raw_syscalls/sys_enter** - Count all syscalls
All programs filter by cgroup ID for per-container monitoring.
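
The cgroup ID that keys the BPF maps is the inode number of the cgroup's
directory under `/sys/fs/cgroup` (the data collector resolves names the same
way). You can cross-check an ID from the shell; the path below is only an
example:

```bash
# The inode of a cgroup directory is its cgroup ID in the maps
stat -c '%i' /sys/fs/cgroup/system.slice
```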

View File

@ -0,0 +1,220 @@
"""Container Monitor - TUI-based cgroup monitoring combining syscall, file I/O, and network tracking."""
from pythonbpf import bpf, map, section, bpfglobal, struct, BPF
from pythonbpf.maps import HashMap
from pythonbpf.helper import get_current_cgroup_id
from ctypes import c_int32, c_uint64, c_void_p
from vmlinux import struct_pt_regs, struct_sk_buff
from data_collection import ContainerDataCollector
from tui import ContainerMonitorTUI
# ==================== BPF Structs ====================
@bpf
@struct
class read_stats:
bytes: c_uint64
ops: c_uint64
@bpf
@struct
class write_stats:
bytes: c_uint64
ops: c_uint64
@bpf
@struct
class net_stats:
rx_packets: c_uint64
tx_packets: c_uint64
rx_bytes: c_uint64
tx_bytes: c_uint64
# ==================== BPF Maps ====================
@bpf
@map
def read_map() -> HashMap:
return HashMap(key=c_uint64, value=read_stats, max_entries=1024)
@bpf
@map
def write_map() -> HashMap:
return HashMap(key=c_uint64, value=write_stats, max_entries=1024)
@bpf
@map
def net_stats_map() -> HashMap:
return HashMap(key=c_uint64, value=net_stats, max_entries=1024)
@bpf
@map
def syscall_count() -> HashMap:
return HashMap(key=c_uint64, value=c_uint64, max_entries=1024)
# ==================== File I/O Tracing ====================
@bpf
@section("kprobe/vfs_read")
def trace_read(ctx: struct_pt_regs) -> c_int32:
cg = get_current_cgroup_id()
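    # vfs_read(file, buf, count, pos): on x86_64 the third argument ("count")
    # arrives in rdx, exposed here as ctx.dx.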
count = c_uint64(ctx.dx)
ptr = read_map.lookup(cg)
if ptr:
s = read_stats()
s.bytes = ptr.bytes + count
s.ops = ptr.ops + 1
read_map.update(cg, s)
else:
s = read_stats()
s.bytes = count
s.ops = c_uint64(1)
read_map.update(cg, s)
return c_int32(0)
@bpf
@section("kprobe/vfs_write")
def trace_write(ctx1: struct_pt_regs) -> c_int32:
cg = get_current_cgroup_id()
count = c_uint64(ctx1.dx)
ptr = write_map.lookup(cg)
if ptr:
s = write_stats()
s.bytes = ptr.bytes + count
s.ops = ptr.ops + 1
write_map.update(cg, s)
else:
s = write_stats()
s.bytes = count
s.ops = c_uint64(1)
write_map.update(cg, s)
return c_int32(0)
# ==================== Network I/O Tracing ====================
@bpf
@section("kprobe/__netif_receive_skb")
def trace_netif_rx(ctx2: struct_pt_regs) -> c_int32:
cgroup_id = get_current_cgroup_id()
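    # First kprobe argument (the struct sk_buff pointer) is in rdi (ctx2.di)
    # on x86_64.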
skb = struct_sk_buff(ctx2.di)
pkt_len = c_uint64(skb.len)
stats_ptr = net_stats_map.lookup(cgroup_id)
if stats_ptr:
stats = net_stats()
stats.rx_packets = stats_ptr.rx_packets + 1
stats.tx_packets = stats_ptr.tx_packets
stats.rx_bytes = stats_ptr.rx_bytes + pkt_len
stats.tx_bytes = stats_ptr.tx_bytes
net_stats_map.update(cgroup_id, stats)
else:
stats = net_stats()
stats.rx_packets = c_uint64(1)
stats.tx_packets = c_uint64(0)
stats.rx_bytes = pkt_len
stats.tx_bytes = c_uint64(0)
net_stats_map.update(cgroup_id, stats)
return c_int32(0)
@bpf
@section("kprobe/__dev_queue_xmit")
def trace_dev_xmit(ctx3: struct_pt_regs) -> c_int32:
cgroup_id = get_current_cgroup_id()
skb = struct_sk_buff(ctx3.di)
pkt_len = c_uint64(skb.len)
stats_ptr = net_stats_map.lookup(cgroup_id)
if stats_ptr:
stats = net_stats()
stats.rx_packets = stats_ptr.rx_packets
stats.tx_packets = stats_ptr.tx_packets + 1
stats.rx_bytes = stats_ptr.rx_bytes
stats.tx_bytes = stats_ptr.tx_bytes + pkt_len
net_stats_map.update(cgroup_id, stats)
else:
stats = net_stats()
stats.rx_packets = c_uint64(0)
stats.tx_packets = c_uint64(1)
stats.rx_bytes = c_uint64(0)
stats.tx_bytes = pkt_len
net_stats_map.update(cgroup_id, stats)
return c_int32(0)
# ==================== Syscall Tracing ====================
@bpf
@section("tracepoint/raw_syscalls/sys_enter")
def count_syscalls(ctx: c_void_p) -> c_int32:
cgroup_id = get_current_cgroup_id()
count_ptr = syscall_count.lookup(cgroup_id)
if count_ptr:
new_count = count_ptr + c_uint64(1)
syscall_count.update(cgroup_id, new_count)
else:
syscall_count.update(cgroup_id, c_uint64(1))
return c_int32(0)
@bpf
@bpfglobal
def LICENSE() -> str:
return "GPL"
# ==================== Main ====================
if __name__ == "__main__":
print("🔥 Loading BPF programs...")
# Load and attach BPF program
b = BPF()
b.load()
b.attach_all()
# Get map references and enable struct deserialization
read_map_ref = b["read_map"]
write_map_ref = b["write_map"]
net_stats_map_ref = b["net_stats_map"]
syscall_count_ref = b["syscall_count"]
read_map_ref.set_value_struct("read_stats")
write_map_ref.set_value_struct("write_stats")
net_stats_map_ref.set_value_struct("net_stats")
print("✅ BPF programs loaded and attached")
# Setup data collector
collector = ContainerDataCollector(
read_map_ref, write_map_ref, net_stats_map_ref, syscall_count_ref
)
# Create and run TUI
tui = ContainerMonitorTUI(collector)
tui.run()

View File

@ -0,0 +1,208 @@
"""Data collection and management for container monitoring."""
import os
import time
from pathlib import Path
from typing import Dict, List, Set, Optional
from dataclasses import dataclass
from collections import deque, defaultdict
@dataclass
class CgroupInfo:
"""Information about a cgroup."""
id: int
name: str
path: str
@dataclass
class ContainerStats:
"""Statistics for a container/cgroup."""
cgroup_id: int
cgroup_name: str
# File I/O
read_ops: int = 0
read_bytes: int = 0
write_ops: int = 0
write_bytes: int = 0
# Network I/O
rx_packets: int = 0
rx_bytes: int = 0
tx_packets: int = 0
tx_bytes: int = 0
# Syscalls
syscall_count: int = 0
# Timestamp
timestamp: float = 0.0
class ContainerDataCollector:
"""Collects and manages container monitoring data from BPF."""
def __init__(
self, read_map, write_map, net_stats_map, syscall_map, history_size: int = 100
):
self.read_map = read_map
self.write_map = write_map
self.net_stats_map = net_stats_map
self.syscall_map = syscall_map
# Caching
self._cgroup_cache: Dict[int, CgroupInfo] = {}
self._cgroup_cache_time = 0
        self._cache_ttl = 5.0  # Refresh cache every 5 seconds
# Historical data for graphing
self._history_size = history_size
self._history: Dict[int, deque] = defaultdict(
lambda: deque(maxlen=history_size)
)
def get_all_cgroups(self) -> List[CgroupInfo]:
"""Get all cgroups with caching."""
current_time = time.time()
# Use cached data if still valid
if current_time - self._cgroup_cache_time < self._cache_ttl:
return list(self._cgroup_cache.values())
# Refresh cache
self._refresh_cgroup_cache()
return list(self._cgroup_cache.values())
def _refresh_cgroup_cache(self):
"""Refresh the cgroup cache from /proc."""
cgroup_map: Dict[int, Set[str]] = defaultdict(set)
# Scan /proc to find all cgroups
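        # Each line of /proc/<pid>/cgroup is "hierarchy-id:controllers:path",
        # e.g. "0::/system.slice/docker-<id>.scope" on cgroup v2; the path
        # (third field) is mapped to an inode-based cgroup ID below.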
for proc_dir in Path("/proc").glob("[0-9]*"):
try:
cgroup_file = proc_dir / "cgroup"
if not cgroup_file.exists():
continue
with open(cgroup_file) as f:
for line in f:
parts = line.strip().split(":")
if len(parts) >= 3:
cgroup_path = parts[2]
cgroup_mount = f"/sys/fs/cgroup{cgroup_path}"
if os.path.exists(cgroup_mount):
stat_info = os.stat(cgroup_mount)
cgroup_id = stat_info.st_ino
cgroup_map[cgroup_id].add(cgroup_path)
except (PermissionError, FileNotFoundError, OSError):
continue
# Update cache with best names
new_cache = {}
for cgroup_id, paths in cgroup_map.items():
# Pick the most descriptive path
best_path = self._get_best_cgroup_path(paths)
name = self._get_cgroup_name(best_path)
new_cache[cgroup_id] = CgroupInfo(id=cgroup_id, name=name, path=best_path)
self._cgroup_cache = new_cache
self._cgroup_cache_time = time.time()
def _get_best_cgroup_path(self, paths: Set[str]) -> str:
"""Select the most descriptive cgroup path."""
path_list = list(paths)
# Prefer paths with more components (more specific)
# Prefer paths containing docker, podman, etc.
for keyword in ["docker", "podman", "kubernetes", "k8s", "systemd"]:
for path in path_list:
if keyword in path.lower():
return path
# Return longest path (most specific)
return max(path_list, key=lambda p: (len(p.split("/")), len(p)))
def _get_cgroup_name(self, path: str) -> str:
"""Extract a friendly name from cgroup path."""
if not path or path == "/":
return "root"
# Remove leading/trailing slashes
path = path.strip("/")
# Try to extract container ID or service name
parts = path.split("/")
# For Docker: /docker/<container_id>
if "docker" in path.lower():
for i, part in enumerate(parts):
if part.lower() == "docker" and i + 1 < len(parts):
container_id = parts[i + 1][:12] # Short ID
return f"docker:{container_id}"
# For systemd services
if "system.slice" in path:
for part in parts:
if part.endswith(".service"):
return part.replace(".service", "")
# For user slices
if "user.slice" in path:
return f"user:{parts[-1]}" if parts else "user"
# Default: use last component
return parts[-1] if parts else path
def get_stats_for_cgroup(self, cgroup_id: int) -> ContainerStats:
"""Get current statistics for a specific cgroup."""
cgroup_info = self._cgroup_cache.get(cgroup_id)
cgroup_name = cgroup_info.name if cgroup_info else f"cgroup-{cgroup_id}"
stats = ContainerStats(
cgroup_id=cgroup_id, cgroup_name=cgroup_name, timestamp=time.time()
)
# Get file I/O stats
read_stat = self.read_map.lookup(cgroup_id)
if read_stat:
stats.read_ops = int(read_stat.ops)
stats.read_bytes = int(read_stat.bytes)
write_stat = self.write_map.lookup(cgroup_id)
if write_stat:
stats.write_ops = int(write_stat.ops)
stats.write_bytes = int(write_stat.bytes)
# Get network stats
net_stat = self.net_stats_map.lookup(cgroup_id)
if net_stat:
stats.rx_packets = int(net_stat.rx_packets)
stats.rx_bytes = int(net_stat.rx_bytes)
stats.tx_packets = int(net_stat.tx_packets)
stats.tx_bytes = int(net_stat.tx_bytes)
# Get syscall count
syscall_cnt = self.syscall_map.lookup(cgroup_id)
if syscall_cnt is not None:
stats.syscall_count = int(syscall_cnt)
# Add to history
self._history[cgroup_id].append(stats)
return stats
def get_history(self, cgroup_id: int) -> List[ContainerStats]:
"""Get historical statistics for graphing."""
return list(self._history[cgroup_id])
def get_cgroup_info(self, cgroup_id: int) -> Optional[CgroupInfo]:
"""Get cached cgroup information."""
return self._cgroup_cache.get(cgroup_id)

View File

@ -0,0 +1,752 @@
"""Terminal User Interface for container monitoring."""
import time
import curses
import threading
from typing import Optional, List
from data_collection import ContainerDataCollector
from web_dashboard import WebDashboard
def _safe_addstr(stdscr, y: int, x: int, text: str, *args):
"""Safely add string to screen with bounds checking."""
try:
height, width = stdscr.getmaxyx()
if 0 <= y < height and 0 <= x < width:
# Truncate text to fit
max_len = width - x - 1
if max_len > 0:
stdscr.addstr(y, x, text[:max_len], *args)
except curses.error:
pass
def _draw_fancy_header(stdscr, title: str, subtitle: str):
"""Draw a fancy header with title and subtitle."""
height, width = stdscr.getmaxyx()
# Top border
_safe_addstr(stdscr, 0, 0, "" * width, curses.color_pair(6) | curses.A_BOLD)
# Title
_safe_addstr(
stdscr,
0,
max(0, (width - len(title)) // 2),
f" {title} ",
curses.color_pair(6) | curses.A_BOLD,
)
# Subtitle
_safe_addstr(
stdscr,
1,
max(0, (width - len(subtitle)) // 2),
subtitle,
curses.color_pair(1),
)
# Bottom border
_safe_addstr(stdscr, 2, 0, "" * width, curses.color_pair(6))
def _draw_metric_box(
stdscr,
y: int,
x: int,
width: int,
label: str,
value: str,
detail: str,
color_pair: int,
):
"""Draw a fancy box for displaying a metric."""
height, _ = stdscr.getmaxyx()
if y + 4 >= height:
return
# Top border
_safe_addstr(
stdscr, y, x, "" + "" * (width - 2) + "", color_pair | curses.A_BOLD
)
# Label
_safe_addstr(stdscr, y + 1, x, "", color_pair | curses.A_BOLD)
_safe_addstr(stdscr, y + 1, x + 2, label, color_pair | curses.A_BOLD)
_safe_addstr(stdscr, y + 1, x + width - 1, "", color_pair | curses.A_BOLD)
# Value
_safe_addstr(stdscr, y + 2, x, "", color_pair | curses.A_BOLD)
_safe_addstr(stdscr, y + 2, x + 4, value, curses.color_pair(2) | curses.A_BOLD)
_safe_addstr(
stdscr,
y + 2,
min(x + width - len(detail) - 3, x + width - 2),
detail,
color_pair | curses.A_BOLD,
)
_safe_addstr(stdscr, y + 2, x + width - 1, "", color_pair | curses.A_BOLD)
# Bottom border
_safe_addstr(
stdscr, y + 3, x, "" + "" * (width - 2) + "", color_pair | curses.A_BOLD
)
def _draw_section_header(stdscr, y: int, title: str, color_pair: int):
"""Draw a section header."""
height, width = stdscr.getmaxyx()
if y >= height:
return
_safe_addstr(stdscr, y, 2, title, curses.color_pair(color_pair) | curses.A_BOLD)
_safe_addstr(
stdscr,
y,
len(title) + 3,
"" * (width - len(title) - 5),
curses.color_pair(color_pair) | curses.A_BOLD,
)
def _calculate_rates(history: List) -> dict:
"""Calculate per-second rates from history."""
if len(history) < 2:
return {
"syscalls_per_sec": 0.0,
"rx_bytes_per_sec": 0.0,
"tx_bytes_per_sec": 0.0,
"rx_pkts_per_sec": 0.0,
"tx_pkts_per_sec": 0.0,
"read_bytes_per_sec": 0.0,
"write_bytes_per_sec": 0.0,
"read_ops_per_sec": 0.0,
"write_ops_per_sec": 0.0,
}
# Calculate delta between last two samples
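    # e.g. if syscall_count went 1000 -> 1500 and the timestamps are 0.5 s
    # apart: syscalls_per_sec = (1500 - 1000) / 0.5 = 1000.0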
recent = history[-1]
previous = history[-2]
time_delta = recent.timestamp - previous.timestamp
if time_delta <= 0:
time_delta = 1.0
return {
"syscalls_per_sec": (recent.syscall_count - previous.syscall_count)
/ time_delta,
"rx_bytes_per_sec": (recent.rx_bytes - previous.rx_bytes) / time_delta,
"tx_bytes_per_sec": (recent.tx_bytes - previous.tx_bytes) / time_delta,
"rx_pkts_per_sec": (recent.rx_packets - previous.rx_packets) / time_delta,
"tx_pkts_per_sec": (recent.tx_packets - previous.tx_packets) / time_delta,
"read_bytes_per_sec": (recent.read_bytes - previous.read_bytes) / time_delta,
"write_bytes_per_sec": (recent.write_bytes - previous.write_bytes) / time_delta,
"read_ops_per_sec": (recent.read_ops - previous.read_ops) / time_delta,
"write_ops_per_sec": (recent.write_ops - previous.write_ops) / time_delta,
}
def _format_bytes(bytes_val: float) -> str:
"""Format bytes into human-readable string."""
if bytes_val < 0:
bytes_val = 0
for unit in ["B", "KB", "MB", "GB", "TB"]:
if bytes_val < 1024.0:
return f"{bytes_val:.1f}{unit}"
bytes_val /= 1024.0
return f"{bytes_val:.1f}PB"
def _draw_bar_graph_enhanced(
stdscr,
y: int,
x: int,
width: int,
height: int,
data: List[float],
color_pair: int,
):
"""Draw an enhanced bar graph with axis and scale."""
screen_height, screen_width = stdscr.getmaxyx()
if not data or width < 2 or y + height >= screen_height:
return
# Calculate statistics
max_val = max(data) if max(data) > 0 else 1
min_val = min(data)
avg_val = sum(data) / len(data)
# Take last 'width - 12' data points (leave room for Y-axis)
graph_width = max(1, width - 12)
recent_data = data[-graph_width:] if len(data) > graph_width else data
# Draw Y-axis labels (with bounds checking)
if y < screen_height:
_safe_addstr(
stdscr, y, x, f"{_format_bytes(max_val):>9}", curses.color_pair(7)
)
if y + height // 2 < screen_height:
_safe_addstr(
stdscr,
y + height // 2,
x,
f"{_format_bytes(avg_val):>9}",
curses.color_pair(7),
)
if y + height - 1 < screen_height:
_safe_addstr(
stdscr,
y + height - 1,
x,
f"{_format_bytes(min_val):>9}",
curses.color_pair(7),
)
# Draw bars
for row in range(height):
if y + row >= screen_height:
break
threshold = (height - row) / height
bar_line = ""
for val in recent_data:
normalized = val / max_val if max_val > 0 else 0
            if normalized >= threshold:
                bar_line += "█"
            elif normalized >= threshold - 0.15:
                bar_line += "▓"
            elif normalized >= threshold - 0.35:
                bar_line += "▒"
            elif normalized >= threshold - 0.5:
                bar_line += "░"
            else:
                bar_line += " "
_safe_addstr(stdscr, y + row, x + 11, bar_line, color_pair)
# Draw X-axis
if y + height < screen_height:
_safe_addstr(
stdscr,
y + height,
x + 10,
"" + "" * len(recent_data),
curses.color_pair(7),
)
_safe_addstr(
stdscr,
y + height,
x + 10 + len(recent_data),
"→ time",
curses.color_pair(7),
)
def _draw_labeled_graph(
stdscr,
y: int,
x: int,
width: int,
height: int,
label: str,
rate: str,
detail: str,
data: List[float],
color_pair: int,
description: str,
):
"""Draw a graph with labels and legend."""
screen_height, screen_width = stdscr.getmaxyx()
if y >= screen_height or y + height + 2 >= screen_height:
return
# Header with metrics
_safe_addstr(stdscr, y, x, label, curses.color_pair(1) | curses.A_BOLD)
_safe_addstr(stdscr, y, x + len(label) + 2, rate, curses.color_pair(2))
_safe_addstr(
stdscr, y, x + len(label) + len(rate) + 4, detail, curses.color_pair(7)
)
# Draw the graph
if len(data) > 1:
_draw_bar_graph_enhanced(stdscr, y + 1, x, width, height, data, color_pair)
else:
_safe_addstr(stdscr, y + 2, x + 2, "Collecting data...", curses.color_pair(7))
# Graph legend
if y + height + 1 < screen_height:
_safe_addstr(
stdscr, y + height + 1, x, f"└─ {description}", curses.color_pair(7)
)
class ContainerMonitorTUI:
"""TUI for container monitoring with cgroup selection and live graphs."""
def __init__(self, collector: ContainerDataCollector):
self.collector = collector
self.selected_cgroup: Optional[int] = None
self.current_screen = "selection" # "selection" or "monitoring"
self.selected_index = 0
self.scroll_offset = 0
self.web_dashboard = None
self.web_thread = None
def run(self):
"""Run the TUI application."""
curses.wrapper(self._main_loop)
def _main_loop(self, stdscr):
"""Main curses loop."""
# Configure curses
curses.curs_set(0) # Hide cursor
stdscr.nodelay(True) # Non-blocking input
stdscr.timeout(100) # Refresh every 100ms
# Initialize colors
curses.start_color()
curses.init_pair(1, curses.COLOR_CYAN, curses.COLOR_BLACK)
curses.init_pair(2, curses.COLOR_GREEN, curses.COLOR_BLACK)
curses.init_pair(3, curses.COLOR_YELLOW, curses.COLOR_BLACK)
curses.init_pair(4, curses.COLOR_RED, curses.COLOR_BLACK)
curses.init_pair(5, curses.COLOR_MAGENTA, curses.COLOR_BLACK)
curses.init_pair(6, curses.COLOR_WHITE, curses.COLOR_BLUE)
curses.init_pair(7, curses.COLOR_BLUE, curses.COLOR_BLACK)
curses.init_pair(8, curses.COLOR_WHITE, curses.COLOR_CYAN)
while True:
stdscr.clear()
try:
height, width = stdscr.getmaxyx()
# Check minimum terminal size
if height < 25 or width < 80:
msg = "Terminal too small! Minimum: 80x25"
stdscr.attron(curses.color_pair(4) | curses.A_BOLD)
stdscr.addstr(
height // 2, max(0, (width - len(msg)) // 2), msg[: width - 1]
)
stdscr.attroff(curses.color_pair(4) | curses.A_BOLD)
stdscr.refresh()
key = stdscr.getch()
if key == ord("q") or key == ord("Q"):
break
continue
if self.current_screen == "selection":
self._draw_selection_screen(stdscr)
elif self.current_screen == "monitoring":
self._draw_monitoring_screen(stdscr)
stdscr.refresh()
# Handle input
key = stdscr.getch()
if key != -1:
if not self._handle_input(key, stdscr):
break # Exit requested
except KeyboardInterrupt:
break
except curses.error:
# Curses error - likely terminal too small, just continue
pass
except Exception as e:
# Show error briefly
height, width = stdscr.getmaxyx()
error_msg = f"Error: {str(e)[: width - 10]}"
stdscr.addstr(0, 0, error_msg[: width - 1])
stdscr.refresh()
time.sleep(1)
def _draw_selection_screen(self, stdscr):
"""Draw the cgroup selection screen."""
height, width = stdscr.getmaxyx()
# Draw fancy header box
_draw_fancy_header(stdscr, "🐳 CONTAINER MONITOR", "Select a Cgroup to Monitor")
# Instructions
instructions = (
"↑↓: Navigate | ENTER: Select | w: Web Mode | q: Quit | r: Refresh"
)
_safe_addstr(
stdscr,
3,
max(0, (width - len(instructions)) // 2),
instructions,
curses.color_pair(3),
)
# Get cgroups
cgroups = self.collector.get_all_cgroups()
if not cgroups:
msg = "No cgroups found. Waiting for activity..."
_safe_addstr(
stdscr,
height // 2,
max(0, (width - len(msg)) // 2),
msg,
curses.color_pair(4),
)
return
# Sort cgroups by name
cgroups.sort(key=lambda c: c.name)
# Adjust selection bounds
if self.selected_index >= len(cgroups):
self.selected_index = len(cgroups) - 1
if self.selected_index < 0:
self.selected_index = 0
# Calculate visible range
list_height = max(1, height - 8)
if self.selected_index < self.scroll_offset:
self.scroll_offset = self.selected_index
elif self.selected_index >= self.scroll_offset + list_height:
self.scroll_offset = self.selected_index - list_height + 1
# Calculate max name length and ID width for alignment
max_name_len = min(50, max(len(cg.name) for cg in cgroups))
max_id_len = max(len(str(cg.id)) for cg in cgroups)
# Draw cgroup list with fancy borders
start_y = 5
_safe_addstr(
stdscr, start_y, 2, "" + "" * (width - 6) + "", curses.color_pair(1)
)
# Header row
header = f" {'CGROUP NAME':<{max_name_len}}{'ID':>{max_id_len}} "
_safe_addstr(stdscr, start_y + 1, 2, "", curses.color_pair(1))
_safe_addstr(
stdscr, start_y + 1, 3, header, curses.color_pair(1) | curses.A_BOLD
)
_safe_addstr(stdscr, start_y + 1, width - 3, "", curses.color_pair(1))
# Separator
_safe_addstr(
stdscr, start_y + 2, 2, "" + "" * (width - 6) + "", curses.color_pair(1)
)
for i in range(list_height):
idx = self.scroll_offset + i
y = start_y + 3 + i
if y >= height - 2:
break
_safe_addstr(stdscr, y, 2, "", curses.color_pair(1))
_safe_addstr(stdscr, y, width - 3, "", curses.color_pair(1))
if idx >= len(cgroups):
continue
cgroup = cgroups[idx]
# Truncate name if too long
display_name = (
cgroup.name
if len(cgroup.name) <= max_name_len
else cgroup.name[: max_name_len - 3] + "..."
)
            if idx == self.selected_index:
                # Highlight selected with proper alignment
                line = f"▶ {display_name:<{max_name_len}}{cgroup.id:>{max_id_len}} "
                _safe_addstr(stdscr, y, 3, line, curses.color_pair(8) | curses.A_BOLD)
            else:
                line = f"  {display_name:<{max_name_len}}{cgroup.id:>{max_id_len}} "
                _safe_addstr(stdscr, y, 3, line, curses.color_pair(7))
# Bottom border
bottom_y = min(start_y + 3 + list_height, height - 3)
_safe_addstr(
stdscr, bottom_y, 2, "" + "" * (width - 6) + "", curses.color_pair(1)
)
# Footer
footer = f"Total: {len(cgroups)} cgroups"
if len(cgroups) > list_height:
footer += f" │ Showing {self.scroll_offset + 1}-{min(self.scroll_offset + list_height, len(cgroups))}"
_safe_addstr(
stdscr,
height - 2,
max(0, (width - len(footer)) // 2),
footer,
curses.color_pair(1),
)
def _draw_monitoring_screen(self, stdscr):
"""Draw the monitoring screen for selected cgroup."""
height, width = stdscr.getmaxyx()
if self.selected_cgroup is None:
return
# Get current stats
stats = self.collector.get_stats_for_cgroup(self.selected_cgroup)
history = self.collector.get_history(self.selected_cgroup)
# Draw fancy header
_draw_fancy_header(
stdscr, f"📊 {stats.cgroup_name[:40]}", "Live Performance Metrics"
)
# Instructions
instructions = "ESC/b: Back to List | w: Web Mode | q: Quit"
_safe_addstr(
stdscr,
3,
max(0, (width - len(instructions)) // 2),
instructions,
curses.color_pair(3),
)
# Calculate metrics for rate display
rates = _calculate_rates(history)
y = 5
# Syscall count in a fancy box
if y + 4 < height:
_draw_metric_box(
stdscr,
y,
2,
min(width - 4, 80),
"⚡ SYSTEM CALLS",
f"{stats.syscall_count:,}",
f"Rate: {rates['syscalls_per_sec']:.1f}/sec",
curses.color_pair(5),
)
y += 4
# Network I/O Section
if y + 8 < height:
_draw_section_header(stdscr, y, "🌐 NETWORK I/O", 1)
y += 1
# RX graph
rx_label = f"RX: {_format_bytes(stats.rx_bytes)}"
rx_rate = f"{_format_bytes(rates['rx_bytes_per_sec'])}/s"
rx_pkts = f"{stats.rx_packets:,} pkts ({rates['rx_pkts_per_sec']:.1f}/s)"
_draw_labeled_graph(
stdscr,
y,
2,
width - 4,
4,
rx_label,
rx_rate,
rx_pkts,
[s.rx_bytes for s in history],
curses.color_pair(2),
"Received Traffic (last 100 samples)",
)
y += 6
# TX graph
if y + 8 < height:
tx_label = f"TX: {_format_bytes(stats.tx_bytes)}"
tx_rate = f"{_format_bytes(rates['tx_bytes_per_sec'])}/s"
tx_pkts = f"{stats.tx_packets:,} pkts ({rates['tx_pkts_per_sec']:.1f}/s)"
_draw_labeled_graph(
stdscr,
y,
2,
width - 4,
4,
tx_label,
tx_rate,
tx_pkts,
[s.tx_bytes for s in history],
curses.color_pair(3),
"Transmitted Traffic (last 100 samples)",
)
y += 6
# File I/O Section
if y + 8 < height:
_draw_section_header(stdscr, y, "💾 FILE I/O", 1)
y += 1
# Read graph
read_label = f"READ: {_format_bytes(stats.read_bytes)}"
read_rate = f"{_format_bytes(rates['read_bytes_per_sec'])}/s"
read_ops = f"{stats.read_ops:,} ops ({rates['read_ops_per_sec']:.1f}/s)"
_draw_labeled_graph(
stdscr,
y,
2,
width - 4,
4,
read_label,
read_rate,
read_ops,
[s.read_bytes for s in history],
curses.color_pair(4),
"Read Operations (last 100 samples)",
)
y += 6
# Write graph
if y + 8 < height:
write_label = f"WRITE: {_format_bytes(stats.write_bytes)}"
write_rate = f"{_format_bytes(rates['write_bytes_per_sec'])}/s"
write_ops = f"{stats.write_ops:,} ops ({rates['write_ops_per_sec']:.1f}/s)"
_draw_labeled_graph(
stdscr,
y,
2,
width - 4,
4,
write_label,
write_rate,
write_ops,
[s.write_bytes for s in history],
curses.color_pair(5),
"Write Operations (last 100 samples)",
)
def _launch_web_mode(self, stdscr):
"""Launch web dashboard mode."""
height, width = stdscr.getmaxyx()
# Show transition message
stdscr.clear()
msg1 = "🌐 LAUNCHING WEB DASHBOARD"
_safe_addstr(
stdscr,
height // 2 - 2,
max(0, (width - len(msg1)) // 2),
msg1,
curses.color_pair(6) | curses.A_BOLD,
)
msg2 = "Server starting at http://localhost:8050"
_safe_addstr(
stdscr,
height // 2,
max(0, (width - len(msg2)) // 2),
msg2,
curses.color_pair(2),
)
msg3 = "Press 'q' to stop web server and return to TUI"
_safe_addstr(
stdscr,
height // 2 + 2,
max(0, (width - len(msg3)) // 2),
msg3,
curses.color_pair(3),
)
stdscr.refresh()
time.sleep(1)
try:
# Create and start web dashboard
self.web_dashboard = WebDashboard(
self.collector, selected_cgroup=self.selected_cgroup
)
# Start in background thread
self.web_thread = threading.Thread(
target=self.web_dashboard.run, daemon=True
)
self.web_thread.start()
time.sleep(2) # Give server time to start
# Wait for user to press 'q' to return
msg4 = "Web dashboard running at http://localhost:8050"
msg5 = "Press 'q' to return to TUI"
_safe_addstr(
stdscr,
height // 2 + 4,
max(0, (width - len(msg4)) // 2),
msg4,
curses.color_pair(1) | curses.A_BOLD,
)
_safe_addstr(
stdscr,
height // 2 + 5,
max(0, (width - len(msg5)) // 2),
msg5,
curses.color_pair(3) | curses.A_BOLD,
)
stdscr.refresh()
stdscr.nodelay(False) # Blocking mode
while True:
key = stdscr.getch()
if key == ord("q") or key == ord("Q"):
break
# Stop web server
if self.web_dashboard:
self.web_dashboard.stop()
except Exception as e:
error_msg = f"Error starting web dashboard: {str(e)}"
_safe_addstr(
stdscr,
height // 2 + 4,
max(0, (width - len(error_msg)) // 2),
error_msg,
curses.color_pair(4),
)
stdscr.refresh()
time.sleep(3)
# Restore TUI settings
stdscr.nodelay(True)
stdscr.timeout(100)
def _handle_input(self, key: int, stdscr) -> bool:
"""Handle keyboard input. Returns False to exit."""
if key == ord("q") or key == ord("Q"):
return False # Exit
if key == ord("w") or key == ord("W"):
# Launch web mode
self._launch_web_mode(stdscr)
return True
if self.current_screen == "selection":
if key == curses.KEY_UP:
self.selected_index = max(0, self.selected_index - 1)
elif key == curses.KEY_DOWN:
cgroups = self.collector.get_all_cgroups()
self.selected_index = min(len(cgroups) - 1, self.selected_index + 1)
elif key == ord("\n") or key == curses.KEY_ENTER or key == 10:
# Select cgroup
cgroups = self.collector.get_all_cgroups()
if cgroups and 0 <= self.selected_index < len(cgroups):
cgroups.sort(key=lambda c: c.name)
self.selected_cgroup = cgroups[self.selected_index].id
self.current_screen = "monitoring"
elif key == ord("r") or key == ord("R"):
# Force refresh cache
self.collector._cgroup_cache_time = 0
elif self.current_screen == "monitoring":
if key == 27 or key == ord("b") or key == ord("B"): # ESC or 'b'
self.current_screen = "selection"
self.selected_cgroup = None
return True # Continue running

View File

@ -0,0 +1,826 @@
"""Beautiful web dashboard for container monitoring using Plotly Dash."""
import dash
from dash import dcc, html
from dash.dependencies import Input, Output
import plotly.graph_objects as go
from plotly.subplots import make_subplots
from typing import Optional
from data_collection import ContainerDataCollector
class WebDashboard:
"""Beautiful web dashboard for container monitoring."""
def __init__(
self,
collector: ContainerDataCollector,
selected_cgroup: Optional[int] = None,
host: str = "0.0.0.0",
port: int = 8050,
):
self.collector = collector
self.selected_cgroup = selected_cgroup
self.host = host
self.port = port
# Suppress Dash dev tools and debug output
self.app = dash.Dash(
__name__,
title="pythonBPF Container Monitor",
suppress_callback_exceptions=True,
)
self._setup_layout()
self._setup_callbacks()
self._running = False
def _setup_layout(self):
"""Create the dashboard layout."""
self.app.layout = html.Div(
[
# Futuristic Header with pythonBPF branding
html.Div(
[
html.Div(
[
html.Div(
[
html.Span(
"python",
style={
"fontSize": "52px",
"fontWeight": "300",
"color": "#00ff88",
"fontFamily": "'Courier New', monospace",
"textShadow": "0 0 20px rgba(0,255,136,0.5)",
},
),
html.Span(
"BPF",
style={
"fontSize": "52px",
"fontWeight": "900",
"color": "#00d4ff",
"fontFamily": "'Courier New', monospace",
"textShadow": "0 0 20px rgba(0,212,255,0.5)",
},
),
],
style={"marginBottom": "5px"},
),
html.Div(
"CONTAINER PERFORMANCE MONITOR",
style={
"fontSize": "16px",
"letterSpacing": "8px",
"color": "#8899ff",
"fontWeight": "300",
"fontFamily": "'Courier New', monospace",
},
),
],
style={
"textAlign": "center",
},
),
html.Div(
id="cgroup-name",
style={
"textAlign": "center",
"color": "#00ff88",
"fontSize": "20px",
"marginTop": "15px",
"fontFamily": "'Courier New', monospace",
"fontWeight": "bold",
"textShadow": "0 0 10px rgba(0,255,136,0.3)",
},
),
],
style={
"background": "linear-gradient(135deg, #0a0e27 0%, #1a1f3a 50%, #0a0e27 100%)",
"padding": "40px 20px",
"borderRadius": "0",
"marginBottom": "0",
"boxShadow": "0 10px 40px rgba(0,212,255,0.2)",
"border": "1px solid rgba(0,212,255,0.3)",
"borderTop": "3px solid #00d4ff",
"borderBottom": "3px solid #00ff88",
"position": "relative",
"overflow": "hidden",
},
),
# Cgroup selector (if no cgroup selected)
html.Div(
[
html.Label(
"SELECT CGROUP:",
style={
"fontSize": "14px",
"fontWeight": "bold",
"color": "#00d4ff",
"marginRight": "15px",
"fontFamily": "'Courier New', monospace",
"letterSpacing": "2px",
},
),
dcc.Dropdown(
id="cgroup-selector",
style={
"width": "600px",
"display": "inline-block",
"background": "#1a1f3a",
"border": "1px solid #00d4ff",
},
),
],
id="selector-container",
style={
"textAlign": "center",
"marginTop": "30px",
"marginBottom": "30px",
"padding": "20px",
"background": "rgba(26,31,58,0.5)",
"borderRadius": "10px",
"border": "1px solid rgba(0,212,255,0.2)",
"display": "block" if self.selected_cgroup is None else "none",
},
),
# Stats cards row
html.Div(
[
self._create_stat_card(
"syscall-card", "⚡ SYSCALLS", "#00ff88"
),
self._create_stat_card("network-card", "🌐 NETWORK", "#00d4ff"),
self._create_stat_card("file-card", "💾 FILE I/O", "#ff0088"),
],
style={
"display": "flex",
"justifyContent": "space-around",
"marginBottom": "30px",
"marginTop": "30px",
"gap": "25px",
"flexWrap": "wrap",
"padding": "0 20px",
},
),
# Graphs container
html.Div(
[
# Network graphs
html.Div(
[
html.Div(
[
html.Span("🌐 ", style={"fontSize": "24px"}),
html.Span(
"NETWORK",
style={
"fontFamily": "'Courier New', monospace",
"letterSpacing": "3px",
"fontWeight": "bold",
},
),
html.Span(
" I/O",
style={
"fontFamily": "'Courier New', monospace",
"letterSpacing": "3px",
"color": "#00d4ff",
},
),
],
style={
"color": "#ffffff",
"fontSize": "20px",
"borderBottom": "2px solid #00d4ff",
"paddingBottom": "15px",
"marginBottom": "25px",
"textShadow": "0 0 10px rgba(0,212,255,0.3)",
},
),
dcc.Graph(
id="network-graph", style={"height": "400px"}
),
],
style={
"background": "linear-gradient(135deg, #0a0e27 0%, #1a1f3a 100%)",
"padding": "30px",
"borderRadius": "15px",
"boxShadow": "0 8px 32px rgba(0,212,255,0.15)",
"marginBottom": "30px",
"border": "1px solid rgba(0,212,255,0.2)",
},
),
# File I/O graphs
html.Div(
[
html.Div(
[
html.Span("💾 ", style={"fontSize": "24px"}),
html.Span(
"FILE",
style={
"fontFamily": "'Courier New', monospace",
"letterSpacing": "3px",
"fontWeight": "bold",
},
),
html.Span(
" I/O",
style={
"fontFamily": "'Courier New', monospace",
"letterSpacing": "3px",
"color": "#ff0088",
},
),
],
style={
"color": "#ffffff",
"fontSize": "20px",
"borderBottom": "2px solid #ff0088",
"paddingBottom": "15px",
"marginBottom": "25px",
"textShadow": "0 0 10px rgba(255,0,136,0.3)",
},
),
dcc.Graph(
id="file-io-graph", style={"height": "400px"}
),
],
style={
"background": "linear-gradient(135deg, #0a0e27 0%, #1a1f3a 100%)",
"padding": "30px",
"borderRadius": "15px",
"boxShadow": "0 8px 32px rgba(255,0,136,0.15)",
"marginBottom": "30px",
"border": "1px solid rgba(255,0,136,0.2)",
},
),
# Combined time series
html.Div(
[
html.Div(
[
html.Span("📈 ", style={"fontSize": "24px"}),
html.Span(
"REAL-TIME",
style={
"fontFamily": "'Courier New', monospace",
"letterSpacing": "3px",
"fontWeight": "bold",
},
),
html.Span(
" METRICS",
style={
"fontFamily": "'Courier New', monospace",
"letterSpacing": "3px",
"color": "#00ff88",
},
),
],
style={
"color": "#ffffff",
"fontSize": "20px",
"borderBottom": "2px solid #00ff88",
"paddingBottom": "15px",
"marginBottom": "25px",
"textShadow": "0 0 10px rgba(0,255,136,0.3)",
},
),
dcc.Graph(
id="timeseries-graph", style={"height": "500px"}
),
],
style={
"background": "linear-gradient(135deg, #0a0e27 0%, #1a1f3a 100%)",
"padding": "30px",
"borderRadius": "15px",
"boxShadow": "0 8px 32px rgba(0,255,136,0.15)",
"border": "1px solid rgba(0,255,136,0.2)",
},
),
],
style={"padding": "0 20px"},
),
# Footer with pythonBPF branding
html.Div(
[
html.Div(
[
html.Span(
"Powered by ",
style={"color": "#8899ff", "fontSize": "12px"},
),
html.Span(
"pythonBPF",
style={
"color": "#00d4ff",
"fontSize": "14px",
"fontWeight": "bold",
"fontFamily": "'Courier New', monospace",
},
),
html.Span(
" | eBPF Container Monitoring",
style={
"color": "#8899ff",
"fontSize": "12px",
"marginLeft": "10px",
},
),
]
)
],
style={
"textAlign": "center",
"padding": "20px",
"marginTop": "40px",
"background": "linear-gradient(135deg, #0a0e27 0%, #1a1f3a 100%)",
"borderTop": "1px solid rgba(0,212,255,0.2)",
},
),
# Auto-update interval
dcc.Interval(id="interval-component", interval=1000, n_intervals=0),
],
style={
"padding": "0",
"fontFamily": "'Segoe UI', 'Courier New', monospace",
"background": "linear-gradient(to bottom, #050813 0%, #0a0e27 100%)",
"minHeight": "100vh",
"margin": "0",
},
)
def _create_stat_card(self, card_id: str, title: str, color: str):
"""Create a statistics card with futuristic styling."""
return html.Div(
[
html.H3(
title,
style={
"color": color,
"fontSize": "16px",
"marginBottom": "20px",
"fontWeight": "bold",
"fontFamily": "'Courier New', monospace",
"letterSpacing": "2px",
"textShadow": f"0 0 10px {color}50",
},
),
html.Div(
[
html.Div(
id=f"{card_id}-value",
style={
"fontSize": "42px",
"fontWeight": "bold",
"color": "#ffffff",
"marginBottom": "10px",
"fontFamily": "'Courier New', monospace",
"textShadow": f"0 0 20px {color}40",
},
),
html.Div(
id=f"{card_id}-rate",
style={
"fontSize": "14px",
"color": "#8899ff",
"fontFamily": "'Courier New', monospace",
},
),
]
),
],
style={
"flex": "1",
"minWidth": "280px",
"background": "linear-gradient(135deg, #0a0e27 0%, #1a1f3a 100%)",
"padding": "30px",
"borderRadius": "15px",
"boxShadow": f"0 8px 32px {color}20",
"border": f"1px solid {color}40",
"borderLeft": f"4px solid {color}",
"transition": "transform 0.3s, box-shadow 0.3s",
"position": "relative",
"overflow": "hidden",
},
)
def _setup_callbacks(self):
"""Setup dashboard callbacks."""
@self.app.callback(
[Output("cgroup-selector", "options"), Output("cgroup-selector", "value")],
[Input("interval-component", "n_intervals")],
)
def update_cgroup_selector(n):
if self.selected_cgroup is not None:
return [], self.selected_cgroup
cgroups = self.collector.get_all_cgroups()
options = [
{"label": f"{cg.name} (ID: {cg.id})", "value": cg.id}
for cg in sorted(cgroups, key=lambda c: c.name)
]
value = options[0]["value"] if options else None
if value and self.selected_cgroup is None:
self.selected_cgroup = value
return options, self.selected_cgroup
@self.app.callback(
Output("cgroup-selector", "value", allow_duplicate=True),
[Input("cgroup-selector", "value")],
prevent_initial_call=True,
)
def select_cgroup(value):
if value:
self.selected_cgroup = value
return value
@self.app.callback(
[
Output("cgroup-name", "children"),
Output("syscall-card-value", "children"),
Output("syscall-card-rate", "children"),
Output("network-card-value", "children"),
Output("network-card-rate", "children"),
Output("file-card-value", "children"),
Output("file-card-rate", "children"),
Output("network-graph", "figure"),
Output("file-io-graph", "figure"),
Output("timeseries-graph", "figure"),
],
[Input("interval-component", "n_intervals")],
)
def update_dashboard(n):
if self.selected_cgroup is None:
empty_fig = self._create_empty_figure(
"Select a cgroup to begin monitoring"
)
return (
"SELECT A CGROUP TO START",
"0",
"",
"0 B",
"",
"0 B",
"",
empty_fig,
empty_fig,
empty_fig,
)
try:
stats = self.collector.get_stats_for_cgroup(self.selected_cgroup)
history = self.collector.get_history(self.selected_cgroup)
rates = self._calculate_rates(history)
return (
f"{stats.cgroup_name}",
f"{stats.syscall_count:,}",
f"{rates['syscalls_per_sec']:.1f} calls/sec",
f"{self._format_bytes(stats.rx_bytes + stats.tx_bytes)}",
f"{self._format_bytes(rates['rx_bytes_per_sec'])}/s ↑ {self._format_bytes(rates['tx_bytes_per_sec'])}/s",
f"{self._format_bytes(stats.read_bytes + stats.write_bytes)}",
f"R: {self._format_bytes(rates['read_bytes_per_sec'])}/s W: {self._format_bytes(rates['write_bytes_per_sec'])}/s",
self._create_network_graph(history),
self._create_file_io_graph(history),
self._create_timeseries_graph(history),
)
except Exception as e:
empty_fig = self._create_empty_figure(f"Error: {str(e)}")
return (
"ERROR",
"0",
str(e),
"0 B",
"",
"0 B",
"",
empty_fig,
empty_fig,
empty_fig,
)
def _create_empty_figure(self, message: str):
"""Create an empty figure with a message."""
fig = go.Figure()
fig.update_layout(
title=message,
template="plotly_dark",
paper_bgcolor="#0a0e27",
plot_bgcolor="#0a0e27",
font=dict(color="#8899ff", family="Courier New, monospace"),
)
return fig
def _create_network_graph(self, history):
"""Create network I/O graph with futuristic styling."""
if len(history) < 2:
return self._create_empty_figure("Collecting data...")
times = [i for i in range(len(history))]
rx_bytes = [s.rx_bytes for s in history]
tx_bytes = [s.tx_bytes for s in history]
fig = make_subplots(
rows=2,
cols=1,
subplot_titles=("RECEIVED (RX)", "TRANSMITTED (TX)"),
vertical_spacing=0.15,
)
fig.add_trace(
go.Scatter(
x=times,
y=rx_bytes,
mode="lines",
name="RX",
fill="tozeroy",
line=dict(color="#00d4ff", width=3, shape="spline"),
fillcolor="rgba(0, 212, 255, 0.2)",
),
row=1,
col=1,
)
fig.add_trace(
go.Scatter(
x=times,
y=tx_bytes,
mode="lines",
name="TX",
fill="tozeroy",
line=dict(color="#00ff88", width=3, shape="spline"),
fillcolor="rgba(0, 255, 136, 0.2)",
),
row=2,
col=1,
)
fig.update_xaxes(title_text="Time (samples)", row=2, col=1, color="#8899ff")
fig.update_yaxes(title_text="Bytes", row=1, col=1, color="#8899ff")
fig.update_yaxes(title_text="Bytes", row=2, col=1, color="#8899ff")
fig.update_layout(
height=400,
template="plotly_dark",
paper_bgcolor="rgba(0,0,0,0)",
plot_bgcolor="#0a0e27",
showlegend=False,
hovermode="x unified",
font=dict(family="Courier New, monospace", color="#8899ff"),
)
return fig
def _create_file_io_graph(self, history):
"""Create file I/O graph with futuristic styling."""
if len(history) < 2:
return self._create_empty_figure("Collecting data...")
times = [i for i in range(len(history))]
read_bytes = [s.read_bytes for s in history]
write_bytes = [s.write_bytes for s in history]
fig = make_subplots(
rows=2,
cols=1,
subplot_titles=("READ OPERATIONS", "WRITE OPERATIONS"),
vertical_spacing=0.15,
)
fig.add_trace(
go.Scatter(
x=times,
y=read_bytes,
mode="lines",
name="Read",
fill="tozeroy",
line=dict(color="#ff0088", width=3, shape="spline"),
fillcolor="rgba(255, 0, 136, 0.2)",
),
row=1,
col=1,
)
fig.add_trace(
go.Scatter(
x=times,
y=write_bytes,
mode="lines",
name="Write",
fill="tozeroy",
line=dict(color="#8844ff", width=3, shape="spline"),
fillcolor="rgba(136, 68, 255, 0.2)",
),
row=2,
col=1,
)
fig.update_xaxes(title_text="Time (samples)", row=2, col=1, color="#8899ff")
fig.update_yaxes(title_text="Bytes", row=1, col=1, color="#8899ff")
fig.update_yaxes(title_text="Bytes", row=2, col=1, color="#8899ff")
fig.update_layout(
height=400,
template="plotly_dark",
paper_bgcolor="rgba(0,0,0,0)",
plot_bgcolor="#0a0e27",
showlegend=False,
hovermode="x unified",
font=dict(family="Courier New, monospace", color="#8899ff"),
)
return fig
def _create_timeseries_graph(self, history):
"""Create combined time series graph with futuristic styling."""
if len(history) < 2:
return self._create_empty_figure("Collecting data...")
times = [i for i in range(len(history))]
fig = make_subplots(
rows=3,
cols=1,
subplot_titles=(
"SYSTEM CALLS",
"NETWORK TRAFFIC (Bytes)",
"FILE I/O (Bytes)",
),
vertical_spacing=0.1,
specs=[
[{"secondary_y": False}],
[{"secondary_y": True}],
[{"secondary_y": True}],
],
)
# Syscalls
fig.add_trace(
go.Scatter(
x=times,
y=[s.syscall_count for s in history],
mode="lines",
name="Syscalls",
line=dict(color="#00ff88", width=3, shape="spline"),
),
row=1,
col=1,
)
# Network
fig.add_trace(
go.Scatter(
x=times,
y=[s.rx_bytes for s in history],
mode="lines",
name="RX",
line=dict(color="#00d4ff", width=2, shape="spline"),
),
row=2,
col=1,
secondary_y=False,
)
fig.add_trace(
go.Scatter(
x=times,
y=[s.tx_bytes for s in history],
mode="lines",
name="TX",
line=dict(color="#00ff88", width=2, shape="spline", dash="dot"),
),
row=2,
col=1,
secondary_y=True,
)
# File I/O
fig.add_trace(
go.Scatter(
x=times,
y=[s.read_bytes for s in history],
mode="lines",
name="Read",
line=dict(color="#ff0088", width=2, shape="spline"),
),
row=3,
col=1,
secondary_y=False,
)
fig.add_trace(
go.Scatter(
x=times,
y=[s.write_bytes for s in history],
mode="lines",
name="Write",
line=dict(color="#8844ff", width=2, shape="spline", dash="dot"),
),
row=3,
col=1,
secondary_y=True,
)
fig.update_xaxes(title_text="Time (samples)", row=3, col=1, color="#8899ff")
fig.update_yaxes(title_text="Count", row=1, col=1, color="#8899ff")
fig.update_yaxes(
title_text="RX Bytes", row=2, col=1, secondary_y=False, color="#00d4ff"
)
fig.update_yaxes(
title_text="TX Bytes", row=2, col=1, secondary_y=True, color="#00ff88"
)
fig.update_yaxes(
title_text="Read Bytes", row=3, col=1, secondary_y=False, color="#ff0088"
)
fig.update_yaxes(
title_text="Write Bytes", row=3, col=1, secondary_y=True, color="#8844ff"
)
fig.update_layout(
height=500,
template="plotly_dark",
paper_bgcolor="rgba(0,0,0,0)",
plot_bgcolor="#0a0e27",
hovermode="x unified",
showlegend=True,
legend=dict(
orientation="h",
yanchor="bottom",
y=1.02,
xanchor="right",
x=1,
font=dict(color="#8899ff"),
),
font=dict(family="Courier New, monospace", color="#8899ff"),
)
return fig
def _calculate_rates(self, history):
"""Calculate rates from history."""
if len(history) < 2:
return {
"syscalls_per_sec": 0.0,
"rx_bytes_per_sec": 0.0,
"tx_bytes_per_sec": 0.0,
"read_bytes_per_sec": 0.0,
"write_bytes_per_sec": 0.0,
}
recent = history[-1]
previous = history[-2]
time_delta = recent.timestamp - previous.timestamp
if time_delta <= 0:
time_delta = 1.0
return {
"syscalls_per_sec": max(
0, (recent.syscall_count - previous.syscall_count) / time_delta
),
"rx_bytes_per_sec": max(
0, (recent.rx_bytes - previous.rx_bytes) / time_delta
),
"tx_bytes_per_sec": max(
0, (recent.tx_bytes - previous.tx_bytes) / time_delta
),
"read_bytes_per_sec": max(
0, (recent.read_bytes - previous.read_bytes) / time_delta
),
"write_bytes_per_sec": max(
0, (recent.write_bytes - previous.write_bytes) / time_delta
),
}
def _format_bytes(self, bytes_val: float) -> str:
"""Format bytes into human-readable string."""
if bytes_val < 0:
bytes_val = 0
for unit in ["B", "KB", "MB", "GB", "TB"]:
if bytes_val < 1024.0:
return f"{bytes_val:.2f} {unit}"
bytes_val /= 1024.0
return f"{bytes_val:.2f} PB"
def run(self):
"""Run the web dashboard."""
self._running = True
# Suppress Werkzeug logging
import logging
log = logging.getLogger("werkzeug")
log.setLevel(logging.ERROR)
self.app.run(debug=False, host=self.host, port=self.port, use_reloader=False)
def stop(self):
"""Stop the web dashboard."""
self._running = False

View File

@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "pythonbpf"
version = "0.1.6"
version = "0.1.8"
description = "Reduced Python frontend for eBPF"
authors = [
{ name = "r41k0u", email="pragyanshchaturvedi18@gmail.com" },
@ -29,7 +29,7 @@ license = {text = "Apache-2.0"}
requires-python = ">=3.10"
dependencies = [
"llvmlite",
"llvmlite>=0.45",
"astpretty",
"pylibbpf"
]

View File

@ -7,6 +7,7 @@ from pythonbpf.helper import HelperHandlerRegistry
from pythonbpf.vmlinux_parser.dependency_node import Field
from .expr import VmlinuxHandlerRegistry
from pythonbpf.type_deducer import ctypes_to_ir
from pythonbpf.maps import BPFMapType
logger = logging.getLogger(__name__)
@ -25,7 +26,9 @@ def create_targets_and_rvals(stmt):
return stmt.targets, [stmt.value]
def handle_assign_allocation(builder, stmt, local_sym_tab, structs_sym_tab):
def handle_assign_allocation(
builder, stmt, local_sym_tab, map_sym_tab, structs_sym_tab
):
"""Handle memory allocation for assignment statements."""
logger.info(f"Handling assignment for allocation: {ast.dump(stmt)}")
@ -55,7 +58,9 @@ def handle_assign_allocation(builder, stmt, local_sym_tab, structs_sym_tab):
# Determine type and allocate based on rval
if isinstance(rval, ast.Call):
_allocate_for_call(builder, var_name, rval, local_sym_tab, structs_sym_tab)
_allocate_for_call(
builder, var_name, rval, local_sym_tab, map_sym_tab, structs_sym_tab
)
elif isinstance(rval, ast.Constant):
_allocate_for_constant(builder, var_name, rval, local_sym_tab)
elif isinstance(rval, ast.BinOp):
@ -74,7 +79,9 @@ def handle_assign_allocation(builder, stmt, local_sym_tab, structs_sym_tab):
)
def _allocate_for_call(builder, var_name, rval, local_sym_tab, structs_sym_tab):
def _allocate_for_call(
builder, var_name, rval, local_sym_tab, map_sym_tab, structs_sym_tab
):
"""Allocate memory for variable assigned from a call."""
if isinstance(rval.func, ast.Name):
@ -107,24 +114,108 @@ def _allocate_for_call(builder, var_name, rval, local_sym_tab, structs_sym_tab):
# Struct constructors
elif call_type in structs_sym_tab:
struct_info = structs_sym_tab[call_type]
var = builder.alloca(struct_info.ir_type, name=var_name)
local_sym_tab[var_name] = LocalSymbol(var, struct_info.ir_type, call_type)
logger.info(f"Pre-allocated {var_name} for struct {call_type}")
if len(rval.args) == 0:
# Zero-arg constructor: allocate the struct itself
var = builder.alloca(struct_info.ir_type, name=var_name)
local_sym_tab[var_name] = LocalSymbol(
var, struct_info.ir_type, call_type
)
logger.info(f"Pre-allocated {var_name} for struct {call_type}")
else:
# Pointer cast: allocate as pointer to struct
ptr_type = ir.PointerType(struct_info.ir_type)
var = builder.alloca(ptr_type, name=var_name)
var.align = 8
local_sym_tab[var_name] = LocalSymbol(var, ptr_type, call_type)
logger.info(
f"Pre-allocated {var_name} for struct pointer cast to {call_type}"
)
elif VmlinuxHandlerRegistry.is_vmlinux_struct(call_type):
# When calling struct_name(pointer), we're doing a cast, not construction
# So we allocate as a pointer (i64) not as the actual struct
var = builder.alloca(ir.IntType(64), name=var_name)
var.align = 8
local_sym_tab[var_name] = LocalSymbol(
var, ir.IntType(64), VmlinuxHandlerRegistry.get_struct_type(call_type)
)
logger.info(
f"Pre-allocated {var_name} for vmlinux struct pointer cast to {call_type}"
)
else:
logger.warning(f"Unknown call type for allocation: {call_type}")
elif isinstance(rval.func, ast.Attribute):
# Map method calls - need double allocation for ptr handling
_allocate_for_map_method(builder, var_name, local_sym_tab)
_allocate_for_map_method(
builder, var_name, rval, local_sym_tab, map_sym_tab, structs_sym_tab
)
else:
logger.warning(f"Unsupported call function type for {var_name}")
def _allocate_for_map_method(builder, var_name, local_sym_tab):
def _allocate_for_map_method(
builder, var_name, rval, local_sym_tab, map_sym_tab, structs_sym_tab
):
"""Allocate memory for variable assigned from map method (double alloc)."""
map_name = rval.func.value.id
method_name = rval.func.attr
# NOTE: We will have to special case HashMap.lookup which returns a pointer to value type
# The value type can be a struct as well, so we need to handle that properly
# This special casing is not ideal, as over time other map methods may need similar handling
# But for now, we will just handle lookup specifically
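    # Illustration (assuming a map like `read_map: HashMap[c_uint64, read_stats]`):
    # for `ptr = read_map.lookup(cg)` we allocate `ptr` as a pointer-sized slot
    # for the returned value pointer, plus `ptr_tmp` as a stack slot of the
    # map's value type (possibly a struct).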
if map_name not in map_sym_tab:
logger.error(f"Map '{map_name}' not found for allocation")
return
if method_name != "lookup":
# Fallback allocation for other map methods
_allocate_for_map_method_fallback(builder, var_name, local_sym_tab)
return
map_params = map_sym_tab[map_name].params
if map_params["type"] != BPFMapType.HASH:
logger.warning(
"Map method lookup used on non-hash map, using fallback allocation"
)
_allocate_for_map_method_fallback(builder, var_name, local_sym_tab)
return
value_type = map_params["value"]
# Determine IR type for value
if isinstance(value_type, str) and value_type in structs_sym_tab:
struct_info = structs_sym_tab[value_type]
value_ir_type = struct_info.ir_type
else:
value_ir_type = ctypes_to_ir(value_type)
if value_ir_type is None:
logger.warning(
f"Could not determine IR type for map value '{value_type}', using fallback allocation"
)
_allocate_for_map_method_fallback(builder, var_name, local_sym_tab)
return
# Main variable (pointer to pointer)
ir_type = ir.PointerType(ir.IntType(64))
var = builder.alloca(ir_type, name=var_name)
local_sym_tab[var_name] = LocalSymbol(var, ir_type, value_type)
# Temporary variable for computed values
tmp_ir_type = value_ir_type
var_tmp = builder.alloca(tmp_ir_type, name=f"{var_name}_tmp")
local_sym_tab[f"{var_name}_tmp"] = LocalSymbol(var_tmp, tmp_ir_type)
logger.info(
f"Pre-allocated {var_name} and {var_name}_tmp for map method lookup of type {value_ir_type}"
)
def _allocate_for_map_method_fallback(builder, var_name, local_sym_tab):
"""Fallback allocation for map method variable (i64* and i64**)."""
# Main variable (pointer to pointer)
ir_type = ir.PointerType(ir.IntType(64))
var = builder.alloca(ir_type, name=var_name)
@ -135,7 +226,9 @@ def _allocate_for_map_method(builder, var_name, local_sym_tab):
var_tmp = builder.alloca(tmp_ir_type, name=f"{var_name}_tmp")
local_sym_tab[f"{var_name}_tmp"] = LocalSymbol(var_tmp, tmp_ir_type)
logger.info(f"Pre-allocated {var_name} and {var_name}_tmp for map method")
logger.info(
f"Pre-allocated {var_name} and {var_name}_tmp for map method (fallback)"
)
def _allocate_for_constant(builder, var_name, rval, local_sym_tab):
@ -257,13 +350,6 @@ def _allocate_for_attribute(builder, var_name, rval, local_sym_tab, structs_sym_
VmlinuxHandlerRegistry.get_field_type(vmlinux_struct_name, field_name)
)
field_ir, field = field_type
# Determine the actual IR type based on the field's type
actual_ir_type = None
@ -298,6 +384,7 @@ def _allocate_for_attribute(builder, var_name, rval, local_sym_tab, structs_sym_
f"Could not determine size for ctypes field {field_name}: {e}"
)
actual_ir_type = ir.IntType(64)
field_size_bits = 64
# Check if it's a nested vmlinux struct or complex type
elif field.type.__module__ == "vmlinux":
@ -306,24 +393,37 @@ def _allocate_for_attribute(builder, var_name, rval, local_sym_tab, structs_sym_
field.ctype_complex_type, ctypes._Pointer
):
actual_ir_type = ir.IntType(64) # Pointer is always 64-bit
field_size_bits = 64
# For embedded structs, this is more complex - might need different handling
else:
logger.warning(
f"Field {field_name} is a nested vmlinux struct, using i64 for now"
)
actual_ir_type = ir.IntType(64)
field_size_bits = 64
else:
logger.warning(
f"Unknown field type module {field.type.__module__} for {field_name}"
)
actual_ir_type = ir.IntType(64)
field_size_bits = 64
# Allocate with the actual IR type, not the GlobalVariable
# Pre-allocate the tmp storage used by load_struct_field (so we don't alloca inside handler)
tmp_name = f"{struct_var}_{field_name}_tmp"
tmp_ir_type = ir.IntType(field_size_bits)
tmp_var = builder.alloca(tmp_ir_type, name=tmp_name)
tmp_var.align = tmp_ir_type.width // 8
local_sym_tab[tmp_name] = LocalSymbol(tmp_var, tmp_ir_type)
logger.info(
f"Pre-allocated temp {tmp_name} (i{field_size_bits}) for vmlinux field read {vmlinux_struct_name}.{field_name}"
)
# Allocate with the actual IR type for the destination var
var = _allocate_with_type(builder, var_name, actual_ir_type)
local_sym_tab[var_name] = LocalSymbol(var, actual_ir_type, field)
logger.info(
f"Pre-allocated {var_name} from vmlinux struct {vmlinux_struct_name}.{field_name}"
f"Pre-allocated {var_name} as {actual_ir_type} from vmlinux struct {vmlinux_struct_name}.{field_name}"
)
return
else:

View File

@ -1,5 +1,7 @@
import ast
import logging
from inspect import isclass
from llvmlite import ir
from pythonbpf.expr import eval_expr
from pythonbpf.helper import emit_probe_read_kernel_str_call
@ -148,8 +150,47 @@ def handle_variable_assignment(
return False
val, val_type = val_result
logger.info(f"Evaluated value for {var_name}: {val} of type {val_type}, {var_type}")
logger.info(
f"Evaluated value for {var_name}: {val} of type {val_type}, expected {var_type}"
)
if val_type != var_type:
# Handle vmlinux struct pointers - they're represented as Python classes but are i64 pointers
if isclass(val_type) and (val_type.__module__ == "vmlinux"):
logger.info("Handling vmlinux struct pointer assignment")
# vmlinux struct pointers: val is a pointer, need to convert to i64
if isinstance(var_type, ir.IntType) and var_type.width == 64:
# Convert pointer to i64 using ptrtoint
if isinstance(val.type, ir.PointerType):
val = builder.ptrtoint(val, ir.IntType(64))
logger.info(
"Converted vmlinux struct pointer to i64 using ptrtoint"
)
builder.store(val, var_ptr)
logger.info(f"Assigned vmlinux struct pointer to {var_name} (i64)")
return True
else:
logger.error(
f"Type mismatch: vmlinux struct pointer requires i64, got {var_type}"
)
return False
# Handle user-defined struct pointer casts
# val_type is a string (struct name), var_type is a pointer to the struct
if isinstance(val_type, str) and val_type in structs_sym_tab:
struct_info = structs_sym_tab[val_type]
expected_ptr_type = ir.PointerType(struct_info.ir_type)
# Check if var_type matches the expected pointer type
if isinstance(var_type, ir.PointerType) and var_type == expected_ptr_type:
# val is already the correct pointer type from inttoptr/bitcast
builder.store(val, var_ptr)
logger.info(f"Assigned user-defined struct pointer cast to {var_name}")
return True
else:
logger.error(
f"Type mismatch: user-defined struct pointer cast requires pointer type, got {var_type}"
)
return False
if isinstance(val_type, Field):
logger.info("Handling assignment to struct field")
# Special handling for struct_xdp_md i32 fields that are zero-extended to i64

View File

@ -25,7 +25,7 @@ import re
logger: Logger = logging.getLogger(__name__)
VERSION = "v0.1.6"
VERSION = "v0.1.8"
def finalize_module(original_str):

View File

@ -49,6 +49,10 @@ class DebugInfoGenerator:
)
return self._type_cache[key]
def get_uint8_type(self) -> Any:
"""Get debug info for signed 8-bit integer"""
return self.get_basic_type("char", 8, dc.DW_ATE_unsigned)
def get_int32_type(self) -> Any:
"""Get debug info for signed 32-bit integer"""
return self.get_basic_type("int", 32, dc.DW_ATE_signed)

View File

@ -1,6 +1,6 @@
from .expr_pass import eval_expr, handle_expr, get_operand_value
from .type_normalization import convert_to_bool, get_base_type_and_depth
from .ir_ops import deref_to_depth
from .ir_ops import deref_to_depth, access_struct_field
from .call_registry import CallHandlerRegistry
from .vmlinux_registry import VmlinuxHandlerRegistry
@ -10,6 +10,7 @@ __all__ = [
"convert_to_bool",
"get_base_type_and_depth",
"deref_to_depth",
"access_struct_field",
"get_operand_value",
"CallHandlerRegistry",
"VmlinuxHandlerRegistry",

View File

@ -6,14 +6,14 @@ from typing import Dict
from pythonbpf.type_deducer import ctypes_to_ir, is_ctypes
from .call_registry import CallHandlerRegistry
from .ir_ops import deref_to_depth, access_struct_field
from .type_normalization import (
convert_to_bool,
handle_comparator,
get_base_type_and_depth,
)
from .vmlinux_registry import VmlinuxHandlerRegistry
from ..vmlinux_parser.dependency_node import Field
logger: Logger = logging.getLogger(__name__)
@ -61,6 +61,7 @@ def _handle_constant_expr(module, builder, expr: ast.Constant):
def _handle_attribute_expr(
func,
expr: ast.Attribute,
local_sym_tab: Dict,
structs_sym_tab: Dict,
@ -89,12 +90,30 @@ def _handle_attribute_expr(
return vmlinux_result
else:
raise RuntimeError("Vmlinux struct did not process successfully")
elif isinstance(var_metadata, Field):
logger.error(
f"Cannot access field '{attr_name}' on already-loaded field value '{var_name}'"
)
return None
if var_metadata in structs_sym_tab:
return access_struct_field(
builder,
var_ptr,
var_type,
var_metadata,
expr.attr,
structs_sym_tab,
func,
)
else:
logger.error(f"Struct metadata for '{var_name}' not found")
else:
logger.error(f"Undefined variable '{var_name}' for attribute access")
else:
logger.error("Unsupported attribute base expression type")
return None
@ -149,7 +168,11 @@ def get_operand_value(
var_type = var.type
base_type, depth = get_base_type_and_depth(var_type)
logger.info(f"var is {var}, base_type is {base_type}, depth is {depth}")
if depth == 1:
val = builder.load(var)
return val
else:
val = deref_to_depth(func, builder, var, depth)
return val
else:
# Check if it's a vmlinux enum/constant
@ -525,6 +548,134 @@ def _handle_boolean_op(
return None
# ============================================================================
# Struct casting (including vmlinux struct casting)
# ============================================================================
def _handle_vmlinux_cast(
func,
module,
builder,
expr,
local_sym_tab,
map_sym_tab,
structs_sym_tab=None,
):
# handle expressions such as struct_request(ctx.di) where struct_request is a vmlinux
# struct and ctx.di is a pointer to a struct but is actually represented as a c_uint64
# which needs to be cast to a pointer. This is also a field of another vmlinux struct
"""Handle vmlinux struct cast expressions like struct_request(ctx.di)."""
if len(expr.args) != 1:
logger.info("vmlinux struct cast takes exactly one argument")
return None
# Get the struct name
struct_name = expr.func.id
# Evaluate the argument (e.g., ctx.di which is a c_uint64)
arg_result = eval_expr(
func,
module,
builder,
expr.args[0],
local_sym_tab,
map_sym_tab,
structs_sym_tab,
)
if arg_result is None:
logger.info("Failed to evaluate argument to vmlinux struct cast")
return None
arg_val, arg_type = arg_result
# Get the vmlinux struct type
vmlinux_struct_type = VmlinuxHandlerRegistry.get_struct_type(struct_name)
if vmlinux_struct_type is None:
logger.error(f"Failed to get vmlinux struct type for {struct_name}")
return None
# Cast the integer/value to a pointer to the struct
# If arg_val is an integer type, we need to inttoptr it
ptr_type = ir.PointerType()
# TODO: add a field value type check here
# print(arg_type)
if isinstance(arg_type, Field):
if ctypes_to_ir(arg_type.type.__name__):
# Cast integer to pointer
casted_ptr = builder.inttoptr(arg_val, ptr_type)
else:
logger.error(f"Unsupported type for vmlinux cast: {arg_type}")
return None
else:
casted_ptr = builder.inttoptr(arg_val, ptr_type)
return casted_ptr, vmlinux_struct_type
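# Hedged usage sketch (assumed source shape, echoing the comment above):
#     req = struct_request(ctx.di)   # ctx.di is a c_uint64 field; cast via inttoptr
# The returned (casted_ptr, vmlinux_struct_type) pair lets later attribute reads
# on `req` resolve against the vmlinux struct metadata.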
def _handle_user_defined_struct_cast(
func,
module,
builder,
expr,
local_sym_tab,
map_sym_tab,
structs_sym_tab,
):
"""Handle user-defined struct cast expressions like iphdr(nh).
This casts a pointer/integer value to a pointer to the user-defined struct,
similar to how vmlinux struct casts work but for user-defined @struct types.
"""
if len(expr.args) != 1:
logger.info("User-defined struct cast takes exactly one argument")
return None
# Get the struct name
struct_name = expr.func.id
if struct_name not in structs_sym_tab:
logger.error(f"Struct {struct_name} not found in structs_sym_tab")
return None
struct_info = structs_sym_tab[struct_name]
# Evaluate the argument (e.g., an address/pointer value)
arg_result = eval_expr(
func,
module,
builder,
expr.args[0],
local_sym_tab,
map_sym_tab,
structs_sym_tab,
)
if arg_result is None:
logger.info("Failed to evaluate argument to user-defined struct cast")
return None
arg_val, arg_type = arg_result
# Cast the integer/pointer value to a pointer to the struct type
# The struct pointer type is a pointer to the struct's IR type
struct_ptr_type = ir.PointerType(struct_info.ir_type)
# If arg_val is an integer type (like i64), convert to pointer using inttoptr
if isinstance(arg_val.type, ir.IntType):
casted_ptr = builder.inttoptr(arg_val, struct_ptr_type)
logger.info(f"Cast integer to pointer for struct {struct_name}")
elif isinstance(arg_val.type, ir.PointerType):
# If already a pointer, bitcast to the struct pointer type
casted_ptr = builder.bitcast(arg_val, struct_ptr_type)
logger.info(f"Bitcast pointer to struct pointer for {struct_name}")
else:
logger.error(f"Unsupported type for user-defined struct cast: {arg_val.type}")
return None
return casted_ptr, struct_name
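# Hedged usage sketch, following the xdp example further below:
#     iph = iphdr(hdr)     # i64 address -> iphdr* via inttoptr
#     addr = iph.saddr     # later field access goes through the struct pointer
# Returning the struct *name* as the type tags the value for the assignment
# pass, which matches on a (pointer, struct-name-string) pair.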
# ============================================================================
# Expression Dispatcher
# ============================================================================
@ -545,6 +696,18 @@ def eval_expr(
elif isinstance(expr, ast.Constant):
return _handle_constant_expr(module, builder, expr)
elif isinstance(expr, ast.Call):
if isinstance(expr.func, ast.Name) and VmlinuxHandlerRegistry.is_vmlinux_struct(
expr.func.id
):
return _handle_vmlinux_cast(
func,
module,
builder,
expr,
local_sym_tab,
map_sym_tab,
structs_sym_tab,
)
if isinstance(expr.func, ast.Name) and expr.func.id == "deref":
return _handle_deref_call(expr, local_sym_tab, builder)
@ -558,6 +721,16 @@ def eval_expr(
map_sym_tab,
structs_sym_tab,
)
if isinstance(expr.func, ast.Name) and (expr.func.id in structs_sym_tab):
return _handle_user_defined_struct_cast(
func,
module,
builder,
expr,
local_sym_tab,
map_sym_tab,
structs_sym_tab,
)
result = CallHandlerRegistry.handle_call(
expr, module, builder, func, local_sym_tab, map_sym_tab, structs_sym_tab
@ -568,7 +741,9 @@ def eval_expr(
logger.warning(f"Unknown call: {ast.dump(expr)}")
return None
elif isinstance(expr, ast.Attribute):
return _handle_attribute_expr(
func, expr, local_sym_tab, structs_sym_tab, builder
)
elif isinstance(expr, ast.BinOp):
return _handle_binary_op(
func,

View File

@ -17,34 +17,100 @@ def deref_to_depth(func, builder, val, target_depth):
# dereference with null check
pointee_type = cur_type.pointee
def load_op(builder, ptr):
return builder.load(ptr)
cur_val = _null_checked_operation(
func, builder, cur_val, load_op, pointee_type, f"deref_{depth}"
)
cur_type = pointee_type
logger.debug(f"Dereferenced to depth {depth}, type: {pointee_type}")
return cur_val
def _null_checked_operation(func, builder, ptr, operation, result_type, name_prefix):
"""
Generic null-checked operation on a pointer.
"""
curr_block = builder.block
not_null_block = func.append_basic_block(name=f"{name_prefix}_not_null")
merge_block = func.append_basic_block(name=f"{name_prefix}_merge")
null_ptr = ir.Constant(ptr.type, None)
is_not_null = builder.icmp_signed("!=", ptr, null_ptr)
builder.cbranch(is_not_null, not_null_block, merge_block)
builder.position_at_end(not_null_block)
result = operation(builder, ptr)
not_null_after = builder.block
builder.branch(merge_block)
builder.position_at_end(merge_block)
phi = builder.phi(result_type, name=f"{name_prefix}_result")
if isinstance(result_type, ir.IntType):
null_val = ir.Constant(result_type, 0)
elif isinstance(result_type, ir.PointerType):
null_val = ir.Constant(result_type, None)
else:
null_val = ir.Constant(result_type, ir.Undefined)
phi.add_incoming(null_val, curr_block)
phi.add_incoming(result, not_null_after)
return phi
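# Hedged illustration (assumed call site): a single null-checked dereference
# can be phrased as
#     val = _null_checked_operation(
#         func, builder, ptr,
#         lambda b, p: b.load(p),   # the guarded operation
#         ptr.type.pointee,         # phi result type
#         "deref_1",
#     )
# The phi merges a zero/null constant from the null path with the loaded value.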
def access_struct_field(
builder, var_ptr, var_type, var_metadata, field_name, structs_sym_tab, func=None
):
"""
Access a struct field - automatically returns value or pointer based on field type.
"""
metadata = (
structs_sym_tab.get(var_metadata)
if isinstance(var_metadata, str)
else var_metadata
)
if not metadata or field_name not in metadata.fields:
raise ValueError(f"Field '{field_name}' not found in struct")
field_type = metadata.field_type(field_name)
is_ptr_to_struct = isinstance(var_type, ir.PointerType) and isinstance(
var_metadata, str
)
# Get struct pointer
struct_ptr = builder.load(var_ptr) if is_ptr_to_struct else var_ptr
should_load = not isinstance(field_type, ir.ArrayType)
def field_access_op(builder, ptr):
typed_ptr = builder.bitcast(ptr, metadata.ir_type.as_pointer())
field_ptr = metadata.gep(builder, typed_ptr, field_name)
return builder.load(field_ptr) if should_load else field_ptr
# Handle null check for pointer-to-struct
if is_ptr_to_struct:
if func is None:
raise ValueError("func required for null-safe struct pointer access")
if should_load:
result_type = field_type
else:
result_type = field_type.as_pointer()
result = _null_checked_operation(
func,
builder,
struct_ptr,
field_access_op,
result_type,
f"field_{field_name}",
)
return result, field_type
field_ptr = metadata.gep(builder, struct_ptr, field_name)
result = builder.load(field_ptr) if should_load else field_ptr
return result, field_type
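# Hedged usage sketch (assumed LocalSymbol fields): reading obj.counter where
# `obj` is a local @struct value or a pointer-to-struct local:
#     sym = local_sym_tab["obj"]
#     val, fty = access_struct_field(
#         builder, sym.var, sym.ir_type, sym.metadata, "counter",
#         structs_sym_tab, func,   # func is only required for the pointer case
#     )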

View File

@ -147,7 +147,9 @@ def allocate_mem(
structs_sym_tab,
)
elif isinstance(stmt, ast.Assign):
handle_assign_allocation(
builder, stmt, local_sym_tab, map_sym_tab, structs_sym_tab
)
allocate_temp_pool(builder, max_temps_needed, local_sym_tab)

View File

@ -1,6 +1,10 @@
from .helper_registry import HelperHandlerRegistry
from .helper_utils import reset_scratch_pool
from .bpf_helper_handler import (
handle_helper_call,
emit_probe_read_kernel_str_call,
emit_probe_read_kernel_call,
)
from .helpers import (
ktime,
pid,
@ -12,6 +16,7 @@ from .helpers import (
smp_processor_id,
uid,
skb_store_bytes,
get_current_cgroup_id,
get_stack,
XDP_DROP,
XDP_PASS,
@ -74,6 +79,8 @@ __all__ = [
"reset_scratch_pool",
"handle_helper_call",
"emit_probe_read_kernel_str_call",
"emit_probe_read_kernel_call",
"get_current_cgroup_id",
"ktime",
"pid",
"deref",

View File

@ -30,10 +30,12 @@ class BPFHelperID(Enum):
BPF_SKB_STORE_BYTES = 9
BPF_GET_CURRENT_PID_TGID = 14
BPF_GET_CURRENT_UID_GID = 15
BPF_GET_CURRENT_CGROUP_ID = 80
BPF_GET_CURRENT_COMM = 16
BPF_PERF_EVENT_OUTPUT = 25
BPF_GET_STACK = 67
BPF_PROBE_READ_KERNEL_STR = 115
BPF_PROBE_READ_KERNEL = 113
BPF_RINGBUF_OUTPUT = 130
BPF_RINGBUF_RESERVE = 131
BPF_RINGBUF_SUBMIT = 132
@ -67,6 +69,33 @@ def bpf_ktime_get_ns_emitter(
return result, ir.IntType(64)
@HelperHandlerRegistry.register(
"get_current_cgroup_id",
param_types=[],
return_type=ir.IntType(64),
)
def bpf_get_current_cgroup_id(
call,
map_ptr,
module,
builder,
func,
local_sym_tab=None,
struct_sym_tab=None,
map_sym_tab=None,
):
"""
Emit LLVM IR for bpf_get_current_cgroup_id helper function call.
"""
# func is an arg to just have a uniform signature with other emitters
helper_id = ir.Constant(ir.IntType(64), BPFHelperID.BPF_GET_CURRENT_CGROUP_ID.value)
fn_type = ir.FunctionType(ir.IntType(64), [], var_arg=False)
fn_ptr_type = ir.PointerType(fn_type)
fn_ptr = builder.inttoptr(helper_id, fn_ptr_type)
result = builder.call(fn_ptr, [], tail=False)
return result, ir.IntType(64)
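# Hedged usage sketch in a PythonBPF program (helper exported above):
#     from pythonbpf.helper import get_current_cgroup_id
#     cid = get_current_cgroup_id()   # lowers to helper id 80 via this emitter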
@HelperHandlerRegistry.register(
"lookup",
param_types=[ir.PointerType(ir.IntType(64))],
@ -574,6 +603,75 @@ def bpf_probe_read_kernel_str_emitter(
return result, ir.IntType(64)
def emit_probe_read_kernel_call(builder, dst_ptr, dst_size, src_ptr):
"""Emit LLVM IR call to bpf_probe_read_kernel"""
fn_type = ir.FunctionType(
ir.IntType(64),
[ir.PointerType(), ir.IntType(32), ir.PointerType()],
var_arg=False,
)
fn_ptr = builder.inttoptr(
ir.Constant(ir.IntType(64), BPFHelperID.BPF_PROBE_READ_KERNEL.value),
ir.PointerType(fn_type),
)
result = builder.call(
fn_ptr,
[
builder.bitcast(dst_ptr, ir.PointerType()),
ir.Constant(ir.IntType(32), dst_size),
builder.bitcast(src_ptr, ir.PointerType()),
],
tail=False,
)
logger.info(f"Emitted bpf_probe_read_kernel (size={dst_size})")
return result
@HelperHandlerRegistry.register(
"probe_read_kernel",
param_types=[
ir.PointerType(ir.IntType(8)),
ir.PointerType(ir.IntType(8)),
],
return_type=ir.IntType(64),
)
def bpf_probe_read_kernel_emitter(
call,
map_ptr,
module,
builder,
func,
local_sym_tab=None,
struct_sym_tab=None,
map_sym_tab=None,
):
"""Emit LLVM IR for bpf_probe_read_kernel helper."""
if len(call.args) != 2:
raise ValueError(
f"probe_read_kernel expects 2 args (dst, src), got {len(call.args)}"
)
# Get destination buffer (char array -> i8*)
dst_ptr, dst_size = get_or_create_ptr_from_arg(
func, module, call.args[0], builder, local_sym_tab, map_sym_tab, struct_sym_tab
)
# Get source pointer (evaluate expression)
src_ptr, src_type = get_ptr_from_arg(
call.args[1], func, module, builder, local_sym_tab, map_sym_tab, struct_sym_tab
)
# Emit the helper call
result = emit_probe_read_kernel_call(builder, dst_ptr, dst_size, src_ptr)
logger.info(f"Emitted bpf_probe_read_kernel (size={dst_size})")
return result, ir.IntType(64)
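# Hedged usage sketch (assumed names): with a char-array struct field as the
# destination buffer,
#     probe_read_kernel(obj.buf, src)   # copies sizeof(obj.buf) bytes of kernel memory
# the destination size is derived from the buffer by get_or_create_ptr_from_arg.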
@HelperHandlerRegistry.register(
"random",
param_types=[],

View File

@ -5,6 +5,7 @@ from llvmlite import ir
from pythonbpf.expr import (
get_operand_value,
eval_expr,
access_struct_field,
)
logger = logging.getLogger(__name__)
@ -135,7 +136,7 @@ def get_or_create_ptr_from_arg(
and field_type.element.width == 8
):
ptr, sz = get_char_array_ptr_and_size(
arg, builder, local_sym_tab, struct_sym_tab, func
)
if not ptr:
raise ValueError("Failed to get char array pointer from struct field")
@ -266,7 +267,9 @@ def get_buffer_ptr_and_size(buf_arg, builder, local_sym_tab, struct_sym_tab):
)
def get_char_array_ptr_and_size(
buf_arg, builder, local_sym_tab, struct_sym_tab, func=None
):
"""Get pointer to char array and its size."""
# Struct field: obj.field
@ -277,11 +280,11 @@ def get_char_array_ptr_and_size(buf_arg, builder, local_sym_tab, struct_sym_tab)
if not (local_sym_tab and var_name in local_sym_tab):
raise ValueError(f"Variable '{var_name}' not found")
struct_ptr, struct_type, struct_metadata = local_sym_tab[var_name]
if not (struct_sym_tab and struct_metadata in struct_sym_tab):
raise ValueError(f"Struct type '{struct_metadata}' not found")
struct_info = struct_sym_tab[struct_metadata]
if field_name not in struct_info.fields:
raise ValueError(f"Field '{field_name}' not found")
@ -292,8 +295,24 @@ def get_char_array_ptr_and_size(buf_arg, builder, local_sym_tab, struct_sym_tab)
)
return None, 0
# Check if char array
if not (
isinstance(field_type, ir.ArrayType)
and isinstance(field_type.element, ir.IntType)
and field_type.element.width == 8
):
logger.warning("Field is not a char array")
return None, 0
field_ptr, _ = access_struct_field(
builder,
struct_ptr,
struct_type,
struct_metadata,
field_name,
struct_sym_tab,
func,
)
# GEP to first element: [N x i8]* -> i8*
buf_ptr = builder.gep(

View File

@ -57,6 +57,11 @@ def get_stack(buf, flags=0):
return ctypes.c_int64(0)
def get_current_cgroup_id():
"""Get the current cgroup ID"""
return ctypes.c_int64(0)
XDP_ABORTED = ctypes.c_int64(0)
XDP_DROP = ctypes.c_int64(1)
XDP_PASS = ctypes.c_int64(2)

View File

@ -222,7 +222,7 @@ def _prepare_expr_args(expr, func, module, builder, local_sym_tab, struct_sym_ta
# Special case: struct field char array needs pointer to first element
if isinstance(expr, ast.Attribute):
char_array_ptr, _ = get_char_array_ptr_and_size(
expr, builder, local_sym_tab, struct_sym_tab, func
)
if char_array_ptr:
return char_array_ptr

View File

@ -1,22 +1,31 @@
import logging
from llvmlite import ir
from pythonbpf.debuginfo import DebugInfoGenerator
from .map_types import BPFMapType
logger: logging.Logger = logging.getLogger(__name__)
def create_map_debug_info(module, map_global, map_name, map_params, structs_sym_tab):
"""Generate debug info metadata for BPF maps HASH and PERF_EVENT_ARRAY"""
generator = DebugInfoGenerator(module)
logger.info(f"Creating debug info for map {map_name} with params {map_params}")
uint_type = generator.get_uint32_type()
ulong_type = generator.get_uint64_type()
array_type = generator.create_array_type(
uint_type, map_params.get("type", BPFMapType.UNSPEC).value
)
type_ptr = generator.create_pointer_type(array_type, 64)
key_ptr = generator.create_pointer_type(
array_type if "key_size" in map_params else ulong_type, 64
array_type
if "key_size" in map_params
else _get_key_val_dbg_type(map_params.get("key"), generator, structs_sym_tab),
64,
)
value_ptr = generator.create_pointer_type(
array_type if "value_size" in map_params else ulong_type, 64
array_type
if "value_size" in map_params
else _get_key_val_dbg_type(map_params.get("value"), generator, structs_sym_tab),
64,
)
elements_arr = []
@ -97,3 +106,66 @@ def create_ringbuf_debug_info(
)
map_global.set_metadata("dbg", global_var)
return global_var
def _get_key_val_dbg_type(name, generator, structs_sym_tab):
"""Get the debug type for key/value based on type object"""
if not name:
logger.warn("No name provided for key/value type, defaulting to uint64")
return generator.get_uint64_type()
type_obj = structs_sym_tab.get(name)
if type_obj:
logger.info(f"Found struct named {name}, generating debug type")
return _get_struct_debug_type(type_obj, generator, structs_sym_tab)
# Fallback to basic types
logger.info(f"No struct named {name}, falling back to basic type")
# NOTE: Only handling int and long for now
if name in ["c_int32", "c_uint32"]:
return generator.get_uint32_type()
# Default fallback for now
return generator.get_uint64_type()
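# Hedged examples of the mapping above:
#     _get_key_val_dbg_type("c_uint32", gen, structs)  -> gen.get_uint32_type()
#     _get_key_val_dbg_type("val_type", gen, structs)  -> recursive struct debug type
#     _get_key_val_dbg_type(None, gen, structs)        -> gen.get_uint64_type() (with a warning)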
def _get_struct_debug_type(struct_obj, generator, structs_sym_tab):
"""Recursively create debug type for struct"""
elements_arr = []
for fld in struct_obj.fields.keys():
fld_type = struct_obj.field_type(fld)
if isinstance(fld_type, ir.IntType):
if fld_type.width == 32:
fld_dbg_type = generator.get_uint32_type()
else:
# NOTE: Assuming 64-bit for all other int types
fld_dbg_type = generator.get_uint64_type()
elif isinstance(fld_type, ir.ArrayType):
# NOTE: Array types have u8 elements only for now
# Debug info generation should fail for other types
elem_type = fld_type.element
if isinstance(elem_type, ir.IntType) and elem_type.width == 8:
char_type = generator.get_uint8_type()
fld_dbg_type = generator.create_array_type(char_type, fld_type.count)
else:
logger.warning(
f"Array element type {str(elem_type)} not supported for debug info, skipping"
)
continue
else:
# NOTE: Only handling int and char arrays for now
logger.warning(
f"Field type {str(fld_type)} not supported for debug info, skipping"
)
continue
member = generator.create_struct_member(
fld, fld_dbg_type, struct_obj.field_size(fld)
)
elements_arr.append(member)
struct_type = generator.create_struct_type(
elements_arr, struct_obj.size * 8, is_distinct=True
)
return struct_type

View File

@ -48,7 +48,7 @@ def create_bpf_map(module, map_name, map_params):
map_global.align = 8
logger.info(f"Created BPF map: {map_name} with params {map_params}")
return MapSymbol(type=map_params["type"], sym=map_global)
return MapSymbol(type=map_params["type"], sym=map_global, params=map_params)
def _parse_map_params(rval, expected_args=None):
@ -105,7 +105,9 @@ def process_ringbuf_map(map_name, rval, module, structs_sym_tab):
logger.info(f"Ringbuf map parameters: {map_params}")
map_global = create_bpf_map(module, map_name, map_params)
create_ringbuf_debug_info(
module, map_global.sym, map_name, map_params, structs_sym_tab
)
return map_global
@ -119,7 +121,7 @@ def process_hash_map(map_name, rval, module, structs_sym_tab):
logger.info(f"Map parameters: {map_params}")
map_global = create_bpf_map(module, map_name, map_params)
# Generate debug info for BTF
create_map_debug_info(module, map_global.sym, map_name, map_params, structs_sym_tab)
return map_global
@ -133,7 +135,7 @@ def process_perf_event_map(map_name, rval, module, structs_sym_tab):
logger.info(f"Map parameters: {map_params}")
map_global = create_bpf_map(module, map_name, map_params)
# Generate debug info for BTF
create_map_debug_info(module, map_global.sym, map_name, map_params, structs_sym_tab)
return map_global

View File

@ -11,6 +11,7 @@ class MapSymbol:
type: BPFMapType
sym: ir.GlobalVariable
params: dict[str, Any] | None = None
class MapProcessorRegistry:

View File

@ -18,6 +18,10 @@ mapping = {
"c_longlong": ir.IntType(64),
"c_uint": ir.IntType(32),
"c_int": ir.IntType(32),
"c_ushort": ir.IntType(16),
"c_short": ir.IntType(16),
"c_ubyte": ir.IntType(8),
"c_byte": ir.IntType(8),
# Not so sure about this one
"str": ir.PointerType(ir.IntType(8)),
}

View File

@ -16,6 +16,33 @@ def get_module_symbols(module_name: str):
return [name for name in dir(imported_module)], imported_module
def unwrap_pointer_type(type_obj: Any) -> Any:
"""
Recursively unwrap all pointer layers to get the base type.
This handles multiply nested pointers like LP_LP_struct_attribute_group
and returns the base type (struct_attribute_group).
Stops unwrapping when reaching a non-pointer type (one without _type_ attribute).
Args:
type_obj: The type object to unwrap
Returns:
The base type after unwrapping all pointer layers
"""
current_type = type_obj
# Keep unwrapping while it's a pointer/array type (has _type_)
# But stop if _type_ is just a string or basic type marker
while hasattr(current_type, "_type_"):
next_type = current_type._type_
# Stop if _type_ is a string (like 'c' for c_char)
if isinstance(next_type, str):
break
current_type = next_type
return current_type
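# Hedged examples (names taken from the docstring):
#     unwrap_pointer_type(LP_LP_struct_attribute_group) -> struct_attribute_group
#     unwrap_pointer_type(ctypes.c_char) -> c_char (stops: its _type_ is the string 'c')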
def process_vmlinux_class(
node,
llvm_module,
@ -158,13 +185,90 @@ def process_vmlinux_post_ast(
if hasattr(elem_type, "_length_") and is_complex_type:
type_length = elem_type._length_
if containing_type.__module__ == "vmlinux":
# Unwrap all pointer layers to get the base type for dependency tracking
base_type = unwrap_pointer_type(elem_type)
base_type_module = getattr(base_type, "__module__", None)
if base_type_module == "vmlinux":
base_type_name = (
base_type.__name__
if hasattr(base_type, "__name__")
else str(base_type)
)
# ONLY add vmlinux types as dependencies
new_dep_node.add_dependent(base_type_name)
logger.debug(
f"{containing_type} containing type of parent {elem_name} with {elem_type} and ctype {ctype_complex_type} and length {type_length}"
)
new_dep_node.set_field_containing_type(
elem_name, containing_type
)
new_dep_node.set_field_type_size(elem_name, type_length)
new_dep_node.set_field_ctype_complex_type(
elem_name, ctype_complex_type
)
new_dep_node.set_field_type(elem_name, elem_type)
# Check the containing_type module to decide whether to recurse
containing_type_module = getattr(
containing_type, "__module__", None
)
if containing_type_module == "vmlinux":
# Also unwrap containing_type to get base type name
base_containing_type = unwrap_pointer_type(
containing_type
)
containing_type_name = (
base_containing_type.__name__
if hasattr(base_containing_type, "__name__")
else str(base_containing_type)
)
# Check for self-reference or already processed
if containing_type_name == current_symbol_name:
# Self-referential pointer
logger.debug(
f"Self-referential pointer in {current_symbol_name}.{elem_name}"
)
new_dep_node.set_field_ready(elem_name, True)
elif handler.has_node(containing_type_name):
# Already processed
logger.debug(
f"Reusing already processed {containing_type_name}"
)
new_dep_node.set_field_ready(elem_name, True)
else:
# Process recursively - use base containing type, not the pointer wrapper
new_dep_node.add_dependent(containing_type_name)
process_vmlinux_post_ast(
base_containing_type,
llvm_handler,
handler,
processing_stack,
)
new_dep_node.set_field_ready(elem_name, True)
elif (
containing_type_module == ctypes.__name__
or containing_type_module is None
):
logger.debug(
f"Processing ctype internal{containing_type}"
)
new_dep_node.set_field_ready(elem_name, True)
else:
raise TypeError(
f"Module not supported in recursive resolution: {containing_type_module}"
)
elif (
base_type_module == ctypes.__name__
or base_type_module is None
):
# Handle ctypes or types with no module (like some internal ctypes types)
# DO NOT add ctypes as dependencies - just set field metadata and mark ready
logger.debug(
f"Base type {base_type} is ctypes - NOT adding as dependency, just processing field"
)
elif containing_type.__module__ == ctypes.__name__:
if isinstance(elem_type, type):
if issubclass(elem_type, ctypes.Array):
ctype_complex_type = ctypes.Array
@ -176,57 +280,20 @@ def process_vmlinux_post_ast(
)
else:
raise TypeError("Unsupported ctypes subclass")
else:
raise ImportError(
f"Unsupported module of {containing_type}"
)
else:
raise ImportError(
f"Unsupported module of {base_type}: {base_type_module}"
)
else:
new_dep_node.add_dependent(
@ -245,9 +312,12 @@ def process_vmlinux_post_ast(
raise ValueError(
f"{elem_name} with type {elem_type} from module {module_name} not supported in recursive resolver"
)
elif module_name == ctypes.__name__ or module_name is None:
# Handle ctypes types - these don't need processing, just return
logger.debug(f"Skipping ctypes type {current_symbol_name}")
return True
else:
raise ImportError("UNSUPPORTED Module")
raise ImportError(f"UNSUPPORTED Module {module_name}")
logger.info(
f"{current_symbol_name} processed and handler readiness {handler.is_ready}"

View File

@ -11,7 +11,9 @@ from .class_handler import process_vmlinux_class
logger = logging.getLogger(__name__)
def detect_import_statement(
tree: ast.AST,
) -> list[tuple[str, ast.ImportFrom, str, str]]:
"""
Parse AST and detect import statements from vmlinux.
@ -25,7 +27,7 @@ def detect_import_statement(tree: ast.AST) -> list[tuple[str, ast.ImportFrom]]:
List of tuples (module_name, import_node, imported_name, as_name) for each vmlinux import
Raises:
SyntaxError: If import * is used
"""
vmlinux_imports = []
@ -40,28 +42,19 @@ def detect_import_statement(tree: ast.AST) -> list[tuple[str, ast.ImportFrom]]:
"Please import specific types explicitly."
)
# Support multiple imports: from vmlinux import A, B, C
for alias in node.names:
import_name = alias.name
# Use alias if provided, otherwise use the original name
as_name = alias.asname if alias.asname else alias.name
vmlinux_imports.append(("vmlinux", node, import_name, as_name))
logger.info(f"Found vmlinux import: {import_name} as {as_name}")
# Handle "import vmlinux" statements (not typical but should be rejected)
elif isinstance(node, ast.Import):
@ -103,40 +96,37 @@ def vmlinux_proc(tree: ast.AST, module):
with open(source_file, "r") as f:
mod_ast = ast.parse(f.read(), filename=source_file)
for import_mod, import_node, imported_name, as_name in import_statements:
found = False
for mod_node in mod_ast.body:
if isinstance(mod_node, ast.ClassDef) and mod_node.name == imported_name:
process_vmlinux_class(mod_node, module, handler)
found = True
break
if isinstance(mod_node, ast.Assign):
for target in mod_node.targets:
if isinstance(target, ast.Name) and target.id == imported_name:
process_vmlinux_assign(mod_node, module, assignments, as_name)
found = True
break
if found:
break
if not found:
logger.info(f"{imported_name} not found as ClassDef or Assign in vmlinux")
IRGenerator(module, handler, assignments)
return assignments
def process_vmlinux_assign(
node, module, assignments: dict[str, AssignmentInfo], target_name=None
):
"""Process assignments from vmlinux module."""
# Only handle single-target assignments
if len(node.targets) == 1 and isinstance(node.targets[0], ast.Name):
# Use provided target_name (for aliased imports) or fall back to original name
if target_name is None:
target_name = node.targets[0].id
# Handle constant value assignments
if isinstance(node.value, ast.Constant):

View File

@ -21,7 +21,7 @@ def debug_info_generation(
generated_debug_info: List of tuples (struct, debug_info) to track generated debug info
Returns:
The generated global variable debug info, or None for unsupported types
"""
# Set up debug info generator
generator = DebugInfoGenerator(llvm_module)
@ -31,23 +31,42 @@ def debug_info_generation(
if existing_struct.name == struct.name:
return debug_info
# Process all fields and create members for the struct
members = []
if struct.name.startswith("struct_"):
struct_name = struct.name.removeprefix("struct_")
else:
raise ValueError("Unions are not supported in the current version")
sorted_fields = sorted(struct.fields.items(), key=lambda item: item[1].offset)
for field_name, field in sorted_fields:
try:
# Get appropriate debug type for this field
field_type = _get_field_debug_type(
field_name, field, generator, struct, generated_debug_info
)
# Ensure field_type is a tuple
if not isinstance(field_type, tuple) or len(field_type) != 2:
logger.error(f"Invalid field_type for {field_name}: {field_type}")
continue
# Create struct member with proper offset
member = generator.create_struct_member_vmlinux(
field_name, field_type, field.offset * 8
)
members.append(member)
except Exception as e:
logger.error(f"Failed to process field {field_name} in {struct.name}: {e}")
continue
struct_name = struct.name.removeprefix("struct_")
# Create struct type with all members
struct_type = generator.create_struct_type_with_name(
struct_name, members, struct.__sizeof__() * 8, is_distinct=True
@ -74,11 +93,19 @@ def _get_field_debug_type(
generated_debug_info: List of already generated debug info
Returns:
A tuple of (debug_type, size_in_bits)
"""
# Handle complex types (arrays, pointers, function pointers)
if field.ctype_complex_type is not None:
# Handle function pointer types (CFUNCTYPE)
if callable(field.ctype_complex_type):
# Function pointers are represented as void pointers
logger.warning(
f"Field {field_name} is a function pointer, using void pointer"
)
void_ptr = generator.create_pointer_type(None, 64)
return void_ptr, 64
elif issubclass(field.ctype_complex_type, ctypes.Array):
# Handle array types
element_type, base_type_size = _get_basic_debug_type(
field.containing_type, generator
@ -100,11 +127,13 @@ def _get_field_debug_type(
for existing_struct, debug_info in generated_debug_info:
if existing_struct.name == struct_name:
# Use existing debug info
return debug_info, existing_struct.__sizeof__() * 8
# If not found, create a forward declaration
# This will be completed when the actual struct is processed
logger.warning("Forward declaration in struct created")
logger.info(
f"Forward declaration created for {struct_name} in {parent_struct.name}"
)
forward_type = generator.create_struct_type([], 0, is_distinct=True)
return forward_type, 0

View File

@ -11,6 +11,10 @@ logger = logging.getLogger(__name__)
class IRGenerator:
# This field keeps track of the non_struct names to avoid duplicate name errors.
type_number = 0
unprocessed_store: list[str] = []
# get the assignments dict and add this stuff to it.
def __init__(self, llvm_module, handler: DependencyHandler, assignments):
self.llvm_module = llvm_module
@ -129,7 +133,19 @@ class IRGenerator:
for field_name, field in struct.fields.items():
# does not take arrays and similar types into consideration yet.
if callable(field.ctype_complex_type):
# Function pointer case - generate a simple field accessor
field_co_re_name, returned = self._struct_name_generator(
struct, field, field_index
)
field_index += 1
globvar = ir.GlobalVariable(
self.llvm_module, ir.IntType(64), name=field_co_re_name
)
globvar.linkage = "external"
globvar.set_metadata("llvm.preserve.access.index", debug_info)
self.generated_field_names[struct.name][field_name] = globvar
elif field.ctype_complex_type is not None and issubclass(
field.ctype_complex_type, ctypes.Array
):
array_size = field.type_size
@ -137,7 +153,7 @@ class IRGenerator:
if containing_type.__module__ == ctypes.__name__:
containing_type_size = ctypes.sizeof(containing_type)
if array_size == 0:
field_co_re_name, returned = self._struct_name_generator(
struct, field, field_index, True, 0, containing_type_size
)
globvar = ir.GlobalVariable(
@ -149,7 +165,7 @@ class IRGenerator:
field_index += 1
continue
for i in range(0, array_size):
field_co_re_name, returned = self._struct_name_generator(
struct, field, field_index, True, i, containing_type_size
)
globvar = ir.GlobalVariable(
@ -163,12 +179,28 @@ class IRGenerator:
array_size = field.type_size
containing_type = field.containing_type
if containing_type.__module__ == "vmlinux":
# Unwrap all pointer layers to get the base struct type
base_containing_type = containing_type
while hasattr(base_containing_type, "_type_"):
next_type = base_containing_type._type_
# Stop if _type_ is a string (like 'c' for c_char)
# TODO: stacked pointers: not handling the ctypes check here as well
if isinstance(next_type, str):
break
base_containing_type = next_type
# Get the base struct name
base_struct_name = (
base_containing_type.__name__
if hasattr(base_containing_type, "__name__")
else str(base_containing_type)
)
# Look up the size using the base struct name
containing_type_size = self.handler[base_struct_name].current_offset
if array_size == 0:
field_co_re_name, returned = self._struct_name_generator(
struct, field, field_index, True, 0, containing_type_size
)
globvar = ir.GlobalVariable(
self.llvm_module, ir.IntType(64), name=field_co_re_name
@ -176,9 +208,30 @@ class IRGenerator:
globvar.linkage = "external"
globvar.set_metadata("llvm.preserve.access.index", debug_info)
self.generated_field_names[struct.name][field_name] = globvar
field_index += 1
else:
for i in range(0, array_size):
field_co_re_name, returned = self._struct_name_generator(
struct,
field,
field_index,
True,
i,
containing_type_size,
)
globvar = ir.GlobalVariable(
self.llvm_module, ir.IntType(64), name=field_co_re_name
)
globvar.linkage = "external"
globvar.set_metadata(
"llvm.preserve.access.index", debug_info
)
self.generated_field_names[struct.name][field_name] = (
globvar
)
field_index += 1
else:
field_co_re_name, returned = self._struct_name_generator(
struct, field, field_index
)
field_index += 1
@ -198,7 +251,7 @@ class IRGenerator:
is_indexed: bool = False,
index: int = 0,
containing_type_size: int = 0,
) -> tuple[str, bool]:
# TODO: Does not support Unions as well as recursive pointer and array type naming
if is_indexed:
name = (
@ -208,7 +261,7 @@ class IRGenerator:
+ "$"
+ f"0:{field_index}:{index}"
)
return name, True
elif struct.name.startswith("struct_"):
name = (
"llvm."
@ -217,9 +270,18 @@ class IRGenerator:
+ "$"
+ f"0:{field_index}"
)
return name, True
else:
logger.warning(
"Blindly handling non-struct type to avoid type errors in vmlinux IR generation. Possibly a union."
)
self.type_number += 1
unprocessed_type = "unprocessed_type_" + str(self.handler[struct.name].name)
if unprocessed_type in self.unprocessed_store:
return unprocessed_type + "_" + str(self.type_number), False
else:
self.unprocessed_store.append(unprocessed_type)
return unprocessed_type, False
# raise TypeError(
# "Name generation cannot occur due to type name not starting with struct"
# )

View File

@ -77,7 +77,7 @@ class VmlinuxHandler:
return None
def get_vmlinux_enum_value(self, name):
"""Handle vmlinux enum constants by returning LLVM IR constants"""
"""Handle vmlinux.enum constants by returning LLVM IR constants"""
if self.is_vmlinux_enum(name):
value = self.vmlinux_symtab[name].value
logger.info(f"The value of vmlinux enum {name} = {value}")
@ -94,17 +94,168 @@ class VmlinuxHandler:
f"Attempting to access field {field_name} of possible vmlinux struct {struct_var_name}"
)
python_type: type = var_info.metadata
# Check if this is a context field (ctx) or a cast struct
is_context_field = var_info.var is None
if is_context_field:
# Handle context field access (original behavior)
struct_name = python_type.__name__
globvar_ir, field_data = self.get_field_type(struct_name, field_name)
builder.function.args[0].type = ir.PointerType(ir.IntType(8))
field_ptr = self.load_ctx_field(
builder,
builder.function.args[0],
globvar_ir,
field_data,
struct_name,
)
return field_ptr, field_data
else:
struct_name = python_type.__name__
globvar_ir, field_data = self.get_field_type(struct_name, field_name)
# Handle cast struct field access (use bpf_probe_read_kernel)
# Load the struct pointer from the local variable
struct_ptr = builder.load(var_info.var)
# Determine the preallocated tmp name that assignment pass should have created
tmp_name = f"{struct_var_name}_{field_name}_tmp"
# Use bpf_probe_read_kernel for non-context struct field access
field_value = self.load_struct_field(
builder,
struct_ptr,
globvar_ir,
field_data,
struct_name,
local_sym_tab,
tmp_name,
)
# Return field value and field type
return field_value, field_data
else:
raise RuntimeError("Variable accessed not found in symbol table")
@staticmethod
def load_struct_field(
builder,
struct_ptr_int,
offset_global,
field_data,
struct_name=None,
local_sym_tab=None,
tmp_name: str | None = None,
):
"""
Generate LLVM IR to load a field from a regular (non-context) struct using bpf_probe_read_kernel.
Args:
builder: llvmlite IRBuilder instance
struct_ptr_int: The struct pointer as an i64 value (already loaded from alloca)
offset_global: Global variable containing the field offset (i64)
field_data: contains data about the field
struct_name: Name of the struct being accessed (optional)
local_sym_tab: symbol table (optional) - used to locate preallocated tmp storage
tmp_name: name of the preallocated temporary storage to use (preferred)
Returns:
The loaded value
"""
# Load the offset value
offset = builder.load(offset_global)
# Convert i64 to pointer type (BPF stores pointers as i64)
i8_ptr_type = ir.PointerType(ir.IntType(8))
struct_ptr = builder.inttoptr(struct_ptr_int, i8_ptr_type)
# GEP with offset to get field pointer
field_ptr = builder.gep(
struct_ptr,
[offset],
inbounds=False,
)
# Determine the appropriate field size based on field information
field_size_bytes = 8 # Default to 8 bytes (64-bit)
int_width = 64 # Default to 64-bit
needs_zext = False
if field_data is not None:
# Try to determine the size from field metadata
if field_data.type.__module__ == ctypes.__name__:
try:
field_size_bytes = ctypes.sizeof(field_data.type)
field_size_bits = field_size_bytes * 8
if field_size_bits in [8, 16, 32, 64]:
int_width = field_size_bits
logger.info(
f"Determined field size: {int_width} bits ({field_size_bytes} bytes)"
)
# Special handling for struct_xdp_md i32 fields
if struct_name == "struct_xdp_md" and int_width == 32:
needs_zext = True
logger.info(
"struct_xdp_md i32 field detected, will zero-extend to i64"
)
else:
logger.warning(
f"Unusual field size {field_size_bits} bits, using default 64"
)
except Exception as e:
logger.warning(
f"Could not determine field size: {e}, using default 64"
)
elif field_data.type.__module__ == "vmlinux":
# For pointers to structs or complex vmlinux types
if field_data.ctype_complex_type is not None and issubclass(
field_data.ctype_complex_type, ctypes._Pointer
):
int_width = 64 # Pointers are always 64-bit
field_size_bytes = 8
logger.info("Field is a pointer type, using 64 bits")
else:
logger.warning("Complex vmlinux field type, using default 64 bits")
# Use preallocated temporary storage if provided by allocation pass
local_storage_i8_ptr = None
if tmp_name and local_sym_tab and tmp_name in local_sym_tab:
# Expect the tmp to be an alloca created during allocation pass
tmp_alloca = local_sym_tab[tmp_name].var
local_storage_i8_ptr = builder.bitcast(tmp_alloca, i8_ptr_type)
else:
# Fallback: allocate inline (not ideal, but preserves behavior)
local_storage = builder.alloca(ir.IntType(int_width))
local_storage_i8_ptr = builder.bitcast(local_storage, i8_ptr_type)
logger.warning(f"Temp storage '{tmp_name}' not found. Allocating inline")
# Use bpf_probe_read_kernel to safely read the field
# This generates:
# %gep = getelementptr i8, ptr %struct_ptr, i64 %offset (already done above as field_ptr)
# %passed = tail call ptr @llvm.bpf.passthrough.p0.p0(i32 2, ptr %gep)
# %result = call i64 inttoptr (i64 113 to ptr)(ptr %local_storage, i32 %size, ptr %passed)
from pythonbpf.helper import emit_probe_read_kernel_call
emit_probe_read_kernel_call(
builder, local_storage_i8_ptr, field_size_bytes, field_ptr
)
# Load the value from local storage
value = builder.load(
builder.bitcast(local_storage_i8_ptr, ir.PointerType(ir.IntType(int_width)))
)
# Zero-extend i32 to i64 if needed
if needs_zext:
value = builder.zext(value, ir.IntType(64))
logger.info("Zero-extended i32 value to i64")
return value
@staticmethod
def load_ctx_field(builder, ctx_arg, offset_global, field_data, struct_name=None):
"""

View File

@ -1,23 +1,22 @@
BPF_CLANG := clang
CFLAGS := -emit-llvm -target bpf -c -D__TARGET_ARCH_x86
SRC := $(wildcard *.bpf.c)
LL := $(SRC:.bpf.c=.bpf.ll)
OBJ := $(SRC:.bpf.c=.bpf.o)
LL0 := $(SRC:.bpf.c=.bpf.o0.ll)
.PHONY: all clean
all: $(LL) $(OBJ) $(LL0)
%.bpf.o: %.bpf.c
$(BPF_CLANG) -O2 -D__TARGET_ARCH_x86 -g -target bpf -c $< -o $@
%.bpf.ll: %.bpf.c
$(BPF_CLANG) $(CFLAGS) -O2 -g -S $< -o $@
%.bpf.o0.ll: %.bpf.c
$(BPF_CLANG) $(CFLAGS) -O0 -g -S $< -o $@
clean:
rm -f $(LL) $(OBJ) $(LL0)

View File

@ -0,0 +1,66 @@
// disksnoop.bpf.c
// eBPF program (compile with: clang -O2 -g -target bpf -c disksnoop.bpf.c -o disksnoop.bpf.o)
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>
char LICENSE[] SEC("license") = "GPL";
struct {
__uint(type, BPF_MAP_TYPE_HASH);
__type(key, __u64);
__type(value, __u64);
__uint(max_entries, 10240);
} start_map SEC(".maps");
/* kprobe: record start timestamp keyed by request pointer */
SEC("kprobe/blk_mq_start_request")
int trace_start(struct pt_regs *ctx)
{
/* request * is first arg */
__u64 reqp = (__u64)(ctx->di);
__u64 ts = bpf_ktime_get_ns();
bpf_map_update_elem(&start_map, &reqp, &ts, BPF_ANY);
// /* optional debug:
bpf_printk("start: req=%llu ts=%llu\n", reqp, ts);
// */
return 0;
}
/* completion: compute latency and print data_len, cmd_flags, latency_us */
SEC("kprobe/blk_mq_end_request")
int trace_completion(struct pt_regs *ctx)
{
__u64 reqp = (__u64)(ctx->di);
__u64 *tsp;
__u64 now_ns;
__u64 delta_ns;
__u64 delta_us = 0;
bpf_printk("%lld", reqp);
tsp = bpf_map_lookup_elem(&start_map, &reqp);
if (!tsp)
return 0;
now_ns = bpf_ktime_get_ns();
delta_ns = now_ns - *tsp;
delta_us = delta_ns / 1000;
/* read request fields using CO-RE; needs vmlinux.h/BTF */
__u32 data_len = 0;
__u32 cmd_flags = 0;
/* __data_len is usually a 32/64-bit; use CORE read to be safe */
data_len = ( __u32 ) BPF_CORE_READ((struct request *)reqp, __data_len);
cmd_flags = ( __u32 ) BPF_CORE_READ((struct request *)reqp, cmd_flags);
/* print: "<bytes> <flags_hex> <latency_us>" */
bpf_printk("%u %x %llu\n", data_len, cmd_flags, delta_us);
/* remove from map */
bpf_map_delete_elem(&start_map, &reqp);
return 0;
}

View File

@ -0,0 +1,18 @@
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>
char LICENSE[] SEC("license") = "GPL";
SEC("kprobe/blk_mq_start_request")
int example(struct pt_regs *ctx)
{
u64 a = ctx->r15;
struct request *req = (struct request *)(ctx->di);
unsigned int something_ns = BPF_CORE_READ(req, timeout);
unsigned int data_len = BPF_CORE_READ(req, __data_len);
bpf_printk("data length %lld %ld %ld\n", data_len, something_ns, a);
return 0;
}

View File

@ -0,0 +1,18 @@
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>
char LICENSE[] SEC("license") = "GPL";
SEC("kprobe/blk_mq_start_request")
int example(struct pt_regs *ctx)
{
u64 a = ctx->r15;
struct request *req = (struct request *)(ctx->di);
unsigned int something_ns = req->timeout;
unsigned int data_len = req->__data_len;
bpf_printk("data length %lld %ld %ld\n", data_len, something_ns, a);
return 0;
}

View File

@ -0,0 +1,31 @@
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
struct fake_iphdr {
unsigned short useless;
unsigned short tot_len;
unsigned short id;
unsigned short frag_off;
unsigned char ttl;
unsigned char protocol;
unsigned short check;
unsigned int saddr;
unsigned int daddr;
};
SEC("xdp")
int xdp_prog(struct xdp_md *ctx) {
unsigned long data = ctx->data;
unsigned long data_end = ctx->data_end;
if (data + sizeof(struct ethhdr) + sizeof(struct fake_iphdr) > data_end) {
return XDP_ABORTED;
}
struct fake_iphdr *iph = (void *)data + sizeof(struct ethhdr);
bpf_printk("%d", iph->saddr);
return XDP_PASS;
}
char _license[] SEC("license") = "GPL";

View File

@ -0,0 +1,22 @@
from vmlinux import XDP_PASS
from pythonbpf import bpf, section, bpfglobal, compile_to_ir
import logging
from ctypes import c_int64, c_void_p


@bpf
@section("kprobe/blk_mq_start_request")
def example(ctx: c_void_p) -> c_int64:
    d = XDP_PASS  # This gives an error, but
    e = XDP_PASS + 0  # this does not
    print(f"test1 {e} test2 {d}")
    return c_int64(0)


@bpf
@bpfglobal
def LICENSE() -> str:
    return "GPL"


compile_to_ir("assignment_handling.py", "assignment_handling.ll", loglevel=logging.INFO)

View File

@ -0,0 +1,46 @@
from vmlinux import XDP_PASS, XDP_ABORTED
from vmlinux import (
    struct_xdp_md,
)
from pythonbpf import bpf, section, bpfglobal, compile, compile_to_ir, struct
from ctypes import c_int64, c_ubyte, c_ushort, c_uint32, c_void_p


@bpf
@struct
class iphdr:
    useless: c_ushort
    tot_len: c_ushort
    id: c_ushort
    frag_off: c_ushort
    ttl: c_ubyte
    protocol: c_ubyte
    check: c_ushort
    saddr: c_uint32
    daddr: c_uint32


@bpf
@section("xdp")
def ip_detector(ctx: struct_xdp_md) -> c_int64:
    data = c_void_p(ctx.data)
    data_end = c_void_p(ctx.data_end)
    # 34 = 14-byte Ethernet header + 20-byte IP header
    if data + 34 < data_end:
        hdr = data + 14
        iph = iphdr(hdr)
        addr = iph.saddr
        print(f"ipaddress: {addr}")
    else:
        return c_int64(XDP_ABORTED)
    return c_int64(XDP_PASS)


@bpf
@bpfglobal
def LICENSE() -> str:
    return "GPL"


compile_to_ir("xdp_test_1.py", "xdp_test_1.ll")
compile()

View File

@ -1,4 +1,4 @@
-from pythonbpf import bpf, struct, section, bpfglobal
+from pythonbpf import bpf, struct, section, bpfglobal, compile
 from pythonbpf.helper import comm
 from ctypes import c_void_p, c_int64
@ -26,3 +26,6 @@ def hello(ctx: c_void_p) -> c_int64:
 @bpfglobal
 def LICENSE() -> str:
     return "GPL"
+
+
+compile()

View File

@ -0,0 +1,44 @@
from pythonbpf import bpf, section, struct, bpfglobal, compile, map
from pythonbpf.maps import HashMap
from pythonbpf.helper import pid, comm
from ctypes import c_void_p, c_int64


@bpf
@struct
class val_type:
    counter: c_int64
    shizzle: c_int64
    comm: str(16)


@bpf
@map
def last() -> HashMap:
    return HashMap(key=val_type, value=c_int64, max_entries=16)


@bpf
@section("tracepoint/syscalls/sys_enter_clone")
def hello_world(ctx: c_void_p) -> c_int64:
    obj = val_type()
    obj.counter, obj.shizzle = 42, 96
    comm(obj.comm)
    t = last.lookup(obj)
    if t:
        print(f"Found existing entry: counter={obj.counter}, pid={t}")
        last.delete(obj)
        return 0  # type: ignore [return-value]
    val = pid()
    last.update(obj, val)
    print(f"Map updated!, {obj.counter}, {obj.shizzle}, {val}")
    return 0  # type: ignore [return-value]


@bpf
@bpfglobal
def LICENSE() -> str:
    return "GPL"


compile()

View File

@ -0,0 +1,93 @@
"""
Test struct values in HashMap.
This example stores a struct in a HashMap and reads it back,
testing the new set_value_struct() functionality in pylibbpf.
"""
from pythonbpf import bpf, map, struct, section, bpfglobal, BPF
from pythonbpf.helper import ktime, smp_processor_id, pid, comm
from pythonbpf.maps import HashMap
from ctypes import c_void_p, c_int64, c_uint32, c_uint64
import time
import os
@bpf
@struct
class task_info:
pid: c_uint64
timestamp: c_uint64
comm: str(16)
@bpf
@map
def cpu_tasks() -> HashMap:
return HashMap(key=c_uint32, value=task_info, max_entries=256)
@bpf
@section("tracepoint/sched/sched_switch")
def trace_sched_switch(ctx: c_void_p) -> c_int64:
cpu = smp_processor_id()
# Create task info struct
info = task_info()
info.pid = pid()
info.timestamp = ktime()
comm(info.comm)
# Store in map
cpu_tasks.update(cpu, info)
return 0 # type: ignore
@bpf
@bpfglobal
def LICENSE() -> str:
return "GPL"
# Compile and load
b = BPF()
b.load()
b.attach_all()
print("Testing HashMap with Struct Values")
cpu_map = b["cpu_tasks"]
cpu_map.set_value_struct("task_info") # Enable struct deserialization
print("Listening for context switches.. .\n")
num_cpus = os.cpu_count() or 16
try:
while True:
time.sleep(1)
print(f"--- Snapshot at {time.strftime('%H:%M:%S')} ---")
for cpu in range(num_cpus):
try:
info = cpu_map.lookup(cpu)
if info:
comm_str = (
bytes(info.comm).decode("utf-8", errors="ignore").rstrip("\x00")
)
ts_sec = info.timestamp / 1e9
print(
f" CPU {cpu}: PID={info.pid}, comm={comm_str}, ts={ts_sec:.3f}s"
)
except KeyError:
# No data for this CPU yet
pass
print()
except KeyboardInterrupt:
print("\nStopped")

View File

@ -0,0 +1,27 @@
from vmlinux import struct_request, struct_pt_regs
from pythonbpf import bpf, section, bpfglobal, compile_to_ir, compile
import logging
from ctypes import c_int64


@bpf
@section("kprobe/blk_mq_start_request")
def example(ctx: struct_pt_regs) -> c_int64:
    a = ctx.r15
    req = struct_request(ctx.di)
    d = req.__data_len
    b = ctx.r12
    c = req.timeout
    print(f"data length {d} and {c} and {a}")
    print(f"ctx arg {b}")
    return c_int64(0)


@bpf
@bpfglobal
def LICENSE() -> str:
    return "GPL"


compile_to_ir("requests.py", "requests.ll", loglevel=logging.INFO)
compile()

View File

@ -0,0 +1,21 @@
from vmlinux import struct_pt_regs
from pythonbpf import bpf, section, bpfglobal, compile_to_ir
import logging
from ctypes import c_int64


@bpf
@section("kprobe/blk_mq_start_request")
def example(ctx: struct_pt_regs) -> c_int64:
    req = ctx.di
    print(f"data length {req}")
    return c_int64(0)


@bpf
@bpfglobal
def LICENSE() -> str:
    return "GPL"


compile_to_ir("requests2.py", "requests2.ll", loglevel=logging.INFO)