Linux kernel live debugging: how is it done and what tools are used?
Problem Overview
What are the most common (and the less common) methods and tools used to do live debugging on the Linux kernel? I know that Linus, for example, was against this kind of debugging for the Linux kernel, or at least was back in 2000, and that consequently not much was done in that direction for years. But honestly, a lot of time has passed since then, and I am interested in whether that mentality has changed with regard to the Linux project, and what methods are currently used to do live debugging on the Linux kernel (either local or remote)?
References to walkthroughs and tutorials on mentioned techniques and tools are welcome.
Debugging Solutions
Solution 1 - Debugging
Another option is to use an ICE/JTAG controller together with GDB. This 'hardware' solution is especially common with embedded systems, but for instance QEMU offers similar features:

- start QEMU with a GDB 'remote' stub which listens on 'localhost:1234':

  qemu -s ...

- then with GDB, open the kernel file vmlinux compiled with debug information (you can take a look at this mailing list thread where they discuss the de-optimization of the kernel).

- connect GDB and QEMU:

  target remote localhost:1234

- see your live kernel:

  (gdb) where
  #0  cpu_v7_do_idle () at arch/arm/mm/proc-v7.S:77
  #1  0xc0029728 in arch_idle () at arch/arm/mach-realview/include/mach/system.h:36
  #2  default_idle () at arch/arm/kernel/process.c:166
  #3  0xc00298a8 in cpu_idle () at arch/arm/kernel/process.c:199
  #4  0xc00089c0 in start_kernel () at init/main.c:713

Unfortunately, user-space debugging is not possible so far with GDB (no task list information, no MMU reprogramming to see different process contexts, ...), but if you stay in kernel space, it is quite convenient.

info threads

will give you the list and states of the different CPUs.

EDIT:
You can get more details about the procedure in this PDF: Debugging Linux systems using GDB and QEMU.
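Put together, the session above amounts to two terminals. A minimal sketch (the x86-64 QEMU binary and the bzImage/vmlinux paths are assumptions here; adjust for your architecture and build):

```shell
# Terminal 1: boot the kernel under QEMU with a GDB stub on localhost:1234
# (-s is shorthand for -gdb tcp::1234; -S additionally freezes the CPU until GDB attaches)
qemu-system-x86_64 -kernel arch/x86/boot/bzImage -nographic -s -S

# Terminal 2: attach GDB to the stub, from the root of the kernel source tree
gdb vmlinux \
    -ex 'target remote localhost:1234' \
    -ex 'break start_kernel' \
    -ex 'continue'
```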
Solution 2 - Debugging
While debugging the Linux kernel we can utilize several tools, for example debuggers (KDB, KGDB), crash-dump tools (LKCD), tracing toolkits (LTT, LTTV, LTTng), and custom kernel instrumentation (dprobes, kprobes). In the following section I have tried to summarize most of them; I hope these will help.
The LKCD (Linux Kernel Crash Dump) tool allows the Linux system to write the contents of its memory when a crash occurs. These dumps can then be analyzed for the root cause of the crash. Resources regarding LKCD:
- http://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaax/lkcd.pdf
- https://www.novell.com/coolsolutions/feature/15284.html
- https://www.novell.com/support/kb/doc.php?id=3044267
Oops: when the kernel detects a problem, it prints an Oops message. Such a message is generated by printk statements in the fault handler (arch/*/kernel/traps.c), which write into a dedicated ring buffer in the kernel. An Oops contains information such as the CPU on which the Oops occurred, the contents of the CPU registers, the Oops number, a description, and a stack backtrace, among others. Resources regarding kernel Oopses:
- https://www.kernel.org/doc/Documentation/oops-tracing.txt
- http://madwifi-project.org/wiki/DevDocs/KernelOops
- https://wiki.ubuntu.com/DebuggingKernelOops
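Once an Oops has been captured, its raw addresses can be resolved to file:line form against a vmlinux built with debug info. A sketch (assumes you are inside a built kernel source tree, which ships scripts/decode_stacktrace.sh since v3.16, and that oops.log is a hypothetical file holding the captured Oops text; it degrades gracefully elsewhere):

```shell
#!/bin/sh
# Sketch: translate raw Oops backtrace addresses into file:line form.
if [ -x ./scripts/decode_stacktrace.sh ] && [ -f vmlinux ] && [ -f oops.log ]; then
    ./scripts/decode_stacktrace.sh vmlinux < oops.log
    status=ran
else
    echo "not inside a built kernel tree with an oops.log; nothing to decode"
    status=skipped
fi
```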
Dynamic Probes is one of the popular debugging tools for Linux, developed by IBM. This tool allows the placement of a "probe" at almost any place in the system, in both user and kernel space. The probe consists of some code (written in a specialized, stack-oriented language) that is executed when control hits the given point. Resources regarding Dynamic Probes are listed below:
- http://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaax/dprobesltt.pdf
- http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.107.6212&rep=rep1&type=pdf
Linux Trace Toolkit is a kernel patch and a set of related utilities that allow the tracing of events in the kernel. The trace includes timing information and can create a reasonably complete picture of what happened over a given period of time. Resources for LTT, LTT Viewer and LTT Next Generation:
- http://elinux.org/Linux_Trace_Toolkit
- http://www.linuxjournal.com/article/3829
- http://multivax.blogspot.com/2010/11/introduction-to-linux-tracing-toolkit.html
MEMWATCH is an open-source memory error detection tool. It works by defining MEMWATCH on the gcc command line and by adding a header file to our code. Through this we can track memory leaks and memory corruption. Resources regarding MEMWATCH
ftrace is a good tracing framework for the Linux kernel. ftrace traces internal operations of the kernel. This tool was included in the Linux kernel in 2.6.27. With its various tracer plugins, ftrace can be targeted at different static tracepoints, such as scheduling events, interrupts, memory-mapped I/O, CPU power state transitions, and operations related to file systems and virtualization. Dynamic tracing of kernel function calls is also available, optionally restrictable to a subset of functions by using globs, and with the possibility to generate call graphs and report stack usage. You can find a good tutorial on ftrace at https://events.linuxfoundation.org/slides/2010/linuxcon_japan/linuxcon_jp2010_rostedt.pdf
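To get a taste of the ftrace interface, a minimal sketch (assumes root on a kernel with CONFIG_FUNCTION_TRACER and debugfs mounted at the usual path; it degrades gracefully elsewhere):

```shell
#!/bin/sh
# Sketch: switch on the function tracer via the tracefs control files,
# read back a few lines of the trace, then switch it off again.
TRACING=/sys/kernel/debug/tracing
if [ -w "$TRACING/current_tracer" ]; then
    echo function > "$TRACING/current_tracer"   # enable the function tracer
    head -n 20 "$TRACING/trace"                 # peek at the recorded calls
    echo nop > "$TRACING/current_tracer"        # turn tracing back off
    status=ran
else
    echo "tracefs not writable here (need root + CONFIG_FUNCTION_TRACER)"
    status=skipped
fi
```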
ltrace is a debugging utility in Linux, used to display the calls a user space application makes to shared libraries. This tool can be used to trace any dynamic library function call. It intercepts and records the dynamic library calls which are called by the executed process and the signals which are received by that process. It can also intercept and print the system calls executed by the program.
- http://www.ellexus.com/getting-started-with-ltrace-how-does-it-do-that/
- http://developerblog.redhat.com/2014/07/10/ltrace-for-rhel-6-and-7/
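For example, a quick sketch of summarizing the library calls of a short-lived command (assumes the ltrace package is installed; it degrades gracefully elsewhere):

```shell
#!/bin/sh
# Sketch: -c prints a count/time summary of library calls instead of every call.
if command -v ltrace >/dev/null 2>&1; then
    ltrace -c ls >/dev/null   # summary table is printed to stderr
    status=ran
else
    echo "ltrace is not installed on this system"
    status=skipped
fi
```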
KDB is the in-kernel debugger of the Linux kernel. KDB follows a simplistic shell-style interface. We can use it to inspect memory, registers, process lists and dmesg, and even set breakpoints to stop at a certain location. Through KDB we can set breakpoints and execute some basic kernel run control (although KDB is not a source-level debugger). Several handy resources regarding KDB:
- http://www.drdobbs.com/open-source/linux-kernel-debugging/184406318
- http://elinux.org/KDB
- http://dev.man-online.org/man1/kdb/
- https://www.kernel.org/pub/linux/kernel/people/jwessel/kdb/usingKDB.html
KGDB is intended to be used as a source-level debugger for the Linux kernel. It is used along with gdb to debug a Linux kernel. Two machines are required for using kgdb: one is a development machine and the other is the target machine. The kernel to be debugged runs on the target machine. The expectation is that gdb can be used to "break in" to the kernel to inspect memory and variables and to look through call stack information, similar to the way an application developer would use gdb to debug an application. It is possible to place breakpoints in kernel code and perform some limited execution stepping. Several handy resources regarding KGDB
Solution 3 - Debugging
According to the wiki, kgdb was merged into the kernel in 2.6.26, which is within the last few years. kgdb is a remote debugger, so you activate it in your kernel and then you attach gdb to it somehow. I say somehow, as there seem to be lots of options - see connecting gdb. Given that kgdb is now in the source tree, I'd say that going forward this is what you want to be using.

So it looks like Linus gave in. However, I would emphasize his argument - you should know what you're doing and know the system well. This is kernel land. If anything goes wrong, you don't get a segfault; you get anything from some obscure problem later on to the whole system coming down. Here be dragons. Proceed with care, you have been warned.
Solution 4 - Debugging
Another good tool for "live" debugging is kprobes / dynamic probes.

This lets you dynamically build tiny little modules which run when certain addresses are executed - sort of like a breakpoint.

Their big advantages are:

- They do not impact the system - i.e. when a location is hit, it just executes the code; it doesn't halt the whole kernel.
- You don't need two different systems interconnected (target and debug) like with kgdb.

It is best for doing things like hitting a breakpoint and seeing what data values are, or checking if things have been changed/overwritten, etc. If you want to "step through code", it doesn't do that.
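For simple cases you don't even need to write a module: kprobes can be planted from the shell through the tracefs kprobe_events file. A minimal sketch (needs root and CONFIG_KPROBE_EVENTS; the probed symbol do_sys_open is an assumption that varies by kernel version; it degrades gracefully elsewhere):

```shell
#!/bin/sh
# Sketch: plant a kprobe at do_sys_open and watch it fire, all from the shell.
TRACING=/sys/kernel/debug/tracing
if [ -w "$TRACING/kprobe_events" ]; then
    echo 'p:myopen do_sys_open' > "$TRACING/kprobe_events"   # define probe "myopen"
    echo 1 > "$TRACING/events/kprobes/myopen/enable"          # arm it
    timeout 2 cat "$TRACING/trace_pipe"                       # watch hits stream by
    echo 0 > "$TRACING/events/kprobes/myopen/enable"          # disarm
    echo > "$TRACING/kprobe_events"                           # remove the probe
    status=ran
else
    echo "tracefs kprobe_events not writable here (need root + CONFIG_KPROBE_EVENTS)"
    status=skipped
fi
```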
Addition - 2018:
Another very powerful method is a program simply called "perf", which kind of rolls up many tools (like dynamic probes) and kind of replaces/deprecates others (like oprofile).

In particular, the perf probe command can be used to easily create/add dynamic probes to the system, after which perf record can sample the system and report info (and backtraces) when the probe is hit, for reporting via perf report (or perf script). If you have good debug symbols in the kernel, you can get great intel out of the system without even taking the kernel down. Do a man perf (in Google or on your system) for more info on this tool.
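For instance, a sketch of the probe/record/report cycle (assumes perf is installed and run as root, and that the symbol vfs_read is resolvable on your kernel; it degrades gracefully elsewhere):

```shell
#!/bin/sh
# Sketch: plant a dynamic probe on vfs_read, sample it briefly, then report.
if command -v perf >/dev/null 2>&1 && [ "$(id -u)" = 0 ]; then
    perf probe --add vfs_read                      # create the dynamic probe
    perf record -e probe:vfs_read -a -- sleep 1    # sample hits system-wide for 1s
    perf report --stdio | head -n 20               # top of the report
    perf probe --del vfs_read                      # clean up the probe
    status=ran
else
    echo "perf not available or not running as root; skipping"
    status=skipped
fi
```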
Solution 5 - Debugging
KGDB + QEMU step-by-step
KGDB is a kernel subsystem that allows you to step debug the kernel itself from a host GDB.
My QEMU + Buildroot example is a good way to get a taste of it without real hardware: https://github.com/cirosantilli/linux-kernel-module-cheat/tree/1969cd6f8d30dace81d9848c6bacbb8bad9dacd8#kgdb
Pros and cons vs other methods:
- advantage vs QEMU:
  - you often don't have software emulation for your device, as hardware vendors don't like to release accurate software models of their devices
  - real hardware is way faster than QEMU
- advantage vs JTAG: no need for extra JTAG hardware, easier to set up
- disadvantages vs QEMU and JTAG: less visibility and more intrusive. KGDB relies on certain parts of the kernel working to be able to communicate with the host. So, e.g., it breaks down on a kernel panic, and you can't view the boot sequence.
The main steps are:

- Compile the kernel with:

  CONFIG_DEBUG_KERNEL=y
  CONFIG_DEBUG_INFO=y
  CONFIG_CONSOLE_POLL=y
  CONFIG_KDB_CONTINUE_CATASTROPHIC=0
  CONFIG_KDB_DEFAULT_ENABLE=0x1
  CONFIG_KDB_KEYBOARD=y
  CONFIG_KGDB=y
  CONFIG_KGDB_KDB=y
  CONFIG_KGDB_LOW_LEVEL_TRAP=y
  CONFIG_KGDB_SERIAL_CONSOLE=y
  CONFIG_KGDB_TESTS=y
  CONFIG_KGDB_TESTS_ON_BOOT=n
  CONFIG_MAGIC_SYSRQ=y
  CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
  CONFIG_SERIAL_KGDB_NMI=n

  Most of those are not mandatory, but this is what I've tested.

- Add to your QEMU command:

  -append 'kgdbwait kgdboc=ttyS0,115200' \
  -serial tcp::1234,server,nowait

- Run GDB from the root of the Linux kernel source tree with:

  gdb -ex 'file vmlinux' -ex 'target remote localhost:1234'

- In GDB:

  (gdb) c

  and the boot should finish.

- In QEMU:

  echo g > /proc/sysrq-trigger

  And GDB should break.

- Now we are done; you can use GDB as usual:

  b sys_write
  c
Tested in Ubuntu 14.04.
KGDB + Raspberry Pi
The exact same setup as above almost worked on a Raspberry Pi 2, Raspbian Jessie 2016-05-27.
You just have to learn how to do the QEMU steps on the Pi, which is easily Googlable:

- add the configuration options and recompile the kernel as explained at https://www.raspberrypi.org/documentation/linux/kernel/building.md There were unfortunately missing options on the default kernel build, notably no debug symbols, so the recompile is needed.

- edit cmdline.txt of the boot partition and add:

  kgdbwait kgdboc=ttyAMA0,115200

- connect gdb to the serial with:

  arm-linux-gnueabihf-gdb -ex 'file vmlinux' -ex 'target remote /dev/ttyUSB0'

  If you are not familiar with the serial, check out this: https://www.youtube.com/watch?v=da5Q7xL_OTo All you need is a cheap adapter like this one. Make sure you can get a shell through the serial to ensure that it is working before trying out KGDB.

- do:

  echo g | sudo tee /proc/sysrq-trigger

  from inside an SSH session, since the serial is already taken by GDB.
With this setup, I was able to put a breakpoint in sys_write, pause program execution, list source and continue.

However, sometimes when I did next in sys_write, GDB just hung and printed this error message several times:

Ignoring packet error, continuing...
so I'm not sure if something is wrong with my setup, or if this is expected because of what some background process is doing in the more complex Raspbian image.
I've also been told to try and disable multiprocessing with the Linux boot options, but I haven't tried it yet.
Solution 6 - Debugging
QEMU + GDB step-by-step procedure tested on Ubuntu 16.10 host
To get started from scratch quickly I've made a minimal fully automated QEMU + Buildroot example at: https://github.com/cirosantilli/linux-kernel-module-cheat Major steps are covered below.
First get a root filesystem rootfs.cpio.gz. If you need one, consider:

- a minimal init-only executable image: https://unix.stackexchange.com/questions/122717/custom-linux-distro-that-runs-just-one-program-nothing-else/238579#238579
- a Busybox interactive system: https://unix.stackexchange.com/questions/2692/what-is-the-smallest-possible-linux-implementation/203902#203902
Then on the Linux kernel:
git checkout v4.9
make mrproper
make x86_64_defconfig
cat <<EOF >.config-fragment
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_KERNEL=y
CONFIG_GDB_SCRIPTS=y
EOF
./scripts/kconfig/merge_config.sh .config .config-fragment
make -j"$(nproc)"
qemu-system-x86_64 -kernel arch/x86/boot/bzImage \
-initrd rootfs.cpio.gz -S -s
On another terminal, supposing you want to start debugging from start_kernel:
gdb \
-ex "add-auto-load-safe-path $(pwd)" \
-ex "file vmlinux" \
-ex 'set arch i386:x86-64:intel' \
-ex 'target remote localhost:1234' \
-ex 'break start_kernel' \
-ex 'continue' \
-ex 'disconnect' \
-ex 'set arch i386:x86-64' \
-ex 'target remote localhost:1234'
and we are done!!
For kernel modules see: https://stackoverflow.com/questions/28607538/how-to-debug-linux-kernel-modules-with-qemu/44095831#44095831
For Ubuntu 14.04, GDB 7.7.1, hbreak was needed; break software breakpoints were ignored. Not the case anymore in 16.10. See also: https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/901944

The messy disconnect and what comes after it are there to work around the error:

Remote 'g' packet reply is too long: 000000000000000017d11000008ef4810120008000000000fdfb8b07000000000d352828... [long hex register dump truncated]
Related threads:
- https://sourceware.org/bugzilla/show_bug.cgi?id=13984 might be a GDB bug
- https://stackoverflow.com/questions/8662468/remote-g-packet-reply-is-too-long
- http://wiki.osdev.org/QEMU_and_GDB_in_long_mode osdev.org is as usual an awesome source for these problems
- https://lists.nongnu.org/archive/html/qemu-discuss/2014-10/msg00069.html
See also:
- <https://github.com/torvalds/linux/blob/v4.9/Documentation/dev-tools/gdb-kernel-debugging.rst> official Linux kernel "documentation"
- <https://stackoverflow.com/questions/11408041/how-to-debug-the-linux-kernel-with-gdb-and-qemu/33203642>
Known limitations:

- the Linux kernel does not support (and does not even compile without patches) with -O0: https://stackoverflow.com/questions/29151235/how-to-de-optimize-the-linux-kernel
- GDB 7.11 will blow your memory on some types of tab completion, even after the max-completions fix: https://stackoverflow.com/questions/597777/tab-completion-interrupt-for-large-binaries Likely some corner case which was not covered in that patch. So a ulimit -Sv 500000 is a wise action before debugging. It blew up specifically when I tab-completed file<tab> for the filename argument of sys_execve, as in: https://stackoverflow.com/a/42290593/895245
Solution 7 - Debugging
Actually the joke is that Linux has had an in-kernel debugger since 2.2.12, xmon, but only for the powerpc architecture (actually it was ppc back then).

It's not a source-level debugger, and it's almost entirely undocumented, but still.
http://lxr.linux.no/linux-old+v2.2.12/arch/ppc/xmon/xmon.c#L119
Solution 8 - Debugging
As someone who writes kernel code a lot, I have to say I have never used kgdb, and only rarely use kprobes etc.

It is still often the best approach to throw in some strategic printks. In more recent kernels, trace_printk is a good way to do that without spamming dmesg.
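trace_printk writes into the ftrace ring buffer rather than the console, so the output is read back through tracefs. A sketch of the reading side (assumes root and that some kernel code of yours has called trace_printk; it degrades gracefully elsewhere):

```shell
#!/bin/sh
# Sketch: read back trace_printk output from the ftrace ring buffer.
TRACING=/sys/kernel/debug/tracing
if [ -r "$TRACING/trace" ]; then
    grep -v '^#' "$TRACING/trace" | head -n 20   # drop the header comment lines
    status=ran
else
    echo "tracefs not readable here (need root on a real kernel)"
    status=skipped
fi
```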
Solution 9 - Debugging
User mode Linux (UML)
https://en.wikipedia.org/wiki/User-mode_Linux
Another virtualization-based method that allows step-debugging kernel code.
UML is very ingenious: it is implemented as an ARCH, just like x86, but instead of using low-level instructions, it implements the ARCH functions with userland system calls.

The result is that you are able to run Linux kernel code as a userland process on a Linux host!
First make a rootfs and run it as shown at: https://unix.stackexchange.com/questions/73203/how-to-create-rootfs-for-user-mode-linux-on-fedora-18/372207#372207
The um defconfig sets CONFIG_DEBUG_INFO=y by default (yup, it is a development thing), so we are fine.
On guest:
i=0
while true; do echo $i; i=$(($i+1)); done
On the host, in another shell, find the PID of the UML process and attach to it:

pid=$(pgrep -f ./linux | head -n 1)
gdb -pid "$pid"
In GDB:
break sys_write
continue
continue
And now you are controlling the count from GDB, and can see source as expected.
Pros:
- fully contained in the Linux kernel mainline tree
- more lightweight than QEMU's full system emulation
Cons:

- very invasive, as it changes how the kernel itself is compiled. But the higher-level APIs outside of ARCH specifics should remain unchanged.
- arguably not very active: https://stackoverflow.com/questions/36353143/is-user-mode-linux-uml-project-stopped
See also: https://unix.stackexchange.com/questions/127829/why-would-someone-want-to-run-usermode-linux-uml
Solution 10 - Debugging
You guys are wrong; kgdb still works well for the latest kernels. You just need to take care of the kernel configuration for the split image and for randomization and optimization.

kgdb over a serial port is useless, because no computer today has a DB9 serial port on the motherboard, and USB serial ports don't support polling mode.

The new game is kgdboe. The following is the log trace.

The following is on the host machine; vmlinux is from the target machine:
root@Thinkpad-T510:~/KGDBOE# gdb vmlinux
Reading symbols from vmlinux...done.
(gdb) target remote udp:192.168.1.22:31337
1077 kernel/debug/debug_core.c: No such file or directory.
(gdb) l oom_kill_process
828 mm/oom_kill.c: No such file or directory.
(gdb) l oom_kill_process
828 in mm/oom_kill.c
(gdb) break oom_kill_process
Breakpoint 1 at 0xffffffff8119e0c0: file mm/oom_kill.c, line 833.
(gdb) c
Continuing.
[New Thread 1779]
[New Thread 1782]
[New Thread 1777]
[New Thread 1778]
[New Thread 1780]
[New Thread 1781]
[Switching to Thread 1779]
Thread 388 hit Breakpoint 1, oom_kill_process (oc=0xffffc90000d93ce8, message=0xffffffff82098fbc "Out of memory")
at mm/oom_kill.c:833
833 in mm/oom_kill.c
(gdb) s
834 in mm/oom_kill.c
(gdb)
On the peer target machine, the following is how to make it crash and be captured by the host machine:
#swapoff -a
#stress -m 4 --vm-bytes=500m
Solution 11 - Debugging
kgdb and gdb are almost useless for debugging the kernel, because the code is so optimized that it bears no relation to the original source and many variables are optimized out. This makes stepping through the source impossible and examining variables impossible, and it is therefore almost pointless.

Actually it is worse than useless: it actively gives you false information, so detached is the code you are looking at from the actual running code.

And no, you can't turn off optimizations in the kernel; it doesn't compile.

I have to say, coming from a Windows kernel environment, the lack of a decent debugger is annoying, given that there is junk code out there to maintain.