As mentioned earlier, some signals indicate that it's inadvisable or even impossible for a process to continue. In these cases, the default action is to prematurely terminate the process and write a file called a core file, colloquially known as dumping core. The writing of core files may be suppressed by your shell (see Your Shell May Suppress the Creation of a Core File for details).
If a core file is created during a run of your program, you can open your debugger, say GDB, on that file and then proceed with your usual GDB operations.
A core file contains a detailed description of the program's state when it died: the contents of the stack (or, if the program is threaded, the stacks for each thread), the contents of the CPU's registers (again, with one set of register values per thread if the program is multithreaded), the values of the program's statically allocated variables (global and static variables), and so on.
It's very easy to create a core file. Here's code that generates one:
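What follows is a minimal sketch; the essential ingredient is the call to abort(), and the file name abort.c is just our label for it.

/* abort.c (our name for this sketch): deliberately dump core via abort() */
#include <stdlib.h>

int main(void)
{
    abort();   /* raises SIGABRT; the default action terminates the process and dumps core */
    return 0;  /* never reached */
}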
The abort() function causes the current process to receive a SIGABRT signal, and the default signal handler for SIGABRT terminates the program and dumps core. Here's another short program that dumps core. In this program, we intentionally dereference a NULL pointer:
Let's generate a core file. Compile and run sigsegv.c:
$ gcc -g -W -Wall sigsegv.c -o sigsegv
$ ./sigsegv
Segmentation fault (core dumped)
If you list your current directory, you'll notice a new file named core (or some variant thereof). When you see a core file somewhere in your filesystem, it may not be obvious which program generated it. The Unix command file helpfully tells us the name of the executable that dumped this particular core file:
$ file core
core: ELF 32-bit LSB core file Intel 80386, version 1 (SYSV), SVR4-style, SVR4-style, from 'sigsegv'
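At that point you can hand both the executable and the core file to GDB (exactly what GDB prints while loading the core varies from system to system):

$ gdb sigsegv core

GDB then behaves much as if the program had just crashed under its control; for instance, the bt (backtrace) command shows the call stack at the moment of the fault.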
In many, if not most, cases, the debugging process does not involve core files. If a program seg faults, the programmer simply opens a debugger, such as GDB, and runs the program again to recreate the fault. For that reason, and because core files tend to be large, most modern shells have mechanisms to prevent core files from being written in the first place.
In bash, you can control the creation of core files with the ulimit command:
ulimit -c n
where n is the maximum size of a core file, in kilobytes; any core file larger than n KB will not be written. If you don't specify n, the shell displays the current limit on core file size. If you want to allow a core file of any size, you can use
ulimit -c unlimited
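For example, on many systems the limit starts out at 0, which suppresses core files entirely; a typical bash session might look like this:

$ ulimit -c
0
$ ulimit -c unlimited
$ ulimit -c
unlimited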
For tcsh and csh users, the limit command controls core file sizes. For example,
limit coredumpsize 1000000
will tell the shell not to create a core file if it would be larger than 1,000,000 KB; for coredumpsize, the limit value is interpreted in kilobytes by default.
If you didn't get a core file after running sigsegv, check the current core file restrictions with ulimit -c in bash, or with limit coredumpsize in tcsh or csh.
Why would you ever need a core file in the first place? Since you can simply rerun a program that has seg faulted from within GDB and recreate the seg fault, why bother with core files at all? The answer is that the assumption that you can recreate the fault on demand is not always justified, as in the following situations:
The seg fault only occurs after the program has run for a long period of time, so that it is infeasible to recreate the fault in the debugger.
The program's behavior depends on random, environmental events, so that running the program again may not reproduce the seg fault.
The seg fault occurs when the program is run by a naive user. Here the user, who would typically not be a programmer (or not have access to the source code), would not do the debugging. However, such a user could still send the core file (if it were available) to the programmer for inspection and debugging purposes.
Note, though, that core files are not very useful if the program's source code isn't available, if the executable wasn't compiled with an enhanced symbol table (GCC's -g option), or if we simply don't plan to debug the executable.