There are several metrics to measure the amount of memory a process is using. I will begin with the two that are easiest to obtain: the virtual set size (Vss) and the resident memory size (Rss), both of which are available in most implementations of the ps and top commands:

- Vss: called VSZ in the ps command and VIRT in top, is the total amount of memory mapped by a process. It is the sum of all the regions shown in /proc/<PID>/map. This number is of limited interest, since only part of the virtual memory is committed to physical memory at any one time.

- Rss: called RSS in ps and RES in top, is the sum of the memory that is mapped to physical pages of memory. This gets closer to the actual memory budget of the process, but there is a problem: if you add up the Rss of all the processes, you will get an overestimate of the memory in use, because some pages will be shared.

The versions of top and ps from BusyBox give very limited information. The examples that follow use the full versions from the procps package.
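The same two numbers can also be read directly from /proc without running ps at all: the VmSize and VmRSS fields of /proc/<PID>/status hold the Vss and Rss of that process. A minimal example, using the current shell's own PID:

```shell
# Vss and Rss for one process, read straight from /proc.
# VmSize corresponds to Vss (VSZ in ps) and VmRSS to Rss (RSS in ps).
# $$ is the PID of the current shell; substitute any PID of interest.
grep -E 'VmSize|VmRSS' /proc/$$/status
```

This is handy in scripts on a minimal target where only BusyBox tools are installed.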
The ps command shows Vss (VSZ) and Rss (RSS) with the options -Aly, or with a custom format that includes vsz and rss, as shown here:
# ps -eo pid,tid,class,rtprio,stat,vsz,rss,comm
  PID   TID CLS RTPRIO STAT    VSZ   RSS COMMAND
    1     1 TS       - Ss     4496  2652 systemd
...
  205   205 TS       - Ss     4076  1296 systemd-journal
  228   228 TS       - Ss     2524  1396 udevd
  581   581 TS       - Ss     2880  1508 avahi-daemon
  584   584 TS       - Ss     2848  1512 dbus-daemon
  590   590 TS       - Ss     1332   680 acpid
  594   594 TS       - Ss     4600  1564 wpa_supplicant
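The procps version of ps can also sort on any output column, which makes it easy to find the biggest memory consumers straight away. A small sketch (the --sort option is part of procps ps, not BusyBox):

```shell
# List the five processes with the largest Rss, biggest first.
# --sort=-rss means descending sort on the rss column (procps only).
ps -eo pid,rss,comm --sort=-rss | head -5
```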
Likewise, top shows a summary of the free memory and memory usage per process:
top - 21:17:52 up 10:04, 1 user, load average: 0.00, 0.01, 0.05
Tasks: 96 total, 1 running, 95 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.7 us, 2.2 sy, 0.0 ni, 95.9 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem:   509016 total,  278524 used,  230492 free,   25572 buffers
KiB Swap:       0 total,       0 used,       0 free,  170920 cached
  PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
 1098 debian   20  0 29076  16m 8312 S  0.0  3.2  0:01.29 wicd-client
  595 root     20  0 64920 9.8m 4048 S  0.0  2.0  0:01.09 node
  866 root     20  0 28892 9152 3660 S  0.2  1.8  0:36.38 Xorg
These simple commands give you a feel for the memory usage, and give the first indication that you have a memory leak when you see that the Rss of a process keeps on increasing. However, they are not very accurate as absolute measurements of memory usage.
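A simple way to put that leak hint to work is to sample the Rss of a suspect process at intervals and watch the trend; a sketch, using this shell's own PID as a stand-in for the process under suspicion:

```shell
# Sample the Rss (in kB) of one process a few times, one second apart.
# A value that only ever climbs is the first hint of a memory leak.
# The target here is the current shell ($$); substitute the PID to watch.
for i in 1 2 3; do
    ps -o rss= -p $$
    sleep 1
done
```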
In 2009, Matt Mackall began looking at the problem of accounting for shared pages in process memory measurement and added two new metrics called the unique set size, or Uss, and the proportional set size, or Pss:

- Uss: the amount of memory that is committed to physical memory and is unique to the process; it is not shared with any other. It is the amount of memory that would be freed if the process were to terminate.

- Pss: splits the accounting of shared pages that are committed to physical memory between all the processes that have them mapped. For example, if an area of library code is mapped by ten processes, each one is charged one tenth of its Rss for that area in Pss.
The information is available in /proc/<PID>/smaps, which contains additional information for each of the mappings shown in /proc/<PID>/maps. Here is one section from such a file, which provides information about the mapping for the libc code segment:
b6e6d000-b6f45000 r-xp 00000000 b3:02 2444       /lib/libc-2.13.so
Size:                864 kB
Rss:                 264 kB
Pss:                   6 kB
Shared_Clean:        264 kB
Shared_Dirty:          0 kB
Private_Clean:         0 kB
Private_Dirty:         0 kB
Referenced:          264 kB
Anonymous:             0 kB
AnonHugePages:         0 kB
Swap:                  0 kB
KernelPageSize:        4 kB
MMUPageSize:           4 kB
Locked:                0 kB
VmFlags: rd ex mr mw me
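Since each mapping carries its own Pss line, the total Pss of a process is just the sum of those lines across its smaps file, which you can compute with a one-line awk script (shown here for the current shell):

```shell
# Total Pss of one process: add up the Pss field (kB) of every mapping
# in its smaps file. $$ is the current shell; substitute any PID.
awk '/^Pss:/ { total += $2 } END { print total " kB" }' /proc/$$/smaps
```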
There is a tool named smem that collates the information from the smaps files and presents it in various ways, including as pie or bar charts. The project page for smem is https://www.selenic.com/smem. It is available as a package in most desktop distributions. However, since it is written in Python, installing it on an embedded target requires a Python environment, which may be too much trouble for just one tool. To help with this, there is a small program named smemcap that captures the state from /proc on the target and saves it to a TAR file, which can be analyzed later on the host computer. It is part of BusyBox, but it can also be compiled from the smem source.
Running smem natively, as root, you will see these results:
# smem -t
 PID User     Command                         Swap      USS      PSS      RSS
 610 0        /sbin/agetty -s ttyO0 11           0      128      149      720
1236 0        /sbin/agetty -s ttyGS0 1           0      128      149      720
 609 0        /sbin/agetty tty1 38400            0      144      163      724
 578 0        /usr/sbin/acpid                    0      140      173      680
 819 0        /usr/sbin/cron                     0      188      201      704
 634 103      avahi-daemon: chroot hel           0      112      205      500
 980 0        /usr/sbin/udhcpd -S /etc           0      196      205      568
...
 836 0        /usr/bin/X :0 -auth /var           0     7172     7746     9212
 583 0        /usr/bin/node autorun.js           0     8772     9043    10076
1089 1000     /usr/bin/python -O /usr/           0     9600    11264    16388
------------------------------------------------------------------
  53 6                                           0    65820    78251   146544
You can see from the last line of the output that, in this case, the total Pss is about half of the total Rss.
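You can cross-check a system-wide Pss figure like that totals line without smem by summing the Pss lines across every smaps file in /proc. Run it as root so that all processes are included; the sketch below uses cat so that any smaps file it cannot read is simply skipped:

```shell
# System-wide Pss total in kB, comparable to the PSS figure on the
# totals line of smem -t. As non-root, only readable processes are
# counted; cat skips the rest (errors are discarded).
cat /proc/[0-9]*/smaps 2>/dev/null |
    awk '/^Pss:/ { total += $2 } END { print total " kB" }'
```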
If you don't have or don't want to install Python on your target, you can capture the state using smemcap, again as root:
# smemcap > smem-bbb-cap.tar
Then, copy the TAR file to the host and read it using smem -S, though this time there is no need to run as root:
$ smem -t -S smem-bbb-cap.tar
Another way to display Pss is via ps_mem (https://github.com/pixelb/ps_mem), which prints much the same information but in a simpler format. It is also written in Python.
Android also has a tool named procrank, which can be cross-compiled for embedded Linux with a few small changes. You can get the code from https://github.com/csimmonds/procrank_linux.