Now let's look at some statistics, to show you how to connect all these finer points together and make good sense of them.
When you are testing the system with the following commands, or doing the suggested exercises, you may want to see the machine paging or struggling more. One way of doing this is to limit the memory that the kernel is allowed to use, from your bootup prompt, as follows:
boot: linux mem=48M
Do not specify more memory than you actually have, as this will cause the kernel to crash.
If you have a motherboard with an old BIOS, or an old kernel, it may not be possible to access more than 64MB of memory by default. You would then have to tell the system how much memory it has, e.g. 128MB. For this you can use the boot prompt as above, or edit the /etc/lilo.conf file.
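For reference, a sketch of what the lilo.conf approach might look like; the image, label and root lines here are placeholders for whatever your existing configuration already contains:

```
# /etc/lilo.conf (fragment) -- tell the kernel about 128MB of memory
image=/boot/vmlinuz
    label=linux
    root=/dev/hda1          # placeholder: use your real root device
    append="mem=128M"
```

Remember to re-run lilo after editing the file so that the change takes effect at the next boot.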
Print the accumulated user and system times for the shell and for processes run from the shell. The return status is 0.
times is a shell builtin command, and therefore the shell needs no extra resources to find the command.
The report shows the amount of time a command takes to run in real time (stopwatch time), the time the process spends on the processor in user mode (not in system calls), and the time it spends on the processor in system mode.
The difference between the real time and the sum of the user and system times is the amount of time the process spent waiting.
If we wanted to save time we could avoid the screen altogether and send the output to /dev/null.
$ time ls -lR / > /dev/null
If you have run this command previously the time will be influenced by the output still being in the buffer cache.
The user time does not change, because the work of writing to the console driver is done in system mode; the difference shows up between the user and the system time.
We may want to reduce the real time without changing the hardware. The buffer cache may grow to handle this, but then it will use more memory, and that may cause the system to push pages into the swap area.
The time command reports, to the nearest tenth of a second, the time that the process takes to execute.
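A quick way to see all three figures yourself is the bash time keyword; the directory used below is only an example, and errors from unreadable subdirectories are discarded:

```shell
# Time a recursive listing with the output sent to /dev/null; bash
# prints the real, user and sys figures on the shell's stderr.
time ls -lR /usr/share > /dev/null 2>&1
```

Running the same command a second time will usually show a smaller real time, because the directory data is then already in the buffer cache.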
top provides an ongoing look at processor activity in real time. It displays a listing of the most CPU-intensive tasks on the system, and can provide an interactive interface for manipulating processes.
It can sort the tasks by CPU usage or by memory usage.
top [-] [d delay] [p pid] [q] [c] [C] [S] [s] [i] [n iter] [b]
|d||Specifies the delay between screen updates.|
|p||Monitor only processes with the given process id. This flag can be given up to twenty times. This option is neither available interactively nor can it be put into the configuration file.|
|q||This causes top to refresh without any delay. If the caller has superuser privileges, top runs with the highest possible priority.|
|S||Specifies cumulative mode, where each process is listed with the CPU time that it, as well as its dead children, has spent.|
|s||Tells top to run in secure mode. This disables the potentially dangerous interactive commands (see below). A secure top is a nifty thing to leave running on a spare terminal.|
|i||Start top ignoring any idle or zombie processes.|
|c||Display the command line instead of just the command name. The default behaviour has been changed as this seems to be more useful.|
|H||Show all threads.|
|n||Number of iterations. Update the display this number of times and then exit.|
|b||Batch mode. Useful for sending output from top to other programs or to a file. In this mode, top will not accept command line input. It runs until it produces the number of iterations requested with the n option or until killed. Output is plain text suitable for display on a dumb terminal.|
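Putting the b and n options together, a sketch of a one-shot snapshot suitable for a log file (the output file name is arbitrary, and the command is guarded so it degrades quietly where top is unavailable):

```shell
# One full screen of top output, no interaction, written to a file:
if command -v top > /dev/null; then
    top -b -n 1 > /tmp/top-snapshot.txt
    head -5 /tmp/top-snapshot.txt
fi
```

Run from cron, a command like this gives you a periodic record of the busiest processes without tying up a terminal.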
top reads its default configuration from two files, /etc/toprc and ~/.toprc.
|/etc/toprc||The global configuration file may be used to restrict the usage of top to the secure mode for non-privileged users.|
|~/.toprc||The personal configuration file contains two lines. The first line contains lowercase and uppercase letters specifying which fields are to be displayed, and in what order. The second line is more interesting (and important): it contains information on the other options.|
top displays a variety of information about the processor state.
|"uptime"||This line displays the time the system has been up, and the three load averages for the system. The load averages are the average number of process ready to run during the last 1, 5 and 15 minutes. This line is just like the output of uptime(1). The uptime display may be toggled by the interactive l command.|
|Processes||The total number of processes running at the time of the last update. This is also broken down into the number of tasks which are running, sleeping, stopped, or undead. The processes and states display may be toggled by the t interactive command.|
|"CPU states"||Shows the percentage of CPU time in user mode, system mode, niced tasks, iowait and idle. (Niced tasks are only those whose nice value is positive.) Time spent in niced tasks will also be counted in system and user time, so the total will be more than 100%.|
|Mem||Statistics on memory usage, including total available memory, free memory, used memory, shared memory, and memory used for buffers.|
|Swap||Statistics on swap space, including total swap space, available swap space, and used swap space.|
|PID||The process ID of each task.|
|PPID||The parent process ID of each task.|
|UID||The user ID of the task's owner.|
|User||The user name of the task's owner.|
|PRI||The priority of the task.|
|NI||The nice value of the task. Negative nice values are higher priority.|
|SIZE||The size of the task's code plus data plus stack space, in kilobytes, is shown here.|
|TSIZE||The code size of the task.|
|DSIZE||Data + Stack size.|
|TRS||Text resident size.|
|SWAP||Size of the swapped out part of the task.|
|D||Size of pages marked dirty.|
|LC||Last used processor|
|RSS||The total amount of physical memory used by the task, in kilobytes, is shown here.|
|SHARE||The amount of shared memory used by the task is shown in this column.|
|STAT||The state of the task is shown here. The state is either S for sleeping, D for uninterruptible sleep, R for running, Z for zombie, or T for stopped or traced. These states are modified by a trailing < for a process with a negative nice value, N for a process with a positive nice value, and W for a swapped-out process.|
|WCHAN||Depending on the availability of either /boot/psdatabase or the kernel link map /boot/System.map, this shows the address or the name of the kernel function in which the task is currently sleeping.|
|TIME||Total CPU time the task has used since it started. If cumulative mode is on, this also includes the CPU time used by the process's children which have died. You can set cumulative mode with the S command line option or toggle it with the interactive command S. The header line will then be changed to CTIME.|
|%CPU||The task's share of the CPU time since the last screen update, expressed as a percentage of total CPU time per processor.|
|%MEM||The task's share of the physical memory.|
|COMMAND||The task's command name, which will be truncated if it is too long to be displayed on one line. Tasks in memory will have a full command line, but swapped-out tasks will only have the name of the program in parentheses (for example, "(getty)").|
Several single-key commands are recognized while top is running (check the man pages for full details and explanations).
Some interesting examples are:
|r||Re-nice a process. You will be prompted for the PID of the task, and the value to nice it to. Entering a positive value will cause the process to be niced to a positive value, and so lose priority. If root is running top, a negative value can be entered, causing a process to get a higher than normal priority. The default renice value is 10. This command is not available in secure mode.|
|S||This toggles cumulative mode, the equivalent of ps -S, i.e., that CPU times will include a process's defunct children. For some programs, such as compilers, which work by forking into many separate tasks, normal mode will make them appear less demanding than they actually are. For others, however, such as shells and init, this behavior is correct.|
|. "f or F"||Add fields to display or remove fields from the display.|
Collect, report, or save system activity information. The sar command only reports on local activities.
/var/log/sa/sadd indicates the daily data file, where the dd parameter is a number representing the day of the month. /proc contains various files with system statistics.
sar [opts] [-o filename] [-f filename] [interval/secs] [count]
The sar command writes to standard output the contents of selected cumulative activity counters in the operating system.
The accounting system, based on the values in the count and interval parameters, writes information the specified number of times spaced at the specified intervals in seconds. If the interval parameter is set to zero, the sar command displays the average statistics for the time since the system was booted.
The default value for the count parameter is 1. If its value is set to zero, then reports are generated continuously.
The collected data can also be saved in the file specified by the -o filename flag, in addition to being displayed onto the screen. If filename is omitted, sar uses the standard system activity daily data file, the /var/log/sa/sadd file, where the dd parameter indicates the current day.
The default version of the sar command (sar -u) might be one of the first facilities the user runs to begin system activity investigation, because it monitors major system resources.
|%user||Percentage of CPU utilisation while executing at the user level (application).|
|%nice||Percentage of CPU utilisation while executing at the user level with nice priority.|
|%system||Percentage of CPU utilisation while executing at the kernel or system level.|
|%idle||Percentage of time that the CPU or CPUs were idle.|
If CPU utilization is near 100 percent (user + nice + system), then we are CPU-bound. However, I would opt to monitor this for a number of days before making that decision, and I would also look at all the other available statistics, so that I have a complete picture of how the system is running before changing anything.
sar -o data.file interval count >/dev/null 2>&1 &
All data is captured in binary form and saved to a file (data.file). The data can then be selectively displayed with the sar command using the -f option.
Set the count parameter to select records at count second intervals. If this parameter is not set, all the records saved in the file will be selected.
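Putting the two halves together, a sketch of the collect-then-replay workflow; the file name and sample counts here are arbitrary, and the commands are guarded because sar is only present where the sysstat package is installed:

```shell
# Collect three one-second samples into a binary file, then replay
# only the CPU figures (-u) from that file afterwards:
if command -v sar > /dev/null; then
    sar -o /tmp/sar.data 1 3 > /dev/null 2>&1
    sar -u -f /tmp/sar.data
fi
```

The same binary file can be replayed repeatedly with different keywords (-b, -n DEV, -r and so on), which is why collecting in the background with -o is so useful.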
|tps||Total number of transfers per second that were issued to the physical disk. A transfer is an I/O request to the physical disk. Multiple logical requests can be combined into a single I/O request to the disk.|
|rtps||Total number of read requests per second issued to the physical disk.|
|wtps||Total number of write requests per second issued to the physical disk.|
|bread/s||Total amount of data read from the drive in blocks per second. Blocks are equivalent to sectors with post 2.4 kernels and therefore have a size of 512 bytes.|
|bwrtn/s||Total amount of data written to the drive in blocks per second.|
Looking at the IO report: if the amount of data actually read from and written to the disk matches what was requested, then you do not have a bottleneck. However, if the read/write requests are not being met, that is where the system bottleneck resides.
|proc/s||Total number of processes created per second.|
How busy is the system?
More and more the emphasis is being placed on having an efficient network and here is an excellent tool to monitor the network setup.
With the DEV keyword, statistics from the network devices are reported. The following values are displayed:
|IFACE||Name of the network interface for which statistics are reported.|
|rxpck/s||Total number of packets received per second.|
|txpck/s||Total number of packets transmitted per second.|
|rxbyt/s||Total number of bytes received per second.|
|txbyt/s||Total number of bytes transmitted per second.|
|rxcmp/s||Number of compressed packets received per second (for cslip etc.).|
|txcmp/s||Number of compressed packets transmitted per second.|
|rxmcst/s||Number of multicast packets received per second.|
With the EDEV keyword, statistics on failures (errors) from the network devices are reported. The following values are displayed:
|IFACE||Name of the network interface for which statistics are reported.|
|rxerr/s||Total number of bad packets received per second.|
|txerr/s||Total number of errors that happened per second while transmitting packets.|
|coll/s||Number of collisions that happened per second while transmitting packets.|
|rxdrop/s||Number of received packets dropped per second because of a lack of space in Linux buffers.|
|txdrop/s||Number of transmitted packets dropped per second because of a lack of space in Linux buffers.|
|txcarr/s||Number of carrier-errors that happened per second while transmitting packets.|
|rxfram/s||Number of frame alignment errors that happened per second on received packets.|
|rxfifo/s||Number of FIFO overrun errors that happened per second on received packets.|
|txfifo/s||Number of FIFO overrun errors that happened per second on transmitted packets.|
With the SOCK keyword, statistics on sockets in use are reported. The following values are displayed:
|Totsck||Total number of used sockets.|
|Tcpsck||Number of TCP sockets currently in use|
|Udpsck||Number of UDP sockets currently in use.|
|Rawsck||Number of RAW sockets currently in use.|
|ip-frag||Number of IP fragments currently in use.|
The FULL keyword is equivalent to specifying all the keywords above and therefore all the network activities are reported.
|runq-sz||Run queue length (number of processes waiting for run time). This does not include the processes waiting for resources on the sleep queue; only the processes that need one more resource before they can run, and that resource is access to the CPU.|
|plist-sz||Number of processes in the process list.|
|ldavg-1||System load average for the last minute.|
|ldavg-5||System load average for the past 5 minutes.|
The run queue should not be too long. Depending on how busy your system is, there should never be that many processes holding all their resources and waiting only to run on the CPU. If the queue is always long (not just long at month-end time), you may have a process that is continually hogging the CPU. However, if the load is high but the run queue is mostly empty, look for IO or memory problems.
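The load averages that sar reports here come straight from the kernel, via /proc/loadavg, and you can read that file directly:

```shell
# 1-, 5- and 15-minute load averages, then running/total tasks,
# then the PID of the most recently created process:
cat /proc/loadavg
```

Comparing the first three fields against the running/total ratio in the fourth gives you a rough feel for the run-queue length without starting sar at all.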
The following values are displayed:
|Kbmemfree||Amount of free memory available in kilobytes.|
|Kbmemused||Amount of used memory in kilobytes. This does not take into account memory used by the kernel itself.|
|%memused||Percentage of used memory.|
|Kbmemshrd||Amount of memory shared by the system in kilobytes. Always zero with 2.4 kernels.|
|Kbbuffers||Amount of memory used as buffers by the kernel in kilobytes.|
|Kbcached||Amount of memory used to cache data by the kernel in kilobytes.|
|Kbswpfree||Amount of free swap space in kilobytes.|
|Kbswpused||Amount of used swap space in kilobytes.|
|%swpused||Percentage of used swap space.|
This information can fill in your diagram of memory division, and of the kernel's usage of its part of memory, very nicely.
If your system is running slowly, and you see that you are using full memory and having to use swap space on a REGULAR basis (not just for a once-off job), you may need to consider increasing the size of your memory. You might also choose to flush the buffers more often, and get rid of cache that has not been used for a while.
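A quick cross-check of the same figures, straight from the kernel; /proc/meminfo is where these totals originate:

```shell
# Memory and swap totals as the kernel sees them (values in kB):
grep -E '^(MemTotal|MemFree|SwapTotal|SwapFree)' /proc/meminfo
```

If SwapFree is consistently well below SwapTotal across many readings, that is the "regular basis" swap usage described above.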
The more complete a picture you build of your entire system, and of how it is working, the better the decisions you can make about performance issues.
|Dentunusd||Number of unused cache entries in the directory cache.|
|file-sz||Number of used file handles.|
|%file-sz||Percentage of used file handles with regard to the maximum number of file handles that the Linux kernel can allocate.|
|inode-sz||Number of used inode handlers.|
|super-sz||Number of super block handlers allocated by the kernel.|
|%super-sz||Percentage of allocated super block handlers with regard to the maximum number of super block handlers that Linux can allocate.|
|dquot-sz||Number of allocated disk quota entries.|
|%dquot-sz||Percentage of allocated disk quota entries with regard to the maximum number of cached disk quota entries that can be allocated.|
|rtsig-sz||Number of queued RT signals.|
|%rtsig-sz||Percentage of queued RT signals with regard to the maximum number of RT signals that can be queued.|
In this table we refer to the internal structure of a filesystem, and although we have mentioned this information before, it is likely that this will be better understood after studying the chapter in Finer Points on Filesystems.
Again, if the tables are too full it will affect performance; however, if they are too empty, then maybe too much space has been allocated to the system tables. If your performance is not degraded, do not change anything. Monitor regularly for at least one month before making any decisions based on this report.
|cswch/s||Total number of context switches per second.|
Context switching occurs when a process running on the CPU is moved back to the run queue to await another turn on the processor. So what would it imply if the number of context switches per second is very high?
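sar derives cswch/s from successive readings of the kernel's cumulative counter, which you can inspect yourself:

```shell
# Total context switches since boot; two readings a second apart
# give you roughly the per-second rate that sar reports:
grep '^ctxt' /proc/stat
sleep 1
grep '^ctxt' /proc/stat
```

Subtracting the first figure from the second gives the context switches in that one-second interval.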
The following values are displayed:
|pswpin/s||Total number of swap pages the system brought in per second.|
|pswpout/s||Total number of swap pages the system brought out per second.|
Again a memory usage report; monitor this one carefully if system performance has degraded.
The SELF keyword indicates that statistics are to be reported for the child processes of the sar process itself. The ALL keyword indicates that statistics are to be reported for all the child processes of all the system processes.
Again we are looking at a report on CPU usage versus memory. We have discussed this before; mainly I find this report interesting for the genealogy issues. As this is a performance section, though: with what you know so far of child/parent processes and the system calls involved, what would this report tell you?
At the present time, no more than 256 processes can be monitored simultaneously. The following values are displayed:
|cminflt/s||Total number of minor faults the child processes have made per second, those which have not required loading a memory page from disk.|
|cmajflt/s||Total number of major faults the child processes have made per second, those which have required loading a memory page from disk.|
|%cuser||Percentage of CPU used by the child processes while executing at the user level (application).|
|%csystem||Percentage of CPU used by the child processes while executing at the system level (kernel).|
|cnswap/s||Number of pages from the child process address spaces the system has swapped out per second.|
Run the following commands and check the results, how is your system performing:
sar -u 2 5 -- Report CPU utilization for each 2 seconds. 5 lines are displayed.
sar -r -n DEV -f /var/log/sa/sa16 -- Display memory, swap space and network statistics saved in daily data file 'sa16'. You may not have a file called sa16, check in the /var/log/sa directory and see which days you have stored and use one of those files.
sar -A -- Display all the statistics saved in current daily data file.
Run the following commands as close together as possible. From your primary screen (this will tie up the IO subsystem):
# dd if=/dev/root of=/dev/null
From your secondary screen run the following:
# sar -b 1 30
This will show you what is happening on your system. Make a note of what you see:
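The same exercise can be run from a single shell by backgrounding the load generator. This sketch substitutes a plain file write for the /dev/root read (so it works without raw access to the root device), and guards the sar call since the sysstat package may not be installed:

```shell
# Generate disk IO in the background, watch the transfer rates for
# five seconds, then clean up:
dd if=/dev/zero of=/tmp/ddtest bs=1M count=50 2> /dev/null &
if command -v sar > /dev/null; then
    sar -b 1 5
fi
wait
rm -f /tmp/ddtest
```

You should see the bwrtn/s figures climb sharply while the dd is running, mirroring what the two-screen version shows.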
vmstat [-n] [delay [count]] vmstat [-V]
vmstat reports information about processes, memory, paging, block IO, traps, and cpu activity.
vmstat does not require special permissions to run. These reports are intended to help identify system bottlenecks. Linux vmstat does not count itself as a running process. All Linux blocks are currently 1k, except for CD-ROM blocks which are 2k.
The first report produced gives averages since the last reboot. Additional reports give information on a sampling period of length delay. The process and memory reports are instantaneous in either case.
|-n||switch causes the header to be displayed only once rather than periodically.|
|delay||is the delay between updates in seconds. If no delay is specified, only one report is printed with the average values since boot.|
|count||is the number of updates. If no count is specified and delay is defined, count defaults to infinity.|
|-V||switch results in displaying version information.|
|r||The number of processes waiting for run time.|
|b||The number of processes in uninterruptible sleep.|
|w||The number of processes swapped out but otherwise runnable. This field is calculated, but Linux never desperation swaps.|
|swpd||the amount of virtual memory used (kB).|
|free||the amount of idle memory (kB).|
|buff||the amount of memory used as buffers (kB).|
|si||Amount of memory swapped in from disk (kB/s).|
|so||Amount of memory swapped to disk (kB/s).|
|bi||Blocks received from a block device (blocks/s).|
|bo||Blocks sent to a block device (blocks/s).|
|in||The number of interrupts per second, including the clock.|
|cs||The number of context switches per second.|
The remaining columns (us, sy, id) are percentages of total CPU time.
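A minimal invocation tying the options and the fields together; the command is guarded because vmstat ships with the procps package and may be absent:

```shell
# Three samples two seconds apart; the first line is the since-boot
# average, the following lines are live figures:
if command -v vmstat > /dev/null; then
    vmstat 2 3
fi
```

Watch the si/so columns in particular: sustained non-zero values there are the swapping activity discussed throughout this section.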
Report Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions.
iostat [ -c | -d ] [ -k ] [ -t ] [ -V ] [ -x [ device ] ] [ interval [ count ] ]
The iostat command is used for monitoring system input/output device loading by observing the time the devices are active in relation to their average transfer rates.
The iostat command generates reports that can be used to change system configuration to better balance the input/output load between physical disks.
The iostat command generates two types of reports:
1.the CPU Utilization report
2.and the Device Utilization report.
The first report generated by the iostat command is the CPU Utilization Report. The detail in this report is taken initially from the time the system was booted up; thereafter, each report covers only the time since the last report.
For multiprocessor systems, the CPU values are global averages among all processors.
The report has the following format:
|%user||Show the percentage of CPU utilization that occurred while executing at the user level (application).|
|%nice||Show the percentage of CPU utilization that occurred while executing at the user level with nice priority.|
|%sys||Show the percentage of CPU utilization that occurred while executing at the system level (kernel).|
|%idle||Show the percentage of time that the CPU or CPUs were idle.|
You need the idle time of your system to be relatively low. You can expect the idle time to be high when the load average is low.
The processor may not have a runnable process on it, while the current processes are waiting for IO. If the load average and the idle time are both high, then you probably do not have enough memory. In the worst-case scenario, where you do have enough memory, you may have a network or disk related problem.
If you have 0% idle time and your users are happy, that means your system is being used well: it is busy, and has enough resources to manage the load. Upgrading to a faster CPU would nevertheless still improve the performance of this machine.
If your system is running at only 25% idle, then a faster CPU is not your problem, and you must look at more memory and a faster disk.
A system that is running 50% in a system state is probably spending a lot of time doing disk IO. To improve this you could speak to the developers and see how they have written their programs, for example: are they moving characters rather than blocks of data at a time? Also check your filesystem and disk structure; maybe that could be improved too.
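A sketch of the CPU-only report described above; the interval and count are arbitrary, and the command is guarded since iostat belongs to the sysstat package:

```shell
# Two CPU utilization reports two seconds apart; the first is the
# since-boot summary, the second covers the interval in between:
if command -v iostat > /dev/null; then
    iostat -c 2 2
fi
```

The %sys column of the second (interval) report is the figure to check against the 50%-in-system-state rule of thumb above.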
The second report generated by the iostat command is the Device Utilization Report. The device report provides statistics on a per physical device or partition basis.
The report may show the following fields, (depending on whether -x and -k options are used):
|Device||This column gives the device name, which is displayed as hdiskn with 2.2 kernels, for the nth device. It is displayed as devm-n with newer kernels, where m is the major number of the device, and n a distinctive number. When -x option is used, the device name as listed in the /dev directory is displayed. (See Chapter on Tips and Tricks for more info on Major and Minor device numbers)|
|tps||Indicate the number of transfers per second that were issued to the device. A transfer is an I/O request to the device.|
|Blk_read/s||Indicate the amount of data read from the drive expressed in a number of blocks per second. Blocks are equivalent to sectors with post 2.4 kernels and therefore have a size of 512 bytes.|
|Blk_wrtn/s||Indicate the amount of data written to the drive expressed in a number of blocks per second.|
|Blk_read||The total number of blocks read.|
|Blk_wrtn||The total number of blocks written.|
|kB_read/s||Indicate the amount of data read from the drive expressed in kilobytes per second. Data displayed are valid only with kernels 2.4 and later.|
|kB_wrtn/s||Indicate the amount of data written to the drive expressed in kilobytes per second. Data displayed are valid only with kernels 2.4 and later.|
|kB_read||The total number of kilobytes read. Data displayed are valid only with kernels 2.4 and later.|
|kB_wrtn||The total number of kilobytes written. Data displayed are valid only with kernels 2.4 and later.|
|rrqm/s||The number of read requests merged per second that were issued to the device.|
|wrqm/s||The number of write requests merged per second that were issued to the device.|
|r/s||The number of read requests that were issued to the device per second.|
|w/s||The number of write requests that were issued to the device per second.|
|rsec/s||The number of sectors read from the device per second.|
|wsec/s||The number of sectors written to the device per second.|
|rkB/s||The number of kilobytes read from the device per second.|
|wkB/s||The number of kilobytes written to the device per second.|
|avgrq-sz||The average size (in sectors) of the requests that were issued to the device.|
|avgqu-sz||The average queue length of the requests that were issued to the device.|
|await||The average time (in milliseconds) for I/O requests issued to the device to be served.|
|svctm||The average service time (in milliseconds) for I/O requests that were issued to the device.|
|%util||Percentage of CPU time during which I/O requests were issued to the device.|
It would be good to see when the IO is slow or fast, and to move the relevant slower process runs to late at night, when the devices are less used and when a slower response time does not matter as much.
We have discussed the ps command earlier, so I am not going to take you through the structure or options of the command. However, there are a couple of issues that you have likely not been told about before.
In Linux, the ps command that we are going to look at is the one that uses /proc for the information that it reports.
This version of ps accepts several kinds of options, read the man pages for the list. The following table expresses some of the output modifiers that can be used. There are more than this but I thought these were the most useful:
|-H||show process hierarchy (forest)|
|-m||show all threads|
|C||use raw CPU time for %CPU instead of decaying average|
|S||include some dead child process data (as a sum with the parent)|
|c||true command name|
|e||show environment after the command|
|n||numeric output for WCHAN and USER|
|--cols||set screen width|
|--columns||set screen width|
|--html||HTML escaped output|
|--headers||repeat header lines|
|--lines||set screen height|
This ps works by reading the virtual files in /proc. This ps does not need to be suid kmem or have any privileges to run.
Programs swapped out to disk will be shown without command line arguments and (unless the c option is given) in brackets.
%CPU shows the cputime/realtime percentage. It is time used divided by the time the process has been running.
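A sketch of where those numbers come from: fields 14 and 15 of /proc/&lt;pid&gt;/stat hold the process's utime and stime in clock ticks (this simple awk assumes the command name in field 2 contains no spaces, which holds for awk itself):

```shell
# user + system CPU time, in clock ticks, for the awk process itself:
awk '{print "utime+stime:", $14 + $15, "ticks"}' /proc/self/stat
```

Dividing that tick count by the clock rate and by the elapsed time since the process started gives the cputime/realtime percentage that ps reports.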
The SIZE and RSS fields don't count the page tables and the task_struct of a proc; this is at least 12k of memory that is always resident. SIZE is the virtual size of the proc (code+data+stack).
Both Linux and FreeBSD have a way of displaying the various states of a process at any one time. Naturally enough, the command that displays this information is the "process status" command, ps.
This information can become invaluable to you when analysing your performance or needing to know how far a process is towards completion and what might be holding the process up.
You have a "process state" and a "flag" set for each process:
PROCESS FLAGS
ALIGNWARN   001  print alignment warning msgs
STARTING    002  being created
EXITING     004  getting shut down
PTRACED     010  set if ptrace(0) has been called
TRACESYS    020  tracing system calls
FORKNOEXEC  040  forked but didn't exec
SUPERPRIV   100  used super-user privileges
DUMPCORE    200  dumped core
SIGNALED    400  killed by a signal

PROCESS STATE CODES
D  uninterruptible sleep (usually IO)
R  runnable (on run queue)
S  sleeping
T  traced or stopped
Z  a defunct ("zombie") process
NOTE: For BSD formats and when the "stat" keyword is used, additional letters may be displayed:
It is also possible to find out the current wait states of the processes; in other words, what resource or activity they are waiting for. You will need to examine the man pages of your version of Linux very carefully for this, but the command will look similar to the following:
debian:~# ps axo user,pid,stat,f,wchan,command
USER       PID STAT  F   WCHAN  COMMAND
root         1 S    100  select init
root         2 SW   040  bdflus [kflushd]
root         3 SW   040  kupdat [kupdate]
root         4 SW   040  kswapd [kswapd]
root         5 SW   040  contex [keventd]
root       148 S    040  select /sbin/syslogd
root       151 S    140  syslog /sbin/klogd
root       175 S    140  select /usr/sbin/inetd
root       179 S    140  select /usr/sbin/lpd
root       182 S    140  select /usr/sbin/cupsd
root       190 S    140  select /usr/sbin/sshd
daemon     194 S    040  nanosl /usr/sbin/atd
root       197 S    040  nanosl /usr/sbin/cron
root       201 S    100  read_c -bash
root       202 S    000  read_c /sbin/getty 38400 tty2
root       203 S    000  read_c /sbin/getty 38400 tty3
root       204 S    000  read_c /sbin/getty 38400 tty4
root       205 S    000  read_c /sbin/getty 38400 tty5
root       206 S    000  read_c /sbin/getty 38400 tty6
root       336 S    040  select /sbin/dhclient-2.2.x -q
root       337 S    140  select /usr/sbin/sshd
root       340 S    100  wait4  -bash
root       579 R    100  -      ps axo user,pid,stat,f,wchan,command
The WCHAN field is actually resolved from the numeric address by inspecting the kernel's symbol namelist, which is usually stored in a map file such as /boot/System.map.
If you tell ps not to resolve the numerics ("n" flag), then you can see the hex values:
debian:~# ps anxo user,pid,stat,f,wchan,command
USER       PID STAT  F   WCHAN  COMMAND
0            1 S    100  130dd4 init
0            2 SW   040  12a233 [kflushd]
0            3 SW   040  12a298 [kupdate]
0            4 SW   040  12381a [kswapd]
0            5 SW   040  11036b [keventd]
0          148 S    040  130dd4 /sbin/syslogd
0          151 S    140  114d1a /sbin/klogd
0          175 S    140  130dd4 /usr/sbin/inetd
0          179 S    140  130dd4 /usr/sbin/lpd
0          182 S    140  130dd4 /usr/sbin/cupsd
0          190 S    140  130dd4 /usr/sbin/sshd
1          194 S    040  11373c /usr/sbin/atd
0          197 S    040  11373c /usr/sbin/cron
0          201 S    100  1bee13 -bash
0          202 S    000  1bee13 /sbin/getty 38400 tty2
0          203 S    000  1bee13 /sbin/getty 38400 tty3
0          204 S    000  1bee13 /sbin/getty 38400 tty4
0          205 S    000  1bee13 /sbin/getty 38400 tty5
0          206 S    000  1bee13 /sbin/getty 38400 tty6
0          336 S    040  130dd4 /sbin/dhclient-2.2.x -q
0          337 S    140  130dd4 /usr/sbin/sshd
0          340 S    100  119017 -bash
0          580 R    100  -      ps anxo user,pid,stat,f,wchan,command
As you can see it is possible to find out much more than initially apparent. Read the man pages of ps carefully.
These commands on their own cannot improve the performance of your machine. However, if you monitor the resources, it is possible to re-allocate them in a more balanced fashion, to suit your needs and your system load.
The Linux default resource allocations are not the resource allocations that will suit every system but they should suit most systems.