 
Memory Usage - "What is using all of the memory"?
Written by Geoff Wild   
Friday, 08 June 2007 00:06
Last modified: July 23, 2004. Latest version available at external ftp site: ftp://eh:spear9@hprc.external.hp.com/memory.htm

Table of Contents

    Introduction
        The memory line in the output of swapinfo
        swapinfo defined
    A.  Review Buffer Cache size
    B.  Monitoring Memory Usage
        1.  plain memory (malloc) - ps, procsize, kmeminfo, and Glance
        2.  shared memory (shmget) - ipcs, shminfo, and procsize
        3.  memory mapped (mmap) - shminfo and procsize
    C.  OS Memory Leaks/Hogs
        1.  Check for OS memory leaks with kmeminfo
        2.  Check for known memory hogs (e.g. JFS inode cache)
    D.  Application Memory Leaks
    E.  32-bit memory limitation
    F.  SHMEM_MAGIC
    G.  How much data space can application get?
    H.  Memory Windows: details, patches, how to check for Memory Windows, memwin_stats
    I.  Memory Usage as seen in "dmesg", "swapinfo", "top", and "glance"
    J.  Troubleshooting of "Not enough space", "out of memory", and "Not enough core"
    Summary (i.e. memory report download info)
    References

Introduction

The purpose of this document is to describe how memory is used on HP-UX and the tools, both supported and unsupported, that are available to examine and report memory usage.  See below for details.






The memory line in the output of swapinfo...

  • The memory line is infamously misleading and does not refer to actual physical memory use! Rather, it is the size of pseudoswap, which happens to be calculated as 75% of the size of RAM (a.k.a. memory). As the name ("pseudo") implies, it does NOT exist. Pseudoswap is enabled by default with the kernel parameter swapmem_on(5) set to 1.  Don't worry about this line; just look at the total line for total used. It is also sometimes interesting to look at the device PCT USED as an indication of how much swapping has occurred since the box was last rebooted.
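  • To confirm whether pseudoswap is enabled on a running 11.x system, can query the tunable directly (a minimal sketch using kmtune; on older releases check /stand/system instead):

           # kmtune -q swapmem_on        (a value of 1 means pseudoswap is enabled)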


swapinfo description and example...

  • Use 'swapinfo -tm' to get a complete/total picture of swap usage (see below for example and details.)  Pay particular attention to the total line, as it indicates how much swap space has actually been reserved. When this percentage gets near 100%, processes will not start up (unable to fork process) and new shared memory segments can not be created.

  • swapinfo -tm  example and explanation:

             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev         288      83     205   29%       0       -    1  /dev/vg00/lvol2
reserve       -     141    -141
memory      102      41      61   40%
total       390     265     125   68%       -       0    -
    • dev line(s):
      • are the actual physical swap device(s)
      • show if swapping has actually occurred. In other words, the  PCT USED column in the dev lines represents the value last attained during a previous period of swapping. This is analogous to the high-water mark that a flood leaves.
      • to check to see if swapping is currently occurring, use 'vmstat -v 5 5' to see if the 'po' (page outs) is sustained above 0.
    • reserve line(s)
      • indicate how much of the swap device(s) has(have) been set aside for memory should it need to be swapped.
    • memory line:
      • indicative of how much of pseudo-swap has been reserved
      • when present, indicates pseudo-swap is enabled (i.e. the swapmem_on kernel parameter is set to 1, which is the default). The size of pseudoswap is calculated as 75% of the size of RAM (a.k.a. memory).  In other words, it does not refer to actual physical memory use!  Pseudo-swap was designed specifically for large memory systems for which actual swapping is never (or rarely) expected to occur, so there's less need to use actual physical disk space for swap. For more information, see swapmem_on(5), which reads:

               In previous versions of HP-UX, system configuration required sufficient physical swap space for the maximum possible
               number of processes on the system. This is because HP-UX reserves swap space for a process when it is created, to
               ensure that a running process never needs to be killed due to insufficient swap.
               This was difficult, however, for systems needing gigabytes of swap space with gigabytes of physical memory, and those
               with workloads where the entire load would always be in core. This tunable was created to allow system swap space to
               be less than core  memory. To accomplish this, a portion of physical memory is set aside as 'pseudo-swap' space.
               While actual swap space is still available, processes still reserve all the swap they will need at fork or execute time from
               the physical device or file system swap. Once this swap is completely used, new processes do not reserve swap, and
               each page which would have been swapped to the physical device or file system is instead locked in memory and
               counted as part of the pseudo-swap space.
    • total line:
      •  the PCT USED value shown in the total line indicates how much swap space has actually been reserved. When this percentage gets near 100%, processes will not start up (unable to fork process) and new shared memory segments can not be created.
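      •  TRICK: to watch the total line's reservation over time, a minimal sh sketch (the 300-second interval is just a suggestion):

           # while true
           > do
           >    date; swapinfo -tm | grep '^total'
           >    sleep 300
           > done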

A. Review Buffer Cache size
  • Review Buffer Cache size - Buffer cache is, by default, allowed to grow to 50% of RAM (see kernel parameter dbc_max_pct(5)).  A buffer cache sweet spot is 400 Mb or 20% of memory, whichever is smaller; of course, this may vary from system to system.  To check the current size of the buffer cache, either run "sysdef | grep bufpages" (and multiply bufpages by 4096 to approximate the current size of buffer cache in bytes; see the worked example below) or use glance's memory screen to see what size "BufCache" is.
  •   Note: Although buffer cache is dynamic in size, it decreases only under memory pressure, and then only very slowly. So, the buffer cache often grows fairly quickly to dbc_max_pct and decreases (slowly) only when memory pressure is high.
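  •   Worked example (the bufpages value shown is hypothetical). Since pages are 4096 bytes, dividing bufpages by 256 gives the buffer cache size in Mb:

           # sysdef | grep bufpages
           bufpages          124928  -       0       -       -
           # expr 124928 / 256
           488                           (i.e. buffer cache is currently ~488 Mb)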



Q: How much RAM (memory) does the system have?


          A: Choose one of the following methods:


                 1. Use adb to query the kernel for the size of physical memory:

                     a.  11.x   # echo phys_mem_pages/D | adb -k /stand/vmunix /dev/mem
                         10.x   # echo physmem/D | adb -k /stand/vmunix /dev/mem

                     b.  Multiply the output of adb by 4096 to get the size of RAM in bytes
                         (see the worked example below).

                 2. Run:
                    # dmesg | grep Phys

                 3. Check with glance: in the Memory Report, look at the value of Phys Mem.
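                 For example, method 1 on an 11.x box might look like this (the page count shown is hypothetical). Since pages are 4096 bytes, dividing the page count by 256 gives Mb:

           # echo phys_mem_pages/D | adb -k /stand/vmunix /dev/mem
           phys_mem_pages:
           phys_mem_pages: 524288
           # expr 524288 / 256
           2048                          (i.e. 2 GB of RAM)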





B. Monitoring Memory Usage

  • There are 3 ways for memory to be allocated, all requiring an equivalent amount of swap.
    1. plain memory as allocated with  malloc(3C) system call.
    2. shared memory as allocated with shmget(2) system call.
    3. memory mapped files as allocated with mmap(2) system call.  

             


  1. plain memory

a) Use ps(1) to report process memory usage (does NOT include shared memory or mmap'ed files), and sort(1) to see the largest memory users first. Start with:


           # ps -efl | sort -rnk 10 | more


      •  And then look at the 10th column (SZ) in the output to see the amount of memory used by the process for data/text and stack. This value is in pages, so multiply by 4096 to determine the size in bytes.  Any time you see that the size (SZ) is a four-digit number, that's relatively large, so it's one to watch over time to see if it continues to grow and therefore may have a memory leak.
 
      • For example:


       # ps -efl | sort -rnk 10 | more
         F S  UID   PID  PPID  C PRI NI    ADDR   SZ   WCHAN   STIME TTY   TIME COMD
         1 S   root  904    1  0 154 20 48052500 6682  a229c8 May 26  ?   5:03 /usr/sbin/mib2agt
         141 R  root 1124   1  0 -16 20 43c91400 2596   -  May 26  ?   9:52 /opt/perf/bin/midaemon
         1 S   root 6572 6556 0 154 20 43142f00 1650  a229c8 09:29:02 pts/tj 0:04 swlist
         1 R   root 1246 1085 0 152 20 48232000 925     -  May 26  ?   0:58
/opt/perf/bin/rep_server -t  
         SCOPE /var/opt/perf/datafiles/lo

 

      • As seen in this example, the mib2agt process is using an excessive amount of memory. This binary, mib2agt, has a known memory leak fixed in PHSS_27858 (ITRC ftp site download).  This patch DOES NOT require a reboot. Furthermore, mib2agt can be killed and then restarted with 'kill <mib2agt_PID>' followed by '/usr/sbin/mib2agt'. But only restart it if you need to support SNMP requests (e.g. OpenView). If not needed, it can be configured to not start at bootup by modifying /etc/rc.config.d/SnmpMib2.
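      • To put such an SZ figure in perspective, divide the page count by 256 to get Mb (pages are 4096 bytes). Using the mib2agt example above:

           # expr 6682 / 256
           26                            (i.e. mib2agt is using ~26 Mb for data/text/stack)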

              

      •  Alternative ps command: Can use the UNIX95 options to look at both Virtual Size as well as the actual Size.
        • Run:
          # UNIX95=1 ps -efo vsz,sz,pid,args | grep -v grep | sort -rnk 1 | more

        • For example:
          # UNIX95=1 ps -efo vsz,sz,pid,args |grep -v grep | sort -rnk 1 | more
          VSZ     SZ   PID COMMAND
          12252  627  2745 /opt/OV/bin/ovdbrun -c /var/opt/OV/share/databases/analysis/
          9060  1214  2362 /opt/omni/lbin/rds -d
          8808  1892  2677 /opt/hpwebjet-5.5/hpwebjetd


    b) For information beyond data, stack, and text usage, can use an unsupported utility called procsize, which breaks down memory by: UAREA, TEXT, DATA, STACK, Shared Memory (SHMEM), & Memory Mapped files (MMAP).
# ./procsize -fcn |sort -rnk 11 | more




                c) Can also look at memory usage, by process, with an unsupported utility called kmeminfo

                    

      • kmeminfo is an unsupported utility that can be used to examine/report memory usage.   
        kmeminfo is available for download here:

          System:     hprc.external.hp.com  (192.170.19.51)
          Login:      eh
          Password:   spear9
                                   ftp://eh:spear9@hprc.external.hp.com/kmeminfo.sh

 

      • For example:
             # ./kmeminfo -user
             kmeminfo (3.57)
             libp4 (7.124): Opening /stand/vmunix /dev/kmem
             Boot time: Mon Nov 25 12:01:58 2002
             Dump time: Mon Jan  6 12:24:21 2003
 -----------------------------------------------------------
             Summary of user processes memory usage:

             Process list sorted by resident set size ...

                  proc      vas p_pid    va_rss   va_prss va_ucount command
             0x0ab6180 0x1de3400  3185     3895      3865     7678 mxagent
             0x0abca00 0x1c1b900  1538     2051      2040     8867 rbootd
             0x0ac19c0 0x1f46f00  3184     1563      1533     7207 mxrmi
             0x0abccc0 0x1d55800  2434     1236      1032     5364 rds
             0x0acb3c0 0x2095200 12454       919       889     7415 kmeminfo
             0x0ac2d00 0x1e49800  2635      527       471     8715 ns-slapd
             0x0ac1f40 0x1df5a00  2684      380       365     3371 ns-admin
             0x0abc480 0x1bbb000  1476      359       277     3349 dmisp
             0x0ac4300 0x1e83600  2853      318       264     4237 hpwebjetd



                  d) Can use glance's process list or application list. For example:

                                 PROCESS LIST                      Users=    5
                              User      CPU Util     Cum     Disk           Thd
Process Name   PID   PPID Pri Name   (  100% max)    CPU   IO Rate    RSS   Cnt
--------------------------------------------------------------------------------
pax          13819  13818 148 root      2.7/ 5.8   273.3  9.4/32.8   284kb    1
glance       14464   1822 158 root      2.1/ 3.1     3.0  0.0/ 2.1   4.3mb    1
scopeux       1715      1 127 root      1.7/ 0.2   518.4  1.5/ 0.0   4.1mb    1
swapper          0      0 127 root      1.5/ 0.8  2213.0  0.3/ 0.0    16kb    1
java         10095      1 168 root      1.0/ 2.7   348.7  0.0/ 4.2  42.0mb   28
vxfsd           35      0 138 root      0.2/ 0.1   289.4  1.9/ 1.3   352kb   16

                                APPLICATION LIST                    Users=    5
                          Num Active  CPU  AvgCPU  Logl   Phys     Res    Virt
Idx Application         Procs  Procs  Util  Util    IO     IO      Mem     Mem
--------------------------------------------------------------------------------
  1 other                    2     0   0.0   0.0    0.0    0.0   804kb  19.3mb
  2 network                 55     5   0.4   0.3    0.0    0.0  12.1mb  35.4mb
  3 memory_management        3     3   1.6   1.8    0.0    1.1    96kb   376kb
  4 other_user_root        101    34  52.3  43.1   60.9   65.2 109.9mb 614.0mb

      • A trial version of Glance is available on the application CDs (usually on CD #2 or #3)
      • Glance is not available for download.

        Glance  product #'s
        • 11.x s700:   B3691AA  B3699AA
          • Trial version: B3691AA_TRY   B3699AA_TRY
        • 11.x s800:   B3693AA   B3701AA 
          • Trial version: B3693AA_TRY B3701AA_TRY




  • 2) shared memory as allocated with shmget(2) system call.


    • a) Can look at shared memory usage with ipcs(1). For example:

            # ipcs -mpb | more
IPC status from /dev/kmem as of Wed Mar  3 07:39:51 2004
T      ID     KEY        MODE        OWNER     GROUP  SEGSZ  CPID  LPID
Shared Memory:
m       0 0x41200007 --rw-rw-rw-      root      root    348   636   636
m       1 0x4e000002 --rw-rw-rw-      root      root  61760   636   638
m       2 0x41241878 --rw-rw-rw-      root      root   8192   636   638
m       3 0x000024ef --rw-rw-rw-      root      root   7712  1143  1137
m       4 0x30205f0d --rw-rw-rw-      root      root 1048576  1184  1226
m    1605 0x0c6629c9 --rw-r-----      root      root 19059552  1823 13457
m     606 0x49180013 --rw-r--r--      root      root  22908  1804  1903
m       7 0x06347849 --rw-rw-rw-      root      root  77384  1823  1903
m    7208 0x5e1c019c --rw-------      root       sys    512 19627 19627
m    3409 0x00000000 D-rw-------      root      root 213272  2198  2198
m      10 0x011c0082 --rw-------       www     other 100000  2203  2204
  • TRICKS:
    • To total the shared memory usage (in bytes), run:
              # ipcs -mpb | sed -n '/^m/p' |
                awk '{total+=$(NF-2)} END {printf("%d\n", total)}'
      • And if the total is at or near 1.75 Gb or 2.75 Gb, then address it as a 32-bit limitation issue.
    • To find the processes, if still running, that last touched (LPID) the shared memory segments:
      # ps -ef | `ipcs -mpb | sed -n '/^m/p' |
          awk '{printf("%s ", $NF)} END {printf("\n")}' |
          sed 's/ /|/g' | sed 's/|$//' |
          awk '{printf("egrep -e %s\n", $0)}' |
          sed 's/ -e / -e "/' | sed 's/$/"/'`
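    • A variant of the totaling trick above that reports the total directly in Mb (a sketch; relies on the same ipcs -mpb fields):
              # ipcs -mpb | awk '/^m/ {total += $(NF-2)} END {printf("%.1f Mb\n", total/1048576)}'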


    • b) Can look at shared memory usage with an unsupported  tool called shminfo

 

      • shminfo is available for download here:
          System:     hprc.external.hp.com  (192.170.19.51)
          Login:      eh
          Password:   spear9

                                   ftp://eh:spear9@hprc.external.hp.com/shminfo.sh

 

      • For example:


              # ./shminfo
              Shared space from Window id 0 (global):
                      Space      Start       End  Kbytes Usage
              Q2 0x00006fea.0x40000000-0x7fff0000 1048512 FREE
              Q3 0x00000000.0x80000000-0x80001000      4 SHMEM id=0
              Q3 0x00000000.0x80001000-0x80002000      4 OTHER
              Q3 0x00000000.0x80002000-0x80102000    1024 SHMEM id=201
              Q3 0x00000000.0x80102000-0x81202000   17408 OTHER
              Q3 0x00000000.0x81202000-0x8121b000     100 SHMEM id=3602
              Q3 0x00000000.0x8121b000-0x81eea000   13116 FREE
              Q3 0x00000000.0x81eea000-0x81efd000     76 SHMEM id=3
              Q3 0x00000000.0x81efd000-0x82df0000   15308 OTHER
              Q3 0x00000000.0x82df0000-0x82df6000     24 SHMEM id=4004
              Q3 0x00000000.0x82df6000-0x83aa6000   12992 OTHER


              # ./shminfo -64bit
              libp4 (7.91): Opening /stand/vmunix /dev/kmem

              Loading symbols from /stand/vmunix
              shminfo (3.7)

              Global 64-bit shared quadrants:
              ===============================
                     Space             Start                End        Kbytes Usage
              Q1 0x09957000.0x0000000000000000-0x000003ffffffffff   4294967296 FREE
              Q4 0x08343400.0xc000000000000000-0xc00003ffffffffff   4294967296 FREE


     (Note: in shminfo output, SHMEM indicates shared memory and OTHER means memory mapped files.)
      • TRICK
        • If a particular shared memory segment is of interest, and if you want to know which processes are attached to that shared memory segment, you can use shminfo -s id (where id is the shared memory identifier.)
          • For example:
# ./shminfo -s 8010
libp4 (7.91): Opening /stand/vmunix /dev/kmem

Loading symbols from /stand/vmunix
shminfo (3.7)

Shmid 8010:
struct shmid_ds at 0xc84c10
Pseudo vas at 0x49d0ca80
Pseudo pregion at 0x4c2b6200
Shared region at 0x4c2b5ac0
Segment at 0x12f2400.0xc33ba000
Segment allocated out of "Global 32-bit quadrant 4"
Processes using this segment:
proc=0x4c19f040 (pid 3097 "httpd"): vas=0x49d0cd00, SHMEM preg=0x4c3062c0
proc=0x48d87040 (pid 3094 "httpd"): vas=0x49d0cbc0, SHMEM preg=0x4c2e2840
proc=0x49e3b040 (pid 3089 "httpd"): vas=0x4c262680, SHMEM preg=0x4c2baec0


    • c) Can look at shared memory usage by process with an unsupported utility called procsize, which breaks down memory by  various types, specifically: UAREA, TEXT, DATA, STACK, Shared  Memory, Memory Mapped files.
      • For example:
        • Look at breakdown of memory usage, per process:

          # ./procsize -fnc | more

          pid Comm        UAREA   TEXT   DATA  STACK  SHMEM     IO  MMAP    Total
          2916 getty    v    4      5     6      4     0      0   349      369
          2287 prm3d    v   68      6   671    513     0      0  37212    38471
          .
          .
          .


        • TRICK: Here's the command to use to sort the processes by total memory usage, most to least.

# ./procsize -fcn |sort -rnk 11 | more

        • TRICK: Here's the command to use to sort the processes by shared memory usage, most to least.

# ./procsize -fcn |sort -rnk 8 | more




  • 3) memory mapped files as allocated with mmap(2) system call. 
    • a) There are no standard system commands that will report memory mapped file usage

    • b) Can use an unsupported utility called shminfo to see use of memory mapped memory.

      • In the output of shminfo, memory mapped files are shown as "OTHER".

      • See above for examples and download site for shminfo.

    • c) Can use procsize to see which processes use memory mapped files

      • In the output of procsize, memory mapped files are shown under the "MMAP" column.

      • See above for examples and download site for procsize

      • TRICK: Use the following to sort by MMAP column:
           # ./procsize -fcn |sort -rnk 10 | more







C.  OS Memory Leaks/Hogs


  • 1. Check for known OS memory leaks with an unsupported utility called kmeminfo. The Response Center can assist with analyzing the output of kmeminfo.
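    • A minimal sketch for collecting kmeminfo snapshots over time, so that growth in kernel memory use can be compared later (the filename scheme is just a suggestion):

           # ./kmeminfo > /var/tmp/kmeminfo.`date +%m%d%H%M`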

  • 2. Checking for known memory hogs
    • The JFS inode cache is sized by the vx_ninode kernel parameter.  The default value of vx_ninode is determined by the size of RAM and differs between 11.00 and 11.11:
      • For example:
        • an 11.11 system with 8GB of memory, vx_ninode is defaulted to 256,000
        • an 11.0 system with 8GB of memory, vx_ninode is defaulted to 144,000
      • For most situations, a smaller value for vx_ninode is reasonable, say 20,000 for example

      • Lowering vx_ninode results in a large savings of memory.

      • To see the size of the JFS (3.3 and above) inode cache:
          # echo "vxfs_ninode/D" | adb -k /stand/vmunix /dev/mem

      • To see how many JFS (3.3 and above) inodes are currently cached:
          # echo "vx_cur_inodes/D" | adb -k /stand/vmunix /dev/mem
      • To gauge the size of a system's JFS inode cache from the output of kmeminfo, use the following table to know which bucket/arena the JFS inode cache uses.

                        OS                    JFS version    arena/bucket*

                        11.11                 3.5            vx_icache_arena
                        11.11                 3.3            M_TEMP
                        11.00 32-bit          3.1            bucket[10]
                        11.00 64-bit          3.1            bucket[11]
                        11.00 32-bit/64-bit   3.3            bucket[10]

                         * NOTE: the JFS inode cache is only one of the consumers of the listed bucket/arena.
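      • For example, the two adb queries above, side by side, might look like this on an 11.11 box (the values shown are hypothetical):

           # echo "vxfs_ninode/D" | adb -k /stand/vmunix /dev/mem
           vxfs_ninode:
           vxfs_ninode:    256000
           # echo "vx_cur_inodes/D" | adb -k /stand/vmunix /dev/mem
           vx_cur_inodes:
           vx_cur_inodes:  183420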





D.  Application Memory Leaks

  • Use a tool to capture a baseline of memory use per process,
    • Then gather subsequent reports to see if there is a steady increase in memory use.
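    • A minimal sketch of such a baseline-and-compare cycle, using the UNIX95 ps options shown earlier (file names are just suggestions):

           # UNIX95=1 ps -eo vsz,pid,args | sort -rnk 1 > /var/tmp/mem.baseline
             (wait hours or days, then:)
           # UNIX95=1 ps -eo vsz,pid,args | sort -rnk 1 > /var/tmp/mem.now
           # diff /var/tmp/mem.baseline /var/tmp/mem.now | more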

  • To check for 3rd-party memory leaks, try Purify(TM) to troubleshoot for this problem. NOTE: this is not an endorsement of Purify.

           http://www.rational.com


E. 32-bit memory limitation, General Information

  • For 32 bit applications, the maximum size of a single shared memory segment a process can attach is limited to 1 GB (shmmax <= 0x40000000). The first 1 Gb resides in quadrant 3, and quadrant 4 only has 0.75 Gb reserved for shared memory and memory mapped files. So the total memory addressable by the default executable type is 1.75 Gb. 

    An executable type SHMEM_MAGIC has been defined which adds the use of quadrant 2 for shared memory and memory mapped files.  This additional 1 Gb results in a system-wide maximum of 2.75 Gb of shared memory and memory mapped file address space. See below for more details about SHMEM_MAGIC. (Note: for more on the 32-bit memory limitation, see "Understanding Shared Memory on PA-RISC Systems", Doc id RCMEMKBAN00000027.)
  • If you see "out of memory" or "not enough space" when running an application, and there is plenty of free swap space, then the application may be requesting shared memory or may be mapping files to memory (with the shmget() and mmap() system calls respectively), and the problem may be due to 32-bit memory limitation/contention. The options are covered in the sections that follow.


 




F. SHMEM_MAGIC
  • SHMEM_MAGIC executables can address 2.75 Gb
    • For more details than explained below (about SHARE_MAGIC, EXEC_MAGIC, AND SHMEM_MAGIC), see ITRC doc id rcfaxmemory001 ("Shared_magic Explained")

  • SUMMARY
    • To get SHMEM_MAGIC, the executable needs to have been previously linked with EXEC_MAGIC ("ld -N") and then can be chatr'ed to get SHMEM_MAGIC (i.e. with the "chatr -M" option.)

    • Check with the vendor's application support to see if their application supports SHMEM_MAGIC.

    • PATCHES - Unlike 11.0 which doesn't require patches for SHMEM_MAGIC, 10.20 needs patches. The 10.20 patches are:
      • PHKL_16750 (for s700) PHKL_16751 (for s800)
        • These are LITS (Line-In-The-Sand) patches and will never be superseded.
      • PHSS_21110 (linker/ld patch)
        • Note: PHSS_21110 may be superseded. Please check for the latest patches at the IT Resource Center (ITRC) at the following web site:       http://www.itrc.hp.com/
    • Q: Is application set up with SHMEM_MAGIC?

      • A: To determine if the 32-bit application is set up with SHMEM_MAGIC, or is capable of SHMEM_MAGIC, use the chatr(1) command. For example:
    # chatr /usr/bin/bdf |grep -i executable
         shared executable
         executable from stack: D (default)
    # chatr /opt/oracle/bin/orasrv | grep executable
         normal SHMEM_MAGIC executable
         executable from stack: D (default)


Executable Type as Reported by chatr    Magic Type     Capabilities

shared executable                       SHARE_MAGIC    can ONLY address 1.75 Gb
normal executable                       EXEC_MAGIC     can be chatr'd (with -M option)
                                                       to obtain SHMEM_MAGIC
normal SHMEM_MAGIC executable           SHMEM_MAGIC    can address 2.75 Gb
  
G.  How much data space can application get?

  • Normally, 32-bit apps can, as allowed by maxdsiz, get up to ~940 Mb of data space, unless they have been linked with EXEC_MAGIC, in which case they can get upwards of 1.9 Gb of data space. Alternatively, chatr may be used to enable third quadrant private data space. For example:     chatr +q3p enable executable_name

  • To determine what the executable is capable of, run:

           # chatr executable_name

    • if it shows as "shared executable"  then it can only get to about 940Mb,
    • if it shows "EXEC_MAGIC", then it can get upwards of 1.9 Gb of data space.

  • maxdsiz - maximum size (in bytes) of the data segment/space for any user process.
          11.00:         maxdsiz and maxdsiz_64bit
                         Default: 32-bit & 64-bit: 0x4000000 (64 MB)
          11.11 (v1.6):  maxdsiz(5) and maxdsiz_64bit(5)
                         Default: 32-bit: 0x10000000 (256 MB)   64-bit: 0x40000000 (1 GB)

 


H. Memory Windows
     


  • Overview - Running without memory windows, HP-UX has limitations for shared resources in 32-bit applications. All applications on the system are limited to a total of 1.75 GB of shared memory, or 2.75 GB if compiled as SHMEM_MAGIC. In a system with 16 GB of physical memory, only 1.75 GB can be used for shared resources! To address this limitation, a functional change was made (Memory Windows was introduced by patches at 11.0) to allow 32-bit processes to create unique memory windows for shared objects like shared memory. This allows cooperating applications to create 1 GB of shared resources each without exhausting the system-wide resource. Part of the virtual address space remains globally visible to all processes, so that shared libraries are accessible no matter what memory window they are in.  The customer-visible changes and gotchas are described below.
 
  • GOTCHA: The default (SHARE_MAGIC) executable's maximum memory window size is 1 gigabyte. Any consumption beyond 1 gigabyte consumes space from the 4th quadrant, which is shared across *ALL* processes in the system. This is important: any application within a memory window that uses more than 1 gigabyte of shared memory consumes quadrant 4 resources that are shared by all processes, no matter what memory window they occupy.


  • Details
    • Check with the vendor's application support to see if they support Memory Windows.

    • Patches - Memory Windows was originally introduced with 11.0 Patches: 
           PHKL_13810 (Kernel)

                 PHCO_13811 (Commands)

      • The current patches (as of this writing) are PHKL_18543 & PHCO_23705
      • 11.11 (11i) does not need patches for Memory Windows.


    • Is system already configured for Memory Windows?
      • Either, test with  setmemwindow(1M), for example:
            # setmemwindow date

        • Memory Windows is not configured if you get nothing from the date(1) command (i.e. nothing comes back), or if you get an error like this:
              Error(12), unable to set memory window(-1)


      • Or check the kernel parameter max_mem_window to see if it has been set, with:
            # grep max_mem_window /stand/system
        • NOTE: If you do NOT see max_mem_window(5) as a kernel configurable parameter in SAM, then you can install the latest 11.0 SAM patch (SAM was first made aware of max_mem_window with PHCO_21187), or you can add max_mem_window to the system file manually and then generate a new kernel.
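        • On 11.x, can also query the running kernel's setting directly (a small sketch; a value of 0 indicates memory windows are not in use):
              # kmtune -q max_mem_window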

    • Or use the unsupported memwin_stats utility to view memory window usage. For example:

# ./memwin_stats -w
Entry   USER_KEY KERN_KEY  QUAD2_AVAIL  QUAD3_AVAIL    PID    REFCNT
Memory Windows:
   0    Global         0     262144       262144        0      357
   1   Private         1          0            0        0        1

# ./memwin_stats -m

Shared Memory:
T      ID     KEY        MODE        OWNER     GROUP   UserKey   KernId
m       0 0x41200007 --rw-rw-rw-      root      root  2139031040  2139031040
m       1 0x4e000002 --rw-rw-rw-      root      root  2139031040  2139031040
m       2 0x41241878 --rw-rw-rw-      root      root  2139031040  2139031040
m       3 0x000024ef --rw-rw-rw-      root      root  2139031040  2139031040
m       4 0x30205f0d --rw-rw-rw-      root      root  2139031040  2139031040
m    1605 0x0c6629c9 --rw-r-----      root      root  2139031040  2139031040
m     606 0x49180013 --rw-r--r--      root      root  2139031040  2139031040
m       7 0x06347849 --rw-rw-rw-      root      root  2139031040  2139031040
m    7208 0x5e1c019c --rw-------      root       sys  2139031040  2139031040
m    3409 0x00000000 D-rw-------      root      root  2139031040  2139031040
m      10 0x011c0082 --rw-------       www     other  2139031040  2139031040

# ./memwin_stats -p 1226
Process Id (1226)
        User Key: -1
        Kernel Id: 0

 

 









I.  Memory Usage from "dmesg", "swapinfo", "top", and "glance".

  • How do I understand/resolve the differing results about memory usage from "dmesg", "swapinfo", "top", and "glance"?

  • Physical Memory
    • Can use dmesg to report Physical memory (RAM) info. For example:

         # dmesg | grep Phys

         Physical: 212992 Kbytes, lockable: 152792 Kbytes, available: 178188 Kbytes

      • Note:
        • In this example the system has 212992 Kbytes (~208 MB) of physical memory.
        • Lockable memory is used for
          • Process images and overhead locked using the plock() system call (see HP-UX Reference entry plock(2)).
          • Shared memory segments locked with the SHM_LOCK command of the shmctl() system call (see HP-UX Reference entry shmctl(2)).
          • Miscellaneous dynamic kernel data structures used by the shared memory system and some drivers


    • Can also report Physical memory (RAM) size with adb:
 
  11.x  # echo phys_mem_pages/D | adb -k /stand/vmunix /dev/kmem
        phys_mem_pages:
        phys_mem_pages: 524288
  10.x  # echo physmem/D | adb -k /stand/vmunix /dev/kmem
        physmem:
        physmem: 524288

        (524288 pages * 4096 bytes/page = 2 GB of RAM)









  • SWAP


# swapinfo -tam
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev         288      59     229   20%       0       -    1  /dev/vg00/lvol2
reserve       -     146    -146
memory      102      45      57   44%
total       390     250     140   64%       -       0    -

    • The "memory" line in the output of swapinfo is NOT physical memory, rather it is pseudoswap which is calculated to be 75% the size of RAM.

 


  • top
Load averages: 0.64, 0.57, 0.56
195 processes: 194 sleeping, 1 running

Cpu states:
CPU   LOAD   USER   NICE    SYS   IDLE  BLOCK  SWAIT   INTR   SSYS
 0    0.67   3.7%   0.0%   0.2%  96.1%   0.0%   0.0%   0.0%   0.0%
 1    0.61   7.3%   0.0%   2.4%  90.4%   0.0%   0.0%   0.0%   0.0%
---   ----  -----  -----  -----  -----  -----  -----  -----  -----
avg   0.64   5.5%   0.0%   1.4%  93.1%   0.0%   0.0%   0.0%   0.0%

Memory: 1444296K (1238320K) real, 1967080K (1468152K) virtual, 84908K free
           ^          ^             ^          ^                  ^
           |          |             |          |                  |
           1          2             3          4                  5
  • The Memory line is not all of physical memory; the annotated fields are:
    1. Total physical memory in the system DEDICATED to text, data or stack segments for all processes on the system.
    2. Total physical memory for runnable processes, as opposed to sleeping processes.
    3. Total (virtual) memory dedicated to text, data or stack segments for all processes on the system. Some of this is paged out to disk (that is, not all of this is in current physical memory.)
    4. Total memory for runnable processes, as opposed to sleeping or stopped processes.
    5. Physical memory the system considers to be unused and available to new processes. When this value is low, swapping is likely to occur.

NOTE: information about top is from Doc id A3940339


  • Glance
B3690A GlancePlus C.03.05.00    10:25:25      raw 9000/735    Current  Avg  High
--------------------------------------------------------------------------------
Cpu  Util   S     SN                            NARU      U  | 95%   27%   95%
Disk Util                                                     |  0%    1%   19%
Mem  Util   S   SU                             UB       B    | 91%   91%   91%
Swap Util  U             UR                    R           | 77%   77%   77%
--------------------------------------------------------------------------------
                                MEMORY REPORT                     Users=    7
Event        Current   Cumulative   Current Rate   Cum Rate  High Rate
--------------------------------------------------------------------------------
Page Faults        1          791         0.1        4.8       164.7
Page In            1          190         0.1        1.1        30.9
Page Out           0           1         0.0        0.0         0.1
KB Paged In     16kb        468kb         2.8        2.8       160.0
KB Paged Out     0kb          4kb         0.0        0.0         0.7
Reactivations       0           0         0.0        0.0         0.0
Deactivations       0           0         0.0        0.0         0.0
KB Deactivated    0kb         0kb         0.0        0.0         0.0
VM Reads           1           31         0.1        0.1        10.5
VM Writes           0            1         0.0        0.0         0.1

Total VM : 121.1mb   Sys Mem  :  13.8mb   User Mem:  91.3mb   Phys Mem: 144.0mb
Active VM:  73.7mb   Buf Cache:  26.4mb   Free Mem:  12.6mb
    • Total VM: The total private virtual memory (in KBs unless otherwise specified) at  the end of the interval.  This is the sum of the virtual allocation of private data and stack regions for all processes.   
    • Active VM: The total virtual memory (in KBs unless otherwise specified) allocated for processes currently are on the run queue or processes that have executed recently.  This is the sum of the virtual memory sizes of the data and stack regions for these processes.
    • Sys Mem: The amount of physical memory (in KBs unless otherwise specified) used by the system (kernel) during the interval.  System memory does not include the buffer cache.
      • On HP-UX 10.20 and 11.0, this metric does not include some kinds of dynamically allocated kernel memory, which has always been reported in the GBL_MEM_USER* metrics.              
      • On HP-UX 11i and beyond, this metric does include some kinds of dynamically allocated kernel memory.
    • Buf Cache: The amount of physical memory (in KBs unless otherwise specified) used by the buffer cache during the interval. The buffer cache is a memory pool used by the system to stage disk IO data for the driver. 

    • User Mem: The amount of physical memory (in KBs unless otherwise specified) allocated to user code and data at the end of the interval.  User memory regions include code, heap, stack, and other data areas including shared memory.  This does not include memory for buffer cache. 
      • On HP-UX 10.20 and 11.0, this metric does include some kinds of dynamically allocated kernel memory.
      • On HP-UX 11i and beyond, this metric does not include some kinds of dynamically allocated kernel memory, which now is reported in the GBL_MEM_SYS* metrics.
      • Large fluctuations in this metric can be caused by programs which allocate large amounts of memory and then either release the memory or terminate.  A slow, continual increase in this metric may indicate a program with a memory leak.

    • Free Mem: The amount of memory not allocated (in KBs unless otherwise specified).  As this value drops, the likelihood  increases that swapping or paging out  to disk may occur to satisfy new memory requests.
                      
    • Phys Mem:  The amount of physical memory in the system (in KBs unless otherwise specified).  Banks with bad memory are not counted.
      • Note that on some machines, the Processor Dependent Code (PDC) uses the upper 1 MB of memory, so less than the actual physical memory of the system is reported. Thus, on a system with 256 MB of physical memory, this metric and dmesg(1M) might only report 267,386,880 bytes (255 MB).  This is all the physical memory that software on the machine can access.



J. Troubleshooting Examples:  "Not enough space", "out of memory", "Not enough core" (a.k.a. HP-UX errno 12, ENOMEM)

  • "call to mmap failed" when accompanied by "not enough space": 
    • If there is no lack of available swap, then the cause is typically contention for and/or fragmentation of the 32-bit address space used by shared memory and memory mapped files, which can be viewed using an unsupported utility called shminfo (see the examples and download info above).

  • "Not enough space" (examples), "out of memory", or  "Not enough core" (a.k.a. HPUX errno 12, ENOMEM)
    • 1.) If the app/db is requesting shared memory, and the amount requested is more than the value of the shmmax(5) kernel parameter, then shmmax needs to be increased.
      • If it is determined that shmmax is not causing the failure, then the requested amount of shared memory could not be obtained due to lack of the requested amount of contiguous memory (i.e. memory is fragmented).
    • 2.) Whether or not app/db is using shared memory, the problem may be caused by not enough free swap space. So, check to see if there is enough swap. Use 'swapinfo -tm' and see how much total free space there is.

    • 3.) If not a problem with shmmax nor with swap, then, the cause is most likely either data / stack kernel parameters OR shared memory configuration or contention/fragmentation.

      • To see if the problem is due to data/stack kernel parameters, and to determine which one, you can use tusc to trace the system calls to see which system call is failing and to see the ERRNO (see the tusc sketch after this list).

        • a.) If  malloc() ... system call equivalent in tusc output would be called "brk"... then check data/stack kernel parameters...

          • data/stack kernel parameters will halt processes when their stack or data grows near (or attempts to pass) the maximum defined by maxssiz and maxdsiz (respectively). Prior to 11.11, these kernel parameters (maxssiz and maxdsiz) are static, so changes to them require a reboot. Normally 32-bit apps can only get ~940 Mb of data space.  32-bit apps can get upwards of 1.9 Gb of data space if the executable is compiled with EXEC_MAGIC (ld -N) or if chatr is used to enable third quadrant private (chatr +q3p enable executable_name).  To determine what the executable is capable of, use 'chatr executable_name'; if it shows as 'shared executable' then it can only get to about 940 Mb.
            • This is harder to determine and may need trial and error of increasing maxssiz and/or  maxdsiz until error stops.
            • If the process takes long enough to fail, then you can monitor its stack and data usage with procsize.
            • The default for maxssiz is  8Mb and the default for maxdsiz is 64 Mb, they may need to be doubled, tripled, or quadrupled to resolve (i.e. unless Vendor recommends/knows good values, use trial-and-error.)

        • b.) If mmap()  or  shmget() is failing, then check for shared memory configuration or contention / fragmentation...

          • shared memory configuration or contention / fragmentation
            • Typically this is seen where other applications/dbs are using up shared memory to the extent that there is not any more left.
            • The options are, either:
              • reboot (which will defragment this memory) or reduce memory use (by lowering the amount requested by the apps/dbs)
              • or temporarily shutdown apps/dbs that are using 32bit address space
              • or use Memory Windows (Memory Windows allows 32-bit processes to create private/unique memory windows for shared objects like shared memory.)
            • To view the existing memory usage, including fragmentation and largest FREE memory segment, use shminfo.
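  • Here is a minimal tusc sketch for the system call trace mentioned in step 3 above (the -f and -o flags, follow forks and write to an output file, are from tusc's usual option set; verify against your tusc version):

           # tusc -f -o /tmp/tusc.out <command and its args>
           # egrep 'brk|mmap|shmget' /tmp/tusc.out | tail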









  • "Not enough space" examples:
    • /usr/lib/dld.sl:Call to mmap() failed - ZEROES /usr/lib/libdce.1
/usr/lib/dld.sl:Not enough space
/usr/sbin/sam[221]: 1067 Abort(coredump)
      • From attempting to run sam in TUI (Text) mode with swap 99% used.
    • OBAM INTERNAL ERROR: Cannot fork: Not enough space
sam: Error: The cpp(1) command failed on file: /usr/sam/lib/C/fal.ui.
      • From attempting to run sam in GUI (graphic) mode with swap 99% used.
    • sam: FATAL ERROR: Unable to load library "/usr/obam/lib/libIDMawrt.1": Not enough space
      • From attempting to run sam in TUI (Text) mode with swap 99% used.
    • /usr/lib/dld.sl:Call to mmap() failed - BSS /usr/lib/libnsl.1
/usr/lib/dld.sl:Not enough space
sh: 2885 Abort(coredump)
      • Seen during login [as root user] when system had swap at 99% used.


Summary - Memory Reporting

  1. Download  memory.tar.Z from the following ftp site:
          System:     hprc.external.hp.com  (192.170.19.51)
          Login:      eh
          Password:   spear9
          ftp://eh:spear9@hprc.external.hp.com/

  2. Extract  into a directory of your choosing (e.g. /tmp). For example:

     # uncompress memory.tar.Z
     # tar xvf memory.tar

  3. From the directory that you chose (e.g. /tmp), run the memory script at least twice: once after reboot (as a baseline), and then again when the memory issue is evident. Also run the memory script periodically (after at least 2 hours, or maybe even a day).

      • For example:
        # ./memory
        Running........Done.
        Memory Report file is: /tmp/memory.03231400.txt
        #

  4. Look for where all of the memory is being used. Work to understand the memory use, or reduce the memory use by applications/OS/databases. Compare the results of running these when the memory issue is evident *and* after a reboot (as a baseline.)



References -
  • White Papers
    • HP-UX Memory Management white paper
        • on 11.00 and prior, version 1.3 is in /usr/share/doc called mem_mgt.txt or mem_mgt.ps


  • shminfo example:

# ./shminfo | more
libp4 (7.91): Opening /stand/vmunix /dev/kmem

Loading symbols from /stand/vmunix
shminfo (3.7)

Global 32-bit shared quadrants:
===============================
        Space      Start        End  Kbytes Usage
Q4 0x063a7c00.0xc0000000-0xc0005fff      24 OTHER
Q4 0x063a7c00.0xc0006000-0xc0006fff       4 SHMEM id=0
Q4 0x063a7c00.0xc0007000-0xc000dfff      28 OTHER
Q4 0x063a7c00.0xc000e000-0xc000ffff       8 SHMEM id=2
Q4 0x063a7c00.0xc0010000-0xc0291fff    2568 OTHER
Q4 0x063a7c00.0xc0292000-0xc0299fff      32 SHMEM id=1 locked
Q4 0x063a7c00.0xc029a000-0xc0309fff     448 OTHER
Q4 0x063a7c00.0xc030a000-0xc030ffff      24 SHMEM id=405
Q4 0x063a7c00.0xc0310000-0xc03aafff     620 OTHER
Q4 0x063a7c00.0xc03ab000-0xc03abfff       4 FREE
Q4 0x063a7c00.0xc03ac000-0xc03ddfff     200 OTHER
Q4 0x063a7c00.0xc03de000-0xc03dffff       8 FREE
Q4 0x063a7c00.0xc03e0000-0xc03f9fff     104 OTHER
Q4 0x063a7c00.0xc03fa000-0xc03fbfff       8 FREE
Q4 0x063a7c00.0xc03fc000-0xc07cdfff    3912 OTHER
Q4 0x063a7c00.0xc07ce000-0xc07cffff       8 FREE
Q4 0x063a7c00.0xc07d0000-0xc07e1fff      72 OTHER
Q4 0x063a7c00.0xc07e2000-0xc07e3fff       8 FREE
Q4 0x063a7c00.0xc07e4000-0xc07e8fff      20 OTHER
Q4 0x063a7c00.0xc07e9000-0xc07effff      28 FREE
Q4 0x063a7c00.0xc07f0000-0xc086efff     508 OTHER
Q4 0x063a7c00.0xc086f000-0xc086ffff       4 FREE
Q4 0x063a7c00.0xc0870000-0xc08a2fff     204 OTHER
Q4 0x063a7c00.0xc08a3000-0xc08a3fff       4 FREE
Q4 0x063a7c00.0xc08a4000-0xc08aefff      44 OTHER
Q4 0x063a7c00.0xc08af000-0xc08b3fff      20 FREE
Q4 0x063a7c00.0xc08b4000-0xc08bbfff      32 OTHER
Q4 0x063a7c00.0xc08bc000-0xc08bffff      16 FREE
Q4 0x063a7c00.0xc08c0000-0xc09aafff     940 OTHER
Q4 0x063a7c00.0xc09ab000-0xc09abfff       4 FREE
Q4 0x063a7c00.0xc09ac000-0xc09b0fff      20 OTHER
Q4 0x063a7c00.0xc09b1000-0xc09b3fff      12 FREE
Q4 0x063a7c00.0xc09b4000-0xc09b9fff      24 OTHER
Q4 0x063a7c00.0xc09ba000-0xc09bffff      24 FREE
Q4 0x063a7c00.0xc09c0000-0xc0a12fff     332 OTHER
Q4 0x063a7c00.0xc0a13000-0xc0a13fff       4 FREE
Q4 0x063a7c00.0xc0a14000-0xc0a1cfff      36 OTHER
Q4 0x063a7c00.0xc0a1d000-0xc0a1ffff      12 FREE
Q4 0x063a7c00.0xc0a20000-0xc0a2afff      44 OTHER
Q4 0x063a7c00.0xc0a2b000-0xc0a2bfff       4 FREE
Q4 0x063a7c00.0xc0a2c000-0xc0a35fff      40 OTHER
Q4 0x063a7c00.0xc0a36000-0xc0a37fff       8 FREE
Q4 0x063a7c00.0xc0a38000-0xc0a3efff      28 OTHER
Q4 0x063a7c00.0xc0a3f000-0xc0a3ffff       4 FREE
Q4 0x063a7c00.0xc0a40000-0xc0adefff     636 OTHER
Q4 0x063a7c00.0xc0adf000-0xc0adffff       4 FREE
Q4 0x063a7c00.0xc0ae0000-0xc0af1fff      72 OTHER
Q4 0x063a7c00.0xc0af2000-0xc0afffff      56 FREE
Q4 0x063a7c00.0xc0b00000-0xc0b21fff     136 OTHER
Q4 0x063a7c00.0xc0b22000-0xc0b23fff       8 FREE
Q4 0x063a7c00.0xc0b24000-0xc0b30fff      52 OTHER
Q4 0x063a7c00.0xc0b31000-0xc0b3ffff      60 FREE
Q4 0x063a7c00.0xc0b40000-0xc0b80fff     260 OTHER
Q4 0x063a7c00.0xc0b81000-0xc0b83fff      12 FREE
Q4 0x063a7c00.0xc0b84000-0xc0b8cfff      36 OTHER
Q4 0x063a7c00.0xc0b8d000-0xc0ba3fff      92 FREE
Q4 0x063a7c00.0xc0ba4000-0xc0bb0fff      52 OTHER
Q4 0x063a7c00.0xc0bb1000-0xc0bbffff      60 FREE
Q4 0x063a7c00.0xc0bc0000-0xc0c0bfff     304 OTHER
Q4 0x063a7c00.0xc0c0c000-0xc0c0ffff      16 FREE
Q4 0x063a7c00.0xc0c10000-0xc0c27fff      96 OTHER
Q4 0x063a7c00.0xc0c28000-0xc0c3ffff      96 FREE
Q4 0x063a7c00.0xc0c40000-0xc0c5afff     108 OTHER
Q4 0x063a7c00.0xc0c5b000-0xc0c5ffff      20 FREE
Q4 0x063a7c00.0xc0c60000-0xc0c87fff     160 OTHER
Q4 0x063a7c00.0xc0c88000-0xc0c8ffff      32 FREE
Q4 0x063a7c00.0xc0c90000-0xc0cc8fff     228 OTHER
Q4 0x063a7c00.0xc0cc9000-0xc0ccffff      28 FREE
Q4 0x063a7c00.0xc0cd0000-0xc0ce5fff      88 OTHER
Q4 0x063a7c00.0xc0ce6000-0xc0cfffff     104 FREE
Q4 0x063a7c00.0xc0d00000-0xc0e61fff    1416 OTHER
Q4 0x063a7c00.0xc0e62000-0xc0e6ffff      56 FREE
Q4 0x063a7c00.0xc0e70000-0xc0e9cfff     180 OTHER
Q4 0x063a7c00.0xc0e9d000-0xc0e9ffff      12 FREE
Q4 0x063a7c00.0xc0ea0000-0xc0eb2fff      76 SHMEM id=2203
Q4 0x063a7c00.0xc0eb3000-0xc0ebffff      52 FREE
Q4 0x063a7c00.0xc0ec0000-0xc0ed6fff      92 OTHER
Q4 0x063a7c00.0xc0ed7000-0xc0edffff      36 FREE
Q4 0x063a7c00.0xc0ee0000-0xc0efcfff     116 OTHER
Q4 0x063a7c00.0xc0efd000-0xc0efffff      12 FREE
Q4 0x063a7c00.0xc0f00000-0xc12fafff    4076 OTHER
Q4 0x063a7c00.0xc12fb000-0xc12fffff      20 FREE
Q4 0x063a7c00.0xc1300000-0xc14d5fff    1880 OTHER
Q4 0x063a7c00.0xc14d6000-0xc14dffff      40 FREE
Q4 0x063a7c00.0xc14e0000-0xc14f5fff      88 OTHER
Q4 0x063a7c00.0xc14f6000-0xc14fffff      40 FREE
Q4 0x063a7c00.0xc1500000-0xc1f92fff   10828 SHMEM id=4
Q4 0x063a7c00.0xc1f93000-0xc1f9ffff      52 FREE
Q4 0x063a7c00.0xc1fa0000-0xc1fc4fff     148 OTHER
Q4 0x063a7c00.0xc1fc5000-0xc1fcffff      44 FREE
Q4 0x063a7c00.0xc1fd0000-0xc1fe8fff     100 OTHER
Q4 0x063a7c00.0xc1fe9000-0xc1ffffff      92 FREE
Q4 0x063a7c00.0xc2000000-0xc2165fff    1432 OTHER
Q4 0x063a7c00.0xc2166000-0xc217ffff     104 FREE
Q4 0x063a7c00.0xc2180000-0xc21edfff     440 OTHER
Q4 0x063a7c00.0xc21ee000-0xc21fffff      72 FREE
Q4 0x063a7c00.0xc2200000-0xc25d1fff    3912 OTHER
Q4 0x063a7c00.0xc25d2000-0xc25fffff     184 FREE
Q4 0x063a7c00.0xc2600000-0xc2647fff     288 OTHER
Q4 0x063a7c00.0xc2648000-0xc267ffff     224 FREE
Q4 0x063a7c00.0xc2680000-0xc277cfff    1012 OTHER
Q4 0x063a7c00.0xc277d000-0xc277ffff      12 FREE
Q4 0x063a7c00.0xc2780000-0xc27b7fff     224 OTHER
Q4 0x063a7c00.0xc27b8000-0xc27bffff      32 FREE
Q4 0x063a7c00.0xc27c0000-0xc2896fff     860 OTHER
Q4 0x063a7c00.0xc2897000-0xc28bffff     164 FREE
Q4 0x063a7c00.0xc28c0000-0xc2956fff     604 OTHER
Q4 0x063a7c00.0xc2957000-0xc29f8fff     648 SHMEM id=41
Q4 0x063a7c00.0xc29f9000-0xc29fffff      28 FREE
Q4 0x063a7c00.0xc2a00000-0xc2bb8fff    1764 OTHER
Q4 0x063a7c00.0xc2bb9000-0xc2c9ffff     924 FREE
Q4 0x063a7c00.0xc2ca0000-0xc2cd3fff     208 OTHER
Q4 0x063a7c00.0xc2cd4000-0xc2efffff    2224 FREE
Q4 0x063a7c00.0xc2f00000-0xc30b6fff    1756 OTHER
Q4 0x063a7c00.0xc30b7000-0xc30bffff      36 FREE
Q4 0x063a7c00.0xc30c0000-0xc30f8fff     228 OTHER
Q4 0x063a7c00.0xc30f9000-0xc323ffff    1308 FREE
Q4 0x063a7c00.0xc3240000-0xc331afff     876 OTHER
Q4 0x063a7c00.0xc331b000-0xc462dfff   19532 SHMEM id=10006
Q4 0x063a7c00.0xc462e000-0xc5940fff   19532 SHMEM id=7
Q4 0x063a7c00.0xc5941000-0xc6c53fff   19532 SHMEM id=8
Q4 0x063a7c00.0xc6c54000-0xc7f66fff   19532 SHMEM id=9
Q4 0x063a7c00.0xc7f67000-0xc9279fff   19532 SHMEM id=10
Q4 0x063a7c00.0xc927a000-0xca58cfff   19532 SHMEM id=11
Q4 0x063a7c00.0xca58d000-0xcb89ffff   19532 SHMEM id=12
Q4 0x063a7c00.0xcb8a0000-0xccbb2fff   19532 SHMEM id=13
Q4 0x063a7c00.0xccbb3000-0xcdec5fff   19532 SHMEM id=14
Q4 0x063a7c00.0xcdec6000-0xcf1d8fff   19532 SHMEM id=15
Q4 0x063a7c00.0xcf1d9000-0xd04ebfff   19532 SHMEM id=16
Q4 0x063a7c00.0xd04ec000-0xd17fefff   19532 SHMEM id=17
Q4 0x063a7c00.0xd17ff000-0xd2b11fff   19532 SHMEM id=18
Q4 0x063a7c00.0xd2b12000-0xd3e24fff   19532 SHMEM id=19
Q4 0x063a7c00.0xd3e25000-0xd5137fff   19532 SHMEM id=20
Q4 0x063a7c00.0xd5138000-0xd8a70fff   58596 SHMEM id=21
Q4 0x063a7c00.0xd8a71000-0xdc86efff   63480 SHMEM id=22
Q4 0x063a7c00.0xdc86f000-0xe0760fff   64456 SHMEM id=23
Q4 0x063a7c00.0xe0761000-0xe4652fff   64456 SHMEM id=24
Q4 0x063a7c00.0xe4653000-0xe7f8bfff   58596 SHMEM id=25
Q4 0x063a7c00.0xe7f8c000-0xebe7dfff   64456 SHMEM id=26
Q4 0x063a7c00.0xebe7e000-0xefd6ffff   64456 SHMEM id=27
Q4 0x063a7c00.0xefd70000-0xefffffff    2624 FREE

