V-2-14: vx_iget - inode table overflow

Article: 100006806
Last Published: 2023-10-27
Product(s): InfoScale & Storage Foundation

Problem

All of the system's in-memory inodes are busy and an attempt was made to use a new inode.

Cause

In general, the system runs out of inode memory when a large number of files are being opened and closed.

VxFS caches vx_inode structures in kernel memory. Each vx_inode is approximately 2 KB in size. The upper limit on the cache is set by the 32-bit tunable vxfs_ninode.
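
As a rough sizing check (the 2 KB per-inode figure is approximate), the kernel memory consumed by a full cache can be estimated by multiplying the vxfs_ninode value by 2 KB. For example, with vxfs_ninode at 237756 (the maximum seen in the sample output below):

# echo $(( 237756 * 2 / 1024 ))
464

That is roughly 464 MB of kernel memory if the cache ever fills to its limit.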

In simple terms, VxFS maintains two different lists in the vx_inode cache:

- vx_inodes that are in use, and
- vx_inodes that have been placed on the free lists.

VxFS does not immediately free kernel memory when a file is closed; instead, it places the vx_inode on the free list and later returns that space to the kernel after a delay.

This is an optimization to reduce the expense of constantly creating and freeing struct vx_inodes.

We can tune the size of the inode table based on the output of the vxfsstat command.

Solution

Looking at a sample vxfsstat from the system:

# /opt/VRTS/bin/vxfsstat -i /MOUNT/rai
inode cache statistics
  126950 inodes current    126950 peak               237756 maximum
  266088 lookups            51.43% hit rate
  129225 inodes alloced       2275 freed
    8767 sec recycle age [not limited by maximum]
    1800 sec free age

We see that the maximum is 237756. As a starting point, we take this figure and double it. That gives us an initial value for the vxfs_ninode tunable.
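
For example, doubling the maximum reported above:

# echo $(( 237756 * 2 ))
475512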

To increase the value, we need to make the following change in "/etc/modprobe.conf" and reboot:

options vxfs vxfs_ninode=475512
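
One way to add the setting, assuming "/etc/modprobe.conf" already exists and does not yet contain an options line for vxfs, is to append it as root:

# echo "options vxfs vxfs_ninode=475512" >> /etc/modprobe.conf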

The new parameter takes effect after a reboot or after the VxFS module is unloaded and reloaded.
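
After the reboot, the change can be verified by re-running vxfsstat against a VxFS mount point (the mount point below is the example used earlier in this article); the maximum figure in the inode cache statistics should now report the new vxfs_ninode value (475512 in this example).

# /opt/VRTS/bin/vxfsstat -i /MOUNT/rai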
