Tuning System Performance

Indicates the kernel parameters that you need to tune for enhanced system performance.

Tune the following kernel parameters to enhance system performance.
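Most of the parameters below are sysctls. As a minimal sketch of how to persist and apply them, assuming a drop-in file name of your choice (99-mfs-tuning.conf is only an example) and showing just two values from the list as an illustration:
# Example drop-in file; list any of the sysctl parameters below, one per line.
cat > /etc/sysctl.d/99-mfs-tuning.conf <<'EOF'
fs.aio-max-nr = 262144
net.core.rmem_max = 4194304
EOF
# Load all sysctl configuration files, including the new drop-in.
sysctl --system
The per-device settings (max_sectors_kb and scheduler) are not sysctls; they are written through /sys, as shown in their entries.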

fs.aio-max-nr
Preferred Value: 262144
Purpose: Enhances throughput. Sets the system-wide limit on concurrent asynchronous I/O (AIO) requests. AIO allows a process to initiate multiple I/O operations simultaneously without waiting for any of them to complete, which boosts performance for applications that can overlap processing and I/O.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: No
Applicable to Containerized Client: No
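For example, a sketch of applying the value at runtime and checking current AIO usage (the kernel reports the running total in /proc/sys/fs/aio-nr, which must stay below fs.aio-max-nr):
sysctl -w fs.aio-max-nr=262144
cat /proc/sys/fs/aio-nr    # current system-wide AIO usage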
fs.epoll.max_user_watches
Preferred Value: 32768
Purpose: Enhances throughput for high memory/CPU machines. Specifies a limit on the total number of file descriptors that a user can register across all epoll instances on the system. The limit is per real user ID.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
fs.file-max
Preferred Value: 32768
Purpose: Enhances throughput for high memory/CPU machines. Sets the maximum number of file handles that the kernel allocates, that is, the number of files that can be open concurrently across the system.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
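As a sketch, both of the preceding limits can be applied at runtime, and current file-handle usage can be checked in /proc/sys/fs/file-nr:
sysctl -w fs.epoll.max_user_watches=32768 fs.file-max=32768
cat /proc/sys/fs/file-nr    # allocated, unused, and maximum file handles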
net.ipv4.route.flush
Preferred Value: 1
Purpose: Flushes the routing cache so that changed TCP settings take effect immediately.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
net.core.rmem_max
Preferred Value: 4194304
Purpose: Enhances throughput by tuning the TCP stack. Sets the maximum OS receive buffer size for all types of connections.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
net.core.rmem_default
Preferred Value: 1048576
Purpose: Enhances throughput by tuning the TCP stack. Sets the default OS receive buffer size for all types of connections.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
net.core.wmem_max
Preferred Value: 4194304
Purpose: Enhances throughput by tuning the TCP stack. Sets the maximum OS send buffer size for all types of connections.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
net.core.wmem_default
Preferred Value: 1048576
Purpose: Enhances throughput by tuning the TCP stack. Sets the default OS send buffer size for all types of connections.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
net.core.netdev_max_backlog
Preferred Value: 30000
Purpose: Enhances throughput by tuning the TCP stack. Sets the maximum number of packets queued on the input side when the interface receives packets faster than the kernel can process them.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
net.ipv4.tcp_rmem
Preferred Value: 4096 1048576 4194304
Purpose: Enhances throughput by tuning the TCP stack. Increases the read-buffer space allocatable per TCP socket (minimum, default, and maximum sizes, in bytes).
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
net.ipv4.tcp_wmem
Preferred Value: 4096 1048576 4194304
Purpose: Enhances throughput by tuning the TCP stack. Increases the write-buffer space allocatable per TCP socket (minimum, default, and maximum sizes, in bytes).
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
net.ipv4.tcp_mem
Preferred Value: 8388608 8388608 8388608
Purpose: Enhances throughput by tuning the TCP stack. Increases the maximum total buffer space allocatable to TCP (low, pressure, and high thresholds, in pages of 4096 bytes each).
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
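The TCP-related parameters above can be applied together at runtime; flushing the route cache at the end makes them effective immediately. A minimal sketch:
sysctl -w net.core.rmem_max=4194304 net.core.rmem_default=1048576
sysctl -w net.core.wmem_max=4194304 net.core.wmem_default=1048576
sysctl -w net.core.netdev_max_backlog=30000
sysctl -w net.ipv4.tcp_rmem="4096 1048576 4194304"
sysctl -w net.ipv4.tcp_wmem="4096 1048576 4194304"
sysctl -w net.ipv4.tcp_mem="8388608 8388608 8388608"
sysctl -w net.ipv4.route.flush=1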
net.ipv4.tcp_syn_retries
Preferred Value: 4
Purpose: Maintains High Availability by detecting failures rapidly. Ensures that the TCP stack takes about 30 seconds to detect the failure of a remote node when establishing a connection. Because this setting affects all TCP connections, exercise caution before lowering it further.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
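As a worked example, assuming the Linux default initial SYN retransmission timeout of 1 second: with a value of 4, the SYN is retransmitted after 1, 2, 4, and 8 seconds, and the attempt is abandoned after a final 16-second wait, so the failure is detected after roughly 1 + 2 + 4 + 8 + 16 = 31 seconds, which matches the 30 seconds noted above.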
net.ipv4.tcp_retries2
Preferred Value: 5
Purpose: Maintains High Availability by detecting failures rapidly. Influences the timeout of a TCP connection that is alive, when RTO retransmissions remain unacknowledged. Given a value of N, a hypothetical TCP connection following exponential backoff with an initial RTO of TCP_RTO_MIN would retransmit N times before killing the connection at the (N+1)th RTO.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
vm.dirty_ratio
Preferred Value: 6
Purpose: Maintains High Availability by guaranteeing fast resync time. Denotes the percentage of system memory that, once dirty, causes a process performing writes to block and write dirty pages out to the disks.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
vm.dirty_background_ratio
Preferred Value: 3
Purpose: Maintains High Availability by guaranteeing fast resync time. Denotes the percentage of system memory that can be filled with dirty pages (memory pages that still need to be written to disk) before the kernel begins writing them out in the background.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
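As a sketch, both thresholds can be applied at runtime, and the amount of dirty memory can be observed in /proc/meminfo during a heavy write load or resync:
sysctl -w vm.dirty_background_ratio=3 vm.dirty_ratio=6
grep -E 'Dirty|Writeback' /proc/meminfo    # memory still waiting to be written to disk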
vm.overcommit_memory
Preferred Value: 0
Purpose: Enhances throughput. Allows the system to manage memory heuristically rather than overcommitting it and crashing when memory is exhausted.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
vm.swappiness
Preferred Value: 1
Purpose: Enhances throughput. A value of 1 causes the kernel to start using swap only when RAM is almost fully (about 99%) utilized. This setting is not required if the containers do not have swap.
Applicable to MFS (Bare Metal or Containerized): Yes, if containerized MFS uses swap space.
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes, if container uses swap space.
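A minimal sketch for applying both virtual-memory settings and confirming whether swap is configured at all (if no swap device is listed, vm.swappiness has no effect):
sysctl -w vm.overcommit_memory=0 vm.swappiness=1
swapon --show    # empty output means no swap is configured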
max_sectors_kb
Preferred Value: 1024
Purpose: Enhances throughput. Sets the maximum I/O size (in KB) per request for each block device. For example:
echo "1024" > /sys/block/$devName/queue/max_sectors_kb
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: No
Applicable to Containerized Client: No
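Writing to /sys takes effect immediately but does not persist across reboots. One common way to persist the value is a udev rule; the rule file name and the sd[a-z] device match below are only examples and should be adjusted to the actual data disks:
# Contents of /etc/udev/rules.d/99-max-sectors.rules (example file name):
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/max_sectors_kb}="1024"
# Reload and apply the rule without rebooting:
udevadm control --reload-rules && udevadm trigger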
scheduler
Preferred Value: noop
Purpose: Enhances throughput. NOOP is the simplest I/O scheduler for the Linux kernel, based on the FIFO queue concept: it inserts all incoming I/O requests into a simple FIFO queue and implements request merging, assuming that I/O performance optimization is handled at another layer of the I/O hierarchy. Set NOOP for each block device. For example:
echo noop > /sys/block/hda/queue/scheduler
NOTE: For a high-performance SSD on RHEL 8.x, it is recommended to set the scheduler to none or kyber.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: No
Applicable to Containerized Client: No
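As a sketch, you can list the schedulers a device supports (the active one is shown in brackets) before switching it; sda is only an example device name:
cat /sys/block/sda/queue/scheduler    # on RHEL 8.x blk-mq kernels, expect none, mq-deadline, kyber, bfq
echo noop > /sys/block/sda/queue/scheduler    # use none or kyber instead on RHEL 8.x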
core-pattern
Preferred Value: /opt/cores/%e.core.%p.%h
Purpose: Enhances supportability. Specifies where core files are written and how they are named when a process crashes.
Applicable to MFS (Bare Metal or Containerized): Yes
Applicable to Bare Metal Client: Yes
Applicable to Containerized Client: Yes
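A minimal sketch, assuming the value is applied through the kernel.core_pattern sysctl; the target directory must exist, and core files must not be disabled by a zero core-size limit:
mkdir -p /opt/cores
sysctl -w kernel.core_pattern=/opt/cores/%e.core.%p.%h
ulimit -c unlimited    # allow core files for processes started from this shell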