Setting Socket Buffer Sizes
When you determine buffer size settings, strike a balance between communication needs and other processing.
Larger socket buffers allow your members to distribute data and events more quickly, but they also take memory away from other things. If you store very large data objects in your cache, finding the right sizing for your buffers while leaving enough memory for the cached data can become critical to system performance.
Ideally, you should have buffers large enough for the distribution of any single data object so you don’t get message fragmentation, which lowers performance. Your buffers should be at least as large as your largest stored objects and their keys, plus some overhead for message headers. The overhead varies depending on who is sending and receiving, but 100 bytes should be sufficient. You can also look at the statistics for the communication between your processes to see how many bytes are being sent and received.
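One rough way to arrive at a starting value is to serialize your largest key/value pair and add the header overhead. The sketch below uses standard Java serialization as a proxy for the wire size; GemFire's own serialization differs, so treat the result as an estimate, and note that the class and method names here are illustrative, not part of any GemFire API:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class BufferSizeEstimate {
    // Assumed per-message overhead for headers, per the guideline above.
    static final int HEADER_OVERHEAD = 100;

    // Estimate the wire size of a key/value pair by Java-serializing both.
    static int estimate(Serializable key, Serializable value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(key);
            out.writeObject(value);
        }
        return bytes.size() + HEADER_OVERHEAD;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical largest entry: a string key and a 32 KB value.
        int needed = estimate("customer-42", new byte[32_000]);
        System.out.println("suggested minimum socket-buffer-size: " + needed);
    }
}
```

Run this against your largest real entries, then round up and confirm with the sent/received byte statistics mentioned above.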
If you see performance problems and logging messages indicating blocked writers, increasing your buffer sizes may help.
This table lists the settings for the various member relationships and protocols, and tells where to set them.
| Protocol / Area Affected | Configuration Location | Property Name |
| --- | --- | --- |
| TCP/IP: Gateway receiver | gfsh create gateway-receiver or cache.xml <gateway-receiver> | socket-buffer-size |
TCP/IP Buffer Sizes
If possible, your TCP/IP buffer size settings should match across your GemFire installation. At a minimum, follow the guidelines listed here.
- Peer-to-peer. The socket-buffer-size setting in gemfire.properties should be the same throughout your distributed system.
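A minimal gemfire.properties fragment for the peer-to-peer case, using the example value that appears elsewhere on this page, would look like this:

```
# gemfire.properties -- use the same value on every member
socket-buffer-size=42000
```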
- Client/server. The client's pool socket-buffer-size setting should match the setting for the servers the pool uses, as in these example cache.xml snippets:

  Client socket buffer size cache.xml configuration:

  ```xml
  <pool name="PoolA" server-group="dataSetA" socket-buffer-size="42000" ...
  ```

  Server socket buffer size cache.xml configuration:

  ```xml
  <cache-server port="40404" socket-buffer-size="42000">
    <group>dataSetA</group>
  </cache-server>
  ```
- Multisite (WAN). In a multi-site installation using gateways, if the link between sites is not tuned for optimum throughput, it could cause messages to back up in the cache queues. If a receiving queue overflows because of inadequate buffer sizes, it will become out of sync with the sender and the receiver will be unaware of the condition.

  The gateway sender's socket-buffer-size attribute should match the gateway receiver's socket-buffer-size attribute for all gateway receivers that the sender connects to, as in these example cache.xml snippets:

  Gateway sender socket buffer size cache.xml configuration:

  ```xml
  <gateway-sender id="sender2" parallel="true"
    remote-distributed-system-id="2"
    socket-buffer-size="42000"
    maximum-queue-memory="150"/>
  ```

  Gateway receiver socket buffer size cache.xml configuration:

  ```xml
  <gateway-receiver start-port="1530" end-port="1551"
    socket-buffer-size="42000"/>
  ```
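The same configuration can also be created at the command line with gfsh (the table above already cites gfsh create gateway-receiver). The option spellings below follow gfsh's create commands but should be verified against your installed version:

```
gfsh> create gateway-sender --id=sender2 --parallel=true \
        --remote-distributed-system-id=2 --socket-buffer-size=42000 \
        --maximum-queue-memory=150
gfsh> create gateway-receiver --start-port=1530 --end-port=1551 \
        --socket-buffer-size=42000
```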
UDP Multicast and Unicast Buffer Sizes
These gemfire.properties settings control the UDP multicast and unicast buffer sizes, as in this example:

```
mcast-send-buffer-size=42000
mcast-recv-buffer-size=90000
udp-send-buffer-size=42000
udp-recv-buffer-size=90000
```
Operating System Limits
If you request a buffer size that exceeds an operating system or GemFire limit, you may see messages like these:

```
[warning 2008/06/24 16:32:20.286 PDT CacheRunner <main> tid=0x1] requested
multicast send buffer size of 9999999 but got 262144: see system
administration guide for how to adjust your OS
```

```
Exception in thread "main" java.lang.IllegalArgumentException: Could not set
"socket-buffer-size" to "99262144" because its value can not be greater
than "20000000".
```
If you think you are requesting more space for your buffer sizes than your system allows, check with your system administrator about adjusting the operating system limits.
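You can observe this clamping behavior directly from Java: ask the OS for a very large buffer on a plain socket, then read back what was actually granted. This is a standalone sketch (the class name is illustrative), not GemFire code, but the same OS limits apply to GemFire's sockets:

```java
import java.io.IOException;
import java.net.Socket;

public class OsBufferLimitCheck {
    // Request a receive buffer size, then read back what the OS granted.
    public static int grantedReceiveBuffer(int requested) throws IOException {
        try (Socket socket = new Socket()) {
            socket.setReceiveBufferSize(requested);
            // The OS may silently clamp the request to its configured maximum.
            return socket.getReceiveBufferSize();
        }
    }

    public static void main(String[] args) throws IOException {
        int requested = 9_999_999;
        int granted = grantedReceiveBuffer(requested);
        System.out.println("requested " + requested
            + " bytes, OS granted " + granted);
    }
}
```

If the granted value is far below what you requested, that is the OS limit your system administrator would need to raise.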