The first line shows the thread pool status. If you find that Peak is greater than thread max, you should edit conf/resin.conf and increase thread-max accordingly.
Resin will automatically allocate and free threads as the load requires. Since the threads are pooled, Resin can reuse old threads without the performance penalty of creating and destroying them. When the load drops, Resin will slowly decrease the number of threads in the pool until it matches the load.
Most users can set thread-max to something large (200 or greater) and then forget about the threading. Some ISPs dedicate a JVM per user and have many JVMs on the same machine. In that case, it may make sense to reduce the thread-max to throttle the requests.
Since each servlet request gets its own thread, thread-max determines the maximum number of concurrent users. So if you have a peak of 100 users with slow modems downloading a large file, you'll need a thread-max of at least 100. The number of concurrent users is unrelated to the number of active sessions. Unless the user is actively downloading, he doesn't need a thread (except for "keepalives").
Keepalives make HTTP and srun requests more efficient. Connecting to a TCP server is relatively expensive. The client and server need to send several packets back and forth to establish the connection before the first data can go through. HTTP/1.1 introduced a protocol to keep the connection open for more requests. The srun protocol between Resin and the web server plugin also uses keepalives. By keeping the connection open for following requests, Resin can improve performance.
resin.conf for keepalives
<resin ...>
<thread-pool>
<thread-max>250</thread-max>
</thread-pool>
<server>
<keepalive-max>500</keepalive-max>
<keepalive-timeout>120s</keepalive-timeout>
...
Requests and keepalive connections can only be idle for a limited time before Resin closes them. Each connection has a read timeout, read-timeout. If the client doesn't send a request within the timeout, Resin will close the TCP socket. The timeout prevents idle clients from hogging Resin resources.
...
<thread-pool>
<thread-max>250</thread-max>
</thread-pool>
<server>
<http port="8080" read-timeout="30s" write-timeout="30s"/>
...
...
<thread-pool>
<thread-max>250</thread-max>
</thread-pool>
<server>
<cluster>
<client-live-time>20s</client-live-time>
<srun id="a" port="6802" read-timeout="30s"/>
</cluster>
...
In general, the read-timeout and keepalives are less important for Resin standalone configurations than Apache/IIS/srun configurations. Very heavy traffic sites may want to reduce the timeout for Resin standalone.
Since read-timeout will close srun connections, its setting needs to take into consideration the client-live-time setting for mod_caucho or isapi_srun. client-live-time is the time the plugin will keep a connection open. read-timeout must always be larger than client-live-time, otherwise the plugin will try to reuse a closed socket.
The web server plugin, mod_caucho, needs configuration for its keepalive handling because requests are handled differently in the web server. Until the web server sends a request to Resin, it can't tell if Resin has closed the other end of the socket. If the JVM has restarted, or if Resin has closed the socket because of read-timeout, mod_caucho will not know about the closed socket. So mod_caucho needs to know how long to consider a connection reusable before closing it. client-live-time tells the plugin how long it should consider a socket usable.
Because the plugin isn't signalled when Resin closes the socket, the socket will remain half-closed until the next web server request. A netstat will show that as a bunch of sockets in the FIN_WAIT_2 state. With Apache, there doesn't appear to be a good way around this. If these become a problem, you can increase read-timeout and client-live-time so the JVM won't close the keepalive connections as fast.
unix> netstat
...
localhost.32823 localhost.6802 32768 0 32768 0 CLOSE_WAIT
localhost.6802 localhost.32823 32768 0 32768 0 FIN_WAIT_2
localhost.32824 localhost.6802 32768 0 32768 0 CLOSE_WAIT
localhost.6802 localhost.32824 32768 0 32768 0 FIN_WAIT_2
...
A client and a server that open a large number of TCP connections can run into operating system/TCP limits. If mod_caucho isn't configured properly, it can use too many connections to Resin. When the limit is reached, mod_caucho will report "can't connect" errors until a timeout is reached. Load testing or benchmarking can run into the same limits, causing apparent connection failures even though the Resin process is running fine.
The TCP limit is the TIME_WAIT timeout. When a TCP socket closes, the side starting the close puts the socket into the TIME_WAIT state. A netstat will show the sockets in the TIME_WAIT state. The following shows an example of the TIME_WAIT sockets generated while benchmarking. Each client connection has a unique ephemeral port and the server always uses its public port:
Typical Benchmarking Netstat
unix> netstat
...
tcp 0 0 localhost:25033 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25032 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25031 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25030 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25029 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25028 localhost:8080 TIME_WAIT
...
The socket will remain in the TIME_WAIT state for a system-dependent time, generally 120 seconds, but usually configurable. Since there are fewer than 32k ephemeral ports available to the client, the client will eventually run out and start seeing connection failures. On some operating systems, including RedHat Linux, the default limit is only 4k ports. The full 32k ports with a 120 second timeout limit the client to about 250 new connections per second.
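To see the arithmetic: roughly 32768 ephemeral ports divided by a 120 second TIME_WAIT gives 32768/120, or about 270 new connections per second, before the client runs out of ports. On Linux you can inspect and widen the ephemeral port range with the standard net.ipv4 sysctl; a sketch, with illustrative values:

unix> cat /proc/sys/net/ipv4/ip_local_port_range
32768   61000
# widen the range (add to /etc/sysctl.conf to persist across reboots)
unix> sysctl -w net.ipv4.ip_local_port_range="1024 65535"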
If mod_caucho or isapi_srun are misconfigured, they can use too many connections and run into the TIME_WAIT limits. Using keepalives effectively avoids this problem. Since keepalive connections are reused, they won't go into the TIME_WAIT state until they're finally closed. A site can maximize the keepalives by setting keepalive-max large and setting client-live-time and read-timeout to large values. keepalive-max limits the maximum number of keepalive connections. client-live-time and read-timeout configure how long each connection will be reused.
Configuration for a medium-loaded Apache
...
<thread-pool>
<thread-max>250</thread-max>
</thread-pool>
<server>
<keepalive-max>250</keepalive-max>
<keepalive-timeout>120s</keepalive-timeout>
<cluster>
<client-live-time>120s</client-live-time>
<srun id="a" port="6802" read-timeout="120s"/>
</cluster>
...
read-timeout must always be larger than client-live-time. In addition, keepalive-max should be larger than the maximum number of Apache processes.
Using Apache as a web server on Unix introduces a number of issues because Apache uses a process model instead of a threading model. The Apache processes don't share the keepalive srun connections. Each process has its own connection to Resin. In contrast, IIS uses a threaded model so it can share Resin connections between the threads. The Apache process model means Apache needs more connections to Resin than a threaded model would.
In other words, the keepalive and TIME_WAIT issues mentioned above are particularly important for Apache web servers. It's a good idea to use netstat to check that a loaded Apache web server isn't running out of keepalive connections and running into TIME_WAIT problems.
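As a rough sketch of the relationship (assuming Apache's prefork model, where the standard MaxClients directive caps the number of processes; the values here are illustrative):

# httpd.conf: worst case, each Apache process holds one srun connection
MaxClients 200

<!-- resin.conf: keep keepalive-max above the Apache process cap -->
<keepalive-max>250</keepalive-max>

A loaded server can then be spot-checked with something like unix> netstat -an | grep 6802 | wc -l to count the srun connections actually open.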
First set the three parameters thread-min, thread-max and thread-keepalive in resin.conf to fairly large values, say 1000, 3000 and 1000 respectively. These of course depend on your hardware and the number of likely concurrent users; if your site gets heavy traffic, raise them further.
Then watch the Java thread count in the task manager and note at what thread count the Java process dies. Mine died at around 379.
Then set thread-min, thread-max and thread-keepalive to 150, 400 and 300 respectively; in other words, pad the thread count observed at the crash slightly and use that as thread-max, since the system will rarely exceed it. Set the other two parameters to match your situation.
These are only my estimates; they will differ with machine performance and traffic.
Then set accept-buffer-size to a larger value. I set it above 10000, which lets Java use more memory resources.
These settings are generally enough for Resin to run normally, and crashes of the Resin service become much rarer. They are suitable for small and medium-sized sites.
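For reference, a sketch of where those values would sit in a Resin 2.1-era resin.conf (the element names follow the 2.1 documentation, but confirm the exact placement against your own version):

<caucho.com>
<http-server>
<thread-min>150</thread-min>
<thread-max>400</thread-max>
<thread-keepalive>300</thread-keepalive>
<accept-buffer-size>10000</accept-buffer-size>
...
</http-server>
</caucho.com>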
Resin tuning:
The allocation of memory for the JVM is specified using -X options when starting Resin (the exact options may depend upon the JVM that you are using; the examples here are for the Sun JVM).
JVM option passed to Resin    Meaning
-Xms                          initial Java heap size
-Xmx                          maximum Java heap size
-Xmn                          size of the heap for the young generation
Resin startup with heap memory options
unix> bin/httpd.sh -Xmn100M -Xms500M -Xmx500M
win> bin/httpd.exe -Xmn100M -Xms500M -Xmx500M
install win service> bin/httpd.exe -Xmn100M -Xms500M -Xmx500M -install
Source: http://www.caucho.com/resin-3.0/performance/jvm-tuning.xtp
JVM tuning:
java -Xms<size>
Sets the initial Java heap size (default: -Xms32m).
java -Xmx<size>
Sets the maximum Java heap size (default: -Xmx128m).
Set them like this:
java -Xms32m -Xmx256m
If the problem persists, increase -Xmx beyond 256m (to 512m, for example).
-J-mx<num>
Resin's startup is controlled by the wrapper.pl file in the bin directory; we can edit this file to add parameters, for example Java's -Xms and -Xmx options.
Run:
vi /usr/local/resin-2.1/bin/wrapper.pl
Find the following line and change it to:
$JAVA_ARGS="-Xms512m -Xmx512m";
Adjust the exact values to suit your own application.
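After restarting Resin, it's worth confirming that the flags actually reached the JVM; the output below is abridged and illustrative:

unix> ps -ef | grep java
resin ... java -Xms512m -Xmx512m ...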
Resin tuning: log configuration
Log settings
<log name='' level='info' path='stdout:' rollover-period='1W' timestamp='[%Y/%m/%d %H:%M:%S.%s] '/>
<log name='com.caucho.java' level='fine' path='stdout:' rollover-period='1W' timestamp='[%Y/%m/%d %H:%M:%S.%s] '/>
<log name='com.caucho.loader' level='config' path='stdout:' rollover-period='1W' timestamp='[%Y/%m/%d %H:%M:%S.%s] '/>
name specifies which layer of the application to debug; it can be set in several ways, for example:
name='' logs everything, including the ports
name='com.caucho.jsp' logs JSP debugging only
name='com.caucho.java' logs Java class debugging only
name='com.caucho.server.port' logs port and thread debugging only
name='com.caucho.server.port.AcceptPool' logs only the creation and release of port threads
...
The usual level values are:
off, severe, info, config, fine, finer, finest, all
off disables log output
severe outputs only serious errors
info outputs general informational messages
config outputs configuration information
fine outputs Resin trace information
finer outputs detailed trace information
finest outputs even more detailed trace messages
all outputs every log message
path: where the output goes, either in the form path='stdout:' (note the trailing colon) or as an absolute path such as path='/usr/local/resin-3.0.7/log/stdout.log'
timestamp: the full output date format, [%Y/%m/%d %H:%M:%S.%s]
Log files are usually set to roll over once a week, i.e. rollover-period='1W' or rollover-period='7D'. When the week is up, the system automatically starts a new log file, named like stderr.log.20041201, stderr.log.20041208.
rollover-period='D' days
rollover-period='h' hours
rollover-period='W' weeks
rollover-period='M' months
When the application code won't be changing, disabling Java auto-compilation makes things faster:
<compiling-loader path="webapps/WEB-INF/classes" />
Add the attribute:
batch="false"
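In context, the loader entry would look something like this (a sketch; compiling-loader, the path, and the batch attribute come from the example above, and the class-loader wrapper follows the usual Resin 3.0 web-app layout):

<web-app>
<class-loader>
<compiling-loader path="webapps/WEB-INF/classes" batch="false"/>
</class-loader>
</web-app>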