The first line shows the thread pool status. If Peak is greater than thread max, you should increase thread-max in conf/resin.conf accordingly.
The second line shows the threads. If a method keeps appearing here and it is not a Sun method or a Resin method, you should test and optimize that method.
(Everything below comes from my own repeated experimentation, not from official Resin advice; it may not fit your situation, so treat my experience only as a reference.)
Recently I found someone using hacker-type tools to maliciously hit the site, or to send large volumes of junk packets. I am not sure exactly what it was, but it was clearly deliberate, and it left port 80 unreachable or extremely slow.
I ran netstat -an >> c:\temp\aaa.txt to capture the situation at the time, and found that a few IPs had an abnormally huge number of connections.
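To spot such IPs quickly, you can count connections per remote address. A minimal sketch, assuming standard Unix tools (the address column number can differ by platform; on Windows, sort the saved netstat output by the foreign-address column instead):
unix> netstat -an | awk '{print $5}' | sort | uniq -c | sort -rn | head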
Whether the cause was heavy traffic or hacker harassment, I figured it was worth trying to tune Resin.
First, observe the system while traffic is at its heaviest.
Start by setting the three parameters thread-min, thread-max, and thread-keepalive in resin.conf to fairly large values, say 1000, 3000, and 1000 respectively. These of course depend on your machine and on the likely number of concurrent visitors; if your site gets very heavy traffic, raise them further.
Then watch the Java thread count in Task Manager to see at what thread count the Java process dies. Mine died at around 379.
Then set thread-min, thread-max, and thread-keepalive to 150, 400, and 300 respectively; that is, take the maximum observed at the time of the crash, pad it slightly, and use that as thread-max, since the system will generally not exceed it. Set the other two parameters according to your situation.
These are only my estimates; they will vary with machine performance and traffic.
Then set accept-buffer-size to a large value; I set it above 10000, which lets Java use more memory resources.
These settings are basically enough for Resin to run normally, and crashes of the Resin service drop off sharply. This setup suits small and medium sites.
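Pulling the final values together, a minimal sketch of the resulting configuration (element names follow this article; their exact placement inside resin.conf differs between Resin 2.x and 3.x, so treat this as illustrative rather than a drop-in snippet):
<thread-min>150</thread-min>
<thread-max>400</thread-max>
<thread-keepalive>300</thread-keepalive>
<accept-buffer-size>10000</accept-buffer-size>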
Resin will automatically allocate and free threads as the load requires. Since the threads are pooled, Resin can reuse old threads without the performance penalty of creating and destroying them. When the load drops, Resin will slowly decrease the number of threads in the pool until it matches the load.
Most users can set thread-max to something large (200 or greater) and then forget about the threading. Some ISPs dedicate a JVM per user and have many JVMs on the same machine. In that case, it may make sense to reduce the thread-max to throttle the requests.
Since each servlet request gets its own thread, thread-max determines the maximum number of concurrent users. So if you have a peak of 100 users with slow modems downloading a large file, you'll need a thread-max of at least 100. The number of concurrent users is unrelated to the number of active sessions. Unless the user is actively downloading, he doesn't need a thread (except for "keepalives").
Keepalives make HTTP and srun requests more efficient. Connecting to a TCP server is relatively expensive. The client and server need to send several packets back and forth to establish the connection before the first data can go through. HTTP/1.1 introduced a protocol to keep the connection open for more requests. The srun protocol between Resin and the web server plugin also uses keepalives. By keeping the connection open for following requests, Resin can improve performance.
resin.conf for thread-keepalive
<resin ...>
<thread-pool>
<thread-max>250</thread-max>
</thread-pool>
<server>
<keepalive-max>500</keepalive-max>
<keepalive-timeout>120s</keepalive-timeout>
...
Requests and keepalive connections can only be idle for a limited time before Resin closes them. Each connection has a read timeout, request-timeout. If the client doesn't send a request within the timeout, Resin will close the TCP socket. The timeout prevents idle clients from hogging Resin resources.
...
<thread-pool>
<thread-max>250</thread-max>
</thread-pool>
<server>
<http port="8080" read-timeout="30s" write-timeout="30s"/>
...
...
<thread-pool>
<thread-max>250</thread-max>
</thread-pool>
<server>
<cluster>
<client-live-time>20s</client-live-time>
<srun id="a" port="6802" read-timeout="30s"/>
</cluster>
...
In general, the read-timeout and keepalives are less important for Resin standalone configurations than Apache/IIS/srun configurations. Very heavy traffic sites may want to reduce the timeout for Resin standalone.
Since read-timeout will close srun connections, its setting needs to take into consideration the client-live-time setting for mod_caucho or isapi_srun. client-live-time is the time the plugin will keep a connection open. read-timeout must always be larger than client-live-time, otherwise the plugin will try to reuse a closed socket.
The web server plugin, mod_caucho, needs configuration for its keepalive handling because requests are handled differently in the web server. Until the web server sends a request to Resin, it can't tell if Resin has closed the other end of the socket. If the JVM has restarted, or if Resin closed the socket because of the read-timeout, mod_caucho will not know about the closed socket. So mod_caucho needs to know how long to consider a connection reusable before closing it. client-live-time tells the plugin how long it should consider a socket usable.
Because the plugin isn't signalled when Resin closes the socket, the socket will remain half-closed until the next web server request. A netstat will show that as a bunch of sockets in the FIN_WAIT_2 state. With Apache, there doesn't appear to be a good way around this. If these become a problem, you can increase read-timeout and client-live-time so the JVM won't close the keepalive connections as fast.
unix> netstat
...
localhost.32823 localhost.6802 32768 0 32768 0 CLOSE_WAIT
localhost.6802 localhost.32823 32768 0 32768 0 FIN_WAIT_2
localhost.32824 localhost.6802 32768 0 32768 0 CLOSE_WAIT
localhost.6802 localhost.32824 32768 0 32768 0 FIN_WAIT_2
...
A client and a server that open a large number of TCP connections can run into operating system/TCP limits. If mod_caucho isn't configured properly, it can use too many connections to Resin. When the limit is reached, mod_caucho will report "can't connect" errors until a timeout is reached. Load testing or benchmarking can run into the same limits, causing apparent connection failures even though the Resin process is running fine.
The TCP limit is the TIME_WAIT timeout. When the TCP socket closes, the side starting the close puts the socket into the TIME_WAIT state. A netstat will show the sockets in the TIME_WAIT state. The following shows an example of the TIME_WAIT sockets generated while benchmarking. Each client connection has a unique ephemeral port and the server always uses its public port:
Typical Benchmarking Netstat
unix> netstat
...
tcp 0 0 localhost:25033 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25032 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25031 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25030 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25029 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25028 localhost:8080 TIME_WAIT
...
The socket will remain in the TIME_WAIT state for a system-dependent time, generally 120 seconds, but usually configurable. Since there are fewer than 32k ephemeral ports available to the client, the client will eventually run out and start seeing connection failures. On some operating systems, including RedHat Linux, the default limit is only 4k sockets. The full 32k sockets with a 120 second timeout limits the number of connections to about 250 connections per second.
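As a rough check of that figure: 32768 ephemeral ports divided by a 120-second TIME_WAIT works out to about 273 new connections per second, which rounds down to the quoted 250. On Linux the ephemeral port range is one of the configurable limits mentioned above; a sketch of inspecting and widening it (run as root; default ranges vary by distribution):
unix> cat /proc/sys/net/ipv4/ip_local_port_range
unix> echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range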
If mod_caucho or isapi_srun are misconfigured, they can use too many connections and run into the TIME_WAIT limits. Using keepalives effectively avoids this problem. Since keepalive connections are reused, they won't go into the TIME_WAIT state until they're finally closed. A site can maximize the keepalives by setting thread-keepalive large and setting live-time and request-timeout to large values. thread-keepalive limits the maximum number of keepalive connections. live-time and request-timeout will configure how long the connection will be reused.
Configuration for a medium-loaded Apache
...
<thread-pool>
<thread-max>250</thread-max>
</thread-pool>
<server>
<keepalive-max>250</keepalive-max>
<keepalive-timeout>120s</keepalive-timeout>
<cluster>
<client-live-time>120s</client-live-time>
<srun id="a" port="6802" read-timeout="120s"/>
</cluster>
...
read-timeout must always be larger than client-live-time. In addition, keepalive-max should be larger than the maximum number of Apache processes.
Using Apache as a web server on Unix introduces a number of issues because Apache uses a process model instead of a threading model. The Apache processes don't share the keepalive srun connections. Each process has its own connection to Resin. In contrast, IIS uses a threaded model so it can share Resin connections between the threads. The Apache process model means Apache needs more connections to Resin than a threaded model would.
In other words, the keepalive and TIME_WAIT issues mentioned above are particularly important for Apache web servers. It's a good idea to use netstat to check that a loaded Apache web server isn't running out of keepalive connections and running into TIME_WAIT problems.
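A quick sketch of that check, assuming standard Unix tools:
unix> netstat -an | grep -c FIN_WAIT_2
unix> netstat -an | grep -c TIME_WAIT
Large and steadily growing counts here suggest the keepalive settings above need revisiting.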
Resin optimization:
The allocation of memory for the JVM is specified using -X options when starting Resin (the exact options may depend on the JVM you are using; the examples here are for the Sun JVM).
JVM option passed to Resin: meaning
-Xms: initial Java heap size
-Xmx: maximum Java heap size
-Xmn: the size of the heap for the young generation
Resin startup with heap memory options:
unix> bin/httpd.sh -Xmn100M -Xms500M -Xmx500M
win> bin/httpd.exe -Xmn100M -Xms500M -Xmx500M
install win service> bin/httpd.exe -Xmn100M -Xms500M -Xmx500M -install
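Note that the example sets -Xms and -Xmx to the same value: starting the heap at its maximum size means the JVM never has to pause to grow the heap under load, which is a common choice for server deployments.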
Source: http://www.caucho.com/resin-3.0/performance/jvm-tuning.xtp
JVM optimization:
java -Xms<size>
sets the initial Java heap size (default: -Xms32m)
java -Xmx<size>
sets the maximum Java heap size (default: -Xmx128m)
Set them like this:
java -Xms32m -Xmx256m
If the problem persists, increase -Xmx beyond 256m (512m, for example).
-J-mx<num> (with some Resin startup scripts, arguments prefixed with -J are passed through to the underlying JVM; check the wrapper script for your version)
Resin's startup is controlled by the wrapper.pl file in the bin directory; we can edit this file to add parameters, for example Java's -Xms and -Xmx options.
Run:
vi /usr/local/resin-2.1/bin/wrapper.pl
Find the relevant line and change it to:
$JAVA_ARGS="-Xms512m -Xmx512m";
Adjust the exact values to suit your own application.
Resin optimization: log settings
Log configuration:
<log name='' level='info' path='stdout:' rollover-period='1W' timestamp='[%Y/%m/%d %H:%M:%S.%s] '/>
<log name='com.caucho.java' level='fine' path='stdout:' rollover-period='1W' timestamp='[%Y/%m/%d %H:%M:%S.%s] '/>
<log name='com.caucho.loader' level='config' path='stdout:' rollover-period='1W' timestamp='[%Y/%m/%d %H:%M:%S.%s] '/>
name specifies which layer of the application to debug and log. It can be set in several ways, for example:
name='' : an empty name logs everything, across all applications and including the ports
name='com.caucho.jsp' : debug and log JSP only
name='com.caucho.java' : debug Java classes only
name='com.caucho.server.port' : debug ports and threads only
name='com.caucho.server.port.AcceptPool' : debug only the creation and release of port threads
...
The usual level values are:
off, severe, info, config, fine, finer, finest, all
off: turns log output off
severe: outputs only serious error messages
info: outputs general informational messages
config: outputs configuration information
fine: outputs Resin trace information
finer: outputs detailed trace information
finest: outputs even more detailed trace messages and fine-grained details than finer
all: outputs every message
path: where the output goes; it can take the form path='stdout:' (note the trailing colon), or point to an absolute path such as path='/usr/local/resin-3.0.7/log/stdout.log'
timestamp: the full date format for output, [%Y/%m/%d %H:%M:%S.%s]
Log files are typically rolled over once a week, i.e. rollover-period='1W' or rollover-period='7D'; when the week is up, the system automatically starts a new log file, named like stderr.log.20041201, stderr.log.20041208
rollover-period='D' : days
rollover-period='h' : hours
rollover-period='W' : weeks
rollover-period='M' : months
When the application code does not need to change, disabling Java auto-compilation makes things faster: add a batch="false" attribute to the compiling-loader element, giving:
<compiling-loader path="webapps/WEB-INF/classes" batch="false" />
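For context, a minimal sketch of where this element usually sits in a Resin 3.x configuration, assuming the standard class-loader layout (adjust the path to your deployment):
<web-app id='/'>
  <class-loader>
    <compiling-loader path="webapps/WEB-INF/classes" batch="false" />
  </class-loader>
</web-app>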