Source: http://www.oschina.net/question/12_8196

Sometimes debugging has to happen on a public-facing server. For example, when integrating the Alipay payment interface, the asynchronous server-side notification can only reach a public IP, so there is no choice but to deploy to the server and debug remotely.

Most articles about remote-debugging Tomcat have you edit catalina.sh or startup.sh, which is fiddly. The approach below remote-debugs Tomcat on Linux without modifying any files; it is simple, has no side effects, and was verified on Tomcat 6.

Many J2EE developers write their programs on Windows and then upload them to Linux to run. Sometimes a program that works fine on your own machine fails on the server, and the error message alone does not reveal where the problem is. In that situation, try remote debugging with Java's JPDA — you will not be disappointed.

Tomcat has built-in JPDA support. Just start it with:

catalina.sh jpda start

and it will listen on port 8000, waiting for a debugger to connect. Note that you cannot use the startup.sh script for this. Tomcat honors the JPDA_ADDRESS environment variable; for example, to listen on port 8017:

export JPDA_ADDRESS=8017

Then pick a debugger you like — mainstream IDEs such as Eclipse and NetBeans both work. I mainly use Eclipse, where it takes three steps:

1. Choose "Open Debug Dialog" from the "Run" menu.
2. In the dialog, find "Remote Java Application" and choose "New" from its context menu to create a configuration.
3. In the new configuration, set Project to the project deployed on the server, and fill in Host and Port with your server's IP and the JPDA port (8000 by default, or the value of JPDA_ADDRESS).

You can now set breakpoints and step through code just as you would locally.

--By oldjavaman http://blog.csdn.net/oldjavaman/archive/2009/07/10/4338315.aspx

1. File-based session persistence
Sessions can be persisted across servers, even when our web application's classes change. During development, file-based session persistence is very convenient, especially while servlets are changing frequently.

Configure resin.conf as follows:

<web-app xmlns="http://caucho.com/ns/resin">
  <session-config>
    <file-store>WEB-INF/sessions</file-store>
  </session-config>
</web-app>

With this configuration, sessions are written into the directory defined by <file-store>: whenever a session changes it is written to a file, and when the web application is loaded, Resin loads the sessions back from those files. However, file-based sessions are of no use for passing sessions between servers. Some suggest sharing the session files among several servers over NFS, but NFS often serves reads from a local cache, so when the file actually holding a session changes, the other server does not see the change in time.
2. Distributed sessions

For load balancing across multiple machines, the session techniques in use come down to sticky sessions or symmetrical sessions. The former relies on the load-balancing layer; the latter relies on the JVMs. Which to use depends on your hardware, how many machines you have, and how you want to manage sessions.

2.1. Symmetrical sessions

Symmetrical sessions require a completely consistent server environment to work, and compared with sticky sessions they are less efficient, because session state must be refreshed on every web request.

2.2. Sticky sessions

Sticky sessions are quite reliable: if machine A goes down, the session we need can be fetched from machine B, and the user never notices. Sticky sessions are also efficient: a session is rewritten to the server only when it actually changes.

2.3. always-load-session

The always-load-session attribute forces every web request to refresh the session from the server's session store. By default, a session is read from the persistent store only when it is created; with multiple servers, this flag forces every request to reload the session from the store, keeping the session consistent across servers.
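A minimal sketch of enabling always-load-session for a web-app (the element placement mirrors the other <session-config> examples in this article; treat it as a sketch rather than a verified snippet):

```xml
<web-app xmlns="http://caucho.com/ns/resin">
  <session-config>
    <use-persistent-store/>
    <!-- reload the session from the persistent store on every request -->
    <always-load-session/>
  </session-config>
</web-app>
```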
2.4. always-save-session

Using the <always-save-session> attribute ensures that session changes are saved to the server at the end of every client request. It is less efficient, but very reliable.
3. Database-backed session replication

Database-backed sessions are easy to understand: Resin writes sessions to a database, and each request obtains its session from the database.

For efficiency, the machine hosting each JVM keeps a local cache of the session and re-queries the database only when the session changes; if code in another JVM changes the session, this machine is notified to fetch the update from the database.

This synchronization scheme pushes changed session data to every machine that already holds the session, so the database can become a bottleneck. A good way around this is to use a light, nimble MySQL instance to store sessions while keeping business data in Oracle.

To use database-backed sessions, the <database> element is required; with it in place, Resin automatically creates the session table in the specified database.

<resin xmlns="http://caucho.com/ns/resin">
<server>
  <http id='a' port='80'/>
  <http id='b' port='80'/>

  <database jndi-name="jdbc/session">
    ...
  </database>

  <cluster>
    <srun id='a' host='host-a' port='6802'/>
    <srun id='b' host='host-b' port='6802'/>
  </cluster>

  <persistent-store type="jdbc">
    <init>
      <data-source>jdbc/session</data-source>
    </init>
  </persistent-store>
  ...
  <web-app-default>
    <session-config>
      <use-persistent-store/>
    </session-config>
  </web-app-default>

The persistent session store must be defined with <persistent-store> inside the <server> element shown above, and every web-app that needs distributed sessions must declare <use-persistent-store/>.

The data-source must point at a database where Resin can create the session table:

CREATE TABLE persistent_session (
  id VARCHAR(64) NOT NULL,
  data BLOB,
  access_time int(11),
  expire_interval int(11),
  PRIMARY KEY(id)
)

Below is an example web-app definition that uses the persistent session store:

<web-app xmlns="http://caucho.com/ns/resin">
  <session-config>
    <use-persistent-store/>
    <always-save-session/>
  </session-config>
</web-app>

4. Cluster-based sessions
Cluster-based session replication is used in server clusters. In some cases database-backed session distribution is the efficient choice; in other situations, cluster sessions are faster.

With cluster sessions, each server has its own JVM and a backup JVM, and a session is kept both in its own JVM and in the backup JVM.

Again, you enable cluster sessions with <srun> entries under <cluster> in the <server> element, and mark each web-app with <use-persistent-store> to indicate that it uses distributed sessions.

The configuration looks like this:

<resin xmlns="http://caucho.com/ns/resin">
...
<server>
  <cluster>
    <srun id="a" host="192.168.0.1" port="6802" index="1"/>
    <srun id="b" host="192.168.0.2" port="6802" index="2"/>
  </cluster>

  <persistent-store type="cluster">
    <init path="cluster"/>
  </persistent-store>
  ...
<web-app xmlns="http://caucho.com/ns/resin">
  <session-config>
    <use-persistent-store/>
  </session-config>
</web-app>

Both <srun> and <srun-backup> entries are treated as cluster servers. When a session changes on one server, the server automatically finds the backup servers and updates the session there; when the server restarts, it requests the session from a backup server and recovers the backup.

<resin xmlns="http://caucho.com/ns/resin">
<server>
  <http id='a' port='80'/>
  <http id='b' port='80'/>

  <cluster>
    <srun id='a' host='host-a' port='6802'/>
    <srun id='b' host='host-b' port='6802'/>
  </cluster>

  <persistent-store type="cluster">
    <init path="cluster"/>
  </persistent-store>

  <host id=''>
    <web-app id=''>
      <session-config>
        <use-persistent-store/>
      </session-config>
    </web-app>
  </host>
</server>
</resin>

From oldjavaman's CSDN blog; please credit the source when reposting: http://blog.csdn.net/oldjavaman/archive/2009/07/10/4338315.aspx
root /usr/local/website/web;
if ( $http_user_agent ~ "(MIDP)|(WAP)|(UP.Browser)|(Smartphone)|(Obigo)|(Mobile)|(AU.Browser)|(wxd.Mms)|(WxdB.Browser)|(CLDC)|(UP.Link)|(KM.Browser)|(UCWEB)|(SEMC\-Browser)|(Mini)|(Symbian)|(Palm)|(Nokia)|(Panasonic)|(MOT\-)|(SonyEricsson)|(NEC\-)|(Alcatel)|(Ericsson)|(BENQ)|(BenQ)|(Amoisonic)|(Amoi\-)|(Capitel)|(PHILIPS)|(SAMSUNG)|(Lenovo)|(Mitsu)|(Motorola)|(SHARP)|(WAPPER)|(LG\-)|(LG/)|(EG900)|(CECT)|(Compal)|(kejian)|(Bird)|(BIRD)|(G900/V1.0)|(Arima)|(CTL)|(TDG)|(Daxian)|(DAXIAN)|(DBTEL)|(Eastcom)|(EASTCOM)|(PANTECH)|(Dopod)|(Haier)|(HAIER)|(KONKA)|(KEJIAN)|(LENOVO)|(Soutec)|(SOUTEC)|(SAGEM)|(SEC\-)|(SED\-)|(EMOL\-)|(INNO55)|(ZTE)|(iPhone)|(Android)|(Windows CE)|(Wget)|(Java)|(curl)|(Opera)" ){
root /usr/local/website/wap;
}
index index.html index.htm;
}
Distributed sessions are much more complex than file-persisted sessions. File persistence is a simple, memory-based session store, while distributed sessions must first arrange for session changes to be propagated among multiple servers.

Symmetrical sessions are mostly used with load balancing: a session may be fetched on machine A and stored on machine B. Symmetrical sessions built on the JDBC session store need the "always-load-session" attribute described for resin.conf, so that every request gets the latest session state.

Sticky sessions rely on the JVM: once a session starts, the load balancer always keeps that session on the same machine. For example, a session with ID aaaXXX always lives on machine A's JVM-A, while bbbXXX always lives on machine B's JVM-B.

As mentioned above, symmetrical sessions use the <always-load-session> attribute to mark whether every request must refresh the session from the server. With the jdbc-session technique this flag must be set in the configuration file; with the tcp-session technique it can be omitted, because tcp-session is leaner.

By default, Resin writes the session to the store when the session changes — for example when your code calls setAttribute(). But suppose you merely update a property of an object already stored in the session — say the session holds a user object and you change that user's age: Resin cannot detect this change and will not save it.
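The pitfall above can be illustrated with a small self-contained sketch (plain Java, no servlet API; `Store` and `User` are hypothetical stand-ins for Resin's session store): mutating an object already stored in the session does not trigger persistence, but calling setAttribute() again does.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a persistent session store: it only
// writes state out when setAttribute() is called, mirroring how
// Resin detects session changes.
public class SessionMutationDemo {
    static class User {
        int age;
        User(int age) { this.age = age; }
    }

    static class Store {
        final Map<String, Integer> persisted = new HashMap<>(); // the "disk" copy
        final Map<String, User> live = new HashMap<>();         // the in-memory session

        void setAttribute(String key, User u) {
            live.put(key, u);
            persisted.put(key, u.age); // only now is the change written out
        }
    }

    public static int persistedAgeAfterMutation() {
        Store s = new Store();
        s.setAttribute("user", new User(30));
        s.live.get("user").age = 31;    // mutate in place: the store never notices
        return s.persisted.get("user"); // still 30
    }

    public static int persistedAgeAfterSetAttribute() {
        Store s = new Store();
        User u = new User(30);
        s.setAttribute("user", u);
        u.age = 31;
        s.setAttribute("user", u);      // re-set the attribute: change is persisted
        return s.persisted.get("user"); // 31
    }

    public static void main(String[] args) {
        System.out.println(persistedAgeAfterMutation());     // 30
        System.out.println(persistedAgeAfterSetAttribute()); // 31
    }
}
```

The practical rule this suggests: after mutating an object held in the session, call setAttribute() again so the container sees the change.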
Database settings:

table-name — the name of the table that stores session data.
blob-type — the BLOB column type.
max-idle-time — the idle time after which a session is released.
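Putting those three settings together, a hedged sketch of a tuned JDBC persistent store (the element placement follows the <persistent-store type="jdbc"> example earlier in this article; the values are placeholders, not recommendations):

```xml
<persistent-store type="jdbc">
  <init>
    <data-source>jdbc/session</data-source>
    <table-name>persistent_session</table-name> <!-- table that stores session data -->
    <blob-type>BLOB</blob-type>                 <!-- database BLOB column type -->
    <max-idle-time>24h</max-idle-time>          <!-- idle time before a session is released -->
  </init>
</persistent-store>
```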
5. About the author

OldJavaMan has long worked in Java-related fields, mainly on the design of J2EE applications. He currently works at a software company in Nanjing and hopes to make friends with Java enthusiasts everywhere; you can reach him by mail.
By darkblue

There are two servers:

Hostname: web-srv
Internal IP: 10.199.55.1
External IP: 121.183.173.225
Domain: web.xxxx.com

Hostname: wap-srv
Internal IP: 10.199.55.3
External IP: 121.183.173.226
Domain: wap.xxxx.com

Firewall: PIX 525; server OS: RedHat ES4U2

Requirements:

Both web-srv and wap-srv serve HTTP with apache2 + resin 3.1.2, listening on port 80.

1. When both hosts are running normally, Resin provides load balancing: web-srv serves web on port 80 and wap-srv serves wap on port 80.

2. When web-srv goes down, manually start the second apache process on wap-srv, listening on port 8080 and serving web. Both public IPs then point at the single private IP 10.199.55.3, so port 80 serves wap and port 8080 serves web.

3. When wap-srv goes down, manually start the second apache process on web-srv, listening on port 8080 and serving wap. Both public IPs then point at the single private IP 10.199.55.1, so port 80 serves web and port 8080 serves wap.

Configuration:

1. DNS

Add two records: 121.183.173.225 <---> web.xxxx.com and 121.183.173.226 <---> wap.xxxx.com.
2. web-srv

2.1 Install apache2 and resin-3.1.0

Install apache2.

Install resin-3.1.0:
wget http://www.caucho.com/download/resin-3.1.0.tar.gz
cp resin-3.1.0.tar.gz /usr/local/
cd /usr/local
tar -xzvf resin-3.1.0.tar.gz
cd resin-3.1.0
./configure --with-apache=/usr/local/apache2
make
make install
2.2 Prepare configuration files and directories

cp /usr/local/apache2/conf/httpd.conf /usr/local/apache2/conf/httpdweb.conf
cp /usr/local/apache2/conf/httpd.conf /usr/local/apache2/conf/httpdwap.conf
mkdir /usr/local/apache2/htdocs/web
mkdir /usr/local/apache2/htdocs/wap
cp -r /usr/local/resin-3.1.0 /usr/local/resin-web
cp -r /usr/local/resin-3.1.0 /usr/local/resin-wap
2.3 Edit httpdweb.conf: set the listen port (80) and the web document root
file: /usr/local/apache2/conf/httpdweb.conf
ResinConfigServer 10.199.55.3 6802
ResinConfigServer 10.199.55.1 6802
CauchoConfigCacheDirectory /tmp
CauchoStatus yes
2.4 Edit httpdwap.conf: set the listen port (8080) and the wap document root

file: /usr/local/apache2/conf/httpdwap.conf
ResinConfigServer 10.199.55.3 6803
ResinConfigServer 10.199.55.1 6803
CauchoConfigCacheDirectory /tmp
CauchoStatus yes
2.5 Edit Resin's cluster and related Java configuration

file: /usr/local/resin-web/conf/resin.conf
<cluster>
  <server id="10.199.55.1" address="10.199.55.1" port="6802">
  ...
</cluster>

file: /usr/local/resin-wap/conf/resin.conf
<cluster>
  <server id="10.199.55.1" address="10.199.55.1" port="6803">
  ...
</cluster>
2.6 Start apache and resin
apache2
/usr/local/apache2/bin/httpd -f ./conf/httpdweb.conf
/usr/local/apache2/bin/httpd -f ./conf/httpdwap.conf
resin
/usr/local/resin-web/bin/httpd.sh -server 10.199.55.1 start
/usr/local/resin-wap/bin/httpd.sh -server 10.199.55.1 start
3. wap-srv

3.1 Install apache2 and resin-3.1.0

Install apache2.

Install resin-3.1.0:
wget http://www.caucho.com/download/resin-3.1.0.tar.gz
cp resin-3.1.0.tar.gz /usr/local/
cd /usr/local
tar -xzvf resin-3.1.0.tar.gz
cd resin-3.1.0
./configure --with-apache=/usr/local/apache2
make
make install
3.2 Prepare configuration files and directories

cp /usr/local/apache2/conf/httpd.conf /usr/local/apache2/conf/httpdweb.conf
cp /usr/local/apache2/conf/httpd.conf /usr/local/apache2/conf/httpdwap.conf
mkdir /usr/local/apache2/htdocs/web
mkdir /usr/local/apache2/htdocs/wap
cp -r /usr/local/resin-3.1.0 /usr/local/resin-web
cp -r /usr/local/resin-3.1.0 /usr/local/resin-wap
3.3 Edit httpdweb.conf: set the listen port (8080) and the web document root

file: /usr/local/apache2/conf/httpdweb.conf
Port and directory settings omitted.
ResinConfigServer 10.199.55.3 6802
ResinConfigServer 10.199.55.1 6802
CauchoConfigCacheDirectory /tmp
CauchoStatus yes
3.4 Edit httpdwap.conf: set the listen port (80) and the wap document root

file: /usr/local/apache2/conf/httpdwap.conf
Port and directory settings omitted.
ResinConfigServer 10.199.55.3 6803
ResinConfigServer 10.199.55.1 6803
CauchoConfigCacheDirectory /tmp
CauchoStatus yes
3.5 Edit Resin's cluster and related Java configuration

file: /usr/local/resin-web/conf/resin.conf
<cluster>
  <server id="10.199.55.3" address="10.199.55.3" port="6802">
  ...
</cluster>

file: /usr/local/resin-wap/conf/resin.conf
<cluster>
  <server id="10.199.55.3" address="10.199.55.3" port="6803">
  ...
</cluster>
3.6 Start apache and resin
apache2
/usr/local/apache2/bin/httpd -f ./conf/httpdweb.conf
/usr/local/apache2/bin/httpd -f ./conf/httpdwap.conf
resin
/usr/local/resin-web/bin/httpd.sh -server 10.199.55.3 start
/usr/local/resin-wap/bin/httpd.sh -server 10.199.55.3 start
4. Firewall configuration

Since web-srv and wap-srv only provide www service, we only need port mappings for these two internal/external IP address pairs.
static (dmz,outside) tcp 121.183.173.225 www 10.199.55.1 www netmask 255.255.255.255
static (dmz,outside) tcp 121.183.173.226 www 10.199.55.3 www netmask 255.255.255.255
5. Cold standby

5.1 When web-srv goes down:

5.1.1 Update the firewall configuration
no static (dmz,outside) tcp 121.183.173.225 www 10.199.55.1 www netmask 255.255.255.255
static (dmz,outside) tcp 121.183.173.225 www 10.199.55.3 8080 netmask 255.255.255.255
5.1.2 Start the second apache process on wap-srv, listening on port 8080 and serving web:

/usr/local/apache2/bin/httpd -f ./conf/httpdweb.conf

User requests to http://web.xxxx.com now reach 10.199.55.3:8080.

5.2 When wap-srv goes down:

5.2.1 Update the firewall configuration
no static (dmz,outside) tcp 121.183.173.226 www 10.199.55.3 www netmask 255.255.255.255
static (dmz,outside) tcp 121.183.173.226 www 10.199.55.1 8080 netmask 255.255.255.255
5.2.2 Start the second apache process on web-srv, listening on port 8080 and serving wap:
/usr/local/apache2/bin/httpd -f ./conf/httpdwap.conf
User requests to http://wap.xxxx.com now reach 10.199.55.1:8080.
<!-- define the servers in the cluster --> <server id="a" address="127.0.0.1" port="6800"/> <server id="b" address="127.0.0.1" port="6801"/>
> D:\resin-pro-3.1.3\httpd.exe -conf conf/resin-web.conf -server web-a

<%System.out.println("aaaaaaaaaaaa");%>

Resin Threads
Resin will automatically allocate and free threads as the load requires. Since the threads are pooled, Resin can reuse old threads without the performance penalty of creating and destroying the threads. When the load drops, Resin will slowly decrease the number of threads in the pool until is matches the load.
Most users can set thread-max to something large (200 or greater) and then forget about the threading. Some ISPs dedicate a JVM per user and have many JVMs on the same machine. In that case, it may make sense to reduce thread-max to throttle the requests.

Since each servlet request gets its own thread, thread-max determines the maximum number of concurrent users. So if you have a peak of 100 users with slow modems downloading a large file, you'll need a thread-max of at least 100. The number of concurrent users is unrelated to the number of active sessions. Unless the user is actively downloading, he doesn't need a thread (except for "keepalives").
Keepalives
Keepalives make HTTP and srun requests more efficient. Connecting to a TCP server is relatively expensive. The client and server need to send several packets back and forth to establish the connection before the first data can go through. HTTP/1.1 introduced a protocol to keep the connection open for more requests. The srun protocol between Resin and the web server plugin also uses keepalives. By keeping the connection open for following requests, Resin can improve performance.
<resin ...>
  <thread-pool>
    <thread-max>250</thread-max>
  </thread-pool>
  <server>
    <keepalive-max>500</keepalive-max>
    <keepalive-timeout>120s</keepalive-timeout>
    ...
Requests and keepalive connections can only be idle for a limited time before Resin closes them. Each connection has a read timeout, read-timeout. If the client doesn't send a request within the timeout, Resin will close the TCP socket. The timeout prevents idle clients from hogging Resin resources.
...
<thread-pool>
  <thread-max>250</thread-max>
</thread-pool>
<server>
  <http port="8080" read-timeout="30s" write-timeout="30s"/>
  ...
...
<thread-max>250</thread-max>
<server>
  <cluster>
    <client-live-time>20s</client-live-time>
    <srun id="a" port="6802" read-timeout="30s"/>
  </cluster>
  ...
In general, the read-timeout and keepalives are less important for Resin standalone configurations than Apache/IIS/srun configurations. Very heavy traffic sites may want to reduce the timeout for Resin standalone.
Since read-timeout will close srun connections, its setting needs to take into consideration the client-live-time setting for mod_caucho or isapi_srun. client-live-time is the time the plugin will keep a connection open. read-timeout must always be larger than client-live-time, otherwise the plugin will try to reuse a closed socket.

The web server plugin, mod_caucho, needs configuration for its keepalive handling because requests are handled differently in the web server. Until the web server sends a request to Resin, it can't tell if Resin has closed the other end of the socket. If the JVM has restarted, or if Resin closed the socket because of read-timeout, mod_caucho will not know about the closed socket. So mod_caucho needs to know how long to consider a connection reusable before closing it: client-live-time tells the plugin how long it should consider a socket usable.

Because the plugin isn't signalled when Resin closes the socket, the socket will remain half-closed until the next web server request. A netstat will show that as a bunch of sockets in the FIN_WAIT_2 state. With Apache, there doesn't appear to be a good way around this. If these become a problem, you can increase read-timeout and keepalive-timeout so the JVM won't close the keepalive connections as fast.
unix> netstat
...
localhost.32823 localhost.6802  32768 0 32768 0 CLOSE_WAIT
localhost.6802  localhost.32823 32768 0 32768 0 FIN_WAIT_2
localhost.32824 localhost.6802  32768 0 32768 0 CLOSE_WAIT
localhost.6802  localhost.32824 32768 0 32768 0 FIN_WAIT_2
...
A client and a server that open a large number of TCP connections can run into operating system/TCP limits. If mod_caucho isn't configured properly, it can use too many connections to Resin. When the limit is reached, mod_caucho will report "can't connect" errors until a timeout is reached. Load testing or benchmarking can run into the same limits, causing apparent connection failures even though the Resin process is running fine.
The TCP limit is the TIME_WAIT timeout. When a TCP socket closes, the side starting the close puts the socket into the TIME_WAIT state. A netstat will show the sockets sitting in the TIME_WAIT state. The following shows an example of the TIME_WAIT sockets generated while benchmarking. Each client connection has a unique ephemeral port and the server always uses its public port:
unix> netstat
...
tcp 0 0 localhost:25033 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25032 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25031 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25030 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25029 localhost:8080 TIME_WAIT
tcp 0 0 localhost:25028 localhost:8080 TIME_WAIT
...
The socket will remain in the TIME_WAIT state for a system-dependent time, generally 120 seconds, but usually configurable. Since there are fewer than 32k ephemeral ports available to the client, the client will eventually run out and start seeing connection failures. On some operating systems, including RedHat Linux, the default limit is only 4k sockets. The full 32k sockets with a 120 second timeout limits the number of connections to about 250 connections per second.
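The "about 250 connections per second" figure is just the ephemeral-port pool divided by the TIME_WAIT holding period. A quick check of the arithmetic (the 32k-port and 120-second figures come from the text; real port-range limits are OS-dependent):

```java
public class TimeWaitBudget {
    // With roughly 32768 ephemeral ports and each closed socket held in
    // TIME_WAIT for 120 seconds, the sustainable rate of brand-new
    // connections is ports / timeout.
    public static int maxConnectionsPerSecond(int ephemeralPorts, int timeWaitSeconds) {
        return ephemeralPorts / timeWaitSeconds;
    }

    public static void main(String[] args) {
        // 32768 / 120 = 273, i.e. on the order of the ~250/s quoted above
        System.out.println(maxConnectionsPerSecond(32768, 120));
    }
}
```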
If mod_caucho or isapi_srun are misconfigured, they can use too many connections and run into the TIME_WAIT limits. Using keepalives effectively avoids this problem. Since keepalive connections are reused, they won't go into the TIME_WAIT state until they're finally closed. A site can maximize the keepalives by setting keepalive-max large and setting keepalive-timeout and client-live-time to large values. keepalive-max limits the maximum number of keepalive connections; keepalive-timeout and client-live-time configure how long a connection will be reused.

...
<thread-pool>
  <thread-max>250</thread-max>
</thread-pool>
<server>
  <keepalive-max>250</keepalive-max>
  <keepalive-timeout>120s</keepalive-timeout>
  <cluster>
    <client-live-time>120s</client-live-time>
    <srun id="a" port="6802" read-timeout="120s"/>
  </cluster>
  ...

read-timeout must always be larger than client-live-time. In addition, keepalive-max should be larger than the maximum number of Apache processes.
Using Apache as a web server on Unix introduces a number of issues because Apache uses a process model instead of a threading model. The Apache processes don't share the keepalive srun connections. Each process has its own connection to Resin. In contrast, IIS uses a threaded model so it can share Resin connections between the threads. The Apache process model means Apache needs more connections to Resin than a threaded model would.
In other words, the keepalive and TIME_WAIT issues mentioned above are particularly important for Apache web servers. It's a good idea to use netstat to check that a loaded Apache web server isn't running out of keepalive connections and running into TIME_WAIT problems.
First set the thread-min, thread-max, and thread-keepalive parameters in resin.conf to fairly large values (1000 or more each; the right figures depend on your machine and on how many simultaneous visitors you expect — for a heavily visited site, raise them further).

Then watch the Java thread count in Task Manager and note at what thread count the Java process dies. On my machine it died at around 179.

Then set thread-min, thread-max, and thread-keepalive to roughly 150, 200, and 200: take the thread count at which the crash occurred and round it up a little for thread-max, since the system will generally not exceed that value, and set the other two parameters to suit your situation.

These are only my estimates; they will differ with machine performance and traffic.

Then set accept-buffer-size to a large value — I set it above 10000 — so that Java can use more memory resources.

With these settings Resin runs normally and crashes far less often; this setup suits a medium-sized site.
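As a resin.conf sketch of the settings just described (element names follow Resin 2.x conventions; the values are only the illustrative figures from the text, not recommendations — measure your own crash point first):

```xml
<!-- sketch only: illustrative values from the tuning procedure above -->
<thread-min>150</thread-min>
<thread-max>200</thread-max>
<thread-keepalive>200</thread-keepalive>
<accept-buffer-size>10000</accept-buffer-size>
```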
Resin optimization:
The allocation of memory for the JVM is specified using -X options when starting Resin
(the exact options may depend upon the JVM that you are using, the examples here are for the Sun JVM).
JVM option passed to Resin | Meaning
-Xms | initial java heap size
-Xmx | maximum java heap size
-Xmn | the size of the heap for the young generation

Resin startup with heap memory options:
unix> bin/httpd.sh -Xmn100M -Xms500M -Xmx500M
win> bin/httpd.exe -Xmn100M -Xms500M -Xmx500M
install win service> bin/httpd.exe -Xmn100M -Xms500M -Xmx500M -install
Source: http://www.caucho.com/resin-3.0/performance/jvm-tuning.xtp
JVM tuning:
java -Xms<size>
set initial Java heap size. default:Xms32m
java -Xmx<size>
set maximum Java heap size. default:Xmx128m
Set them like this:

java -Xms32m -Xmx256m

If the problem persists, increase Xmx beyond 256m (512m, for example).
-J-mx<num>
Resin 2.1's startup is controlled by the wrapper.pl file in the bin directory; we can edit this file to add parameters, for example the Java -Xms and -Xmx options.

Run:

vi /usr/local/resin-2.1/bin/wrapper.pl

Find and change the following line to:

$JAVA_ARGS="-Xms512m -Xmx512m";

Adjust the values to suit your own application.
Log settings: name selects which part of the system to log for debugging and can be set in several ways; level takes the usual logging levels. path points the output to a file, either in the form path='stdout:' (note the trailing colon) or as an absolute path such as path='/usr/local/resin-3.0.7/log/stdout.log'. Log files are commonly rotated weekly or monthly with rollover-period='1M' or rollover-period='7D'; when a period completes, the system automatically starts a new log file, named like stderr.log.20041201, stderr.log.20041208.
When the application code is not changing, disabling Java auto-compilation speeds things up: take

<compiling-loader path="webapps/WEB-INF/classes" />

and add the attribute

batch="false"
$JAVA_ARGS="-server";

The Java -server option is said to make native (JIT) compilation more thorough.
<!-- Resin 3.0 configuration file. -->
<resin xmlns="http://caucho.com/ns/resin" xmlns:resin="http://caucho.com/ns/resin/core">
  <!-- Logging configuration for the JDK logging API. -->
  <log name="" level="all" path="stdout:" timestamp="[%H:%M:%S.%s] "/>
  <logger name="com.caucho.java" level="config"/>
  <logger name="com.caucho.loader" level="config"/>
  <dependency-check-interval>600s</dependency-check-interval>
  <javac compiler="internal" args=""/>
  <thread-pool>
    <thread-max>10240</thread-max>
    <spare-thread-min>50</spare-thread-min>
  </thread-pool>
  <min-free-memory>5M</min-free-memory>
  <server>
    <class-loader>
      <tree-loader path="${resin.home}/lib"/>
      <tree-loader path="${server.root}/lib"/>
    </class-loader>
    <keepalive-max>1024</keepalive-max>
    <keepalive-timeout>60s</keepalive-timeout>
    <resin:if test="${resin.isProfessional()}">
      <select-manager enable="true"/>
    </resin:if>
    <bind-ports-after-start/>
    <http server-id="" host="*" port="80"/>
    <cluster>
      <srun server-id="" host="127.0.0.1" port="6802"/>
    </cluster>
    <resin:if test="${resin.isProfessional()}">
      <persistent-store type="cluster">
        <init path="session"/>
      </persistent-store>
    </resin:if>
    <ignore-client-disconnect>true</ignore-client-disconnect>
    <resin:if test="${isResinProfessional}">
      <cache path="cache" memory-size="20M"/>
    </resin:if>
    <web-app-default>
      <class-loader>
        <tree-loader path="${server.root}/ext-webapp"/>
      </class-loader>
      <cache-mapping url-pattern="/" expires="60s"/>
      <cache-mapping url-pattern="*.gif" expires="600s"/>
      <cache-mapping url-pattern="*.jpg" expires="600s"/>
      <servlet servlet-name="directory" servlet-class="com.caucho.servlets.DirectoryServlet">
        <init enable="false"/>
      </servlet>
      <allow-servlet-el/>
      <session-config>
        <enable-url-rewriting>false</enable-url-rewriting>
      </session-config>
    </web-app-default>
    <host-default>
      <class-loader>
        <compiling-loader path="webapps/WEB-INF/classes"/>
        <library-loader path="webapps/WEB-INF/lib"/>
      </class-loader>
      <!--access-log path="logs/access.log" format='%h %l %u %t "%r" %s %b "%{Referer}i" "%{User-Agent}i"' rollover-period="1W"/-->
      <web-app-deploy path="webapps"/>
      <ear-deploy path="deploy">
        <ear-default>
          <!-- Configure this for the ejb server
            - <ejb-server>
            -   <config-directory>WEB-INF</config-directory>
            -   <data-source>jdbc/test</data-source>
            - </ejb-server>
          -->
        </ear-default>
      </ear-deploy>
      <resource-deploy path="deploy"/>
      <web-app-deploy path="deploy"/>
    </host-default>
    <resin:import path="${resin.home}/conf/app-default.xml"/>
    <host-deploy path="hosts">
      <host-default>
        <resin:import path="host.xml" optional="true"/>
      </host-default>
    </host-deploy>
    <host id="" root-directory=".">
      <web-app id="/" document-directory="d:\website\chat">
      </web-app>
    </host>
  </server>
</resin>
<servlet-mapping servlet-class='com.caucho.servlets.ResinStatusServlet'>
  <url-pattern>/resin-status</url-pattern>
  <init enable="read"/>
</servlet-mapping>
import java.io.InputStream;
import java.net.URL;

public class TestURL {
    public static void main(String[] args) throws Exception {
        long a = System.currentTimeMillis();
        System.out.println("Starting request url:");
        for (int i = 0; i < 10000; i++) {
            URL url = new URL("http://192.168.1.200/main.jsp");
            InputStream is = url.openStream();
            is.close();
            System.out.println("Starting request url:" + i);
        }
        System.out.println("request url end.take " + (System.currentTimeMillis() - a) + "ms");
    }
}
// default timeout
private long _timeout = 65000L;
private int _connectionMax = 512; // hard-coded: a search of all the Resin source finds no setting for this value
private int _minSpareConnection = 16;
private int _keepaliveMax = -1;
private int _minSpareListen = 5;
private int _maxSpareListen = 10;
1. Installation

1) Install the JDK
2) Unzip resin-3.0.x.zip
3) Run resin-3.0.x/httpd.exe
4) Open http://localhost:8080 to view the test page

If everything works, the window shows output like:

C:\win32> resin-3.0.0\bin\httpd
Resin 3.0.0-beta (built Thu Feb 13 18:21:13 PST 2003)
Copyright(c) 1998-2002 Caucho Technology. All rights reserved.
Starting Resin on Sat, 01 Mar 2003 19:11:52 -0500 (EST)
[19:11:56.479] ServletServer[] starting
[19:11:57.000] Host[] starting
[19:11:58.312] Application[http://localhost:8080/doc] starting
[19:12:11.872] Application[http://localhost:8080/quercus] starting
...
[19:12:12.803] http listening to *:8080
[19:12:12.933] hmux listening to *:6802
2. Configuration

Deploying as a Windows service:

The Resin web server can be installed as a Windows service. To install it, use:

C:\> resin-3.0.x\bin\httpd -install -conf conf/myconf.conf

Resin will then start automatically as a service when the machine boots.

To remove the service:

C:\> resin-3.0.x\bin\httpd -remove

You can also start and stop the Resin service with:

C:\> net start resin
...
C:\> net stop resin

Configuring multiple services:

Use the -install-as foo parameter to give each service a specific name:

C:\> resin-3.0.x\bin\httpd -install-as ResinA -conf conf/myconf.conf -server a
C:\> net start ResinA

Note:

Some JDKs have a bug where the service shuts down when the administrator logs off; the workaround is to install with the -Xrs parameter:

C:\> resin3.0.0/httpd.exe -install -Xrs
2. Installing and configuring Resin on Linux:

1. Installation

1) Install JDK 1.4
2) Make sure the JAVA_HOME environment variable is set correctly
3) Install

Standalone:

# tar zxvf resin-3.0.4.tar.gz
# mv resin-3.0.4 /usr/local/resin
# cd /usr/local/resin
# ./configure
# make
# make install
# cd bin
# ./httpd.sh start

Automatic startup:

Add the following line to /etc/rc.d/rc.local:

/usr/local/resin/bin/httpd.sh start

Integrating with Apache:
1) Install Apache

# tar zxvf httpd-2.49.tar.gz
# cd httpd-2.49
# ./configure --prefix=/usr/local/httpd --enable-modules=so --enable-so

--prefix specifies the installation directory.
--enable-modules specifies the extension module types the system may use; here we specify the so type.
--enable-so allows the use of DSO (Dynamic Shared Objects).

# make
# make install

Set Apache to start automatically:

Add Apache's startup command, apachectl, to rc.local:

/usr/local/httpd/bin/apachectl start
2) Install Resin

# tar zxvf resin-3.0.4.tar.gz
# cd resin-3.0.4
# ./configure --prefix=/usr/local/resin --with-apache=/usr/local/httpd
# make
# make install

This generates the .so file that connects Resin to Apache 2, located at $APACHE_HOME/modules/mod_caucho.so,
and conf/httpd.conf gains the lines:

LoadModule caucho_module modules/mod_caucho.so
ResinConfigServer localhost 6802

Start Resin first, then Apache.

Visit http://hostname/caucho-status to see Resin's status page.
2. Configuration

1) Single-server configuration of Resin on Linux

For a single server, once installation is complete you only need to configure Resin's resin.conf and app_default.xml files. Two things need configuring in resin.conf: the port number and the directory holding the web application; app_default.xml configures the search order for default index pages. Since the original site uses several ports, a single server is only suitable for testing a single site.

2) Multi-server configuration of Resin on Linux, with multiple instances started at boot

Sometimes you need to run several servers on the same machine, listening on multiple ports; a standalone Resin must then run multiple instances, listening on multiple ports, to deploy multiple web sites.

There are two ways to configure this:

Method one:

This method passes httpd.sh a configuration file and a runtime pid file on each invocation, so several instances can run side by side.

The command line looks like this:

$RESIN_HOME/bin/httpd.sh -conf conf/resin1.conf -pid resin1.pid start

Explanation:

-conf selects the configuration file for this server instance; in it you configure a distinct port and root directory.
-pid names the process id (pid) file.
start starts the server.

With this command, after creating several server configuration files you can start several server instances by hand.

Adding these commands to /etc/rc.d/rc.local makes them start at boot.
Method two:

This method uses the chkconfig command to make multiple servers start automatically, running in the background at the chosen runlevels, and manageable as services from the graphical interface.

After setting the JAVA_HOME environment variable, unpack Resin into /home/resin and run the build:

#tar zxf resin-version.tar.gz
#mv resin-version /home/resin
#cd /home/resin/
#./configure
#make
#make install

Copy $RESIN_HOME/contrib/init.resin (generated by make install) into /etc/rc.d/init.d/ and rename it resinx.

Edit three places in the resin file:

(1) Java environment settings

Find the following lines:

JAVA_HOME=/usr/java
RESIN_HOME=/usr/local/resin

and change them to the appropriate directories: the first is the JDK installation directory, the second the Resin installation directory.

(2) PID=$RESIN_HOME/resin.pid

Change it to:

PID=$RESIN_HOME/resin1.pid (obviously, each instance needs its own pid file)
Q?/span>3Q、找到程序段
start)
echo -n "Starting resin: "
if test -n "$USER"; then
su $USER -c "$EXE -pid $PID start $ARGS"
else
$EXE -pid $PID start $ARGS
fi
echo
;;
Change it to:
start)
echo -n "Starting resin: "
if test -n "$USER"; then
su $USER -c "$EXE -conf $RESIN_HOME/conf/resin1.conf -pid $PID start $ARGS"
else
$EXE -conf $RESIN_HOME/conf/resin1.conf -pid $PID start $ARGS
fi
echo
;;
This is the same as in method one; it just runs the command from the script.

Finally, chmod +x resin1.

Repeat the above to create resin2, resin3, ...:

chmod +x resin1
chmod +x resin2
......

The commands are:

cp contrib/init.resin /etc/rc.d/init.d/resin1
vi /etc/rc.d/init.d/resin1
i
:wq
chmod +x /etc/rc.d/init.d/resin1

Create one such file (resinx) for each server instance you want to run.

Edit the settings in each resin script: JAVA_HOME, RESIN_HOME, USER, etc.

Make the resin services start automatically at the desired runlevels:

#/sbin/chkconfig resin1 reset
#/sbin/chkconfig resin2 reset
......

Create the different configuration files; each configuration file must differ in three places:
(1) The srun (load-balancing) port numbers must differ; using different IP addresses also works.

In the original file this is the section:

<cluster>
  <srun server-id="" host="127.0.0.1" port="6802"/>
</cluster>

(2) The default document directories must differ — otherwise multiple instances are pointless, and you might as well use load balancing to improve performance:

<web-app id="/" document-directory="webapps/ROOT"/>

(3) The server port numbers must differ:

<http server-id="" host="*" port="8080"/>

With that, Resin will run multiple instances in the background at system startup, with the same effect as method one.
JSTL 1.0 requires Servlet 2.3 and JSP 1.2.
JSTL 1.1 requires Servlet 2.4 and JSP 2.0.

Starting with version 2.1.2, Resin ships its own implementation of the JSTL core and fmt taglibs.

Using Resin's bundled JSTL

No JAR or TLD files need copying, and no web.xml configuration is needed.

Just reference the taglibs in the page — note the URI difference from standard JSTL 1.1:

<%@ taglib uri="http://java.sun.com/jstl/core" prefix="c" %>
<%@ taglib uri="http://java.sun.com/jstl/fmt" prefix="fmt" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/functions" prefix="fn" %>

It feels noticeably faster.

To disable the bundled JSTL, configure it in Resin's configuration file:

...
Using standard JSTL 1.1

Copy the JAR packages into WEB-INF/lib; no TLD files to copy and no web.xml configuration.

Reference in the page like this:

<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/fmt" prefix="fmt" %>

Using standard JSTL 1.0

Copy the JAR packages into WEB-INF/lib, along with the TLD files you need.

Configure web.xml:

<taglib>
  <taglib-uri>jstl-c</taglib-uri>
  <taglib-location>/WEB-INF/tld/c.tld</taglib-location>
</taglib>
<taglib>
  <taglib-uri>jstl-fmt</taglib-uri>
  <taglib-location>/WEB-INF/tld/fmt.tld</taglib-location>
</taglib>

Reference the taglibs in the page accordingly.

If you leave Resin's bundled JSTL enabled and then configure standard JSTL 1.0 yourself in Resin 2.1.16, fmt may stop working.
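As a quick illustration of the taglib directives discussed above, a hedged JSP sketch (the request attribute `items` is a hypothetical example set by a servlet):

```jsp
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/fmt" prefix="fmt" %>
<%-- iterate over a hypothetical "items" request attribute --%>
<c:forEach var="item" items="${items}">
  <c:out value="${item}"/>
</c:forEach>
<%-- format a number to two fraction digits --%>
<fmt:formatNumber value="3.14159" maxFractionDigits="2"/>
```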
<!-- Archive the log daily, compressed as .gz to save space; to archive weekly instead, change "1D" to "1W" -->
<log name="" level="info" path="log/stdout.log" timestamp="[%H:%M:%S.%s] "
archive-format="stdout.log.%Y-%m-%d.gz"
rollover-period="1D"/>
<!--
- For production sites, change dependency-check-interval to something
- like 600s, so it only checks for updates every 10 minutes.
-->
<dependency-check-interval>2s</dependency-check-interval>
<!--
- You can change the compiler to "javac" or jikes.
- The default is "internal" only because it's the most
- likely to be available.
-->
<javac compiler="internal" args=""/>
<!-- Security providers.
- <security-provider>
- com.sun.net.ssl.internal.ssl.Provider
- </security-provider>
-->
<!--
- If starting bin/resin as root on Unix, specify the user name
- and group name for the web server user.
-
- <user-name>resin</user-name>
- <group-name>resin</group-name>
-->
<!--
- Configures threads shared among all HTTP and SRUN ports.
-->
<thread-pool>
<!-- Maximum number of threads. -->
<thread-max>1024</thread-max>
<!-- Minimum number of spare connection threads. -->
<spare-thread-min>25</spare-thread-min>
</thread-pool>
<!--
- Configures the minimum free memory allowed before Resin
- will force a restart.
-->
<min-free-memory>1M</min-free-memory>
<server>
<!-- adds all .jar files under the resin/lib directory -->
<class-loader>
<tree-loader path="${resin.home}/lib"/>
<tree-loader path="${server.root}/lib"/>
</class-loader>
<!-- Configures the keepalive -->
<keepalive-max>500</keepalive-max>
<keepalive-timeout>120s</keepalive-timeout>
<resin:if test="${resin.isProfessional()}">
<select-manager enable="true"/>
</resin:if>
<!-- listen to the http ports only after the server has started. -->
<bind-ports-after-start/>
<!-- The http port -->
<http server-id="" host="*" port="6060"/>
<!--
- SSL port configuration:
-
- <http port="8443">
- <openssl>
- <certificate-file>keys/gryffindor.crt</certificate-file>
- <certificate-key-file>keys/gryffindor.key</certificate-key-file>
- <password>test123</password>
- </openssl>
- </http>
-->
<!--
- The local cluster, used for load balancing and distributed
- backup.
-->
<!--cluster>
<srun server-id="" host="127.0.0.1" port="6806"/>
</cluster-->
<!--
- Configures the persistent store for single-server or clustered
- in Resin professional.
-->
<resin:if test="${resin.isProfessional()}">
<persistent-store type="cluster">
<init path="session"/>
</persistent-store>
</resin:if>
<!--
- Enables/disables exceptions when the browser closes a connection.
-->
<ignore-client-disconnect>true</ignore-client-disconnect>
<!--
- For security, use a different cookie for SSL sessions.
- <ssl-session-cookie>SSL_JSESSIONID</ssl-session-cookie>
-->
<!--
- Enables the cache (available in Resin Professional)
-->
<resin:if test="${isResinProfessional}">
<cache path="cache" memory-size="8M"/>
</resin:if>
<!--
- Enables periodic checking of the server status.
-
- With JDK 1.5, this will ask the JDK to check for deadlocks.
- All servers can add <url>s to be checked.
-->
<resin:if test="${isResinProfessional}">
<ping>
<!-- <url>http://localhost:8080/test-ping.jsp</url> -->
</ping>
</resin:if>
<!--
- Defaults applied to each web-app.
-->
<web-app-default>
<!--
- Extension library for common jar files. The ext is safe
- even for non-classloader aware jars. The loaded classes
- will be loaded separately for each web-app, i.e. the class
- itself will be distinct.
-->
<class-loader>
<tree-loader path="${server.root}/ext-webapp"/>
</class-loader>
<!--
- Sets timeout values for cacheable pages, e.g. static pages.
-->
<cache-mapping url-pattern="/" expires="5s"/>
<cache-mapping url-pattern="*.gif" expires="60s"/>
<cache-mapping url-pattern="*.jpg" expires="60s"/>
<!--
- Servlet to use for directory display.
-->
<servlet servlet-name="directory"
servlet-class="com.caucho.servlets.DirectoryServlet"/>
<!--
- Enable EL expressions in Servlet and Filter init-param
-->
<allow-servlet-el/>
<!--
- Session configuration. Note: url-rewriting is enabled here even
- though the stock resin.conf disables it by default for security.
-->
<session-config>
<enable-url-rewriting>true</enable-url-rewriting>
<save-only-on-shutdown>true</save-only-on-shutdown>
<file-store>${resin.home}/webapps</file-store>
<ignore-serialization-errors>true</ignore-serialization-errors>
</session-config>
<!--
- For security, set the HttpOnly flag in cookies.
- <cookie-http-only/>
-->
</web-app-default>
<!--
- Sample database pool configuration
-
- The JDBC name is java:comp/env/jdbc/test
<database>
<jndi-name>jdbc/mysql</jndi-name>
<driver type="org.gjt.mm.mysql.Driver">
<url>jdbc:mysql://localhost:3306/test</url>
<user></user>
<password></password>
</driver>
<prepared-statement-cache-size>8</prepared-statement-cache-size>
<max-connections>20</max-connections>
<max-idle-time>30s</max-idle-time>
</database>
-->
<!--
- Default host configuration applied to all virtual hosts.
-->
<host-default>
<class-loader>
<compiling-loader path="webapps-/WEB-INF/classes"/>
<library-loader path="webapps-/WEB-INF/lib"/>
</class-loader>
<!--
- With another web server, like Apache, this can be commented out
- because the web server will log this information.
-->
<access-log path="logs/access.log"
archive-format="stdout.log.%Y-%m-%d.gz"
rollover-period="1D"/>
<!-- creates the webapps directory for .war expansion -->
<web-app-deploy path="webapps-"/>
<!-- creates the deploy directory for .ear expansion -->
<ear-deploy path="deploy">
<ear-default>
<!-- Configure this for the ejb server
-
- <ejb-server>
- <config-directory>WEB-INF</config-directory>
- <data-source>jdbc/test</data-source>
- </ejb-server>
-->
</ear-default>
</ear-deploy>
<!-- creates the deploy directory for .rar expansion -->
<resource-deploy path="deploy"/>
<!-- creates a second deploy directory for .war expansion -->
<web-app-deploy path="deploy"/>
</host-default>
<!-- includes the web-app-default for default web-app behavior -->
<resin:import path="${resin.home}/conf/app-default.xml"/>
<!-- configures a deployment directory for virtual hosts -->
<host-deploy path="hosts">
<host-default>
<resin:import path="host.xml" optional="true"/>
</host-default>
</host-deploy>
<!-- configures the default host, matching any host name -->
<host id="" root-directory=".">
<!--
- configures an explicit root web-app matching the
- webapp's ROOT
-->
<!--web-app id="/" document-directory="webapps/ROOT"/-->
<!--
- Administration application /resin-admin
-
- password is the md5 hash of the password.
- localhost is true to limit access to the localhost
-->
<resin:set var="resin_admin_password" default=""/>
<resin:set var="resin_admin_localhost" default="true"/>
<!--web-app id="/resin-admin" document-directory="${resin.home}/php/admin"/-->
<web-app id="/" document-directory="webapps/job">
</web-app>
<!--web-app id="resin-doc" document-directory="webapps/resin-doc"/-->
</host>
</server>
</resin>
The load-balancing plan:
One machine (running Windows 2003) hosts Apache as the load balancer and also a Tomcat instance as the first worker; a second machine runs a standalone Tomcat as the second worker; the remaining machine is a dedicated database server.
Load balancing between Apache and Tomcat uses JK 1.2.14 (we did not adopt JK 2.0, since it is no longer maintained).
Clustering plan:
Use Tomcat's own clustering, configured in server.xml.
Load-testing problems:
Load testing turned up a number of problems, listed one by one below:
(1) With Tomcat clustering enabled, the site became very slow, because clustering requires session replication. Tomcat's built-in replication currently covers sessions only, not application scope. Replication mainly provides fault tolerance: if one machine fails, Apache automatically redirects requests to the other. Weighing fault tolerance against speed, we ultimately chose speed and dropped Tomcat clustering.
(2) The operating system's limit on concurrent users:
To gauge the site's capacity, we began by load-testing a single Tomcat, whose host ran Windows 2000 Professional. With the load-testing tool, anything beyond 15 concurrent users frequently failed to connect to the server. Investigation showed the operating system was the culprit: Windows 2000 Professional caps concurrent access, apparently at 15 by default. We therefore switched every machine to Windows 2003 Server.
(3) Database connection pool problems:
试数据库连接性能Ӟ发现数据库连接速度很慢。每增加一些用Pq接性能差了很多。我们采用的数据库连接池是DBCPQ默认的初始化ؓ50个,应该不会很慢吧。查询数据库的连接数Q发现初始化Q只初始化一个连接。ƈ发增加一个用hQ程序就会重新创Z个连接,Dq接很慢。原因就在这里了。如何解军_Q偶在JDK1.4下的Tomcat5.0.30下执行数据库q接压力试Q发现速度很快Q程序创建数据库q接的速度也是很快的。看来JDK1.5的JDBC驱动E序有问题。于是我们修?JDK的版本ؓ1.4.
(4) C3P0 vs. DBCP
C3P0 is the connection pool bundled with Hibernate 3.0 by default; DBCP is the connection pool developed by Apache. We load-tested both: below 300 concurrent users, DBCP averaged about one second faster than C3P0; beyond that level of concurrency the two performed about the same.
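A comparison like the one above can be reproduced with a very small harness. Below is a hedged, self-contained sketch (written for a modern JDK, not the JDK 1.4 used in the article) that fires N simultaneous invocations of an arbitrary action and reports the mean latency; the Callable stands in for a real dataSource.getConnection(), and all names here are illustrative:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class PoolTimer {

    /** Runs `action` once per simulated user, all starting at the same
     *  instant, and returns the mean per-call latency in milliseconds. */
    public static double meanLatencyMillis(int users, final Callable<?> action)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        final CountDownLatch start = new CountDownLatch(1);
        final CountDownLatch done = new CountDownLatch(users);
        final AtomicLong totalNanos = new AtomicLong();
        for (int i = 0; i < users; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        start.await();               // all "users" fire together
                        long t0 = System.nanoTime();
                        action.call();               // e.g. dataSource.getConnection()
                        totalNanos.addAndGet(System.nanoTime() - t0);
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    } finally {
                        done.countDown();
                    }
                }
            });
        }
        start.countDown();
        done.await();
        pool.shutdown();
        return (totalNanos.get() / (double) users) / 1e6;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in action: sleep 50 ms instead of a real getConnection().
        double mean = meanLatencyMillis(10, new Callable<Void>() {
            public Void call() throws Exception {
                Thread.sleep(50);
                return null;
            }
        });
        System.out.println("mean latency (ms): " + mean);
    }
}
```

Swapping the stand-in Callable for `pool.getConnection()` against each pool under test gives directly comparable numbers.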
Although DBCP was somewhat faster than C3P0, it had a bug: once a connection DBCP had established dropped for any reason, DBCP would not re-create it, and only restarting Tomcat cleared the problem. That bug decided it for us: we adopted C3P0 as the connection pool.
The adjusted plan:
Operating system: Windows 2003 Server
JDK1.4
Tomcat 5.0.30
Connection pool: C3P0
Load balancing only, no clustering.
Software configuration:
Apache configuration: mainly httpd.conf plus the newly added workers.properties file.
httpd.conf:
# Maximum number of requests per keep-alive connection
MaxKeepAliveRequests 10000
# On NT, this is the only parameter that can be tuned for performance
<IfModule mpm_winnt.c>
# Threads per process, at most 1920. On NT Apache runs only a parent and one child process; multiple child processes cannot be started
ThreadsPerChild 1900
# Maximum number of requests each child process may handle
MaxRequestsPerChild 10000
</IfModule>
# Load mod_jk
#
LoadModule jk_module modules/mod_jk.so
#
# Configure mod_jk
#
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info
# Request dispatching: dynamic requests (.jsp, .do, servlets) are handed to Tomcat
DocumentRoot "C:/Apache/htdocs"
JkMount /*.jsp loadbalancer
JkMount /*.do loadbalancer
JkMount /servlet/* loadbalancer
# Turn off hostname lookups; when on, they badly hurt performance (delays of 10+ seconds)
HostnameLookups Off
# Cache configuration
LoadModule cache_module modules/mod_cache.so
LoadModule disk_cache_module modules/mod_disk_cache.so
LoadModule mem_cache_module modules/mod_mem_cache.so
<IfModule mod_cache.c>
CacheForceCompletion 100
CacheDefaultExpire 3600
CacheMaxExpire 86400
CacheLastModifiedFactor 0.1
<IfModule mod_disk_cache.c>
CacheEnable disk /
CacheRoot c:/cacheroot
CacheSize 327680
CacheDirLength 4
CacheDirLevels 5
CacheGcInterval 4
</IfModule>
<IfModule mod_mem_cache.c>
CacheEnable mem /
MCacheSize 8192
MCacheMaxObjectCount 10000
MCacheMinObjectSize 1
MCacheMaxObjectSize 51200
</IfModule>
</IfModule>
The workers.properties file:
#
# workers.properties; for reference see
# http://jakarta.apache.org/tomcat/connectors-doc/config/workers.html
# In Unix, we use forward slashes:
ps=/
# list the workers by name
worker.list=tomcat1,tomcat2,loadbalancer
# ------------------------
# First tomcat server
# ------------------------
worker.tomcat1.port=8009
worker.tomcat1.host=localhost
worker.tomcat1.type=ajp13
# Specify the size of the open connection cache.
#worker.tomcat1.cachesize
#
# Specifies the load balance factor when used with
# a load balancing worker.
# Note:
# ----> lbfactor must be > 0
# ----> Low lbfactor means less work done by the worker.
worker.tomcat1.lbfactor=900
# ------------------------
# Second tomcat server
# ------------------------
worker.tomcat2.port=8009
worker.tomcat2.host=202.88.8.101
worker.tomcat2.type=ajp13
# Specify the size of the open connection cache.
#worker.tomcat2.cachesize
#
# Specifies the load balance factor when used with
# a load balancing worker.
# Note:
# ----> lbfactor must be > 0
# ----> Low lbfactor means less work done by the worker.
worker.tomcat2.lbfactor=2000
# ------------------------
# Load Balancer worker
# ------------------------
#
# The loadbalancer (type lb) worker performs weighted round-robin
# load balancing with sticky sessions.
# Note:
# ----> If a worker dies, the load balancer will check its state
# once in a while. Until then all work is redirected to peer
# worker.
worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=tomcat1,tomcat2
#
# END workers.properties
#
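Since clustering was dropped, session affinity is what keeps each user on the Tomcat that holds their session. The lb worker defaults to sticky sessions, but it can be pinned explicitly; a sketch using the property as documented for mod_jk 1.2:

```
# Sketch: make sticky sessions explicit on the load balancer worker
worker.loadbalancer.sticky_session=1
```

Stickiness relies on the jvmRoute values (tomcat1, tomcat2) set on each server's <Engine> matching the worker names, so the session cookie carries the route. Note that without replication, a worker failure still loses that worker's sessions.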
Tomcat1 configuration:
<!-- Edit server.xml:
remove the 8080 port, i.e. comment out the following code: -->
<Connector
port="8080" maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" redirectPort="8443" acceptCount="100"
debug="0" connectionTimeout="20000"
disableUploadTimeout="true" />
<!-- Configure the 8009 port as follows: -->
<Connector port="8009"
maxThreads="500" minSpareThreads="400" maxSpareThreads="450"
enableLookups="false" redirectPort="8443" debug="0"
protocol="AJP/1.3" />
<!-- Configure the engine -->
<Engine name="Catalina" defaultHost="localhost" debug="0" jvmRoute="tomcat1">
Startup memory settings (open the "Configure Tomcat" program to set them):
Initial memory pool: 200 MB
Maximum memory pool: 300 MB
Tomcat2 configuration:
Largely the same as Tomcat1; the parts that change are as follows:
<!-- Configure the engine -->
<Engine name="Catalina" defaultHost="localhost" debug="0" jvmRoute="tomcat2">
Startup memory settings (again via the "Configure Tomcat" program):
Initial memory pool: 512 MB
Maximum memory pool: 768 MB
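The memory pools above are set through the Tomcat service wrapper on Windows. If an instance is instead started from catalina.bat, the equivalent heap bounds can be passed through JAVA_OPTS; a sketch (sizes mirror the Tomcat2 settings):

```bat
rem Sketch: equivalent heap settings when launching from the command line
set JAVA_OPTS=-Xms512m -Xmx768m
catalina.bat start
```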
MySQL configuration:
Server type: Dedicated MySQL Server Machine
Database usage: Transactional Database Only
Concurrent connections: Online Transaction Processing (OLTP)
Character set: UTF8
Connection pool configuration:
We use the Spring framework; the configuration is as follows:
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop>
<prop key="hibernate.connection.driver_class">com.mysql.jdbc.Driver</prop>
<prop key="hibernate.connection.url">jdbc:mysql://202.88.1.103/db</prop>
<prop key="hibernate.connection.username">sa</prop>
<prop key="hibernate.connection.password"></prop>
<prop key="hibernate.show_sql">false</prop>
<prop key="hibernate.use_sql_comments">false</prop>
<prop key="hibernate.cglib.use_reflection_optimizer">true</prop>
<prop key="hibernate.max_fetch_depth">2</prop>
<prop key="hibernate.c3p0.max_size">200</prop>
<prop key="hibernate.c3p0.min_size">5</prop>
<prop key="hibernate.c3p0.timeout">12000</prop>
<prop key="hibernate.c3p0.max_statements">50</prop>
<prop key="hibernate.c3p0.acquire_increment">1</prop>
</props>
</property>
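One additional hibernate.c3p0 property worth knowing (not part of the configuration above; the value is an illustrative assumption): idle_test_period makes c3p0 test idle pooled connections periodically, which guards against the kind of silently dropped connections that made DBCP unusable earlier in this article:

```xml
<!-- Sketch: test idle connections every 300 s (value illustrative) -->
<prop key="hibernate.c3p0.idle_test_period">300</prop>
```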
No other special configuration was needed.
Common LoadRunner problems:
(1) "software caused connection": usually the script, or LoadRunner itself, is at fault. Fix: restart the machine or re-record the script; this is presumably a LoadRunner bug.
(2) "cannot connect to server": the server cannot be reached. This happens when the server's configuration cannot sustain so many concurrent connections; the configuration needs tuning,
如操作系l采用windows 2003 serverQ?br>优化tomcat配置QmaxThreads="500" minSpareThreads="400" maxSpareThreads="450"。但是tomcat 最多支?00个ƈ发访?br>优化apache配置Q?br>ThreadsPerChild 1900
MaxRequestsPerChild 10000
Other errors, such as:
Action.c(10): Error -27791: Server has shut down the connection prematurely
HTTP Status-Code=503 (Service Temporarily Unavailable)
are generally caused by inadequate server configuration. Handle them as in problem (2) above; if they persist, the hardware and the application itself need tuning.
Apache problems:
(1) File does not exist: C:/Apache/htdocs/favicon.ico:
Caused by the absence of favicon.ico in Apache's htdocs directory. The file is the site icon and is requested only by browsers such as Firefox and MyIE.
(2) Images fail to display:
After configuring Apache, images would not show.
Fix: copy the application's images, preserving the directory structure, into Apache's htdocs directory.
(3) Requests not processed:
When we issued a ***.do request, Apache returned an error, although connecting to Tomcat directly worked fine. The cause: .do requests were not being forwarded to Tomcat. The fix is as follows:
Add the following to the Apache configuration file:
DocumentRoot "C:/Apache/htdocs"
JkMount /*.jsp loadbalancer
JkMount /*.do loadbalancer
Summary:
Load testing a website draws on a broad range of knowledge. You must not only be familiar with the load-testing tool, but also know how to configure and optimize the application server and the database, and how to tune the network, the operating system, and the hardware.
During testing you must be good not only at finding problems but also at solving them. Most important of all is a sound testing method. Start with the simplest possible scripts rather than complex ones; simple scripts make problems easier to isolate. We began with a simple script that merely downloaded the login page, to load-test a single Tomcat; that one script helped us tune the Tomcat configuration. Later, a simple database-connection script helped us tune the connection pool. We then reused these simple scripts to test Apache's load balancing and tune the Apache configuration. Only at the end did we run complex scripts simulating users in multiple roles acting at different times, to load-test the whole site.