The discussion below uses [final int x=911] and [static final int y=912] as examples, on jdk1.6.0_16 (the version is given in this much detail because a JDK bug, described later, is involved).
The sample class:
class Test {
private final int x=911;//modifiers:final->18,non-final->2
static final private int y=912;//modifiers:final->26,non-final->10
public int getX(){
return x;
}
public static int getY(){
return y;
}
}
A final field in Java is meant to be a constant: once assigned, its value cannot change. The compiler applies the following optimization to final fields.
e.g:
Test t=new Test();
Every reference to t.x in the program is replaced by the compiler with the literal 911, and the return x in getX() is likewise compiled into return 911;
so changing the value of x at runtime appears futile: the compiler has statically compiled (inlined) those references.
The exception is Test.class.getDeclaredField("x").getInt(t), which reads the real field value.
So how can the value of the final field x be changed at runtime?
For private final int x=911; the Field.modifiers value is 18, whereas for private int x=911; it is 2.
So if we change the modifiers of the Field [Test.class.getDeclaredField("x")] from 18 [final] to 2 [non-final], we can then modify the value of x:
Test tObj=new Test();
Field f_x=Test.class.getDeclaredField("x");
//change modifiers 18 -> 2
Field f_f_x=f_x.getClass().getDeclaredField("modifiers");
f_f_x.setAccessible(true);
f_f_x.setInt(f_x, 2/*non-final*/);
f_x.setAccessible(true);
f_x.setInt(tObj, 110);//change the value of x to 110
System.out.println("statically compiled x=911: "+tObj.getX()+" ------ value changed at runtime to 110: "+f_x.getInt(tObj));
f_x.setInt(tObj, 111);//you can keep on changing the value of x
System.out.println(f_x.getInt(tObj));
But restoring the original modifiers with f_f_x.setInt(f_x, 18/*final*/); does not work, because a Field initializes its FieldAccessor reference only once.
During the process above I also found a JDK bug: if you replace the highlighted line f_f_x.setInt(f_x, 2/*non-final*/); above with the following code:
f_f_x.setInt(f_x, 10/*10 is the static non-final modifiers value, but x is non-static; this makes f_x obtain a static FieldAccessor*/); then "A fatal error has been detected by the Java Runtime Environment" is raised and a corresponding err log file is produced. Clearly the JVM does not handle this case; I have submitted it to the Sun bug report site.
Sun notified me on 2010-03-26 that they have acknowledged the bug, bug id: 6938467. It may take one or two days before it appears on the external site.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6938467
Below are source-code analyses of MINA, xSocket and Grizzly.
Apache MINA (based on the mina-2.0.0-M6 source):
The most common MINA NIO TCP usage looks like this:
NioSocketAcceptor acceptor = new NioSocketAcceptor(/*NioProcessorPool's size*/);
DefaultIoFilterChainBuilder chain = acceptor.getFilterChain();
//chain.addLast("codec", new ProtocolCodecFilter(
//new TextLineCodecFactory()));
......
// Bind
acceptor.setHandler(/*our IoHandler*/);
acceptor.bind(new InetSocketAddress(port));
------------------------------------------------------------------------------------
Start with NioSocketAcceptor (extends AbstractPollingIoAcceptor):
bind(SocketAddress)--->bindInternal--->startupAcceptor: starts AbstractPollingIoAcceptor.Acceptor.run on a thread of the executor [Executor], registers [interestOps: SelectionKey.OP_ACCEPT], then wakes up the selector.
As soon as a connection comes in, a NioSocketSession is built, corresponding to the channel; then session.getProcessor().add(session) adds the current channel to a NioProcessor's selector [interestOps: SelectionKey.OP_READ], so that requests arriving on each connection are handled by its NioProcessor.
A few points worth noting here:
1. One NioSocketAcceptor corresponds to several NioProcessors; NioSocketAcceptor uses SimpleIoProcessorPool with DEFAULT_SIZE = Runtime.getRuntime().availableProcessors() + 1. This size can of course be set when the NioSocketAcceptor is constructed.
2. One NioSocketAcceptor corresponds to one java.nio Selector [OP_ACCEPT], and each NioProcessor also corresponds to one java.nio Selector [OP_READ].
3. One NioSocketAcceptor corresponds to one internal AbstractPollingIoAcceptor.Acceptor --- thread.
4. Each NioProcessor likewise corresponds to one internal AbstractPollingIoProcessor.Processor --- thread.
5. If you do not supply an Executor (thread pool) when constructing the NioSocketAcceptor, Executors.newCachedThreadPool() is used by default.
This Executor is shared by the NioSocketAcceptor and the NioProcessors, i.e. the Acceptor thread (one) and the Processor threads (several) mentioned above all come from this Executor.
Once a connection (java.nio channel -- NioSession) has been added to ProcessorPool[i] -- a NioProcessor -- control passes into AbstractPollingIoProcessor.Processor.run;
the AbstractPollingIoProcessor.Processor.run method runs on one thread of the Executor above: the current NioProcessor handles the requests of all connections registered on its selector [interestOps: SelectionKey.OP_READ].
The main execution flow of AbstractPollingIoProcessor.Processor.run:
for (;;) {
......
int selected = select(SELECT_TIMEOUT/*=1000L*/);
.......
if (selected > 0) {
process();
}
......
}
process() --> for all session channels with OP_READ ready --> read(session): this read method is AbstractPollingIoProcessor's private void read(T session).
The main flow of read(session) is: read the channel data into buf; if readBytes > 0 then IoFilterChain.fireMessageReceived(buf) /* our IoHandler.messageReceived gets called inside this */.
At this point the flow by which MINA NIO handles a request is clear.
MINA's threading model for handling requests is now also visible, and with it a performance problem: in AbstractPollingIoProcessor.Processor.run --> process --> read(per session), during process MINA reads the data of all selected channels one after another and then fires MessageReceived into our IoHandler.messageReceived, rather than processing them concurrently; obviously, requests arriving later are handled with a delay.
Suppose NioProcessorPool's size = 2 and 200 clients connect at the same time, so that each NioProcessor has 100 connections registered. Each NioProcessor handles those 100 requests strictly in order, so before the 100th request gets processed it has to wait for the preceding 99 to be handled.
Some people have proposed an improvement: do further dispatching in our own IoHandler.messageReceived using a thread pool. That is of course a good idea.
But requests are still delayed, because the time spent reading the data remains: the 100th request's data can only be read after the data of the preceding 99 has been read, so even enlarging the ProcessorPool does not solve this problem. (A sketch of the dispatching idea follows below.)
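Here is a minimal sketch of that dispatching idea, assuming a plain IoHandler with no codec filter in the chain; WorkerPoolHandler, the pool size and handleMessage are illustrative, not MINA API.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;

public class WorkerPoolHandler extends IoHandlerAdapter {
    private final ExecutorService workers = Executors.newFixedThreadPool(8);

    @Override
    public void messageReceived(final IoSession session, final Object message) throws Exception {
        // Hand the already-read message off to a worker thread so the
        // NioProcessor thread can get back to its selector loop quickly.
        workers.execute(new Runnable() {
            public void run() {
                handleMessage(session, message);
            }
        });
    }

    private void handleMessage(IoSession session, Object message) {
        // business processing, session.write(response), etc. (hypothetical)
    }
}
MINA itself also ships org.apache.mina.filter.executor.ExecutorFilter, which can be added to the filter chain to push event processing into a thread pool in much the same way.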
In addition, MINA's trap (a rather fashionable word) also shows up here, namely in read(session). Before discussing the trap, one clarification: when our client sends a message body to the server, it is not necessarily sent whole in a single write; it may be sent in several pieces, especially when the client is busy or the message body is long. In that case MINA will call our IoHandler.messageReceived multiple times, so the message body is split into several fragments; the data we handle in each IoHandler.messageReceived call is incomplete, which leads to lost or invalid data.
Here is the source of read(session):
private void read(T session) {
IoSessionConfig config = session.getConfig();
IoBuffer buf = IoBuffer.allocate(config.getReadBufferSize());
final boolean hasFragmentation =
session.getTransportMetadata().hasFragmentation();
try {
int readBytes = 0;
int ret;
try {
if (hasFragmentation/*hasFragmentation is always true; perhaps MINA's developers were aware of the fragmentation problem in transmitted data, but the handling below is far from enough: once the client sends with pauses, ret may be 0, the while loop exits, and an incomplete readBytes is fired*/) {
while ((ret = read(session, buf)) > 0) {
readBytes += ret;
if (!buf.hasRemaining()) {
break;
}
}
} else {
ret = read(session, buf);
if (ret > 0) {
readBytes = ret;
}
}
} finally {
buf.flip();
}
if (readBytes > 0) {
IoFilterChain filterChain = session.getFilterChain();
filterChain.fireMessageReceived(buf);
buf = null;
if (hasFragmentation) {
if (readBytes << 1 < config.getReadBufferSize()) {
session.decreaseReadBufferSize();
} else if (readBytes == config.getReadBufferSize()) {
session.increaseReadBufferSize();
}
}
}
if (ret < 0) {
scheduleRemove(session);
}
} catch (Throwable e) {
if (e instanceof IOException) {
scheduleRemove(session);
}
IoFilterChain filterChain = session.getFilterChain();
filterChain.fireExceptionCaught(e);
}
}
You can try this trap out for yourself: see whether one complete message arrives in several pieces, i.e. whether your IoHandler.messageReceived gets called multiple times.
Keeping our application's message bodies intact is actually simple: create a "breakpoint" (an accumulation buffer), set it on the current IoSession, and once the message body is complete, dispatch it and remove it from the current session, as in the sketch below.
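A minimal sketch of that accumulate-then-dispatch idea, assuming no ProtocolCodecFilter is installed (so messageReceived receives raw IoBuffer fragments) and a '\n'-terminated text protocol; BUF_KEY and doDispatch are illustrative names, and multi-byte characters split across fragments are ignored here.
import java.nio.charset.Charset;

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;

public class AccumulatingHandler extends IoHandlerAdapter {
    private static final String BUF_KEY = "accumulated.buffer";

    @Override
    public void messageReceived(IoSession session, Object message) throws Exception {
        IoBuffer in = (IoBuffer) message;

        IoBuffer acc = (IoBuffer) session.getAttribute(BUF_KEY);
        if (acc == null) {
            acc = IoBuffer.allocate(256).setAutoExpand(true); // the per-session "breakpoint" buffer
            session.setAttribute(BUF_KEY, acc);
        }
        acc.put(in.buf()); // append this fragment

        // Peek at everything accumulated so far without disturbing the buffer.
        IoBuffer view = acc.duplicate();
        view.flip();
        String soFar = view.getString(Charset.forName("UTF-8").newDecoder());
        int end = soFar.indexOf('\n');
        if (end >= 0) {
            session.removeAttribute(BUF_KEY);             // complete: remove it from the session
            doDispatch(session, soFar.substring(0, end)); // and dispatch it (hypothetical)
        }
        // otherwise keep accumulating; the next fragment is appended above
    }

    private void doDispatch(IoSession session, String completeMessage) {
        // hand the complete message to business logic / a worker pool
    }
}
MINA's own answer to the same problem is the codec layer, e.g. CumulativeProtocolDecoder or the TextLineCodecFactory already shown in the usage example above.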
--------------------------------------------------------------------------------------------------
Next, based on the xSocket v2_8_8 source:
tcp usage e.g:
IServer srv = new Server(8090, new EchoHandler());
srv.start() or run();
-----------------------------------------------------------------------
class EchoHandler implements IDataHandler {
public boolean onData(INonBlockingConnection nbc)
throws IOException,
BufferUnderflowException,
MaxReadSizeExceededException {
String data = nbc.readStringByDelimiter("\r\n");
nbc.write(data + "\r\n");
return true;
}
}
------------------------------------------------------------------------
Note 1. Server : Acceptor : IDataHandler ------ 1:1:1
Server.run --> IoAcceptor.accept() blocks on the port; as soon as a channel arrives, an IoSocketDispatcher is taken from the IoSocketDispatcherPool, an IoSocketHandler and a NonBlockingConnection are built, and Server.LifeCycleHandler.onConnectionAccepted(ioHandler) is called to initialize the IoSocketHandler. Note: IoSocketDispatcherPool.size defaults to 2, i.e. there are only 2 threads doing select and the corresponding 2 IoSocketDispatchers. This plays the same role as MINA's NioProcessor count.
Note 2. IoSocketDispatcher [one java.nio Selector each] : IoSocketHandler : NonBlockingConnection ------ 1:1:1
In IoSocketDispatcher.run --> IoSocketDispatcher.handleReadWriteKeys:
for all selectedKeys
{
IoSocketHandler.onReadableEvent/onWriteableEvent.
}
IoSocketHandler.onReadableEvent proceeds as follows:
1.readSocket();
2.NonBlockingConnection.IoHandlerCallback.onData
NonBlockingConnection.onData--->appendDataToReadBuffer: readQueue append data
3.NonBlockingConnection.IoHandlerCallback.onPostData
NonBlockingConnection.onPostData--->HandlerAdapter.onData[our dataHandler] performOnData in WorkerPool[threadpool].
Because the channel's data is first read into the readQueue, the application's dataHandler.onData will be called repeatedly until the data in the readQueue has been read out. So a trap similar to MINA's still exists, and the way around it is also similar, since here we have the NonBlockingConnection; see the sketch below.
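A minimal sketch of the xSocket side, relying on the documented idiom that readStringByDelimiter either returns a complete record or throws BufferUnderflowException without consuming data, and that the dispatcher simply calls onData again when more bytes arrive; LineHandler and dispatch are illustrative names.
import java.io.IOException;
import java.nio.BufferUnderflowException;

import org.xsocket.MaxReadSizeExceededException;
import org.xsocket.connection.IDataHandler;
import org.xsocket.connection.INonBlockingConnection;

class LineHandler implements IDataHandler {
    public boolean onData(INonBlockingConnection nbc)
            throws IOException, BufferUnderflowException, MaxReadSizeExceededException {
        // If the terminating "\r\n" has not arrived yet, this throws
        // BufferUnderflowException; we let it propagate so the dispatcher
        // retries when the rest of the message body shows up.
        String completeMessage = nbc.readStringByDelimiter("\r\n");
        dispatch(nbc, completeMessage); // hand off the complete message (hypothetical)
        return true;
    }

    private void dispatch(INonBlockingConnection nbc, String msg) throws IOException {
        nbc.write("echo: " + msg + "\r\n");
    }
}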
----------------------------------------------------------------------------------------------
Next, based on the grizzly-nio-framework v1.9.18 source:
tcp usage e.g:
Controller sel = new Controller();
sel.setProtocolChainInstanceHandler(new DefaultProtocolChainInstanceHandler(){
public ProtocolChain poll() {
ProtocolChain protocolChain = protocolChains.poll();
if (protocolChain == null){
protocolChain = new DefaultProtocolChain();
//protocolChain.addFilter(our app's filter/*application processing starts from the filter, analogous to MINA's IoHandler and xSocket's IDataHandler*/);
//protocolChain.addFilter(new ReadFilter());
}
return protocolChain;
}
});
//if you do not add your own SelectorHandler, the Controller uses TCPSelectorHandler by default, port 18888
sel.addSelectorHandler(our app's selectorHandler on special port);
sel.start();
------------------------------------------------------------------------------------------------------------
Note 1. Controller : ProtocolChain : Filter ------ 1:1:n, Controller : SelectorHandler ------ 1:n,
SelectorHandler [one Selector each] : SelectorHandlerRunner ------ 1:1,
Controller.start() ---> for each SelectorHandler, a SelectorHandlerRunner is started to run.
SelectorHandlerRunner.run()--->selectorHandler.select() then handleSelectedKeys:
for all selectedKeys
{
NIOContext.execute:dispatching to threadpool for ProtocolChain.execute--->our filter.execute.
}
You will notice that there is no read-data-from-channel step here, because that is left to your own filter, so naturally there is no trap like MINA's or xSocket's: the dispatching happens earlier. But note that SelectorHandler : Selector : SelectorHandlerRunner : Thread [SelectorHandlerRunner.run] are all 1:1:1:1, i.e. only one thread does the doSelect-then-handleSelectedKeys work.
By comparison, although Grizzly is better in concurrent performance, it falls short of MINA and xSocket in ease of use. For example, the role of the objects that represent the current connection or session in MINA and xSocket (IoSession, INonBlockingConnection) is taken by NIOContext in Grizzly, but NIOContext provides neither session/connection lifecycle events nor the usual read/write operations; for those you have to extend SelectorHandler and ProtocolFilter yourself. From another angle this also shows that Grizzly is the more extensible and flexible of the three.
Singleton sample 1 (eager initialization):
public class Singleton {
private final static Singleton instance=new Singleton();
private Singleton(){}
public static Singleton getInstance(){
return instance;
}
}
Double-checked locking is widely used, and what it prides itself on is its performance advantage (when getInstance is called a great many times).
Well, I'll state the conclusion directly: the best performer is sample 1 [again, in the case where getInstance is called a great many times].
Sample 1 uses non-lazy (eager) loading, so there is an objection: if I never use the Singleton instance, isn't that memory occupied for nothing?
So whether you choose sample 1 or double-checked locking depends on your actual situation: if the singleton class is referenced very frequently in your program, choose sample 1; otherwise, double-checked locking. A sketch of the double-checked variant follows for reference.
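Since the comparison above never shows the double-checked variant, here is a minimal sketch of it, assuming Java 5+ semantics where volatile makes the pattern safe; LazySingleton is an illustrative name.
public class LazySingleton {
    private static volatile LazySingleton instance; // volatile is required for safe publication

    private LazySingleton() {}

    public static LazySingleton getInstance() {
        if (instance == null) {                    // first check, no locking
            synchronized (LazySingleton.class) {
                if (instance == null) {            // second check, under the lock
                    instance = new LazySingleton();
                }
            }
        }
        return instance;
    }
}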
2. Please write a class implementing the singleton pattern.
If you write either of the two samples below, I will ask you: how can you obtain several instances of it within the same JVM and the same ClassLoader? (Don't be surprised; one possible answer is sketched after the two samples.)
Sample 1:
public class Singleton {
private final static Singleton instance=new Singleton();
private Singleton(){}
public static Singleton newInstance(){
return instance;
}
}
Sample 2:
public class Singleton {
private static volatile int instanceCounter=0;
private Singleton(){
if(instanceCounter>0)
throw new RuntimeException("can't create multi instances!");
instanceCounter++;
}
private final static Singleton instance=new Singleton();
public static Singleton newInstance(){
return instance;
}
}
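One possible answer (sketched against sample 1, and not necessarily the only one the interviewer has in mind) is to open the private constructor by reflection; Breaker is an illustrative class name.
import java.lang.reflect.Constructor;

public class Breaker {
    public static void main(String[] args) throws Exception {
        Singleton first = Singleton.newInstance();

        // Reflectively open up the private constructor and create a second instance.
        Constructor<Singleton> ctor = Singleton.class.getDeclaredConstructor();
        ctor.setAccessible(true);
        Singleton second = ctor.newInstance();

        System.out.println(first == second); // false: two instances, same JVM, same ClassLoader
    }
}
For sample 2 the reflective constructor call alone would throw, since instanceCounter is already 1 by then, but that counter is itself just a private static field and can likewise be reset through reflection before invoking the constructor.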
3. Java exceptions are divided into checked and unchecked. RuntimeException and Error need no explicit try-catch and can be thrown directly,
but an ordinary (checked) exception must be caught or declared:
throw new Exception("...") will not compile if the statement is neither inside a try-catch block nor in a method whose declaration has a throws clause.
OK, look at the following code:
public class TestClass {
public void testMethod()/*no throws clause here*/{
......
throw new Exception("force throw the exception...");
......
}
}
Obviously the method above, as written, will not compile. But if you were required to throw a perfectly ordinary checked Exception from the body of testMethod at runtime, could you find a way?
These three questions were not set by Sun; search for them if you don't believe me... (One well-known trick is sketched below.)
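For reference, one widely known trick (a sketch only, not necessarily the answer the author intends) abuses generic type erasure so the compiler is persuaded that the throw is unchecked; sneakyThrow is an illustrative helper name.
public class TestClass {
    public void testMethod() /* still no throws clause */ {
        // The type argument is erased at runtime, so the checked Exception escapes unchecked.
        TestClass.<RuntimeException>sneakyThrow(new Exception("force throw the exception..."));
    }

    @SuppressWarnings("unchecked")
    private static <T extends Throwable> void sneakyThrow(Throwable t) throws T {
        throw (T) t;
    }
}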
0. One channel corresponds to one SelectionKey in the same selector.
e.g:
SelectionKey sk=sc.register(selector, SelectionKey.OP_READ, handler);
sk==sc.register(selector, SelectionKey.OP_WRITE, handler) true?
Is the sk returned by selector.select() for the same channel the same each time?
1.channel.register(...) may block if invoked concurrently with another registration[another.register(...)] or selection operation[selector.select(...)] involving *****the same selector*****.
This is quoted from the register method's documentation in the JDK source.
e.g:
If a selection thread is already waiting in select(), and another thread then calls channel.register(), that register call will block.
2.selectionKey.cancel() : The key will be removed from all of the selector's key sets during *****the next selection operation[selector.select(...)]*****.
may block briefly if invoked concurrently with a cancellation[cancel()] or selection operation[select(...)] involving ***the same selector***.
This too is quoted from the cancel method's documentation in the JDK source.
e.g:
If you call selectionKey.cancel() and then immediately channel.register() to the same selector,
and between the cancel and the register no thread (including the current one) performs a select operation,
then java.nio.channels.CancelledKeyException is thrown.
So: cancel --> select --> re-register.
3. If you don't remove the current selectedKey from selector.selectedKeys() [a Set], selector.select(...) will not block [possibly CPU 100%, especially when the client cuts the current channel (connection)].
e.g:
Iterator<SelectionKey> it=selector.selectedKeys().iterator();
...for/while it.hasNext()...
it.remove();<------*****must do this, or clear the selected-keys Set at the end;
If you do remove the current selectedKey from selector.selectedKeys() [the Set] but do not clear the ready operations from the interest set with sk.interestOps(sk.interestOps() & (~sk.readyOps())), selector.select(...) may still fail to block [select() returning without blocking several times, or an unexpected exception]. A minimal select-loop sketch appears after point 5 below.
4. OP_WRITE should not be left registered with the selector [may cause CPU 100%].
5. If wakeup() is invoked before select() [no matter how many times, >= 1], the next select() will not block [it fails to block only once].
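To tie points 3 and 5 together, here is a minimal sketch of the select-loop shape implied above; the Handler callback type is illustrative, not part of java.nio.
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;

public class SelectLoop {
    /** Illustrative callback attached to each registered key. */
    interface Handler { void onReadable(SelectionKey key) throws IOException; }

    public static void runLoop(Selector selector) throws IOException {
        while (true) {
            int selected = selector.select(1000L);   // wakes up at least once per second
            if (selected == 0) {
                continue;
            }
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey sk = it.next();
                it.remove();                         // point 3: must remove from selectedKeys
                if (!sk.isValid()) {
                    continue;                        // e.g. the peer closed the connection
                }
                if (sk.isReadable()) {
                    ((Handler) sk.attachment()).onReadable(sk);
                }
            }
        }
    }
}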
Although some people have analyzed the performance of NIO's wakeup and the does-not-block bug on Linux, Java NIO is still efficient; the C/C++ experts should go take a look at nio.dll/nio.so under the jre/bin directory: Java NIO is built on the select model (one of the common network programming models in C/C++).
Servers based on Java NIO: MINA, Grizzly [GlassFish], Jetty (based on Grizzly), Tomcat 6 [can be configured with Http11NioProtocol]...
From my own reading of the Grizzly and Tomcat 6 sources, they have not yet truly exploited NIO's advantage in handling requests asynchronously: their reads and writes are still blocking even though a selectorPool is used, and in Tomcat 6 it also takes some effort to strip out the socket communication. MINA, on the other hand, does not live up to its reputation, and it has bugs too.
org.apache.tomcat.util.net.NioEndpoint.start()-->
TaskQueue taskqueue = new TaskQueue();/***queue.capacity==Integer.MAX_VALUE***/
TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-");
executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60,TimeUnit.SECONDS,taskqueue, tf);
taskqueue.setParent( (ThreadPoolExecutor) executor, this);
2. ***** If you set LinkedBlockingQueue.capacity to some suitable value smaller than Integer.MAX_VALUE, then only when the number of tasks put into the queue reaches that capacity will the pool add more threads, letting poolSize exceed corePoolSize (but not maximumPoolSize). Isn't it a bit late to be adding threads at that point?? *****
And then reject(command) may soon follow; besides, what value to set LinkedBlockingQueue.capacity to is itself a headache.
So what ThreadPoolExecutor + LinkedBlockingQueue expresses is: first grow the number of threads up to corePoolSize; only when the queue's task count reaches its maximum capacity will more threads be added beyond corePoolSize to handle tasks, up to maximumPoolSize.
But why can't we have this instead: set LinkedBlockingQueue.capacity to Integer.MAX_VALUE so that as many tasks as possible get accepted, while at the same time, when things are busy, grow the pool's threads toward maximumPoolSize to process those tasks as quickly as possible. Even with LinkedBlockingQueue.capacity set to a suitable value <<< far smaller than Integer.MAX_VALUE, we should not have to wait until the task count reaches the queue's capacity before adding threads to push poolSize beyond corePoolSize toward maximumPoolSize.
So the drawback of the java.util.concurrent ThreadPoolExecutor + LinkedBlockingQueue combination shows itself: if we want the pool to accept as large a volume of tasks as possible, we set LinkedBlockingQueue.capacity to Integer.MAX_VALUE, but then the pool's thread count can never expand to maximumPoolSize, so the pool's full processing capability is never brought to bear; if instead we set LinkedBlockingQueue.capacity to a small value, the thread count can expand to maximumPoolSize, but when all the threads in the pool are busy the pool will start rejecting incoming tasks because the queue is full.
If we set LinkedBlockingQueue.capacity to a fairly large value that is not Integer.MAX_VALUE, then by the time the pool is about to grow beyond corePoolSize, i.e. when the task queue is full, threads are added so late that request tasks are executed with a noticeable delay; they are not handled promptly.
In other words, ThreadPoolExecutor lacks a flexible thread-scheduling mechanism: it does not dynamically adjust the thread count according to how task execution is currently going (busy or idle) and the scale of the backlog of pending tasks in the queue, which limits its processing efficiency. (A small demonstration of the default behavior follows below.)
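A small demonstration of the default behavior described above (plain java.util.concurrent usage, not the author's revised pool); the pool sizes and sleep times are arbitrary.
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                      // corePoolSize
                8,                      // maximumPoolSize
                60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>()); // capacity == Integer.MAX_VALUE

        for (int i = 0; i < 100; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
                }
            });
        }
        Thread.sleep(500);
        // Typically prints "poolSize=2, queued=98": the unbounded queue absorbs
        // everything, so the pool never grows toward maximumPoolSize.
        System.out.println("poolSize=" + pool.getPoolSize()
                + ", queued=" + pool.getQueue().size());
        pool.shutdown();
    }
}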
So what counts as a "busy" condition?
busy[1]: poolSize == corePoolSize and the number of threads currently busy executing tasks (currentBusyWorkers) equals poolSize [regardless of whether the number of tasks put into the queue has reached queue.capacity].
busy[2].1: poolSize == corePoolSize and the number of tasks put into the queue has reached queue.capacity [this is the case where queue.capacity is used as a hard limit on the task queue].
busy[2].2: the basic goal of a thread pool is to process a large number of request tasks as fast as possible, so we need not wait until the number of tasks put into the queue reaches queue.capacity before declaring the pool busy; it is enough that the number of tasks currently in the queue (task_counter) stands in a certain ratio to poolSize or maximumPoolSize, for example task_counter >= poolSize, or >= maximumPoolSize, or >= (NumberOfProcessors + 1). That way the queue.capacity limit can be dropped altogether.
In both of the busy cases above, busy[1] and busy[2], the thread count should be increased, up to maximumPoolSize, so that request tasks are handled as quickly as possible.
The above covered the blemishes of ThreadPoolExecutor + LinkedBlockingQueue when busy; what about when idle?
If corePoolSize < poolSize < maximumPoolSize, then after threads have waited keepAliveTime the pool should shrink back to corePoolSize. Well, here it really does become a bug, and a hard one to spot: poolSize does get reduced, but it may well be reduced too far, to < corePoolSize, possibly even to 0.
ThreadPoolExecutor.Worker.run()-->ThreadPoolExecutor.getTask():
Runnable getTask() {
for (;;) {
try {
int state = runState;
if (state > SHUTDOWN)
return null;
Runnable r;
if (state == SHUTDOWN) // Help drain queue
r = workQueue.poll();
else if (poolSize > corePoolSize || allowCoreThreadTimeOut)
/*when the queue is empty this poll times out and returns null; workerCanExit() is then called and returns true*/
r = workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS);
else
r = workQueue.take();
if (r != null)
return r;
if (workerCanExit()) {
if (runState >= SHUTDOWN) // Wake up others
interruptIdleWorkers();
return null;
}
// Else retry
} catch (InterruptedException ie) {
// On interruption, re-check runState
}
}
}//end getTask.
private boolean workerCanExit() {
final ReentrantLock mainLock = this.mainLock;
mainLock.lock();
boolean canExit;
try {
canExit = runState >= STOP ||
workQueue.isEmpty() ||
(allowCoreThreadTimeOut &&
poolSize > Math.max(1, corePoolSize));
} finally {
mainLock.unlock();
}
return canExit;
}//end workerCanExit.
After workerCanExit() returns true, poolSize is still greater than corePoolSize; the value of poolSize has not changed yet.
Only when ThreadPoolExecutor.Worker.run() finishes --> ThreadPoolExecutor.workerDone(Worker) --> is poolSize decremented, and by then it is too late: in a multithreaded environment the value of poolSize can end up smaller than corePoolSize instead of equal to it!!!
For example: if poolSize (6) is greater than corePoolSize (5), then it is not necessarily just one thread that times out at that moment but several, and each of them may exit run, so the poolSize-- decrements overshoot corePoolSize.
A word on java.util.concurrent.ThreadPoolExecutor's allowCoreThreadTimeOut method, @since 1.6: public void allowCoreThreadTimeOut(boolean value);
what it expresses is that, when idle, threads wait keepAliveTime and after timing out poolSize may drop all the way to 0. [Actually I would prefer it to drop to a minimumPoolSize, especially in a server environment where we need the pool to keep a certain number of threads around to handle promptly the scattered, intermittent, bursty, not-very-heavy requests.] Of course you can treat corePoolSize as that minimumPoolSize and simply not call this method.
To address the above blemishes of the java.util.concurrent thread pool, I have revised its model, in particular optimizing task handling in the "busy" cases (busy[1], busy[2]) so that the pool processes as many tasks as possible as quickly as possible.
The source code of this efficient thread pool is offered for purchase below:
Java version of the threadpool:
http://item.taobao.com/auction/item_detail-0db2-9078a9045826f273dcea80aa490f1a8b.jhtml
C [not C++] version of the threadpool on Windows NT:
http://item.taobao.com/auction/item_detail-0db2-28e37cb6776a1bc526ef5a27aa411e71.jhtml
Starting with Tomcat 6, the org.apache.catalina.CometProcessor interface was added to support the Comet technique.
Modify conf/server.xml:
<Connector port="8080" protocol="HTTP/1.1" -- change the protocol to --> "org.apache.coyote.http11.Http11NioProtocol"
Java: see the CometServlet example on tomcat.apache.org (a rough sketch follows the skeleton below):
import javax.servlet.http.HttpServlet;
import org.apache.catalina.CometEvent;
import org.apache.catalina.CometProcessor;
CometServlet extends HttpServlet implements CometProcessor
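For orientation, here is a rough sketch of what such a servlet's event method tends to look like, based on my reading of the Tomcat 6 Comet API (CometEvent.EventType with BEGIN/READ/END/ERROR); treat the details as approximate and defer to the CometServlet example referenced above.
import java.io.IOException;
import java.io.InputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletResponse;

import org.apache.catalina.CometEvent;
import org.apache.catalina.CometProcessor;

public class CometServlet extends HttpServlet implements CometProcessor {

    public void event(CometEvent event) throws IOException, ServletException {
        HttpServletResponse response = event.getHttpServletResponse();
        if (event.getEventType() == CometEvent.EventType.BEGIN) {
            // Connection opened: keep it alive and push messages to it later.
            response.getWriter().println("comet stream opened");
            response.getWriter().flush();
        } else if (event.getEventType() == CometEvent.EventType.READ) {
            // Request data can be read without blocking.
            InputStream in = event.getHttpServletRequest().getInputStream();
            byte[] buf = new byte[512];
            while (in.available() > 0) {
                in.read(buf);
            }
        } else { // END or ERROR: release the connection
            event.close();
        }
    }
}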
javascript:
var xmlReq; // shared so that handler() below can access it
function installComet(){
xmlReq = window.ActiveXObject ? new ActiveXObject("Microsoft.XMLHTTP") : new XMLHttpRequest();
xmlReq.onreadystatechange = handler;
xmlReq.open("GET", "/yourapp/comet",true);
xmlReq.send();
}
function handler(){
try{
if(xmlReq.readyState){
if(xmlReq.readyState>=3){
alert(xmlReq.responseText);
}
}
}catch(e){
alert(xmlReq.readyState+":e->:"+e.message);
}
}
In every version of the IE browser, handler is called back only once, no matter how many messages the server pushes over this connection,
and operating on responseText at that point raises a JavaScript error: the data required to complete this operation is not yet available.
In Firefox, handler is called multiple times, but responseText keeps the previous messages instead of clearing them: the data in responseText accumulates as the server's messages arrive.
So far, browsers can only support Comet on the client side through plugins, which is why the popular Flash Player and ActionScript have become the first choice:
ActionScript establishes a long-lived connection through a socket.
So those AJAX frameworks cannot truly support Comet; they can only poll, via setTimeout/setInterval,
and DWR's ReverseAjax does exactly that, using setTimeout to poll the server; see the source of DWR's engine.js.