<?xml version="1.0" encoding="utf-8" standalone="yes"?>
http://www.tkk7.com/cooperzh/zh-cn  Tue, 13 May 2025 04:57:33 GMT

abstract class MappedByteBuffer extends ByteBuffer
http://www.tkk7.com/cooperzh/articles/MappedByteBuffer.html  cooperzh  Fri, 06 Jan 2012 06:09:00 GMT

MappedByteBuffer maps a file directly into virtual memory. The whole file can be mapped, or, if the file is too large, it can be mapped segment by segment.
It is usually created with FileChannel.map(). Once mapped, the file contents are accessed through the MappedByteBuffer, which is much faster than reading the file from disk.
FileChannel.map() takes a mode when the mapping is created:
MapMode.READ_ONLY — any attempt to modify the buffer throws ReadOnlyBufferException
MapMode.READ_WRITE — the buffer is shared; every program with access can read and write it, but whether a completed write becomes immediately visible to other programs is unspecified
MapMode.PRIVATE — creates a private (copy-on-write) copy; modifications are invisible to other programs accessing the file at the same time

protected fields:
volatile boolean isAMappedBuffer;
MappedByteBuffer(int mark, int pos, int lim, int cap, boolean mapped);
MappedByteBuffer(int mark, int pos, int lim, int cap);

private methods:
checkMapped();
pagePosition();

public final methods:
isLoaded(); — whether the buffer's contents are resident in physical memory
load(); — loads the buffer's contents from virtual memory into physical memory
force(); — when the buffer was mapped with MapMode.READ_WRITE, writes the buffer's contents back to the storage device

private native methods:
isLoaded0();
load0();
force0();
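A minimal sketch of the mapping workflow described above — map with FileChannel.map(), hint the pages in with load(), and read. Self-contained against a temporary file; error handling is trimmed for brevity:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedRead {
    // Maps the whole file READ_ONLY and returns its contents as a String.
    static String mapAndRead(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            buf.load();                       // hint: pull the pages into physical memory
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            return new String(bytes, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("mapped", ".txt");
        Files.write(file, "hello mapping".getBytes(StandardCharsets.UTF_8));
        System.out.println(mapAndRead(file));   // prints: hello mapping
        Files.delete(file);
    }
}
```

A write to a READ_ONLY mapping would throw ReadOnlyBufferException, as noted above.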
cooperzh 2012-01-06 14:09
]]>
Interface Readable
http://www.tkk7.com/cooperzh/articles/Readable.html  cooperzh  Fri, 06 Jan 2012 03:01:00 GMT

Readable is a source of characters.

public method:
read(CharBuffer); — reads characters into the given buffer
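A small sketch of the contract: StringReader is one of the JDK classes that implements Readable, and read(CharBuffer) returns -1 at end of input.

```java
import java.io.IOException;
import java.io.StringReader;
import java.nio.CharBuffer;

public class ReadableDemo {
    // Drains any Readable into a String via read(CharBuffer).
    static String drain(Readable source) throws IOException {
        StringBuilder out = new StringBuilder();
        CharBuffer buf = CharBuffer.allocate(16);
        while (source.read(buf) != -1) {   // -1 signals end of the character source
            buf.flip();                    // switch the buffer from filling to draining
            out.append(buf);
            buf.clear();
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(drain(new StringReader("characters from a Readable")));
    }
}
```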

cooperzh 2012-01-06 11:01
]]>
Interface Appendable
http://www.tkk7.com/cooperzh/articles/Appendable.html  cooperzh  Fri, 06 Jan 2012 02:58:00 GMT
Its methods are:
append(char);
append(CharSequence);
append(CharSequence csq, int start, int end);
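A quick sketch exercising all three overloads — StringBuilder, StringBuffer and Writer all implement Appendable (note that append is declared to throw IOException):

```java
public class AppendableDemo {
    // Exercises the three Appendable.append overloads on a StringBuilder.
    static String build() throws java.io.IOException {
        Appendable a = new StringBuilder();
        a.append('J');                      // append(char)
        a.append("ava");                    // append(CharSequence)
        a.append(" NIO basics", 0, 4);      // append(CharSequence, start, end) -> " NIO"
        return a.toString();
    }

    public static void main(String[] args) throws java.io.IOException {
        System.out.println(build());   // prints: Java NIO
    }
}
```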

cooperzh 2012-01-06 10:58
]]>
Interface Comparable<T>
http://www.tkk7.com/cooperzh/articles/Comparable.html  cooperzh  Thu, 05 Jan 2012 15:54:00 GMT


public method:
compareTo() — compares the contents of two objects
The return value is defined by each implementing class; it is negative, zero or positive, conventionally -1, 0 or 1.

Classes that implement this interface gain the ability to compare themselves with one another.

An array whose element class implements this interface is sorted automatically by sort(), because sort() internally calls compareTo() to compare elements. This is the benefit of implementing Comparable.
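A sketch of that benefit — an illustrative entity class (the Task name is hypothetical) defining its own ordering, sorted with Arrays.sort:

```java
import java.util.Arrays;

public class ComparableDemo {
    // A hypothetical entity class that defines its own ordering.
    static class Task implements Comparable<Task> {
        final int priority;
        Task(int priority) { this.priority = priority; }

        // Negative, zero or positive — not necessarily exactly -1/0/1.
        public int compareTo(Task other) {
            return Integer.compare(this.priority, other.priority);
        }
    }

    static int[] sortedPriorities(int... priorities) {
        Task[] tasks = new Task[priorities.length];
        for (int i = 0; i < priorities.length; i++) tasks[i] = new Task(priorities[i]);
        Arrays.sort(tasks);                       // calls compareTo internally
        int[] out = new int[tasks.length];
        for (int i = 0; i < tasks.length; i++) out[i] = tasks[i].priority;
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sortedPriorities(3, 1, 2)));  // [1, 2, 3]
    }
}
```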



cooperzh 2012-01-05 23:54
]]>
《Java解惑》 (Java Puzzlers) notes
http://www.tkk7.com/cooperzh/archive/2012/01/05/java_puzzlers.html  cooperzh  Thu, 05 Jan 2012 15:35:00 GMT

19 Block comments (/* */) cannot reliably comment out a block of code; use single-line (//) comments instead.

cooperzh 2012-01-05 23:35

Interface CharSequence
http://www.tkk7.com/cooperzh/articles/CharSequence.html  cooperzh  Thu, 05 Jan 2012 13:24:00 GMT
CharSequence is a readable sequence of char values —
the most basic contract for storing chars.

Methods:
  1. charAt(int index);
  2. length(); — the length is the number of 16-bit char values in the sequence
  3. subSequence(int start, int end);
  4. toString();
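The four methods in action — coded only against the CharSequence contract, so String, StringBuilder and CharBuffer all work interchangeably:

```java
public class CharSequenceDemo {
    // Uses only the CharSequence contract, so any implementation works.
    static String describe(CharSequence cs) {
        return cs.length() + ":" + cs.charAt(0) + ":" + cs.subSequence(0, 2).toString();
    }

    public static void main(String[] args) {
        System.out.println(describe("chars"));                     // 5:c:ch
        System.out.println(describe(new StringBuilder("chars")));  // 5:c:ch
    }
}
```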



cooperzh 2012-01-05 21:24
]]>
《JVM调优总结.pdf》 (JVM tuning summary) notes
http://www.tkk7.com/cooperzh/archive/2011/12/27/367353.html  cooperzh  Tue, 27 Dec 2011 09:36:00 GMT

Author's blog: http://pengjiaheng.iteye.com/

Data types: primitive types and reference types.
A primitive-type variable holds the raw value itself: byte, short, int, long, char, float, double, boolean, returnAddress.
A reference type holds a reference to some object, i.e. the address of the referenced value: class types, interface types, arrays.

Stack vs heap:
The stack is the unit of execution; each thread has its own thread stack holding that thread's state — local variables, program execution state, method return values, and so on.
The heap is where data lives and is shared by all threads; it holds object instances.

1 From a design standpoint, the stack represents processing logic and the heap represents data.
2 Heap data is shared among threads, so multiple threads can reach the same object; any stack can reach heap data, which saves space.
3 The stack keeps the execution context and needs contiguous address ranges; it can only grow in one direction, which limits its capacity. Heap data can grow dynamically as needed, while the stack only records an address into the heap.
4 Object orientation is the marriage of stack and heap: an object's fields are data and live on the heap, while its methods are logic and run on the stack.

In Java, the main method is the starting point of the stack — and of the program.

The heap holds object instances; the stack holds primitive values and references into the heap. An object's size is not statically known and may even change dynamically, but on the stack it is represented by a fixed 4-byte reference. This is the benefit of separating stack and heap.

Primitive values occupy 1 to 8 bytes, need little space and never grow, so keeping them on the stack is sufficient.

Does Java pass arguments by value or by reference?
The program always runs on the stack, so method arguments are always primitive values or object references — never the object itself. Simply put, Java always passes a copy of a value.
When a reference is passed, the program follows it to the object on the heap; modifications then change the real object, not the reference itself — so passing a reference is still, ultimately, passing a value.
Because a primitive argument is a copy of the value, modifying it changes only that copy; only when an object reference is passed can the original object be modified.

The stack is the most fundamental thing in program execution: a program can run without a heap, but not without a stack.
The stack size is set with -Xss; if a stack needs to hold more data, increase this value, or java.lang.StackOverflowError is thrown.

Java object sizes:
An empty Object is 8 bytes — the heap size of an object with no fields. For
Object o = new Object();
the total is 4 bytes + 8 bytes: 4 bytes on the stack for the reference, 8 bytes on the heap for the object itself. Since every non-primitive Java object inherits from Object, every object is at least 8 bytes.

class NewObject {
    int count;
    boolean flag;
    Object o;
}
Its size is: object header (8) + int (4) + boolean (1) + object reference (4) = 17 bytes.
But because the JVM allocates objects in multiples of 8 bytes, a NewObject instance is allocated 24 bytes.

Mind the size of the primitive wrapper types. A wrapper is a full object, so treat Integer, Float, Double, etc. as objects: a wrapper takes at least 16 bytes (a multiple of 8), several times its primitive counterpart — prefer primitives where possible.

References come in four strengths: strong, soft, weak and phantom.
1 StrongReference — the ordinary reference produced by a normal declaration; collection is strict about strong references, and a strongly referenced object is never reclaimed.
2 SoftReference — generally used as a cache. During collection the JVM decides, based on the system's remaining memory, whether to reclaim soft references; if the JVM hits OutOfMemory, no soft references can remain.
3 WeakReference — similar to soft references, also used for caches; the difference is that a weak reference is always reclaimed at collection time, so its lifetime is at most one GC cycle.
4 PhantomReference — exists in name only and may be reclaimed at any time; its main use is together with a ReferenceQueue. When the collector finds an object with a phantom reference, it enqueues the phantom reference before reclaiming the memory; the program can watch the queue to learn that the referent is about to be collected, and decide whether to act.

Systems normally use strong references; soft and weak references are for memory-constrained situations, commonly desktop applications.

Basic garbage-collection algorithms
1 Reference Counting — the oldest algorithm. Each added reference increments a counter, each removed reference decrements it; collection reclaims objects whose count is 0. Its fatal flaw: it cannot handle cyclic references.
2 Mark-Sweep — two phases: first mark every object reachable from the root references, then traverse the whole heap and clear the unmarked objects. It must pause the entire application and produces memory fragmentation.
3 Copying — divides the space into two equal regions, using only one at a time. Collection traverses the active region and copies the live objects into the other region. Only live objects are processed, so the copying cost is low, and compaction happens as part of the copy — no fragmentation. The drawback: twice the memory is needed.
4 Mark-Compact — combines the strengths of Mark-Sweep and Copying. First phase: mark every object reachable from the roots. Second phase: traverse the heap, clear the unmarked objects and slide the live ones together into one part of the heap, laid out in order. This avoids fragmentation without Copying's space cost.
5 Incremental Collecting — collects in real time while the application runs; not used by JDK 5.
6 Generational Collecting — based on analyzing object lifetimes: objects are split into young, old (tenured) and permanent generations, and objects of different lifetimes are collected with different algorithms (one of the above).
7 Serial collection — a single thread does all GC work; with no multi-thread coordination it is easier to implement and quite efficient, but it cannot exploit multiple processors, so it only suits uniprocessor machines.
8 Parallel collection — multiple threads do the GC work: faster and more efficient. In theory, the more CPUs, the bigger the advantage of parallel collection.
9 Concurrent collection — the previous two must pause the whole runtime during collection, so the system visibly stalls, and the larger the heap, the longer the pause; concurrent collection addresses this.

Problems a collector faces

1 How to tell what is garbage?
Since reference counting cannot resolve cycles, all later algorithms start from the program's roots and traverse the object references to find the live objects. The stack is where execution actually happens, so finding the objects in use starts from the Java stacks — and with multiple threads, every thread's stack must be examined. Besides the stacks there are the runtime registers, which also hold running data. Starting from the references in stacks and registers, the traversal reaches heap objects, then the objects those reference, expanding step by step and ending at null references or primitives. This forms a tree of objects rooted at each stack reference; with several references, several trees. Every object on those trees is needed by the running system and must not be collected; every remaining object is unreachable and may be reclaimed.
So collection starts from a set of root objects (Java stacks, static variables, registers, …); the simplest Java-stack root is main. This style of collection is Mark-Sweep.

2 How to handle fragmentation?
Objects live for different lengths of time, so after the program has run a while the heap develops scattered fragments. The most direct problem with fragmentation is that large blocks can no longer be allocated, and execution efficiency drops. Copying and Mark-Compact both solve it.

3 How to reconcile simultaneous object creation and object reclamation?
GC threads reclaim memory while the running program consumes it — one frees, one allocates, and the two conflict. Existing collectors therefore generally pause the whole application (pausing allocation) before collecting, and resume it afterwards.
The drawback: as the heap keeps growing, collection time grows with it, and so does the pause. For applications with strict timing requirements — say a maximum pause of a few hundred ms — a heap of several GB may blow the budget, and GC becomes the system's bottleneck. Concurrent collection algorithms resolve this conflict: GC threads run at the same time as the application threads, with no pause — at the cost of much greater algorithmic complexity, correspondingly lower throughput, and fragmentation that is harder to solve.

Generational collection in detail:
1 Why generations?
Because of a simple fact: different objects have different lifetimes. Objects of different lifetimes can therefore be collected in different ways, improving collection efficiency.
A running Java program produces masses of objects. Some carry business state — HTTP session objects, threads, socket connections — and are tied directly to the business, so they live long. Temporary variables produced along the way — String objects, say — live briefly.

2 What are the generations?
The JVM defines three: the Young Generation, the Old Generation and the Permanent Generation.
The permanent generation mainly holds Java class metadata and has little to do with the objects GC collects; the young and old generations affect collection the most.

Young generation:
1 All newly created objects are placed in the young generation first; its goal is to collect short-lived objects as quickly as possible. It is divided into three spaces: one Eden space and two Survivor spaces.
Most objects are born in Eden. When Eden fills up, the surviving objects are copied to Survivor 1; when Survivor 1 fills up, its live objects are copied to Survivor 2, and from that point objects surviving an Eden collection are also copied to Survivor 2, while Survivor 1 is cleared. When Survivor 2 fills up (holding both objects copied from Survivor 1 and objects from Eden), the objects that came from Survivor 1 and are still alive are copied into the old generation's Tenured space; the newer arrivals in Survivor 2 that came from Eden are copied to Survivor 1, and Survivor 2 is cleared. After that, objects surviving an Eden collection are copied to Survivor 1 — and the cycle repeats.
The two survivor spaces are symmetric, with no ordering between them, so one space may simultaneously hold objects copied from Eden and from the other survivor space; only objects arriving from the first survivor space go on to the old generation's Tenured space, because Tenured holds objects that came from a survivor space and are still alive.
One survivor space is always empty. If needed, more survivor spaces can be configured, lengthening the time objects stay in the young generation and reducing the chance of promotion to the old generation.
2 Old generation — objects that are still alive after N young-generation collections are moved here, so the old generation holds long-lived objects.
3 Permanent generation — holds static data such as classes and methods. It has no significant effect on collection, but applications that dynamically generate or load classes (Hibernate, for example) may need a larger permanent generation to hold the classes added at run time; its size is set with -XX:MaxPermSize=<N>.

What triggers a collection?
Because objects are handled by generation, the collected region and the timing differ. GC comes in two kinds: Scavenge GC and Full GC.
1 Scavenge GC — normally triggered when a new object cannot be allocated in Eden. It collects Eden only: dead objects are cleared and the live ones moved to a survivor space, then the two survivor spaces are tidied. This GC runs on the young generation's Eden and does not touch the old generation. Because most objects start in Eden and Eden is sized relatively small, Scavenge GC runs frequently; it needs a fast, efficient algorithm so that Eden frees up as quickly as possible.
2 Full GC — tidies the whole heap: young, old and permanent generations. Because it must collect everything, it is slower than Scavenge GC, so Full GCs should be minimized — a large part of JVM tuning is tuning Full GC. A Full GC is caused by:
- the old generation's Tenured space filling up
- the permanent generation filling up
- an explicit call to System.gc()
- dynamic changes in the heap's per-region allocation policy since the last GC

Choosing the right collector
1 Serial collector — single-threaded; with no multi-thread coordination it is relatively efficient, but it cannot use multiple processors, so it suits uniprocessor machines (it can also be used on multiprocessors for small data sets, under ~100 MB). Enabled with -XX:+UseSerialGC.
2 Parallel collector — collects the young generation in parallel, shortening collection time; generally used on multi-threaded, multi-processor machines. Enabled with -XX:+UseParallelGC. The parallel collector was introduced in the J2SE 5.0 update series and enhanced in Java SE 6 to collect the old generation as well; if the old generation is not collected in parallel, it defaults to single-threaded collection, which constrains scalability. Parallel old-generation collection is enabled with -XX:+UseParallelOldGC.
Settings:
- Parallel GC thread count: -XX:ParallelGCThreads=<N>; may be set equal to the machine's processor count.
- Maximum GC pause: the longest pause allowed for collection, specified with -XX:MaxGCPauseMillis=<N> (milliseconds); if set, heap size and GC-related parameters are adjusted to reach it. Setting it can reduce application throughput.
- Throughput: the ratio of GC time to non-GC time, set with -XX:GCTimeRatio=<N>, formula 1/(1+N). For example N=19 means 5% of time goes to GC; the default is 99, i.e. 1%.
3 Concurrent collector — most of its work runs concurrently (the application does not stop); collection pauses the application only very briefly. It suits medium and large applications with high response-time requirements. Enabled with -XX:+UseConcMarkSweepGC.
The concurrent collector mainly reduces old-generation pauses: it uses a separate GC thread to trace reachable objects while the application keeps running. In each old-generation collection cycle, the collector briefly pauses the whole application at the start of collection, and pauses once more during collection; the second pause is slightly longer than the first, and during it multiple threads collect simultaneously.
The concurrent collector trades processor time for short pauses: on a system with N processors, the concurrent part of collection uses K/N of the available processors, generally with 1 <= K <= N/4.
Using the concurrent collector on a single-processor host, set to incremental mode, can still yield shorter pauses.

Floating Garbage:
Because collection runs while the application runs, some garbage may be produced just as a collection pass completes — "floating garbage" that can only be reclaimed in the next collection cycle. The concurrent collector therefore generally reserves about 20% of the space for floating garbage.

Concurrent Mode Failure:
The concurrent collector collects while the application runs, so the heap must keep enough space for the program to use during collection; otherwise the heap fills up before collection finishes. In that case a concurrent mode failure occurs: the whole application pauses while a collection is carried out.
To guarantee the concurrent collector enough memory, -XX:CMSInitiatingOccupancyFraction=<N> specifies how much of the heap may remain before concurrent collection starts.

Summary:
1 Serial collector — for applications with small data sets (~100 MB), or uniprocessors with no response-time requirement. Weakness: only usable for small applications.
2 Parallel collector — for medium and large applications that need high throughput, have multiple CPUs and no response-time requirement: background processing, scientific computing. Weakness: application response time may lengthen during collection.
3 Concurrent collector — for medium and large applications with high response-time requirements and multiple CPUs: web servers, application servers, telecom switching, IDEs.

(continues at p. 26 …)

cooperzh 2011-12-27 17:36

《构建高性能的大型分布式Java应用》 notes — Chapter 1: IO
http://www.tkk7.com/cooperzh/archive/2011/12/27/367323.html  cooperzh  Tue, 27 Dec 2011 03:50:00 GMT

BIO (blocking IO), NIO (non-blocking IO), AIO (asynchronous IO)

JDK 1.6 and earlier implement only BIO and NIO.
JDK 1.7 adds support for AIO, i.e. NIO 2.0.


Server side in blocking (BIO) mode:
1 new ServerSocket(int port) — listen on the port
2 serverSocket.accept() — block waiting for a client connection; returns a Socket only once a connection arrives
3 socket.getInputStream() — obtain the stream of data sent by the client
4 socket.getOutputStream() — obtain the output object, to write data back to the client

Client side:
1 new Socket(String host, int port) — establish the connection to the server; if the server is not running, a Connection refused exception is thrown
2 socket.getInputStream() — read the stream returned by the server
3 socket.getOutputStream() — obtain the output stream and write the data to send to the server


Server side in NIO mode:
1 ServerSocketChannel.open() — obtain a ServerSocketChannel instance
2 serverSocketChannel.configureBlocking(false) — put the channel in non-blocking mode
3 serverSocketChannel.socket() — obtain the ServerSocket object
4 serverSocket.bind(port) — listen on the port
5 Selector.open() — open a Selector and obtain the instance
6 serverSocketChannel.register(Selector, int) — register the channel and the events of interest with the selector
7 while(true) — loop so the server stays running under normal conditions
8 selector.select() — get the number of SelectionKeys in the selector that need handling
9 for (SelectionKey key : selector.selectedKeys()) — iterate over selector.selectedKeys() to handle each key's event
10 key.isAcceptable() — check whether the SelectionKey's type is a client connection request
11 key.channel() — when the key is acceptable, obtain the bound ServerSocketChannel object
12 serverSocketChannel.accept() — accept the client's connection request and return a SocketChannel object
13 socketChannel.register(Selector, int) — register the events of interest, such as read and write, with the Selector
14 key.isReadable() — check whether the SelectionKey is readable; if so, a message stream is waiting to be handled
15 socketChannel.read(ByteBuffer) — read the message stream from the SocketChannel bound to the SelectionKey
16 socketChannel.write(ByteBuffer) — write the message out through the SocketChannel bound to the SelectionKey

Client side:
1 SocketChannel.open() — open a SocketChannel
2 socketChannel.configureBlocking(false) — put the SocketChannel in non-blocking mode
3 socketChannel.connect(host, port) — connect to the given target address
4 Selector.open() — open a Selector
5 socketChannel.register(Selector, int) — register the events of interest with the Selector: connect, read, write
6 while(true) — loop so the client stays running
7 selector.select() — ask the Selector whether any keys are ready
8 for (SelectionKey key : selector.selectedKeys()) — iterate over all of the selector's selectedKeys
9 key.isConnectable() — check whether this is a connection-established event
10 key.channel() — obtain the bound SocketChannel
11 socketChannel.finishConnect() — complete establishing the connection (the TCP three-way handshake)
12 key.isReadable() — check whether the key is readable
13 key.channel() — obtain the bound SocketChannel
14 socketChannel.read(ByteBuffer) — read data from the SocketChannel into a ByteBuffer
15 socketChannel.write(ByteBuffer) — write the ByteBuffer's data into the SocketChannel
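The server-side steps can be compressed into a runnable sketch (one accept, one echo; step numbers refer to the server list; the client side is kept blocking for brevity):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class NioEchoServer {
    // Accepts one connection, echoes everything it received, then returns.
    static void serveOnce(ServerSocketChannel serverSocketChannel) throws IOException {
        Selector selector = Selector.open();                                   // step 5
        serverSocketChannel.register(selector, SelectionKey.OP_ACCEPT);        // step 6
        boolean done = false;
        while (!done) {                                                        // step 7
            selector.select();                                                 // step 8
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {                                             // step 9
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                                      // step 10
                    SocketChannel sc = ((ServerSocketChannel) key.channel()).accept(); // 11-12
                    sc.configureBlocking(false);
                    sc.register(selector, SelectionKey.OP_READ, ByteBuffer.allocate(1024)); // step 13
                } else if (key.isReadable()) {                                 // step 14
                    SocketChannel sc = (SocketChannel) key.channel();
                    ByteBuffer buf = (ByteBuffer) key.attachment();
                    if (sc.read(buf) == -1) {                                  // step 15; -1 = client done
                        buf.flip();
                        while (buf.hasRemaining()) sc.write(buf);              // step 16
                        sc.close();
                        done = true;
                    }
                }
            }
        }
        selector.close();
    }

    static String roundTrip(String msg) throws Exception {
        ServerSocketChannel ssc = ServerSocketChannel.open();                  // step 1
        ssc.configureBlocking(false);                                          // step 2
        ssc.bind(new InetSocketAddress(0));                                    // steps 3-4
        int port = ((InetSocketAddress) ssc.getLocalAddress()).getPort();
        Thread server = new Thread(() -> {
            try { serveOnce(ssc); } catch (IOException ignored) { }
        });
        server.start();
        try (Socket client = new Socket("127.0.0.1", port)) {                  // plain blocking client
            client.getOutputStream().write(msg.getBytes());
            client.shutdownOutput();                                           // signal end of input
            byte[] reply = client.getInputStream().readAllBytes();
            server.join();
            return new String(reply);
        } finally {
            ssc.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hi"));   // prints: hi
    }
}
```

A real server would keep the loop running and handle many channels per selector; here one connection suffices to show the accept/read/write event flow.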


cooperzh 2011-12-27 11:50
]]>
Tricks and Tips With AIO Part 1: The Frightening Thread Pool (repost)
http://www.tkk7.com/cooperzh/archive/2011/12/22/367000.html  cooperzh  Thu, 22 Dec 2011 04:01:00 GMT


Jean-Francois 

A while ago JDK 1.4 introduced the notion of non-blocking I/O. With non-blocking I/O (NIO), you get events through a selector when there is some I/O ready to be processed, like read and write operations. Before JDK 1.4, only blocking I/O was available: you were simply blocking on a stream, trying to read and write.
JDK 7 introduces asynchronous I/O (AIO). Asynchronous I/O gives you a notification when the I/O is completed. The big difference from non-blocking is that with AIO you get the notification when the I/O operation completes, whereas with non-blocking you get notified when the I/O operation is ready to be performed.
For example, with a socket channel in a non-blocking mode, you register with a selector, and the selector will give you a notification when there is data on that socket to read. With the asynchronous I/O, you actually start the read, and the I/O will complete sometime later when the read has happened and there is data in your byte buffer.
With AIO, you wait for a completed I/O operation using a completion handler (explained in detail below). You specify a completion handler when you do your read, and the completion handler is invoked to tell you that the I/O operation has completed, with the bytes that have been read. With non-blocking, you would have been notified and would then have executed the read operation yourself to read the bytes.
One of the nice things you can do with AIO is to configure yourself the thread pool the kernel will use to invoke a completion handler. A completion handler is a handler for consuming the result of an asynchronous I/O operation, like accepting a remote connection or reading/writing some bytes. So asynchronous channels (with NIO.1 we had SelectableChannel) allow a completion handler to be specified to consume the result of an asynchronous operation. The interface defines three "callbacks":
   * completed(...): invoked when the I/O operation completes successfully.
   * failed(...): invoked if the I/O operation fails (like when the remote client closes the connection).
   * cancelled(...): invoked when the I/O operation is cancelled by invoking the cancel method.
Below is an example (I will talk about it in much more detail in Part II) of how you can open a port and listen for requests:

// Open a port
final AsynchronousServerSocketChannel listener =
    AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(port));

// Accept connections
listener.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
    public void completed(AsynchronousSocketChannel channel, Void attachment) { }
    public void cancelled(Void attachment) { }
    public void failed(Throwable exc, Void attachment) { }
});

Now every time a connection is made to the port, the completed method will be invoked by a kernel thread. Do you see the difference from NIO.1? To achieve the same kind of operation with NIO.1 you would have listened for requests by doing:

selector.select(timeout);

Iterator<SelectionKey> iterator = selector.selectedKeys().iterator();
while (iterator.hasNext()) {
    SelectionKey key = iterator.next();
    if (key.isAcceptable()) {
        // Do something that doesn't block,
        // because if it blocks, no more connections can be
        // accepted as the selector.select(..)
        // cannot be executed
    }
}

With AIO, the kernel is spawning the thread for us. Where is this thread coming from? That is the topic of this Tricks and Tips with AIO.

By default, applications that do not create their own asynchronous channel group will use the default group, which has an associated thread pool that is created automatically. What? The kernel will create a thread pool and manage it for me? That might be well suited for a simple application, but for complex applications like the Grizzly Framework, relying on an 'external' thread pool is unthinkable, as most of the time the application embedding Grizzly will configure its own thread pool and pass it to Grizzly. Another reason is that Grizzly has its own WorkerThread implementation that contains information about transactions (like ByteBuffer, attributes, etc.). At the very least, the monster needs to be able to set the ThreadFactory!

Note that I'm not saying using the kernel's thread pool is wrong, but for Grizzly, I prefer having full control of the thread pool. So what's my solution? There are two solutions, et c'est parti:

Fixed number of Threads (FixedThreadPool)

An asynchronous channel group associated with a fixed thread pool of size N creates N threads that wait for I/O events. The kernel dispatches events directly to those threads, and a thread first completes the I/O operation (like filling a ByteBuffer during a read operation). Once that is done, the thread is re-used to directly invoke the completion handler that consumes the result. When the completion handler terminates normally, the thread returns to the thread pool and waits for the next event. If the completion handler terminates due to an uncaught error or runtime exception, the thread is allowed to terminate and a new thread is created to replace it, so no threads are lost; letting the thread terminate ensures that the thread's (or thread group's) uncaught-exception handler is executed.

So far so good? ... NOT. The first issue you must be aware of when using a fixed thread pool is that if all threads "dead lock" inside a completion handler, your entire application can hang until one thread becomes free to execute again. Hence it is critically important that the completion handler's methods complete in a timely manner, so as to avoid keeping the invoking thread from dispatching to other completion handlers. If all completion handlers are blocked, any new event will be queued until one thread is 'delivered' from the lock. That can cause a really bad situation, can't it? As an example, using a Future when waiting for a read operation to complete can lock your entire application:

Future<Integer> result = ((AsynchronousSocketChannel) channel).read(byteBuffer);

try {
    int count = result.get(30, TimeUnit.SECONDS);
} catch (Throwable ex) {
    throw new EOFException(ex.getMessage());
}

Like for OP_WRITE, I'm pretty sure nobody will ever code something like that, right? Well, some applications need to block until all the bytes have arrived (a Servlet container is a good example), and if you don't pay attention, your server might hang. Not convinced? Another example could be:

channel.write(bb, 30, TimeUnit.SECONDS, db_pool, new CompletionHandler<Integer, DataBasePool>() {

    public void completed(Integer bytesWritten, DataBasePool attachment) {
        // Wait for a JDBC connection, blocking.
        MyDBConnection db_con = attachment.get();
    }

    public void failed(Throwable exc, DataBasePool attachment) {
    }

    public void cancelled(DataBasePool attachment) {
    }
});

Again, all threads may deadlock waiting for a database connection, and your application might stop working, as the kernel has no thread available to dispatch and complete I/O operations.

Grrr, so what's our solution? The first solution consists of carefully avoiding blocking operations inside a completion handler, meaning any thread executing a kernel event must never block on anything. I suspect this is simple to achieve if you write an application from zero and want it to be fully asynchronous. Still, be careful and make sure you create enough threads. How do you do that? Here is an example from Grizzly:

ThreadPoolExecutorServicePipeline executor = new ThreadPoolExecutorServicePipeline(
    corePoolThreads, maxThreads, 8192, 30, TimeUnit.SECONDS);
AsynchronousChannelGroup asyncChannelGroup =
    AsynchronousChannelGroup.withFixedThreadPool(executor, maxThreads);

The second solution is to use a cached thread pool

Cached Thread Pool Configuration

An asynchronous channel group associated with a cached thread pool submits events to the thread pool, which simply invokes the user's completion handler. Internal kernel I/O operations are handled by one or more internal threads that are not visible to the user application. Yup! That means you have one hidden thread pool (not configurable via the official API, but via a system property) that dispatches events to a cached thread pool, which in turn invokes the completion handler (Wait! You just won a prize: a thread context switch for free ;-). Since this is a cached thread pool, the probability of suffering the hang problem described above is lower. I'm not saying it cannot happen, as you can always create a cached thread pool that cannot grow infinitely (those infinite thread pools should never have existed anyway!). But at least with a cached thread pool you are guaranteed that the kernel will be able to complete its I/O operations (like reading bytes). Just the invocation of the completion handler might be delayed when all the threads are blocked. Note that a cached thread pool must support unbounded queuing to work properly. How do you set a cached thread pool? Here is an example from Grizzly:

ThreadPoolExecutorServicePipeline executor = new ThreadPoolExecutorServicePipeline(
    corePoolThreads, maxCachedThreadPoolSize, 8192, 30, TimeUnit.SECONDS);
AsynchronousChannelGroup asyncChannelGroup =
    AsynchronousChannelGroup.withCachedThreadPool(executor, maxCachedThreadPoolSize);


What about the default that ships with the kernel?



If you do not create your own asynchronous channel group, the kernel's default group, which has an associated thread pool, will be created automatically. This thread pool is a hybrid of the above configurations: a cached thread pool that creates threads on demand, with N threads that dequeue events and dispatch directly to the application's completion handler. N defaults to the number of hardware threads but may be configured by a system property. In addition to the N threads, there is one additional internal thread that dequeues events and submits tasks to the thread pool to invoke completion handlers. This internal thread ensures that the system doesn't stall when all of the fixed threads are blocked, or otherwise busy, executing completion handlers.

Conclusion

If there is one thing to learn from this Part I, it is this: independently of which thread pool you decide to use (default, your own cached or fixed), make sure you at least limit blocking operations. This is especially true when a fixed thread pool is used, as it may hang your entire application once the kernel runs out of available threads. The situation can also occur with a cached thread pool, but at least the kernel can still execute the I/O operations.



cooperzh 2011-12-22 12:01
]]>
NIO trick and trap — NIO techniques and pitfalls
http://www.tkk7.com/cooperzh/archive/2011/12/20/366884.html  cooperzh  Tue, 20 Dec 2011 12:41:00 GMT
Source: http://www.tkk7.com/killme2008/archive/2011/06/30/353422.html

IO is divided into two phases:
1 waiting for the data to become ready
2 copying from the kernel buffer to the process buffer (from the socket, through the SocketChannel, into a ByteBuffer)

non-direct ByteBuffer: HeapByteBuffer, cheap to create
direct ByteBuffer: created through the operating system's native code, expensive to create

Block-based transfer is usually more efficient than stream-based transfer.

NIO makes network programming easier, but its discrete, event-driven programming model is hard to program against — and full of traps.

The Reactor pattern — the classic NIO network framework. Core components:
1 Synchronous Event Demultiplexer: event loop + event demultiplexing
2 Dispatcher: event dispatching, possibly multi-threaded
3 Request Handler: event handling, the business code


An ideal NIO framework:
1 cleanly separates IO code from business code
2 is easy to extend
3 is easy to configure, for both framework and protocol parameters
4 provides a good codec framework, making marshalling/unmarshalling convenient
5 is transparent, with good built-in logging and statistics
6 performs well

Key factors in NIO framework performance:
1 data copying
2 context switches
3 memory management
4 TCP options and advanced IO functions
5 framework design

Reducing data copies:
the choice of ByteBuffer
view ByteBuffers
FileChannel.transferTo/transferFrom
FileChannel.map/MappedByteBuffer

Choosing a ByteBuffer:
if you don't know which buffer to use, use non-direct
if the buffer takes no part in IO, use non-direct
for small and mid-sized applications (<1K concurrent connections), use non-direct
for long-lived, larger buffers, use direct
if benchmarks prove direct is faster than non-direct, use direct
for sharing data across processes (JNI), use direct
to send one buffer to many clients, consider sharing the data through a view ByteBuffer: buffer.slice()


HeapByteBuffer caching

Creating a view ByteBuffer with ByteBuffer.slice():
ByteBuffer buffer2 = buffer1.slice();
buffer2 then fully shares buffer1's content between buffer1's position and limit,
but buffer2's position and limit are independent of buffer1's.
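A small sketch of that sharing — a write through the slice view is visible in the parent buffer, while positions stay independent:

```java
import java.nio.ByteBuffer;

public class SliceView {
    // Writes through a slice view and observes the change in the parent buffer.
    static byte writeThroughView() {
        ByteBuffer buffer1 = ByteBuffer.allocate(8);
        buffer1.put(new byte[] {10, 20, 30, 40});
        buffer1.position(1).limit(4);          // the view will cover bytes 1..3

        ByteBuffer buffer2 = buffer1.slice();  // shares content; independent position/limit
        buffer2.put(0, (byte) 99);             // modify through the view
        return buffer1.get(1);                 // the parent sees the change
    }

    public static void main(String[] args) {
        System.out.println(writeThroughView());   // prints: 99
    }
}
```

This is what makes slice() useful for sending one buffer's data to several clients: each client gets its own view with its own cursor over the same bytes.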

The traditional way to transfer a file:
byte[] buf = new byte[8192];
int len;
while ((len = in.read(buf)) > 0) {
    out.write(buf, 0, len);
}
With NIO:
FileChannel in = ...
WritableByteChannel out = ...
in.transferTo(0, fsize, out);
Performance improves by about 60%.

FileChannel.map
maps a file into a memory region — a MappedByteBuffer
provides fast random file access
is platform-dependent
suits large files and read-only operations, such as MD5-checksumming a large file
has no unmap method; when the mapping is reclaimed depends on GC

Reducing context switches:
time caching
Selector.wakeup
more efficient IO reads and writes
the threading model

Time caching:
1 Network servers need the system time frequently: timers, protocol timestamps, cache expiry, etc.
2 System.currentTimeMillis
   a on Linux it calls gettimeofday, which has to switch into the kernel
   b on an ordinary machine, 10 million calls take about 12 seconds — roughly 1.2 µs each
   c most applications do not need especially high precision
3 SystemTimer.currentTimeMillis (roll your own)
   a a dedicated thread updates the cached time periodically
   b currentTimeMillis simply returns the cached value
   c precision depends on the update interval
   d 10 million calls drop to 59 milliseconds
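A minimal sketch of such a cached clock. SystemTimer is the notes' name for a class you write yourself; the CachedClock name and the 10 ms interval here are illustrative:

```java
public class CachedClock {
    // Cached time, refreshed by a daemon thread instead of a syscall per read.
    private static volatile long now = System.currentTimeMillis();

    static {
        Thread updater = new Thread(() -> {
            while (true) {
                now = System.currentTimeMillis();
                try { Thread.sleep(10); }            // precision = update interval
                catch (InterruptedException e) { return; }
            }
        });
        updater.setDaemon(true);                     // don't keep the JVM alive
        updater.start();
    }

    public static long currentTimeMillis() {
        return now;    // just a volatile read — no kernel crossing
    }

    public static void main(String[] args) {
        System.out.println(CachedClock.currentTimeMillis() > 0);   // prints: true
    }
}
```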


The main effects of Selector.wakeup():
a thread blocked in Selector.select() is released and returns immediately
multiple wakeup calls between two successful select() calls are equivalent to a single call
if no thread is currently blocked in select(), the wakeup takes effect on the next select()
When to call wakeup():
a new channel or a new event of interest has been registered
a channel is closed or its registration cancelled
a higher-priority event (such as a timer event) fires and needs timely handling

How wakeup works:
1 on Linux, a pipe is created with the pipe call
2 on Windows it is a loopback TCP connection, because Win32 pipes cannot be added to select's fd set
3 the pipe or TCP connection is added to the selected fd set
4 wakeup writes one byte into the pipe or connection
5 the blocked select() returns immediately because IO is now ready
So the cost of calling wakeup is not negligible.

Reducing wakeup calls:
1 Call it only when needed. For example, data to send is usually cached in a message queue; register write interest and call wakeup only when the queue was empty:
boolean needsWakeup = false;
synchronized (queue) {
    if (queue.isEmpty()) needsWakeup = true;
    queue.add(session);
}
if (needsWakeup) {
    registerOPWrite();
    selector.wakeup();
}
2 Record the call state to avoid duplicate calls — see Netty's optimization.

Reading or writing 0 bytes:
does not mean the connection is closed,
and is common under high load or on slow networks.
The usual handling is to return, re-register read/write interest and wait for the next round; the drawback is system-call and thread-switch overhead.
Other options: loop the write a fixed number of times (as Mina does), or yield a fixed number of times,
or register with a temporary Selector on the current thread and poll there (as Grizzly does).

Writing on the current thread:
When the send buffer queue is empty, data can be written to the channel directly — instead of being queued, write interest registered, and the IO thread left to do the write — which improves send efficiency.
The advantage: fewer system calls and thread switches.
The drawback: interrupting the current thread closes the channel.

Threading model
The selector's three main events — read, write, accept — can each run on a different thread.
A Reactor is usually implemented as one thread maintaining one selector internally.
1 Boss thread + worker threads:
   the boss thread handles accept and connect
   worker threads handle read and write
Reactor thread counts:
1 Netty: 1 + 2 * CPUs
2 Mina: 1 + CPUs + 1
3 Grizzly: 1 + 1

Common threading models:
1 read and accept both run on the reactor thread
2 accept runs on the reactor thread, read on a separate thread
3 read and accept both run on separate threads
4 read runs on the reactor thread, accept on a separate thread
Choosing the right model:
for echo-like applications, where unmarshalling and business processing are very cheap, choose model 1;
models 2, 3 and 4 all make accept handling cheap;
the best choice is model 2 — unmarshalling is generally CPU-bound, while business logic is generally time-consuming, so keep it off the reactor thread.

Memory management
1 There is very little Java lets you do here.
2 Buffer management:
   a pooling: ThreadLocal caches, ring buffers
   b growth: high-level APIs such as putString/getString; buffers that grow and shrink automatically to handle variable-length data
   c byte order: needs attention in cross-language communication; the default byte order is big-endian, as used by Java's IO library and class files

Choosing data structures
1 Use simple structures: linked lists, queues, arrays, hash tables.
2 Use the concurrent collections introduced by j.u.c, lock-free structures, spin locks.
3 Every structure needs a capacity limit — beware OutOfMemoryError.
4 Choose initial capacities sensibly to reduce the impact of GC.

Timer implementation
1 Timers are used heavily in network programs:
   a triggering periodic events
   b notifying and removing asynchronous timeouts
   c triggering delayed events
2 Three relevant time complexities:
   a inserting a timer
   b deleting a timer
   c PerTickBookkeeping — the work the system must do within one tick
3 Ways to drive ticks:
   Selector.select(timeout)
   Thread.sleep(timeout)

Timer implementation: linked list
Timers are organized into a linked list.
Insert: append the timer to the tail of the list.
Delete a timer.
PerTickBookkeeping: traverse the list looking for expired events.

Timer implementation: sorted linked list
Timers are organized into a linked list sorted by expiry time, ascending.
Insert: find the proper position and insert there.
Delete a timer.
PerTickBookkeeping: only look from the head of the list.

Timer implementation: priority queue
Timers are organized into a priority queue keyed by expiry time, usually implemented as a min-heap.
Insert a timer.
Delete a timer.
PerTickBookkeeping: only inspect the root.

Timer implementation: hash wheel timer
Timers are organized into a time wheel.
A pointer rotates with a fixed period, advancing one slot per tick.
A timer is inserted into the slot determined by its delay and the current pointer position.
Insert a timer.
Delete a timer.
PerTickBookkeeping.
The number of slots and the tick length determine precision and latency.
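A toy single-level wheel illustrating the slot arithmetic — not production code, and the tick is advanced manually so the behavior is deterministic:

```java
import java.util.ArrayList;
import java.util.List;

public class ToyHashWheel {
    private final List<List<Runnable>> slots = new ArrayList<>();
    private final int wheelSize;
    private int cursor = 0;   // current slot; advances one step per tick

    ToyHashWheel(int wheelSize) {
        this.wheelSize = wheelSize;
        for (int i = 0; i < wheelSize; i++) slots.add(new ArrayList<>());
    }

    // Insert: O(1) — the slot is derived from the delay (in ticks) and the cursor.
    void schedule(int delayTicks, Runnable task) {
        slots.get((cursor + delayTicks) % wheelSize).add(task);
    }

    // PerTickBookkeeping: advance the cursor and fire everything in that slot.
    void tick() {
        cursor = (cursor + 1) % wheelSize;
        List<Runnable> due = slots.get(cursor);
        for (Runnable r : due) r.run();
        due.clear();
    }

    static String demo() {
        ToyHashWheel wheel = new ToyHashWheel(8);
        StringBuilder log = new StringBuilder();
        wheel.schedule(2, () -> log.append("late"));
        wheel.schedule(1, () -> log.append("early,"));
        wheel.tick();   // cursor -> 1, fires "early,"
        wheel.tick();   // cursor -> 2, fires "late"
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());   // prints: early,late
    }
}
```

A real wheel (such as Netty's HashedWheelTimer) also stores a remaining-rounds count per task, so delays longer than one revolution wrap correctly.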

Timer implementation: hierarchical timing wheels
an hours wheel, a minutes wheel, a seconds wheel

Detecting IDLE connections
1 A connection is IDLE when no IO read or write event has occurred for some time.
2 Implementation:
   a on every IO read or write, record the timestamp
   b scan all connections periodically; when the gap between now and the last read or write exceeds the configured threshold, the connection is IDLE — notify the business handler
   c drive the scan with select(timeout) or with a timer: Mina uses select(timeout); Netty uses a HashedWheelTimer

Setting TCP/IP options sensibly can sometimes have a marked effect, but it must be weighed against the application type, protocol design, network environment, OS platform and other factors — trust measured results.

Socket buffer options: SO_RCVBUF and SO_SNDBUF
Socket.setReceiveBufferSize/setSendBufferSize are only hints to the underlying platform; whether they take effect depends on the platform, so get does not return the true value.
Guidelines:
1 On Ethernet, 4k is usually not enough; raising it to 16k increases throughput by about 40%.
2 Socket buffers should be at least three times the connection's MSS. MSS = MTU - 40; a typical Ethernet NIC's MTU is 1500 bytes.
   MSS: maximum segment size
   MTU: maximum transmission unit
3 The send buffer is best sized to match the peer's receive buffer.
4 For applications that send bulk data in one shot, raising the buffer to 48k or 64k may be the single most effective way to improve performance. To maximize performance, the send buffer should be at least as large as the BDP (bandwidth-delay product).
5 Likewise, for applications that receive bulk data, a larger receive buffer reduces blocking on the sending side.
6 If the application both sends and receives bulk data, increase the send buffer and receive buffer together.
7 If you set a ServerSocket's receive buffer above the 64k defined by RFC 1323, you must set it before binding the port; sockets produced by accept afterwards inherit the setting.
8 Whatever the buffer size, you should help TCP by writing in chunks at least that large.
BDP (bandwidth-delay product)
To maximize TCP throughput, the sender should keep enough packets in flight to fill the logical pipe between sender and receiver.
BDP = bandwidth * RTT
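A quick worked example of the formula (the numbers are illustrative): a 100 Mbit/s link with a 50 ms round-trip time holds 100e6 / 8 bytes/s × 0.05 s = 625,000 bytes in flight, so a send buffer much smaller than ~625 KB cannot keep that pipe full.

```java
public class Bdp {
    // BDP in bytes = (bandwidth in bits/s / 8) * RTT in seconds.
    static long bdpBytes(long bitsPerSecond, double rttSeconds) {
        return (long) (bitsPerSecond / 8.0 * rttSeconds);
    }

    public static void main(String[] args) {
        System.out.println(bdpBytes(100_000_000L, 0.050));   // prints: 625000
    }
}
```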

Nagle's algorithm: TCP_NODELAY
It coalesces the small packets in the buffer into larger ones, preventing floods of small packets from congesting the network and improving network efficiency. Applications with tight real-time requirements (telnet, online games) need it switched off.
Socket.setTcpNoDelay(true) — disables the algorithm
Socket.setTcpNoDelay(false) — enables the algorithm (the default)
SO_LINGER: controls socket behavior after close
Socket.setSoLinger(boolean linger, int timeout)
linger=false, timeout=-1:
When the socket is closed, the calling thread returns immediately without blocking and the socket enters the CLOSING state; data remaining in the buffer keeps being sent to the peer, the FIN-ACK exchange is performed with the peer, and the socket finally enters TIME_WAIT.
linger=true, timeout>0:
The thread calling close blocks, and one of two things happens: either the remaining data is sent and the close handshake performed, or the timeout expires, the remaining data is discarded, and the FIN-ACK exchange is performed.
linger=true, timeout=0:
Performs a "hard close": any remaining data is discarded and no FIN-ACK exchange occurs; an RST is sent instead, making the peer throw a "connection reset" SocketException.
Use this option with care — the TIME_WAIT state has value:
it makes reliable TCP connection termination possible, and
it lets old duplicate segments drain from the network, preventing them from being delivered to a new connection.
Its duration is 2 * MSL, where MSL (maximum segment lifetime) is generally 30 seconds to 2 minutes.

SO_REUSEADDR: address/port reuse
Socket.setReuseAddress(boolean), default false.
When to use it:
1 When socket1, bound to a local address and port, sits in TIME_WAIT and the socket2 you are starting must occupy that same address and port, you need this option.
2 SO_REUSEADDR allows multiple instances (processes) of a service to start on the same port, but each instance must bind a different address.
3 SO_REUSEADDR allows a completely identical address and port to be bound twice — but only for UDP multicast, not for TCP.

SO_REUSEPORT
listen keyed on the full four-tuple: multiple processes accept on the same address and port; suits web servers handling large numbers of short connections.
FreeBSD-specific.

Other options:
Socket.setPerformancePreferences(connectionTime, latency, bandwidth) — sets the relative importance of connection time, latency and bandwidth
Socket.setKeepAlive(boolean) — this is TCP-level keep-alive, not the HTTP-protocol concept; it keeps the TCP connection alive, probing at a default interval of 2 hours — application-level heartbeats are recommended instead
Socket.sendUrgentData(data) — out-of-band data


Tricks:
1 Read/write fairness:
   Mina limits the bytes of a single write to at most 1.5 times the maximum read buffer size.
2 Working around a FileChannel.transferTo bug:
   Mina inspects the exception: if it is a "temporarily unavailable" IOException, the transferred byte count is treated as 0.
3 Sending a message usually means putting it into a buffer queue, registering write interest and waiting for the IO thread to write:
   a thread switch plus a system call.
   If the queue is empty, write with channel.write directly on the current thread; the hidden risk is that interrupting the current thread closes the connection.
4 Event-handling priority:
   the ACE framework recommends: accept > write > read (recommended)
   Mina and Netty: read > write
5 When to process event registrations:
   before select()
   after select(), handling the race condition with wakeup

Java socket implementations differ across platforms.
Because each OS implements sockets somewhat differently, the platform affects the Java socket implementation; weigh both performance and robustness.
cooperzh 2011-12-20 20:41
]]>