    paulwong

    Notes on a Million-Concurrent-Connection Server: What Happens When Java Netty Handles 1M Connections

    Preface

    Every language tends to behave differently when pushed to its limits, so I was curious, and a little hopeful, about how my everyday language, Java, would fare at one million concurrent connections.
    This time I used the familiar and convenient Netty NIO framework (netty-3.6.5.Final). It is well encapsulated and has a comprehensive API; as its domain name netty.io suggests, it is focused on network IO.
    Nothing in this exercise is technically deep, and a shallow walkthrough can get a little dry, so brace yourself.

    Test server configuration

    The server runs in VMware Workstation 9 on 64-bit CentOS 6.2, with roughly 14.9GB of RAM and 4 cores.
    Java 7 is installed:

    java version "1.7.0_21"
    Java(TM) SE Runtime Environment (build 1.7.0_21-b11)
    Java HotSpot(TM) 64-Bit Server VM (build 23.21-b01, mixed mode)

    Add the following to /etc/sysctl.conf:

    fs.file-max = 1048576
    net.ipv4.ip_local_port_range = 1024 65535
    net.ipv4.tcp_mem = 786432 2097152 3145728
    net.ipv4.tcp_rmem = 4096 4096 16777216
    net.ipv4.tcp_wmem = 4096 4096 16777216
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_tw_recycle = 1
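
    The new values can be applied without a reboot by running sysctl -p.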

    Add the following to /etc/security/limits.conf:

    *    soft nofile 1048576
    *    hard nofile 1048576
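
    The nofile limits take effect on the next login session; ulimit -n should then report 1048576.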

    Test client

    The test client is unchanged from before, in both configuration and program; the previous posts in this series contain the client5.c source and the related configuration details.
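
    The load generator actually used is that C program. Purely as an illustration, a rough Netty 3 equivalent might look like the sketch below, written against the same netty-3.6.5.Final API; the class name ChunkedClient and its argument handling are invented for this example, and it was not part of the original test.

    package com.test.client;

    import java.net.InetSocketAddress;
    import java.util.concurrent.Executors;

    import org.jboss.netty.bootstrap.ClientBootstrap;
    import org.jboss.netty.channel.ChannelFuture;
    import org.jboss.netty.channel.ChannelFutureListener;
    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.channel.ChannelPipelineFactory;
    import org.jboss.netty.channel.Channels;
    import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
    import org.jboss.netty.handler.codec.http.DefaultHttpRequest;
    import org.jboss.netty.handler.codec.http.HttpClientCodec;
    import org.jboss.netty.handler.codec.http.HttpHeaders;
    import org.jboss.netty.handler.codec.http.HttpMethod;
    import org.jboss.netty.handler.codec.http.HttpRequest;
    import org.jboss.netty.handler.codec.http.HttpVersion;

    public class ChunkedClient {
        public static void main(String[] args) {
            final String host = args.length > 0 ? args[0] : "127.0.0.1";
            final int port = args.length > 1 ? Integer.parseInt(args[1]) : 8000;
            int connections = args.length > 2 ? Integer.parseInt(args[2]) : 10000;

            ClientBootstrap bootstrap = new ClientBootstrap(
                    new NioClientSocketChannelFactory(
                            Executors.newCachedThreadPool(),
                            Executors.newCachedThreadPool()));

            bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
                public ChannelPipeline getPipeline() throws Exception {
                    ChannelPipeline pipeline = Channels.pipeline();
                    pipeline.addLast("codec", new HttpClientCodec());
                    // No body handler on purpose: the point is only to hold
                    // the connection open while the server streams chunks.
                    return pipeline;
                }
            });

            for (int i = 0; i < connections; i++) {
                bootstrap.connect(new InetSocketAddress(host, port))
                        .addListener(new ChannelFutureListener() {
                            public void operationComplete(ChannelFuture future) {
                                if (!future.isSuccess()) {
                                    return; // out of ports, fds, memory, ...
                                }
                                // Send one GET, then just keep the channel open.
                                HttpRequest request = new DefaultHttpRequest(
                                        HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
                                request.setHeader(HttpHeaders.Names.HOST, host);
                                future.getChannel().write(request);
                            }
                        });
            }
            // The non-daemon worker threads keep the JVM alive, so the
            // established connections persist after main() returns.
        }
    }

    Keep in mind that one client IP can open only on the order of 64K connections to a single server address (see the ip_local_port_range setting above), so reaching one million connections requires many client processes spread across multiple source IPs.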

    Server program

    This one is again very simple, with no business logic: the client sends an HTTP request, and the server streams back chunked-encoded content.

    The entry point, HttpChunkedServer.java:

    package com.test.server;

    import static org.jboss.netty.channel.Channels.pipeline;

    import java.net.InetSocketAddress;
    import java.util.concurrent.Executors;

    import org.jboss.netty.bootstrap.ServerBootstrap;
    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.channel.ChannelPipelineFactory;
    import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;
    import org.jboss.netty.handler.codec.http.HttpChunkAggregator;
    import org.jboss.netty.handler.codec.http.HttpRequestDecoder;
    import org.jboss.netty.handler.codec.http.HttpResponseEncoder;
    import org.jboss.netty.handler.stream.ChunkedWriteHandler;

    public class HttpChunkedServer {
        private final int port;

        public HttpChunkedServer(int port) {
            this.port = port;
        }

        public void run() {
            // Configure the server.
            ServerBootstrap bootstrap = new ServerBootstrap(
                    new NioServerSocketChannelFactory(
                            Executors.newCachedThreadPool(),
                            Executors.newCachedThreadPool()));

            // Set up the event pipeline factory.
            bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
                public ChannelPipeline getPipeline() throws Exception {
                    ChannelPipeline pipeline = pipeline();

                    pipeline.addLast("decoder", new HttpRequestDecoder());
                    pipeline.addLast("aggregator", new HttpChunkAggregator(65536));
                    pipeline.addLast("encoder", new HttpResponseEncoder());
                    pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());

                    pipeline.addLast("handler", new HttpChunkedServerHandler());
                    return pipeline;
                }
            });

            bootstrap.setOption("child.reuseAddress", true);
            bootstrap.setOption("child.tcpNoDelay", true);
            bootstrap.setOption("child.keepAlive", true);

            // Bind and start to accept incoming connections.
            bootstrap.bind(new InetSocketAddress(port));
        }

        public static void main(String[] args) {
            int port;
            if (args.length > 0) {
                port = Integer.parseInt(args[0]);
            } else {
                port = 8080;
            }

            System.out.format("server start with port %d \n", port);
            new HttpChunkedServer(port).run();
        }
    }
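
    Note the two Executors.newCachedThreadPool() arguments to NioServerSocketChannelFactory: in Netty 3 the first pool supplies the boss threads that accept connections, while the second supplies the worker threads that perform IO on the accepted channels. The pipeline factory is invoked once per new connection, so every channel gets its own pipeline instance.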

    The only custom handler, HttpChunkedServerHandler.java:

    package com.test.server;

    import static org.jboss.netty.handler.codec.http.HttpHeaders.Names.CONTENT_TYPE;
    import static org.jboss.netty.handler.codec.http.HttpMethod.GET;
    import static org.jboss.netty.handler.codec.http.HttpResponseStatus.BAD_REQUEST;
    import static org.jboss.netty.handler.codec.http.HttpResponseStatus.METHOD_NOT_ALLOWED;
    import static org.jboss.netty.handler.codec.http.HttpResponseStatus.OK;
    import static org.jboss.netty.handler.codec.http.HttpVersion.HTTP_1_1;

    import java.util.concurrent.atomic.AtomicInteger;

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.buffer.ChannelBuffers;
    import org.jboss.netty.channel.Channel;
    import org.jboss.netty.channel.ChannelFutureListener;
    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.channel.ChannelStateEvent;
    import org.jboss.netty.channel.ExceptionEvent;
    import org.jboss.netty.channel.MessageEvent;
    import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
    import org.jboss.netty.handler.codec.frame.TooLongFrameException;
    import org.jboss.netty.handler.codec.http.DefaultHttpChunk;
    import org.jboss.netty.handler.codec.http.DefaultHttpResponse;
    import org.jboss.netty.handler.codec.http.HttpChunk;
    import org.jboss.netty.handler.codec.http.HttpHeaders;
    import org.jboss.netty.handler.codec.http.HttpRequest;
    import org.jboss.netty.handler.codec.http.HttpResponse;
    import org.jboss.netty.handler.codec.http.HttpResponseStatus;
    import org.jboss.netty.util.CharsetUtil;

    public class HttpChunkedServerHandler extends SimpleChannelUpstreamHandler {
        private static final AtomicInteger count = new AtomicInteger(0);

        private void increment() {
            System.out.format("online user %d\n", count.incrementAndGet());
        }

        private void decrement() {
            if (count.get() <= 0) {
                System.out.format("~online user %d\n", 0);
            } else {
                System.out.format("~online user %d\n", count.decrementAndGet());
            }
        }

        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
                throws Exception {
            HttpRequest request = (HttpRequest) e.getMessage();
            if (request.getMethod() != GET) {
                sendError(ctx, METHOD_NOT_ALLOWED);
                return;
            }

            sendPrepare(ctx);
            increment();
        }

        @Override
        public void channelDisconnected(ChannelHandlerContext ctx,
                ChannelStateEvent e) throws Exception {
            decrement();
            super.channelDisconnected(ctx, e);
        }

        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e)
                throws Exception {
            Throwable cause = e.getCause();
            if (cause instanceof TooLongFrameException) {
                sendError(ctx, BAD_REQUEST);
                return;
            }
        }

        private static void sendError(ChannelHandlerContext ctx,
                HttpResponseStatus status) {
            HttpResponse response = new DefaultHttpResponse(HTTP_1_1, status);
            response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8");
            response.setContent(ChannelBuffers.copiedBuffer(
                    "Failure: " + status.toString() + "\r\n", CharsetUtil.UTF_8));

            // Close the connection as soon as the error message is sent.
            ctx.getChannel().write(response)
                    .addListener(ChannelFutureListener.CLOSE);
        }

        private void sendPrepare(ChannelHandlerContext ctx) {
            HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
            response.setChunked(true);
            response.setHeader(HttpHeaders.Names.CONTENT_TYPE,
                    "text/html; charset=UTF-8");
            response.addHeader(HttpHeaders.Names.CONNECTION,
                    HttpHeaders.Values.KEEP_ALIVE);
            response.setHeader(HttpHeaders.Names.TRANSFER_ENCODING,
                    HttpHeaders.Values.CHUNKED);

            Channel chan = ctx.getChannel();
            chan.write(response);

            // The first chunk must be padded to 256 bytes, or the browser
            // will not start rendering the streamed content ...
            StringBuilder builder = new StringBuilder();
            builder.append("<html><body><script>var _ = function (msg) { parent.s._(msg, document); };</script>");
            int leftChars = 256 - builder.length();
            for (int i = 0; i < leftChars; i++) {
                builder.append(" ");
            }

            writeStringChunk(chan, builder.toString());
        }

        private void writeStringChunk(Channel channel, String data) {
            ChannelBuffer chunkContent = ChannelBuffers.dynamicBuffer(channel
                    .getConfig().getBufferFactory());
            chunkContent.writeBytes(data.getBytes());
            HttpChunk chunk = new DefaultHttpChunk(chunkContent);

            channel.write(chunk);
        }
    }
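
    One detail worth pointing out: decrement() performs the <= 0 check and the decrement as two separate atomic operations, so two channels disconnecting at the same moment can race between them. A hypothetical race-free variant, shown here only as a sketch and not part of the original handler, retries with compareAndSet so that the check and the decrement happen together:

    private void decrement() {
        int current;
        do {
            current = count.get();
            if (current <= 0) {
                System.out.format("~online user %d\n", 0);
                return;
            }
            // Retry if another thread changed the counter since get().
        } while (!count.compareAndSet(current, current - 1));
        System.out.format("~online user %d\n", current - 1);
    }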

    Startup script start.sh:

    export CLASSPATH=.
    nohup java -server -Xmx6G -Xms6G -Xmn600M \
        -XX:PermSize=50M -XX:MaxPermSize=50M -Xss256K \
        -XX:+DisableExplicitGC -XX:SurvivorRatio=1 \
        -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled \
        -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 \
        -XX:+CMSClassUnloadingEnabled -XX:LargePageSizeInBytes=128M \
        -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly \
        -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 \
        -XX:+PrintClassHistogram -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
        -XX:+PrintHeapAtGC -Xloggc:gc.log \
        -Djava.ext.dirs=lib \
        com.test.server.HttpChunkedServer 8000 >server.out 2>&1 &
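
    A quick sanity check of the sizing flags: -Xms6G/-Xmx6G pin the heap at 6GB, -Xmn600M carves out a 600MB young generation, and -XX:SurvivorRatio=1 makes Eden and each survivor space equal in size, i.e. 600MB / 3 = 200MB apiece, leaving 6144 - 600 = 5544MB for the CMS old generation. Those numbers reappear verbatim in the jmap -heap output further down.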

    Some numbers at one million concurrent connections

    Each time the server reached one million concurrent persistent connections, I shut down the test clients to break all connections, waited for the server log to report zero online users, and then repeated the whole cycle, watching memory and related metrics throughout. Taking one round after all test clients had disconnected as an example, the system usage (call it list_free_1) was:

                   total       used       free     shared    buffers     cached
      Mem:         15189       7736       7453          0         18        120
      -/+ buffers/cache:       7597       7592
      Swap:         4095        948       3147

    Process information as seen via top:

        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
       4925 root      20   0 8206m 4.3g 2776 S  0.3 28.8  50:18.66 java

    In the startup script start.sh we set the heap to 6GB.

    ps aux|grep java reports:

      root      4925 38.0 28.8 8403444 4484764 ?     Sl   15:26  50:18 java -server...HttpChunkedServer 8000 

    The RSS is 4484764K / 1024 = 4379M.

    Then I started the test clients again. When the server reported online user 1023749, ps aux|grep java showed:

      root      4925 43.6 28.4 8403444 4422824 ?     Sl   15:26  62:53 java -server... 

    Current network statistics:

      ss -s
      Total: 1024050 (kernel 1024084)
      TCP:   1023769 (estab 1023754, closed 2, orphaned 0, synrecv 0, timewait 0/0), ports 12

      Transport Total     IP        IPv6
      *         1024084   -         -
      RAW       0         0         0
      UDP       7         6         1
      TCP       1023767   12        1023755
      INET      1023774   18        1023756
      FRAG      0         0         0

    And a look via top:

      top -p 4925
      top - 17:51:30 up  3:02,  4 users,  load average: 1.03, 1.80, 1.19
      Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
      Cpu0  :  0.9%us,  2.6%sy,  0.0%ni, 52.9%id,  1.0%wa, 13.6%hi, 29.0%si,  0.0%st
      Cpu1  :  1.4%us,  4.5%sy,  0.0%ni, 80.1%id,  1.9%wa,  0.0%hi, 12.0%si,  0.0%st
      Cpu2  :  1.5%us,  4.4%sy,  0.0%ni, 80.5%id,  4.3%wa,  0.0%hi,  9.3%si,  0.0%st
      Cpu3  :  1.9%us,  4.4%sy,  0.0%ni, 84.4%id,  3.2%wa,  0.0%hi,  6.2%si,  0.0%st
      Mem:  15554336k total, 15268728k used,   285608k free,     3904k buffers
      Swap:  4194296k total,  1082592k used,  3111704k free,    37968k cached

        PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
       4925 root      20   0 8206m 4.2g 2220 S  3.3 28.4  62:53.66 java

    All four cores are busy, though not evenly. These results come from a virtual machine; a physical server would probably look better. Since this is not a CPU-bound application, CPU is not the bottleneck and needs little attention.

    System memory status:

      free -m
                   total       used       free     shared    buffers     cached
      Mem:         15189      14926        263          0          5         56
      -/+ buffers/cache:      14864        324
      Swap:         4095       1057       3038

    Physical memory can no longer meet the demand; 1057M of swap is in use.

    Check the heap:

      jmap -heap 4925
      Attaching to process ID 4925, please wait...
      Debugger attached successfully.
      Server compiler detected.
      JVM version is 23.21-b01

      using parallel threads in the new generation.
      using thread-local object allocation.
      Concurrent Mark-Sweep GC

      Heap Configuration:
         MinHeapFreeRatio = 40
         MaxHeapFreeRatio = 70
         MaxHeapSize      = 6442450944 (6144.0MB)
         NewSize          = 629145600 (600.0MB)
         MaxNewSize       = 629145600 (600.0MB)
         OldSize          = 5439488 (5.1875MB)
         NewRatio         = 2
         SurvivorRatio    = 1
         PermSize         = 52428800 (50.0MB)
         MaxPermSize      = 52428800 (50.0MB)
         G1HeapRegionSize = 0 (0.0MB)

      Heap Usage:
      New Generation (Eden + 1 Survivor Space):
         capacity = 419430400 (400.0MB)
         used     = 308798864 (294.49354553222656MB)
         free     = 110631536 (105.50645446777344MB)
         73.62338638305664% used
      Eden Space:
         capacity = 209715200 (200.0MB)
         used     = 103375232 (98.5863037109375MB)
         free     = 106339968 (101.4136962890625MB)
         49.29315185546875% used
      From Space:
         capacity = 209715200 (200.0MB)
         used     = 205423632 (195.90724182128906MB)
         free     = 4291568 (4.0927581787109375MB)
         97.95362091064453% used
      To Space:
         capacity = 209715200 (200.0MB)
         used     = 0 (0.0MB)
         free     = 209715200 (200.0MB)
         0.0% used
      concurrent mark-sweep generation:
         capacity = 5813305344 (5544.0MB)
         used     = 4213515472 (4018.321487426758MB)
         free     = 1599789872 (1525.6785125732422MB)
         72.48054631000646% used
      Perm Generation:
         capacity = 52428800 (50.0MB)
         used     = 5505696 (5.250640869140625MB)
         free     = 46923104 (44.749359130859375MB)
         10.50128173828125% used

      1439 interned Strings occupying 110936 bytes.

    The old generation is 72% used, which is reasonable given that the system is holding one million connections.
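
    A rough back-of-the-envelope figure from this: about 4018MB of old-generation data across the reported 1023749 connections works out to roughly 4KB of tenured heap per connection. Treat that strictly as an upper-bound estimate, since the old generation also holds garbage that has not yet been collected, and it excludes per-connection kernel socket buffers.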

    Disconnect all test clients again and check system memory (free -m):

                   total       used       free     shared    buffers     cached
      Mem:         15189       7723       7466          0         13        120
      -/+ buffers/cache:       7589       7599
      Swap:         4095        950       3145

    Record this as list_free_2.

    Comparing list_free_1 and list_free_2, both taken after all connections were released: used physical memory (-/+ buffers/cache) settled at 7589M, versus 7597M before.
    In short, the floor for our Java test program's memory footprint is roughly 7589M of physical memory plus 950M of swap, i.e. 7589 + 950 = 8539M, call it about 8.5G.

    GC log

    Whether the long string of parameters we set in the startup script actually achieved its goal has to be judged from the gc log (gc.log); GCViewer is recommended for the analysis.

    GC event overview:
    (figure: GCViewer event details, gc_eventdetails)

    Other views:
    (figures: GCViewer totals, gc_total_1 / gc_total_2 / gc_total_3)

    In short:

    • Only one Full GC occurred, but it was very expensive, pausing the JVM for 12 seconds.
    • ParNew became the dominant source of pauses, stalling the system for a cumulative 41 seconds, which is unacceptable.
    • The current JVM tuning is a mixed bag; more work is needed.
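
    One natural follow-up experiment, not tried here: the jmap output shows the From survivor space at 98% used, which suggests objects are surviving long enough to be copied between survivor spaces and promoted; a larger young generation (a bigger -Xmn, keeping -XX:SurvivorRatio) might let more of the short-lived per-connection objects die in Eden and shrink those ParNew pauses. This is a guess to be validated against the gc log, not a recommendation.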

    Summary

    Compared with Erlang or C, the awkward thing about Java is that you must decide at startup how large its heap needs to be; in other words, you set the heap size and pick a suitable garbage collector via JVM startup parameters, and if the program later needs more memory, you have to stop it, edit the startup parameters, and start it again. In a word: inconvenient. JVM tuning alone demands continuous small adjustments driven by monitoring, metrics, and logs.

    • The JVM needs its heap size specified up front, which can be a hassle compared with Erlang/C.
    • GC is relatively troublesome, requiring continuous fine-tuning of JVM parameters based on logs, heap information, and runtime behavior.
    • Set a target maximum connection count, drive the system to that peak several times, then release all connections; observing memory usage across these cycles yields a reasonably accurate figure for the memory the system needs.
    • Eclipse Memory Analyzer combined with a jmap heap dump is very convenient for hunting memory leaks (see the note after this list).
    • Modifying running code, i.e. hot reloading, is not possible by default.
    • A physical machine should give a better picture.
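
    For the Eclipse Memory Analyzer workflow mentioned above, the dump can be produced with jmap -dump:format=b,file=heap.hprof <pid> and the resulting file opened directly in MAT.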

    One gripe:
    Java OSGi, compared with Erlang, demands a shift in mindset; it is not a native part of the platform and always feels a bit awkward. The patching done around it by the community and commercial vendors merely bolts on hot-loading, an enterprise feature the object-oriented platform does not natively have.

    Test source code: download just_test.

    posted on 2015-07-13 18:26 by paulwong, filed under: NETTY
