JGroups: the UDP and TCP transports
w@ns0ng, 2011-08-25 08:18

JGroups can broadcast messages over TCP or over UDP, and the trade-offs are the usual ones: TCP is reliable but expensive, and its performance does not match UDP's; UDP is fast and cheap, but message loss and reordering are serious limitations. On top of its UDP transport, however, JGroups adds a configurable protocol stack. By configuring the upper-layer protocols you get message retransmission, fragmentation of large packets (with message order preserved), failure detection of group members, and more.

References:
http://www.javachen.com/2011/06/jgroups-introduction-and-configruation/

http://blog.csdn.net/lnfszl/article/details/5747427

http://docs.jboss.org/jbossas/jboss4guide/r4/html/jbosscache.chapt.html
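
As a minimal sketch of choosing between the two transports, a JChannel can be created from a protocol-stack definition. Here udp.xml and tcp.xml are placeholders for stack configuration files such as the ones shipped with JGroups, and the cluster names are invented:

```java
import org.jgroups.JChannel;

public class TransportChoiceDemo {
    public static void main(String[] args) throws Exception {
        // UDP stack: fast IP-multicast transport; protocols such as
        // pbcast.NAKACK layered above it add retransmission, ordering
        // and fragmentation, as described in the paragraph above.
        JChannel udp = new JChannel("udp.xml");
        udp.connect("demo-udp-cluster");

        // TCP stack: reliable point-to-point connections; more expensive,
        // but usable on networks where IP multicast is unavailable.
        JChannel tcp = new JChannel("tcp.xml");
        tcp.connect("demo-tcp-cluster");

        udp.close();
        tcp.close();
    }
}
```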


JGroups: introduction, use cases, configuration, and a demo — a complete usage guide
w@ns0ng, 2011-08-06 20:19

After overcoming the initial difficulties with JGroups, our project now uses it quite stably. The part that remains tedious and error-prone is the JGroups configuration. Chinese-language resources on it are scarce, so I have written up my experience below.

Tim: http://hi.baidu.com/jabber/blog/item/7e879852a23efd0f0cf3e3ea.html

http://puras.iteye.com/blog/81783

Where JGroups fits

Server clusters, communication among multiple servers, server replication, distributed caches, and the like.

About JGroups

JGroups is a Java toolkit for reliable multicast (group) communication. It provides a reliability layer on top of IP multicast, and can also be built on TCP or run over a WAN. It is developed mainly by Bela Ban and belongs to JBoss.org; the JBoss site has some related material. The project is still fairly active on SourceForge and is updated regularly.

JGroups configuration

PING: discovers the initial members
MERGE2: re-merges groups that were split by a network partition
FD_SOCK: failure detection, TCP-based
FD: failure detection, heartbeat-based
VERIFY_SUSPECT: double-checks nodes that appear to have failed
pbcast.NAKACK: negative acknowledgments; provides reliable delivery
UNICAST: reliable unicast
pbcast.STABLE: computes whether broadcast messages are stable
VIEW_SYNC: periodically broadcasts the view (member list)
pbcast.GMS: group membership; handles joins/leaves/crashes, etc.
FC: flow control
FRAG2: fragmentation layer; splits packets too large for the network layer

These are the more important protocols, and you can rarely do without them. To study them in depth, read the source code in org.jgroups.protocols.

JGroups usage example (Tim's hello-world demo)

TimReceiver.java:

```java
import org.jgroups.tests.perf.Receiver;
import org.jgroups.tests.perf.Transport;
import org.jgroups.util.Util;

public class TimReceiver implements Receiver {
    private Transport transport = null;

    public static void main(String[] args) {
        TimReceiver t = new TimReceiver();
        try {
            int sendMsgCount = 5000;
            int msgSize = 1000;
            t.start();

            t.sendMessages(sendMsgCount, msgSize);
            System.out.println("########## Begin to recv...");
            Thread.currentThread().join();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (t != null) {
                t.stop();
            }
        }
    }

    public void start() throws Exception {
        transport = new TimTransport();
        transport.create(null);
        transport.setReceiver(this);
        transport.start();
    }

    public void stop() {
        if (transport != null) {
            transport.stop();
            transport.destroy();
        }
    }

    private int count = 0;

    // Called by the transport for every message received from the group.
    public void receive(Object sender, byte[] data) {
        System.out.print(".");
        if (++count == 5000) {
            System.out.println("\r\nRECV DONE.");
            System.exit(0);
        }
    }

    private void sendMessages(int count, int msgSize) throws Exception {
        byte[] buf = new byte[msgSize];
        for (int k = 0; k < msgSize; k++)
            buf[k] = 'T';

        System.out.println("-- sending " + count + " " + Util.printBytes(msgSize) + " messages");

        for (int i = 0; i < count; i++) {
            transport.send(null, buf);
        }

        System.out.println("######### send complete");
    }
}
```

TimTransport.java:

```java
import java.util.Map;
import java.util.Properties;

import org.jgroups.Address;
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;
import org.jgroups.tests.perf.Receiver;
import org.jgroups.tests.perf.Transport;

public class TimTransport extends ReceiverAdapter implements Transport {
    private JChannel channel = null;
    private String groupName = "TimDemo";
    private Receiver receiver = null;

    // Old-style string stack: UDP transport plus the reliability protocols
    // (PING, NAKACK, UNICAST, STABLE, GMS, FC, FRAG2) described above.
    String PROTOCOL_STACK_UDP1 = "UDP(bind_addr=192.168.100.59";
    String PROTOCOL_STACK_UDP2 = ";mcast_port=8888";
    String PROTOCOL_STACK_UDP3 = ";mcast_addr=225.1.1.1";
    String PROTOCOL_STACK_UDP4 = ";tos=8;loopback=false;max_bundle_size=64000;"
            + "use_incoming_packet_handler=true;use_outgoing_packet_handler=false;ip_ttl=2;enable_bundling=true):"
            + "PING:MERGE2:FD_SOCK:FD:VERIFY_SUSPECT:"
            + "pbcast.NAKACK(gc_lag=50;max_xmit_size=50000;use_mcast_xmit=false;"
            + "retransmit_timeout=300,600,1200,2400,4800;discard_delivered_msgs=true):"
            + "UNICAST:pbcast.STABLE:VIEW_SYNC:"
            + "pbcast.GMS(print_local_addr=false;join_timeout=3000;"
            + "join_retry_timeout=2000;"
            + "shun=true;view_bundling=true):"
            + "FC(max_credits=2000000;min_threshold=0.10):FRAG2(frag_size=50000)";

    public Object getLocalAddress() {
        return channel != null ? channel.getLocalAddress() : null;
    }

    public void start() throws Exception {
        channel.connect(groupName);
    }

    public void stop() {
        if (channel != null) {
            channel.shutdown();
        }
    }

    public void destroy() {
        if (channel != null) {
            channel.close();
            channel = null;
        }
    }

    public void setReceiver(Receiver r) {
        this.receiver = r;
    }

    public Map dumpStats() {
        return channel != null ? channel.dumpStats() : null;
    }

    public void send(Object destination, byte[] payload) throws Exception {
        byte[] tmp = new byte[payload.length];
        System.arraycopy(payload, 0, tmp, 0, payload.length);
        // A null destination broadcasts the message to the whole group.
        Message msg = new Message((Address) destination, null, tmp);
        if (channel != null) {
            channel.send(msg);
        }
    }

    public void receive(Message msg) {
        Address sender = msg.getSrc();
        byte[] payload = msg.getBuffer();
        if (receiver != null) {
            try {
                receiver.receive(sender, payload);
            } catch (Throwable tt) {
                tt.printStackTrace();
            }
        }
    }

    public void create(Properties config) throws Exception {
        String PROTOCOL_STACK = PROTOCOL_STACK_UDP1 + PROTOCOL_STACK_UDP2
                + PROTOCOL_STACK_UDP3 + PROTOCOL_STACK_UDP4;
        channel = new JChannel(PROTOCOL_STACK);
        channel.setReceiver(this);
    }

    public void send(Object destination, byte[] payload, boolean oob) throws Exception {
        send(destination, payload);
    }
}
```

Using JGroups to synchronize the caches of two servers
w@ns0ng, 2011-08-06 20:00

Summary: 1. The requirement: a while ago I built a project whose backend keeps a lot of data in a cache, and the cached data is also updated. With a single server there is no problem at all, but once you add cluster load balancing and connect several servers, a problem appears: how do you keep the caches on the different servers in sync? See the deployment diagram in the full article. 2. Enter JGroups: JGroups is a reliable group communication tool; a process can join a communication group and send messages to all of its members... Read the full article: http://www.tkk7.com/wansong/articles/355921.html
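
The article's own code is behind the link above, but the core idea can be sketched with just the JChannel API used elsewhere on this page: each server broadcasts its cache updates to the group and applies the updates it receives. The class name, group name, and wire format below are invented; a production version would also transfer the initial state to newly joined members:

```java
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

// Every put() is broadcast to the group; every member applies the puts it
// receives, so all caches converge to the same contents.
public class SyncedCache extends ReceiverAdapter {
    private final Map<String, Serializable> cache =
            new ConcurrentHashMap<String, Serializable>();
    private final JChannel channel;

    public SyncedCache(String stackConfig) throws Exception {
        channel = new JChannel(stackConfig);
        channel.setReceiver(this);
        channel.connect("CacheSyncGroup");
    }

    public void put(String key, Serializable value) throws Exception {
        // A null destination means "broadcast to all members". The sender
        // also receives its own update via loopback; re-putting is harmless.
        channel.send(new Message(null, null, new Object[] { key, value }));
    }

    public Serializable get(String key) {
        return cache.get(key);
    }

    @Override
    public void receive(Message msg) {
        Object[] kv = (Object[]) msg.getObject();
        cache.put((String) kv[0], (Serializable) kv[1]);
    }
}
```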

Using the open-source grid platform GridGain for grid computing
w@ns0ng, 2011-08-06 14:59

Grid computing generally comes in two kinds: data grids and compute grids. Put simply, a data grid stores data in a distributed way, while a compute grid decomposes a task into subtasks that are computed in parallel.

The job of a compute-grid platform is to split a task apart, hand the pieces to different node machines to run, and then gather the results back together. This is "split and aggregate": a job request is decomposed into, say, three sub-jobs, each executed on a different machine; the results are then aggregated and returned to the calling client.
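
The pattern itself is easy to sketch in plain Java. The sketch below shows only the split-and-aggregate idea on a local thread pool; it is not GridGain's API, and in a real grid each sub-job would be shipped to a different node rather than a thread:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SplitAggregateDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        // Split: decompose the job into three sub-jobs.
        List<Callable<Integer>> subJobs = new ArrayList<Callable<Integer>>();
        for (int i = 0; i < 3; i++) {
            final int part = i;
            subJobs.add(new Callable<Integer>() {
                public Integer call() {
                    return part * 10;   // stand-in for real work on one slice
                }
            });
        }

        // Execute all sub-jobs (on different nodes, in a real grid).
        List<Future<Integer>> results = pool.invokeAll(subJobs);

        // Aggregate: combine the sub-results into the final answer.
        int total = 0;
        for (Future<Integer> f : results) {
            total += f.get();
        }
        System.out.println("Aggregated result: " + total);
        pool.shutdown();
    }
}
```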


GridGain is an open-source Java grid platform. It integrates many off-the-shelf frameworks, for example:

JBoss
Spring
Spring AOP
JBoss AOP
AspectJ
JGroups

GridGain offers two ways to grid-enable an application.

The first is to use AOP.

Suppose we have this application:

```java
class BizLogic {
    public static Result process(String param) {
        // ...
    }
}

class Caller {
    public static void main(String[] args) {
        BizLogic.process(args[0]);
    }
}
```

To gridify process, you simply add a @Gridify annotation to it; on the Caller side you also need to start the GridFactory:

```java
class BizLogic {
    @Gridify()   // added: marks the method for grid-enabled execution
    public static Result process(String param) {
        // ...
    }
}

class Caller {
    public static void main(String[] args) {
        GridFactory.start();   // added: start the local grid node

        try {
            BizLogic.process(args[0]);
        }
        finally {
            GridFactory.stop();   // added: stop the grid node
        }
    }
}
```



GridAffinityLoadBalancingSpi: data partitioning and data-grid integration
w@ns0ng, 2011-08-06 14:11

Reposted from http://www.iteye.com/topic/475010

Grid technology comes in two kinds: compute grids and data grids. GridGain is a compute grid: it partitions work across nodes according to rules to speed up computation, e.g. for rule engines and pricing engines, bank settlement, insurance assessment and settlement, and so on. But if, during a computation, the data is distributed or lazily loaded, it may not be obtainable locally; that is when you need a data grid, so that you compute in the grid and fetch data from the grid. In that case Infinispan is a good choice.

Data partitioning and data-grid integration

Overview:

When processing large volumes of data, it is usually worth partitioning the data across nodes, so that each node is responsible for one share of it. This approach lets you load a large amount of data from the database into a cache and then have each of your machines work on its own partition. Why? To avoid caching the same data redundantly on every node, which tends to improve performance and keeps servers from being overwhelmed.

With GridGain, the Affinity Load Balancing design solves this problem very neatly, and it can be integrated with a distributed cache to cover the data-grid side.

[Figure: GridAffinityLoadBalancingSpi]

Affinity Load Balancing

In GridGain, affinity load balancing is provided by GridAffinityLoadBalancingSpi.

The figure contrasts using a data grid with not using one. The left-hand diagram shows the execution flow without GridGain: a remote database server answers the queries and passes the results to the main calling server. This is faster than direct database access, but computing the result generates a lot of unnecessary traffic.

The right-hand diagram uses GridGain: the logical computation is colocated with the data access on the local node. Assuming the computation is lighter to ship than serializing the data out of the database (i.e. the workload is computation-heavy), network traffic is minimal. Moreover, your computation can access the data on node 2 and node 3. In this setup GridGain splits the logic into computation jobs and routes each job to the data service that holds its data, ensuring that all computation runs on the local node. And if a data-service node crashes, your failed jobs are automatically moved to another node; failures are tolerated (data grids and distributed caches provide this).
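
To make the routing idea concrete, here is a conceptual sketch. It is not GridGain's actual SPI; the node list and the modulo-hash partitioning are simplifications (a real implementation would rebalance when group membership changes, which is exactly the failover described above):

```java
import java.util.Arrays;
import java.util.List;

// Conceptual affinity routing: a job is sent to the node that already holds
// the data for its key, so computation runs where the data lives.
public class AffinityRouterSketch {
    private final List<String> nodes;

    public AffinityRouterSketch(List<String> nodes) {
        this.nodes = nodes;
    }

    // Same key -> same node, as long as membership is unchanged.
    public String ownerOf(String dataKey) {
        return nodes.get(Math.abs(dataKey.hashCode() % nodes.size()));
    }

    public void route(String dataKey, Runnable job) {
        String node = ownerOf(dataKey);
        // A real grid would ship the job to 'node'; here we only report it.
        System.out.println("key '" + dataKey + "' -> " + node);
        job.run();
    }

    public static void main(String[] args) {
        AffinityRouterSketch router =
                new AffinityRouterSketch(Arrays.asList("node1", "node2", "node3"));
        router.route("account-42", new Runnable() {
            public void run() { /* compute against the local partition */ }
        });
    }
}
```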

Data-grid integration

GridGain does not implement a data cache of its own; instead it integrates with existing data caches and data-grid solutions. This means users can realize their preferred scheme with almost any distributed cache.

For example, GridGain ships a JBoss Cache Data Partitioning Example that shows how to use affinity load balancing. JBoss Cache does not in fact provide data partitioning itself; it is the affinity load balancing supplied by GridGain's GridAffinityLoadBalancingSpi that makes data partitioning with JBoss Cache possible.

This post has an attachment; request it from admin@pjprimer.com.