A compute grid platform splits a task into pieces, hands those pieces to different node machines to run, and then aggregates the results. This is Split and Aggregate. As the figure below shows, a job request is decomposed into several sub-jobs, each executed on a different machine; the results are then aggregated and returned to the calling client.
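To make Split and Aggregate concrete, here is a minimal, GridGain-independent sketch in plain Java: the job is split into sub-jobs, each sub-job runs on its own worker (threads standing in for node machines), and the partial results are aggregated into the final answer. The class name, the chunking rule and the toy "computation" are invented purely for illustration.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Hypothetical example: process a long input by splitting it into chunks,
// "sending" each chunk to a worker, and aggregating the partial results.
public class SplitAndAggregateDemo {
    public static void main(String[] args) throws Exception {
        String job = "some very long input that would normally be far bigger";
        int subJobs = 4;
        ExecutorService workers = Executors.newFixedThreadPool(subJobs); // stand-in for grid nodes

        // Split: cut the job into roughly equal sub-jobs.
        int chunk = (job.length() + subJobs - 1) / subJobs;
        List<Future<Integer>> partials = new ArrayList<Future<Integer>>();
        for (int i = 0; i < job.length(); i += chunk) {
            final String part = job.substring(i, Math.min(i + chunk, job.length()));
            partials.add(workers.submit(new Callable<Integer>() {
                public Integer call() { return part.length(); } // the "computation" done on one node
            }));
        }

        // Aggregate: collect the sub-results and combine them.
        int total = 0;
        for (Future<Integer> f : partials) {
            total += f.get();
        }
        System.out.println("aggregated result = " + total);
        workers.shutdown();
    }
}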
GridGain is an open-source Java grid platform. It integrates with a number of existing frameworks, for example Spring, JBoss AOP, AspectJ and JGroups (see the list later in this post).

GridGain offers two ways to grid-enable an application. The first uses AOP. Suppose you have an application in which a Caller class invokes BizLogic.process().

To grid-enable process you simply mark it with a @Gridify annotation; on the caller side you start the grid with GridFactory.start() before the call and stop it with GridFactory.stop() afterwards. The before and after listings are shown later in this post.
http://puras.iteye.com/blog/81783
Where JGroups fits
Server clusters, multi-server communication, server replication, distributed caching, and so on.
About JGroups
JGroups is a Java toolkit for reliable multicast (group communication). It provides reliable delivery on top of IP multicast, and can also run over TCP or across a WAN. It is developed mainly by Bela Ban and belongs to JBoss.org, whose site carries related documentation. The project is still quite active on SourceForge and is updated regularly.
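Before the configuration details, a minimal sketch of the core API may help: create a JChannel, register a receiver, connect to a group, and send a message. This assumes the JGroups 2.x API used in the demo later in this post and the default UDP protocol stack; the class name HelloJGroups and the group name "demo" are arbitrary.

import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

public class HelloJGroups {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();          // default UDP protocol stack
        channel.setReceiver(new ReceiverAdapter() { // callback for delivered messages
            public void receive(Message msg) {
                System.out.println("from " + msg.getSrc() + ": " + new String(msg.getBuffer()));
            }
        });
        channel.connect("demo");                    // join (or create) the group "demo"
        // A null destination address multicasts the message to every member of the group.
        channel.send(new Message(null, null, "hello".getBytes()));
        Thread.sleep(1000);                         // give the message time to arrive
        channel.close();
    }
}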
JGroups configuration
PING: discovers the initial members
MERGE2: re-merges subgroups after a network partition
FD_SOCK: failure detection based on TCP sockets
FD: failure detection based on heartbeats
VERIFY_SUSPECT: double-checks nodes suspected of having failed
pbcast.NAKACK: negative acknowledgements; provides reliable delivery
UNICAST: reliable unicast
pbcast.STABLE: computes which broadcast messages are stable (received by all members)
VIEW_SYNC: periodically broadcasts the view (the membership list)
pbcast.GMS: group membership; handles joins, leaves, crashes, etc.
FC: flow control
FRAG2: fragmentation layer; splits large messages into packets small enough for the network transport

The protocols above are the important ones, and in general none of them should be left out. To dig deeper, read the source code in org.jgroups.protocols.
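As an aside, the stack does not have to be the long property string used in the demo below; it can also live in an XML file whose name is passed to the JChannel constructor. An abbreviated, illustrative sketch (the file name and parameter values are placeholders, not tuned settings) might look roughly like this:

<config>
    <UDP mcast_addr="225.1.1.1" mcast_port="8888" ip_ttl="2"/>
    <PING timeout="2000" num_initial_members="3"/>
    <MERGE2/>
    <FD_SOCK/>
    <FD/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK/>
    <UNICAST/>
    <pbcast.STABLE/>
    <VIEW_SYNC/>
    <pbcast.GMS/>
    <FC max_credits="2000000"/>
    <FRAG2 frag_size="50000"/>
</config>

Loading it would then be roughly new JChannel("tim-stack.xml") instead of the concatenated property string built in TimTransport.create().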
JGroups example: Tim's hello-world demo
TimReceiver.java
import org.jgroups.tests.perf.Receiver;
import org.jgroups.tests.perf.Transport;
import org.jgroups.util.Util;
public class TimReceiver implements Receiver {
    private Transport transport = null;

    public static void main(String[] args) {
        TimReceiver t = new TimReceiver();
        try {
            int sendMsgCount = 5000;
            int msgSize = 1000;
            t.start();
            t.sendMessages(sendMsgCount, msgSize);
            System.out.println("########## Begin to recv...");
            // Block the main thread; receive() calls System.exit(0) once all messages have arrived.
            Thread.currentThread().join();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            t.stop();
        }
    }

    // Create the transport, register this class as the receiver and join the group.
    public void start() throws Exception {
        transport = new TimTransport();
        transport.create(null);
        transport.setReceiver(this);
        transport.start();
    }

    public void stop() {
        if (transport != null) {
            transport.stop();
            transport.destroy();
        }
    }

    private int count = 0;

    // Callback invoked by the transport for every message delivered to the group.
    public void receive(Object sender, byte[] data) {
        System.out.print(".");
        if (++count == 5000) {
            System.out.println("\r\nRECV DONE.");
            System.exit(0);
        }
    }

    private void sendMessages(int count, int msgSize) throws Exception {
        byte[] buf = new byte[msgSize];
        for (int k = 0; k < msgSize; k++)
            buf[k] = 'T';
        System.out.println("-- sending " + count + " " + Util.printBytes(msgSize) + " messages");
        for (int i = 0; i < count; i++) {
            // A null destination multicasts the message to the whole group.
            transport.send(null, buf);
        }
        System.out.println("######### send complete");
    }
}
TimTransport.java
import java.util.Map;
import java.util.Properties;
import org.jgroups.Address;
import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;
import org.jgroups.tests.perf.Receiver;
import org.jgroups.tests.perf.Transport;
public class TimTransport extends ReceiverAdapter implements Transport {
    private JChannel channel = null;
    private String groupName = "TimDemo";
    private Receiver receiver = null;

    // UDP/IP-multicast protocol stack; adjust bind_addr, mcast_addr and mcast_port for your network.
    String PROTOCOL_STACK_UDP1 = "UDP(bind_addr=192.168.100.59";
    String PROTOCOL_STACK_UDP2 = ";mcast_port=8888";
    String PROTOCOL_STACK_UDP3 = ";mcast_addr=225.1.1.1";
    String PROTOCOL_STACK_UDP4 = ";tos=8;loopback=false;max_bundle_size=64000;" +
            "use_incoming_packet_handler=true;use_outgoing_packet_handler=false;ip_ttl=2;enable_bundling=true):"
            + "PING:MERGE2:FD_SOCK:FD:VERIFY_SUSPECT:"
            + "pbcast.NAKACK(gc_lag=50;max_xmit_size=50000;use_mcast_xmit=false;" +
            "retransmit_timeout=300,600,1200,2400,4800;discard_delivered_msgs=true):"
            + "UNICAST:pbcast.STABLE:VIEW_SYNC:"
            + "pbcast.GMS(print_local_addr=false;join_timeout=3000;" +
            "join_retry_timeout=2000;" +
            "shun=true;view_bundling=true):"
            + "FC(max_credits=2000000;min_threshold=0.10):FRAG2(frag_size=50000)";

    public Object getLocalAddress() {
        return channel != null ? channel.getLocalAddress() : null;
    }

    public void start() throws Exception {
        // Join the group; all members connected with the same group name see each other's messages.
        channel.connect(groupName);
    }

    public void stop() {
        if (channel != null) {
            channel.shutdown();
        }
    }

    public void destroy() {
        if (channel != null) {
            channel.close();
            channel = null;
        }
    }

    public void setReceiver(Receiver r) {
        this.receiver = r;
    }

    public Map dumpStats() {
        return channel != null ? channel.dumpStats() : null;
    }

    public void send(Object destination, byte[] payload) throws Exception {
        byte[] tmp = new byte[payload.length];
        System.arraycopy(payload, 0, tmp, 0, payload.length);
        // A null destination address means the message is multicast to the whole group.
        Message msg = new Message((Address) destination, null, tmp);
        if (channel != null) {
            channel.send(msg);
        }
    }

    // ReceiverAdapter callback: forward every delivered message to the registered Receiver.
    public void receive(Message msg) {
        Address sender = msg.getSrc();
        byte[] payload = msg.getBuffer();
        if (receiver != null) {
            try {
                receiver.receive(sender, payload);
            } catch (Throwable tt) {
                tt.printStackTrace();
            }
        }
    }

    public void create(Properties config) throws Exception {
        // Build the channel from the protocol stack string defined above.
        String PROTOCOL_STACK = PROTOCOL_STACK_UDP1 + PROTOCOL_STACK_UDP2 + PROTOCOL_STACK_UDP3 + PROTOCOL_STACK_UDP4;
        channel = new JChannel(PROTOCOL_STACK);
        channel.setReceiver(this);
    }

    public void send(Object destination, byte[] payload, boolean oob) throws Exception {
        send(destination, payload);
    }
}
Back to GridGain. The frameworks it integrates with include:
Spring
Spring AOP
JBoss AOP
AspectJ
JGroups

The original application, before gridification:

class BizLogic {
    public static Result process(String param) {
        …
    }
}

class Caller {
    public static void main(String[] args) {
        BizLogic.process(args[0]);
    }
}
The grid-enabled version: process is annotated with @Gridify, and the caller brackets the call with GridFactory.start() and GridFactory.stop():

class BizLogic {
    @Gridify(…)
    public static Result process(String param) {
        …
    }
}

class Caller {
    public static void main(String[] args) {
        GridFactory.start();

        try {
            BizLogic.process(args[0]);
        }
        finally {
            GridFactory.stop();
        }
    }
}
Grid technology comes in two flavours: compute grids and data grids. GridGain is a compute grid: it splits work across nodes according to rules to speed up computation, for example in rule engines and similar calculation engines, bank settlement runs, or insurance assessment and settlement. But if the data used in the computation is distributed or lazily loaded, it may not be available on the node doing the work; that is when a data grid is needed, so the computation can fetch its data from the grid. For that role Infinispan is a good choice.
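As a rough illustration of the data-grid side, here is a minimal embedded Infinispan sketch (not taken from this post or its attachments): create a cache manager, obtain a cache, and read and write entries. The class name, key and value are invented; a real data grid would pass a clustered (distributed-mode) configuration file to DefaultCacheManager instead of using the default local cache.

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class DataGridDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical: for a real grid, use something like new DefaultCacheManager("infinispan-dist.xml")
        DefaultCacheManager manager = new DefaultCacheManager();
        Cache<String, String> cache = manager.getCache();

        cache.put("policy:42", "insurance policy data");  // written into the grid
        String value = cache.get("policy:42");            // read back, possibly from a remote node
        System.out.println(value);

        manager.stop();
    }
}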
Data partitioning and data grid integration
Overview:
When processing large volumes of data, it is often advisable to partition the data across nodes so that each node is responsible for its own share. This approach lets you load a large amount of data from the database into the cache and have each machine work on its own partition. Why? Because it avoids caching the same data on every node, which usually improves performance and keeps individual servers from being overwhelmed.
With GridGain, Affinity Load Balancing is designed to solve exactly this problem, and it can be integrated with a distributed cache to cover the data grid side.
Affinity Load Balancing
In GridGain, Affinity Load Balancing is provided through GridAffinityLoadBalancingSpi.
The figure below illustrates the difference between running with and without a data grid. The left-hand diagram shows the execution flow without GridGain: the remote database server runs the queries and then hands the data to the calling server. That is faster than raw database access from every node, but the computation still generates a lot of unnecessary traffic.
On the right, GridGain is used. The computation logic is brought together with the data it needs on the local node. Assuming the computation logic is cheaper to ship than serializing the data to and from the database (that is, the job is computation-heavy over a lot of data), network traffic is kept to a minimum. In addition, your computation can use the data held on node 2 and node 3: GridGain splits the work into jobs and routes each job to the data node that holds the data it needs, so that every job computes against data on its local node. And if a data node crashes, its failed jobs automatically move to other nodes; this fail-over is possible because the data grid / distributed cache keeps the data available.
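GridGain's real affinity API (GridAffinityLoadBalancingSpi and the pieces around it) is not spelled out in this post, so the following is only a hand-rolled illustration of the idea with entirely hypothetical names: keys are hashed to partitions, each partition has an owner node, and a job is routed to the node that already holds its data, so only the small job description crosses the network rather than the data itself.

import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of affinity routing; this is NOT the GridGain API.
public class AffinityRoutingSketch {
    // The "cluster": each node owns the partition whose keys hash to it.
    static final List<String> NODES = Arrays.asList("node1", "node2", "node3");

    // Affinity function: which node owns the data for a given key.
    static String ownerOf(String dataKey) {
        int partition = (dataKey.hashCode() & 0x7fffffff) % NODES.size();
        return NODES.get(partition);
    }

    // "Send" a job to the owning node; in a real grid the job travels and the data stays put.
    static void execute(String dataKey, Runnable job) {
        System.out.println("routing job for key '" + dataKey + "' to " + ownerOf(dataKey));
        job.run(); // stands in for remote execution on the owner node
    }

    public static void main(String[] args) {
        execute("account:1001", new Runnable() {
            public void run() { System.out.println("settle account 1001 next to its cached data"); }
        });
        execute("account:2002", new Runnable() {
            public void run() { System.out.println("settle account 2002 next to its cached data"); }
        });
    }
}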
Data grid integration
GridGain does not implement a data cache of its own; instead it integrates with existing data caches and data grid products. This lets users implement their preferred design with almost any distributed cache.
For example, GridGain provides a JBoss Cache Data Partitioning Example that shows how to use Affinity Load Balancing. In fact, JBoss Cache itself does not offer data partitioning; it is the Affinity Load Balancing provided by GridGain's GridAffinityLoadBalancingSpi that makes partitioning data across JBoss Cache possible.
This article has attachments; request them from admin@pjprimer.com.