
    A few notes on "long transactions" in HBase and Redis

    First, a word about the title: the term may not be entirely rigorous. Roughly, the problem is this:

    sometimes you are in a situation where you want to read a record, check what is in it, and depending on that update the record. The problem is that between the time you read a row and perform the update, someone else might have updated the row, so your update might be based on outdated information.

    In short: process A reads row R and then runs a fairly long computation. While A is computing, B updates row R. If A simply writes its result when it finishes, it overwrites B's change; in that case A's write should be made to fail.

    The discussion below is distilled from these two pages (thanks to their authors!):

    http://www.ngdata.com/hbase-row-locks/

    http://redis.io/topics/transactions

    The simplest, most direct idea is Transaction + Row Lock, much like a traditional DBMS: take a row lock, open a transaction, do the work, commit, and finally release the row lock. It looks simple and correct, but note that if the computation takes a long time, the lock is held for that whole duration and the database effectively hangs, unable to execute other operations.

    The BigTable paper discusses this class of problem.

    Broadly there are three approaches:

    1. Row lock. For HBase, though, region-level locking is the more mature mechanism, because a row lock ties up a thread for a long time (from the start of the transaction until the update); under heavy concurrency the system will fall over.

    2. ICV, i.e. HBase's incrementColumnValue() method.

    3. CAS, i.e. HBase's checkAndPut() method: before the Put, check whether a given cell currently holds the expected value, and only perform the Put if it does. Note that the checked cell and the written cell can be different columns; the check and the Put do have to target the same row, though, since the row is HBase's unit of atomicity.

    To sum up: in HBase, the CAS approach (checkAndPut) is the better solution. A minimal sketch of the pattern is shown below.
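    As an illustration (my own sketch, not code from either of the pages above), the read-compute-checkAndPut pattern with the HBase 0.90-era Java client looks roughly like this; the table, family, qualifier and row key names are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CheckAndPutExample {

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "demo_table");        // placeholder table name

            byte[] row = Bytes.toBytes("row-1");
            byte[] cf  = Bytes.toBytes("cf");
            byte[] col = Bytes.toBytes("balance");

            // 1. Read the current value of the cell.
            Result result = table.get(new Get(row));
            byte[] oldValue = result.getValue(cf, col);

            // 2. The long-running computation happens here, based on the value just read.
            byte[] newValue = compute(oldValue);

            // 3. Conditional write: only applied if the cell still holds oldValue.
            Put put = new Put(row);
            put.add(cf, col, newValue);
            boolean applied = table.checkAndPut(row, cf, col, oldValue, put);
            if (!applied) {
                // Someone else updated the row in the meantime: re-read and retry.
            }
            table.close();
        }

        private static byte[] compute(byte[] in) {
            return in;                                            // stand-in for the real work
        }
    }

    If checkAndPut() returns false, the caller simply loops: read again, recompute, and attempt the conditional write once more.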

    So much for HBase; now a look at the lighter-weight Redis.

    Redis also supports transactions; see http://redis.io/topics/transactions for details.

    A transaction is opened with MULTI and executed with EXEC. Commands issued in between are not actually run; they are queued and then executed together when EXEC is called. Redis guarantees that while one transaction's EXEC is being processed, no requests from other clients are handled (they are held off). Note that only the EXEC is serialized, not the whole MULTI block, so concurrency remains reasonable.

    To support the CAS scheme described in the paper, Redis provides the WATCH command:

    So what is WATCH really about? It is a command that will make the EXEC conditional: we are asking Redis to perform the transaction only if no other client modified any of the WATCHed keys. Otherwise the transaction is not entered at all.

    That should make the idea clear; for the details, read the documentation linked above. A sketch of the same check-and-set pattern from Java is shown below.
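    For comparison, here is my own minimal sketch of the pattern with the Jedis client; the key name is a placeholder, the key is assumed to already hold a number, and error handling is omitted:

    import java.util.List;

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.Transaction;

    public class RedisWatchExample {

        public static void main(String[] args) {
            Jedis jedis = new Jedis("localhost", 6379);

            jedis.watch("counter");                        // watch the key before reading it
            long current = Long.parseLong(jedis.get("counter"));

            long next = current + 1;                       // the "long computation" happens here

            Transaction tx = jedis.multi();                // commands below are only queued
            tx.set("counter", String.valueOf(next));
            List<Object> outcome = tx.exec();              // null means a WATCHed key was modified

            if (outcome == null) {
                // The transaction was discarded by Redis: re-read the key and retry.
            }
            jedis.disconnect();
        }
    }

    exec() returning null is how Jedis reports that a WATCHed key changed and EXEC was aborted, which is exactly the failure case described in the quote above.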

    posted @ 2013-08-24 22:39 paulwong | Views (398) | Comments (0)

    Notes on using the ZooKeeper client

    The ZooKeeper data model

    1. zk exposes a hierarchical namespace, much like a file system.
    2. Every node in the namespace can store data.
    3. Only absolute paths exist; names are Unicode strings.
    4. Every node is a ZNode (comparable to a file system's stat structure).
    5. A watch can be set on any znode; when the znode changes, the clients that set the watch are notified and the watch is cleared.
    6. Every read and write of a znode is atomic, and every operation carries the version number of the znode it targets (see the sketch right after this list).
    7. Try to keep a single znode under 1 MB; a few KB is typical.
    8. Ephemeral nodes exist only for the lifetime of one session and are not allowed to have children.
    9. Every event that changes zk state is identified by a zxid, which is globally unique.
    10. Every change to a znode bumps its versions. Each znode keeps three of them: version (data changes), cversion (child-list changes) and aversion (ACL changes).
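    As a small illustration of point 6 (every write can be made conditional on the znode version), an optimistic update with the standard ZooKeeper Java client might look like the sketch below; the path and the "computation" are placeholders, and this is my own example rather than part of the original notes:

    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class VersionedUpdate {

        // Read a znode, derive a new value, and write it back only if nobody
        // else changed the znode in between (optimistic concurrency control).
        public static void update(ZooKeeper zk, String path) throws Exception {
            Stat stat = new Stat();
            byte[] current = zk.getData(path, false, stat);    // returns the data and fills in the version

            byte[] next = current;                             // stand-in for the real computation

            try {
                zk.setData(path, next, stat.getVersion());     // write conditioned on the version
            } catch (KeeperException.BadVersionException e) {
                // The znode was updated concurrently: re-read and retry.
            }
        }
    }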

    ZooKeeper state transitions

    1. The session timeout must be at least 2 times the tick time (default 2000 ms) and at most 20 times the tick time.
    2. There is no need to reconnect by hand when the connection drops; the zk client handles reconnection itself (a session that has fully expired, however, requires creating a new client handle).
    3. Whether a session has expired is decided by the server: if the server hears nothing (no heartbeat) from a client within the timeout, it declares that client's session expired, deletes the ephemeral znodes owned by the session, and notifies the clients watching those nodes.
    4. When a client whose session has expired reconnects to the zk cluster, it receives a "session expired" notification.
    5. When a zk connection is established, a default watcher is registered; it is invoked whenever the client's connection state changes. It is common to treat its initial state as disconnected, so that later session-expiry events can be handled.

    ZooKeeper watches

    1. Every read operation (getData(), getChildren(), exists()) can set a watcher on the node it touches.
    2. Watches are one-shot: the first change or deletion of the data triggers the watcher, and later changes do not trigger it again unless it is re-registered.
    3. Watch events are delivered asynchronously, so one may reach the client before the return code of the operation that caused it. What zk does guarantee is that a client only sees a watch event after the call that registered the watch has returned successfully.
    4. There are two kinds of watches: data watches (set by getData() and exists()), which concern the znode's data, and child watches (set by getChildren()), which concern the children list.
    5. One way a watch event can be lost: "a watch for the existence of a znode not yet created will be missed if the znode is created and deleted while disconnected." A minimal watch example follows below.
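    To make the watch mechanics concrete, here is a minimal sketch of my own (the connect string and path are placeholders) that registers the default connection watcher mentioned in the previous section and a one-shot data watch:

    import java.util.concurrent.CountDownLatch;

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.Watcher.Event.KeeperState;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class WatchExample {

        public static void main(String[] args) throws Exception {
            final CountDownLatch connected = new CountDownLatch(1);

            // Default watcher: receives connection-state events (SyncConnected, Disconnected, Expired).
            ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new Watcher() {
                public void process(WatchedEvent event) {
                    if (event.getState() == KeeperState.SyncConnected) {
                        connected.countDown();
                    }
                }
            });
            connected.await();

            // One-shot data watch: fires once when /demo changes or is deleted,
            // after which it has to be re-registered by the next read.
            byte[] data = zk.getData("/demo", new Watcher() {
                public void process(WatchedEvent event) {
                    System.out.println("znode event: " + event.getType() + " on " + event.getPath());
                }
            }, new Stat());

            System.out.println("current data: " + new String(data));
        }
    }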

    posted @ 2013-08-23 10:47 paulwong | Views (708) | Comments (0)

    Big data platform architecture design resources

    !!! Implementation notes on a Hadoop-based big data platform: overall architecture design
    http://blog.csdn.net/jacktan/article/details/9200979







    posted @ 2013-08-18 18:27 paulwong | Views (756) | Comments (0)

    How to install a Hadoop cluster (2-node) and HBase on VMware Workstation; the appendix also covers installing Pig and Hive

    By Tzu-Cheng Chuang 1-28-2011


    Requires: Ubuntu 10.04, Hadoop 0.20.2, ZooKeeper 3.3.2, HBase 0.90.0
    1. Download Ubuntu 10.04 desktop 32 bit from Ubuntu website.

    2. Install Ubuntu 10.04 with username: hadoop, password: password,  disk size: 20GB, memory: 2048MB, 1 processor, 2 cores

    3. Install build-essential (for the GNU C/C++ compilers)
    $ sudo apt-get install build-essential

    4. Install sun-java-6-jdk
        (1) Add the Canonical Partner Repository to your apt repositories
        $ sudo add-apt-repository "deb http://archive.canonical.com/ lucid partner"
         (2) Update the source list
        $ sudo apt-get update
         (3) Install sun-java-6-jdk and make sure Sun’s java is the default jvm
        $ sudo apt-get install sun-java6-jdk
         (4) Set environment variable by modifying ~/.bashrc file, put the following two lines in the end of the file
        export JAVA_HOME=/usr/lib/jvm/java-6-sun
        export PATH=$PATH:$JAVA_HOME/bin 

    5. Configure SSH server so that ssh to localhost doesn’t need a passphrase
        (1) Install openssh server
        $ sudo apt-get install openssh-server
         (2) Generate RSA pair key
    $ ssh-keygen -t rsa -P ""
         (3) Enable SSH access to local machine
        $ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

    6. Disable IPv6 by modifying the /etc/sysctl.conf file; put the following lines at the end of the file
    # disable ipv6
    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1

    7. Install hadoop
        (1) Download hadoop-0.20.2.tar.gz(stable release on 1/25/2011)  from Apache hadoop website   
        (2) Extract hadoop archive file to /usr/local/   
        (3) Make symbolic link   
        (4) Modify /usr/local/hadoop/conf/hadoop-env.sh   
    Change from:
        # The java implementation to use. Required.
        # export JAVA_HOME=/usr/lib/j2sdk1.5-sun
    to:
        # The java implementation to use. Required.
        export JAVA_HOME=/usr/lib/jvm/java-6-sun
         (5)Create /usr/local/hadoop-datastore folder   
    $ sudo mkdir /usr/local/hadoop-datastore
    $ sudo chown hadoop:hadoop /usr/local/hadoop-datastore
    $ sudo chmod 750 /usr/local/hadoop-datastore
         (6) Put the following code in /usr/local/hadoop/conf/core-site.xml (inside the <configuration> element)
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/usr/local/hadoop/tmp/dir/hadoop-${user.name}</value>
      <description>A base for other temporary directories.</description>
    </property>
    <property>
      <name>fs.default.name</name>
      <value>hdfs://master:54310</value>
      <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description>
    </property>
        (7) Put the following code in /usr/local/hadoop/conf/mapred-site.xml (inside the <configuration> element)
    <property>
      <name>mapred.job.tracker</name>
      <value>master:54311</value>
      <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
    </property>
         (8) Put the following code in /usr/local/hadoop/conf/hdfs-site.xml (inside the <configuration> element)
    <property>
      <name>dfs.replication</name>
      <value>1</value>
      <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.</description>
    </property>
         (9) Add hadoop to the environment variables by modifying ~/.bashrc
    export HADOOP_HOME=/usr/local/hadoop
    export PATH=$HADOOP_HOME/bin:$PATH

    8. Restart Ubuntu Linux

    9. Copy this virtual machine to another folder, so that we have at least two copies of the Ubuntu Linux VM (one for the master, one for the slave)

    10. Modify /etc/hosts on both Linux virtual machines and add the following lines. The IP addresses depend on each machine; use ifconfig to find them out.
    # /etc/hosts (for master AND slave)
    192.168.0.1 master
    192.168.0.2 slave

    Also modify the following line, because it might cause HBase to pick up the wrong IP:
    192.168.0.1 ubuntu

    11. Check hadoop user access on both machines.
    The hadoop user on the master (aka hadoop@master) must be able to connect a) to its own user account on the master – i.e. ssh master in this context and not necessarily ssh localhost – and b) to the hadoop user account on the slave (aka hadoop@slave)  via a password-less SSH login. On both machines, make sure each one can connect to master, slave without typing passwords.

    12. Cluster configuration
        (1) Modify /usr/local/hadoop/conf/masters, only on the master machine:
    master
         (2) Modify /usr/local/hadoop/conf/slaves, only on the master machine:
    master
    slave
         (3) Change "localhost" to "master" in /usr/local/hadoop/conf/core-site.xml and /usr/local/hadoop/conf/mapred-site.xml, only on the master machine.
        (4) Change dfs.replication to "1" in /usr/local/hadoop/conf/hdfs-site.xml, only on the master machine.

    13. Format the namenode only once and only on master machine
    $ /usr/local/hadoop/bin/hadoop namenode -format

    14. Later on, start the multi-node cluster by typing following code only on master. So far, please don’t start hadoop yet.
    $ /usr/local/hadoop/bin/start-dfs.sh
    $ /usr/local/hadoop/bin/start-mapred.sh

    15. Install zookeeper only on master node
        (1) download zookeeper-3.3.2.tar.gz from Apache hadoop website   
        (2) Extract zookeeper-3.3.2.tar.gz    $ tar -xzf zookeeper-3.3.2.tar.gz
         (3) Move folder zookeeper-3.3.2 to /home/hadoop/ and create a symbolic link
        $ mv zookeeper-3.3.2 /home/hadoop/ ; ln -s /home/hadoop/zookeeper-3.3.2 /home/hadoop/zookeeper
         (4) Copy conf/zoo_sample.cfg to conf/zoo.cfg
        $ cp conf/zoo_sample.cfg conf/zoo.cfg
         (5) Modify conf/zoo.cfg and set    dataDir=/home/hadoop/zookeeper/snapshot

    16. Install Hbase on both master and slave nodes, configure it as fully-distributed
        (1) Download hbase-0.90.0.tar.gz from Apache hadoop website   
        (2) Extract hbase-0.90.0.tar.gz    $ tar -xzf hbase-0.90.0.tar.gz
         (3) Move folder hbase-0.90.0 to /home/hadoop/ and create a symbolic link    $ mv hbase-0.90.0 /home/hadoop/ ; ln -s /home/hadoop/hbase-0.90.0 /home/hadoop/hbase
         (4) Edit /home/hadoop/hbase/conf/hbase-site.xml and put the following in between <configuration> and </configuration>:
    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://master:54310/hbase</value>
      <description>The directory shared by region servers. Should be fully-qualified to include the filesystem to use. E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR</description>
    </property>
    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
      <description>The mode the cluster will be in. Possible values are false: standalone and pseudo-distributed setups with managed ZooKeeper; true: fully-distributed with unmanaged ZooKeeper Quorum (see hbase-env.sh)</description>
    </property>
    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>master</value>
      <description>Comma separated list of servers in the ZooKeeper Quorum. If HBASE_MANAGES_ZK is set in hbase-env.sh this is the list of servers which we will start/stop ZooKeeper on.</description>
    </property>
         (5) modify environment variables in /home/hadoop/hbase/conf/hbase-env.sh
        export JAVA_HOME=/usr/lib/jvm/java-6-sun/
    export HBASE_IDENT_STRING=$HOSTNAME
    export HBASE_MANAGES_ZK=false
         (6) Overwrite /home/hadoop/hbase/conf/regionservers on both machines with:
    master
    slave
         (7) Copy /usr/local/hadoop-0.20.2/hadoop-0.20.2-core.jar to /home/hadoop/hbase/lib/ on both machines.
          This is very important to fix the version-mismatch issue. Pay attention to its ownership and mode (755).

    17. Start zookeeper. It seems the zookeeper bundled with Hbase is not set up correctly.
    $ /home/hadoop/zookeeper/bin/zkServer.sh start
    (Optional) We can test whether zookeeper is running correctly by typing
    $ /home/hadoop/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181

    18. Start hadoop cluster
    $ /usr/local/hadoop/bin/start-dfs.sh
    $ /usr/local/hadoop/bin/start-mapred.sh

    19. Start Hbase
    $ /home/hadoop/hbase/bin/start-hbase.sh

    20. Use Hbase shell
    $ /home/hadoop/hbase/bin/hbase shell

    Check whether HBase is running smoothly: open your browser and go to
        http://localhost:60010


    21. Later on, stop the multi-node cluster by typing following code only on master
        (1) Stop HBase
    $ /home/hadoop/hbase/bin/stop-hbase.sh
         (2) Stop hadoop file system (HDFS)       
    $ /usr/local/hadoop/bin/stop-mapred.sh
    $ /usr/local/hadoop/bin/stop-dfs.sh
         (3) Stop zookeeper    
    $ /home/hadoop/zookeeper/bin/zkServer.sh stop

    Reference
    http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
    http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
    http://wiki.apache.org/hadoop/Hbase/10Minutes
    http://hbase.apache.org/book/quickstart.html
    http://alans.se/blog/2010/hadoop-hbase-cygwin-windows-7-x64/

    Author
    Tzu-Cheng Chuang


    Appendix- Install Pig and Hive
    1. Install Pig 0.8.0 on this cluster
        (1) Download pig-0.8.0.tar.gz from Apache pig project website.  Then extract the file and move it to /home/hadoop/   
    $ tar -xzf pig-0.8.0.tar.gz ; mv pig-0.8.0 /home/hadoop/
         (2) Make symbolic links under pig-0.8.0/conf/
    $ ln -s /usr/local/hadoop/conf/core-site.xml /home/hadoop/pig-0.8.0/conf/core-site.xml
    $ ln -s /usr/local/hadoop/conf/mapred-site.xml /home/hadoop/pig-0.8.0/conf/mapred-site.xml
    $ ln -s /usr/local/hadoop/conf/hdfs-site.xml /home/hadoop/pig-0.8.0/conf/hdfs-site.xml
         (3) Start pig in map-reduce mode: $ /home/hadoop/pig-0.8.0/bin/pig
         (4) Exit pig from the grunt> prompt: quit

    2. Install Hive on this cluster
        (1) Download hive-0.6.0.tar.gz from the Apache hive project website, then extract the file and move it to /home/hadoop/
    $ tar -xzf hive-0.6.0.tar.gz ; mv hive-0.6.0 ~/
         (2) Modify the java heap size in hive-0.6.0/bin/ext/execHiveCmd.sh: change 4096 to 1024
         (3) Create /tmp and /user/hive/warehouse and chmod them g+w in HDFS before a table can be created in Hive
    $ hadoop fs -mkdir /tmp
    $ hadoop fs -mkdir /user/hive/warehouse
    $ hadoop fs -chmod g+w /tmp
    $ hadoop fs -chmod g+w /user/hive/warehouse
         (4) start Hive     $ /home/hadoop/hive-0.6.0/bin/hive

     3. (Optional) Load data using Hive
        Create a file /home/hadoop/customer.txt with the following content:
    1, Kevin
    2, David
    3, Brian
    4, Jane
    5, Alice

        After the hive shell is started, type in:
    > CREATE TABLE IF NOT EXISTS customer(id INT, name STRING)
    > ROW FORMAT delimited fields terminated by ','
    > STORED AS TEXTFILE;
    > LOAD DATA INPATH '/home/hadoop/customer.txt' OVERWRITE INTO TABLE customer;
    > SELECT customer.id, customer.name from customer;

    http://chuangtc.info/ParallelComputing/SetUpHadoopClusterOnVmwareWorkstation.htm

    posted @ 2013-08-17 22:23 paulwong | Views (1745) | Comments (0)

    Finding the program that occupies a given port on Ubuntu

    Use the netstat command to inspect port usage.
    List the established connections (ESTABLISHED):

    netstat -a

    List all service ports together with the owning program (LISTEN, ESTABLISHED):

    netstat -ap



    To look at port 8080, combine it with grep:

    netstat -ap | grep 8080



    Or, to check port 8888, type the following in a terminal:

    lsof -i:8888



    To stop the program that is using the port, run kill with the corresponding PID.

    posted @ 2013-08-16 09:29 paulwong | Views (1498) | Comments (0)

    A few notes on JPA

    The classic interface for talking to a database is JDBC: you hand it SQL, execute it, and get results back. With the rise of ORM in recent years, accessing the database through objects has become mainstream, and out of that came JPA.

    JPA is likewise a set of interfaces, exposed in ORM style and implemented by vendors such as ECLIPSE LINK, HIBERNATE and OPENEJB.

    ENTITYMANAGERFACTORY: builds ENTITYMANAGERs from the configuration file
    ENTITYMANAGER: provides ORM-style access to the database
    TRANSACTION: transactional guarantees
    PERSISTENCE.XML: configuration such as the database connection details, the transaction type, and which JPA provider to use

    Using JPA inside a container:

    If the transaction type is RESOURCE_LOCAL, the caller has to do all the work itself: construct the ENTITYMANAGER, open the transaction, commit and close it, and so on, much like BMT (bean-managed transactions).
    Below is RESOURCE_LOCAL-style JPA used in a server environment.

    A data source has to be added to the container beforehand.

     persistence.xml
    <?xml version="1.0" encoding="UTF-8" ?>
    <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">

      <!-- Tutorial "unit" -->
      <persistence-unit name="Tutorial" transaction-type="RESOURCE_LOCAL">
        <non-jta-data-source>myNonJtaDataSource</non-jta-data-source>
        <class>org.superbiz.jpa.Account</class>
      </persistence-unit>

    </persistence>


    import javax.persistence.EntityManagerFactory;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityTransaction;
    import javax.persistence.PersistenceUnit;

    public class MyEjbOrServlet  {

        @PersistenceUnit(unitName="Tutorial")
        private EntityManagerFactory factory;

        // Proper exception handling left out for simplicity
        public void ejbMethodOrServletServiceMethod() throws Exception {
            EntityManager entityManager = factory.createEntityManager();

            EntityTransaction entityTransaction = entityManager.getTransaction();

            entityTransaction.begin();

            Account account = entityManager.find(Account.class, 12345);

            account.setBalance(5000);

            entityTransaction.commit();
        }

        
    }


    Below is JTA-style JPA: container + EJB + JPA + JTA. The container opens a transaction before the EJB method is invoked and commits it when the method returns; with several data sources (i.e. several ENTITYMANAGERs) it can still guarantee consistency, that is, a global transaction. This replaces the old pattern of calling USERTRANSACTION begin/commit by hand.

    A data source has to be added to the container beforehand.

     persistence.xml
    <?xml version="1.0" encoding="UTF-8" ?>
    <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">

      <!-- Tutorial "unit" -->
      <persistence-unit name="Tutorial" transaction-type="JTA">
        <jta-data-source>myJtaDataSource</jta-data-source>
        <non-jta-data-source>myNonJtaDataSource</non-jta-data-source>
        <class>org.superbiz.jpa.Account</class>
      </persistence-unit>

    </persistence>


    EJB
    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class MyEjb implements MyEjbInterface {

        @PersistenceContext(unitName = "Tutorial")
        private EntityManager entityManager;

        // Proper exception handling left out for simplicity
        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void ejbMethod() throws Exception {

        Account account = entityManager.find(Account.class, 12345);

        account.setBalance(5000);

        }
    }


    Using JPA in a J2SE environment is different again.


    persistence.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
        <persistence-unit name="SimplePU" transaction-type="RESOURCE_LOCAL">
            <provider>org.hibernate.ejb.HibernatePersistence</provider>
            <class>com.someone.jmail.valueobject.CallActivity</class>
            <class>com.someone.jmail.valueobject.Email</class>
            <properties>
                <property name="hibernate.connection.driver_class" value="com.mysql.jdbc.Driver" />
                <property name="hibernate.connection.url" value="jdbc:mysql://localhost:3306/test" />
                <property name="hibernate.connection.username" value="root" />
                <property name="hibernate.connection.password" value="12345" />
                <property name="hibernate.dialect" value="org.hibernate.dialect.MySQL5Dialect" />
                <property name="hibernate.show_sql" value="false"/>
                <property name="hibernate.format_sql" value="true"/>
                <property name="hibernate.use_sql_comments" value="false"/>
                <property name="hibernate.hbm2ddl.auto" value="none"/>
            </properties>
        </persistence-unit>
        
    </persistence>


    Dao:

    public class UserDaoImpl implements UserDao {

        public AccountInfo save(AccountInfo accountInfo) {
            // In J2SE the application builds the EntityManagerFactory itself
            // from the persistence unit defined in persistence.xml.
            EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("SimplePU");
            EntityManager em = emf.createEntityManager();

            // The application also manages the transaction explicitly.
            em.getTransaction().begin();
            em.persist(accountInfo);
            em.getTransaction().commit();

            emf.close();
            return accountInfo;
        }
    }


    posted @ 2013-08-14 18:17 paulwong | Views (604) | Comments (0)

    HBase GUI tools

    hbaseexplorer
    When downloading the 0.6 WAR, delete jasper-runtime-5.5.23.jar and jasper-compiler-5.5.23.jar from its lib directory, otherwise it will throw errors.
    http://sourceforge.net/projects/hbaseexplorer/?source=dlp

    HBaseXplorer
    https://github.com/bit-ware/HBaseXplorer/downloads

    HBase Manager
    http://sourceforge.net/projects/hbasemanagergui/

    posted @ 2013-08-14 09:51 paulwong | Views (1145) | Comments (0)

    Installing the Java JDK on Ubuntu

    Installing the Java JDK on Ubuntu is easy.

    Install-Oracle-Java-7-in-Ubuntu-via-PPA-Repository

    Installing Java is easy! (supports Ubuntu 12.04, 11.10, 11.04 and 10.04)

    A few notes up front:
    0. This installs the Oracle Java JDK (currently version 7u5).
    0-1. It includes the JDK, the JRE and the browser plugin (you cannot install only the JRE or only the plugin).
    0-2. It automatically detects 64-bit vs 32-bit.
    0-3. After installation, the installed packages are kept up to date from the ppa:webupd8team/java repository.
    0-4. When a new version is released (for example 7u6), this method installs it automatically.

    1. Installation commands
       apt-get install software-properties-common
    1-1. sudo add-apt-repository ppa:webupd8team/java

    1-2. sudo apt-get update

    1-3. sudo apt-get install oracle-java7-installer (for Java 6, use oracle-java6-installer instead)


    2. Check whether the installation succeeded
    java -version

    The current latest version:
    java version "1.7.0_05"
    Java(TM) SE Runtime Environment (build 1.7.0_05-b05)
    Java HotSpot(TM) 64-Bit Server VM (build 23.1-b03, mixed mode)

    2-1. If the command above does not report the version you just installed:
    sudo update-java-alternatives -s java-7-oracle

    Then try again:
    java -version

    3. To remove Oracle Java 7:
    sudo apt-get remove oracle-java7-installer

    posted @ 2013-08-10 13:33 paulwong | Views (891) | Comments (0)

    CHUKWA resources

    CHUKWA
    A big-data system for log analysis

    !!! This write-up is quite detailed and worth a read
    https://github.com/matrix-lisp/DataAnalysis-DataMining-With-Hadoop/blob/master/source/Hadoop-Chukwa.rst


    Chukwa configuration and a worked example
    http://my.oschina.net/xiangchen/blog/100424


    A detailed installation walkthrough for Chukwa 0.4.0, which mentions a bug in the 0.4 release
    http://blog.csdn.net/jostey/article/details/7068322


    http://chfpdxx.blog.163.com/blog/static/29542296201241494118753/


    chukwa 0.5.0 + hbase 0.94.8 + hadoop 1.1.4 + pig 0.11.1, single-machine pseudo-distributed setup
    http://f.dataguru.cn/thread-158864-1-1.html


    Deploying Chukwa 0.5 on a Hadoop cluster based on Cloudera CDH4
    http://savagegarden.iteye.com/blog/1496786


    hadoop 1.01 + hbase 0.92 + chukwa 0.5 installation and configuration, plus problems encountered
    http://blog.csdn.net/yinlei212/article/details/7452955


    Installing chukwa
    http://blog.csdn.net/zhumin726/article/details/8290784


    Installing Chukwa 0.5
    http://hi.baidu.com/zhangxinandala/item/db5d8adc22bab0d5241f4017

    posted @ 2013-08-09 17:43 paulwong | Views (388) | Comments (0)

    Linux network security resources

    iptables configuration: how to open and close ports, etc.
    http://wiki.ubuntu.org.cn/IptablesHowTo
    http://www.cnblogs.com/wangkangluo1/archive/2012/04/19/2457072.html


    If, when you run this command,
    tail -n 200 /var/log/auth.log

    you find that people keep trying to SSH in and guess the root password, it is time to install fail2ban:
    apt-get install fail2ban

    http://forum.ubuntu.org.cn/viewtopic.php?f=124&t=305533

    https://github.com/fail2ban/fail2ban

    Or tighten things up with iptables:
    http://www.debian-administration.org/articles/187



    posted @ 2013-08-03 11:08 paulwong | Views (345) | Comments (0)
