    Distributed File Systems (DFS) are a type of file system that provides extra features over normal file systems: they are used for storing and sharing files across a wide area network and provide easy programmatic access. File systems like HDFS from Hadoop fall into this category and are widely used and quite popular.

    This tutorial provides a step-by-step guide for accessing a distributed file system and using it to store and retrieve data from Java. The Hadoop Distributed File System is used for this tutorial because it is freely available, easy to set up, and one of the most popular and well-known distributed file systems. The tutorial demonstrates how to access HDFS from Java, covering all the basic operations.

    Introduction

    A distributed file system makes files that are spread across multiple servers appear to users as if they reside in one place on the network. It allows administrators to consolidate file shares that exist on multiple servers so that they appear to be in the same location, and users can access them from a single point on the network. 
    HDFS stands for Hadoop Distributed File System and is a distributed file system designed to run on commodity hardware. Some of the features it provides are:
    •    Fault tolerance: Data is replicated, so if any of the servers goes down, the resources are still available to users.
    •    Resource management and accessibility: Users do not need to know the physical location of the data; they can access all resources through a single point. HDFS also provides a web browser interface to view the contents of the file system.
    •    High throughput access to application data.

    This tutorial will demonstrate how to use HDFS for basic distributed file system operations from Java. Java 1.6 and the Hadoop libraries listed in the Pre-requisites section are used. The development environment consists of Eclipse 3.4.2 and Hadoop 0.19.1 on Microsoft Windows XP SP3.


    Pre-requisites

    1.      Hadoop-0.19.1 installation - here and here -

    2.      Hadoop-0.19.1-core.jar file

    3.      Commons-logging-1.1.jar file

    4.      Java 1.6

    5.      Eclipse 3.4.2



    Creating New Project and FileSystem Object

    The first step is to create a new project in Eclipse and then create a new class in that project. 
    Now add all the jar files mentioned in the pre-requisites to the project's build path.
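
    For reference, the rest of the examples in this tutorial assume the following imports from the Hadoop 0.19 API and the standard JDK (a minimal sketch; exact package names may differ slightly in other Hadoop versions):

    import java.io.FileInputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;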
    The first step in accessing the Hadoop Distributed File System (HDFS) is to create a file system object.
    Without this object you cannot perform any operations on HDFS, so it always has to be created first.
    Two input parameters are required to create the object: the host name and the port on which the HDFS NameNode listens. 
    The code below shows how to create a file system object that accesses HDFS. 

    // Point the configuration at the HDFS NameNode
    Configuration config = new Configuration();

    config.set("fs.default.name","hdfs://127.0.0.1:9000/");

    // Obtain a handle to the distributed file system
    FileSystem dfs = FileSystem.get(config);


    Here the host name is “127.0.0.1” and the port is “9000”.

    Various HDFS operations

    Now we will see various operations that can be performed on HDFS.

    Creating Directory

    We will start by creating a directory.
    The first step in using HDFS is to create a directory where the data will be stored. 
    Let us create a directory named “TestDirectory”.

    String dirName = "TestDirectory";

    Path src = new Path(dfs.getWorkingDirectory()+"/"+dirName);

    dfs.mkdirs(src);

    Here the dfs.getWorkingDirectory() function returns the path of the working directory, the base directory under which all the data will be stored. The mkdirs() function accepts an object of type Path, so a Path object is created first, as shown above. Since the directory is to be created inside the working directory, the Path object is built accordingly. The dfs.mkdirs(src) call will create a directory named “TestDirectory” in the working folder.

    Subdirectories can also be created inside “TestDirectory”; in that case the path specified when creating the Path object changes. For example, a directory named “subDirectory” can be created inside “TestDirectory” as shown in the code below.

    String subDirName = "subDirectory";

    Path src = new Path(dfs.getWorkingDirectory()+"/TestDirectory/"+ subDirName);

    dfs.mkdirs(src);

    Deleting Directory or file

    An existing directory in HDFS can be deleted. The code below shows how to delete an existing directory.

    String dirName = "TestDirectory";

    Path src = new Path(dfs.getWorkingDirectory()+"/"+dirName);

    dfs.delete(src);


    Please note that the delete() method can also be used to delete files; whatever needs to be deleted is specified in the Path object, as sketched below.
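
    For example, a single file could be removed with the two-argument overload delete(Path, boolean), where the boolean flag controls recursive deletion of directories (a sketch; this overload replaces the deprecated single-argument form in newer Hadoop releases):

    Path src = new Path(dfs.getWorkingDirectory()+"/TestDirectory/subDirectory/file1.txt");

    // false = do not recurse; use true when deleting a non-empty directory
    dfs.delete(src, false);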

    Copying file to/from HDFS from/to Local file system

    The basic aim of using HDFS is to store data, so now we will see how to put data into HDFS.
    Once the directory is created, data can be copied into HDFS from the local file system.
    Consider a file named “file1.txt” located at “E:\HDFS” in the local file system that needs to be copied under the folder “subDirectory” (created earlier) in HDFS.
    The code below shows how to copy a file from the local file system to HDFS.

    Path src = new Path("E://HDFS/file1.txt");

    Path dst = new Path(dfs.getWorkingDirectory()+"/TestDirectory/subDirectory/");

    dfs.copyFromLocalFile(src, dst);


    Here src and dst are the Path objects specifying, respectively, the local file system path where the file is located and the HDFS path to which it should be copied. The copyFromLocalFile() method copies a file from the local file system to HDFS.

    Similarly, a file can also be copied from HDFS to the local file system. The code below shows how to do this.

    Path src = new Path(dfs.getWorkingDirectory()+"/TestDirectory/subDirectory/file1.txt");

    Path dst = new Path("E://HDFS/");

    dfs.copyToLocalFile(src, dst);

    Here the copyToLocalFile() method copies the file from HDFS to the local file system.


    Creating a file and writing data in it

    It is also possible to create a file in HDFS and write data into it. So, if required, instead of directly copying a file from the local file system, a file can first be created and data can then be written into it.
    The code below shows how to create a file named “file2.txt” in an HDFS directory.

    Path src = new Path(dfs.getWorkingDirectory()+"/TestDirectory/subDirectory/file2.txt");

    dfs.createNewFile(src);


    Here the createNewFile() method creates the file in HDFS at the path given by the src object.

    Now that the file is created, data can be written into it. The code below shows how to write the data in “file1.txt” on the local file system to “file2.txt” in HDFS.

    Path src = new Path(dfs.getWorkingDirectory()+"/TestDirectory/subDirectory/file2.txt");

    FileInputStream fis = new FileInputStream("E://HDFS/file1.txt");

    // Read the whole local file into a byte array
    int len = fis.available();
    byte[] btr = new byte[len];
    fis.read(btr);
    fis.close();

    // Write the bytes to the new HDFS file
    FSDataOutputStream fs = dfs.create(src);
    fs.write(btr);
    fs.close();


    Here the write() method of FSDataOutputStream writes the data to the file located in HDFS.
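
    Note that reading the whole file through fis.available() works for small local files but is not a reliable way to obtain a stream's length in general. A more robust alternative is to stream the data in chunks; the sketch below uses Hadoop's org.apache.hadoop.io.IOUtils helper (assuming it is present in your Hadoop distribution):

    FileInputStream in = new FileInputStream("E://HDFS/file1.txt");
    FSDataOutputStream out = dfs.create(src);

    // Copy in 4 KB chunks; the final argument closes both streams when done
    org.apache.hadoop.io.IOUtils.copyBytes(in, out, 4096, true);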

    Reading data from a file

    Reading data from files is necessary to perform various operations on that data, and data can be read from files stored in HDFS. 
    The code below shows how to retrieve data from a file in HDFS. Here data is read from the file (file1.txt) in the directory (subDirectory) that was created earlier.

    Path src = new Path(dfs.getWorkingDirectory()+"/TestDirectory/subDirectory/file1.txt");

    FSDataInputStream fs = dfs.open(src);

    String str = null;

    while ((str = fs.readLine()) != null)
    {
        System.out.println(str);
    }

    fs.close();


    Here the readLine() method of FSDataInputStream (inherited from DataInputStream) is used to read data from the file located in HDFS, and src is the Path object specifying the path of the file in HDFS that has to be read.
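
    Because readLine() is deprecated on DataInputStream, a common alternative is to wrap the stream in a standard java.io.BufferedReader, as sketched below (this also requires importing java.io.BufferedReader and java.io.InputStreamReader):

    FSDataInputStream in = dfs.open(src);
    BufferedReader reader = new BufferedReader(new InputStreamReader(in));

    String line;
    while ((line = reader.readLine()) != null)
    {
        System.out.println(line);
    }
    reader.close();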

    Miscellaneous operations that can be performed on HDFS

    Below are some of the basic operations that can be performed on HDFS.

    Below is the code that can be used to check whether a particular file or directory exists in HDFS; it returns true if it exists and false if it does not. The dfs.exists() method is used for this.

    Path src = new Path(dfs.getWorkingDirectory()+"/TestDirectory/subDirectory/file1.txt");

    System.out.println(dfs.exists(src));

    Below is the code that can be used to check the default block size into which a file would be split. It returns the block size in bytes. The dfs.getDefaultBlockSize() method is used for this.

    System.out.println(dfs.getDefaultBlockSize());

    To check the default replication factor, the dfs.getDefaultReplication() method can be used, as shown below.

    System.out.println(dfs.getDefaultReplication());

    To check whether a given path is an HDFS directory or a file, the dfs.isDirectory() or dfs.isFile() methods can be used, as shown below.

    Path src = new Path(dfs.getWorkingDirectory()+"/TestDirectory/subDirectory/file1.txt");
    System.out.println(dfs.isDirectory(src));
    System.out.println(dfs.isFile(src));
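
    Finally, when all operations are finished, the FileSystem object can be closed to release the underlying connection:

    dfs.close();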

    Conclusion
    We have just learned some of the basics of the Hadoop Distributed File System: how to create and delete a directory, how to copy files between HDFS and the local file system, how to create and delete files inside a directory, how to write data to a file, and how to read data from a file. We also saw a few other operations that can be performed on HDFS. From what we have done, we can say that HDFS is easy to use for data storage and retrieval.

    References:
    http://hadoop.apache.org/common/docs/current/hdfs_design.html

    http://en.wikipedia.org/wiki/Hadoop
