    paulwong

Open-source distributed search platform: real-time log search with ELK + Redis + Syslog-ng

Logstash + Elasticsearch + Kibana + Redis + Syslog-ng

Elasticsearch is an open-source, distributed, RESTful search engine built on Lucene. Designed for cloud deployment, it offers real-time search and is stable, reliable, fast, and easy to install and use. It supports indexing data as JSON over HTTP.

Logstash is a platform for transporting, processing, managing, and searching application logs and events. You can use it to collect and manage application logs centrally, with a web interface for queries and statistics. Logstash can also be swapped out for alternatives such as the popular Fluentd.

Kibana is a web front end for log analysis with Logstash and Elasticsearch. It lets you search, visualize, and analyze logs efficiently.

Redis is a high-performance in-memory key-value store. It is optional here, but acts as a buffer that prevents log data from being lost.

References:

    http://www.logstash.net/

    http://chenlinux.com/2012/10/21/elasticearch-simple-usage/

    http://www.elasticsearch.cn

    http://download.oracle.com/otn-pub/java/jdk/7u67-b01/jdk-7u67-linux-x64.tar.gz?AuthParam=1408083909_3bf5b46169faab84d36cf74407132bba

    http://curran.blog.51cto.com/2788306/1263416

    http://storysky.blog.51cto.com/628458/1158707/

    http://zhumeng8337797.blog.163.com/blog/static/10076891420142712316899/

    http://enable.blog.51cto.com/747951/1049411

    http://chenlinux.com/2014/06/11/nginx-access-log-to-elasticsearch/

    http://www.w3c.com.cn/%E5%BC%80%E6%BA%90%E5%88%86%E5%B8%83%E5%BC%8F%E6%90%9C%E7%B4%A2%E5%B9%B3%E5%8F%B0elkelasticsearchlogstashkibana%E5%85%A5%E9%97%A8%E5%AD%A6%E4%B9%A0%E8%B5%84%E6%BA%90%E7%B4%A2%E5%BC%95

    http://woodygsd.blogspot.com/2014/06/an-adventure-with-elk-or-how-to-replace.html

    http://www.ricardomartins.com.br/enviando-dados-externos-para-a-stack-elk/

    http://tinytub.github.io/logstash-install.html

    http://jamesmcfadden.co.uk/securing-elasticsearch-with-nginx/

    https://github.com/elasticsearch/logstash/blob/master/patterns/grok-patterns

    http://zhaoyanblog.com/archives/319.html

    http://www.vpsee.com/2014/05/install-and-play-with-elasticsearch/

IP layout:
118.x.x.x/16 is the client address range.
192.168.0.39 and 61.x.x.x are the ELK host's internal and external IPs.

Install the JDK

    http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

tar zxvf jdk-7u67-linux-x64.tar.gz\?AuthParam\=1408083909_3bf5b46169faab84d36cf74407132b
mv jdk1.7.0_67 /usr/local/
cd /usr/local/
ln -s jdk1.7.0_67 jdk
chown -R root:root jdk/

Configure environment variables
    vi /etc/profile

export JAVA_HOME=/usr/local/jdk
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH
export REDIS_HOME=/usr/local/redis
export ES_HOME=/usr/local/elasticsearch
export ES_CLASSPATH=$ES_HOME/config

Apply the variables
    source /etc/profile

Verify the version
    java -version

java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

If Java was installed previously, remove it first:
    rpm -qa |grep java
    java-1.6.0-openjdk-1.6.0.0-1.24.1.10.4.el5
    java-1.6.0-openjdk-devel-1.6.0.0-1.24.1.10.4.el5

    rpm -e java-1.6.0-openjdk-1.6.0.0-1.24.1.10.4.el5 java-1.6.0-openjdk-devel-1.6.0.0-1.24.1.10.4.el5

Install Redis

    http://redis.io/

wget http://download.redis.io/releases/redis-2.6.17.tar.gz
tar zxvf redis-2.6.17.tar.gz
mv redis-2.6.17 /usr/local/
cd /usr/local
ln -s redis-2.6.17 redis
cd /usr/local/redis
make
make install

    cd utils
    ./install_server.sh

Please select the redis port for this instance: [6379]
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf]
Selected default - /etc/redis/6379.conf
Please select the redis log file name [/var/log/redis_6379.log]
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379]
Selected default - /var/lib/redis/6379
Please select the redis executable path [/usr/local/bin/redis-server]

Edit the config file
    vi /etc/redis/6379.conf

daemonize yes
port 6379
timeout 300
tcp-keepalive 60

Start it
    /etc/init.d/redis_6379 start

exists, process is already running or crashed
If you get this error, edit /etc/init.d/redis_6379 and delete the stray newline at the top of the file, so the shebang is the first line.

Enable start on boot
chkconfig --add redis_6379

Install Elasticsearch

    http://www.elasticsearch.org/

    http://www.elasticsearch.cn

For a cluster, as long as the nodes sit on the same network segment and share the same cluster.name, Elasticsearch instances discover each other on startup and form a cluster automatically.

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.2.tar.gz
tar zxvf elasticsearch-1.3.2.tar.gz
mv elasticsearch-1.3.2 /usr/local/
cd /usr/local/
ln -s elasticsearch-1.3.2 elasticsearch
elasticsearch/bin/elasticsearch -f

[2014-08-20 13:19:05,710][INFO ][node                     ] [Jackpot] version[1.3.2], pid[19320], build[dee175d/2014-08-13T14:29:30Z]
[2014-08-20 13:19:05,727][INFO ][node                     ] [Jackpot] initializing ...
[2014-08-20 13:19:05,735][INFO ][plugins                  ] [Jackpot] loaded [], sites []
[2014-08-20 13:19:10,722][INFO ][node                     ] [Jackpot] initialized
[2014-08-20 13:19:10,723][INFO ][node                     ] [Jackpot] starting ...
[2014-08-20 13:19:10,934][INFO ][transport                ] [Jackpot] bound_address {inet[/0.0.0.0:9301]}, publish_address {inet[/61.x.x.x:9301]}
[2014-08-20 13:19:10,958][INFO ][discovery                ] [Jackpot] elasticsearch/5hUOX-2ES82s_0zvI9BUdg
[2014-08-20 13:19:14,011][INFO ][cluster.service          ] [Jackpot] new_master [Jackpot][5hUOX-2ES82s_0zvI9BUdg][Impala][inet[/61.x.x.x:9301]], reason: zen-disco-join (elected_as_master)
[2014-08-20 13:19:14,060][INFO ][http                     ] [Jackpot] bound_address {inet[/0.0.0.0:9201]}, publish_address {inet[/61.x.x.x:9201]}
[2014-08-20 13:19:14,061][INFO ][node                     ] [Jackpot] started
[2014-08-20 13:19:14,106][INFO ][gateway                  ] [Jackpot] recovered [0] indices into cluster_state

[2014-08-20 13:20:58,273][INFO ][node                     ] [Jackpot] stopping ...
[2014-08-20 13:20:58,323][INFO ][node                     ] [Jackpot] stopped
[2014-08-20 13:20:58,323][INFO ][node                     ] [Jackpot] closing ...
[2014-08-20 13:20:58,332][INFO ][node                     ] [Jackpot] closed

Press Ctrl+C to exit.

Run it in the background
    elasticsearch/bin/elasticsearch -d

Query the default port, 9200
    curl -X GET http://localhost:9200

{
  "status" : 200,
  "name" : "Steve Rogers",
  "version" : {
    "number" : "1.3.2",
    "build_hash" : "dee175dbe2f254f3f26992f5d7591939aaefd12f",
    "build_timestamp" : "2014-08-13T14:29:30Z",
    "build_snapshot" : false,
    "lucene_version" : "4.9"
  },
  "tagline" : "You Know, for Search"
}

Install Logstash

    http://logstash.net/

wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz
tar zxvf logstash-1.4.2.tar.gz
mv logstash-1.4.2 /usr/local
cd /usr/local
ln -s logstash-1.4.2 logstash
mkdir logstash/conf
chown -R root:root logstash

Logstash

Because of the JVM's default heap size, garbage-collection behavior, and so on, Logstash no longer ships as a runnable flat jar as of 1.4.0.
The old way:
java -jar logstash-1.3.3-flatjar.jar agent -f logstash.conf
The new way:
bin/logstash agent -f logstash.conf

Logstash works straight out of the download. Its command-line options are documented under logstash flags; the main ones are here:

    http://logstash.net/docs/1.2.1/flags

Install Kibana

Recent Logstash releases bundle Kibana, but it can also be deployed on its own. Kibana 3 is a pure JavaScript + HTML client, so it can be served from any HTTP server.

    http://www.elasticsearch.org/overview/elkdownloads/

wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz
tar zxvf kibana-3.1.0.tar.gz
mv kibana-3.1.0 /opt/htdocs/www/kibana
vi /opt/htdocs/www/kibana/config.js

Point it at the Elasticsearch backend:
elasticsearch: "http://"+window.location.hostname+":9200",

Add iptables rules. 6379 is the Redis port, 9200 the Elasticsearch port, and 118.x.x.x/16 the client range used during testing.

iptables -A INPUT -p tcp -m tcp -s 118.x.x.x/16 --dport 9200 -j ACCEPT

Test run, printing to the console:
bin/logstash -e 'input { stdin { } } output { stdout {} }'

Type hello to test:
2014-08-20T05:17:02.876+0000 Impala hello

Test run, shipping to the backend:
bin/logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'

Open Kibana

    http://adminimpala.campusapply.com/kibana/index.html#/dashboard/file/default.json

If indices exist, Kibana greets you with the prebuilt dashboard: "Yes- Great! We have a prebuilt dashboard: (Logstash Dashboard). See the note to the right about making it your global default"

If nothing has been indexed in the selected time span yet, you get: "No results  There were no results because no indices were found that match your selected time span"

Set Kibana's data source
Click configure dashboard in the top-right corner of Kibana, then open Index Settings:
[logstash-]YYYY.MM.DD
This pattern must match the index names Logstash writes.
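The bracketed part of the pattern is a literal prefix and the rest is a date format, so each day gets its own index. A small sketch of that expansion (the helper name here is made up for illustration):

```python
from datetime import date, timedelta

def expand_indices(prefix, start, days):
    """Expand a [prefix-]YYYY.MM.DD style pattern into per-day index names."""
    return [f"{prefix}-{(start + timedelta(days=d)).strftime('%Y.%m.%d')}"
            for d in range(days)]

# The dashboard queries one such index per day in the selected time span.
print(expand_indices("logstash", date(2014, 8, 25), 2))
# ['logstash-2014.08.25', 'logstash-2014.08.26']
```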

How Elasticsearch concepts map onto MySQL's data-definition roles:

MySQL          Elasticsearch
database       index
table          type
table schema   mapping
row            document
field          field
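The same analogy shows up in Elasticsearch's REST URLs: where MySQL addresses a row by database, table, and primary key, Elasticsearch addresses a document by /index/type/id. A toy illustration (the index and type names are hypothetical):

```python
def es_path(index, doc_type, doc_id=None):
    """Build the REST path for a document following the analogy above:
    database -> index, table -> type, row -> document (addressed by id)."""
    parts = [index, doc_type]
    if doc_id is not None:
        parts.append(str(doc_id))
    return "/" + "/".join(parts)

# MySQL: SELECT * FROM weblogs.nginx WHERE id = 42
# ES:    GET /weblogs/nginx/42
print(es_path("weblogs", "nginx", 42))
```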

ELK integration

    syslog-ng.conf

# other contents omitted

# Remote logging syslog
source s_remote {
        udp(ip(192.168.0.39) port(514));
};

#nginx log
source s_remotetcp {
        tcp(ip(192.168.0.39) port(514) log_fetch_limit(100) log_iw_size(50000) max-connections(50) );
};

filter f_filter12     { program('c1gstudio\.com'); };

#logstash syslog
destination d_logstash_syslog { udp("localhost" port(10999) localport(10998) ); };

#logstash web
destination d_logstash_web { tcp("localhost" port(10997) localport(10996) ); };

log { source(s_remote); destination(d_logstash_syslog); };

log { source(s_remotetcp); filter(f_filter12); destination(d_logstash_web); };

    logstash_syslog.conf

input {
  udp {
    port => 10999
    type => syslog
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    host => localhost
    index => "syslog-%{+YYYY}"
  }
}
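The grok line above is essentially a regex with named captures. A rough Python equivalent (simplified stand-ins for grok's SYSLOGTIMESTAMP, SYSLOGHOST, DATA, POSINT, and GREEDYDATA patterns, not the real grok library, and a made-up sample line) shows which fields it pulls out:

```python
import re

# Simplified approximations of the grok patterns; real grok is more permissive.
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3} +\d+ \d{2}:\d{2}:\d{2}) "        # SYSLOGTIMESTAMP
    r"(?P<syslog_hostname>\S+) "                                  # SYSLOGHOST
    r"(?P<syslog_program>[\w./-]+)(?:\[(?P<syslog_pid>\d+)\])?: " # DATA[POSINT]:
    r"(?P<syslog_message>.*)"                                     # GREEDYDATA
)

line = "Aug 26 10:37:14 Impala sshd[1234]: Accepted password for root"
m = SYSLOG_RE.match(line)
print(m.groupdict())
```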

    logstash_redis.conf

input {
  tcp {
    port => 10997
    type => web
  }
}
filter {
  grok {
    match => [ "message", "%{SYSLOGTIMESTAMP:syslog_timestamp} (?:%{SYSLOGFACILITY:syslog_facility} )?%{SYSLOGHOST:syslog_source} %{PROG:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{IPORHOST:clientip} - (?:%{USER:remote_user}|-) \[%{HTTPDATE:timestamp}\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:status} (?:%{NUMBER:body_bytes_sent}|-) \"(?:%{URI:http_referer}|-)\" %{QS:agent} (?:%{IPV4:http_x_forwarded_for}|-)"]
    remove_field => [ "@version","host","syslog_timestamp","syslog_facility","syslog_pid"]
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  useragent {
    source => "agent"
    prefix => "useragent_"
    remove_field => [ "useragent_device", "useragent_major", "useragent_minor", "useragent_patch", "useragent_os", "useragent_os_major", "useragent_os_minor" ]
  }
  geoip {
    source => "clientip"
    fields => ["country_name", "region_name", "city_name", "real_region_name", "latitude", "longitude"]
    remove_field => [ "[geoip][longitude]", "[geoip][latitude]", "location", "region_name" ]
  }
}

output {
  #stdout { codec => rubydebug }
  redis {
    batch => true
    batch_events => 500
    batch_timeout => 5
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash:web"
    workers => 2
  }
}
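On the output side, batch_events and batch_timeout mean a flush to the Redis list happens when either 500 events have accumulated or 5 seconds have passed, whichever comes first. A toy sketch of that rule (plain Python, no Redis involved; the function name is made up):

```python
def should_flush(pending, seconds_since_flush, batch_events=500, batch_timeout=5):
    """Flush when the batch is full OR the timeout has elapsed, which is
    the rule the batch_events / batch_timeout options express."""
    return pending >= batch_events or seconds_since_flush >= batch_timeout

print(should_flush(pending=500, seconds_since_flush=1))  # True: batch is full
print(should_flush(pending=120, seconds_since_flush=6))  # True: timed out
print(should_flush(pending=120, seconds_since_flush=1))  # False: keep buffering
```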

    logstash_web.conf

input {
  redis {
    host => "127.0.0.1"
    port => "6379"
    key => "logstash:web"
    data_type => "list"
    codec => "json"
    type => "web"
  }
}

output {
  elasticsearch {
    flush_size => 5000
    host => localhost
    idle_flush_time => 10
    index => "web-%{+YYYY.MM.dd}"
  }
  #stdout { codec => rubydebug }
}

Start Elasticsearch and Logstash
    /usr/local/elasticsearch/bin/elasticsearch -d

    /usr/local/logstash/bin/logstash agent -f /usr/local/logstash/conf/logstash_syslog.conf &
    /usr/local/logstash/bin/logstash agent -f /usr/local/logstash/conf/logstash_redis.conf &
    /usr/local/logstash/bin/logstash agent -f /usr/local/logstash/conf/logstash_web.conf &

To shut down:
ps aux|egrep 'search|logstash'
    kill pid

Install the elasticsearch-servicewrapper controller
On a server you can use the elasticsearch-servicewrapper ES plugin. It takes a parameter saying whether to run ES in the foreground or background, and it supports starting, stopping, and restarting the service (the stock ES script can only be stopped with Ctrl+C). To use it, download the service folder from https://github.com/elasticsearch/elasticsearch-servicewrapper and place it under ES's bin directory. The command set:
bin/service/elasticsearch +
console   run ES in the foreground
start     run ES in the background
stop      stop ES
install   register ES as a service started at boot
remove    unregister the boot-time service

    vi /usr/local/elasticsearch/service/elasticsearch.conf
    set.default.ES_HOME=/usr/local/elasticsearch

Example commands

View status:

    http://61.x.x.x:9200/_status?pretty=true

Check cluster health:

    http://61.x.x.x:9200/_cat/health?v

epoch      timestamp cluster       status node.total node.data shards pri relo init unassign
1409021531 10:52:11  elasticsearch yellow 2          1         20     20  0    0    20

(yellow means every primary shard is allocated but some replicas are not, which is expected when replicas are configured on a single data node.)

List the cluster's indices:

    http://61.x.x.x:9200/_cat/indices?v

health index          pri rep docs.count docs.deleted store.size pri.store.size
yellow web-2014.08.25 5   1   5990946    0            3.6gb      3.6gb
yellow kibana-int     5   1   2          0            20.7kb     20.7kb
yellow syslog-2014    5   1   709        0            585.6kb    585.6kb
yellow web-2014.08.26 5   1   1060326    0            712mb      712mb
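The _cat output is plain whitespace-separated text, which makes ad-hoc processing easy. For example, a sketch that pulls docs.count per index out of a captured listing (using two of the rows above as sample data):

```python
# Sample _cat/indices output captured as text (two rows from the listing above).
CAT_INDICES = """\
health index pri rep docs.count docs.deleted store.size pri.store.size
yellow web-2014.08.25 5 1 5990946 0 3.6gb 3.6gb
yellow syslog-2014 5 1 709 0 585.6kb 585.6kb
"""

rows = [line.split() for line in CAT_INDICES.strip().splitlines()]
header, body = rows[0], rows[1:]
idx, cnt = header.index("index"), header.index("docs.count")
docs = {row[idx]: int(row[cnt]) for row in body}
print(docs)
# {'web-2014.08.25': 5990946, 'syslog-2014': 709}
```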

Delete an index
curl -XDELETE 'http://localhost:9200/kibana-int/'
curl -XDELETE 'http://localhost:9200/logstash-2014.08.*'

Optimize an index
curl -XPOST 'http://localhost:9200/old-index-name/_optimize'

Check the logs
    tail /usr/local/elasticsearch/logs/elasticsearch.log

2.4mb]->[2.4mb]/[273mb]}{[survivor] [3.6mb]->[34.1mb]/[34.1mb]}{[old] [79.7mb]->[80mb]/[682.6mb]}
[2014-08-26 10:37:14,953][WARN ][monitor.jvm              ] [Red Shift] [gc][young][71044][54078] duration [43s], collections [1]/[46.1s], total [43s]/[26.5m], memory [384.7mb]->[123mb]/[989.8mb], all_pools {[young] [270.5mb]->[1.3mb]/[273mb]}{[survivor] [34.1mb]->[22.3mb]/[34.1mb]}{[old] [80mb]->[99.4mb]/[682.6mb]}
[2014-08-26 10:38:03,619][WARN ][monitor.jvm              ] [Red Shift] [gc][young][71082][54080] duration [6.6s], collections [1]/[9.1s], total [6.6s]/[26.6m], memory [345.4mb]->[142.1mb]/[989.8mb], all_pools {[young] [224.2mb]->[2.8mb]/[273mb]}{[survivor] [21.8mb]->[34.1mb]/[34.1mb]}{[old] [99.4mb]->[105.1mb]/[682.6mb]}
[2014-08-26 10:38:10,109][INFO ][cluster.service          ] [Red Shift] removed {[logstash-Impala-26670-2010][av8JOuEoR_iK7ZO0UaltqQ][Impala][inet[/61.x.x.x:9302]]{client=true, data=false},}, reason: zen-disco-node_failed([logstash-Impala-26670-2010][av8JOuEoR_iK7ZO0UaltqQ][Impala][inet[/61.x.x.x:9302]]{client=true, data=false}), reason transport disconnected (with verified connect)
[2014-08-26 10:39:37,899][WARN ][monitor.jvm              ] [Red Shift] [gc][young][71171][54081] duration [3.4s], collections [1]/[4s], total [3.4s]/[26.6m], memory [411.7mb]->[139.5mb]/[989.8mb], all_pools {[young] [272.4mb]->[1.5mb]/[273mb]}{[survivor] [34.1mb]->[29.1mb]/[34.1mb]}{[old] [105.1mb]->[109mb]/[682.6mb]}

Install bigdesk
The full plugin list is at http://www.elasticsearch.org/guide/reference/modules/plugins/ and there are quite a few. The ones below are, in my view, the most worth watching; the rest depend on your needs (if you need to import data, for instance, look at river).

bigdesk shows the cluster's JVM stats, disk I/O, and index create/delete activity, which makes it well suited to finding system bottlenecks and monitoring cluster state. Install it with the command below, or visit the project page: https://github.com/lukas-vlcek/bigdesk

    bin/plugin -install lukas-vlcek/bigdesk

Downloading .........................................................................................................................................................................................................................................................DONE
Installed lukas-vlcek/bigdesk into /usr/local/elasticsearch/plugins/bigdesk
Identified as a _site plugin, moving to _site structure ...

    cp -ar plugins/bigdesk/_site/ /opt/htdocs/www/bigdesk
Then open

    http://localhost/bigdesk

Security and tuning

1. A security vulnerability affects Elasticsearch 1.2 and earlier: http://bouk.co/blog/elasticsearch-rce/ Disable dynamic scripting:
    /usr/local/elasticsearch/config/elasticsearch.yml
    script.disable_dynamic: true

2. With several machines you can give each node n shards and, depending on the workload, consider dropping replicas.
Here we keep the default of 5 shards with replicas set to 0. The shard count cannot be changed after an index is defined; the replica count can be changed dynamically.
    /usr/local/elasticsearch/config/elasticsearch.yml
    index.number_of_shards: 5
    index.number_of_replicas: 0

# data directory (optional)
    path.data: /opt/elasticsearch

3. Raise the heap. The defaults are -Xms256M, -Xmx1G, -Xss256k.
Set the minimum and maximum to the same value so the heap never resizes under GC, and size it to the machine:
vi /usr/local/elasticsearch/bin/elasticsearch.in.sh
if [ "x$ES_MIN_MEM" = "x" ]; then
#ES_MIN_MEM=256m
ES_MIN_MEM=2g
fi
if [ "x$ES_MAX_MEM" = "x" ]; then
#ES_MAX_MEM=1g
ES_MAX_MEM=2g
fi

4. Reduce the shard refresh frequency (setting it to -1 disables refreshing):
curl -XPUT 'http://61.x.x.x:9200/dw-search/_settings' -d '{
"index" : {
"refresh_interval" : "-1"
}
}'

After the bulk insert is done, set it back to the initial value:
curl -XPUT 'http://61.x.x.x:9200/dw-search/_settings' -d '{
"index" : {
"refresh_interval" : "1s"
}
}'

    /etc/elasticsearch/elasticsearch.yml
How many translog entries accumulate before a flush (default 5000), and the refresh frequency (default 120s):
index.translog.flush_threshold_ops: "100000"
index.refresh_interval: 60s

5. Turn off file access-time updates

    /etc/fstab

Add noatime,nodiratime to the mount options:
    /dev/sdc1 /data1 ext4 noatime,nodiratime 0 0

Start at boot
chkconfig --add redis_6379
    vi /etc/rc.local
    /usr/local/elasticsearch/bin/elasticsearch -d
    /usr/local/logstash/bin/logstash agent -f /usr/local/logstash/conf/logstash_syslog.conf &
    /usr/local/logstash/bin/logstash agent -f /usr/local/logstash/conf/logstash_redis.conf &
    /usr/local/logstash/bin/logstash agent -f /usr/local/logstash/conf/logstash_web.conf &
    /opt/lemp startnginx

Installation problems

    ==========================================
    LoadError: Could not load FFI Provider: (NotImplementedError) FFI not available: null
    See http://jira.codehaus.org/browse/JRUBY-4583

At first I thought FFI itself was missing and reinstalled JRuby and the Ruby gems.
The real cause was that my /tmp was mounted without exec permission; creating a dedicated tmp directory fixed it. The Ruby install steps are attached below.

    mkdir /usr/local/jdk/tmp

    vi /usr/local/logstash/bin/logstash.lib.sh
JAVA_OPTS="$JAVA_OPTS -Djava.io.tmpdir=/usr/local/jdk/tmp"

    ===============================
Installing JRuby

wget http://jruby.org.s3.amazonaws.com/downloads/1.7.13/jruby-bin-1.7.13.tar.gz
tar zxvf jruby-bin-1.7.13.tar.gz
mv jruby-1.7.13 /usr/local/
cd /usr/local/
ln -s jruby-1.7.13 jruby

Installing RubyGems
Ruby 1.9.2 and later ship with RubyGems by default.
gem requires Ruby 1.8.7 or newer, and CentOS 5 ships 1.8.5, so first upgrade Ruby:

    ruby -v
    ruby 1.8.5 (2006-08-25) [x86_64-linux]

wget http://cache.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p547.tar.gz
tar zxvf ruby-1.9.3-p547.tar.gz
cd ruby-1.9.3-p547
./configure --prefix=/usr/local/ruby-1.9.3-p547
make && make install
cd /usr/local
ln -s ruby-1.9.3-p547 ruby

    vi /etc/profile
    export PATH=$JAVA_HOME/bin:/usr/local/ruby/bin:$PATH
    source /etc/profile

    gem install bundler
    gem install i18n
    gem install ffi

    =======================

Elasticsearch port security
Bind it to the internal IP.

Open iptables only to the internal network.

Reverse-proxy through the front-end machine:
server
{
    listen 9201;
    server_name big.c1gstudio.com;
    index index.html index.htm index.php;
    root /opt/htdocs/www;
    include manageip.conf;
    deny all;

    location / {
        proxy_pass http://192.168.0.39:9200;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_set_header X-Forwarded-For $remote_addr;
        add_header X-Cache Cache-156;
        proxy_redirect off;
    }

    access_log /opt/nginx/logs/access.log access;
}

In Kibana's config.js, point at the proxy port:
elasticsearch: "http://"+window.location.hostname+":9201",

posted on 2015-02-17 16:18 by paulwong, 13995 views, 1 comment. Categories: LOG ANALYST BIG DATA SYSTEM, ELASTICSEARCH


