InputSplit extends the Writable interface, so an InputSplit actually carries four interface methods: the read/write pair (readFields and write); getLength, which gives the size of the data covered by this split; and getLocations, which tells you which hosts the split sits on (blkLocations[blkIndex].getHosts()). One thing worth spelling out here: a block corresponds either to one split or to several splits, so every split can obtain its host information from the block it belongs to. I would also guess that the block size is supposed to be an integer multiple of the split size, since otherwise a split could straddle two blocks.
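To make those four methods concrete, here is a minimal sketch of an InputSplit implementation. DemoSplit is a toy class of my own, not anything in Hadoop; it just shows what each method has to supply (like FileSplit, it does not serialize the host list, since locations are only scheduling hints):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.mapred.InputSplit;

// Toy split: an offset/length within some file plus the hosts that
// hold the underlying block (roughly what FileSplit keeps internally).
public class DemoSplit implements InputSplit {
  private long start;      // offset of this split within the file
  private long length;     // number of bytes in this split
  private String[] hosts;  // hosts of the block this split falls in

  public DemoSplit() {}    // a Writable needs a no-arg constructor

  public DemoSplit(long start, long length, String[] hosts) {
    this.start = start;
    this.length = length;
    this.hosts = hosts;
  }

  public long getLength() {          // size of the data in this split
    return length;
  }

  public String[] getLocations() {   // which hosts the split lives on
    return hosts;
  }

  public void write(DataOutput out) throws IOException {
    out.writeLong(start);
    out.writeLong(length);
  }

  public void readFields(DataInput in) throws IOException {
    start = in.readLong();
    length = in.readLong();
    hosts = new String[0];           // locations are not serialized
  }
}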
As for RecordReader, this interface essentially exists to maintain a stream of <K, V> pairs. Any class implementing it needs a constructor of the form "(Configuration conf, Class<? extends InputSplit> split)", because a RecordReader is specific: it reads one particular kind of split, and so must be bound to that split type. The most important method in the interface is next: to read a K and a V through it, you first create the K and V objects with createKey and createValue, then hand them to next as arguments, letting next modify the data members of the objects behind its formal parameters.
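Here is how a caller drives that protocol, as I understand it; a small sketch using LineRecordReader from the old mapred API (ReaderDemo and input.txt are placeholders of mine):

import java.io.File;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.LineRecordReader;
import org.apache.hadoop.mapred.RecordReader;

public class ReaderDemo {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    long len = new File("input.txt").length();
    // Bind the reader to one concrete split covering the whole file.
    FileSplit split = new FileSplit(new Path("input.txt"), 0, len, new String[0]);
    RecordReader<LongWritable, Text> reader = new LineRecordReader(conf, split);

    LongWritable key = reader.createKey();   // allocate K once
    Text value = reader.createValue();       // allocate V once
    // next() returns no new objects; it overwrites key and value in place.
    while (reader.next(key, value)) {
      System.out.println(key.get() + "\t" + value);
    }
    reader.close();
  }
}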
A file (FileStatus) is stored as multiple blocks (BlockLocation[]), each block of a fixed size (file.getBlockSize()). getSplits then works out the size each split should get (computeSplitSize(goalSize, minSize, blockSize)), carves the file of length file.getLen() into a series of splits, gives the final piece smaller than one split its own split, and returns the result of carving up the file (return splits.toArray(new FileSplit[splits.size()])).
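The carving loop itself is short. Below is my simplified rendering of it (I drop the small slack factor, SPLIT_SLOP, that the real getSplits allows the tail, and I pass a single hosts array instead of looking hosts up per block):

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileSplit;

public class CarveDemo {
  // Carve a file of the given length into FileSplits of splitSize bytes,
  // giving the undersized leftover at the end its own split.
  static List<FileSplit> carve(Path path, long length, long splitSize,
                               String[] hosts) {
    List<FileSplit> splits = new ArrayList<FileSplit>();
    long remaining = length;
    while (remaining >= splitSize) {
      splits.add(new FileSplit(path, length - remaining, splitSize, hosts));
      remaining -= splitSize;
    }
    if (remaining != 0) {   // the final piece smaller than one split
      splits.add(new FileSplit(path, length - remaining, remaining, hosts));
    }
    return splits;
  }
}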
For a job, getSplits first obtains the input paths (conf.get("mapred.input.dir", "")), from which a Path[] is derived. For each Path you can get a FileSystem (FileSystem fs = p.getFileSystem(job)) and then a FileStatus[] (FileStatus[] matches = fs.globStatus(p, inputFilter)). Each FileStatus in that array is then examined: if it is a dir, iterate FileStatus stat : fs.listStatus(globStat.getPath(), inputFilter) and add each stat to the final result set result; if it is a file, it goes into the result set directly. Put briefly: a job ends up with all the files under input.dir, each recorded as a FileStatus.
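Condensed into code, the listing phase looks roughly like this (listInputs is my own helper name; inputFilter stands for whatever PathFilter the job has configured):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.mapred.JobConf;

public class ListDemo {
  // Collect every input file of the job, each recorded as a FileStatus.
  static List<FileStatus> listInputs(JobConf job, Path[] dirs,
                                     PathFilter inputFilter) throws IOException {
    List<FileStatus> result = new ArrayList<FileStatus>();
    for (Path p : dirs) {
      FileSystem fs = p.getFileSystem(job);
      FileStatus[] matches = fs.globStatus(p, inputFilter);
      if (matches == null) continue;           // pattern matched nothing
      for (FileStatus globStat : matches) {
        if (globStat.isDir()) {                // a dir: take its children
          for (FileStatus stat : fs.listStatus(globStat.getPath(), inputFilter)) {
            result.add(stat);
          }
        } else {                               // a plain file: take it directly
          result.add(globStat);
        }
      }
    }
    return result;
  }
}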
The official description of MultiFileSplit reads: "A sub-collection of input files. Unlike {@link FileSplit}, MultiFileSplit class does not represent a split of a file, but a split of input files into smaller sets. The atomic unit of split is a file." A MultiFileSplit holds several small files, each of which should belong to just one block; getLocations then returns the getHosts of the blocks behind all those small files, and getLength returns the total size of all the files.
As for MultiFileInputFormat, its getSplits returns a collection of MultiFileSplits, that is, one small-file cluster after another. A simple example makes it clear. Suppose the job has 5 small files with sizes of, say, 2, 3, 5, 4 and 1, and suppose we want 3 splits in total. First compute double avgLengthPerSplit = ((double)totLength) / numSplits, which here comes to 5; then cut along that average, and the three file clusters obtained are: {files 1 and 2}, {file 3}, {files 4 and 5}. If instead the five files had sizes of, say, 2, 4, 2, 3 and 4, we would end up with four file clusters: {file 1}, {file 2}, {files 3 and 4}, {file 5}. Note also that this class's getRecordReader is still an abstract method, so subclasses have to implement it.
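The grouping can be sketched as a greedy cut along the average (my own simplification: it works on bare lengths and returns clusters of sizes rather than real MultiFileSplits, but it reproduces both examples above):

import java.util.ArrayList;
import java.util.List;

public class GroupDemo {
  // Greedily pack file lengths into clusters of roughly avg size each;
  // a single file is the atomic unit and is never cut.
  static List<List<Long>> group(long[] lengths, int numSplits) {
    long totLength = 0;
    for (long len : lengths) totLength += len;
    double avgLengthPerSplit = ((double) totLength) / numSplits;

    List<List<Long>> clusters = new ArrayList<List<Long>>();
    List<Long> current = new ArrayList<Long>();
    long currentLength = 0;
    for (long len : lengths) {
      if (!current.isEmpty() && currentLength + len > avgLengthPerSplit) {
        clusters.add(current);      // close the cluster before it overflows
        current = new ArrayList<Long>();
        currentLength = 0;
      }
      current.add(len);
      currentLength += len;
    }
    if (!current.isEmpty()) clusters.add(current);
    return clusters;
  }
}

With the sizes from the first example this yields {2, 3}, {5}, {4, 1}; with the second, {2}, {4}, {2, 3}, {4}.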
1. First, A has to support this access mode:
Configure A's /etc/ssh/sshd_config so that these two options are set as follows:
RSAAuthentication yes
PubkeyAuthentication yes
2. On B, generate id_rsa.pub, and finally append the contents of that file to the end of A's authorized_keys file using ">>".
3. On B, ssh <A's ip> or ssh <A's hostname> will now log you into A without a password.
But this only works under a precondition, and many people overlook it, wasting a lot of effort without success; I was one of them, and it took me a long time to find where the problem was.

Both machine A and machine B hold many accounts, and when you type the ssh command on B you have not specified which account on A to connect to. So what is the default, unwritten rule here? It is that the account you are currently using on B (say its name is haha) is taken as the account you expect to reach on A. You can connect to the haha account on A explicitly via ssh -l haha [hostname] or ssh haha@[hostname]; under the implicit rule, the system simply uses your current account on B as the account to log into on A.

Therefore the precondition for password-free access is: A and B must have accounts with the same name, identical down to the letter case. (This left me quite frustrated: I was connecting from cygwin on Windows to a Linux box, the Windows account started with an uppercase letter while the Linux account started with a lowercase one, and it took me ages to spot that this was the crux of the problem.) This, by the way, is exactly why configuring Hadoop for distributed computation requires every machine to have a completely identical user name.
Since we are on the subject of these caveats, one more reminder: in step 2 of the three steps given above, the id_rsa.pub file must have been generated under that matching account, otherwise it still will not work.
To borrow a line from Steve Jobs:
The only way to be truly satisfied is to do what you believe is great work, and the only way to do great work is to love what you do!
I feel that anyone who manages to live up to this is truly fortunate: to strive, to fight, to realize your own worth, and to be satisfied with your own performance. That is something I often tell myself.

As for me now, my job is settled and my girlfriend is settled, which is to say my future wife is settled; what remains for me is to strive, to work hard, to fight.

I am grateful to have met such a wife, who supports me and cares for me. I do not know whether I will be very successful someday, but I do know that with such a good partner behind me, whatever I do, I do it with a steady heart. I know that with her I am happy, and I will surely bring her happiness too. I promise!
All right, time to paste the code, heh:
#!/bin/sh
# Merge all Hadoop daemon logs into one file, tagging every line with the
# daemon it came from, then sort the whole thing by timestamp.
cd /hadoop/logs
file=log_name.txt

# Start from a clean output file.
if [ -e "$file" ]; then
    rm "$file"
fi

for cur in *.log
do
    # Log names look like hadoop-<user>-<daemon>-<host>.log, so the
    # third '-'-separated field is the daemon name.
    name=`echo "$cur" | cut -d'-' -f3`
    # Keep only the timestamped lines and append "[daemon]" to each.
    grep '^2008' "$cur" | sed "s/^.*$/&[$name]/" >> "$file"
done

# Sort in place so lines from different daemons interleave chronologically.
sort "$file" -o "$file"
Running it produces:
2008-11-14 10:08:47,671 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG: [namenode]
2008-11-14 10:08:48,140 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=NameNode, port=9000[namenode]
2008-11-14 10:08:48,171 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: bacoo/192.168.1.34:9000[namenode]
2008-11-14 10:08:48,171 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null[namenode]
2008-11-14 10:08:48,234 INFO org.apache.hadoop.dfs.NameNodeMetrics: Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext[namenode]
2008-11-14 10:08:48,875 INFO org.apache.hadoop.dfs.FSNamesystemMetrics: Initializing FSNamesystemMeterics using context object:org.apache.hadoop.metrics.spi.NullContext[namenode]
2008-11-14 10:08:48,875 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=Zhaoyb,None,root,Administrators,Users,Debugger,Users[namenode]
2008-11-14 10:08:48,875 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true[namenode]
2008-11-14 10:08:48,875 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=supergroup[namenode]
2008-11-14 10:08:48,890 INFO org.apache.hadoop.fs.FSNamesystem: Registered FSNamesystemStatusMBean[namenode]
2008-11-14 10:08:48,953 INFO org.apache.hadoop.dfs.Storage: Edits file edits of size 4 edits # 0 loaded in 0 seconds.[namenode]
2008-11-14 10:08:48,953 INFO org.apache.hadoop.dfs.Storage: Image file of size 80 loaded in 0 seconds.[namenode]
2008-11-14 10:08:48,953 INFO org.apache.hadoop.dfs.Storage: Number of files = 0[namenode]
2008-11-14 10:08:48,953 INFO org.apache.hadoop.dfs.Storage: Number of files under construction = 0[namenode]
2008-11-14 10:08:48,953 INFO org.apache.hadoop.fs.FSNamesystem: Finished loading FSImage in 657 msecs[namenode]
2008-11-14 10:08:49,000 INFO org.apache.hadoop.dfs.StateChange: STATE* Leaving safe mode after 0 secs.[namenode]
2008-11-14 10:08:49,000 INFO org.apache.hadoop.dfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes[namenode]
2008-11-14 10:08:49,000 INFO org.apache.hadoop.dfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks[namenode]
2008-11-14 10:08:49,609 INFO org.mortbay.util.Credential: Checking Resource aliases[namenode]
2008-11-14 10:08:50,015 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4[namenode]
2008-11-14 10:08:50,015 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs][namenode]
2008-11-14 10:08:50,015 INFO org.mortbay.util.Container: Started HttpContext[/static,/static][namenode]
2008-11-14 10:08:54,656 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@17f11fb[namenode]
2008-11-14 10:08:55,453 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/][namenode]
2008-11-14 10:08:55,468 INFO org.apache.hadoop.fs.FSNamesystem: Web-server up at: 0.0.0.0:50070[namenode]
2008-11-14 10:08:55,468 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50070[namenode]
2008-11-14 10:08:55,468 INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@61a907[namenode]
2008-11-14 10:08:55,484 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting[namenode]
2008-11-14 10:08:55,484 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting[namenode]
2008-11-14 10:08:55,515 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9000: starting[namenode]
2008-11-14 10:08:55,515 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000: starting[namenode]
2008-11-14 10:08:55,515 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9000: starting[namenode]
2008-11-14 10:08:55,515 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000: starting[namenode]
2008-11-14 10:08:55,515 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000: starting[namenode]
2008-11-14 10:08:55,531 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000: starting[namenode]
2008-11-14 10:08:55,531 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9000: starting[namenode]
2008-11-14 10:08:55,531 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000: starting[namenode]
2008-11-14 10:08:55,531 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9000: starting[namenode]
2008-11-14 10:08:55,531 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000: starting[namenode]
2008-11-14 10:08:56,015 INFO org.apache.hadoop.dfs.NameNode.Secondary: STARTUP_MSG: [secondarynamenode]
2008-11-14 10:08:56,156 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=SecondaryNameNode, sessionId=null[secondarynamenode]
2008-11-14 10:08:56,468 WARN org.apache.hadoop.dfs.Storage: Checkpoint directory \tmp\hadoop-SYSTEM\dfs\namesecondary is added.[secondarynamenode]
2008-11-14 10:08:56,546 INFO org.mortbay.util.Credential: Checking Resource aliases[secondarynamenode]
2008-11-14 10:08:56,609 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4[secondarynamenode]
2008-11-14 10:08:56,609 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs][secondarynamenode]
2008-11-14 10:08:56,609 INFO org.mortbay.util.Container: Started HttpContext[/static,/static][secondarynamenode]
2008-11-14 10:08:56,953 INFO org.mortbay.jetty.servlet.XMLConfiguration: No WEB-INF/web.xml in file:/E:/cygwin/hadoop/webapps/secondary. Serving files and default/dynamic servlets only[secondarynamenode]
2008-11-14 10:08:56,953 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@b1a4e2[secondarynamenode]
2008-11-14 10:08:57,062 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/][secondarynamenode]
2008-11-14 10:08:57,078 INFO org.apache.hadoop.dfs.NameNode.Secondary: Secondary Web-server up at: 0.0.0.0:50090[secondarynamenode]
2008-11-14 10:08:57,078 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50090[secondarynamenode]
2008-11-14 10:08:57,078 INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@18a8ce2[secondarynamenode]
2008-11-14 10:08:57,078 WARN org.apache.hadoop.dfs.NameNode.Secondary: Checkpoint Period :3600 secs (60 min)[secondarynamenode]
2008-11-14 10:08:57,078 WARN org.apache.hadoop.dfs.NameNode.Secondary: Log Size Trigger :67108864 bytes (65536 KB)[secondarynamenode]
2008-11-14 10:08:59,828 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG: [jobtracker]
2008-11-14 10:09:00,015 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=JobTracker, port=9001[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9001: starting[jobtracker]
2008-11-14 10:09:00,031 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting[jobtracker]
2008-11-14 10:09:00,125 INFO org.mortbay.util.Credential: Checking Resource aliases[jobtracker]
2008-11-14 10:09:01,703 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4[jobtracker]
2008-11-14 10:09:01,703 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs][jobtracker]
2008-11-14 10:09:01,703 INFO org.mortbay.util.Container: Started HttpContext[/static,/static][jobtracker]
2008-11-14 10:09:02,312 INFO org.mortbay.util.Container: Started org.mortbay.jetty.servlet.WebApplicationHandler@1cd280b[jobtracker]
2008-11-14 10:09:08,359 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/][jobtracker]
2008-11-14 10:09:08,375 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001[jobtracker]
2008-11-14 10:09:08,375 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030[jobtracker]
2008-11-14 10:09:08,375 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=[jobtracker]
2008-11-14 10:09:08,375 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50030[jobtracker]
2008-11-14 10:09:08,375 INFO org.mortbay.util.Container: Started org.mortbay.jetty.Server@16a9b9c[jobtracker]
2008-11-14 10:09:12,984 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING[jobtracker]
2008-11-14 10:09:56,894 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG: [datanode]
2008-11-14 10:10:02,516 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG: [tasktracker]
2008-11-14 10:10:08,768 INFO org.apache.hadoop.dfs.Storage: Formatting ...[datanode]
2008-11-14 10:10:08,768 INFO org.apache.hadoop.dfs.Storage: Storage directory /hadoop/hadoopfs/data is not formatted.[datanode]
2008-11-14 10:10:11,343 INFO org.apache.hadoop.dfs.DataNode: Registered FSDatasetStatusMBean[datanode]
2008-11-14 10:10:11,347 INFO org.apache.hadoop.dfs.DataNode: Opened info server at 50010[datanode]
2008-11-14 10:10:11,352 INFO org.apache.hadoop.dfs.DataNode: Balancing bandwith is 1048576 bytes/s[datanode]
2008-11-14 10:10:16,430 INFO org.mortbay.util.Credential: Checking Resource aliases[tasktracker]
2008-11-14 10:10:17,976 INFO org.mortbay.util.Credential: Checking Resource aliases[datanode]
2008-11-14 10:10:20,068 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4[datanode]
2008-11-14 10:10:20,089 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs][datanode]
2008-11-14 10:10:20,089 INFO org.mortbay.util.Container: Started HttpContext[/static,/static][datanode]
2008-11-14 10:10:20,725 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4[tasktracker]
2008-11-14 10:10:20,727 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs][tasktracker]
2008-11-14 10:10:20,727 INFO org.mortbay.util.Container: Started HttpContext[/static,/static][tasktracker]
2008-11-14 10:10:27,078 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/localhost[jobtracker]
2008-11-14 10:10:32,171 INFO org.apache.hadoop.dfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 192.168.1.167:50010 storage DS-1556534590-127.0.0.1-50010-1226628640386[namenode]
2008-11-14 10:10:32,187 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.1.167:50010[namenode]
2008-11-14 10:13:57,171 WARN org.apache.hadoop.dfs.Storage: Checkpoint directory \tmp\hadoop-SYSTEM\dfs\namesecondary is added.[secondarynamenode]
2008-11-14 10:13:57,187 INFO org.apache.hadoop.fs.FSNamesystem: Number of transactions: 5 Total time for transactions(ms): 0 Number of syncs: 3 SyncTimes(ms): 4125 [namenode]
2008-11-14 10:13:57,187 INFO org.apache.hadoop.fs.FSNamesystem: Roll Edit Log from 192.168.1.34[namenode]
2008-11-14 10:13:57,953 INFO org.apache.hadoop.dfs.NameNode.Secondary: Downloaded file fsimage size 80 bytes.[secondarynamenode]
2008-11-14 10:13:57,968 INFO org.apache.hadoop.dfs.NameNode.Secondary: Downloaded file edits size 288 bytes.[secondarynamenode]
2008-11-14 10:13:58,593 INFO org.apache.hadoop.fs.FSNamesystem: fsOwner=Zhaoyb,None,root,Administrators,Users,Debugger,Users[secondarynamenode]
2008-11-14 10:13:58,593 INFO org.apache.hadoop.fs.FSNamesystem: isPermissionEnabled=true[secondarynamenode]
2008-11-14 10:13:58,593 INFO org.apache.hadoop.fs.FSNamesystem: supergroup=supergroup[secondarynamenode]
2008-11-14 10:13:58,640 INFO org.apache.hadoop.dfs.Storage: Edits file edits of size 288 edits # 5 loaded in 0 seconds.[secondarynamenode]
2008-11-14 10:13:58,640 INFO org.apache.hadoop.dfs.Storage: Number of files = 0[secondarynamenode]
2008-11-14 10:13:58,640 INFO org.apache.hadoop.dfs.Storage: Number of files under construction = 0[secondarynamenode]
2008-11-14 10:13:58,718 INFO org.apache.hadoop.dfs.Storage: Image file of size 367 saved in 0 seconds.[secondarynamenode]
2008-11-14 10:13:58,796 INFO org.apache.hadoop.fs.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0 Number of syncs: 0 SyncTimes(ms): 0 [secondarynamenode]
2008-11-14 10:13:58,921 INFO org.apache.hadoop.dfs.NameNode.Secondary: Posted URL 0.0.0.0:50070putimage=1&port=50090&machine=192.168.1.34&token=-16:145044639:0:1226628551796:1226628513000[secondarynamenode]
2008-11-14 10:13:59,078 INFO org.apache.hadoop.fs.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0 Number of syncs: 0 SyncTimes(ms): 0 [namenode]
2008-11-14 10:13:59,078 INFO org.apache.hadoop.fs.FSNamesystem: Roll FSImage from 192.168.1.34[namenode]
2008-11-14 10:13:59,265 WARN org.apache.hadoop.dfs.NameNode.Secondary: Checkpoint done. New Image Size: 367[secondarynamenode]
2008-11-14 10:29:02,171 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 0 time(s).[secondarynamenode]
2008-11-14 10:29:04,187 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 1 time(s).[secondarynamenode]
2008-11-14 10:29:06,109 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 2 time(s).[secondarynamenode]
2008-11-14 10:29:08,015 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 3 time(s).[secondarynamenode]
2008-11-14 10:29:10,031 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 4 time(s).[secondarynamenode]
2008-11-14 10:29:11,937 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 5 time(s).[secondarynamenode]
2008-11-14 10:29:13,843 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 6 time(s).[secondarynamenode]
2008-11-14 10:29:15,765 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 7 time(s).[secondarynamenode]
2008-11-14 10:29:17,671 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 8 time(s).[secondarynamenode]
2008-11-14 10:29:19,593 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 9 time(s).[secondarynamenode]
2008-11-14 10:29:21,078 ERROR org.apache.hadoop.dfs.NameNode.Secondary: Exception in doCheckpoint: [secondarynamenode]
2008-11-14 10:29:21,171 ERROR org.apache.hadoop.dfs.NameNode.Secondary: java.io.IOException: Call failed on local exception[secondarynamenode]
2008-11-14 10:34:23,156 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 0 time(s).[secondarynamenode]
2008-11-14 10:34:25,078 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 1 time(s).[secondarynamenode]
2008-11-14 10:34:27,078 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 2 time(s).[secondarynamenode]
2008-11-14 10:34:29,078 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 3 time(s).[secondarynamenode]
2008-11-14 10:34:31,000 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 4 time(s).[secondarynamenode]
2008-11-14 10:34:32,906 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 5 time(s).[secondarynamenode]
2008-11-14 10:34:34,921 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 6 time(s).[secondarynamenode]
2008-11-14 10:34:36,828 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 7 time(s).[secondarynamenode]
2008-11-14 10:34:38,640 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 8 time(s).[secondarynamenode]
2008-11-14 10:34:40,546 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: Bacoo/192.168.1.34:9000. Already tried 9 time(s).[secondarynamenode]
2008-11-14 10:34:41,468 ERROR org.apache.hadoop.dfs.NameNode.Secondary: Exception in doCheckpoint: [secondarynamenode]
2008-11-14 10:34:41,468 ERROR org.apache.hadoop.dfs.NameNode.Secondary: java.io.IOException: Call failed on local exception[secondarynamenode]
2008-11-14 10:38:43,359 INFO org.apache.hadoop.dfs.NameNode.Secondary: SHUTDOWN_MSG: [secondarynamenode]