
    Software is a relentless pursuit of quality


    Some advanced vim usage

    Note: if you put the commands from this article into a configuration file (such as .vimrc) instead of running them on the command line, drop the leading colon.

    ★ Key mappings

    :maptype key command

    Here, key is the key (or key sequence) to map, command is the command it maps to, and maptype is one of the following:

    map  mapping for normal, visual and operator-pending modes

    vmap mapping for visual mode

    nmap mapping for normal mode

    omap mapping for operator-pending mode

    map! mapping for insert and command-line modes

    imap mapping for insert mode

    cmap mapping for command-line mode

    Note: operator-pending mode is the state in the middle of entering a command, i.e. you have started typing a multi-key command but have not yet finished it.

    For example,

    :map <F2> gg

    maps the F2 key to the command gg in normal, visual and operator-pending modes; in other words, pressing F2 in any of these three modes is equivalent to typing the key sequence gg, which jumps to the first line.

    To keep vim from re-mapping the contents of your mapped command a second time, use noremap instead; its format is the same as map's. The maptype variants then correspond as follows:

    noremap  non-recursive mapping for normal, visual and operator-pending modes

    vnoremap non-recursive mapping for visual mode

    nnoremap non-recursive mapping for normal mode

    onoremap non-recursive mapping for operator-pending mode

    noremap! non-recursive mapping for insert and command-line modes

    inoremap non-recursive mapping for insert mode

    cnoremap non-recursive mapping for command-line mode

    To remove a key mapping, use unmap with the format unmap key, where key is a previously mapped key. unmap can be any of the following:

    unmap  remove a mapping for normal, visual and operator-pending modes

    vunmap remove a visual-mode mapping

    nunmap remove a normal-mode mapping

    ounmap remove an operator-pending-mode mapping

    unmap! remove an insert and command-line-mode mapping

    iunmap remove an insert-mode mapping

    cunmap remove a command-line-mode mapping
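    Putting these together, a short .vimrc sketch (the mappings themselves are made up for illustration, not part of the original tips):

    ```vim
    " F2 jumps to the first line in normal, visual and operator-pending modes,
    " without re-mapping whatever gg might itself be mapped to
    noremap <F2> gg
    " Ctrl-S saves the file from insert mode
    inoremap <C-s> <Esc>:w<CR>
    " remove the F2 mapping again
    unmap <F2>
    ```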

    ★ Set the current directory to the directory of the file being edited

    :cd %:p:h

    Note: this changes the working directory for all windows.

    :lcd %:p:h

    Note: this changes the working directory only for the current window.

    ★ Automatically set the current directory to a file's directory when opening it

    Add the following line to your .vimrc:

    :au BufEnter * :cd! %:p:h

    ★ Converting file formats

    Unix, Windows and MacOS represent line endings differently, so you sometimes need to convert a file from one format to another.

    Convert a file to Unix format:

    :set fileformat=unix

    Convert a file to Windows format:

    :set fileformat=dos
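    As a cross-check outside vim, the Unix-ward conversion just strips the carriage returns that the DOS format adds; a sketch with tr (a substitute technique, not part of the original tip):

    ```shell
    #!/bin/sh
    # make a DOS-format file, then strip the \r bytes like `:set fileformat=unix` would
    f=$(mktemp)
    printf 'one\r\ntwo\r\n' > "$f"          # DOS line endings: \r\n
    tr -d '\r' < "$f" > "$f.unix"           # Unix line endings: \n
    wc -c < "$f"        # 10 bytes with carriage returns
    wc -c < "$f.unix"   # 8 bytes without
    ```
    
    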

    ★ Maximize the gvim window on startup (Windows only)

    Add the following line to _vimrc:

    autocmd GUIEnter * simalt ~x

    posted @ 2009-11-14 14:08 BlakeSu · Reads (286) | Comments (0)

    Struts2: formatting dates and decimals

    First, dates. Struts2 provides an <s:date/> tag for formatting dates,
        e.g.: <s:date value="Date" format="yyyy-MM-dd" />
    This guarantees that every browser displays the date as "2007-11-03". But that only covers plain display; what if you need an edit component? There are two easy options:
        1. Use the Struts2 dojo component <s:datetimepicker/>,
        e.g.: <s:datetimepicker name="Date" displayFormat="yyyy-MM-dd" />
        2. Use JSTL,
        e.g.: <s:textfield name="" value="${}" />

        Now, decimals. Struts2 does not provide a formatting tag the way JSTL's fmt library does, so formatting decimals takes a bit more work: use i18n together with text to build your own decimal formatting.
        For example: first create a Format.properties resource file in the class directory,
            put FormatNumeral={0,number,##.000} in it,
            and then reference the definition in the page:
    <s:i18n name="Format">
       <s:text name="FormatNumeral" >

           <s:param value="aNumeral"/>

        </s:text>
    </s:i18n>
     

    Building on this example, you can define many other formats; it is quite flexible.

    posted @ 2009-11-14 14:07 BlakeSu · Reads (737) | Comments (0)

    The most useful Firefox tips

    • More page space: besides reading full-screen with F11, you can shrink the toolbar icons: right-click the toolbar -> Customize -> Use Small Icons.
    • Keyboard shortcuts:
      • Ctrl + F (find)
      • Alt + N (find next)
      • Ctrl + D (bookmark page)
      • Ctrl + T (new tab)
      • Ctrl + K (go to the search box)
      • Ctrl + L (go to the address bar)
      • Ctrl + = (increase font size)
      • Ctrl + - (decrease font size)
      • Ctrl + W (close current tab)
      • F5 (reload)
      • Alt + Home (home page)
    • Auto-complete: in the address bar (Ctrl + L), Ctrl + Enter adds "www." and ".com" automatically; Shift + Enter adds "www." and ".net".
    • Tab switching: instead of clicking with the mouse, let the keyboard help:
      • Ctrl + Tab (cycle through tabs left to right)
      • Ctrl + Shift + Tab (cycle in the opposite direction)
      • Ctrl + 1-9 (jump to a specific tab)
    • Deleting a single URL from the history: go to the address bar (Ctrl + L), select the address you want to remove, and press Delete.
    • about:config: see "About:config Tips and Screenshots".
    • Limiting Firefox's memory usage: type about:config in the address bar (Ctrl + L), search for "browser.cache" and select "browser.cache.memory.capacity". The default value is 50000, but you can set it lower; with 512MB to 1GB of RAM, 15000 is a reasonable setting.
    • Minimizing memory usage while minimized: Firefox can shrink its memory footprint when minimized. Type about:config in the address bar (Ctrl + L), create a new Boolean named "config.trim_on_minimize" and set it to "true". It takes effect after restarting Firefox.

    posted @ 2009-11-14 14:04 BlakeSu · Reads (200) | Comments (0)

    Pros and cons of the Spring framework

    First of all, Spring is a framework. Using Spring does not by itself raise code quality, just as the quality of a house does not depend on whether the land is in Shanghai or Beijing, but on the house's design and materials. Frameworks like Spring do simplify a lot of groundwork: once configured, business applications are easy to build on top.

    Use frameworks long enough, though, and they start to feel confining, like a bird in a cage: well fed, but unable to fly away. The barrier to entry in programming keeps dropping; with so many open-source frameworks in wide circulation there is hardly any technical hurdle left, and whoever can configure can program. Meanwhile a good DBA can greatly improve software performance, and the core logic of software ultimately comes down to database operations. In my current work, too, the technical bottleneck increasingly lies in database work, which I should study seriously next.

    Spring's strengths are well known:

    1. It provides a way to manage objects, organizing the middle tier effectively; a perfect framework "glue".

    2. Its layered architecture lets you introduce it into a project incrementally.

    3. It encourages the habit of programming to interfaces.

    4. One of its goals is code that is easy to test.

    5. It is non-invasive: an application's dependency on the Spring API can be kept to a minimum.

    6. It offers a consistent data-access interface.

    7. It is a lightweight architectural solution.

    對(duì)Spring的理解

    Spring致力于使用POJOs來構(gòu)建應(yīng)用程序。由框架提供應(yīng)用程序的基礎(chǔ)設(shè)施,將只含有業(yè)務(wù)邏輯的POJOs作為組件來管理。從而在應(yīng)用程序中形成兩條相對(duì)獨(dú)立發(fā)展的平行線,并且在各自的抽象層面上延長了各自的生命周期。

    Spring的工作基礎(chǔ)是Ioc。Ioc將創(chuàng)建對(duì)象的職責(zé)從應(yīng)用程序代碼剝離到了框架中,通常2中注入方式:setter 和 ctor參數(shù)。

    每個(gè)Bean定義被當(dāng)作一個(gè)POJO(通過類名和JavaBean的初始屬性或構(gòu)造方法參數(shù)兩種方式定義的Bean)。

    Spring的核心在org.springframework.beans,更高抽象層面是BeanFactory. BeanFactory是一個(gè)非常輕量級(jí)的容器。

    Thoughts on maintainability

    Do technologies like Spring really improve the maintainability of application systems?

    IoC, AOP and the like all, in essence, take logic that used to be hard-coded in the application and move it into configuration files (or some other external form). The mainstream view is that this improves maintainability.

    Looking at the following points, however, and judging from actual project experience, I feel these techniques can greatly reduce maintainability, especially when facing an unfamiliar system or when project staff turn over frequently:

      1. They interrupt the application's logic, leaving the code incomplete and less direct; the source alone no longer tells you everything the application does.

      2. Logic that belongs in code is pushed into configuration, adding opportunities for error and extra overhead.

      3. It is a step backwards: you lose IDE support. IDEs grow more powerful by the day; once-painful tasks like refactoring keep getting easier, and the many assistance features lower the bar for programming considerably. Generally speaking, maintaining code is much easier than maintaining configuration files, or a mixture of configuration and code.

      4. Debugging becomes less transparent, and in the later bug-fixing phase it is harder to pin down where a problem lies.

    posted @ 2009-11-14 14:01 BlakeSu · Reads (223) | Comments (0)

    Linux remote X applications

    Suppose the local host's IP is 172.16.1.1 and the remote host's IP is 172.16.1.2.

      Step 1: in any xterm on the local host, run xhost to allow remote hosts to connect to the local X server:

      xhost + 172.16.1.2

      If you specify no IP address at all, access is completely open. That is a security problem, so be careful!

      Step 2: confirm that xfs is running on the local host; check the process list with ps.

      Step 3: from the local host (172.16.1.1), log in to the remote host 172.16.1.2 over the network (telnet, ssh or rsh all work) and set the DISPLAY variable:

      export DISPLAY=172.16.1.1:0

      Step 4: you can now run the X applications on the remote host.

      Convenient, isn't it? But this still doesn't give you control of the whole desktop environment; leave that job to VNC. Remote X works well on a LAN, but don't bother over an ordinary dial-up connection: it is far too slow.

    posted @ 2009-11-14 13:56 BlakeSu · Reads (259) | Comments (0)

    Configuring VNC

    Many people have used pcAnywhere on Windows, but would you like a free equivalent that works on Linux as well as Win9x/NT? That is VNC.

      VNC is short for Virtual Network Computing. It supports many platforms and can even be operated from a browser.

      I will mainly cover the use of vncviewer, and using Linux to remotely control Linux or NT.

      The VNC client talks to the VNC server with the VNC protocol over TCP/IP. After authentication, the X server's desktop environment, input devices and X resources are handed over to the VNC server, which ships the desktop environment to the VNC client over the VNC protocol and lets the client drive the server's desktop and input devices.

      First, download the Linux and Windows versions of VNC.

      The current Linux version is vnc-3.3.3r1_x86_linux_2.0.tgz

      The current Windows version is vnc-3.3.3r7_x86_win32.zip

    1. Installing the Linux version of VNC

      (1) Install:

      tar zxvf vnc-3.3.3r1_x86_linux_2.0.tgz

      cd vnc_x86_linux_2.0

      cp *vnc* /usr/local/bin/

      mkdir /usr/local/vnc

      cp -r classes/ /usr/local/vnc/

      (2) Set the VNC server access password:

      vncpasswd

      (3) Start the VNC server:

      vncserver

      Note the information it prints and write down the display number it uses, normally starting at 1 since 0 is taken by the X server. You can now offer VNC service; the use of the VNC client is covered below.

    2. Installing the NT version of VNC

      1) Install

      Unpacking vnc-3.3.3r7_x86_win32.zip produces two directories, winvnc and vncviewer. The winvnc directory holds the VNC server installer and the vncviewer directory holds the VNC client installer. We only care about the VNC server here, so run setup in the winvnc directory.

      2) Configure

      First run "install default registry settings".

      "run winvnc (app mode)" starts the VNC server.

      You will then see the winvnc tray icon; right-click it and set a password under properties/incoming connections. The default configuration is fine otherwise.

      Your NT machine can now offer VNC service.

    3. Using vncviewer

      Once a VNC server is up, you can control its desktop remotely with vncviewer:

      vncviewer xxx.xxx.xxx.xxx:display-number

      For example: vncviewer 172.16.1.2:1

      Enter the password when prompted and you will see the remote desktop.

      Note: the viewer needs a 16-bit color display mode. If your operating system is not set to 16-bit color, adjust your computer's display mode promptly, or vncviewer will not work properly.

    4. Improving the Linux VNC server

      The default desktop environment of the Linux VNC server is twm, which is far too bare.

      Edit the file $HOME/.vnc/xstartup: comment out every existing line by prefixing it with #, then append at the end:

      startkde &

      You can of course substitute your favorite desktop. Here I use KDE in place of twm; it is a bit slower but much more comfortable.

      Remember to restart the VNC server afterwards.

    5. Using VNC through a browser

      When using VNC through a browser, note that the port number changes.

      If the VNC server is 172.16.1.2:1, point the browser at http://172.16.1.2:5801

      port number = display number + 5800

      Well, actions speak louder than words: just do it!

    posted @ 2009-11-14 13:55 BlakeSu · Reads (277) | Comments (0)

    Undoing in Git - Reset and Revert

    If you've messed up the working tree, but haven't yet committed your mistake, you can return the entire working tree to the last committed state with

    $ git reset --hard HEAD

    If you make a commit that you later wish you hadn't, there are two fundamentally different ways to fix the problem:

    1. You can create a new commit that undoes whatever was done by the old commit. This is the correct thing if your mistake has already been made public.

    2. You can go back and modify the old commit. You should never do this if you have already made the history public; git does not normally expect the "history" of a project to change, and cannot correctly perform repeated merges from a branch that has had its history changed.

    Fixing a mistake with a new commit

    Creating a new commit that reverts an earlier change is very easy; just pass the git revert command a reference to the bad commit; for example, to revert the most recent commit:

    $ git revert HEAD

    This will create a new commit which undoes the change in HEAD. You will be given a chance to edit the commit message for the new commit.

    You can also revert an earlier change, for example, the next-to-last:

    $ git revert HEAD^

    In this case git will attempt to undo the old change while leaving intact any changes made since then. If more recent changes overlap with the changes to be reverted, then you will be asked to fix conflicts manually, just as in the case of resolving a merge.
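    As a sketch, the whole flow can be tried in a throwaway repository (assuming git is installed; the file and commit names here are invented):

    ```shell
    #!/bin/sh
    # create a scratch repo with three commits, then revert the next-to-last one
    set -e
    repo=$(mktemp -d)
    cd "$repo"
    git init -q
    git config user.email demo@example.com   # an identity is needed to commit
    git config user.name demo
    echo a > a.txt; git add a.txt; git commit -qm 'add a'
    echo b > b.txt; git add b.txt; git commit -qm 'add b'
    echo c > c.txt; git add c.txt; git commit -qm 'add c'
    git revert --no-edit HEAD^               # undo "add b" while keeping "add c"
    ls                                       # a.txt and c.txt remain; b.txt is gone
    ```
    
    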

    posted @ 2009-11-14 13:53 BlakeSu · Reads (419) | Comments (0)

    Tutorial: The best tips & tricks for bash

    The bash shell is just amazing. There are so many tasks that can be simplified using its handy features. This tutorial tells about some of those features, explains what exactly they do and teaches you how to use them.

    Difficulty: Basic - Medium 

    Running a command from your history

    Sometimes you know that you ran a command a while ago and you want to run it again. You know a bit of the command, but you don't know exactly all the options, or when you executed it. Of course, you could just keep pressing the Up Arrow until you encounter the command again, but there is a better way: you can search the bash history interactively by pressing Ctrl + r. This puts bash in history mode, allowing you to type a part of the command you're looking for. In the meanwhile, it shows the most recent occasion where the string you're typing was used. If the command it shows is too recent, you can go further back in history by pressing Ctrl + r again and again. Once you have found the command you were looking for, press Enter to run it. If you can't find what you're looking for and want to try again, or if you want to leave history mode for any other reason, just press Ctrl + c. By the way, Ctrl + c can be used in many other cases to cancel the current operation and/or start with a fresh new line.

    Repeating an argument

    You can repeat the last argument of the previous command in multiple ways. Have a look at this example:

    [rechosen@localhost ~]$ mkdir /path/to/exampledir 
    [rechosen@localhost ~]$ cd !$

    The second command might look a little strange, but it will just cd to /path/to/exampledir. The "!$" syntax repeats the last argument of the previous command. You can also insert the last argument of the previous command on the fly, which enables you to edit it before executing the command. The keyboard shortcut for this functionality is Esc + . (a period). You can also repeatedly press these keys to get the last argument of commands before the previous one.

    Some keyboard shortcuts for editing

    There are some pretty useful keyboard shortcuts for editing in bash. They might appear familiar to Emacs users:

    • Ctrl + a => Return to the start of the command you're typing
    • Ctrl + e => Go to the end of the command you're typing
    • Ctrl + u => Cut everything before the cursor to a special clipboard
    • Ctrl + k => Cut everything after the cursor to a special clipboard
    • Ctrl + y => Paste from the special clipboard that Ctrl + u and Ctrl + k save their data to
    • Ctrl + t => Swap the two characters before the cursor (you can actually use this to transport a character from the left to the right, try it!)
    • Ctrl + w => Delete the word / argument left of the cursor
    • Ctrl + l => Clear the screen

    Dealing with jobs

    If you've just started a huge process (like backing up a lot of files) using an ssh terminal and you suddenly remember that you need to do something else on the same server, you might want to move the huge process to the background. You can do this by pressing Ctrl + z, which suspends the process, and then executing the bg command:

    [rechosen@localhost ~]$ bg
    [1]+ hugeprocess &

    This will make the huge process continue happily in the background, allowing you to do what you need to do. If you want to background another process with the huge one still running, just use the same steps. And if you want to get a process back to the foreground again, execute fg:

    [rechosen@localhost ~]$ fg
    hugeprocess

    But what if you want to foreground an older process that's still running? In a case like that, use the jobs command to see which processes bash is managing:

    [rechosen@localhost ~]$ jobs
    [1]-  Running                 hugeprocess &
    [2]+  Running                 anotherprocess &

    Note: A "+" after the job id means that that job is the 'current job', the one that will be affected if bg or fg is executed without any arguments. A "-" after the job id means that that job is the 'previous job'. You can refer to the previous job with "%-".

    Use the job id (the number on the left), preceded by a "%", to specify which process to foreground / background, like this:

    [rechosen@localhost ~]$ fg %3

    And:

    [rechosen@localhost ~]$ bg %7

    The above snippets would foreground job [3] and background job [7]. 

    Using several ways of substitution

    There are multiple ways to embed a command in another one. You could use the following way (which is called command substitution):

    [rechosen@localhost ~]$ du -h -a -c $(find . -name *.conf 2>&-)

    The above command is quite a mouthful of options and syntax, so I'll explain it.

    • The du command calculates the actual size of files. The -h option makes du print the sizes in human-readable format, the -a tells du to calculate the size of all files, and the -c option tells du to produce a grand total. So, "du -h -a -c" will show the sizes of all files passed to it in a human-readable form and it will produce a grand total.
    • As you might have guessed, "$(find . -name *.conf 2>&-)" takes care of giving du some files to calculate the sizes of. This part is wrapped between "$(" and ")" to tell bash that it should run the command and return the command's output (in this case as an argument for du). The find command searches for files named <can be anything>.conf in the current directory and all accessible subdirectories. The "." indicates the current directory, the -name option allows to specify the filename of the file to search for, and "*.conf" is an expression that matches any string ending with the character sequence ".conf".
    • The only thing left to explain is the "2>&-". This part of the syntax makes bash discard the errors that find produces, so du won't get any non-filename input. There is a huge amount of explanation about this syntax near the end of the tutorial (look for "2>&1" and further).
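    Here is the same construction in a scratch directory you can safely run (the directory and file names are invented for the example):

    ```shell
    #!/bin/sh
    # the output of find becomes the argument list of du via $( )
    tmp=$(mktemp -d)
    touch "$tmp/app.conf" "$tmp/db.conf" "$tmp/readme.txt"
    # note: '*.conf' is quoted here so the shell hands the pattern to find untouched
    du -h -a -c $(find "$tmp" -name '*.conf' 2>&-)   # sizes of the two .conf files plus a total
    # what did the substitution expand to?
    count=$(find "$tmp" -name '*.conf' | wc -l | tr -d ' ')
    echo "$count"   # 2
    rm -rf "$tmp"
    ```
    
    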

    And there's another way to substitute, called process substitution:

    [rechosen@localhost ~]$ diff <(ps axo comm) <(ssh user@host ps axo comm)

    The command in the snippet above will compare the running processes on the local system and a remote system with an ssh server. Let's have a closer look at it:

    • First of all, diff. The diff command can be used to compare two files. I won't tell much about it here, as there is an extensive tutorial about diff and patch on this site.
    • Next, the "<(" and ")". These strings indicate that bash should substitute the command between them as a process. This will create a named pipe (usually in /dev/fd) that, in our case, will be given to diff as a file to compare.
    • Now the "ps axo comm". The ps command is used to list processes currently running on the system. The "a" option tells ps to list all processes with a tty, the "x" tells ps to list processes without a tty, too, and "o comm" tells ps to list the commands only ("o" indicates the starting of a user-defined output declaration, and "comm" indicates that ps should print the COMMAND column).
    • The "ssh user@host ps axo comm" will run "ps axo comm" on a remote system with an ssh server. For more detailed information about ssh, see this site's tutorial about ssh and scp.

    Let's have a look at the whole snippet now:

    • After interpreting the line, bash will run "ps axo comm" and redirect the output to a named pipe,
    • then it will execute "ssh user@host ps axo comm" and redirect the output to another named pipe,
    • and then it will execute diff with the filenames of the named pipes as argument.
    • The diff command will read the output from the pipes and compare them, and return the differences to the terminal so you can quickly see what differences there are in running processes (if you're familiar with diff's output, that is).

    This way, you have done in one line what would normally require at least two: comparing the outputs of two processes.
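    A minimal, self-contained imitation of that pattern (printf stands in for the two ps listings; this needs bash, since plain sh lacks process substitution):

    ```shell
    #!/bin/bash
    # diff two command outputs directly; <( ) turns each output into a named pipe
    only_local=$(diff <(printf 'bash\ncron\nsshd\n') <(printf 'bash\nsshd\n') | grep '^<')
    echo "$only_local"   # < cron
    ```
    
    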

    And there is even another way, called xargs. This command can feed arguments, usually imported through a pipe, to a command. See the next chapter for more information about pipes; for now we'll focus on xargs itself. Have a look at this example:

    [rechosen@localhost ~]$ find . -name *.conf -print0 | xargs -0 grep -l -Z mem_limit | xargs -0 -i cp {} {}.bak

    Note: the "-l" after grep is an L, not an i. 

    The command in the snippet above will make a backup of all .conf files in the current directory and accessible subdirectories that contain the string "mem_limit".

    • The find command is used to find all files in the current directory (the ".") and accessible subdirectories with a filename (the "-name" option) that ends with ".conf" ("*.conf" means "<anything>.conf"). It returns a list of them, with null characters as separators ("-print0" tells find to do so).
    • The output of find is piped (the "|" operator, see the next chapter for more information) to xargs. The "-0" option tells xargs that the names are separated by null characters, and "grep -l -Z mem_limit" is the command that the list of files will be fed to as arguments. The grep command will search the files it gets from xargs for the string "mem_limit", returning a list of files (the -l option tells grep not to return the contents of the files, but just the filenames), again separated by null characters (the "-Z" option causes grep to do this).
    • The output of grep is also piped, to "xargs -0 -i cp {} {}.bak". We know what xargs does, except for the "-i" option, which tells xargs to replace every occurrence of the specified string with the argument it gets through the pipe. If no string is specified (as in our case), xargs assumes it should replace the string "{}". Next, the "cp {} {}.bak": the "{}" will be replaced by xargs with the argument, so if xargs got the file "sample.conf" through the pipe, cp will copy "sample.conf" to "sample.conf.bak", effectively making a backup of it.
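    The same pipeline can be exercised end-to-end in a scratch directory (the file names are invented; grep's -Z and xargs's -0/-I are GNU options):

    ```shell
    #!/bin/sh
    # back up only the .conf files that contain "mem_limit"
    tmp=$(mktemp -d)
    printf 'mem_limit=64\n' > "$tmp/app.conf"
    printf 'name=demo\n'    > "$tmp/other.conf"
    find "$tmp" -name '*.conf' -print0 \
      | xargs -0 grep -l -Z mem_limit \
      | xargs -0 -I{} cp {} {}.bak    # -I{} is the modern spelling of -i
    ls "$tmp"   # app.conf, app.conf.bak, other.conf
    ```
    
    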

    These substitutions can, once mastered, provide short and quick solutions for complicated problems.

    Piping data through commands

    One of the most powerful features is the ability to pipe data through commands. You could see this as letting bash take the output of a command and feed it to another command, take the output of that, feed it to the next, and so on. This is a simple example of using a pipe:

    [rechosen@localhost ~]$ ps aux | grep init

    If you don't know the commands yet: "ps aux" lists all processes currently running on your system (the "a" means that processes of other users than the current user should also be listed, the "u" selects a user-oriented output format, and the "x" means that background processes (without a tty) should also be listed). The "grep init" searches the output of "ps aux" for the string "init". It does so because bash pipes the output of "ps aux" to "grep init", and bash does that because of the "|" operator.

    The "|" operator makes bash redirect all data that the command left of it returns to the stdout (more about that later) to the stdin of the command right of it. There are a lot of commands that support taking data from the stdin, and almost every program supports returning data using the stdout.

    The stdin and stdout are part of the standard streams; they were introduced with UNIX and are channels over which data can be transported. There are three standard streams (the third one is stderr, which should be used to report errors over). The stdin channel can be used by other programs to feed data to a running process, and the stdout channel can be used by a program to export data. Usually, stdout output (and stderr output, too) is received by the terminal environment in which the program is running, in our case bash. By default, bash will show you the output by echoing it onto the terminal screen, but now that we pipe it to another command, we are not shown the data.

    Please note that, as in a pipe only the stdout of the command on the left is passed on to the next one, the stderr output will still go to the terminal. I will explain how to alter this further on in this tutorial.

    If you want to see the data that's passed on between programs in a pipe, you can insert the "tee" command into it. This program receives data from the stdin and then writes it to a file, while also passing it on again through the stdout. This way, if something goes wrong in a pipe sequence, you can see what data was passing through the pipes. The "tee" command is used like this:

    [rechosen@localhost ~]$ ps aux | tee filename | grep init

    The "grep" command will still receive the output of "ps aux", as tee just passes the data on, but you will be able to read the output of "ps aux" in the file <filename> after the commands have been executed. Note that "tee" tries to replace the file <filename> if you specify the command like this. If you don't want "tee" to replace the file but to append the data to it instead, use the -a option, like this:

    [rechosen@localhost ~]$ ps aux | tee -a filename | grep init
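    A toy pipeline shows exactly what tee captures (printf stands in for "ps aux"; the snapshot file name is arbitrary):

    ```shell
    #!/bin/sh
    # tee snapshots the data flowing through the pipe while passing it on
    snap=$(mktemp)
    seen=$(printf 'init\nbash\n' | tee "$snap" | grep init)
    echo "$seen"       # init  (what grep received)
    cat "$snap"        # everything that flowed through the pipe: init and bash
    ```
    
    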

    As you have been able to see in the above command, you can place a lot of commands with pipes after each other. This is not infinite, though; there is a maximum command-line length, which is usually determined by the kernel. However, this value is usually so big that you are very unlikely to hit the limit. If you do, you can always save the stdout output to a file somewhere in between and then use that file to continue operation. And that introduces the next subject: saving the stdout output to a file.

    Saving the stdout output to a file

    You can save the stdout output of a command to a file like this:

    [rechosen@localhost ~]$ ps aux > filename

    The above syntax will make bash write the stdout output of "ps aux" to the file filename. If filename already exists, bash will try to overwrite it. If you don't want bash to do so, but to append the output of "ps aux" to filename, you could do that this way:

    [rechosen@localhost ~]$ ps aux >> filename
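    A quick check of the difference between > and >>:

    ```shell
    #!/bin/sh
    # > truncates (or creates) the file; >> appends to it
    log=$(mktemp)
    echo 'first'  > "$log"
    echo 'second' >> "$log"   # appended after "first"
    echo 'third'  > "$log"    # truncates: only "third" remains
    cat "$log"   # third
    ```
    
    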

    You can use this feature of bash to split a long line of pipes into multiple lines:

    [rechosen@localhost ~]$ command1 | command2 | ... | commandN > tempfile1

    [rechosen@localhost ~]$ cat tempfile1 | command1 | command2 | ... | commandN > tempfile2

    And so on. Note that the above use of cat is, in most cases, a useless one. In many cases, you can let command1 in the second snippet read the file, like this:

    [rechosen@localhost ~]$ command1 tempfile1 | command2 | ... | commandN > tempfile2

    And in other cases, you can use a redirect to feed a file to command1:

    [rechosen@localhost ~]$ command1 < tempfile1 | command2 | ... | commandN > tempfile2

    To be honest, I mainly included this to avoid getting the Useless Use of Cat Award =).

    Anyway, you can also use bash's ability to write streams to file for logging the output of script commands, for example. By the way, did you know that bash can also write the stderr output to a file, or both the stdout and the stderr streams?

    Playing with standard streams: redirecting and combining

    The bash shell allows us to redirect streams to other streams or to files. This is quite a complicated feature, so I'll try to explain it as clearly as possible. Redirecting a stream is done like this:

    [rechosen@localhost ~]$ ps aux 2>&1 | grep init

    In the snippet above, "grep init" will not only search the stdout output of "ps aux", but also the stderr output. The stderr and the stdout streams are combined. This is caused by that strange "2>&1" after "ps aux". Let's have a closer look at that.

    First, the "2". As said, there are three standard streams (stdin, stdout and stderr). These standard streams also have default numbers:

    • 0: stdin
    • 1: stdout
    • 2: stderr

    As you can see, "2" is the stream number of stderr. And ">", we already know that from making bash write to a file. The actual meaning of this symbol is "redirect the stream on the left to the stream on the right". If there is no stream on the left, bash will assume you're trying to redirect stdout. If there's a filename on the right, bash will redirect the stream on the left to that file, so that everything passing through the pipe is written to the file.

    Note: the ">" symbol is used with and without a space behind it in this tutorial. This is only to keep it clear whether we're redirecting to a file or to a stream: in reality, when dealing with streams, it doesn't matter whether a space is behind it or not. When substituting processes, you shouldn't use any spaces.

    Back to our "2>&1". As explained, "2" is the stream number of stderr, ">" redirects the stream somewhere, but what is "&1"? You might have guessed, as the "grep init" command mentioned above searches both the stdout and stderr stream, that "&1" is the stdout stream. The "&" in front of it tells bash that you don't mean a file with filename "1". The streams are sent to the same destination, and to the command receiving them it will seem like they are combined.

    If you'd want to write to a file with the name "&1", you'd have to escape the "&", like this:

    [rechosen@localhost ~]$ ps aux > \&1

    Or you could put "&1" between single quotes, like this:

    [rechosen@localhost ~]$ ps aux > '&1'

    Wrapping a filename containing problematic characters in single quotes is generally a good way to stop bash from messing with it (unless there are single quotes in the string; then you'd have to escape them by putting a \ in front of them).

    Back again to the "2>&1". Now that we know what it means, we can also apply it in other ways, like this:

    [rechosen@localhost ~]$ ps aux > filename 2>&1

    The stdout output of ps aux will be sent to the file filename, and the stderr output, too. Now, this might seem illogical. Since bash interprets the line from left to right (and it does), you might think that it should be like:

    [rechosen@localhost ~]$ ps aux 2>&1 > filename

    Well, it shouldn't. If you'd execute the above syntax, the stderr output would just be echoed to the terminal. Why? Because bash does not redirect to a stream, but to the current final destination of the stream. Let me explain it:

    • First, we're telling bash to run the command "ps" with "aux" as an argument.
    • Then, we're telling to redirect stderr to stdout. At the moment, stdout is still going to the terminal, so the stderr output of "ps aux" is sent to the terminal.
    • After that, we're telling bash to redirect the stdout output to the file filename. The stdout output of "ps aux" is sent to this file indeed, but the stderr output isn't: it is not affected by stream 1.

    If we put the redirections the other way around ("> filename" first), it does work. I'll explain that, too:

    • First, we're telling bash to run the command "ps" with "aux" as an argument (again).
    • Then, we're redirecting the stdout to the file filename. This causes the stdout output of "ps aux" to be written to that file.
    • After that, we're redirecting the stderr stream to the stdout stream. The stdout stream is still pointing to the file filename because of the former statement. Therefore, stderr output is also written to the file.

    Get it? The redirects cause a stream to go to the same final destination as the specified one. It does not actually merge the streams, however.
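    The two orderings can be compared side by side with a small function that writes one line to each stream (the names here are invented):

    ```shell
    #!/bin/sh
    # emit writes one line to stdout and one to stderr
    emit() { echo out; echo err 1>&2; }
    f1=$(mktemp); f2=$(mktemp)
    emit > "$f1" 2>&1            # stdout goes to f1 first, then stderr follows it there
    leaked=$(emit 2>&1 > "$f2")  # stderr chases stdout's OLD destination (the substitution)
    echo "$leaked"   # err
    ```
    
    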

    Now that we know how to redirect, we can use it in many ways. For example, we could pipe the stderr output instead of the stdout output:

    [rechosen@localhost ~]$ ps aux 2>&1 > /dev/null | grep init

    The syntax in this snippet will send the stderr output of "ps aux" to "grep init", while the stdout output is sent to /dev/null and therefore discarded. Note that "grep init" will probably not find anything in this case as "ps aux" is unlikely to report any errors.

    When looking more closely to the snippet above, a problem arises. As bash reads the command statements from the left to the right, nothing should go through the pipe, you might say. At the moment that "2>&1" is specified, stdout should still point to the terminal, shouldn't it? Well, here's a thing you should remember: bash reads command statements from the left to the right, but, before that, determines if there are multiple command statements and in which way they are separated. Therefore, bash already read and applied the "|" pipe symbol and stdout is already pointing to the pipe. Note that this also means that stream redirections must be specified before the pipe operator. If you, for example, would move "2>&1" to the end of the command, after "grep init", it would not affect ps aux anymore.
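    A minimal demonstration of piping only stderr, using a tiny function instead of ps so there is guaranteed stderr output (the names are invented):

    ```shell
    #!/bin/sh
    # pipe only the stderr stream; stdout is thrown away
    emit() { echo out; echo err 1>&2; }
    result=$(emit 2>&1 > /dev/null | grep err)
    echo "$result"   # err
    ```
    
    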

    We can also swap the stdout and the stderr stream. This allows to let the stderr stream pass through a pipe while the stdout is printed to the terminal. This will require a 3rd stream. Let's have a look at this example:

    [rechosen@localhost ~]$ ps aux 3>&1 1>&2 2>&3 | grep init

    That stuff seems to be quite complicated, right? Let's analyze what we're doing here:

    • "3>&1" => We're redirecting stream 3 to the same final destination as stream 1 (stdout). Stream 3 is a non-standard stream, but it is pretty much always available in bash. This way, we're effectively making a backup of the destination of stdout, which is, in this case, the pipe.
    • "1>&2" => We're redirecting stream 1 (stdout) to the same final destination as stream 2 (stderr). This destination is the terminal.
    • "2>&3" => We're redirecting stream 2 (stderr) to the final destination of stream 3. In the first step of these three ones, we set stream 3 to the same final destination as stream 1 (stdout), which was the pipe at that moment, and after that, we redirected stream 1 (stdout) to the final destination of stream 2 at that moment, the terminal. If we wouldn't have made a backup of stream 1's final destination in the beginning, we would not be able to refer to it now.

    So, by using a backup stream, we can swap the stdout and stderr streams. This backup stream does not belong to the standard streams, but it is pretty much always available in bash. If you're using it in a script, make sure you aren't breaking an earlier command by playing with the third stream. You can also use streams 4, 5, 6, 7 and so on if you need more. The highest stream number is usually 1023 (there are 1024 streams, but the first one is stream 0, stdin); this may differ on other Linux systems, so your mileage may vary. If you try to use a non-existing stream, you will get an error like this:

    bash: 1: Bad file descriptor

    If you want to close a non-standard stream again when you're done with it, redirect it to "&-", like this:

    [rechosen@localhost ~]$ ps aux 3>&1 1>&2 2>&3 3>&- | grep init

    Note that stream redirections are reset to their initial state once the command they are attached to has finished. You only need to undo them manually if you made a redirect using, for example, exec, as redirects made that way last until changed manually.
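A minimal sketch of such a lasting redirect made with exec (the log file name is just an example):

```shell
exec 3> /tmp/mylog.txt    # open stream 3 for writing; it stays open
echo "first entry"  >&3   # later commands can keep writing to it
echo "second entry" >&3
exec 3>&-                 # close stream 3 manually when done
cat /tmp/mylog.txt        # first entry, second entry
```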

    Final words

    Well, I hope you learned a lot from this tutorial. If the things you read here were new to you, don't worry if you can't apply them immediately; it is already useful just to know what a statement means when you stumble upon it sometime. If you liked this, please help spread the word about this blog by posting a link to it here and there. Thank you for reading, and good luck working with bash!

    posted @ 2009-11-14 13:49 BlakeSu

    aptitude quick reference

    aptitude, like apt-get, is an extremely powerful package management tool for Debian and its derivative systems. Unlike apt-get, aptitude handles dependency problems somewhat better. For example, when removing a package, aptitude also removes the packages that were installed only as its dependencies. That way, no unused packages are left behind and the whole system stays cleaner. Below are some common aptitude commands I have collected, for reference only.

    Command  Purpose
    aptitude update  update the list of available packages
    aptitude upgrade  upgrade available packages
    aptitude dist-upgrade  upgrade the system to a new release
    aptitude install pkgname  install a package
    aptitude remove pkgname  remove a package
    aptitude purge pkgname  remove a package along with its configuration files
    aptitude search string  search for a package
    aptitude show pkgname  show detailed information about a package
    aptitude clean  delete downloaded package files
    aptitude autoclean  delete only obsolete package files

    Of course, you can also use aptitude in its text-based interactive interface.

    posted @ 2009-11-14 13:49 BlakeSu

    Purpose of Linux header files (reposted)

    Header files defined by the POSIX standard
    <dirent.h>      directory entries
    <fcntl.h>       file control
    <fnmatch.h>     filename-matching types
    <glob.h>        pathname pattern-matching types
    <grp.h>         group file
    <netdb.h>       network database operations
    <pwd.h>         password file
    <regex.h>       regular expressions
    <tar.h>         TAR archive values
    <termios.h>     terminal I/O
    <unistd.h>      symbolic constants
    <utime.h>       file times
    <wordexp.h>     word-expansion types
    -------------------------
    <arpa/inet.h>   Internet definitions
    <net/if.h>      socket local interfaces
    <netinet/in.h>  Internet address family
    <netinet/tcp.h> Transmission Control Protocol definitions
    -------------------------
    <sys/mman.h>    memory management declarations
    <sys/select.h>  select function
    <sys/socket.h>  socket interface
    <sys/stat.h>    file status
    <sys/times.h>   process times
    <sys/types.h>   primitive system data types
    <sys/un.h>      UNIX domain socket definitions
    <sys/utsname.h> system name
    <sys/wait.h>    process control

    ------------------------------
    XSI extension header files defined by POSIX
    <cpio.h>        cpio archive values
    <dlfcn.h>       dynamic linking
    <fmtmsg.h>      message display structures
    <ftw.h>         file tree walking
    <iconv.h>       codeset conversion utilities
    <langinfo.h>    language information constants
    <libgen.h>      pattern-matching function definitions
    <monetary.h>    monetary types
    <ndbm.h>        database operations
    <nl_types.h>    message catalogs
    <poll.h>        poll function
    <search.h>      search tables
    <strings.h>     string operations
    <syslog.h>      system error logging
    <ucontext.h>    user context
    <ulimit.h>      user limits
    <utmpx.h>       user accounting database
    -----------------------------
    <sys/ipc.h>     interprocess communication (IPC)
    <sys/msg.h>     message queues
    <sys/resource.h> resource operations
    <sys/sem.h>     semaphores
    <sys/shm.h>     shared memory
    <sys/statvfs.h> filesystem information
    <sys/time.h>    time types
    <sys/timeb.h>   additional date and time definitions
    <sys/uio.h>     vectored I/O operations

    ------------------------------
    Optional header files defined by POSIX
    <aio.h>         asynchronous I/O
    <mqueue.h>      message queues
    <pthread.h>     threads
    <sched.h>       execution scheduling
    <semaphore.h>   semaphores
    <spawn.h>       real-time spawn interface
    <stropts.h>     XSI STREAMS interface
    <trace.h>       event tracing

    posted @ 2009-11-14 13:44 BlakeSu
