1. Lucene's structural framework:
Note: some of the more complex lexical analysis in Lucene is generated with JavaCC (Java Compiler Compiler, a parser generator written in pure Java). So if you want to build Lucene from source, modify the QueryParser, or write your own tokenizer, you also need to download javacc from https://javacc.dev.java.net/.
Lucene's component structure: for external applications, the index module (index) and the search module (search) are the main entry points.
org.apache.lucene.search/ search entry point
org.apache.lucene.index/ indexing entry point
org.apache.lucene.analysis/ language analyzers
org.apache.lucene.queryParser/ query parser
org.apache.lucene.document/ document/storage model
org.apache.lucene.store/ low-level IO and storage
org.apache.lucene.util/ shared utility data structures
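To orient the reader, here is a minimal sketch of how the two entry points fit together, written against the Lucene 1.4-era API this article targets. The index directory name "demo-index", the field name "contents", and the sample text are arbitrary choices of mine, not anything from Lucene itself:

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class IndexSearchDemo {
  public static void main(String[] args) throws Exception {
    // Index entry point: analyze a document and write it to the "demo-index" directory
    IndexWriter writer = new IndexWriter("demo-index", new StandardAnalyzer(), true);
    Document doc = new Document();
    doc.add(Field.Text("contents", "Lucene is a full-text search library"));
    writer.addDocument(doc);
    writer.close();

    // Search entry point: parse a query against the same field and run it
    Query query = QueryParser.parse("search", "contents", new StandardAnalyzer());
    IndexSearcher searcher = new IndexSearcher("demo-index");
    Hits hits = searcher.search(query);
    System.out.println(hits.length() + " hit(s)");
    searcher.close();
  }
}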
2. On the differences between dictionary-based segmentation, unigram segmentation, and bigram segmentation. Note that noise.chs exists in the dictionary as the stop-word list. A detailed description follows:
Lucene is being used more and more, and when indexing Chinese text the problem of Chinese word segmentation becomes correspondingly important.
Among the existing segmentation schemes, the three most common and most general are unigram segmentation, bigram segmentation, and dictionary-based segmentation. The Java unigram implementation is by yysun and has been accepted into Apache. Its approach is simple: each Chinese character becomes one token. For example, "這是中文字" segmented in unigram mode yields five tokens: 這, 是, 中, 文, 字. Bigram segmentation instead treats each pair of adjacent characters as one token, so "這是中文字" segmented in bigram mode yields: 這是, 是中, 中文, 文字.
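To make the two schemes concrete, here is a small plain-Java sketch (independent of Lucene) that produces both tokenizations; the class and method names are mine:

import java.util.*;

public class NGramDemo {
    // Emit every single character as a token (unigram)
    static List<String> unigrams(String s) {
        List<String> out = new ArrayList<String>();
        for (int i = 0; i < s.length(); i++) out.add(s.substring(i, i + 1));
        return out;
    }
    // Emit every pair of adjacent characters as a token (bigram)
    static List<String> bigrams(String s) {
        List<String> out = new ArrayList<String>();
        for (int i = 0; i + 1 < s.length(); i++) out.add(s.substring(i, i + 2));
        return out;
    }
    public static void main(String[] args) {
        System.out.println(unigrams("這是中文字")); // [這, 是, 中, 文, 字]
        System.out.println(bigrams("這是中文字"));  // [這是, 是中, 中文, 文字]
    }
}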
Unigram and bigram segmentation are simple to implement and support essentially all East Asian languages, but the flaws of both are obvious. Unigram segmentation considers only individual characters and ignores Chinese words: in the example above, the two very obvious words 中文 and 文字 are never recognized. Bigram segmentation, conversely, produces far too many redundant tokens: meaningless combinations such as 這是 and 是中 get emitted as words, and even the words it does hit are not always the right ones, for in "這是中文字" the word 中文字 ought to take precedence, which bigram segmentation also fails to produce.
Dictionary-based segmentation is much harder to implement, and it comes in many variants: Microsoft's Chinese segmentation in its own software, HyLanda's (海量) Chinese segmentation research edition, the Lietu (獵兔) segmenter that currently sees heavy use on .NET, and various tools people have built on their own. Each has its own analysis framework. Accuracy is high, but the implementation is difficult and takes a long time, and for ordinary small and mid-sized applications whose accuracy requirements are not strict, the resource cost of this approach is an extravagance.
After weighing unigram segmentation, bigram segmentation, and dictionary-based segmentation, I venture to propose a segmentation scheme based on splitting at stop words. The idea is as follows: first split the target paragraph at punctuation into standard short clauses; then, using a configured stop-word list, split each clause maximally at the stop words, yielding the individual words. For example, for the input "這是中文字" with the stop-word list 這, 是, the final result is 中文字.
That example is rather trivial, so here is a slightly longer one. Input clause: "中文軟件需要具有對(duì)中文文本的輸入、顯示、編輯、輸出等基本功能", with the stop-word list 這, 是, 的, 對(duì), 等, 需要, 具有. The resulting list is:
====================
中文軟件
中文文本
輸入
顯示
編輯
輸出
基本功能
====================
This basically achieves the desired result, but it is not without shortcomings: in the output above, 中文軟件 and 中文文本 should really be split into the three independent words 中文, 軟件, and 文本, rather than left as shown.
Moreover, configuring the stop-word list is itself a fairly tricky step; there is no firm rule constraining what counts as a stop word. My thought is to take meaningless subjects such as 我, 你, 他, 我們, 他們, verbs such as 是, 對(duì), 有, and various other words such as 的, 啊, 一, 不, 在, 人 (the contents of the noise.chs file under the System32 directory can serve as a reference) as the stop words. A minimal sketch of the whole scheme is given below.
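The following is a minimal plain-Java sketch of the scheme just described (split at punctuation, then split maximally at the stop words). The class name is mine, and the implementation is deliberately naive: it assumes stop words contain no regex metacharacters.

import java.util.*;

public class StopWordSegmenter {
    private final String stopWordPattern;

    public StopWordSegmenter(List<String> stopWords) {
        // Build an alternation pattern, longer stop words first, so that a
        // multi-character stop word like 需要 wins over a single character
        List<String> sorted = new ArrayList<String>(stopWords);
        Collections.sort(sorted, new Comparator<String>() {
            public int compare(String a, String b) { return b.length() - a.length(); }
        });
        StringBuilder sb = new StringBuilder();
        for (String w : sorted) {
            if (sb.length() > 0) sb.append('|');
            sb.append(w);
        }
        stopWordPattern = sb.toString();
    }

    public List<String> segment(String paragraph) {
        List<String> tokens = new ArrayList<String>();
        // Step 1: split the paragraph into short clauses at punctuation
        for (String clause : paragraph.split("[,。、;:?!,.;:?!\\s]+")) {
            // Step 2: split each clause maximally at the stop words
            for (String token : clause.split(stopWordPattern)) {
                if (token.length() > 0) tokens.add(token);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        List<String> stops = Arrays.asList("這", "是", "的", "對(duì)", "等", "需要", "具有");
        System.out.println(new StopWordSegmenter(stops)
            .segment("中文軟件需要具有對(duì)中文文本的輸入、顯示、編輯、輸出等基本功能"));
        // prints [中文軟件, 中文文本, 輸入, 顯示, 編輯, 輸出, 基本功能]
    }
}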
3. On segmentation, this thread is also worth following:
http://lucene-group.group.javaeye.com/group/blog/58701
It covers a home-grown dictionary-based Lucene analyzer, ThesaurusAnalyzer. I have tested it; it works reasonably well, with a 180,000-word dictionary.
4. Tests of Lucene's bundled analyzers:
Lucene itself provides several analyzers, and I later wrote an additional one.
In order of increasing capability:
WhitespaceAnalyzer: only splits on whitespace; does not lower-case characters; no Chinese support.
SimpleAnalyzer: stronger than WhitespaceAnalyzer; strips everything that is not a letter and lower-cases all characters; no Chinese support.
StopAnalyzer: goes beyond SimpleAnalyzer by additionally removing stop words; no Chinese support.
StandardAnalyzer: handles English the same as StopAnalyzer; supports Chinese by splitting it into single characters.
ChineseAnalyzer: from the Lucene sandbox; behaves much like StandardAnalyzer, but it cannot segment mixed Chinese/English text.
CJKAnalyzer: written by chedong; matches StandardAnalyzer on English, but for Chinese it uses bigram splitting and does not filter out punctuation.
TjuChineseAnalyzer: written by me, and the most capable. For Chinese segmentation it calls the Java interface of ICTCLAS, so its Chinese performance matches ICTCLAS. For English it uses Lucene's StopAnalyzer, so it removes stop words, ignores case, and filters out punctuation of all kinds.
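The TjuChineseAnalyzer source is not included in this post, so the following is only a hypothetical minimal sketch of how such an analyzer could be assembled on the Lucene 1.4-era API. It assumes, as the tests below do, that ICTCLAS.paragraphProcess(String) returns a space-delimited segmentation; the class name TjuChineseStyleAnalyzer is mine:

package org.apache.lucene.analysis;

import java.io.*;
import com.xjt.nlp.word.ICTCLAS;

public class TjuChineseStyleAnalyzer extends Analyzer {
  public TokenStream tokenStream(String fieldName, Reader reader) {
    try {
      // Read the whole field value so ICTCLAS can segment it as one paragraph
      StringBuffer text = new StringBuffer();
      char[] buf = new char[1024];
      for (int n; (n = reader.read(buf)) != -1; ) text.append(buf, 0, n);
      String segmented = new ICTCLAS().paragraphProcess(text.toString());
      // Tokenize at the spaces ICTCLAS inserted, then lower-case and drop stop words
      TokenStream ts = new WhitespaceTokenizer(new StringReader(segmented));
      ts = new LowerCaseFilter(ts);
      ts = new StopFilter(ts, StopAnalyzer.ENGLISH_STOP_WORDS);
      return ts;
    } catch (IOException e) {
      throw new RuntimeException(e.toString());
    }
  }
}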
Debugged under JBuilder 2005.
package org.apache.lucene.analysis;
//Author: zhangbufeng
//TjuAILab (Tianjin University AI Lab)
//2005.9.22 11:00
import java.io.*;
import junit.framework.*;
import org.apache.lucene.*;
import org.apache.lucene.analysis.*;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.standard.*;
import org.apache.lucene.analysis.cn.*;
import org.apache.lucene.analysis.cjk.*;
import org.apache.lucene.analysis.tjucn.*;
import com.xjt.nlp.word.*;
public class TestAnalyzers extends TestCase {
public TestAnalyzers(String name) {
super(name);
}
public void assertAnalyzesTo(Analyzer a,
String input,
String[] output) throws Exception {
//The field name "dummy" does not appear to be used anywhere
TokenStream ts = a.tokenStream("dummy", new StringReader(input));
for (int i = 0; i < output.length; i++) {
Token t = ts.next();
//System.out.println(t);
assertNotNull(t);
//The next two lines print each token's text, separated by spaces
System.out.print(t.termText());
System.out.print(" ");
assertEquals(t.termText(), output[i]);
}
System.out.println(" ");
assertNull(ts.next());
ts.close();
}
public void outputAnalyzer(Analyzer a, String input) throws Exception{
TokenStream ts = a.tokenStream("dummy", new StringReader(input));
//Print every token's text, separated by spaces
for (Token t = ts.next(); t != null; t = ts.next()) {
System.out.print(t.termText());
System.out.print(" ");
}
System.out.println(" ");
ts.close();
}
public void testSimpleAnalyzer() throws Exception {
//Exercise SimpleAnalyzer
//SimpleAnalyzer strips everything that is not a letter and lower-cases all characters
Analyzer a = new SimpleAnalyzer();
assertAnalyzesTo(a, "foo bar FOO BAR",
new String[] { "foo", "bar", "foo", "bar" });
assertAnalyzesTo(a, "foo bar . FOO <> BAR",
new String[] { "foo", "bar", "foo", "bar" });
assertAnalyzesTo(a, "foo.bar.FOO.BAR",
new String[] { "foo", "bar", "foo", "bar" });
assertAnalyzesTo(a, "U.S.A.",
new String[] { "u", "s", "a" });
assertAnalyzesTo(a, "C++",
new String[] { "c" });
assertAnalyzesTo(a, "B2B",
new String[] { "b", "b" });
assertAnalyzesTo(a, "2B",
new String[] { "b" });
assertAnalyzesTo(a, "\"QUOTED\" word",
new String[] { "quoted", "word" });
assertAnalyzesTo(a,"zhang ./ bu <> feng",
new String[]{"zhang","bu","feng"});
ICTCLAS splitWord = new ICTCLAS();
String result = splitWord.paragraphProcess("我愛(ài)大家 i LOVE chanchan");
assertAnalyzesTo(a,result,
new String[]{"我","愛(ài)","大家","i","love","chanchan"});
}
public void testWhiteSpaceAnalyzer() throws Exception {
//WhitespaceAnalyzer only splits on whitespace and does not lower-case characters
Analyzer a = new WhitespaceAnalyzer();
assertAnalyzesTo(a, "foo bar FOO BAR",
new String[] { "foo", "bar", "FOO", "BAR" });
assertAnalyzesTo(a, "foo bar . FOO <> BAR",
new String[] { "foo", "bar", ".", "FOO", "<>", "BAR" });
assertAnalyzesTo(a, "foo.bar.FOO.BAR",
new String[] { "foo.bar.FOO.BAR" });
assertAnalyzesTo(a, "U.S.A.",
new String[] { "U.S.A." });
assertAnalyzesTo(a, "C++",
new String[] { "C++" });
assertAnalyzesTo(a, "B2B",
new String[] { "B2B" });
assertAnalyzesTo(a, "2B",
new String[] { "2B" });
assertAnalyzesTo(a, "\"QUOTED\" word",
new String[] { "\"QUOTED\"", "word" });
assertAnalyzesTo(a,"zhang bu feng",
new String []{"zhang","bu","feng"});
ICTCLAS splitWord = new ICTCLAS();
String result = splitWord.paragraphProcess("我愛(ài)大家 i love chanchan");
assertAnalyzesTo(a,result,
new String[]{"我","愛(ài)","大家","i","love","chanchan"});
}
public void testStopAnalyzer() throws Exception {
//StopAnalyzer goes beyond SimpleAnalyzer:
//on top of SimpleAnalyzer it also removes stop words
Analyzer a = new StopAnalyzer();
assertAnalyzesTo(a, "foo bar FOO BAR",
new String[] { "foo", "bar", "foo", "bar" });
assertAnalyzesTo(a, "foo a bar such FOO THESE BAR",
new String[] { "foo", "bar", "foo", "bar" });
assertAnalyzesTo(a,"foo ./ a bar such ,./<> FOO THESE BAR ",
new String[]{"foo","bar","foo","bar"});
ICTCLAS splitWord = new ICTCLAS();
String result = splitWord.paragraphProcess("我愛(ài)大家 i Love chanchan such");
assertAnalyzesTo(a,result,
new String[]{"我","愛(ài)","大家","i","love","chanchan"});
}
public void testStandardAnalyzer() throws Exception{
//StandardAnalyzer is the most capable bundled analyzer; it splits Chinese into single characters
Analyzer a = new StandardAnalyzer();
assertAnalyzesTo(a,"foo bar Foo Bar",
new String[]{"foo","bar","foo","bar"});
assertAnalyzesTo(a,"foo bar ./ Foo ./ BAR",
new String[]{"foo","bar","foo","bar"});
assertAnalyzesTo(a,"foo ./ a bar such ,./<> FOO THESE BAR ",
new String[]{"foo","bar","foo","bar"});
assertAnalyzesTo(a,"張步峰是天大學(xué)生",
new String[]{"張","步","峰","是","天","大","學(xué)","生"});
//Verify that English punctuation is removed
assertAnalyzesTo(a,"張,/步/,峰,.是.,天大<>學(xué)生",
new String[]{"張","步","峰","是","天","大","學(xué)","生"});
//Verify that Chinese punctuation is removed
assertAnalyzesTo(a,"張。、步。、峰是。天大。學(xué)生",
new String[]{"張","步","峰","是","天","大","學(xué)","生"});
}
public void testChineseAnalyzer() throws Exception{
//ChineseAnalyzer is functionally much like StandardAnalyzer, but probably slower
Analyzer a = new ChineseAnalyzer();
//whitespace handling
assertAnalyzesTo(a,"foo bar Foo Bar",
new String[]{"foo","bar","foo","bar"});
assertAnalyzesTo(a,"foo bar ./ Foo ./ BAR",
new String[]{"foo","bar","foo","bar"});
assertAnalyzesTo(a,"foo ./ a bar such ,./<> FOO THESE BAR ",
new String[]{"foo","bar","foo","bar"});
assertAnalyzesTo(a,"張步峰是天大學(xué)生",
new String[]{"張","步","峰","是","天","大","學(xué)","生"});
//Verify that English punctuation is removed
assertAnalyzesTo(a,"張,/步/,峰,.是.,天大<>學(xué)生",
new String[]{"張","步","峰","是","天","大","學(xué)","生"});
//Verify that Chinese punctuation is removed
assertAnalyzesTo(a,"張。、步。、峰是。天大。學(xué)生",
new String[]{"張","步","峰","是","天","大","學(xué)","生"});
//Mixed Chinese/English input is not supported:
// assertAnalyzesTo(a,"我愛(ài)你 i love chanchan",
// new String[]{"我","愛(ài)","你","i","love","chanchan"});
}
public void testCJKAnalyzer() throws Exception {
//chedong's CJKAnalyzer matches StandardAnalyzer on English,
//but for Chinese it uses bigram splitting and does not filter out punctuation
Analyzer a = new CJKAnalyzer();
assertAnalyzesTo(a,"foo bar Foo Bar",
new String[]{"foo","bar","foo","bar"});
assertAnalyzesTo(a,"foo bar ./ Foo ./ BAR",
new String[]{"foo","bar","foo","bar"});
assertAnalyzesTo(a,"foo ./ a bar such ,./<> FOO THESE BAR ",
new String[]{"foo","bar","foo","bar"});
// assertAnalyzesTo(a,"張,/步/,峰,.是.,天大<>學(xué)生",
// new String[]{"張步","步峰","峰是","是天","天大","大學(xué)","學(xué)生"});
//assertAnalyzesTo(a,"張。、步。、峰是。天大。學(xué)生",
// new String[]{"張步","步峰","峰是","是天","天大","大學(xué)","學(xué)生"});
//Mixed Chinese/English input is supported
assertAnalyzesTo(a,"張步峰是天大學(xué)生 i love",
new String[]{"張步","步峰","峰是","是天","天大","大學(xué)","學(xué)生","i","love"});
}
public void testTjuChineseAnalyzer() throws Exception{
/**
* TjuChineseAnalyzer is quite powerful. For Chinese segmentation it calls the Java
* interface of ICTCLAS, so its Chinese performance matches ICTCLAS. For English it
* uses Lucene's StopAnalyzer, so it can remove stop words, ignore case, and filter
* out punctuation of all kinds.
*/
Analyzer a = new TjuChineseAnalyzer();
String input = "體育訊 在被尤文淘汰之后,皇馬主帥博斯克拒絕接受媒體對(duì)球隊(duì)后防線的批評(píng),同時(shí)還為自己排出的首發(fā)陣容進(jìn)行了辯護(hù)。"+
"“失利是全隊(duì)的責(zé)任,而不僅僅是后防線該受指責(zé),”博斯克說(shuō),“我并不認(rèn)為我們踢得一塌糊涂。”“我們進(jìn)入了半決賽,而且在晉級(jí)的道路上一路奮 "+
"戰(zhàn)。即使是今天的比賽我們也有幾個(gè)翻身的機(jī)會(huì),但我們面對(duì)的對(duì)手非常強(qiáng)大,他們踢得非常好。”“我們的球迷應(yīng)該為過(guò)去幾個(gè)賽季里我們?cè)诠谲姳械谋憩F(xiàn)感到驕傲。”"+
"博斯克還說(shuō)。對(duì)于博斯克在首發(fā)中排出了久疏戰(zhàn)陣的坎比亞索,賽后有記者提出了質(zhì)疑,認(rèn)為完全應(yīng)該將隊(duì)內(nèi)的另一 "+
"名球員帕文派遣上場(chǎng)以加強(qiáng)后衛(wèi)線。對(duì)于這一疑議,博斯克拒絕承擔(dān)所謂的“責(zé)任”,認(rèn)為球隊(duì)的首發(fā)沒(méi)有問(wèn)題。“我們按照整個(gè)賽季以來(lái)的方式做了,"+
"對(duì)于人員上的變化我沒(méi)有什么可說(shuō)的。”對(duì)于球隊(duì)在本賽季的前景,博斯克表示皇馬還有西甲聯(lián)賽的冠軍作為目標(biāo)。“皇家馬德里在冠軍 "+
"杯中戰(zhàn)斗到了最后,我們?cè)诼?lián)賽中也將這么做。”"+
"A Java User Group is a group of people who share a common interest in
Java technology and meet on a regular basis to share"+
" technical ideas and information. The actual structure of a JUG can
vary greatly - from a small number of friends and coworkers"+
" meeting informally in the evening, to a large group of companies based in the same geographic area. "+
"Regardless of the size and focus of a particular JUG, the sense of community spirit remains the same. ";
outputAnalyzer(a,input);
//I have already tested this with large texts; no problems, the results are good
outputAnalyzer(a,"我愛(ài)大家 ,,。 I love China 我喜歡唱歌 ");
assertAnalyzesTo(a,"我愛(ài)大家 ,,。I love China 我喜歡唱歌",
new String[]{"愛(ài)","大家","i","love","china","喜歡","唱歌"});
}
}