1. Multithreaded indexing, sharing a single IndexWriter object across threads
This approach turned out to be slow, mainly because addDocument() synchronizes on the writer, so concurrent threads are serialized at exactly this point:
    public void addDocument(Document doc, Analyzer analyzer) throws IOException {
      SegmentInfo newSegmentInfo = buildSingleDocSegment(doc, analyzer);
      synchronized (this) {
        ramSegmentInfos.addElement(newSegmentInfo); // this synchronized section is the bottleneck
        maybeFlushRamSegments();
      }
    }
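For context, the setup in question looks roughly like the sketch below: several worker threads all calling addDocument() on one shared IndexWriter. The class name and the loop counts are made up for illustration, and StandardAnalyzer stands in for whatever analyzer is actually used.

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;

    // Hypothetical SharedWriterIndexer: all threads share one IndexWriter.
    public class SharedWriterIndexer {
      public static void main(String[] args) throws Exception {
        final IndexWriter writer = new IndexWriter("/tmp/shared-index",
            new StandardAnalyzer(), true);
        Thread[] workers = new Thread[4];
        for (int t = 0; t < workers.length; t++) {
          workers[t] = new Thread() {
            public void run() {
              try {
                for (int i = 0; i < 1000; i++) {
                  Document doc = new Document();
                  doc.add(new Field("content", "some text to index",
                      Field.Store.NO, Field.Index.TOKENIZED));
                  // every call funnels through the synchronized block shown above
                  writer.addDocument(doc);
                }
              } catch (Exception e) {
                e.printStackTrace();
              }
            }
          };
          workers[t].start();
        }
        for (int t = 0; t < workers.length; t++) {
          workers[t].join();
        }
        writer.close();
      }
    }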
2. Multithreaded indexing: write to a RAMDirectory first, then flush to the FSDirectory in one batch
Behavior: documents are first added to a RAMDirectory; once 1000 documents have accumulated, they are merged into the FSDirectory.
When run with multiple threads, this throws java.lang.NullPointerException in large numbers.
The multithreaded indexing class I wrote (IndexWriterServer; the object is initialized only once, at server startup):
    public class IndexWriterServer {
      private static Logger logger = Logger.getLogger(IndexWriterServer.class);
      private static IndexWriter indexWriter = null;
      //private String indexDir; // index directory
      private static CJKAnalyzer analyzer = null;
      private static RAMDirectory ramDir = new RAMDirectory();
      private static IndexWriter ramWriter = null;
      private static int diskFactor = 0; // how many Documents are currently in RAM
      private static long ramToDistTime = 0; // time spent flushing RAM to disk
      private int initValue = 1000; // how many Documents to buffer in RAM before writing to disk
      private static IndexItem[] indexItems = null;

      public IndexWriterServer(String indexDir){
        initIndexWriter(indexDir);
      }

      public void initIndexWriter(String indexDir){
        boolean create = false; // whether to create a new index
        analyzer = new CJKAnalyzer();
        Directory directory = this.getDirectory(indexDir);
        // check whether an index already exists in this directory
        if(!IndexReader.indexExists(indexDir)){
          create = true;
        }
        indexWriter = getIndexWriter(directory, create);
        try{
          ramWriter = new IndexWriter(ramDir, analyzer, true);
        }catch(Exception e){
          logger.info(e);
        }
        indexItems = new IndexItem[initValue + 2];
      }
      /**
       * Build the index for a single item.
       */
      public boolean generatorItemIndex(IndexItem item, Current __current) throws DatabaseError, RuntimeError{
        boolean isSuccess = true; // whether indexing succeeded
        try{
          Document doc = getItemDocument(item);
          ramWriter.addDocument(doc); // key line: this is where the exception is thrown
          indexItems[diskFactor] = item; // kept for data mining
          diskFactor++;
          if((diskFactor % initValue) == 0){
            ramToDisk(ramDir, ramWriter, indexWriter);
            //ramWriter = new IndexWriter(ramDir, analyzer, true);
            diskFactor = 0;
            // data mining
            isSuccess = MiningData();
          }
          doc = null;
          logger.info("generator index item link:" + item.itemLink + " success");
        }catch(Exception e){
          logger.info(e);
          e.printStackTrace();
          logger.info("generator index item link:" + item.itemLink + " failure");
          isSuccess = false;
        }finally{
          item = null;
        }
        return isSuccess;
      }
      public void ramToDisk(RAMDirectory ramDir, IndexWriter ramWriter, IndexWriter writer){
        try{
          ramWriter.close(); // key line: this sets the RAMDirectory's fileMap to null
          // Rebuild a ramWriter because its fileMap is now null. Note that this only rebinds
          // the method parameter, not the static field, so it does not seem to help much.
          ramWriter = new IndexWriter(ramDir, analyzer, true);
          Directory ramDirArray[] = new Directory[1];
          ramDirArray[0] = ramDir;
          mergeDirs(writer, ramDirArray);
        }catch(Exception e){
          logger.info(e);
        }
      }

      /**
       * Write the index buffered in RAM out to disk.
       * @param writer
       * @param ramDirArray
       */
      public void mergeDirs(IndexWriter writer, Directory[] ramDirArray){
        try {
          writer.addIndexes(ramDirArray);
          //optimize();
        } catch (IOException e) {
          logger.info(e);
        }
      }
    }
The main cause appears to be this: when ramWriter.close() is called, RAMDirectory.close() in Lucene 2.1
    public final void close() {
      fileMap = null;
    }
sets fileMap to null. When another thread is still executing ramWriter.addDocument(doc), it eventually reaches this RAMDirectory method:
    public IndexOutput createOutput(String name) {
      RAMFile file = new RAMFile(this);
      synchronized (this) {
        RAMFile existing = (RAMFile)fileMap.get(name); // fileMap is null here, hence the NullPointerException
        if (existing != null) {
          sizeInBytes -= existing.sizeInBytes;
          existing.directory = null;
        }
        fileMap.put(name, file);
      }
      return new RAMOutputStream(file);
    }
Note: a web search suggests this is a known Lucene issue (http://www.opensubscriber.com/message/java-user@lucene.apache.org/6227647.html), but no solution appears to be given there.
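One possible workaround, sketched below (my own idea, not taken from that thread): notice that ramToDisk() only rebinds its ramWriter parameter and never the static field, so after a flush every thread keeps writing to the closed writer. The sketch re-creates both the RAMDirectory and its writer after each flush, and runs every add and every flush under the same lock, so no thread can see a directory whose fileMap has been nulled. The class and member names (BufferedIndexer, flushEvery, flushRamToDisk) are made up for illustration, and StandardAnalyzer stands in for the CJKAnalyzer used above.

    import java.io.IOException;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.RAMDirectory;

    // Hypothetical BufferedIndexer: every add and every flush runs under the same
    // monitor, so no thread can touch a RAMDirectory whose fileMap is already null.
    public class BufferedIndexer {
      private final IndexWriter fsWriter;   // writer on the real on-disk index
      private final Analyzer analyzer;
      private final int flushEvery;         // how many documents to buffer in RAM
      private RAMDirectory ramDir = new RAMDirectory();
      private IndexWriter ramWriter;
      private int buffered = 0;

      public BufferedIndexer(String indexDir, int flushEvery) throws IOException {
        this.analyzer = new StandardAnalyzer();   // CJKAnalyzer in the original setup
        this.flushEvery = flushEvery;
        this.fsWriter = new IndexWriter(indexDir, analyzer,
            !IndexReader.indexExists(indexDir));
        this.ramWriter = new IndexWriter(ramDir, analyzer, true);
      }

      public synchronized void addDocument(Document doc) throws IOException {
        ramWriter.addDocument(doc);
        if (++buffered >= flushEvery) {
          flushRamToDisk();
          buffered = 0;
        }
      }

      // called only while holding the object monitor
      private void flushRamToDisk() throws IOException {
        ramWriter.close();                                   // nulls ramDir's fileMap
        fsWriter.addIndexes(new Directory[] { ramDir });     // merge RAM segments into the FS index
        ramDir = new RAMDirectory();                         // discard the closed directory entirely
        ramWriter = new IndexWriter(ramDir, analyzer, true); // fresh writer on the fresh directory
      }

      public synchronized void close() throws IOException {
        flushRamToDisk();
        fsWriter.close();
      }
    }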
3. Multithreaded indexing with one IndexWriter per thread, each IndexWriter bound to its own FSDirectory, and each FSDirectory bound to its own (unique) local disk directory. A separate monitoring thread watches the indexing threads: when an indexing thread finishes its directory, it posts that directory to the monitor's global queue with queue.add(directory). Once queue.size() > 20, the monitoring thread merges those 20 index directories into the real index directory with indexWriter.addIndexes(dirs), and then deletes the directories that have been merged. (A sketch of this monitor thread appears after the error log below.)
But this approach also has a few problems:
a. The merge thread is slower than the indexing threads, so the number of pending directories keeps growing.
b. Errors like the following are reported frequently:
2007-06-08 10:49:18 INFO [Thread-2] (IndexWriter.java:1070) - java.io.FileNotFoundException: /home/spider/luceneserver/merge/item_d28686afe01f365c5669e1f19a2492c8/_1.cfs (No such file or directory)
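For reference, here is a minimal sketch of the monitor-thread arrangement described in point 3, under my own assumptions: the class and member names (MergeMonitor, submit, BATCH) are invented, and deleting the merged directories is only hinted at in a comment.

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import org.apache.lucene.analysis.cjk.CJKAnalyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;

    // Hypothetical monitor thread: indexing threads hand over their finished
    // directories via submit(); every 20 directories are merged into the main index.
    public class MergeMonitor extends Thread {
      private static final int BATCH = 20;
      private final BlockingQueue<Directory> queue = new LinkedBlockingQueue<Directory>();
      private final IndexWriter mainWriter;

      public MergeMonitor(String mainIndexDir) throws IOException {
        // assumes the main index already exists
        this.mainWriter = new IndexWriter(mainIndexDir, new CJKAnalyzer(), false);
      }

      // called by an indexing thread once its own directory is complete and closed
      public void submit(Directory finishedDir) {
        queue.add(finishedDir);
      }

      public void run() {
        List<Directory> batch = new ArrayList<Directory>();
        try {
          while (!isInterrupted()) {
            batch.add(queue.take());      // block until at least one directory arrives
            queue.drainTo(batch);         // grab whatever else is already waiting
            if (batch.size() >= BATCH) {
              Directory[] dirs = batch.toArray(new Directory[batch.size()]);
              mainWriter.addIndexes(dirs);   // merge into the real index
              // ...only now delete the merged directories on disk...
              batch.clear();
            }
          }
        } catch (InterruptedException e) {
          // shutting down
        } catch (IOException e) {
          e.printStackTrace();
        }
      }
    }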
4. Single-threaded indexing. After tuning a few parameters it is also quite fast (roughly 6-30 ms per document), and for ordinary requirements a single thread feels sufficient. The parameters are as follows (a sketch of how to apply them follows the list):
    private int mergeFactor = 100;      // how many segments accumulate on disk before an automatic merge
    private int maxMergeDocs = 1000;    // how many Documents accumulate in memory before being written to disk
    private int minMergeDocs = 1000;    // already removed in Lucene 2.0
    private int maxFieldLength = 2000;  // maximum field length to index
    private int maxBufferedDocs = 10000;// do not set this one, otherwise automatic merging no longer happens
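Applied to an IndexWriter, these settings would look roughly like the sketch below (the class name is made up; the values are the ones listed above, and setMaxBufferedDocs is deliberately left commented out, as noted):

    import java.io.IOException;
    import org.apache.lucene.analysis.cjk.CJKAnalyzer;
    import org.apache.lucene.index.IndexWriter;

    public class TunedWriterFactory {
      // Open a single-threaded writer with the parameters listed above.
      public static IndexWriter open(String indexDir, boolean create) throws IOException {
        IndexWriter writer = new IndexWriter(indexDir, new CJKAnalyzer(), create);
        writer.setMergeFactor(100);      // merge on disk after this many segments
        writer.setMaxMergeDocs(1000);
        writer.setMaxFieldLength(2000);  // index at most this many terms per field
        // writer.setMaxBufferedDocs(10000); // deliberately not set, see the note above
        return writer;
      }
    }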
The conclusion: Lucene's multithreaded indexing has its share of problems; unless you have special requirements, single-threaded indexing is fast enough.
If single-threaded speed does not meet your needs, you can run several application instances, each bound to its own FSDirectory, and then search across those index directories via RMI at query time.
Key code on the RMI server side:
    private void initRMI(){
      // first: security configuration
      if (System.getSecurityManager() == null) {
        System.setSecurityManager(new RMISecurityManager());
      }
      // register with the RMI registry
      startRMIRegistry(serverUrl);
      SearcherWork searcherWork = new SearcherWork("//" + serverUrl + "/" + bindName, directory);
      searcherWork.run();
    }
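startRMIRegistry() above is the author's own helper and is not shown; a minimal stand-in using the standard java.rmi API might look like this (assuming the default registry port, with serverUrl ignored):

    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;

    // Hypothetical stand-in for startRMIRegistry(serverUrl): start an in-process
    // registry on the default port so that Naming.rebind() has something to bind into.
    public class RegistryStarter {
      public static void startRMIRegistry(String serverUrl) {
        try {
          LocateRegistry.createRegistry(Registry.REGISTRY_PORT); // port 1099
        } catch (Exception e) {
          // a registry is probably already running on this port
        }
      }
    }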
    public class SearcherWork {
      // Logger
      private static Logger logger = Logger.getLogger(SearcherWork.class);
      private String serverUrl = null;
      private Directory directory = null;

      public SearcherWork(){
      }

      public SearcherWork(String serverUrl, Directory directory){
        this.serverUrl = serverUrl;
        this.directory = directory;
      }

      public void run(){
        try{
          Searchable searcher = new IndexSearcher(directory);
          SearchService service = new SearchService(searcher);
          Naming.rebind(serverUrl, service);
          logger.info("RMI Server bind " + serverUrl + " success");
        }catch(Exception e){
          logger.info(e);
          System.out.println(e);
        }
      }
    }
    public class SearchService extends RemoteSearchable implements Searchable {
      public SearchService(Searchable local) throws RemoteException {
        super(local);
      }
    }
Key code on the client side:
    RemoteLuceneConnector rlc = new RemoteLuceneConnector();
    RemoteSearchable[] rs = rlc.getRemoteSearchers();
    MultiSearcher multi = new MultiSearcher(rs);
    Hits hits = multi.search(new TermQuery(new Term("content", "中国")));
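RemoteLuceneConnector is the author's own helper and is not shown here; a minimal stand-in that looks up the bound Searchables with plain java.rmi.Naming might look like this (host names and bind names are made up):

    import java.rmi.Naming;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.Hits;
    import org.apache.lucene.search.MultiSearcher;
    import org.apache.lucene.search.Searchable;
    import org.apache.lucene.search.TermQuery;

    public class RemoteSearchClient {
      public static void main(String[] args) throws Exception {
        // look up the Searchable that each server bound via Naming.rebind()
        String[] urls = { "//indexhost1/searcher", "//indexhost2/searcher" };
        Searchable[] searchers = new Searchable[urls.length];
        for (int i = 0; i < urls.length; i++) {
          searchers[i] = (Searchable) Naming.lookup(urls[i]);
        }
        MultiSearcher multi = new MultiSearcher(searchers);
        Hits hits = multi.search(new TermQuery(new Term("content", "中国")));
        System.out.println("total hits: " + hits.length());
      }
    }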