Differences between revisions 44 and 185 (spanning 141 versions)
Revision 44 as of 2007-10-20 12:50:26
Size: 1500
Editor: p54A5E4BF
Comment:
Revision 185 as of 2012-02-28 15:38:20
Size: 6564
Editor: Hannah Bast
Comment:
#acl All:read
= CompleteSearch =
== Quick Intro ==
Follow these steps to check out the !CompleteSearch code from our SVN, compile it, build an index, run a server for that index, and ask queries to that server via HTTP. Don't be afraid, it's easy. If you have questions, send an email to Hannah Bast <bast@informatik.uni-freiburg.de>.
=== 0. Get source code ===
{{{
svn checkout https://ad-svn.informatik.uni-freiburg.de/completesearch/codebase
Username: [ask us]
Password: [ask us]
}}}
Third-party code you need to install (some of it might already be installed):
{{{
sudo apt-get install g++ # GNU C++ compiler (versions up to 4.6 are known to work).
sudo apt-get install zlib1g-dev # Compression library.
sudo apt-get install libexpat1-dev # Expat library for XML parsing.
sudo apt-get install libboost-all-dev # Boost (http://www.boost.org).
sudo apt-get install libsparsehash-dev # Google Hash Map (http://code.google.com/p/google-sparsehash).
sudo apt-get install libgtest-dev # Google Test (http://code.google.com/p/googletest).
sudo apt-get install libstxxl-dev # STXXL (http://stxxl.sourceforge.net).
}}}

Install it all at once (convenient for copy & paste):

{{{
sudo apt-get install g++ zlib1g-dev libexpat1-dev libboost-all-dev libsparsehash-dev libgtest-dev libstxxl-dev
}}}

=== 1. Compile ===

Edit the Makefile and set ''CS_CODE_DIR'' to the absolute path of the folder containing the Makefile, then do:

{{{
make build-all
}}}

This will build three binaries: ''buildIndex'', ''buildDocsDB'', ''startCompletionServer''.

If you call any of these binaries without parameters you will get usage
info with all the available options.

=== 2. Parse ===

Use our generic XML Parser (see ''codebase/parser/XmlParserNewExampleMain.cpp'' for an example usage)
or our generic CSV Parser (''codebase/parser/CsvParserMain'') or your own parser to produce the following
two intermediate files.

'''2.1''' The file ''<base-name>.docs'', with lines of the form

{{{
<doc id><TAB>u:<url of document><TAB>t:<title of document><TAB>H:<raw text of document>
}}}

This file must be sorted such that ''sort -c -k1,1n'' does not complain.
Here is a simple example (the multi-spaces are all TABs):

{{{
1 u:http://some.url.wherever/foo t:First document H: This is a stupid document.
2 u:http://some.url.wherever/bar t:Second document H: This is a boring document.
}}}

This file is the basis for the result snippet returned by !CompleteSearch for a document id that matches a query; see below.
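As a quick sanity check, the example above can be reproduced and verified from the shell (a sketch; the file name ''example.docs'' is our own choice, not prescribed by !CompleteSearch):

{{{
# Write the two example lines with real TABs and verify the required sort order.
printf '1\tu:http://some.url.wherever/foo\tt:First document\tH: This is a stupid document.\n'  > example.docs
printf '2\tu:http://some.url.wherever/bar\tt:Second document\tH: This is a boring document.\n' >> example.docs
sort -c -k1,1n example.docs && echo "example.docs is sorted correctly"
}}}

''sort -c'' prints nothing and exits with status 0 when the file is in order, so the ''echo'' only runs on success.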
'''2.2''' The file ''<base-name>.words'', with lines of the form
{{{
<word><TAB><doc id><TAB><score><TAB><position>
}}}
This file must be sorted such that ''sort -c -k1,1 -k2,2n -k4,4n'' does not complain. Here is a
simple example, matching the example above (again multi-spaces are all TABs):
{{{
a 1 1 5
a 2 1 5
boring 2 1 6
document 1 2 2
document 1 1 7
document 2 2 2
document 2 1 7
first 1 1 1
is 1 1 4
is 2 1 4
second 2 1 1
stupid 1 1 6
this 1 1 3
this 2 1 3
}}}
This file is the basis on which !CompleteSearch determines which document ids match a given query.
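The required sort order can again be checked from the shell (a sketch; ''example.words'' and the three sample lines are our own abridged version of the table above):

{{{
# Write a few of the example lines with real TABs and verify the sort order:
# word ascending, then doc id numerically, then position numerically.
printf 'a\t1\t1\t5\na\t2\t1\t5\nboring\t2\t1\t6\n' > example.words
sort -c -k1,1 -k2,2n -k4,4n example.words && echo "example.words is sorted correctly"
}}}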
Usually, the words in ''<base-name>.words'' are exactly those occurring in the documents in ''<base-name>.docs''. But it can also make sense to add other index words, and many applications of !CompleteSearch indeed do this. For example, if we add the line ''dumb 1 1 6'' to the ''<base-name>.words'' file above, then document 1 would also match the query ''dumb'', even though the text displayed for that document does not contain the word ''dumb''.
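Adding such an artificial index word and restoring the sort order might look like this (a sketch; ''example.words'' is our assumed file name, and ''sort -o'' rewrites the file in place):

{{{
# Append the artificial index word "dumb" for document 1 (score 1, position 6),
# then re-sort the file in place so the order required by buildIndex holds again.
printf 'dumb\t1\t1\t6\n' >> example.words
sort -k1,1 -k2,2n -k4,4n -o example.words example.words
sort -c -k1,1 -k2,2n -k4,4n example.words && echo "still sorted"
}}}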
=== 3. Build the word index ===
{{{
buildIndex HYB <base-name>.words
}}}
This produces the (binary) index file ''<base-name>.hybrid''.
It enables fast processing of the powerful query language offered by !CompleteSearch (including
full-text search, prefix search and completion, synonym search, error-tolerant search,
etc.).
''buildIndex'' also produces the file ''<base-name>.vocabulary'', which provides
the mapping from word ids to words. This is a plain ASCII file; you can simply look at it.
Note that by default, the HYB index is built with blocks of fixed sizes. It is more
efficient, though, to pass it an explicit list of block boundaries (''-B''
option). TODO: say something about this here, it's actually quite easy.
=== 4. Build the doc index ===
{{{
buildDocsDB <base-name>.docs
}}}
This produces the (binary) file ''<base-name>.docs.DB'', which provides an efficient mapping
from doc ids to documents. This is needed if you want to show excerpts / snippets
from documents matching the query (which is almost always the case).
=== 5. Start server ===
{{{
startCompletionServer -Z <base-name>.hybrid
}}}

This starts the server. If you run it without arguments, it prints usage
information and shows you the (very many) command line options. The ''-Z''
argument makes the server run in the foreground and print everything to the
console, which is convenient for testing. By default, it runs as a background
process and writes all output to a log file.

=== 6. Queries ===

The server listens on the port you specified in step 5 (''8888'' by
default), and speaks ''HTTP''. For example:

{{{
curl "http://localhost:8888/?q=doc*&h=1&c=3"
}}}

This returns the result as XML, which should be self-explanatory.

Here is the list of parameters you may pass along with the query (''q=...''):

 * h : number of hits
 * c : number of completions (of last query word, if you put a * behind it)
 * f : send hits starting from this one (default: 0)
 * en : number of excerpts per hit
 * er : excerpt radius (number of words to the left and right of matching words)
 * rd : how to rank the documents (0 = by score, 1 = by doc id, 2 = by word id, append a or d for ascending or descending)
 * rw : how to rank the words (0 = by score, 1 = by doc count, 2 = by occurrence count, 3 = by word id, 4 = by doc id, append a or d as above)
 * s : how to aggregate scores (expert option, ignore for the moment)
 * format : one of xml, json, jsonp. Return result in that format.
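Putting a few of these parameters together (a sketch; it assumes the server from step 5 is running on the default port 8888, otherwise ''curl'' just reports a connection error):

{{{
# Ask for 2 hits, 3 completions, an excerpt radius of 4, and JSON output.
PORT=8888
URL="http://localhost:${PORT}/?q=doc*&h=2&c=3&er=4&format=json"
echo "$URL"                                  # the request we are about to send
curl -s "$URL" || echo "no server running on port ${PORT}?"
}}}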

== More detailed information ==

Here is the link to the [[MpiiWiki|old Wiki from the MPII]]. This contains lots of detailed information, but most of this is really for developers of the code. For building applications, the above should be enough.

CompleteSearch: FrontPage (last edited 2017-03-19 13:30:19 by Hannah Bast)