Differences between revisions 153 and 188 (spanning 35 versions)
Revision 153 as of 2012-01-25 20:51:04
Size: 3715
Editor: Hannah Bast
Comment:
Revision 188 as of 2016-07-15 15:54:13
Size: 6672
Editor: Hannah Bast
Comment:

CompleteSearch

Quick Intro

Follow these steps to checkout the CompleteSearch code from our SVN, compile it, build an index, run a server for that index, and ask queries to that server via HTTP. Don't be afraid, it's easy. If you have questions, send an email to Hannah Bast <bast@informatik.uni-freiburg.de>.

0. Get source code

svn checkout https://ad-svn.informatik.uni-freiburg.de/completesearch/codebase
Username: [ask us]
Password: [ask us]

Third-party code you need to install (some of it might already be installed):

sudo apt-get install g++                 # GNU C++ compiler (versions up to 4.6 are known to work).
sudo apt-get install zlib1g-dev          # Compression library.
sudo apt-get install libexpat1-dev       # Expat library for XML parsing.
sudo apt-get install libboost-all-dev    # Boost (http://www.boost.org).
sudo apt-get install libsparsehash-dev   # Google Hash Map (http://code.google.com/p/google-sparsehash).
sudo apt-get install libgtest-dev        # Google Test (http://code.google.com/p/googletest).
sudo apt-get install libstxxl-dev        # STXXL (http://stxxl.sourceforge.net).

Install it all at once (convenient for copy & paste):

sudo apt-get install g++ zlib1g-dev libexpat1-dev libboost-all-dev libsparsehash-dev libgtest-dev libstxxl-dev

1. Compile

Edit the Makefile and set CS_CODE_DIR to the absolute path of the folder containing the Makefile, then do:

make build-all

This will build three binaries: buildIndex, buildDocsDB, startCompletionServer.

If you call any of these binaries without parameters you will get usage info with all the available options.

2. Parse

Use our generic XML parser (see codebase/parser/XmlParserNewExampleMain.cpp for an example of its usage), our generic CsvParser (codebase/parser/CsvParserMain), or your own parser to produce the following two intermediate files.

2.1 The file <base-name>.docs, with lines of the form

<doc id><TAB>u:<url of document><TAB>t:<title of document><TAB>H:<raw text of document>

This file must be sorted such that sort -c -k1,1n does not complain. Here is a simple example (the multi-spaces are all TABs):

1       u:http://some.url.wherever/foo  t:First document        H: This is a stupid document.
2       u:http://some.url.wherever/bar  t:Second document       H: This is a boring document.

This file is the basis for the result snippet returned by CompleteSearch for a document id that matches a query; see below.
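As a concrete illustration, here is a minimal Python sketch that writes such a .docs file in the required tab-separated, id-sorted form. The file name and document contents are invented for this example:

```python
# Write a minimal <base-name>.docs file: one tab-separated line per
# document, sorted numerically by doc id so that "sort -c -k1,1n" passes.
docs = [
    (1, "http://some.url.wherever/foo", "First document", "This is a stupid document."),
    (2, "http://some.url.wherever/bar", "Second document", "This is a boring document."),
]

with open("example.docs", "w") as f:
    for doc_id, url, title, text in sorted(docs):
        f.write(f"{doc_id}\tu:{url}\tt:{title}\tH: {text}\n")
```

Sorting the records by numeric doc id before writing guarantees that the sort check above accepts the file.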

2.2 The file <base-name>.words, with lines of the form

<word><TAB><doc id><TAB><score><TAB><position>

This file must be sorted such that sort -c -k1,1 -k2,2n -k4,4n does not complain. Here is a simple example, matching the example above (again, the multi-spaces are all TABs):

a         1      1      5
a         2      1      5
boring    2      1      6
document  1      2      2
document  1      1      7
document  2      2      2
document  2      1      7
first     1      1      1
is        1      1      4
is        2      1      4
second    2      1      1
stupid    1      1      6
this      1      1      3
this      2      1      3

This file is the basis on which CompleteSearch determines which document ids match a given query.
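To make the format concrete, here is a minimal Python sketch that derives such word lines from the two example document texts. It is deliberately simplified: it indexes only the body text with a constant score of 1, so unlike the table above it does not index the title words (which is why its positions differ), but it produces the required sort order:

```python
import re

# Emit <word> TAB <doc id> TAB <score> TAB <position> lines, sorted by
# word, then doc id, then position, so "sort -c -k1,1 -k2,2n -k4,4n" passes.
docs = {1: "This is a stupid document.", 2: "This is a boring document."}

entries = []
for doc_id, text in docs.items():
    # Naive tokenization: lowercase alphabetic words, positions from 1.
    for pos, word in enumerate(re.findall(r"[A-Za-z]+", text), start=1):
        entries.append((word.lower(), doc_id, 1, pos))

entries.sort(key=lambda e: (e[0], e[1], e[3]))
word_lines = [f"{w}\t{d}\t{s}\t{p}" for w, d, s, p in entries]
```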

Usually the words in <base-name>.words are exactly those from the documents in <base-name>.docs. But it can also make sense to add other index words, and this is indeed done in many applications of CompleteSearch. For example, if we add the line dumb 1 1 6 to the <base-name>.words file above, then document 1 would also match the query dumb, even though the displayed text of the document does not contain the word dumb.
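The effect can be sketched with a toy inverted index in Python. This is a simplification for illustration only: CompleteSearch's real index is the HYB structure built in the next step, not a hash map:

```python
# Matching is decided by the .words lines alone. The extra entry
# ("dumb", doc 1) makes document 1 match the query "dumb" even though
# the displayed document text never contains that word.
word_lines = [
    ("stupid", 1), ("boring", 2), ("document", 1), ("document", 2),
    ("dumb", 1),  # artificially added index word
]

index = {}
for word, doc_id in word_lines:
    index.setdefault(word, set()).add(doc_id)

dumb_matches = sorted(index.get("dumb", set()))
```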

3. Build the word index

buildIndex HYB <base-name>.words

This produces the (binary) index file <base-name>.hybrid. It enables fast processing of the powerful query language offered by CompleteSearch (including full-text search, prefix search and completion, synonym search, error-tolerant search, etc.).

buildIndex also produces the file <base-name>.vocabulary, which provides the mapping from word ids to words. This is an ASCII file; you can just look at it.

Note that by default, the HYB index is built with blocks of fixed sizes. It is more efficient, though, to pass an explicit list of block boundaries (-B option). TODO: say something about this here, it's actually quite easy.

4. Build the doc index

buildDocsDB <base-name>.docs

This produces the (binary) file <base-name>.docs.DB which provides an efficient mapping from doc ids to documents. This is needed if you want to show excerpts / snippets from documents matching the query (which is almost always the case).

5. Start server

startCompletionServer -Z <base-name>.hybrid

This starts the CompletionServer. If you run it without argument, it prints usage information and shows you the (very many) command line options. The -Z argument lets the server run in the foreground, and output everything to the console, which is convenient for testing. The default mode is to run as a background process and write all output to a log file.

6. Queries

The server listens on the port you specified in step 5 (8888 by default), and speaks HTTP. For example:

curl "http://localhost:8888/?q=doc*&h=1&c=3"

This will return the result as XML, which should be self-explanatory.

Here is the list of parameters that you may pass along with the query (q=...):

  • h : number of hits
  • c : number of completions (of last query word, if you put a * behind it)
  • f : send hits starting from this one (default: 0)
  • en : number of excerpts per hit
  • er : excerpt radius (number of words to the left and right of matching words)
  • rd : how to rank the documents (0 = by score, 1 = by doc id, 2 = by word id, append a or d for ascending or descending)
  • rw : how to rank the words (0 = by score, 1 = by doc count, 2 = by occurrence count, 3 = by word id, 4 = by doc id, append a or d as above)
  • s : how to aggregate scores (expert option, ignore for the moment)
  • format : one of xml, json, jsonp. Return result in that format.
  • p : which of the fields specified via --show is returned for each hit (default: p=0).
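For scripted access, such a query URL can be assembled with Python's standard library. This sketch only builds the URL from the parameter names listed above; it does not contact a server, and host and port are the defaults assumed in this guide:

```python
from urllib.parse import urlencode

# Build a CompleteSearch query URL. safe="*" keeps the prefix-search
# star literal, as in the curl example above.
def query_url(q, host="localhost", port=8888, **params):
    return f"http://{host}:{port}/?" + urlencode({"q": q, **params}, safe="*")

url = query_url("doc*", h=1, c=3, format="json")
```

Fetching the resulting URL (e.g. with curl or urllib.request) requires a running CompletionServer.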

More detailed information

Here is the link to the old Wiki from the MPII. This contains lots of detailed information, but most of this is really for developers of the code. For building applications, the above should be enough.

CompleteSearch: FrontPage (last edited 2017-03-19 13:30:19 by Hannah Bast)