CompleteSearch
Old Wiki from the MPII (lots of detailed / internal information)
Quick Intro
Follow these steps to check out the CompleteSearch code from our SVN, build it, build an index, run a server on that index, and send queries to that server via HTTP. Don't be afraid, it's easy. If you have questions, send an email to <bast@informatik.uni-freiburg.de>.
0. Get source code
svn checkout http://vulcano.informatik.uni-freiburg.de/svn/completesearch/codebase
Username: [ask us]
Password: [ask us]
1. Compile
make all
This will build three binaries:
buildIndex
buildDocsDB
startCompletionServer
If you call any of these binaries without parameters, you will get usage info listing all the available options.
2. Input (to be produced by a suitable parser)
A <name>.words file, with lines of the form
<word><TAB><doc id><TAB><score><TAB><position>
Must be sorted so that sort -c -k1,1 -k2,2n -k4,4n does not complain.
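The .words format and its required sort order can be illustrated with a tiny hand-written file (the words, doc ids, scores, and positions here are made up for illustration):

```shell
# Create a tiny example.words file: word, doc id, score, position,
# separated by TABs (printf expands \t to a real TAB character).
printf 'information\t1\t5\t3\n' >  example.words
printf 'information\t2\t4\t1\n' >> example.words
printf 'retrieval\t1\t5\t4\n'   >> example.words

# Verify the required sort order: by word, then doc id, then position.
sort -c -k1,1 -k2,2n -k4,4n example.words && echo "sort order OK"
```

If the file is out of order, sort -c exits with a non-zero status and reports the first offending line.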
And a <name>.docs file, with lines of the form
<doc id><TAB>u:<url of document><TAB>t:<title of document><TAB>H:<raw text of document>
Must be sorted so that sort -c -k1,1n does not complain.
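A matching minimal .docs file looks like this (the URLs, titles, and texts are again made up for illustration):

```shell
# Create a tiny example.docs file: doc id, then the u: (url), t: (title),
# and H: (raw text) fields, separated by TABs.
printf '1\tu:http://example.com/a\tt:First document\tH:Some raw text here.\n'  >  example.docs
printf '2\tu:http://example.com/b\tt:Second document\tH:More raw text here.\n' >> example.docs

# Verify the required sort order: numerically by doc id.
sort -c -k1,1n example.docs && echo "sort order OK"
```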
You can find a very simple example at http://www.mpi-inf.mpg.de/~bast/topsecret/example.tgz
3. Build the word index
buildIndex HYB <name>.words
This produces the main index file <name>.hybrid needed for prefix search (a binary file). It also produces the file <name>.vocabulary, which provides the mapping from word ids to words (an ASCII file; you can just look at it).
Note that by default, HYB is built with blocks of fixed size. It is more efficient, though, to pass an explicit list of block boundaries (-B option). Let's talk about this more when efficiency becomes an issue for you.
4. Build the doc index
buildDocsDB <name>.docs
This produces the file <name>.docs.DB which provides efficient mapping from doc ids to documents. Needed if you want to show excerpts/snippets from documents matching the query.
5. Start server
startCompletionServer -Z <name>.hybrid
This starts the server. If you run it without arguments, it prints usage information. The -Z argument makes the server run in the foreground and print everything to the console, which is convenient for testing.
6. Queries
The server listens on the port you specified in step 5 (8888 by default) and speaks HTTP. For example:
curl "http://localhost:8888/?q=die*&h=1&c=3"
This will return the result as XML, which should be self-explanatory.
Here is the list of parameters which you may pass along with the query (q=...)
h  : number of hits
c  : number of completions (of last query word, if you put a * behind it)
f  : send hits starting from this one (default: 0)
en : number of excerpts per hit
er : size of excerpt
rd : how to rank the documents (0 = by score, 1 = by doc id, 2 = by word id; append a or d for ascending or descending)
rw : how to rank the words (0 = by score, 1 = by doc count, 2 = by occurrence count, 3 = by word id, 4 = by doc id; append a or d as above)
s  : how to aggregate scores (expert option, ignore for the moment)
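As a sketch of how these parameters combine (the query word inform* is made up for illustration; this assumes the server from step 5 is running on the default port 8888):

```shell
# Ask for 10 hits and 5 completions of "inform*", with documents
# ranked by score in descending order (rd=0d).
url="http://localhost:8888/?q=inform*&h=10&c=5&rd=0d"
curl "$url"
```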
Lots of documentation at http://search.mpi-inf.mpg.de/wiki/CompleteSearch