Installing Mnogosearch
The search engine for your own website
Mnogosearch is a search engine that is installed as a Debian package at Hostsharing and can be used for your own website. The version installed at HS uses SQLite as its database and is well suited for sites of up to a few thousand pages and the kind of traffic that occurs on a shared web server.
This page describes the basic installation. Mnogosearch is a very extensive piece of software with a great many configuration options; for anything beyond the basics, please refer to the Mnogosearch documentation.
Installation
For the database and the configuration file, it is best to create a directory "mnogosearch" per domain and to create the configuration file indexer.conf there. The file can also be copied from /etc/mnogosearch/indexer.conf and then adapted accordingly.
# cd /home/doms/example.com
# mkdir mnogosearch
# cd mnogosearch
# touch indexer.conf
# edit indexer.conf
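If you would rather start from the system-wide configuration file mentioned above instead of an empty one, you can copy it into the new directory and trim it down afterwards; this is only an alternative to the touch step and uses the path named above:
# cd /home/doms/example.com/mnogosearch
# cp /etc/mnogosearch/indexer.conf .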
Contents of the basic indexer.conf
This is a heavily shortened version reduced to the bare essentials. Right at the beginning the database file is defined, and at the end the web server to be indexed:
###########################################################################
# DBAddr <URL-style database description>
# Options (type, host, database name, port, user and password)
# to connect to SQL database.
# Should be used before any other commands.
# Has global effect for whole config file.
# Format:
#DBAddr <DBType>:[//[DBUser[:DBPass]@]DBHost[:DBPort]]/DBName/[?dbmode=mode]

DBAddr sqlite://mnogo:@/home/doms/example.com/mnogosearch/mnogosearch.db/?dbmode=multi

# Default LocalCharset is iso-8859-1 (latin1).
# Full UNICODE
#
LocalCharset UTF-8

###########################################################################
# StopwordFile <filename>
# Load stop words from the given text file. You may specify either absolute
# file name or a name relative to mnoGoSearch /etc directory. You may use
# several StopwordFile commands.
#
StopwordFile stopwords/de.sl

#######################################################################
# HTTPHeader <header>
# You may add your desired headers in indexer HTTP request.
# You should not use "If-Modified-Since","Accept-Charset" headers,
# these headers are composed by indexer itself.
# "User-Agent: mnoGoSearch/version" is sent too, but you may override it.
# Command has global effect for all configuration file.
#
HTTPHeader "User-Agent: example.com-indexer"

##########################################################################
# Section 2.
# URL control configuration.
# Allow Configuration
# Examples
# Allow everything:
#Allow *
# Allow everything but .php .cgi .pl extensions case insensitively using regex:
#Allow NoMatch Regex \.php$|\.cgi$|\.pl$
# Allow .HTM extension case sensitively:
#Allow Case *.HTM

Allow *.html

##########################################################################
#Disallow [Match|NoMatch] [NoCase|Case] [String|Regex] <arg> [<arg> ... ]
#
# Examples:
# Disallow URLs that are not in udm.net domains using "string" match:
#Disallow NoMatch *.udm.net/*
# Disallow any except known extensions and directory index using "regex" match:
#Disallow NoMatch Regex \/$|\.htm$|\.html$|\.shtml$|\.phtml$|\.php$|\.txt$
# Exclude cgi-bin and non-parsed-headers using "string" match:
#Disallow */cgi-bin/* *.cgi */nph-*
# Exclude anything with '?' sign in URL. Note that '?' sign has a
# special meaning in "string" match, so we have to use "regex" match here:
#Disallow Regex \?
# Exclude some known extensions using fast "String" match:
Disallow *.b *.sh *.md5 *.rpm
Disallow *.arj *.tar *.zip *.tgz *.gz *.z *.bz2
Disallow *.lha *.lzh *.rar *.zoo *.ha *.tar.Z
Disallow *.gif *.jpg *.jpeg *.bmp *.tiff *.tif *.xpm *.xbm *.pcx
Disallow *.vdo *.mpeg *.mpe *.mpg *.avi *.movie *.mov *.wmv
Disallow *.mid *.mp3 *.rm *.ram *.wav *.aiff *.ra
Disallow *.vrml *.wrl *.png *.ico *.psd *.dat
Disallow *.exe *.com *.cab *.dll *.bin *.class *.ex_
Disallow *.tex *.texi *.xls *.doc *.texinfo
Disallow *.rtf *.pdf *.cdf *.ps
Disallow *.ai *.eps *.ppt *.hqx
Disallow *.cpt *.bms *.oda *.tcl
Disallow *.o *.a *.la *.so
Disallow *.pat *.pm *.m4 *.am *.css
Disallow *.map *.aif *.sit *.sea
Disallow *.m3u *.qt
# Exclude Apache directory list in different sort order using "string" match:
Disallow *D=A *D=D *M=A *M=D *N=A *N=D *S=A *S=D
# More complicated case. RAR .r00-.r99, ARJ a00-a99 files
# and UNIX shared libraries. We use "Regex" match type here:
Disallow Regex \.r[0-9][0-9]$ \.a[0-9][0-9]$ \.so\.[0-9]$

################################################################
# Section 3.
# Mime types and external parsers.
################################################################
#AddType [String|Regex] [Case|NoCase] <mime type> <arg> [<arg>...]
# This command associates filename extensions (for services
# that don't automatically include them) with their mime types.
# Currently "file:" protocol uses these commands.
# Use optional first two parameter to choose comparison type.
# Default type is "String" "NoCase" (case sensitive string match with
# '?' and '*' wildcards for one and several characters correspondingly).
#
AddType image/x-xpixmap *.xpm
AddType image/x-xbitmap *.xbm
AddType image/gif *.gif
AddType text/plain *.txt *.pl *.js *.h *.c *.pm *.e
AddType text/html *.html *.htm
AddType text/rtf *.rtf
AddType application/pdf *.pdf
AddType application/msword *.doc
AddType application/vnd.ms-excel *.xls
AddType text/x-postscript *.ps
# Default unknown type for other extensions:
AddType application/unknown *.*

# Use ParserTimeOut to specify amount of time for parser execution
# to avoid possible indexer hang.
ParserTimeOut 300

#######################################################################
# Section 5.
# Servers configuration.
#######################################################################
# Document sections.
#
# Format is:
#
# Section <string> <number> <maxlen> [clone] [sep] [{expr} {repl}]
#
# where <string> is a section name and <number> is section ID
# between 0 and 255. Use 0 if you don't want to index some of
# these sections. It is better to use different sections IDs
# for different documents parts. In this case during search
# time you'll be able to give different weight to each part
# or even disallow some sections at a search time.
# <maxlen> argument contains a maximum length of section
# which will be stored in database.
# "clone" is an optional parameter describing whether this
# section should affect clone detection. It can
# be "DetectClone" or "cdon", or "NoDetectClone" or "cdoff".
# By default, url.* section values are not taken in account
# for clone detection, while any other sections take part
# in clone detection.
# "sep" is an optional argument to specify a separator between
# parts of the same section. It is a space character by default.
# "expr" and "repl" can be used to extract user defined sections,
# for example pieces of text between the given tags. "expr" is
# a regular expression, "repl" is a replacement with $1, $2, etc
# meta-characters designating matches "expr" matches.

# Standard HTML sections: body, title
Section body 1 256
Section title 2 128

# META tags
# For example <META NAME="KEYWORDS" CONTENT="xxxx">
#
Section meta.keywords 3 128
Section meta.description 4 128

# HTTP headers example, let's store "Server" HTTP header
#
#
#Section header.server 5 64

# Document's URL parts
Section url.file 6 0
Section url.path 7 0
Section url.host 8 0
Section url.proto 9 0

# CrossWords
Section crosswords 10 0

#
# If you use CachedCopy for smart excerpts (see below),
# please keep Charset section active.
#
Section Charset 11 32
Section Content-Type 12 64
Section Content-Language 13 16

# Uncomment the following lines if you want tag attributes
# to be indexed
#Section attribute.alt 14 128
#Section attribute.label 15 128
#Section attribute.summary 16 128
#Section attribute.title 17 128
#Section attribute.face 27 0

# Uncomment the following lines if you want use NewsExtensions
# You may add any Newsgroups header to be indexed and stored in urlinfo table
#Section References 18 0
#Section Message-ID 19 0
#Section Parent-ID 20 0

# Uncomment the following lines if you want index MP3 tags.
#Section MP3.Song 21 128
#Section MP3.Album 22 128
#Section MP3.Artist 23 128
#Section MP3.Year 24 128

# Comment this line out if you don't want to store "cached copies"
# to generate smart excerpts at search time.
# Don't forget to keep "Charset" section active if you use cached copies.
# NOTE: 3.2.18 has limits for CachedCopy size, 32000 for Ibase and
# 15000 for Mimer. Other databases do not have limits.
# If indexer fails with 'string too long' error message then reduce
# this number. This will be fixed in the future versions.
#
Section CachedCopy 25 64000

# A user defined section example.
# Extract text between <h1> and </h1> tags:
#Section h1 26 128 "<h1>(.*)</h1>" $1

#########################################################################
#Server [Method] [SubSection] <URL> [alias]
# This is the main command of the indexer.conf file. It's used
# to describe web-space you want to index. It also inserts
# given URL into database to use it as a start point.
# You may use "Server" command as many times as a number of different
# servers or their parts you want to index.
# To index whole server "www.example.com":
Server http://www.example.com/
Be sure to replace example.com with your own domain name throughout the entire file!
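As an optional sanity check you can grep for any remaining occurrences of the placeholder; the command is only a suggestion and assumes you are in the mnogosearch directory created above:
# grep -n 'example.com' indexer.conf
If grep prints nothing, all occurrences have been replaced.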
Creating the database
The database is created with the indexer program. As with every invocation, indexer must of course be given the path to your own configuration file.
# indexer -Ecreate /home/doms/example.com/mnogosearch/indexer.conf
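If the command finishes without an error message, the SQLite database file defined in DBAddr should now exist. A quick directory listing (path and file name as configured above) confirms this:
# ls -l /home/doms/example.com/mnogosearch/mnogosearch.db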
First indexing run
# indexer -a /home/doms/example.com/mnogosearch/indexer.conf
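Once the first run has completed, you can print index statistics with the -S option (availability and output may vary between mnoGoSearch versions), and the index is usually kept current by re-running indexer -a at regular intervals:
# indexer -S /home/doms/example.com/mnogosearch/indexer.conf
To re-index every night at 03:30, for example, a crontab entry (edit with crontab -e) could look like the following line; the schedule is only an example, and if cron does not find indexer, use the full path shown by "which indexer":
30 3 * * * indexer -a /home/doms/example.com/mnogosearch/indexer.conf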