Fast index updating strategy needed...
markspoonSun Jun 16, 2013 02:08 AM
In our company we store customer files in a SQL database. For search performance, we want to mirror each file into a Lucene index: the master data (the customer's name and address) as well as the file status (deactivated, activated, protected).

Unfortunately, our database is updated frequently, and a DB commit is required after each record is processed. We have the requirement that the Lucene index be bound to the DB transaction, meaning we need to call commit() on the index after processing each document - which is very slow, since we have to update as many as 30,000 records nightly. Matters are complicated further by the fact that the file status can change independently of the master data, so we have to open an IndexReader/IndexSearcher before we call update for a single document on the index.
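To make the bottleneck concrete: the cost is dominated by the number of commit() calls, not the number of updates. Below is a toy sketch (ToyIndexWriter is a hypothetical stand-in, not the real org.apache.lucene.index.IndexWriter) contrasting the per-record commit pattern described above with committing once per batch, assuming the transaction-binding requirement could be relaxed to batch boundaries:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a Lucene IndexWriter: updateDocument() buffers
// a change, commit() makes everything buffered so far durable. In real
// Lucene, commit() fsyncs segment files and is the expensive operation.
class ToyIndexWriter {
    private final List<String> buffered = new ArrayList<>();
    int commitCount = 0;   // number of (expensive) commits issued
    int committedDocs = 0; // documents made durable so far

    void updateDocument(String id) { buffered.add(id); }

    void commit() {
        committedDocs += buffered.size();
        buffered.clear();
        commitCount++;
    }
}

public class BatchedCommits {
    public static void main(String[] args) {
        int records = 30_000;

        // Pattern from the post: one commit per processed record.
        ToyIndexWriter perRecord = new ToyIndexWriter();
        for (int i = 0; i < records; i++) {
            perRecord.updateDocument("doc-" + i);
            perRecord.commit();
        }

        // Alternative: commit once per batch of records.
        int batchSize = 1_000;
        ToyIndexWriter batched = new ToyIndexWriter();
        for (int i = 0; i < records; i++) {
            batched.updateDocument("doc-" + i);
            if ((i + 1) % batchSize == 0) batched.commit();
        }

        System.out.println(perRecord.commitCount); // 30000 expensive commits
        System.out.println(batched.commitCount);   // 30 commits, same docs
        System.out.println(batched.committedDocs); // 30000
    }
}
```

Both variants index the same 30,000 documents, but the batched variant issues 30 commits instead of 30,000. The open question is whether the DB-transaction binding really requires a durable Lucene commit per record, or whether a near-real-time reader plus periodic commits would satisfy it.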
Do you have any ideas on how we could meet these requirements with an acceptable index updating strategy?
Thanks in advance for any help!