Is there a way to limit the size of an index?
This is a slightly modified answer from Doug Cutting:
The easiest thing is to set IndexWriter.maxMergeDocs.
If, for instance, you hit the 2GB limit at 8M documents, set maxMergeDocs to 7M. That will keep Lucene from trying to merge an index that won't fit in your filesystem. In effect, Lucene rounds this value down to the next lower power of IndexWriter.mergeFactor.
So with the default mergeFactor of 10 and maxMergeDocs set to 7M, Lucene will generate a series of 1M-document indexes, since merging 10 of those would exceed the maximum.
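The rounding behaviour can be illustrated with a small arithmetic sketch. Note that `effectiveSegmentLimit` is a hypothetical helper written for this answer, not a Lucene API; it just computes the largest power of mergeFactor that does not exceed maxMergeDocs:

```java
public class SegmentMath {
    // Hypothetical helper: find the largest power of mergeFactor
    // that is <= maxMergeDocs. This mirrors how the merge policy
    // effectively rounds maxMergeDocs down, as described above.
    static long effectiveSegmentLimit(long maxMergeDocs, int mergeFactor) {
        long size = 1;
        while (size * mergeFactor <= maxMergeDocs) {
            size *= mergeFactor;
        }
        return size;
    }

    public static void main(String[] args) {
        // With mergeFactor 10 and maxMergeDocs 7M, segments top out at 1M docs.
        System.out.println(effectiveSegmentLimit(7_000_000L, 10)); // 1000000
    }
}
```

This is why the example above produces 1M-document indexes: 1M is the largest power of 10 that fits under the 7M cap, and merging ten 1M-document segments would overshoot it.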
A slightly more complex solution:
You could further reduce the number of segments: once you've added 7M documents, optimize the index and start a new one. Then use MultiSearcher to search across the indexes.
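A sketch of that approach, using the Lucene API of the era this answer dates from (string-path constructors, `optimize()`, `MultiSearcher`); the directory names are made up for illustration:

```java
import java.io.IOException;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MultiSearcher;
import org.apache.lucene.search.Searchable;
import org.apache.lucene.search.Searcher;

public class RollingIndexes {
    public static void main(String[] args) throws IOException {
        // Fill the first index up to the 7M-document cap...
        IndexWriter writer = new IndexWriter("index-0", new StandardAnalyzer(), true);
        // ... addDocument() calls here ...
        writer.optimize();  // merge the finished index down to a single segment
        writer.close();

        // ...then start a fresh index and keep going.
        writer = new IndexWriter("index-1", new StandardAnalyzer(), true);
        // ... more addDocument() calls ...
        writer.close();

        // At search time, treat the sub-indexes as one logical index.
        Searchable[] subSearchers = {
            new IndexSearcher("index-0"),
            new IndexSearcher("index-1"),
        };
        Searcher searcher = new MultiSearcher(subSearchers);
    }
}
```

Because each finished index is optimized down to one segment before being frozen, the total segment count stays near the number of sub-indexes.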
An even more complex and optimal solution:
Write a version of FSDirectory that, when a file exceeds 2GB, creates a subdirectory and represents that file as a series of smaller files.
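The core of that idea can be sketched without Lucene. This is not Lucene's actual FSDirectory; it is a minimal, self-contained writer that splits one logical file into fixed-size chunk files (`name.0`, `name.1`, ...) once a size limit is reached. In a real Directory implementation the limit would be just under 2GB; here it is a parameter so the behaviour is easy to demonstrate:

```java
import java.io.Closeable;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch: present one logical file as a series of chunk files on disk.
class ChunkedFileWriter implements Closeable {
    private final File dir;
    private final String name;
    private final long chunkSize;   // max bytes per chunk file (~2GB in practice)
    private long written;           // bytes written to the current chunk
    private int chunkIndex;
    private OutputStream out;

    ChunkedFileWriter(File dir, String name, long chunkSize) throws IOException {
        this.dir = dir;
        this.name = name;
        this.chunkSize = chunkSize;
        openNextChunk();
    }

    // Close the current chunk (if any) and open "name.<index>".
    private void openNextChunk() throws IOException {
        if (out != null) {
            out.close();
        }
        out = new FileOutputStream(new File(dir, name + "." + chunkIndex++));
        written = 0;
    }

    // Write bytes, rolling over to a new chunk whenever the limit is hit.
    void write(byte[] data) throws IOException {
        int off = 0;
        while (off < data.length) {
            if (written == chunkSize) {
                openNextChunk();
            }
            int n = (int) Math.min(data.length - off, chunkSize - written);
            out.write(data, off, n);
            off += n;
            written += n;
        }
    }

    public void close() throws IOException {
        out.close();
    }
}
```

A matching reader would seek by computing `chunkIndex = position / chunkSize` and `offset = position % chunkSize`; wiring both into Lucene's Directory input/output abstractions is the remaining work.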