Guess I want to hit this part once more, before leaving it alone.
All the solutions that depend on “parallel I/O” can only yield fast results when the file is already in memory. There are some (few) scenarios where you can expect the file contents to already be in the disk cache, but this is not the general case.
Do I really have to explain how horribly the “wide finder” examples that count on “parallel” I/O will degrade? If the file contents are not in the disk cache, attempting to read in “parallel” will force a lot of disk seeks. For test files - which are likely adjacent and unfragmented - the degradation from many small seeks will be significant. For real log files - accumulated over a long period of time, and therefore more likely non-adjacent and somewhat fragmented - the degradation from many longer seeks could prove profoundly painful.
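For concreteness, the pattern being criticized looks roughly like this - a hypothetical Python sketch of chunked “parallel” reading (the function names and chunking scheme are my own illustration, not any particular Wide Finder entry). Each worker seeks to its own offset in the file; on a cold cache, those competing seeks are exactly what thrashes a spinning disk:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def count_newlines_in_chunk(path, offset, size):
    # Each worker opens the file independently and seeks to its own
    # offset. With a cold cache, N workers issue N interleaved seek
    # streams against the same disk.
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size).count(b"\n")

def parallel_newline_count(path, workers=4):
    # Split the file into roughly equal byte ranges, one per worker.
    total = os.path.getsize(path)
    chunk = (total + workers - 1) // workers
    ranges = [(i * chunk, min(chunk, total - i * chunk))
              for i in range(workers) if i * chunk < total]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda r: count_newlines_in_chunk(path, *r),
                            ranges))
```

Counting newline bytes (rather than parsing lines) sidesteps the chunk-boundary problem, since every byte lands in exactly one range; but it does nothing about the seek problem, which is a property of the access pattern, not the parsing.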
Not a nice thing to do to your customers.