Java Chronicle in action: our use case
Processing very large text files is not easy, even with all the "good guys" around: Hadoop, powerful machines, concurrency frameworks (beyond the standard Java concurrency utilities). Using any of them comes with a cost (money, time, or people with the necessary skills) that is not always negligible, and each comes with its own limitations. For example, if you have to validate part of the content against a third-party service, using Hadoop for that is a well-known anti-pattern.