I have a large file containing two million lines. I want to iterate through each line of the file, process it into a key-value pair, and store it in a hashmap so that I can make comparisons later on. However, I don't want a hashmap holding 2 million key-value pairs, in the interest of space. Instead, I would like to iterate through N lines of the file, load their key-value pairs into the hashmap, make comparisons, then load the next N lines into the hashmap, and so on.
An example of the use case:
File.txt:
1 Jack London
2 Mary Boston
3 Jay Chicago
4 Mia Amsterdam
5 Leah New York
6 Bob Denver
.
.
.
Assuming N=3 as the size of my hashmap, on the first iteration the hashmap would store key-value pairs for the first three lines of the file, i.e.:
1 Jack London
2 Mary Boston
3 Jay Chicago
After making comparisons on these key-value pairs, the next 3 lines are loaded into the hashmap:
4 Mia Amsterdam
5 Leah New York
6 Bob Denver
and so on, until all the lines in the file have been iterated over. How do I implement this using the iterator design pattern in Java?
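One way this could look is a custom Iterator whose next() lazily reads at most N lines from the file and returns them as a map, so only one chunk is in memory at a time. The following is a minimal sketch under my own assumptions: the class name ChunkedMapIterator, the chunkSize parameter, and the parsing of each line as an integer key followed by the rest of the line (e.g. "1 Jack London") are illustrative, not something fixed by the problem.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;

// Hypothetical sketch: yields at most chunkSize key-value pairs per call to next(),
// reading the file lazily so only one chunk is ever held in memory.
public class ChunkedMapIterator implements Iterator<Map<Integer, String>>, AutoCloseable {
    private final BufferedReader reader;
    private final int chunkSize;
    private String nextLine;

    public ChunkedMapIterator(Path file, int chunkSize) throws IOException {
        this.reader = Files.newBufferedReader(file);
        this.chunkSize = chunkSize;
        this.nextLine = reader.readLine(); // pre-read so hasNext() stays cheap
    }

    @Override
    public boolean hasNext() {
        return nextLine != null;
    }

    @Override
    public Map<Integer, String> next() {
        if (nextLine == null) {
            throw new NoSuchElementException();
        }
        Map<Integer, String> chunk = new HashMap<>();
        try {
            while (nextLine != null && chunk.size() < chunkSize) {
                // Split "1 Jack London" into key 1 and value "Jack London"
                String[] parts = nextLine.split("\\s+", 2);
                chunk.put(Integer.parseInt(parts[0]), parts.length > 1 ? parts[1] : "");
                nextLine = reader.readLine();
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return chunk;
    }

    @Override
    public void close() throws IOException {
        reader.close();
    }

    public static void main(String[] args) throws IOException {
        // Usage: each batch holds at most 3 entries; the previous batch becomes
        // eligible for garbage collection once the next one is loaded.
        try (ChunkedMapIterator it = new ChunkedMapIterator(Path.of("File.txt"), 3)) {
            while (it.hasNext()) {
                Map<Integer, String> batch = it.next();
                System.out.println(batch); // make comparisons on this batch here
            }
        }
    }
}

Whether the map should be keyed by the leading integer or by something else depends on what the comparisons actually need; the split logic above is only one guess at the line format.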