Monday, May 10, 2021

How do you handle frontend pagination in an offline-first application that consumes a paginated API?

I have asked this question multiple times over the past couple of years, on StackOverflow and elsewhere, and never got a satisfying answer.

Problem: a request to /articles returns something like

{ total: 1000, pageSize: 100, currentPage: 1, data: [ ...100 articles ] }

Easy enough: you can save such data in a local database/store indexed by page number.
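For the simple case, a minimal sketch of what "indexed by page number" looks like (the cache and function names here are my own, hypothetical ones; persistence to IndexedDB/localStorage is left out to keep it short):

// keep each fetched page under its page number
const pageCache = new Map();

async function getArticlesPage(page) {
  if (pageCache.has(page)) return pageCache.get(page);     // offline / cache hit
  const r = await (await fetch(`/articles?page=${page}`)).json();
  pageCache.set(page, r.data);                             // index the page by its number
  return r.data;
}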

The problem is when the data is updating rapidly (records inserted, others deleted). New records appended or prepended can completely change the content of a page, and you can end up with the same record existing on multiple pages. That forces us to invalidate the whole cache every time the user requests a certain page and the response doesn't match what we had cached before.

Say we have 1000 records cached across 10 pages: if a single record on page 3 was deleted, we have to throw away all 1000 records.
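To make the shift concrete, here is a small self-contained illustration (fake in-memory data, not the real API) of why one deletion invalidates every page after it:

// 1000 fake records, paged in blocks of 100 the way the server would do it
const records = Array.from({ length: 1000 }, (_, i) => ({ id: i + 1 }));
const pageOf = (list, page, pageSize = 100) =>
  list.slice((page - 1) * pageSize, page * pageSize);

const page4Before = pageOf(records, 4).map(r => r.id);     // ids 301..400
const afterDelete = records.filter(r => r.id !== 250);     // one record on page 3 deleted
const page4After  = pageOf(afterDelete, 4).map(r => r.id); // ids 302..401, everything shifted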

I) Using offset/limit: e.g. the user has page 1, fetched via /resource?limit=100&page=1. Now if the user requests page 3, skipping page 2, and 10 new records were added on the server in the meantime, every page boundary has shifted by 10, so records end up duplicated across the pages you cache.
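The usual band-aid, if you stay with offset/limit, is to de-duplicate by id when merging a fresh page into the cache. A sketch (the helper name is mine); it hides the duplicates but doesn't fix the shifted page boundaries:

function mergePage(cachedArticles, fetchedPage) {
  const seen = new Set(cachedArticles.map(a => a.id));
  const fresh = fetchedPage.filter(a => !seen.has(a.id));  // drop the shifted duplicates
  return [...cachedArticles, ...fresh];
}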

II) Using markers (max_id/min_id, like Twitter does) has its restrictions too, especially if you use numbered pages rather than infinite loading: if the user only has page 1 in cache and requests page 3, for example, you don't know what marker to send.
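For reference, this is roughly what the marker-based request looks like (a sketch; the endpoint and parameter names are assumed). Fetching the next chunk needs the last id of the chunk before it, which is exactly what we don't have when the user jumps from cached page 1 straight to page 3:

async function fetchOlderThan(lastSeenId, limit = 100) {
  const url = lastSeenId
    ? `/articles?limit=${limit}&max_id=${lastSeenId}`      // continue from a known marker
    : `/articles?limit=${limit}`;                          // first page only
  return (await fetch(url)).json();
}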


The solution I had in mind is storing the data in one big array, inserting null/undefined placeholders for pages that haven't been loaded yet. Every time the user loads a certain page, we detect any shift in the records and insert/remove nulls to keep the array in the correct order; on the frontend we simply use [...offlineStore].slice((page - 1) * pageSize, page * pageSize).

Example:

// given the response r = { total: 1000, currentPage: 1, pageSize: 100, data: [...records] }
const firstIndex = (r.currentPage - 1) * r.pageSize;
const lastIndex = firstIndex + r.data.length;
// sparse array covering the whole collection; slots outside the loaded page stay null
const offlineStore = Array.from({ length: r.total }, (_, idx) =>
  idx >= firstIndex && idx < lastIndex ? r.data[idx - firstIndex] : null);
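Reading a page back out of the sparse store, and folding a later response into it, would then look something like this (still a sketch under the same assumptions; unloaded slots stay null):

const getPage = (store, page, pageSize) =>
  store.slice((page - 1) * pageSize, page * pageSize);

function mergeResponse(store, r) {
  const first = (r.currentPage - 1) * r.pageSize;
  const next = [...store];
  r.data.forEach((record, i) => { next[first + i] = record; });  // overwrite the nulls for this page
  return next;
}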

The problem is that the logic to detect data displacement caused by new records on the server, and to keep the user from seeing the same records on different pages, is fairly complex and bug-prone.
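Roughly what I mean by "detect displacement" (a hypothetical helper, and it only covers the prepend case): line the fresh page up against what is cached for the same slot and see how far the first cached id has moved.

function detectShift(cachedSlice, freshPage) {
  const firstCached = cachedSlice.find(x => x != null);
  if (!firstCached) return 0;                              // nothing cached for this slot yet
  const pos = freshPage.findIndex(rec => rec.id === firstCached.id);
  return pos === -1 ? null : pos;  // null = can't tell locally, safer to refetch; otherwise shift size
}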

What would you suggest as a pagination design pattern that works for both the infinite-scroll and the numbered-pages scenarios?
