Redux guidelines suggest thinking of an app's state as a database and preferring key-based objects over arrays when storing resources. This makes total sense, since it simplifies 99% of the most common use cases when dealing with collections: search, find, add, remove, read...
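For instance, a minimal sketch of such a key-based shape (the `users` slice and the sample IDs are made up for illustration):

```js
// Hypothetical normalized "users" slice: resources keyed by id.
const state = {
  users: {
    '1234': { id: '1234', name: 'Ada', status: 'active' },
    '5678': { id: '5678', name: 'Bob', status: 'inactive' },
  },
};

// Find: a plain property access instead of an array scan.
const user = state.users['1234'];

// Add or update: spread the slice and overwrite one key.
const withEve = {
  ...state.users,
  '9012': { id: '9012', name: 'Eve', status: 'active' },
};

// Remove: destructure the key away.
const { '1234': removed, ...withoutAda } = state.users;
```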
Unfortunately the downsides show up when it comes to keeping a filterable and sortable collection of resources in sync with API responses. For example, a typical request:
GET /users?status=active&orderBy=name&orderDir=asc&lastID=1234&limit=10
will return a filtered, sorted and paged list (array) of users. Typically the reducer then takes this array and does something like:
users: {...state.users, ...keyBy(action.payload, 'id')}
This will merge the new data with the previously fetched data, breaking the computation already done by the API. The app must then perform a second, client-side computation on the collection to reconstruct the expected list (see the sketch after this list). This results in:
- redundant computation (redoing work the server has already done)
- duplication of logic (the same filtering and sorting code deployed both client-side and server-side)
- maintenance cost (client developers take on the extra burden of keeping the filter and sort logic in sync every time it changes on the backend, to guarantee consistency)
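A minimal sketch of that situation, assuming a lodash-style `keyBy` and a selector that mirrors the server's query logic (the action type and field names are hypothetical):

```js
import { keyBy } from 'lodash';

// Reducer: merging the keyed payload discards the order and the
// filtering computed by the server.
function usersReducer(state = {}, action) {
  switch (action.type) {
    case 'USERS_LOADED':
      return { ...state, ...keyBy(action.payload, 'id') };
    default:
      return state;
  }
}

// Selector: the client re-applies the very same query the API
// already answered (?status=active&orderBy=name&orderDir=asc).
function selectActiveUsersByName(state) {
  return Object.values(state.users)
    .filter((user) => user.status === 'active')
    .sort((a, b) => a.name.localeCompare(b.name));
}
```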
Another downside, if you are implementing some sort of infinite loading, is that you must keep track of the lastID yourself, since there is no way to deduce the last loaded ID once the results have been merged into the keyed object.
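To illustrate, a sketch (again with hypothetical action and field names) of what the slice ends up tracking explicitly:

```js
import { keyBy } from 'lodash';

const initialState = {
  byId: {},     // users keyed by id, as above
  lastID: null, // cursor for the ?lastID=... of the next page request
};

function usersReducer(state = initialState, action) {
  switch (action.type) {
    case 'USERS_LOADED': {
      const page = action.payload; // the array returned by the API
      return {
        byId: { ...state.byId, ...keyBy(page, 'id') },
        // Remember the cursor before the order is lost in the merge.
        lastID: page.length ? page[page.length - 1].id : state.lastID,
      };
    }
    default:
      return state;
  }
}
```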
So the question:
What's the best approach to designing stores and reducers that must deal with sorted/filtered/paged data fetched via APIs?