My web app works with my own server, which stores only the ids of some Things, and with a Third Party Service (TPS) API that provides "thing details" (say, a text title) for a given id.
The API has limits: at most 3 requests per second and at most 25 ids per request.
So I need an intermediate dispatcher that accepts requests for details by id from various points in the app and queues them while respecting the API limits.
Is there some good pattern for that?
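For concreteness, this is roughly the batched call I assume the TPS exposes (the URL, parameter name, and response shape below are invented for illustration; the real endpoint just takes a list of ids and returns their details):

```javascript
// Hypothetical batched TPS call (URL and parameter name invented for illustration).
// Takes up to 25 ids, returns a jQuery promise resolving to an array of
// detail objects like { id: 42, title: "..." }.
function fetchDetails(ids) {
  return $.getJSON('https://tps.example.com/things', { ids: ids.join(',') });
}
```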
My current implementation is a custom object, a dispatcher.
A Backbone Model, once initialized with just an id, calls the dispatcher with the id and a callback.
The dispatcher has a dictionary that holds all models known so far, keyed by id. Some of them already have full details; others are in a "pending" state, queued for an update via the API. Another dictionary holds id: [callback1, callback2, ...] pairs.
The dispatcher makes calls to the API, keeping track of the times the last 3 calls were completed (to respect the 3-requests-per-second limit).
It also keeps an API communication state: free or busy.
When a new id is inserted and the state is not "busy", it attempts to issue a new API call with up to 25 ids.
Once it gets a response from the API, it pushes the corresponding model data to the registered callbacks. Then it looks for unresolved ids and either issues the next API call or switches to the "free" state if no ids remain pending.
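Here is a simplified sketch of that dispatcher (not my exact code; the names are illustrative, and fetchDetails is the hypothetical batched call sketched above):

```javascript
// Simplified sketch of the dispatcher described above (illustrative names).
var dispatcher = {
  modelsById: {},     // id -> detail data received so far
  callbacksById: {},  // id -> [callback1, callback2, ...] waiting for details
  pendingIds: [],     // ids queued for the next API call
  busy: false,        // API communication state: free / busy
  callTimes: [],      // completion times of the last 3 API calls (ms)

  // Entry point used by Backbone models: ask for the details of one id.
  request: function (id, callback) {
    if (this.modelsById[id]) {                     // details already known
      callback(this.modelsById[id]);
      return;
    }
    (this.callbacksById[id] = this.callbacksById[id] || []).push(callback);
    if (this.pendingIds.indexOf(id) === -1) {
      this.pendingIds.push(id);
    }
    if (!this.busy) {
      this.flush();
    }
  },

  // Issue the next API call with up to 25 pending ids, respecting the
  // 3-requests-per-second limit via the recorded completion times.
  flush: function () {
    if (this.pendingIds.length === 0) {
      this.busy = false;                           // nothing pending: "free"
      return;
    }
    this.busy = true;
    var now = Date.now();
    if (this.callTimes.length === 3 && now - this.callTimes[0] < 1000) {
      // Three calls completed within the last second: wait a bit.
      setTimeout(this.flush.bind(this), 1000 - (now - this.callTimes[0]));
      return;
    }
    var batch = this.pendingIds.splice(0, 25);
    var self = this;
    fetchDetails(batch).done(function (details) {
      self.callTimes.push(Date.now());
      if (self.callTimes.length > 3) { self.callTimes.shift(); }
      details.forEach(function (d) {
        self.modelsById[d.id] = d;
        (self.callbacksById[d.id] || []).forEach(function (cb) { cb(d); });
        delete self.callbacksById[d.id];
      });
      self.flush();                                // next batch, or go "free"
    });
  }
};
```

(As described above, the throttling here is keyed to completion times rather than start times, so it is slightly more conservative than a strict 3 requests per second.)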
Is this solution optimal? Is it covered by some well-known pattern? Is there a ready-made solution or library for this?