In many enterprise applications, implementing a Create/Update/Delete operation requires performing the following actions:
- Updating one or more pieces of information in a database
- Communicating with one or more external systems
- Refreshing/invalidating some in-memory cache
My question is: what is the best way to implement the steps above so that the DB, the external systems, and the cache stay consistent (or eventually consistent) in the face of machine or network failures?
I understand that, if the DB, the external systems, and the cache were all capable of two-phase commit, the whole thing could be kept consistent with a single global (XA) transaction. But that's rarely an option.
One approach I can think of is as follows (a code sketch follows the list):
- In a single DB transaction, perform the updates and create a new "workflow" object with status=PENDING and containing enough information to communicate with the external system and refresh/invalidate the cache.
- Communicate with the external systems in an idempotent manner
- Refresh/invalidate the cache
- Set the status of the workflow object to COMPLETE (or just delete it)
- Have a periodic background task that searches for workflow objects with status=PENDING that have been sitting idle for too long. For each of those objects, repeat the previous three steps (external communication, cache refresh/invalidation, marking COMPLETE).
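For concreteness, here is a minimal JDBC sketch of that approach. Everything specific in it is an assumption made up for illustration: the `orders` and `workflow` tables, the `ExternalSystem` and `Cache` interfaces, and the 5-minute staleness threshold; error handling, retry limits, and concurrency control on the background task are omitted.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.UUID;

public class WorkflowedUpdate {

    // Placeholder collaborators; the real interfaces would be whatever
    // client APIs the external systems and cache actually expose.
    interface ExternalSystem { void notify(String requestId, String payload); }
    interface Cache { void invalidate(String key); }

    // Step 1: one local DB transaction applies the business update and
    // records a PENDING workflow row describing the remaining work.
    static String createOrder(Connection db, String orderPayload) throws SQLException {
        String workflowId = UUID.randomUUID().toString();
        boolean oldAutoCommit = db.getAutoCommit();
        db.setAutoCommit(false);
        try (PreparedStatement insOrder = db.prepareStatement(
                 "INSERT INTO orders (payload) VALUES (?)");
             PreparedStatement insWf = db.prepareStatement(
                 "INSERT INTO workflow (id, status, payload, created_at) "
                 + "VALUES (?, 'PENDING', ?, ?)")) {
            insOrder.setString(1, orderPayload);
            insOrder.executeUpdate();
            insWf.setString(1, workflowId);
            insWf.setString(2, orderPayload);
            insWf.setTimestamp(3, Timestamp.from(Instant.now()));
            insWf.executeUpdate();
            db.commit(); // business row and workflow row become visible atomically
        } catch (SQLException e) {
            db.rollback();
            throw e;
        } finally {
            db.setAutoCommit(oldAutoCommit);
        }
        return workflowId;
    }

    // Steps 2-4: perform the side effects, then mark the workflow COMPLETE.
    // Safe to run more than once: the external call is keyed by workflowId
    // (so the remote side can deduplicate), and cache invalidation is
    // naturally idempotent.
    static void runWorkflow(Connection db, String workflowId, String payload,
                            ExternalSystem ext, Cache cache) throws SQLException {
        ext.notify(workflowId, payload);
        cache.invalidate(payload);
        try (PreparedStatement upd = db.prepareStatement(
                 "UPDATE workflow SET status = 'COMPLETE' WHERE id = ?")) {
            upd.setString(1, workflowId);
            upd.executeUpdate();
        }
    }

    // Step 5: the periodic background task re-drives workflows that have
    // been PENDING longer than the staleness threshold (5 minutes here).
    static void retryStalled(Connection db, ExternalSystem ext, Cache cache)
            throws SQLException {
        try (PreparedStatement sel = db.prepareStatement(
                 "SELECT id, payload FROM workflow "
                 + "WHERE status = 'PENDING' AND created_at < ?")) {
            sel.setTimestamp(1, Timestamp.from(Instant.now().minusSeconds(300)));
            try (ResultSet rs = sel.executeQuery()) {
                while (rs.next()) {
                    runWorkflow(db, rs.getString("id"), rs.getString("payload"),
                                ext, cache);
                }
            }
        }
    }
}
```

For what it's worth, the PENDING-row-plus-retry idea is essentially the "transactional outbox" pattern. Because a crash can happen between any two steps, the retries give at-least-once rather than exactly-once semantics, which is exactly why the external calls must be idempotent; and the cache can be briefly stale between the DB commit and the invalidation, which is the window where consistency is only "eventual".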
Does the approach above seem reasonable? Are there better alternatives?
Thank you in advance for any comments/suggestions.