Monday, October 28, 2019

Design pattern: table architecture and polling algorithm for intermediate "pending" states

There are 2 parts to this problem:

  1. Architecture: I have a table that holds "applications". These applications are submitted to a third-party service that takes about a day or two to either "approve" or "deny" each one; the state is "pending" in the meantime. The third-party service has a logging endpoint where it continually publishes the statuses of applications as they are "approved" or "denied".

  2. Algorithm: The endpoint is queryable by an application request id (an int generated upon submitting an application) and a timestamp, and it returns a list of statuses. The returned list starts from the record closest to the id/timestamp you supply, and its size is set by an int you also pass to the endpoint: https://endpoint?id=&listSize=&dateTime=
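To make the endpoint's shape concrete, here is a rough Python sketch of how I picture querying it. The base URL and the parameter names (id, listSize, dateTime) come from the description above, and the assumption that it returns a JSON array of status objects is mine; treat it all as placeholders rather than the real API.

    # Minimal sketch of a call to the logging endpoint; names and response
    # shape are assumptions, not the real third-party API.
    from datetime import datetime, timezone

    import requests

    LOG_ENDPOINT = "https://endpoint"  # placeholder base URL from the description

    def fetch_statuses(request_id: int, list_size: int, since: datetime) -> list:
        """Return up to `list_size` status records published at or after `since`."""
        response = requests.get(
            LOG_ENDPOINT,
            params={
                "id": request_id,
                "listSize": list_size,
                "dateTime": since.isoformat(),
            },
            timeout=10,
        )
        response.raise_for_status()
        return response.json()  # assumed to be a JSON array of status objects

    # Example call: the 50 statuses closest to this (hypothetical) id/timestamp.
    statuses = fetch_statuses(request_id=12345, list_size=50,
                              since=datetime(2019, 10, 27, tzinfo=timezone.utc))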

My questions:

  1. Architecture: Should I put all of these applications (those with states "pending" and "approved") into one table with a status column, or should I break this up into two tables, one for the "approved" applications and another for the "pending" applications? (And maybe even a third table for the "denied" applications.) A schema sketch of the single-table option follows this list.

  2. Algorithm: What is the best way (design pattern) to continuously poll the logging endpoint in order to learn the next state and then perform the necessary update based on the final status of these applications? A rough polling sketch also follows below.
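For question 1, this is a minimal sketch of the single-table option I am considering: one applications table with a status column constrained to the three states, plus an index on status so that "find all pending applications" stays cheap. SQLite stands in here for whatever database is actually used, and every name is illustrative.

    # Single-table sketch: all applications live in one table and move through
    # states via the `status` column. Names and types are illustrative.
    import sqlite3

    conn = sqlite3.connect("applications.db")
    conn.executescript("""
    CREATE TABLE IF NOT EXISTS applications (
        id           INTEGER PRIMARY KEY,
        request_id   INTEGER NOT NULL UNIQUE,  -- id generated when submitting
        submitted_at TEXT    NOT NULL,         -- ISO-8601 timestamp
        status       TEXT    NOT NULL DEFAULT 'pending'
                     CHECK (status IN ('pending', 'approved', 'denied')),
        resolved_at  TEXT                      -- set once status leaves 'pending'
    );
    CREATE INDEX IF NOT EXISTS idx_applications_status ON applications (status);
    """)
    conn.commit()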
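For question 2, this is roughly the polling pass I have in mind, reusing the fetch_statuses helper sketched after the problem description: on each run, look up every "pending" application, ask the logging endpoint for statuses published since it was submitted, and write back the final state once one appears. The response fields ("id", "status") are assumptions.

    # Sketch of one polling pass over the pending applications. `fetch_statuses`
    # is the hypothetical helper from the earlier sketch; the record fields
    # "id" and "status" are assumed.
    import sqlite3
    from datetime import datetime, timezone

    def poll_pending(conn: sqlite3.Connection) -> None:
        pending = conn.execute(
            "SELECT request_id, submitted_at FROM applications WHERE status = 'pending'"
        ).fetchall()
        for request_id, submitted_at in pending:
            since = datetime.fromisoformat(submitted_at)  # stored as ISO-8601 text
            for record in fetch_statuses(request_id, list_size=50, since=since):
                if record["id"] == request_id and record["status"] in ("approved", "denied"):
                    conn.execute(
                        "UPDATE applications SET status = ?, resolved_at = ? "
                        "WHERE request_id = ?",
                        (record["status"],
                         datetime.now(timezone.utc).isoformat(),
                         request_id),
                    )
                    break
        conn.commit()

The open question is how and how often to trigger this pass, which is what the next part is about.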

Big picture:

Does this design pattern already exist and are there solutions for this?

Does AWS provide something built for this, and/or is software like Celery recommended for such routine polling and updating tasks?
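To make the Celery part concrete, this is roughly how I imagine scheduling the polling pass with Celery beat; the broker URL, task name, and five-minute interval are placeholders, not a recommendation.

    # Rough sketch of scheduling the poll with Celery beat. Broker URL, task
    # name, and interval are placeholders.
    from celery import Celery

    app = Celery("status_poller", broker="redis://localhost:6379/0")

    app.conf.beat_schedule = {
        "poll-application-statuses": {
            "task": "status_poller.poll_application_statuses",
            "schedule": 300.0,  # seconds between polling passes
        },
    }

    @app.task(name="status_poller.poll_application_statuses")
    def poll_application_statuses():
        # Open a database connection and run the poll_pending sketch from above.
        ...

As I understand it, running the worker with the embedded beat scheduler (celery -A status_poller worker -B) would then fire the task on that schedule, but I do not know whether this is better than whatever AWS offers for the same job.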
