mercredi 3 novembre 2021

How to structure my Go app for transactions via pgx

I have the following models

type UsersModel struct {
    db *pgx.Conn
}

func (u *UsersModel) SignupUser(ctx context.Context, payload SignupRequest) (SignupQueryResult, error) {
    _, err := u.db.Exec(ctx, "...")
    return SignupQueryResult{}, err
}

type SessionsModel struct {
    db *pgx.Conn
}

func (s *SessionsModel) CreateSession(ctx context.Context, payload CreateSessionRequest) error {
    _, err := s.db.Exec(ctx, "...")
    return err
}

and my service calls UsersModel.SignupUser as follows

type SignupService struct {
    userModel signupServiceUserModel
}

func (ss *SignupService) Signup(ctx context.Context, request SignupRequest) (SignupQueryResult, error) {
    return ss.userModel.SignupUser(ctx, request)
}

Now I need to tie SignupUser and CreateSession together in a transaction instead of running them as isolated operations. I'm not sure what the best way to structure this is, or how to pass the transaction around while keeping DB-specific details abstracted away from the services. Or should I just call the sessions-table insert query (which I'm currently putting in *SessionsModel.CreateSession) directly in *UsersModel.SignupUser?

For reference, transactions in pgx are started by calling *pgx.Conn.Begin(), which returns a pgx.Tx, on which you execute the same functions as you would on *pgx.Conn, followed by *pgx.Tx.Commit() or *pgx.Tx.Rollback().

Questions I have are:

  • Where to start transaction - model or service?
  • If in the service, how do I do that while still hiding the underlying DB from the service?
  • How do I pass transaction between models?

Springboot - scheduling app - design pattern for update records

I have a mongo database and two collections, A and B. I want to check if a record with A.id exists in collection B, and if it exists I want to do some stuff. I need to do this over and over again, so I want to use a simple Spring Boot app with @EnableScheduling and @Scheduled.

It is quite easy if there are only a few records in both collections, but I need to process a large number of records in one run (about 5000), and the task should be repeated in 30-second cycles.

I was wondering, is there any smart solution for this case? How should this be designed to support scaling? I can't just run db.A.find(), because that would give more or less the same results on every instance.

What about multithreading? My only idea is to limit the mongo query, db.A.find().limit(), to, say, 100 records and create a ThreadPoolExecutor with its blocking queue also set to 100, but I don't think this is a good solution.

Potential race conditions with ConcurrentBag and multithreaded application

I've been wrestling for the past few months with how to improve a process where I'm using a DispatcherTimer to periodically check resources to see if they need to be updated/processed. After updating the resource ("Product"), it moves the Product to the next step in the process, and so on. The resource may or may not be available immediately.

The reason I have been struggling is two-fold. One reason is that I want to implement this process asynchronously, since it is purely synchronous at the moment. The second reason is that I have identified the area where my implementation is stuck, and it seems like a fairly common design pattern, but I have no idea how to describe it succinctly, so I can't figure out how to get a useful answer from Google.

A rather important note is that I am accessing these Products via direct USB connection, so I am using LibUsbDotNet to interface with the devices. I have made the USB connections asynchronous so I can connect to multiple Products at the same time and process an arbitrary number at once.


public class Product
{
 public bool IsSoftwareUpdated = false;
 public bool IsProductInformationCorrect = false;
 public bool IsEOLProcessingCompleted = false;

 public Product() {}
 ~Product() {}
}

public class ProcessProduct
{
 List<Product> bagOfProducts                   = new List<Product>(new Product[10]);

 ConcurrentBag<Product> UnprocessedUnits       = new ConcurrentBag<Product>();
 ConcurrentBag<Product> CurrentlyUpdating      = new ConcurrentBag<Product>();
 ConcurrentBag<Product> CurrentlyVerifyingInfo = new ConcurrentBag<Product>();
 ConcurrentBag<Product> FinishedProcessing     = new ConcurrentBag<Product>();

 DispatcherTimer _timer = new DispatcherTimer();

 public ProcessProduct()
 {
     _timer.Tick += Timer_Tick;                            //Every 1 second, call Timer_Tick
     _timer.Interval = new TimeSpan(0,0,1);                //1 Second timer
     
     bagOfProducts.ForEach(o => UnprocessedUnits.Add(o));  //Fill the UnprocessedUnits with all products
 
     StartProcessing();
 }
 private void StartProcessing()
 {
     _timer.Start();
 }

 private void Timer_Tick(object sender, EventArgs e)
 {
     ProductOrganizationHandler();

     foreach(Product prod in CurrentlyUpdating.ToList())
     {
         UpdateProcessHandler(prod);  //Async function that uses await
     }
     foreach(Product prod in CurrentlyVerifyingInfo.ToList())
     {
         VerifyingInfoHandler(prod);  //Async function that uses Await
     }
     if(FinishedProcessing.Count == bagOfProducts.Count)
     {
         _timer.Stop();  //If all items have finished processing, then stop the process
     }
 }
 
 private void ProductOrganizationHandler()
 {
      //Takes (read: removes) a Product from each ConcurrentBag one by one and moves it to the bag it needs to go to,
      //depending on which process step is finished
      //(or puts it back in the same bag if that step was not finished).
      //E.g., all items are moved from UnprocessedUnits to CurrentlyUpdating or CurrentlyVerifying etc.
      //If a product is finished updating, it is moved from CurrentlyUpdating to CurrentlyVerifying or FinishedProcessing.
 }
 private async void UpdateProcessHandler(Product prod)
 {
     await Task.Delay(1000).ConfigureAwait(false);
     //Does some actual work validating USB communication and then running through the USB update
 }
 private async void VerifyingInfoHandler(Product prod)
 {
     await Task.Delay(1000).ConfigureAwait(false);
     //Does actual work here and communicates with the product via USB
 }
}

A full, compile-ready code example is available via my code on Pastebin.

So, my question really is this: Are there any meaningful race conditions in this code? Specifically, with the ProductOrganizationHandler() code and the looping through the ConcurrentBags in Timer_Tick() (since a new call to Timer_Tick() happens every second). I'm sure this code works the majority of the time, but I am afraid of a hard-to-track bug later on that happens because of a rare race condition when, say, ProductOrganizationHandler() takes > 1 sec to run for some dumb reason.

As a secondary note: Is this even the best design pattern for this type of process? C# is my first OOP language and all self-taught on the job (nearly all of my job is Embedded C) so I don't have any formal experience with OOP design patterns.

My main goal is to asynchronously Update/Verify/Communicate with each device as it becomes available via USB. Once all products in the list are finished (or a timeout occurs), the process finishes. This project is in .NET 5.

Avoid Service locator in strategy design pattern

Take a look on that pseudocode

class A,B,C implements StrategyInterface
{
    private dep;

    constructor(Dep dep) {
        this->dep = dep;    
    }
}

class StrategyResolver
{
    private locator;

    constructor(ServiceLocator locator) {
        this->locator = locator;
    }
    
    public function resolve(data): StrategyInterface
    {
        if ( xxx ) {
            return locator->get(A);
        } else if ( yyy ) {
            return locator->get(B);
        }
        return locator->get(C);
    }
}

As the service locator is considered an anti-pattern, how do I avoid it in this case? A, B, and C can have various dependencies; that's why I would like to instantiate them using all the benefits of dependency injection. I could also inject A, B, and C as dependencies of StrategyResolver, but what if I have ten strategies there? The StrategyResolver dependency list would then be too long.

mardi 2 novembre 2021

Pine Script routine to find different Pattern

I'm looking for a way to program a routine in Pine Script that can, for example, recognize the pattern in the picture. One problem for me is that it should work across different symbols, regardless of whether they cost 0.345 USD or 312.68 USD, i.e., at different price scales.

It is specifically about a routine that recognizes these 5 bars, for example, regardless of the resolution/value.

Thanks for your help.


perl regex - pattern matching

Can anyone explain what is being done below?

 $name=~m,common/([^/]+)/run.*/([^/]+)/([^/]+)$,;

Is there a book with Azure Architecture Patterns?

I'd like to buy a book with all the information available at the link below. Is there any such book on the market?

https://docs.microsoft.com/en-us/azure/architecture/patterns