Currently our system is deployed on-premises. Our target is to deploy to AWS using EKS or ECS, and we want to use cloud resources optimally, so we are considering a design change. The code is written in C# and runs on .NET Core.
My problem statement is as follows. Currently, a user selects multiple accounts and runs different processes on them. This processing happens on background threads (the ThreadPool). For example, a user can select 100 accounts (it can also be 2000-3000) and 20 processes. We divide this work across multiple threads, with each thread handling one process and one or more accounts. All of this processing is triggered from the UI: the user executes an API call, and the load balancer routes that request to a single server. So the work is currently not split across multiple servers, even when other servers are idle at the same time.

Now I want to redesign this so that the work is split between multiple pods and containers instead of a single node doing all the work. Is this something that can be achieved by Kubernetes configuration alone, or is there a design pattern I can apply to split the work between multiple nodes?
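For concreteness, the current single-node split described above can be sketched roughly as follows. This is only an illustration of the pattern: the `RunProcess` body, the chunk size, and the use of `Enumerable.Chunk` (which requires .NET 6+) are assumptions, not the actual code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public static class WorkSplitter
{
    // Split the (process x accounts) work into ThreadPool work items:
    // one Task per process per chunk of accounts, all on the same node.
    public static async Task<int> RunAllAsync(
        int[] accountIds, string[] processNames, int chunkSize = 50)
    {
        var tasks = new List<Task<int>>();
        foreach (var process in processNames)
        {
            foreach (var chunk in accountIds.Chunk(chunkSize))
            {
                // Task.Run queues the work item to the ThreadPool.
                tasks.Add(Task.Run(() => RunProcess(process, chunk)));
            }
        }
        var results = await Task.WhenAll(tasks);
        return results.Sum(); // total accounts processed
    }

    // Placeholder for the real per-process, per-account work.
    private static int RunProcess(string process, int[] accounts)
    {
        return accounts.Length;
    }
}
```

Every `Task` here lands on the ThreadPool of the one server that received the API call, which is exactly the limitation in question.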
One approach could be to expose the method that runs one process against one or more accounts as a separate HTTP endpoint. The initial API call would then fan out requests to that endpoint, and the load balancer would route them to multiple nodes, splitting the work. But this might not utilize idle nodes efficiently.
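That fan-out idea could look roughly like the sketch below. The `ProcessRequest` DTO, the endpoint URL, and the chunk size are hypothetical; in a Kubernetes setup, `serviceUrl` would point at a Service whose built-in load balancing spreads the POSTs across pod replicas, with no guarantee that the least-busy pod gets the next request.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Hypothetical payload for one unit of work: one process, one account chunk.
public record ProcessRequest(string Process, int[] AccountIds);

public static class FanOut
{
    // Split (processes x accounts) into one request per process per chunk.
    public static List<ProcessRequest> BuildRequests(
        string[] processes, int[] accountIds, int chunkSize = 100)
    {
        return (from process in processes
                from chunk in accountIds.Chunk(chunkSize)
                select new ProcessRequest(process, chunk)).ToList();
    }

    // The initial API call POSTs each request to the worker endpoint;
    // the load balancer decides which node handles each one.
    public static Task DispatchAsync(
        HttpClient http, string serviceUrl,
        string[] processes, int[] accountIds)
    {
        var calls = BuildRequests(processes, accountIds)
            .Select(r => http.PostAsJsonAsync(serviceUrl, r));
        return Task.WhenAll(calls);
    }
}
```

This keeps the split logic in the caller; the weakness noted above is that round-robin routing does not know which pods are idle, which is why a shared work queue consumed by workers is often considered instead.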