Problem

Lately I've been hearing more and more about serverless architectures, which are based on products such as AWS Lambda, Azure Functions, Google Cloud Functions, etc.

I understand the advantages of using such architectures, but what eludes me is how small business logic components can be efficiently shared between those functions. I'll give an example:

The backend of my app is composed of a group of functions deployed in one of the services I mentioned above. Each of these functions uses a logging framework. I wouldn't want to put the logging logic in a function of its own because I don't want the overhead of making a remote function call every time I want to log something. The issue arises when I want to make a change in the logging framework. Now, I need to re-deploy ALL the functions that use this framework in order to apply the change.
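To make this concrete, here is a rough sketch of what I mean (module and function names are made up):

```python
# logger.py -- a hypothetical shared logging module, bundled into every function
import json
import sys
import time

def log(level, message, **fields):
    # Changing this format (or anything else in this module) forces a
    # redeploy of every function that bundles it.
    sys.stdout.write(json.dumps(
        {"ts": time.time(), "level": level, "msg": message, **fields}) + "\n")
```

```python
# orders_handler.py -- one of many functions, each carrying its own copy of logger.py
from logger import log

def handler(event, context):
    log("INFO", "order received", order_id=event.get("order_id"))
    return {"status": "accepted"}
```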

In contrast, in "regular" architectures, I would only need to deploy the application, which is something I probably do anyway once in a while.

So, is this an inherent problem with serverless architectures, or am I missing something?

Solution

Architecturally, it makes sense to deploy such a shared component separately so that its concerns are properly isolated from the others. Instead of calling it as an internal component, you would issue a (potentially fire-and-forget) call to another "function" for system-level logging. This also makes sense for scalability: as your system scales out based on demand, your logger component could become a bottleneck unless it scales out as well.
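As a sketch of that idea, assuming an AWS deployment and a hypothetical logger function named `system-logger`, a fire-and-forget call could look like this:

```python
# A minimal sketch of logging as its own function; the function name is hypothetical.
import json
import boto3

lambda_client = boto3.client("lambda")

def log_remote(level, message):
    # InvocationType="Event" makes the call asynchronous (fire-and-forget):
    # the caller does not wait for the logger function to finish.
    lambda_client.invoke(
        FunctionName="system-logger",
        InvocationType="Event",
        Payload=json.dumps({"level": level, "message": message}),
    )
```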

If it has to be a shared component (say, some logic or a common library), one way to mitigate deployment issues is continuous integration / build / deployment, which automates testing, packaging, and (test) deployment for you. You would still have to redeploy all of the functions that use the updated shared component, but with good integration and deployment processes in place you can mitigate the risk by discovering problems prior to full deployment; see blue/green deployment, for instance. Developing deployment processes is extra work, but since continuous-* processes have merit for other reasons, it is an easy tradeoff for many teams.
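As one example of the blue/green idea, AWS Lambda supports weighted aliases, which allow a canary-style rollout of an updated function; a minimal sketch, with function and alias names left hypothetical:

```python
# Shift a fraction of traffic to a newly published function version.
import boto3

client = boto3.client("lambda")

def shift_traffic(function_name, alias, new_version, weight=0.1):
    # Route `weight` (10% here) of invocations to the new version while the
    # alias's primary version keeps serving the rest; promote the new version
    # fully once it looks healthy, or roll back by clearing the weights.
    client.update_alias(
        FunctionName=function_name,
        Name=alias,
        RoutingConfig={"AdditionalVersionWeights": {new_version: weight}},
    )
```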

Other tips

AWS provides a built-in logging service (CloudWatch Logs) that you are invited to use, and the other providers likely offer equivalents. If you don't want to use it, you would probably have to push log objects to cloud storage instead.
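For example, in AWS Lambda anything written through Python's standard logging module (or to stdout/stderr) is captured by CloudWatch Logs automatically, with no shared logging component of your own to redeploy:

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # This line ends up in CloudWatch Logs without any extra code or remote call.
    logger.info("processing event: %s", event)
    return {"status": "ok"}
```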

In a classic Lambda-style architecture, your function is launched on call, resources are allocated for the duration of the execution, and then freed. You typically don't have access to low-overhead persistence, because once your function's execution is over, the resources are gone.
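A small illustration of the consequence, assuming a Python runtime: module-level state may happen to survive while the execution environment stays warm, but it is not durable and can vanish between any two invocations:

```python
# Why in-memory buffering is unreliable in a function-as-a-service runtime.
log_buffer = []  # lives in the execution environment, not in durable storage

def handler(event, context):
    log_buffer.append(event)  # appears to accumulate across warm invocations...
    # ...but the environment can be recycled at any time, taking log_buffer
    # with it. Anything worth keeping must be flushed to an external store
    # (CloudWatch, object storage, a logging function) before returning.
    return {"buffered": len(log_buffer)}
```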

As for the deployment model, as far as I understand, AWS Lambda does not offer a way to share a library across several deployment packages; you'd have to ship every function as a deployment package with your library bundled in, which is not a very scalable arrangement.
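To illustrate that packaging model, here is a rough build-script sketch in which every function's zip carries its own copy of the shared library (directory layout and names are hypothetical):

```python
import shutil
from pathlib import Path

FUNCTIONS = ["orders", "users", "billing"]  # hypothetical function directories
SHARED_LIB = Path("libs/logger")            # the shared component

def build_packages(out_dir="dist"):
    for name in FUNCTIONS:
        staging = Path(out_dir) / name
        if staging.exists():
            shutil.rmtree(staging)
        shutil.copytree(Path("functions") / name, staging)
        # Every package gets its own copy of the shared library, so a change
        # to the library means rebuilding and redeploying all of them.
        shutil.copytree(SHARED_LIB, staging / SHARED_LIB.name)
        shutil.make_archive(str(staging), "zip", root_dir=staging)

if __name__ == "__main__":
    build_packages()
```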

License: CC BY-SA with attribution. Not affiliated with softwareengineering.stackexchange.