[{"content":"","date":"9 October 2024","externalUrl":null,"permalink":"/en/articles/","section":"Articles","summary":"","title":"Articles","type":"articles"},{"content":"Articles, proofs of concept, and practical code examples covering software architecture, cloud infrastructure, and design patterns.\n","date":"9 October 2024","externalUrl":null,"permalink":"/en/","section":"Library","summary":"","title":"Library","type":"page"},{"content":" Introduction # In a scenario where the same software is offered to different customers (tenants), we need architectural patterns that isolate them to prevent problems such as:\nData leakage Compliance and regulatory issues Performance impact on one customer caused by another (Noisy Neighbors) In a multi-tenant architecture, depending on business maturity, contractual requirements, and budget, different strategies can be adopted, including:\nPool: Share-Everything — Tenants share resources but are logically isolated, for example, in the database schema. Silo: Share-Nothing — Each tenant has dedicated resources, providing resource-level isolation and avoiding noisy neighbor issues. Bridge: Hybrid approach — Uses shared services while critical workloads are isolated in resource-level silos. In this article, I present a proof of concept of an architecture that follows the bridge strategy, with a single entry point where requests are routed to the application plane of each tenant.\nThe advantage of this solution is that we can share app planes with less critical tenants and isolate more important ones in dedicated silos. For this, applications must be treated as deployable artifacts across different planes without duplicating code or creating tenant-specific affinities.\nContainers # Each tenant has its own namespace, ensuring logical isolation within a single Kubernetes cluster. 
In this POC, I do not go into node-level distribution and isolation; in production, however, tenants can be placed on separate EKS node groups according to criticality.

For the application, there is a single source of truth for the source code, managed as an artifact by Helm. This makes it possible to deploy the app into multiple namespaces while changing only values such as memory/CPU limits and the Horizontal Pod Autoscaler configuration.

## Routing and Authorization

The routing layer is built on Envoy Proxy as a reverse proxy and Open Policy Agent (OPA), which authorizes access to the requested application plane. Envoy and OPA communicate over gRPC locally inside the pod, as a sidecar, keeping latency low.

Each request is routed based on an HTTP header named `x-tenant-id`. For security, OPA evaluates the provided JWT to verify that the header matches the token claims and to confirm that the tenant exists.

Note that this Rego implementation does not validate the JWT signature or expiration, since it is a proof of concept; in production, this validation is essential.

## Run locally

To test on your machine or inspect the POC's source code, visit https://github.com/margato/multi-tenant-k8s

## References

- Re-defining multi-tenancy
- Silo, Pool, Bridge Models
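As a closing illustration, the authorization check described above — comparing the `x-tenant-id` header against the JWT claims and a tenant registry — can be sketched in Python. This is not the POC's Rego policy; the claim name (`tenant_id`) and the tenant registry are assumptions, and, like the POC, it deliberately skips signature and expiration validation:

```python
import base64
import json

# Hypothetical tenant registry; in the POC this knowledge lives in OPA data.
TENANTS = {"tenant-a", "tenant-b"}

def decode_jwt_payload(token: str) -> dict:
    """Decode a JWT payload WITHOUT verifying the signature (POC-only shortcut)."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def authorize(headers: dict) -> bool:
    """Allow the request only if the routing header matches the token claim
    and the tenant actually exists."""
    tenant = headers.get("x-tenant-id")
    token = headers.get("authorization", "").removeprefix("Bearer ")
    if not tenant or not token:
        return False
    claims = decode_jwt_payload(token)
    return claims.get("tenant_id") == tenant and tenant in TENANTS
```

In production, this check would additionally verify the token's signature and expiration before trusting any claim.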