Sidecars For Endpoint Security

Shun
Sep 27, 2020 · 5 min read

Context and Problem

Applications and services often require related functionality, such as monitoring, logging, configuration, and networking services. These peripheral tasks can be implemented as separate components or services.

If they are tightly integrated into the application, they can run in the same process as the application, making efficient use of shared resources. However, this also means they are not well isolated, and an outage in one of these components can affect other components or the entire application. They also usually need to be implemented in the same language as the parent application. As a result, the component and the application become closely interdependent.

Microservices architecture, with a large number of services exposing their assets and capabilities through APIs and an increased attack surface, demands a zero trust security architecture — ‘never trust, always verify’.

Many of the technical solutions we build today run in increasingly complex polycloud or hybrid-cloud environments with multiple distributed components and services. Under such circumstances, we apply two security principles early in the implementation: zero trust network, i.e., never trust the network and always verify; and the principle of least privilege, i.e., grant only the minimum permissions necessary for a particular job.

Solution

Implement cross-cutting concerns in a sidecar process or container that runs alongside the service instance.

Sidecars for endpoint security is a common technique we use to implement these principles and enforce security controls at every component's endpoint, e.g., the APIs of services, data stores, or the Kubernetes control interface. We do this using an out-of-process sidecar: a process or container that is deployed and scheduled with each service, sharing the same execution context, host and identity.

Sidecars for endpoint security minimize the trusted footprint to a local endpoint rather than the network perimeter.

A sidecar service is not necessarily part of the application, but is connected to it, and it goes wherever the parent application goes. Sidecars are supporting processes or services that are deployed with the primary application. Just as a motorcycle sidecar is attached to one particular motorcycle, and each motorcycle can have its own sidecar, a sidecar service shares the fate of its parent application: for each instance of the application, an instance of the sidecar is deployed and hosted alongside it.

Sidecar component with an Application
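
In a Kubernetes deployment, "hosted alongside it" typically means the sidecar runs as a second container in the same pod, sharing the pod's network namespace and identity. Below is a minimal sketch of such a pod expressed with the Kubernetes client-go API types; the names, images and port are placeholders, not a recommendation from any specific project.

```go
// Sketch: an application pod with a security sidecar, built from the
// Kubernetes client-go API types. Names and images are hypothetical.
package sidecardemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodWithAuthSidecar returns a pod in which the application container listens
// only on localhost, so every external request must pass through the
// auth-proxy sidecar that shares the pod's network namespace.
func PodWithAuthSidecar() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "orders-api"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{
					// Primary application: bound to 127.0.0.1:8080 inside the pod.
					Name:  "app",
					Image: "example.com/orders-api:1.0",
				},
				{
					// Security sidecar: terminates inbound traffic on 8443,
					// authenticates it, then forwards to the app on localhost.
					Name:  "auth-proxy",
					Image: "example.com/auth-proxy:1.0",
					Ports: []corev1.ContainerPort{{ContainerPort: 8443}},
				},
			},
		},
	}
}
```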

The idea here is to run a peripheral task, say an authentication module, in a sidecar container and let your application focus purely on the API logic. All API requests are routed through the sidecar proxy, which uses the authentication module to perform access control and routes requests to the API application only after successful authorization.

Sidecar Design Pattern
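
To make that flow concrete, here is a minimal sketch of such a sidecar proxy in Go. It assumes the application listens only on localhost, and the authorize function is a placeholder standing in for the real authentication module.

```go
// Minimal sketch of a sidecar authentication proxy. The token check is a
// placeholder; a real sidecar would validate a JWT or call a policy engine.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// authorize is a hypothetical stand-in for the authentication module.
func authorize(r *http.Request) bool {
	token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
	return token != "" // placeholder: accept any non-empty bearer token
}

func main() {
	// The parent application listens only on localhost on the same host/pod.
	proxy := httputil.NewSingleHostReverseProxy(&url.URL{
		Scheme: "http",
		Host:   "127.0.0.1:8080",
	})

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !authorize(r) {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		// Authorization succeeded: forward the request to the API application.
		proxy.ServeHTTP(w, r)
	})

	// The sidecar is the only externally reachable endpoint.
	log.Fatal(http.ListenAndServe(":8443", handler))
}
```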

When to Use this Pattern

Use this pattern when:

  • Your primary application uses a heterogeneous set of languages and frameworks. A component located in a sidecar service can be consumed by applications written in different languages using different frameworks.
  • A component is owned by a remote team or a different organization.
  • A component or feature must be co-located on the same host as the application.
  • You need fine-grained control over resource limits for a particular resource or component. For example, you may want to restrict the amount of memory a specific component uses. You can deploy the component as a sidecar and manage memory usage independently of the main application.

Examples

  • Infrastructure API — The infrastructure development team creates a service that’s deployed alongside each application, instead of a language-specific client library to access the infrastructure. The service is loaded as a sidecar and provides a common layer for infrastructure services, including logging, environment data, configuration store, discovery, health checks, and watchdog services. The sidecar also monitors the parent application’s host environment and process (or container) and logs the information to a centralized service.
  • Polyglot support — Do you have non-JVM languages with which you want to take advantage of Eureka, Ribbon, and Config Server? The Spring Cloud Netflix Sidecar was inspired by Netflix Prana. It includes an HTTP API to get all of the instances (by host and port) for a given service. You can also proxy service calls through an embedded Zuul proxy that gets its route entries from Eureka. The Spring Cloud Config Server can be accessed directly through host lookup or through the Zuul Proxy. The non-JVM application should implement a health check so the Sidecar can report to Eureka whether the app is up or down.
  • Sending logs to logging servers: There are numerous logging service providers, both paid and open source. If we build the log-shipping logic against a single provider, we lock ourselves into that vendor. By offloading the log-transmission logic to a sidecar container, we can switch providers easily and even ship to multiple logging platforms at once, as the sketch after this list shows.
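
As a rough illustration of that last point, a log-shipping sidecar can be as simple as a process that tails the application's log file (shared via a volume) and forwards each line to whatever backends are configured. The file path and endpoint below are hypothetical.

```go
// Sketch of a log-forwarding sidecar: tail a shared log file and POST each
// complete line to one or more logging backends. Paths and URLs are placeholders.
package main

import (
	"bufio"
	"log"
	"net/http"
	"os"
	"strings"
	"time"
)

// ship sends one log line to every configured backend. Switching vendors
// means changing the endpoint list, not the application.
func ship(line string, endpoints []string) {
	for _, ep := range endpoints {
		if resp, err := http.Post(ep, "text/plain", strings.NewReader(line)); err == nil {
			resp.Body.Close()
		}
	}
}

func main() {
	endpoints := []string{"http://logs.example.com/ingest"} // hypothetical backend
	f, err := os.Open("/var/log/app/app.log")               // shared volume with the app
	if err != nil {
		log.Fatal(err)
	}
	reader := bufio.NewReader(f)

	var pending string
	for {
		chunk, err := reader.ReadString('\n')
		pending += chunk
		if err != nil {
			// No complete line yet; wait for the application to write more.
			time.Sleep(time.Second)
			continue
		}
		ship(pending, endpoints)
		pending = ""
	}
}
```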

Guidance On Using This Pattern

A good indicator is having a cross-cutting concern that is not part of your business logic, such as observability, security, performance, reliability, auditing, or infrastructure/operations. Another point of consideration is that you might want to avoid binary coupling with the applications.

  • In a container and Kubernetes world, the sidecar design pattern is a natural fit.
  • Open Policy Agent (OPA) and Envoy are tools that implement this technique; a sketch of querying OPA from a sidecar follows this list.
  • We recommend looking into Istio, Linkerd or Envoy before implementing your own sidecars for endpoint security.
  • Don't use this if you need extreme performance and super-low latency; a library would be faster than a sidecar because it runs embedded in the application.
  • The sidecar is a very interesting architecture pattern for cloud-native applications. However, we need to resist making everything a sidecar; otherwise, we lose the benefits.
  • I’m still working on implementing this in a serverless environment.
  • Responsibility for the sidecar's security policy configuration should stay with the team that owns the endpoint, not with a separate centralized team.
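
For the OPA route mentioned above, a sidecar typically runs OPA as a local process and asks it for an allow/deny decision on each request through OPA's Data API. The sketch below assumes a hypothetical Rego policy at httpapi/authz with an allow rule; the input fields are illustrative only.

```go
// Sketch: asking a locally running OPA for an authorization decision via its
// Data API (POST /v1/data/<policy path>). The policy path "httpapi/authz" and
// the input fields are hypothetical and must match your own Rego policy.
package authz

import (
	"bytes"
	"encoding/json"
	"net/http"
)

type opaInput struct {
	Input map[string]interface{} `json:"input"`
}

type opaResult struct {
	Result bool `json:"result"`
}

// allowed returns true if the local OPA agent evaluates the policy to true.
func allowed(r *http.Request) (bool, error) {
	body, err := json.Marshal(opaInput{Input: map[string]interface{}{
		"method": r.Method,
		"path":   r.URL.Path,
		"token":  r.Header.Get("Authorization"),
	}})
	if err != nil {
		return false, err
	}

	resp, err := http.Post("http://127.0.0.1:8181/v1/data/httpapi/authz/allow",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var out opaResult
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return false, err
	}
	return out.Result, nil
}
```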


Shun

Engineering Leader, Cloud Architect, Data Science