This blog is part of the APIClarity Overview Series.
In this blog, we’ll explore the architecture and components of APIClarity. For a higher-level introduction to APIClarity, see the APIClarity Introduction blog.
APIClarity is an open-source project that runs as a pod deployment within a Kubernetes cluster and analyzes API traffic for security risks. Let’s take a look at the different components.
The main APIClarity service consists of a server pod and a PostgreSQL database running in the “apiclarity” namespace, shown in dark green in Figure 1 below.
The server pod is responsible for collecting all of the incoming API traffic, analyzing it and reporting any API security risks via the UI. The API specifications, traffic and analysis are stored in the database.
In order to analyze API traffic, APIClarity has many plugins that can be used with different types of traffic sources to tap the API traffic and send it to the APIClarity server. The different plugins are shown in light green in Figure 1; for illustration, each has an arrow feeding into the larger “API Traffic” arrow that is sent to the APIClarity server.
To monitor API traffic sourced externally, APIClarity has plugins for API gateways such as Kong and Tyk, as well as integrations with external traffic sources like the Apigee X Gateway, the F5 BIG-IP LTM load balancer, and an OpenTelemetry Collector, all of which are covered later in this post.
For internal API traffic between application microservices, APIClarity integrates with service meshes by installing WebAssembly (WASM) filters at the Envoy level to tap API traffic. The Istio and Kuma service meshes are supported. The light green “WASM” boxes in Figure 1 represent the Envoy WASM filters for APIClarity.
In addition, APIClarity has an API tapping capability that will passively tap API traffic for a given Kubernetes namespace. This is shown in light green and labeled “APIClarity Tapper” in Figure 1.
A UI is available to see the API traffic that was observed and check for any abnormalities or security risks that were reported by APIClarity.
Let's take a look at the functionality of the APIClarity Server.
This module allows the upload of existing OpenAPI specifications (specs) or learns and reconstructs specs based on observed API traffic if none are provided. The reconstructed specs are available in the UI, where they can be reviewed and approved by the user. OpenAPI v2.0 and v3.0 are supported.
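For a concrete sense of what such a spec looks like, here is a minimal OpenAPI v3.0 document of the kind that could be uploaded or reconstructed; the service name and path are hypothetical:

openapi: 3.0.0
info:
  title: catalog-service        # hypothetical microservice
  version: "1.0"
paths:
  /items/{itemId}:
    get:
      parameters:
        - name: itemId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: A single catalog item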
The “spec diff” detector looks for differences between the approved OpenAPI specs and the observed API traffic. It can detect shadow and zombie APIs. Shadow APIs are ones that are observed, but are not in the approved spec, meaning they are unknown API calls. Zombie APIs are deprecated API versions that are still being used. These will be explored in future blogs.
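As a quick, hypothetical illustration of the two cases (the paths below are invented for this example):

# Illustrative sketch only
approvedSpecPaths:
  - GET /v2/items
observedCalls:
  - GET /internal/debug    # not in the approved spec -> shadow API
  - GET /v1/items          # deprecated version still in use -> zombie API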
The BFLA (Broken Function Level Authorization) detector builds an authorization model for application microservice interactions by first observing the API interactions and then detecting any discrepancies from the model. A BFLA violation means that functionality within the application is being used without authorization. The user can mark any learned interaction as “illegitimate”, in which case that interaction will be flagged as a BFLA violation going forward. Much more information is available in the README file.
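A hypothetical learned model illustrates the idea (the service and path names below are invented):

# Illustrative sketch of an authorization model learned from observed traffic
- api: POST /orders
  authorizedCallers: [web-frontend]
- api: GET /orders/{id}
  authorizedCallers: [web-frontend, billing-service]
# A later call from inventory-service to POST /orders is absent from the learned
# model, so it would be reported as a BFLA violation.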
The trace analyzer detects different kinds of API security weaknesses in the observed API traffic, either at the API endpoint level or at the event level (i.e., an actual API call). Each detected vulnerability is scored as low, medium, or high. Some of the checks the trace analyzer performs can be configured, such as dictionary matches and regex rules for matching sensitive information, and findings can be ignored if desired.
There are many types of security vulnerabilities the trace analyzer can detect and flag.
If basic authentication (username/password) is used for an application, the trace analyzer will check for short, weak (well-known) or reused passwords.
If JSON web tokens (JWT) are used for an application, the trace analyzer performs a number of additional checks on the tokens themselves.
Sensitive information, including Personally Identifiable Information (PII), can be detected by configuring a set of regex patterns to compare against. Examples include the keyword “password”, phone numbers, and social security numbers.
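As a sketch of what such patterns might look like (the key name and format below are illustrative assumptions, not APIClarity’s documented configuration schema):

# Illustrative only -- consult the trace analyzer documentation for the real format
sensitiveDataRegexes:
  - "(?i)password"                          # the keyword "password", case-insensitive
  - "\\b\\d{3}-\\d{2}-\\d{4}\\b"            # US social security number pattern
  - "\\b\\d{3}[-. ]?\\d{3}[-. ]?\\d{4}\\b"  # simple phone number pattern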
Easily guessed object IDs, for example IDs in ascending or descending order, can be detected and flagged. These could leave the application at risk of a BOLA attack (see next section).
A BOLA (Broken Object Level Authorization) attack is one where objects in an application are accessed without proper authorization. One way to detect BOLAs is by looking for “non-learnt identifiers” in API requests, meaning a request is made for an object ID that hasn’t been provided by the application in a previous response. A guessable object ID can contribute to this problem.
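A hypothetical sequence makes the idea concrete:

# Illustrative traffic showing a "non-learnt identifier":
#   1. A response to GET /orders returns order IDs 1001 and 1002 (learnt identifiers).
#   2. A later request for GET /orders/1003 asks for an ID the application never
#      returned in a previous response, so it is flagged as a potential BOLA.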
APIClarity has a data fuzzer component that detects data injection risks. Using the approved OpenAPI specs for an application, the fuzzer attempts to inject unauthorized or invalid data into application API endpoints to flag weaknesses in input validation and processing.
The APIClarity UI presents the observed API traffic, the reconstructed specs, and any abnormalities or security risks reported by the detectors described above.
APIClarity uses its own PostgreSQL database to store OpenAPI specs, API traffic flows and traffic analysis. If installed via Helm, PostgreSQL will require a persistent volume for storage.
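As a sketch, persistence is typically configured through the PostgreSQL values in the chart; the key names below are assumptions, so check the chart’s own values.yaml for the exact layout:

# Assumed key names for illustration only
apiclarity-postgresql:
  persistence:
    enabled: true
    size: 8Gi
    storageClass: ""   # empty string uses the cluster's default StorageClass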
The tables in the APIClarity database are the following:
- A table of trace analyzer results at the API endpoint level.
- A table that populates the “API Events” UI pane; for each observed API call it records a timestamp, REST API method, URL, status, source/destination IP and port, host, an external/internal flag, and a list of detected alerts.
- A table that populates the “API Inventory” UI pane and lists the API endpoints for an application.
- A table of event-level results from the BFLA detector, the trace analyzer, and the fuzzer.
- A table of user reviews for reconstructed API specs.
- A table of sample API calls from trace sources (see the next section).
- A table of trace sources for APIClarity, which are API traffic sources external to an application, including an Apigee X Gateway, an F5 BIG-IP LTM load balancer, an OpenTelemetry Collector, or the API tapper.
WASM traffic filters are installed within the Envoy sidecars of the API microservice application that APIClarity will profile. These WASM filters forward incoming, internal API traffic (i.e., traffic between application microservices) to the APIClarity engine. APIClarity has WASM filter support for the Istio and Kuma service meshes.
Details on how the WASM filters are configured to export HTTP traffic to APIClarity can be found in the project documentation. For Kuma, a proxy template is used to install the WASM filter.
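To give a sense of the shape of such a configuration, the schematic sketch below shows how an Istio EnvoyFilter injects a WASM HTTP filter into inbound sidecar traffic. The names, namespace, and module configuration are placeholders; the actual resource is generated by APIClarity’s deployment, so treat this only as an illustration:

# Schematic sketch only -- not APIClarity's actual filter definition
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: apiclarity-wasm-filter      # hypothetical name
  namespace: my-app-namespace       # namespace of the tapped application
spec:
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.filters.network.http_connection_manager
              subFilter:
                name: envoy.filters.http.router
      patch:
        operation: INSERT_BEFORE
        value:
          name: apiclarity.wasm     # placeholder filter name
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
            # vm_config and code pointing at the APIClarity WASM module go here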
APIClarity includes support for many different API traffic sources that interact with Kubernetes applications. We’ll take a look at the current set of traffic sources.
The Kong plugin can be installed either by running a script or by applying a post-install patch to the Kong container. For the patch approach, set the following values as appropriate for your deployment in the APIClarity values.yaml file:
kong:
  ## Enable Kong traffic source
  ##
  enabled: true
  ## Carry out post-install patching of kong container to install plugin
  ##
  patch: true
  ## Specify the name of the proxy container in Kong gateway to patch
  ##
  containerName: "proxy"
  ## Specify the name of the Kong gateway deployment to patch
  ##
  deploymentName: ""
  ## Specify the namespace of the Kong gateway deployment to patch
  ##
  deploymentNamespace: ""
  ## Specify the name of the ingress resource to patch
  ##
  ingressName: ""
  ## Specify the namespace of the ingress resource to patch
  ##
  ingressNamespace: ""
The Tyk plugin can be installed either by running a script or by using a pre-install init container that adds the plugin. For the latter approach, set the following values as appropriate for your deployment in the APIClarity values.yaml file:
tyk:
  ## Enable Tyk traffic source
  ##
  enabled: true
  ## Enable Tyk verification in a Pre-Install Job
  ##
  enableTykVerify: true
  ## Specify the name of the proxy container in Tyk gateway to patch
  ##
  containerName: "proxy"
  ## Specify the name of the Tyk gateway deployment to patch
  ##
  deploymentName: ""
  ## Specify the namespace of the Tyk gateway deployment to patch
  ##
  deploymentNamespace: ""
The following external traffic sources are supported by APIClarity.
In order to tap traffic in an Apigee X Gateway that is external to the Kubernetes cluster where your application is running, you’ll need to configure a proxy so that Apigee X has reachability to APIClarity, install the APIClarity public certificate in Apigee X, and configure a shared flow bundle. See the README for more details.
In order to tap traffic in a BIG-IP Local Traffic Manager (LTM) that is external to the Kubernetes cluster where your application is running, you’ll need to install the APIClarity Agent on a host VM (separate from LTM) with reachability to both the LTM and to APIClarity. This will act as a proxy for the forwarded traffic. See the README file for installation steps.
APIClarity has an HTTP exporter that can be built into an external OpenTelemetry Collector to forward API traces and metrics to the APIClarity server. The OpenTelemetry Collector must be built with the APIClarity exporter image and configured with the APIClarity server endpoint.
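A schematic collector configuration might look like the following; the exporter name and endpoint are assumptions for illustration, so consult the APIClarity exporter README for the exact settings:

# Assumed exporter name and endpoint -- illustrative only
receivers:
  otlp:
    protocols:
      grpc: {}
exporters:
  apiclarity:                                          # hypothetical exporter name
    endpoint: "apiclarity-apiclarity.apiclarity:9000"  # assumed APIClarity server endpoint
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [apiclarity]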
APIClarity has an API traffic tapper that deploys a daemonset in a given namespace and forwards API traffic to the APIClarity server without needing an Envoy sidecar or service mesh. It uses this “tap stream” as a traffic source for API monitoring.
To use it, set the following values in APIClarity’s values.yaml as appropriate for the namespace you want to tap, and redeploy APIClarity:
tap:
  ## Enable Tap traffic source
  ##
  enabled: true
  ## Enable APIClarity Tap in the following namespaces
  ##
  namespaces:
    - default
Whew, that was a lot of information coming at you, but hopefully it was useful to understand a bit more about how APIClarity works.
Next in this blog series, I’ll give installation steps to get you started on your APIClarity journey to protect your cloud-native apps!
Anne McCormick is a cloud architect and open-source advocate in Cisco’s Emerging Technology & Incubation organization.