LaunchDarkly uses a novel streaming architecture to serve feature flags without making remote requests. We use server-sent events (SSE), a protocol for one-way real-time messaging, to send messages to your servers whenever you change the feature flag rules on your dashboard. SSE is widely used for server-to-browser messaging, but it works equally well for server-to-server communication. The SSE connection is all handled under the hood by our SDKs.
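The SSE wire format itself is simple: a text stream in which an `event:` line names the message type, `data:` lines carry the payload, and a blank line terminates each event. As a rough illustration of the protocol (not the SDKs' actual implementation, and the "patch" event name below is only an example), a minimal parser might look like:

```python
def parse_sse(stream_lines):
    """Parse server-sent events from an iterable of text lines.

    Yields (event_type, data) tuples. A blank line ends one event;
    multiple 'data:' lines within an event are joined by newlines,
    following the SSE specification.
    """
    event_type, data_lines = "message", []
    for line in stream_lines:
        line = line.rstrip("\n")
        if line == "":  # blank line: dispatch the accumulated event
            if data_lines:
                yield event_type, "\n".join(data_lines)
            event_type, data_lines = "message", []
        elif line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        # comment lines (starting with ':') and other fields are ignored


# A hypothetical flag-update message, shaped like an SSE stream:
raw = [
    "event: patch\n",
    'data: {"key": "new-checkout", "on": true}\n',
    "\n",
]
events = list(parse_sse(raw))
```

Because each event is delimited by a blank line, the server can push updates whenever it likes and the client dispatches them as they complete.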
Performance, Speed, and Reliability
What happens if the SDK loses connectivity to LaunchDarkly?
The SDK relies on its stored state to evaluate flags, and by default that state starts out empty. When the SDK first initializes, it opens a streaming connection to LaunchDarkly, and the initial response contains the SDK's current state. The SDK then keeps this streaming connection open; whenever a change is made in the LaunchDarkly dashboard or via the REST API, LaunchDarkly sends that change to all currently connected SDKs.
If the SDK ever loses connectivity to LaunchDarkly, it will keep attempting to re-establish a streaming connection until it succeeds. If you evaluate a flag before the SDK receives its initial state, or you fetch a flag that doesn't otherwise exist, the SDK returns the fallback value. All SDKs provide synchronous (blocking and/or polling) and asynchronous ways of waiting for the SDK's state to initialize.
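In code, that usually means blocking on initialization for a bounded time at startup, then evaluating with a fallback thereafter. A hedged, SDK-agnostic sketch of that pattern (the class and method names here are illustrative, not any real SDK's API):

```python
import threading

class FlagStore:
    """Illustrative in-memory flag state with a 'wait for init' gate."""

    def __init__(self):
        self._flags = {}
        self._ready = threading.Event()  # set once initial state arrives

    def init(self, flags):
        """Called when the initial streaming payload is received."""
        self._flags = dict(flags)
        self._ready.set()

    def wait_until_ready(self, timeout):
        """Block until initialized, or until the timeout elapses."""
        return self._ready.wait(timeout)

    def variation(self, key, fallback):
        """Return the flag's value, or the fallback if the store is
        uninitialized or the flag doesn't exist."""
        if not self._ready.is_set():
            return fallback
        return self._flags.get(key, fallback)


store = FlagStore()
before = store.variation("new-checkout", False)   # uninitialized: fallback
store.init({"new-checkout": True})                # initial payload arrives
after = store.variation("new-checkout", False)    # now served from state
missing = store.variation("no-such-flag", "off")  # unknown flag: fallback
```

The same `variation` call works before and after initialization, which is why evaluating early never fails outright; you just get the fallback.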
Do you make a remote call every time a feature flag is requested?
How do you ensure no latency?
Our unique streaming architecture pushes feature flag changes to your servers nearly instantaneously without adding latency to your site, because flags are evaluated locally rather than over the network. LaunchDarkly's performance is even faster than storing feature flags in your own database. We also have multiple layers of redundancy to ensure your users always receive a flag.
Do I need to modify my firewall to use LaunchDarkly?
In most cases, no. Our streaming connection only requires that your server be able to make an outbound HTTPS connection to *.launchdarkly.com.
What’s the overhead of a feature flag request?
Almost nothing. LaunchDarkly's SDKs use a streaming connection to asynchronously receive updates to your feature flag rules. Your actual feature flag requests are served from memory (or a Redis store, if you configure one). This adds less than one millisecond of latency to your page loads, about as fast as looking up a value in a hash table.
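"About as fast as looking up a value in a hash table" is close to literal: once the flag rules are in memory, the request path is a dictionary read with no network round trip. A rough sketch of the point, using a plain dict as a stand-in for the SDK's in-memory store:

```python
import timeit

# A stand-in for the SDK's in-memory flag state
flags = {f"flag-{i}": (i % 2 == 0) for i in range(10_000)}

def evaluate(key, fallback=False):
    # Served from memory: no remote request on the hot path
    return flags.get(key, fallback)

# Time 100k lookups; on typical hardware each evaluation takes well
# under a microsecond, i.e. far below the quoted millisecond budget.
per_call_s = timeit.timeit(lambda: evaluate("flag-4242"),
                           number=100_000) / 100_000
```

The streaming update and the request-path lookup are decoupled: updates arrive asynchronously in the background, and reads never wait on them.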
Do you support Redis caching?
Yes, we support Redis caching. Our SDKs can be configured to use Redis as a persistent store, so even if an initial connection to LaunchDarkly fails, the last known flag configurations will be served instead of the fallbacks.
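The precedence this implies is roughly: live streamed state if the SDK has initialized, else the last persisted configuration, else the fallback value from your code. A hedged sketch of that decision, with a plain dict standing in for Redis (the real SDKs handle all of this internally):

```python
def evaluate(key, fallback, live_flags=None, persisted_flags=None):
    """Pick a flag value by precedence: live state, then the
    persisted store, then the code-supplied fallback."""
    if live_flags is not None:        # streaming connection initialized
        return live_flags.get(key, fallback)
    if persisted_flags is not None:   # e.g. flags cached in Redis earlier
        return persisted_flags.get(key, fallback)
    return fallback                   # cold start, nothing cached


cache = {"new-checkout": True}   # last-known config from a prior run
live = {"new-checkout": False}   # freshly streamed state

v0 = evaluate("new-checkout", True, live_flags=live,
              persisted_flags=cache)              # live state wins
v1 = evaluate("new-checkout", False, live_flags=None,
              persisted_flags=cache)              # stream down: use cache
v2 = evaluate("new-checkout", False, live_flags=None,
              persisted_flags=None)               # nothing cached: fallback
```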
Redis caching is currently supported by the following SDKs:
You may also use the LaunchDarkly Relay to handle feature updates, offloading that responsibility from the SDKs running on your servers.
If I have a large number of backend servers, will LaunchDarkly serve features to my users consistently?
Yes. With our streaming architecture, updates are reflected in all of your servers within milliseconds. If you require even stronger consistency guarantees, you can configure our SDKs to read from and write to a shared Redis store.
What is the LaunchDarkly Relay?
The LaunchDarkly Relay establishes a connection to the LaunchDarkly streaming API, then proxies that stream connection to multiple clients.
The relay lets any number of servers connect to a local stream instead of each making its own outbound connection to LaunchDarkly's streaming API.
The relay can be configured to proxy multiple environment streams, even across multiple projects. Check out the docs.
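The fan-out the relay performs can be pictured as one upstream subscription broadcast to many local subscribers. A highly simplified, in-process sketch of that shape (the real relay proxies HTTP/SSE connections; queues are used here only for illustration):

```python
import queue

class StreamFanout:
    """One upstream event source fanned out to many local subscribers."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self):
        """Each local SDK gets its own local queue instead of its own
        outbound connection to LaunchDarkly."""
        q = queue.Queue()
        self._subscribers.append(q)
        return q

    def publish(self, event):
        """Called once per upstream event; delivered to every subscriber."""
        for q in self._subscribers:
            q.put(event)


relay = StreamFanout()
a, b = relay.subscribe(), relay.subscribe()
relay.publish({"flag": "new-checkout", "on": True})  # one upstream message
got_a, got_b = a.get_nowait(), b.get_nowait()        # both subscribers see it
```

However many servers subscribe locally, only one outbound streaming connection leaves your network.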
What is the fallback variation?
The fallback variation is the feature flag value served if LaunchDarkly is unreachable and the user has no previously stored flag settings. You define the fallback variation in your code.