• What happens if LaunchDarkly goes down?

    Outages happen, but LaunchDarkly uses a CDN to keep serving your feature flags uninterrupted if something goes wrong with our backend servers. Even if the CDN goes down (we use Fastly, a widely used CDN provider), your servers continue operating with the last known set of feature flag rules, so your customers keep seeing the right set of features.

  • Do you make a remote call every time a feature flag is requested?

    LaunchDarkly uses a novel streaming architecture to serve feature flags without making remote requests. We use server-sent events (SSE), a protocol for one-way real-time messaging, to send messages to your servers whenever you change the feature flag rules on your dashboard. SSE is widely used for server-to-browser messaging, but it works equally well for server-to-server communication. The SSE connection is all handled under the hood by our SDKs.
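
    As a rough illustration, here is a minimal sketch using the Java server-side SDK (the com.launchdarkly.client package linked later in this FAQ); the SDK key, user key, and flag key are placeholders:

        import com.launchdarkly.client.LDClient;
        import com.launchdarkly.client.LDUser;

        public class StreamingExample {
            public static void main(String[] args) throws Exception {
                // Constructing the client opens a single long-lived SSE connection to
                // LaunchDarkly and loads the current flag rules into memory. Rule changes
                // made in the dashboard are pushed to this process over that stream.
                LDClient client = new LDClient("YOUR_SDK_KEY");

                // Evaluations read from the in-memory rules; no remote request is made here.
                LDUser user = new LDUser("user-key-123");
                boolean enabled = client.boolVariation("new-checkout-flow", user, false);
                System.out.println("new-checkout-flow is " + (enabled ? "on" : "off"));

                client.close(); // shuts down the streaming connection
            }
        }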

  • How do you ensure no latency?

    Our streaming architecture pushes feature flag updates to your servers within milliseconds, without introducing any latency to your site. Because flags are evaluated in memory, LaunchDarkly is typically even faster than storing feature flags in your own database. We also have multiple layers of redundancy to ensure your users always receive a flag.

  • Do I need to modify my firewall to use LaunchDarkly?

    In most cases, no. Our streaming connection only requires that your server be able to make an outbound HTTPS connection to *.launchdarkly.com.

  • What’s the overhead of a feature flag request?

    Almost nothing. LaunchDarkly’s SDKs use a streaming connection to asynchronously receive updates to your feature flag rules. Your actual feature flag requests are served from memory (or a Redis store, if you configure one). This adds less than 1 millisecond of latency to your page loads, roughly as fast as looking up a value in a hash table.
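
    As a rough, unscientific way to see this for yourself, you can time an evaluation; this snippet assumes the `client` and `user` from the earlier Java sketch, and the call is just an in-memory lookup:

        // Assumes `client` (LDClient) and `user` (LDUser) from the earlier sketch.
        long start = System.nanoTime();
        boolean enabled = client.boolVariation("new-checkout-flow", user, false);
        long micros = (System.nanoTime() - start) / 1_000;

        // Typically well under a millisecond, since no network round trip is involved.
        System.out.println("Evaluated in " + micros + " microseconds, value = " + enabled);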

  • Do you support Redis caching?

    Yes, we support Redis caching. Our SDKs can be configured to use Redis as a persistent store, so even if an initial connection to LaunchDarkly fails, the last known flag configurations are used instead of the fallback values. For example, in Java, you can configure a Redis store like this: http://launchdarkly.github.io/java-client/com/launchdarkly/client/RedisFeatureStore.html
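
    As a sketch of what that configuration can look like with the com.launchdarkly.client Java SDK (the exact builder constructors and method names vary between SDK versions, so treat the names below as assumptions and check the linked Javadoc; the Redis host, port, and cache TTL are placeholders):

        import com.launchdarkly.client.LDClient;
        import com.launchdarkly.client.LDConfig;
        import com.launchdarkly.client.RedisFeatureStore;
        import com.launchdarkly.client.RedisFeatureStoreBuilder;

        public class RedisStoreExample {
            public static void main(String[] args) throws Exception {
                // Placeholder local Redis instance with a 30-second local cache.
                RedisFeatureStore store =
                    new RedisFeatureStoreBuilder("localhost", 6379, 30).build();

                // Persist flag data in Redis: if a later startup cannot reach LaunchDarkly,
                // the SDK serves the last flags written to Redis instead of the hard-coded
                // fallback values.
                LDConfig config = new LDConfig.Builder()
                    .featureStore(store)
                    .build();

                LDClient client = new LDClient("YOUR_SDK_KEY", config);
                // ... evaluate flags as usual ...
                client.close();
            }
        }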

    You can also use the LaunchDarkly Relay Proxy to handle feature updates, offloading that responsibility from the SDKs running on your servers.

  • If I have a large number of backend servers, will LaunchDarkly serve features to my users consistently?

    Yes. With our streaming architecture, updates are reflected across all of your servers within milliseconds. If you require even stronger consistency guarantees, you can configure our SDKs to read from and write to a shared Redis store.

  • What is the LaunchDarkly Relay Proxy?

    The LaunchDarkly Relay Proxy establishes a connection to the LaunchDarkly streaming API, then proxies that stream connection to multiple clients.

    The relay proxy lets a number of servers connect to a local stream instead of making a large number of outbound connections to stream.launchdarkly.com.

    The relay proxy can be configured to proxy multiple environment streams, even across multiple projects. Check out the docs.
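
    As a hedged sketch of the server side (the relay address, port, and builder method names are assumptions; older Java SDKs expose baseURI and streamURI on LDConfig.Builder), a server can point its SDK at a local relay like this:

        import java.net.URI;

        import com.launchdarkly.client.LDClient;
        import com.launchdarkly.client.LDConfig;

        public class RelayProxyExample {
            public static void main(String[] args) throws Exception {
                // Placeholder address of a relay proxy running on the local network.
                URI relay = URI.create("http://localhost:8030");

                // Point flag requests and the streaming connection at the relay, so this
                // server opens its stream locally instead of connecting out to
                // stream.launchdarkly.com.
                LDConfig config = new LDConfig.Builder()
                    .baseURI(relay)
                    .streamURI(relay)
                    .build();

                LDClient client = new LDClient("YOUR_SDK_KEY", config);
                // ... evaluate flags as usual ...
                client.close();
            }
        }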

  • What is the fallback variation?

    The fallback variation is the feature flag value served if LaunchDarkly is unreachable and the user has no previously stored flag settings. You define the fallback variation in your code.
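
    In the Java SDK, for example, the fallback is the last argument to each variation call (the flag key and value here are placeholders, and `client` and `user` are assumed from the earlier sketches):

        // The final argument (false) is the fallback variation: it is served only when
        // the SDK has no flag data at all, e.g. LaunchDarkly was unreachable at startup
        // and no persistent store (such as Redis) holds a previous configuration.
        boolean showFeature = client.boolVariation("new-checkout-flow", user, false);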