Affected: All SDKs
Symptoms
After you set up an experiment and start an iteration, the following status may appear on the Experiment Results page when you expect to see results:
This metric has never received an event for this iteration
Cause
LaunchDarkly has not yet received a metric event whose context key matches a feature event from a context that evaluated the experiment's flag and matched the experiment rule in that flag.
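For illustration, here is a minimal sketch of that pairing, assuming the Node.js server-side SDK; the SDK key, flag key, metric event key, and context key are placeholders. The point is that the context passed to `track` uses the same key as the context that evaluated the flag, so LaunchDarkly can join the metric event to the feature event.

```typescript
// Minimal sketch with the Node.js server-side SDK (package name may differ by SDK version).
// 'YOUR_SDK_KEY', 'experiment-flag', 'checkout-completed', and 'user-123' are placeholder values.
import * as LaunchDarkly from '@launchdarkly/node-server-sdk';

const client = LaunchDarkly.init('YOUR_SDK_KEY');

async function recordConversion(): Promise<void> {
  await client.waitForInitialization();

  // Use the same context for both calls so the feature event and the
  // metric event share a context key.
  const context = { kind: 'user', key: 'user-123' };

  // Evaluating the flag sends the feature event that enrolls this context
  // in the experiment.
  const showNewCheckout = await client.variation('experiment-flag', context, false);
  console.log('serving variation:', showNewCheckout);

  // The metric event uses the same context, so LaunchDarkly can match it
  // to the feature event above.
  client.track('checkout-completed', context);

  // Flush so queued events are delivered promptly.
  await client.flush();
}

recordConversion().catch((err) => console.error(err));
```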
Solution
Check the list below to see whether any of the following issues apply:
- Are any context instances showing traffic counts in the Experiment Results? If not, check whether your SDK implementation sends feature events correctly, for example by calling the `variation` method when a context encounters the experiment, as in the sketch under Cause above. More information on how to do this: How to register custom conversion metric events in your experiment
- If you are using a page view or click metric but your application has page redirects, switch to a custom conversion metric and call `track` and `flush` manually instead, so the event is not lost during the redirect (see the sketch after this list).
- If you are using a custom conversion or numeric metric, check that you are calling `track` correctly: Sending custom events
- When calling `track`, make sure the event key you pass matches the metric's event key in the LaunchDarkly dashboard.
- If you are using a JavaScript-based SDK, check whether the browser has any ad-blocker extensions or Do Not Track settings enabled; these block sending the events necessary to register the metric event in your experiment.
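As a concrete example of the track-and-flush approach, here is a minimal sketch using the JavaScript client-side SDK; the client-side ID, context key, metric event key, and redirect URL are placeholders. Awaiting `flush` before navigating gives the SDK a chance to deliver the queued event before the page unloads.

```typescript
// Minimal sketch with the JavaScript client-side SDK.
// 'YOUR_CLIENT_SIDE_ID', 'user-123', 'signup-completed', and '/welcome'
// are placeholder values.
import { initialize } from 'launchdarkly-js-client-sdk';

const client = initialize('YOUR_CLIENT_SIDE_ID', { kind: 'user', key: 'user-123' });

async function onSignupClick(): Promise<void> {
  await client.waitForInitialization();

  // Record the conversion. The event key must match the metric's event key
  // in the LaunchDarkly dashboard.
  client.track('signup-completed');

  // Deliver queued events before redirecting so they are not lost when the
  // page unloads.
  await client.flush();

  window.location.href = '/welcome';
}

document.querySelector('#signup')?.addEventListener('click', () => {
  void onSignupClick();
});
```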
If using Segment to send events to LaunchDarkly metrics, the Segment event name and LaunchDarkly event key must match exactly. More information on how to do this: Segment
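As a hypothetical illustration, if your Segment instrumentation sends the event below, the LaunchDarkly metric's event key must be the identical string:

```typescript
// Sketch of a Segment analytics.js call; 'signup-completed' is a placeholder
// event name. The corresponding LaunchDarkly metric's event key must be
// exactly 'signup-completed' for forwarded events to count toward the metric.
declare const analytics: { track: (event: string, properties?: Record<string, unknown>) => void };

analytics.track('signup-completed');
```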
Once you've made edits to a metric, you must run a new iteration of the experiment to check if the metric has received an event. A running iteration will continue to use the version of the metric it started with. Therefore, any edits made to a metric will not be immediately reflected in the analysis of an ongoing experiment. This design ensures that metrics cannot be altered midway through an experiment, which could invalidate the results. To apply the latest version of a metric, stop the current iteration and start a new one: Metric versions