Evaluate the performance of your app as you implement new features and conduct experiments
There are many cases where different users end up with different feature flags or experiments. For example, you might be:
- Controlling your rollout by enabling features for a percentage of your users to monitor performance and stability
- Creating and testing different mobile onboarding experiences concurrently
- Testing different landing pages for your mobile app
- Implementing new features with different UIs
Through the “Experiments” API, you can keep track of your experiments and their performance for each user, and even filter by them. This can help you with:
- Detecting whether the potential source of any latency or issues in the app was introduced by different variants of an experiment or by new features
- Gaining visibility into the latencies of your variants across different metrics
- Filtering by your experimental variants to analyze whether they impact your performance or cause crashes
- Debugging issues faster by understanding whether the experimental values contributed to an issue
To track your feature flags or experiments in the dashboard, use the following method.
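A minimal Swift sketch of the call, assuming the SDK exposes an addExperiments method that accepts an array of experiment names (the SDK type and method name here are illustrative placeholders for your SDK's actual API):

```swift
// Attach experiment names to subsequent reports and sessions.
// `SDK.addExperiments` is an assumed name; substitute the tracking
// method your SDK actually provides.
SDK.addExperiments(["guest_mode", "new_onboarding_flow"])
```

A good place to make this call is right after you resolve a user's feature flags, so every report carries the variants that were active for that user.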
You can have up to 200 experiments, with no duplicates. Each experiment name can be at most 70 characters long. Experiments are not removed at the end of the session or when logOut is called.
Experiments in performance monitoring
Once you add the API call to your code, you will be able to view the experiments in the patterns section of Cold App Launch and UI Hangs.
You can see how the latencies of each metric correlate with the experimental variants. For example, in the previous screenshot, users who had guest_mode enabled had very different Apdex scores and p50 and p95 latencies.
You can also isolate an experiment by filtering on a specific experiment value for further analysis, to understand whether it is impacting app launch latency or causing more UI hangs.
If you filter by guest_mode and No Experiments, as shown in the following screenshot, the No Experiments option represents occurrences without any experiments applied. You can also filter by one or more experimental values.
The No Experiments selection helps you spot and compare performance differences in each metric.
Experiments in Crash Reporting
Rolling out new features or modifying your code can increase the number of errors you see. By analyzing how different experiment variants contribute to your crashes, you can minimize debugging effort and save your team time.
For example, if you just rolled out a new recommendation feature to a subset of your users, you can use the filters to view all the crashes that occurred for users who had this feature enabled.
In the screenshot below, we filtered by the experiment Recommendations_enabled to view the relevant crashes.
You can also view the experiment variants attached to each crash report on your dashboard in the patterns section of a crash.
Experiments and Team Ownership
If a team is responsible for a specific feature flag or experiment, you can automatically assign them the relevant issues and forward those issues to their preferred tool. For more details, see the Team Ownership documentation.
In the screenshot below, we assigned crashes relevant to the experiment Recommendations_enabled to the team responsible for this feature and auto-forwarded them to that team's Jira board.
If your experiment is concluded or you would like to simply remove it, you can use this method:
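A matching Swift sketch, again assuming an illustrative removeExperiments method that takes the names of the experiments to drop (substitute your SDK's actual removal API):

```swift
// Stop attaching a concluded experiment to future reports.
// `SDK.removeExperiments` is an assumed name.
SDK.removeExperiments(["guest_mode"])
```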
You can use the following method to clear all the experiments from your reports:
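And a sketch for clearing everything at once, assuming an illustrative clearAllExperiments method that takes no arguments:

```swift
// Remove every tracked experiment, e.g. after a full rollout
// or when resetting experiment state. Method name is an assumption.
SDK.clearAllExperiments()
```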