🧪 A/B testing with campaigns

Run simple A/B tests by creating two campaigns and assigning them to different devices. Analysis is manual; there’s no auto-winner promotion.

Written by Leo
Updated over 5 months ago

How it works in Merlin Cloud

  • Two campaigns, one release: Create Variant A and Variant B from the same Experience Release so only content differs.

  • Manual traffic split: Assign each variant to a mutually exclusive set of devices (use quick-select by country/location if helpful); a split sketch follows this list.

  • Scheduling: Use the same London start time for both variants to avoid bias.

  • No built-in stickiness: The platform doesn’t enforce user-level sticky variants across sessions/devices. (Enterprise option available on request.)
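
Since there’s no built-in randomisation, the split itself is on you. A minimal Python sketch of a fair 50/50 split, assuming you can export your device IDs as a list (the IDs and seed are illustrative):

import random

def split_devices(device_ids, seed=42):
    """Shuffle device IDs and split them into two mutually exclusive halves."""
    rng = random.Random(seed)    # fixed seed keeps the split reproducible
    ids = list(device_ids)
    rng.shuffle(ids)
    mid = len(ids) // 2
    return ids[:mid], ids[mid:]  # (Variant A devices, Variant B devices)

# Paste in the IDs exported from your device list
variant_a, variant_b = split_devices(["dev-001", "dev-002", "dev-003", "dev-004"])
print("Variant A:", variant_a)
print("Variant B:", variant_b)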

Set up a manual A/B test

  1. Duplicate your current campaign twice → rename the copies to Variant A and Variant B.

  2. Edit the variant content (e.g., hero image, copy, CTA).

  3. In Targets, assign distinct device lists to A and B.

  4. Schedule both for the same start time (London) or Publish both; a time-zone sketch follows these steps.

  5. After start, allow a 15-minute reload window for devices to pick up updates.
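
The scheduler’s start time is London local time (UTC+0 in winter, UTC+1 during British Summer Time), so it’s worth converting to UTC before lining the start up against device logs or analytics exports. A small sketch using Python’s standard zoneinfo module (the date and time are illustrative):

from datetime import datetime
from zoneinfo import ZoneInfo

# Start time exactly as entered in the scheduler (London local time)
start_london = datetime(2024, 6, 3, 9, 0, tzinfo=ZoneInfo("Europe/London"))

# Convert to UTC to compare against device logs or analytics exports
start_utc = start_london.astimezone(ZoneInfo("UTC"))
print(start_london.isoformat())  # 2024-06-03T09:00:00+01:00 (BST)
print(start_utc.isoformat())     # 2024-06-03T08:00:00+00:00

# Devices may pick up the new content as late as start + 15 minutes
# (the reload window), so the first 15 minutes can mix old and new content.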

Tip: Keep images ≤ 500 KB and total campaign size ≤ 20 MB (unless using video) so load times don’t skew results.
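
A quick pre-flight check against those limits, assuming your campaign assets sit in one local folder (the folder name and image extensions are illustrative):

from pathlib import Path

MAX_IMAGE_BYTES = 500 * 1024           # 500 KB per image
MAX_CAMPAIGN_BYTES = 20 * 1024 * 1024  # 20 MB total (non-video campaigns)

assets = Path("campaign_assets")       # folder holding this campaign's files
total = 0
for f in sorted(assets.iterdir()):
    if not f.is_file():
        continue
    size = f.stat().st_size
    total += size
    if f.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"} and size > MAX_IMAGE_BYTES:
        print(f"Over 500 KB: {f.name} ({size / 1024:.0f} KB)")

if total > MAX_CAMPAIGN_BYTES:
    print(f"Total {total / (1024 * 1024):.1f} MB exceeds the 20 MB campaign limit")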

Measuring results

Define success upfront (examples):

  • Engagement: sessions, page views per session, dwell time

  • Interaction: button taps, product views, conversions

  • Kiosk funnel: views → interactions → completions

Use the dashboard to compare metrics across the A and B device sets. Export the data if needed and calculate lift manually. There’s no auto-winner; promote the winning variant by publishing it to more devices.
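
Because the lift calculation is manual, here is a minimal sketch using only Python’s standard library: relative lift of B over A plus a two-sided two-proportion z-test for significance (the visitor and conversion counts are made up):

from math import sqrt, erf

def lift_and_significance(conv_a, n_a, conv_b, n_b):
    """Relative lift of B over A and a two-sided two-proportion z-test."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    lift = (rate_b - rate_a) / rate_a
    pooled = (conv_a + conv_b) / (n_a + n_b)   # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tail
    return lift, p_value

# Illustrative numbers exported from the dashboard
lift, p = lift_and_significance(conv_a=120, n_a=4000, conv_b=150, n_b=4100)
print(f"Lift: {lift:+.1%}, p-value: {p:.3f}")   # Lift: +22.0%, p-value: 0.099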

Best practices

  • Keep variants identical except for the element under test.

  • Ensure device lists are comparable (footfall, store type, locale).

  • Avoid overlapping schedules that cause conflict resolution to override one variant.

  • Test long enough to smooth out daily swings (a duration sketch follows this list); monitor Last seen and health so downtime doesn’t bias data.

  • Preview both variants and validate required fields before scheduling.
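
“Long enough” can be estimated up front with the standard two-proportion sample-size formula (5% significance, 80% power); the baseline rate, expected lift, and daily traffic below are assumptions to replace with your own:

from math import ceil, sqrt

def days_needed(base_rate, rel_lift, daily_visitors_per_variant,
                z_alpha=1.96, z_beta=0.84):
    """Days each variant must run to detect rel_lift over base_rate
    (two-proportion test, alpha=0.05 two-sided, power=0.80)."""
    p1 = base_rate
    p2 = base_rate * (1 + rel_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)   # required sample size per variant
    return ceil(n / daily_visitors_per_variant)

# e.g. 3% baseline conversion, hoping for a 20% relative lift,
# ~500 sessions per day per variant
print(days_needed(0.03, 0.20, 500), "days per variant")  # 28 days per variant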

Limitations and enterprise options

  • No rules-based targeting or built-in randomisation.

  • No auto-winner or multi-armed bandit (MAB) optimisation; all analysis and promotion are manual.

  • Sticky user assignment and enhanced media processing can be enabled for enterprise clients on request.
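
For context, “sticky” assignment is usually implemented by hashing a stable user identifier so the same user always lands in the same variant, whatever the session or device. A generic sketch of the technique (not Merlin Cloud’s actual implementation; the salt and ID are illustrative):

import hashlib

def assign_variant(user_id: str, salt: str = "hero-test-2024") -> str:
    """Deterministically map a stable user ID to a variant.
    The same ID always gets the same variant, which is what
    sticky assignment guarantees across sessions and devices."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).digest()
    return "A" if digest[0] < 128 else "B"   # roughly 50/50 split

print(assign_variant("loyalty-card-8841"))   # same output on every call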
