How it works in Merlin Cloud
Two campaigns, one release: Create Variant A and Variant B from the same Experience Release so only content differs.
Manual traffic split: Assign each variant to a mutually exclusive set of devices (use quick-select by country/location if helpful); one way to randomise that split is sketched after this list.
Scheduling: Use the same London start time for both variants to avoid bias.
No built-in stickiness: The platform doesn’t enforce user-level sticky variants across sessions/devices. (Enterprise option available on request.)
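Because the split is manual, it helps to randomise which devices land in each set rather than picking them by eye. Below is a minimal sketch in Python; the device IDs are made up, and Merlin Cloud has no split API, so the output is simply the two lists you then assign to Variant A and Variant B in the Targets screen.

```python
import random

# Hypothetical device IDs exported from your device list. Merlin Cloud has no
# split API, so this script only prepares two mutually exclusive lists that
# you assign to Variant A and Variant B in the Targets screen.
device_ids = [f"kiosk-{i:03d}" for i in range(1, 41)]

random.seed(42)  # fixed seed so the split is reproducible
random.shuffle(device_ids)

midpoint = len(device_ids) // 2
variant_a_devices = sorted(device_ids[:midpoint])
variant_b_devices = sorted(device_ids[midpoint:])

print("Variant A:", variant_a_devices)
print("Variant B:", variant_b_devices)
```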
Set up a manual A/B test
Duplicate your current campaign twice → rename the copies to Variant A and Variant B.
Edit the variant content (e.g., hero image, copy, CTA).
In Targets, assign distinct device lists to A and B.
Schedule both for the same start time (London) or Publish both.
After start, allow a 15-minute reload window for devices to pick up updates.
Tip: Keep images ≤ 500 KB and total campaign size ≤ 20 MB (unless using video) so load times don’t skew results; a quick pre-upload size check is sketched below.
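If your variant assets sit in a local folder before upload, a small script can flag anything over the guideline sizes. This is only a sketch; the folder name and file extensions are assumptions, not anything Merlin Cloud requires.

```python
from pathlib import Path

IMAGE_LIMIT_KB = 500      # per-image guideline from the tip above
CAMPAIGN_LIMIT_MB = 20    # total campaign size guideline (non-video campaigns)

def check_campaign_assets(folder: str) -> None:
    """Flag images over 500 KB and report the total campaign size."""
    total_bytes = 0
    for f in Path(folder).rglob("*"):
        if not f.is_file():
            continue
        size_kb = f.stat().st_size / 1024
        total_bytes += f.stat().st_size
        if f.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"} and size_kb > IMAGE_LIMIT_KB:
            print(f"Over limit: {f.name} is {size_kb:.0f} KB")
    total_mb = total_bytes / (1024 * 1024)
    status = "OK" if total_mb <= CAMPAIGN_LIMIT_MB else "over the 20 MB guideline"
    print(f"Total campaign size: {total_mb:.1f} MB ({status})")

check_campaign_assets("variant_a_assets")  # hypothetical local asset folder
```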
Measuring results
Define success upfront (examples):
Engagement: sessions, page views per session, dwell time
Interaction: button taps, product views, conversions
Kiosk funnel: views → interactions → completions
Use the dashboard to compare metrics for the A and B device sets. Export the data if needed and calculate lift manually (a sketch follows below). There’s no auto-winner; promote the winning variant by publishing it to more devices.
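A minimal example of the manual lift calculation, assuming you have exported per-variant totals for the kiosk funnel; the numbers and field names are illustrative, not the actual export schema.

```python
# Hypothetical exported totals for each variant's device set.
variant_a = {"views": 4200, "interactions": 610, "completions": 140}
variant_b = {"views": 4050, "interactions": 730, "completions": 190}

def funnel_rates(m: dict) -> dict:
    """views -> interactions -> completions, expressed as rates."""
    return {
        "interaction_rate": m["interactions"] / m["views"],
        "completion_rate": m["completions"] / m["interactions"],
        "overall_conversion": m["completions"] / m["views"],
    }

def lift(b_value: float, a_value: float) -> float:
    """Relative lift of B over A (treating A as the control)."""
    return (b_value - a_value) / a_value

rates_a, rates_b = funnel_rates(variant_a), funnel_rates(variant_b)
for metric in rates_a:
    print(f"{metric}: A={rates_a[metric]:.3f}  B={rates_b[metric]:.3f}  "
          f"lift={lift(rates_b[metric], rates_a[metric]):+.1%}")
```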
Best practices
Keep variants identical except for the element under test.
Ensure device lists are comparable (footfall, store type, locale).
Avoid overlapping schedules that cause conflict resolution to override one variant.
Test long enough to smooth out daily swings; monitor Last seen and health so downtime doesn’t bias data. A rough way to check whether you’ve collected enough data is sketched after this list.
Preview both variants and validate required fields before scheduling.
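One rough way to judge whether the difference between variants is larger than day-to-day noise (the platform doesn’t do this for you) is a two-proportion z-test on the success metric you defined upfront. The totals below are hypothetical.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of A and B; returns the z score and two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided
    return z, p_value

# Hypothetical totals: completions out of views for each device set.
z, p = two_proportion_z_test(conv_a=140, n_a=4200, conv_b=190, n_b=4050)
print(f"z = {z:.2f}, p = {p:.4f} -> "
      f"{'difference looks real' if p < 0.05 else 'keep the test running'}")
```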
Limitations and enterprise options
No rules-based targeting or built-in randomisation.
No auto-winner or MAB; all analysis and promotion are manual.
Sticky user assignment and enhanced media processing can be enabled for enterprise clients on request.
