A controlled experiment: half the visitors see A, half see B, measure which wins.
A/B testing (also called split testing) is how you know whether a change actually worked. You randomly split traffic: 50% see the original (A), 50% see the variant (B). Same period, same audience, different experience. Whichever variant's conversion rate is higher by a statistically significant margin wins. Without this, you're guessing.
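The random split is usually done by hashing a visitor ID, so each visitor sees the same variant on every visit. A minimal sketch (the function and experiment names are illustrative, not from any particular library):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "signup-button") -> str:
    # Hash the experiment name plus user id so assignment is deterministic:
    # the same visitor always lands in the same bucket across sessions,
    # and different experiments bucket the same visitor independently.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "A" if bucket < 0.5 else "B"
```

Because the hash output is effectively uniform, roughly half of all visitors land in each bucket without storing any per-visitor state.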
Intuition about what will convert is usually wrong. A/B testing replaces opinion with evidence. But: you need enough traffic to reach statistical significance (usually thousands of visitors per variant), and you can only test so many things at once. Prioritize tests by potential impact.
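The significance check above is commonly done with a two-proportion z-test. A minimal sketch, with hypothetical conversion numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # conversion rate under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical data: A converts 200/5000 (4.0%), B converts 250/5000 (5.0%).
z, p = two_proportion_z_test(200, 5000, 250, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these numbers p falls below the conventional 0.05 threshold, so B's lift would count as significant; with a tenth of the traffic the same 1-point lift would not, which is why the thousands-per-variant guideline matters.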