This is something a Smith chart could show you graphically. For antenna analyzers, a half-wave jumper will ensure the complex impedance seen at the coax input is the same (discounting loss) as at the antenna feed point, without needing to calibrate to the end of the coax.
Think of coax as an impedance transformer: it changes the impedance presented by a mismatch (and only a mismatch), and that impedance comes back around to the same value every half wavelength.
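Here's a rough Python sketch of that using the standard lossless-line formula, Z_in = Z0(Z_L + jZ0 tan βl)/(Z0 + jZ_L tan βl). The 25 + j40 ohm feed point impedance is just a made-up example, and loss is ignored:

```python
import math

def z_in(z_load, z0, length_wl):
    """Impedance seen looking into a lossless line of `length_wl` wavelengths."""
    t = math.tan(2 * math.pi * length_wl)   # tan(beta * l)
    return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

z_ant = 25 + 40j    # example mismatched feed point impedance (made-up value)
for l in (0.125, 0.25, 0.5):
    z = z_in(z_ant, 50.0, l)
    print(f"{l} wavelengths of 50-ohm line: Z_in = {z.real:.1f} {z.imag:+.1f}j ohms")
# The 0.5-wavelength case comes back to the original 25 + j40 -- the half-wave
# jumper repeats the antenna's impedance at the analyzer end.
```

The quarter-wave case is the familiar impedance inverter (Z0²/Z_L); the half-wave case hands you back the antenna's own impedance.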
However, coax will not change the SWR of a mismatch (ignoring loss). Varying the length of line causes the impedance to vary, but only in a way that maintains a constant SWR: the R and the X change together such that the magnitude of the reflection coefficient stays the same. What is necessary for a match changes, but how badly it is mismatched does not. For a matched system, length does not matter at all.
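A quick numerical check of that (same lossless-line formula as above, repeated so the snippet runs on its own; the 25 + j40 ohm load is still just an example):

```python
import math

def z_in(z_load, z0, length_wl):
    t = math.tan(2 * math.pi * length_wl)
    return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

def swr(z, z0=50.0):
    gamma = abs((z - z0) / (z + z0))     # magnitude of the reflection coefficient
    return (1 + gamma) / (1 - gamma)

z_ant = 25 + 40j                         # same example feed point impedance
for l in (0.0, 0.1, 0.23, 0.37, 0.5):
    z = z_in(z_ant, 50.0, l)
    print(f"{l:4.2f} wl: R = {z.real:7.1f}   X = {z.imag:+8.1f}   SWR = {swr(z):.2f}")
# R and X swing all over the place as the line gets longer, but the SWR sits
# at about 3.5 for every length (lossless line assumed).
```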
Use a half-wave jumper when you want to measure the feed point impedance at the antenna, for example to design a matching network or to transform a mismatch into something a specific tuner can handle. If all you care about is SWR and watts, don't worry about length.
One exception to this is common-mode current. Coax length does change the length of the common-mode path, so if there is a current imbalance at the feed point, length can have an impact on the SWR because the shield is radiating like an antenna.
edit: Jim's comment stems from the fact that early analyzers, like the MFJ and early RigExpert units, did not provide a convenient way to calibrate to the end of the cable, so someone wanting to measure an antenna to design a loading coil or other match would need the half-wave jumper to get useful starting numbers, unless of course you plotted it on a Smith chart and added the coax effects manually. Nowadays this isn't an issue, because almost all analyzers can do an OSL calibration or port extension to the end of the coax.