Because we only limit the IPs available to the guest (no S/D NAT), practically all packets get tracked (none are normally discarded unless the user tries to use IPs they are not allowed to).
When pushing toward the maximum PPS we can get out of the Linux kernel (around 1.8 million PPS in practice), connection tracking cuts that figure by more than 60% in my tests, beyond which we start getting dropped packets. Connection tracking also eats RAM on top of CPU cycles. And I do not feel like changing the maximum / default values after hours just because we hit the limit of the conntrack table, or breaching the RFC IPv4 standards by lowering the connection timeout and closing open connections before the 5 days are over, simply because our connection tracking table is not big enough.
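For anyone who wants to see how close they are to that limit before drops begin, here is a small sketch of my own (not anything official, just assuming a Linux box with nf_conntrack loaded; the /proc/sys paths can vary by kernel/distro) that reads the counters the kernel already exposes:

```python
#!/usr/bin/env python3
# Rough sketch: report conntrack table utilization and the established-TCP
# timeout. Assumes /proc/sys/net/netfilter/* exists (nf_conntrack loaded).

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

base = "/proc/sys/net/netfilter/"

count   = read_int(base + "nf_conntrack_count")  # entries currently tracked
limit   = read_int(base + "nf_conntrack_max")    # table size limit
tcp_est = read_int(base + "nf_conntrack_tcp_timeout_established")  # seconds, ~5 days by default

print(f"conntrack entries: {count}/{limit} ({100.0 * count / limit:.1f}% full)")
print(f"tcp established timeout: {tcp_est} s (~{tcp_est / 86400:.1f} days)")

if count > 0.9 * limit:
    print("warning: table almost full, new flows will start being discarded")
```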
While I agree the connection tracking penalty might be negligible for not-so-busy servers, for production servers with many users or a high rate of small packets it definitely is a problem. And not just in terms of latency, but also of silent packet drops / discards, which are especially painful for stateless protocols like UDP (e.g. ordinary DNS request packets getting dropped and sites not resolving).
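Those discards never show up on the application side, but the kernel does count them. Something like this (again just my own sketch, assuming the usual per-CPU layout of /proc/net/stat/nf_conntrack; column names differ a bit between kernel versions) makes them visible:

```python
#!/usr/bin/env python3
# Sketch: sum the per-CPU conntrack statistics so the "silent" drops become
# visible. Values in this file are hexadecimal, one line per CPU.

from collections import Counter

totals = Counter()
with open("/proc/net/stat/nf_conntrack") as f:
    header = f.readline().split()        # column names, e.g. drop, early_drop
    for line in f:
        for name, value in zip(header, line.split()):
            totals[name] += int(value, 16)

for name in ("drop", "early_drop", "insert_failed"):
    if name in totals:
        print(f"{name}: {totals[name]}")
```

(The conntrack-tools package shows roughly the same numbers via `conntrack -S`, if you have it installed.)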
I'm just guessing here, but there are probably cases where people simply bought faster hardware to be able to forward packets reliably, instead of just disabling connection tracking.