> Our final analysis showed a reduction of over 400 vCPU cores, leading to a projected annual infrastructure cost saving of over $300,000.
Damn, those are some expensive vCPUs: $750 per vCPU per year. That figure probably includes the memory savings too, but still.
It would be nice to read about what they tried before rewriting. They mention flamegraphs but nothing specific.
In my experience with Go in fintech, Go API services can become CPU-bound from excessive context switching when GOMAXPROCS isn't tuned well for hot services, and from things like string-keyed maps on hot paths. These tend to be easy to catch with pprof, and I've gotten some 2x speedups from fixes like that. I presume they already checked those quick wins, though.
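To be concrete, this is a minimal sketch of the kind of quick-win setup I mean: cap GOMAXPROCS to the container's CPU quota (here via uber-go/automaxprocs) and expose pprof so hot paths show up in a profile. The port and layout are just placeholders, not anything from the article.

```go
// Sketch: sane GOMAXPROCS in a container + pprof endpoint for CPU profiles.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux

	_ "go.uber.org/automaxprocs" // sets GOMAXPROCS from the cgroup CPU limit
)

func main() {
	// Side port for profiling; grab a CPU profile with
	// `go tool pprof http://localhost:6060/debug/pprof/profile`.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... actual API handlers would go here ...
	select {}
}
```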
I'm curious what the bottleneck actually was and whether it couldn't have been solved with some ugly Go optimisations instead. Not saying it was wrong to rewrite it, I just want to know more because I find it super interesting.
I had very similar savings on memory and CPU when I ported my monolith from Go to Rust. I also noticed a roughly 40% reduction in lines of code.