

A quick shallow look.
- Avoid single hard-coded paths. Provide fallbacks, make them all configurable, and use xdg (properly)…etc. (see the first sketch after this list).
- Avoid `.unwrap()` or any other source of `panic!()` for non-fatal things that can actually fail.
- Make fields that aren't strictly necessary optional in your model, if that helps.
- Use `.filter_map()` and `.collect()` in your parsing code, instead of all the matches and `continue`s in a for loop. You can use `.ok()?` to early-return with `None` on errors (sketch below).
- And finally, since you're micro-benchmarking, try `speedy` or `borsh` instead of `bincode`, unless you need the `serde` compat for some reason (sketch at the end).
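For the path handling, here's a minimal sketch of what I mean, assuming the `directories` crate and a hypothetical `--config` override; the crate/app names are just illustrative:

```rust
use std::path::PathBuf;
use directories::ProjectDirs;

/// Resolve the config file path: explicit override first, then the
/// XDG config dir, then a last-resort fallback in the current dir.
fn config_path(cli_override: Option<PathBuf>) -> PathBuf {
    if let Some(p) = cli_override {
        return p;
    }
    // ProjectDirs follows the XDG base-dir spec on Linux (and platform
    // conventions on macOS/Windows), so $XDG_CONFIG_HOME is respected.
    if let Some(dirs) = ProjectDirs::from("org", "example", "myapp") {
        return dirs.config_dir().join("config.toml");
    }
    // Fallback when no home directory can be determined.
    PathBuf::from("config.toml")
}
```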
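And for the parsing bullets, a sketch of the `.filter_map()` / `.ok()?` pattern (the line format and field names are made up):

```rust
/// A record where the non-essential field is left optional.
struct Record {
    id: u32,
    label: Option<String>,
}

/// Parse "id,label" lines; anything malformed is dropped early
/// instead of panicking or cluttering a for loop with `continue`s.
fn parse(input: &str) -> Vec<Record> {
    input
        .lines()
        .filter_map(|line| {
            let mut parts = line.splitn(2, ',');
            // `.ok()?` turns a parse error into an early `None`,
            // which `filter_map` then discards.
            let id = parts.next()?.trim().parse::<u32>().ok()?;
            let label = parts.next().map(|s| s.trim().to_owned());
            Some(Record { id, label })
        })
        .collect()
}
```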
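If you try `borsh`, the derive-based API looks roughly like this (a sketch only; the helper functions have moved around between versions, so check the current crate docs):

```rust
use borsh::{BorshDeserialize, BorshSerialize};

#[derive(BorshSerialize, BorshDeserialize, Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() -> std::io::Result<()> {
    let p = Point { x: 1, y: 2 };
    // Serialize to a Vec<u8> and back; both calls return io::Result.
    let bytes = borsh::to_vec(&p)?;
    let back = Point::try_from_slice(&bytes)?;
    assert_eq!(p, back);
    Ok(())
}
```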
With GPU rendering, you should learn about GPU processing and memory usage too, not that it would matter much for such a use-case.
`nvtop` is nice for displaying all that info (it's not nvidia-specific).

Also, % CPU usage is not a good metric, especially when most people forget to set CPU frequencies to fixed values before measuring. And heterogeneous architectures (e.g. big.LITTLE) make such numbers meaningless anyway (without additional context). But again, none of this really matters in this use-case.