- Rust on arm64 performed CPU-intensive tasks up to 5x faster than x86
- Arm64 reduces cold start latency by up to 24% across all runtimes
- Python 3.11 on arm64 outperforms newer versions in memory-intensive workloads
AWS Lambda benchmarking this year shows that the arm64 architecture consistently outperforms x86 on most workloads.
Testing covered CPU-intensive, memory-intensive, and lightweight workloads on Node.js, Python, and Rust runtimes.
For CPU-bound tasks, Rust on arm64 performed SHA-256 hash loops 4-5x faster than Rust x86 once architecture-specific assembly optimizations came into play.
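As a rough illustration of that kind of CPU-bound test, the sketch below repeatedly SHA-256-hashes a buffer and times the loop; the widely used `sha2` crate can dispatch to hardware SHA-256 instructions (SHA-NI on x86-64, the ARMv8 crypto extensions on arm64) depending on crate version and target. The crate choice, buffer size, and iteration count are assumptions for illustration, not details taken from the benchmark itself.

```rust
// Minimal sketch of a CPU-bound hash-loop benchmark (not the article's actual
// harness). Requires `sha2 = "0.10"` in Cargo.toml.
use sha2::{Digest, Sha256};
use std::time::Instant;

fn main() {
    let payload = vec![0u8; 64 * 1024]; // 64 KiB of dummy input (assumed size)
    let iterations = 100_000;

    let start = Instant::now();
    let mut last_byte = 0u8;
    for _ in 0..iterations {
        let digest = Sha256::digest(&payload);
        last_byte = digest[0]; // keep a result alive so the loop isn't optimized away
    }
    let elapsed = start.elapsed();

    println!(
        "hashed {} x {} bytes in {:?} (last digest byte: {:02x})",
        iterations,
        payload.len(),
        elapsed,
        last_byte
    );
}
```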
Python 3.11 on arm64 also outperformed newer Python versions, while Node.js 22 ran significantly faster than Node.js 20 on x86.
These results show that arm64 not only improves raw compute performance but also stays consistent across different memory configurations.
Cold start and warm start efficiency
Cold start latency plays a crucial role in serverless applications, and arm64 offers clear improvements over x86.
Across all runs, arm64 delivered cold starts that were 13-24% faster.
Rust, in particular, recorded almost imperceptible cold start times of just 16ms, making it well suited for latency-sensitive applications.
Warm start performance also favored arm64, and memory-intensive workloads benefited from the architecture's more efficient handling of larger memory allocations.
Python and Node.js showed somewhat more variability, but arm64's advantage held.
These performance improvements are most pronounced in production environments where frequent cold starts occur.
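For context on what such a latency-sensitive Rust function looks like, here is a minimal handler sketch using the `lambda_runtime`, `tokio`, and `serde_json` crates; the handler name and payload shape are placeholders, and building for arm64 is typically done with a tool such as cargo-lambda (for example, `cargo lambda build --release --arm64`). This is a generic illustration, not the code used in the benchmark.

```rust
// Minimal arm64-friendly Lambda handler sketch (echoes the incoming JSON
// payload). Illustrative only; not the benchmark's function.
use lambda_runtime::{run, service_fn, Error, LambdaEvent};
use serde_json::{json, Value};

async fn handler(event: LambdaEvent<Value>) -> Result<Value, Error> {
    // Echo the request back; a real function would do its work here.
    Ok(json!({ "echo": event.payload }))
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // The runtime loop keeps the process warm between invocations.
    run(service_fn(handler)).await
}
```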
Cost analysis shows that arm64 offers on average 30% lower compute costs than x86.
For memory-intensive workloads, cost savings were up to 42%, especially for Node.js and Rust.
Lightweight workloads, which rely heavily on I/O latency rather than raw compute, showed minimal performance differences between architectures.
This suggests that, for such workloads, the choice of architecture comes down to cost optimization rather than execution time.
For CPU- and memory-intensive workloads, arm64 delivered a better price-to-performance ratio, confirming its value in production deployments.
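As a back-of-the-envelope illustration of how those savings compound: Lambda bills per GB-second, the arm64 rate is lower than the x86 rate, and faster execution shrinks the billed duration as well. The rates and the 20% speedup below are illustrative assumptions rather than figures from the tests; check current regional pricing before drawing conclusions.

```rust
// Rough per-invocation cost comparison: Lambda cost ≈ memory (GB) × billed
// duration (s) × per-GB-second rate. Rates and durations below are
// illustrative assumptions, not benchmark data.
fn invocation_cost(memory_gb: f64, duration_s: f64, rate_per_gb_s: f64) -> f64 {
    memory_gb * duration_s * rate_per_gb_s
}

fn main() {
    let memory_gb = 1.0;

    // Assume the same task finishes 20% faster on arm64 and the arm64
    // per-GB-second rate is ~20% lower than the x86 rate.
    let x86_cost = invocation_cost(memory_gb, 1.00, 0.0000166667);
    let arm_cost = invocation_cost(memory_gb, 0.80, 0.0000133334);

    println!("x86:   ${:.10} per invocation", x86_cost);
    println!("arm64: ${:.10} per invocation", arm_cost);
    println!("saving: {:.0}%", (1.0 - arm_cost / x86_cost) * 100.0);
}
```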
These tests indicate that arm64 should be the default CPU target for most Lambda workloads, unless specific library compatibility issues arise.
Rust workloads on arm64 maximize both performance and cost savings, while Python 3.11 and Node.js 22 provide solid alternatives for other use cases.
Organizations that rely on Lambda for their enterprise-wide applications or run multiple functions in a single data center will likely see marked efficiency improvements.
On the desktop side, the results suggest that developers compiling CPU-intensive workloads locally can also benefit from native arm64 builds.
Although these tests are extensive, individual workloads and dependency configurations may lead to different results. Further testing is therefore advised before large-scale adoption.
By Chris Ebert