At the risk of sounding like a crustacean cult member, I really hope the skeptics read this post. No hype, no drama, just slow, steady, incremental high-performance improvement in a crucially important area without any feet blown off.
I feel bad for other/new system languages, you get so much for the steeper learning curve with Rust (cult membership optional). And I think it’s genuinely difficult to reproduce Rust’s feature set.
amelius 25 minutes ago [-]
Rust was made for this kind of thing.
Too bad there are those who think they should use Rust to write GUIs and end-user applications, which is where Rust ergonomics breaks down fast.
landl0rd 9 hours ago [-]
I stole these graphs for a branch of that thread ffmpeg started on Twitter, the one where they were flaming rav1d vs. dav1d performance to attack Rust generally.
I don't like the RiiR cult. I do like smart use of a safer language and think long-term it can get better than C++ with the right work.
jaas 6 hours ago [-]
I'm the person who is running the rav1d bounty, also involved with the Rustls project.
In many (most?) situations I think Rust is effectively as fast as C, but it's not a given. They're close enough that depending on the situation, one can be faster than the other.
If you told me I had to make a larger and more complex piece of code fast though, I'd pick Rust. Because of the rules that the Rust compiler enforces, it's easier to have confidence in the correctness of your code and that really frees you up when you're dealing with multi-threaded or algorithmic complexity. For example, you can be more confident about how little locking you can get away with, what the minimum amount of time is that you need to keep some memory around, or what exact state is possible and thus needs to be handled at any given time.
There are some things that make rav1d performance particularly challenging. For example - unlike Rustls, which was written from the start in Rust, rav1d is largely the result of C to Rust translation. This means the code is mostly not idiomatic Rust, and the Rust compiler is generally more optimized for idiomatic Rust. How much this particular issue contributes to the performance gap I don't know, but it's a suspect and to the extent that it's worth pursuing, one would probably want to figure out where being idiomatic matters most instead of blanket rewriting everything.
LoganDark 6 hours ago [-]
> I don't like the RiiR cult.
For certain types of people, Rust has a way of just feeling better to use. After learning Rust I just can't imagine choosing to use C or C++ for a future project ever again. Rust is just too good.
WJW 4 hours ago [-]
Choosing Rust for new projects is very different than trying to rewrite an existing codebase with thousands of hours poured into it. Or worse, demanding that someone else do that for you for free.
josephg 2 hours ago [-]
People keep saying this. But in my experience, directly porting code from one language to another is much easier than people think. I can do maybe 500 lines per day, depending on the language similarity. ChatGPT is great for doing a first pass - though it will add small, subtle bugs in the process.
I’m not arguing that we should rewrite everything in Rust. C and C++ are fine languages. But sometimes it really is better to just have your code in a different language rather than deal with FFI. For example, I have some collaborative text editing code in Rust, and recently I ported the whole thing to TypeScript, because it’s just straight-out easier to use in a browser that way, compared to dealing with a wasm bundle.
I think the big mistake people make when rewriting into a different language is doing a refactor at the same time. This is the wrong way to go about it. First, directly port the code you have. Then port your tests and get them passing in the new language. Then refactor. Obviously there are always some language differences - but ideally you can confine the differences within modules, and keep most of the module boundaries intact through the rewrite.

You can also refactor before translating your code. If I were porting something to Rust that wouldn’t pass the borrow checker, this is probably what I’d do. First refactor everything to make the borrow checker happy - so, for example, make sure your structs / classes form a strict tree. Then get tests passing. Then translate between languages and clean up.
If you approach it like that, rewriting code is a largely mechanical process. It really takes a lot less time than people think to translate code, since you don’t actually have to understand every single line to do it correctly. So the time taken scales based on the number of lines of code. Not the number of hours it took to write! And then, if you want to refactor your new program at the end, go for it.
IshKebab 22 minutes ago [-]
Yeah it depends on the project I think. There have been quite a lot of successful rewrites of large projects into Rust. For example Fish and svgr.
sshine 2 hours ago [-]
Yes, those are different experiences.
But assuming you're experienced with Rust, porting something is actually both easy and enjoyable.
Porting something as your first Rust project is going to end up like a dirty hybrid.
It's an excellent learning opportunity, but it will not leave a good showcase of Rust.
laerus 4 hours ago [-]
Rewriting could also make sense if there is a chance to improve the architecture based on the experience with the existing codebase. Something that would otherwise be impossible to even consider.
rastignack 6 hours ago [-]
Isn’t Rustls a mix of C++, assembly, and Rust?
I think it’s not a good indication of the success of the language.
jaas 6 hours ago [-]
In Rustls, TLS is implemented entirely in Rust. It uses aws-lc-rs [1] for cryptography, and aws-lc-rs uses assembly for core cryptographic routines, which are wrapped in some C code, which then exposes a Rust API which Rustls uses.
It's not practical right now to write high performance cryptographic code in a secure way (e.g. without side channels) in anything other than assembly.

[1] https://github.com/aws/aws-lc-rs
From the AWS-LC README (https://github.com/aws/aws-lc):

> A portable C implementation of all algorithms is included and optimized assembly implementations of select algorithms is included for some x86 and Arm CPUs.
It also states that it kind of forked BoringSSL and OpenSSL.
You’re right though that most of the memory safety attack surface has been replaced with Rust.
jaas 3 hours ago [-]
I think what you're quoting says what I was saying - assembly with some C around it, wrapped in a Rust API. At least for the "select" (read: most important) algorithms. The details of the C/asm boundary in aws-lc are hard to summarize.
Ideally the C would eventually move to Rust, but I think aws-lc needs to work in many contexts where a Rust toolchain is not available so it might be a while.
Graviola is an interesting option under development, in part because it gets rid of the C: https://github.com/ctz/graviola
I wonder if it would be possible to implement a safe_asm macro in Rust?
Even if unrestricted asm is inherently unsafe, there's got to be a subset of instructions and operand types you can guarantee is safe if called a certain way.
rastignack 5 hours ago [-]
In Rust with some C code, OK.
How is the DER format parsed for example ?
Regarding crypto operations, I know that, as of now, for Rust projects assembly is a must to get constant-time guarantees.
Maybe there could be a way, with intrinsics and a constant-time marker similar to unsafe, to use pure Rust.
In the meantime I think there still is too much C code.
It’s a great step in the good direction by the way.
jaas 5 hours ago [-]
In Rustls, DER parsing, and certificate parsing and validation in general, are done in Rust: https://github.com/rustls/webpki
I wish they included details on how they ran these benchmarks, like they did last year [1].
I'd like to take a look and try to understand why there's such a big difference in handshake performance. I wouldn't expect single threaded handshake performance to vary so much between stacks... it should be mostly limited by crypto operations. Last time, they did say something about having a cpu optimization for handshaking that the other stack might not have, but this is on a different platform and they didn't mention that.
I'd also be interested in seeing what it looks like with OpenSSL 1.1.1, given the recent article from HAProxy about difficulties with OpenSSL 3 [2].

[1] https://www.memorysafety.org/blog/rustls-performance-outperf...
[2] https://www.haproxy.com/blog/state-of-ssl-stacks
https://rustls.dev/perf/2024-11-28-threading/
Out of curiosity, rustls uses aws-lc-rs which in turn uses aws-lc, which is in turn "based on code from the Google BoringSSL project and the OpenSSL project."
You're trying to get rid of OpenSSL, but you're actually relying on OpenSSL code. Sounds a bit iffy imo. Can somebody provide a bit more depth here?
Or is it just the OpenSSL TLS API that is hopelessly confusing and bug inducing? I can imagine that the crypto primitives in OpenSSL are very solid.
bastawhiz 12 hours ago [-]
I'm not a Rust guy and I probably won't be any time soon, but Rustls is such an exciting project in my eyes. Projects like BoringSSL are cool and noble in their intentions, but having something that's not just a hygienic codebase but an implicitly safer one feels deeply satisfying. I'm eagerly looking forward to this finding its way into production use cases.
PoignardAzur 3 hours ago [-]
That name is confusing. Reading the headline, I first thought it was about the deprecated language server and was very confused.
jjice 58 minutes ago [-]
That was rls (https://github.com/rust-lang/rls), but I can see what you mean from the name of the package once I looked at it again.
TLDR: OpenSSL's days seem to be coming to an end, but the Rustls C bindings are not production ready yet.
jaas 6 hours ago [-]
Rustls has two C APIs.
The first is C bindings for the native Rustls API. This should work great for anyone who wants to use Rustls from C, but it means writing to the Rustls API.
The second is C bindings that provide OpenSSL compatibility. This only supports a subset of the OpenSSL API (enough for Nginx but not yet HAProxy, for example), so not everything that uses OpenSSL will work with the Rustls OpenSSL compatibility layer yet. We are actively improving the amount of OpenSSL API surface that we support.
Twirrim 11 hours ago [-]
Would love to see compliance and accreditation coming through for native Rustls, like FIPS. That'll unlock a large potential market, which can in turn unlock other markets.
You can get FIPS by using some of the third party back-end integration via aws-lc-rs.
jaas 6 hours ago [-]
The default cryptographic back-end for Rustls, aws-lc-rs, is FIPS compliant and integrated in a FIPS-compliant way so it's easy to get FIPS compliance with Rustls.
nyanpasu64 12 hours ago [-]
I wonder if replacing the encryption key every 6 hours would be a good use case for crossbeam-epoch, though this may be premature optimization, and that library requires writing unsafe code as far as I can tell.
toast0 8 hours ago [-]
I think it is worth optimizing; there's a noticeable, but small, dip in handshakes per second going from 1 to 2 threads.
If I were to optimize it, and the cycling rate is fixed and long, I would put the global storage behind a simple Mutex, as something like (Expiration, oldval, newval). On use, check a thread-local copy and use it if it's not expired; otherwise lock the global. If the global is not expired, copy it to the thread-local. If the global is expired, generate a new one, saving the old value so that the previous generation's tickets are still valid.
You can use a simple Mutex, because contention is limited to the expiration window. You could generate a new ticket secret outside the lock to reduce the time spent while locked, at the expense of generating a ticket secret that's immediately discarded for each thread except the winning thread. Not a huge difference either way, unless you cycle tickets very frequently, or run a very large number of threads.
AIUI epoch GC doesn't require Arc's atomic increment/decrement operations which can be slower than naive loads (https://codeberg.org/nyanpasu64/cachebash), but at this point we're getting into nano-optimization territory.
dochtman 6 hours ago [-]
We tried this when we were benchmarking and it was not significantly better than the Arc<RwLock<_>> that we're using now.
thevivekshukla 7 hours ago [-]
Wow this is fast.
However, I tried rustls with Redis for my axum application, and for some reason it was not working, even though my self-signed CA certificate was installed in my system's local CA store.
After a lot of trying I gave up, then thought about using native-tls, and it worked on the first go.
dochtman 6 hours ago [-]
Did you file an issue or ask in the rustls Discord channel? We're happy to help.
encom 3 hours ago [-]
>Discord
whizzter 7 hours ago [-]
The irony is that, due to CA stores (and how verification is handled), it's usually a tad more fickle to replace TLS clients than TLS servers.
Was there no way to provide a custom CA store (that only included your self signed one)?
koakuma-chan 13 hours ago [-]
It's blazingly fast.