Why is Deno that slow? #15121
-
Many systems developed in Rust are very fast and designed for performance. Reference: benchmarks on https://bun.sh/
-
I'm not part of the Deno team, but I can give my view on a few things.

FFI

Deno's FFI has recently been changing quite a bit, moving to a more direct binding. The numbers used on Bun's website (around 350 ns per FFI call) seem in line with what the FFI performance was after my semi-recent addition of FFI callbacks, during which I also ended up adding a lot of JS-side overhead. A later PR by the core team removed that overhead and further made the previously indirect FFI call binding a direct one, meaning that the current FFI performance is an order of magnitude better: around 33 ns per no-op FFI call on my machine. This brings Deno's FFI more in line with Node's N-API (33 ns/op comes to roughly 30,000,000 ops per second) but falls a bit short, which is expected: while Node's N-API is not quite a direct FFI call API, it still gets to benefit from customized function bindings for each FFI function. This means that e.g. a no-op N-API call doesn't need to check the passed-in parameters (as I understand it, anyway); the bindings already know the parameters that the C function takes, which in this case is none.

Bun's FFI is similar to Deno's in that it is bound from the JS side, instead of the JS side loading up ready-built bindings like Node does. What Bun does differently from Deno, however, is that it JIT-compiles a custom binding function for each FFI function. This means that for the no-op FFI call, Bun's binding layer drops down to nearly nothing: it's just the JS engine calling a C function, which immediately turns around and calls the C FFI no-op function, nothing more. Not even the return value needs to be handled. This is a great way to lower the overhead of FFI calls. Deno currently has a draft PR open implementing the same kind of system, so it may well be that Deno's FFI will soonish reach the same heights as Bun's.
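Figures like "350 ns per call" or "33 ns per call" come from micro-benchmarks that time a tight loop of no-op calls and divide the elapsed time by the iteration count. Here's a rough sketch of that measurement, using a plain JS no-op as a stand-in for the FFI symbol (a real measurement would obtain `noop` from `Deno.dlopen(...)` and a native library instead):

```javascript
// Sketch of how per-call overhead figures (e.g. "33 ns per no-op FFI call")
// are typically measured: time a tight loop, divide by the iteration count.
// A plain JS no-op stands in for the dlopen'd FFI symbol here.
function noop() {}

function nsPerCall(fn, iterations = 10_000_000) {
  // Warm up so the JIT has compiled the hot loop before we measure.
  for (let i = 0; i < 100_000; i++) fn();
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return elapsedNs / iterations;
}

console.log(`${nsPerCall(noop).toFixed(1)} ns/op`);
```

Note that a loop like this measures call overhead only, and JIT inlining can make a pure-JS no-op look faster than any FFI call ever could; the interesting number is the *difference* once a real foreign function is behind `fn`.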
With the V8 engine there's even a chance to go beyond what JavaScriptCore enables, using V8 Fast API Calls (though I may be wrong and maybe JSC also has something similar). Look forward to that.

SQLite

While this benchmark is fair in the sense of asking "how quickly can these different runtimes load a huge SQLite database", it isn't exactly fair if the question is asked in a more limited fashion, e.g. "using this given API, how quickly can these different runtimes load a huge SQLite database". Bun has a big leg up here: they implement their own custom SQLite binding directly in the runtime, which gives them a great advantage in optimizing the performance of the binding. The Node package used is a native binding. Finally, the Deno library used is a WebAssembly-based SQLite library. While WebAssembly is fast, it cannot quite measure up to pure native code. Of course, between Deno and Wasm there is also a binding layer, just as there is between the others, so the Deno library doesn't get any magical wins here. So while the benchmark is definitely accurate for usage purposes, it's not really measuring the performance of Bun vs Node vs Deno, but rather different libraries doing the same thing in different runtimes.

HTTP

With HTTP performance I have less understanding. I could point to Deno's ops (calls from JS code to code running on the Rust side) using a general-purpose low-overhead binding layer, whereas Bun, I think, may have a more hand-written binding layer between JS calls and the corresponding Zig functions. However, I think a big part of the reason for this performance result is simply that they've done a great job writing the HTTP server in Bun. I guess I'm also somewhat bound to note that there's an ongoing effort to push the HTTP server performance to even greater heights in Deno.
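The Wasm binding-layer point can be illustrated without any SQLite library at all. The snippet below hand-assembles the smallest possible Wasm module exporting a no-op function `f` and calls it across the JS↔Wasm boundary; every call to a Wasm-based SQLite library pays at least this crossing, which a binding built directly into the runtime does not. The byte layout is just the standard minimal module from the Wasm binary format; any timing numbers are machine-dependent:

```javascript
// A minimal hand-assembled Wasm module exporting a no-op function "f".
// Calling it crosses the JS <-> Wasm boundary: the binding layer a
// Wasm-based SQLite library pays on every call, unlike a native binding.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // "\0asm", version 1
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,             // type section: () -> ()
  0x03, 0x02, 0x01, 0x00,                         // function section: 1 func, type 0
  0x07, 0x05, 0x01, 0x01, 0x66, 0x00, 0x00,       // export section: "f" -> func 0
  0x0a, 0x04, 0x01, 0x02, 0x00, 0x0b,             // code section: empty body
]);

const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));

// Time many boundary crossings; the function body itself does nothing.
const N = 1_000_000;
const start = process.hrtime.bigint();
for (let i = 0; i < N; i++) exports.f();
const ns = Number(process.hrtime.bigint() - start) / N;
console.log(`${ns.toFixed(1)} ns per JS->Wasm no-op call`);
```

Modern engines optimize this crossing heavily, so the per-call cost is small, but it is nonzero and it compounds with argument and result marshalling once the function does real work.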
-
The benchmark is not representative of many real-world traffic scenarios. There are certain HTTP benchmarks where Deno is faster than Node, where Node is faster than Bun, or where Bun is fastest. You cannot compare the performance of two systems through a single HTTP benchmark. The benchmark does not inspect request headers, request method, request URL, or any number of other fields which are important in real traffic scenarios. It doesn't do routing, static file serving, etc. Additionally, these are just very different HTTP servers: Deno supports h2 and TLS on its HTTP server, while Bun supports neither. We can also do automatic response body compression, full-duplex body streaming, etc. Jarred is doing very interesting work with Bun, and that is pushing us to improve performance, but comparing system performance through a single isolated benchmark is not helpful. If you are just looking for synthetic benchmarks where Deno is faster, those exist. (They are not useful at all, though.)
-
Well, for one, Deno is losing in HTTP performance while consuming 183% CPU time, while Bun is winning while consuming 99% CPU time, so you're using almost twice the juice for half the journey. That's a pretty huge difference not accounted for. Normalize the results by CPU time and you have a clear winner in Bun. Secondly, just yesterday we saw Bun add both TLS and routing support (canary), where, as I have tried to point out, the routing performance is already accounted for, since uWS always does routing. So there is no loss to be further accounted for there. I'm sure we can get those 29 bytes of Date added to the response (and yes, I agree that should have been done before you pointed it out).
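The normalization being argued for here is simple arithmetic: divide measured throughput by the CPU time consumed to get requests per CPU-second. The thread only gives the CPU figures (183% vs 99%), so the request rates below are made-up placeholders; only the shape of the calculation is meaningful:

```javascript
// Requests handled per CPU-second = throughput / CPU cores consumed.
// The CPU figures (1.83 and 0.99 cores) are from the thread; the req/s
// numbers are hypothetical placeholders for illustration only.
function perCpuSecond(reqPerSec, cpuCoresUsed) {
  return reqPerSec / cpuCoresUsed;
}

const deno = perCpuSecond(100_000, 1.83); // hypothetical 100k req/s at 183% CPU
const bun = perCpuSecond(90_000, 0.99);   // hypothetical 90k req/s at 99% CPU

console.log(`Deno: ${Math.round(deno)} req per CPU-second`);
console.log(`Bun:  ${Math.round(bun)} req per CPU-second`);
```

With these placeholder rates, the server with lower raw throughput comes out ahead once CPU time is factored in, which is the argument being made: a server saturating two cores should not be compared head-to-head with one saturating a single core.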
-
There is no point in debating a "clear winner". Locking the discussion so it stays on the original topic. Please do report perf issues in the issue tracker.