The fact that a request can happily get a mutable reference to a shared context felt suspicious to me, so I ran a quick test, and it seems like the whole server is single-threaded:
$ cat src/main.rs
use feather::{App, AppContext, MiddlewareResult, Request, Response};
use std::{thread, time};

fn main() {
    let mut app = App::new();
    app.get(
        "/",
        |_req: &mut Request, res: &mut Response, _ctx: &mut AppContext| {
            res.send_text("Hello, world!\n");
            thread::sleep(time::Duration::from_secs(2));
            MiddlewareResult::Next
        },
    );
    app.listen("127.0.0.1:3000");
}
$ cargo run -q &
[1] 119407
Feather Listening on : http://127.0.0.1:3000
$ curl localhost:3000 & curl localhost:3000 & time wait -n && time wait -n
[2] 119435
[3] 119436
Hello, world!
[2]- Done curl localhost:3000
real 2.008s
Hello, world!
[3]+ Done curl localhost:3000
real 2.001s
That is: when the request handler takes 2 seconds and you fire two requests simultaneously, one of them returns in 2 seconds, but the other takes 4 seconds, because it has to wait for the first request to finish before it can begin.

It feels like this has to be the behavior implied by the API, because if two threads each called `ctx.get_mut_state::<T>()` to get a `&mut T` reference to the same state value, only one of those references could be live at once.
It doesn't quite seem fair to call this "designed for Rust’s performance and safety". One of the main goals of Rust is to facilitate safe concurrency. But this library just throws any hope of concurrency away altogether.
Yes, if you want a mature web framework that doesn't force you to use async then Rocket already exists, which is multithreaded and quite performant - and now allows you to use async if you want to.
Feather seems fundamentally single-threaded and requires more boilerplate for something pretty simple. So I'm not sure the claim about developer experience holds up to scrutiny here either.
Reading the latest stable documentation [0], it appears that you have to use async?
[0]: https://rocket.rs/guide/v0.5/upgrading/#stable-and-async-sup...
Sorry, so you can use synchronous functions for writing middleware and routes, but the Rocket core does use Tokio.
Not all async Rust web frameworks let you do away with async and futures entirely in your business logic.
So the caveat is you need to call `spawn_blocking` with synchronous functions. I see.
With a framework like Axum, yes, but with Rocket, no - you can just declare synchronous functions and pass them as a route handler, e.g.: https://github.com/Qwuke/recurse-ring/blob/main/src/main.rs#...
If you're averse to touching async fn's or tokio APIs _at all_, it's nice devex.
I noticed the same thing. I would have expected an Arc<Mutex<…>> or something similar for safe concurrency. Not sure what value is delivered by a single threaded, blocking web server.
This framework does thread per connection, but all requests go into a global request queue, and when you call `listen`, it enters an infinite loop which pops requests from the queue and processes them synchronously (one by one).
It sounds like this framework is susceptible to head-of-line blocking. In my experience, that significantly reduces the utility of any application written with this framework. What's the benefit being delivered?
No benefit; this appears to be a student's pet project. The submitter has 179k karma and they aren't this framework's author. Either the submitter is unfamiliar with Rust and mistakenly thought this was the real deal, or there's some kind of karma abuse/farming going on.
After creating two non-trivial desktop apps with the Rust-based Tauri framework, I would not consider using a Rust web framework. Web development is too dynamic and messy in my opinion, and Rust slows you down too much.
It’s for the same reason that some people are leaving Rust behind when it comes to game development after the initial excitement fades and problems start.
Now I do understand that there are cases where this might be viable (for example, if you already have a team of experienced Rust developers), but I think in the majority of cases you would not want to use Rust for web development.
You're talking about a very different type of application than what the thread is discussing (server development, not web development overall - i.e. not Tauri/Leptos/etc.).
This is a backend framework not a frontend/fullstack one like Tauri.
Not a Rust expert by any means, but what does the framework gain by having no async? Wouldn't most libraries use async anyway, for connecting to queues, databases, and external services via HTTP? It's hard to imagine a backend that won't need async at some point, so I wonder if it is even worth trying… (please do let me know if it is)
Async is a language feature that enables scalability, but an alternative approach is simply to spawn a bunch of threads and let them block while waiting for I/O. That is the approach this framework takes.
Async Rust has significant mental overhead, especially for newcomers. You quickly run into things like the `Pin` marker, the Tokio runtime, and complex ownership-related compiler errors; basically, every "normal" component of the language gets some additional complexity due to async.
If you're new to Rust and you want to "just make a web app", a look at the async Rust landscape could be a turnoff. I speak from experience, having started a couple of Rust projects in Python/C++ teams. After writing Rust for 3+ years I can navigate async concepts without trouble, but for someone coming from the usual popular languages (Python/C#/Java/C++), there are simply too many new things to learn when jumping straight into an async Rust codebase.
IMO this framework is going in a good direction, assuming that it will only be used for small/educational projects.
As for the async Rust landscape, things are improving every year; IMO we're 5-10 years away from tooling that will feel intuitive to complete newcomers.
For a large application, maybe, but sometimes you have a very small-scope application that won't otherwise use async, so you value binary size, compile time, etc. over a theoretical XXX ops/sec.
It seems to go more in the direction of easy use and quick setup of small endpoints, and if you need more, you could integrate the Tokio runtime (or any other async runtime) on top of it.
In most cases, you have both an async and blocking/sync approach, sometimes even in the same library.
Could be confused with Feathers, a Javascript web framework.
Do we know if a Rust webserver can provide just more pure raw metal performance? I believe I've heard the case to be true for Go. What use case do we have for this, high performance chat/game servers?
Rust typically beats Go web frameworks on the TechEmpower performance benchmarks, if you're curious where languages typically rank in terms of web framework performance. https://www.techempower.com/benchmarks/#section=data-r23
What does "pure raw metal" performance mean? Go has a garbage collector, which I usually hear causes GC pauses that negatively affect performance compared to C/C++/Rust.
I wouldn't cite TechEmpower, since they only benchmark HTTP/1.1.
> pure raw metal
It means exactly what it means. If I get a pure bare metal server, will that computer simply handle more requests than a Go or a Node server (assuming the same single-threaded paradigm)? That's the only reason I'd ever consider moving away from the ergonomics of something like Node or Python, if my bare metal server can save me money by simply handling more requests with less cpu/memory.
Edit:
Thanks for that link though, just got turned onto this:
https://github.com/uNetworking/uWebSockets/blob/master/misc/...
> That's the only reason I'd ever consider moving away from the ergonomics of something like Node or Python, if my bare metal server can save me money by simply handling more requests with less cpu/memory.
… but what does the "bare metal server" have to do with it? Presumably, Occam's Razor would suggest that a Rust framework that outperforms Go on a VM would likely continue to outperform it on bare metal. The bare metal machine might outperform a VM, but these are mostly two orthogonal, unrelated axes: bare metal vs. VM, and a Rust framework vs. a Go or Node framework…
It's a base case. We can use VM if you like, I just went further. We can go even further, will it simply be faster on my laptop compared to the others? I have a real use case for running a highly performant server locally so as not to hamper the user with extra resource usage.
Yes, it will handle more requests than Node or Go.
What is DX?
Developer experience, like how nice it is to work with
I see, thanks
Developer experience
Thank you!
I'm curious, in this example, what does MiddlewareResult::Next do?
Given my lack of experience, I'm sure it's needed; it's just unclear to me what purpose it would serve in a server app.

You're right, it doesn't really seem necessary, and it makes the route's responses end up as side effects rather than part of the return type of the route functions.
Most web frameworks in Rust don't make responses a side effect; they keep the response as the return type, since that's better devex and much less boilerplate.
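To illustrate the contrast, here's a sketch with hypothetical signatures (neither `Response` nor `MiddlewareResult` here is any real framework's API): style A mutates a response out-parameter and returns a control-flow enum on every route, while style B simply returns the response.

```rust
// Control-flow enum in the style of the example above.
enum MiddlewareResult {
    Next,
}

struct Response {
    body: String,
}

// Style A: the response is a side effect, plus boilerplate on every route.
fn side_effect_handler(res: &mut Response) -> MiddlewareResult {
    res.body = "ok".to_string();
    MiddlewareResult::Next
}

// Style B: the response *is* the return value.
fn return_value_handler() -> Response {
    Response { body: "ok".to_string() }
}

fn main() {
    let mut res = Response { body: String::new() };
    side_effect_handler(&mut res);
    assert_eq!(res.body, "ok");
    assert_eq!(return_value_handler().body, "ok");
}
```

Both produce the same response; style B just makes the handler's output visible in its type.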
Why does it not look lightweight? I think you might be seeing "middleware" and thinking that it enables a load of middleware by default, which is unlikely to be the case?
When you have 20 routes, each terminated with a redundant `res.json(success);\n MiddlewareResult::Next`, I think you can imagine why someone might see it as not lightweight in terms of unnecessary boilerplate - which most Rust web frameworks, async or not, don't require you to write out.
I don't think a simple enum return value really counts it out as lightweight.
^ This.
I am mostly naive to Rust and web server frameworks, so it's a naive thought that may be completely contraindicated by other issues, but I don't expect to see meaningless/repetitive code in the advertisement for a framework that bills itself as lightweight.