The killer app for Rust microservices is finally here! A Rust RPC framework that actually feels like an RPC framework

Mayer
7 min read · Feb 18, 2024


krpc-rust: A Rust RPC Framework That Feels Like an RPC Framework

For people who are just learning Rust or who don’t know much about Rust RPC frameworks, this might seem like another clickbait title. However, anyone who has studied this area knows that there is still a big gap between the mainstream Rust RPC frameworks and what an RPC framework is actually supposed to be. Let’s first look at how this is done in Java, using the Java version of this project, krpc-java, as an example.

krpc

An RPC framework built on Netty’s single-channel multiplexing network model, with Spring Boot startup support and ZooKeeper and Nacos registry centers.

Starting with Spring Boot

API Example

public interface ExampeService {
    ResponseDTO doRun(RequestDTO requestDTO);
}

Consumer

Configure application.properties

krpc.registeredPath = nacos://your.nacos.ip:8848

@KrpcResource(version = "1.0.1", timeout = 1000)
private ExampeService exampeService;

Provider

Configure application.properties

krpc.registeredPath = nacos://your.nacos.ip:8848
krpc.port = 8082

@KrpcService(version = "1.0.1")
public class ExampeServiceImpl implements ExampeService {

    @Override
    public ResponseDTO doRun(RequestDTO requestDTO) {
        ResponseDTO responseDTO = new ResponseDTO();
        responseDTO.setDate(new Date(requestDTO.getDate().getTime() + (long) requestDTO.getNum() * 60 * 60 * 1000));
        return responseDTO;
    }
}

As we can see, we only need to define an interface, implement it on the server side, and add an annotation to the injected interface field on the client side to make an RPC call. This is possible because Java has a powerful weapon called runtime reflection, which makes it easy to enhance classes at runtime. However, this is also one of Java’s major drawbacks, because runtime reflection slows program execution down. Rust, known for its high performance, naturally has no runtime reflection, which also means it lacks this convenience. So how do the current mainstream Rust RPC frameworks solve this problem? There are two major players on the market: Alibaba’s Dubbo (dubbo-rust) and ByteDance’s Volo. Let’s first take a look at how Dubbo does it.

The Dubbo Quick Start chapter introduces how to use dubbo-rust. The workflow is mainly divided into three parts:

1. Define the interface. Dubbo currently supports many protocols, including gRPC; the Rust version uses Protobuf to define the interface.

2. Generate the related Rust code from the definition file. Since Rust has no runtime reflection, the client cannot create a client class through a dynamic proxy at call time. Dubbo’s solution is to generate the client call code from the interface definition, so as to “reduce” the user’s usage cost.

3. Write the server-side business logic, and then make RPC calls through the generated client code.

ByteDance’s Volo is based on the same idea: it defines interfaces through an IDL and then generates the calling code with scaffolding scripts. If you are interested, take a look at the Volo-gRPC quick start.

In summary, the approach is: define the interfaces in an IDL, generate the call code and service interfaces with scripts, and then implement the server-side business logic and the client calls on top of the generated code.
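To make the shape of that generated code concrete, here is a hand-written sketch (not the actual output of dubbo-rust or Volo; the names and types are hypothetical) of the kind of client stub such generators typically emit: one struct per service, with one method per interface method that hides the serialization and transport details.

// Hypothetical shape of a generated client stub. A real generator would also
// emit the request/response types and its own transport and error handling.
pub struct GreeterClient {
    endpoint: String,
}

impl GreeterClient {
    pub fn new(endpoint: impl Into<String>) -> Self {
        Self { endpoint: endpoint.into() }
    }

    // One method per IDL-defined interface method: encode the request, send
    // it to `endpoint`, decode the reply.
    pub async fn greet(&self, name: String) -> Result<String, String> {
        // Placeholder body; in the generated code the network call happens here.
        Ok(format!("reply from {} to {}", self.endpoint, name))
    }
}

Whenever the IDL changes, a file like this has to be regenerated, which is exactly the pain point discussed below.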

Looking at the implementations of Dubbo and Volo, especially compared with the Java version, aren’t they still quite far from a real RPC framework? There are still many problems, such as:

1. You have to follow the framework’s RPC interface specifications, such as which error codes to respond with:
#[async_trait]
impl Greeter for GreeterServerImpl {
    async fn greet(
        &self,
        request: Request<GreeterRequest>,
    ) -> Result<Response<GreeterReply>, dubbo::status::Status> {
        println!("GreeterServer::greet {:?}", request.metadata);

        Ok(Response::new(GreeterReply {
            message: "hello, dubbo-rust".to_string(),
        }))
    }
}

2. The request body of a client call must be wrapped in the types generated by the code:

let req = volo_gen::proto_gen::hello::HelloRequest {
    name: FastStr::from_static_str("Volo"),
};

3. The key issue is that if we want to modify the request or response fields, or add new interfaces, we must regenerate all the code through the scripts, on both the client and server sides. Anyone who has worked at a company with software quality requirements knows that code changes must be assessed for their impact scope and then submitted for testing. So does that mean that even small adjustments force QA to re-test everything?

So isn’t the lack of runtime reflection in Rust also a “disadvantage”? At the moment it seems so; even these two big companies could only come up with such an unsatisfying answer. Java has the killer weapon of reflection, which is why it leads the microservice field. So what can Rust use to challenge Java there? That is where Rust’s own nuclear weapon comes in: macros.

Rust macros

Rust macros are often jokingly said to be powerful enough to implement another programming language inside Rust, which shows just how powerful they are. We all know that macros work at compile time, so can’t we simply use macros to implement compile-time “reflection”? In fact, we can. Enough talk; let’s get to the main topic and see how krpc-rust performs RPC calls.
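Before diving in, here is a minimal, hand-written sketch (a toy, not krpc-rust’s actual macro) of the idea: a declarative macro takes a list of method signatures and generates a client struct with those methods at compile time, with no runtime reflection involved.

// A toy macro that expands method signatures into a client struct at compile
// time -- "compile-time reflection" in miniature.
macro_rules! gen_client {
    ($name:ident, $(fn $method:ident($arg:ident : $req:ty) -> $res:ty);* $(;)?) => {
        struct $name;
        impl $name {
            $(
                fn $method(&self, $arg: $req) -> $res {
                    // A real framework would serialize `$arg`, send it over the
                    // network, and deserialize the response here; the toy just
                    // returns a default value.
                    let _ = $arg;
                    println!("calling remote method `{}`", stringify!($method));
                    <$res>::default()
                }
            )*
        }
    };
}

gen_client! {
    DemoClient,
    fn say_hello(req: String) -> String;
    fn add_one(req: i32) -> i32
}

fn main() {
    let cli = DemoClient;
    println!("{:?}", cli.say_hello("hi".to_string()));
    println!("{:?}", cli.add_one(41));
}

krpc-rust’s krpc_server! and krpc_client! macros are of course far more sophisticated, but the principle is the same: the code that Java conjures up with reflection at runtime is generated by the compiler instead.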

Server

use krpc_core::server::KrpcServer;
use krpc_macro::krpc_server;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Default, Debug)]
struct ReqDto {
    str: String,
}

#[derive(Serialize, Deserialize, Default)]
struct ResDto {
    str: String,
}

#[derive(Clone)]
struct TestServer {
    _db: String,
}

krpc_server! {
    TestServer,
    "1.0.0",
    async fn do_run1(&self, res: ReqDto) -> ResDto {
        println!("{:?}", res);
        return ResDto { str: "TestServer say hello 1".to_string() };
    }
    async fn do_run2(&self, res: ReqDto) -> ResDto {
        println!("{:?}", res);
        return ResDto { str: "TestServer say hello 2".to_string() };
    }
}

#[tokio::main(worker_threads = 512)]
async fn main() {
    let server: TestServer = TestServer {
        _db: "iamdatabase".to_string(),
    };
    KrpcServer::build()
        .set_port("8081")
        .add_rpc_server(Box::new(server))
        .run()
        .await;
}

Client

use krpc_core::client::KrpcClient;
use krpc_macro::krpc_client;
use lazy_static::lazy_static;
use serde::{Deserialize, Serialize};

lazy_static! {
    static ref CLI: KrpcClient = KrpcClient::build("http://127.0.0.1:8081".to_string());
}

#[derive(Serialize, Deserialize, Default, Debug)]
struct ReqDto {
    str: String,
}

#[derive(Serialize, Deserialize, Default, Debug)]
struct ResDto {
    str: String,
}

struct TestServer;

krpc_client! {
    CLI,
    TestServer,
    "1.0.0",
    async fn do_run1(&self, res: ReqDto) -> ResDto
    async fn do_run2(&self, res: ReqDto) -> ResDto
}

#[tokio::main(worker_threads = 512)]
async fn main() {
    let client = TestServer;
    let res = client.do_run1(ReqDto { str: "client say hello 1".to_string() }).await;
    println!("{:?}", res);
    let res = client.do_run2(ReqDto { str: "client say hello 2".to_string() }).await;
    println!("{:?}", res);
}

Let’s just run it and see.

Isn’t this what an RPC framework should look like?

If you have read this far, why not give the project a Star to show your support? It is a great project to learn from, and I hope that through it Rust can also grow in the microservice field.

Benefiting from Rust’s zero-cost abstractions, this project also aims for high performance, so let’s do a simple stress test. Since I couldn’t get the open-source version of Dubbo to run after fiddling with it for a while… we will compare against Volo instead.

https://github.com/kwsc98/krpc-rust

The test machine this time is a MacBook Pro M1 (16 GB memory, 512 GB SSD).

The test is four million requests, with 512 asynchronous worker threads on both the client and server sides; since RPC calls are IO-intensive, it helps to open more threads.

The following is the test script:
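The script itself only appears as a screenshot in the original post, so here is a sketch of what such a stress-test client could look like, assuming it reuses the CLI, TestServer and ReqDto definitions from the client example above and follows the parameters described here (four million requests, 512 worker threads):

use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::time::Instant;

// ReqDto, TestServer, CLI and the krpc_client! block are identical to the
// client example above.

#[tokio::main(worker_threads = 512)]
async fn main() {
    let total = 4_000_000usize;
    let counter = Arc::new(AtomicUsize::new(0));
    let start = Instant::now();

    let mut handles = Vec::new();
    for _ in 0..512 {
        let counter = counter.clone();
        handles.push(tokio::spawn(async move {
            let client = TestServer;
            // Each task keeps firing requests until the shared counter
            // reaches the four-million total.
            while counter.fetch_add(1, Ordering::Relaxed) < total {
                let _ = client
                    .do_run1(ReqDto { str: "stress test".to_string() })
                    .await;
            }
        }));
    }
    for handle in handles {
        handle.await.unwrap();
    }
    println!("{} requests finished in {:?}", total, start.elapsed());
}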

krpc-rust test results

Four million requests completed in about 47 seconds on average, which is more than 85,000 QPS!!!

And the memory usage is also relatively stable

I can only say that Rust lives up to its name; Java can only be envious…

Next, let’s look at Volo’s performance.

Uh… there was a problem. At 100 concurrent requests everything worked fine, but during the full stress test the memory and time consumption were abnormally high. Since log printing had been turned off for the stress test, let’s turn logging back on and take another look.

When set to 500 concurrency, there was a socket connection error, and only 139 of the 500 requests succeeded. Volo may have some problems at the moment, but that does not affect the bigger picture: we have already shown that Rust has the opportunity to replace Java in the microservice field.

If you are interested, you are welcome to join the discussion and learn together. There is still work to be done on this project, such as exception handling, multi-component support, and service registration and discovery, but the framework skeleton has been built and verified, and the future looks promising~
