Known for its high performance, steep learning curve, and vibrant community, Rust is a great language for building all kinds of applications. In this tutorial, we will build a REST-style API using Axum, a Rust web framework from the Tokio project, and Diesel, a popular type-safe ORM for Rust.
An introduction to Diesel and Axum
What is Diesel?
Diesel describes itself as a safe, extensible ORM and query builder for Rust. In simpler terms, Diesel provides an abstraction layer over SQL databases, allowing developers to interact with them using Rust code instead of writing raw SQL queries, although writing raw SQL is still supported.
What is Axum?
Axum is a modular web framework from the Tokio project. Axum shines with its async support and type-safe routing; designed for high performance, it allows easy creation of APIs without sacrificing speed.
Prerequisites
This article assumes some fundamental knowledge of Rust and SQL. In addition, you'll need the following installed to follow along:

- Rust and Cargo
- The Civo CLI (or Docker, to run Postgres locally)
- curl, for testing the API
Creating a Rust Project
In this tutorial, we will be creating an API to store rocks! A bit contrived, but it mirrors typical use cases for an API. If you haven't already, install Rust and Cargo via rustup:

curl https://sh.rustup.rs -sSf | sh

Then, in a directory of your choice, run the following command to initialize a new Rust project:

cargo new rusty-rocks && cd rusty-rocks
Adding dependencies
Next, we need to add a couple of dependencies to the project. Open up `Cargo.toml` and add the following lines to the `[dependencies]` block:
[dependencies]
diesel = { version = "2.1.0", features = ["postgres"] }
dotenvy = "0.15"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
axum = {version = "0.6.20", features = ["headers"]}
tokio = { version = "1.0", features = ["full"] }
thiserror = "1.0.61"
Here’s a breakdown of the dependencies we just added:
- `diesel` is the core ORM and query builder library; the `"postgres"` feature enables PostgreSQL database support
- `dotenvy` helps load environment variables from a `.env` file
- `serde` provides serialization/deserialization of data structures; the `"derive"` feature generates code for the (de)serialization traits
- `serde_json` enables JSON (de)serialization
- `axum` is a web application framework for building APIs; the `"headers"` feature allows handling HTTP headers
- `tokio` is the asynchronous runtime for handling async operations; the `"full"` feature includes all Tokio components
- `thiserror` lets us define and handle custom error types more easily than with the standard library alone
Save the file and run the following command to download the dependencies:
cargo check
Connecting to the Database
Before we connect, we need a database. In this example, we will be launching a new database on Civo using the CLI:
Creating a database
civo database create -m PostgreSQL rusty-rocksdb
This will launch a Postgres database with one node of size `g3.db.small`.
Alternatively, you can start a Postgres container using Docker:
docker run --name my-postgres -e POSTGRES_USER=admin -e POSTGRES_PASSWORD=longtestintesting -p 5432:5432 -d postgres
To help with migrations, Diesel provides a CLI. You can install this using the cargo package manager:
Installing diesel CLI
cargo install diesel_cli
Before creating any migrations, we need to supply Diesel with credentials for the database.
Retrieving database credentials
You can get your Civo database credentials using the CLI:
civo db credential rusty-rocksdb
Export your credentials to an env file:
echo DATABASE_URL=postgres://username:password@host/rustyrocks > .env
Finally, run the Diesel setup to create the migrations directory and create the database specified in your `DATABASE_URL`:
diesel setup
Creating a Migration
SQL migrations are versioned sets of instructions that allow you to incrementally update the schema of a database over time in a structured and reproducible manner. With our database setup, we can define a schema and create our first migration.
Using the diesel CLI, we can create a new migration named create_rocks
:
diesel migration generate create_rocks
Running the command above should create a new directory within `migrations/`, with the current date as the prefix and `create_rocks` as the suffix. This will also contain two new files: `up.sql`, where we will define all our tables, and `down.sql`, which should undo any of the commands defined in `up.sql`.
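The resulting layout should look roughly like this (the timestamp prefix shown here is just an example and will differ on your machine):

```
migrations/
└── 2024-05-28-120000_create_rocks/
    ├── up.sql
    └── down.sql
```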
Creating Rocks Table
Open up `up.sql` and add the following SQL to create the `rocks` table:
--migrations/_create_rocks/up.sql
CREATE TABLE rocks (
  id SERIAL PRIMARY KEY,
  name VARCHAR NOT NULL,
  kind VARCHAR NOT NULL
);
Save the file, and add the following code in `down.sql`:
--migrations/_create_rocks/down.sql
-- This file should undo anything in `up.sql`
DROP TABLE rocks;
Run the migrations using Diesel:
diesel migration run
Database Operations in Rust
With our migrations written, we can start interacting with the database from Rust. We'll begin by defining models for the database schema we just created. In the `src` directory, create a file called `models.rs` and add the following code:
// src/models.rs
use crate::schema::rocks;
use diesel::prelude::*;
use serde::{Deserialize, Serialize};
use thiserror::Error;

#[derive(Error, Debug)]
pub enum DbError {
    #[error("Database error: {0}")]
    DbError(#[from] diesel::result::Error),
}

#[derive(Serialize)]
pub struct ErrorResponse {
    pub error: String,
}

#[derive(Queryable, Selectable, Serialize)]
#[diesel(table_name = rocks)]
#[diesel(check_for_backend(diesel::pg::Pg))]
pub struct Rock {
    pub id: i32,
    pub name: String,
    pub kind: String,
}

#[derive(Insertable, Deserialize, Serialize)]
#[diesel(table_name = rocks)]
pub struct NewRock<'a> {
    pub name: &'a str,
    pub kind: &'a str,
}
In the file above, we begin by importing the necessary modules and traits. First, we import the `rocks` table from our project's schema file, which Diesel generates based on our database schema. We also define two helper types, `DbError` and `ErrorResponse`, which will be useful once we begin creating endpoints.
Secondly, we import the prelude from the diesel
crate, which brings in various traits and types that we'll be using. Finally, we import the Deserialize
and Serialize
traits from the serde
crate, which will allow us to (de)serialize data.
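For reference, the `src/schema.rs` file that Diesel generates from our migration should look roughly like this (the exact contents may vary slightly with your Diesel version):

```rust
// src/schema.rs
// @generated automatically by Diesel CLI.

diesel::table! {
    rocks (id) {
        id -> Int4,
        name -> Varchar,
        kind -> Varchar,
    }
}
```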
#[derive(Queryable, Selectable, Serialize)]
#[diesel(table_name = rocks)]
#[diesel(check_for_backend(diesel::pg::Pg))]
pub struct Rock {
    pub id: i32,
    pub name: String,
    pub kind: String,
}
Here, we define a struct called Rock
that represents a row in our rocks
table. The #[derive(Queryable, Selectable, Serialize)]
attribute tells Diesel to automatically implement the Queryable
and Selectable
traits for this struct, which allows us to query and select data from the database. The Serialize
trait is derived from the serde
crate, enabling us to serialize instances of this struct to a format like JSON.
The `#[diesel(table_name = rocks)]` attribute specifies that this struct maps to the `rocks` table in our database. The `#[diesel(check_for_backend(diesel::pg::Pg))]` attribute tells Diesel that we're using the PostgreSQL backend, ensuring that the generated code is compatible with PostgreSQL.
The Rock
struct has three fields: id
(an integer representing the primary key), name
(a String representing the rock's name), and kind
(a String representing the rock's kind).
#[derive(Insertable, Deserialize, Serialize)]
#[diesel(table_name = rocks)]
pub struct NewRock<'a> {
    pub name: &'a str,
    pub kind: &'a str,
}
This struct, NewRock
, is used for inserting new rows into the rocks
table. The #[derive(Insertable, Deserialize, Serialize)]
attribute tells Diesel to implement the Insertable
trait for this struct, allowing us to insert new instances into the database.
The `NewRock` struct has two fields: `name` and `kind`, both of which are string slices (`&'a str`). The lifetime parameter `'a` ensures that a `NewRock` instance cannot outlive the string data it borrows.
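To see why the lifetime matters, here is a small self-contained sketch (separate from the tutorial code) of a struct that borrows its string data:

```rust
// A standalone copy of the NewRock struct, for illustration only.
struct NewRock<'a> {
    name: &'a str,
    kind: &'a str,
}

// Borrowing works as long as the NewRock does not outlive its strings.
fn describe(rock: &NewRock) -> String {
    format!("{} ({})", rock.name, rock.kind)
}

fn main() {
    let name = String::from("Granite");
    // `rock` borrows from `name`; dropping `name` while `rock` is
    // still in use would be a compile error.
    let rock = NewRock { name: &name, kind: "Igneous" };
    println!("{}", describe(&rock)); // prints "Granite (Igneous)"
}
```

If you tried to move or drop `name` before the last use of `rock`, the borrow checker would reject the program, which is exactly the guarantee `'a` encodes.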
Next, we define a couple of helper functions for database operations. Within the `src` directory, create a file called `lib.rs` and add the following code:
// src/lib.rs
pub mod models;
pub mod schema;

use std::env;

use diesel::{pg::PgConnection, Connection, RunQueryDsl, SelectableHelper};
use dotenvy::dotenv;
use models::{NewRock, Rock};
use schema::rocks;

pub fn establish_connection() -> Result<PgConnection, diesel::result::ConnectionError> {
    dotenv().ok();
    let db_url = env::var("DATABASE_URL").map_err(|_| {
        diesel::result::ConnectionError::BadConnection("DATABASE_URL not set".into())
    })?;
    PgConnection::establish(&db_url)
}

pub fn insert_rock(
    conn: &mut PgConnection,
    name: &str,
    kind: &str,
) -> Result<Rock, diesel::result::Error> {
    let new_rock = NewRock { name, kind };
    diesel::insert_into(rocks::table)
        .values(new_rock)
        .returning(Rock::as_returning())
        .get_result(conn)
}

pub fn get_rocks(conn: &mut PgConnection) -> Result<Vec<Rock>, diesel::result::Error> {
    use schema::rocks::dsl::*;
    rocks.load::<Rock>(conn)
}
In the file above, we start by declaring the `models` and `schema` modules. These modules contain the data structures and schema definitions for our application.
Next, we create a function establish_connection
to set up the connection to our database. This function reads the database URL from an environment variable using std::env
and dotenvy
. It then uses the PgConnection::establish
function from the diesel
crate to create a connection to the PostgreSQL database specified by the URL. If any of these operations fail, we return a `diesel::result::ConnectionError`.
The `insert_rock`
function takes a mutable reference to a PgConnection
and the name
and kind
of a new rock as arguments. It creates a NewRock
struct using the provided name
and kind
, and then uses the `diesel::insert_into`
function to construct an INSERT
query. The values
method sets the values to be inserted, and returning
specifies that we want the newly inserted row to be returned. Finally, get_result
executes the query and returns the inserted Rock
struct.
The get_rocks
function retrieves all rocks from the database. It imports the rocks
table definition from the schema
module and uses the load
method from diesel
to execute a SELECT
query and return a Vec
containing all rows from the rocks
table.
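As an aside (not used in the rest of this tutorial), the same DSL supports filtered queries. Assuming the items from `diesel::prelude::*` are in scope, a sketch of a hypothetical helper that fetches only rocks of a given kind might look like this:

```rust
// Hypothetical helper, not part of the tutorial code.
pub fn get_rocks_by_kind(
    conn: &mut PgConnection,
    rock_kind: &str,
) -> Result<Vec<Rock>, diesel::result::Error> {
    use schema::rocks::dsl::*;
    // filter() adds a WHERE clause; kind.eq(...) compares the column to a value
    rocks.filter(kind.eq(rock_kind)).load::<Rock>(conn)
}
```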
Routing using Axum
Next, we'll set up routing for our web application using Axum. Open up `src/main.rs`
and add the following code:
use axum::{http::StatusCode, response::IntoResponse, routing::get, routing::post, Json, Router};
use rusty_rocks::{establish_connection, get_rocks, insert_rock, models::ErrorResponse};
use serde::Deserialize;

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/", post(create_rock))
        .route("/rocks", get(rocks));

    println!("booting up server");
    axum::Server::bind(&"0.0.0.0:9093".parse().unwrap())
        .serve(app.into_make_service())
        .await
        .unwrap();
}

#[derive(Deserialize)]
struct CreateRockRequest {
    name: String,
    kind: String,
}
async fn create_rock(
    Json(payload): Json<CreateRockRequest>,
) -> Result<impl IntoResponse, StatusCode> {
    // Fail with a 500 if we cannot reach the database
    let conn = &mut establish_connection().map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    match insert_rock(conn, &payload.name, &payload.kind) {
        Ok(new_rock) => Ok(Json(new_rock).into_response()),
        Err(e) => {
            let error_response = ErrorResponse { error: e.to_string() };
            Ok((StatusCode::INTERNAL_SERVER_ERROR, Json(error_response)).into_response())
        }
    }
}

async fn rocks() -> Result<impl IntoResponse, StatusCode> {
    let conn = &mut establish_connection().map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    match get_rocks(conn) {
        Ok(rocks) => Ok(Json(rocks).into_response()),
        Err(e) => {
            let error_response = ErrorResponse { error: e.to_string() };
            Ok((StatusCode::INTERNAL_SERVER_ERROR, Json(error_response)).into_response())
        }
    }
}
We begin by importing the required libraries. Next, we create a new `Router` instance and define two routes:

- The `/` route, which handles `POST` requests using the `create_rock` handler function.
- The `/rocks` route, which handles `GET` requests using the `rocks` handler function.
We then start the Axum server by binding it to the 0.0.0.0:9093
address and serving the router instance using the serve
method.
The CreateRockRequest
struct is defined with the #[derive(Deserialize)]
attribute, which allows Axum to automatically deserialize the request body into this struct for POST
requests.
The `create_rock` handler function is an asynchronous function that takes a `Json` payload as input. It establishes a connection to the database using `establish_connection`, inserts a new rock using `insert_rock`, and returns the newly created `Rock` struct as a JSON response using `Ok(Json(new_rock))`. If any error occurs during connection establishment or rock insertion, it returns an `ErrorResponse` with a status code of `INTERNAL_SERVER_ERROR`.
The `rocks` handler function is also asynchronous. It establishes a database connection, retrieves all rocks using `get_rocks`, and returns the vector of rocks as a JSON response. If an error occurs, it returns an `ErrorResponse` with a status code of `INTERNAL_SERVER_ERROR`.
Testing the API
With the routes created, we can start up the server using cargo:
cargo run
Next, let’s create a new entry using curl:
curl -H 'Content-Type: application/json' -X POST -d '{"name": "Granite","kind": "Igneous"}' localhost:9093/
curl -H 'Content-Type: application/json' -X POST -d '{"name": "Sandstone","kind": "Sedimentary"}' localhost:9093/
curl -H 'Content-Type: application/json' -X POST -d '{"name": "Slate","kind": "Metamorphic"}' localhost:9093/
The response should look something like this:
{
  "id": 1,
  "name": "Granite",
  "kind": "Igneous"
}
Finally, you can test the /rocks
route by running:
curl localhost:9093/rocks
The response should look something like this:
[
  {
    "id": 1,
    "name": "Granite",
    "kind": "Igneous"
  },
  {
    "id": 2,
    "name": "Sandstone",
    "kind": "Sedimentary"
  },
  {
    "id": 3,
    "name": "Slate",
    "kind": "Metamorphic"
  }
]
Cleaning up
After completing this tutorial, be sure to clean up the database created earlier. You can do this by running the following command:
civo db rm rusty-rocksdb
Wrapping up
In this tutorial, we learned how to build an API using Rust, Diesel, and Axum. We started by setting up a new Rust project and adding the necessary dependencies, including Diesel for ORM and query building, and Axum for routing.
While this tutorial covered the basics of building an API with Diesel and Axum, there are many more advanced features and concepts to explore. Axum's extensive examples page on GitHub is an excellent resource for learning about extractors, state management, and other advanced topics.