Building an Inventory Management System (from scratch) with Rust and React

I don't always set myself hard-to-obtain goals; as you know, "ambition is the last refuge of failure". No, I haven't failed!
Look at those sexy UUIDs.  Oh my!
The premise was simple. I wanted to create a "simple" Rust backend API that would essentially wrap a Postgres DB. The other part is of course the frontend; at the start I didn't put too much stock into this until I had the Rust API out of the way.
Much of the initial Rust work was in setting up the boilerplate Axum handlers, getting a test harness set up for end-to-end integration tests, and a sensible MPSC setup. Thankfully, this has been my wheelhouse since January 2022.
Before diving into some of the nuances of the Rust approach: I had a feeling I would go with React/Redux as a light frontend, although the last time I touched any of this was in mid-2021.
The final piece of the puzzle is to create a working build system and thereby provide a smooth development flow.
Thankfully, I can report that I have achieved this quite nicely.

Taking a simple first step

I'm sure anyone reading this will probably have a few gripes with my early decision here. I wanted to keep things simple, in the "no-brainer" sense, so I decided to go with a nested-set hierarchy, but a pretty rigid one at that.
The Root nodes are Locations, and every Location can have nested Containers. Similarly, every Container can have many Items. Containers also have an optional container_id to allow nesting containers (but I didn't flesh this out).
Diving into the advanced topic of storing hierarchical data in an RDBMS shows there's much depth to read up on, and it's also worth noting two potential approaches with PostgreSQL, as that's the backend I'm using.
The first step here is choosing the "Static tree" approach, which in its most basic form is 3 levels deep and can get n levels deeper, depending on intermediate Containers.
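As a rough sketch, the three entity types might look something like this as Rust structs. This is my reconstruction, not the actual code: field names are inferred from the JSON payloads further down, while Item's fields and the chrono/uuid/serde_json types are assumptions.

use chrono::{DateTime, Utc};
use uuid::Uuid;

// Root of the tree: a physical place.
pub struct Location {
    pub id: Uuid,
    pub name: String,
    pub address: String, // e.g. GPS coords
    pub created_at: DateTime<Utc>,
    pub updated_at: Option<DateTime<Utc>>,
}

// Belongs to a Location; `container_id` optionally points at a parent
// Container to allow nesting (not fully fleshed out).
pub struct Container {
    pub id: Uuid,
    pub location_id: Uuid,
    pub container_id: Option<Uuid>,
    pub name: String,
    pub meta_data: Option<serde_json::Value>,
    pub created_at: DateTime<Utc>,
    pub updated_at: Option<DateTime<Utc>>,
}

// Leaf of the tree: belongs to a Container (fields assumed).
pub struct Item {
    pub id: Uuid,
    pub container_id: Uuid,
    pub name: String,
    pub created_at: DateTime<Utc>,
    pub updated_at: Option<DateTime<Utc>>,
}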

Embracing Serde

In this project, I ended up really embracing Serde, and Enums in particular -- which isn't by accident either. Back then, it was another episode of my beloved "Crust of Rust", and Jon was building the basics of the Maelstrom protocol. That endeavour happened to include a fair bit of Serde and Enum use, which gave me the idea of using this combination to provide better context-based JSON feedback.
Here's a quick example -- notice how the key of the object under payload changes depending on the context of the API call? I also wanted to keep the code a bit saner, and one approach was to call these Entities. What this means is that any node has an EntityType, where the type can be a Location, Container or Item.
curl --location 'http://0.0.0.0:8001/v1/locations/2b17e607-71c0-4d73-af6f-528ec389d223'
{
  "payload": {
    "location": {
      "address": "GPS coords",
      "created_at": "2023-06-05T08:25:11.639924Z",
      "id": "2b17e607-71c0-4d73-af6f-528ec389d223",
      "name": "This one is hidden quite well",
      "updated_at": null
    }
  },
  "status": "success"
}

curl --location 'http://0.0.0.0:8001/v1/locations/2b17e607-71c0-4d73-af6f-528ec389d223/containers'
{
  "payload": {
    "containers": [
      {
        "container_id": null,
        "created_at": "2023-06-06T03:24:19.508986Z",
        "id": "723995e0-bfda-437d-a7eb-88d7908a5ae8",
        "location_id": "2b17e607-71c0-4d73-af6f-528ec389d223",
        "meta_data": null,
        "name": "Kitchen stuff",
        "updated_at": "2023-06-10T06:40:02.128706Z"
      },
      {
        "container_id": null,
        "created_at": "2023-06-05T08:34:08.706808Z",
        "id": "6d961ae9-ee09-4f5b-a7e8-129f48b2e75d",
        "location_id": "2b17e607-71c0-4d73-af6f-528ec389d223",
        "meta_data": null,
        "name": "Desk stuff",
        "updated_at": "2023-06-10T06:40:22.418005Z"
      }
    ]
  },
  "status": "success"
}
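Under the hood, this shape falls out of Serde almost for free: the default (externally tagged) enum representation serializes each variant as a single-key JSON object. Here's a minimal sketch, assuming an ApiJsonOutputPayload enum along these lines -- the name appears in the handler below, but the exact variants are my guess:

use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "snake_case")]
pub enum ApiJsonOutputPayload {
    Location(Location),         // serializes as { "location": { ... } }
    Containers(Vec<Container>), // serializes as { "containers": [ ... ] }
    // ... plus Item/Items variants, and so on
}

// The wrapper that produces { "payload": ..., "status": "success" }.
#[derive(Serialize)]
pub struct ApiResponse<T> {
    pub payload: T,
    pub status: String,
}

The generic wrapper also lines up with the ApiResponse<TestLocations> used in the tests further down.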

Axum Request Handlers and Async MPSC

Given the rather trivial nature of this API -- its CRUD actions are close to those of a "Todo list" type app -- most of my readers might argue this could be done with less verbosity and far fewer LOC if written in, say, Python or Ruby. Whilst I don't wish to venture a guess as to such an outcome, I'd be inclined to think the LOC count in Python would be lower. But that isn't why (most of us) love Rust.
Each API request handler (in Axum) is an async fn that processes the request and provides an infallible response to the client. As part of this API, though, we have some "long running" tasks, and this is where the MPSC approach to concurrency/multi-threading comes in.
Here's one such handler, which sends a message over an MPSC channel to trigger sending QR code image bytes to AWS S3. Config data is "globally" Boxed inside a Mutex, so a lock is obtained first before we operate on it.
pub async fn handle_post_location(
    Extension(actor_handle): Extension<MyActorHandle>,
    extract::Json(payload): extract::Json<CreateLocation>,
) -> Result<Response, Error> {
    let name = payload.name;
    let address = payload.address;
    let location = Location::new(name.as_str(), address.as_str());

    // Scope the lock so it's released as soon as we're done with the config.
    {
        let config_lock = &mut *APP_CONFIG.lock().await;
        if let Some(config) = config_lock {
            if let Some(new_location) = location.create(config).await? {
                let new_location_id = new_location.id;
                tracing::info!("location.create() {:#?}", new_location);
                let output = json_output(ApiJsonOutputPayload::Location(new_location));

                // Upload to S3: fire a message at the MPSC consumer
                let (_timestamp, data_to_persist) = upload_qr_code(
                    actor_handle,
                    name,
                    new_location_id.to_string(),
                    Entity::Location.to_string(),
                )
                .await?;

                // Save image to disk
                // Redacted: basic setup for persisting data (e.g. building `full_filepath`)
                save_bytes_to_disk(&full_filepath, data_to_persist.as_slice()).await?;

                return Ok(output);
            }
        }
    }
    // Redacted: fallthrough error handling
}
The QR code image bytes are written straight to bytes on the heap (no disk I/O) and then uploaded from memory into S3. As a "cheap" way to host the images, I ultimately decided these should also live on disk, as a $-cost compromise since this will run on home infrastructure. Typically one would expose the S3 bucket(s) to public access and link directly to those image URIs, but in this case I decided to store a "cached" set and throw nginx on top. The "push bytes in memory to S3 directly" path is just a nice-to-have, which can be handy should we ever decide not to persist these to disk in the future.
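The upload call itself isn't shown in this post, but purely as an illustration, pushing in-memory bytes straight to S3 might look like this, assuming the aws-sdk-s3 crate (the bucket name and key layout are made up):

use aws_sdk_s3::primitives::ByteStream;

// Illustrative sketch: the bytes never touch the disk; they go straight
// from the heap into the PUT request body.
async fn put_qr_image_to_s3(
    client: &aws_sdk_s3::Client,
    bytes: Vec<u8>,
    key: &str,
) -> Result<(), aws_sdk_s3::Error> {
    client
        .put_object()
        .bucket("inventory-qr-codes") // assumed bucket name
        .key(key)
        .content_type("image/png")
        .body(ByteStream::from(bytes))
        .send()
        .await?;
    Ok(())
}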
With regards to persisting to disk, I've also adopted writing little helpers like this. Extremely reusable and handy.
use tokio::io::AsyncWriteExt; // for write_all

async fn save_bytes_to_disk<P, D>(path: P, data: D) -> Result<(), Error>
where
    P: AsRef<std::path::Path>,
    D: AsRef<[u8]>,
{
    let mut file = tokio::fs::File::create(path).await?;
    file.write_all(data.as_ref()).await?;
    Ok(())
}
The call to upload the QR code simply sends a message using the MPSC sender; it's the single consumer end that matches on the message's enum variant and handles the upload process accordingly.
pub async fn upload_qr_code(
    actor_handle: MyActorHandle,
    name: String,
    owned_id: String,
    owned_type: String,
) -> Result<(String, Vec<u8>), Error> {
    let timestamp = get_utc_time().to_rfc3339();
    let image_request = UploadQrImageRequest {
        owned_id,
        owned_type,
        name,
        ..Default::default()
    };
    let image_request_to_persist = image_request.clone();
    let qr_code_bytes = generate_qr_code_image(image_request_to_persist, &timestamp)?;
    let data_to_persist = qr_code_bytes.clone();

    actor_handle
        .upload_qr_image(image_request, qr_code_bytes, &timestamp)
        .await;

    Ok((timestamp, data_to_persist))
}

pub struct MyActorHandle {
    pub sender: mpsc::Sender<ActorMessage>,
}

impl MyActorHandle {
    pub async fn upload_qr_image(
        &self,
        image: UploadQrImageRequest,
        bytes: Vec<u8>,
        timestamp: &str,
    ) {
        let msg = ActorMessage::UploadQrImageToS3 {
            image,
            bytes,
            timestamp: timestamp.to_string(),
        };
        tracing::info!(
            "MyActorHandle.upload_qr_image(): Sending message / timestamp={}",
            &timestamp
        );
        if (self.sender.send(msg).await).is_err() {
            tracing::info!("receiver dropped");
            assert!(self.sender.is_closed());
        }
    }
}
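The consuming end isn't shown in full here, so as a minimal sketch (assuming tokio's mpsc; MyActor and its run loop are my assumptions, while the ActorMessage variant matches the one used above):

pub enum ActorMessage {
    UploadQrImageToS3 {
        image: UploadQrImageRequest,
        bytes: Vec<u8>,
        timestamp: String,
    },
    // ... other long-running tasks
}

// Hypothetical single consumer owning the receiving half of the channel.
pub struct MyActor {
    receiver: mpsc::Receiver<ActorMessage>,
}

impl MyActor {
    pub async fn run(&mut self) {
        // Messages are processed one at a time, in send order, off the
        // request/response hot path.
        while let Some(msg) = self.receiver.recv().await {
            match msg {
                ActorMessage::UploadQrImageToS3 { image, bytes, timestamp } => {
                    tracing::info!("uploading QR image / timestamp={}", timestamp);
                    // Placeholder for the actual S3 upload (see the sketch
                    // above); failures are logged rather than returned, since
                    // the HTTP response has already gone out.
                    let _ = (image, bytes);
                }
            }
        }
    }
}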
I've also added the ability to upload images, which are stored polymorphically in the PG DB with owner_id and owner_type columns.

CRUD and Elasticsearch

Apart from the long-running upload of images to S3, images are written to disk (a simple cache for nginx), and CRUD details are pushed to Elasticsearch at the time of creating Entities.
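The indexing code isn't shown in this post; purely as an illustration, indexing a freshly created Entity might look like this, assuming the official elasticsearch crate (the "locations" index name and document shape are made up):

use elasticsearch::{Elasticsearch, IndexParts};
use serde_json::json;

// Illustrative sketch: index a newly created Location under its UUID so
// searches can find Entities by name/address.
async fn index_location(
    client: &Elasticsearch,
    id: &str,
    name: &str,
    address: &str,
) -> Result<(), elasticsearch::Error> {
    client
        .index(IndexParts::IndexId("locations", id))
        .body(json!({ "name": name, "address": address }))
        .send()
        .await?;
    Ok(())
}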

Bespoke Test Harness

I've repurposed a test client from within the Axum code base (which also happens to use reqwest) to act as a simple wrapper around my Rust API. The keen-eyed will notice that setup_test() below always creates a Location and a Container.
This fn also truncates all tables, and I make sure to run all tests on a single thread (e.g. cargo test -- --test-threads=1).
#[tokio::test]
#[traced_test]
/// GET /v1/locations
async fn get_locations() {
    let (client, location1_id, _container_id) = setup_test().await.unwrap();

    // Build path
    let path = "/v1/locations".to_string();
    dbg!("TestClient path: {}", &path);

    let location2_id = create_location(&client, "location 2").await.unwrap();

    let client_resp = client.get(path.as_str()).send().await;
    let resp = serde_json::from_str::<ApiResponse<TestLocations>>(
        client_resp.text().await.as_str(),
    )
    .unwrap();
    dbg!(&resp);

    let collection = resp.payload.locations;
    assert_eq!(collection[0].id.to_string(), location1_id);
    assert_eq!(collection[1].id.to_string(), location2_id);
}
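For context, here's a hypothetical sketch of what setup_test() might look like, based purely on the description above (new_test_client and truncate_all_tables are assumed helpers, not names from the actual code base):

// Hypothetical sketch of setup_test(), inferred from the prose above.
async fn setup_test() -> Result<(TestClient, String, String), Error> {
    let client = new_test_client().await?; // assumed: boots the app + test client
    truncate_all_tables().await?;          // assumed: wipes state between tests

    // Every test starts from the same minimal tree: one Location
    // containing one Container.
    let location_id = create_location(&client, "location 1").await?;
    let container_id = create_container(&client, &location_id, "container 1").await?;

    Ok((client, location_id, container_id))
}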
The next part will cover some initial planning and the direction of taking this to production, with the main goals of (1) Keeping It (really) Simple, Stupid and (2) breaking eggs and moving fast.