Why Not Blog

#programming

Now that I'm mostly in version-one mode, I figured I'd go over how I implemented my website and how it changed over time. I'm not going to go over the motivations, as that's been done in my init post. This will be purely about the design and what I learned along the way.

The Initial Design

I knew I wanted to use a CLI for updating the contents of the site. I started with a simple implementation of that CLI with two commands, each taking a single parameter (the slug of the resource). The system was built off of a routing file, loaded at start time, that defined how the routes were laid out. Each route had a type that defined the handler that needed to be used. Here's the model I used; see if you can spot the problem.

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct RouteConfig {
    pub path: String,
    pub template: String,
    // RouteType is an enum that picks the handler for this route.
    pub route_type: RouteType,
    pub page_id: Option<String>,
}

From a routing perspective, I used ServeDir (part of tower_http) pointed at my assets folder, plus a fallback handler that was my actual generic handler with internal route matching. Outside of the structs, the first several iterations had virtually everything in lib.rs.
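Roughly, the wiring looked like this. This is just a sketch assuming axum; the handler name and body are made up for illustration.

use axum::{http::Uri, response::Html, Router};
use tower_http::services::ServeDir;

// Static files come straight from the assets folder; everything else falls
// through to the generic handler that matches against the loaded routing file.
async fn generic_handler(uri: Uri) -> Html<String> {
    // ...look the path up in the loaded RouteConfig entries here...
    Html(format!("matched {}", uri.path()))
}

fn router() -> Router {
    Router::new()
        .nest_service("/assets", ServeDir::new("assets"))
        .fallback(generic_handler)
}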

I decided I didn't want calling the database to be as verbose as it is with raw libsql. So I did the natural thing and made two functions: one to get a single item and one to get a Vec. This was a problem. My editor started complaining about something involving higher-ranked trait bounds. This was my first major roadblock with the borrow checker. I had tried to do what would be simple in other languages: make a function that takes a generic with a constraint.

Here's the unfixed version of what I tried.

// Un-fixed: the bare Deserialize bound here is the problem.
pub async fn get_list<T: Deserialize>(
    conn: Connection,
    sql: &str,
    params: impl libsql::params::IntoParams,
) -> AppResult<Vec<T>> {
    let mut iter = conn.query(sql, params).await?;
    let mut ret: Vec<T> = Vec::new();

    while let Some(page) = iter.next().await? {
        ret.push(de::from_row::<T>(&page)?);
    }

    Ok(ret)
}

The problem, as I understand it now, comes down to ownership of the data. Serde supports zero-copy deserialization, meaning the deserialized value can borrow from the input instead of copying it into the new structure. That's part of why it's ridiculously fast. The side effect is that ownership of that memory can be a problem: since the value may borrow from the row, once the function is over (or more likely as soon as the loop moves to the next iteration) the row's memory is dropped while T could still be referencing it.

The solution is to tell Rust that the data produced through the Deserialize trait belongs to T. The easiest way of doing so is swapping the bound out for DeserializeOwned, which is effectively shorthand for the constraint for<'a> Deserialize<'a>. In other words, T must be deserializable from input of any lifetime, which means it has to own its data rather than borrow from the row.
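Here's what that change looks like applied to the function above; everything other than the bound stays the same.

use serde::de::DeserializeOwned;

// Fixed: the DeserializeOwned bound means T owns its deserialized data, so
// each row can be dropped as soon as its value has been built.
pub async fn get_list<T: DeserializeOwned>(
    conn: Connection,
    sql: &str,
    params: impl libsql::params::IntoParams,
) -> AppResult<Vec<T>> {
    let mut iter = conn.query(sql, params).await?;
    let mut ret: Vec<T> = Vec::new();

    while let Some(page) = iter.next().await? {
        ret.push(de::from_row::<T>(&page)?);
    }

    Ok(ret)
}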

Once I got that working I had my two methods nice and clean. The next thing to cover is templates. Since I needed (and in fact still need) templates that don't have to be compiled in, some popular choices are out. I chose the handlebars crate (handlebars-rust). I'm glad I did, since it has a pretty neat feature: if you enable dev mode, it reloads the template files as they change. I also didn't have to specify names for most templates. The only ones referenced by name are these:

  • page
  • post
  • search

They go into the templates folder alongside any other needed templates; anything in that folder with the .hbs extension gets loaded. The search template serves as both the tag search and the homepage, which helps cut down on copy/pasting.
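Here's roughly what that registry setup looks like. This assumes the handlebars 4.x API (with the dir_source feature); the exact calls may differ in other versions.

use handlebars::Handlebars;

// Register every .hbs file in ./templates under its file name, and reload
// template files from disk as they change.
let mut hbs = Handlebars::new();
hbs.set_dev_mode(true);
hbs.register_templates_directory(".hbs", "templates")
    .expect("templates directory should load");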

Where We Are Now

As I wrote the endpoint handlers for the homepage and tag search, I ran into the issue of needing a bunch of per-route-type metadata. While I could have used some sort of config bag containing whatever each route needed, there wouldn't have been a good way to type it, which would have made it really annoying. Instead I decided to make the / route the tag search in addition to the regular home page. The route has two parameters, both optional: the tag to search and the page number.
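Something like this struct captures those parameters; the struct name and exact types here are my assumptions.

use serde::Deserialize;

// Query parameters for the / route; both are optional.
#[derive(Deserialize)]
pub struct SearchParams {
    pub tag: Option<String>,
    pub page: Option<u32>,
}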

Rust, like a lot of my favorite languages, treats its switch construct (match) as an expression. This means you can alter the query based on request state.

// Ask for nine rows even though only eight are shown; the extra row tells
// the handler whether another page exists.
let posts: Vec<PostSearchResult> = match search_params.tag.clone() {
    Some(tag) => {
        let sql = "SELECT slug, tag, title FROM post WHERE published = TRUE AND tag = ?1 ORDER BY timestamp DESC LIMIT 9 OFFSET ?2";

        data::get_list(conn, sql, libsql::params![tag.as_str(), skip]).await?
    }
    None => {
        let sql = "SELECT slug, tag, title FROM post WHERE published = TRUE ORDER BY timestamp DESC LIMIT 9 OFFSET ?1";

        data::get_list(conn, sql, libsql::params![skip]).await?
    }
};

If I later want to add search, I can update this match to check for a search parameter and return the appropriate data. To know whether a next page exists, I just check the length of the result: my pages only show eight entries, but the query asks for nine, so if nine rows come back there's at least one more page.
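That look-ahead check is tiny; a sketch of it might look like this (the link format mirrors the previous-page link below, and the variable names are assumed).

// If a ninth row came back, there's at least one more page.
let next = if posts.len() > 8 {
    Some(format!("/?tag={}&page={}", tag, page + 1))
} else {
    None
};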

Previous is a little more complicated (though it could have been less so if I weren't being pedantic). I used another match statement with three branches depending on the current page.

prev = match page {
    // The previous page is the first page: drop the page parameter entirely.
    1 => Some(format!("/?tag={}", tag)),
    // Otherwise link back one page.
    n if n > 1 => Some(format!("/?tag={}&page={}", tag, page - 1)),
    // Already on the first page: nothing to link back to.
    _ => None,
};

This approach lets me strip the page parameter when linking back to the first page. I feel like it's cleaner and will confuse search engines less.

After doing that, I realized my routes-file setup was over-engineered, and I decided to just make two more routes: /page/:slug for pages and /post/:slug for posts. That consistency made the routing easier to deal with, and later on made the CDN a lot simpler to set up.
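The resulting route table is small enough to write out by hand. Roughly, and again assuming axum (with a version that uses the :slug path syntax) and made-up handler names:

use axum::{routing::get, Router};
use tower_http::services::ServeDir;

let app = Router::new()
    .route("/", get(search_handler))          // homepage and tag search
    .route("/page/:slug", get(page_handler))  // pages
    .route("/post/:slug", get(post_handler))  // posts
    .nest_service("/assets", ServeDir::new("assets"));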

Setting up a proper CLI was very interesting. I used clap, which models subcommands as enums, and it was a very good experience. I've written CLIs in a number of languages (Node, C#, Python, Go, etc.) and clap is easily the best I've used. It overtook Go with cobra, which was my previous favorite.
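As a hypothetical sketch of the shape clap's derive API gives you, based on the two slug-taking commands mentioned earlier (my actual command names differ):

use clap::{Parser, Subcommand};

#[derive(Parser)]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

// Each variant is a subcommand; its fields become the arguments.
#[derive(Subcommand)]
enum Command {
    /// Create or update a post by slug.
    Post { slug: String },
    /// Create or update a page by slug.
    Page { slug: String },
}

fn main() {
    match Cli::parse().command {
        Command::Post { slug } => println!("updating post {slug}"),
        Command::Page { slug } => println!("updating page {slug}"),
    }
}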

As far as the CSS goes, I switched to Tailwind. I didn't do so out of any necessity, but as an opportunity to learn it. It's pretty solid, and I now understand the purpose behind its declarative, utility-first approach. Using it with React or another component system would keep the verbosity out of the consuming side, limiting the amount of CSS that would traditionally need to be written for customized components.

What's Next?

I need exactly nine articles in order to properly test paging. Once that's done I may add a sitemap-generation function; I also might not, so we'll see. The site is currently on 1.0.0-rc.2, though I've got a fix that will go into rc.3, since I forgot to trim the posts down to eight entries. I figure it'll take me a month to write enough articles, though I'll probably port some of my past articles from Medium.

Once that's done I'll mostly just write. I don't need to constantly update for no reason. I'll put a page together to record annoyances and fix them during the new year while it's slow.

