New blog who dis

Posted On: Apr 15, 2023

It has been a long time since I last posted... but I re-did my blog!

The old blog was hosted on squarespace and used their WYSIWYG tools, which are nice, but one day I had the "genius" idea of making my own web server for the blog. How hard could it be?

Reasoning

The thought behind this was that I could save money by hosting it myself. I'm a programmer, I can make a simple web server and content management system, right? I figured I could cut the Squarespace bill, move some other services over to a cloud instance, and own my data in the process. In the end I do own my data, but it costs me about $30 extra a year.

Tech

Turns out web development is harder than it looks. The actual web server wasn't too bad, golang made that simple. Fuck css, html isn't that bad, and I purposely only used javascript for what I needed.

I used the go programming language for the web server, sqlite for the database, and linode for the hosting. For the front end I used a javascript library called prismjs for syntax highlighting in the code blocks. There was some javascript I used for the content management system, but I'm not going to show or talk about that here because 1) it is bad and 2) I'm not that knowledgeable about web security and don't want to expose something silly.

Other than my struggles with css and the poor content management system, I'm really happy with the tech I used! I don't get to use golang much for game development, so it was nice using it for the purpose it was actually built for. Sqlite is sqlite, it works, nothing too crazy there. The only odd thing is how I did my analytics: with one big database I couldn't handle a lot of requests, because reading a blog post and writing analytics contended with each other. I assumed this was an IO issue and split the database into two, one for blog data and another for analytical data. This worked, but wasn't elegant.

Some code

Here is my basic server initialization:

staticFiles := http.FileServer(http.Dir("./res/"))
http.Handle("/styles/", staticFiles)

uploads := http.FileServer(http.Dir("./uploads"))
http.Handle("/images/", http.StripPrefix("/images/", uploads))

http.HandleFunc("/", handleRequests)
http.HandleFunc("/favicon.ico", faviconHandler)
http.HandleFunc("/scripts/", handleScripts)

log.Fatal(http.ListenAndServeTLS(":{PORT}", "certificate.pem", "key.pem", nil))

http.ListenAndServeTLS does all the magic. My res folder and uploads folder are static content: res has css stuff, uploads has images in it (not sure why the folder is called uploads, or why it's outside the res folder).

I really like the http.HandleFunc way of doing this. I think C# has something similar, as do other languages, and regardless of the language it feels nice to use. Nothing crazy happens here, so ... on to some more code!

func handleRequests(w http.ResponseWriter, r *http.Request) {
    // This handles no trailing '/' if the user didn't put this in their address bar.
    var te, blog = getTemplateEngine(r)
    if r.URL.Path == "/" {
        // omitted this section as it's a lot of noise and not too interesting.
        // this is the landing page for the blog and will give you a list of all blog posts, ordered by descending, giving you a snippet of said post.

        writeTemplate(w, te, blog)
        return
    }

    go writeAnalytics(r.URL.Path)
    
    blogPost, blogErr := populateBlogFromDatabase(r.URL.Path)
    if blogErr != nil {
        log.Printf("Got an invalid request from: %s", r.RemoteAddr)
        handleNotFound(w, blogErr)
        return
    }
    
    if blogPost.PostDate.Valid {
        writeTemplate(w, te.Lookup(blog), blogPost)
    } else {
        writeTemplate(w, te, blogPost)
    }
}

This is how I handle a web request. If the path is "/" we go down the omitted part and list all the blog posts. Otherwise, if we hit a different page (/about, /projects) or a blog post, we look it up with populateBlogFromDatabase, then write the page to the response via writeTemplate using go's template engine!

But how does that work? And why did I make my own function, you ask? Go has this notion of a template that you can "inject" data into at runtime. This works well for web pages, though I believe the template engine is for text in general and the go devs made a specific templator for html. One issue I didn't mention was the need for mobile specific pages: getTemplateEngine, the first call in handleRequests, looks at the http request's headers and decides if it should use the desktop layout or the mobile layout.

import (
    "database/sql"
    "html/template"
    //Other imports
)

//Other variables

// Date formatting helpers shared by both template sets.
var dateFuncs = template.FuncMap{
    "formatDate": func(t sql.NullTime) string {
        if !t.Valid || t.Time.IsZero() {
            return ""
        }
        return t.Time.Format("2006-01-02")
    },
    "formatDateLong": func(t sql.NullTime) string {
        if !t.Valid || t.Time.IsZero() {
            return ""
        }
        return t.Time.Format("Jan 02, 2006")
    },
}

var tmpl = template.Must(template.New("layout.html").Funcs(dateFuncs).
    ParseFiles("templates/layout.html", "templates/blog.html"))

var mobile_tmpl = template.Must(template.New("mobile_layout.html").Funcs(dateFuncs).
    ParseFiles("templates/mobile_layout.html", "templates/mobile_blog.html"))

Here are my two template variables, one for desktop (tmpl) and one for mobile (mobile_tmpl). You define the name you'll reference the template by ("layout.html"/"blog.html"), then you parse the actual files with ParseFiles. template.Must will panic at startup if the templates have errors in them. While we're here, I like that you can define functions to format data (formatDate, formatDateLong). This isn't a full tutorial on go's template engine, but in a template you can reference a field like {{.PostDate}}, and if you want to format that date with one of the functions: {{formatDateLong .PostDate}}.

The next part of the mobile pages is a function called getTemplateEngine. It reads the http request headers to determine if the request came from a mobile device; if so it uses the mobile_tmpl above, otherwise the desktop pages are the default and tmpl is used.

func getTemplateEngine(r *http.Request) (*template.Template, string) {
    userAgent := r.Header["User-Agent"]
    isMobile := false
    for _, ua := range userAgent {
        if strings.Contains(ua, "iPhone") {
            isMobile = true
            break
        }
    }
    te := tmpl
    blog := "blog.html"
    if isMobile {
        te = mobile_tmpl
        blog = "mobile_blog.html"
    }
    return te, blog
}

I completely forgot about the goroutine for analytics! In handleRequests above there's go writeAnalytics(r.URL.Path), which fires off a goroutine for my analytics. It might not need to be a goroutine, I haven't measured it, but this lightweight concurrency is really nice. As for the function itself, writeAnalytics just does an upsert of how many times a particular blog url was hit that day. No IP tracking, country tracking, or device information is stored. Now that I think of it, maybe some device info would be nice (IE: mobile vs desktop).

Things Lost

Squarespace's analytics were good, lots of interesting data. However, I only looked at it a few times a year, if that, and I never did anything with the data. The bigger thing I've lost, as you may be noticing now, is comments. I don't think they add much given the headache, both in developing them and in responding to them. As part of changing my blog I also run my own gitea instance for git projects, which could be an avenue for code questions or bugs with my tutorials.

I might have lost some other things too: RSS, which I'll have to read more about to figure out if it's something I can fix on my end, and little things like SEO, liking a page, and searching the blog catalog. Overall I'm really happy with the new site. It was a lot of headache and probably wasn't worth it from a time and money point of view, but I had fun doing it!