Understanding Containers with Docker

This year, for my professional development, I had my institution purchase Reclaim Hosting’s Educational Technology Subscription. Reclaim Hosting has been offering one flex course per month, and this month I participated in their Understanding Containers flex course.

This is a little paradoxical, but one of the things I appreciated about this course is that it taught me how to use Reclaim Cloud, Reclaim Hosting’s containerized, autoscaling, modern cloud computing platform. At the same time, I appreciated that the skills I learned with Docker are pretty transferable to any other hosting provider where Docker can be installed. So I feel good about learning my current vendor’s platform a little better, while also knowing that the Docker skills I learned will likely transfer anywhere.

One of the advantages of Docker is that a Docker image includes all of the dependencies and configuration needed to run an application. I think back to one of the first servers I ever ran, which was Pressbooks on a DigitalOcean server. I installed a basic LAMP stack, added additional PHP modules, configured Apache, installed WordPress via the famous five-minute install (which is only a five-minute install if you know what you are doing!), struggled with .htaccess rules and pretty permalinks, and then struggled again to add the Pressbooks dependencies, which are famously a bear to get configured. As I reflect back on that time, each step in that list was time spent searching for guides, reading through Stack Overflow, and doing things from the command line that I didn’t understand at the time, but slowly gained an understanding of.

A well-constructed Docker image makes sure that all of the server-side dependencies are handled for you, so that when you pull and run an image, Docker creates the environment, sets your variables, and you very quickly have a running application on a lightweight server that minimizes bloat and extra installed software. Honestly, it’s a breath of fresh air, because while I got pretty skilled at running Ubuntu Linux servers and WordPress-based applications, I don’t want to learn Node.js, other databases, or load balancers.
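To make that concrete, here is a sketch of what “pull and run” looks like using the official WordPress image from Docker Hub (the port mapping and database values below are illustrative placeholders, not something from the course):

```shell
# Pull and run the official WordPress image from Docker Hub.
# Apache and PHP are baked into the image; the only external
# dependency is a database, supplied via environment variables.
docker run -d \
  --name my-wordpress \
  -p 8080:80 \
  -e WORDPRESS_DB_HOST=db.example.com \
  -e WORDPRESS_DB_USER=wp \
  -e WORDPRESS_DB_PASSWORD=secret \
  wordpress:latest
```

No LAMP stack to assemble, no PHP modules to hunt down: the image already contains everything the five-minute install assumed you had set up by hand.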

Taylor really hit that home in week 4, when he installed three separate applications in a forty-minute session. The first application, HedgeDoc, is a collaborative interface for writing markdown files, built on Node.js. Node projects are famous for having tons of dependencies, using the npm installer to pull in a large number of smaller building-block packages. It’s actually the issue I was having last week that I blogged about in my post Productive Failure: MathJax and Pressbooks, where I reported to Pressbooks that the installation scripts for their MathJax Node were broken, partly due to dependencies that had reached end of life. With the Docker image, the class was able to spin up their own instances and get started very quickly.

Next, we moved on to setting up a Baserow installation, an open source database similar to Airtable. The project is built on Django/Python, with a familiar web user interface, and uses Redis under the hood. Pulling the official Docker image and following their documentation, we were able to get a server set up in about fifteen minutes. The final piece of software we got going was WBO, an online whiteboard application also built on Node.js.
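For reference, Baserow’s documented all-in-one image can be started roughly like this (the version tag and BASEROW_PUBLIC_URL value are assumptions on my part; check Baserow’s install docs for the current details):

```shell
# Run Baserow's all-in-one image: Django backend, web frontend,
# and supporting services in a single container, with data
# persisted in a named volume.
docker run -d \
  --name baserow \
  -e BASEROW_PUBLIC_URL=http://localhost \
  -v baserow_data:/baserow/data \
  -p 80:80 -p 443:443 \
  baserow/baserow:1.19.1
```

That one command stands in for what would otherwise be installing Python, Django, Redis, and a database by hand.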

During the course, I set up four different applications using Docker: Nextcloud, HedgeDoc, Baserow, and WBO. Each application used Docker and its docker-compose file a little differently, so it gave me an opportunity to troubleshoot a bunch of differently configured docker-compose files and, in general, feel a bit more comfortable running containerized apps.
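Despite the differences, those compose files all share the same basic shape. A minimal sketch (the service names, image tags, and variables here are generic illustrations, not taken from any one of those four apps):

```yaml
version: "3"
services:
  app:
    image: example/app:latest     # the application image
    ports:
      - "8080:80"                 # host:container
    environment:
      - DB_HOST=db
      - DB_PASSWORD=changeme
    volumes:
      - app_data:/var/lib/app     # persist data outside the container
    depends_on:
      - db
  db:
    image: mariadb:10.6
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
    volumes:
      - db_data:/var/lib/mysql
volumes:
  app_data:
  db_data:
```

Once you can read one of these, the differences between apps come down to which services they define, which variables they expect, and where they want their data volumes mounted; `docker compose up -d` does the rest.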

Of course, the next step beyond this piece of professional development is to think about when or where I might use this in the context of my institution. Right now, I am very happy using the shared hosting that is part of our Domain of One’s Own hosting package. In our traditional hosting package, we are limited to applications that run on Linux, Apache, MySQL, and PHP (commonly referred to as LAMP). Many applications built from the early 2000s to now share that common setup, because it was just so easy. When I look at usage in DoOO, we are predominantly using WordPress, Pressbooks, Commons in a Box OpenLab, Omeka, and Omeka S. One of the things I like about these applications is that they are ubiquitous and have longevity. But that also keeps us away from projects built on newer frameworks.

I see two possible use cases for Reclaim Cloud. The first is hosting a LAMP-based project that is taking up too much of the shared resources. The other would be hosting a project that isn’t based on LAMP at all. One issue, of course, is that the resulting projects will be more dependent on me or my team; when I set a project up in DoOO, I slowly transfer upkeep responsibilities to a project lead and make myself available for support. In my mind, I think of SUNY Oneonta’s Domain of One’s Own initiative as a series of four tiers, and Reclaim Cloud is the fourth and highest tier, where there are still (solvable) problems to figure out.

The first project that may benefit from being rebuilt and migrated to a containerized environment might be the SUNY Oneonta OpenLab, which has more and more student portfolios each semester.

One response to “Understanding Containers with Docker”

  1. Taylor Jadin

    Thanks for participating in the flex course, and for the blog post summary, Ed! I really like the tiers metaphor for Oneonta’s DoOO. I always think of Reclaim Cloud as sort of the escape hatch option for institutions who have DoOO: basically, if it won’t run on DoOO, you can be pretty much assured it will run on the cloud. But thinking in terms of tiers is maybe a more coherent metaphor 🙂
