Deploying a Ruby on Rails application to Google Kubernetes Engine: a step-by-step guide - Part 5: Conclusion, further topics and Rails extras


Update: I’ve now created a premium training course, Kubernetes on Rails, which takes some inspiration from this blog post series but updated with the latest changes in Kubernetes and Google Cloud and greatly simplified coursework based on feedback I got from these blog posts. All packaged up in an easy-to-follow screencast format. Please check it out! ☺️ - Abe

Neon Genesis-style congratulations

Welcome to the last post of this five-part series on deploying a Rails application to Google Kubernetes Engine. If you’ve arrived here out-of-order, you can visit the previous parts:
Part 1: Introduction and creating cloud resources
Part 2: Up and running with Kubernetes
Part 3: Cache static assets using Cloud CDN
Part 4: Enable HTTPS using Let’s Encrypt and cert-manager

Congratulations, we’ve finished deploying the application!

Conclusion

Docker was revolutionary, but it mainly gave us low-level primitives without a way to assemble them for production-ready application deployments. I hope through this tutorial I’ve shown that Kubernetes meets that need by providing the abstractions that let us express application deployments in logical terms, and that GKE is an excellent managed Kubernetes solution.

I’ll close with a great thought from Kelsey Hightower: Kubernetes isn’t the final word in a story that doesn’t end.

Thank you

HUGE thanks to my reviewers, Daniel Brice (@fried_brice) and Sunny R. Juneja (@sunnyrjuneja), for reviewing very rough drafts of this series of blog posts and providing feedback. 😍 They stepped on a lot of rakes so that you didn’t have to - please give them a follow! 😀

Any mistakes in these posts remain of course solely my own.

Topics for further exploration

This blog post turned into a novel, and yet there are still many topics that I didn’t cover well. Here are some you should check out on your own.

Web console, Stackdriver

We did a lot of work at the CLI in this series, but GCP’s web console is pretty nice, and there are a lot of features worth exploring there.

In particular I suggest checking out the Stackdriver features Logs, Error Reporting, and Trace. Error Reporting will require the service to be enabled:

$ gcloud services enable clouderrorreporting.googleapis.com
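
Once logs are flowing into Stackdriver you can also query them from the CLI. A minimal sketch (the resource type and filter depend on your cluster and GKE version, and "my-cluster" is a placeholder):

$ gcloud logging read 'resource.type="k8s_container" AND resource.labels.cluster_name="my-cluster"' --limit 10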

Declarative cloud provisioning

Terraform logo

We ran a whole lot of manual gcloud and gsutil CLI commands to provision cloud resources, which is unwieldy and error-prone for non-trivial projects.

An alternative is a tool like Terraform, in which you declaratively specify the resources you need and Terraform creates, modifies, or deletes resources to reach that state. Terraform plans can have output variables, which can be fed into other tools such as Kubernetes manifest templating.
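
We didn’t use Terraform in this series, but as a rough sketch of the workflow: you declare resources in .tf files, then drive everything with a handful of commands (the output variable name below is made up):

# download providers/modules for the configuration in this directory
$ terraform init
# preview what Terraform would create/modify/delete to reach the declared state
$ terraform plan
# actually make the changes
$ terraform apply
# read an output variable, e.g. to feed into manifest templating
$ terraform output cluster_endpoint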

Continuous Integration (CI) and Continuous Delivery (CD)

Manually submitting Docker image builds and then manually deploying them like we did is obviously not sustainable for a real project.

There are unlimited possibilities for automating the builds and deploys; a simple first step might be setting up a Container Builder build trigger to automatically build a Docker image when there’s a new push to the git repo.
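
To make that concrete, here’s a hedged sketch of what such an automated pipeline might run - essentially the same commands we ran by hand, where $SHA stands for the commit being built and "my-project" is a placeholder for your GCP project:

# build and push the image with Container Builder, tagged with the commit SHA
$ gcloud container builds submit --tag gcr.io/my-project/captioned-images:$SHA .
# roll the new image out to the Deployment
$ kubectl set image deployment/captioned-images-web captioned-images-web=gcr.io/my-project/captioned-images:$SHA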

Kubernetes-specific CI/CD tools worth mentioning include Keel and Jenkins X (I haven’t tried either one, but I’ve heard good things).

Helm

Helm logo

Helm is Kubernetes’s official package manager. We touched on Helm when we installed cert-manager, but it’s worth exploring further. Helm can also be useful for organizing your own project’s resources, and it comes with templating, so if nothing else it can replace the envsubst solution we used earlier.
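
As a sketch of what that looks like (the chart and release names here are made up), a chart scaffold can be generated, rendered locally, and installed like so:

# scaffold a chart with templated Deployment/Service manifests and a values.yaml
$ helm create captioned-images
# render the templates locally to inspect the generated manifests
$ helm template ./captioned-images
# install (or upgrade) a release, overriding values at deploy time
$ helm upgrade --install captioned-images ./captioned-images --set image.tag=v2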

Kubernetes manifest templating

Speaking of templating, I mentioned in an earlier footnote that there are many solutions worth investigating. Try to find the one that best fits your workflow.
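
For reference, the envsubst approach boils down to substituting environment variables into a manifest and piping the result to kubectl (deployment.yaml here stands in for whichever manifest you’re templating):

# substitute $VARIABLES in the manifest and apply the result
$ envsubst < deployment.yaml | kubectl apply -f -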

Firewalls

We didn’t touch on configuring firewalls, but that should be investigated for a production GKE cluster.

Kubernetes itself also has a Network Policy feature that may be worth checking out.
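
A reasonable starting point is auditing the rules GKE created automatically for the cluster (the filter is illustrative; GKE-managed rules are prefixed with "gke-"):

$ gcloud compute firewall-rules list --filter="name~^gke-"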

App Engine

GKE and Kubernetes give you a lot of power for deploying and managing your application, but also a lot of complexity. If you have a really simple application, it’s worth considering simpler PaaS-style alternatives.

In this vein GCP has App Engine, which supports several programming languages out of the box as well as custom container workloads (App Engine Flex). Here’s a nice article that can help you decide whether to use App Engine Flex or GKE.
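
For comparison (we didn’t use App Engine in this series), once you’ve written an app.yaml describing the service, a deploy is a single command:

$ gcloud app deploy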

References

Here are some miscellaneous links I found useful while learning Kubernetes/GKE that I couldn’t find a relevant place to link to earlier in this post.

Code Cooking: Kubernetes

Managing Rails tasks such as ‘db:migrate’ and ‘db:seed’ on Kubernetes while performing rolling deployments

Global ingress in practice on Google Container Engine — Part 1: Discussion

Kubernetes Engine Samples

Understanding kubernetes networking: pods

Extras for Rails developers

Rails logo

In the interest of keeping the tutorial as framework-agnostic as possible, I moved the Rails-specific notes to the end. If you’re a Rails developer and have some questions or concerns, hopefully this section addresses them.

Why an nginx reverse proxy?

Some folks will recommend configuring Rails to serve static assets itself and simply putting a CDN in front to cache them, so that only the first request for each asset (which actually hits Rails) is slow. However, I like that nginx reduces the need for a bunch of Rack middleware (e.g. for enforcing SSL access, gzip-compressing responses, aborting slow requests), supports features Rails/Rack doesn’t quite have yet (like on-the-fly Brotli compression), and lets you opt out of using a CDN while still having decent asset load performance.

The cost, of course, is running another container in each application server Pod. I think the marginal extra cost in resources and deployment complexity is worth it, but I appreciate that others may disagree.

Opening a remote Rails console

The simplest way to open a Rails console is to attach to a running Rails server that’s part of a Deployment using kubectl exec:

$ kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
captioned-images-web-588759688d-8dlxp   3/3       Running   0          1d
captioned-images-web-588759688d-x87qr   3/3       Running   0          1d
$ kubectl exec -it captioned-images-web-588759688d-8dlxp -c captioned-images-web -- /var/www/docker/docker-entrypoint.sh bash
web@captioned-images-web-588759688d-8dlxp:/var/www$ bundle exec rails c
Loading production environment (Rails 5.1.4)
irb(main):001:0> CaptionedImage.count
D, [2018-03-29T02:07:43.331468 #54] DEBUG -- :    (14.2ms)  SELECT COUNT(*) FROM "captioned_images"
=> 1
irb(main):002:0> CaptionedImage.first
D, [2018-03-29T02:11:33.687385 #54] DEBUG -- :   CaptionedImage Load (6.6ms)  SELECT  "captioned_images".* FROM "captioned_images" ORDER BY "captioned_images"."id" ASC LIMIT $1  [["LIMIT", 1]]
=> #<CaptionedImage id: 1, caption: "test", image_data: "{\"original\":{\"id\":\"82e9768a035d39050eaf01689537fdf...", created_at: "2018-03-26 19:05:58", updated_at: "2018-03-26 19:05:59">
irb(main):003:0>

A better way would be to create a one-off Pod, copying the Pod template/spec from the Deployment, and run the Rails console on that Pod instead, since eating into the resources of a running web server Pod that’s handling traffic is not the best idea.
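
Here’s a hedged sketch of that one-off approach using kubectl run (the image path is a placeholder, and note that a bare kubectl run won’t carry over the Deployment’s env vars and Secrets, so you’d need to pass those in or copy the Pod spec properly):

# start a throwaway Pod from the same image and open a console in it
$ kubectl run rails-console --rm -it --restart=Never --image=gcr.io/my-project/captioned-images:latest --command -- /var/www/docker/docker-entrypoint.sh bundle exec rails console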

Dockerfile

A couple things to note about the Dockerfile:

  • Ruby is compiled using jemalloc to improve memory usage and performance
  • Brotli compression is done by a custom Python script that runs after the normal rake assets:precompile step rather than being integrated into the asset pipeline. There is a gem that can add Brotli compression directly to Sprockets, but it depends on a newer version of Sprockets that I find buggy, so for now I still use my own script (a rough CLI-based alternative is sketched just after this list)
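
If you’d rather not maintain a script, a similar effect can be approximated with the standalone brotli CLI after precompilation (a rough sketch, not the script the demo app actually uses):

# precompile assets, then write a .br sibling for each compressible file
$ bundle exec rake assets:precompile
$ find public/assets -type f -regex ".*\.\(js\|css\|svg\)" -exec brotli -k {} \;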

I also included a Makefile like I do on most projects, so that I can just type make build to build the image or make push to push it without having to remember what I named the Docker image or which Docker registry I’m using. I usually also include a make test target that acts as a poor man’s CI, building the image and running rake test via docker-compose, but this app doesn’t have tests since testing isn’t the focus of the blog post.

Useful gems

A few lesser-known gems I used in the demo app that I think deserve some props:

  • Shrine is extremely pleasant for handling image uploads compared to previous experiences I’ve had with Carrierwave, Paperclip, and Refile.
  • rails-pulse is a simple gem that handles the health checking by setting up a route that does a SELECT 1 to ensure the database is up.
  • ENVied is really useful to ensure the app fails fast (at bootup) if I’m missing a required environment variable.
