by Rich Burroughs
In this series, we’re looking at alternatives to using Docker Compose for building apps that run in Kubernetes clusters. While Compose is a handy way to stand up apps locally, there are advantages to running your apps in a Kubernetes environment while you develop. Your environment will be more like your production environment, and you can work with Kubernetes-specific objects and manifests.
Previously in this series we’ve covered Tilt and Skaffold. Next up in the series is DevSpace, one of the other popular open source tools in the space. For transparency’s sake, I work for Loft Labs, which maintains DevSpace.
DevSpace is client-only, and it has a lot of use cases for developing in Kubernetes. The client can be used to develop in local and remote Kubernetes clusters, build containers, and integrate nicely into your CI/CD pipelines.
DevSpace is configured with a devspace.yaml file, and it supports port forwarding to connect to apps running in your cluster, as well as reverse port forwarding.
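As a rough sketch, both kinds of port forwarding are declared under the `dev.ports` section of devspace.yaml. The image name and port numbers below are placeholders, and the field names follow the DevSpace v5-style syntax; check the DevSpace configuration docs for your version:

```yaml
# Hypothetical devspace.yaml fragment (placeholder names and ports)
dev:
  ports:
  - imageName: app            # container selected by an image defined elsewhere in devspace.yaml
    forward:                  # localhost:8080 -> container port 3000
    - port: 8080
      remotePort: 3000
    reverseForward:           # traffic from the container back to your local machine
    - port: 9229
      remotePort: 9229
```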
Since DevSpace is a single Go binary, installation is very simple. I’m on a Mac, so I installed it with Homebrew:
$ brew install devspace
There are other options in the installation instructions, like using npm, yarn, or just downloading the correct binary to your system.
Initializing Your Project
To set up your project to run with DevSpace, use the init subcommand:
$ devspace init
The DevSpace client will ask a few questions about deploying and building your project and set up a basic devspace.yaml file based on your answers.
There are several options for deploying your project with DevSpace, like using an existing Helm chart, Kubernetes manifests, existing Kustomize configuration files, or using the Component Helm Chart. This DevSpace feature builds a Helm chart on the fly based on your devspace.yaml file. If you want to try the DevSpace quickstart, using the Component Helm Chart is recommended.
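As an illustrative sketch of that last option, a Component Helm Chart deployment generated by `devspace init` looks roughly like this in devspace.yaml (the deployment name and image are placeholders):

```yaml
# Hypothetical devspace.yaml fragment using the Component Helm Chart
deployments:
- name: quickstart
  helm:
    componentChart: true             # build a Helm chart on the fly from these values
    values:
      containers:
      - image: myregistry/quickstart # placeholder image name
      service:
        ports:
        - port: 3000                 # placeholder service port
```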
For building your containers, you can use an existing Dockerfile or a custom build process, and you can optionally push the images you build to a registry. However, one of the most powerful features of DevSpace is that you can skip image building entirely and instead use hot reloading, which refreshes your running containers without rebuilding the container image. We’ll explore this feature in more detail later on.
After running devspace init, you will have a devspace.yaml file to get started with, which you can further configure to suit your needs.
Entering Development Mode
To start developing with DevSpace, you first need to select the kube-context and namespace you will be working with. DevSpace makes this easy with the following commands:
$ devspace use context
$ devspace use namespace
Those two commands are convenient for people who work with multiple clusters and namespaces. Setting the namespace with devspace use namespace means you don't have to pass it in with kubectl commands. If you don't want to use the interactive menu, you can pass the context name or namespace as an additional argument (like devspace use namespace [NAMESPACE]).
Once you’ve set your context and namespace, you enter development mode with this command:
$ devspace dev
This will get your project running in your cluster and set up any port forwarding you’ve defined. At the end of the command, DevSpace opens a terminal similar to kubectl exec, so you can start your application inside the container while still accessing it in your browser on localhost, thanks to the port forwarding running in the background.
One of the significant features that DevSpace offers is container hot reloading. This means that DevSpace can update your running app without building a new container image each time you make a change. When you save a change in your code using your local IDE, DevSpace will automatically sync the changed files to your container that’s already running and can even be configured to (rebuild and) restart your app inside the container to pick up the changes.
Hot reloading can have a huge impact on developer productivity, as you can imagine. It provides faster feedback and improves cycle times, which also impacts developer satisfaction and happiness. You can try out hot reloading yourself using one of the quickstart projects, or you can see an example in this short YouTube video.
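A minimal sketch of how file sync for hot reloading might be declared in devspace.yaml follows; the image name and paths are placeholders, and the exact options (including the restart behavior) vary by DevSpace version, so treat this as an outline rather than a definitive config:

```yaml
# Hypothetical devspace.yaml fragment (placeholder image name and paths)
dev:
  sync:
  - imageName: app            # container to sync into, selected by image name
    localSubPath: ./src       # local folder to watch for changes
    containerPath: /app/src   # destination inside the running container
    excludePaths:
    - node_modules/           # skip heavy directories to keep sync fast
    onUpload:
      restartContainer: true  # restart the app in-container after files change
```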
Developing Microservices with Kubernetes
When you’re working on a microservice, questions about how to handle service dependencies usually come up. If your service depends on APIs provided by an upstream service, do you need to run that other service too? And how do you manage it? This question often comes up in testing, too, regarding which dependencies to mock and which to run.
DevSpace makes it easy to set up multiple apps in different Git repositories to run alongside your app. You can spin up the entire environment, including your dependencies, by running devspace dev. Docker Compose can't work with multiple Git repos, but with DevSpace you can set up a devspace.yaml in each Git repository and then add dependencies between the repositories, so DevSpace knows which services require each other when you start developing them.
Defining a dependency from one repo to another is pretty straightforward in devspace.yaml, as shown in this abridged example (the repository URL is a placeholder):
dependencies:
- name: api-server
  source:
    git: https://github.com/example-org/api-server
Hooks and Custom Commands
DevSpace offers a lot of extensibility, and two features that enable that are hooks and custom commands.
Hooks are actions that you want to occur in your build and deploy pipeline. Things you can do with hooks include executing commands, either on the local machine or in a container, uploading and downloading files, printing container logs, and more. Here’s an example of defining hooks in your devspace.yaml file, from the docs:
hooks:
# Execute the hook in a golang shell (cross operating system compatible)
- command: "echo before image building"
  events: ["before:build"]
# Execute the hook in a golang shell (cross operating system compatible)
- command: |
    echo before image building
  events: ["before:build"]
# Execute the hook directly on the system (echo binary must exist)
- command: "echo"
  args: ["before image building"]
  events: ["before:build"]
The ability to chain actions together with hooks in your build/deploy processes is very powerful.
But what if you want to do other repeatable actions while developing your project? That’s where custom commands come in. You can create custom commands to do all kinds of tasks and then execute them at any time with the devspace run command, like:
$ devspace run [COMMAND NAME]
And remember how we talked about working with dependencies in other Git repos? You can run the custom commands for those dependencies you’ve pulled in, too. Let’s say you have a service with a database as a dependency: one custom command could reset the database to a set of test data, and another could execute a database migration, for example.
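A hedged sketch of what such a custom command might look like in devspace.yaml (the command name and the script it calls are made up for illustration):

```yaml
# Hypothetical devspace.yaml fragment defining a custom command
commands:
- name: migrate-db            # run with: devspace run migrate-db
  command: ./hack/migrate.sh  # placeholder script in your repo
```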
Hooks and custom commands allow you to tailor your DevSpace setup to your specific projects and languages and define your dev workflow as code. This is powerful, as it creates a shared workflow for your team that’s self-documenting.
Web UI For Kubernetes Development
While many teams prefer to use the DevSpace CLI to do a lot of their work, a web UI is also available to get a quick overview of what is running inside the current namespace.
With the DevSpace UI, you can view the logs of your running containers, view your build and deployment configurations, and view the custom commands you’ve defined in devspace.yaml. There’s also a link to open a terminal to a running container. You can do that from the command line, too, by running devspace enter.
DevSpace is a powerful and very flexible tool. People use it to automate even very complicated dev workflows, and hooks and custom commands allow you to fit it to your needs.
We talked earlier about the idea of encoding your dev workflows as code, and this is a compelling concept. Think about it as Infrastructure as Code but for your development workflow. Your devspace.yaml file becomes the source of truth for how you develop, and having your workflow defined in code makes onboarding new team members easier. It’s self-documenting.
All of the tools that we’ve looked at in this series are open source and free, and they all offer different benefits. If you’re developing apps that run in Kubernetes clusters, it would be worth your while to look at the tools in this space and find the one that works best for your team.
Originally published at https://loft.sh.