Let me start out by saying, I love the idea of containerized storage.  It makes storage portable and lets you worry less about which disks carry which data, especially if you plan to move between bare metal and cloud hosting, or even between different bare metal providers.  There are a lot of options to choose from, however, and it makes sense to kick the tires on a couple before selecting one.  As with most software technologies, There Is No Silver Bullet.  Each solution adds its own complexity and makes different trade-offs, so it's worth studying one before deploying it to production.  Right?

I thought so.  Well, I work on a Windows 10 Pro machine, and I wanted to get a simple K8s cluster running locally so I could try a few of these storage systems out.  Here's what I found:

  • Windows Subsystem for Linux 2 (WSL2) comes with various flavors of Linux that you can deploy quickly, and it performs extremely well.  While it's awesome for basic Linux work and even handles Kubernetes deployments nicely, it will not work for most storage CSI drivers.  At least, the slimmed-down Ubuntu Server 18.04 LTS it ships completely lacks the /run/udev system that is necessary for discovering devices, and OpenEBS and Longhorn both need that to install properly.  It's a dead end.
  • If not otherwise instructed to use WSL2, Docker Desktop for Windows automatically creates a Hyper-V VM for you based on LinuxKit.  This is a stripped-down kernel that lacks many features and is essentially impossible to ssh into, and even if you trick your way in with a locally mounted folder and network layer (the -v /:/host and chroot trick), you won't be able to install open-iscsi on it anyway, because, as I said, it's stripped down; you'd have to rebuild the image to get it in there.  Pretty much every major storage CSI driver relies on open-iscsi to attach block devices on the local machine, so you're SOL here too.
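
If you'd rather probe an environment up front than discover these gaps the hard way, a quick shell sketch like this (the checks are my own, derived from the failures above) will tell you whether the storage prerequisites are even present:

```shell
#!/bin/sh
# Probe for the pieces storage CSI drivers typically need.
# Run inside the Linux environment you're evaluating (WSL2, a VM, etc.).

check() {
  # $1: human-readable label, $2: command to try
  if eval "$2" >/dev/null 2>&1; then
    echo "OK:      $1"
  else
    echo "MISSING: $1"
  fi
}

check "/run/udev (device discovery)"  "test -d /run/udev"
check "udevadm (udev management)"     "command -v udevadm"
check "iscsiadm (open-iscsi client)"  "command -v iscsiadm"
```

If the first line comes back MISSING, don't bother going further; that's the WSL2 situation described above.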

Well, where did I end up?  Installing an old-school VM with a full-fat version of Ubuntu Server 18.04 LTS.  This is, for all intents and purposes, a bare metal machine, running at somewhat lower performance, but with all the devices and libraries available to actually support a Kubernetes installation.  As far as I can tell, this (or minikube, which is essentially the same thing) is all you can do on a Windows machine to run a Linux K8s cluster properly.  I blew a week figuring this out, trying every possible alternative only to hit a roadblock.
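
For what it's worth, once you're on that full-fat VM, getting the storage prerequisites in place is just a couple of commands.  A sketch (package and service names are for Ubuntu 18.04):

```shell
# On the Ubuntu Server VM: install the iSCSI initiator that Longhorn and
# OpenEBS expect, and start its daemon.
sudo apt-get update
sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid

# The full distro ships udev, so /run/udev should already be populated.
test -d /run/udev && echo "udev is present"
```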

One helpful tip: by default, dockerd listens on a local unix:// domain socket, so the safest way to get the docker CLI working from the Windows command shell is to set up a passwordless ssh key for a user on the VM, and add that user to /etc/sudoers with visudo (make sure you add your line to the end!) as a passwordless sudoer.  Then you can set up a docker context that looks like this:
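
Here's a sketch of the whole setup, assuming a user named ubuntu on a VM reachable at k8s-vm (both names are placeholders; substitute your own):

```shell
# All names here (ubuntu, k8s-vm) are placeholders.

# 1. From Windows: create a key and install it on the VM for
#    passwordless ssh.
ssh-keygen -t ed25519 -f ~/.ssh/k8s-vm -N ""
ssh-copy-id -i ~/.ssh/k8s-vm.pub ubuntu@k8s-vm

# 2. On the VM: run `sudo visudo` and add this as the LAST line,
#    so nothing later in the file overrides it:
#      ubuntu ALL=(ALL) NOPASSWD:ALL

# 3. Back on Windows: create and select a docker context that
#    tunnels the CLI over ssh.
docker context create k8s-vm \
  --description "dockerd on the Ubuntu VM" \
  --docker "host=ssh://ubuntu@k8s-vm"
docker context use k8s-vm
```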


You can then execute docker commands and have it ssh to the VM on your behalf and talk to the docker daemon over there.

Hope this helps someone else, even if it's just by confirming your suspicion that what you're trying to do isn't going to work.