
Peter W

LTSP on LXD: A Fun Dev Trip

Recently I've been excited about setting up a little 'homelab' to experiment with virtual machines (VMs), containers, Linux admin, and networking.

There's just two problems: first, I don't yet have hard drives for my machines and second, I don't yet have a great place for the machines to be set up.

To solve the first problem (lack of drives), I've been looking at a project I first played with years ago - LTSP, the Linux Terminal Server Project. With LTSP, "maintaining tens or hundreds of diskless clients is as easy as maintaining a single PC". Isn't that great?! The 'terminals' (client machines) will boot over the network without needing any permanent storage attached.

To solve the second problem (lack of space), I'm initially going to entirely skip setting up any physical client machines at all. The idea is to run both the LTSP server and the LTSP client "machine" as containerized instances inside my dev box (e.g. laptop). There are a number of ways this can be done, and a lot of homelabbers might use Proxmox if they have that running already, but today I'm going to use software from Canonical (the company behind Ubuntu Linux) & others called "LXD". Later, after getting the systems set up 'virtually', I could plug in actual physical machines with very few changes and have it all just work. (That's the idea anyway!)

In this post, I'll be talking about using LXD with LTSP, but if you had another set of instances you wanted to run for development purposes instead of LTSP, it should be simple to see how you'd do that. A note on terminology: I'll use "instances" to refer to either a virtual machine instance or a system container instance.

Article © 2025. All rights reserved. Not for AI/ML training or data-mining use.

Why LXD?

We use LXD for its ability to manage virtual machines and system containers.

If we wanted to run just a single application in a container, we'd use something like Docker.

But in this case, we need to run an entire Linux operating system in a container, which is possible via "LXC" ("Linux Containers"), which builds on the Linux kernel's namespace and cgroup features. We'll get an LTSP client set up as a VM (booting via iPXE) and the LTSP server set up as a system container.

LXD provides both KVM-based VMs and system containers based on LXC – that can run a full Linux OS – in a single open source virtualisation platform. ref

Note: LXC stands for "Linux Containers", so naturally LXD stands for "Linux Container Daemon"; if "lxd" vs "lxc" seems confusing, read this.

This setup also allows us to do development on a personal/development machine, without needing to do something like install Proxmox on bare metal. That saves us from needing a dedicated 'server' machine.

Sidenote:

  • In 2023, Canonical took over more direct control of the LXD project (which they had sponsored from its beginning); this prompted the creation of a community fork, Incus. I assume you can use either one fairly interchangeably, and these notes may help if you wanted to do that.

Why LTSP?

As previously mentioned, the Linux Terminal Server Project makes it easy to manage multiple diskless computers. That's why I'm using it here, but it could also be useful to anyone looking to reduce costs or admin effort in e.g. a lab, a school, a home context, or heck, even a small research cluster!

LTSP can be set up in a variety of ways. The hypothetical LTSP network I'll aim to emulate for this project is similar to what is shown below:

Example illustration of LTSP network (illustrated via FossFlow). Shown in the network is an internet router, connected to a server, which connects to a switch, which then connects to multiple PCs

(We'll simplify this a bit: since all our hypothetical clients are identical, to get this working we'll just start with a single client.)

Prerequisites

If you'd like to follow along, here's what you'd need:

  1. A computer running Linux, with internet & root/sudo access. It may be easiest if you use some kind of Debian or Ubuntu variant.

  2. Enough space (on the above computer) to store the VM images.

That's it! In a future post, I'll share more on testing this out with real client machines.

Sidenote: you could even follow along without owning a Linux computer, by running a Linux VM on a Mac or Windows box, or even just online using something like https://killercoda.com/playgrounds/scenario/ubuntu (I haven't tried it, so if you do, let us know if it works!).

For this guide, we'll be using Linux Mint because that would be good in a school context. (Also because initially I tried the latest Ubuntu, 25.10, but ran into conflicts getting cloud-init and LTSP to play nice together, so we'll save you the headaches! I may write up how to work around those in a future post.)

Virtual Network Overview

The broad system architecture of this is pretty simple - it's just two "computers", one for the server and one client (and later we could add any number of clients easily). As always though, the devil is in the details.

The setup below aims to do two things: 1) it roughly matches the physical network topology illustrated above (where the server has two connections, one to the internet and one to a switch), and 2) it helps us get around an issue you might otherwise run into when getting an LTSP 'network' working with LXD. Note also that the server can be either a VM or a "system container", but the client must be a VM (so it can run the full boot process).

LXD network diagram

For the unfamiliar, lxdbr0 and lxdbr1 are "bridge" devices created on the host by LXD, which can be thought of as virtual switches: they allow the instances to communicate with anything else attached to the bridge. Thus, the server and client in our setup will communicate over lxdbr1. These LXD-managed devices also come with extra goodies, like the ability to provide instances with IP addresses via DHCP, or to give them internet access through the host using NAT. Using NAT and DHCP, our lxdbr0 is effectively acting like the internet router in the physical network topology example shown earlier.
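If you'd like to poke at these bridges yourself, then once LXD is installed (Step 1 below) you can inspect the ones it manages, for example:

lxc network list
lxc network show lxdbr0   # shows the bridge's subnet, NAT and DHCP settings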

Now that you know where we're aiming, let's get started!

Step 1: Install LXD

Below you'll see the bare minimum commands which should work on a modern Ubuntu-based system. You may need to refer to the LXD docs and in particular the 'first steps' if your situation differs.

sudo snap install lxd
sudo lxd init # this must be done as root

You'll then be prompted to configure LXD. Perhaps the only change I'd recommend is to use "dir" for the storage backend (this seems the simplest setup for development/testing purposes, as it uses a local directory for storage).

sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (ceph, lvm, pure, btrfs, dir, powerflex, zfs) [default=zfs]: dir
Would you like to connect to a MAAS server? (yes/no) [default=no]: no
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
Would you like the LXD server to be available over the network? (yes/no) [default=no]:    
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]: 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:

If you later want to see the config, run lxd init --dump.

To make life easy:

‘lxc’ commands can be run as any user who is a member of group lxd ref

So we run:

sudo adduser `whoami` lxd 
# note: above, `whoami` will insert your username

newgrp lxd
# note, 'newgrp' only needed to avoid having to logout/login
# for the group addition to take effect

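To confirm your current shell now sees the group, you can check with:

id -nG | grep -w lxd   # should print a line of group names containing "lxd"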

WARNING: re: adding a user to lxd group: "you should only give such access to users who you'd trust with root access to your system." LXD Security

Finally for the host setup, we will want a way to view the VGA video output from our client:

sudo apt install spice-client-gtk

Step 2: Access the LXD web UI

Let's get LXD's admin interface working, so you can use it to easily inspect or change anything we create.

Follow this guide to set up LXD admin web UI access.

Make sure that your LXD server is exposed to the network. You can expose the server during initialization, or afterwards by setting the core.https_address server configuration option.

lxc config set core.https_address :8443

Now we should be able to open a web browser to https://localhost:8443 - you'll likely see a security warning due to the self-signed certificate, so you'll need to click through to accept the risk and continue.
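If the page doesn't load at all, it's worth confirming the listen address actually got set:

lxc config get core.https_address   # should print ":8443"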

Next up, you'll need to follow the instructions on the LXD admin website for how to set up a client certificate for TLS login:

LXD admin web console prompting the user to download a certificate and install it in the web browser

Once logged in with the TLS cert, you'll be shown some instructions for generating a trust token. It should be something like below (and you'll be asked to paste the generated token into the web admin page).

lxc auth identity create tls/lxd-ui --group admins

It'll be easiest to do the above steps directly on your Linux box; if you try accessing the UI from another machine on your network, you might get blocked by the Linux machine's built-in firewall. But if you do want to, say, connect to the Linux box from your desktop at 10.0.0.2, and your Linux machine uses ufw (as Ubuntu does), you could add a rule like the one below.

sudo ufw allow from 10.0.0.2 to any port 8443 proto tcp comment 'Access LXD from desktop on IP=10.0.0.2'

By this point, you should be able to access the web UI (which will look similar to the screenshot below once our instances are created in a later step):

LXD web admin UI showing status of running containers/vms

Note: The GUI can be very helpful, but if anything goes wrong, don't worry - we can actually do everything via the command line.

Step 3 [optional]: create a test instance

Let's create and run a virtual machine just to confirm that the basic LXD system is working. The purpose of this step is also to familiarize you with some of the lxc commands (so you can skip it if you're already familiar).

Run the command below, and be ready to go for a walk as the image download may take a while!

lxc launch ubuntu:25.10 sample-vm --vm
# alternatively, "... launch ubuntu:questing ..."
# since Ubuntu 25.10 == Ubuntu Questing, outcome is the same

Note the "--vm" (virtual machine) flag. Virtual machines will use more resources than a system container. VMs also include the kernel files that we may need to share with clients over the network -- if you use a container, you'll also need to install a kernel.

Verify the VM has launched:

lxc list -c nstm # for columns below; "lxc list" for default columns
+-------------+---------+-----------------+--------------+
|    NAME     |  STATE  |      TYPE       | MEMORY USAGE |
+-------------+---------+-----------------+--------------+
| sample-vm   | RUNNING | VIRTUAL-MACHINE | 499.82MiB    |
+-------------+---------+-----------------+--------------+

If desired, the emulated hardware can be configured like so (which could be useful, for example, to test out a limited-resource client VM):

lxc config set sample-vm limits.cpu=1
lxc config set sample-vm limits.memory=500MiB
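If you want to double-check that the limits were applied, something like this should show them:

lxc config show sample-vm | grep limits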

You can log into the VM (sample-vm):

lxc exec sample-vm bash


Once logged in, verify the VM is able to connect to the internet:

root@sample-vm:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu questing InRelease
Get:2 http://security.ubuntu.com/ubuntu questing-security InRelease [136 kB]
Hit:3 http://archive.ubuntu.com/ubuntu questing-updates InRelease
Hit:4 http://archive.ubuntu.com/ubuntu questing-backports InRelease
Get:5 http://security.ubuntu.com/ubuntu questing-security/main amd64 Components [208 B]
Get:6 http://security.ubuntu.com/ubuntu questing-security/universe amd64 Components [208 B]
Get:7 http://security.ubuntu.com/ubuntu questing-security/restricted amd64 Components [212 B]
Get:8 http://security.ubuntu.com/ubuntu questing-security/multiverse amd64 Components [212 B]
Fetched 136 kB in 2s (54.6 kB/s)
All packages are up to date.
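If you don't want to keep the test VM around, it's safe to clean it up before moving on:

lxc stop sample-vm
lxc delete sample-vm   # or: lxc delete --force sample-vm to stop and delete in one go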

Step 4: Create the LTSP network

Before creating the server and client, we need to create the lxdbr1 bridge/network that we will use to connect those instances. The settings below use the LTSP standard/example subnet (192.168.67.0/24, with the bridge at 192.168.67.1). We don't need NAT, and it's important that we disable LXD's DHCP, as LTSP will be providing a PXE-enabled DHCP server so the clients can network boot.

lxc network create lxdbr1 \
  ipv4.address=192.168.67.1/24 \
  ipv4.nat=false \
  ipv4.dhcp=false \
  ipv6.address=none
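You can double-check the new bridge's settings with:

lxc network show lxdbr1   # should show ipv4.dhcp: "false" and ipv4.nat: "false"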

You might wonder why we don't need to create lxdbr0. That's because it was already created for us by LXD when we ran sudo lxd init; you'll recall it asked these questions, and the default answers set you up with a local network bridge:

Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:

Running lxc network list should confirm that we now have our two bridge networks ready, so we're ready to create our instances.

Step 5: Create & configure the server container

Create the server container:

lxc init images:mint/xia ltsp-server 

To get LTSP to work in a server container, we'll need to relax some of LXD's security. Only do this if you trust everything running in the container, and are comfortable with something that is described as "not safe at all" and is strongly discouraged -- but I'm OK with this for now, since this is mainly for a personal development purpose, and I've explored other options. Please act responsibly if you are using any of this in production!

lxc config set ltsp-server security.nesting true
lxc config set ltsp-server security.privileged true

We also need to run this little 3-step process to set up the container so it can use host loop devices (note: this also naturally reduces the security benefits of containerization, so again - act responsibly. If anyone has better ideas please share!)

# Set raw.lxc for loop device permissions
lxc config set ltsp-server raw.lxc "lxc.cgroup2.devices.allow = b 7:* rwm
lxc.cgroup2.devices.allow = c 10:237 rwm"

# 1) Add loop-control device
lxc config device add ltsp-server loop-control unix-char path=/dev/loop-control source=/dev/loop-control

# 2) Count loop devices on host
HOST_LOOP_COUNT=$(ls /dev/loop[0-9]* 2>/dev/null | wc -l)
echo "Found $HOST_LOOP_COUNT loop devices on host"

# 3) Add matching number of loop devices to container
for i in $(seq 0 $((HOST_LOOP_COUNT - 1))); do
  if [ -e /dev/loop$i ]; then
    lxc config device add ltsp-server loop$i unix-block path=/dev/loop$i source=/dev/loop$i
  fi
done

Optionally, bump up the resources allocated to the server. I do this just so that later, when we build a compressed image of the server's file system, it'll run quickly, but adjust these as you see fit:

lxc config set ltsp-server limits.cpu=5
lxc config set ltsp-server limits.memory=3GiB

This can also be set in the web UI via Instances → [Instance name] → Configuration → Resource limits, shown below:

Setting LXD Instance Resource Limits

The server will ultimately have two (virtual) network interface cards, one attached to each of the bridges. The first NIC will take its IP address from lxdbr0 (which uses LXD's DHCP provider), but the second NIC will be attached to lxdbr1, which has DHCP disabled, so we'll need to assign a static IP address to this second NIC. (If LXD's DHCP were enabled on the bridge, we'd be able to have LXD provide a pre-set IP, but with DHCP disabled we'll have to use the distribution's standard method for assigning an IP from within the server container.)
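(For reference only, since we've disabled DHCP on lxdbr1: on a bridge where LXD's DHCP is enabled, you could instead pin the address from the LXD side, once the eth1 device exists, with something roughly like this.)

lxc config device set ltsp-server eth1 ipv4.address=192.168.67.2   # not needed for our DHCP-less setup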

Connect eth1 to the lxdbr1 bridge (LTSP internal network). Note that eth0 would have already been connected to lxdbr0 by default when our container was created.

lxc config device add ltsp-server eth1 nic \
  network=lxdbr1 \
  name=eth1
lxc start ltsp-server
# startup is very quick (compared to a VM)

At this point you can confirm that the server has an IP address (which came from DHCP) on eth0, but has no IPv4 address on eth1:

lxc exec ltsp-server -- ip --brief address
lo     UNKNOWN  127.0.0.1/8
eth0   UP       10.81.43.103/24 ... <----- comes from LXD's DHCP
eth1   UP        ... <--- no IPv4 address
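This is also a convenient point to confirm the loop devices we passed through earlier are visible inside the container (assuming your host has at least /dev/loop0):

lxc exec ltsp-server -- ls -l /dev/loop-control /dev/loop0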

Create a netplan file so eth1's static IP address will persist across reboots:

lxc exec ltsp-server -- bash -c 'cat > /etc/netplan/60-ltsp-static.yaml << EOF
network:
  version: 2
  ethernets:
    eth1:
      addresses:
        - 192.168.67.2/24
EOF'

Fix permissions so netplan won't complain:

lxc exec ltsp-server -- sh -c 'chmod 600 /etc/netplan/*.yaml'

And finally, apply this change:

lxc exec ltsp-server -- netplan apply
# may give "WARNING:root:Cannot call Open vSwitch: ovsdb-server.service is not running." but can be ignored

Then you'll be able to verify the change took effect; let's look at just the IPv4 addresses (-4):

lxc exec ltsp-server -- ip --br -4 a
lo      UNKNOWN  127.0.0.1/8
eth0    UP       10.81.43.103/24
eth1    UP       192.168.67.2/24 # <-- now set

Step 6: Create the client VM

Create the client VM as an empty instance (we don't need a disk image, as we will be netbooting):

lxc init ltsp-client --vm --empty
# emulate a low-resource client (optional):
lxc config set ltsp-client limits.cpu=1
lxc config set ltsp-client limits.memory=500MiB

We need to disable UEFI Secure Boot so the VM will boot up. I'm asserting that should be fine, since 1) this writeup is meant for a developer context, and 2) we control the VMs and the virtual network between them. But if for some reason you're not comfortable with this, here are some resources.

lxc config set ltsp-client security.secureboot=false

Ensure the eth0 device is attached to our LTSP network bridge:

lxc config set ltsp-client agent.nic_config=true
lxc config device add ltsp-client eth0 nic \
  network=lxdbr1 \
  name=eth0

At this point, the client should be created but not yet started. Verify:

lxc list -c nst
+-------------+---------+-----------------+
|    NAME     |  STATE  |      TYPE       |
+-------------+---------+-----------------+
| ltsp-client | STOPPED | VIRTUAL-MACHINE |
+-------------+---------+-----------------+
| ltsp-server | RUNNING | CONTAINER       |
+-------------+---------+-----------------+

Super. However, we won't/can't start the client just yet, because we first need to set up the LTSP software and config back on our server.

Step 7: Install LTSP on the server container

We won't deviate much from the standard LTSP docs, except for a couple of things.

The first wrinkle is that (as noted here) it is recommended to use an Ubuntu LTS release for production and use the PPA, but since we're using Linux Mint we'll skip adding the PPA.

While we could do lxc exec ltsp-server -- <command>, we can also just run a bash shell on the server, so we'll do that (note the prompt will change to reflect that we're logged in):

lxc exec ltsp-server bash

<for commands below, we're in our server's bash shell>

root@ltsp-server# # skipping this: add-apt-repository ppa:ltsp/ppa
root@ltsp-server# sudo apt update

Ref

The second deviation from the docs is that we'll need to install the kernel files. Typically containers don’t have these files as they don’t need them, but since we are using the server’s file system as the source of our client image, we will need these so the client can boot.

root@ltsp-server# apt install --no-install-recommends linux-generic initramfs-tools

Install the LTSP server packages (docs ref). Note that (per the docs) we install ipxe rather than the ltsp-binaries package, since we're not using the PPA.

root@ltsp-server# apt install ipxe 
root@ltsp-server# apt install --install-recommends ltsp dnsmasq nfs-kernel-server openssh-server squashfs-tools ethtool net-tools epoptes

During apt install you may observe a failure to start dnsmasq (see below). That is safe to ignore, for reasons explained here.

Setting up dnsmasq (2.90-0ubuntu0.22.04.1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/dnsmasq.service → /lib/systemd/system/dnsmasq.service.
Job for dnsmasq.service failed because the control process exited with error code.
See "systemctl status dnsmasq.service" and "journalctl -xeu dnsmasq.service" for details.
...
Starting dnsmasq - A lightweight DHCP and caching DNS server...
dnsmasq: failed to create listening socket for port 53: Address already in use
dnsmasq.service: Control process exited, code=exited, status=2/INVALIDARGUMENT
dnsmasq[6780]: failed to create listening socket for port 53: Address already in use
...
systemd[1]: dnsmasq.service: Failed with result 'exit-code'.
systemd[1]: Failed to start dnsmasq - A lightweight DHCP and caching DNS server.

We're ready to let LTSP configure dnsmasq (which sets up DHCP with PXE network booting for the clients). Read the docs, but for this setup we'll just need to run:

root@ltsp-server# ltsp dnsmasq
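If you're curious, you can inspect what that generated and confirm dnsmasq is now running (the config file name below is what my LTSP version used; yours may differ slightly):

root@ltsp-server# cat /etc/dnsmasq.d/ltsp-dnsmasq.conf
root@ltsp-server# systemctl status dnsmasq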

And let's create a user. Below we'll call it sam (for "System AdMin") but you can call it anything. The user will be able to log into our LTSP server and the client, and by adding it to the 'epoptes' group, the user will have special admin access on the LTSP network.

root@ltsp-server# useradd sam
root@ltsp-server# passwd sam # pick a secure password
root@ltsp-server# gpasswd -a sam epoptes # only do this step for admin users
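One caveat I'd flag (my own note about Debian/Ubuntu defaults, not something from the LTSP docs): plain useradd usually doesn't create a home directory or set a login shell, which can make that first login awkward. Something like this should cover it:

root@ltsp-server# useradd -m -s /bin/bash sam   # use this instead of plain 'useradd sam'
# or, if the user already exists without a home directory:
root@ltsp-server# mkdir -p /home/sam && cp -rT /etc/skel /home/sam && chown -R sam:sam /home/sam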

Step 8: Create the LTSP image & complete LTSP setup

LTSP supports three methods to maintain a client image. They are documented in the ltsp image man page. Ref

We will be using the first method ("chrootless") here.

Build an image based on the LTSP server's filesystem:

ltsp image /

Sidenote: if for some reason your Linux distro has a separate /boot partition, you would need to replace the above command with: ltsp image /,,/boot,subdir=boot. This comes from a recommendation in the LTSP discussion forums; for more info, see the ltsp image manpage.

This step may take a few minutes, but you should see it progressing and some output similar to:

ltsp image /
Using x86_64 as the base name of image /
Running: mount -t tmpfs -o mode=0755 tmpfs /tmp/tmp.wWZ787eg3d/tmpfs
Running: mount -t overlay -o upperdir=/tmp/tmp.wWZ787eg3d/tmpfs/0/up,lowerdir=/,workdir=/tmp/tmp.wWZ787eg3d/tmpfs/0/work /tmp/tmp.wWZ787eg3d/tmpfs /tmp/tmp.wWZ787eg3d/root/
Trying to acquire package management lock: /var/lib/dpkg/lock
Cleaning up x86_64 before mksquashfs...
Replacing /tmp/tmp.wWZ787eg3d/root/etc/ssh/ssh_host_ecdsa_key
Replacing /tmp/tmp.wWZ787eg3d/root/etc/ssh/ssh_host_ed25519_key
Replacing /tmp/tmp.wWZ787eg3d/root/etc/ssh/ssh_host_rsa_key
Parallel mksquashfs: Using 5 processors
Creating 4.0 filesystem on /srv/ltsp/images/x86_64.img.tmp, block size 131072.
[====================================================================================================================================| ] 35182/35228  99%
Unrecognised xattr prefix system.posix_acl_access
Unrecognised xattr prefix system.posix_acl_default
[=====================================================================================================================================|] 35228/35228 100%

I also observed the failed attempts (shown below) to remove files under /tmp -- this appears to be a known issue and seems safe to ignore for our purposes.

rmdir: failed to remove '/tmp/tmp.SR8el2YEmh/tmpfs': Directory not empty
LTSP command failed: rmdir /tmp/tmp.SR8el2YEmh/root /tmp/tmp.SR8el2YEmh/tmpfs
rmdir: failed to remove '/tmp/tmp.SR8el2YEmh': Directory not empty
LTSP command failed: rmdir /tmp/tmp.SR8el2YEmh
Running: ltsp kernel /srv/ltsp/images/x86_64.img
Running: mount -t squashfs -o ro /srv/ltsp/images/x86_64.img /tmp/tmp.5MzaNEOmF0/tmpfs/0/looproot
Running: mount -t overlay -o upperdir=/tmp/tmp.5MzaNEOmF0/tmpfs/0/up,lowerdir=/tmp/tmp.5MzaNEOmF0/tmpfs/0/looproot,workdir=/tmp/tmp.5MzaNEOmF0/tmpfs/0/work /tmp/tmp.5MzaNEOmF0/tmpfs /tmp/tmp.5MzaNEOmF0/root/
-rw-r--r-- 1 root root 62933977 Dec  1 09:43 /srv/tftp/ltsp/x86_64/initrd.img
-rw-r--r-- 1 root root 16542088 Dec 1 08:41 /srv/tftp/ltsp/x86_64/vmlinuz
rmdir: failed to remove '/tmp/tmp.5MzaNEOmF0/tmpfs': Directory not empty
LTSP command failed: rmdir /tmp/tmp.5MzaNEOmF0/root /tmp/tmp.5MzaNEOmF0/tmpfs
rmdir: failed to remove '/tmp/tmp.5MzaNEOmF0': Directory not empty
LTSP command failed: rmdir /tmp/tmp.5MzaNEOmF0
To update the iPXE menu, run: ltsp ipxe

One important note, from the LTSP docs:

You need to run these commands every time you install new software or updates to your image and want to export its updated version

Now continuing the commands listed in the docs:

root@ltsp-server:~# ltsp ipxe
root@ltsp-server:~# ltsp nfs
root@ltsp-server:~# ltsp initrd

Pay particular attention to this bit about ltsp initrd (again, straight from the docs):

This compresses /usr/share/ltsp, /etc/ltsp, /etc/{passwd,group} and the server public SSH keys into /srv/tftp/ltsp/ltsp.img, which is transferred as an "additional initrd" to the clients when they boot. You can read about its benefits in its man page, for now keep in mind that you need to run ltsp initrd after each LTSP package update, or when you add new users, or if you create or modify /etc/ltsp/ltsp.conf, which replaced the LTSP 5 "lts.conf".

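To keep those re-run rules straight, here's a rough cheat-sheet (my own summary of the docs quoted above, not an official list):

# after installing software or updates on the server:
ltsp image /
# after the set of images changes (so the boot menu stays correct):
ltsp ipxe
# after LTSP package updates, adding users, or editing /etc/ltsp/ltsp.conf:
ltsp initrd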
With that, we're now ready to try booting the LTSP client VM!

Step 9: "Boot up" the client VM over the network

At this point, our LTSP server container is running but the client is not, which we can quickly confirm below.

lxc list -c ns
+-------------+---------+
|    NAME     |  STATE  |
+-------------+---------+
| ltsp-client | STOPPED |
+-------------+---------+
| ltsp-server | RUNNING |
+-------------+---------+

Now we're ready to start the client and have it boot over the virtual network from the image that was built in the previous step.

This process uses PXE, the "Preboot eXecution Environment". A client booting with PXE will use DHCP to communicate with the server early in the boot sequence:

[The client] broadcasts a DHCPDISCOVER packet containing PXE-specific options to port 67/UDP (DHCP server port); it asks for the required network configuration and network booting parameters. The PXE-specific options identify the initiated DHCP transaction as a PXE transaction.

So let's have the LTSP server show us the DHCP network traffic on ports 67 (where the DHCP server listens) and 68 (where the client listens for replies).

# back on the host
lxc exec ltsp-server -- ip -br a # confirm eth1 is on the 192.x net
lxc exec ltsp-server -- apt install tcpdump
lxc exec ltsp-server -- tcpdump -vvv -i eth1 -n port 67 or port 68

At this point, the server is just waiting for clients to boot up. Next, we'll boot up the client and you'll see two things: 1) on that server shell you'll see the DHCP packets (first a DHCPDISCOVER message from the client, then an OFFER from the server, and so on), and 2) you'll see the client boot up on an emulated monitor.

Start the client & watch it boot:

lxc start ltsp-client && \
lxc console ltsp-client --type=vga

That will open a GUI window showing the boot-up process (similar to what you'd see from power-on of actual hardware). If it fails, make sure you installed spice-client-gtk in the earlier step.

First you'll see the BIOS acquire a DHCP address and download the initial file:

Start of PXE boot up

Then you'll see the iPXE menu; either hit Enter or just let it boot from the default option (x86_64.img):

iPXE boot menu

Then you'll see the LTSP image files being loaded:

Loading kernel files

After this, the boot process switches to video output only, which is why you needed the "--type=vga" command line argument earlier. If you inadvertently left that argument off, the boot process continues in the background but it can look hung, since you won't see any updates (in which case: hit Ctrl-A then Q, then jump back into the video console: lxc console ltsp-client --type=vga).

If all goes well, you'll see the rest of the boot process and finally a login screen, and you'll be able to log in to the LTSP client as the user we created (on the server!) earlier.

LTSP client login

When ready to shut down, run:

lxc stop --force ltsp-client # --force is optional but faster, probs don't use in production!
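And if you want to wind the whole lab down, or bring it back up later, something like this works:

lxc stop ltsp-server
# later, bring things back up (server first, so DHCP/TFTP are ready before the client boots):
lxc start ltsp-server
lxc start ltsp-client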

Troubleshooting

If the boot process is only partially successful and displays an error about 'Access Denied', you may need to shut down the VM, run the following command, then try starting the VM again:

lxc config set ltsp-client security.secureboot=false

If while running the lxc console... command you get an error like unshare: write failed /proc/self/uid_map: Operation not permitted, then use the workaround here.

Conclusion

By this point, we have successfully emulated a complete Linux Terminal Server Project network (server + client) running as LXD instances (a system container and a VM) on a single physical machine.

I'm happy to now have a base for LTSP server+client devops explorations from the fully contained comforts of my laptop at home or on the road.

Hopefully you've enjoyed this tour of LXD + LTSP and learned something useful along the way.

If you found this useful, or ran into any questions, or have ideas for other ways to use LXD and/or LTSP, please leave a comment, I'd love to hear about it!

In a future post I may cover: 1) adding a GUI environment to our client and 2) setting up a physical machine to netboot from our LXD LTSP server container.

Other resources & further reading

Things tried at one point or another, but probably not needed:

lxc config set ltsp-server linux.kernel_modules overlay,squashfs,nfsd,nfs_acl,lockd

Copyright notice

© 2025 Peter W. All rights reserved. Not for AI/ML training or data-mining use.

NO AI TRAINING: Without in any way limiting the author’s exclusive rights under copyright, any use of this publication to “train” generative artificial intelligence (AI) technologies to generate text is expressly prohibited. The author reserves all rights to license uses of this work for generative AI training and development of machine learning language models.
