Creating a custom Ubuntu Linux thin client distribution

14 07 2013


I’ve been the de-facto IT guy for my dad’s small business since I was a teenager. His 15 Windows XP desktops and 2 Server 2003 servers — which were, of course, state of the art when we set them up — have gotten a bit long in the tooth. We decided for various reasons to replace that infrastructure with a thin-client setup based on Server 2012’s Remote Desktop Services.

Now, thin clients tend to cost a few hundred bucks a pop. Meanwhile, we have a fleet of old Dell desktops, most of which are perfectly functional, at least physically (their years-old XP installations are another story). I thought that surely there must be a way to repurpose these machines as thin clients. I didn’t want to keep Windows on these machines — having to stay current on Windows updates and antivirus definitions and worrying about what people were doing to them wasn’t appealing. Microsoft has this thing called Thin PC, but it requires Software Assurance licensing, which we don’t have.

So I decided to look for a Linux-based solution. The closest thing I could find to what I wanted was ThinStation, which is a teeny-tiny Linux distribution that can be used as a thin client on really old hardware. But the documentation seemed not-so-great, the ISO download links weren’t working at the time I tried them, and I didn’t want to have to deal with finding the specific drivers I’d need for the different kinds of hardware we have, if they were available at all.

The approach I ended up choosing was to build a custom Ubuntu distribution based on Ubuntu Mini Remix, a very lightweight LiveCD edition of Ubuntu. It combines Ubuntu’s very good hardware support and Debian-based toolset with an extremely slimmed down base installation, allowing you to install and customize it however you like.

My objective was to create a custom LiveCD that, when booted, would log in automatically, start a bare-bones graphical interface, and launch a Remote Desktop session using the FreeRDP client. I also wanted it to be installable onto the hard drive of the system, both to improve boot time and so that the CD would not be required. I found the process of doing this to be not terribly well documented, and I definitely hit some problems along the way. So I thought I would document for posterity the process I followed to create my simple thin client distribution.

Prep Work

To get started, you’ll need a copy of Ubuntu installed on a local computer to work with. I installed the Ubuntu 12.04 desktop edition in a Parallels-based VM on my Mac with all defaults, and that worked just fine.

Next, download an ISO CD image of Ubuntu Mini Remix onto your Ubuntu machine. As of this writing, the current version is 12.10, which you can download with wget from the Ubuntu Mini Remix site:

wget <Ubuntu Mini Remix 12.10 ISO URL>

Now it’s time to choose an Ubuntu LiveCD customization tool. All of these tools do basically the same thing: they take your LiveCD ISO, extract it to some location on your system, and then use chroot to treat that location as a temporary root directory, letting you run commands, edit files, and so on within the context of the filesystem contained on the ISO. When you’re done, the tool rolls the finished product back up into a new ISO. I tried uck and a few other customization tools, but ultimately had the best luck with Customizer. It’s certainly not perfect, but it mostly works. Customizer is maintained in a git repository, so to retrieve it, you’ll need to install git:

sudo apt-get install git

With git installed, you can clone the Customizer repository to /opt/Customizer, which is where it expects to live on your machine:

sudo git clone <Customizer repository URL> /opt/Customizer

Next, install GAMBAS 2, a development environment used by Customizer; and SquashFS Tools, which Customizer uses to extract the LiveCD’s compressed filesystem:

sudo apt-get install gambas2 squashfs-tools

Setting Up Customizer

Now we’re ready to customize the ISO. Launch Customizer with the following command line. (Note: it wasn’t immediately apparent that sudo was necessary, but things won’t work without it.)

sudo /opt/Customizer/GUI.gambas

Next, we tell Customizer which ISO we’ll be customizing. Click the Select ISO button and browse to the Mini Remix ISO we downloaded earlier.


Once you hit Open, Customizer extracts the ISO and the “squashed” filesystem to the directories /home/ISO and /home/FileSystem. (Note: This is obviously a weird place to put files. Customizer does theoretically let you specify another location from the Settings dialog box, but that didn’t work for me. The ISO would be extracted, but the whole interface would remain grayed out.)

Now, the interface should light up and let us start working on our distribution:


Distribution Configuration

Now, let’s customize the name and version of our configuration. (Note that by default, these changes will appear in some places but not others.) While we’re at it, we’ll change the default username and hostname.


Configuring Apt Sources

The Mini Remix ISO ships with the ‘universe’ and ‘multiverse’ apt repositories disabled, and we’ll need those enabled to install some of our software. Click the “Edit Sources” button — it’s misspelled, but that’s OK 🙂 — and uncomment all the lines that begin with “deb” or “deb-src”. Save the file and close it.
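
If you’d rather script this step, the same uncommenting can be done with sed. Here’s a sketch against a throwaway copy of a sources.list-style file (the file path and repository lines are illustrative, not the actual contents of the ISO’s sources.list):

```shell
# Make a throwaway file with commented-out repository lines
cat > /tmp/sources.list.example <<'EOF'
# deb http://archive.ubuntu.com/ubuntu quantal universe
# deb-src http://archive.ubuntu.com/ubuntu quantal universe
EOF

# Uncomment every line that begins with "# deb" or "# deb-src"
sed -i 's/^# *\(deb\|deb-src\) /\1 /' /tmp/sources.list.example
cat /tmp/sources.list.example
```

Inside the Customizer chroot, you’d point the same sed command at /etc/apt/sources.list instead.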


Install GUI Environment with Openbox

Next, we’ll install X, the Linux GUI environment, along with a window manager for our thin client to run. I went with the bare-bones Openbox. Customizer will actually do this for us if we go to the Extras dropdown and select Install GUI. A text-based menu appears. We select 6 to install Openbox.


Custom Configuration With Terminal

Now, we want to make some changes to our distribution that Customizer doesn’t know how to do. Fortunately, Customizer provides a Terminal function. When you click the Terminal button, Customizer opens up a special Terminal window chroot’d to its working directory. This will allow us to install packages and edit configuration files within the confines of the distribution we’re building, not on our local computer. So let’s click the Terminal button, which gives a window that looks like this:


Create Custom User Account

This threw me for a loop. Even though we gave Customizer a custom username for our LiveCD, that doesn’t actually create the user account. Ain’t that a hoot? So let’s create it now, and we’ll give it a blank password:

useradd -m dumbuntu
usermod dumbuntu -p U6aMy0wojraho

Auto Logon

Now, we want the Openbox environment to launch at boot, automatically logged in with our generic “dumbuntu” username. To do this, we’ll install and configure SLiM, a login manager. First, we install it:

apt-get install slim

Then, we’ll open the configuration file /etc/slim.conf in a text editor like nano and add the following lines:

auto_login yes
default_user dumbuntu

Install FreeRDP and Write Launch Script

The whole point of this thin client distribution is to connect to Remote Desktop Services using FreeRDP, so let’s install that now:

apt-get install freerdp-x11

Now, we’ll create a script to launch the FreeRDP client with our desired parameters. We’ll run it in an infinite loop so that if the user closes the client or it crashes for some reason, it’ll just start again. Since the dumbuntu user will be running it, we’ll put the script in that user’s home directory (I’m calling it rdp.sh, but the name is arbitrary) and set the ownership of the script accordingly.

cat <<'EOF' > /home/dumbuntu/rdp.sh
#!/bin/sh
while true; do
  xfreerdp -x 0x80 -f -T 'Remote Desktop Session' --no-nla --plugin rdpsnd --data alsa -- server-hostname-goes-here
  sleep 2
done
EOF
chmod 755 /home/dumbuntu/rdp.sh
chown dumbuntu:dumbuntu /home/dumbuntu/rdp.sh
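
By the way, the relaunch-on-exit pattern in that script is easy to try out on its own. Here’s a bounded sketch that substitutes false for xfreerdp and caps the loop at three attempts so the example terminates:

```shell
# Relaunch a failing command a few times, then stop. In the real script
# the loop runs forever and the command is xfreerdp.
attempts=0
while true; do
  false || true              # stand-in for xfreerdp; ignore its exit status
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ] && break
  sleep 0.1                  # brief pause before relaunching
done
echo "relaunched $attempts times"
```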

Install ALSA for Sound

To make sound work, we need to install the alsa-base package. This will provide access to the amixer utility, which can be used to unmute and set the audio volume.

apt-get install alsa-base

Configure Post-Login Commands

After auto-login, we want to set the audio volume to 100% (users can change it within the remote desktop session if they like). We also want to launch our FreeRDP script automatically. We can configure Openbox to do this by editing the file /etc/xdg/openbox/autostart and adding the following lines:

amixer set Master 100%
amixer set Master unmute
/home/dumbuntu/rdp.sh &

Only One Virtual Desktop (and Edit Key Bindings)

By default, Openbox configures 4 virtual desktops. I don’t want my users to accidentally move the FreeRDP window to another desktop and get confused. To change this, you can edit /etc/xdg/openbox/rc.xml and look for a line that says <desktops>. Below that, there’s a line beginning with <number>. Change the 4 on that line to 1.
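
For reference, here’s roughly what the edited fragment of rc.xml ends up looking like (the surrounding elements vary a bit between Openbox versions):

```xml
<desktops>
  <number>1</number>
  <!-- firstdesk, names, popupTime, etc. left as shipped -->
</desktops>
```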

This file also controls the key bindings that Openbox associates to various window management tasks. I wasn’t too concerned about this, since FreeRDP takes over most of these key bindings when it’s open anyway. If you wanted to, you could certainly delete or comment out some of the <keybind> items in the <keyboard> section.

Configure Openbox Menu

The only menu our streamlined graphical environment will have is the Openbox desktop context menu, accessible by right-clicking on the desktop. This menu can be customized by editing /etc/xdg/openbox/menu.xml. I decided to configure a single menu option to run my FreeRDP Launch script — just in case the user accidentally kills the instance that runs automatically — and a submenu with a few administrative tools. Here’s what it looks like.

<?xml version="1.0" encoding="UTF-8"?>
<openbox_menu xmlns="http://openbox.org/3.4/menu">
<menu id="root-menu" label="Dumbuntu">
 <item label="Remote Desktop">
  <action name="Execute"><execute>/home/dumbuntu/rdp.sh</execute></action>
 </item>
 <menu id="utils-menu" label="Utilities">
  <item label="XTerm">
   <action name="Execute"><execute>xterm</execute></action>
  </item>
  <item label="Local Install">
   <action name="Execute"><execute>ubiquity --automatic</execute></action>
  </item>
  <item label="Restart">
   <action name="Execute"><execute>sudo reboot</execute></action>
  </item>
  <item label="Shut Down">
   <action name="Execute"><execute>sudo halt</execute></action>
  </item>
 </menu>
</menu>
</openbox_menu>

Install the Installer

As discussed earlier in this article, Ubuntu Mini Remix is a LiveCD distribution. This means that you can boot off the CD directly into the operating system. What we’ve built so far, therefore, is a customized LiveCD distribution. That’s cool, but I’d like to be able to actually install this distribution on a PC’s hard drive. To do that, we need to install the Ubuntu installer, Ubiquity. We’ll configure it in the next step.

apt-get install ubiquity

Done With Customizer Terminal

We’re done customizing our new distribution’s filesystem. Type exit to close the Terminal. Don’t just close the window! Customizer doesn’t like that. Keep the main Customizer window open — we’ll come back to it.

Configure the Installer

To configure the installer, we actually need to do some work in a regular Terminal, not in the Customizer Terminal. The reason is that all the work we’ve done so far has been focused on customizing our distribution’s filesystem. The work we need to do on the installer will live outside the filesystem on the CD itself.

Create the Preseed File

Ideally, we don’t want the installation process to require a bunch of clicks; I like things to be automated according to a set of predetermined choices. Fortunately, Ubiquity supports what’s called a preseed file, which contains instructions that will be fed to the installer. Create a file called /home/ISO/preseed/dumbuntu-preseed.cfg with the following content. There are lots more options you can customize, but these values worked for me.

d-i debian-installer/locale string en_US

d-i console-setup/ask_detect boolean false
d-i keyboard-configuration/layoutcode string us

d-i netcfg/choose_interface select auto
d-i netcfg/get_hostname string tmsbuntu
d-i netcfg/get_domain string tmsbuntu
d-i netcfg/wireless_wep string

d-i mirror/country string manual
d-i mirror/http/hostname string
d-i mirror/http/directory string /ubuntu
d-i mirror/http/proxy string

d-i clock-setup/utc boolean true
d-i time/zone string US/Eastern
d-i clock-setup/ntp boolean true

d-i partman-auto/method string lvm
d-i partman-lvm/device_remove_lvm boolean true
d-i partman-md/device_remove_md boolean true
d-i partman-lvm/confirm boolean true
d-i partman-auto/choose_recipe select atomic
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman-md/confirm boolean true

d-i passwd/user-fullname string TMS User
d-i passwd/username string tmsbuntu
d-i passwd/user-password-crypted password U6aMy0wojraho
d-i user-setup/allow-password-weak boolean true
d-i user-setup/encrypt-home boolean false

tasksel tasksel/first multiselect ubuntu-desktop

d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i finish-install/reboot_in_progress note

xserver-xorg xserver-xorg/autodetect_monitor boolean true
xserver-xorg xserver-xorg/config/monitor/selection-method \
 select medium
xserver-xorg xserver-xorg/config/monitor/mode-list \
 select 1024x768 @ 60 Hz

Customize the Boot Menu

The question comes to mind: how will folks actually install our distribution? If you’ve ever installed the Ubuntu LiveCD, you know that when you boot off the CD, you get a prompt to either boot into the LiveCD environment or into the installer. Our distribution will offer the same choice. But there’s one problem: it doesn’t work. With the Mini Remix distribution, the installer choice appears in the boot menu, but if you choose it, nothing happens — it just boots into the LiveCD environment anyway. Bummer.

Unfortunately, I don’t have an answer to this one, but I can offer a workaround: once you boot into Openbox, you can manually launch the Ubiquity installer in automatic mode — the command, oddly enough, is ubiquity --automatic — which will kick off the installation of our distribution using our preseed file. Remember when we configured the Openbox menu earlier? You may have noticed that I sneaked an option called Local Install into the menu to do just that.

For Ubiquity to pick up our preseed file, we need to edit our CD’s boot menu file, which can be found in /home/ISO/isolinux/txt.cfg, to specify the preseed filename. I took the opportunity to clean out some extraneous options also:

default live
label live
  menu label ^Try or install Dumbuntu
  kernel /casper/vmlinuz
  append  file=/cdrom/preseed/dumbuntu-preseed.cfg boot=casper initrd=/casper/initrd.lz quiet splash --
label hd
  menu label ^Boot from hard drive
  localboot 0x80

Build, Test, Rinse, Repeat

Phew! It’s finally time to build the ISO for our distribution. Back in Customizer, click Build ISO. This takes a fairly long time, so go make a pizza or something. Once it’s done, the ISO will be in the /home directory. You can use Customizer’s built-in QEMU virtualization feature to boot the ISO, or you can create a VM in your virtualization software of choice, boot it up with the ISO, and see what happens.

If you need to tweak things, you can always go back to Customizer, make more changes, and build the ISO again. As long as you don’t click the Clean button or tamper with the /home/ISO and /home/FileSystem folders, all your customizations should remain waiting for you to come back and continue working. Even if you do wipe out your customizations in those folders, you can always open up your customized distribution’s ISO file in Customizer and go from there.

Once you’re happy with your distribution, you’re ready to redistribute your ISO or burn it to CD and start using it. Have fun!


Windows 7 Task Scheduler: “The user account does not have permission to run this task”

14 02 2012

I was encountering a problem with the Windows 7 Task Scheduler. I had a task configured, and it was running correctly at the specified time. But whenever I would open the Task Scheduler and try to run the task manually, it would fail with this error:

I had no idea what this error meant. It could’ve meant that the user account that the task was configured to run under didn’t have permission to run the task — but that didn’t make sense, because the task ran fine at its scheduled time. It could’ve meant that the user account I was using didn’t have permission to run the task — but I was running as a system admin. I spent a while searching Google, and while I found people talking about the error, I couldn’t find any useful information about what it meant or how to fix it. Finally, I whipped out my old friend ProcMon, which helped me see what was happening:

The Windows 7 Task Scheduler stores tasks as individual XML files in the directory C:\Windows\System32\Tasks. This task in particular had been created by a script using the Schtasks.exe utility. The way Schtasks had configured the permission on the task file was very strange — it had granted the Administrators group all permissions except Execute:

Oddly enough, you need to have Execute permission on the task file in order to run it. This can be edited through the Windows UI, or from the command line by running:

cacls "C:\Windows\System32\Tasks\Task Name" /e /g Administrators:F

Naturally, you’ll need to replace “Task Name” with the actual name of the task, and Administrators with the user or group to grant access.

Changing the Windows system path programmatically

11 02 2012

In our test environment, we automatically install a bunch of utilities — like Notepad++ and the Sysinternals tools — on every Windows system. As part of this, I wanted to add some directories to the Windows system path so these utilities could be easily accessed from the command line. I knew that this could be done manually through the System applet in Control Panel, but it took me a few minutes to figure out how to do it programmatically.

When you edit the system path from the Control Panel, what you’re actually doing is modifying the registry value HKLM\System\CurrentControlSet\Control\Session Manager\Environment\Path. So I wrote a very simple VBS script that takes a directory as an argument and appends it to the value in the registry. Note that the key won’t be re-read until the next time you log into Windows, so your path won’t actually be updated until then.

PathRegKey = "HKLM\System\CurrentControlSet\Control\Session Manager\Environment\Path"
If WScript.Arguments.Count = 0 Then
 WScript.Echo "Please specify a path to add."
 WScript.Quit 1
End If
Set WshShell = WScript.CreateObject("WScript.Shell")
UpdatedPath = WshShell.RegRead(PathRegKey) & ";" & WScript.Arguments(0)
WshShell.RegWrite PathRegKey, UpdatedPath, "REG_EXPAND_SZ"

Allowing Unauthenticated Access to Windows Shares

1 01 2012

At my job, we have a Windows-based test environment on a standalone Active Directory domain. I wanted to allow users to access file shares within the test domain from their computers on other domains without being prompted for credentials. (Since it’s a test environment, I don’t really care about security.)

Google sent me on a wild goose chase into the Local Security Policy, but the solution was deceptively simple. It turns out that when you connect to a file share on another domain, the server tries to authenticate you with the local Guest account. The problem is that by default, Windows (correctly) disables the Guest account. You can enable it from Computer Management (Start > Run > compmgmt.msc):

Next, you have to update the permissions on the share and the NTFS permissions on the underlying folder so that Guest will have access. Guest is a member of the Everyone group, so if you grant permission to Everyone, you should be good to go. If you want to set special permissions for Guest — maybe you only want to grant anonymous users read-only access — you can do that too. Just make sure to grant the permission to either the local Guest account or the local Guests group, not the domain Guest account:

WordPress, Page Caching and “Missed Schedule”

28 11 2011

When we first put Varnish in front of our WordPress installation, we noticed that post scheduling became pretty unreliable. About half the time we’d schedule a post, it would either appear much later than scheduled, or it would never appear on the site at all with the WordPress control panel showing the post’s status as “Missed Schedule.”

It turns out that WordPress has an, uh, interesting way of implementing post scheduling: because they don’t want to require that people have access to a proper cron daemon, WP has its own jury-rigged cron wannabe called WP-Cron, which relies on users regularly accessing WordPress PHP pages to kick off scheduled tasks at the appropriate time. The problem is that Varnish was working so damn well and serving so much content from cache that the Apache/WordPress backend wasn’t getting hit often enough for WP-Cron to work reliably. Based on the miscellaneous kvetching that can be found about WP-Cron, this is apparently just one of several circumstances in which it may not work reliably.

The way to fix this turned out to be forcing the WP-Cron script to run every minute using regular cron by configuring a job like this on the web server:

* * * * * lynx --dump http://<backend-hostname>/wp-cron.php > /dev/null 2>&1

Note that you’ll want to have the job hit the WP-Cron page directly through Apache, bypassing Varnish or whatever page cache you’re using. (Or, you could just configure Varnish not to cache that page, but that would allow the public to hit your WP-Cron page and potentially cause a spike in your resource utilization, which may not be desirable.)

RunOnce for Linux

27 11 2011

On occasion, I’ve wished there was a Linux feature that enabled me to run any command once the next time the system comes up (sort of similar to Windows’ RunOnce). The last time I needed this, I put together a simple init script to provide the functionality. I use this on Debian, but it should work on any UNIX-y OS with Sys-V style init. Create a file called /etc/init.d/runonce with the following content. Don’t forget to make it executable (chmod a+x).

#! /bin/sh
### BEGIN INIT INFO
# Provides: runonce
# Required-Start:
# Required-Stop:
# Should-Start:
# Default-Start: S
# Default-Stop:
# Short-Description: RunOnce
# Description: Runs scripts in /usr/local/etc/runonce.d
### END INIT INFO

RUNONCE_D=/usr/local/etc/runonce.d

. /lib/init/vars.sh
. /lib/lsb/init-functions

do_start () {
 mkdir -p $RUNONCE_D/ran > /dev/null 2>&1
 for file in $RUNONCE_D/*
 do
  if [ ! -f "$file" ]; then
   continue
  fi
  "$file"
  mv "$file" "$RUNONCE_D/ran/"
  logger -t runonce "$file"
 done
}

case "$1" in
 start)
  do_start
  ;;
 restart|reload|force-reload)
  echo "Error: argument '$1' not supported" >&2
  exit 3
  ;;
 stop)
  # Do nothing
  ;;
 *)
  echo "Usage: runonce [start|stop]" >&2
  exit 3
  ;;
esac

Then, you’ll need to symlink this script into the directories for the appropriate runlevels, which can be done easily on Debian with the following command:

update-rc.d runonce defaults

Finally, create a directory called /usr/local/etc/runonce.d. Now, you can simply put executable scripts or symlinks to utilities on the system into that directory. They’ll be run the next time you boot up, and then moved into the subdirectory /usr/local/etc/runonce.d/ran for posterity.
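
If you want to see the run-then-archive behavior of do_start in isolation, here’s a self-contained sketch that uses a temporary directory in place of /usr/local/etc/runonce.d:

```shell
# Simulate runonce: execute each file in the directory, then archive it
RUNONCE_D=$(mktemp -d)
mkdir -p "$RUNONCE_D/ran"
printf '#!/bin/sh\necho hello from runonce\n' > "$RUNONCE_D/example.sh"
chmod +x "$RUNONCE_D/example.sh"
for file in "$RUNONCE_D"/*; do
  [ -f "$file" ] || continue     # skip the ran/ subdirectory
  "$file"                        # run it once...
  mv "$file" "$RUNONCE_D/ran/"   # ...then move it so it never runs again
done
ls "$RUNONCE_D/ran"
```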

Getting Capistrano destination hosts from Puppet

26 11 2011

If you’re using Capistrano to deploy code to web servers and Puppet to manage those servers, the DRY principle suggests that it may be a bad idea to hardcode your list of web servers in your Capistrano recipe — instead, you may want to dynamically fetch the list of web servers from Puppet prior to each deploy. Here’s one way of doing this.

Puppetmaster Configuration

First, you’ll need to turn on storeconfigs on your Puppetmaster. This allows Puppet to store all of its node information in a database (which enables us to access it for other purposes). Note that these instructions assume you have a MySQL database for your storeconfigs.

Next, create a script on your Puppetmaster called /usr/local/bin/list_nodes_by_class.rb. Fill in the database username, password, host and schema name used to access your storeconfigs near the top of the script. Make sure the script is executable.

#!/usr/bin/env ruby
require 'mysql'

# Fill in your storeconfigs database host, username, password and schema
my = Mysql.new('localhost', 'puppet', 'password', 'puppet')

QUERY = "select h.name from hosts h join resources r on h.id = r.host_id join resource_tags rt on r.id = rt.resource_id join puppet_tags pt on rt.puppet_tag_id = pt.id where pt.name = 'class' and r.restype = 'Class' and r.title = '#{ARGV[0].gsub("'","")}' order by h.name;"

res = my.query(QUERY)
res.each_hash { |row| puts row['name'] }

Capistrano Recipe Modification

Note that I’ll assume here that your Capistrano recipe uses the “web” role to determine where to deploy code to, and that the Puppet class you use to designate web servers is “role_webserver” — you may need to change these.

First, add the following task and helper function to your recipe:

task :set_roles_from_puppet, :roles => :puppetmaster do
 get_nodes_by_puppet_class('role_webserver').each {|s| role :web, s}
end

def get_nodes_by_puppet_class(classname)
 hosts = []
 run "/usr/local/bin/list_nodes_by_class.rb #{classname}", :pty => true do |ch, stream, out|
  out.split("\r\n").each { |host| hosts << host }
 end
 hosts
end

Next, go to wherever you’re defining your roles. Add a role for your Puppetmaster:

role :puppetmaster, "puppetmaster.example.com"

Finally, delete your existing static definition of the “web” role, and arrange for set_roles_from_puppet to run before your deploy tasks. The exact hook depends on how your recipe is structured; something like this should work:

before "deploy", "set_roles_from_puppet"
Now, when you do a Capistrano deploy, the destination servers should be dynamically retrieved from the Puppet database.

Puppet and the Ghetto DNS

25 11 2011

Suppose you have a small network of Linux servers powering your web site. You’re probably going to want a way of accessing the servers by hostname from one another — for example, so your web servers can find your database servers without resorting to hardcoding IP addresses in your Apache configuration. What’s the best way to do this?


Well, if you’re already hosting your own DNS servers, you can consider a split-horizon configuration, which is supported by major DNS daemons like BIND. But if you’re using your ISP’s DNS servers, for example, setting up your own DNS for this purpose seems like overkill (and a pain). Personally, I like to run as few services on my network as possible, just as a matter of principle.

Another option is to just add A records for each of your servers to your site’s public DNS zone. But adding private (RFC1918) IP addresses to the public DNS means that anyone who can operate dig can find a complete list of your servers and their private IP addresses. Some would argue that I’m advocating for security by obscurity, but I just can’t see the upside of unnecessarily exposing information about your internal network to the public. Also, while it’ll work, exposing RFC1918 IPs via the public DNS just seems icky.

Hosts file

The “lightweight” approach to the problem of hostname to IP address resolution is to just add entries to the /etc/hosts file on each server. Of course, this is completely unmaintainable if you have more than 2 or 3 servers.

Hosts file + Puppet = Ghetto DNS

But if you’re using Puppet to automate your server infrastructure, it turns out that exported resources offer a nice solution to this problem. You can create and deploy a simple Puppet module to each machine that exports a Host entry for itself, and then assembles a /etc/hosts file based on the Host entries exported by all the machines Puppet knows about:

class hosts {
 host { "localhost.localdomain":
  ip => "127.0.0.1",
  host_aliases => [ "localhost" ],
  ensure => present,
 }

 @@host { "$fqdn":
  ip => $ipaddress_eth1,
  host_aliases => [ $hostname ],
  ensure => present,
 }

 Host <<| |>>
}

This “Ghetto DNS” setup can be just right if manually maintaining hosts files seems impractical, but running your own DNS seems like overkill.

Getting started

25 11 2011

I recently wrapped up a side gig running the server infrastructure for a popular technology news site. I’m going to get this blog started by posting some useful tidbits I happened upon during that project.