Sail-2-Software

A technology blog... Sort of! Everything from sailing to software.

Systemd Is in the Trees


You read that right – systemd is “in the trees… It’s coming!” *

If you pay attention to things happening in the world of Linux, you probably know that systemd will be replacing sysvinit for Debian systems.

In the last couple of days I’ve switched 5 Debian testing/Jessie systems from sysvinit to systemd. Overall, it’s been a smooth transition, with just a couple of problems along the way – neither of which ultimately turned out to be a systemd issue.

It’s easy to test out, because you can selectively boot with systemd as long as you don’t install systemd-sysv, which enables systemd by default. I think this is a good way to test, and if the system boots OK and everything works as expected, you can then go all-in and enable systemd by default.

I also discovered the Debian systemd implementation doesn’t run /etc/rc.local by default, but that’s easy to solve as well.

The Debian wiki has excellent documentation on systemd and includes links for other common distributions as well.

As far as the two (non systemd related) issues I ran into:

  • On one server system, any USB-related operation would hang, with a kernel call stack dumped to dmesg. It turned out this was an issue related to upgrading to the 3.13 kernel and had nothing to do with systemd; I verified this by hitting the same hang when booting with sysvinit under the same kernel. I was able to work around the problem by leaving a certain USB device (an external hard drive) unplugged until after the system booted. This system is working fine with systemd and the external drive; I just have to unplug the drive if I need to reboot. Had I simply rebooted before trying out systemd, I would have found this issue up front instead of debugging the wrong thing.

  • On my Acer C720 chromebook (yes, it runs Debian), the kernel would panic when I attempted to boot with init=/bin/systemd. I’m a bit embarrassed to admit what the cause was… Well, I forgot one little important step in this process: I forgot to install systemd!

Installing systemd

Make sure your system is fully up to date and running the latest kernel. For example:

$ sudo aptitude full-upgrade

At this point you should reboot if any major updates, such as a new kernel, were installed. You only want to be debugging one problem, not two as I described in my issues above!

Now is a really good time to make a full backup of your system, as well.

To start with, we’ll install systemd alongside sysvinit:

$ sudo aptitude install systemd

This will likely bring along a few dependencies, such as libpam-systemd, libsystemd-daemon0, libsystemd-journal0, and libsystemd-login0.

Testing systemd

Now you’ll need to reboot and manually edit the boot line in grub to enable systemd. To do this, hit ‘e’ at the grub boot menu for your kernel, and append init=/bin/systemd to the line that starts with linux. It might look something like the following, depending on the options for your specific kernel:

linux   /vmlinuz-3.13-1-amd64 root=/dev/mapper/root-root ro quiet init=/bin/systemd

Press F10 or Ctrl+X to boot. If all goes well – and it should – your system will boot and run like normal. You can tell systemd is in use if PID 1 is systemd and not init, for example:

$ ps -p1
PID TTY          TIME CMD
  1 ?        00:00:01 systemd

Watch out! Some ps options will show it as /sbin/init, so be careful not to confuse yourself.
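
If in doubt, checking what PID 1 actually is via /proc sidesteps any ps formatting quirks (the path shown is what Debian typically uses):

$ sudo readlink /proc/1/exe
/lib/systemd/systemd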

At this point, we can also use systemctl:

$ systemctl status
[long output deleted]

The nice thing about testing this way is that the systemd switch is not persistent. If you reboot again, you are back to sysvinit.

If you are happy at this point, and your system is stable, you can install systemd-sysv, which will remove sysvinit. You can also run in this testing configuration for a week or so if you prefer.

Enabling systemd by default

Once you do this, there’s no going back! If your system started ok with systemd, it should be safe to do this – just be aware that you absorb some risk…

$ sudo aptitude install systemd-sysv

This will remove sysvinit and make systemd the default. Next, reboot:

$ sudo shutdown -r now

Your system should reboot cleanly with systemd. Once again, you can use systemctl status to verify that everything looks good.

Longer term testing

If you aren’t ready to commit to systemd, you can always modify your grub configuration so that it boots with systemd by default, but if you run into trouble you can always disable it again. Note that if you have a non-booting system due to systemd you’ll have to boot a rescue livecd/liveusb and modify /boot/grub/grub.cfg manually.

To configure grub to boot with systemd by default, add init=/bin/systemd to the GRUB_CMDLINE_LINUX_DEFAULT line of /etc/default/grub such that it looks something like:

GRUB_CMDLINE_LINUX_DEFAULT="quiet init=/bin/systemd"

Note that yours may look different.
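
After changing /etc/default/grub, regenerate the grub configuration so the new default takes effect:

$ sudo update-grub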

Configuring rc.local under systemd

Finally, let’s get rc.local working under systemd.

Create the following file as /etc/systemd/system/rc-local.service:

#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.

[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local

[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99

[Install]
WantedBy=multi-user.target

This file needs to be executable, or systemd won’t run it (I discovered this the hard way), and we need to tell systemd to enable our new service:

$ sudo chmod +x /etc/systemd/system/rc-local.service
$ sudo systemctl enable rc-local

After rebooting, you can use systemctl to check that everything worked as expected:

$ systemctl status rc.local
rc-local.service - /etc/rc.local Compatibility
   Loaded: loaded (/etc/systemd/system/rc-local.service; enabled)
   Active: active (exited) since Sat 2014-03-29 15:49:01 PDT; 1h 15min ago
  Process: 1097 ExecStart=/etc/rc.local start (code=exited, status=0/SUCCESS)

And that’s all there is to it! So far, systemd is a rather nice upgrade over the aging sysvinit. It will take some time for all your various services and programs to ship native systemd unit files, but until then systemd’s default SysV compatibility mode will still start all those init scripts for you.

* Yes, this is a reference to the Kate Bush lyric from Hounds of Love, originating in the movie Curse of the Demon aka Night of the Demon. Since systemd is about managing system daemons, somehow the line “It’s in the trees… It’s coming!” popped into my head as I was thinking about a good title for this article.

A Better Dynamic MOTD


Debian based systems (and derivatives such as Ubuntu) have a facility built into PAM that can display a dynamically generated MOTD on login. Debian doesn’t use this by default, but Ubuntu does. I wanted to add this to my Debian testing/Jessie boxes, but the Ubuntu version performs horribly – if you’ve ever wondered why Ubuntu hangs for a second or so upon login while displaying the MOTD, this is why.

Taking a closer look at the Ubuntu /etc/update-motd.d/ files, it was clear to me why the default Ubuntu implementation is so slow – for two reasons, in fact. First, text that doesn’t change frequently is regenerated on every login, such as the hostname banner and the output of the script that counts available updates. The latter is horribly slow, and it’s something that doesn’t need to be checked at every login anyway. Second, the script for truly dynamic content forks far more processes than necessary and can easily be tuned and improved.

With my revisions to these scripts and process, my logins are instantaneous and I’ve even added running the MOTD display on each invocation of urxvt, the terminal program I use.

Here’s how to implement a much better dynamic MOTD. The source for this is available in my linux-configs github project.

Make static content static

What I did was separate the scripts into two configuration directories – one containing dynamic content, generated on each execution (just like the default), and a second for static content, which is only generated occasionally via cron (every 30 minutes).

The cron job uses run-parts to run the static content scripts, which write to /var/run. These files are then read directly via the dynamic content scripts.

Here’s a brief overview of the layout, but more detail about the scripts is provided below as well.

/etc/update-motd_local.d contains the static content scripts, run by a simple cron job.

/etc/update-motd.d/ contains the dynamic content scripts. These scripts are also responsible for displaying the static files from /var/run. There must be at least one script in this directory for each static script, but there can also be additional dynamic content scripts with no corresponding static content. Note that the scripts that simply cat the statically generated files are just symbolic links to 00-header.

/var/run/motd_local- will contain the static content files.

The crontab that drives the static scripts is shown further below.

Make dynamic content faster

As mentioned above, the default /etc/update-motd.d/10-sysinfo file from Ubuntu does considerably more forking than is necessary – even doing things such as

cat foo | awk '{print $1}'   # Don't do this!

instead of:

awk '{print $1}' foo

The cat and the pipe are entirely unnecessary.

Additionally, some of the awk scripts piped into other awk scripts, or piped from grep into awk – all of which can be handled by a single awk script. Also, commands like “ps” or “free” were being run when the information is already available in /proc. My resulting script runs about 3 times faster than the original, not even counting the static content improvements, and is significantly nicer on system resources!

Where ssh would hang for a second or so on each login, it’s now instantaneous.

You’ll need figlet and update-notifier-common packages, if you don’t already have them.

$ sudo aptitude install figlet update-notifier-common

A closer look at the scripts

The dynamic scripts

Here’s the updated /etc/update-motd.d/10-sysinfo:

#!/bin/bash
#
#    10-sysinfo - generate the system information
#    Copyright (c) 2013 Nick Charlton
#
#    Authors: Nick Charlton <hello@nickcharlton.net>
#
#    This program is free software; you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by
#    the Free Software Foundation; either version 2 of the License, or
#    (at your option) any later version.
#
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.
#
#    You should have received a copy of the GNU General Public License along
#    with this program; if not, write to the Free Software Foundation, Inc.,
#    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

#
# The upstream version of this script was very inefficient - forking processes
# when not needed. This version significantly reduces the number of processes
# required to get the same info, and as a result is much, much faster.
#
# Additionally, static-ish stuff like the hostname and packages to install
# is only generated once every 30 minutes (or as configured in cron).
#
# As a result, this shaves off the amount of time required to login to the system
# by about 1 second or so, and when running as part of urxvt is nearly instant.

load=`awk '{print $1}' /proc/loadavg`
root_usage=`df -h / | awk '/\// {print $(NF-1)}'`
memory_usage=`awk '/^MemTotal:/ {total=$2} /^MemFree:/ {free=$2} /^Buffers:/ {buffers=$2} /^Cached:/ {cached=$2} END { printf("%3.1f%%", (total-(free+buffers+cached))/total*100)}' /proc/meminfo`
swap_usage=`awk '/^SwapTotal:/ { total=$2 } /^SwapFree:/ { free=$2} END { printf("%3.1f%%", (total-free)/total*100 )}' /proc/meminfo`
users=`users | wc -w`
time=`awk '{uptime=$1} END {days = int(uptime/86400); hours = int((uptime-(days*86400))/3600); printf("%d days, %d hours", days, hours)}' /proc/uptime`
processes=`/bin/ls -d /proc/[0-9]* | wc -l`
ip=`/sbin/ifconfig eth0 | awk -F"[: ]+" '/inet addr:/{print $4}'`

printf "System load:\t%s\t\tIP Address:\t%s\n" $load $ip
printf "Memory usage:\t%s\t\tSystem uptime:\t%s\n" $memory_usage "$time"
printf "Usage on /:\t%s\t\tSwap usage:\t%s\n" $root_usage $swap_usage
printf "Local Users:\t%s\t\tProcesses:\t%s\n" $users $processes
echo

The scripts that cat the static content look like the below, and actually, there’s just one of these, the rest are simply symbolic links to the first, as we use the script filename to determine which static file to show:

#!/bin/sh
#
# symlink this to additional files as needed, matching scripts in
# /etc/update-motd_local.d

cat /var/run/motd_local-$(basename $0)

In my case, I have 00-header, 20-sysinfo, and 90-footer, matching the corresponding static content scripts.
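
Assuming those same three names, the symbolic links can be created like so:

$ cd /etc/update-motd.d
$ sudo ln -s 00-header 20-sysinfo
$ sudo ln -s 00-header 90-footer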

The static scripts

The static scripts are 00-header, 20-sysinfo, and 90-footer, as listed just above. There is not a 10-sysinfo script in the static scripts, since that is dynamic only. Make sure you understand run-parts, as it is key to how these scripts are executed.

Let’s take a look at 00-header next. This isn’t much different from the original, except we redirect stdout (via exec) to write to our file in /var/run (a partial filename is passed in as the first argument).

I also chose a figlet font that I like better as well, which doesn’t take up quite as much space and, well, it looks spiffy.

#!/bin/sh
#
#    00-header - create the header of the MOTD
#    Copyright (c) 2013 Nick Charlton
#    Copyright (c) 2009-2010 Canonical Ltd.
#
#    Authors: Nick Charlton <hello@nickcharlton.net>
#             Dustin Kirkland <kirkland@canonical.com>
#
#    This program is free software; you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by
#    the Free Software Foundation; either version 2 of the License, or
#    (at your option) any later version.
#
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.
#
#    You should have received a copy of the GNU General Public License along
#    with this program; if not, write to the Free Software Foundation, Inc.,
#    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

OUT=${1}$(basename $0)
exec >${OUT}

[ -r /etc/lsb-release ] && . /etc/lsb-release

if [ -z "$DISTRIB_DESCRIPTION" ] && [ -x /usr/bin/lsb_release ]; then
        # Fall back to using the very slow lsb_release utility
        DISTRIB_DESCRIPTION=$(lsb_release -s -d)
fi

figlet -f smslant $(hostname)

printf "Welcome to %s (%s).\n" "$DISTRIB_DESCRIPTION" "$(uname -r)"
printf "\n"

Here’s 20-sysinfo – the most expensive part of the original script, which determines how many packages are out of date and whether a system reboot is needed:

#!/bin/bash
#
#    20-sysinfo - generate the system information
#    Copyright (c) 2013 Nick Charlton
#
#    Authors: Nick Charlton <hello@nickcharlton.net>
#
#    This program is free software; you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by
#    the Free Software Foundation; either version 2 of the License, or
#    (at your option) any later version.
#
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.
#
#    You should have received a copy of the GNU General Public License along
#    with this program; if not, write to the Free Software Foundation, Inc.,
#    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

OUT=${1}$(basename $0)
exec >${OUT}

/usr/lib/update-notifier/apt-check --human-readable
/usr/lib/update-notifier/update-motd-reboot-required
echo

And finally, the diminutive 90-footer which just appends the content of /etc/motd.tail if it exists:

#!/bin/sh
#
#    90-footer - write the admin's footer to the MOTD
#    Copyright (c) 2013 Nick Charlton
#    Copyright (c) 2009-2010 Canonical Ltd.
#
#    Authors: Nick Charlton <hello@nickcharlton.net>
#             Dustin Kirkland <kirkland@canonical.com>
#
#    This program is free software; you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by
#    the Free Software Foundation; either version 2 of the License, or
#    (at your option) any later version.
#
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.
#
#    You should have received a copy of the GNU General Public License along
#    with this program; if not, write to the Free Software Foundation, Inc.,
#    51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.

OUT=${1}$(basename $0)
exec >${OUT}

[ -f /etc/motd.tail ] && cat /etc/motd.tail || true

Those scripts could probably use some fencing, such as checking that the directory argument is valid, and that the directory exists.
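
A minimal sketch of that kind of fencing, added near the top of each static script (names match the layout used above):

# refuse to run without an output prefix argument
OUT_PREFIX=$1
[ -n "$OUT_PREFIX" ] || { echo "usage: $0 output-prefix" >&2; exit 1; }
# the prefix's directory (e.g. /var/run) must exist
[ -d "$(dirname "$OUT_PREFIX")" ] || { echo "no such directory: $(dirname "$OUT_PREFIX")" >&2; exit 1; }
OUT=${OUT_PREFIX}$(basename "$0")
exec >"$OUT"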

File layout in /etc

The layout of /etc/update-motd.d and /etc/update-motd_local.d should look like the following:

$ ls -l update-motd.d/*
-rwxr-xr-x 1 root root  144 Mar 29 14:58 update-motd.d/00-header
-rwxr-xr-x 1 root root 2639 Mar 29 15:21 update-motd.d/10-sysinfo
lrwxrwxrwx 1 root root    9 Mar 29 14:58 update-motd.d/20-sysinfo -> 00-header
lrwxrwxrwx 1 root root    9 Mar 29 14:58 update-motd.d/90-footer -> 00-header

$ ls -l update-motd_local.d/*
-rwxr-xr-x 1 root root 1372 Mar 29 14:58 update-motd_local.d/00-header
-rwxr-xr-x 1 root root 1044 Mar 29 14:58 update-motd_local.d/20-sysinfo
-rwxr-xr-x 1 root root 1088 Mar 29 14:58 update-motd_local.d/90-footer

Note the relationship between the files in update-motd_local.d and the corresponding file/link in update-motd.d, alongside the scripts for dynamic-only content.

The crontab entry

The crontab to run the static scripts every 30 minutes is quite trivial:

*/30 * * * * /bin/run-parts --arg=/var/run/motd_local- /etc/update-motd_local.d

Important: Don’t forget to run that same command in /etc/rc.local or the static content files won’t be populated in /var/run until the cron job runs!
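
For reference, the rc.local addition is the same run-parts invocation, minus the cron schedule field:

/bin/run-parts --arg=/var/run/motd_local- /etc/update-motd_local.d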

My per-user scripts and configuration

Here’s my ~/bin/urxvt that I use to launch urxvt:

exec /usr/bin/urxvt -e sh -c "run-parts /etc/update-motd.d; exec $SHELL"

Note that here we are executing run-parts on the dynamic scripts, whereas the crontab and /etc/rc.local run it on the static scripts.

And finally, my i3 keybinding line:

bindsym $mod+Return exec bin/urxvt

A few other details

The only other non-obvious detail is that /etc/motd needs to be a symbolic link to /var/run/motd, which you can set up like so:

$ sudo rm /etc/motd
$ sudo ln -s /var/run/motd /etc/motd

There is a little bit of subtle magic here – /var/run/motd may not exist when you create the symbolic link, but that’s actually OK – it will be created when the motd is generated.


Until either the crontab runs, or a reboot executes /etc/rc.local, the static content won’t be present. To do a one-time update of it, run:

$ sudo /bin/run-parts --arg=/var/run/motd_local- /etc/update-motd_local.d

And that about covers it! With this, I have a nice dynamic MOTD which doesn’t slow me down.

Note that the scripts above may become out of date over time. Check my linux-configs github repo for the latest, up-to-date version.

Port Knocking With Single Packet Authorization


A few weeks ago I discovered fwknop which is a very clever mechanism to secure services. I’m using this so I can ssh into a Linux server on my home network without opening the sshd port up to the world.

Single packet authorization works by sending a single, encrypted UDP packet to a remote system. The packet is never ACKed or replied to, but if it validates, the remote system uses iptables to temporarily open up the service port for access (the filter is limited to the client’s IP address). If the packet isn’t valid, it is simply ignored. In either case, to an external observer the packet appears to go into a black hole. After a user-configurable amount of time (30 seconds by default), the service port is closed again, but stateful iptables rules keep existing connections active.

This is really great because all ports to my home IP address appear, from the internet, to be black holes – my router firewall drops all incoming packets, and the specific ports open for fwknop are dropped via iptables on my Linux server.

Configuring this solution isn’t too difficult if you are familiar with networking and Linux network and system administration, but it can be a bit tricky to test.

Server Configuration

There are four areas that need to be configured on the server-side:

  • Fwknop needs to be configured with appropriate ports and security keys
  • iptables policy needs to be created for each service port
  • Services need to listen on appropriate ports
  • Router firewall needs to forward fwknop and service ports to the server

My per-service iptables policies are done in /etc/rc.local and look like:

/sbin/iptables -I INPUT 2 -i eth0 -p tcp --dport 54321 -j DROP
/sbin/iptables -I INPUT 2 -i eth0 -p tcp --dport 54321 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

It’s subtle, but note that rule position “2” is used, instead of the default (position “1”). This makes sure these rules land in the table after the FWKNOP_INPUT rule, which fwknop creates when it starts. Likewise, the order of the two commands above matters: because both insert at the same position, the second rule displaces the first, so the ESTABLISHED,RELATED rule ends up before the DROP rule.
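
To sanity-check the resulting order once everything is running, list the chain with rule numbers (the exact contents of your chain will differ):

$ sudo /sbin/iptables -L INPUT -n --line-numbers

While a knock is active, fwknopd inserts a temporary ACCEPT rule, limited to the client’s source address, into its FWKNOP_INPUT chain, and removes it again when the timeout expires.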

/etc/fwknop/fwknopd.conf excerpt from the server:

PCAP_FILTER                 udp port 12345;

On my Debian testing/Jessie server I also had to add this line to fwknopd.conf:

PCAP_DISPATCH_COUNT            1;

/etc/fwknop/access.conf excerpt from the server:

SOURCE                    ANY
REQUIRE_SOURCE_ADDRESS    Y
KEY_BASE64                SOME_BASE64_ENCODED_KEY
HMAC_KEY_BASE64           SOME_BASE64_ENCODED_HMAC_KEY

As a small added measure, I didn’t use the default fwknop port, and I’m additionally running sshd on a non-standard port.

Adding an additional port in sshd is really simple, just add an additional Port line and restart sshd:

Port 22
Port 54321
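
For the new Port line to take effect, restart sshd (under systemd, sudo systemctl restart ssh works as well):

$ sudo service ssh restart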

Only port 54321 is port forwarded on the router, but I can still use port 22 while on my home network.

Client Configuration

On the client, I have a simple script that:

  • Sends the authorization packet via the fwknop client
  • ssh’s into my server on the configured sshd port

The script looks something like:

fwknop my.host.fqdn
ssh -p 54321 my.host.fqdn

Excerpt from .fwknoprc on the client:

[my.host.fqdn]
ACCESS                      tcp/54321
SPA_SERVER                  my.host.fqdn
SPA_SERVER_PORT             12345
KEY_BASE64                  SOME_BASE64_ENCODED_KEY
HMAC_KEY_BASE64             SOME_BASE64_ENCODED_HMAC_KEY
USE_HMAC                    Y
ALLOW_IP                    resolve

From this config, you can see that the fwknop port is 12345, and sshd is listening on 54321 (though these aren’t the real ports or FQDN in use). The KEY_BASE64 and HMAC_KEY_BASE64 values need to match between client and server. I chose to use symmetric keys but you can use asymmetric keys via GPG if you prefer.
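
If you need to generate a matching key pair, recent fwknop clients (2.5 and later, which the HMAC options above already imply) can do it for you:

$ fwknop --key-gen

This prints KEY_BASE64 and HMAC_KEY_BASE64 lines suitable for pasting into the server’s access.conf and the client’s .fwknoprc.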

See the fwknop documentation for more information on configuring everything. There are a lot of options, so you’ll have to figure out what to do based on your individual needs.

I’m using a free dynamic DNS service so that I don’t have to remember the dynamic IP address assigned by my ISP.

Further Reading

The documentation is decent, and I’ve found this solution works very nicely for me, without exposing any detectable open ports on my network. Unlike with simple port knocking, it is virtually impossible for someone to use packet capture and replay to access the system. Because all packets are dropped unless the authorization packet opens up the service port, it is completely undetectable via port scanning that fwknop is even in use.

Give it a try!

Blog Migrated to Github


This blog is now being hosted on github pages using octopress.

A large part of the reason for this is diversification away from Google. As much as I love Google for many reasons (and will continue to be a faithful Android user for the near future), I am moving most of my stuff off Google infrastructure.

I’ll write more about the reasons for this later, as well as mention which services I’ve migrated to (email, search, browser, etc).

Please bear with me as I finish migrating all the content and fix up some formatting, images, and such here and there…

I3 - a Tiling Window Manager for Mortals


After kicking around a few different desktop environments and window managers, I’ve settled in with i3 as my window manager of choice - and no desktop environment at all. This is by far the most productive user interface I’ve used, and it’s now in residence on my home laptop, work laptop, work desktop, and shiny new Intel-based Chromebook (as well as its predecessor - an ARM-based Chromebook). I’m still using the Debian Jessie/testing distribution of Linux, which has been fantastic - I can’t recommend it enough.



You can read my previous post about the few weeks I spent using a BlueTile-derived xmonad configuration on top of the XFCE desktop environment. While this worked pretty well most of the time, and was fairly productive, xmonad is a pain because it requires writing Haskell code to change the configuration. I found this to be a bit burdensome over time - when I just want to tweak a setting, I’d much rather just tweak the setting and not have to debug code.

i3 is just about as flexible as xmonad, but everything lives in a regular configuration file, so users don’t have to write their own window manager in Haskell to get the configuration they want.

It’s easy to plug in different implementations in i3 where the built-in configuration isn’t sufficient - for example, I found the default i3 status bar a bit limiting for the configuration I wanted, so I handle that via conky, outputting JSON (i.e. not using conky’s X integration at all) to i3. System notifications come via dunst. dmenu is used as a launcher. Everything plays nicely together, and configuration is a snap.

Not running a regular desktop environment has not in any way been an issue. I can still use any application (for example, see “gnome-screenshot” used in the screenshot above). I don’t use graphical file managers as a general rule, and while I could probably install nautilus or thunar, I’ve found rox-filer works just as well and doesn’t require many dependencies. Debian already includes the necessary wiring such that installing i3-wm sets up a lightdm session.

Suspend, shutdown, reboot, and logout are handled via simple keybindings in i3 or from a terminal - I have no problem typing “sudo shutdown -h now”, and I can type it just as fast as navigating to some menu.

I found that I was comfortable and productive in i3 within just a few days - you definitely have to make a commitment to learn keybindings and modes, and understand the container model, but once you do it’s amazing how quickly you can navigate applications, workspaces, and desktops without ever having to take your hands off the keyboard. Learning how to effectively use workspaces for your workflow is super important - i3 allows several different layouts, and each workspace can have its own. Switching between layouts is a snap, and I find myself switching, for example, from a tiling layout to a tabbed layout to get a larger window. i3 remembers layouts, so switching back to tiling puts everything back how it was before. Very nice.

Anyone who’s ever seen my desktop knows that I like to have a lot of terminal windows open, very specifically placed. In a traditional window manager doing this is painful - either I open a bunch of terminals, and manually move them around and resize them (I absolutely hate doing this), or write a script that starts all the terminals with the right geometry (also a painful operation, working out the geometry of each window). With i3, and a tiling layout, you never worry about window geometry or location - which is awesome. If I want 4 equally sized terminals on a workspace (with a horizontal tiling default layout), I use the following keystrokes - Super-Enter, Super-Enter, Super-v, Super-Enter, Super-Left, Super-v, Super-Enter. Once you learn the keybindings and container model, this sort of sequence becomes second nature and takes just a few seconds.

This configuration is really great and runs fast, with low resource usage - very important when running on a Chromebook (my install only requires around 2GB of disk) and gives more resources to things such as Java VMs when running on my fat work desktop.

I’ve decided not to dump my configuration in this post - code in blogger doesn’t work all that well (see my previous post) and the configuration gets stale when I don’t bother to update it (also see my previous post, which does not reflect my final configuration). Although it won’t help with the latter, you can find my configs, at least some version of it, on github.

The i3 website is here - http://i3wm.org/

My BlueTile Based Xmonad Configuration


UPDATE: I’ve switched to the i3 window manager as of a couple of months back. i3 is really great - I recall someone saying something along the lines of “xmonad is not a window manager - it’s a library for people who want to write their own”. This is very true, and I don’t miss hacking around in Haskell since switching away from xmonad. Additionally, i3 doesn’t require working around deficiencies in xmonad, such as spoofing the window manager name to make Java applications work.

Screenshot

For the past couple of months I’ve been playing with several different Linux distributions, desktop environments, and window managers. I can back up, reinstall, and restore to a fully working state - even when changing distributions - within two or three hours, so the barrier to entry is fairly low for me. I do limit myself to the world of Debian-based systems, since apt is great and familiar, and there are lots of good reasons to live somewhere in the Debian ecosystem.

For several years I used Kubuntu, until KDE4 came out. KDE4 was released way before it was ready, and I slogged along with it for about 6 months until I finally gave up and switched to Xubuntu with XFCE. I really like XFCE for the most part - it’s simple and fast, but sometimes lacking in features, and it feels a bit old. When Unity came out, I gave it a try. A very short one. Unity is an unusable disaster. At that point I decided to abandon Ubuntu and give LinuxMint a try, moving back to XFCE. I switched to Cinnamon when it came out, and I have to say I really like Cinnamon in general - it’s based on Gnome Shell so it’s up-to-date, but it looks and works like Gnome2, making it super accessible and usable.

Although I think LinuxMint has some good points, it’s based on Ubuntu and there’s very little reason to use it over Ubuntu (if that’s what you want to use), especially now that Cinnamon is available for several different distributions, including Ubuntu as a PPA. So, I went back to Xubuntu with Cinnamon.

But, I’m really not very happy with Ubuntu. Go take a look at ubuntu.com - you won’t find the word “linux” anywhere on that page. That’s not acceptable. Ubuntu is a Linux distribution and they should advertise that fact. Ubuntu has shown poor direction in other ways as well, for example the horrid Unity interface and writing their own display system, Mir, to replace Xorg.

Ubuntu is a Debian distribution, so why not just install Debian? I probably should have thought about this a long time ago!

I’m now running Debian testing (Jessie) on 4 different computers - 3 are amd64 systems, and 1 (the one I’m typing this on) is an ARM-based Chromebook.

Debian was just as easy as LinuxMint or Ubuntu for installation - and supports LUKS so I can run with full disk encryption on my laptops (though not the Chromebook, unfortunately). LUKS/dm-crypt is the only way to go - encfs and ecryptfs are horrible hacks, in general. And all the packages I want are available in base Debian.

Jessie, even though it’s “testing”, does still lag a bit behind other distributions in some ways, but I’ve found it recent enough for everything I do - and not as risky as Debian unstable, Sid. I wasn’t interested in Debian stable, as it’s just too far behind for me.

OK, so after installing Debian I went with XFCE, which is a nice choice because all my hardware supports it, so I can use the same configuration and setup everywhere, even the Chromebook. XFCE is also familiar to me, having used it for a couple of years, generally happily.

For fun, I decided to give Gnome Shell a try. Many people have been pretty negative about Gnome Shell, but I actually found it to be pretty nice and usable, after I installed several extensions to get some better usability. With Gnome Shell you have to think a little bit differently about window management and make good use of workspaces to organize things. One thing I didn’t like about Gnome Shell was how much I had to move the mouse to do things.

Still, I managed to get a pretty usable workflow out of Gnome Shell, and played with some nice tiling extensions - shellshape and shelltile, which sort of worked. However, shellshape doesn’t support multiple monitors (which I like on my work system), and shelltile was too mousy, though a nice idea. If you are interested in tiling window managers on Gnome Shell, and only have one monitor, give shellshape a try - it’s pretty nice, with some shortcomings.

One issue with Gnome Shell is that it doesn’t work on my Chromebook, since it doesn’t have accelerated video (currently, until I get armsoc running on it). The fallback mode is usable, much like Gnome 2, but then I also couldn’t run the nifty Gnome Shell extensions I discovered.

I’ve always been intrigued by tiling window managers, which automatically arrange windows such that nothing overlaps, and which typically have very simple interfaces that maximize screen real estate (for example, by not having title bars on windows). They also tend to be driven by keyboard, minimizing mouse usage.

I’ve tried xmonad before, but was scared away by having to basically write a configuration in haskell. I tried a few other tiling window managers as well, but never found any that I really liked or felt like I wanted to invest the time in them.

This takes me back to shellshape - which uses the same keybindings as BlueTile, an xmonad-based tiling window manager. I liked the key bindings in shellshape, so I thought maybe that would make BlueTile accessible. I was also curious whether I could run xmonad with XFCE so I could have my usual panels and a nice menu (the menu plugins for xmonad are… primitive at best).

So how did my experience go? Well, here I am typing this on my Chromebook, running XFCE and xmonad with a BlueTile based configuration. I used this same configuration on my work laptop and desktop all day today as well, and found it very productive and fast, but I sure have a lot of new keybindings to remember!

Below is my .xmonad/xmonad.hs file. In order to use this in Debian, all I needed to do was install the xmonad package, as BlueTile is already included in the base xmonad package! Note that there is a separate bluetile package - you can install that instead if you like, but then you won’t be able to apply my customized settings.

It took me many hours to get this configuration working and looking the way I wanted to. I’m not completely happy with my approach to eliminating the window title bars, but it does work, though it’s a bit hackish - and relies on having certain colors configured in XFCE (you may need to change “#cecece” below).


-- My BlueTile Configuration

-- BlueTile is a great place to start using xmonad, but I wanted to customize a number of things. I didn't feel like writing
-- my own xmonad implementation from scratch, so I use the xmonad-contrib BlueTile configuration with a few modifications to
-- make it work the way I want to. I'm using this inside an XFCE session, which provides my panels.

-- I'm new to xmonad and haskell, so this is a hack at best, but it gives me the look and behavior I want - and was a great
-- way to ease from BlueTile into a more custom xmonad configuration.

-- My blog: http://sail2software.com

-- Differences from a vanilla BlueTile config:
--   * No titlebar (there's probably a better way to do this)
--   * focusFollowsMouse is enabled
--   * pointer follows focused window (middle of window)
--   * WM is spoofed to LG3D so Java apps work
--   * terminal is set to xfce4-terminal
--   * focusedBorderColor is red
--   * borderWidth is 2

-- Adapted from BlueTile (c) 2009 Jan Vornberger http://bluetile.org


import XMonad hiding ( (|||) )

import XMonad.Layout.BorderResize
import XMonad.Layout.BoringWindows
import XMonad.Layout.ButtonDecoration
import XMonad.Layout.Decoration
import XMonad.Layout.DecorationAddons
import XMonad.Layout.DraggingVisualizer
import XMonad.Layout.LayoutCombinators
import XMonad.Layout.Maximize
import XMonad.Layout.Minimize
import XMonad.Layout.MouseResizableTile
import XMonad.Layout.Named
import XMonad.Layout.NoBorders
import XMonad.Layout.PositionStoreFloat
import XMonad.Layout.WindowSwitcherDecoration

import XMonad.Hooks.CurrentWorkspaceOnTop
import XMonad.Hooks.EwmhDesktops
import XMonad.Hooks.ManageDocks
import XMonad.Hooks.SetWMName

import XMonad.Actions.UpdatePointer

import XMonad.Config.Bluetile

import XMonad.Util.Replace

myTheme = defaultThemeWithButtons {
    activeColor = "red",
    activeTextColor = "red",
    activeBorderColor = "red",
    inactiveColor = "#cecece",
    inactiveTextColor = "#cecece",
    inactiveBorderColor = "#cecece",
    decoWidth = 1,
    decoHeight = 1
}

myLayoutHook = avoidStruts $ minimize $ boringWindows $ (
                        named "Floating" floating |||
                        named "Tiled1" tiled1 |||
                        named "Tiled2" tiled2 |||
                        named "Fullscreen" fullscreen
                        )
        where
            floating = floatingDeco $ maximize $ borderResize $ positionStoreFloat
            tiled1 = tilingDeco $ maximize $ mouseResizableTileMirrored
            tiled2 = tilingDeco $ maximize $ mouseResizableTile
            fullscreen = tilingDeco $ maximize $ smartBorders Full

            tilingDeco l = windowSwitcherDecorationWithButtons shrinkText myTheme (draggingVisualizer l)
            floatingDeco l = buttonDeco shrinkText myTheme l

main = replace >> xmonad bluetileConfig {
    layoutHook = myLayoutHook,
    logHook = currentWorkspaceOnTop >> ewmhDesktopsLogHook >> updatePointer (Relative 0.5 0.5),
    focusFollowsMouse = True,
    borderWidth = 2,
    focusedBorderColor = "red",
    terminal = "xfce4-terminal",
    startupHook = setWMName "LG3D"
}

Are Your Financial Institutions’ Websites Developed With Agile Practices?


If so, you are lucky - because mine sure aren’t. Seems like every bank or other financial institution that I do business with is about a decade or so behind in web technology. They have very, very long and infrequent software development cycles, don’t support recent (much less latest) client technologies, and have “major new feature releases” that are pretty darn uninspiring.

I can understand the importance of moving slow when it comes to people’s finances, but it seems they don’t really have a distinction between basic usability and those things that could result in serious financial exposure. In these days of distributed systems there’s no excuse to muddle UIs with the backend.

I have three quick stories from the last year or so across two different financial institutions. They will remain nameless, although one of them (which comprises the first two stories) I’m in the process of closing accounts down, and in the third I have a great personal relationship with individual people - and even though their technology is pretty bad, the institution itself is excellent.

Story #1 - Please Downgrade Your Browser

So I’ve been using the institution’s website several times a week with no problem, then suddenly it stops working from Chrome - I was unable to log in; it would just redirect me back to the login page. Maybe Chrome updated, I’m not really sure (my Linux distribution handles updates so well I seldom pay attention to what gets updated). So I try Firefox and it works fine - I am able to log in to my account - so I fire off an email to customer support to let them know Chrome 17 (this is May 2012) doesn’t work with their site. Following is the response:

I apologize that you have encountered this difficulty. Unfortunately, higher versions of chrome are unsupported with our website. When a browser version is unsupported the functionality will be intermittent at best. Sometimes it will work fine for months, and then one day stop working all together.

In order to continue using Chrome without issue, please use one of the following supported versions:

· Chrome: Versions 11 or 12

Once again, I do apologize for any inconvenience this may have caused. We are continually working on updating our supported browsers, but at this time those are the only truly supported versions.

Realize that the institution’s website doesn’t do anything complex - absolutely nothing that an update from Chrome 16 to 17 should break login over. It’s only because of non-W3C-compliant practices that this would happen in the first place, and it’s absolute madness to suggest I downgrade back 5 or 6 versions of Chrome! I’m not even sure how I would do that - does Google even keep archives where you can download old versions?

At least they didn’t tell me to use IE. That’s the response I’ve gotten from a number of customer support interactions for various things over the years, even after telling them I’m not on Windows.

I don’t remember how long it was before I could use the site again in Chrome, but it was at least several months, and I received no notification or follow-up, I just tried one day and it worked.

How would a more modern site deal with this? First of all, they would be testing not only the latest stable versions of browsers, but the bleeding edge, and be prepared. They would also be agile with the ability to test and push new releases daily, not monthly, to resolve fundamental usability issues. Good customer support would also dictate closing the loop with the customer, rather than leaving them hanging.

Story #2 - We Will Resolve This In 48 Hours

My paycheck was deposited into one institution, then partially transferred to another institution (the one in story #1) via an inter-bank ACH transfer initiated by the second institution. This had been working fine for a couple of years, then one day my transfer doesn’t go through. Did they notify me of the failure? Nope. Did the account transfer page show anything interesting? Nope - unlike past transfers, which had a history, this transfer just disappeared, like it never happened.

So, being the reasonable person that I am, I called customer service and opened a ticket with them. They said they would look into it, and it should be resolved in 48 hours. One month later they resolved the issue. You read that right, it took a month for their 48 hour fix. For the first 3 weeks I called them several times a week for a status, and their answer was always 48 hours - even after we were in this situation for 3 weeks. All they would tell me is that they were “investigating the issue” and that it would be resolved soon. Not knowing what was happening, and assuming that they would get the transfer through “real soon now” we ended up going low on funds in that institution and were basically unable to use our account. If I had known it would be weeks I would have deposited via other means, but when I’m told it’s only a couple of days because they are nearly done with the fix I guess I was overly optimistic. To make matters worse, when I was concerned about automatic, scheduled payments, they told me there wouldn’t be a problem because the problem was on their end - fool me once, shame on you… It was a problem because those payments still went through, even though they told me all was fine. As it turns out, they finally admitted it was a software bug in their ACH system, but for this institution it was too late as we had already taken our banking elsewhere.

How would a more modern company deal with this? This isn’t strictly a site issue, but it was a back-end software problem, and repeatedly telling me it would be resolved in 2 days when in fact it would take 30 did nothing to help. I’m sure they didn’t actually know, but better to give the customer a pessimistic answer than an unreasonably optimistic one. I arranged my finances based on what they told me and would have reacted completely differently otherwise. Additionally, I had to keep calling them for status, as they would never call me back after the promised 48 hours. They finally did call me back when it resolved, but by then it didn’t matter anymore.

Story #3 - We Are Experiencing Known Performance Issues

On a Saturday morning I logged into my account at this institution to check some activity. At least, that’s what I was trying to do - the institution’s landing page didn’t mention a thing, and it was only after logging in that they displayed a page saying the site was down for maintenance for the whole weekend.

Really? Down for the whole weekend for an upgrade? This is 2013! And with zero advance notification that the site would be down (they have my email address). And I had to log in to even find out.

The last couple of companies I’ve worked at have prided themselves on the ability to do live rolling upgrades with no site outage. This is not a hard thing to do these days with a good architecture. Being down for 2 days would mean going out of business for a lot of places, but for a financial institution it’s considered normal, I guess.

OK great, so now it’s Monday morning and I try to log in again. Right, so the page times out. I reload, and after several minutes, I finally get a page. Clicking on anything results in a similar pattern - either a timeout or a page load time of several minutes. I tried to use the online feedback form, but that timed out too.

A call to customer service resulted in a long wait time with a message that they were aware of site performance issues due to huge demand for the upgraded site. I guess that’s one way to put it, but clearly they did insufficient testing and weren’t ready for Monday morning. There was no reason for me to waste support’s time at that point, so I hung up and tried to login again later.

How great was the new site? It sucks just as bad as the old one did. It’s really pretty bad.

How would a more agile company deal with this? They would test things in advance, do rolling updates rather than having a site outage for 2+ days, and route just a portion of traffic to the new site until being comfortable that all is well prior to a full cut-over.

What’s the lesson from all of this? Financial institution IT is well behind the times and could learn a lot from agile companies that release continually. Again, I understand the financial risk, but a good architecture would insulate those details from the website and its usability for customers.

Using Etckeeper in Linux Mint (or Any Debian/Ubuntu Distribution)


My systems tend to live for a long time with very frequent updates. By this, I mean that I don’t reinstall the system, but continually upgrade the same Linux installation. While I apply patches frequently (daily), this also applies to major distribution upgrades.

Performing in-place distribution upgrades is easier in some distributions than others - as with most things in life there are tradeoffs. Distributions like raw Debian or LMDE (Linux Mint Debian Edition) are always rolling upgrades, but tend to lag far behind Ubuntu and other mainline distributions (Linux Mint, Arch, Gentoo, etc). On the flip side, there are distributions that strongly discourage doing in-place upgrades, recommending users back up and restore the system for each upgrade.

While Linux Mint is in that latter category, it’s actually quite easy to do an in-place upgrade without losing anything. Although some people have reported issues doing so, I’ve never had any insurmountable issues with doing it. In fact, one of my computers last had a full install of Linux Mint 11, and is now running Linux Mint 15, having been upgraded each time.

In fact, in some cases doing upgrades has given me greater stability and more options than a fresh install, such as when a kernel 3.8 issue in Linux Mint 15 would have prevented booting - instead I was able to simply boot under an older 3.5 kernel from Linux Mint 14, which I used until a newer kernel was available with the issue fixed.

Wait, isn’t this a blog about etckeeper? Oh, right! :-) All of the discussion above is a great reason why you might want to keep your /etc configuration files under version control. As packages are upgraded, or as you change configurations over time, you can easily diff configurations. And if you do a reinstall-style upgrade, you can save off your entire /etc directory along with its version history, for comparison.

Whenever a package has an updated configuration file, it’s a bit less stressful to just answer “Yes, overwrite my customized configuration with the package maintainer’s version”. Although backup files were always saved previously, it wasn’t always easy to find all the files related to a specific upgrade. With etckeeper, it’s trivial - just diff the commits!

Setting up etckeeper is easy, and in its simplest form you don’t have to manage it at all - as you apt-get (or use aptitude or other higher-level package management utilities), your /etc configuration will be automatically versioned. I recommend git for the version control system (VCS), which is what is used in the examples below.
  • Install etckeeper (and git if you don’t already have it):

 $ sudo apt-get install git etckeeper

  • Update /etc/etckeeper/etckeeper.conf, commenting out all VCS lines except git:

 $ sudo vi /etc/etckeeper/etckeeper.conf

  • The first few lines of that file should look like:

 # The VCS to use.
 #VCS="hg"
 VCS="git"
 #VCS="bzr"
 #VCS="darcs"

  • And finally, initialize the etckeeper repository:

 $ sudo etckeeper init
 $ sudo etckeeper commit "initial version"

This will create an initial commit of all your configuration files as they are now.
For the simplest use-case, that’s all there is to using etckeeper! Whenever you make system changes through the apt package manager (directly or indirectly), etckeeper will automatically run to version your configuration files in /etc before and after any package changes are applied to your system.

You can also explicitly commit changes when you manually edit configuration files, using the commit command to track them. See the manpages and the etckeeper site for more information.

And finally, because this just uses git under the covers, you can use git commands to diff versions, etc - just cd to /etc and use git as root.
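
For example, something like this (standard git commands run as root; the file path is just an illustration):

$ cd /etc
$ sudo git log --oneline
$ sudo git diff HEAD~1 -- apt/sources.list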

T-Mobile Please Get Your Act Together!


I’ll start by saying yes, this is a moderately whiny complaint blog. But if it helps one other person, then it’s worth it.

T-Mobile has a great $30 pre-pay plan - unlimited text, unlimited data, 100 minutes talk. For our usage, this plan is perfect and pretty darn cheap. Sure, the “unlimited” data means that overage happens at a lower speed, but we’ve never come close to that limit.

Because this plan is such a good fit for people who don’t use their phones as phones much, this is a great plan. However as far as I can figure out, T-Mobile doesn’t want anyone to actually use this plan. It’s easy, when you know the tricks, but forget trying to get T-Mobile to help you get this plan. Their customer support can’t get you this plan (and will even go so far as to not tell the truth, insisting that you have to buy a new phone at Walmart). With the information in this post, you’ll learn how to activate this plan and save a ton of money.

I’ve been a generally happy T-Mobile customer for 8 years now, and I’ve had up to 4 lines on the account, though for the last 2 years we’ve had 2 pre-pay and 1 post-pay. The sole post-pay has been my Android phone, with the 2 pre-pay being feature phones used by other members of the family. You’d think that being a good customer in good standing for a long time would matter, but alas, in today’s business world, it doesn’t - they really just don’t care.

In the last 9 months or so I’ve gone through a spree of updating my Android phone from a Samsung Galaxy 1, then a Samsung Galaxy 2, and now a Nexus 4. I’ve had a 2 year contract (which just expired), but swapped the phones around under the contract without even notifying T-Mobile. I bought the Galaxy 2 off a coworker, and the Nexus 4 straight from Google, so T-Mobile wasn’t involved in the purchase of any device aside from the Galaxy 1 that I used when I started my 2 year contract.

I had to do some hacking to cut the regular SIM card down to microSIM for the Nexus 4, but that turned out to be really easy with some guidance from a coworker who had already done the same thing. Seriously, it’s really easy to make a microSIM. Don’t pay T-Mobile $50 for a replacement SIM, just grab some scissors and do it for free. You’ve got nothing to lose, and if you screw it up (but you won’t, it’s easy), then you can still pay T-Mobile for a replacement SIM. Don’t buy a SIM cutter, don’t buy a cutting pattern, just find someone with another microSIM so you get the outline right, cut it a bit big, and trim to size. If I can do it, you can do it.

When I bought the Galaxy 2, I sold my Galaxy 1 to a friend of mine. But when I bought the Nexus 4, I gave my wife the Galaxy 2 as an upgrade to her feature phone. The $30 pre-pay plan was appealing, so I bought a SIM card for $1 from the T-Mobile online store. It came a few days later, I activated it under a new number, with no problem, then called T-Mobile’s horrible offshore pre-pay customer service to cancel her old number and transfer this number to the new number. This is really important: Do not, under any circumstances, attempt to activate the plan or SIM via T-Mobile customer service. They cannot and will not help you. You need to buy a SIM online and activate it online, then only call customer service to port your number afterwards.

Fast forward a few weeks until my 2-year post-pay contract is up. I wanted to switch my phone to the same $30 pre-pay plan. I know the plan works, because my wife has been using it for a few weeks, but unlike her plan where we were converting pre-pay to pre-pay, I was converting post-pay to pre-pay. Cancelling post-pay requires a conversation with customer service, but I thought it would be easy - and it is, now that I know the secret, but boy did I go about it the wrong way (by trying to go about it the right way)!

My initial plan was to call up post-pay and cancel. Then I would buy a pre-pay SIM for $1 from the online store, activate it, and just like with the Galaxy 2, be set and done with this. Sounds simple, right?

So I called up post-pay customer service and told them I wanted to cancel. I get an amazingly awesome CSR (customer service rep), who first tells me he can get my current $79 plan for $55. This is a decent discount, but $55 is obviously more than $30, so I told him what I planned to do - that I was going to cancel, and just reactivate with a pre-pay SIM. He agreed with me that requiring customers to cancel just to activate pre-pay was silly, and he said he could override pre-pay and give me the $30 plan I wanted. Pretty awesome, right?! So, the guy sets me up and says I’ll get an SMS within 24 hours, and that I’ll need to then go activate with that code to switch to pre-pay.

I can’t say enough good things about this post-pay CSR who helped me. Unlike every pre-pay CSR, and most other post-pay CSRs, he really wanted to help me - and understood that ultimately the end state would be the plan I wanted, so T-Mobile might as well make it easy on everyone involved and just do it.

Unfortunately, it didn’t quite work out that way - pre-pay denied the account conversion. Of course they couldn’t be bothered to actually call me, or notify me - instead they just blocked the conversion silently. Two days later I called, as I hadn’t received the SMS to activate the new plan. This is where things got ugly. Post-pay wouldn’t help me, and transferred me to pre-pay. Pre-pay told me that I couldn’t get this plan and that it was only available if you bought a new phone from Walmart (this is obviously untrue, and a total lie on T-Mobile’s part, as I had activated this exact same plan previously with the Galaxy 2!). I got really angry with the pre-pay CSR, who just gave me a bunch of run-around - asking me for account and SIM card numbers (which I never needed before), and finally just refusing to help me, even when provided with this information as well as the confirmation reference from my call two days previous. I asked, several times, to just transfer me to someone who could help me, but she basically refused. I was really surprised; customer service isn’t the way it used to be, that’s for sure. Sure, I was getting pretty angry at this point, but only because she wouldn’t listen to me when I knew that I was right.

It was at this point that I remembered I had ordered a second SIM card when I bought the $1 SIM for activating the Galaxy 2 for my wife! I totally forgot about this - since they are only $1, I bought two just in case I needed another pre-pay SIM for some reason in the future.

Thirty minutes later, it was all done - I did the conversion from post-pay to a pre-pay plan that T-Mobile says there’s no way you can get. And here’s how you do it:

  1. Wait until your contract is nearly up. Don’t blame me if you screw this part up and get hit with contract cancellation fees!
  2. Go to the T-Mobile online store and order a new pre-pay SIM card for your phone for $1 plus shipping.
  3. Wait a few days for the SIM card to arrive.
  4. Activate the SIM card online, and choose the $30 plan with unlimited text, unlimited data, 100 minutes talk. And do not, under any circumstance, call T-Mobile customer support as part of the activation! Don’t forget to port your old number over at the appropriate place in the process (and if you can’t do it at the time of activation, you can always do that bit later, as we did for my wife’s phone).
  5. Cancel your old plan.

That’s all there is to it - that’s how you do the impossible, to get the plan that T-Mobile says you cannot have.

Atari Inc. Business Is Fun


If you are interested in the history of Atari, I would encourage you to grab a copy of “Atari Inc. Business Is Fun” by Curt Vendel and Marty Goldberg. This is a big book - 800 pages of narrative, pictures, and internal documents that tell the unabridged history of this pioneering video game company. What Atari was doing in the 70s and early 80s shaped not only the video game industry, but the entire computer industry.

As an added bonus, there’s even a picture of me in this book! My father was an engineer for Atari from the beginning until the fall of Atari in 1984.

Go grab yourself a copy by clicking below - you won’t be disappointed!