Sunday, October 23. 2022
Let's Encrypt with Octavia in OpenStack
Posted by Andrew Ruthven at 05:09
I like using Catalyst Cloud to host some of my personal sites. In the past I used CAcert for my TLS certificates, but more recently I've been using Let's Encrypt, as its certificates are trusted in all browsers. Currently the LoadBalancer as a Service (LBaaS) in Catalyst Cloud doesn't have built-in support for Let's Encrypt. I could use an apache2/nginx proxy to handle the TLS termination and have it manage the Let's Encrypt lifecycle, but really, I'd rather use LBaaS. So I set about working out how to get Dehydrated (the Let's Encrypt client I've been using) to drive LBaaS (known as Octavia). I figured this would be of interest to other people using Octavia with OpenStack in general, not just Catalyst Cloud. There are a few things you need to do. These instructions are specific to Debian:
As we're using the HTTP-01 challenge type here, you need to have the LoadBalancer forwarding port 80 to your website to allow for the challenge response. It is good practice to redirect everything else to HTTPS; here's an example virtual host for Apache:

<VirtualHost *:80>
    ServerName www.example.com
    ServerAlias example.com

    RewriteEngine On
    RewriteRule ^/.well-known/ - [L]
    RewriteRule ^/(.*)$ https://www.example.com/$1 [R=301,L]

    <Location />
        Require all granted
    </Location>
</VirtualHost>

You also need this in /etc/apache2/conf-enabled/letsencrypt.conf:

Alias /.well-known/acme-challenge /var/lib/dehydrated/acme-challenges

<Directory /var/lib/dehydrated/acme-challenges>
    Options None
    AllowOverride None

    # Apache 2.x
    <IfModule !mod_authz_core.c>
        Order allow,deny
        Allow from all
    </IfModule>

    # Apache 2.4
    <IfModule mod_authz_core.c>
        Require all granted
    </IfModule>
</Directory>

And that should be all that you need to do. Now, when Dehydrated updates your certificate, it should update your LoadBalancer as well! Sample hook.sh:

deploy_cert() {
    local DOMAIN="${1}" KEYFILE="${2}" CERTFILE="${3}" FULLCHAINFILE="${4}" \
        CHAINFILE="${5}" TIMESTAMP="${6}"
    shift 6

    # File contents should be:
    #   export OS_PASSWORD='your password in here'
    . /etc/dehydrated/catalystcloud/password

    # OpenRC file from the Catalyst Cloud dashboard
    . /etc/dehydrated/catalystcloud/openrc.sh --no-token

    # UUID of the LoadBalancer listener to be managed
    LB_LISTENER='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

    # Barbican uses P12 files, we need to make one.
    P12=$(readlink -f $KEYFILE \
        | sed -E 's/privkey-([0-9]+)\.pem/barbican-\1.p12/')
    openssl pkcs12 -export -inkey $KEYFILE -in $CERTFILE -certfile \
        $FULLCHAINFILE -passout pass: -out $P12

    # Keep track of existing certs for this domain (hopefully no more than 100)
    EXISTING_URIS=$(openstack secret list --limit 100 \
        -c Name -c 'Secret href' -f json \
        | jq -r ".[]|select(.Name | startswith(\"$DOMAIN\"))|.\"Secret href\"")

    # Upload the new cert
    NOW=$(date +"%s")
    openstack secret store --name $DOMAIN-$TIMESTAMP-$NOW -e base64 \
        -t "application/octet-stream" --payload="$(base64 < $P12)"

    NEW_URI=$(openstack secret list --name $DOMAIN-$TIMESTAMP-$NOW \
        -c 'Secret href' -f value) \
        || unset NEW_URI

    # Change LoadBalancer to use new cert - if the old one was the default,
    # change the default. If the old one was in the SNI list, update the
    # SNI list.
    if [ -n "$EXISTING_URIS" ]; then
        DEFAULT_CONTAINER=$(openstack loadbalancer listener show $LB_LISTENER \
            -c default_tls_container_ref -f value)

        for URI in $EXISTING_URIS; do
            if [ "x$URI" = "x$DEFAULT_CONTAINER" ]; then
                openstack loadbalancer listener set $LB_LISTENER \
                    --default-tls-container-ref $NEW_URI
            fi
        done

        SNI_CONTAINERS=$(openstack loadbalancer listener show $LB_LISTENER \
            -c sni_container_refs -f value | sed "s/'//g" | sed 's/^\[//' \
            | sed 's/\]$//' | sed "s/,//g")

        for URI in $EXISTING_URIS; do
            if echo $SNI_CONTAINERS | grep -q $URI; then
                SNI_CONTAINERS=$(echo $SNI_CONTAINERS | sed "s,$URI,$NEW_URI,")
                openstack loadbalancer listener set $LB_LISTENER \
                    --sni-container-refs $SNI_CONTAINERS
            fi
        done

        # Remove old certs
        for URI in $EXISTING_URIS; do
            openstack secret delete $URI
        done
    fi
}

HANDLER="$1"; shift
#if [[ "${HANDLER}" =~ ^(deploy_challenge|clean_challenge|sync_cert|deploy_cert|deploy_ocsp|unchanged_cert|invalid_challenge|request_failure|generate_csr|startup_hook|exit_hook)$ ]]; then
if [[ "${HANDLER}" =~ ^(deploy_cert)$ ]]; then
    "$HANDLER" "$@"
fi
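To tie it all together, Dehydrated needs to know about the domain and the hook script. Here's roughly how I'd wire it up and run it once by hand before trusting it to cron. The domain, listener UUID and hook path are placeholders, and this assumes the stock Debian layout for Dehydrated:

echo "www.example.com example.com" >> /etc/dehydrated/domains.txt
dehydrated --register --accept-terms      # first run only
dehydrated --cron --domain www.example.com --hook /etc/dehydrated/hook.sh

# Then confirm the listener picked up the new certificate:
openstack loadbalancer listener show xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
    -c default_tls_container_ref -f value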
Sunday, April 19. 2020

Install Fedora CoreOS using FAI

I've spent the last couple of days trying to deploy Fedora CoreOS to some physical hardware/bare metal for a colleague, using the official PXE installer from Fedora CoreOS. It wasn't very pleasant, and just wouldn't work reliably. Maybe my expectations were too high, in that I thought I could use Ignition to prepare more of the system for me, as my colleague has been able to get bare metal installs working correctly. I just tried to use Ignition as documented, and ran into a few interesting issues along the way.
During the night I got fed up with that process and wrote a Fully Automatic Installer (FAI) profile that'd install CoreOS instead. I can now use setup-storage from FAI with its standard disk_config files. This allows me to easily build complicated disk configurations with software RAID and LVM. A big bonus is that a rebuild is a lot faster: timed from typing reboot to a fresh login prompt, it is 10 minutes, and this is on physical hardware, so it includes BIOS POST and RAID controller set up, twice each. I thought this might be of interest to other people, so the FAI profile I developed for this is located here: https://github.com/catalyst-cloud/fai-profile-fedora-coreos

FAI was initially developed to deploy Debian systems, and it has since been extended to install a number of other operating systems. I think this is a good example of how easy it is to deploy non-Debian derived operating systems using FAI without having to modify FAI itself.
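To give an idea of what setup-storage makes easy, here's a rough sketch of the sort of disk_config file I mean: two disks mirrored with md RAID and LVM on top. The device names, sizes and mount points are invented for illustration, so treat it as a sketch and check setup-storage(8) for the exact syntax:

disk_config sda disklabel:gpt bootable:1
primary  -  512  -  -
primary  -  0-   -  -

disk_config sdb sameas:sda

disk_config raid
raid1  /boot  sda1,sdb1  ext4  rw
raid1  -      sda2,sdb2  -     -

disk_config lvm
vg vg0      md1
vg0-root    /     20G-  ext4  rw,noatime
vg0-var     /var  10G-  ext4  rw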
Monday, July 23. 2018

linux.conf.au 2019 - Call for Proposals
Posted by Andrew Ruthven in catalyst at 04:57

At the start of July, the LCA2019 team announced that the Call for Proposals for linux.conf.au 2019 was open! The Call for Proposals will close on July 30, so if you want to submit a proposal, you don't have much time! linux.conf.au is one of the best-known community-driven Free and Open Source Software conferences in the world. In 2019 we welcome you to join us in Christchurch, New Zealand, from Monday 21 January through to Friday 25 January. For full details, including those not covered by this announcement, visit https://linux.conf.au/call-for-papers/; the full announcement is here.
Sunday, September 17. 2017
Missing opkg status file on LEDE...
Posted by Andrew Ruthven in catalyst at 11:51
I tried to install tcpdump on my home router, which is running LEDE, only to be told that libc wasn't installed. Huh? What's going on?! It looked, to all intents and purposes, as though libc wasn't installed. And it looked like nothing was installed.
What to do if opkg list-installed is returning nothing? I finally tracked down the status file it uses as being /usr/lib/opkg/status. And it was empty. Oh dear. Fortunately the info directory had content, which means we can rebuild the status file. Something along these lines does the job, appending each package's control data plus a Status line (adjust the Status line to suit):

cd /usr/lib/opkg/info

for x in *.control; do
    ( cat "$x"; echo "Status: install user installed"; echo ) >> /usr/lib/opkg/status
done

And then for the special or virtual packages (such as libc and the kernel) I had to edit the file, tidy up some newlines, and set the status lines correctly. I used "install hold installed".

Now that I've shaved that yak, I can install tcpdump to try and work out why a VoIP phone isn't working. Joy.
Saturday, September 2. 2017

Network boot a Raspberry Pi 3
Posted by Andrew Ruthven in catalyst at 11:31

I found that to make all this work I had to piece together a bunch of information from different locations. This fills in some of the blanks in the official Raspberry Pi documentation. See here, here, and here.
Image

Download the latest Raspbian image from https://www.raspberrypi.org/downloads/raspbian/ and unzip it. I used the lite version as I'll install only what I need later. To extract the files from the image we need to jump through some hoops. Inside the image are two partitions, and we need data from each one.

# Make it easier to re-use these instructions by using a variable
IMG=2017-04-10-raspbian-jessie-lite.img
fdisk -l $IMG

You should see some output like:

Disk 2017-04-10-raspbian-jessie-lite.img: 1.2 GiB, 1297862656 bytes, 2534888 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x84fa8189

Device                               Boot Start     End Sectors  Size Id Type
2017-04-10-raspbian-jessie-lite.img1       8192   92159   83968   41M  c W95 FAT32 (LBA)
2017-04-10-raspbian-jessie-lite.img2      92160 2534887 2442728  1.2G 83 Linux

You need to be able to mount both the boot and the root partitions. Do this by taking the offset of each one and multiplying it by the sector size, which is given on the line saying "Sector size" (typically 512 bytes). For example, with the 2017-04-10 image, boot has an offset of 8192, so I mount it like this (it is VFAT):

mount -v -o offset=$((8192 * 512)) -t vfat $IMG /mnt

# I then copy the data off:
mkdir -p /data/diskless/raspbian-lite-base-boot/
rsync -xa /mnt/ /data/diskless/raspbian-lite-base-boot/

# unmount the partition now:
umount /mnt

Then we do the same for the root partition:

mount -v -o offset=$((92160 * 512)) -t ext4 $IMG /mnt

# copy the data off:
mkdir -p /data/diskless/raspbian-lite-base-root/
rsync -xa /mnt/ /data/diskless/raspbian-lite-base-root/

# umount the partition now:
umount /mnt

DHCP

When I first set this up, I used OpenWRT on my router, and I had to patch /etc/init.d/dnsmasq to support setting DHCP option 43. As of the writing of this article, a similar patch has been merged, but isn't in a release yet, and, well, there may never be another release of OpenWRT. I'm now running LEDE, and the good news is it already has the patch merged (hurrah!). If you're still on OpenWRT, then here's the patch you'll need: https://git.lede-project.org/?p=source.git;a=commit;h=9412fc294995ae2543fabf84d2ce39a80bfb3bd6

This lets you put the following in /etc/config/dnsmasq. It says that any device that uses DHCP and has a MAC issued by the Raspberry Pi Foundation should have option 66 (boot server) and option 43 set as specified. Set the IP address in option 66 to the device that should be used for TFTP on your network; if it's the same device that provides DHCP then it isn't required. I had to set the boot server, as my other network boot devices are using a different server (with an older tftpd-hpa; I explain the problem further down).

config mac 'rasperrypi'
    option mac 'b8:27:eb:*:*:*'
    option networkid 'rasperrypi'
    list dhcp_option '66,10.1.0.253'
    list dhcp_option '43,Raspberry Pi Boot'

tftp

Initially I used a version of tftpd that was too old and didn't support how the RPi tries to discover whether it should use the serial number based naming scheme. The version of tftpd-hpa in Debian Jessie works just fine. To find out the serial number you'll probably need to increase the logging of tftpd-hpa; do so by editing /etc/default/tftpd-hpa and adding "-v" to the TFTP_OPTIONS option.
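For reference, after that change the relevant line in /etc/default/tftpd-hpa should look something like this (the --secure flag is Debian's default; only the -v is new). Restart tftpd-hpa afterwards so the new option takes effect.

TFTP_OPTIONS="--secure -v"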
It can also be useful to watch tcpdump to see the requests and responses, for example (10.1.0.203 is the IP of the RPi I'm working with):

tcpdump -n -i eth0 host 10.1.0.203 and dst port 69

This was able to tell me the serial number of my RPi, so I made a directory in my tftpboot directory with the same serial number and copied all the boot files into there. I then found that I had to remove the init= portion from the cmdline.txt file I'm using. To ease debugging I also removed quiet. So, my current cmdline.txt contains (newlines entered for clarity, but the file has it all on one line):

dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1
root=/dev/nfs nfsroot=10.1.0.253:/data/diskless/raspbian-lite-base-root,vers=3,rsize=1462,wsize=1462
ip=dhcp elevator=deadline rootwait hostname=rpi.etc.gen.nz

NFS root

You'll need to export the directories you created via NFS. My exports file has these lines:

/data/diskless/raspbian-lite-base-root 10.1.0.0/24(rw,no_root_squash,sync,no_subtree_check)
/data/diskless/raspbian-lite-base-boot 10.1.0.0/24(rw,no_root_squash,sync,no_subtree_check)

And you'll also want to make sure you're mounting those correctly during boot, so I have the following lines in /data/diskless/raspbian-lite-base-root/etc/fstab:

10.1.0.253:/data/diskless/raspbian-lite-base-root /     nfs rw,vers=3     0 0
10.1.0.253:/data/diskless/raspbian-lite-base-boot /boot nfs vers=3,nolock 0 2

Network Booting

Now you can hopefully boot. Unless you run into this bug, as I did, where the RPi will sometimes fail to boot. It turns out the fix, which is mentioned on the bug report, is to put bootcode.bin (and only bootcode.bin) onto an SD card. That'll then load the fixed bootcode, which will then boot reliably.

Saturday, August 20. 2016

MythTV on a Raspberry Pi 3

I'm in the process of building a new MythTV front end using a Raspberry Pi 3 to replace our aging VIA EPIA M10000, which has been in use since about 2003. For MythTV, I'm using MythTV Light from Peter Bennett. I have a dedicated back end that lives in the garage, so the front end is nice and easy. With the VIA front end, I built an IR receiver that plugs into the serial port. For the new box, I decided to try a Sapphire Remote using Mark Lord's excellent-looking driver. However, since his driver uses a Makefile which just installs the module into the right place, I decided to use the Debian way of doing things. Below is my approach.

apt-get install raspberrypi-kernel-headers dkms

Download the tar ball from http://rtr.ca/sapphire_remote/. Extract it in /usr/src/modules and then rename the directory to sapphire-remote-6.6 (the version may differ!). Put the following into a file called dkms.conf in that directory:

PACKAGE_VERSION="6.6"

# Items below here should not have to change with each driver version
PACKAGE_NAME="sapphire-remote"
MAKE[0]="make -C ${kernel_source_dir} SUBDIRS=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build modules"
CLEAN="make -C ${kernel_source_dir} SUBDIRS=${dkms_tree}/${PACKAGE_NAME}/${PACKAGE_VERSION}/build clean"
BUILT_MODULE_NAME[0]="sapphire"
DEST_MODULE_LOCATION[0]="/extra/"
AUTOINSTALL=yes
REMAKE_INITRD=no

And then run:

version=6.6
dkms add -m sapphire-remote -v $version
dkms build -m sapphire-remote -v $version
dkms install -m sapphire-remote -v $version
modprobe sapphire-remote
dmesg | tail

You should see something like this at the bottom of that dmesg | tail command:

[89133.468858] sapphire_init: sapphire remote control driver v6.6
[89133.469680] input: sapphire as /devices/virtual/input/input0
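Since the point of going the dkms route rather than the upstream Makefile is that the module gets rebuilt automatically when raspberrypi-kernel is upgraded, it's worth confirming dkms has it registered. A quick check (the output format varies between dkms versions):

dkms status sapphire-remote

It should report the module as installed against the current kernel.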
Sunday, July 24. 2016

Allow forwarding from VoiceMail to cellphones
Posted by Andrew Ruthven in catalyst at 03:22
Something I've been wanting to do with our Asterisk PBX at Catalyst for a while is to allow callers who reach VoiceMail to be forwarded to the callee's cellphone, if allowed. As part of an Asterisk migration we're currently carrying out, I finally decided to investigate what is involved. One of the nice things about the VoiceMail application in Asterisk is that callers can hit 0 for the operator, or * for some other purpose. I decided to use * for this purpose. I'm going to assume a working knowledge of Asterisk dial plans, and I'm not going to try and explain how it works. Sorry.

When a caller hits *, the VoiceMail application exits and Asterisk looks for a rule that matches a. Now, the simple approach looks like this within our macro for handling standard extensions:

[macro-stdexten]
...
exten => a,1,Goto(pstn,027xxx,1)
...

(Where I have a context called pstn for placing calls out to the PSTN.) This'll work, but anyone who hits * will be forwarded to my cellphone. Not what I want. Instead we need to get the dialled extension into a place where we can perform extension matching on it. So instead we'll have this (the extension is passed into macro-stdexten as the first variable - ARG1):

[macro-stdexten]
...
exten => a,1,Goto(vmfwd,${ARG1},1)
...

Then we can create a new context called vmfwd with extension matching (my extension is 7231):

[vmfwd]
exten => 7231,1,Goto(pstn,027xxx,1)

I actually have a bit more in there to do some logging and set the caller ID to something our SIP provider will accept, but you get the gist of it. All I need to do is to arrange for a rule per extension that is allowed to have its VoiceMail callers forwarded to a cellphone. Fortunately I have that part automated.

The only catch is for extensions that aren't allowed to be forwarded to a cellphone. If someone calling their VoiceMail hits *, their call will be hung up and I get nasty log messages about there being no rule for them. How do we handle them? Well, we send them back to VoiceMail. In the vmfwd context we add a rule like this:

exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
 same => n,Hangup

So any extension that isn't otherwise matched hits this rule. We use ${voicemail_option} so that we can use the same mode as was used previously. Easy!

Naturally this exact approach won't work for other people trying to do this, but given I couldn't find write-ups on how to do it, I thought it might be useful to others. Here's my macro-stdexten and vmfwd in full:

[macro-stdexten]
exten => s,1,Progress()
exten => s,n,Dial(${ARG2},20)
exten => s,n,Goto(s-${DIALSTATUS},1)

exten => s-NOANSWER,1,Answer
exten => s-NOANSWER,n,Wait(1)
exten => s-NOANSWER,n,Set(voicemail_option=u)
exten => s-NOANSWER,n,Voicemail(${ARG1}@sip,u)
exten => s-NOANSWER,n,Hangup

exten => s-BUSY,1,Answer
exten => s-BUSY,n,Wait(1)
exten => s-BUSY,n,Set(voicemail_option=b)
exten => s-BUSY,n,Voicemail(${ARG1}@sip,b)
exten => s-BUSY,n,Hangup

exten => _s-.,1,Goto(s-NOANSWER,1)

exten => a,1,Goto(vmfwd,${ARG1},1)
exten => o,1,Macro(operator)

[vmfwd]
exten => _XXXX,1,VoiceMail(${EXTEN}@sip,${voicemail_option})
 same => n,Hangup

#include extensions-vmfwd-auto.conf

And I then build extensions-vmfwd-auto.conf from a script that is used to generate configuration files for defining accounts, other dial plan rule entries and phone provisioning files.

With thanks to John Kiniston for the suggestion about the wildcard entry in vmfwd.
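One small follow-up: after the script regenerates extensions-vmfwd-auto.conf, the dialplan needs a reload before the new rules take effect. Something along these lines from the shell does it (assuming the standard Asterisk CLI is available):

asterisk -rx "dialplan reload"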
Tuesday, December 2. 2014

LCA2015 - Debian Miniconf & nz2015 Debian mini-DebConf
Posted by Andrew Ruthven in catalyst, family at 00:08
nz2015 mini-DebConf
Already attending linux.conf.au? Come a couple of days earlier and attend the mini-DebConf too! There will be a day of talks with a strong focus on the Debian project and a bug squashing day.

Debian Miniconf

After 5 years, the Debian Miniconf is back! Run as part of linux.conf.au 2015, this event will attract speakers talking on topics that suit the broader audience attending LCA. The Debian Miniconf has been one of the largest miniconfs in the history of linux.conf.au. For more information about both of these events, which I'm organising, head over to nz2015.mini.debconf.org!

Thursday, July 10. 2014

Cloud - in New Zealand!
I've spent a reasonable chunk of the past year working on a project we launched last month, Catalyst Cloud! It is using OpenStack with Ceph as the object store. It has taken a lot of work, and it is now very exciting seeing the level of interest we're receiving in this new service!
The great part of this is that we can now offer private cloud services to our customers, which provide all the flexibility that we've come to expect from "the cloud", but hosted in New Zealand by a New Zealand-owned company, so there are no concerns about the jurisdiction of your data! Not only are we able to offer private cloud services on our OpenStack cluster(s), but we can also deploy OpenStack onto our customers' own hardware using our ProdStack solution (I get to look directly at the Dashboard shown on that page, which is pretty cool). Next up is deploying another OpenStack cluster in our new data centre (which is another project I'm working on). In the near future we also hope to start using Open Compute Project hardware for our clusters.

Thursday, July 10. 2014
LCA2015 - Debian Miniconf submitted
Posted by Andrew Ruthven in catalyst, family at 21:43
Phew, I've submitted a proposal to run a Debian Miniconf at linux.conf.au 2015; here's hoping that it is accepted!
The Debian Miniconf was held in 2008 in Melbourne, so I feel it is well overdue to run it again.

Tuesday, January 28. 2014
Laptops and networks
Posted by Andrew Ruthven in catalyst, family at 09:32
Back in the old days, we had workstations. And only workstations. They lived on a network, and having them work in that network was simple. Printers just worked (thank you printcap), network shares just worked (thank you NFS) and life was good.
Then along came laptops. We wanted to be more mobile, using our laptops on different networks or even without a network! No one wanted hardcoded printers anymore, or network shares defined in /etc/fstab. Using an automounter was an option, but if you were on a different network then having the automounter around would stall tools like Nautilus, file indexers and so on. So we need something which can start the relevant services when you connect to a network, and then stop them when you leave that network. To support this, a few years ago I wrote a NetworkManager dispatcher.d script to do just that. When you connect to a specific network (matched by the NetworkManager UUID or a specific gateway MAC) or a VPN connection, autofs is started, any bookmarks for the network shares are added to users' GTK bookmarks, and CUPS is restarted. When the connection goes away, autofs is stopped, the GTK bookmarks for the network shares are removed, and any mounts for the network shares are lazily unmounted. I'm not sure if this will be of use to anyone else, but if it is I'd love to hear from you. You can browse the code or clone the repo. Included are sample autofs config files, the dispatcher, and the tools for managing the GTK bookmark files.
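To give a flavour of how such a hook fits together, here's a heavily stripped-down sketch of a dispatcher script. The filename, UUID and services are placeholders, it assumes NetworkManager exports CONNECTION_UUID to dispatcher scripts (as current versions do), and the real script also handles gateway MACs, GTK bookmarks and lazy unmounting:

#!/bin/sh
# /etc/NetworkManager/dispatcher.d/90-homenet (example name)
# NetworkManager passes the interface and the action as arguments.
IFACE="$1"
ACTION="$2"
HOME_UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"   # placeholder connection UUID

case "$ACTION" in
    up|vpn-up)
        if [ "$CONNECTION_UUID" = "$HOME_UUID" ]; then
            service autofs start
            service cups restart
        fi
        ;;
    down|vpn-down)
        if [ "$CONNECTION_UUID" = "$HOME_UUID" ]; then
            service autofs stop
        fi
        ;;
esac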
Monday, July 1. 2013

linux.conf.au 2014 - Call for papers
Posted by Andrew Ruthven in catalyst, family at 01:42

Holy crap, it's the last week of the linux.conf.au 2014 call for papers! We've got a bunch of great submissions, but we want more! From the CFP announcement:

The linux.conf.au 2014 papers committee is looking for a broad range of proposals, and will consider submissions on anything from programming and software, to desktop, mobile, gaming, userspace, community, government, space and education. There is only one rule: your proposal must be related to open source.

This year, the papers committee is going to be focused on Linux on the frontier and deep technical content -- that might range from cybernetics and mobile operating environments to large astronomy projects and big data projects. However, the conference is to a large extent what the speakers make it -- if we receive many excellent submissions on a topic, then it's sure to be represented at the conference.
LCA is known for presentations and tutorials that are strongly technical in nature, but proposals for presentations on other aspects of free software and open culture, such as educational and cultural applications of open source, are welcome.

Thursday, June 7. 2012
linux.conf.au 2013 - Call for Proposals
Posted by Andrew Ruthven in catalyst, family at 22:58

We are pleased to announce that the Call for Proposals for linux.conf.au 2013 is now open! The conference will showcase the best of open source and community-driven software and hardware. It will be held in Canberra at the Australian National University from Monday 28 January to Saturday 2 February, 2013, and provides a great opportunity for open source developers, users, hackers, and makers to share their ideas and further improve their projects.

Important Dates

Call for proposals opens: 1 June 2012
Call for proposals closes: 6 July 2012
Email notifications from papers committee: 28 August 2012
Early Bird registrations open: 1 October 2012
Conference dates: Monday 28 January to Saturday 2 February 2013

Information on Proposals

The linux.conf.au 2013 papers committee is looking for a broad range of proposals, and will consider submissions on anything from programming and software, to desktop, userspace, community, government, and education. There is only one rule: your proposal must be related to open source. This year, the papers committee is going to be focused on deep technical content, and things we think are going to really matter in the future -- that might range from freedom and privacy to open source cloud systems or to energy efficient server farms of the future. However, the conference is to a large extent what the speakers make it -- if we receive many excellent submissions on a topic, then it's sure to be represented at the conference. For more information see the full call for proposals on the linux.conf.au 2013 website.

Wednesday, June 8. 2011
World IPv6 Day - Catalyst
Posted by Andrew Ruthven in catalyst, family at 00:39
Excellent, due to a little hack we now have the Catalyst website up on IPv6. Thanks David!
This is using the same method that we used to get another large NZ site IPv6 enabled for World IPv6 Day. Funnily enough, we've discovered there is a NZ company providing a commercial solution using the same method we're using, even though it is dirty and really, really the wrong way to do it. It is worth noting that Catalyst's email server has been IPv6 enabled for several years now, as have our DNS servers.

Tuesday, June 7. 2011

World IPv6 Day
In the spirit of World IPv6 Day, I've finally re-enabled IPv6 for the etc.gen.nz mailserver and for our main website (and my git repo).
These services used to have IPv6 enabled, but when I moved them from my home server to one hosted in a data centre we lost IPv6 support. However, in the last few months our hosting company has deployed IPv6 support to their hosting facility, and I finally found time to finish setting it up on the server. So, we're back on IPv6, just in time for World IPv6 Day!