Produce a members report for all your Mailman lists

I recently had cause to produce a report on the membership of all our Mailman mailing lists, so rather than doing it manually I knocked together the following handy bash script…change the Mailman location and output file as desired 🙂

# Change the Mailman location, output file and recipient address as desired
OUTPUTFILE=/tmp/mailman_report.txt
RECIPIENT=postmaster@example.com

CURRMONTH=`date +%m-%Y`
LISTS=`/usr/local/mailman/bin/list_lists | awk '{print $1}' | grep -v '[!0-9]'`

echo "Mailman Report for ${CURRMONTH}" > ${OUTPUTFILE}
echo >> ${OUTPUTFILE}
for x in ${LISTS}
do
	echo "Members of List ${x}:" >> ${OUTPUTFILE}
	LIST_MEMBERS=`/usr/local/mailman/bin/list_members ${x}`
	for mems in ${LIST_MEMBERS}
	do
		echo ${mems} >> ${OUTPUTFILE}
	done
	echo >> ${OUTPUTFILE}
done
/bin/mail -s "Mailman_Report_for_${CURRMONTH}" ${RECIPIENT} < ${OUTPUTFILE}
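Since the report is stamped with the current month, it lends itself to being run from cron. A crontab entry along these lines would generate and mail the report at 06:00 on the first of each month (the script path here is just a placeholder — save the script wherever suits you and adjust accordingly):

```
# minute hour day-of-month month day-of-week command
0 6 1 * * /usr/local/bin/mailman_report.sh
```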

Shared Network Storage with iSCSI and OCFS2

So we got a bunch of new hardware at work recently to build a crunch farm for all our heavyweight data processing. Part of that system is two very beefy servers which share a SAN (this one, for those interested) for the majority of their disk storage. The SAN uses iSCSI, which was fairly straightforward to set up (I’ll document it here anyway), so I got that all working and then made a nice big ext3 partition for the servers to share. So far so good: the servers were talking to the SAN and could see, read from and write to the partition. The only problem was that when one server changed a file, the other server wouldn’t pick up the change until the partition had been re-mounted. What I hadn’t accounted for was that ext3 doesn’t expect multiple machines to share the same block device, so each server was caching its own view of the filesystem rather than syncing changes.

I knew that filesystems designed for exactly this sort of sharing were available but hadn’t done much with them. After investigating for a bit, it seemed like the Oracle Cluster File System (linky) was the best option, as it was already supported by the Linux kernel and was pretty mature code. The main problem I had in setting all of this up was that the available documentation was very much geared towards people who already had in-depth experience of OCFS, whereas I’d never used it before. Hence this blog post, which details setting up iSCSI and then configuring both servers to talk to the same OCFS partition. The instructions are written for Ubuntu Server, but should work on any distro which uses apt. Packages are also available for rpm-based distros; the only instructions you need to change are the package-fetching ones.

Setting up iSCSI

* Install Open-iSCSI

apt-get install open-iscsi

* Edit the Open-iSCSI configuration file

The default configuration file may be located at /etc/openiscsi/iscsid.conf or ~/.iscsid.conf, depending on your version. Open the file and set the parameters as required by your iSCSI device. I’ve included the (mostly default) options I used for reference:

node.startup = automatic
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.noop_out_interval = 10
node.conn[0].timeo.noop_out_timeout = 15
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 65536

* Save and close the file. Restart the open-iscsi service:

/etc/init.d/open-iscsi restart

Now you need to run a discovery against the iSCSI target host, which finds all the iSCSI targets the SAN can give us:

iscsiadm -m discovery -t sendtargets -p ISCSI-SERVER-IP-ADDRESS

Finally restart the service again:

/etc/init.d/open-iscsi restart

Now you should see an additional drive on the system, such as /dev/sdc. Look in /var/log/messages to find the device name.

Next, you need to use fdisk to create a blank partition on the device. This is pretty well documented so I’ll skip these steps, other than to say that I’ll assume the device was called /dev/sdc, and the new blank partition is called /dev/sdc1 for the remainder of this post. So now we’re talking to our iSCSI device and we’ve got a blank partition all ready to format as an OCFS drive. Next, how exactly we do that!
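For the curious, the fdisk steps can also be scripted rather than typed interactively. This is just a sketch, and the device name /dev/sdc is an assumption — substitute whatever showed up in your logs:

```shell
# Assumed device name from the discovery step above -- change to suit
DEVICE=/dev/sdc

# Feed fdisk its answers non-interactively. In order they mean:
# n (new partition), p (primary), 1 (partition number),
# two blank lines (accept default start/end), w (write table and quit)
printf 'n\np\n1\n\n\nw\n' | fdisk ${DEVICE}
```

This creates a single primary partition spanning the whole disk, which fdisk will expose as /dev/sdc1.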

To be continued…

Creating DMG Files Without MacOS X

I’ve put together a script for creating DMG files without using OS X. It requires Linux; I’ve tested it on Kubuntu 7.10, but it should work on anything recent. The process will also be Wiki’d, but in the meantime, instructions are below for the curious!

Run the following commands:

# This builds a patched version of Apple's diskdev_cmds package which will work on Linux
# (you'll need to have downloaded diskdev_cmds-332.14.tar.gz and diskdev_cmds-332.14.patch.bz2 first)
tar xzf diskdev_cmds-332.14.tar.gz
bunzip2 -c diskdev_cmds-332.14.patch.bz2 | patch -p0
cd diskdev_cmds-332.14
make -f Makefile.lnx

# Copy the mkfs and fsck commands for HFS+ into place
sudo cp newfs_hfs.tproj/newfs_hfs /sbin/mkfs.hfsplus
sudo cp fsck_hfs.tproj/fsck_hfs /sbin/fsck.hfsplus

# Get and enable the hfsplus kernel module
sudo apt-get install hfsplus
sudo modprobe hfsplus

Now that's done, you can use the following handy bash script I've written (it must be run as root) to create a DMG file containing the contents of a directory you specify on the command line.


# DMG Creation Script
# Usage: makedmg <imagename> <imagetitle> <imagesize (MB)> <contentdir>
# imagename: The output file name of the image, ie foo.dmg
# imagetitle: The title of the DMG File as displayed in OS X
# imagesize: The size of the DMG you're creating in MB (Blame Linux for the fixed size limitation!!)
# contentdir: The directory containing the content you want the DMG file to contain
# Example: makedmg foo.dmg "Script Test" 50 /home/jon/work/scripts/content
# Author: Jon Cowie
# Creation Date: 02/04/2008

if [ $# -ne 4 ]; then
	echo "Usage: makedmg <imagename> <imagetitle> <imagesize (MB)> <contentdir>"
	exit 1
fi

if [ "${USER}" != "root" ]; then
	echo "makedmg must be run as root!"
	exit 1
fi

OUTPUT=$1
TITLE=$2
FILESIZE=$3
CONTENTDIR=$4
TMPDIR=/tmp/makedmg.$$   # Temporary mount point for the image

echo "Creating DMG File..."
dd if=/dev/zero of=${OUTPUT} bs=1M count=${FILESIZE}
mkfs.hfsplus -v "${TITLE}" ${OUTPUT}

echo "Mounting DMG File..."
mkdir -p ${TMPDIR}
mount -t hfsplus -o loop ${OUTPUT} ${TMPDIR}

echo "Copying content to DMG File..."
cp -R ${CONTENTDIR}/* ${TMPDIR}

echo "Unmounting DMG File..."
umount ${TMPDIR}
rm -rf ${TMPDIR}

echo "All Done!"

Hope it’s useful!

Groovy Virtualisation Hardware

I came across some interesting news today (Linky) that said Neterion is releasing a fairly hardcore network card designed for offloading VM Network management from the Hypervisor. I think this has the potential to be quite an interesting field in the future…in small scale Virtualisation deployments it’s not such a big deal that the Hypervisor has to do all the legwork for IO, but when you scale up to much larger deployments, network IO has the potential to be a significant bottleneck. You can mitigate this somewhat by utilising the physical network card, but this in turn shifts the load onto the host OS. It should be interesting to see in future what else hardware manufacturers come up with along similar lines: VM aware disk & memory controllers, for example…Intel have already made strides towards VM aware CPUs with VT as well.

Being a rather geeky type, I’m quite excited to see what all these clever hardware types come up with in the next few years…wouldn’t it be nice to be able to buy a server full of VM aware kit which lets you run multiple VMs as quickly as if you were using just one host OS? I can’t see virtualisation going away any time soon, it’s just too damn useful – so I reckon this might be something just around the corner.

New Start

So with my pending move to a Web 2.0 friendly company I thought I probably ought to start updating my blog, something I’ve been meaning to do for ages! I’m going to be working as a System Engineer for Trampoline Systems who are doing some very funky stuff to allow large organisations to enable more natural communication…the theory is that most organisations make humans try to communicate like computers, which stifles our natural instincts. Anyway, go check them out, it’s all very exciting! In other news, 8 days until I say bye-bye to the premium rate telephony sector.

And now for some stuff I like…