Thursday, November 27, 2008

Custom Linux Router

I’ve never had much luck with home broadband routers from Linksys and the like. That’s not to say that they’re bad - for most users they’re more than adequate. I just happen to have expectations for a router that SOHO appliances can’t meet. About five years ago a friend of mine introduced me to Smoothwall. I built a smoothie on old parts I had from crap PCs I bought at thrift stores. The case and power supply were from an old Midwest Micro tower, the innards from an old Compaq desktop. While an AMD K6 with 144MB of RAM is pathetic by today’s standards, it has served me well as my router, firewall, DHCP server, etc. for the last five years without even flinching.



Alas, I discovered quite by accident that the poor old IDE drive was finally giving up the ghost. I logged into it to troubleshoot a network issue on a game console and wanted to see if the device was even reaching the gateway. I quickly discovered that I couldn’t even run ls, much less tcpdump. It must have died in the last few days because I know I’ve logged into it as recently as a week ago. The crazy thing is, because of the stability of Linux and the narrowly-defined responsibilities of this box, it has continued to function as a router and firewall with no issue for at least a few days, despite a dead hard drive. If I rebooted it, I’d probably be in bad shape, but for now I’m still online.



For a while I’ve wanted to replace the Smoothwall with a custom-made router/firewall based on Debian Linux and iptables. I don’t dislike Smoothwall; what you get out of the box is stellar. For my purposes, though, the guts are too complicated for the customizations I’d like to have, and there are a lot of cool features that I’m not interested in using. A few shell scripts and a couple of services will be more than sufficient and easier for me to manage.



IMPORTANT NOTE



My Internet service includes a static IP. This router configuration will not pull DHCP on the WAN (it will provide DHCP to the LAN). It will not connect via PPPoE either.



Build Environment



VMWare



I’ve got a system running VMWare Server 2.0 that I use for development. I created two virtual machines: the first (frankenstein-test) has one NIC bridged to my LAN and a second NIC on a “Host Only” network internal to VMWare; the second (router-client) has a single NIC on the “Host Only” network. Because router-client has no access to my physical network, if it’s going to reach the Internet it must do so through frankenstein-test.



Network



  • Real LAN: 192.168.1.0/24

  • Real Gateway: 192.168.1.1

  • Host Only LAN: 192.168.128.0/24

  • frankenstein-test eth0: 192.168.1.180

  • frankenstein-test eth1: 192.168.128.1

  • router-client eth0: 192.168.128.50

I had three main areas that I needed to get working: the first was routing with NAT, the second was TCP and UDP port forwarding, and the third was DHCP with dynamic and static assignments.



Debian VM Install



I chose Debian because after using it on servers at work I’ve found it very intuitive as an administrator. I feel that the default configuration choices and locations for things are very sensible. That’s not to speak ill of any other distribution. Regardless, the core ideas should be easy to implement on any Linux distribution.



I have a Debian Etch net install ISO on my VMWare server to do installs from. I went through the install configuring the primary network interfaces manually with the desired IPs. My preference for partitioning on frankenstein-test was to have a large /var partition, as I might set up a caching squid proxy later; that’s outside the scope of this article and I might cover it another time. I already have a caching squid proxy on my network to speed package downloads, so I pointed my systems to that. When I got to the point of selecting what I wanted installed, I deselected everything. I installed the router-client system the same way; since I already had frankenstein-test working as a router by the time I built the client, it was able to install off the proxy on my real LAN.



The Build



At the time I began writing this I had a working router in my test environment that met all three of my requirements. I copied the config files that I changed off of frankenstein-test, issued a soft shutdown, and took a VMWare snapshot. The snapshot is there so that if I find I missed anything during the rebuild, I can go back and grab it. I like snapshots of powered-off VMs because I know there won’t be any funky “missing time” issues.



I power the VM up with it configured to boot off of my ISO to begin a clean install. My choices by screen are as follows:



  • (Choose Language) English

  • (Choose Language) United States

  • (Choose Keyboard Layout) American English

  • (Configure the Network) - Found two NICs, I chose eth0

  • (It came up via DHCP with no default route, so I say //No// to //Continue without a default route//.)

  • Configure Network Manually

  • (IP Address) //192.168.1.180//

  • (Netmask) //255.255.255.0//

  • (Gateway) //192.168.1.1//

  • (Nameserver)

  • (Hostname) //frankenstein-test//

  • (Domain name) //internal.lub-dub.org//

  • (Partitioning) Manual Partitioning - this is an 8GB virtual disk

    • 100MB, Primary, /boot

    • 2GB, Primary, /

    • 512MB, Primary, swap

    • (remainder), Primary, /var


  • (Partition Disks) Write Changes to Disk

  • (Configure time zone) - Pacific

  • (Root Password) - //insecure//

  • (Confirm Root Password) - //insecure//

  • (User Name) - //Jason Mansfield//

  • (Username) - //jason//

  • (Password) - //insecure//

  • (Confirm Password) - //insecure//

  • (Use a network mirror) - Yes

  • United States

  • ftp.us.debian.org

  • (Proxy) - //http://proxy.internal.lub-dub.org:3128/ // - You’ll probably leave this blank.

  • (Popularity Contest) - No

  • (Software Selection) - Uncheck //Standard System//

  • (Install GRUB to MBR) - Yes

Configuration



I log in as root and want to install openssh but our sources are still pointing to the CD. I comment out the deb cdrom line from /etc/apt/sources.list. Before I can install anything I need to update my local apt database:



apt-get update



I can now install openssh:



apt-get install openssh-server



I edit /etc/ssh/sshd_config and set:



ListenAddress 192.168.128.1



and restart sshd.
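
On Debian that’s typically just:


/etc/init.d/ssh restart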



Router/Firewall/Port Forwarding



Alas, my network configuration is still based on DHCP. Here’s what I have for /etc/network/interfaces:




# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug eth0
iface eth0 inet dhcp


I change it to this:




# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo eth0 eth1
iface lo inet loopback

# The primary network interface
iface eth0 inet static
address 192.168.1.180
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers
dns-search .internal.lub-dub.org

iface eth1 inet static
address 192.168.128.1
netmask 255.255.255.0
up /etc/firewall/iptables.sh


You’ll notice the up line under eth1. That’s the script that does the magic for our router/firewall. The /etc/firewall/ directory doesn’t exist by default so I had to create it. The script /etc/firewall/iptables.sh looks like this:




#!/bin/bash

IPTABLES=/sbin/iptables
EXTIF=eth0
INTIF=eth1
EXTIP=192.168.1.180
INTIP=192.168.128.1
INTCIDR=/24
PORTFWFILE=/etc/firewall/portfw

for i in filter nat mangle
do
${IPTABLES} -t ${i} -F
done

${IPTABLES} -t nat -A POSTROUTING -o ${EXTIF} -j SNAT --to ${EXTIP}

${IPTABLES} -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
${IPTABLES} -A INPUT -m state --state NEW -i ${INTIF} -j ACCEPT
${IPTABLES} -P INPUT DROP
${IPTABLES} -A FORWARD -i ${EXTIF} -o ${EXTIF} -j REJECT

${IPTABLES} -N spoof
${IPTABLES} -A spoof -j LOG --log-prefix 'SPOOF' ${LOGOPTIONS}
${IPTABLES} -A spoof -j REJECT

for PRIVNET in 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 169.254.0.0/16 127.0.0.0/8
do
${IPTABLES} -t filter -A FORWARD -i ${EXTIF} -s ${PRIVNET} -j spoof
done

echo 1 > /proc/sys/net/ipv4/ip_forward

for RULE in `cat ${PORTFWFILE} | grep -v ^# | grep -v '^[ \t]\{0,\}$'`
do
INPORT=`echo ${RULE} | cut -d: -f 1`
OUTIP=`echo ${RULE} | cut -d: -f 2`
OUTPORT=`echo ${RULE} | cut -d: -f 3`
PROTO=`echo ${RULE} | cut -d: -f 4`
${IPTABLES} -t nat -A PREROUTING -d ${EXTIP} -p ${PROTO} --dport ${INPORT} -j DNAT --to ${OUTIP}:${OUTPORT}
${IPTABLES} -A INPUT -p ${PROTO} -m state --state NEW --dport ${INPORT} -i ${EXTIF} -j ACCEPT
${IPTABLES} -t nat -A POSTROUTING -p ${PROTO} -s ${INTIP}${INTCIDR} -d ${OUTIP} --dport ${OUTPORT} -j SNAT --to ${EXTIP}
done
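

Creating the directory and making the script executable is just a matter of (as root):

mkdir -p /etc/firewall
chmod 755 /etc/firewall/iptables.sh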


Here’s what it all does, piece by piece:




IPTABLES=/sbin/iptables
EXTIF=eth0
INTIF=eth1
EXTIP=192.168.1.180
INTIP=192.168.128.1
INTCIDR=/24
PORTFWFILE=/etc/firewall/portfw


Define our internal and external interfaces and internal and external IPs. The PORTFWFILE is used to define port forwarding rules as explained later. The CIDR netmask is optionally used in port forwarding later on.




for i in filter nat mangle
do
${IPTABLES} -t ${i} -F
done


We want to clear our existing rules for each of the filter, nat, and mangle tables.




${IPTABLES} -t nat -A POSTROUTING -o ${EXTIF} -j SNAT --to ${EXTIP}



Traffic going out the external interface should be NATted to the external IP.




${IPTABLES} -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
${IPTABLES} -A INPUT -m state --state NEW -i ${INTIF} -j ACCEPT


Allow traffic to the router that belongs to connections we’ve already accepted, and allow new connections to the router from the internal interface.




${IPTABLES} -P INPUT DROP
${IPTABLES} -A FORWARD -i ${EXTIF} -o ${EXTIF} -j REJECT


Anything destined for the router itself that we haven’t matched with a prior rule gets dropped. Any traffic that would be routed in and back out the external interface gets rejected.




${IPTABLES} -N spoof
${IPTABLES} -A spoof -j LOG --log-prefix 'SPOOF' ${LOGOPTIONS}
${IPTABLES} -A spoof -j REJECT

for PRIVNET in 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 169.254.0.0/16 127.0.0.0/8
do
${IPTABLES} -t filter -A FORWARD -i ${EXTIF} -s ${PRIVNET} -j spoof
done


Here we want to deal with traffic coming in from the Internet that shouldn’t actually originate from the Internet because it’s on private/reserved/loopback IP space. These are known as Martian packets.




echo 1 > /proc/sys/net/ipv4/ip_forward



Allow the system to function as a router.




for RULE in `cat ${PORTFWFILE} | grep -v ^# | grep -v '^[ \t]\{0,\}$'`
do
INPORT=`echo ${RULE} | cut -d: -f 1`
OUTIP=`echo ${RULE} | cut -d: -f 2`
OUTPORT=`echo ${RULE} | cut -d: -f 3`
PROTO=`echo ${RULE} | cut -d: -f 4`
${IPTABLES} -t nat -A PREROUTING -d ${EXTIP} -p ${PROTO} --dport ${INPORT} -j DNAT --to ${OUTIP}:${OUTPORT}
${IPTABLES} -A INPUT -p ${PROTO} -m state --state NEW --dport ${INPORT} -i ${EXTIF} -j ACCEPT
${IPTABLES} -t nat -A POSTROUTING -p ${PROTO} -s ${INTIP}${INTCIDR} -d ${OUTIP} --dport ${OUTPORT} -j SNAT --to ${EXTIP}
done


This sets up port forward rules based on the contents of PORTFWFILE. The file has one rule per line and allows comments beginning with # and blank/whitespace-only lines. Fields are separated by colons. The first is the port coming into the firewall. The second is the IP on the internal network the packet should be sent to. The third is the port it should be sent to. The last is the protocol the rule applies to (tcp or udp). The first iptables command performs the port manipulation, the second allows the packet.



The last is funky and optional. If you host services on your internal network but other internal clients reach them by the external IP (probably via DNS), weird routing can happen. Those packets never go out the external interface, so the general SNAT rule doesn’t rewrite their source address. They are directed at the external IP, so they do get destination NATted and sent to the internal server, but the source address is still the internal client. The internal server responds to the internal client directly. The internal client was expecting responses to come from the external IP, not the internal server IP, so it discards the reply packets. The last rule source NATs traffic from the internal network headed to a forwarded service to the external IP, forcing the replies back through the router. You could source NAT it to the internal IP of the router instead; I prefer the external IP because it makes it easy to differentiate between traffic originating from the internal network and traffic originating from the router itself.



The /etc/firewall/portfw file looks like this:




# inport:dest IP:dest port:(tcp|udp)

22:192.168.128.50:22:tcp
53:192.168.128.50:53:udp
53:192.168.128.50:53:tcp
25:192.168.128.50:25:tcp


Two of my networking objectives are knocked out. I initiated a soft reboot and found that router-client could get out just fine. When I ssh to 192.168.1.180 I end up logging in to router-client.
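
A quick sanity check of that port forward from a machine on the real LAN (jason being the account created during the install):

ssh jason@192.168.1.180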



DHCP



Now, on to DHCP. I used ISC DHCP (the same group that makes BIND).



apt-get install dhcp3-server



I only want to serve DHCP on my internal network, not for my ISP. I edit /etc/default/dhcp3-server:



INTERFACES="eth1"



Then to configure the actual DHCP daemon, I made the following edits to /etc/dhcp3/dhcpd.conf:




option domain-name "internal.lub-dub.org";
option domain-name-servers ns1.internal.lub-dub.org;
...
authoritative;
...
subnet 192.168.128.0 netmask 255.255.255.0 {
range 192.168.128.100 192.168.128.200;
option domain-name-servers ns1.internal.lub-dub.org;
option domain-name "internal.lub-dub.org";
option routers 192.168.128.1;
option broadcast-address 192.168.128.255;
default-lease-time 600;
max-lease-time 7200;
}
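

Before restarting, the config can be test-parsed. Assuming the Etch dhcp3 binary name, something like this should do it:

dhcpd3 -t -cf /etc/dhcp3/dhcpd.conf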


Restart DHCP:



/etc/init.d/dhcp3-server restart



I configured router-client for DHCP and restarted networking to verify that it would get an IP address, which it did. My port forwarding seemed to break here because router-client was no longer at 192.168.128.50.
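
For reference, configuring router-client for DHCP just means putting the stock stanza from earlier back into its /etc/network/interfaces:

allow-hotplug eth0
iface eth0 inet dhcp

and then restarting networking:

/etc/init.d/networking restart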



Next is to make static DHCP assignments.



I create a file called /etc/dhcp3/static-dhcp.conf with these contents:




host router-client {
hardware ethernet 00:0c:29:ae:37:10;
fixed-address router-client.internal.lub-dub.org;
}


The ethernet address is the one assigned by VMWare. The fixed-address line needs to resolve on the DHCP host, so I put an entry in my /etc/hosts file for router-client.internal.lub-dub.org pointing to 192.168.128.50 (shown below). On my actual network, everything has proper forward and reverse records. I add this line to my dhcpd.conf:



include "/etc/dhcp3/static-dhcp.conf";
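

For reference, the /etc/hosts entry mentioned above is just the standard format:

192.168.128.50    router-client.internal.lub-dub.org    router-client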



I restart dhcpd on frankenstein-test. I then restart networking on router-client and find that it gets 192.168.128.50. My port forwards now work again.



Conclusion



My test router works in my virtual environment and is ready to be loaded on my hardware with a good hard drive. This article only covered making a basic firewall router. There are countless enhancements that could be applied to this system. One possibility is a caching proxy on the internal interface. You could also provide DNS service directly off this system. It’s my opinion that the best decision for security is to have as few services on the firewall as you can get away with, and that those you do run should listen only on your internal interfaces.



There are also a number of security enhancements you can apply to this system. It can be configured not to answer ICMP, or to run Snort or other IDS/IPS applications. Keep in mind that the more you add to this security appliance, the more potential security issues there are. As I make major changes to my firewall I’ll try to produce articles to match.
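
As one small example (not part of the script above), a rule like this would silently drop pings arriving on the external interface:

iptables -A INPUT -i eth0 -p icmp --icmp-type echo-request -j DROP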



My next objective is to take all the explanation on this page of the iptables script and make it into comments in the script itself, where that information should be. I’ll also be doing an internal NTP server and arpwatch or something like it.



Check back on this page to see these enhancements. The bottom of the page has a timestamp of when it was last edited, along with an RSS feed link.






Tuesday, November 4, 2008

Black President

I’m sure there have been a number of people in America who have said something similar to this:




When we get a black president, I’m moving to Canada/Mexico/Wherever




Well, time to pony up.

Monday, November 3, 2008

Software Piracy (A Parable)

YAR! The diseased dangerous life fer me, HO HO!




GRR… had a nice, long, well-written article almost done and lost my draft. Here’s the abridged version




I don’t normally pirate software. I used to years ago, but got sick of it and switched to Linux for my desktop (already had it on my server). I found an application for Windows that I really wanted but it was $500-600 which is a lot of money for something I might not like.



So I pulled it down off BitTorrent. Knowing it might be a malware vehicle, I grabbed an open-source virus scanner and scanned it before installing. It came up clean, so I ran the patcher to disable the licensing check. A few hours later my Automatic Updates were turned off and wouldn’t come back on.



I start throwing all the free and open source malware removal tools I can at it and have little success. After three days (interleaved with work, sleep, etc) I think I’ve got my system clean but it required editing my NTFS partition from Linux and hand-hacking my registry using chntpw. I used to know quite a bit about removing Windows malware but I’ve gotten rusty, what with all this Linux and Mac OS X usage and all.



I learned a lot in the process but I estimate I spent 12-18 hours in this mess. Most of the time I couldn’t actually use my computer because I’m not willing to supply any login credentials to a compromised system. If I assume that the value of my time is reflected by my gross salary, I spent about $400-600 worth of time on this, not counting the loss of my free time. Since the software costs $500-600, it probably would have been cheaper for me just to buy it.

Thursday, October 9, 2008

Meatalurgy

I love science and I love food. When I can combine the two it is especially good. I had a brainstorm today that I think represents the future of food.



Metallurgy is essentially the study of metals and alloys (combinations of metals and other materials). Iron is one thing and carbon is another. When you combine the two you get steel, which has some of the properties of each, making a more useful material. It’s not that simple though, as different ratios of iron to carbon change the properties of the steel, as does adding other metals and materials. All of these combinations have special utility in specific situations - you can’t just make a steel that serves all purposes.



Classically trained Japanese Hamsmith
People have been combining foods for as long as we have been eating. This has varied from things as simple as applying sauce to something else to as complex as Turducken. Alas, our attempts at combining foods to date have been primitive at best. We put one on top of the other or perhaps stew one in the juice of another, but our ability to combine foods has drastically lagged behind developments in European methods of combining metals, despite our having much more interaction with food.



Combining Foods At a New (Molecular) Level



Despite the “stacking” approach we’ve taken with combining foods we haven’t done very well with it. I think true innovation in food combination requires that we look to ancient Japanese metallurgy for inspiration. They created steel of extremely high quality by layering and folding materials then exposing them to heat and pressure. I think we need to explore food combinations produced through a similar process, which I call “Meatalurgy”.



Layering of Meat Atoms
We must start by taking existing raw meats and slicing them as thin as possible, preferably only a few microns thick. We then layer them together, alternating meats in a distribution that creates the desired ratio of meats in use. For simple 1/3 shrimp, 1/3 bacon, 1/3 pork chop you would just rotate the application of layers.



In the same way that steel is neither iron nor carbon, meats produced in this fashion are not the meats from which they were produced. They take on a whole new flavor, texture, and color, and will require their own preparation techniques that will have to be discovered for each. As part of gaining acceptance, it will be important not to call one of these “hypermeats” by its ingredients and ratios but by a unique name. We often find things displeasing because they contradict our expectations, not on their own merits, so it’s important not to give the taster an inaccurate idea of what the hypermeat will taste like. A scientist colleague of mine suggested the name be a variation on the most dominant flavor. For example, if the most dominant taste is lamb you might call a hypermeat “Metrolamb”, even if there’s no actual lamb in the hypermeat.



Hypermeats Please PETA



An area of concern among many, particularly those in PETA, is the genetic engineering of animals. The current climate of genetic engineering is the injection of genetic material from one living thing into another to produce an animal with the properties of both. For a lot of people this presents a difficult moral problem. This technique could produce animals with meat that combines the taste of chicken with steak. However, I believe the hypermeats produced by Meatalurgy are superior in many ways.



Creating hypermeats through Meatalurgy allows for greater control over the combination of meats producing the hypermeat. The creator can specifically produce a hypermeat of any desired ratios. Because GEd hypermeats are organically grown, this control is impossible.



Also, creating hypermeats with Meatalurgy allows for very complex combinations that would be extremely difficult to produce through GE as they may represent combinations of DNA that are not compatible. Beyond this, Meatalurgy allows for the production of hypermeats with materials that are not meats, like Tofu. This creates a whole new world where someone might be “85% Vegetarian” meaning they will not eat meat that is not hypermeat and only hypermeat that is 85% Tofu or more.



The Way to a Tastier Future



To get this burgeoning science off the ground, we need to encourage collaboration among chefs, butchers, and engineers around the world. With the advent of the Internet, nascent meatalurgists can exchange meatalurgical formulas via email and produce the same hypermeats in their own labs as someone on the other side of the globe.



I see a whole new industry on the horizon, where meatalurgists create new and exciting tastes to be explored by hypermeat engineers and produced by hypermeat technicians.



Join me in the quest for new dimensions of taste in the name of science!

Sunday, September 28, 2008

Forever Kitten

I often have ideas that I consider million-dollar ideas (particularly given the current state of the dollar). This one is a billion-dollar idea.



Like and Love




People love kittens. When they go to buy a cat, most of the time people want to buy a kitten rather than a full-grown cat. I don’t fully understand why but perhaps the best part of owning a cat is cleaning up crap and replacing furniture. While there’s no accounting for taste, this does represent a critical market opportunity.



The problem with kittens (which people love) is that they eventually turn into cats (which people dislike or are indifferent towards). What someone needs to create is a kitten that stays a kitten forever.



Science to the Rescue!



With the strides we’ve made in genetic research, I think this is a reasonable goal. The primary problem is mortality. If we found a way to keep any mammal alive and young indefinitely we’d really be on to something, but both of these things are out of reach.



What’s needed to conquer the problem is a hard look at its real nature. When we talk about our desires of “forever” we’re generally talking about a lifetime, not until the end of the universe. For example, when people in a couple say they wish to stay together forever, they don’t mean until the Heat-Death of the universe, they mean until they die. It’s this connotation of “forever” that makes the problem relatively easy to solve.



Mortality: Cats Ask for It By Name!



Forever doesn’t really mean forever to us unless we’re speaking in a scientific, religious, or similar context that deals with the universe as a whole. When we say forever, we really mean “until death”. We don’t need to keep kittens in that state indefinitely, we need to keep them in that state until they die.



While there are a few variables here, we have the most control over one: death. We can’t control growth rates of kittens, but we can control their deaths. While the easiest way to go about this is in a purely mechanical way, the most elegant is to genetically engineer the cats to die sometime in the period of adolescence.



That’s Horrible! I love Fluffy!



With this leap, we can create kittens that never have to turn into cats. They stay cute, fuzzy, and cuddly until they die. As an owner of a Forever Kitten, you never have to deal with a fat, old, lazy cat. As a matter of fact, you never have to pay for vaccinations which can easily be several times the cost of the cat.



Some may note that they love their cats and don’t want them to die. Unfortunately, death is inevitable for all of us, save perhaps viruses and bacteria. Your cat will die whether you like it or not. Certainly there’s value in this cherished pet dying within a reasonable, designated period of time. This gives everyone the chance to prepare. Similarly, Forever Kitten teaches young ones the dangers of attachment and prepares them for the bereavement that will surely punctuate their life. Lastly, we all love Fluffy. But we loved him more when he was fluffier, more rambunctious, and more full of that youthful spark.



The Reality of Designer Pets



Many might argue the morality of Forever Kitten. Is it bad to engineer an animal to have mortal defects? Is it our place to decide the fate of animals yet to be born? We’ve already answered that question for ourselves time and time again.





For those that believe Forever Kitten is wrong, I offer the dachshund. This dog is the result of careful, selective breeding to produce an animal with its unique “weiner-dog” characteristics. As a result we have a dog that’s extremely cute in its size and awkwardness. Also as a result we have a breed of dogs so riddled with hereditary health problems that if it were your child-to-be, you’d strongly consider termination so that it might not suffer.



There is the heart of the question of morality. Is it acceptable for us to create an animal whose life will surely be a decline into serious, debilitating health problems as age sets in but not to create one that will live a normal, healthy, but shortened life?

Thursday, September 25, 2008

Abnomalies

My friend Aaron coined the term “abnomalies” and I like it, if for no other reason than that it’s Autological.




QlpoOTFBWSZTWSPtY64ARABfd2BxQwBgIHECCDCAQEABAIBAIEAAAACQADABGAFA
AAAACgAAAAAUqqMQ9BNNqYR5qNQEPYBD5AQ5AIYAQ2AIZAIYqqg7AIZAIYAQ9wEN
wCGKqoM9MAIdQEM/ACH8AhgBDuAhoAh/gIaeAENdQENaqoMAIZVVQfACHGqqDMBD
yAh5AQ9C7kinChIEfax1wA==




QlpoOTFBWSZTWQ14TiwAEKD/////9vhMJch7Y+xugK/n/mTrAUoAZgCHASDAgPpE
h1WdwAL8cqqgFUcAAAAAAAAAZDQAAAAAAAAAAcAAAAAAAAAZDQAAAAAAAAAAIpSm
epJ6QYQYmamgxDTTJiBhGE0GCNGgGTQ0DaE09JtI2gUqUmSn6Jk1PTSepoB6m0gB
o0HqA0AAAaAGNQ0009T1AZHlGmj29aAk1FfHnEngwTYgmvWkOGsVKYrAkxEu7rEE
41aCROQrBRbmpWQULWwUXwLGMVSaEwIohZrzF5dcsQAW/JsLPnFKSylKUiUhVS4n
RMrczdKLuL6O1EhRS2WkLMAoFIBFsVwV0q3KzIJs5q+YSZa1qzK2xEtlor5UATHW
fQhaMGILAu8s9ZuUuCVBsOr1qyGavDEnriTcgmVlrpwSYsbWIggiIiInLhoAn76q
01+sLXlXXp7f83+OuhBPhEmIVWw6C3WYSfvs16HsV9bonx9DrzRM7lEWkukAiaQv
rFp/hAE213hf7BOUJNN/Qk5zf0ZMmsxhjGMYZnPqwXq4xMYx9gkwORwq/TaV88Ez
MiIm1Pu6WSZMmgRE5wEROZBNUSdASebBMwCIm6rAERNd/ok7MSbbkt5prpt5hvsr
8GV2jiuOzNN1Gi6jZaLecxmfrBMNBtM7Scty2+0Gq4zXajiMiZU5OZxWww47OsrO
0lnTTa73mqztZxHaq3m+3eGCcqt6sCTq1wVwQE6KAJ1K6otv9xZ3BqLQXhWrWARK
+hqLV9I3RgmXREmlBNWs+3m/tXWr85jGK68AT/iz6wtqv+rLwrYoAkqi7SAJQRSN
sCImuJOvXPgn21zRJ/4u5IpwoSAa8JxY

Monday, March 31, 2008

Dump Unix File Mode and Ownership

I found several situations where I needed to recursively record file ownerships and permissions for the sake of documentation or to restore them later. I didn’t bother making something to do it until I had a project that required it.



This might not be the smartest way to go about this. I recognize that this functionality exists in rsync but you might not want to copy the files or you might want to recover this information later rather than on a remote system.



Dump



The following find command will recurse on /path and dump the uid, gid, mode, and path for each file into /path/to/dump. Be aware that if your dump file is within /path it is likely to appear in the list.




find /path -exec stat --format '%u %g %a %n' {} \; >> /path/to/dump
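

Each line of the dump then looks something like this (the values here are just an example):

1000 1000 644 /path/docs/notes.txt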


Restore



The following short perl script will read one or more dump files from standard input or as command line arguments and set the owner, group, and mode for each file therein. For this to work it must be run as root or via sudo.




#!/usr/bin/perl
# Read a mode/owner dump from standard input or as command line arguments

use strict;

while (my $input = <>) {
chomp $input;
my ($uid,$gid,$mode,$path) = split(/\s+/, $input, 4); # limit to 4 fields so paths containing spaces stay intact
print "$path: $uid:$gid $mode\n";
chmod oct($mode), $path;
chown $uid, $gid, $path;
}
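
Assuming the script is saved as restore-modes.pl and made executable, usage would be something like:

sudo ./restore-modes.pl /path/to/dump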

Sunday, March 30, 2008

Trigger a Command On a Log Event

I was trying to track down the cause of some iptables log messages. I wanted a packet dump while the problem was occurring, but the symptom appears sporadically with 15 minute or so gaps. There’s a lot of traffic flowing through the system in question, so if I leave tcpdump running I’ll have too much traffic to sort through. What I needed was a means of starting tcpdump when the log messages appear. Luckily for me the messages appear over 10-20 seconds, and I was pretty sure that as long as I grabbed some of them I’d get some insight, even if I missed a couple.



I realized I could just have tail follow the logs and stop when a matching line appeared, so tcpdump could then run. I had to run tcpdump via sudo, and my sudo token might expire before tcpdump was started, so I wrote a script to run under sudo:




#!/bin/bash

tail -n0 -f /var/log/syslog|grep -l WINDOW

tcpdump -nvv -s0 -c 1000 -w /tmp/blarg.pcap -p host 10.2.3.4 and not ether proto \\arp


The -n0 option has tail start by reading zero existing lines of the log file; the earlier entries were already in the log and I didn’t want grep to match on those. The -l option causes grep to stop at the first match, which lets the script fall through to tcpdump. I gave tcpdump -s0 so it would capture whole packets and -c 1000 to only capture 1000 packets.
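
Assuming the script above is saved as watch-window.sh and made executable, kicking it off is just:

sudo ./watch-window.sh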

Thursday, March 20, 2008

VA Car Stacking

Parking at the San Diego VA Hospital is a nightmare for outpatients and visitors. To help with this they instituted free valet parking so they can stack cars three or four deep. When I went to the VA Hospital a few weeks ago to get treated for the same illness that was sweeping across the country I found something disconcerting.



To combat the parking issues they’ve started using the entire visitor parking area as mixed valet and self-parking so they have more space to stack cars in. As I parked, I was directed to a specific spot, which was unexpected but convenient. I’m guessing that with self-parking they go three deep and make sure that a valet car is blocking at least one side so they can move it to let a self-parked car out.



As I’m making my way out of the parking lot I overhear an interesting exchange between a valet and a woman who seemed familiar with him. The woman inquired as to when the VA started doing this to which the valet responded, “The first.” The rest of the exchange went something like this:



“So what’s the flaw, having to run back and forth for the keys?” the woman inquired.



The valet responded making an effort to lower his voice, “No, the flaw is that the keys are on the front tire.”



I’ve developed sufficient self-control to keep walking nonchalantly, rather than going slack-jawed, when I hear something like this. The situation must be pretty desperate to do something this stupid. There actually are people who look around for stuff like this. There actually are people who notice stuff like this without trying. There actually are people who overhear valets explaining this. We can’t all have solid morals. Even moral people might notice a set of misplaced car keys (or 50 of them) and take them to the security window. If anyone reads this and happens to be near the San Diego VA Hospital, maybe you should turn in some found car keys. I’d do it but I’m just the idea guy (and I’m a chickenshit).



I’ve never thought that the convenience of having someone park your car was worth paying for. I’ve never used the free valet at the VA just because of the vague dislike of the idea of handing over my keys to a stranger. Now I have a very real concern about handing my keys over: they might not go into a monitored location. I’ve known that a valet might damage or steal my car but at least they’re accountable for that. I hadn’t previously considered the possibility that they might make it very easy for someone else to steal it.


Monday, January 28, 2008

Debian vs Gentoo

At work we were asked via email what we liked about Debian. This is my response:



I came to Debian from Gentoo and have found that everything that is a pain in Gentoo is sensible in Debian. Most of the things that are well done in Gentoo have comparable features in Debian.



In Gentoo, I have an incredible amount of control over my packaging. In Gentoo I can choose to have certain features included or omitted in a package without much effort. This reduces software bloat and reduces the potential points of failure/insecurity in a given piece of software. Package dependencies are usually handled very well and it’s easy to find and get information about packages. In Gentoo it’s trivial to install an unstable version of a specific package and restrict any package to certain versions (or to accept an unstable package up to a certain version). Gentoo cleanly separates configs for services and service startup. Gentoo is an awesome system to get things just the way you like them.



The problem with this level of detail and configurability is the need to actually manage it. Because Gentoo as an OS doesn’t have a version, sometimes you’ll perform regular updates and break a service; other distributions confine that kind of breakage to a new OS version, forcing you to reconfigure things intentionally. Debian makes intelligent decisions about when a new, incompatible version should be made available simply as a new package version or as part of the next OS version. Gentoo’s incredible detail and flexibility means that you have to consider the details and possibilities for each package that you install. While Debian doesn’t tend to offer you choices within a specific package, it does make intelligent decisions as to when to split something into multiple packages, when certain things should be defaults, and when certain things should go through a post-config dialog. Package updates in Gentoo are like Russian roulette while in Debian they’re just routine package updates.



Gentoo tries to make absolutely no assumptions about what you want. Debian assumes that you want a stable Linux system that is easy to configure and maintain, and acts accordingly.


Thursday, January 24, 2008

KDE 4.0 Screenshots Tour

I went through this KDE 4.0 screenshots tour to check it out. I can’t say I’m thoroughly impressed.



I’ve been using KDE for years and have loved it. I’ve been using a Mac for almost a year and I love that too. Given my experience with the two I think the new KDE looks good and all but I’m really unimpressed. I’ve read elsewhere that 4.1 is supposed to be the real deal. Let’s hope so. The above tour makes it look like they’re mostly just reimplementing features that Mac users have enjoyed for years. I think KDE 4.0 could beat the crap out of GNOME if they put a lot of work into integrating applications with DCOP or DCOP’s successor so KDE users can enjoy things that Mac+AppleScript+Quicksilver have enjoyed for years. I keep making comparisons to Mac because Mac really has created innovation on the desktop. While I think that KDE just isn’t stacking up, given that it’s FOSS I think it’s great.


Thursday, January 3, 2008

Pizza Hut Password

I like ordering pizza online because it’s really convenient. My preference is Papa John’s but I make concessions for others. I ordered pizza a while back from Pizza Hut (using nyms for the email address) and everything went okay.



I went back this evening but had forgotten my one-off password. I used their password reset and what did I get in my email? My original password. In cleartext. Normally the way this works is they email you a link that you can click on to set a new password. The link is only sent to your email so hopefully you’re the only person that gets it. The link in the email is specific to you and will eventually expire. Apparently, Pizza Hut’s software developers have never actually used any other e-commerce system, forum, news site, blog, or any other system that uses a password.



Now more than ever I’m making sure I use unique, randomly-generated passwords for everything, and if I lose one, so be it.