Saturday, October 2, 2021

Go: Reference Mutation Testing

In the Go community we do a lot of table-driven testing, where we write tests that iterate over a slice or map of test cases and run them through a single test where the inputs vary. These are great because they reduce code (and usually bugs!) and make it easier to focus on the data.

I've recently come up with a specialization of this that I think helps focus on the exact differences between "good" and "bad" data in test cases. I'm probably not the first person to come up with this, and if someone provides me with a link to prior art I'll update the post with the correct terminology and links to sources.

A common idiom for table-driven testing is that each test case provides a unique input and an expected outcome. For each case the full input needs to be provided. For test inputs with many fields this creates a lot of repetition, which makes it more difficult to see what the novel changes are from one test case to the next.

In this technique, rather than providing the full input for each test case, a single "reference" input is created and used repeatedly. The test cases then include a function that takes a pointer to a value of the reference type and mutates that value in some way. For each test case the reference value is copied, the operation under test is applied to the copied value, and the outcome is verified. Then the test case's mutator is called on the copied value, the operation is repeated, and the outcome is verified again.

I call this technique "reference mutation testing". Here's some example code:

package main

import (
	"errors"
	"testing"
)

type Record struct {
	Label string
	Value int
}

func (r Record) Validate() error {
	if len(r.Label) == 0 {
		return errors.New("label cannot be empty")
	}
	if len(r.Label) > 64 {
		return errors.New("label cannot exceed 64 characters")
	}
	if r.Value < 0 {
		return errors.New("value cannot be negative")
	}
	if r.Value > 1023 {
		return errors.New("value cannot exceed 1023")
	}
	return nil
}

func TestValidateErrors(t *testing.T) {
	t.Parallel()
	goodRecord := Record{
		Label: "test",
		Value: 16,
	}
	for label, tCase := range map[string]struct {
		mutate    func(*Record)
		expectErr bool
	}{
		"empty label": {
			mutate: func(r *Record) {
				r.Label = ""
			},
			expectErr: true,
		},
		"short label": {
			mutate: func(r *Record) {
				r.Label = "t"
			},
			expectErr: false,
		},
		"overlong label": {
			mutate: func(r *Record) {
				r.Label = "0123456789abcdef" +
					"0123456789abcdef" +
					"0123456789abcdef" +
					"0123456789abcdef" +
					"0"
			},
			expectErr: true,
		},
		"long label": {
			mutate: func(r *Record) {
				r.Label = "0123456789abcdef" +
					"0123456789abcdef" +
					"0123456789abcdef" +
					"0123456789abcdef"
			},
			expectErr: false,
		},
		"overhigh value": {
			mutate: func(r *Record) {
				r.Value = 1024
			},
			expectErr: true,
		},
		"high value": {
			mutate: func(r *Record) {
				r.Value = 1023
			},
			expectErr: false,
		},
		"negative value": {
			mutate: func(r *Record) {
				r.Value = -1
			},
			expectErr: true,
		},
		"0 value": {
			mutate: func(r *Record) {
				r.Value = 0
			},
			expectErr: false,
		},
	} {
		label, tCase := label, tCase
		goodRecord := goodRecord
		t.Run(label, func(t *testing.T) {
			t.Parallel()
			if err := goodRecord.Validate(); err != nil {
				t.Fatalf("unexpected error: %s", err.Error())
			}
			tCase.mutate(&goodRecord)
			err := goodRecord.Validate()
			if tCase.expectErr && err == nil {
				t.Fatal("expected error, got none")
			} else if !tCase.expectErr && err != nil {
				t.Fatalf("got unexpected error: %s", err)
			}
		})
	}
}

Using the mutator function makes it clear what the exact differences are between each test case. Since you're mutating the reference value it's important that you work on a copy. It's tempting to factor testing the reference value out of the test case, but you shouldn't. Even when working on a copy of the reference value, changes can persist: a copy shares any member slices or maps with the original, so mutating their contents affects both. By testing the reference value each time you reduce the chances that a persisting mutation will go undetected.
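
To make that pitfall concrete, here's a minimal sketch using a hypothetical variant of Record with a slice member, showing a mutation leaking through a copy:

package main

import "fmt"

type TaggedRecord struct {
	Label string
	Tags  []string
}

func main() {
	reference := TaggedRecord{Label: "test", Tags: []string{"a", "b"}}
	copied := reference
	// The copy gets its own Label but shares the backing array of Tags.
	copied.Tags[0] = "mutated"
	fmt.Println(reference.Tags[0]) // prints "mutated": the change persisted
}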

This technique isn't ideal for every situation or even every table-driven test but it can be handy where you want to test how behavior changes in response to very specific changes to input.

Tuesday, September 21, 2021

Not Quite Zero

Fact and Fiction In Zero Trust Architecture

Zero Trust Architecture (ZTA) is a panacea for organizational security challenges... at least that's what vendors would have you believe. There is so much confusion around what ZTA is and isn't that it's easy to believe that it's all smoke and mirrors; a grift to siphon millions from corporations desperate to shift their liability to someone else.


ZTA is a real thing but it's not a panacea or product. It's a philosophy for designing interactions between clients and services. Implementing ZTA involves tackling a mountain of individualized complexity. In this post I'll discuss some of this complexity and in doing so I hope to separate the sham from the substance and help you decide whether or not ZTA is something worth pursuing within your organization. Along the way I hope to show that implementing a Zero Trust Architecture is not an advanced technology problem, but one of integrating technologies we're already familiar with.


Much like Kubernetes, if you don't know exactly why you need Zero Trust Architecture you probably don't, and the process of trying to implement it will be expensive and painful. You should know what you're getting into.

Why Me?

I was on the Google security team during the conception and implementation of BeyondCorp: Google's ZTA solution. I developed components related to machine inventory and device access. I know this flavor of Kool-Aid very well; I've seen things that worked and things that went poorly.


Everything I'm discussing here is relative to Google's BeyondCorp project. Other ZTA strategies may exist; I'm not an authority on them. Additionally, I left Google in 2016 and I'm confident that their solution has seen significant change since that time. That said, I'm sure the concepts are still valuable.

Problem Statement

It's a common aspect of network architecture that you divide your network into different segments for different purposes. Perhaps you have a common network for clients, a restricted network for internal services, a guest network for untrusted devices, and one or more lab networks for R&D or testing.


The common client network is intended for corporate devices. Those devices need access to services that shouldn't be exposed to the Internet at large, and those services are either on the same network as the clients, directly exposed to those clients, or separated by only minimal segregation and filtering. These are company client devices and are assumed trustworthy.


The internal network may have poorly-protected ingress points such as shared WiFi passwords or network jacks with no access control. Arbitrary devices can access these internal networks, regardless of whether or not they are corporate devices.


Even devices on networks with strong mechanisms to keep out non-corporate devices (e.g. 802.1x) aren't strictly trustworthy. Compromised clients can relay traffic from external attackers onto the internal network.


In both cases the network has a hard outer shell and a soft interior. That hard shell is easily bypassed by determined attackers, bringing the bad guys onto the good network. Security practitioners have long understood that trusting the internal network is tenuous at best.

What Is Zero Trust Architecture?

Zero Trust Architecture primarily attempts to address the trust misplaced in corporate networks. The crux of ZTA is that client devices are distinctly identified and that access to the service is granted not just based on the access granted to the user but also based on the security posture of the device.  In the following sections we discuss different aspects of device authentication and authorization, each progressing from the status quo to the ZTA ideal.

Client Authentication

Distinctly identifying the client device requires each device to have unique credentials that can be used to distinguish between it and others. A simple device secret could suffice, as is the case with classic Kerberos. When these secrets are held in the filesystem, an attacker compromising the device can copy that secret and impersonate the device.


Client X.509 certificates are a commonly used mechanism for device authentication with mutual TLS (mTLS). Usually a client certificate's private key is held in the filesystem and can be exported by an attacker just like a simple secret. Managing an internal corporate X.509 public key infrastructure (PKI) is tricky and requires internal development if it is to operate successfully at a non-trivial scale. There are few standards for device enrollment and little in the way of off-the-shelf solutions.
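
For services you run yourself, requiring client certificates is the easy part; Go's standard library can express it in a few lines. A minimal sketch, assuming a corporate CA certificate in ca.pem and server credentials in server.crt and server.key (all file names invented here):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Trust only the corporate CA for client certificates.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			ClientCAs: pool,
			// Reject any client that can't present a valid certificate.
			ClientAuth: tls.RequireAndVerifyClientCert,
		},
	}
	log.Fatal(server.ListenAndServeTLS("server.crt", "server.key"))
}

The hard parts the post describes, enrollment, rotation, and non-exportable keys, all live outside this snippet.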


Ideally, a certificate's private key is held in hardware in a way that it can't be exported, such as in a Trusted Platform Module (TPM). Clients with certificates based on non-exportable keys have the strongest device authentication. Issuing certificates for these devices can be tricky and require custom automation beyond conventional PKI certificate enrollment processes.


Many devices will support mTLS but can't provide certificates based on non-exportable keys. Most commonly this includes user devices such as laptops and desktops with software configurations that support mTLS but lack the crypto hardware for non-exportable keys. These devices can uniquely identify themselves but not in such a way that they can't be impersonated if compromised.


Finally, there are devices that can't authenticate themselves strongly, like simple IoT devices, or at all, like classic network printers.

Device State Based Access

Services normally don't consider the client device at all when making access decisions. In some rare cases services will attempt to identify the client device as "one of ours" with mTLS. It is almost never the case that access is granted or denied based on how secure the device is.


Corporate IT departments maintain a device inventory. If it's up to date, devices can be recognized as corporate assets, assuming they can be identified.


IT departments often deploy centralized device management solutions. An agent runs on each device and reports the current state of the device to a central system and potentially corrects deviations from the desired configuration. The central management system knows whether or not a device has the properties of a secure configuration such as full disk encryption, up to date patches, client firewall, etc.


Similarly there are often endpoint protection solutions deployed to each device. These attempt to prevent intrusions and will report attempted intrusions and indications of compromise to a central monitoring system.


Ideally, services that make access decisions would identify the client device and consider its current state in granting or denying access to resources. Basic device inventory can distinguish between "friendly" and "unfriendly" devices. The inventory indicates who the device was issued to and access might only be granted to friendly devices where the authenticated user matches the user the device was issued to.


Device management knows if the device has a suitable security configuration and that the configuration has been checked recently. Similarly endpoint security knows with some certainty if the device has been compromised.

Device State Policy

When we grant users access to protected resources we do so while ensuring that those users are capable of and prepared to protect those resources. We do this through user training and by hopefully not putting a user in a role where they can't fulfill their security obligations.


We don't usually take into account similar considerations for devices. A lot of valuable information about device posture is available but not taken into consideration when making an access decision. Should a device that is known to be compromised be allowed to access any resources that aren't part of the remediation process? Should a device be permitted to download intellectual property if it's not known to be able to protect that data when that device is lost or stolen?


Devices accessing the most sensitive resources should provide the best available protections. They should have full disk encryption with the company's configuration to reduce the chances that data can be recovered if the device is stolen. This configuration may be distinct to each organization. There may be multiple acceptable configurations in the device fleet simultaneously, especially when users have varied platforms. When a decision is made to grant or deny access to a resource, the policy system should understand what constitutes appropriate full disk encryption and know how to verify it by checking with configuration management.


This extends to other security and IT controls like endpoint security, device management, automated backups, etc. These each play a role in protecting data and resources and it makes sense to deny access to devices that can't protect those resources.


It's not feasible, however, to require the strictest device configuration to access all resources. A service providing software and updates may need to provide packages and configuration required for the device to be in the desired configuration. The device can't attest to its endpoint security configuration if it doesn't have the endpoint security software installed.


In a Zero Trust Architecture the current state of the accessing device is taken into consideration as much as possible when making access decisions. Which security properties are required and which are nice to have varies from organization to organization and even from resource to resource. Access policies cannot be delivered "canned" or prescribed by an external group and can't be part of a generic solution.
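
To make that concrete, here's a sketch of per-resource policy expressed as data. Every name here is invented for illustration; the point is that each resource declares which device properties it requires:

// Hypothetical device state assembled from inventory, device
// management, and endpoint security systems.
type DeviceState struct {
	Inventoried       bool
	FullDiskEncrypted bool
	PatchedRecently   bool
	EndpointSecurity  bool
	Compromised       bool
}

// Each resource names the device properties it requires.
var policies = map[string]func(d DeviceState) bool{
	"software-update-service": func(d DeviceState) bool {
		// Lenient: devices need updates to become compliant.
		return d.Inventoried && !d.Compromised
	},
	"source-code-repo": func(d DeviceState) bool {
		return d.Inventoried && d.FullDiskEncrypted &&
			d.PatchedRecently && d.EndpointSecurity && !d.Compromised
	},
}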

Enforcing Policy

Knowing and codifying device security requirements for resource access is useless if those policies cannot be enforced. While many services can authenticate a client through mTLS, authorization following from that is rare. Services will sometimes have rich languages for doing access control for users but rarely for devices. Getting the state data on which to make access decisions is a complex problem of its own.


Supposing the service has the device policy to enforce, it needs to know the device's state. Aspects of the device's state could be encoded into the certificate it presents. That state is then fixed for the lifetime of the certificate. If the device's state degrades the service won't know until the certificate either expires or shows up as revoked. Short-lived certificates add operational overhead, and if certificate enrollment requires user interaction it becomes a burden to the user. If the incident response team wants to keep a device from accessing sensitive resources, should they be content waiting for daily cert expiration? Revocation is an alternative, but revocation is rarely used and the state of the device likely changes frequently.


Few services have the capability to enforce these policies. The simplest solution is to put services behind a reverse proxy that can enforce policy. This proxy then needs access to device state information. If it's not delivered as part of mTLS it needs to be available in some other way. The state information either needs to be synchronized into the proxy itself or the proxy needs a way to retrieve state information. If the proxy is retrieving the information it either needs to make queries to the sources of truth at request time which has performance cost and reliability risk or it needs to synchronize the data from those sources locally. This has better performance and reliability but will naturally provide stale results some of the time.
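
As a sketch of the reverse proxy approach: a hypothetical middleware that identifies the device by its mTLS certificate and consults a locally synchronized state cache. It assumes net/http and the DeviceState type from the sketch above; the lookup and policy functions stand in for real integrations:

func requireDevicePolicy(lookup func(serial string) (DeviceState, bool), allowed func(DeviceState) bool, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
			http.Error(w, "client certificate required", http.StatusForbidden)
			return
		}
		// Identify the device by its certificate's serial number.
		serial := r.TLS.PeerCertificates[0].SerialNumber.String()
		state, ok := lookup(serial)
		if !ok {
			http.Error(w, "unknown device", http.StatusForbidden)
			return
		}
		if !allowed(state) {
			http.Error(w, "device does not meet policy", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}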


Given a standard way to retrieve state data, does each service have its own integration? This can represent large development or operational overhead. Device state can be centrally collected to a service of its own. Policy decisions can then be made using state data managed by a shared resource. This simplifies the individual services but adds a potential point of failure to the serving infrastructure.


Getting the state information to consult is a challenge on its own. Device management, endpoint security, and related systems often have APIs that automation can interact with but these APIs are usually unique. There are few common APIs or data schemas to allow for interoperability. For each API you wish to retrieve data from you likely need to create a custom integration.


The proxy solution is viable for services you control. Cloud SaaS services are another matter entirely. An organization can try to put a cloud service behind a reverse proxy but this is frequently brittle and users will often find a way around the proxy which will usually be faster.


All of this is for HTTP services. It's possible to create a TCP proxy that can enforce device policy but how does it deny a connection? The general solution is to send an RST. For the user, this is indistinguishable from a non-security-related service failure; they can't tell if they've been denied or if the service is broken, leading to support headaches. This TCP proxy could be aware of the protocol and attempt to send a protocol-specific message, such as IMAP's AUTHORIZATIONFAILED. Now the proxy needs to be aware of the protocols it's passing.

Solutions

Implementing a real Zero Trust Architecture presents a lot of problems. The solution for Google is straightforward: leverage an army of world-class software engineers. Google created most of the technology it uses and has the resources to adapt them to new standards. As a web technology company it's not crazy for them to make every service a web service and put it behind a web proxy. They use very little vendor software and have little trouble building integrations between those systems that generate state data, those that house the state data, and those that need the state data.


Chances are, you're not Google or a company with similar resources. Every organization has their own security infrastructure and device configurations. Taking all of those unique properties into account requires custom integrations. The whole point of Zero Trust Architecture is to make security decisions based on how the organization's devices should be configured and how they should behave. That information is available in the disparate device management solutions but the systems that should be making use of it don't have access to it.


There aren't off the shelf solutions to pull this together so if you want it you have to build it. This was a significant undertaking even for the might of Google, but Google has orders of magnitude more resources, infrastructure, and sources of data than most companies. While you may not have the resources to throw at the problem, you also don't have the same scale of problems.


The goal of Zero Trust Architecture is to use all available data when making an access decision. The challenge is that this data isn't very available. For Zero Trust Architecture to be realistic for all but the most well-heeled organizations there need to be new standards, particularly around interoperability. Systems collecting device state data must make that data available to centralized collection systems in a common way, such that integration is a matter of configuration rather than bespoke software development. Similarly, we need standards for the systems that have to make access decisions to retrieve that data.

Misconceptions

It Means Don't Trust Anything

Zero Trust is a terrible name; Zero Faith would be more accurate. It's really about being very explicit about what does and doesn't make a device trustworthy for a certain action.

It Means All Devices Use mTLS

Yes and no. Client certificates are much stronger device authentication than IP addresses, for sure. In the simple case a client certificate and private key are simply files that can be exported from the device. An attacker with these files can then impersonate a compromised device from other devices they control. Ideally, mTLS is rooted in TPM credentials which cannot be exported from the device.


A compromised device with non-exportable credentials can perform proper authentication under an attacker's control. Strong client authentication only establishes the device's identity, not that the device is trustworthy.


Some devices simply can't do mTLS on their own. A multifunction copier is never going to be subject to strong authentication or state attestation. For those devices, the services they need to access either need to have more lenient security requirements or something else must act on the device's behalf.

It Means Get Rid of Your Firewalls

We use firewalls to segregate "our network" from "not our network"; while not strictly safe the former is safer than the latter. If we don't trust the internal network there's no point in segregating it.


Having devices on an internal network doesn't make them "good", but it can make them "not obviously bad". Your firewall can keep out obviously bad stuff. If you don't run your own mail services, is there any reason to permit mail traffic onto your network? If you never expect to receive traffic from some part of the world, maybe you can afford to preemptively filter it out. Your coarse filtering devices help ensure that your network resources are devoted to your needs.

It Means Get Rid of Your VPN

VPNs are a useful tool. They also allow your users to extend the corporate network into arbitrary other networks. When services can make use of strong device authentication it becomes more reasonable to make them Internet-facing. This is convenient because it reduces the time users need to be on the VPN. For some services it will be infeasible for the organization to restrict access with strong device authentication. For these services a VPN might be the right solution.

It's a Product

There's no shortage of vendors who will sell you a Zero Trust Architecture solution. At best they are taking creative liberties with the term. At worst they know they're selling snake oil. In between are a lot of products and services that represent a partial solution.


No vendor is offering a solution that works with all of your services and understands all of your client devices. Perhaps it's a viable offering if they provide 100% outsourced IT. Then the vendor controls all the clients and all the services and can ensure that all aspects of IT infrastructure have Zero Trust solutions.

Summary

Zero Trust Architecture is the future of network security. As is often the case, the future has not been evenly distributed. If we want this capability to be commonly available we must put pressure on vendors to ensure simple, robust, efficient APIs are available for retrieving data about our devices from their products. For services, we need them to have mechanisms to make use of the data being collected. When vendors are competing for our contracts we can choose interoperability as a key deciding factor.

Tuesday, August 11, 2020

Go: Convert errors.Wrap calls to fmt.Errorf

I was a longtime fan of https://github.com/pkg/errors. It was a great way to add context to why an error was being returned, which made tracing errors easier. The need for pkg/errors has gone away with the new fmt.Errorf %w verb, errors.Is(), and errors.As().

I used errors.Wrap() a lot so naturally my code has lots of function calls I need to migrate. One repo had close to 300 calls to errors.Wrap() which is more than I'm willing to do by hand. I wrote a simple tool to take care of the most common case I have: errors.Wrap(err, "<message>").
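
The transformation it performs looks like this (the message string here is just an example):

// before
return nil, errors.Wrap(err, "opening config")
// after
return nil, fmt.Errorf("opening config: %w", err)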

  • Any line it doesn't know how to handle it leaves unchanged.
  • It looks specifically for errors.Wrap(err, "
  • On that same line it expects to find a double quote followed by a closing paren
  • The existing context string has : %w appended to it
  • It does not edit your imports; you should run goimports or a similar tool
  • By default the tool just outputs to stdout; use -o to overwrite the file in-place
  • Fix everything by doing for i in $(grep -Rl errors.Wrap .); do errors_wrap_convert -in $i -o; done
  • Definitely make sure you have a snapshot of your code to revert back to in case this tool does bad things
  • This could have been done better using gofix but I was in too much of a hurry to learn how to extend gofix.
# errors_wrap_convert.go
package main

import (
        "bufio"
        "bytes"
        "flag"
        "fmt"
        "io"
        "io/ioutil"
        "log"
        "os"
        "strings"
)

var (
        fIn        = flag.String("in", "", "input file")
        fOverwrite = flag.Bool("o", false, "overwrite the existing file")
)

func fatalIfError(err error, msg string) {
        if err != nil {
                log.Fatal("error ", msg, ": ", err)
        }
}

func main() {
        flag.Parse()
        b, err := ioutil.ReadFile(*fIn)
        fatalIfError(err, "reading input file")

        var out io.WriteCloser = os.Stdout
        if *fOverwrite {
                out, err = os.Create(*fIn)
                fatalIfError(err, "opening output file")
        }
        defer out.Close()

        scanner := bufio.NewScanner(bytes.NewBuffer(b))
        for scanner.Scan() {
                fmt.Fprintln(out, Rewrite(scanner.Text()))
        }
        fatalIfError(scanner.Err(), "scanner error")
}


func Rewrite(in string) string {
        idx := strings.Index(in, `errors.Wrap(err, "`)
        if idx == -1 {
                return in
        }

        eIdx := strings.Index(in[idx:], ")")
        if eIdx == -1 {
                return in
        }
        eIdx += idx

        q1Idx := strings.Index(in[idx:], `"`)
        if q1Idx == -1 {
                return in
        }
        q1Idx += idx

        q2Idx := eIdx - 1
        if in[q2Idx] != '"' {
                return in
        }

        out := in[:idx] +
                `fmt.Errorf(` +
                in[q1Idx:q2Idx] +
                `: %w", err)` +
                in[eIdx+1:]
        return out
}

And a couple of basic tests:

# errors_convert_test.go
package main

import "testing"

func TestRewrite(t *testing.T) {
        t.Parallel()
        for in, want := range map[string]string{
                "": "",
                `               return nil, errors.Wrap(err, "bad thing") // foo bar`: `                return nil, fmt.Errorf("bad thing: %w", err) // foo bar`,
                `return nil, errors.Wrap(err, "foo " + blarg + " bar")`: `return nil, fmt.Errorf("foo " + blarg + " bar: %w", err)`,
        } {
                got := Rewrite(in)
                if got != want {
                        t.Fatalf("got %q, want %q, for %q", got, want, in)
                }
        }
}

I had searched for a tool to do this but it either doesn't exist or my searching ability failed me. If you would like to pick this up and generalize I'd happily refer to your version as canonical.

Sunday, August 2, 2020

The Autobucket Saga

The Leaking A/C and Early Failure

A few years ago our air conditioning started leaking. We discovered this when a stream of water started running from the corner of our kitchen's ballast lighting. Naturally we were alarmed. We found the location of the leak and got a plastic tub underneath it to catch the water. Until we could get a technician to the house we got to choose why we weren't sleeping well; either because it was too hot with the A/C off or every couple of hours one of us had to empty the tub with a wet vac.

This problem annoyed me. I'm a smart, technical guy, I should be able to solve this. I had taught myself some electronics and should be able to programmatically control a pump. I bought a little 5v USB pump and some float switches. I hooked it all up to a raspberry pi with the pump's power being controlled by the pi via an NPN transistor. Pump turns on when the high-water mark switch closes. Pump turns off when the low-water mark switch opens. Super simple, and for the life of me I couldn't get it to actually work.

The Student Elevates Himself

Since then I've learned a lot more about electronics, though I'm still a newbie. This year at BSides San Diego I bought an arduino-compatible microcontroller board and some other components as a way to help fund the event. Since I bought them I had to experiment with them!

The basic stuff is pretty easy! In the course of fiddling and experimenting I realized the problem with my original setup: I wasn't tying the pump control transistor or the float switches to a ground reference (via resistor, of course).

I spoke about my new understanding and new confidence to my loving partner. She noted how the condensation from the A/C just gets pumped out down the side of the house. We catch it with a bucket but rarely think to dump that water on our orange trees. It sure would be nice to have the water moved over there automatically! I was inspired.

The Autobucket Is Born

This came together on a breadboard pretty quickly.

Random USB wires to your gaming laptop is fine, right?

Each float switch goes to a GPIO pin on a raspi zero w with the other side having a 10k resistor to ground and a 1k resistor to the pi's 3v3. I elected to have discrete pulldown resistors rather than integrated pulldown/pullup purely because I understand it better. The pump was directly connected to USB 5v with ground going to the NPN's collector. The base has 10k to ground and 1k to a GPIO pin. I wrote all the software in Go with gobot, including a feature to notify me via Telegram as the pump cycles on and off.
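
The control logic is only a few lines with gobot. Here's a minimal sketch of the idea; the pin numbers are invented, I'm driving the pump transistor with the LED driver since it's just a digital output, and the Telegram notifications are omitted:

package main

import (
	"gobot.io/x/gobot"
	"gobot.io/x/gobot/drivers/gpio"
	"gobot.io/x/gobot/platforms/raspi"
)

func main() {
	r := raspi.NewAdaptor()
	high := gpio.NewButtonDriver(r, "11") // high-water float switch
	low := gpio.NewButtonDriver(r, "13")  // low-water float switch
	pump := gpio.NewLedDriver(r, "15")    // GPIO pin driving the NPN base

	work := func() {
		// Pump turns on when the high-water mark switch closes.
		high.On(gpio.ButtonPush, func(data interface{}) {
			pump.On()
		})
		// Pump turns off when the low-water mark switch opens.
		low.On(gpio.ButtonRelease, func(data interface{}) {
			pump.Off()
		})
	}

	robot := gobot.NewRobot("autobucket",
		[]gobot.Connection{r},
		[]gobot.Device{high, low, pump},
		work,
	)
	robot.Start()
}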

Amazingly, it just worked! On my bench I could move the float switches and see the program change state. Rather than powering on a dry pump I plugged in a device that charges via USB with a charging indicator LED. In the appropriate states it would turn on and off. Awesome!

Next up, the bucket. We have a bunch of orange buckets so hacking one up wasn't an issue:

Water, wires: besties!

I installed a couple of holes at different heights and installed the float switches. With their rubber gaskets it didn't even leak! I dropped the pump in there and for the time being just accepted that it didn't sit fully at the bottom but sufficiently below the low-water mark.

I didn't want the control system exposed to rain and sunlight and I needed it to be near power. I have a covered patio nearby with AC outlets, I just needed to establish connectivity between the two points. I needed 5 wires: USB+, USB ground, 3v3+, float switch 1, float switch 2. I have a supply of CAT-5 which is great since it even provides easy-to-differentiate individual wires. I wanted to be able to disconnect it so two wires went to a female USB connector and three into a molex hard drive power connector from my parts box. With this in place I could connect and disconnect as needed. Once things were settled I could shrink-wrap the connections for some weather protection.

I'm playing outside!

Testing again with the circuit on the board and again, success! You may note in this photo that power is supplied by exposed USB wires and gator clips. This is fine for a day of testing but not a workable long-term solution.

I'd like a neater board but don't keep perfboard handy. I do, however, have a 3D printer, a decent ability with CAD, and general lack of good judgement.


Sorry about your gag reflex.

Since I'm already using CAT-5 to connect to the bucket, why not RJ45? I had some RJ45 keystone jacks so I super glued a couple to my board. One was intended to connect to the bucket, the other to go to the raspi. Instead I ended up connecting to the raspi via a female header connector snipped in half then glued together so that I could easily connect and disconnect to GPIO. While I was at it, I hooked up a BME280 sensor via I2C so I'd have an outdoor temperature/pressure/humidity sensor whose readings I could expose via a web server.

For power I grabbed a phone cable I had with only two wires. Part of the way through the conversion I had something that at least made me chuckle:

Windows is configuring your new magic smoke

This will need an enclosure but for the next step I started with a semi-disposable plastic storage container.

Pioneering Avant Garde project enclosures

And much to my surprise, it's still working at this stage. Before I go for a more permanent enclosure I want to let it run for a few days and make sure it doesn't need any changes.

Trouble In the Garden of E-Dumb

It does work, mostly. The pump is tiny and weak, which is to be expected. I don't really care how fast the water makes its way to the orange trees, just that it gets there. Eventually, though, the pump gets started but never shuts off: the pump is on but no water is flowing. It has to push the water through 1/4" inner diameter vinyl tubing up the height of the bucket, then a few yards over to the tree. I fluff the tubing and the water starts flowing. My hope at this point is that I'd only have to prime it after it's been idle for a while. The next step up in pump power is 12v and I'm reluctant to go there.

This problem persists. If I prime it, it gets going otherwise not much is happening. At first I thought maybe the primary purpose the pump is serving here is to get the initial water over the bucket height and then siphoning is taking care of the rest. It's not reliably doing even that though.

It eventually occurs to me that I can help the siphoning action by elevating the bucket. The path along the ground from the bucket to the tree is all flat. If the water source is elevated higher than the destination the siphoning should be more effective. I'm about to arrive...

The Autobucket: Passive Edition

I eventually realize I'm being pretty stupid. Gravity is doing most of the work here, and I can ensure that the water reservoir is higher than the outlet.

I don't need the pump, the sensors, or the control at all. I need a hole near the bottom of the bucket and to seal the tube in that hole. Gravity will cause the water to drain through the tube. I had fixated on a solution to the complete neglect of the objective.

In summary:
  • What I built: A network-connected gray water reclamation and irrigation system
  • What I needed: A bucket with a hose glued into it

Sometimes the dumb solution is the right solution

Reflections

I basically took three lefts instead of a right but the journey wasn't all for naught. I got validation that I had overcome my gaps in electronics knowledge since the leaky air conditioner.

One thing that went very well was my process. In past projects I've had frustrating failures by pushing through to a complete solution. When something didn't work I had gone so far that troubleshooting meant tearing down and starting over. This time I worked much more incrementally, validating my progress at each stage and having the chance to make corrections. While I rarely needed corrections along the way, the anxiety that I might have screwed something up and wasted all my time was minimal.

I diagrammed lots of things. I put things together on the breadboard to make it work then translated that to a circuit diagram that I could follow more easily. When wiring up the CAT-5 I wrote down which colors would do what before I made any connections. From then on I could easily know which was the correct wire. I could then maintain that color scheme for portions downstream from the cable to keep it consistent and easier to wrap my head around. I haven't built up my electronics chops yet to keep what each wire does in my head.

Wire type really matters. I've often used internal CAT-5 strands as plentiful solid-core wire with color-coding. When I put that on the female pin header it didn't flex for anything and was very hard to work with. I had some stranded but only three colors, which complicated things. I ordered an assorted-color stranded wire kit for subsequent projects and it's been great.

The weather sensor was a great addition. I loved being able to check it from my phone. I ended up scrapping the raspi setup and setting up just weather sensors with the BME280, an Adafruit ESP8266 board, and some software that made the data available in Prometheus format.

Chibi Weather Station

Ultimately we don't learn a whole lot from our successes; we learn much more reflecting on failure. Maybe my missteps can be useful to you.

Thursday, February 27, 2020

Priority Channel in Go

I'm kind of impressed with this ugly monster:

package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

func main() {
	const levels = 3
	const sourceDepth = 5
	sources := make([]chan int, levels)
	for i := 0; i < levels; i++ {
		sources[i] = make(chan int, sourceDepth)
	}
	out := make(chan int)

	ctx, cancel := context.WithCancel(context.Background())

	wg := &sync.WaitGroup{}
	pc := New(ctx, sources, 10, out)
	wg.Add(1)
	go func() {
		defer wg.Done()
		defer close(out)
		pc.Prioritize()
	}()

	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := range out {
			fmt.Println("i: ", i)
			time.Sleep(time.Second / 4)
		}
	}()

	for _, i := range []int{0, 2, 1, 0, 2, 1, 0, 2, 1} {
		fmt.Println("submitting ", i)
		pc.Submit(i, i)
	}
	time.Sleep(time.Second * 3)
	cancel()
	wg.Wait()
}

// PriorityChannel multiplexes several source channels onto one output,
// always draining lower-indexed (higher-priority) sources first.
type PriorityChannel struct {
	notify  chan struct{}
	sources []chan int
	out     chan int
	ctx     context.Context
}

func New(ctx context.Context, sources []chan int, cap int, out chan int) PriorityChannel {
	pc := PriorityChannel{
		notify:  make(chan struct{}, cap),
		sources: sources,
		out:     out,
		ctx:     ctx,
	}
	// notify starts full; a full channel means no items are pending.
	for i := 0; i < cap; i++ {
		pc.notify <- struct{}{}
	}
	return pc
}

func (pc PriorityChannel) Prioritize() {
	for {
		// Block until there's a value: Submit frees a slot in notify
		// only after it has enqueued an item.
		select {
		case pc.notify <- struct{}{}:
			// proceed
		case <-pc.ctx.Done():
			return
		}
	SOURCES:
		for _, rcv := range pc.sources {
			select {
			case i := <-rcv:
				pc.out <- i
				break SOURCES
			default:
				// keep looping
			}
		}
	}
}

func (pc PriorityChannel) Submit(i, priority int) {
	if priority < 0 || priority >= len(pc.sources) {
		panic("invalid priority")
	}
	// Enqueue first, then take a token so Prioritize wakes up.
	pc.sources[priority] <- i
	<-pc.notify
}

Monday, January 13, 2020

The Tool Concert: A Synopsis

Drummer: <Bonk uh dunk
Bonk uh dunk
Bonk uh dunk tsh
Bonk uh dunk
Bonk uh dunk
Bonk uh dunk tsh>

Lead Guitar: <Grong gugga gug
Gug grong gugga gug
Gug grong gugga gug>

Bass Guitar: <Do doon doon do>
(no one knows what a bassist is doing)

Singer: I can't express the pain of being intellectually superior to everyone

Background Visuals: <Terence McKenna and David Cronenberg are fighting for control of the Winamp visualization plugins>

My conclusion: Tool is an alternate reality version of Phish where they dedicated themselves to rebelling against yuppies and neocons.


In all seriousness though, it was a really great show, and I'm not really a Tool fan. For the songs I was familiar with they were exactly as you hear on the radio; Rush level precision.

The opening act was awful and I won't dignify the name. Tool played a two and a half hour set which included a fifteen minute intermission. Also, I've never seen a crowd so engaged.

I was surprised by how much I liked the show. Not my favorite style of music but they really did earn my respect. 

Friday, November 29, 2019

It's Possible To Not Feel Like Garbage

I commonly see a type of post on social media. In this post, the person says something to the effect of "You matter" or "You are loved". The intent is to bolster the spirits of people who feel hopeless. It's well-meaning but in my opinion is useless at best and counterproductive at worst.

When you have depression part of your brain is dedicated to crushing your spirit. It knows all about you; all your doubts, fears, and regrets which it will use to bring down your sense of self. It is always with you and always working against you.

Your own self tells you that you are worthless and unlovable. So when a stranger says you have value and that you are loved it doesn't come across as a message of hope. At best it's a message of ignorance and at worst it's patronizing. "You don't know the first thing about me" is obvious and true. A perfect stranger telling you a fact about yourself is pretty hard to swallow.

A better message is "You don't have to feel like garbage". Depression makes you believe that feeling worthless is simply natural to you. It sounds silly but the idea that you can feel simply okay is a genuine message of hope. True happiness might be unrealistic but "not garbage" is something people with depression do experience on occasion. It's plausible that this is a normal state and might be achievable.

When trying to reach out, keep in mind that your positive messages may be hard to believe. People that may need to hear you won't always listen or be ready to understand. Be patient, be open-minded, and accept that their problems are unique to them and will require solutions unique to them that you may not have access to.

Friday, June 14, 2019

The Scenic Route To Go Interfaces

Go is an awesome language and interfaces are one of its most powerful features. They allow for decoupling pieces of code cleanly to help make components like database implementations interchangeable. They're the primary mechanism for dependency injection without requiring a DI framework.

Newcomers are often mystified by them but I think they're less confusing if you get to them via the scenic route. Let's look at creating our own types in Go. Along the way we'll find parallels that help make interfaces more clear.

Sidebar: Java Interfaces

If you're not experienced with Java, move on to the next section. Nothing to see here.

If you're experienced with Java, Go interfaces will be pretty familiar and comfortable. The key difference is that a class in Java must explicitly implement a predefined interface. In Go, any type that has the proper method signatures implements an interface, even interfaces created after the type. In Go, implementing an interface is implicit.

Custom Primitive Types

Go emphasizes simple, clear types. You can define your own to help model your problem space. Here I might want to capture a set of boolean flags in one variable:

type BitFlags int32
  • I'm defining my own type
  • I'm giving it the name BitFlags
  • It represents an int32
Why not just use int32 if that's what I want?

One reason is methods. I can attach methods to a type I've defined to give my type additional behavior. Perhaps I define a bunch of constants to represent individual flags and I provide methods like IsSet(BitFlags) bool and Set(BitFlags).
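
A sketch of what that might look like; the flag names here are invented:

const (
	FlagEnabled BitFlags = 1 << iota
	FlagVisible
	FlagDirty
)

// IsSet reports whether the given flag is set.
func (f BitFlags) IsSet(flag BitFlags) bool {
	return f&flag != 0
}

// Set turns the given flag on.
func (f *BitFlags) Set(flag BitFlags) {
	*f |= flag
}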

Another reason is explicit type conversion. In other languages it's valid to assign a 64 bit integer variable to a 32 bit integer variable. They're both integers so it's logical to do so. However, you're possibly losing the high 32 bits of the source value. There's an implicit type conversion happening that is often silent and often surprising.

Go doesn't allow implicit type conversions:

i32 := int32(17)
var bf BitFlags
bf = i32 // not allowed
bf = BitFlags(i32) // just fine

This is done to eliminate surprises. The compiler isn't silently inserting a type conversion that can change your data without your knowledge. It requires that you state that you want the conversion. This makes it harder for users of the BitFlags type to accidentally provide a numeric value that shouldn't be interpreted as flags.

Custom Struct Types

type Foo struct {
	A string
	B int
}
  • I'm defining my own type
  • I'm giving it the name Foo
  • It contains the following data
Structs allow you to bundle pieces of data together into a single item. That item can be passed around as a unit. Like custom primitive types, custom struct types can have methods attached.

Also like custom primitive types, you can assign one to the other if they are equivalent, using an explicit type conversion:

type Foo struct {
	A string
	B int
}

type Bar struct {
	A string
	B int
}

func main() {
	f := Foo{A: "foo", B: 3}
	var b Bar
	b = f // invalid
	b = Bar(f) // just fine
}

Custom Interface Types

An interface type specifies requirements for behavior. Methods are behavior, which is why we tend to name them with verbs or action phrases. In Go, any type that has those exact method signatures satisfies the interface's requirements.

type Storage interface {
	Create(key string, o Object) error
	Read(key string) (Object, error)
	Update(key string, o Object) error
	Delete(key string) error
}

  • I'm defining my own type
  • I'm giving it the name Storage
  • Anything with these methods qualifies as this type

Using interfaces I can define requirements for a storage system for my application to use. My application needs something through which I can create, read, update, and delete objects associated with a given key.

func GeneratePDFReport(output io.Writer, storage Storage) error {
	// ...
}

My application isn't concerned with how those operations are actually performed. The underlying storage could be an SQL database, S3 bucket, local files, Mongo, Redis, or anything that can be adapted to do those four things. Perhaps the report generator supports many storage mechanisms and when the application starts it decides which storage to use based on a config file or flags. It also means that when I need to write tests for my report generator I don't need to have an actual SQL database or write files to disk; I can create an implementation of Storage that only works with test data and behaves in an entirely predictable way.
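
For example, a minimal in-memory implementation for tests might look like this. It assumes Object is a struct type as in the interface above, and that fmt is imported:

type MemStorage struct {
	objects map[string]Object
}

func NewMemStorage() *MemStorage {
	return &MemStorage{objects: map[string]Object{}}
}

func (m *MemStorage) Create(key string, o Object) error {
	if _, ok := m.objects[key]; ok {
		return fmt.Errorf("key %q already exists", key)
	}
	m.objects[key] = o
	return nil
}

func (m *MemStorage) Read(key string) (Object, error) {
	o, ok := m.objects[key]
	if !ok {
		return Object{}, fmt.Errorf("key %q not found", key)
	}
	return o, nil
}

func (m *MemStorage) Update(key string, o Object) error {
	if _, ok := m.objects[key]; !ok {
		return fmt.Errorf("key %q not found", key)
	}
	m.objects[key] = o
	return nil
}

func (m *MemStorage) Delete(key string) error {
	delete(m.objects, key)
	return nil
}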

Interface nil and Type Assertions

For all variables of interface types the runtime keeps track of two things: the underlying value and that value's type. This leads to two different ways an interface variable can be nil. First, the interface value itself can be nil. In this case there's no type information, no underlying value; nothing to talk about. This is very common with the error interface. In the case of no error the interface variable itself is nil because there's no error to be communicated.

In the second case there's type information but the underlying value is nil. A comparison like myInt == nil returns false because the interface value exists and points to type information. Ideally in this case nil is useful for that type as in the final example in Dave Cheney's zero post.
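
A minimal sketch of that second case, using a hypothetical error type:

package main

import "fmt"

type MyError struct{}

func (e *MyError) Error() string { return "my error" }

func main() {
	var p *MyError       // nil pointer
	var err error = p    // interface now has type info (*MyError), nil value
	fmt.Println(err == nil) // false
}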

If needed you can get at the underlying value inside an interface variable.

io.Writer is a commonly-used interface. It has only one method: Write([]byte) (int, error). If I have a variable out of type io.Writer the only operation I can perform on it is Write. What if I want to Close it? Ideally if you have Close as a requirement you should make Close part of the interface type of your variable (or use io.WriteCloser instead of io.Writer).

For the purposes of illustration you can do a type assertion. This asks that the runtime verify that the underlying thing in your variable is of a certain type:

if c, ok := out.(io.WriteCloser); ok {
	err := c.Close()
	// handle error
} else {
	// not an io.WriteCloser!
}

In the above example, if out happens to be an io.WriteCloser then ok will be true and c will be out as type io.WriteCloser. If out doesn't happen to be an io.WriteCloser, ok is false and c is the zero value for io.WriteCloser, which is nil.

Anonymous Struct Types

Given a preexisting struct type I can create a value of that type with data in one statement:

f := Foo{
	A: "foo",
	B: 17,
}
  • I'm creating a variable f
  • It is of type Foo
  • It contains these values

In the above struct examples each of the types I defined has a name; this isn't always necessary provided I'm creating the struct on the spot and assigning it somewhere.

ff := struct {
	A string
	B int
}{
	A: "foo",
	B: 17,
}

  • I'm creating a variable ff
  • It is of this type
  • It contains these values

Like the named struct types above I can do an assignment with an explicit type conversion:

f = ff // invalid
f = Foo(ff) // totally fine

This sort of anonymous struct type is common in table-driven tests. It's also not uncommon in defining nested structs, as mholt's JSON to Go converter does.

You'll also sometimes see this:

stringSet := map[string]struct{}{}

  • I'm creating a variable stringSet
  • It is of this type
  • It contains these values

The last part looks a little strange. It's a map with strings for keys but what are the values? The values are empty structs: they contain nothing and therefore take up no memory. What good is that? It's a map that only tracks the presence of keys, which functions as a logical set. The final pair of curly braces defines the initial contents of the map; it's empty.

Anonymous Interfaces

Just like you can have anonymous struct types you can have anonymous interface types. The following are equivalent:

var foo io.Reader

var foo interface {
	Read([]byte) (int, error)
}

In either case I can assign anything with a Read([]byte) (int, error) method to foo.

We're near the end of our journey which brings us to the enigmatic interface{}:

foo := interface{}(nil)
var foo interface{} = nil

  • I'm defining a variable
  • I'm giving it the name foo
  • Anything with these methods qualifies as this type
  • The contents are explicitly zero

interface{} is an anonymous interface type. It has no requirements so any value is suitable. I can pass around a value of type interface{} but I can't do anything with it without using a type assertion or the reflect package.

In this way the empty interface is different from other interface types in that it doesn't specify required behavior. It sidesteps the type system and turns what could be compile-time errors into run-time errors. When writing code that uses the empty interface use great care.