Tuesday, September 21, 2021

Not Quite Zero

Fact and Fiction In Zero Trust Architecture

Zero Trust Architecture (ZTA) is a panacea for organizational security challenges... at least that's what vendors would have you believe. There is so much confusion around what ZTA is and isn't that it's easy to believe that it's all smoke and mirrors; a grift to siphon millions from corporations desperate to shift their liability to someone else.


ZTA is a real thing but it's not a panacea or product. It's a philosophy for designing interactions between clients and services. Implementing ZTA involves tackling a mountain of individualized complexity. In this post I'll discuss some of this complexity and in doing so I hope to separate the sham from the substance and help you decide whether or not ZTA is something worth pursuing within your organization. Along the way I hope to show that implementing a Zero Trust Architecture is not an advanced technology problem, but one of integrating technologies we're already familiar with.


Much like Kubernetes, if you don't know exactly why you need Zero Trust Architecture, you probably don't, and the process of trying to implement it will be expensive and painful. You should know what you're getting into.

Why Me?

I was on the Google security team during the conception and implementation of BeyondCorp: Google's ZTA solution. I developed components related to machine inventory and device access. I know this flavor of Kool-Aid very well and I've seen things that worked very well and things that went poorly.


Everything I'm discussing here is relative to Google's BeyondCorp project. Other ZTA strategies may exist, I'm not an authority on them. Additionally, I left Google in 2016 and I'm confident that their solution has seen significant change since that time. That said, I'm sure the concepts are still valuable.

Problem Statement

It's a common aspect of network architecture that you divide your network into different segments for different purposes. Perhaps you have a common network for clients, a restricted network for internal services, a guest network for untrusted devices, and one or more lab networks for R&D or testing.


The common client network is intended for corporate devices. Those devices need access to services that shouldn't be exposed to the Internet at large, and those services sit either on the same network as the clients, directly exposed to them, or behind minimal segregation and filtering. These are company client devices and are assumed trustworthy.


The internal network may have poorly-protected ingress points such as shared WiFi passwords or network jacks with no access control. Arbitrary devices can access these internal networks, regardless of whether or not they are corporate devices.


Even devices on networks with strong mechanisms to keep out non-corporate devices (e.g. 802.1x) aren't strictly trustworthy. Compromised clients can relay traffic from external attackers onto the internal network.


In both cases the network has a hard outer shell and a soft interior. That hard shell is easily bypassed by determined attackers, bringing the bad guys onto the good network. Security practitioners have long understood that trusting the internal network is tenuous at best.

What Is Zero Trust Architecture?

Zero Trust Architecture primarily attempts to address the trust misplaced in corporate networks. The crux of ZTA is that client devices are distinctly identified and that access to a service is granted based not just on the access granted to the user but also on the security posture of the device. In the following sections we discuss different aspects of device authentication and authorization, each progressing from the status quo to the ZTA ideal.

Client Authentication

Distinctly identifying the client device requires each device to have unique credentials that can be used to distinguish between it and others. A simple device secret could suffice, as is the case with classic Kerberos. When these secrets are held in the filesystem, an attacker compromising the device can copy that secret and impersonate the device.


Client x509 certificates are a commonly used mechanism for device authentication with mutual TLS (mTLS). Usually a client certificate's private key is held in the filesystem and can be exported by an attacker like any simple secret. Managing an internal corporate x509 public key infrastructure (PKI) is tricky and requires internal development if it is to operate successfully at a non-trivial scale. There are few standards for device enrollment and little in the way of off-the-shelf solutions.


Ideally, a certificate's private key is held in hardware in a way that it can't be exported, such as in a Trusted Platform Module (TPM). Clients with certificates based on non-exportable keys have the strongest device authentication. Issuing certificates for these devices can be tricky and require custom automation beyond conventional PKI certificate enrollment processes.


Many devices will support mTLS but can't provide certificates based on non-exportable keys. Most commonly this includes user devices such as laptops and desktops with software configurations that support mTLS but lack the crypto hardware for non-exportable keys. These devices can uniquely identify themselves but not in such a way that they can't be impersonated if compromised.


Finally, there are devices that can't authenticate themselves strongly, like simple IoT devices, or at all, like classic network printers.

Device State Based Access

Services normally don't consider the client device at all when making access decisions. In some rare cases services will attempt to identify the client device as "one of ours" with mTLS. It is almost never the case that access is granted or denied based on how secure the device is.


Corporate IT departments maintain a device inventory. If it's up to date devices can be recognized as corporate assets, assuming they can be identified.


IT departments often deploy centralized device management solutions. An agent runs on each device and reports the current state of the device to a central system and potentially corrects deviations from the desired configuration. The central management system knows whether or not a device has the properties of a secure configuration such as full disk encryption, up to date patches, client firewall, etc.


Similarly there are often endpoint protection solutions deployed to each device. These attempt to prevent intrusions and will report attempted intrusions and indications of compromise to a central monitoring system.


Ideally, services that make access decisions would identify the client device and consider its current state in granting or denying access to resources. Basic device inventory can distinguish between "friendly" and "unfriendly" devices. The inventory indicates who the device was issued to and access might only be granted to friendly devices where the authenticated user matches the user the device was issued to.


Device management knows if the device has a suitable security configuration and that the configuration has been checked recently. Similarly endpoint security knows with some certainty if the device has been compromised.

Device State Policy

When we grant users access to protected resources we do so while ensuring that those users are capable of and prepared to protect those resources. We do this through user training and hopefully not putting a user in a role where they can't fulfill their security obligations.


We don't usually take into account similar considerations for devices. A lot of valuable information about device posture is available but not taken into consideration when making an access decision. Should a device that is known to be compromised be allowed to access any resources that aren't part of the remediation process? Should a device be permitted to download intellectual property if it's not known to be able to protect that data when that device is lost or stolen?


Devices accessing the most sensitive resources should provide the best available protections. They should have full disk encryption with the company's configuration to reduce the chances that data can be recovered if the device is stolen. This configuration may be distinct to each organization. There may be multiple acceptable configurations in the device fleet simultaneously, especially when users have varied platforms. When a decision is made to grant or deny access to a resource, the policy system should understand what constitutes appropriate full disk encryption and know how to verify it by checking with configuration management.


This extends to other security and IT controls like endpoint security, device management, automated backups, etc. These each play a role in protecting data and resources and it makes sense to deny access to devices that can't protect those resources.


It's not feasible, however, to require the strictest device configuration to access all resources. A service providing software and updates may need to provide packages and configuration required for the device to be in the desired configuration. The device can't attest to its endpoint security configuration if it doesn't have the endpoint security software installed.


In a Zero Trust Architecture the current state of the accessing device is taken into consideration as much as possible when making access decisions. Which security properties are required and which are nice to have varies from organization to organization and even from resource to resource. Access policies cannot be delivered "canned" or prescribed by an external group and can't be part of a generic solution.
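To make the per-organization, per-resource nature of these policies concrete, here's a small sketch. The state fields and policies are invented for illustration, not taken from BeyondCorp:

```go
package main

import "fmt"

// DeviceState is a snapshot assembled from inventory, device
// management, and endpoint security. The fields are illustrative.
type DeviceState struct {
	InInventory     bool
	DiskEncrypted   bool
	PatchesCurrent  bool
	EndpointHealthy bool
}

// Policy lists the device properties a resource requires. Each
// organization, and each resource, gets its own mix.
type Policy struct {
	RequireInventory  bool
	RequireEncryption bool
	RequirePatches    bool
	RequireEndpoint   bool
}

// Allows reports whether a device satisfies every required property.
func (p Policy) Allows(d DeviceState) bool {
	if p.RequireInventory && !d.InInventory {
		return false
	}
	if p.RequireEncryption && !d.DiskEncrypted {
		return false
	}
	if p.RequirePatches && !d.PatchesCurrent {
		return false
	}
	if p.RequireEndpoint && !d.EndpointHealthy {
		return false
	}
	return true
}

func main() {
	// The software-update service must stay reachable by devices that
	// aren't fully configured yet; the source-code service should not.
	updates := Policy{RequireInventory: true}
	source := Policy{RequireInventory: true, RequireEncryption: true,
		RequirePatches: true, RequireEndpoint: true}

	laptop := DeviceState{InInventory: true, DiskEncrypted: true}
	fmt.Println(updates.Allows(laptop)) // true
	fmt.Println(source.Allows(laptop))  // false: patch and endpoint state not verified
}
```

The hard part isn't this evaluation; it's keeping DeviceState accurate and fresh, which is what the rest of this post is about.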

Enforcing Policy

Knowing and codifying device security requirements for resource access is useless if those policies cannot be enforced. While many services can authenticate a client through mTLS, authorization following from that is rare. Services will sometimes have rich languages for doing access control for users but rarely for devices. Even obtaining the state data on which to base access decisions is a complex problem.


Supposing the service has the device policy to enforce, it needs to know the device's state. Aspects of the device's state could be encoded into the certificate it presents, but that state is then fixed for the lifetime of the certificate. If the device's state degrades, the service won't know until either the certificate expires or it appears in revocation. Short-lived certificates add operational overhead, and if certificate enrollment requires user interaction it becomes a burden to the user. If the incident response team wants to keep a device from accessing sensitive resources, should they be content waiting for daily cert expiration? Revocation is an alternative, but revocation is rarely used and the state of the device likely changes frequently.


Few services have the capability to enforce these policies. The simplest solution is to put services behind a reverse proxy that can enforce policy. This proxy then needs access to device state information; if it's not delivered as part of mTLS, it needs to be available in some other way. Either the proxy queries the sources of truth at request time, which has performance cost and reliability risk, or it synchronizes the data from those sources locally, which has better performance and reliability but will naturally provide stale results some of the time.


Given a standard way to retrieve state data, does each service have its own integration? This can represent large development or operational overhead. Device state can be centrally collected to a service of its own. Policy decisions can then be made using state data managed by a shared resource. This simplifies the individual services but adds a potential point of failure to the serving infrastructure.


Getting the state information to consult is a challenge on its own. Device management, endpoint security, and related systems often have APIs that automation can interact with but these APIs are usually unique. There are few common APIs or data schemas to allow for interoperability. For each API you wish to retrieve data from you likely need to create a custom integration.


The proxy solution is viable for services you control. Cloud SaaS services are another matter entirely. An organization can try to put a cloud service behind a reverse proxy but this is frequently brittle and users will often find a way around the proxy which will usually be faster.


All of this is for HTTP services. It's possible to create a TCP proxy that can enforce device policy but how does it deny a connection? The general solution is to send a RST. For the user, this is indistinguishable from a non-security-related service failure; they can't tell if they've been denied or if the service is broken, leading to support headaches. This TCP proxy could be aware of the protocol and attempt to send a protocol specific message, such as IMAP AUTHORIZATIONFAILED. Now the proxy needs to be aware of the protocols it's passing.

Solutions

Implementing a real Zero Trust Architecture presents a lot of problems. The solution for Google is straightforward: leverage an army of world-class software engineers. Google created most of the technology it uses and has the resources to adapt them to new standards. As a web technology company it's not crazy for them to make every service a web service and put it behind a web proxy. They use very little vendor software and have little trouble building integrations between those systems that generate state data, those that house the state data, and those that need the state data.


Chances are, you're not Google or a company with similar resources. Every organization has their own security infrastructure and device configurations. Taking all of those unique properties into account requires custom integrations. The whole point of Zero Trust Architecture is to make security decisions based on how the organization's devices should be configured and how they should behave. That information is available in the disparate device management solutions but the systems that should be making use of it don't have access to it.


There aren't off the shelf solutions to pull this together so if you want it you have to build it. This was a significant undertaking even for the might of Google, but Google has orders of magnitude more resources, infrastructure, and sources of data than most companies. While you may not have the resources to throw at the problem, you also don't have the same scale of problems.


The goal of Zero Trust Architecture is to use all available data when making an access decision. The challenge is that this data isn't very available. For Zero Trust Architecture to be realistic for all but the most well-heeled organizations there need to be new standards, particularly around interoperability. Systems collecting device state data must make that data available to centralized collection systems in a common way, such that integration is a matter of configuration rather than bespoke software development. Similarly, we need standards for retrieving that data by the systems that have to make access decisions.

Misconceptions

It Means Don't Trust Anything

Zero Trust is a terrible name; Zero Faith would be more accurate. It's really about being very explicit about what does and doesn't make a device trustworthy for a certain action.

It Means All Devices Use mTLS

Yes and no. Client certificates are much stronger device authentication than IP addresses, for sure. In the simple case a client certificate and private key are simply files that can be exported from the device. An attacker with these files can then impersonate a compromised device from other devices they control. Ideally, mTLS is rooted in TPM credentials which cannot be exported from the device.


A compromised device with non-exportable credentials can perform proper authentication under an attacker's control. Strong client authentication only establishes the device's identity, not that the device is trustworthy.


Some devices simply can't do mTLS on their own. A multifunction copier is never going to be subject to strong authentication or state attestation. For those devices, the services they need to access either need to have more lenient security requirements or something else must act on the device's behalf.

It Means Get Rid of Your Firewalls

We use firewalls to segregate "our network" from "not our network"; while not strictly safe the former is safer than the latter. If we don't trust the internal network there's no point in segregating it.


Having devices on an internal network doesn't make them "good", but it can make them "not obviously bad". Your firewall can keep out obviously bad stuff. If you don't run your own mail services, is there any reason to permit mail traffic onto your network? If you never expect to receive traffic from some part of the world, maybe you can afford to preemptively filter it out. Your coarse filtering devices help ensure that your network resources are devoted to your needs.

It Means Get Rid of Your VPN

VPNs are a useful tool. They also allow your users to extend the corporate network into arbitrary other networks. When services can make use of strong device authentication it becomes more reasonable to make them Internet-facing. This is convenient because it reduces the time users need to spend on the VPN. For some services it will be infeasible for the organization to restrict access with strong device authentication. For these services a VPN might be the right solution.

It's a Product

There's no shortage of vendors who will sell you a Zero Trust Architecture solution. At best they are taking creative liberties with the term. At worst they know they're selling snake oil. In between are a lot of products and services that represent a partial solution.


No vendor is offering a solution that works with all of your services and understands all of your client devices. Perhaps it's a viable offering if they provide 100% outsourced IT. Then the vendor controls all the clients and all the services and can ensure that all aspects of IT infrastructure have Zero Trust solutions.

Summary

Zero Trust Architecture is the future of network security. As is often the case the future has not been evenly distributed. If we want this capability to be commonly available we must put pressure on vendors to ensure simple, robust, efficient APIs are available for retrieving data about our devices from their products. For services, we need them to have mechanisms to make use of the data being collected. When vendors are competing for our contracts we can choose interoperability as a key deciding factor.

Tuesday, August 11, 2020

Go: Convert errors.Wrap calls to fmt.Errorf

I was a longtime fan of https://github.com/pkg/errors. It was a great way to add context to why an error was being returned, which made tracing errors easier. The need for pkg/errors has gone away with the new fmt.Errorf %w verb, errors.Is(), and errors.As().

I used errors.Wrap() a lot so naturally my code has lots of function calls I need to migrate. One repo had close to 300 calls to errors.Wrap() which is more than I'm willing to do by hand. I wrote a simple tool to take care of the most common case I have: errors.Wrap(err, "<message>").

  • Any line it doesn't know how to handle it leaves unchanged.
  • It looks specifically for errors.Wrap(err, "
  • On that same line it expects to find a double quote followed by a closing paren
  • The existing context string has : %w appended to it
  • It does not edit your imports; you should run goimports or a similar tool
  • By default the tool just outputs to stdout; use -o to overwrite the file in-place
  • Fix everything by doing for i in $(grep -Rl errors.Wrap .); do errors_wrap_convert -in $i -o; done
  • Definitely make sure you have a snapshot of your code to revert back to in case this tool does bad things
  • This could have been done better using gofix but I was in too much of a hurry to learn how to extend gofix.
// errors_wrap_convert.go
package main

import (
        "bufio"
        "bytes"
        "flag"
        "fmt"
        "io"
        "io/ioutil"
        "log"
        "os"
        "strings"
)

var (
        fIn        = flag.String("in", "", "input file")
        fOverwrite = flag.Bool("o", false, "overwrite the existing file")
)

func fatalIfError(err error, msg string) {
        if err != nil {
                log.Fatal("error ", msg, ": ", err)
        }
}

func main() {
        flag.Parse()
        b, err := ioutil.ReadFile(*fIn)
        fatalIfError(err, "reading input file")

        var out io.WriteCloser = os.Stdout
        if *fOverwrite {
                out, err = os.Create(*fIn)
                fatalIfError(err, "opening output file")
        }
        defer out.Close()

        scanner := bufio.NewScanner(bytes.NewBuffer(b))
        for scanner.Scan() {
                fmt.Fprintln(out, Rewrite(scanner.Text()))
        }
        fatalIfError(scanner.Err(), "scanner error")
}


// Rewrite converts the common case of errors.Wrap(err, "<message>") on a
// single line into fmt.Errorf("<message>: %w", err). Any line it can't
// confidently handle is returned unchanged.
func Rewrite(in string) string {
        idx := strings.Index(in, `errors.Wrap(err, "`)
        if idx == -1 {
                return in
        }

        // Find the closing paren of the errors.Wrap call.
        eIdx := strings.Index(in[idx:], ")")
        if eIdx == -1 {
                return in
        }
        eIdx += idx

        // Find the opening quote of the message string.
        q1Idx := strings.Index(in[idx:], `"`)
        if q1Idx == -1 {
                return in
        }
        q1Idx += idx

        // The character before the closing paren must be the closing quote.
        q2Idx := eIdx - 1
        if in[q2Idx] != '"' {
                return in
        }

        out := in[:idx] +
                `fmt.Errorf(` +
                in[q1Idx:q2Idx] +
                `: %w", err)` +
                in[eIdx+1:]
        return out
}

And a couple of basic tests:

// errors_convert_test.go
package main

import "testing"

func TestRewrite(t *testing.T) {
        t.Parallel()
        for in, want := range map[string]string{
                "": "",
                `               return nil, errors.Wrap(err, "bad thing") // foo bar`: `                return nil, fmt.Errorf("bad thing: %w", err) // foo bar`,
                `return nil, errors.Wrap(err, "foo " + blarg + " bar")`: `return nil, fmt.Errorf("foo " + blarg + " bar: %w", err)`,
        } {
                got := Rewrite(in)
                if got != want {
                        t.Fatalf("got %q, want %q, for %q", got, want, in)
                }
        }
}

I had searched for a tool to do this but it either doesn't exist or my searching ability failed me. If you would like to pick this up and generalize I'd happily refer to your version as canonical.

Sunday, August 2, 2020

The Autobucket Saga

The Leaking A/C and Early Failure

A few years ago our air conditioning started leaking. We discovered this when a stream of water started running from the corner of our kitchen's ballast lighting. Naturally we were alarmed. We found the location of the leak and got a plastic tub underneath it to catch the water. Until we could get a technician to the house we got to choose why we weren't sleeping well: either it was too hot with the A/C off, or every couple of hours one of us had to empty the tub with a wet vac.

This problem annoyed me. I'm a smart, technical guy, I should be able to solve this. I had taught myself some electronics and should be able to programmatically control a pump. I bought a little 5v USB pump and some float switches. I hooked it all up to a raspberry pi with the pump's power being controlled by the pi via an NPN transistor. Pump turns on when the high-water mark switch closes. Pump turns off when the low-water mark switch opens. Super simple, and for the life of me I couldn't get it to actually work.

The Student Elevates Himself

Since then I've learned a lot more about electronics, though I'm still a newbie. This year at BSides San Diego I bought an arduino-compatible microcontroller board and some other components as a way to help fund the event. Since I bought them I had to experiment with them!

The basic stuff is pretty easy! In the course of fiddling and experimenting I realized the problem with my original setup: I hadn't tied either the pump control transistor or the float switches to a ground reference (via resistor, of course).

I spoke about my new understanding and new confidence to my loving partner. She noted how the condensation from the A/C just gets pumped out down the side of the house. We catch it with a bucket but rarely think to dump that water on our orange trees. It sure would be nice to have the water moved over there automatically! I was inspired.

The Autobucket Is Born

This came together on a breadboard pretty quickly.

Random USB wires to your gaming laptop is fine, right?

Each float switch goes to a GPIO pin on a raspi zero w with the other side having a 10k resistor to ground and a 1k resistor to the pi's 3v3. I elected to have discrete pulldown resistors rather than integrated pulldown/pullup purely because I understand it better. The pump was directly connected to USB 5v with ground going to the NPN's collector. The base has 10k to ground and 1k to a GPIO pin. I wrote all the software in go with gobot including a feature to notify me via Telegram as the pump cycles on and off.
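Stripped of the gobot and GPIO details, the control logic is a small hysteresis loop. This is a sketch of the idea rather than the actual program; the type and method names are invented:

```go
package main

import "fmt"

// PumpController turns the pump on when water reaches the high float
// switch and off once it drops below the low one. The gap between the
// two marks keeps the pump from rapidly cycling around one threshold.
type PumpController struct {
	On bool
}

// Update takes the current float switch readings (true = submerged)
// and returns whether the pump should be running.
func (p *PumpController) Update(low, high bool) bool {
	switch {
	case high:
		p.On = true
	case !low:
		p.On = false
	}
	return p.On
}

func main() {
	var p PumpController
	fmt.Println(p.Update(false, false)) // empty: off
	fmt.Println(p.Update(true, false))  // filling: still off
	fmt.Println(p.Update(true, true))   // hit the high mark: on
	fmt.Println(p.Update(true, false))  // draining: stays on
	fmt.Println(p.Update(false, false)) // below the low mark: off
}
```

Between the marks the controller just holds its last state, which is the whole trick.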

Amazingly, it just worked! On my bench I could move the float switches and see the program change state. Rather than powering on a dry pump I plugged in a device that charges via USB with a charging indicator LED. In the appropriate states it would turn on and off. Awesome!

Next up, the bucket. We have a bunch of orange buckets so hacking one up wasn't an issue:

Water, wires: besties!

I drilled a couple of holes at different heights and installed the float switches. With their rubber gaskets it didn't even leak! I dropped the pump in there and for the time being just accepted that it didn't sit fully at the bottom, but sufficiently below the low-water mark.

I didn't want the control system exposed to rain and sunlight and I needed it to be near power. I have a covered patio nearby with AC outlets, I just needed to establish connectivity between the two points. I needed 5 wires: USB+, USB ground, 3v3+, float switch 1, float switch 2. I have a supply of CAT-5 which is great since it even provides easy to differentiate individual wires. I wanted to be able to disconnect it so two wires went to a female USB connector and three into a molex hard drive power connector from my parts box. With this in place I could connect and disconnect as needed. Once things were settled I could shrink-wrap the connections for some weather protection.

I'm playing outside!

Testing again with the circuit on the board and again, success! You may note in this photo that power is supplied by exposed USB wires and gator clips. This is fine for a day of testing but not a workable long-term solution.

I'd like a neater board but don't keep perfboard handy. I do, however, have a 3D printer, a decent ability with CAD, and general lack of good judgement.


Sorry about your gag reflex.

Since I'm already using CAT-5 to connect to the bucket, why not RJ45? I had some RJ45 keystone jacks so I super glued a couple to my board. One was intended to connect to the bucket, the other to go to the raspi. Instead I ended up connecting to the raspi via a female header connector snipped in half then glued together so that I could easily connect and disconnect to GPIO. While I was at it, I hooked up a BME280 sensor via I2C so I'd have an outdoor temperature/pressure/humidity sensor that I can expose as a webserver.

For power I grabbed a phone cable I had with only two wires. Part of the way through the conversion I had something that at least made me chuckle:

Windows is configuring your new magic smoke

This will need an enclosure but for the next step I started with a semi-disposable plastic storage container.

Pioneering Avant Garde project enclosures

And much to my surprise, it's still working at this stage. Before I go for a more permanent enclosure I want to let it run for a few days and make sure it doesn't need any changes.

Trouble In the Garden of E-Dumb

It does work, mostly. The pump is tiny and weak, which is to be expected. I don't really care how fast the water makes its way to the orange trees, just that it gets there. Eventually, though, the pump gets turned on but never shut off: the pump is running, but no water is flowing. It has to push the water through 1/4" inner diameter vinyl tubing up the height of the bucket, then a few yards over to the tree. I fluff the tubing and the water starts flowing. My hope at this point was that I'd only have to prime it after it'd been idle for a while. The next step up in pump power is 12v and I'm reluctant to go there.

This problem persists. If I prime it, it gets going; otherwise, not much happens. At first I thought maybe the primary purpose the pump was serving here was to get the initial water over the bucket height, with siphoning taking care of the rest. It's not reliably doing even that, though.

It eventually occurs to me that I can help the siphoning action by elevating the bucket. The path along the ground from the bucket to the tree is all flat. If the water source is elevated higher than the destination the siphoning should be more effective. I'm about to arrive...

The Autobucket: Passive Edition

I eventually realize I'm being pretty stupid. Gravity is doing most of the work here, and I can ensure that the water reservoir is higher than the outlet.

I don't need the pump, the sensors, or the control at all. I need a hole near the bottom of the bucket and to seal the tube in that hole. Gravity will cause the water to drain through the tube. I had fixated on a solution to the complete neglect of the objective.

In summary:
  • What I built: A network-connected gray water reclamation and irrigation system
  • What I needed: A bucket with a hose glued into it

Sometimes the dumb solution is the right solution

Reflections

I basically took three lefts instead of a right but the journey wasn't all for naught. I got validation that I had overcome my gaps in electronics knowledge since the leaky air conditioner.

One thing that went very well was my process. In past projects I've had frustrating failures by pushing through to a complete solution. When something didn't work I had gone so far that troubleshooting meant tearing down and starting over. This time I worked much more incrementally, validating my progress at each stage and having the chance to make corrections. While I rarely needed corrections along the way, the anxiety that I might have screwed something up and wasted all my time was minimal.

I diagrammed lots of things. I put things together on the breadboard to make it work then translated that to a circuit diagram that I could follow more easily. When wiring up the CAT-5 I wrote down which colors would do what before I made any connections. From then on I could easily know which was the correct wire. I could then maintain that color scheme for portions downstream from the cable to keep it consistent and easier to wrap my head around. I haven't built up my electronics chops yet to keep what each wire does in my head.

Wire type really matters. I've often used internal CAT-5 strands as plentiful solid-core wire with color-coding. When I put that on the female pin header it didn't flex for anything and was very hard to work with. I had some stranded wire but only three colors, which complicated things. I ordered an assorted-color stranded wire kit for subsequent projects and it's been great.

The weather sensor was a great addition. I loved being able to check it from my phone. I ended up scrapping the raspi setup and setting up just weather sensors with the BME280, an Adafruit ESP8266 board, and some software that made the data available in Prometheus format.

Chibi Weather Station

Ultimately we don't learn a whole lot from our successes; we learn much more reflecting on failure. Maybe my missteps can be useful to you.

Thursday, February 27, 2020

Priority Channel in Go

I'm kind of impressed with this ugly monster:

package main

import (
    "context"
    "fmt"
    "sync"
    "time"
)

func main() {
    const levels = 3
    const sourceDepth = 5
    sources := make([]chan int, levels)
    for i := 0; i < levels; i++ {
        sources[i] = make(chan int, sourceDepth)
    }
    out := make(chan int)

    ctx, cancel := context.WithCancel(context.Background())

    wg := &sync.WaitGroup{}
    pc := New(ctx, sources, 10, out)
    wg.Add(1)
    go func() {
        defer wg.Done()
        defer close(out)
        pc.Prioritize()
    }()

    wg.Add(1)
    go func() {
        defer wg.Done()
        for i := range out {
            fmt.Println("i: ", i)
            time.Sleep(time.Second / 4)
        }
    }()

    for _, i := range []int{0, 2, 1, 0, 2, 1, 0, 2, 1} {
        fmt.Println("submitting ", i)
        pc.Submit(i, i)
    }
    time.Sleep(time.Second * 3)
    cancel()
    wg.Wait()
}

type PriorityChannel struct {
    notify  chan struct{}
    sources []chan int
    out     chan int
    ctx     context.Context
}

func New(ctx context.Context, sources []chan int, cap int, out chan int) PriorityChannel {
    pc := PriorityChannel{
        notify:  make(chan struct{}, cap),
        sources: sources,
        out:     out,
        ctx:     ctx,
    }
    for i := 0; i < cap; i++ {
        pc.notify <- struct{}{}
    }
    return pc
}

func (pc PriorityChannel) Prioritize() {
    for {
        // block until there's a value
        select {
        case pc.notify <- struct{}{}:
            // proceed
        case <-pc.ctx.Done():
            return
        }
    SOURCES:
        for _, rcv := range pc.sources {
            select {
            case i := <-rcv:
                pc.out <- i
                break SOURCES
            default:
                // keep looping
            }
        }
    }
}

func (pc PriorityChannel) Submit(i, priority int) {
    if priority < 0 || priority >= len(pc.sources) {
        panic("invalid priority")
    }
    pc.sources[priority] <- i
    <-pc.notify
}

Monday, January 13, 2020

The Tool Concert: A Synopsis

Drummer: <Bonk uh dunk
Bonk uh dunk
Bonk uh dunk tsh
Bonk uh dunk
Bonk uh dunk
Bonk uh dunk tsh>

Lead Guitar: <Grong gugga gug
Gug grong gugga gug
Gug grong gugga gug>

Bass Guitar: <Do doon doon do>
(no one knows what a bassist is doing)

Singer: I can't express the pain of being intellectually superior to everyone

Background Visuals: <Terence McKenna and David Cronenberg are fighting for control of the Winamp visualization plugins>

My conclusion: Tool is an alternate reality version of Phish where they dedicated themselves to rebelling against yuppies and neocons.


In all seriousness though, it was a really great show, and I'm not really a Tool fan. The songs I was familiar with sounded exactly as you hear them on the radio: Rush-level precision.

The opening act was awful and I won't dignify the name. Tool played a two and a half hour set which included a fifteen minute intermission. Also, I've never seen a crowd so engaged.

I was surprised by how much I liked the show. Not my favorite style of music but they really did earn my respect. 

Friday, November 29, 2019

It's Possible To Not Feel Like Garbage

I commonly see a type of post on social media. In this post, the person says something to the effect of "You matter" or "You are loved". The intent is to bolster the spirits of people who feel hopeless. It's well-meaning but in my opinion is useless at best and counterproductive at worst.

When you have depression part of your brain is dedicated to crushing your spirit. It knows all about you; all your doubts, fears, and regrets which it will use to bring down your sense of self. It is always with you and always working against you.

Your own self tells you that you are worthless and unlovable. So when a stranger says you have value and that you are loved it doesn't come across as a message of hope. At best it's a message of ignorance and at worst it's patronizing. "You don't know the first thing about me" is obvious and true. A perfect stranger telling you a fact about yourself is pretty hard to swallow.

A better message is "You don't have to feel like garbage". Depression makes you believe that feeling worthless is simply natural to you. It sounds silly but the idea that you can feel simply okay is a genuine message of hope. True happiness might be unrealistic but "not garbage" is something people with depression do experience on occasion. It's plausible that this is a normal state and might be achievable.

When trying to reach out, keep in mind that your positive messages may be hard to believe. People who may need to hear you won't always listen or be ready to understand. Be patient, be open-minded, and accept that their problems are unique to them and will require solutions unique to them that you may not have access to.

Friday, June 14, 2019

The Scenic Route To Go Interfaces

Go is an awesome language and interfaces are one of its most powerful features. They allow for decoupling pieces of code cleanly to help make components like database implementations interchangeable. They're the primary mechanism for dependency injection without requiring a DI framework.

Newcomers are often mystified by them but I think they're less confusing if you get to them via the scenic route. Let's look at creating our own types in Go. Along the way we'll find parallels that help make interfaces more clear.

Sidebar: Java Interfaces

If you're not experienced with Java, move on to the next section. Nothing to see here.

If you're experienced with Java, Go interfaces will be pretty familiar and comfortable. The key difference is that a class in Java must explicitly implement a predefined interface. In Go, any type that has the proper method signatures implements an interface, even interfaces created after the type. In Go, implementing an interface is implicit.

Custom Primitive Types

Go emphasizes simple, clear types. You can define your own to help model your problem space. Here I might want to capture a set of boolean flags in one variable:

type BitFlags int32
  • I'm defining my own type
  • I'm giving it the name BitFlags
  • It represents an int32
Why not just use int32 if that's what I want?

One reason is methods. I can attach methods to a type I've defined to give my type additional behavior. Perhaps I define a bunch of constants to represent individual flags and I provide methods like IsSet(BitFlag) bool and Set(BitFlag).

Another reason is explicit type conversion. In other languages it's valid to assign a 64 bit integer variable to a 32 bit integer variable. They're both integers so it's logical to do so. However, you're possibly losing the high 32 bits of the source value. There's an implicit type conversion happening that is often silent and often surprising.

Go doesn't allow implicit type conversions:

i32 := int32(17)
var bf BitFlags
bf = i32 // not allowed
bf = BitFlags(i32) // just fine

This is done to eliminate surprises. The compiler isn't silently inserting a type conversion that can change your data without your knowledge. It requires that you state that you want the conversion. This makes it harder for users of the BitFlags type to accidentally provide a numeric value that shouldn't be interpreted as flags.

Custom Struct Types

type Foo struct {
    A string
    B int
}
  • I'm defining my own type
  • I'm giving it the name Foo
  • It contains the following data
Structs allow you to bundle pieces of data together into a single item. That item can be passed around as a unit. Like custom primitive types, custom struct types can have methods attached.

Also like custom primitive types, you can assign one to the other, if they are equivalent, using an explicit type conversion:

type Foo struct {
    A string
    B int
}

type Bar struct {
    A string
    B int
}

func main() {
    f := Foo{A: "foo", B: 3}
    var b Bar
    b = f // invalid
    b = Bar(f) // just fine
}

Custom Interface Types

An interface type specifies requirements for behavior. Methods are behavior, which is why we tend to name them with verbs or action phrases. In Go, any type that has those exact method signatures satisfies the interface's requirements.

type Storage interface {
    Create(key string, o Object) error
    Read(key string) (Object, error)
    Update(key string, o Object) error
    Delete(key string) error
}

  • I'm defining my own type
  • I'm giving it the name Storage
  • Anything with these methods qualifies as this type

Using interfaces I can define requirements for a storage system for my application to use. My application needs something through which I can create, read, update, and delete objects associated with a given key.

func GeneratePDFReport(output io.Writer, storage Storage) error {
    // ...
}

My application isn't concerned with how those operations are actually performed. The underlying storage could be an SQL database, S3 bucket, local files, Mongo, Redis, or anything that can be adapted to do those four things. Perhaps the report generator supports many storage mechanisms and when the application starts it decides which storage to use based on a config file or flags. It also means that when I need to write tests for my report generator I don't need to have an actual SQL database or write files to disk; I can create an implementation of Storage that only works with test data and behaves in an entirely predictable way.

Interface nil and Type Assertions

For all variables of interface types the runtime keeps track of two things: the underlying value and that value's type. This leads to two different ways an interface variable can be nil. First, the interface value itself can be nil. In this case there's no type information, no underlying value; nothing to talk about. This is very common with the error interface. In the case of no error the interface variable itself is nil because there's no error to be communicated.

In the second case there's type information but the underlying value is nil. A comparison like myInt == nil returns false because the interface value exists and points to type information. Ideally in this case nil is useful for that type as in the final example in Dave Cheney's zero post.

If needed you can get at the underlying value inside an interface variable.

io.Writer is a commonly-used interface. It has only one method: Write([]byte) (int, error). If I have a variable out of type io.Writer the only operation I can perform on it is Write. What if I want to Close it? Ideally, if you have Close as a requirement you should make Close part of the interface type of your variable (or use io.WriteCloser instead of io.Writer).

For the purposes of illustration you can do a type assertion. This asks that the runtime verify that the underlying thing in your variable is of a certain type:

if c, ok := out.(io.WriteCloser); ok {
    err := c.Close()
    // handle error
} else {
    // not an io.WriteCloser!
}

In the above example, if out happens to be an io.WriteCloser then ok will be true and c will be out as type io.WriteCloser. If out doesn't happen to be an io.WriteCloser, ok is false and c is the zero value for io.WriteCloser, which is nil.

Anonymous Struct Types

Given a preexisting struct type I can create a value of that type with data in one statement:

f := Foo{
    A: "foo",
    B: 17,
}
  • I'm creating a variable f
  • It is of type Foo
  • It contains these values

In the above struct examples each of the types I defined has a name; this isn't always necessary, provided I'm creating the struct on the spot and assigning it somewhere.

ff := struct {
    A string
    B int
}{
    A: "foo",
    B: 17,
}



  • I'm creating a variable ff
  • It is of this type
  • It contains these values

Unlike with two named struct types, I can actually assign ff to f directly, because ff's type is unnamed and shares Foo's underlying type; the explicit conversion also works:

f = ff // allowed: ff's type is unnamed
f = Foo(ff) // totally fine

This sort of anonymous struct type is common in table driven tests. It's also not uncommon in defining nested structs as mholt's JSON to Go converter does.

You'll also sometimes see this:

stringSet := map[string]struct{}{}
  • I'm creating a variable stringSet
  • It is of this type
  • It contains these values
The last part looks a little strange. It's a map with strings for keys but what are the values? The values are empty structs: they contain nothing and therefore take up no memory. What good is that? It's a map that only tracks the presence of keys, which functions as a logical set. The final pair of curly braces defines the initial contents of the map; it's empty.

Anonymous Interfaces

Just like you can have anonymous struct types you can have anonymous interface types. The following are equivalent:

var foo io.Reader

var foo interface {
    Read([]byte) (int, error)
}

In either case I can assign anything with a Read([]byte) (int, error) method to foo.

We're near the end of our journey which brings us to the enigmatic interface{}:

var foo interface{} = nil
  • I'm defining a variable
  • I'm giving it the name foo
  • Anything with these methods qualifies as this type
  • The contents are explicitly zero
interface{} is an anonymous interface type. It has no requirements, so any value is suitable. I can pass around a value of type interface{} but I can't do anything with it without using a type assertion or the reflect package.

In this way the empty interface is different from other interface types in that it doesn't specify required behavior. It sidesteps the type system and turns what could be compile-time errors into run-time errors. When writing code that uses the empty interface, use great care.

Saturday, June 8, 2019

Wednesday, March 6, 2019

3D Printer Filament Cheap Vacuum Desiccation

Lots of 3D printer filaments are hygroscopic; they absorb water from the atmosphere. This is problematic because during printing the filament is subject to a sudden, dramatic jump in temperature. This causes the absorbed water to flash into steam, potentially degrading the material properties of the filament. For filaments like PETG and Nylon this makes for a rougher surface and anomalies in the finished part.

The standard mechanism for drying filament is to put it in the oven for a while. I started getting strange results with my PETG because I got lazy and left it out between prints. I wanted to fix the issue but wasn't keen on putting plastics in my oven. I wondered about other methods and silica gel packets came to mind. Those are the little packets that come with shoes and other things and say DO NOT EAT. I was skeptical that these could reverse the absorption that had already taken place, at least not in a timely fashion. Also, I like to tinker and wanted to experiment with weird ideas.

I like to sous vide and had recently gotten a food vacuum sealer with an attachment for sucking the air out of wine bottles. I could seal the filament spools in a vacuum bag with some desiccant, but I felt like it would work better if I could get closer to a proper vacuum. I didn't want to invest in a high-grade vacuum chamber and pump for an experiment. I started looking around and found the FoodSaver Quick Marinator: a rigid container with a valve for the hose that connects the vacuum sealer to attachments.

Here's my setup:

  • Off-brand food vacuum sealer
  • The FoodSaver Quick Marinator
  • The vacuum sealer attachment hose
  • Coffee filters
  • Flower desiccant powder
The flower desiccant powder is the same material as the silica gel packets but it's very fine, providing a higher surface area, and it's trivial to renew.

I put 5 (arbitrary) tablespoons of desiccant powder into a coffee filter, fold it up, and tape it closed so the powder can't easily escape.

I put my filament spool into the marinator, drop my desiccant pack on top, and put the lid on with the tabs on the same side.

Then, suck the air out. I run the pump until it shuts off, wait a few seconds, and repeat. I go through about three cycles of this. Between cycles, check that the pump isn't warm; you don't want to ruin it by overheating.

Finally I ensure the valve is closed before removing the hose.

Surprisingly, this worked. My next round of PETG printing produced a nice, smooth finish. I've taken to storing the spools with one of my homemade desiccant packets in a small Space Saver bag with the air vacuumed out. It doesn't look like they sell just the small bags, and the medium and larger sizes are grossly too large for a filament spool.

It's important to note that so far I've only done this desiccating process on Dremel filament spools, which are 0.5kg. As you can see in the picture above, these spools just fit in the Quick Marinator. Larger spools probably won't fit; loose filament can, depending on how much you want to stuff in there. When I get larger spools I'll probably have to come up with another mechanism.

Thursday, January 10, 2019

New 3D Printer: Dremel 3D45

Background

For a while I had a 3D printer. Specifically a Printrbot Simple Metal. When it worked, it was a lot of fun. I could start with an idea and turn it into a real physical object without the need for a big workshop, huge mess, or personal injury. When it worked, it was great! But the process of getting it to work and keeping it working took all the joy out of the prospect of desktop fabrication.

Three years later I found myself thinking about getting another 3D printer, hoping that reliability had improved. I started looking at what was on the market with an emphasis on reliability: I didn't want to tinker with the printer, I just wanted to turn designs into objects. Dremel seemed to have some reliable offerings, though a little pricey. I pulled the trigger on the 3D45 and so far I'm very pleased.

Disclaimer

I should disclose that I'm being compensated for this review. My printer came with a small flyer saying that if I post a review of the printer on my blog I'll receive two spools of PLA for free. The flyer didn't provide direction on what the review should say.

Initial Impressions

I pulled the printer out of the box and had it set up in about 20 minutes. I was able to start a job for a frog figure from the internal storage. It came out almost perfect, with a couple of misaligned layers and some poor adhesion ("stringing") at the top.

Overall I had good luck with the printer initially. I would sometimes have trouble with first-layer adhesion, which is the most common problem in printing. Trying different things to correct it, I found that washing the build plate, re-leveling the bed, and reapplying glue would almost always correct the issue.

The Good, the Bad, and the Ugly

Pros

It just works. I've used an entire spool of ECO-ABS with an overwhelmingly positive success-to-failure ratio. I've printed in PETG and Nylon as well with no failures there. Nylon is notoriously annoying to print with and it worked on my first try. Note: ECO-ABS is a proprietary formulation of PLA.

It looks nice. While I don't actually care how it looks, the fact that it's fully self-contained means that the cat, the dog, dust, Dorito crumbs, ejected shell casings, etc. won't put my prints at risk.

It has almost all the bells and whistles. As stated, it's fully enclosed to help keep heat inside and reduce warping. It has a heated bed to help with the same. The build plate is tempered glass in an easy-to-remove and handle frame. I'm really pleased with how easy it is to work with the build plate. It's got a touch screen that provides helpful information and allows you to perform basic operations and start/pause/cancel prints from onboard storage or a USB drive. It prints "ECO-ABS", PLA, PETG, and Nylon. It has an integrated web cam, runout sensor, yadda-yadda-yadda.

The filament seems really good. The color and performance are consistent throughout the spool. I don't really have problems with first-layer or intra-layer adhesion. The Dremel filament spools have an embedded RFID tag that the printer reads and uses to automatically set the temperature for the extruder and bed.

Cons

It's expensive. The printer retails for $1800. I got mine for $1400 on Amazon Black Friday. That's fine, as a reliable device that lasts years is worth investing in. The filament, however, is also expensive. Common, non-funky filaments run between $15-30 per kg. Dremel filament runs $30-40 per 0.5kg. This is tempered somewhat by the fact that between the quality of the printer and their filament my prints succeed more often, so I waste less (my own design mistakes aside).

It's something of a walled garden. There's not a lot of room to tinker with the printer. I don't want to tinker with the printer... but should it be necessary, having the option would be nice. I can't complain too much as I made a conscious decision to sacrifice hackability for reliability.

The network features are weak. You can manage and monitor print jobs through their cloud service but you can't do so over the LAN. The frame rate for the video in their cloud service is something like 0.1fps. It's difficult to actually understand what's happening.

WTF?

The cloud slicer is garbage. You can upload an STL to the cloud service and have it do the slicing for you. It was easy! But I never got a successful print from the cloud slicer. Usually there would be a loss of inter-layer adhesion, leaving me with a slinky vaguely in the shape of my print. The Dremel Cura desktop slicer works extremely well, and when I upload the gcode it produces to the cloud service the results are great. Unfortunately they kind of tout the cloud service, and using only the cloud service brought me nothing but failure. Luckily I have some printing experience and knew how to troubleshoot this.

"Filament DRM". They strongly encourage you to print using only Dremel filament. It seems to be high quality and the RFID identification feature is definitely neat. However, they state that using non-Dremel filament will void your warranty. This seems insane to me. I get that they wouldn't want to support issues people have using janky filament. I would totally accept that if I use 3rd party filament and my house burns down they shouldn't be held liable. But to say that they are unwilling to support me for using less expensive filament leaves a bad taste.

And the filament cost. I would feel a lot less annoyed about the quasi-restriction of using only Dremel filament if it weren't 2-4 times the cost of other filaments. It is good, but the difference in cost is kind of crazy. To some extent they market this product line for education; schools can't afford this. Also, they should encourage other filament vendors to use the same RFID spec. It's a cool feature. Overall, they should make the filament significantly cheaper or penalize their customers less for using third-party filaments.

I'm really disappointed that I can't make decent use of the built-in camera. I get that they wouldn't want to stream 720p over the Internet to everyone; they aren't trying to be a video streaming service. But there should be a way for me to watch it in full quality over my home network.

Trouble Strikes!

Eventually I had a serious issue that wash/level/glue wouldn't resolve. I wasn't getting a first layer. The extruder would travel around the surface of the bed and yell CLICK CLICK CLICK CLICK as though some gears were slipping, like something was stuck. Thinking I had a clog, I looked in the manual for their clearing process. The troubleshooting matrix says that if you have a clog you should contact support; simultaneously, the manual has instructions for clearing a clog. I went through the clog-clearing process and found that the extruder could push filament out fine. Attempting another print, I got the same symptom. It seemed that the tip of the nozzle was flush with the bed and it was clogged because there was no place to extrude filament to.

I called customer support, but it was a Friday afternoon. Wonderfully, they have US-based support. Unfortunately for me they're in Central Time, I'm in Pacific Time, and they were already closed. I used their email contact form and received an automated response saying I'd hear back from them in two business days. This being Friday, I didn't expect a reply until Tuesday or Wednesday. On Monday I got a response with corrective instructions that fixed the issue in a few minutes. It sucks that I was out of commission for several days, but it's awesome that I had access to people whose job it was to help me. Free community advice on 3D printing issues varies far and wide depending on the nature of your problem.

Conclusions

I think my money on this printer was well-spent. I've been really enjoying it. I feel like I got the reliability I was looking for. The filament is really expensive, which makes me hesitant to experiment.