April 10, 2014

Mocking functions in Go

Functions in Go are first-class citizens, which means you can have a variable that contains a function value and call it like a regular function.
printf := fmt.Printf
printf("This will output %d line.\n", 1)
This ability can come in very handy for testing code that calls a function which is hard to properly test while testing the surrounding code.  In Juju, we occasionally use function variables to allow us to stub out a difficult function during tests, in order to more easily test the code that calls it.  Here's a simplified example:
// in install/mongodb.go
package install

func SetupMongodb(path string) error {
    // suppose the code in this method modifies files in root
    // directories, mucks with the environment, etc...
    // Actions you actively don't want to do during most tests.
    return nil
}

// in startup/bootstrap.go
package startup

func Bootstrap() error {
    path := getPath()
    if err := install.SetupMongodb(path); err != nil {
        return err
    }
    return nil
}
So, suppose you want to write a test for Bootstrap, but you know SetupMongodb won't work, because the tests don't run with root privileges (and you don't want to set up mongodb on the dev's machine anyway).  What can you do?  This is where mocking comes in.

We just make a little tweak to Bootstrap:
package startup

var setupMongo = install.SetupMongodb

func Bootstrap() error {
    path := getPath()
    if err := setupMongo(path); err != nil {
        return err
    }
    return nil
}
Now if we want to test Bootstrap, we can mock out the setupMongo function thusly:
// in startup/bootstrap_test.go
package startup

import (
    "errors"
    "testing"
)

type fakeSetup struct {
    path string
    err  error
}

func (f *fakeSetup) setup(path string) error {
    f.path = path
    return f.err
}

func TestBootstrap(t *testing.T) {
    f := &fakeSetup{err: errors.New("Failed!")}
    // this mocks out the function that Bootstrap() calls
    setupMongo = f.setup
    err := Bootstrap()
    if err != f.err {
        t.Errorf("Error from setupMongo not returned. Expected %v, got %v", f.err, err)
    }
    expPath := getPath()
    if f.path != expPath {
        t.Errorf("Path not correctly passed into setupMongo. Expected %q, got %q", expPath, f.path)
    }

    // and then try again with f.err == nil, you get the idea
}
Now we have full control over what happens in the setupMongo function: we can record the parameters that are passed into it, control what it returns, and test that Bootstrap is at least using the function's API correctly.

Obviously, we need tests elsewhere for install.SetupMongodb to make sure it does the right thing, but those can be tests internal to the install package, which can use non-exported fields and functions to test logic in ways that would be impossible from an external package (like the startup package). Using this mocking means that we don't have to worry about setting up an environment that allows us to test SetupMongodb when we really only want to test Bootstrap.  We can just stub out the function and test that Bootstrap does everything correctly, and trust that SetupMongodb works because it's tested in its own package.

April 1, 2014

Effective Godoc

I started to write a blog post about how to get the most out of godoc, with examples in a repo, and then realized I could just write the whole post as godoc on the repo, so that's what I did.  Feel free to send pull requests if there's anything you see that could be improved.

I actually learned quite a lot writing this article, by exploring all the nooks and crannies of Go's documentation generation.  Hopefully you'll learn something too.

Either view the documentation on godoc.org:

http://godoc.org/github.com/natefinch/godocgo
or view it locally using the godoc tool:

go get code.google.com/p/go.tools/cmd/godoc
go get github.com/natefinch/godocgo
godoc -http=:8080

Then open a browser to http://localhost:8080/pkg/github.com/natefinch/godocgo


March 28, 2014

Unused Variables in Go

The Go compiler treats unused variables as a compilation error. This causes much annoyance to some newbie Gophers, especially those used to writing non-compiled languages, who want to be fast and loose with their code while doing exploratory hacking.

The thing is, an unused variable is often a bug in your code, so pointing it out early can save you a lot of heartache.

Here's an example:

50 func Connect(name, port string) error {
51    hostport := ""
52    if port == "" {
53        hostport := makeHost(name)
54        logger.Infof("No port specified, connecting on port 8080.")
55    } else {
56        hostport := makeHostPort(name, port)
57        logger.Infof("Connecting on port %s.", port)
58    }
59    // ... use hostport down here
60 }

Where's the bug in the above?  Without the compiler error, you'd run the code and have to figure out why hostport was always an empty string.  Did we pass in empty strings by accident?  Is there a bug in makeHost and makeHostPort?

With the compiler error, the compiler will tell you "53: hostport declared and not used" and "56: hostport declared and not used".

This makes it a lot more obvious what the problem is... inside the scope of the if statement, := declares new variables called hostport.  These shadow the variable in the outer scope; thus the outer hostport never gets modified, and the outer variable is what gets used further on in the function.

50 func Connect(name, port string) error {
51    hostport := ""
52    if port == "" {
53        hostport = makeHost(name)
54        logger.Infof("No port specified, connecting on port 8080.")
55    } else {
56        hostport = makeHostPort(name, port)
57        logger.Infof("Connecting on port %s.", port)
58    }
59    // ... use hostport down here
60 }

The above is the corrected code. It took only a few seconds to fix, thanks to the unused variable error from the compiler.  If you'd been testing this by running it or even with unit tests... you'd probably end up spending a non-trivial amount of time trying to figure it out.  And this is just a very simple example.  This kind of problem can be a lot more elaborate and hard to find.

And that's why the unused variable declaration error is actually a good thing.  If a value is important enough to be assigned to a variable, it's probably a bug if you're not actually using that variable.

Bonus tip:

Note that if you don't care about the variable, you can just assign it to the blank identifier directly:
_, err := computeMyVar()
This is the normal way to avoid the compiler error in cases where a function returns more than you need.

If you really want to silence the unused variable error and not remove the variable for some reason, this is the way to do it:
v, err := computeMyVar()
_ = v  // this counts as using the variable
Just don't forget to clean it up before committing.

All of the above also goes for unused imports.  And a similar tip for silencing that error (this one has to go at package level):
var _ = fmt.Printf // this counts as using the package

March 21, 2014

Go and Github

Francesc Campoy recently posted about how to work on someone else's Go repo from github.  His description was correct, but I think there's an easier way, and also one that might be slightly less confusing.

Let's say you want to work on your own branch of github.com/natefinch/gocog - here's the easiest way to do it:

  1. Fork github.com/natefinch/gocog on github
  2. mkdir -p $GOPATH/src/github.com/natefinch/gocog
  3. cd $GOPATH/src/github.com/natefinch/gocog
  4. git clone https://github.com/YOURNAME/gocog .
  5. (optional) go get github.com/natefinch/gocog

That's it.  Now you can work on the code, push/pull etc from your github repo as normal, and submit a pull request when you're done.

go get is useful for getting code that you want to use, but it's not very useful for getting code that you want to work on.  It doesn't set up source control.  git clone does.  What go get is handy for is getting the dependencies of a project, which is what step 5 does (only needed if the project relies on outside repos you don't already have).  (thanks to a post on G+ for reminding me that git clone won't get the dependencies)

Also note, the path on disk is the same as the original repo's URL, not your branch's URL.  That's intentional, and it's the key to making this work.  go get is the only thing that actually cares if the repo URL is the same as the path on disk.  Once the code is on disk, go build etc just expects import paths to be directories under $GOPATH.  The code expects to be under $GOPATH/src/github.com/natefinch/gocog because that's what the import statements say it should be.  There's no need to change import paths or anything wacky like that (though it does mean that you can't have both the original version of the code and your branch coexisting in the same $GOPATH).

Note that this is actually the same procedure that you'd use to work on your own code from github, you just change step 1 to "create the repo in github".  I prefer making the repo in github first because it lets me set up the license, the readme, and the .gitignore with just a few checkboxes, though obviously that's optional if you want to hack locally first.  In that case, just make sure to set up the path under $GOPATH where the code would go if you used go get, so that go get will work correctly when you decide to push up to github.

(updated to mention using go get after git clone)

March 15, 2014

Go Tips for Newbie Gophers

This is just a collection of tips that would have saved me a lot of time if I had known about them when I was a newbie:

Build or test everything under the current directory and subdirectories:

go build ./...
go test ./...

Technically, both commands take a pattern to match the name of one or more packages, and the ... specifier is a wildcard, so you could do .../foo/... to match all packages under $GOPATH with foo in their path.

Have an io.Writer that writes to an in-memory data structure:

b := &bytes.Buffer{}

Have an io.Reader read from a string (useful when you want to use a string as the input data for something):

r := strings.NewReader(myString)

Copy data from a reader to a writer:

io.Copy(toWriter, fromReader)

Timeout waiting on a channel:

select {
case val := <-ch:
    // use val
case <-time.After(time.Second * 5):
    // timed out, handle it here
}

Convert a slice of bytes to a string:

var b []byte = getData()
s := string(b)

Passing a nil pointer into an interface does not result in a nil interface:

func isNil(i interface{}) bool {
    return i == nil
}

var f *foo = nil
fmt.Println(isNil(f))  // prints false

The only way to get a nil interface is to pass the keyword nil:

fmt.Println(isNil(nil))  // prints true

How to remember where the arrow goes for channels:

The arrow points in the direction of data flow, either into or out of the channel, and always points left.

The above is generalizable to anything where you have a source and destination, or reading and writing, or assigning.

Data is taken from the right and assigned to the left, just as it is with a := b.  So, like io.Copy, you know that the reader (source) is on the right, the writer (destination) is on the left:  io.Copy(dest, src).

If you ever think "man, someone should have made a helper function to do this!", chances are they have, and it's in the std lib somewhere.

January 2, 2014

Bitcoin is like Magic (the Gathering)

People regularly prophesy the doom of Bitcoin.  It'll be outlawed, something else bigger or better will come along, there will be a major flaw found, people will just get bored and move on.

This reminds me a lot of the early days of Magic the Gathering.  It was 1995, and I had just gotten into Magic, which was a couple years old at that point.  The over-powered cards from early sets were already hitting $100 each, and it seemed like newbies would never stand a chance playing against the old guard who got in on the ground floor, unless you were willing to shell out a ton of money.  The game had a lot of confusing rules that were not very intuitive unless you were a hardcore rules nerd.  Obviously not something that looked like it would last for long, right?

Word got out that people were dropping hundreds of dollars at a time on this thing called a collectible card game, and so competitors came out of the woodwork.  Spellfire, Jyhad, Wyvern, Star Trek.... all came out and all failed more or less spectacularly within a few years.  And yet Magic the Gathering is still being produced and sold, 20 years later.  It's still the collectible card game.  Why?  What makes Magic so special, so long lasting?  Why couldn't any of those other games compete?  Because they weren't Magic.  Just as other cryptocurrencies aren't Bitcoin.

Magic had the first mover advantage.  It set up a base of players willing to spend all their allowance every week on packs of cards.  It defined what a collectible card game is. By the time the other games came along, Magic was 2-3 years into its production.  There was a history to the game, a depth of play in the cards available that a new game couldn't compete with.

Bitcoin has the same advantages.  It defined what a cryptocurrency is.  It has a history twice as long as any other cryptocurrency.  When someone says cryptocurrency, you know they're trying to be polite, but what they really mean is "Bitcoin and all the rest".

This brings us to the second way Magic won: the network effect.  Everyone had heard of Magic, or, if not, upon entering a gaming store, it was the game everyone was playing.  You could buy cards from other games, but you'd be lucky to find anyone who'd even bought any of those, let alone anyone heavily invested and wanting to play/trade.  If you chose Magic, you could interact with nearly everyone in the store.

The network effect is the huge win for Bitcoin.  Go online and look for stores that accept Bitcoin.  Now look for brick and mortar stores that accept Bitcoin.  There's a map for that.  Now try to even find a listing of places that accept coins that aren't bitcoin.  Which currency would you rather use, the one that can actually buy things, or the one with the cute mascot?  The network effect makes Bitcoin the Facebook of cryptocurrencies. There may be a few alternatives that don't die off, but they'll be far and away less used.

The final way that Magic beat the competition is through constant improvement.  When Magic was first introduced, the rules were incredibly over-complicated.  Ask any old player about damage prevention bubbles and interrupt windows.  You needed a flowchart and a spreadsheet and a PhD just to figure out how some relatively simple spells worked.  Luckily, the rules did not have to be static.  They were constantly updated.  Interactions were clarified, rules were simplified, and the game was made much easier to play and understand by even casual players.

This is a critical point that nearly all the critics of Bitcoin overlook.  Bitcoin is not set in stone.  Bitcoin is software, and software can be modified.  There is almost no conceivable bug that could be found in Bitcoin that would actually take down the currency entirely.  SHA256 turns out to be backdoored by the NSA?  It can change to something else.  Default .0001 btc transaction fee turns out to be too high once Bitcoin hits $100,000?  No big deal, it can be changed.  Bitcoin, unlike pretty much every other cryptocurrency out there, has actual developers being paid actual money to work on the software.  Any problem encountered can be overcome.

So, Magic the Gathering, 20 years later.  Still around, still arguably the best trading card game in existence.  This despite the explosion of German style board games, video games, and the internet.  Would someone from 1993 recognize the game?  Certainly. Would they be able to pick up current cards and play?  With a tiny bit of help, probably.

Bitcoin in 20 years will be much the same.  It might look a little different.  It might act a little different.  But, the basic idea and workings will be the same.  It'll just be more streamlined and a lot more prevalent in common society.  And people will barely remember that there used to be a bevy of altcoins trying to jump on the bandwagon.  Will one or two survive, the way Pokemon did, by chiseling out a niche that wasn't covered by the original?  Probably.  But there will still only be one original, biggest, best game in town, and that'll be Bitcoin.

November 10, 2013

Working at Canonical

I've been a developer at Canonical (working on Juju) for a little over 3 months, and I have to say, this is the best job I have ever had, bar none.

Let me tell you why.

1.) 100% work from home (minus ~2 one week trips per year)
2.) Get paid to write cool open source software.
3.) Work with smart people from all over the globe.

#1 can't be overstated. This isn't just "flex time" or "work from home when you want to".  There is literally no office to go to for most people at Canonical.  Working at home is the default.  The difference is huge.  My last company let us work from home as much as we wanted, but most of the company worked from San Francisco... which means when there were meetings, 90% of the people were in the room, and the rest of us were on a crappy speakerphone straining to hear and having our questions ignored.  At Canonical, everyone is remote, so everyone works to make meetings and interactions work well online... and these days it's easy with stuff like Google Hangouts and IRC and email and online bug tracking etc.

Canonical's benefits don't match Google's or Facebook's (you get the standard stuff, health insurance, 401k etc, just not the crazy stuff like caviar at lunch... unless of course you have caviar in the fridge at home).  However, I'm pretty sure the salaries are pretty comparable... and Google and Facebook don't let you work 100% from home.  I'm pretty sure they barely let you work from home at all.  And that is a huge quality of life issue for me.  I don't have to slog through traffic and public transportation to get to work.  I just roll out of bed, make some coffee, and sit down at my desk.  I get to see my family more, and I save money on transportation.

#2 makes a bigger difference than I expected.  Working on open source is like entering a whole different world.  I'd only worked on closed source before, and the difference is awesome.  There's purposeful openness and inclusion of the community in our development.  Bug lists are public, and anyone can file one.  Mailing lists are public (for the most part) and anyone can get on them.  IRC channels are public, and anyone can ask questions directly to the developers.  It's a really great feeling, and puts us so much closer to the community - the people that have perhaps an even bigger stake in the products we make than we do.  Not only that, but we write software for people like us.  Developers.  I am the target market, in most cases.  And that makes it easy to get excited about the work and easy to be proud of and show off what I do.

#3 The people.  I have people on my team from Germany, the UK, Malta, the UAE, Australia, and New Zealand.  It's amazing working with people of such different backgrounds.  And when you don't have to tie yourself down to hiring people within a 30 mile radius, you can afford to be more picky.  Canonical doesn't skimp on the people, either.  I was surprised that nearly everyone on my team was 30+ (possibly all of them, I don't actually know how old everyone is ;)  That's a lot of experience to have on one team, and it's so refreshing not to have to try to train the scrappy 20-somethings to value the things that come with experience (no offense to my old colleagues, you guys were great).

Put it all together, and it's an amazing opportunity that I am exceedingly pleased to have been given.

September 28, 2013

60 Days with Ubuntu

At the end of July, I started a new job at Canonical, the makers of Ubuntu Linux.  Canonical employees mostly work from home, and use their own computer for work.  Thus, I would need to switch to Ubuntu from Windows on my personal laptop.  Windows has been my primary operating system for most of my 14-year career.  I've played around with Linux on the side a few times, running a mail server on Mandrake for a while... and I worked with CentOS as the server OS for the software at my last job... but I wouldn't say I was comfortable spending more than a few minutes on a Linux terminal before I yearned to friggin' click something already.... and I certainly hadn't used it as my day to day machine.

Enter Ubuntu 13.04 Raring Ringtail, the latest and greatest Ubuntu release (pro tip: the major version number is the year it was released and the minor version number is the month; Canonical does two releases a year, in April and October, so they're all .04 and .10, and the release names are alphabetical).

Installation on my 2 year old HP laptop was super easy.  Pop in the CD I had burned with Ubuntu on it, and boot up... installation is fully graphical, not too different from a Windows installation.  There were no problems installing, and only one cryptic prompt... do I want to use Logical Volume Management (LVM) for my drives?  This is the kind of question I hate.  There was no information about what in the heck LVM was, what the benefits or drawbacks are, and since it sounded like it could be a Big Deal, I wanted to make sure I didn't pick the wrong thing and screw myself later.  Luckily I could ask a friend with Linux experience... but it really could have done with a "(Recommended)" tag, and a link for more information.

After installation, a dialog pops up asking if I want to use proprietary third party drivers for my video card (Nvidia) or open source drivers.  I'm given a list of several proprietary drivers and an open source driver.  Again, I don't know what the right answer is, I just want a driver that works, I don't care if it's proprietary or not (sorry, OSS folks, it's true).  However, trying to be a good citizen, I pick the open source one and.... well, it doesn't work well at all.  I honestly forget exactly what problems I had, but they were severe enough that I had to go figure out how to reopen that dialog and choose the Nvidia proprietary drivers.

Honestly, the most major hurdle in using Ubuntu has been getting used to having the minimize, maximize, and close buttons in the upper left of the window, instead of the upper right.

In the first week of using Ubuntu I realized something - 99% of my home use of a computer is in a web browser... the OS doesn't matter at all.  There's actually very little I use native applications for outside of work.  So, the transition was exceedingly painless.  I installed Chrome, and that was it, I was back in my comfortable world of the browser.

Linux has come a long way in the decade since I last used it.  It's no longer an OS that requires you to drop into a terminal to do everyday things.  There are UIs for pretty much everything that are just as easy to use as the ones in Windows, so things like configuring monitors, networking, printers, etc. all work pretty much like they do in Windows.

So what problems did I have?  Well, my scanner doesn't work.  I went to get drivers for it, and there are third party scanner drivers, but they didn't work.  But honestly, scanners are pretty touch and go in Windows, too, so I'm not terribly surprised.  All my peripherals worked (monitors, mouse, keyboard, etc), and even my wireless printer worked right away.  However, later on, my printer stopped working.  I don't know exactly why, I had been messing with the firewall in Linux, and so it may have been my fault.  I'm talking to Canonical tech support about it, so hopefully they'll be able to help me fix it.

Overall, I am very happy using Linux as my everyday operating system.  There are very few drawbacks for me.  Most Windows software has a corresponding Linux counterpart, and now even Steam games are coming to Linux, so there's really very little reason not to make the switch if you're interested.

April 17, 2013

Statically typed generic data structures in Go

I gave a talk at the Go Boston meetup last night and figured I should write it up and put it here.

The second thing everyone says when they read up on Go is "There are no generics!".

(The first thing people say is "There are no exceptions!")

Both are only mostly true,  but we're only going to talk about generics today.

Go has generic built-in data structures - arrays, slices, maps, and channels. You just can't create your own new type, and you can't create generic functions. So, what's a programmer to do? Find another language?

No. Many, possibly even most, problems can be solved with the built-in data structures. You can write pretty huge applications just using maps and slices and the occasional channel. There may be a tiny bit of code duplication, but probably not much, and certainly not any tricky code.

However, there definitely are times when you need more complicated data structures. Most people writing Go solve this problem by using interface{}, the empty interface, which is basically like Object in C# or Java or void * in C/C++.  It's a thing that can hold any type... but then you need a type assertion to get at the actual type. This breaks static typing, since the compiler can't tell if you make a mistake and pass the wrong type into something that takes an interface{}, and it can't tell until runtime whether an assertion will succeed.

So, is there any solution? Yes. The inspiration comes from the standard library's sort package. Package sort can sort a slice of any type, and it can even sort things that aren't slices, if you've made your own custom data structure. How does it do that? To sort something, it must support the methods on sort.Interface. Most interesting is Less(i, j int) bool. Less returns true if the item at index i in your data structure is less than the item at index j in your data structure. Your code has to implement what "less" means... and by only using indices, sort doesn't need to know the types of objects held in your data structure.

This use of indices to blindly access data in a separate data structure is how we'll implement our strongly typed tree. The tree structure will hold an index as its data value in each node, and the indices will index into a data structure that holds the actual objects. To make a tree of a new type, you simply implement a Compare function that the tree can use to compare the values at two indices in your data structure. You can use whatever data structure you like, probably a slice or a map, as long as you can use integers to reference values in the data structure.

In this way we separate the organization of the data from the storage of the data. The tree structure holds the organization, a slice or map (or something custom) stores the data. The indices are the generic pointers into the storage that holds the actual strongly typed values.

This does require a little code for each new tree type, just as using package sort requires a little code for each type. However, it's only a few lines for a few functions, wrapping a tree and your data. 

You can check out an example binary search tree I wrote that uses this technique in my github account:

https://github.com/natefinch/treesample
or go get the runnable sample tree:

go get github.com/natefinch/treesample

This required only 36 lines of code to make the actual tree structure (including empty lines and comments).

In some simple benchmarks, this implementation of a tree is about 25% faster than using the same code with Interface{} as the values and casting at runtime.... plus it's strongly typed.

April 16, 2013

Be not afraid.

The ultimate goal of terrorism is not carnage, it is fear. Go to work, love your family, enjoy life. Be not afraid. This is our best weapon against terrorism.

My thoughts go out to the victims of the Boston Marathon bombing and their families. I don't know any of them, but we are all Bostonians and Americans in spirit.