Wednesday, October 22, 2014

Installing nginx on Mac OS X Yosemite

This is about how to use Homebrew to install nginx on OS X 10.10 Yosemite. I had a few issues that I still can't explain compared to when I did this on Mavericks, but was able to work around them for the most part.

My goal is to have nginx installed, listening on port 80 (running as root), and have it run when the computer starts.

The first part is easy:

$ brew install nginx

Now that nginx is installed, we need to move the plist file into ~/Library/LaunchAgents so launchd can load it:

$ mv /usr/local/Cellar/nginx/1.6.2/homebrew.mxcl.nginx.plist ~/Library/LaunchAgents/

(Notice the nginx version number here; make sure you use the right one for your installation!)

Now root needs to own it so it can run as root:

$ sudo chown root:wheel ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist

The next part still baffles me, and it's where we try to get nginx to run when the computer starts. Run these next two commands to load the plist file into launchd and then start nginx:

$ sudo launchctl load -w ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist
$ sudo launchctl start homebrew.mxcl.nginx

The -w option should tell launchd to enable the service by overriding the Disabled property. Ideally, we should be done here... everything I can find seems to indicate this will work. And it does in OS X Mavericks, but for some reason, when I rebooted in Yosemite, I got this error on the console:

10/22/14 10:10:29.932 AM otherbsd[3120]: Failed to bootstrap agent: path = /Users/matt/Library/LaunchAgents/homebrew.mxcl.nginx.plist, error = 119: Service is disabled

Well that's no good. Why is it disabled!? I have no idea, but a little Googling led me to an overrides.plist file. Edit it:

$ sudo nano /private/var/db/launchd.db/com.apple.launchd/overrides.plist

And add:

<key>homebrew.mxcl.nginx.plist</key>
<dict>
    <key>Disabled</key>
    <false/>
</dict>

alongside all the other entries. From my understanding, -w is supposed to do this for you, but whatever.
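
For reference, after the edit the file should look roughly like this (the com.apple.example entry is just a stand-in for whatever entries are already in yours):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.example</key>
    <dict>
        <key>Disabled</key>
        <true/>
    </dict>
    <key>homebrew.mxcl.nginx.plist</key>
    <dict>
        <key>Disabled</key>
        <false/>
    </dict>
</dict>
</plist>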

Reboot your system (or log out then back in) and nginx should be running now:

$ ps aux | grep nginx

You can also check to make sure that launchd has it loaded with:

$ sudo launchctl list | grep nginx

If you see a 0 in the second column, that means it executed successfully. If you see a 1, something is wrong; check the system logs (use the Console app and search for "nginx") to find out what.
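
As a final sanity check, you can ask nginx for its response headers (this assumes you've already configured nginx.conf to listen on port 80, per the goal above):

$ curl -I http://localhost/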

Good luck. May the brew be with you.

Monday, September 1, 2014

Papa Parse 3.1 released

Papa Parse 3.1 is now available. It has some edge-case bug fixes and also a few new features and enhancements.

Official release here on GitHub

Change Log

  • New config property: chunk. Specifying a chunk callback function still streams the input under the hood, but instead of getting the results row-by-row, you get them a chunk at a time (there's a rough sketch after this list). This can still use more memory than specifying a step callback, but for very large files (say a million rows), invoking a function 10,000 times may be faster than invoking it 1,000,000 times. Remember that the chunk size is customizable.

  • New config property: keepEmptyRows. Sometimes it's important to maintain row parity with the input, for example, when you want some output to match the input row-for-row. Empty rows are normally skipped, so otherwise it can be impossible to know where the empty lines were.

  • New config property: error. Actually, this appeared in the source code already but wasn't documented nor was it used where it should have been. The error callback gets executed if FileReader encounters a problem opening the file.

  • New feature: Pause and resume parsing. This only works without a worker thread. The step callback is passed a ParserHandle which has pause and resume functions exposed. This makes the parse function asynchronous if it wasn't already, but allows you to do other asynchronous things during parsing, for example, AJAX requests.

  • Enhancement: Meta has a new property called truncated. This value will be true if using preview and the preview did not parse all the results because they were longer than the preview length. It will be false if the preview was able to parse the entire input.

  • Fix: The preview value included the header row; in other words, a preview of 10 would produce only 9 data rows if the first row was a header row. Now, a preview of 10 parses 11 rows so that 10 of them are data rows.

  • New option: Papa.DefaultDelimiter. The default is a comma, but you can set a different (and valid) delimiter to fall back on when no delimiter is specified and detection fails.

  • Firefox fix: FileReader in worker thread. Firefox does not (currently) have FileReader in worker contexts. But it looks like they've considered that a bug and will be rolling it out in future versions of Firefox.

  • Fix: When converting JSON to CSV, null value is interpreted as empty string, whereas before an exception was thrown.

  • Fix: Using complete callback with a worker thread.

  • Fix: Escaped quotes in quoted field edge case.
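
Here's a rough sketch of how the new chunk callback might be used (the config property names are from the change log above; bigFile stands in for a File object or input string, and the logging is just for illustration):

Papa.parse(bigFile, {
    chunk: function(results) {
        // results.data holds the rows parsed in this chunk
        console.log("Parsed a chunk of " + results.data.length + " rows");
    },
    complete: function() {
        console.log("All done");
    }
});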

Friday, August 29, 2014

Chrome rendering quirk when using 3D transforms

In Chrome 36.0.1985.143 I encountered a fascinating rendering quirk related to some elements that had a 3D transform and perspective. Firefox didn't exhibit the behavior. Baffled at first, I finally figured out what Chrome is doing.

Check out this layout where I've reproduced the problem. Try clicking the yellow box:

Notice that, even though the blue content box appears on top of the gray rectangle, you can only click on the yellow box over the word "me" where the gray box has ended.

Firefox lets you click the whole box as expected; Chrome lets you click only part of it.

I tried shifting the gray box by changing its left CSS property; wherever the gray box was, I couldn't click... until the gray box was shifted far over to the right:



Odd. Then I found a shift where I could only click half of it again (over the word "Click"):



Notice that you can only click the other side of the yellow box now, even though the gray box is entirely "behind" the yellow one.

So what's happening?

Chrome is rendering the z-index properly, but z-index does not apply to 3D transforms. The gray box was rotated on the Y axis so that, on the left, it's further away from the user on the Z axis, but on the right, it is closer to the user -- so close, in fact, that it is cutting through the yellow box, even though it does not appear this way visually.

This could be dismissed as an edge case, but it is a very real concern for web developers using 3D transforms in their layouts. The work-around is to use another 3D transform, translateZ, instead:
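
In CSS, the work-around boils down to something like this (#content and the 100px value are from the fiddle; the vendor-prefixed line is just for older browsers):

#content {
    -webkit-transform: translateZ(100px);
    transform: translateZ(100px);
}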



By translating the content on the Z axis toward the user enough so that it appears in front of even the gray element's right side, the entire content area is selectable and clickable again.

To see this even more clearly, try changing the translateZ value on #content to 90px instead of 100px in that last fiddle, and you'll see that only a small part of the yellow box is not clickable. That part is still considered as "behind" the gray box in Chrome's surface map.

I hope that made sense. This had me scratching my head for a few minutes. Firefox exhibited the behavior I expected, but Chrome was technically more correct in rendering the surface map in a 3D space. Now I'm not sure what to think.

Tuesday, August 5, 2014

Maximizing the Use of Interfaces in Go

Nerd alert: I made a type that satisfies almost every Go standard library interface, just 'cause. And I noticed some interesting things about the standard lib along the way...

But in case you're in a hurry, I'll cut to the chase: Go programmers, as a community, should seek to minimize the declarations of new interfaces and improve on and maximize the use of existing ones.

Favorite thing about Go: can you guess?

Interfaces are my favorite thing about Go. (I share Russ Cox's sentiment for them.) Concurrency is cool too, but I am of the opinion that implicitly-implemented-interfaces trump even the mighty go keyword. (Seems that Dave Cheney thinks so too.)

To illustrate why, consider the tens of thousands of functions in Go code across the world that use an interface value of type error in their signatures. All of those functions automatically become compatible with your brand new type, one the world has never even seen before, the moment you add an Error() string method to it.

Isn't that cool!? Two packages can work with each other without having to know anything about each other: the types just happen to have methods defined on them which implement a common interface. This has huge implications for how we, as a community, design software.
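
As a tiny, made-up example (ParseError isn't from any real package), that's all it takes:

package main

import "fmt"

// ParseError is a hypothetical custom error type.
type ParseError struct {
    Line int
}

// Error satisfies the built-in error interface.
func (e ParseError) Error() string {
    return fmt.Sprintf("parse failed on line %d", e.Line)
}

func main() {
    var err error = ParseError{Line: 42} // usable anywhere an error is expected
    fmt.Println(err)
}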

It gets better. When declaring your own interface, you can take an existing interface and embed it into yours, like io.ReadWriter does:

type ReadWriter interface {
    Reader
    Writer
}

Now any types that implement io.ReadWriter also implement io.Reader and io.Writer automatically. It's a 3 for 1 deal. Wow!

My interface addiction

A while ago, I self-diagnosed with Compulsive Interface Implementation Disorder (CIID). I was writing custom error types, first for the martini-contrib/binding package, then again for the more general mholt/binding. I satisfied my craving by satisfying the error interface. Then suddenly I realized what I'd done: I'd just given new life to thousands of existing libraries. All the code that requires an error could now use my new type.

Of course, this isn't the first time I've used interfaces in Go, and most likely I've been under-utilizing them. But there was something so liberating about making this new type compatible with nearly everything else out there.

A look at interfaces in the standard library

As addicts do, I sought more of a high. So I wrote a little program that finds all the interfaces from the Go 1.3 standard library. Turns out there are 114 standard library interfaces.

Here are a few things I noticed while looking them over.

There's plenty of overlap.

I asked the folks on Gophers Slack why several interface declarations contained duplicate methods: same names, same signatures. I found the following interfaces (there are probably others) with duplicate method definitions:

driver.Conn
driver.Rows
driver.Stmt
io.Closer
   Close() error

fmt.ScanState
io.RuneReader
   ReadRune() (r rune, size int, err error)

fmt.ScanState
io.RuneScanner
   UnreadRune() error

jpeg.Reader
io.ByteReader
   ReadByte() (c byte, err error)

fmt.State
io.Writer
   Write(p []byte) (n int, err error)

io.Seeker
http.File
   Seek(offset int64, whence int) (int64, error)

dwarf.Type
binary.ByteOrder
expvar.Var
fmt.Stringer
flag.Value
net.Addr
cookiejar.PublicSuffixList
os.Signal
reflect.Type
parse.Node
   String() string

I am curious whether this duplication is intentional, because each of these methods needs to be declared only once. The other interfaces could then just embed the interface where the desired method is declared, like we saw with io.ReadWriter. (Note that by "duplication" I mean the method name & signature, not its actual implementation.)

Some participants in Gophers Slack suggested that the duplication was for stability: if an embedded interface was changed, it would probably break code that used the embedder interface. Consider this frailty when designing your own packages.

In a comment, Jan Mercl pointed out that embedding a type (say fmt.Stringer) requires that package to import fmt, even if it doesn't use it for anything else. This should be taken into consideration when embedding.

Also, using embedding implies a working relationship with the embedded type. Simply sharing a method signature is not necessarily enough reason to embed. Just because an interface you made for your own http package happens to have the same methods as another interface in some obscure package like syscall doesn't mean you should import syscall and embed the type.

Most standard lib interfaces are small.

I seldom saw interface declarations in the standard lib with more than 3 or 4 methods. This pattern is beneficial to us.

When two interfaces share a common method, it might be wise to pull that method declaration out into its own interface. Small interfaces are easy to implement and embed to make larger interfaces when needed.
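
Here's a hedged sketch of that idea (all of these names are invented for illustration):

// Declare the shared method once...
type Closer interface {
    Close() error
}

// ...then embed it wherever it's needed.
type Conn interface {
    Closer
    Send(data []byte) error
}

type FileHandle interface {
    Closer
    Read(p []byte) (n int, err error)
}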

Effective Go suggests that small interfaces should be common.

reflect.Type is a HUGE interface.

Speaking of keeping interfaces small, did you know that the interface reflect.Type has 28 exported methods? I suppose these are for implementing run-time reflection. Large interfaces are exceptionally time-consuming to implement, so avoid them when possible. If you do declare a large interface, take advantage of embedding when you can.

Some interfaces look like others.

The expvar.Var interface acts just like the fmt.Stringer interface because they both have exactly one method and its signature is the same: String() string. It might seem annoying that there are two different interfaces with the same definition.

Which one should you implement? Actually, you don't have to choose. In Go, implementing one means you have implemented the other, since they are the same by definition.
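
A quick sketch to illustrate (Temperature is a made-up type):

package main

import (
    "expvar"
    "fmt"
)

// Temperature's single String method satisfies both interfaces.
type Temperature float64

func (t Temperature) String() string {
    return fmt.Sprintf("%.1f degrees", float64(t))
}

// Compile-time proof that one method implements both.
var (
    _ fmt.Stringer = Temperature(0)
    _ expvar.Var   = Temperature(0)
)

func main() {
    fmt.Println(Temperature(21.5))
}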

Even though these two interfaces seem redundant, they document code differently. Using expvar.Var makes it clear that you're working with an exported variable, not just some generic type that can stringify itself. In other words, this "redundant" interface is self-documenting, and it provides extra cognitive context. You might furrow a brow if you saw a function use fmt.Stringer in the expvar package in a place where it seems more natural to use a Var.

If you follow this pattern in your own projects, be sure it adds clarity to what your code is doing, and use godoc to help explain its purpose.

Some interfaces almost look like others.

In Go, it's so easy to implement interfaces that it can be accidental.

os.Signal caught my attention because of the comment in its definition:

type Signal interface {
        String() string
        Signal() // to distinguish from other Stringers
}

It looks like fmt.Stringer, except for that pesky Signal() method. This pattern is actually similar to the one above with expvar.Var, but is slightly more rigorous.

What is the purpose of Signal(), which in practice does nothing? I did a little digging in the os and syscall packages. Lots of low-level stuff happens in there. For your type to work with the functions that use an os.Signal value, you need to deliberately assert that your type is, in fact, an os.Signal, not a mere fmt.Stringer.
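
For example, deliberately satisfying os.Signal might look like this (mySignal is a made-up type, and the snippet assumes the os package is imported; syscall.Signal does essentially the same thing):

// mySignal is a hypothetical type.
type mySignal string

func (s mySignal) String() string { return string(s) }

// Signal does nothing; it exists only to mark the type as an os.Signal
// on purpose, rather than by accident of having a String method.
func (s mySignal) Signal() {}

var _ os.Signal = mySignal("pretend-signal")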

It's a clever trick. Use it sparingly though, as it's seldom necessary to prevent others from accidentally implementing your interface.

Interface implementations are documented as such.

Nothing fancy here: just write a comment when you deliberately implement an interface, like the Go authors did with syscall.Signal. This makes it obvious where and how you can use the type.

Just for fun: a supermassive type

I implemented as many standard lib interfaces as I could on a single type. The Everything type implements 103 of the 114 Go standard library interfaces. You can use it in ~90% of the standard library functions that have an interface in the signature.

Interface power

Go programmers should use interfaces often. Design for them, and define them carefully.

We should be talking about specific interfaces as much as we talk about packages and projects. Announce interfaces that may be useful to others.

I hope it becomes common for the list of interfaces a package implements to be promoted to the project's README. Implementing popular interfaces should be described as a feature!

As we find and use existing, exported interfaces, our libraries will become more interoperable and compatible at an exponential rate. (Incidentally, a way to search exported Go interfaces would be awesome. SourceGraph? Anyone?)

Go programmers, as a community, should try to minimize the declarations of new interfaces and maximize the use of existing ones.

Monday, August 4, 2014

Convert video to animated gif with ffmpeg


Thanks to the help in this SuperUser question, I'm posting some commands for future reference that make it easy to convert video or movie files to animated GIF.

This requires ffmpeg (and ImageMagick, if you want better quality). If you have a Mac and need those programs, you can do brew install ffmpeg imagemagick to install them.

NOTE: To convert videos to a more HTML5-compatible format like WebM, install ffmpeg with the --with-libvpx --with-libvorbis --with-fdk-aac options.

Basic conversion:

ffmpeg -i input.mp4 -r 20 output.gif

-r 20 means 20 frames per second, which is pretty common; 20 FPS looks smooth to the human eye.

Better quality:

ffmpeg -i input.mp4 -r 20 -f image2pipe -vcodec ppm - | convert -delay 5 - output.gif

-f image2pipe tells ffmpeg to output the images through the stdout pipe. -vcodec ppm indicates that the output should be in the PPM format (a simple image format that stores plain RGB color values). The -delay 5 means that convert should insert a new frame every 5 ticks. There are 100 ticks in a second, so this equates to 20 FPS. By making the -r and -delay values disproportionate, you can make your gif play faster or slower than the original.

Better quality at smaller file size:

ffmpeg -i input.mp4 -r 20 -f image2pipe -vcodec ppm - | convert -delay 5 - gif:- | convert -layers Optimize - output.gif

Here we run it through convert twice to optimize the different layers. This should reduce the file size a little bit.

But after all this, I discovered that just encoding the original videos using Handbrake with "Web Optimized" checked and moving up the RF slider to a higher value (like 22) was significantly better. A nice-looking gif was about 10 MB but a nice-looking, re-encoded video from Handbrake was only about 200 KB. Go figure.

Bonus tips: resizing images

ImageMagick can also resize images on-the-fly and en masse.

One image:

convert input.png -resize 400x400 resized.png

Many images:

mogrify -resize 400x400 *.png

Note that, by default, convert and mogrify will resize the image to fit into those dimensions, keeping the aspect ratio. They will not distort the image. In other words, the -resize value is the maximum dimensions for either side (WxH).
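
If you actually do want to force exact dimensions and let ImageMagick distort the image, append an exclamation point to the geometry (quoted so the shell leaves it alone):

convert input.png -resize '400x400!' resized.png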

Also, mogrify will overwrite original files. However, the latest versions generate backup files with a tilde (~) at the end of the file name. If you want to prevent this, specify a sub-directory to put the results into:

mogrify -path resized_images -resize 400x400 *.png

(I noticed a more pixellated quality on my Retina MBP when using the -path option. I don't know why. Also, the path must exist beforehand.)

Yay for not doing this all manually!

Extra bonus: convert MP4 to WebM

Thanks to this guide, it's pretty easy to convert MP4 to WebM:

ffmpeg -i input.mp4 -c:v libvpx -crf 40 -b:v 1M -c:a libvorbis output.webm

The CRF value can be anywhere from 4-63 where a lower value is higher quality. They recommend to start at 10, but if your goal is small file size, try a value around 40 and go from there. I found 40 to produce a small video that was still good enough for the web.

Thursday, July 31, 2014

Javascript Performance: Synchronous vs. Asynchronous FileReader

I learned something surprising about Javascript that didn't intuitively transfer over from other languages.

HTML5's FileReader API lets you read files from disk. The user must choose the file(s), but then you have permission to load them in text or binary format. If you've used it before, you know that it's asynchronous, kind of like XMLHttpRequest: you set up a FileReader with callback functions before you perform a read:

var reader = new FileReader();
reader.onload = function(event) {
    // ... event.target.result has file contents ...
};
reader.readAsText(file);

I've heard it said that asynchronous operations in Javascript run slower than synchronous ones. I assume this has to do with processor scheduling, the overhead of a function call, etc. Asynchronous code can be a little cumbersome to use, too, and it can lend itself to spaghetti code.

But did you know that there is a synchronous FileReader? No callbacks necessary:

var contents = new FileReaderSync().readAsText(file);

Nice, huh! There's one catch, though: it's only available in web workers. This is because reading a file synchronously is a blocking operation and would lock up the web page. Therefore, you can only use FileReaderSync in a separate thread.
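
For illustration, a bare-bones worker using the synchronous reader might look like this (the message format is invented for the example):

// worker.js
self.onmessage = function(e) {
    var file = e.data; // a File object posted from the main thread
    var contents = new FileReaderSync().readAsText(file);
    self.postMessage(contents.length); // report something back
};

// main thread
var worker = new Worker("worker.js");
worker.onmessage = function(e) { console.log("Read " + e.data + " characters"); };
worker.postMessage(fileInput.files[0]);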

When I rewrote Papa Parse for version 3.0, which supports running in a web worker, I was excited to use this new FileReaderSync thing. One cool feature about Papa is that it can "stream" large files by loading them in pieces so that they fit into memory. Both FileReader APIs support this "chunking" mechanism, so I took advantage of it.

Finally, it was ready to test. I executed it and roughly measured its performance. My first test was in the main thread using FileReader (asynchronous):

That looked right. It took about 11 seconds to process just under five million rows. I was happy with the results.

Then I tried it in the worker thread using FileReaderSync. Like its asynchronous sister code, this routine was periodically sending updates to the console so I could keep an eye on it. Rather abruptly, updates stopped coming. I had to resort to system-level monitors to ensure my thread hadn't locked up or died. It was still there, but its CPU usage dropped significantly and updates slowed to a crawl:

It took nearly 20x longer to process the file in a dedicated worker thread using a synchronous file reader! What!?

The jagged line was expected: the results had to be sent to the main thread after every chunk for reporting. (In Javascript, data has to be copied between threads, rather than sharing memory. D'oh.)

Isn't it odd that its speed sharply, steadily declines at exactly 30 seconds?

Here are the two graphs on the same plane:

Obviously, this was unacceptable. What was going on?

I asked the folks on Stack Overflow and Google+, and guess where I got the answer? Google+, believe it or not. Tyler Ault suggested trying the regular, asynchronous FileReader in the worker thread.

I did and it worked.

Performance in the worker then was comparable to the main thread. No slowdowns. Why?

I don't know for sure. A couple theories were discussed:

  • The garbage collector only runs when the CPU is idle on that Javascript thread. Since everything was synchronous, there wasn't any time to breathe and clean things up. Memory had to be shifted around a lot and maybe even swapped out (though I never exhausted my system memory; maybe the browser puts a cap on it).
  • The fact that it slowed at exactly 30 seconds is interesting. It's possible the browser was doing some throttling of threads that ran for that long without a pause. (Though that seems to defeat the purpose of using worker threads.)
If you have more light to share on the topic, please feel free to comment (and +mention me to be sure I get it) or tell me on Twitter @mholt6 or something.

Monday, July 28, 2014

GoConvey for Gophers

GoConvey is a testing utility for Go built around the standard testing package and go tools. If you're not familiar with GoConvey yet, take a look, maybe try it out, or watch the intro video before finishing this article.

Mike (@mdwhatcott) and I (@mholt6) made GoConvey to improve our workflow and to help us write clearer and more deliberate tests.

I wanted to point out a few things that may not be clear or obvious about GoConvey. Then I'll discuss why it is or isn't a good choice for you and your project.

"GoConvey" means a few things.

In your editor, it's a testing framework (namely the Convey and So functions).

In the terminal, goconvey is an HTTP server that auto-runs tests.

At http://localhost:8080, it's a graphical interface for using go test and viewing results.

In your compiled binaries, it's nothing. (More on this later.)

GoConvey is loosely coupled.

That means you can use the DSL without the goconvey server. And vice-versa.

Many developers use the web UI to auto-run and visualize standard Go tests that don't use GoConvey's special functions. GoConvey can run tests built with other testing frameworks as long as they also use go test.

By using GoConvey, you're not automatically building another barrier to entry: you're automatically building and running Go tests.

On the flip-side, it's totally normal to just keep running go test from the command line manually while using GoConvey's syntax to structure tests. (If that's what you prefer.)

GoConvey runs go test.

Look! Nothing up my sleeve. All goconvey really does is run go test. Yeah, it uses the -cover flag and stuff, but all the output and data it generates comes from the go command.

go test runs GoConvey.

GoConvey tests are in _test.go files and inside Test functions with the testing.T type, just like regular tests.

It's a good idea to use regular tests when you don't want the BDD structure or when it becomes unnatural to describe your tests in the BDD style. You can use both kinds of tests in the same package and file.

GoConvey is not a CI tool any more than go test is.

Sometimes I see people refer to GoConvey as a CI (continuous integration) utility. Maybe they define that differently than I do, but the only thing "continuous" about GoConvey is the auto-test feature and the only "integration" features it has are with go test (and maybe with the file system, if you count polling the project directory for changes).

If it works for you in a CI environment, great! (Would you tell me how?) You can write tests using the DSL and run those in CI jobs, but beyond that, the web UI is interactive and I don't think the goconvey server that powers it is useful in automated scripts...

Test dependencies are not compiled into your programs.

Except when you run go test, your Go programs are not built with test files or their dependencies. This means that using a testing framework like GoConvey has no technical effect on your compiled program. The file size and function of your resulting binary are unchanged.

Even go-getting any package does not download its test dependencies. You need the -t flag for that.

Test dependencies are thus very deliberately obtained and are only needed by your library's contributors/hackers. Most people will never need to concern themselves with your testing framework or assertions package of choice.

GoConvey documents code beyond godoc.

Godoc is Go's convenient, built-in documentation system. It reads comments directly inline with your code to produce package, function, and variable documentation; examples; known bugs; and more.

Far from replacing godoc, GoConvey's DSL complements it. While godoc documents how to use a package, Convey tests document and enforce package functionality and behavior.

For example, suppose you wrote a bowling game. The godoc would probably explain how to use functions like NewGame, Score, and Roll, along with what they do. This is useful for users of the package, but leaves a lot to be assumed by any developers that want to start hacking away on it. To fix this, you could make your godoc much more verbose and describe how the package is supposed to behave, but that's just noise for users who don't need to know about the internals, and developers still don't have any assertions that prove your program works like you say. You could write regular tests, but then you hope the comments stay true to the code.

This is where behavioral tests come in. The GoConvey tests for the bowling game make it clear exactly what the program should do normally and in edge cases; plus, it asserts correct behavior in context of the test case. The tests actually convey intent and become that missing documentation.
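
To make that concrete, a test in that style might read something like this (it assumes the dot-imported convey package and the hypothetical NewGame/Roll/Score API from the paragraph above):

func TestBowlingGame(t *testing.T) {
    Convey("Given a fresh game", t, func() {
        game := NewGame()

        Convey("When the first roll knocks down 7 pins", func() {
            game.Roll(7)

            Convey("The score should be 7", func() {
                So(game.Score(), ShouldEqual, 7)
            })
        })
    })
}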

See, by using godoc along with descriptive tests, you've now sufficiently documented your code for both users and developers.

Deciding when and how to use GoConvey


Should everyone use GoConvey for everything?



No.

GoConvey is all about workflow and intent. It's not for everyone in every situation. I don't even use it all the time.

It does some things very well, but other things not so much.

Does well:


  • Documents your code with descriptive, structured tests
  • Auto-runs tests
  • Assertions
  • Reports results in real-time in the browser or via desktop notifications
  • Integrates with go test and standard testing package
  • Stubs out test code for you

Doesn't do well:

  • Automated environments (regular go test works fine though)
  • Run outside of $GOPATH
  • Save battery life
  • Race detection
  • Truly randomized test execution order
  • Idiomatic Go (I'm referring to the DSL)

There’s no magic formula that I know of to determine whether you should use GoConvey one way or another. But here are a few ideas to try:

  • Execute the goconvey command and open the web UI on your existing Go tests. See how it feels.
  • If you are starting a new project, check out the examples folder and try a new way of writing tests using GoConvey's DSL (careful, you might like it). Make sure the GoConvey server is running so you get instant feedback after every save. (Or take a look at the tests for Go-AWS-Auth for a real use case.)
  • If you don't like the nested BDD structure, try using a single level of Convey instead. This way your tests can still be somewhat descriptive and you get the benefit of the many So assertions. (For example, you can still run table-driven tests using a for loop inside a single Convey level; see the sketch after this list.)
  • Use Go's dot-import for the convey package so you can make it "one of your own," so to speak. Way more convenient.
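
Here's roughly what that single-level, table-driven style looks like (Add and the test values are invented for the example):

import (
    "testing"

    . "github.com/smartystreets/goconvey/convey"
)

// Add is a hypothetical function under test.
func Add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
    Convey("Add should sum two integers", t, func() {
        cases := []struct{ a, b, want int }{
            {1, 2, 3},
            {0, 0, 0},
            {-1, 1, 0},
        }
        for _, c := range cases {
            So(Add(c.a, c.b), ShouldEqual, c.want)
        }
    })
}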

One of our hopes for GoConvey is to make writing tests more of a pleasure than any old task on your to-do list. Watching the coverage bars go up, up, up and seeing that orange "FAIL" turn into a big green "PASS" is seriously motivating.




(okay, really; done with this meme now)