Tech Time

Incantations, poetry, and intellectual detritus flowing from great minds. By the makers of Harvest.

Improving My Vim Skills

I recently attended the Vim Masterclass online course led by vimcasts.org's Drew Neil. It was a great learning experience and I immediately bought Drew's book, Practical Vim, to learn even more. This got me thinking about why I love Vim and why, after more than two years, I'm still finding myself striving to improve my Vim skills.

Why Learn Vim

It might seem a little odd to talk about having skills in a text editor. Certainly, I've never thought about improving my TextMate skills. I believe the reason is that Vim feels more like programming than it feels like a typing tool. Just as composing JavaScript functions creates the behavior in a web page, Vim commands generate the JavaScript file itself. If you think about Vim commands as a language to be learned, it will make more sense why it's a skill to be developed over time. Consider the following command:

22Gfzdaw

Without getting into the specifics, this line means, “Go to line 22, move the cursor to the first occurrence of the letter ‘z’ and delete the entire word.” Doesn't this seem like coding? Obscure coding, for sure, but once you understand the syntax, it's pretty powerful. Even though the biggest hurdle to understanding Vim is having to memorize such terse commands, there are plenty of online tutorials to teach the mnemonics that will help you remember them.

Best Practices

Just like a programming language, there are many different ways to achieve the same outcome. And, just like a programming language, there are best practices to get the most out of what you type. In his online course, Drew Neil talks about what he calls “The Dot Formula.” In Vim, the Dot Command (pressing the dot key) will repeat a command. Since a lot of what we do as developers is repetitive in nature, this is a really handy feature. However, to get the most out of the Dot Command, you need to consider how you type your commands.

viwcfoo

ciwfoo

Both of the above commands will change a word to ‘foo’. The first uses visual mode to highlight the word and then change it to ‘foo’. This is nice because you see what you are about to change. However, you don't get the full power of the Dot Command: if you move over to another word and press the dot key, you get unexpected results. Now try the second command. This one changes the word under the cursor to ‘foo’ without using visual mode. Now move to another word and press the dot key. That works as expected.

Although the above is a somewhat trivial example, it shows that really understanding the mechanics behind Vim's syntax can help you to be more efficient.

Coolest Thing I've Learned

I'm trying to get my point across without writing an insanely long post about Vim. So, I'm going to leave you with the one command from the Vim Masterclass that impressed me most: the ‘normal’ command.

:10,20 normal A;<enter>

This essentially says, “Append a semi-colon to the end of each line on lines 10 through 20.” We are able to write one command that affects a whole range of lines. Additionally, you can use the Dot Command with :normal to repeat the above on another range of lines.

:30,40 normal .<enter>

Always Learning

This has barely touched the surface of what Vim can do. If you've never tried Vim, I hope I've made you curious. If you do use Vim, I hope I've given you something you haven't seen before.

The Secret to a Successful Distributed Team? Ask Aretha.

I’ve been at Harvest for more than two years, and in that time I’ve seen our team grow from 8 to 25 people. A large portion of our team is in NYC, but we’ve also got coworkers in 6 US states, Hungary and Canada. On top of that, Harvesters have also been known to take “work-cations” where they travel somewhere outside of the office, but work a full day as if they hadn’t left.

With a group scattered all over the place, it’s really important that the people who join our team can be counted on to deliver. I’ve been lucky enough to be involved in the hiring process at Harvest and, though I think I’ve done a fair job, I’ve never really nailed down a checklist of qualities someone should have before they’re allowed to join the Harvest fold.

Over the past two months, I’ve been working from the road as my wife and I tour the US and southern Canada. The trip has afforded me some time to visit four of my remote coworkers and spend a few days working with them on their home turf. Working in remote coworkers’ natural environments has given me a little perspective into how their day works and how they view our teammates. It’s also given me a better sense of who they are as people.

My trip has gotten me back to thinking about what makes a good coworker, and my conclusions aren’t exactly rocket science: you want someone who builds things that she’s proud of and who cares about her tools and methodology; someone you can trust to sit and work and not play video games all day; and someone who can have a conversation with you even though your worldviews may not always be 100% lined up.

It turns out that these qualities can be summed up in one perfectly magical word: respect.

When talented people respect each other, building a web application feels like no big deal. Digging in on a hard problem isn’t as intimidating when you know there are 9 other developers who have your back. It’s more comfortable to argue a position you believe in when you know your opinion will receive real consideration. It’s easy to point out to someone a potentially better way to do something when you know they’re not going to freak out that you stuck your nose into their turf.

Harvest’s culture of respect isn’t something that can be artificially generated or forced upon a team. Respect can’t be learned from a company handbook or a few weeks of training – respect comes from a lifetime of experience interacting with other people. You can hire all the genius rockstar ninja magicians you want, but hiring even one person who holds his coworkers in contempt will poison the well for everyone.

The next time I’m asked to talk with a candidate, I’m going to put R-E-S-P-E-C-T at the top of my checklist.

Introducing Sidetap

When we began working on the New Harvest Mobile Timesheet, we set out some basic tenets that we wanted to guide the project: ease of use, speed and feel.

Of the three tenets, my responsibilities for the project mostly focused on making the app feel nice to use. We love the way iOS feels and wanted Harvest on the iPhone to feel as much like a native app as possible. Building a web interface that feels like a native app is a risky proposition because the closer you get to simulating native behavior, the more noticeable the little differences become (this is the web vs. native “uncanny valley”, if you will). Nevertheless, we focused on a few specific areas that we most associate with the feel of iOS apps: scrolling and animation between views.

Early on, we looked very closely at the Hacker News mobile interface built by Lim Chee Aun. Not only did it work great on iOS, but he took painstaking steps to document the experience in a massive blog post (part 1, part 2) that should be required reading for anyone building a web app targeted at iOS devices. His project didn’t work on Android or other platforms, but we were hopeful that we would be able to use his work as a starting point and bring a lot of that functionality back to Android. However, after building some early prototypes, it became clear that delivering a unified iOS-like experience to all mobile platforms was not going to be a worthwhile pursuit.

Enter Sidetap

We decided to change tack and build a framework that delivers a satisfying but simpler interface to most mobile browsers. The basic interface doesn’t include things like fixed headers, content panel animations or hidden address bars, but it works in a way that doesn’t make it feel half-assed. If iOS5+ is detected, we enhance the interface to include the missing features and deliver a more robust, native-feeling interface.

We call our new framework Sidetap after the side-based navigation it was built around (the kind of side navigation you can find in iOS apps like Facebook and Sparrow).

Sidetap tries to be useful by not doing too much. It focuses on the hardest problems to solve, which leaves you free to think about how your content looks and works. These are the things that Sidetap does best:

Scrolling

Replicating iOS scrolling used to be a tremendous hassle and entire JavaScript libraries were written to attempt the feat. As of iOS 5, it has become a breeze — just add -webkit-overflow-scrolling: touch to an HTML element that has some kind of overflow set on it. There are some gotchas, but working with this has been pretty nice.

In our case, the element that gets scrolled is nestled three layers deep inside the content panel. Using three divs helps stop the browser from scrolling past the element when you reach the bottom of the div. If you’re curious about this technique, you should read this discussion on GitHub.

Animation

All animations in Sidetap are powered by -webkit-animation, which runs silky-smooth thanks to the hardware acceleration built into mobile Safari. We’ve included two types of bi-directional content panel animations to start (slide-up and slide-over), but plan to include more in future releases.

Hiding the Address Bar

We wanted to take advantage of as much of the screen real estate as we could, so that meant hiding the address bar and keeping it hidden. Hiding the address bar is simply done by scrolling the page to the 0 y-coordinate. However, because Sidetap’s main container is set to 100% of the browser window, there wasn’t enough actual content to scroll anything. To get around this, we apply a large amount of padding to the body element on page load and then scroll to 0. After a delay, we reset the container to 100% height and remove the body padding. This gives us a div that’s the perfect window height and an address bar that’s gone away.
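The sequence above can be sketched in a few lines of JavaScript. This is a hedged sketch rather than Sidetap's actual source; the container id, the extra padding amount, and the delay are assumptions:

```javascript
// Sketch of the address-bar trick described above (not Sidetap's
// actual source; the container id, padding value, and delay are
// assumptions).

// Pure helper: any padding larger than the window height guarantees
// the page can scroll, which is what lets us scroll the bar away.
function tempPadding(windowHeight) {
  return windowHeight + 1000;
}

function hideAddressBar() {
  if (typeof document === 'undefined') return; // browser-only
  document.body.style.paddingBottom =
    tempPadding(window.innerHeight) + 'px';
  window.scrollTo(0, 0); // scrolling to y = 0 hides the address bar

  setTimeout(function () {
    // The bar is gone; restore the layout to exactly one window height.
    document.body.style.paddingBottom = '';
    document.getElementById('sidetap-container').style.height =
      window.innerHeight + 'px';
  }, 100);
}
```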

Fixed Headers

Fixed headers should be easy, right? On desktop browsers, using position: fixed is a reliable way to keep an element in the same position regardless of how a page is scrolled, but fixed positioning wasn’t built into mobile Safari until iOS5. We tried using position: fixed in an early Sidetap prototype and, unfortunately, the current implementation does not meet expectations (see Remy Sharp’s thorough treatise on all that is wrong with fixed positioning).

Luckily, there was a relatively simple workaround (thanks to the fact that we abandoned fixed headers on non-iOS platforms). To create our fixed headers, we just position them absolutely in the content-panel (which has a non-scrolling height of 100%) and then add the appropriate padding to our scrollable sub-divs. Piece of cake!

Works With the JS MVC of Your Choice

Though we built Sidetap alongside Backbone.js, it is not a requirement and Sidetap could operate alongside any MVC framework of your choice (or no framework at all). Sidetap’s job is simply to take content and animate it — it’s not opinionated about where that content comes from (though it does expect it to follow the basic markup pattern).

Just instantiate Sidetap and store a reference to it. To add and animate to new content, reference the class and Sidetap handles all the fun. Here’s how this looks in a Timesheet view (using Backbone):

Harvest.sidetap.slideToNewContent(@$el)

That’s it.

What’s Next

Sidetap isn’t perfect and there are a few things that we’ve targeted for improvement:

  • Header animation: iOS header animation is surprisingly complicated and Sidetap doesn’t even come close to getting it right.
  • Touch events: there is a natural delay in touch events in mobile Safari for which several libraries have been created. Getting one of these libraries integrated would be nice.
  • More iPad friendly: The navigation should stay visible on the iPad when in landscape orientation.

Give It a Try

We realize there is no shortage of mobile frameworks available today, but we just didn’t find one that fit our desired style of navigation without including a lot of bloat (Sidetap is only 2k minified and gzipped). We feel that Sidetap fills this space nicely by trying to do as little as possible (just animate, baby).

We’re releasing Sidetap as an open source project today and we encourage you to try it out and see how it feels. Our sample app shows just how simple switching between content can be. Your contributions, issues and suggestions are very welcome.

Harvey: A Second Face for Your JavaScript

When media queries finally reached a state of good support across a lot of browsers, we started to make our web applications adapt to our users’ devices by optimizing the layout to focus on the content.

But now that we’ve grown to like and incorporate this new adaptive approach, what’s next? We set foot on fairly new ground not too long ago, and so we are still discovering new corners of this land we call Responsive Web Design. One of the things that we will explore next is the ability to add different modes of interaction to our sites, i.e., conditionally executing different JavaScript based on the screen dimensions of the rendering device.

While browsers (or rather, most browsers…) do a pretty good job when it comes to media queries in CSS, there is no easy and, more importantly, simplified tool for the JavaScript camp… until now!

Adapt or Die.

While we were working on the new site for WalkaboutNYC, I ran into different scenarios where it was necessary to change the UI drastically and pull in additional content for larger screens.

We built WalkaboutNYC “mobile first” to ensure that the site — especially the schedule and itinerary pages — was accessible on a wide range of different devices and could easily be used on-the-go by all attendees on the day of the event.

This approach allowed us to focus on only the most important and fundamental content and resulted in a single codebase of clean and semantic markup for the site. However, with increasing screen size, we wanted to enrich the user experience with enhanced navigation elements and offer a wider set of features which went beyond merely changing the layout. We had to change the DOM!

UI elements: Initially, we use native select boxes for more natural UX on small screens (e.g., phones), but then transform them into a list of radio buttons as soon as the screen is wide enough:
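A sketch of how such a swap can be wired up with Harvey's attach API (described later in this post). This is not the WalkaboutNYC source; the element ids, the 600px breakpoint, and the renderRadios helper are all hypothetical:

```javascript
// Hypothetical sketch (not the WalkaboutNYC source): swap a native
// <select> for radio buttons on wide screens, and back again.

// Pure helper: build radio-button markup for a list of option values.
function renderRadios(name, options) {
  return options.map(function (opt) {
    return '<label><input type="radio" name="' + name + '" value="' +
      opt + '">' + opt + '</label>';
  }).join('');
}

if (typeof Harvey !== 'undefined') { // browser-only wiring
  Harvey.attach('screen and (min-width: 600px)', {
    on: function () {
      // Wide screen: hide the <select>, show radios built from it.
      var select = document.getElementById('track-select');
      var opts = [].map.call(select.options, function (o) { return o.value; });
      document.getElementById('track-radios').innerHTML =
        renderRadios('track', opts);
      select.style.display = 'none';
    },
    off: function () {
      // Narrow screen: back to the native <select>.
      document.getElementById('track-radios').innerHTML = '';
      document.getElementById('track-select').style.display = '';
    }
  });
}
```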

External resources: To avoid long load and processing times on small and less capable devices, we decided not to include social media sharing options on those devices. Instead, they are dynamically injected after the page is loaded — and only on large screens:
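A sketch of that conditional injection. This is hypothetical, not the WalkaboutNYC source, and it uses the raw matchMedia API directly rather than Harvey; the script URL, breakpoint, and helper names are assumptions:

```javascript
// Hypothetical sketch: inject social-media scripts only on large
// screens, and only once. URL and breakpoint are assumptions.

var loadedScripts = {};

// Guard: returns true only the first time a given URL is requested,
// so repeated media-query flips never inject the script twice.
function shouldLoad(src) {
  if (loadedScripts[src]) return false;
  loadedScripts[src] = true;
  return true;
}

function injectScript(src) {
  if (!shouldLoad(src)) return;
  var s = document.createElement('script');
  s.async = true;
  s.src = src;
  document.body.appendChild(s);
}

if (typeof window !== 'undefined' && window.matchMedia) {
  var mq = window.matchMedia('screen and (min-width: 768px)');
  if (mq.matches) {
    // Large screen: pull in the sharing widgets after page load.
    injectScript('//platform.twitter.com/widgets.js');
  }
}
```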

…Now It’s as Easy as Flipping A Coin

Instead of using the traditional, more cluttered and less reliable methods of checking the screen width in JavaScript and/or listening to window.onresize to detect changes, I wrote Harvey.

Harvey executes certain parts of your JavaScript depending on the current device’s type, screen size, resolution, orientation, or any of the same media query types you would use in CSS. Harvey is originally written in CoffeeScript, weighs only 3k (1k gzipped) and has no external dependencies.

Once you attach a condition to Harvey (in the form of a valid CSS media query), you can register three callbacks for it. The first one, setup(), will be called the first time your media query becomes valid. The second and third callbacks, on() and off(), are executed each time that media query becomes valid or invalid, respectively:

Harvey.attach('screen and (min-width:600px)', {
    setup:  function() {},
    on:     function() {},
    off:    function() {}
});

Of course, you can attach as many media queries as you want using the same syntax. And all the media queries used in Harvey can be completely independent of the media queries you may be using in your CSS.

Harvey includes a modified version of Scott Jehl and Paul Irish’s matchMedia polyfill, as well as some bug fixes uncovered by Nicholas Zakas. It’s built on top of the matchMedia interface as defined in W3C’s CSSOM View Module.

Get Out There and Explore

Help us push the borders of frontend development to new undiscovered territories! Get the code, find out how it works, and use it in your own projects. Feel free to contribute and to extend Harvey with your own ideas!

An Experiment with Backbone.js

We are launching a new mobile web interface for Harvest today. It introduces a redesigned mobile time tracking interface and a new Team Status page, both of which are kept up to date regardless of updates being made on other devices. It is also technically implemented differently than most of the Harvest UI. We wanted to experiment with some cutting-edge technologies, and this is the result!

On the initial page load we serve a thin shell along with some bootstrapping data, then use JavaScript to draw the user interface on the client side. Updates to the timesheet happen through background requests to the server using a REST-like API. Driving it all is Backbone.js, a framework that brings some order to client-side web-app development. It does this by bringing a few concepts to the browser: Models, Collections and Views.

Backbone Models

Models are the basic objects the application operates upon. In Harvest's case, one such Model could be Project. Usually there is a 1:1 mapping between the Backbone Models and backend Models (database tables), but this is not strictly necessary. For example, on the frontend we have a RecentProjectTask Model used to ease project task selection that does not have a direct match in the database.

The best part about Backbone Models is actually related to another Backbone feature called Events and how they interact with Views. In a framework like Rails, a Model is a passive object that gets operated on by a controller or perhaps another Model. Backbone Models, on the other hand, broadcast events that are processed by the Views subscribing to them. There is a useful decoupling between action and reaction.
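That decoupling can be sketched in miniature. The following is not Backbone itself, just a stripped-down illustration of the pattern: a model broadcasts a change event and a view subscribes to it (all names here are hypothetical):

```javascript
// Minimal sketch of the Model/View event pattern (not Backbone itself).

function Model(attrs) {
  this.attrs = attrs || {};
  this.listeners = [];
}
Model.prototype.on = function (fn) { this.listeners.push(fn); };
Model.prototype.set = function (key, value) {
  this.attrs[key] = value;
  // Broadcast instead of calling any view directly: the model does
  // not know who is listening.
  this.listeners.forEach(function (fn) { fn(key, value); });
};

function View(model) {
  this.rendered = '';
  var self = this;
  model.on(function () { self.render(model); }); // subscribe
  this.render(model);
}
View.prototype.render = function (model) {
  this.rendered = 'Project: ' + (model.attrs.name || '');
};

var project = new Model({ name: 'Harvest' });
var view = new View(project);
project.set('name', 'Sidetap'); // the view redraws itself
```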

Backbone Collections

Collections are a set of Models basically corresponding to a set of rows in a backend database, though again this need not be so. They also have the same Event love that Models have.

Backbone Views

Views are responsible for presenting the user interface and responding to user events. For this reason, Backbone Views are more than just passive string generators. In fact, as a best practice, the HTML for the user interface should be stored in Templates instead. The choice of how Templates are implemented is not determined by Backbone. Unlike some opinionated frameworks, not much is set in stone by Backbone.

Besides being able to react to events, the next best thing about Views is the ability to nest them. This nesting drives the structure of the code starting from the UI mockup, for example:

This screen suggests at least three different kinds of Views. First, there is the top container responsible for the header, the navigation to other days, the opening of the menu, etc. Second, there is the View shown in pink that draws out all the entries. This View is tied to a Backbone Collection.

On the third level, there are small individual Views tied to each entry present on the screen. Some of these are outlined in blue in the above picture. Nesting occurs naturally: the top container View creates the next level of Views below it. The containing View may also keep references to its child Views in instance variables, but this is usually not necessary. A nested View takes on an active role and does not strictly need to be operated on by its parent.

For example, when you delete a timesheet entry, the corresponding View (outlined in blue above) will get notified via a destroy event and will have the opportunity to remove itself accordingly. The deleted timesheet entry was part of a Collection that also gets notified, which in turn notifies the containing View, ultimately allowing a different Total Hours value to be displayed on screen. Control can flow entirely through events without needing instance variables.
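The chain of notifications can be sketched like this. It is not Harvest's actual code; Entry, Timesheet, and the event wiring are hypothetical stand-ins for Backbone's Models, Collections and events:

```javascript
// Sketch of the destroy-event chain (hypothetical names, not
// Harvest's code): destroying an entry notifies the collection,
// which recalculates the total.

function Entry(hours) {
  this.hours = hours;
  this.destroyListeners = [];
}
Entry.prototype.onDestroy = function (fn) { this.destroyListeners.push(fn); };
Entry.prototype.destroy = function () {
  var self = this;
  this.destroyListeners.forEach(function (fn) { fn(self); });
};

function Timesheet(entries) {
  this.entries = entries.slice();
  this.totalHours = 0;
  var self = this;
  this.entries.forEach(function (entry) {
    // The collection reacts to each entry's destroy event...
    entry.onDestroy(function (e) {
      self.entries.splice(self.entries.indexOf(e), 1);
      self.recalculate(); // ...which ultimately updates Total Hours.
    });
  });
  this.recalculate();
}
Timesheet.prototype.recalculate = function () {
  this.totalHours = this.entries.reduce(function (sum, e) {
    return sum + e.hours;
  }, 0);
};

var entries = [new Entry(2), new Entry(3.5)];
var sheet = new Timesheet(entries);
entries[0].destroy(); // control flows entirely through events
```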

It Is a Sour Cherry

The advantage of using a framework like Backbone is its ordered method for creating fluid interactions. Ajax made partial screen updates possible, but making this fluid is difficult if you're rendering parts of the UI on the backend.

For example, on the current non-mobile Harvest Timesheet interface, when you start a timer the response is instant, even though the save operation may actually happen a few milliseconds later. The JavaScript redraws the UI before the operation completes, since we know of no reason why the backend should return an error. And since everything related to the UI is in JavaScript, there is no longer a risk of the page-load rendering differing from the redraw triggered by a UI interaction.
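That optimistic redraw boils down to a simple pattern: update the UI immediately, save in the background, and roll back only if the save fails. A minimal sketch (not Harvest's actual timer code; the names are made up):

```javascript
// Sketch of an optimistic update (hypothetical names, not Harvest's
// actual timer code).

function startTimer(ui, save) {
  var previous = ui.state;
  ui.state = 'running'; // redraw immediately; feels instant
  // Save in the background; roll back only on the rare failure.
  save().catch(function () {
    ui.state = previous;
  });
}

var ui = { state: 'stopped' };
startTimer(ui, function () { return Promise.resolve(); });
```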

It is not all that sweet, though. JavaScript, for all its expressiveness (and lately speed), is still no match for the wide body of helpers and libraries available on the backend, especially when that backend is as full-featured as Rails. If you want to textilize timesheet notes, it is easy on the backend, but on the frontend the library is most likely missing or buggy. Want to format timestamps in a friendly way, like “9 minutes ago”? Easy in Rails, but not implemented on the frontend.

Even worse: error handling in the browser is primitive at best. Whereas a Ruby exception lets us capture a full stack trace along with headers and other information for later analysis, there is nothing universally supported and as comprehensive in JavaScript. Rails functional tests are suddenly no longer enough, and you need to look into tools like capybara-webkit to exercise your JavaScript programmatically.

If you compare the development speed of a Backbone-based UI to one driven entirely by Rails, then the latter will always win. However, we rarely create interfaces in pure Rails these days, so the choice is actually between using a framework like Backbone or writing ad-hoc JavaScript to do much of the same. Backbone is not a replacement for Rails; rather, it brings order to your JavaScript code.

Pantsless Pair Programming

Recently we replaced Harvest’s billing system. It was a long project. It was a difficult project. It was the most enjoyable project I have worked on in my five years at Harvest. The success of the project hinged on our first real attempt at pair programming. As is common at Harvest, we took on this challenge remotely (i.e., pantslessly).

Doug and I began working on the billing replacement project this past fall. Early on we realized it was a project to be conquered, but not divided. Doug arrived with deep knowledge of our existing billing system, which had grown cantankerous with age. I arrived with zero knowledge of our existing billing system, nor any desire to gain knowledge of the existing billing system. It turns out these disparate perspectives led to a solid program design.

Google Docs

Nearly every day during the project Doug and I would hop on video chat and get to work. In the beginning the work was understanding the requirements of the Harvest billing system. We considered existing screens and flows, the manner in which we accept payment from our customers, and some of the things we’d like to do in the future.

We built a simple document in Google Docs that allowed each of us to watch as the other person edited. As the energy on one end of the line waned, the person on the other end could pick up the document and go to it. This spec document was our bible throughout the project.

Screen Sharing

We vacillated between Google Hangouts and Skype during the project. The driver would tuck Photo Booth into the corner of his screen to allow for two-way video during the screen share. When one application started to become unbearably laggy, we would switch to the other. This setup was by no means perfect.

iChat appeared to have a more solid screen sharing option, but we didn’t like how easy it was for the observer to take control of the driver’s machine. We have no doubt that there is better software for screen sharing. We knew a couple “good enough” tools, however, and decided to just get to work.

Scheduling

While we weren’t capital ‘P’ pair programming, we did try to work together almost every day of the project. This meant we each got to know the other’s schedule pretty intimately. Luckily the Harvest team allowed us to play on our code island for an extended period of time. We were beholden to almost no one but each other.

Doug lives in Montana. I live in Minnesota. We had plenty of family commitments during the project. Naturally our schedules aligned. Sometimes we would choose to wrap work early for the day, rendezvousing back later after our children were asleep. On a couple occasions we decided to push a little harder and put in some time on the weekend to get past a project milestone sooner rather than later.

Launching

Pairing is not just a rote coding concept. Deploying three months of work with all kinds of migrations and data transformations is best done with a friend. Together we built a launch checklist with nearly 100 items to complete and confirm. The checklist included descriptions, timing and assignments. During launch our team of two became three as Warwick, our sysops maestro, joined us for launch activities.

Thinking through and following this checklist was absolutely essential to a successful launch. Checking each other’s work at every step gave us confidence in our strategy. Having a brother-in-arms there for the launch made the eight hours feel like a lot less. For such a big deployment, it was also useful to have a wide array of space shuttle launch audio clips at our disposal.

Winding Down

We were excited to wind down such a rare multi-month Harvest project. For as much fun as we had working together building a killer design, it was time to move on. We launched billing over the winter, but continued to meet weekly for a couple months to address any bugs or deficiencies in the code. You bet we paired up again on these occasions, depending on each other to keep things in line.

Pants or no pants.

Are Tech Conferences Worth It?

Before I started working at Harvest, I had not attended a single tech conference. Frankly, I had not even given much consideration to attending one as I doubted they could be worth the price (including tickets, airfare, hotel, etc.) or time they cost. With most conferences making their talks available online at some point and a plethora of blog posts that sum up the key points, why would I even bother to leave my house?

Harvest has a very generous education policy (if something will make you better at what you do, you’re encouraged to do it) and conferences are something we’re welcome to explore. Some of my coworkers have attended conferences and seemed to feel they were worth it. I try to keep an open mind (especially when I haven’t given something a chance) and so I decided to attend a few events in the past year. I didn’t do it intentionally, but I was essentially the Goldilocks of conferences, attending four events of varying size and content.

The Mega Conference

In 2011, I joined a large Harvest contingent for a trip to SXSW Interactive. With nearly 20,000 people in town and hundreds and hundreds of events to choose from, SXSW was pretty overwhelming. The huge number of events being held across the entirety of downtown Austin made it very difficult to find quality sessions to go to and the giant parties that happen at night mean lots of waiting in line if you don’t know somebody.

Fear of Missing Out (FOMO) may not have been born to describe SXSW, but a more appropriate usage would be hard to find. Everything about the proceedings is plagued by the feeling that something better might be going on across town. By the end, I was happy to find a quiet place with my coworkers (some of whom I only see a few times a year at best) and talk/hack on projects.

Price: at least $2000. SXSW Interactive Badge ($595–950 depending on when you buy), Flight to Austin from NYC ($400ish), 5 nights of jacked-up hotels ($1000+++)

Would I spend my own money to attend? Not a chance. If I lived in Austin, I would try and hit up the side events that pop up, but I wouldn’t buy a badge. I enjoyed the experience of SXSW, but there’s just not enough value to justify spending so much money.

The One-Day Local Event

The next stop on my conference exploration was GothamJS, a one-day JavaScript conference right in NYC. This was the first year GothamJS was being run and they put together a nice roster of speakers on a mix of subjects that definitely exposed me to some new ideas. There were a few talks I didn’t enjoy, but there were also nice people and ice cream sandwiches. I was able to hop on the D train to get there, so the travel was a piece of cake.

Price: $220–250.

Would I spend my own money to attend? Yeah, I’d go again. Hard to beat a reasonably priced event with a solid speaker list in your own backyard. I don’t think I would spring for travel and hotel costs if I had to travel from another city, though.

The Multi-day Out-of-Town Conference

Last month, I attended JSConf in Scottsdale, AZ. Buying a ticket for JSConf was a leap of faith because they went on sale before speakers had been announced and they sold out in seconds. I had read about the event’s reputation for quality content, great organization and a wonderful hallway track so I didn’t mind grabbing a ticket and counting on the JSConf crew to put together an awesome event.

I was not disappointed.

The speakers overall were great and I especially enjoyed talks by David Nolen, Daniel Henry Holmes Ingalls, Jr., Jacob Thornton and Jake Archibald — not to mention the founder of the Swedish Pirate Party, Rick Falkvinge. More than the speakers, though, I really enjoyed the conversations that happened in the hallway. I had the opportunity to talk with people on the browser teams at Mozilla, Microsoft and Google. I talked to people writing user interface components and I talked to people going crazy with Node.js. There is a lot of excitement in the JavaScript community right now and it was fun to get to experience that energy in person. JSConf goes out of its way to make speakers available for Q&A and conversations (they stay for the whole event and attend all of the parties) and it’s an awesome way to follow up on something that caught your interest.

Price: $1800. JSConf ticket ($575), 4 hotel nights ($675), Airfare from NYC ($535)

Would I spend my own money to attend? I just wrote two paragraphs that didn’t mention the food (I paid for none of it), parties, freebies or the awesome (and free) pre-conference (NotConf). Yeah, I would go again and I wouldn’t hesitate to spend my own cash to do so.

The Multi-Disciplinary Conference

My coworker, Matthew, and I also attended the Harvest-sponsored ConvergeSE 2012 last month. Converge aims to examine the “intersection between design, development and marketing.” Speakers covered topics ranging from design and development of mobile apps to typography to customer service. Day 1 was filled with workshops across five tracks and I tried to sample a little from each. Day 2 was a more traditional single-track speaker day with all of the previous day’s tracks represented.

Price: $1150. ConvergeSE tickets ($300), Airfare from NYC ($250), 3 hotel nights ($600)

Would I spend my own money to attend? Yeah, I think so. I would definitely do it if it was in driving range and I could split a room with someone. As a developer, I really appreciated the exposure to some sessions on design, typography and other things I don’t spend enough time thinking about.

So, Are They Worth It?

Yes, there are certainly some conferences that justify the expense. Spending a couple of days watching passionate presenters and chatting with amazing people had me fired up to go home and crank out some new work. It’s hard to put a price on that kind of motivation.

My Tips For Getting the Most Out of a Conference

I’m certainly not a grizzled conference veteran, but I definitely picked up a few tips from my mini conference tour. Maybe these will help you out.

  • Get Outside of Your Comfort Zone: When you’re at a multitrack conference, try to choose talks that expose you to something new or sound “a little out there.” If you stick with things you’re familiar with, it’s a lot less likely you’ll hear something that blows your mind.
  • Participate in the Hallway Track: Talk to anyone you can, but especially follow up with speakers. Take Jason Fried’s advice and give new ideas 5 minutes — there are smart people out there.
  • Get Your Hands Dirty: If someone is demoing something you find interesting, try it out. You’re not likely to have a better chance to get your questions answered.
  • Enjoy the Parties, But Not Too Much: Waking up for a day of talks with a hangover is a good way to flush cash and opportunity down the toilet. Have a drink, but don’t go crazy.
  • Find Time to Explore: If you’re traveling for a conference, try to explore your new locale a bit. Ask someone to join you at a local restaurant or take a walk through a new neighborhood — new people and new places are a great way to keep your mind open.
  • Give a Talk: I haven’t done this yet, but I hope to soon. Speakers get a chance to present their thoughts and ideas and get immediate feedback from a number of their peers. It seems like a valuable experience and I’m ready to give it a shot.

Why not find an event near you?

My First Program: Dee Zsombor

I have forgotten what the first program I wrote actually did, but I remember the machine and the circumstances quite well. It was a Romanian Sinclair Spectrum clone called the HC85:

HC85

It had the appearance of a bulky keyboard that you had to hook up to a television set for a display. Permanent storage was provided via old magnetic tapes of the same type used for music before the Compact Cassette had become widespread. You had to hook up the tape recorder and listen to noises akin to what modems made while programs were being loaded. (This was an isolated socialist country; I'm not that old.)

In an age when buying a TV required years on a waiting list, the full setup above was very expensive, and no one I knew had one. Instead it was a shared system, made available on a rotating basis to interested kids who passed a math test. An hour every second week.

The machine had a key combination for every BASIC instruction: if you wanted to write "THEN", you had to press a two-key combination instead of typing T-H-E-N as you normally would today. Inserting new lines meant typing out the new line along with a line index, assuming you'd had the foresight to reserve insertion points. Changing a line meant typing it out again with the same line number. Saving your work meant storing it on tape, to be continued next time.

The starting problems were simple formula-based ones, introducing the concept of variables: computing the length of the hypotenuse of a right triangle, for example. My first program must have been something similar, but the details are lost.

I do remember a wow moment later, after creating a program that guessed a number by asking questions like 'is it larger than, smaller than, or equal to x?'. The idea that a machine, by acting out a plan, can take on the appearance of intelligence is still fascinating to me.

Alas, the initial flirtation with computers did not last long. The HC85 experience was too constraining, with little opportunity for immersion, so I pursued other interests. Years later I encountered Turbo Pascal, now running on a modern machine, and it was that second time around that I got hooked!

Testing with Wrong

Years ago I added a testing library to Harvest called assert2. I really loved the syntax, and the test error output was miles ahead of what you get with default Test::Unit output. Here was an example of a simple library providing tons of value.

Over time assert2 died on the vine. Its mantle was picked up by another team, however, and Wrong was born. Wrong uses the same syntax as assert2, but has the added benefit of being actively developed. And the maintainers are responsive. Double bonus!

Nothing to see here when Wrong detects right:

> def slap!(person); person[:attitude] = 'aghast'; end

> joffrey = {:attitude => 'smug'}
> slap!(joffrey)
> assert{ 'aghast' == joffrey[:attitude] }
==> nil  # Test success!

Someone comes along and edits the method, introducing a blatant bug. Luckily there are tests:

> def slap!(person); person[:attitude] = 'humbled'; end

> slap!(joffrey)
> assert{ 'aghast' == joffrey[:attitude] }
==>
 Wrong::Assert::AssertionFailedError: Expected ("aghast" == joffrey[:attitude]),
 but Strings differ at position 0:
 first: "aghast"
 second: "humbled"
 joffrey[:attitude] is "humbled"
 joffrey is {:attitude=>"humbled"}

Objects are blown out and the details are right there in front of you. This makes solving many test failures a matter of looking at the details of your objects, rather than re-re-rerunning your tests with various puts statements. This simple example barely touches on the awesome object details Wrong will display during test failures. Big win.

The syntax is dead simple as well: assert{ thing-you-think-should-be-true }. No messing with a multitude of different method names (assert, assert_nil, assert_equal, etc.).
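The block form is just plain Ruby. As a rough illustration of the mechanics only (this is not Wrong's actual implementation, which also parses the block's source to build its rich failure messages), a toy version might look like:

```ruby
# Toy sketch of a block-style assert. NOT Wrong's real implementation --
# Wrong additionally inspects the block's source code so it can print
# the value of every sub-expression when the assertion fails.
class ToyAssertionError < StandardError; end

def toy_assert(&block)
  result = block.call
  # Any falsy result means the expectation did not hold
  raise ToyAssertionError, "Expected block to be truthy" unless result
  nil # mirror Wrong's nil return value on success
end

joffrey = { :attitude => 'aghast' }
toy_assert { joffrey[:attitude] == 'aghast' } # passes, returns nil
```

The single entry point is what makes the style attractive: one method, one block, and the failure diagnostics come from inspecting the block rather than from picking the right assert_* variant.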

This is how we use Wrong at Harvest. You should start using Wrong as well. Your flow will thank you for it.

Lessons Learned Upgrading Harvest to Ruby 1.9.3

We're thrilled to announce that all of our apps have been upgraded from REE to Ruby 1.9.3. We wanted to share some notes about what went well, what went wrong, and what we learned in the process.

The Payoff

NewRelic graph of average response time

The vertical red line marks our update to Ruby 1.9.3, and as you can see, the results were impressive (lower is better). Our average response time dropped from around 150ms per request to around 50ms.

Our server loads took similar dips:

Non-core cluster load average
Main Harvest app load average

The first graph shows our server load for our non-core apps (Co-op, our forum, and some internal tools), and the second graph shows the load for our marketing site and for the main Harvest application.

Importantly, during the period shown in all the graphs above, our traffic volume was increasing steadily (and sometimes dramatically), and yet our resource usage still decreased with the upgrade.

Aside from those server-side gains, we enjoyed some local benefits as well. I did some benchmarking of our test suite for Harvest before and after the upgrade, and our suite runs 12.67% faster on Ruby 1.9.3, which saves us a few minutes on every run.

Procedure and Timelines

We upgraded six different apps from REE to 1.9.3. Our goal was to start with the smaller apps first and slowly learn our way up to the main Harvest application. Ultimately, I think this worked out well — we discovered a lot of the smaller gotchas earlier in the process on our simpler apps, and weren't sent on quite as many wild goose chases in the more complicated ones.

As we moved to each new app, our general procedure stayed roughly the same:

  1. Use RVM to jump to 1.9.3 and a clean gemset, fix any errors from bundle install (usually just by upgrading gem versions), then attempt to run our test suite. (Note: We've since transitioned to rbenv and rbenv-gemset due to some compatibility issues with pow, but the process is the same.)

  2. Usually, our suite would crash, and we'd have to upgrade a handful of gems and plugins.

  3. Once our tests were running, we'd step through each error and failure and work our way towards a clean run.

  4. Once we had a clean run of our test suite, we did some local click testing (hitting what we thought would be pain points), and then checked that app off the list and moved on, saving formal QA for after all apps were upgraded.

Running through these steps for each app was actually a surprisingly quick process. This blog and our forum each took less than one day, our marketing site took less than two days, and Co-op and Harvest each took just a week, although that was with the full-time focus of two developers (myself and prime hacker Barry Hess).

We were able to upgrade Ruby on all of our application servers without any downtime by using Chef and the nginx Healthcheck module (special hat-tip to the dev-ops wizardry of our very own Warwick Poole).

Changes and Pain Points

On the whole, the upgrade was a smooth affair, but we still needed to make a fair number of updates and ran into a couple of problems along the way.

Method Changes

The majority of our test failures and errors were caused by assorted syntax updates and deprecations in 1.9.3. Most of these were pretty minor, but were often hard to hunt down (like the changes to to_s for many-but-not-all classes).

  • Array#to_s performed a join in 1.8. In 1.9, it became an alias for inspect. A similar change occurred with Hash#to_s.

  • String no longer includes Enumerable, so there's no more String#each. It's been replaced by #each_byte, #each_char, #each_codepoint, and #each_line, depending on what you're after.

  • String#starts_with? and String#ends_with? became String#start_with? and String#end_with?, which was a nice and easy find-and-replace fix.

  • No more colons with when in case statements.

  • Date.parse no longer plays nicely with MM/DD/YYYY-style dates:

    1.8.7 > Date.parse("12/14/1986")
    => Sun, 14 Dec 1986
    1.9.3 > Date.parse("12/14/1986")
    ArgumentError: invalid date
    
  • Rational#to_s no longer reduces fractions-over-1 to just their integer representation:

    1.8.7 > Rational(2,1).to_s
    => "2"
    1.9.3 > Rational(2,1).to_s
    => "2/1"
    

This caused us to briefly inform customers that they had invoices that were "38/1 days late".

This list is not exhaustive, and we found many more in our pre-upgrade research that didn't hit us (Hash#index was replaced with Hash#key, Hash#select now returns a Hash instead of an Array, Object#type became Object#class, etc.), so your mileage may vary.
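To make a few of the changes above concrete, here is how the 1.9 behavior looks (the old 1.8 results are noted in comments):

```ruby
# Ruby 1.9 behavior for a few of the changes listed above.

[1, 2, 3].to_s  # 1.9: "[1, 2, 3]" -- in 1.8 this was "123"
[1, 2, 3].join  # join now gives the old 1.8-style concatenation: "123"

"harvest".respond_to?(:each)  # false -- String is no longer Enumerable
"harvest".each_char.to_a      # ["h", "a", "r", "v", "e", "s", "t"]

"harvest".start_with?("har")  # true (note: no trailing "s" on the method name)
```

A grep for to_s on arrays and hashes, String#each, and starts_with?/ends_with? is a quick way to estimate how much of this cleanup a codebase will need.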

CSV Changes

FasterCSV has been brought into the 1.9 standard library and is now just CSV.

Most of the fixes to handle this were pretty easy: simply update the class name from FasterCSV to CSV, then make some straightforward updates to the new CSV reading and writing methods. That knocked out almost all of our issues.
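For instance, reading a CSV with the 1.9 standard library looks like this (the column names and data here are invented for illustration):

```ruby
require 'csv'

# In 1.9, FasterCSV's API lives in the standard library as CSV.
# The header and values below are made up for illustration.
data = "client,hours\nAcme,8\n"

rows = CSV.parse(data, :headers => true)
rows.first["client"]  # => "Acme"
rows.first["hours"]   # => "8" (values come back as strings)
```

The parsing and writing options (`:headers`, `:col_sep`, and friends) carry over from FasterCSV, which is why most of the migration was a class-name rename.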

Two edge cases took up the majority of the time spent on CSV fixes: properly handling imported CSVs with BOMs and with carriage returns. Let's ignore those particular fixes, though, and focus on why this is another great example of an exhaustive test suite being a very good thing.

We probably never would have thought to check for these edge cases in our QA, but luckily, they're covered by tests in our suite. If those tests weren't there, we probably wouldn't have known those problems existed until a customer unsuccessfully tried uploading an Excel-generated CSV, leading to a support ticket and wasted developer time fixing a bug we had seemingly already fixed once before.

So write those tests.

Encoding

Encoding ended up being our biggest real world problem, because it didn't bite us until we went to production with Co-op. We weren't the only ones to experience this pain.

If you're interested in the ins-and-outs of encoding in 1.9, check out James Edward Gray II's 11 Part Series on Character Encoding in 1.9.

Most of our problems in development were relatively minor and fixed with magic comments.
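A magic comment is just a specially formatted comment at the top of a source file telling the 1.9 parser what encoding the file's literals are in:

```ruby
# encoding: utf-8

# With the magic comment above, Ruby 1.9 parses non-ASCII string
# literals in this file as UTF-8 instead of raising an
# "invalid multibyte char" error. (Ruby 2.0+ defaults source files
# to UTF-8, so these comments are rarely needed anymore.)
CURRENCY = "€"
CURRENCY.encoding  # => #<Encoding:UTF-8>
```

The comment must appear on the first line of the file (or immediately after a shebang line) for the parser to honor it.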

Our big problems came up in production, with data that had been stored in one encoding but was now coming out assumed to be UTF-8.

  1. First, we had problems with encodings in the shared cache between Harvest and Co-op. Data was coming out of the shared cache in Co-op with an ASCII-8BIT encoding, which the upgraded Co-op, with its own strings all in UTF-8, wasn't expecting and couldn't handle well. Monkeypatching memcache-client allowed us to force-encode all strings coming out of the cache to UTF-8. Notably, this was only an issue while Co-op's and Harvest's Ruby versions were mismatched — once we upgraded Harvest, we removed the patch and everything worked perfectly between the two apps like before.

  2. Our next big encoding problem came from serialized YAML, just like Tobi said it would. Like with the shared cache with Co-op, when the data was serialized after the upgrade, there was no problem getting it back out, so this only affected data serialized before the upgrade that was accessed afterward. We considered a few fixes here — migrate the whole DB to fix the encoding, fix the encoding as the records were individually accessed, monkeypatch ActiveRecord — and ended up going with that last one.

    class ActiveRecord::Base
      def unserialize_attribute_with_utf8(attr_name)
        # Recursively walk nested Hashes and Arrays, yielding every leaf value
        traverse = lambda do |object, block|
          if object.kind_of?(Hash)
            object.each_value { |o| traverse.call(o, block) }
          elsif object.kind_of?(Array)
            object.each { |o| traverse.call(o, block) }
          else
            block.call(object)
          end
          object
        end
    
        # Re-tag any string leaf as UTF-8 (force_encoding changes no bytes)
        force_encoding = lambda do |o|
          o.force_encoding(Encoding::UTF_8) if o.respond_to?(:force_encoding)
        end
    
        # Deserialize as usual, then fix encodings throughout the result
        value = unserialize_attribute_without_utf8(attr_name)
        traverse.call(value, force_encoding)
      end
      alias_method_chain :unserialize_attribute, :utf8
    end
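
The key call in both fixes is force_encoding, which re-tags a string's encoding label without converting any bytes:

```ruby
# force_encoding re-labels the encoding; no bytes are converted.
# (Contrast with String#encode, which transcodes the actual bytes.)
bytes = "héllo".dup.force_encoding(Encoding::ASCII_8BIT)
bytes.encoding  # => #<Encoding:ASCII-8BIT>

bytes.force_encoding(Encoding::UTF_8)
bytes.encoding  # => #<Encoding:UTF-8>
bytes == "héllo"  # => true -- same bytes, now interpreted as UTF-8
```

This is exactly why it was the right tool here: the bytes in the cache and in the serialized YAML columns were already valid UTF-8, they were just mislabeled after the upgrade.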
    

Worth It?

We think so. There has been plenty written about the theoretical speed increases you'll see with Ruby 1.9.3, but we're glad to share that we've seen significant wins in our complex real-world applications and in our local environments with just a couple of weeks of development.

Discuss on Hacker News