Logical Reality Design: Web Design and Software Development

Cherry picking features between related git repos

November 6, 2014

We maintain some internal projects as related repositories - it's less than ideal, but for certain environments it's the best solution available.

What we often want to do is take a feature developed in one project and jump it over, complete with history, to another project.  Here's how that works:

First of all, we're going to be talking about two repos here: the giver and the receiver. You need to have both checked out locally.


In the receiver project:

git remote add <giver> <giver-repo-url>
git fetch <giver>
git diff --diff-filter=AD --name-only <giver>/master

That outputs all the files that have been added or deleted between the two projects. You can change the filter to get different sets of files, and in the end that list is a guide. You want to choose all the paths that are related to the feature you want.

In the giver:

git --no-pager log --cherry --reverse --pretty=oneline -- <files> | awk '{ print $2 }' > /tmp/commits.txt

The "<files>" are the paths you chose in the previous step. This writes out a list of commit hashes where those files were changed, in the order those changes happened.

Back in the receiver:

git cherry-pick $(cat /tmp/commits.txt)

This starts the process. In a mode similar to a merge conflict resolution, git will run over each of the commits named and apply it to the receiver repo, as if those changes had been made there. Sometimes, there will be conflicts, just as in a merge. When that happens:

git mergetool
git cherry-pick --continue

Eventually, the cherry-pick will complete, and you'll have laid the feature from the giver into the receiver, as if it had been written there in the first place.

Github to Gitlab

August 18, 2014

We've been in the process of adopting a self-hosted GitLab instance recently, for reasons that are various and complicated enough to merit their own post.

In the meantime, I wanted to quickly note that now that GitLab is up and running, it's pretty easy to clone all the actual git stuff from Github. Voila:

git clone --mirror <github repo url>
cd projectname.git
gitlab create_project projectname
  # gitlab is a nifty gem that provides CLI for the whole GitLab shebang
git remote add gitlab <project url>
git push --mirror gitlab

That's it. All the branches and tags get cloned over.

What's missing are Github issues and pull-requests - those need to be replicated on Gitlab by hand, until we find a way to automate it.

Gentoo + Nginx + Passenger

July 10, 2014

From the "niche interest" desk:

Nginx builds all of its components into one executable (they argue that it's necessary for their level of speed and security). Gentoo lets you configure software all kinds of ways, so it's pretty easy to configure exactly which of the myriad modules you want to install. But like every Linux distro, they've had an ongoing tete-a-tete with Rubygems, so Passenger isn't supported out of the box in the Gentoo set of options. Passenger's idea of installing itself into Nginx is to recompile the whole thing - without respecting the modules that were already installed.

It's a headache, and I've given up trying to assign blame - I think probably everyone solved the (very large) problems in front of them, and it happens that their solutions don't interact well. (Gentoo Rubyists (and yes, I'm sure the plural there is optimistic): I'm liking chruby + direnv + bundler as an adjunct to eselect.)

But, there's a couple of hidden outs that make it entirely feasible to get a respectable nginx/passenger deploy going on Gentoo, which I'll record here for my own purposes, at least.

gem install passenger
mkdir /etc/portage/env
echo "NGINX_ADD_MODULES='$(passenger-config --nginx-addon-dir)'" > /etc/portage/env/nginx-passenger
echo "www-servers/nginx nginx-passenger" >> /etc/portage/package.env
emerge nginx

In /etc/nginx/nginx.conf, add:

passenger_root /usr/local/lib64/ruby/gems/2.0.0/gems/passenger-4.0.29/;
passenger_ruby /usr/bin/ruby20;

Three Corner Rsync

April 1, 2014

This is such a good trick, I wanted to share.

Here's the situation: you have a ton of files on Old Server, and you want to get them to New Server. Both of them have nice fat pipes - they may even be on the same 10G ethernet switch.

You could do something like:

rsync -av old-server:/files/ /tmp/old-server/
rsync -av /tmp/old-server/ new-server:/files

But now you're pulling those files down to your local machine (which probably has the spare disk space) over your office pipe, which is maybe 10% the pipe your servers might share. Plus you're then pulling it over the office wi-fi, and that's down to 1% of the bandwidth. Plus everyone in your office is going to look around at the ops guy who's choking the pipe again.

Or you could go onto one of the servers and set up an SSH account for the other server and shuffle keys around and all that. But that's a big hassle, and you have to remember to get rid of the unnecessary account afterwards. Which if you're me, you'll forget to do.

But you already have an account on both machines with your public key in an authorized_keys file somewhere. In that case, start out with this:

ssh -O exit old-server
ssh -O check old-server

You want to see something like "Control socket ... No such file or directory". Otherwise, it means that you have an existing connection to old-server - the message we don't want says something like "Master running (pid=12345)". Usually that's just an SSH session you have open, and you can close it. If you're feeling lazy and irresponsible, you can kill the master SSH process and end the connection that way...

Now, we make sure that we have the keys for the new-server in our agent:

ssh-add ~/.ssh/new_server_rsa
ssh-add ~/.ssh/old_server_rsa

With the preliminaries out of the way, the real juice looks like this:

ssh -A old-server 'rsync -av /files/ new-server:/files/'

And now you've got a full-pipe transfer going from the old server to the new one, using the credentials stored in your local agent to authenticate you first from your console to the old-server, and then from the old-server to the new one. You'll get the usual rsync update output as the transfer goes, and at the end everything will be transferred over.

Frankly, I think there ought to be a merit badge or something. It's that cool.

Thoughts on Keybase

March 18, 2014

Over the last couple of weeks, a startup called Keybase has been making the rounds, promising a much simpler take on PGP. A more humane interface on GnuPG, visual design by a renowned artist, even a web interface for your crypto. Fantastic, right?

Yeah, I can't get on board. And I really hope you won't either.

Admittedly, OpenPGP needs an interface that doesn't take a week of research to understand. Starting with "What's OpenPGP? Is that like the PGP 6.0? And GPG is different, right?" And then using any of the existing tools requires that you really understand the whole protocol, maybe back to the basic maths underneath. So where probably everyone in a mature digital society should be using cryptographic tools, the reality is that only the particularly paranoid (that is, international journalists and cypherpunks) really do.

So there's definitely an opportunity for everyone to benefit. And Keybase is capitalizing on that opportunity. If you don't know what I'm talking about, the site is here.

Here are the problems I see with Keybase:

You Don't Roll Your Own - And You Don't Have To

First, they're reinventing well known parts of OpenPGP. A public directory of publicly auditable keys? Allow me to introduce PKS (an example server) - a decentralized system for distributing public keys. Associating your key with a public identity? For email addresses (oh, wait, those are also globally unique user ids...), we match up user ids on the key. For other accounts, there's a system called "annotations." This is all built into PGP, and people, paranoid, technical people have been using it for decades.

And that's not a "well, they got there first" sour grapes complaint. There's a principle in computer security that you don't invent your own crypto. Which is exactly what Keybase is doing. They're using GnuPG under the hood, certainly, but they're distributing keys and associating them with public identities over a brand new, unreviewed protocol. That's troubling.

So it should be possible to write an all OpenPGP implementation that does everything that Keybase does on the command line ... without the central Keybase service.

That's Not Authentication

But the next problem is: should we do that? Keybase's pitch is that rather than use a web of trust (which is a tricky concept) let's pair up our public keys with public identities so that we can see that @yourfriendontwitter is also the owner of this particular public key. That's a cool idea - now we can skip that whole perl-mediated key-signing geekfest and use Web 2.0 tech to identify one another, right?

The limits on that identification are two-fold: First, the certainty with which we know that e.g. tweets are sent by their purported senders is limited by the security of Twitter. (I think there's heavy irony to how much the Keybase "Verifying myself" toots look like "you've been hacked" spam.) Second, all we are actually learning about the holder of the key is that they can also tweet as @yourfriendontwitter - not that they're actually your friend on Twitter, if you see what I mean.

I think there's a reason that the Keybase example pages use as examples messages about meeting up for drinks - you probably shouldn't trust a key you identified via Twitter about matters weightier than you'd discuss on Twitter.

Ultimately, I don't think you can bootstrap a secure system on the foundation of insecure systems. And the Keybase foundations right now are Twitter, Github, and your personal site.

By contrast, the goal with PGP is that you can extend the human trust of meeting someone face-to-face to communications online. When someone whose key you've actually signed - as a result of them presenting it to you in person - later signs an email with it, you can trust that email as if they'd said those words to you in person. Otherwise, it could be anyone.

If They're Not Doing This Right...

Beyond that comes my biggest concern. The Keybase founders (and I think it's pretty irrelevant that they're coming from OkCupid) are asking you to centralize your public key into their closed source service rather than use the existing infrastructure. Even better, they're suggesting that you do your cryptography in Javascript in the browser. And they admit that people should be suspicious of that, without addressing that suspicion. They even suggest that you should upload your private key to their servers. (Just for the record, you should never put your private key into anyone else's possession.) They claim that your key will be secure - because it will be secured with their own special blend of encryptions. But remember what we said about rolling your own crypto? There's no way to avoid the fact that triplesec is exactly that.

As an analogy: if you went to a new bank, and they started talking about "vig" instead of APR, or the locksmith who came to your house was "Bondo-ed," or your doctor told you medical science has no way of knowing where the heart is, you'd start to wonder if you should do business with them, right? That's how the Keybase pitch sounds to me.

The best light I can see this in is that they're well meaning, but simply unqualified to design and run a cryptographic system. I imagine two undergrads starting a bank in their dorm room. But I worry that this is a social engineering attack on a grand scale - that they know exactly what they're doing: using a slick interface to draw in an initial group of tastemakers, until they have the Facebook of online security (with all the connotations of "having to be on Facebook"). Meanwhile, they're collecting key pairs and subverting the meaning of authenticating a public key, to the point where they can have severe impact on secure communications down the line.

Even if the founders are well meaning, there's always the possibility that they make an exit, and the next owner of Keybase is evil. Imagine your personal communications being read by your least favorite corporate or governmental entity. Wasn't that exactly why we were using PGP in the first place?

Where do we go from here? I think Keybase does have a point: The user experience of public key cryptography is abominable.

It's a tricky subject though, because it's really difficult for any one user to verify that a slick GUI is doing what it says it's doing and nothing more (like uploading your key somewhere...). But the command line has gotten a lot less scary to a lot more people, thanks to a raft of powerful web tools. In a related move, command line tools have gotten much more friendly than they were back when gpg was written.

So I'm proposing a set of curated shell scripts that wrap gpg to do a small set of common tasks. Ideally, they should cover the most useful subset of gpg operations, and be simple enough that anyone interested should be able to review them and satisfy themselves of their innocuity.

I need help though - I only use one platform for my computing, and I'd want to be sure that the scripts ran as close to everywhere as possible. Also, I have a bad habit of using words like "innocuity," so I could definitely use help with the documentation.

Update I've actually started the set of shell scripts. There's a public repository here for the interested. I would love to see contributions. (N.b. the code is public domain.)

An Authorization Scheme

January 6, 2014

I've been mulling an idea for web authorization that I'd like to discuss a little bit in public before developing it in more detail.

Here's the problem I'm trying to address: developing user interfaces for authorization-controlled applications is a pain in the neck. You wind up having to lace "if_authorized?(something)" around links and forms, and make sure that the conditions you're applying align with what the affordances will actually do. Even looking past the labor that requires, it means that there's another axis of variation in your pages, and describing that variation to a cache in between may be difficult. (You could add something like an Authorized-By-Role header and list it in Vary ... but does every cache respect that header properly?)

In a world rushing towards the Single Page Application, I've started to think along these lines:

Let's consider ReSTful actions as the objects of authorization. That is: if you can PUT to /user/1, you can PUT anything to /user/1 - we're not going to consider individual parameters. We're also going to control for a specific set of HTTP actions, and anything not considered should be rejected by the server.

For every pair of {action, uri}, let's list a set of tokens that are authorized to perform that action. These tokens are arbitrary and opaque - they have no characteristics beyond identity - if I tell you "xyzzy" once, and "xyzzy" another time, I mean the same thing, but that's all. I don't mean you can teleport or something.

Every authenticated user also has a list of tokens - effectively permissions that they are granted. When a user attempts an action, we can compare the list of tokens they possess to the list of tokens that grant the action. If there's an intersection, the action proceeds. If there's not, well, here's your 401 page.
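To make the comparison concrete, here's a minimal sketch of the server-side check in Ruby. The Authorizer class and permitted? method are hypothetical names, not part of any framework - the scheme only needs a lookup from an {action, uri} pair to its granting tokens, plus a set intersection:

```ruby
require 'set'

# Hypothetical sketch of the token-intersection check described above.
# 'grants' maps [http_action, uri] pairs to the Set of tokens that allow them.
class Authorizer
  def initialize(grants)
    @grants = grants
  end

  # The action proceeds iff the user's tokens intersect the granting set;
  # an unknown {action, uri} pair grants nothing (reject by default).
  def permitted?(user_tokens, action, uri)
    granting = @grants.fetch([action, uri], Set.new)
    !(granting & Set.new(user_tokens)).empty?
  end
end

auth = Authorizer.new({ [:put, "/user/1"] => Set.new(%w[xyzzy fnord]) })
auth.permitted?(%w[xyzzy tacos], :put, "/user/1")  # => true
auth.permitted?(%w[tacos],       :put, "/user/1")  # => false
```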

Now, all these tokens exist and are managed on the server. A user can inspect the tokens they have access to (maybe /user/:id/tokens, or as the body of the 401...) and the affordances for any action can have its tokens associated with it (e.g. data-access-token="xyzzy,fnord,tacos"). But clients never transmit to the server the assertion that they have a token, and the server certainly doesn't respect such an assertion. We look up your tokens based on your authentication every time you make a request that requires authorization.

Now your SPA JavaScript can do the comparisons and come to a conclusion about whether you'll actually be allowed to perform an action, and if not do something with the UI. For instance, it could completely remove a form, grey out a link, or whatever.

On the back end, there'd need to be a means to manage the tokens effectively. I have a vague conception of a rules processor that can turn rules like "user X has access to /user/X, and user admin has access to all /user/:x" into tokens. There's also the ever-present issue of un-authenticated users, but I think that's a special case - all unauthenticated users have the same (very limited, possibly empty) set of tokens.
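For what it's worth, the rule expansion could be as dumb as stamping out one grant per record. This is a purely hypothetical sketch (the expand_rules helper and token strings are made up), just to suggest the shape:

```ruby
# Hypothetical sketch: expand per-user rules into {action, uri} => tokens.
# Each user gets a personal token for their own record, and a shared
# "admin" token grants every user record.
def expand_rules(user_ids)
  grants = Hash.new { |h, k| h[k] = [] }
  user_ids.each do |id|
    grants[[:put, "/user/#{id}"]] << "user-#{id}-self"
    grants[[:put, "/user/#{id}"]] << "admin"
  end
  grants
end

expand_rules([1, 2])[[:put, "/user/1"]]  # => ["user-1-self", "admin"]
```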

I recognize that this scheme exposes more URIs than eliding affordances completely would. I think that's a security-through-obscurity concern, though: it reveals the existence of locked doors, but they're only revealed because we've locked them down.

I'm dimly aware that this resembles other authorization approaches - it's possible that there's nothing novel here at all. If so, fantastic! I'd love to know that there's a well known best practice and redirect efforts along those lines.

I'm concerned that I'm overlooking something in terms of the security of the approach though, which is why I'm talking about it in public.

Any thoughts? Resources I should review?


My presentation on TDD hits a minor milestone on YouTube.

September 3, 2013

This presentation I gave to the L.A. Ruby Meetup a few months ago has crossed 2000 views on YouTube! Hardly a viral cat video, but not bad for a programming topic. In it, I discuss test-driven development and how writing tests properly will not just improve your code, but get it written faster. I also discuss the analogy between test-first coding style and the preparation and planning that other professionals do in their business: just as you wouldn't conduct surgery without creating a written plan first, you shouldn't write code without a plan laid out, and your tests are that plan. Take a look!

Test-Driven Development: Write better code in less time by Evan Dorn

Thoughts on the Github Hack

March 5, 2012

Over the weekend, a young coder demonstrated a security vulnerability in Rails - one with wide-reaching implications. An early demonstration is at:

Our friend went on to make several updates to github as he experimented with and demonstrated the vulnerabilities, got his account suspended and reinstated, and set off a firestorm of criticism every which way.

I was ready to put it all into the "someone is wrong" pile, until I ran across a pull request on the Rails core. That's some mid-90's Microsoft-style arrogance right there, and on the off chance that anyone is having trouble seeing it, I figured I might add my breath to the maelstrom.

First of all: it was not right to hack github.  It's not okay to ignore the intent of security, no matter how weak the enforcement.  Much better to have pointed out the vulnerability to github, although for sure the fame wouldn't have been as bright (which is why I'm pointedly not referring to him by name in this post.)  Given github's track record, I think there's a pretty good chance they would have come clean, admitted the fault, and credited its reporter.  But that's complete supposition.

That said, I don't think it's legitimate to consider Github a blameless victim.  The flaw in Rails that was exploited is well known, and well reported, and easy, if irritating, to fix.

The technical aside here is pretty simple.  In a file in config/initializers add:

ActiveRecord::Base.__send__(:attr_accessible, nil)

Then you need to white-list mass-assignable attributes in your models:

attr_accessible :name, :body, :whatever

And keep an eye on your logs for

WARNING: Can't mass-assign protected attributes: :blah

Which is a sign that you might need to add an entry for :blah into the respective model.

All pretty simple.  There are a couple of other notes, like "don't allow reference fields (e.g. :person_id) to be mass assigned" but that's the meat of it.  Put the initializer in your generator (that's much harder) and you never have to think about it again.

So, Github didn't put a simple, well-reported fix into their code.  Is that so bad?  I think so.  Github not only invited the developer community to trust them with the products of their labor - pretty much ousting SourceForge from that position in the process, and firming up a development environment choice for open source work (i.e. "use git for version control") - it also invites developers to trust them with secrets.  Specifically, the secret contents of client repositories.  Heck, they get you to pay for the privilege.  So, in short, Github is taking money to keep secrets.  And by not covering a known security hole in a default Rails deploy, they were failing to uphold the trust of their paying customers.

I think Github was letting us down pretty badly.  I think an overzealous coder did a bad thing to bring that to light, but you can't argue that Github should be surprised or is blameless.  Two bad things, no one is blameless.

But the last straw for me was reading the Rails core team's replies to a pull request to set the default for whitelisting attributes in Rails 4.0 to 'true.'  (After previous discussion concluded that making the change for 3.2 would be "too disruptive.")  One complaint was that having to do attr_accessible for every model was "a lot of paperwork" - the final comment is @dhh's "I don't like this. -1"  Which is to say: we would rather put unsanitized data into the database than do the bare minimum of manual review.  And that's pretty lame.


RSpec 2.0 and before/after hooks

June 7, 2011

As of RSpec 2, the configuration interface for RSpec changed dramatically.  What used to look like:

Spec::Runner.configure do |config|
config.prepend_before(:each, :type => :controller) do

Now looks more like:

RSpec::configure do |config|
  config.before(:each, :line => 153) do

One significant and interesting change is the way that before hooks are processed.  Specifically, the #before, #after, and #around methods are now part of the Hooks module, which is included in both ExampleGroup and Configuration, so you call config.before in exactly the same way as you do within a describe block.  Normally, you pass :each or :all, which sets the scope under which the hook will be called, but Hooks inspects the arguments for filtering metadata regardless of where you call it - I don't know that you'd want to filter within an ExampleGroup, but you could...

Unfortunately, as cool as the metadata filtering capabilities are, they aren't, as far as I can tell, very well documented.  The process of extracting the metadata lives in its own :nodoc: limbo, and the attachment of metadata to a particular example is scattered throughout the RSpec code.  This, then, is an attempt to pick that apart.

Extracting Filters

When you call Hooks#before, for example (#after and #around work fundamentally the same way), the args are examined and two things are extracted:

  • A scope, which is the :each, :all, or :suite specification.
  • A metadata filter hash.  Normally, you call #before(:each, {:hash => [:of, :metadata]}), but you can instead do something like before(:all, :symbol), which will result in a metadata filter like {:symbol => true}.

Again, probably if you need to add metadata inside of a describe block, you are Doing Something Wrong, but maybe there's a good reason.  The extreme (excessive?) flexibility of RSpec metadata and filtering does open up a lot of interesting possibilities.
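As a rough illustration, the argument handling can be re-implemented in a few lines. This is not RSpec's actual code - just the observable behavior described above, with a made-up method name:

```ruby
# Sketch of the argument handling: a leading :each/:all/:suite symbol is the
# scope, a trailing Hash is the metadata filter, and any other bare symbol
# becomes a {:symbol => true} filter entry. Defaults to :each scope.
SCOPES = [:each, :all, :suite]

def extract_scope_and_filters(*args)
  scope = SCOPES.include?(args.first) ? args.shift : :each
  filters = args.last.is_a?(Hash) ? args.pop : {}
  args.each { |sym| filters[sym] = true }
  [scope, filters]
end

extract_scope_and_filters(:all, :slow)
# => [:all, {:slow=>true}]
extract_scope_and_filters(:each, :type => :controller)
# => [:each, {:type=>:controller}]
```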

Filter Matching

The metadata filter is used to decide if the hook should be run for a particular example block that it might apply to.  As such, it's a remarkably powerful filtering system, although there are a lot of assumptions about its format that you need to bear in mind.

The actual mechanics of the metadata filtering happen in RSpec::Core::Metadata#apply? and #apply_condition - there's a long chain of delegation and extra-meta-programming that leads there.

The upshot is that your metadata filter will be compared to the metadata on the example key/value pair by pair, like this:

  • A regular expression in the filter will match against the appropriate value for the example.
  • If you pass :line_number => 17, RSpec will check to see if the example includes line 17, much like running rspec filename_spec.rb:17
  • Any other Fixnum will be compared with == to the value in the metadata
  • Anything else gets compared with == to the value in the metadata, after both values have been converted to a string.
  • A proc like {|value| ... } will get the value of the key, and can return true for a match.
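Condensed into code, the comparison rules look roughly like this. This is not RSpec's real matching code, just a simplified approximation; I've left out the :line_number special case, and modern Rubies say Integer where this era said Fixnum:

```ruby
# Approximate sketch of how one filter value is compared to the example's
# metadata value, per the rules listed above.
def filter_matches?(filter_value, example_value)
  case filter_value
  when Regexp
    !(example_value.to_s =~ filter_value).nil?
  when Proc
    filter_value.call(example_value) ? true : false
  when Integer
    example_value == filter_value            # Fixnum in 2011-era Ruby
  else
    filter_value.to_s == example_value.to_s  # string-compare everything else
  end
end

filter_matches?(/long winded/, "A very long winded example")  # => true
filter_matches?(:controller, "controller")                    # => true
filter_matches?(17, 18)                                       # => false
```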

Filters can nest Hashes, which will be compared to nested Hashes in the metadata.  In other words, if you want to be able to match for metadata like

{...,  :example_group => {..., :full_description => "A very long winded example of the group", ...}, ...}

You can do something like:

before(:each, {:example_group => {:full_description => /long winded/}})

RSpec attaches some metadata to examples and groups, but you can also explicitly add metadata to groups and examples as they're defined.  One useful example of that is:

it "should do something useful, someday", :pending => "Not this day, though"

Which is much faster than using the pending method call inside the block, and can be applied to a describe block to make the whole thing pending - especially handy when you have a before block inside that is causing problems.

By the same token, the example given in RSpec 2 documentation and announcement posts has been something like:

it "should not be taking this looooong", :slow => true

Since metadata can also be used to filter examples, you could use this to pull out the examples that take forever from your all-the-time specs, and run them only before a push, for instance.
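A sketch of what that might look like in spec_helper.rb, assuming RSpec 2's filter_run_excluding configuration method:

```ruby
# Exclude examples tagged :slow => true from the default run;
# run them separately (before a push, say) by inverting the filter.
RSpec.configure do |config|
  config.filter_run_excluding :slow => true
end
```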

What Metadata Does RSpec Give Us?

Probably the best way to figure that out is the very pragmatic approach described in the next section.

A Useful Trick

Very useful for experimenting with metadata: the proc form of a filter has a special case - if the proc takes two arguments, the whole metadata hash will get passed into the proc, so you can inspect it at leisure.  The snippet looks like:

require 'pp'
before(:each, :bogus => proc{|val, all| pp all}) {}

From a Rails controller spec, the output looks something like:

:execution_result=>{:started_at=>Tue Jun 07 14:13:46 -0700 2011},
:full_description=>"UserSessionsController should be authorized",
:description=>"should be authorized",
  :file_path=> "spec/controllers/user_sessions_controller_spec.rb",
  :block=> #<Proc ...>,
  :caller=> [ ... the whole backtrace of the group ... ]
:caller=> [ ... the backtrace of the example ...]

One of the cool-but-problematic things about metadata in RSpec is that it gets added and updated all over the codebase, constantly over the lifecycle of an example run, and extensions (like rspec-rails) add their own fields and values, so it's very hard to have formal documentation for what you can match.  Also somewhat troubling is that none of these fields are an explicit part of the RSpec API, and so might change with very little notice.  It seems like the best way to manage working with the metadata is with the above pragmatic approach.

Extending form_for in Rails 3 with your own methods

April 25, 2011

At LRDesign, we have a bunch of internal tools to make laying out Rails views more consistent. I recently upgraded and improved some of ours for Rails 3, and published them as a gem. (The published / open source ones are available, if you're interested.) One of the handy techniques we figured out (poring through the Rails code) is how to correctly add a method to FormBuilder so that you can properly use it inside a form_for block.

An example method added to forms:

Since I nearly always want <input> and <label> tags at the same time, I created a labeled_input method that lets me say this (in HAML):

= form_for(@book) do |f|
    = f.labeled_input :title
    = f.labeled_input :author
    = f.labeled_input :price

to get:

<form action="/books/new">
  <div class="labeled_input">
    <label for="book_title">Title:</label>
    <input id="book_title" name="book[title]" type="text" />
  </div>
  <div class="labeled_input">
    <label for="book_author">Author:</label>
    <input id="book_author" name="book[author]" type="text" />
  </div>
  <div class="labeled_input">
    <label for="book_price">Price:</label>
    <input id="book_price" name="book[price]" type="text" />
  </div>
</form>

Combined with some default CSS code in our application template that aligns the <label>s and <input>s in columns, this saves us a couple of hours setting up clean-looking forms on every new project, while significantly shortening and prettifying our view templates. (Markup Haiku, just like HAML intended.)

Implementing the extension in Rails 3

The code that handles form_for in Rails 3 is rather dense and incomprehensible and takes a while to pore through. Here's the short version to understanding it so you can add your own methods to FormBuilder properly. Since we dug through it, hopefully this will save others some time. The only Rails file you care about for this purpose is actionpack-3.0.x/lib/action_view/helpers/form_helper.rb.

  • module ActionView::Helpers::FormHelper defines a bunch of helpers, like label, text_field, etc., that you use outside of a form_for. For example, text_field(@user, :title) calls this version of the helper.
  • class ActionView::Helpers::FormBuilder is what's used to define the helpers you run inside a form_for. It works automatically via metaprogramming ... when loaded, it finds each helper in FormHelper (except for a few) and defines a similarly named method in FormBuilder. form_for(@user) { |f| f.text_field(:title) } calls this version of the helper, which basically just calls the FormHelper version but passes the FormBuilder's @object_name as an additional first argument. In version 3.0.7, this metaprogramming happens on lines 1131-1141 of form_helper.rb.
  • As a result, if you were to write a new helper in ActionView::Helpers::FormHelper that uses the same argument structure as the pre-built ones, you'd automatically get both kinds of helper. However, if you're writing your own plugin or gem and injecting new helpers, this won't happen because by the time you inject your method FormBuilder will have already done its metaprogramming (it happens when the file is loaded).
  • The solution to this is that your gem needs to do the second half - defining the FormBuilder version of the helper - itself. I'll put an example below.
  • Most of the helper methods work by instantiating InstanceTag, a local one-size-fits-all class to emit a form tag, and then calling the appropriate method for the kind of tag that's wanted, like to_text_field_tag. It's very confusing why the Rails team decided to do one class for InstanceTag and a bunch of different methods, rather than make subclasses of InstanceTag for each kind of tag they want; an odd OOP decision, but that's what we've got.
  • InstanceTag itself has only one line: it includes InstanceTagMethods, a module that defines all the methods for the class, and which isn't used elsewhere.

So to implement a FormBuilder method yourself that you can use inside a form_for, the best way is to inject your method inside FormHelper, and then call that from a method you inject into FormBuilder. This gives you both versions of the method, in the same structure that Rails defines them. You could do this either in a helper file directly in your application, or in a gem (like we have) so you can reuse your form helpers in more than one project.

An example implementation.

Here's a simplified construction of the labeled_input method we use at LRD. This one just emits a label and a text field and wraps them in a <div>.

Start by defining the helper:

module LRD
  module FormHelper
    def labeled_input(object_name, method, options = {})
      input = text_field(object_name, method, options)
      label = label(object_name, method, options)
      content_tag(:div, (label + input), { :class => 'labeled_input' })
    end
  end
end

ActionView::Helpers::FormHelper.send(:include, LRD::FormHelper)

This will successfully define labeled_input that you can use outside of a form_for.

Now add the FormBuilder version:

To get it working inside of a form_for, you need to add a similar method to ActionView::Helpers::FormBuilder. As mentioned above, Rails does this automatically for its own FormHelper methods using a metaprogramming approach. But since that has already happened by the time your code can inject into FormHelper, you have to do it yourself. The solution we used is to make our own FormBuilder module that manually defines the labeled_input method in the same format that FormBuilder would have done, and then auto-include that into FormBuilder when our own FormHelper module gets included. Add this stuff to the above code block:

# Inside LRD::FormHelper, add this method:
def self.included(arg)
  ActionView::Helpers::FormBuilder.send(:include, LRD::FormBuilder)
end

module LRD::FormBuilder
  # ActionPack's metaprogramming would have done this for us, if
  # FormHelper#labeled_input had been defined at load.
  # Instead we define it ourselves here.
  def labeled_input(method, options = {})
    @template.labeled_input(@object_name, method, objectify_options(options))
  end
end

In practice, our labeled_input method is much more complex; it handles other input types, can add instructional comments/notes to the field, and can accept a block if you want to put something other than an <input> where the text field normally goes. This guide should get you started to writing your own form_for methods quickly, but if you want to see how to do more complex things, check out the full version.

Adding more input types or other tags.

If you wanted to add an entire different tag or input type (as opposed to combining different ones, the way labeled_input does), you would probably start by building a module that you inserted into InstanceTag or InstanceTagMethods. It should define a method like MyInstanceTagModule#to_some_funky_tag() in parallel with to_input_field_tag().

Testing it with RSpec 2

Another challenge we faced was writing specs for labeled_input's behavior. It's a bit of a trick because we needed to instantiate ActionView and render some templates to check the output, but rspec-rails is written with the assumption that you will be loading an entire rails project and all the rails gems. If you want to spec just a view helper, you need to load a bunch of rspec-rails's files one by one, and then manually include RSpec::Rails::ViewExampleGroup into RSpec's configuration. We may write a separate post on this process in the future, but in the meantime, take a look at lrd_view_tools' spec_helper file and example spec for labeled_input to get the sense of it.