Slow tests hitting HTTP APIs

Let's say you're integrating with some external API, and you have a bunch of acceptance specs that hit that API. (I like to use RSpec and Capybara for my acceptance tests, and more recently I've been turned on to site_prism for super-DRY, readable specs.)

Firstly, if you've been testing for any length of time, then you already know you don't want to actually hit an API during tests, because that's slow, brittle, and unwieldy.

So along come tools like WebMock and VCR.

WebMock to the rescue

Tools like these are great because they allow you to specify exactly what to expect back from the network, and they're super fast compared to actually making HTTP calls.

Now if you're lazy, like you should be as a good developer, then you don't want to be typing this out manually:

stub_request(:post, "www.example.com").
  with(:body => /^.*world$/, :headers => {"Content-Type" => /image\/.+/}).
  to_return(:body => "abc")

VCR to the rescue

So you can just make real requests to the API, then record the responses. This is also great, because you've saved yourself writing out a stub and you can just wrap your tests in a simple block, or flag a spec as using a specific cassette, and it's all handled automatically for you.
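A typical setup might look something like this. This is a sketch assuming the `vcr` gem hooked into WebMock; the cassette directory and cassette name are made-up examples:

```ruby
require "vcr"

VCR.configure do |c|
  c.cassette_library_dir = "spec/cassettes" # where recorded responses live
  c.hook_into :webmock
end

# First run hits the real API and records spec/cassettes/facebook_friends.yml;
# every subsequent run replays the recorded response without touching the network.
VCR.use_cassette("facebook_friends") do
  # ... make real requests here ...
end
```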

That's all well and good, but if the API you're using is at all well established or well encapsulated, then you're probably using a gem that wraps up that API. It may be an official gem or just another gem that's become popular for using that API.

Here's a great post by thoughtbot on all the above.


You may find that you can communicate with the API pretty well using something like Faraday, because it's lightweight and just a couple of POSTs and a couple of GETs.

And then you add a couple of bits and pieces for convenience: a retry here and there, some extra logging, notifying Airbrake or Honeybadger when something goes wrong. And you realise you're going to need a nice wrapper class.

Either way, we now have a wrapper class. If it's nicely abstracted enough you can draw an architectural line around it. Gemify it, then never touch it again. :)

You will touch it again, but at least you can isolate it.
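To make that concrete, here's a minimal sketch of such a wrapper class. The service name, endpoint URL, and error-tracker hook are all made up for illustration; the point is the shape: retries, logging, and an injectable HTTP dependency so the class itself is easy to test in isolation.

```ruby
require "net/http"
require "json"
require "logger"

# Sketch of a wrapper around a hypothetical stats API (not a real service).
class FunkyStatsClient
  MAX_RETRIES = 2

  def initialize(http: Net::HTTP, logger: Logger.new($stdout))
    @http   = http   # injectable, so tests can pass in a fake
    @logger = logger
  end

  # Fetch monthly stats, retrying transient failures and logging as we go.
  def monthly_stats
    attempts = 0
    begin
      attempts += 1
      response = @http.get_response(URI("https://stats.example.com/monthly.json"))
      JSON.parse(response.body, symbolize_names: true)
    rescue Timeout::Error, Errno::ECONNRESET => e
      @logger.warn("stats API failed (attempt #{attempts}): #{e.message}")
      retry if attempts <= MAX_RETRIES
      notify_error_tracker(e)
      raise
    end
  end

  private

  def notify_error_tracker(error)
    # placeholder: hook up Airbrake.notify / Honeybadger.notify here
  end
end
```

Because the HTTP dependency is injected, the retry and logging behaviour can be exercised with a fake that fails once and then succeeds, with no network involved.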

Abstraction to the rescue

So you have either a well-tested third-party wrapper library around your service, or you've done the right thing and wrapped up your Faraday/HTTParty calls in a well-named, well-factored, isolated class or two, and you have a clearly defined interface for communicating with a service. You may have named the gem after the service, and there comes a point where you realise: wouldn't it be great to be able to trust this code?

funky_stats = {january: 11290, february: 2934, march: 123313}

Wait, whose code am I testing?

So you may be thinking:

"I wrote tests that check this code already works, I have a good development process, and this is a reliable, stable, well-tested service. So why should I have to redo all my work and theirs? I wrote the wrapper gem for the API; don't I trust the tests there? I've already used VCR there, so I know it works. Don't I trust myself?"

Why do I care about the specific HTTP requests too? Why draw the line there?

And you know what? You're right to question it. You're right to wonder why you should need to acceptance test other people's APIs all the way through your application code, right up to the boundary of the Ruby network library.

"We stop at the border of the Ruby network library for what reason? Couldn't we be checking that individual packets get sent out? We could build a little test device that you plug two RJ45s into, and verify for certain that the bits leave the server and head off to the router. Actually, why start trusting at that point? To have complete confidence that your tests really work, why trust HTTP and the infrastructure in between?"
"We really need to ask our API provider to allow access to their servers so we can verify at their end too; they must receive the same packets we sent off."

And we don't trust their servers and application code either, so we're really going to need to audit that too.

"And verify internally, within the CPU registers, that they actually receive the same bits we intended for them to receive, and that no bugs have crept in along the way. And who do they depend on? We've got a big project on our hands, and this simple 8-week web-app project has turned into a £250m project."

Circle of trust

Of course this is all crazy. You have to start trusting someone somewhere, and it might feel like we just have to draw arbitrary lines.

If you're writing a Rails app, then it's really easy to understand the point at which you should. Just. Stop testing and start trusting.

Imagine this stack. We're getting Facebook friends using the Koala gem:

#------ Somewhere round here is where WebMock chooses to stub it out
ruby/1.9.1/net/http.rb:762:in `initialize'
ruby/1.9.1/net/http.rb:762:in `open'
ruby/1.9.1/net/http.rb:762:in `block in connect'
ruby/1.9.1/timeout.rb:54:in `timeout'
ruby/1.9.1/timeout.rb:99:in `timeout'
ruby/1.9.1/net/http.rb:762:in `connect'
ruby/1.9.1/net/http.rb:755:in `do_start'
ruby/1.9.1/net/http.rb:744:in `start'
ruby/1.9.1/net/http.rb:1284:in `request'
faraday (0.8.8) lib/faraday/adapter/net_http.rb:75:in `perform_request'
faraday (0.8.8) lib/faraday/adapter/net_http.rb:38:in `call'
faraday (0.8.8) lib/faraday/request/url_encoded.rb:14:in `call'
faraday (0.8.8) lib/faraday/connection.rb:254:in `run_request'
faraday (0.8.8) lib/faraday/connection.rb:66:in `initialize'
faraday (0.8.8) lib/faraday.rb:11:in `new'
#----- maybe you could stub out here - at the edge of your HTTP wrapper?
koala (1.6.0) lib/koala/http_service.rb:76:in `make_request'
koala (1.6.0) lib/koala.rb:51:in `make_request'
koala (1.6.0) lib/koala/api.rb:51:in `api'
koala (1.6.0) lib/koala/api/graph_api.rb:468:in `graph_call'
koala (1.6.0) lib/koala/api/graph_api.rb:58:in `get_object'
#----- here is the seam
#----- between your app code and the external API
app/models/friend.rb:84:in `find_all'
app/interactors/find_friends.rb:36:in `perform'
app/controllers/friends.rb:36:in `index'

Which part of the app does it make sense for you to test? At the boundary of the gem and the app. I.e. in this case:

client = double("koala client",
  get_object: {
    heres:       "some object",
    we_expected: "from facebook"
  }
)
# then swap the double in wherever the app builds its Koala client
allow(Koala::Facebook::API).to receive(:new).and_return(client)

Stubbing out common external dependencies in RSpec

What follows is a rough idea of a pattern you can follow to stub out your external dependencies in RSpec.

Note: this also works as a general pattern for stubbing out shared setup between tests. E.g. you could stub out OAuth in this way too. Anything that you need to do in lots of tests, but don't actually need to test every time.

It's just DRY enough, yet flexible enough for introducing readable tests, and it emphasises only the differences between your tests. I.e. you're not reading through a bunch of boilerplate just to find out what a test is about.

No more playing spot-the-difference between two JSON or, heaven forbid, XML files.


RSpec.configure do |config|
  config.extend StubOutFacebookApi
end


module StubOutFacebookApi
  def stub_out_facebook_api!
    include MethodsAvailableToSpecs

    before do
      setup_facebook_user!
    end
  end

  module MethodsAvailableToSpecs
    def facebook_default_attributes
      {id:     "1238190321",
       name:   "Carol",
       locale: "en_GB"}
    end

    # override via let(:facebook_attributes) in a spec to customise the user
    def facebook_attributes
      facebook_default_attributes
    end

    def setup_facebook_user!
      client = double('koala facebook api', get_object: facebook_attributes)
      allow(Koala::Facebook::API).to receive(:new).and_return(client)
    end
  end
end


describe "Doing something with facebook" do
  stub_out_facebook_api!

  it "some default happy path" do
    # the Koala::Facebook::API is now stubbed
  end

  context "as Geoff" do
    let(:facebook_attributes) do
      facebook_default_attributes.merge(name: "Geoff")
    end

    it "some specific test for Geoff"
  end
end


When not to stub

Now if you want slower tests, but more confidence that the whole app works, then you can still make the decision to:

  • use WebMock
  • use VCR
  • actually hit the web

If the app is large enough, then you could have a bunch of acceptance specs with stubbing at the API level, one or two full system specs that go off and hit test servers, and finally monitoring-style jobs that use an external service, run against prod/staging servers, and assert that the actual systems you have in place all hold together.
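One way to wire up that split in RSpec is with metadata tags; a sketch, where the `:external` tag name and `RUN_EXTERNAL` environment variable are my own made-up conventions:

```ruby
RSpec.configure do |config|
  # Fast suite by default: skip anything tagged as hitting real services.
  config.filter_run_excluding :external unless ENV["RUN_EXTERNAL"]
end

# Runs only when RUN_EXTERNAL is set, e.g. in a nightly build:
describe "full-stack friend finding", :external do
  it "fetches real friends from the staging API"
end
```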

The last point is not the same as testing, but it is often a lack of monitoring that makes people insist on very comprehensive, far-reaching test suites.

I.e. lack of trust.

It's all about balance. I've found that with test suites that are:

  • very comprehensive
  • albeit slow
  • difficult to understand

it can be easier to introduce bugs, or not catch them in the first place.

Even if they theoretically should give you more confidence as they are hitting more of your codebase with each test.

Conversely, test suites which are:

  • very loose
  • fast
  • minimal code
  • very easy to reason about
  • easy to change

put you in a position where you can spot bugs more easily, prevent them from happening, and fix them more quickly and with more confidence when you do let them through.

YMMV, as you may be working on a missile-launch system or life-support systems, where what you do is far more critical than at the average tech startup.


By stubbing out at predefined, well-tested object boundaries at the edge of your application, you'll find your test suite a whole lot easier to reason about, and what's more, it will be faster.

Don't overly test third-party libraries, particularly well-tested ones built by talented teams. Stubbing out with tools like WebMock means the language of your tests will be separated from the language of your domain. E.g. you'll be talking in JSON/XML/SOAP or who knows what, and not in the specific attributes that your objects and app care about.