Occasionally, you'd like to unit test the inner workings of a method. As a disclaimer, I don't recommend it because tests should generally be behavior driven. Tests should treat your methods as black boxes: you put something in, you get something out. How it works internally shouldn't really matter. However, if you do want to test the inner workings of your methods, there are a number of ways to do so with pure Ruby, including :send, :instance_variable_get and others. Testing the innards feels dirty no matter which way you spin it, but I like to at least do it with RSpec.
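For reference, here's what the pure Ruby approach looks like; a minimal sketch using a hypothetical class, poking at a private method with :send and at internal state with :instance_variable_get:

class Sandwich
  def initialize
    @toasted = false
  end

  private

  def toast!
    @toasted = true
  end
end

sandwich = Sandwich.new
sandwich.send(:toast!)                    # call the private method directly
sandwich.instance_variable_get(:@toasted) # => true, peek at internal state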
Let's say you have a method that does some expensive lookup:
class Library < ActiveRecord::Base
  include ExpensiveQueries

  attr_accessor :books

  def books
    @books ||= expensive_query # The expensive query takes 5 seconds
  end
end
The above example should be familiar: you plan to perform something expensive in Ruby or in the database, and you would like to cache the result in an instance variable so that all subsequent calls to the method draw from that instance variable.
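In practice, the caching behaves something like this (timings are illustrative):

library = Library.first
library.books # first call: runs expensive_query (~5 seconds) and stores the result in @books
library.books # subsequent calls: return the memoized @books without re-running the query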
If you were taking a TDD approach, you wouldn't have this class written yet. You know you'll be performing something very expensive, though, and you want to ensure the method caches its result. How do you test this without knowing the internals of the method?
Let's say you're writing your tests before you write the above class. You could use RSpec's stubbing library to ensure your method is caching its result.
describe Library do
  describe '#books' do
    it 'caches the result' do
      # Assume some books get associated upon creation
      @library = Library.create!

      # 5 seconds for this call
      the_books = @library.books

      # Stub out the expensive_query method so it raises an error
      @library.stub(:expensive_query) { raise 'Should not execute' }

      # If the value was cached, expensive_query shouldn't be called
      lambda { @library.books }.should_not raise_error
    end
  end
end
The key component here is the @library.stub call. This is also where we break the black box, behavior driven testing idiom: at this line we assume there will be an internal method call named expensive_query. The test is also brittle; if expensive_query is ever renamed to really_expensive_query, our test will break even though the behavior of our method remains the same.
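If you'd rather assert the call count directly, a message expectation gets at the same idea. Here's a sketch in the same old-style RSpec syntax used above; note that it's just as coupled to the expensive_query name:

describe Library do
  describe '#books' do
    it 'only runs the expensive query once' do
      @library = Library.create!

      # Expect the internal method to be invoked exactly once
      @library.should_receive(:expensive_query).once.and_return([])

      2.times { @library.books }
    end
  end
end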
What if your expensive_query is really an ActiveRecord association? Let's say your Library class looks more like the following:
class Library < ActiveRecord::Base
  has_many :books

  attr_accessor :authors

  def authors
    @authors ||= books.authors # The expensive query takes 5 seconds
  end
end
You could use RSpec's nifty stub_chain method to stub the books.authors call and ensure it doesn't get executed a second time.
describe Library do
  describe '#authors' do
    it 'caches the result' do
      # Assume some books and authors get associated upon creation
      @library = Library.create!

      # 5 seconds for this call
      the_authors = @library.authors

      # Stub out the books.authors association so it raises an error
      @library.stub_chain(:books, :authors) { raise 'Should not execute' }

      # If the value was cached, books.authors shouldn't be called
      lambda { @library.authors }.should_not raise_error
    end
  end
end
The arguments to stub_chain represent the methods in the chain, in order; here, the books association followed by authors. stub_chain can also be used to stub out additional methods that get called within the chain.
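For example, here's a minimal sketch of a longer chain returning a canned value (the where clause and the available column are hypothetical):

library = Library.create!

# Any call to library.books.where(...).authors now returns the canned array
library.stub_chain(:books, :where, :authors).and_return(['Patricia Highsmith'])

library.books.where(:available => true).authors # => ['Patricia Highsmith']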
Happy stubbing!
Posted by Mike Pack on 10/07/2011 at 11:20AM
Call me a stickler, but I think two pages should load quickest in any web app: the home page (for people who aren't logged in) and the first page you see once you are logged in, usually the dashboard.
Tackling the first of the two, the guest home page, is fairly easy. This page can be as simple or as complex as you want. Facebook keeps it simple and static. Foursquare adds some flair. Whatever the approach, it's pretty easy to control the load time of the guest home page because you're likely building it from scratch.
OmniAuth had the revolutionary idea of consolidating third-party authentication methods, most of which use OAuth 1 or 2. But as consumers of libraries that take on such a burden, we have to be extra careful about the intricacies. For OmniAuth, one of those intricacies is authentication with Facebook.
When OmniAuth successfully authenticates with Facebook, something terrible happens: it makes a request to the Facebook Graph API...every...single...time. Not only the first time you log in with Facebook, but all subsequent times. This is because of the vast decoupling between OmniAuth and your app. OmniAuth knows nothing about your underlying data model, so it can't reliably store the authenticated user's Facebook information (and know not to request it again). To provide the user's Facebook information within your success callback, OmniAuth makes a request to the Facebook API.
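To make that concrete, here's a sketch of a typical OmniAuth callback; the auth hash it reads has already cost you a Graph API round trip by the time your code runs (the User.find_or_create_from_auth helper and dashboard_path route are hypothetical):

class SessionsController < ApplicationController
  def create
    # By the time this runs, OmniAuth has already hit the Facebook Graph API
    auth = request.env['omniauth.auth']

    # Persisting the uid and token yourself avoids re-reading the profile later in your
    # own code, but it can't avoid the API call OmniAuth makes on every authentication
    user = User.find_or_create_from_auth(auth)
    session[:user_id] = user.id

    redirect_to dashboard_path
  end
end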
I think it goes without saying, but this is really bad for usability. To glance at a couple of problems with this approach: consider that the Graph API is down, or that it never responds at all, or that you're over your Facebook API quota. Your users will be sitting in limbo at the most critical moment they're using your app: during the login process. Maybe this is their first time logging in. Making one API call could potentially make it their last. Impress users early; with OmniAuth's Facebook integration you could be missing out.
The solution is to roll your own authentication. Facebook's JavaScript SDK is awesome (most of the time), and you could probably integrate it in the same timeframe as OmniAuth, but with the added benefit of a much better user experience. Unlike similar solutions to the Facebook JS SDK (Twitter @Anywhere), Facebook provides you with everything OmniAuth does, including the API access token.
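Once the JS SDK hands the browser an access token, the remaining server-side work is small. As a hedged sketch, assuming your client posts the token to your app, you can verify it against the Graph API yourself, and only when you actually need to:

require 'net/http'
require 'json'

# Ask the Graph API who the token belongs to.
# Returns the user's profile hash, or nil if the token is invalid.
def facebook_profile_for(access_token)
  uri = URI('https://graph.facebook.com/me')
  uri.query = URI.encode_www_form(:access_token => access_token)

  response = Net::HTTP.get_response(uri)
  return nil unless response.is_a?(Net::HTTPSuccess)

  JSON.parse(response.body)
end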
Sidenote: as of this posting, OmniAuth 1.0 is under active development, and it doesn't look like this issue has leaked into the OmniAuth Facebook extension yet. The official release is still at 0.2.6.
Happy Facebooking!
Posted by Mike Pack on 09/21/2011 at 01:49PM
There are a lot of ugly solutions out there for this problem, and rightfully so: it's a pain in the ass. I've gone through a good number of options and found the following to be the simplest. Warning: it only works on Mac OS X.
Capybara allows you to set the port the test server runs on. This may or may not be necessary depending on your environment; it was necessary for me because Pow hijacks my default port.
spec_helper.rb
Capybara.server_port = 6543
Mac OS X comes with a handy *.127localhost.com domain configured for you. So you can use contact.127localhost.com, test.127localhost.com, etc., and they will all point to 127.0.0.1.
When you need to use your subdomain, simply use a url helper:
login_spec.rb
describe 'as a guest user on the mobile login page' do
  before do
    visit login_url(:subdomain => 'contact', :host => '127localhost.com', :port => 6543)
  end

  it 'shows me the login form' do
    page.should have_content('Login')
  end
end
Note: The :subdomain option isn't available by default. I stole it from Ryan Bates' Railscast and gisted it.
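If you'd rather not chase down the gist, the idea is roughly this: fold a :subdomain option into the :host before Rails builds the URL. The following is only a sketch (the module name and where you mix it in are up to you, and the gisted version may differ in detail):

module SubdomainUrlFor
  def url_for(options = nil)
    if options.kind_of?(Hash) && options.has_key?(:subdomain)
      subdomain = options.delete(:subdomain).to_s
      # Prefix the subdomain onto whatever :host was supplied, e.g. '127localhost.com'
      options[:host] = "#{subdomain}.#{options[:host]}" unless subdomain.empty?
    end
    super
  end
end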
Finagling with /etc/hosts feels dirty and pollutes domain resolution.
Ultimately, it would be awesome to boot a Pow server for testing and use something like http://contact.myapp.test, where the development app lives at http://contact.myapp.dev. It's possible to get Pow to boot at http://myapp.test by using the POW_DOMAINS environment variable, but the Pow server still runs in development. You can start the Pow server in an environment other than development by using .powenv, but then you have to manage the .powenv file so that it only exists during testing, and restarting the Pow server adds boot time.
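.powenv itself is just a shell file that Pow sources before booting the app, so forcing the test environment is a one-liner (with the caveats above about only keeping the file around while testing):

# .powenv -- sourced by Pow before booting the app
export RAILS_ENV=test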
I hope that in future versions of Pow it will be possible to configure the environment based on the top-level domain, such as dev or test. That way, you could modify /etc/resolver/test to point at the port your test app lives on at runtime. The resolver files are pretty simple and could be modified to look like the following:
/etc/resolver/test
# Lovingly generated by Pow
nameserver 127.0.0.1
port 6543
Until Pow provides the ability to run different environments, I'll stick to *.127localhost.com.
Happy testing!
Posted by Mike Pack on 09/16/2011 at 11:01AM
Tags: subdomains, capybara, testing, rspec, pow