Acceptance tests are the way in which we assure that our application meets the needs of the users. In most Rails applications, an acceptance test performs a black box test against the HTTP endpoints and routes.
When our application uses a lot of JavaScript—as our Angular-powered typeahead search does—it’s often necessary for our acceptance tests to execute the downloaded HTML, CSS, and JavaScript in a running browser, so we can be sure that all of the DOM manipulation we’re doing actually works.
Typically, developers would use Selenium, which would launch an instrumented instance of Firefox, running it on your desktop during the acceptance testing phase. This is quite cumbersome and slow, and for running tests on remote continuous integration servers, it requires special configuration to allow a graphical app like Firefox to run.
Ideally, we’d want something that executes our front-end code in a real browser—complete with a JavaScript interpreter—but that can run headless, that is, without popping up a graphical application. PhantomJS[46] is such a browser.
PhantomJS describes itself as “a headless WebKit, scriptable with a JavaScript API.” WebKit is the browser engine that powers Apple’s Safari (and was the basis of Google’s Chrome). The “scriptable JavaScript API” means that you can interact with it in your tests.
Most Rails acceptance tests use Capybara,[47] which provides an API for interacting with such an instrumented browser. To allow Capybara to talk to PhantomJS, you’re going to use Poltergeist,[48] which plays the role Selenium would play if you were using Firefox.
This may sound like a ton of new technologies and buzzwords, but it’s all worth it to get the kind of test coverage you need. You need to add the PhantomJS and Poltergeist gems to the Gemfile, do a bit of configuration, and start writing acceptance tests as you normally would. Let’s get to it.
First, you’ll need to download and install PhantomJS. The specifics of this depend on your operating system, but the details for Mac, Windows, and Linux are on PhantomJS’s download page.[49] You’ll need the latest version, which is 2.1.1 as of this writing. Pay particular attention to this since version 1.9 is still in wide use and will not work for our purposes here.
You can verify your install by running phantomjs and issuing some basic JavaScript:
| $ phantomjs --version |
| 2.1.1 |
| $ phantomjs |
| phantomjs> console.log("HELLO!"); |
| HELLO! |
| undefined |
| phantomjs> |
This is the only time you’ll need to interact with PhantomJS in this way, but it’s enough to validate your install.
Now, we’ll install Poltergeist, which is an adapter between the Ruby code you’ll write for your acceptance tests and the “scriptable JavaScript API” PhantomJS provides. To do this, add it to the testing group in your Gemfile and then run bundle install to install it.
| group :development, :test do |
| |
| # other gems... |
| |
| gem 'database_cleaner' |
| gem 'rspec-rails', '~> 3.4' |
» | gem 'poltergeist' |
| end |
Installing Poltergeist will bring in Capybara as a dependency. If you aren’t familiar with it, I’ll explain more when we see the acceptance tests.
To use Poltergeist to run acceptance tests in a browser, you have to do three things: (1) configure Capybara to use it during test runs; (2) configure RSpec to handle the testing database differently for acceptance tests than for our unit tests; and (3) make sure our compiled JavaScript and CSS are available to be served up during the tests.
To connect Poltergeist and Capybara, you just need a few lines in spec/rails_helper.rb. You’ll need to require Poltergeist and then set Capybara’s drivers to use it. Capybara has two different drivers: one default and one for JavaScript. This is handy if you don’t have a lot of JavaScript and want your acceptance tests to normally run using a special in-process driver that won’t execute JavaScript on the page. That’s not the case for Shine, so we’ll use Poltergeist (which drives PhantomJS) for all acceptance tests.
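If you did want that split setup—a fast in-process driver by default, and a real browser only where needed—it would look something like this sketch (the `js: true` tag is standard Capybara/RSpec usage):

```ruby
# A sketch of the alternative setup: keep Capybara's in-process
# rack_test driver (which does not execute JavaScript) as the default,
# and opt individual tests into the JavaScript-capable driver by
# tagging them with `js: true`.
Capybara.default_driver    = :rack_test
Capybara.javascript_driver = :poltergeist

# Then, in a feature spec:
#   scenario "search as you type", js: true do
#     # this test runs under Poltergeist/PhantomJS
#   end
```

Shine relies on JavaScript throughout, so we point both drivers at Poltergeist instead.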
Here are the changes to spec/rails_helper.rb:
| ENV['RAILS_ENV'] ||= 'test' |
| require 'spec_helper' |
| require File.expand_path('../../config/environment', __FILE__) |
| require 'rspec/rails' |
» | require 'capybara/poltergeist' |
| |
» | Capybara.javascript_driver = :poltergeist |
» | Capybara.default_driver = :poltergeist |
| |
| ActiveRecord::Migration.maintain_test_schema! |
| |
| RSpec.configure do |config| |
| |
| # rest of the file ... |
| |
| end |
Note that you’re using spec/rails_helper.rb and not spec/spec_helper.rb because these tests require the full power of Rails to execute (namely, access to Active Record and the path helpers).
Next, we need to deal with how we manage our test data during acceptance test runs. We’ll do that using a gem called DatabaseCleaner.
In a normal Rails unit test, the testing database is maintained using database transactions.[50] At the start of a test run, Rails opens a new transaction. Our tests then write data to the database to set up the test, run the test (which might make further changes to the database), and assert the results, which often requires querying the database. When the test is complete, Rails rolls back the transaction, effectively undoing all the changes we made and restoring the test database to a pristine state.
This works because the process that starts the transaction can see all of the changes made to the database inside that transaction, even though no other process can. Because Rails runs our tests in the same process as the application code they exercise, using transactions is a clever and efficient way to manage test data. But our acceptance tests will actually involve two processes: our application and our test code (which will use PhantomJS to access our application).
Because of the way database transactions work, our test code can see the data it’s inserting for the test (since it opened the transaction), but the application we’re testing—running in another process—cannot. Only if we commit that transaction in our test process can the server process see the data. We need to arrange to do that inside our acceptance tests.
Doing this creates a new problem, which is that we now need a way to restore the test database to a pristine state between test runs. For example, if we are testing our search by populating the database with four users named “Pat,” but we are also testing our registration by signing up a user named “Pat,” our search test might fail if the registration test runs first, since there would be five users named “Pat.”
Fortunately, this is a common problem and has a relatively simple solution: the DatabaseCleaner[51] gem.
DatabaseCleaner works with RSpec and Rails to reset the database to a pristine state without using transactions (although it can—it provides several strategies). RSpec allows us to customize the database setup and teardown by test type. This means you can keep the fast and efficient transaction-based approach for our unit tests, but use a different approach for our acceptance tests.
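To make the :truncation strategy concrete: rather than rolling back a transaction, it issues SQL TRUNCATE statements against the test database. A rough sketch of what that amounts to on Postgres follows—the table names are Shine’s, but the real gem discovers tables itself, and its exact SQL varies by adapter and options, so this is illustrative only:

```ruby
# Rough sketch only: approximately what DatabaseCleaner's :truncation
# strategy does under the hood on Postgres. In practice you never write
# this yourself; DatabaseCleaner discovers the tables and builds the
# statement for you.
ActiveRecord::Base.connection.execute(<<~SQL)
  TRUNCATE TABLE customers, users RESTART IDENTITY;
SQL
```

Because TRUNCATE removes committed rows, it can clean up after tests whose data had to be visible to a second process, which is exactly the situation our acceptance tests create.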
First, add DatabaseCleaner to the Gemfile and bundle install:
| group :development, :test do |
| |
| # other gems... |
| |
| gem 'database_cleaner' |
| end |
Now, configure it in spec/rails_helper.rb. To do this, disable RSpec’s built-in database handling by setting use_transactional_fixtures to false (note that the generated rails_helper.rb will have it set to true). Then use RSpec’s hooks[52] to let DatabaseCleaner handle the database. By default, use DatabaseCleaner’s :transaction strategy, which works just like RSpec and Rails’s default. But for our acceptance tests (which RSpec calls “features”), we’ll use :truncation, which means DatabaseCleaner will use SQL TRUNCATE statements to purge data that’s been committed to the database.
Here’s what we’ll add to spec/rails_helper.rb, with the most relevant parts highlighted:
| RSpec.configure do |config| |
» | config.use_transactional_fixtures = false |
» | config.infer_spec_type_from_file_location! |
| |
| # rest of the file... |
| |
» | config.before(:suite) do |
» | DatabaseCleaner.clean_with(:truncation) |
» | end |
| |
» | config.before(:each) do |
» | DatabaseCleaner.strategy = :transaction |
» | end |
| |
» | config.before(:each, :type => :feature) do |
» | DatabaseCleaner.strategy = :truncation |
» | end |
| |
» | config.before(:each) do |
» | DatabaseCleaner.start |
» | end |
| |
» | config.after(:each) do |
» | DatabaseCleaner.clean |
» | end |
| |
| end |
Now that you’ve set up PhantomJS, Poltergeist, and DatabaseCleaner, you’re ready to write an acceptance test.
There are two parts of the typeahead search we can test. The first is that merely typing in the search field will perform the search. The second is that our results are ordered according to our original specification from Chapter 4, Perform Fast Queries with Advanced Postgres Indexes.
To test the search, you’ll write two tests: one that searches by name, and a second that searches by email. This will allow us to validate that matching emails are listed first. Both tests will assert that merely typing in the search term returns results.
In addition to the one-time setup we just did, we also need to do some per-test setup. First, all pages in Shine require a login, so we’ll need a valid user in the database we can use to log in. Second, we’ll need test customers in our database to search for. The basic flow of our test will be: create a test user, create test customers, log in as the test user, navigate to the customer page, and perform a search.
RSpec calls acceptance tests feature specs, and expects the tests to be in spec/features. Locating our test files there is a signal to RSpec to use Capybara and to run our tests as full-on acceptance tests. The first step, then, is to create spec/features/customer_search_spec.rb like so:
| require 'rails_helper' |
| |
| feature "Customer Search" do |
| |
| # setup and tests will go here... |
| |
| end |
Note the use of feature instead of describe. This is purely aesthetic, but provides useful context when editing test files on larger projects. First, let’s create a method that will create a test user for us.
Way back in Chapter 3, Secure the User Database with Postgres Constraints, we talked about how Devise properly secures our user information, including passwords. This means it would be difficult to hand-craft a valid encrypted password in our tests. Fortunately, the additions Devise made to our User model allow us to create a properly encrypted user directly.
| User.create!(email: "pat@example.com", |
| password: "password123", |
| password_confirmation: "password123") |
This is what happens when a user registers, so we can just call code like this in our test. Create a helper method to do this (so we don’t clutter up our actual test):
| def create_test_user(email:, password:) |
| User.create!( |
| email: email, |
| password: password, |
| password_confirmation: password) |
| end |
This method requires email and password as arguments because we’ll need those in our test to fill out the login form. The way to manage data like this in an RSpec test is to use let, which defines values we can use in both setup and tests. There’s no need to use Faker, so just hard-code some valid values:
| let(:email) { "pat@example.com" } |
| let(:password) { "password123" } |
To create customers, we’re going to create them manually inside our test file. Although Rails provides test fixtures[53] to do this, we’re not going to use them here. Because tests related to search require meticulous setup of many different rows, we want that setup to be in our test file so that we (and future maintainers of our code) can clearly see what we’re setting up for our test.
To help create customers, you’ll define a helper method, create_customer, that will allow you to specify only those fields of a customer you want, using Faker to fill in the remaining required fields.
| def create_customer(first_name:, last_name:, email: nil) |
| username = "#{Faker::Internet.user_name}#{rand(1000)}" |
| email ||= "#{username}#{rand(1000)}@" + |
| "#{Faker::Internet.domain_name}" |
| |
| Customer.create!( |
| first_name: first_name, |
| last_name: last_name, |
| username: username, |
| email: email |
| ) |
| end |
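The `email ||=` line is the only subtle part: it keeps a caller-supplied email if one was passed and derives one otherwise. That fallback pattern is plain Ruby and works outside Rails; here is a minimal standalone sketch (the `derived_email` method name and the `example.com` domain are made up for illustration, not part of Shine):

```ruby
# Standalone sketch of the ||= fallback used in create_customer: keep
# a caller-supplied email, otherwise derive one from the username.
# (derived_email and example.com are illustrative names only.)
def derived_email(username, email = nil)
  email ||= "#{username}#{rand(1000)}@example.com"
  email
end

derived_email("pat", "pat123@somewhere.net") # => "pat123@somewhere.net"
derived_email("pat") # => something like "pat742@example.com"
```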
With create_customer, email, and password in place, you can create the test data you need to run the tests. You can do this inside a before block, which we saw inside our RSpec configuration when setting up DatabaseCleaner. That syntax works in our test files as well, and defines code to run before each test:
| before do |
| create_test_user(email: email, password: password) |
| |
| create_customer first_name: "Chris" , last_name: "Aaron" |
| create_customer first_name: "Pat" , last_name: "Johnson" |
| create_customer first_name: "I.T." , last_name: "Pat" |
| create_customer first_name: "Patricia" , last_name: "Dobbs" |
| |
| # This user is the one we'll expect to be listed first |
| create_customer first_name: "Pat", |
| last_name: "Jones", |
| email: "pat123@somewhere.net" |
| end |
Now we can start writing tests. Our first test will search by name. If we search for the string "pat", given our test data, we should expect to get four results back. Further, we should expect that the test customer named “Patricia Dobbs” will be sorted first, whereas the test customer “I.T. Pat” will be last (since our search sorts by last name).
To assert this, we’ll use the all method of the page object Capybara provides in our tests. all returns all DOM nodes on the page that match a given selector. In our case, we can use a CSS selector to count all list items with the class list-group-item (you’ll recall from Chapter 5, Create Clean Search Results with Bootstrap Components, that we designed our results using Bootstrap’s List Group component). You can also index into the collection returned by all to make assertions about the content of a particular list item.
Here’s what our test looks like (note the use of scenario instead of it—this is purely stylistic, but most RSpec acceptance tests use this for readability).
| scenario "Search by Name" do |
| visit "/customers" |
| |
| # Login to get access to /customers |
| fill_in "Email", with: email |
| fill_in "Password", with: password |
| click_button "Log in" |
| |
| within "section.search-form" do |
| fill_in "keywords", with: "pat" |
| end |
| |
| within "section.search-results" do |
| expect(page).to have_content("Results") |
| expect(page.all("ol li.list-group-item").count).to eq(4) |
| |
| list_group_items = page.all("ol li.list-group-item") |
| |
| expect(list_group_items[0]).to have_content("Patricia") |
| expect(list_group_items[0]).to have_content("Dobbs") |
| expect(list_group_items[3]).to have_content("I.T.") |
| expect(list_group_items[3]).to have_content("Pat") |
| end |
| end |
Before we run the test, there’s one more thing we have to do to the code we’re testing. Our test makes logical sense, but if you think about it more deeply, it’s assuming that the back end will respond (and results will be rendered) in the window after the fill_in but before the first expect. This is likely not enough time, meaning we’ll start expecting results in our test before they’ve been rendered in the browser (a form of race condition). Worse, if you try to debug it (see the following sidebar for some tips), the test will pass, because debugging introduces enough of a delay for the DOM to render and make the test pass. This is a recipe for a flaky test.
What you want to do is have the test wait for the back end to complete. There are many ways to do this, but the cleanest way, and the way Capybara is designed, is to write the markup and tests so that a change in the DOM signals the completion of the back end.
For the Capybara part, we are actually already set up to wait for the DOM. within is implemented by using find,[54] which is documented to wait a configured amount of time for an element to appear in the DOM—exactly what we want!
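To make that waiting behavior concrete, here is what it looks like when called explicitly. The selector is Shine’s; the `wait:` keyword is standard Capybara and overrides the wait window for a single call:

```ruby
# Capybara's find polls the page until the selector matches, or raises
# Capybara::ElementNotFound once the wait window elapses. `within`
# wraps this same behavior around a block of assertions.
find("section.search-results")            # waits the default window
find("section.search-results", wait: 10)  # waits up to 10 seconds

within "section.search-results" do
  # assertions here run only once the element has appeared
end
```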
The problem is that the markup we’re waiting on (<section class='search-results'>) is always shown, even before we have results. This means that the within won’t have to wait, since the element is there, and it will proceed with the expectations, which fail. So, we’ll change our template to only show this markup if there are search results by using *ngIf like so:
» | <section class="search-results" *ngIf="customers"> |
| |
| <!-- Rest of the template --> |
| |
| </section> |
*ngIf is similar to *ngFor in that the special leading asterisk is syntactic sugar for a much longer construct using the <template> element. I won’t show the expansion here, since you learned how this concept generally works already in Populate the DOM Using ngFor.
The way *ngIf works is to remove elements from the DOM if the expression given to it evaluates to false. This means that until we have results, there won’t be any element matching section.search-results, which means that within will wait for that element to appear. As long as our back end responds within Capybara’s default wait time, our test will pass. If you find that adding calls to sleep makes your tests pass, consider reexamining your use of within and find to see if you can change your markup to use this approach.
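If your back end legitimately needs longer (on a slow continuous integration box, say), lengthen the wait window globally rather than sprinkling sleeps around. The setting below is Capybara’s standard knob for this; the value 5 is an arbitrary example, not a recommendation:

```ruby
# In spec/rails_helper.rb: lengthen Capybara's implicit wait window.
# This applies to find, within, have_content, and friends; the value
# here (5 seconds) is purely illustrative.
Capybara.default_max_wait_time = 5
```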
With this in place, we can now run our test and get a bizarre failure:
| > bin/rails spec SPEC=spec/features/customer_search_spec.rb |
| |
| Randomized with seed 21046 |
| |
| Customer Search |
| Search by Name (FAILED - 1) |
| |
| Failures: |
| |
| 1) Customer Search Search by Name |
| Failure/Error: <%= javascript_pack_tag 'application' %> |
| ActionView::Template::Error: |
| Can't find application.js in public/packs/manifest-test.json. |
| Is webpack still compiling? |
| # ./app/views/layouts/application.html.erb:12:in |
| `_app_views_layouts_application_html_erb___127481630256380… |
| # ------------------ |
| # --- Caused by: --- |
| # Webpacker::FileLoader::NotFoundError: |
| # Can't find application.js in public/packs/manifest-test.json. |
| # Is webpack still compiling? |
| # ./app/views/layouts/application.html.erb:12:in |
| `_app_views_layouts_application_html_erb___127481630256380623… |
| |
| Top 1 slowest examples (1.63 seconds, 94.7% of total time): |
| Customer Search Search by Name |
| 1.63 seconds ./spec/features/customer_search_spec.rb:56 |
| |
| Finished in 1.73 seconds (files took 2.03 seconds to load) |
| 1 example, 1 failure |
| |
| Failed examples: |
| |
| rspec ./spec/features/customer_search_spec.rb:56 # Customer Search Search by Name |
| |
| Randomized with seed 21046 |
When developing our features, we were running webpack-dev-server to serve up our compiled JavaScript and CSS. That server isn’t running when we run our tests, so it makes sense that our JavaScript and CSS can’t be found. To make this work, we can run bin/webpack manually in the test environment:
| > RAILS_ENV=test bin/webpack |
| |
| lots of output |
And now, our test passes:
| > bin/rails spec SPEC=spec/features/customer_search_spec.rb |
| |
| Randomized with seed 51846 |
| |
| Customer Search |
| Search by Name |
| |
| Top 1 slowest examples (2.25 seconds, 97.3% of total time): |
| Customer Search Search by Name |
| 2.25 seconds ./spec/features/customer_search_spec.rb:56 |
| |
| Finished in 2.31 seconds (files took 1.72 seconds to load) |
| 1 example, 0 failures |
| |
| Randomized with seed 51846 |
Having to remember to run Webpack before each test is a pain. We can arrange for it to run before all of our tests by augmenting the RSpec-provided spec task. Create the file lib/tasks/features.rake like so:
| task :run_webpack_in_test_env do |
| unless ENV["SKIP_WEBPACK"] == 'true' |
| system({ "RAILS_ENV" => "test"}, "bin/webpack") |
| end |
| end |
| task :spec => :run_webpack_in_test_env |
The test for the environment variable SKIP_WEBPACK allows us to avoid running Webpack when we are just testing one test that doesn’t need it. For example, to run the tests for User, we’d execute:
| > SKIP_WEBPACK=true bin/rails spec SPEC=spec/models/user_spec.rb |
| |
| Randomized with seed 6438 |
| |
| User |
| |
| absolutely prevents invalid email addresses |
| |
| Top 1 slowest examples (0.0591 seconds, 36.5% of total time): |
| User email absolutely prevents invalid email addresses |
| 0.0591 seconds ./spec/models/user_spec.rb:18 |
| |
| Finished in 0.16194 seconds (files took 2.22 seconds to load) |
| 1 example, 0 failures |
| |
| Randomized with seed 6438 |
This way, when we run all of our tests with bin/rails spec, Webpack will run and our acceptance tests will have the latest version of our Angular code and CSS. When we’re doing focused TDD on one unit test, we can skip Webpack. This setup isn’t ideal, but a more sophisticated solution is quite complex, and this will be good enough to get us working for now.
Now that you understand the importance of within, let’s see our test for searching by email. It will be structured similarly to our previous test, but we want to check that the user with the matching email is listed first. We’ll then check that the remaining results are sorted by last name, using a similar technique to what you saw in our search-by-name test.
| scenario "Search by Email" do |
| visit "/customers" |
| |
| # Login to get access to /customers |
| fill_in "Email", with: email |
| fill_in "Password", with: password |
| click_button "Log in" |
| |
| within "section.search-form" do |
| fill_in "keywords", with: "pat123@somewhere.net" |
| end |
| within "section.search-results" do |
| expect(page).to have_content("Results") |
| expect(page.all("ol li.list-group-item").count).to eq(4) |
| |
| list_group_items = page.all("ol li.list-group-item") |
| |
| expect(list_group_items[0]).to have_content("Pat") |
| expect(list_group_items[0]).to have_content("Jones") |
| expect(list_group_items[1]).to have_content("Patricia") |
| expect(list_group_items[1]).to have_content("Dobbs") |
| expect(list_group_items[3]).to have_content("I.T.") |
| expect(list_group_items[3]).to have_content("Pat") |
| end |
| end |
Now, let’s run our tests.
| $ bin/rails spec SPEC=spec/features/customer_search_spec.rb |
| |
| Customer Search |
| Search by Email |
| Search by Name |
| |
| Finished in 3.63 seconds (files took 7.08 seconds to load) |
| 2 examples, 0 failures |
They pass! We now have a way to test our features the way a user would use them: using a real browser. Our tests can properly handle our extensive use of JavaScript, but they don’t need to pop up a web browser, which makes them easy to run in a continuous integration environment.
Of course, testing Angular code purely in the browser is somewhat cumbersome, especially if it becomes complex with a lot of edge cases. To help get good test coverage without always having to go through a browser, you need to be able to unit test your Angular code.