
Entries in the Category “Testing”

Django 1.1 Testing and Debugging

written by Michael Trier, on Jul 6, 2010 9:39:00 PM.

I’ve been wanting to write this post for about four weeks now, and finally I have a chance to follow through on it. A little over a month ago I picked up Karen M. Tracey’s new book, Django 1.1 Testing and Debugging, published by Packt Publishing. It’s been surprising to me that I’ve heard little in Django circles about this book, because it fills a space that has been a huge void in the Django world for some time. Testing within Django has always been something I’ve struggled with. The platform has never really “encouraged” testing in the way that I was used to in the Ruby on Rails or Pylons world. Likely this was more of an internal personal struggle of trying to figure out how to fit testing into my workflow in the Django world than an actual shortcoming in the framework. For me, Ms. Tracey’s book was exactly what I needed to put things into perspective and to get clarity on an appropriate testing workflow in Django.

The book is very well written and quite readable. I found only a few minor code problems, likely the result of chasing an ever-evolving framework rather than actual oversight. I was able to digest the book over three days and began to put some of its teachings into practice immediately.

The book starts off with some basic testing setup items and discusses testing approaches. Ms. Tracey quickly moves into talking about doctests. Although I’m not a huge fan of doctests, I found the discussion easy to follow, and Tracey does a great job of presenting their advantages and disadvantages. In chapter three, unit tests using TestCase are covered, and again Ms. Tracey does a very thorough job of laying out the pros and cons of this approach. Two areas of the book probably helped fill in gaps for me the most: the chapter on using the test Client to do integration tests and the chapter on integrating third-party test tools. I still haven’t had a chance to dig into Twill, but I definitely want to experiment with using it for my integration tests. The last half of the book is spent on debugging approaches and on figuring out how to find and solve problems with your code. Although I personally didn’t get a lot of new information out of this section, programmers new to Django / Python will find it an excellent resource outlining the options available when things go wrong. The book finishes up with a chapter on moving your applications to production, debugging production problems, and load testing your applications.
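
To give a feel for the Client-based integration testing the book covers, here is a minimal sketch of my own (not an example from the book); the URL is hypothetical:

from django.test import TestCase

class CategoryPageTests(TestCase):
    def test_category_page_renders(self):
        # The test Client drives the full request/response cycle:
        # URL resolution, the view, and template rendering.
        response = self.client.get('/categories/')  # hypothetical URL
        self.assertEqual(200, response.status_code)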

I really only have two criticisms of the book, and they are both minor. In chapter 5, where Ms. Tracey discusses integrating Django testing with third-party tools, I thought the book should have gone a little deeper. Coverage and Twill are treated in depth, but not a lot of time is spent on using Nose as a test runner beyond the basic approach. I understand the book is not about extending Nose, but it would have been nice to have a bit more to work with there. The only other criticism of the book was with the final chapter. Although there’s useful information in the “moving your app to production” chapter, it seemed out of scope for this book and a subject that really deserves an entire book of its own. That said, I did enjoy the discussion on load testing and found it very helpful.

In summary, if you’re developing with Django this is another “must have” book in my opinion. There’s so much good information in this book, and it is presented in a very readable and easy to understand way. I’m planning to order a copy for each member of my team.

Automating Test Creation

written by Michael Trier, on Jul 24, 2008 1:53:00 PM.

Eric Holscher just posted a very nice article titled Automating tests in Django. The post goes through how to create integration tests for your Django applications in an automated way, through the use of a middleware that logs the test-creation output to a file. It’s a creative approach and certainly very interesting. One additional benefit is that Eric created an excellently done screencast to go along with the post.

There is one thing about this approach to testing that doesn’t quite sit right with me, and that is that the testing process seems backwards. If you’re creating tests based on what you already have, how are you going to cover what’s specified but not yet properly implemented? It’s the same reason I’m not a fan of doctests. I think they encourage the wrong behavior, especially when the output you’re matching against is often so complex that the tendency is to just copy and paste from live results. I recognize that a lot of people don’t feel the same way, and perhaps I just need to give the idea more time to sink in.
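
To illustrate what I mean, here is a hypothetical doctest of my own (not from Eric’s post); the model, the featured field, and the expected output are all made up, and the output is exactly the kind of thing you would paste in from a live shell:

from apps.blog.models import Category  # hypothetical model, as in later examples

def featured_category_names():
    """Return the names of the featured categories.

    The expected output below would typically be pasted straight from a
    shell session, so the test asserts whatever the code happened to
    return at the time rather than what was specified up front.

    >>> featured_category_names()
    ['Python', 'Ruby', 'Smalltalk']
    """
    return [c.name for c in Category.objects.filter(featured=True)]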

I really appreciate all of the screencasts that are starting to show up within the Django community. I think it’s a vehicle that a lot of people enjoy and learn well from. I know that I’m certainly looking forward to more screencasts from Eric.

Flexible Creates in Testing

written by Michael Trier, on Mar 29, 2008 1:41:00 AM.

Here’s a pattern I used quite frequently when writing unit tests in Ruby. A similar approach works quite well in Python once you get past the death star syntax:


def create_category(self, **options):
    defaults = {'name': 'Python', 'description': 'Python rocks, mmmkay'}
    defaults.update(options)  # anything passed in overrides the defaults
    return Category.objects.create(**defaults)

By setting up the above method / function in your tests you can get a sensible default object by just calling it. But if you want to override a particular aspect of the create, for instance to set a required field to None, you can easily pass that in:


create_category(name=None)

It works quite well and makes it easy to provide defaults that you then modify to test specific aspects of your code.
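
Here is a quick usage sketch of my own showing how the helper reads inside a Django TestCase subclass; the test names are hypothetical and it assumes the Category model from the later examples:

from django.test import TestCase
from apps.blog.models import Category

class CategoryTests(TestCase):
    def create_category(self, **options):
        defaults = {'name': 'Python', 'description': 'Python rocks, mmmkay'}
        defaults.update(options)
        return Category.objects.create(**defaults)

    def test_defaults(self):
        # No arguments: the helper supplies sensible values.
        self.assertEqual('Python', self.create_category().name)

    def test_override_single_field(self):
        # Override only the field the test cares about.
        self.assertEqual('Ruby', self.create_category(name='Ruby').name)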

Elegant Testing Decorators

written by Michael Trier, on Mar 28, 2008 10:20:00 PM.

I’ve been working on fleshing out the tests for Django-SQLAlchemy and right away I discovered a problem. Incidentally, this is why I love unit testing. The problem was that a very simple test of the contains filter syntax on a query was failing. I was expecting four items but was getting five back. After digging into it I discovered the problem was with SQLite: its LIKE operator is case-insensitive by default.

Let us say we have the following setup:


from apps.blog.models import Category

class TestContains(object):
    def setup(self):
        Category.__table__.insert().execute({'name': 'Python'}, 
            {'name': 'PHP'}, {'name': 'Ruby'}, {'name': 'Smalltalk'}, 
            {'name': 'CSharp'}, {'name': 'Modula'}, {'name': 'Algol'},
            {'name': 'Forth'}, {'name': 'Pascal'})

A query with contains in Django on SQLite, like the following, will return five results instead of the expected four:


>>> Category.objects.filter(name__contains='a').count()
2008-03-28 20:41:43,228 INFO sqlalchemy.engine.base.Engine.0x..f0 BEGIN
2008-03-28 20:41:43,229 INFO sqlalchemy.engine.base.Engine.0x..f0 SELECT count(foo_category.id) AS count_1 
FROM foo_category 
WHERE foo_category.name LIKE ?
2008-03-28 20:41:43,229 INFO sqlalchemy.engine.base.Engine.0x..f0 ['%a%']
5

(Incidentally that’s actually going through the Django-SQLAlchemy backend and not Django’s ORM.)

So I kind of hemmed and hawed for a bit trying to figure out how I could make this work under different databases, at least from the testing standpoint. It finally occurred to me that SQLAlchemy has to be able to test their ORM against a lot of different backends so they must have a nice solution to this.

Decorators to the Rescue

It turns out that SQLAlchemy has implemented an elegant set of decorators for just this problem. They were also written in such a way that it was quite easy for me to extract them and modify them slightly to work with Django-SQLAlchemy tests. So what’s in this package?

  • fails_if(callable_) – Mark a test as expected to fail if callable_ returns True.
  • future – Mark a test as expected to unconditionally fail.
  • fails_on(dbs) – Mark a test as expected to fail on one or more database implementations.
  • fails_on_everything_except(dbs) – Mark a test as expected to fail on most database implementations.
  • unsupported(dbs) – Mark a test as unsupported by one or more database implementations.
  • exclude(db, op, spec) – Mark a test as unsupported by specific database server versions. This decorator allows an impressive list of options, for example @exclude('mydb', '<', (1,0))

There’s a lot more than that, but I won’t detail them all here. If you want to dig through it all, check out the test/testlib/testing.py module.
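
To give a feel for how these work, here is a minimal sketch of a fails_on-style decorator; it is my own simplified version, not SQLAlchemy’s actual implementation, and it assumes a hypothetical get_engine_name() helper that reports which backend the tests are running against:

def fails_on(*dbs):
    """Mark a test as expected to fail on the named database backends."""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            # get_engine_name() is a hypothetical helper returning e.g. 'sqlite'.
            if get_engine_name() not in dbs:
                return fn(*args, **kwargs)
            try:
                fn(*args, **kwargs)
            except Exception:
                return  # failed as expected on this backend
            raise AssertionError("%s unexpectedly passed on %s"
                                 % (fn.__name__, get_engine_name()))
        wrapper.__name__ = fn.__name__
        return wrapper
    return decorate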

The Implementation

So once I was able to extract and modify these decorators I ended up with very elegant syntax for my tests. Here is a sample:


class TestContains(object):
    def setup(self):
        Category.__table__.insert().execute({'name': 'Python'}, 
            {'name': 'PHP'}, {'name': 'Ruby'}, {'name': 'Smalltalk'}, 
            {'name': 'CSharp'}, {'name': 'Modula'}, {'name': 'Algol'},
            {'name': 'Forth'}, {'name': 'Pascal'})

    @fails_on('sqlite')
    def test_should_contain_string_in_name(self):
        assert 4 == Category.objects.filter(name__contains='a').count()
        assert 1 == Category.objects.filter(name__contains='A').count()

    @fails_on_everything_except('sqlite')
    def test_should_contain_string_in_name_on_sqlite(self):
        assert 5 == Category.objects.filter(name__contains='a').count()
        assert 5 == Category.objects.filter(name__contains='A').count()

    def test_should_contain_string_in_name_regardless_of_case(self):
        assert 5 == Category.objects.filter(name__icontains='a').count()
        assert 5 == Category.objects.filter(name__icontains='A').count()

Special thanks goes to Mike Bayer and the rest of the contributors to SQLAlchemy for providing such a great solution. I am constantly amazed by their code.

Stubbing Authentication in Your Controllers

written by Michael Trier, on Apr 6, 2007 8:31:00 AM.

I was trying to spec out a few of my controllers that had actions on them requiring authentication. After jumping through many mind hoops to figure out how to stub them out properly I asked the RSpec-Users list and received this solution from Graeme Nelson:

def mock_user_authentication(allow_user_to_pass=true)       
  controller.stub!(:login_required).and_return(allow_user_to_pass)
end

It’s elegantly simple and works well. I’m still fumbling around with this RSpec stuff and did not realize I could stub out a method directly on the controller. This has cleared up a big missing piece in my thinking.

Mocking RESTful Routes

written by Michael Trier, on Mar 27, 2007 1:12:00 PM.

I just spent about an hour trying to figure out why none of my RESTful routes were working properly within my RSpec controller specs. In my controller I had some boilerplate code like this:

# POST /categories
# POST /categories.xml
def create
  @category = Category.new(params[:category])

  respond_to do |format|
    if @category.save
      flash[:notice] = 'Category was successfully created.'
      format.html { redirect_to category_url(@category) }
      format.xml  { head :created, :location => category_url(@category) }
    else
      format.html { render :action => "new" }
      format.xml  { render :xml => @category.errors.to_xml }
    end
  end
end

The line causing the problem was:

redirect_to category_url(@category)

I kept receiving an error on the eval of category_url with an error description of “can’t convert Fixnum into String”.

I tried replacing @category with @category.id to see if I would get different results. The error went away but the test failed indicating that the id returned from the @category instance was not the same as I was expecting. This led me to determine that I needed to stub out the id property on my class. So I added the following to my setup:

@category.stub!(:id).and_return(1)

Everything worked. Problem solved. But wait, that’s ugly and smells of something wrong. I should be able to just pass the object to the category_url and have it return the correct value. What I did next was go down a rat hole trying to figure out what the named route was sending to the object to get the id. I had assumed id, but in fact it’s to_param, which I had already stubbed out as follows:

@category = mock_model(Category, :to_param => 1)

So what’s the problem? It turns out that to_param must return a string. Makes sense. I changed it to the following and everything worked perfectly:

@category = mock_model(Category, :to_param => "1")

It’s little things like this that make learning so much fun. This issue is really indicative of a much bigger problem—my lack of understanding of mocks and stubs. But I’ll have more to write about this later.

Overtesting, Who Cares?

written by Michael Trier, on Mar 18, 2007 9:05:00 AM.

As I indicated in a prior post, I’m beginning to wrap my head around RSpec and use it in a new project that I’m working on. I recently came across a post where someone had written something similar to the following, a spec for the has_many and belongs_to relationships in a model:

context "A Category with fixtures loaded" do
  fixtures :listings, :categories

  specify "should have many Listings" do
    l = categories(:cars).listings
    l.should_not be_nil
    l.should_not be_empty
    l.first.should be_an_instance_of Listing
  end
end

One of the comments on the blog post was critical of this approach, suggesting that this spec was validating Rails code and that the author should focus only on code that he / she has written. I’ve seen this argument several times in the past, and the issue actually came up briefly in the Advanced Rails Training course in Chicago.

While I agree that you shouldn’t be testing Rails code, that’s not what is going on here. The spec is for the existence of the defined relationship in the model, and for the fact that things are wired up properly so the relationship can actually be accessed. If someone were to inadvertently remove the has_many call from the model, the spec would fail, which is exactly the behavior we want.

Secondly, even if we were testing Rails code, I think it’s better to err on the side of overcoverage than to not write tests at all. It is important that developers are not so overwhelmed with the “right” or “wrong” way to do testing that they end up not testing at all.

The above code actually comes from a project I’m working on. This blog post aside, if you have recommendations on how it should be done differently, please let me know.

Working with Test::Rails

written by Michael Trier, on Mar 17, 2007 10:57:00 AM.

The past couple of weeks I’ve been digging into Test::Rails from the ZenTest library of tools. One of the main benefits to using Test::Rails is the ability to have separate Controller and View tests. It also includes a lot of test helpers that make the process of testing Views and Controllers much easier. I about gave up on Test::Rails a couple of different times; the docs are a bit sparse and there’s not a lot of information on the internet about how to set up and use Test::Rails. I finally happened upon a blog post that gave me a few tidbits of information and got me pointed in the right direction.

The Test::Rails package comes bundled with ZenTest. So let’s go ahead and get that installed first:

gem install ZenTest

ZenTest depends on the hoe and rubyforge gems. Be sure to include those dependencies as well.

ZenTest is a collection of four different testing helper packages. I’m not going to go into detail on the other packages, but I encourage you to investigate them, especially autotest – something I can not live without these days.

Now that we have ZenTest installed, we need to hook Test::Rails into our existing testing framework. To do this we need to require the test/rails file, which pulls in everything else needed to work with Test::Rails.

In the test/test_helper.rb file, add the following line, just before you require test_help. The top of my test/test_helper.rb file looks like the following:

ENV["RAILS_ENV"] = "test"
require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
require 'test/rails'
require 'test_help'

If you’ve read the Test::Rails rdoc files, you’ll see that you’re instructed to reopen Test::Rails::TestCase in place of Test::Unit::TestCase. If you do this you will likely experience a lot of problems with your unit tests, unless you change them to inherit from Test::Rails::TestCase instead of Test::Unit::TestCase. Personally I just leave my unit tests as is and leave the rest of my test/test_helper.rb file alone. Since Test::Rails::TestCase derives from Test::Unit::TestCase, everything will continue to work properly.

At this point you can continue on with the rdoc documentation on Test::Rails, specifically starting with the section titled “Writing View Tests”.

If you are attempting to convert your existing Functional tests into Controller and View tests, pay particular attention to the format of the sample Controller and View tests in the rdocs. Even better, get the VIC plugin written by Geoffrey Grosenbach. It contains three generators for generating the basic structure needed for Views, Integration, and Controller tests:

./script/generate integration_test JournalStories
./script/generate view_test Journals index edit new
./script/generate controller_test Journals index edit new

The nice thing about using Test::Rails is that you don’t have to convert everything wholesale. I still have some of my tests as functional tests, and others as Controller / View tests. Over time I will likely convert everything.

One thing lacking from the Test::Rails package is Helper tests. There is a Helper Test plugin available by Geoffrey Grosenbach and a great tutorial to go along with it.

Test::Rails also comes with a number of helpers that make testing a lot easier. Check out this Test Cheatsheet for a list of enhancements.

I really like the separation that Controller / View tests provide. Prior to using Test::Rails I always had this crazy suspicion that I was missing something. Now, with clean separation, I feel a lot more confident and have tests that are a lot more focused.

Topfunky Power Tools Plugin

written by Michael Trier, on Feb 27, 2007 5:00:00 AM.

I recently started using Topfunky’s Power Tools Plugin to clean up some of my testing routines. It’s a nice little package that brings together lots of different asserts that have been out there in the wild in one form or another.

I really enjoy the assert_required_fields method. I used to implement my model checking like so:

def test_should_require_login
  assert_no_difference User, :count do
    u = create_user(:login => nil)
    assert u.errors.on(:login)
  end
end

...and now with the Topfunky Power Tools Plugin I end up with the following:

def test_should_require_login_password
  assert_required_fields :create_user, :login, :password
end

Notice how it allows you to check multiple fields at once. There’s a lot more available in the plugin, but sometimes all it takes is one or two little things to make your day.