Stuart Gunter
http://stuartgunter.org


First release of Couchbase Maven Plugin
http://stuartgunter.org/release-couchbase-maven-plugin/
Mon, 12 Nov 2012 16:44:31 +0000

As I mentioned earlier, I’ve been working on a Maven plugin to interact with Couchbase. This is mostly to be used for integration tests in our build process, where we want to be able to create a bucket at the pre-integration-test phase, use it for some tests, then delete it at the post-integration-test phase. All very simple, but there was no plugin to do this… so I wrote one. If you want more details, please check out my previous post on this topic.

If you just want to start using it, it’s now on Maven Central and ready for download. These are the project coordinates to include in your pom.xml file:

<plugin>
  <groupId>org.stuartgunter</groupId>
  <artifactId>couchbase-maven-plugin</artifactId>
  <version>1.0.0</version>
</plugin>

And here are some links to get you started:

Artifact on Maven Central
Plugin Site Docs
GitHub Project

Enjoy, and please let me know if there are features you’d like added, bugs you’d like fixed, or even just that you’re using it.

PS: For those who are interested… I abandoned site deployment to Amazon S3 due to some issues I was having with the maven-s3-wagon provider. I’m still investigating and hoping to migrate over there at some point. In the meantime I’m sticking with GitHub Project Pages.

Static type checking is good, but…
http://stuartgunter.org/static-type-checking-good-but/
Fri, 09 Nov 2012 09:15:34 +0000

Ignoring the usual arguments for/against statically/dynamically typed languages, there’s a more subtle danger I’ve seen lurking in the shadows of some statically typed programmers’ minds.

I’ve heard the words “it’s only JavaDoc” too many times to count lately. That seemingly innocuous statement can be pretty dangerous, because you’re in effect saying “I might have completely changed the meaning of what this class/method does, but it’s ok ‘coz it compiles”. You might be misled into believing that a small change in documentation here or there is no big deal, but the meaning of the words you use shapes the expectations of the developer who’s using your code. Something as small as changing “will” to “should” can carry a significant semantic shift that may be far more damaging than a change to the type signature. Statically typed languages might tell you when there’s a compilation error, but they will not always tell you when the contract has changed!
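To make that concrete, here’s a small, hypothetical Java example (the interface and its Account type are invented purely for illustration). Both versions compile identically, and every caller compiles against either one:

/** Hypothetical service interface, before the “innocuous” doc change. */
interface AccountService {
  /**
   * Looks up the account for the given user.
   * This method will never return null; unknown users yield an empty account.
   */
  Account findAccount(String userId);
}

/** The same signature after the doc change; the compiler sees nothing different. */
interface RevisedAccountService {
  /**
   * Looks up the account for the given user.
   * This method should never return null, but callers no longer have that guarantee.
   */
  Account findAccount(String userId);
}

/** Stub type so the snippet is self-contained. */
class Account { }

A caller that skips the null check because the original JavaDoc promised “will never return null” is now quietly relying on a contract that no longer exists, and the compiler has nothing to say about it.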

I don’t know what programmers using dynamically typed languages do – because I’ve only used them for playing around on pet projects (so far) – but I’d imagine (or at least hope) that a higher value is placed on docs than I’ve seen in many programs written in statically typed languages.

If you’re not convinced, here’s a little something to mull over: if word choice is not important, then RFC 2119 was a waste of time!

Couchbase Maven Plugin
http://stuartgunter.org/couchbase-maven-plugin/
Tue, 06 Nov 2012 15:01:49 +0000

We recently started using Couchbase Server at work and came up against a minor inconvenience in the lack of readily available build tools that integrate with Couchbase. We use Maven for our builds, and noted the absence of a plugin to handle interactions with Couchbase as part of our build process. In our particular case, we have a number of integration tests that rely on a running instance of Couchbase to store/retrieve data; and I can think of plenty more reasons why you might want to perform some administrative operations on Couchbase during a build. Having previously used Cassandra (and the cassandra-maven-plugin), we were used to being able to spin up an instance of the store, use it, and shut it down, all as part of the standard Maven build lifecycle. But Couchbase is slightly different in that you can’t simply spin up a new instance in the same way as Cassandra (at least, not as far as I can tell).

So we wanted to be able to interact with an already running instance of Couchbase, but provide separate buckets for each build. This avoids having parallel builds stepping on each other’s toes and also makes clean-up very simple. Fortunately Couchbase provides a REST API for administrative operations, so I decided to whack together a very basic plugin to handle the minimal cases that we need – creating & deleting buckets. Initially we just used the exec-maven-plugin to execute commands with the couchbase-cli, but that’s a little more environment-dependent as it needs the tool to be locally installed, permissions appropriately configured, and a bunch of other stuff that I won’t go into here.

Using the Plugin

The plugin is still in a fairly rough form, but it does the job. I’ll keep adding features and goals to the plugin as time permits, so please do send feature requests (or submit them directly to the project) to make it more broadly useful. The couchbase-maven-plugin project is hosted on GitHub and I’m aiming to do the first release very soon (assuming all goes well). I’m just looking for an easy way to deploy site docs somewhere public and then I should be ready to go – so if you have any suggestions for this, please let me know. I’m currently thinking of deploying to an Amazon S3 website-configured bucket, but just need to figure out how to get Maven to do that for me.

Here’s an example of how you can use the plugin to create & delete a bucket for integration tests:

<plugin>
  <groupId>org.stuartgunter</groupId>
  <artifactId>couchbase-maven-plugin</artifactId>
  <version>1.0.0</version>
  <configuration>
    <host>http://localhost:8091</host>
    <username>myuser</username>
    <password>mypass</password>
  </configuration>
  <executions>
    <execution>
      <id>create-bucket</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>create-bucket</goal>
      </goals>
      <configuration>
        <bucketName>test-bucket</bucketName>
        <bucketType>memcached</bucketType>
      </configuration>
    </execution>
    <execution>
      <id>delete-bucket</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>delete-bucket</goal>
      </goals>
      <configuration>
        <bucketName>test-bucket</bucketName>
      </configuration>
    </execution>
  </executions>
</plugin>

Supporting Parallel Builds

So one of the things I mentioned at the start was the ability to support parallel builds. Luckily this isn’t something we need to handle directly within the plugin, as it’s already supported by Maven and your build tool (in my case, Jenkins). Whenever a job executes, Jenkins sets a whole bunch of environment variables that you can use in your Maven build. In this case, we can safely use the BUILD_TAG variable as our bucket name, which isolates the Couchbase data buckets used for each job. You can find a list of the environment variables set by Jenkins here.
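As a rough sketch of what that looks like in the plugin configuration (reusing the create-bucket execution from the earlier example), the bucket name can simply reference the BUILD_TAG environment variable through Maven’s standard env.* property prefix; the delete-bucket execution would reference the same property:

<execution>
  <id>create-bucket</id>
  <phase>pre-integration-test</phase>
  <goals>
    <goal>create-bucket</goal>
  </goals>
  <configuration>
    <!-- Jenkins sets BUILD_TAG per build; Maven exposes it as ${env.BUILD_TAG} -->
    <bucketName>${env.BUILD_TAG}</bucketName>
    <bucketType>memcached</bucketType>
  </configuration>
</execution>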

I hope this plugin is useful to someone out there. Feel free to fork it, submit pull requests for new functionality, or just use it quietly without anyone knowing. I’ll post an update when the first release is available in Maven Central.

UPDATE: This has now been released and is available on Maven Central. More details here.

Configure Java 7 in IntelliJ IDEA 10.5 on Mac
http://stuartgunter.org/configure-java-7-intellij-idea-10-5-mac/
Tue, 18 Sep 2012 21:12:41 +0000

I usually run IntelliJ IDEA 11 for most of my development, but the installation on my Mac at home hasn’t yet been upgraded and is still running 10.5.4. I was using it tonight for a little sideline project I’m working on and needed to set up Java 7, which proved to be a little more involved than I’d hoped. Hopefully this ranks well in Google so others with the same issue can get some coherent (I hope) help in one place, rather than a bunch of Stack Overflow questions and other random links that are similar but not quite on the money.

Install Java 7

Firstly, you can download Java 7 from Oracle (as of this writing, it’s at JDK 7u7). Nothing special here. Download it. Install it. Done.

Once installed, you can find it at /Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/ (or obviously whichever version you’ve installed). Sadly this bit of info wasn’t made blatantly obvious, so I figured it’d be useful to know before you go filesystem hunting (and for me to come back to in the future when I’ve forgotten).

Configure IntelliJ IDEA 10.5

The latest version of IntelliJ (currently 11.1.3) didn’t seem to have any problems configuring Java 7, so these steps aren’t necessary there. However, if you’re running 10.5.4 (or, I assume, some other 10.x version) then you’ll need to do some extra work.

First, you need to add Java 7 to your list of supported SDKs. You can configure that in Project Structure > Platform Settings > SDKs. Add your Java 7 SDK by selecting the appropriate “/Contents/Home” location (see the install location above), and you’ll notice that the list of JARs on the classpath is significantly shorter than what you had for Java 6. And that’s where the problems all started for me. It automatically included everything in the lib folder, but nothing under the jre/lib folder.

After much digging around, I found this page that highlights which JARs are required for IntelliJ (and Eclipse, for that matter). Here’s that list, reproduced for your sanity. Copy & paste away!

/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/lib/ant-javafx.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/lib/dt.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/lib/javafx-doclet.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/lib/javafx-mx.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/lib/jconsole.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/lib/sa-jdi.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/lib/tools.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/jfr.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/jce.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/charsets.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/JObjC.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/jsse.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/management-agent.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/resources.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/rt.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/ext/dnsns.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/ext/localedata.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/ext/sunec.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/ext/sunjce_provider.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/ext/sunpkcs11.jar
/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk/Contents/Home/jre/lib/ext/zipfs.jar

Hope that was helpful. If you notice any mistakes or omissions, please let me know so I can fix it for the next person that needs it.

Running a Clojure Script in a Maven Build
http://stuartgunter.org/running-clojure-script-maven-build/
Sat, 15 Sep 2012 09:26:40 +0000

I recently needed to include some custom functionality in a Maven build and decided to use Clojure to write a little script for it. The process of getting my script running in Maven took me on an interesting journey, so I figured it’d be something that would be more broadly useful – especially to other Clojure newbies like me! Here’s a quick run-down of how you can get a Clojure script up & running in a matter of minutes.

There are two Clojure plugins for Maven that I’m aware of: clojure-maven-plugin and Zi. From what I could tell, Zi doesn’t provide a goal for executing a Clojure script so that unfortunately went out the window. I say “unfortunately” because it looks like a pretty cool plugin! Fortunately, clojure-maven-plugin provides a run goal for this exact purpose.

One other plugin I decided to investigate is exec-maven-plugin. This obviously doesn’t have anything specifically to do with Clojure, which means it involves a little more manual effort (if you want to think of it that way), but it provides some really nice features that clojure-maven-plugin doesn’t.

To show how this works, I’ve written a trivial script that simply prints out some text:

(ns cljmvn)

(println (str "Hello " (first *command-line-args*)))

clojure-maven-plugin

My first attempt was to use the clojure-maven-plugin, for no other reason than that it was built specifically to handle Clojure, so I figured it would most likely be the best fit. While this may be true if you’re building a Clojure project, it has some idiosyncrasies that make it less than ideal if you’re not. Unfortunately this plugin doesn’t seem to include plugin dependencies on the classpath when executing the run goal; it only includes compile-scoped dependencies of the project. This means you need to have any requirements of your script (i.e. clojure.jar and any other libraries you might need) declared as compile-scoped dependencies of your project. This isn’t really an ideal solution – particularly if you’re only using Clojure for a build script and not as your language of development. In my case, this proved to be a deal-breaker. My project was pure Java (unfortunately) and I wasn’t prepared to package up Clojure with my project just for the sake of running a Clojure script at build time. I think it’s a valid use case though, so I have submitted a request to include this feature in the project. Aside from this issue, the plugin seems pretty good.

Here’s an example of how you can configure it to run a script:

<dependencies>
    <dependency>
        <groupId>org.clojure</groupId>
        <artifactId>clojure</artifactId>
        <version>${clojure.version}</version>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>com.theoryinpractise</groupId>
            <artifactId>clojure-maven-plugin</artifactId>
            <version>1.3.11</version>
            <executions>
                <execution>
                    <id>exec-clojure-script</id>
                    <phase>validate</phase>
                    <goals>
                        <goal>run</goal>
                    </goals>
                    <configuration>
                        <script>src/build/scripts/cljmvn.clj</script>
                        <args>Stuart</args>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Note that we had to declare the dependency on Clojure as a project-level dependency. This is only really suitable if you intend to package Clojure with your project. Fortunately, the exec-maven-plugin doesn’t have this limitation.

exec-maven-plugin

The exec-maven-plugin has some really useful features that make it a better choice for this kind of thing – at least, I think it’s better. Firstly, it allows you to include plugin dependencies on the classpath. This is absolutely essential if it’s to be useful on any kind of Maven project. In my case, this meant that I didn’t need to package Clojure in my pure Java project just to get a build script to run. Big win! There are some other nice benefits too. For example, if your script creates output that is to be used as source for the project (e.g. code or resource generation), this plugin allows you to specify where that source is written out and automatically adds the directory to the list of source locations for the project (check out the sourceRoot and testSourceRoot configuration properties), as sketched below.
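As a rough illustration of that last point, the relevant configuration elements look something like this (the generated-sources path is just an assumption for the sake of example):

<configuration>
    <mainClass>clojure.main</mainClass>
    <includePluginDependencies>true</includePluginDependencies>
    <!-- hypothetical output directory written by the script; it gets added to the project's compile source roots -->
    <sourceRoot>${project.build.directory}/generated-sources/clojure</sourceRoot>
</configuration>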

Here’s an example of how you can configure this plugin to run the same script as above:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.2.1</version>
    <executions>
        <execution>
            <id>exec-clojure-script</id>
            <phase>validate</phase>
            <goals>
                <goal>java</goal>
            </goals>
            <configuration>
                <arguments>
                    <!-- the script to import -->
                    <argument>src/build/scripts/cljmvn.clj</argument>
                    <!-- script args -->
                    <argument>Stuart</argument>
                </arguments>
                <mainClass>clojure.main</mainClass>
                <includePluginDependencies>true</includePluginDependencies>
                <includeProjectDependencies>false</includeProjectDependencies>
            </configuration>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>org.clojure</groupId>
            <artifactId>clojure</artifactId>
            <version>${clojure.version}</version>
        </dependency>
    </dependencies>
</plugin>

So the only extra configuration required here is mainClass, which we specify as clojure.main. This is really no different to the examples you can find on the Clojure site.

Hopefully that’s been helpful to someone. As I said before, I’m a bit of a newbie to Clojure so if there’s a better way this can be done, please let me know.

Update 16/09/12: I created an issue on GitHub for the plugin dependencies feature in clojure-maven-plugin and it has subsequently been added to the plugin. I have to say it was an incredibly fast turnaround, so major credit to Mark Derricutt (talios) for that. Since version 1.3.12, you can now specify the <includePluginDependencies> option for all goals (it defaults to false), which means it’s a lot easier to use for non-Clojure projects.
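For reference, a minimal sketch of how that might look (assuming the option simply sits in the plugin’s <configuration> block, alongside the script settings shown earlier):

<plugin>
    <groupId>com.theoryinpractise</groupId>
    <artifactId>clojure-maven-plugin</artifactId>
    <version>1.3.12</version>
    <configuration>
        <!-- puts the plugin's own dependencies (e.g. Clojure) on the script's classpath -->
        <includePluginDependencies>true</includePluginDependencies>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.clojure</groupId>
            <artifactId>clojure</artifactId>
            <version>${clojure.version}</version>
        </dependency>
    </dependencies>
</plugin>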

Experimenting with Horizontal WIP Limits in Kanban
http://stuartgunter.org/experimenting-horizontal-wip-limits-kanban/
Mon, 06 Aug 2012 10:04:07 +0000

I’ve been using Kanban in my team for quite some time and have found it to be a very productive way of working for our particular situation. There are various reasons why this works better for us than Scrum did, but that’s not really what I wanted to talk about now. What I wanted to do was show our latest experiment with WIP limits.

So we’re probably all familiar with the usual way WIP limits are set within Kanban. You define the states (e.g. Ready, In Progress, Review, etc.) for your stories and each state has its own WIP limit that may not be exceeded. This works pretty well and ensures that we’re not working on too many things at once. It also makes sure that the team deals with problems early, as it often prevents us from moving on when there’s an obstacle in the way. It’s a gentle reminder that there’s no such thing as “someone else’s problem” when working on a team.

Vertical WIP limits

The problem I’ve found recently is that the vertical WIP limits we usually set in Kanban aren’t working as well as I’d like. In our example, we have a team of 6 people but we have about 5 states. We can come up with some reasonably small WIP limits for each state but that would still allow us to have more stories in play at one point in time than we have people to work on them. This to me seems risky. If your WIP limits allow you to have “idle” stories, something is wrong.

This is roughly how our Kanban board has been set up for the past few months (these aren’t our states… just there for illustrative purposes):

[Board diagram: a column per state, each column with its own WIP limit]

Horizontal WIP limits

Last week I decided to do something I’ve been wanting to do for quite a while now… I added a horizontal split to our board and we now operate as two groups within the team, each with their own global WIP limit. Let me explain this in more detail…

Firstly, we’re still a single team, but we operate as two groups within that team. The reason I don’t see this as two separate teams is that the groups are flexible and will change regularly. Think of it as similar to pair programming – just because you’re pairing, doesn’t mean you’re now in a different team. A lot of the benefit of working in a team is learning from and teaching each other – and that’s amplified when you have just the right number of people to teach and learn from. We conveniently have two developers and one QA per group, so we maintain a sufficiently mixed skill set across the groups. These groups are not isolated – so there’s nothing preventing one group from asking the other for help… in fact, it’s strongly encouraged. We still work in the same area and attend the same daily standups.

Secondly, the global WIP limit is now set horizontally per group. In our context, this makes a lot of sense. We now have three people that can work on at most 2 stories at once. This means that while one story might be in code review (or some other ‘waiting’ state), they can continue working on the other. But because there’s only one other thing to work on, it naturally leads each group to complete one story fully before getting too invested in the next. Having both stories in code review at the same time is most likely to be due to poor time management or planning (of course, it could be something else, but these seem most realistic).

So this is roughly what our board looks like now (I realise the location of the WIP limit can be misleading… it’s a lot clearer on our real board):

[Board diagram: the same columns split into two horizontal group lanes, with a single WIP limit per lane]

What are the benefits?

This is the extremely early stage of an experiment, so this is more of a hypothesis than a list of proven benefits. Once we’ve been running with this for a few weeks / months, I’ll report back on our findings.

Team Multitasking

This is a tricky topic, because at face value it appears that it’s playing into the hands of the “Everything is Priority 1” crowd. In fact, that’s not what we’re doing (or rather, it’s not what we’re aiming to do). One of the difficulties we have in building a platform rather than an application is that we have multiple customers with related but different requirements. In the spirit of Agile, we aim to deliver working and usable functionality regularly so our customers can benefit from these new features. If we only worked on a single feature at a time, only one customer would benefit until that feature was complete; and we could then move on to satisfying our next customer. Something along the lines of round-robin feature delivery. Not ideal.

So what we’re hoping to gain with the team groups is to have each group potentially work on a different feature. I say “potentially” because it’s not a rule set in stone. That way we can regularly deliver useful functionality to multiple customers at the same time – but it does have the unfortunate effect that we take longer to fully complete features. It’s a trade-off though. Our capacity has not changed; we’re just attempting to use it differently. We’ll see how well this goes in time.

Less context switching

In an environment where priorities change fairly rapidly, we must be able to react quickly and effectively to those changes. Previously, we were reacting quickly but not really that effectively. The reason we weren’t effective is because late changes or emergencies would impact the whole team. Everyone would be aware of the defect or story that came in late and would all be distracted from what they were doing because of it. Momentum was lost as a result, and the cost of late changes was higher than it needed to be.

Now that we have two separate groups, we can choose to distract only one group with late changes. This means that Group A can continue working on Feature X while Group B drops what they’re doing to react to some emergency that has just cropped up. We still want the whole team to be aware of work in progress, and this will be reported and discussed in the daily standups, but Group A doesn’t need to concern themselves with the details – they just need to be aware that it’s going on. Of course, if warranted, we could put the whole team on the emergency – but I’ve found that it’s very rare that we need to go to that extreme.

More focus on pair programming

The single most common topic in our retrospectives is pair programming. Without exception, everyone on the team finds it beneficial. This comes back to the teaching and learning thing again, and diversity in pairing with different people on different problems is very helpful. With the new groups we’re firmly enabling pairing, to the extent that our WIP limit prevents stories from being completed without some pairing effort (group size = 3; WIP limit = 2). It’s not enforced, because that again would be implying that one size fits all and every story is suitable for pairing (which we’ve also found is not always true). However, it means that no external influence can prevent the team from pairing if they so choose. No one can come to the board and under threat of deadlines or anything else tell us to work on more stories. Our WIP limit will not allow it, and so it both protects the team and the flow. Nice!

More focus

Lastly, I think this will help focus the team on completing one story at a time. Flow in Kanban is really important. “Flow” is really just another way of saying “momentum”. Breaking the momentum of a productive team is guaranteed to impact their effectiveness. But sometimes the team can unintentionally break their own momentum by trying to do too much at once. This is very often a well-intentioned attempt to get more done, but it backfires because more work in progress does not imply more work done. Hopefully our small group focus will help bring people together to solve a problem completely before getting too invested in the next. This does rely on effective exit criteria for each of the states, but I think that goes without saying.

Right, so that’s our latest team experiment and what we hope to gain from it. I’ll post an update after we’ve tried it for a few weeks or months to report back on the outcomes. Hopefully it’s in line with our expectations!

Unit Testing Views in Spring MVC
http://stuartgunter.org/unit-testing-views-spring-mvc/
Wed, 04 Jul 2012 18:38:48 +0000

We’re big on testing at Betfair. Personally, I don’t think that should be a surprise… what should be a surprise is any serious software developer that ISN’T. But one area of development that is often neglected when it comes to testing is views. By ‘views’, I mean the templates that you use to render your user interface components. We test that our classes behave as we expect, so why don’t we test that our views (you know… the bits our users actually see) behave as we expect? Maybe you do, maybe you don’t… but if you don’t, you should!

The concept

The main principle here is to write a test that leverages the MVC pattern in much the same way as the real application, but exploits the power of standard unit testing practices (like mocking, assertions and verification) to ensure that the view, when rendered with a known model, produces the output we expect. Following the BDD style of testing (like we do):

Given a certain model
When I render the view with that model
Then I expect the rendered HTML content to conform to some predictable result

How does it work?

We use TestNG for our unit and integration tests, which is supported by Spring via the spring-test artifact; but I was quite surprised when I found that Spring doesn’t provide a mechanism for creating an integration test within a WebApplicationContext. By extending AbstractTestNGSpringContextTests you get access to an ApplicationContext, but there’s (currently) no way to configure this to return a web application context. Fortunately the source is freely available and the changes are minimal, so I wrote an implementation that tackles this issue quite well (with help from the world wide web, of course).

Integration Test? I thought you said Unit Test?

Yes, that can be a source of confusion – but I really do mean unit test. It just so happens that we can use the infrastructure that Spring provides via its integration testing features in order to execute the unit test. Think of it this way… if you want to unit test a Java class, you instantiate the class (perhaps with some mocks), and then you invoke operations on that instance. Views can’t be instantiated in the same way as Java classes, but require some plumbing to make it happen – enter Spring’s integration testing capabilities. But even though we’re using integration testing capabilities, the unit testing principles still apply. We’re using the plumbing of the view rendering functionality within Spring to prepare and execute our unit test, but we’re still testing the view in isolation. The model we provide should contain mocks (and whatever else you would normally provide in the setup of a unit test). It should not rely on any external behaviour that, if changed, would break our test. We’re also using those same mocks (and other stuff) to verify that particular operations were invoked (or not) as expected or that the rendered output is valid with respect to the input given. This is a unit test.

Enough yakking… show me the code!

Fortunately there’s barely any code, so that won’t take long. This is actually quite a trivial concept to implement as most of the hard work is done for you. I’ve created a very simple demo web application to show how this might work and hosted it on GitHub, so head over there and check it out if you just want to dive straight in. I’ve structured this in a way that allows the framework to be separately packaged, as in our case we provide this as part of our web platform for all applications to build upon. So we use it both internally within the platform we build and our users (customer-facing products) also use it to test their views too.

Here’s a slightly more detailed explanation for those that want it – including some background on design decisions that I’d be very grateful to receive feedback on.

Design Decisions

One of the issues I grappled with is how to structure a test. There are loads of options here, and I’m still not convinced that I’ve found the best one. Fortunately this is still in its infancy, so it’s easy to change. Given that we use TestNG, I wanted to leverage some of the cool features it provides – like factories and data providers. I started off down this route, but soon found that it was just needlessly introducing complexity into what should be simple. Admittedly I may have been doing it wrong – so other suggestions are more than welcome.

After seeing this complexity, I removed the factories and data providers and was left with a single test base class that provides the actual test. The only thing each test must do is implement two methods: given(Model) and then(Document). As I mentioned earlier, we use the BDD style of testing, so this seemed like a natural way to express what these methods do. The base class contains the test, which is basically just a template method that does the test-specific setup, renders the view, then invokes the test-specific assertions & verifications.
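The base class itself isn’t reproduced in this post (it lives in the GitHub project), but conceptually it boils down to something like the sketch below. Treat this as an assumption-laden outline rather than the real code: the actual framework wires up a proper web application context, whereas this version leans on a plain ViewResolver from the test context, the mock servlet objects from spring-test, and Jsoup for parsing.

import java.util.Locale;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.springframework.mock.web.MockHttpServletRequest;
import org.springframework.mock.web.MockHttpServletResponse;
import org.springframework.test.context.testng.AbstractTestNGSpringContextTests;
import org.springframework.ui.ExtendedModelMap;
import org.springframework.ui.Model;
import org.springframework.web.servlet.View;
import org.springframework.web.servlet.ViewResolver;
import org.testng.annotations.Test;

public abstract class AbstractViewRenderTest extends AbstractTestNGSpringContextTests {

  // Test-specific setup: populate the model and return the logical view name
  protected abstract String given(Model model);

  // Test-specific assertions against the rendered HTML
  protected abstract void then(Document html);

  @Test
  public void renderView() throws Exception {
    // Given: let the subclass build the model and choose the view
    Model model = new ExtendedModelMap();
    String viewName = given(model);

    // When: resolve and render the view using the ViewResolver from the test context
    ViewResolver viewResolver = applicationContext.getBean(ViewResolver.class);
    View view = viewResolver.resolveViewName(viewName, Locale.UK);
    MockHttpServletRequest request = new MockHttpServletRequest();
    MockHttpServletResponse response = new MockHttpServletResponse();
    view.render(model.asMap(), request, response);

    // Then: hand the parsed output to the subclass for its assertions
    then(Jsoup.parse(response.getContentAsString()));
  }
}

Because the test method follows the given/when/then shape, a concrete test only has to describe its model and its assertions, as in the example below.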

Example Code

import java.util.HashMap;
import java.util.Map;

import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.ui.Model;

import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertNotNull;

@ContextConfiguration(locations = "/my-mvc-context.xml")
public class ExampleTest extends AbstractViewRenderTest {

  @Override
  public String given(Model model) {
    Map<String, String> module = new HashMap<String, String>();
    module.put("id", "some-id");
    module.put("text", "some-text");

    model.addAttribute("module", module);

    return "examples/demo"; // renders the template resolved for this view name, e.g. "src/main/webapp/examples/demo.ftl"
  }

  @Override
  public void then(Document html) {
    Element element = html.body().getElementById("some-id");
    assertNotNull(element);
    assertEquals("some-text", element.text());
  }
}

In the example above, the MVC context declared in @ContextConfiguration is referenced to include the ViewResolver that is used by the application. The view testing framework does not place any restrictions on the view rendering technology chosen, provided it can operate outside of a servlet container. This test would work equally well using either FreeMarker or Velocity (for example), provided that the appropriate ViewResolver was specified in the application context.

You’ll also notice that we’re using the Jsoup Document for the then(..) method, as that provides a nice way of dealing with the DOM. Obviously this won’t work if you’re using your templates to render non-HTML content (it’s a seriously minor change to just return a String containing the rendered view).

Context Configuration

To keep the test as simple (and fast) as possible, it is recommended that the application context be limited to the minimal definition of beans that are required for rendering.

  • It MUST include the ViewResolver and any associated configuration required to render a view (this SHOULD be the same configuration you use in your application if you want to test the view as it would appear when running for real).
  • It MAY also include the definition of a MessageSource if you prefer to test your actual localised text instead of the internationalised capability (more on this in the next section).
  • It MUST NOT include a LocaleResolver, as this would conflict with the LocaleResolver defined for use within the test framework (as we are not testing an incoming request, we cannot reliably use any LocaleResolver – so this is handled internally by the test framework to ensure correct locale resolution).

Most other bean definitions are not specifically related to view-rendering and should be defined elsewhere. By defining your application contexts in this way, you can compose multiple contexts together as and when required but also use them in limited scope when required. For view unit testing, you do not need (and should probably not define) any beans beyond the view-rendering beans specified above. Other code that facilitates view rendering – e.g. instances of classes passed into the model to be used by your template – should be mocked to avoid introducing an unnecessary dependency within the test. This means that the behaviour of the template can be verified independent of the behaviour of the referenced classes.

Testing Translations

Many applications make use of Spring’s localisation capabilities to provide a site suitable for multiple locales, but this often raises the question about how to test that functionality. I’ve seen many tests that rely on the translated values within resource bundles, but these prove to be extremely fragile and tend to break frequently (well, at least as frequently as the translations change). Testing in this way also means you need to update your tests when the template you’re testing hasn’t actually changed – which should ring warning bells!

My preferred approach to testing translations is to test that the application has been internationalised, not each and every localisation. This is a very important distinction that is lost on many people. To avoid confusion, here’s a brief definition of each (shamelessly copied from Wikipedia):

Internationalisation is the process of designing a software application so that it can be adapted to various languages and regions without engineering changes.
Localisation is the process of adapting internationalised software for a specific region or language by adding locale-specific components and translating text.

This approach to view unit testing is built in as the default mechanism (which can be overridden if you really want to). If you choose to use this default approach (strongly recommended) then all you need to do is include assertions that check for the presence of resource bundle keys. Take a look at the fwk-context.xml file in GitHub and you’ll see the MessageSource has been configured to always return the code that was supplied (no basenames provided and useCodeAsDefaultMessage=true). If you don’t get the appropriate key back, you certainly won’t get localised text in any locale! If you do get the key back, you can be certain that the appropriate text will be displayed. Whether or not the text in the resource bundle is correct is an entirely separate issue.
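For reference, the kind of bean definition being described looks roughly like this. This is only a sketch: the fwk-context.xml in the GitHub project is the source of truth, and the class shown here is just one of Spring’s MessageSource implementations that supports useCodeAsDefaultMessage.

<!-- No basenames are configured, so no resource bundle is ever consulted;
     useCodeAsDefaultMessage makes getMessage(..) return the supplied code itself. -->
<bean id="messageSource"
      class="org.springframework.context.support.ReloadableResourceBundleMessageSource">
    <property name="useCodeAsDefaultMessage" value="true"/>
</bean>

Your assertions then simply check that the rendered output contains the expected message keys, which is exactly the internationalisation (rather than localisation) check described above.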

Is that all?

Yes, that’s all. Please go and play around with this and send your feedback. I’d particularly like to know whether you’ve done a similar thing in your projects and how your approach differs. Also, it’d be great to get some feedback on the test design… like I said, I’m sure it can be improved – I’ve just been staring at it too long!

Update: I forgot to mention in the original publishing of this post that some of the ideas were derived from a post by Ted Young. Somehow the reference got lost when I was writing up. But thanks to Ted for his ideas that helped me get this working.

Adventures in Clojure
http://stuartgunter.org/adventures-clojure/
Wed, 04 Jul 2012 05:30:33 +0000

After many months of wading in the shallow waters of functional programming, I’ve finally decided to dive in head first and do it properly. My interest was piqued after a week of Haskell programming and ‘pure’ functional programming immersion, followed by a few dips into Scala (which is definitely something I want to spend a bit more time on when I can afford it).

Being someone who doesn’t like to follow the hype, I prefer to wait for the right moment to get stuck in and also choose my tools carefully. If I want to choose the best tool for the job, I gotta know more than just one (if all you have is a hammer…)! So to get started, I’ve decided to dig deeper into Haskell and Clojure and see where that takes me. Why these languages? Well, Haskell is particularly relevant to me right now as it forms a key part of my MSc Software Engineering thesis; and Clojure appears to be an incredibly powerful language on a platform that I’m familiar with… the ubiquitous JVM.

So far I’ve read most of Real World Haskell and also the first few chapters of Clojure Programming, and will no doubt create a few simple applications along the way to make sure I’ve got my head fully around the functional approach to software engineering. These are likely to be web applications, as that’s my primary focus and interest both at work and personally. I also went along to the Functional Web Architecture event at Skills Matter last night and was introduced to both Ring and Compojure – two interesting Clojure web frameworks that look fairly straightforward to get started with (although I’ll be interested to see whether this simplicity remains as the complexity of the application grows).

I’ll do my best to report back progress for anyone else that shares this interest and hopefully get some useful feedback. I’m also curious to know which companies use Clojure in production or as a significant part of the development efforts.

Experimentation is intentional
http://stuartgunter.org/experimentation-intentional/
Thu, 31 May 2012 11:23:50 +0000

I found Seth Godin’s post today about experimentation very interesting. He very clearly spells out the difference between hiding from your failure by calling it an experiment, and intentionally experimenting where ‘failure’ is an acceptable outcome. Experimenting with something that doesn’t work is still a successful outcome – you’ve learned a valuable lesson from the experiment and your application can improve as a result of this. Experimentation must be intentional, otherwise it’s not experimentation – it’s just an accident or outright failure. As Seth says: “You don’t get to call it an experiment after it fails.”

The reason why this is particularly interesting to me right now is because the team at Betfair have recently built an excellent multivariate testing framework within our new Site Platform. This will be used to run experiments on our new sports betting site as we work harder to deliver what our customers want and what works best.

This is one of the major advantages of operating a web application, or at least one that you have a high degree of control over. It enables you to experiment, learn, and improve. This is an extremely powerful weapon in your competitive advantage arsenal… if you’re able to wield it appropriately.

Are all your lifts working?
http://stuartgunter.org/lifts-working/
Thu, 26 Apr 2012 07:20:09 +0000

The building where I work is currently undergoing renovations. When it’s all over, we’ll have a better working environment for about 2,000 people: a bigger and better canteen, more space, and no doubt a whole lot of other cool stuff. The downside is the temporary disruption caused by the contractors while the improvements are underway.

One of the biggest disruptions is to our ability to get between floors. Some central stairwells have been blocked off, which leaves the lifts (or elevators, for my American friends) as the only means of getting up or down a single floor. Of the 4 lifts, one serves just the basement and ground floor. The remaining three lifts serve about 2,000 people between the ground and 4th floors. Needless to say, there’s massive congestion at peak times – like lunch.

What I find interesting about this is that no one thought to open the 4th lift up to the entire building. What appears so obvious to the users isn’t always as obvious to the builder.

As software developers, this is a lesson we often learn the hard way. We seldom ask customers what they want. We seldom listen to their complaints or frustrations. We seldom watch them to see how they use our software. We seldom think of the little things that could make their lives better while the big things are being built.

So… are all your lifts working?
