Tuesday, 15 May 2012

Easy Unit and Integration Code Coverage

This example shows how to generate coverage for unit and integration tests using Maven and Sonar.
It uses very simple techniques and should only take 10-15 minutes to get running in any existing Maven build.
It can be used across unit, integration, ATDD or any other kind of test suite.
The coverage results are shown in Sonar.

What's Coming?

My previous post showed how to use JUnit categories to easily split unit and integration test suites.
http://johndobie.blogspot.com/2012/04/unit-and-integration-tests-with-maven.html

The next logical step is to be able to look at metrics for each test suite.
This example shows how to do that using Jacoco and Sonar.

Code

The code for the example is here.
svn co https://designbycontract.googlecode.com/svn/trunk/examples/maven/categories-sonar
mvn clean install sonar:sonar

Sonar.

This example relies on Sonar to show the code coverage metrics. Sonar is a fantastic open source code quality tool that everyone should have a look at.
http://www.sonarsource.org/

For our example, a couple of simple configuration changes are needed.
The following link shows how to install Sonar and make the changes:
http://johndobie.blogspot.com/p/setting-up-sonar.html

Splitting The Test Suites.

This example relies on JUnit categories to split our tests.
We define a marker interface and then apply it to the tests we want to split.
public interface IntegrationTest {}
The category annotation is added to your test class. It takes the name of your new interface.
import org.junit.experimental.categories.Category;
@Category(IntegrationTest.class)
public class ExampleIntegrationTest{
 @Test
 public void longRunningServiceTest() throws Exception {
    
 }
}
The whole process is very simple and is fully explained here
http://johndobie.blogspot.com/2012/04/unit-and-integration-tests-with-maven.html

Analyzing The Code Coverage

We use the Jacoco plugin to measure the code coverage. There is an overview of Jacoco here:
http://johndobie.blogspot.com/2012/01/unit-test-code-coverage.html
We first define the directories for the jacoco coverage files.
<coverage.reports.dir>
  ${basedir}/target/coverage-reports
</coverage.reports.dir>
<sonar.jacoco.reportPath>
  ${coverage.reports.dir}/jacoco-unit.exec
</sonar.jacoco.reportPath>
<sonar.jacoco.itReportPath>
  ${coverage.reports.dir}/jacoco-it.exec
</sonar.jacoco.itReportPath>
<sonar.jacoco.jar>
  ${basedir}/lib/jacocoagent.jar
</sonar.jacoco.jar>

Configure the Unit Tests

Then we start the unit tests by running the standard surefire plugin with the Jacoco agent pointing to ${sonar.jacoco.reportPath}. This is used to store the unit test code coverage results.
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.7.2</version>
<configuration>
	<argLine>-javaagent:${sonar.jacoco.jar}=destfile=${sonar.jacoco.reportPath},includes=com.*</argLine>
	<includes>
		<include>**/*.class</include>
	</includes>
	<excludedGroups>com.test.annotation.type.IntegrationTest</excludedGroups>
</configuration>
</plugin>
We ignore any marked integration tests with the following configuration:
<excludedGroups>com.test.annotation.type.IntegrationTest</excludedGroups>

Configure the integration tests

For the Integration tests we use the failsafe plugin and point the Jacoco agent to ${sonar.jacoco.itReportPath}.  This is used to store the integration test code coverage results.
<plugin>
	<artifactId>maven-failsafe-plugin</artifactId>
	<version>2.12</version>
	<dependencies>
		<dependency>
			<groupId>org.apache.maven.surefire</groupId>
			<artifactId>surefire-junit47</artifactId>
			<version>2.12</version>
		</dependency>
	</dependencies>
	<configuration>
		<groups>com.test.annotation.type.IntegrationTest</groups>
	</configuration>
	<executions>
		<execution>
			<goals>
				<goal>integration-test</goal>
			</goals>
			<configuration>
				<argLine>-javaagent:${sonar.jacoco.jar}=destfile=${sonar.jacoco.itReportPath},includes=com.*</argLine>
				<includes>
					<include>**/*.class</include>
				</includes>
			</configuration>
		</execution>
	</executions>
</plugin>
We also tell the plugin to use the correct JUnit categories
<configuration>
	<groups>com.test.annotation.type.IntegrationTest</groups>
</configuration>

When these are run, they will produce two coverage files, one per test suite.
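These are the paths configured in the properties above:
${coverage.reports.dir}/jacoco-unit.exec (unit tests)
${coverage.reports.dir}/jacoco-it.exec (integration tests)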


Start Sonar.

Before running the build you need to start your Sonar server.
http://johndobie.blogspot.com/p/setting-up-sonar.html

Running The Example

We can run the whole lot using the following command
mvn clean install sonar:sonar
You will see the coverage results for both test suites when you browse to your Sonar instance.

Friday, 27 April 2012

Unit and Integration Tests With Maven and JUnit Categories

Introduction

This example shows how to split unit and integration tests using Maven and JUnit categories.
It is especially useful for existing test suites and can be implemented in minutes.

Why use this?

My previous post showed how to use a Maven profile to split unit and integration tests.
http://johndobie.blogspot.co.uk/2011/06/seperating-maven-unit-integration-tests.html
This has been a very well-read post and I like how it uses separate directories. However, this example shows a much simpler technique that can easily be applied to legacy test suites.
It offers most of the benefits of the original, and sits more comfortably in the Maven world.

Code

The code for the example is here.
svn co https://designbycontract.googlecode.com/svn/trunk/examples/maven/categories
mvn clean install

JUnit Categories

As of JUnit 4.8 you can define your own categories for tests. This enables you to label and group tests.
This example shows how easy it is to separate unit and integration tests using the @Category annotation.
http://kentbeck.github.com/junit/javadoc/latest/org/junit/experimental/categories/Categories.html

Define the Marker Interface

The first step in grouping a test using categories is to create a marker interface.
This interface will be used to mark all of the tests that you want to be run as integration tests.
public interface IntegrationTest {}

Mark your test classes

Add the category annotation to the top of your test class. It takes the name of your new interface.
import org.junit.experimental.categories.Category;
@Category(IntegrationTest.class)
public class ExampleIntegrationTest{

 @Test
 public void longRunningServiceTest() throws Exception {
     
 }
}
Categories can be used to mark classes or methods. In my opinion you should only mark a class: if you have both unit and integration tests in a single class, split it into two classes.
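For completeness, a category can also be applied to an individual method. A minimal sketch (the class and method names here are purely illustrative):

import org.junit.Test;
import org.junit.experimental.categories.Category;

public class MixedTest {

 @Test
 public void quickUnitTest() throws Exception {
  // runs with the normal unit test suite
 }

 @Test
 @Category(IntegrationTest.class)
 public void slowIntegrationTest() throws Exception {
  // only picked up when the IntegrationTest group is included
 }
}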

Configure Maven Unit Tests

The beauty of this solution is that nothing really changes for the unit test side of things.
We simply add some configuration to the maven surefire plugin to make it ignore any integration tests.
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.11</version>
<dependencies>
 <dependency>
  <groupId>org.apache.maven.surefire</groupId>
  <artifactId>surefire-junit47</artifactId>
  <version>2.12</version>
 </dependency>
</dependencies>
<configuration>
 <includes>
  <include>**/*.class</include>
 </includes>
 <excludedGroups>com.test.annotation.type.IntegrationTest</excludedGroups>
</configuration>
</plugin>
There are 2 very important parts. The first is to configure surefire to exclude all of the integration tests.
<excludedGroups>com.test.annotation.type.IntegrationTest</excludedGroups>
Surefire will run all of your tests, except those marked as an integration test.
The other important part is to make sure the surefire plugin uses the correct JUnit provider. The JUnit47 provider is needed to correctly detect the categories.
<dependencies>
 <dependency>
  <groupId>org.apache.maven.surefire</groupId>
  <artifactId>surefire-junit47</artifactId>
  <version>2.12</version>
 </dependency>
</dependencies>

Running the unit tests

To make sure this works correctly we can run the unit tests
mvn clean test
You can see from the output below that the unit test is run, but not the integration test.
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.test.EmptyUnitTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

Configure Maven Integration Tests

Again the configuration for this is very simple.
We use the standard failsafe plugin and configure it to only run the integration tests.
<plugin>
 <artifactId>maven-failsafe-plugin</artifactId>
 <version>2.12</version>
 <dependencies>
  <dependency>
   <groupId>org.apache.maven.surefire</groupId>
   <artifactId>surefire-junit47</artifactId>
   <version>2.12</version>
  </dependency>
 </dependencies>
 <configuration>
  <groups>com.test.annotation.type.IntegrationTest</groups>
 </configuration>
 <executions>
  <execution>
   <goals>
    <goal>integration-test</goal>
   </goals>
   <configuration>
    <includes>
     <include>**/*.class</include>
    </includes>
   </configuration>
  </execution>
 </executions>
</plugin>
The configuration uses a standard execution goal to run the failsafe plugin during the integration-test phase of the build.
The following configuration ensures only the integration tests are run.
<groups>com.test.annotation.type.IntegrationTest</groups>
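Note that the integration-test goal on its own does not fail the build when an integration test fails. If you want the build to break on failures, the failsafe verify goal is normally bound as well; a small sketch of that addition (not part of the original configuration):
<goals>
 <goal>integration-test</goal>
 <goal>verify</goal>
</goals>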
And again the JUnit provider must be correctly configured.
<dependencies>
 <dependency>
  <groupId>org.apache.maven.surefire</groupId>
  <artifactId>surefire-junit47</artifactId>
  <version>2.12</version>
 </dependency>
</dependencies>
That’s it!

Running the integration tests

We can now run the whole build.
mvn clean install
This time as well as the unit test running, the integration tests are run during the integration-test phase.
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.test.AnotherEmptyIntegrationTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.016 sec

Running com.test.EmptyIntegrationTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec

Results :
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0

What's next

To see how easy it is to add code coverage to this method, check out this link.
http://johndobie.blogspot.co.uk/2012/05/easy-unit-and-integration-code-coverage.html

For a more complete example which starts Tomcat and a database:
svn co https://designbycontract.googlecode.com/svn/trunk/examples/maven/code-coverage
mvn clean install -Ptomcat-embedded
It's based on this example:
http://johndobie.blogspot.com/2011/10/maven-integration-testing-and-spring.html

Monday, 16 January 2012

Code Forensics

How do you know if using code metrics really does help to produce code with fewer bugs?
I am convinced they do, but how can I possibly prove it?

All projects have historic data. This is usually stored in your bug tracking and source code control tools.
We can use the data stored in these systems to perform ‘code forensics.’
We use the historic data from real issues to see if they could have been avoided.

This can all be done without affecting any of your existing code or adding any risk to your project.
Surely that’s a useful Software Engineering technique?

Disclaimer

Firstly, I realise that most bugs you find in a standard project are not caused by code quality - it's probably only a small percentage. However, the ones that are caused by poor quality are avoidable.
It is these avoidable quality issues that I want to concentrate on. I want to be able to determine when exceeding a metric threshold is likely to result in a problem.
It’s possible that if enough code forensics are run on my individual code base, I may be able to come up with some numbers that are useful to me in the future.
In the long term it may be possible for someone to do a large study and come up with better guidelines.

Process

The process is quite straightforward.

1. Query your bug tracking tool for all the issues that required a code fix.
2. Assess the defects.
3. Identify the code.
4. Get the root cause.

Query your bug tracking tool.

The first thing you need to do is identify all your recent bugs, let's say for the last month.
Do a simple query to bring back all of the bugs during that period. This should be easy - otherwise you're using the wrong tool!
Now you have a full list of all of the defects that you are potentially interested in.

Assess the defects.

You now need to go through each of the bugs and assess whether the issue really was a code issue.
Other possible causes include:

• A requirements issue.
• An issue with the deployment environment.
• Configuration issues.

What you are left with is a list of issues that were really caused by bad code.

Identify the problematic code.

You now need to map your list of issues back to the relevant source.
You will not be able to do this unless you have been disciplined with your check-in comments. In most places I have worked, when checking in a bug fix you always start the comment with a reference to the problem it fixes.
Assuming you have been commenting your commits with the reference, you can do a simple query to see which code was affected. This can be done in Fisheye, TortoiseSVN, etc. to get the required code; an example is shown below.
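As a rough sketch, assuming Subversion and commit messages that start with the issue key (the issue key, revision and repository URL below are illustrative only):

# list every commit whose message mentions the issue key
svn log https://svn.example.com/repo/trunk | grep -B 3 "PROJ-1234"

# then show the files changed in a revision found above
svn log -v -r 4711 https://svn.example.com/repo/trunk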


Get to the root cause.

Finally you have something to look at, so what do you do with it? Well first you have to understand how the fix works and decide if it was a code quality issue. Perhaps the issue was a simple error rather than something a metric would have caught.
However, you might open the code and find something like this: the average complexity in our system is 10, but this piece of code has a complexity of 106!
This was an accident waiting to happen!



Clearly the bug would have been more likely to be caught had we failed the build when the code did not meet the expected quality standards. This was a potentially avoidable error.

Another Angle.

Another way to try and establish a link between poor code quality and defects is to take advantage of something such as the Sonar hotspot view to see the most complex classes in your system.

 


You can then work backwards and examine the history of those files to see if those classes are causing issues in your codebase.
The trouble is that it is not that simple. High-complexity files that are used infrequently are less likely to cause you trouble than lower-complexity files that are used more frequently.

Automating the process.

For this to be any use it probably needs to be automated so that a large sample of data can be examined. Some tools already make this link between defects and the related fix source code.
The next step is to pull that data back and run your metrics analysis on the files.

Summary

None of this is conclusive, however I still think it's a useful technique.

What it is most likely to prove is that you have had past problems which you could have avoided with metrics. It should also give you an idea of which metrics to use.

It's likely also to show that most problems are not caused by poor code quality, but other factors instead.

Unit Test Code Coverage With Maven And Jacoco

It’s easy to collect unit test code coverage because all of the common tools are geared up for it.
This article will explain how you can add unit test coverage to your Maven application in 10 minutes.
We will use the excellent Jacoco code coverage library to show how easy it is.

Examples

All of the examples come from this article.
http://johndobie.blogspot.com/2011/11/test-doubles-with-mockito.html

You can check them out from here:
svn co https://designbycontract.googlecode.com/svn/trunk/examples/testing/test-doubles


With Maven 3 installed, you can run them with this command.
mvn clean package

What is Jacoco

Jacoco is a free code coverage library for Java. http://www.eclemma.org/jacoco/
I use it because it is very simple to add to all types of build including ANT and Maven, and it is also very simple to add to Java containers or a standalone JVM.

How Does it Work?


Jacoco uses the standard JVM Tool Interface. http://java.sun.com/developer/technicalArticles/J2SE/jvm_ti/
In simple terms, you attach a Jacoco agent to a JVM when it starts. The JVM Tool Interface was introduced in JDK 5 for monitoring and profiling JVMs, and for dynamically modifying Java classes as they are loaded.
Whenever a class is loaded, Jacoco can instrument it so that it can see when the class is called and which lines are executed. That’s how it builds up the coverage statistics. This is all done on the fly.
By default the results file is created when the JVM terminates.
You can also run the agent in server mode which allows you to trigger a dump of the results.
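As a rough sketch, the agent options for server mode look something like this (the address and port values are illustrative; the full option list is in the agent documentation linked below):

-javaagent:jacocoagent.jar=output=tcpserver,address=127.0.0.1,port=6300,includes=com.*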

How do you attach the agent to the JVM?

This is a very simple process. You must specify where the jacoco jar is located and then you pass some parameters to define how the agent is to run.
-javaagent:[yourpath/]jacocoagent.jar=[option1]=[value1],[option2]=[value2]

A typical run might look like this
-javaagent:jacoco.jar=destfile=${sonar.jacoco.itReportPath},includes=com.dbc.*
A full reference is found here. http://www.eclemma.org/jacoco/trunk/doc/agent.html

Jacoco Support For Maven

The docs for the Maven plugin are here. http://www.eclemma.org/jacoco/trunk/doc/maven.html
First we need to add the plugin itself.

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.5.5.201112152213</version>
</plugin>

We can then define where the Jacoco reports are output, using the plugin's destFile and dataFile parameters (both point at the same file here).

<configuration>
  <destFile>${basedir}/target/coverage-reports/jacoco-unit.exec</destFile>
  <dataFile>${basedir}/target/coverage-reports/jacoco-unit.exec</dataFile>
</configuration>

Finally we need to define the following two executions, to make the agent run before the tests and to make sure that the Jacoco report goal runs when package is executed.

<executions>
  <execution>
    <id>jacoco-initialize</id>
    <goals>
      <goal>prepare-agent</goal>
    </goals>
  </execution>
  <execution>
    <id>jacoco-site</id>
    <phase>package</phase>
    <goals>
      <goal>report</goal>
    </goals>
  </execution>
</executions>

All this together is shown here.

<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.5.5.201112152213</version>
  <configuration>
    <destFile>${basedir}/target/coverage-reports/jacoco-unit.exec</destFile>
    <dataFile>${basedir}/target/coverage-reports/jacoco-unit.exec</dataFile>
  </configuration>
  <executions>
    <execution>
      <id>jacoco-initialize</id>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
    </execution>
    <execution>
      <id>jacoco-site</id>
      <phase>package</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <source>1.5</source>
    <target>1.5</target>
  </configuration>
</plugin>


To run the examples execute the following command.
mvn clean package

Results.

The results are published in /target/site/jacoco.

Friday, 25 November 2011

Mocks And Stubs - Understanding Test Doubles With Mockito

Introduction

A common thing I come across is that teams using a mocking framework assume they are mocking.
They are not aware that Mocks are just one of a number of 'Test Doubles' which Gerard Meszaros has categorised at xunitpatterns.com.
It’s important to realise that each type of test double has a different role to play in testing. In the same way that you need to learn different patterns or refactorings, you need to understand the primitive roles of each type of test double. These can then be combined to achieve your testing needs.
I'll cover a very brief history of how this classification came about, and how each of the types differs.
I'll do this using some short, simple examples in Mockito.

A Very Brief History

For years people have been writing lightweight versions of system components to help with testing. In general it was called stubbing. In 2000, the article 'Endo-Testing: Unit Testing with Mock Objects' introduced the concept of a Mock Object. Since then Stubs, Mocks and a number of other types of test objects have been classified by Meszaros as Test Doubles.
This terminology has been referenced by Martin Fowler in "Mocks Aren't Stubs" and is being adopted within the Microsoft community as shown in "Exploring The Continuum of Test Doubles".
A link to each of these important papers is shown in the references section.

Categories of test doubles

Meszaros defines a number of commonly used types of test double. The following URL gives a good cross-reference to each of the patterns and their features, as well as alternative terminology.
http://xunitpatterns.com/Test%20Double.html

Mockito

Mockito is a test spy framework and it is very simple to learn. Notably, with Mockito the expectations of mock objects are not defined before the test, as they sometimes are in other mocking frameworks. This leads to a more natural style (IMHO) when beginning mocking.
The following examples are here purely to give a simple demonstration of using Mockito to implement the different types of test doubles.
There are a much larger number of specific examples of how to use Mockito on the website.
http://docs.mockito.googlecode.com/hg/latest/org/mockito/Mockito.html

Test Doubles with Mockito

Below are some basic examples using Mockito to show the role of each test double as defined by Meszaros.
I’ve included a link to the main definition for each so you can get more examples and a complete definition.

Dummy Object

http://xunitpatterns.com/Dummy%20Object.html
This is the simplest of all the test doubles: an object that has no implementation and is used purely to populate arguments of method calls that are irrelevant to your test.
For example, the code below uses a lot of code to create a customer that is not important to the test.
The test couldn't care less which customer is added, as long as the customer count comes back as one.
public Customer createDummyCustomer() {
 County county = new County("Essex");
 City city = new City("Romford", county);
 Address address = new Address("1234 Bank Street", city);
 Customer customer = new Customer("john", "dobie", address);
 return customer;
}

@Test
public void addCustomerTest() {
 Customer dummy = createDummyCustomer();
 AddressBook addressBook = new AddressBook();
 addressBook.addCustomer(dummy);
 assertEquals(1, addressBook.getNumberOfCustomers());
}
We actually don't care about the contents of the customer object - but it is required. We can try a null value, but if the code is correct you would expect some kind of exception to be thrown.
@Test(expected=Exception.class)
public void addNullCustomerTest() {
 Customer dummy = null;
 AddressBook addressBook = new AddressBook();
 addressBook.addCustomer(dummy);
}  
To avoid this we can use a simple Mockito dummy to get the desired behaviour.
@Test
public void addCustomerWithDummyTest() {
 Customer dummy = mock(Customer.class);
 AddressBook addressBook = new AddressBook();
 addressBook.addCustomer(dummy);
 Assert.assertEquals(1, addressBook.getNumberOfCustomers());
}
It is this simple code which creates a dummy object to be passed into the call.
Customer dummy = mock(Customer.class);
Don't be fooled by the mock syntax - the role being played here is that of a dummy, not a mock.
It's the role of the test double that sets it apart, not the syntax used to create one.
This class works as a simple substitute for the customer class and makes the test very easy to read.

Test stub

http://xunitpatterns.com/Test%20Stub.html
The role of the test stub is to return controlled values to the object being tested. These are described as indirect inputs to the test.  Hopefully an example will clarify what this means.
Take the following code
public class SimplePricingService implements PricingService
{ 
 PricingRepository repository;

 public SimplePricingService(PricingRepository pricingRepository) {
  this.repository = pricingRepository;
 }

 @Override
 public Price priceTrade(Trade trade) {
  return repository.getPriceForTrade(trade);
 }

 @Override
 public Price getTotalPriceForTrades(Collection<Trade> trades) {
  Price totalPrice = new Price();
  for (Trade trade : trades)
  {
   Price tradePrice = repository.getPriceForTrade(trade);
   totalPrice = totalPrice.add(tradePrice);
  }
  return totalPrice;
 }
}
The SimplePricingService has one collaborating object, the pricing repository, which provides trade prices to the pricing service through the getPriceForTrade method.
For us to test the business logic in the SimplePricingService, we need to control these indirect inputs,
i.e. inputs we never passed into the test.
This is shown below.

In the following example we stub the PricingRepository to return known values which can be used to test the business logic of the SimplePricingService.
@Test
public void testGetHighestPricedTrade() throws Exception {
  Price price1 = new Price(10); 
  Price price2 = new Price(15);
  Price price3 = new Price(25);
 
  PricingRepository pricingRepository = mock(PricingRepository.class);
  when(pricingRepository.getPriceForTrade(any(Trade.class)))
    .thenReturn(price1, price2, price3);
   
  PricingService service = new SimplePricingService(pricingRepository);
  Price highestPrice = service.getHighestPricedTrade(getTrades());
  
  assertEquals(price3.getAmount(), highestPrice.getAmount());
}

Saboteur Example

There are 2 common variants of test stubs: Responders and Saboteurs.
Responders are used to test the happy path, as in the previous example.
A saboteur is used to test exceptional behaviour as below.
@Test(expected=TradeNotFoundException.class)
public void testInvalidTrade() throws Exception {

  Trade trade = new FixtureHelper().getTrade();
  TradeRepository tradeRepository = mock(TradeRepository.class);

  when(tradeRepository.getTradeById(anyLong()))
    .thenThrow(new TradeNotFoundException());

  TradingService tradingService = new SimpleTradingService(tradeRepository);
  tradingService.getTradeById(trade.getId());
}

Mock Object

http://xunitpatterns.com/Mock%20Object.html
Mock objects are used to verify object behaviour during a test. By object behaviour I mean we check that the correct methods and paths are exercised on the object when the test is run.
This is very different to the supporting role of a stub which is used to provide results to whatever you are testing.
In a stub we use the pattern of defining a return value for a method.
when(customer.getSurname()).thenReturn(surname);
In a mock we check the behaviour of the object using the following form.
verify(listMock).add(s);
Here is a simple example where we want to test that a new trade is audited correctly.
Here is the main code.
public class SimpleTradingService implements TradingService{

  TradeRepository tradeRepository;
  AuditService auditService;
 
  public SimpleTradingService(TradeRepository tradeRepository, 
                              AuditService auditService)
  {
    this.tradeRepository = tradeRepository;
    this.auditService = auditService;
  }

  public Long createTrade(Trade trade) throws CreateTradeException {
    Long id = tradeRepository.createTrade(trade);
    auditService.logNewTrade(trade);
    return id;
  }
}
The test below creates a stub for the trade repository and a mock for the AuditService.
We then call verify on the mocked AuditService to make sure that the TradingService calls its logNewTrade method correctly.
@Mock
TradeRepository tradeRepository;
 
@Mock
AuditService auditService;
  
@Test
public void testAuditLogEntryMadeForNewTrade() throws Exception { 
  Trade trade = new Trade("Ref 1", "Description 1");
  when(tradeRepository.createTrade(trade)).thenReturn(1L);
  
  TradingService tradingService 
    = new SimpleTradingService(tradeRepository, auditService);
  tradingService.createTrade(trade);
  
  verify(auditService).logNewTrade(trade);
}
The following line does the checking on the mocked AuditService.
verify(auditService).logNewTrade(trade);
This test allows us to show that the audit service behaves correctly when creating a trade.

Test Spy

http://xunitpatterns.com/Test%20Spy.html
It's worth having a look at the above link for the strict definition of a Test Spy.
However, in Mockito I like to use it to allow you to wrap a real object and then verify or modify its behaviour to support your testing.
Here is an example where we check the standard behaviour of a List. Note that we can both verify that the add method is called and also assert that the item was added to the list.
@Spy
List<String> listSpy = new ArrayList<String>();

@Test
public void testSpyReturnsRealValues() throws Exception {
 String s = "dobie";
 listSpy.add(new String(s));

 verify(listSpy).add(s);
 assertEquals(1, listSpy.size());
}
Compare this with using a mock object where only the method call can be validated. Because we only mock the behaviour of the list, it does not record that the item has been added and returns the default value of zero when we call the size() method.
@Mock
List<String> listMock = new ArrayList<String>();

@Test
public void testMockReturnsZero() throws Exception {
 String s = "dobie";

 listMock.add(new String(s));

 verify(listMock).add(s);
 assertEquals(0, listMock.size());
}
Another useful feature of the test spy is the ability to stub return values. When this is done, the object behaves as normal until the stubbed method is called.
In this example we stub the get method to always throw a RuntimeException. The rest of the behaviour remains the same.
@Test(expected=RuntimeException.class)
public void testSpyReturnsStubbedValues() throws Exception {
 listSpy.add(new String("dobie"));  
 assertEquals(1, listSpy.size());
  
 when(listSpy.get(anyInt())).thenThrow(new RuntimeException());
 listSpy.get(0);
}
In this example we again keep the core behaviour but change the size() method to return 1 initially and 5 for all subsequent calls.
@Test
public void testSpyReturnsStubbedValues2() throws Exception {
 int size = 5;
 when(listSpy.size()).thenReturn(1, size);
  
 int mockedListSize = listSpy.size();
 assertEquals(1, mockedListSize);
  
 mockedListSize = listSpy.size();
 assertEquals(5, mockedListSize);  

 mockedListSize = listSpy.size();
 assertEquals(5, mockedListSize);  
} 
This is pretty Magic!

Fake Object

http://xunitpatterns.com/Fake%20Object.html
Fake objects are usually hand-crafted or lightweight objects used only for testing and not suitable for production. A good example would be an in-memory database or a fake service layer.
They tend to provide much more functionality than standard test doubles and as such are probably not usually candidates for implementation using Mockito. That’s not to say that they couldn’t be constructed as such, just that it’s probably not worth implementing this way. A hand-rolled example is sketched below.
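As a rough illustration (this is not from the original example code), a hand-crafted fake of the TradeRepository used earlier might look like this, assuming the repository only needs createTrade and getTradeById:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// A fake is a real, working implementation - just one that is only suitable for tests.
public class InMemoryTradeRepository implements TradeRepository {

 private final Map<Long, Trade> trades = new HashMap<Long, Trade>();
 private final AtomicLong nextId = new AtomicLong(1);

 public Long createTrade(Trade trade) {
  Long id = nextId.getAndIncrement();
  trades.put(id, trade);
  return id;
 }

 public Trade getTradeById(Long id) {
  return trades.get(id);
 }
}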

References

Test Double Patterns
Endo-Testing: Unit Testing with Mock Objects
Mock Roles, Not Objects
Mocks Aren't Stubs
http://msdn.microsoft.com/en-us/magazine/cc163358.aspx

Monday, 17 October 2011

Maven Integration Testing And Spring Restful Services

Introduction

My original blog showed how to separate Maven unit and integration tests using a very simple example. http://johndobie.blogspot.com/2011/06/seperating-maven-unit-integration-tests.html Since then a lot of people have asked me for a more realistic example than the one used originally. This post shows how you split your unit and integration tests using the original method in a realistic environment where the application is actually deployed to a server.
  • We use Maven to build and unit test some Spring based restful webservices.
  • We then use the Maven Jetty plugin to start a Web server and deploy them to it.
  • We create an in-memory database and create the schema.
  • Finally we run all of the integration tests in the separate \src\integrationtest\java directory.
This article is aimed squarely at showing how to use Maven in a realistic way to start and deploy a set of services to a running server before running your integration tests. It is not about the subtle details of REST or Spring MVC. I'll cover these lightly enough to build a working application, whilst providing references to more in-depth articles for those that want more details.

Code Structure

Running the Example

The full code is hosted at google code. Use the following commands to check it out and run it. Make sure you have nothing running on port 8080 before running the tests.
svn co https://designbycontract.googlecode.com/svn/trunk/examples/maven/spring-rest-example
cd spring-rest-example
mvn clean install -Pit,jetty
You can see the full build on the following Cloudbees hosted Jenkins instance. https://designbycontract.ci.cloudbees.com/job/spring-rest-example/

Results of running the example

  • The tests in the standard maven test structure are run during the unit test phase as usual.
  • A Jetty Webserver is started
  • The war containing the web services is deployed to the server.
  • The hsqldb in-memory database is started and the schema created.
  • The tests in the \src\integrationtest\java directory are run during the integration test phase.
  • The server is shutdown.

How to create the Spring Service class

The trade service is very simple. It uses a repository to create and find trades. I haven't included exceptions to keep the whole thing as simple as possible. The only trick here is to add the @Service annotation, otherwise it is straight Java.
@Service
public class SimpleTradeService implements TradeService {
  @Autowired
  TradeRepository tradeRepository; 
 
  public SimpleTradeService(TradeRepository tradeRepository)  {
    this.tradeRepository = tradeRepository;
  }
 
  @Override
  public Long createTrade(Trade t) {
    Long id = tradeRepository.createTrade(t);
    return id;
  }

  @Override
  public Trade getTradeById(Long id) {
    return tradeRepository.getTradeById(id);
  }
}

How to create the Database repository class

The above service uses a trade repository to create and find trades. We use the Spring class HibernateDaoSupport to create this class and keep things simple. By extending this class we simply need to create our trade object class, and define our database details in the Spring config. All of the other details are taken care of by the framework.
public class HibernateTradeRepository  extends HibernateDaoSupport implements TradeRepository{
  @Override
  public Trade getTradeByReference(String reference) {
       throw new RuntimeException();
  }

  @Override
  public Long createTrade(Trade trade) {
      return (Long) getHibernateTemplate().save(trade);
  }

  @Override
  public Trade getTradeById(Long id) {
      return getHibernateTemplate().get(Trade.class, id);
  }
}

How to create the Database Trade Class

We use standard JPA annotations to define our database trade object
@Entity
public class Trade {
 @Id
 private long id;
 private String reference;
 private String description;
}
The @Entity annotation marks the object as a database entity. The @Id annotation shows which field we want to be our table primary key. For the rest of the fields we use default behaviour so no other annotations are required.

How to Configure the Database

For this example we are going to use HSQLDB (http://hsqldb.org/) to create our database. A new instance of this will be created every time we start the server. To set up the database, all we have to do is define it in the Spring config, trade-servlet.xml.
<bean id="sessionFactory"   
<bean id="sessionFactory"  class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
    <property name="packagesToScan"
      value="com.dbc.model" />
    <property name="hibernateProperties">
      <props>
        <prop key="hibernate.show_sql">true</prop>
        <prop key="hibernate.format_sql">true</prop>
        <prop key="hibernate.transaction.factory_class">
          org.hibernate.transaction.JDBCTransactionFactory
        </prop>
        <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop>
        <prop key="hibernate.connection.pool_size">0</prop>
        <prop key="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</prop>
        <prop key="hibernate.connection.url">
          jdbc:hsqldb:target/data/tradedatabase;shutdown=true
        </prop>
        <prop key="hibernate.connection.username">sa</prop>
        <prop key="hibernate.connection.password"></prop>
        <prop key="hibernate.connection.autocommit">true</prop>
        <prop key="hibernate.jdbc.batch_size">0</prop>
        <prop key="hibernate.hbm2ddl.auto">update</prop>
      </props>
    </property>
  </bean>
The session factory defines our database connection details. The most important property is
<prop key="hibernate.hbm2ddl.auto">update</prop>
This property tells Hibernate to update the database when the application starts. It will effectively create the table from the JPA annotations on our Trade class. When you run the tests, you will see that the following SQL is executed on startup.
11:30:31,899 DEBUG org.hibernate.tool.hbm2ddl.SchemaUpdate SchemaUpdate:203 
- create table 
Trade (id bigint          not null, 
       description        varchar(255), 
       reference          varchar(255), 
       primary key (id))
That's a new database set up and ready to go.

Creating The Restful Interface.

I'm just going to cover the basics here. For some great examples follow these links: http://blog.springsource.com/2009/03/08/rest-in-spring-3-mvc/ and http://www.stupidjavatricks.com/?p=54

How to Create the Spring Controller

The Spring controller is the key to this whole example. It is the controller that takes our requests and passes them to the trade service for processing. It defines the restful interface. We use @PathVariable to keep things simple.
@RequestMapping(value = "/create/trade/{id}")
public ModelAndView createTrade(@PathVariable Long id) {
  Trade trade = new Trade(id); 
  service.createTrade(trade);
  ModelAndView mav = new ModelAndView("tradeView", BindingResult.MODEL_KEY_PREFIX + "trade", trade);
  return mav;
}

@RequestMapping(value = "/find/trade/{id}")
public ModelAndView findTradeById(@PathVariable Long id) {
  Trade trade = service.getTradeById(id);
  ModelAndView mav = new ModelAndView("tradeView", BindingResult.MODEL_KEY_PREFIX + "trade", trade);
  return mav;
}
It works quite simply by populating the @PathVariable id with the value from /find/trade/{id}. For example, requesting /find/trade/1 will populate id with 1, and requesting /find/trade/29 will populate id with 29. More information can be found here: http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/mvc.html#mvc-ann-requestmapping-uri-templates
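Once the application is deployed locally on port 8080 (as it is for the integration tests later), you can also exercise the interface by hand. A quick sketch, assuming the application is deployed at the root context:

curl http://localhost:8080/create/trade/1
curl http://localhost:8080/find/trade/1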

How to configure the Web Application

The configuration of the web application in web.xml is very straightforward. First we register the Spring servlet.

<servlet>
  <servlet-name>trade</servlet-name>
  <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>

Next we define a mapping to the servlet. This mapping will pass all requests to our servlet.

<servlet-mapping>
  <servlet-name>trade</servlet-name>
  <url-pattern>/*</url-pattern>
</servlet-mapping>
How to Configure Spring

The Spring configuration consists of a number of distinct elements. The first element simply tells Spring where to look for annotations. The BeanNameViewResolver then takes the view name returned by the controller ("tradeView") and resolves it to a view bean with the same name. Finally, a slightly scary-looking piece of XML does the job of making sure that the Trade object is returned as XML: XStream takes the object and automatically converts it to an XML format.
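A minimal sketch of what this configuration typically looks like with Spring 3 and XStream (the base package and marshaller properties are assumptions; the view bean name matches the "tradeView" used by the controller):

<context:component-scan base-package="com.dbc" />

<bean class="org.springframework.web.servlet.view.BeanNameViewResolver" />

<bean id="tradeView" class="org.springframework.web.servlet.view.xml.MarshallingView">
  <constructor-arg>
    <bean class="org.springframework.oxm.xstream.XStreamMarshaller">
      <property name="autodetectAnnotations" value="true" />
    </bean>
  </constructor-arg>
</bean>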
The Trade class defines the XStream annotation for this.
@XStreamAlias("trade")
public class Trade {
In our case you can see from the test that we get something like the following back from /find/trade/1.

<trade>
  <id>1</id>
</trade>
How to start and stop the Jetty Server

I use the Jetty plugin to start the server and deploy the war file containing the services. http://docs.codehaus.org/display/JETTY/Maven+Jetty+Plugin The server is started with the following snippet from pom.xml.
<execution>
  <id>start-jetty</id>
  <phase>pre-integration-test</phase>
  <goals>
    <goal>run</goal>
  </goals>
</execution>
The server is stopped with the following snippet from pom.xml.
<execution>
  <id>stop-jetty</id>
  <phase>post-integration-test</phase>
  <goals>
    <goal>stop</goal>
  </goals>
</execution>
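Both executions sit inside the Jetty plugin declaration in the build section. A rough sketch, assuming the Jetty 6 era plugin coordinates (your groupId/artifactId and configuration may differ):

<plugin>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>maven-jetty-plugin</artifactId>
  <configuration>
    <!-- run the server in the background so the build can continue to the integration tests -->
    <daemon>true</daemon>
  </configuration>
  <executions>
    <!-- the start-jetty and stop-jetty executions shown above go here -->
  </executions>
</plugin>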

How to run the Integration Tests

The integration tests are run using failsafe as described in the original article. http://johndobie.blogspot.com/2011/06/seperating-maven-unit-integration-tests.html We use the new Spring RestTemplate to make the call to the service easy.
@Test
public void testGetTradeFromRestService() throws Exception {
  long id = 10L;
  createTrade(id);
  String tradeXml = new RestTemplate()
                          .getForObject(
                          "http://localhost:8080/find/trade/{id}",
                          String.class, id);
  
  System.out.println(tradeXml);
  Trade trade = getTradeFromXml(tradeXml);
  assertEquals(trade.getId(), id);
}

Wednesday, 3 August 2011

A Free EC2 Cloud Based Jenkins And Sonar Setup

Introduction

My original blog showed how to separate Maven unit and integration tests when doing continuous integration.
http://johndobie.blogspot.com/2011/06/seperating-maven-unit-integration-tests.html

This example builds on that to provide a platform to run the example. We use a free Amazon EC2 cloud-based solution to show how to deploy Jenkins and Sonar. This is a low-power server, but it is useful for infrequent use.

The steps are very simple:
  • Sign up for a one-year free account from Amazon.
  • Create a new server from an existing image with Hudson and Sonar.

Viewing The Final Platform

You can see an example of the finished platform by clicking on the links below.

Jenkins : http://ec2-75-101-221-43.compute-1.amazonaws.com:8080/

Sonar :   http://ec2-75-101-221-43.compute-1.amazonaws.com:9000/

Creating A Free Amazon Account

Amazon offer a free account for new customers. First sign up for the account at the following link.
http://aws.amazon.com/free/

Creating A Free Server

The free account only allows you to use a restricted set of images and a micro server.
Follow the steps below to create the server.

The first step is to log into the main Amazon console.  https://console.aws.amazon.com/s3/home
Go to the EC2 tab and click 'Launch Instance'.



Next we have to choose the correct image.  Go to the community tab and look for the following image.
ami-5d5f9234


Notice the 'Star', which shows the image is free-tier eligible when chosen with a Micro instance.

This brings up the screen below. Notice the type 'Micro (t1.micro, 613MB)'.
Leave the defaults and click on 'Continue'.
Again leave the defaults and click on 'Continue'.





Once again leave the defaults and click on continue.




Create Key Pair


This key pair is important because it is needed to connect to your server. Click on 'Create & Download your key pair' and keep the file safe.

Creating A Security Group

The next screen allows you to define the firewall rules for your server.  


We are going to allow all of the rules shown below. You can make them more restrictive once everything is running.




Instance Details.

The details of your new server will then be summarised. 



Click on 'Launch' to start the server.




Tools to connect to your Server

You will need to connect to your server to install the applications. To do this, download and install PuTTY.
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

This will give you all the tools you need to connect to the server.

Converting Your Security Key

Before you can use PuTTY to log in, you need to convert the .pem key from step 1 to a .ppk key.
Go to your PuTTY installation and run PuTTYgen.
Select 'Load' and choose your downloaded .pem file. You should see the dialog below.



Next click on 'Save Private key' and save the file somewhere safe.

Connecting To your Server

Run PuTTY and first fill in the server name.



Next click on the SSH -> Auth property.  Browse to the new .ppk file and select it.




Then click 'Open'.
This should give you a terminal window as follows, showing you connected to your EC2 instance.





Installing Java


Installing Unzip


Installing Jenkins

We use the following commands to install Jenkins.
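A minimal sketch of what these commands typically look like, assuming the standalone Jenkins war is downloaded and run directly (the mirror URL and port are illustrative, not from the original post):

wget http://mirrors.jenkins-ci.org/war/latest/jenkins.war
nohup java -jar jenkins.war --httpPort=8080 &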

Installing Hudson

TBD