Agile Software, People, and Teams

Posted by Conrad Benham on October 20, 2008

One of the values behind the Agile Manifesto is “people over process”: a culture where teams can take the initiative to succeed is more productive than one mainly concerned with protecting teams from failure. The promise of self-organising teams is that they are effective because the people closest to the work know best what to do and are the most motivated to do a good job. No-one, however, ever promised that this is easy. Simply telling an inexperienced team to self-organise is no guarantee of success.

In this talk, Steve will discuss his experiences with self-organising teams, and present some leading-edge results from social complexity science that will help you understand the dynamics of your team and how it can work together.

Steve Freeman is a pioneer of Agile software development in the UK, where he writes software and coaches teams in a variety of industries. Steve has trained and consulted for teams in Europe, America, and Asia, and co-authored the jMock library. Previously, he has worked in software houses, consultancies, and research labs, earned a PhD from the University of Cambridge, and written shrink-wrap software. Steve has also taught at University College London. He is a presenter and organiser at major international industry conferences, and was conference chair for the first London XpDay. Steve also won the 2006 Agile Alliance Gordon Pask Award.

http://www.m3p.co.uk/blog

(Thank you to Steve for preparing the above abstract).

When: 7:30pm, Wednesday 29th of October 2008
Where: ThoughtWorks Hong Kong Office
Address: Room 1304, 13/F, Tai Tung Building, 8 Fleming Road, Wanchai
Map: ThoughtWorks Hong Kong
Contact

New sponsor: JetBrains

Posted by Conrad Benham on October 19, 2008



I’m very excited to announce JetBrains as a new sponsor of Agile Hong Kong. JetBrains are best known for their excellent commercial IDE products, IntelliJ IDEA for Java and ReSharper for C#. They are also known for TeamCity, their continuous integration server, and for dotTrace, a memory and performance profiler for .NET.

Both IntelliJ IDEA and ReSharper have built-in support for some XP practices, such as automated refactorings, test-driven development, and automated build scripts. They are also responsible for building other quality products, which you can read about on the JetBrains website.

JetBrains have generously offered support by including Agile Hong Kong in the JetBrains User Group Giveaway program. Included in this giveaway program are licenses for the following products:

  • IntelliJ IDEA Personal
  • ReSharper Personal
  • TeamCity Build Agent
  • dotTrace Personal

I would like to thank JetBrains for supporting Agile Hong Kong. Expect to see licenses for these products given away at code jams and offered as a thank-you to speakers.

Testing with mocks and stubs

Posted by Conrad Benham on October 14, 2008

On Thursday we took a further look at automated testing. A couple of months ago we held a Code Jam focused on Continuous Integration; in that session we looked at automating the build as a whole, including running tests. In our latest Code Jam, we paid greater attention to testing those parts of a system that rely on external or difficult-to-configure components.

In the session we looked at the differences between state-based testing and interaction-based testing, including the pros and cons of the two approaches. We identified why mock-based testing makes unit testing easier and discussed the layering that tends to emerge when using mock-based approaches. Shen Tham drew an excellent comparison between stubs and mocks, briefly discussing the concept of Inversion of Control (IoC) and how it applies to mocking (something we’ll look at more closely in a future Code Jam). Aaron Farr also showed us how he uses mocks when testing the Model View Presenter pattern.
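
To make that distinction concrete, here is a minimal sketch in the spirit of Martin Fowler’s well-known order/warehouse example. The Order and Warehouse types are illustrative inventions, not code from the session, and Mockito is used for both styles:

    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.*;
    import org.junit.Test;

    // Illustrative collaborator and subject (hypothetical, not session code).
    interface Warehouse {
        boolean hasInventory(String product, int quantity);
        void remove(String product, int quantity);
    }

    class Order {
        private final String product;
        private final int quantity;
        private boolean filled;

        Order(String product, int quantity) {
            this.product = product;
            this.quantity = quantity;
        }

        void fill(Warehouse warehouse) {
            if (warehouse.hasInventory(product, quantity)) {
                warehouse.remove(product, quantity);
                filled = true;
            }
        }

        boolean isFilled() { return filled; }
    }

    public class OrderTest {
        // State-based: stub the collaborator's queries, then assert on the
        // resulting state of the object under test.
        @Test
        public void orderIsFilledWhenWarehouseHasStock() {
            Warehouse warehouse = mock(Warehouse.class);
            when(warehouse.hasInventory("Talisker", 50)).thenReturn(true);

            Order order = new Order("Talisker", 50);
            order.fill(warehouse);

            assertTrue(order.isFilled());
        }

        // Interaction-based: verify the calls the object under test made
        // on its collaborator.
        @Test
        public void fillingAnOrderRemovesStockFromTheWarehouse() {
            Warehouse warehouse = mock(Warehouse.class);
            when(warehouse.hasInventory("Talisker", 50)).thenReturn(true);

            Order order = new Order("Talisker", 50);
            order.fill(warehouse);

            verify(warehouse).remove("Talisker", 50);
        }
    }

The first test does not care how the order talks to the warehouse, only what state results; the second pins down the conversation itself. That is exactly the trade-off discussed above.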

A reasonably simple example was used to demonstrate the use of Mockito. The example is a simple but incomplete graphing application that shows how data retrieved from a feed can be passed to something that can render it. The intent is to show how the various components can be separated in such a way that they can be unit tested independently of any live system that would otherwise need to be configured. If you take a look at the code you will see that the data used in the graph can be sourced from any appropriate system, such as a database, the file system, FTP, or HTTP. Likewise, the data can be displayed in a variety of ways: written to a stream (as in the example code), written to a file, sent to a browser via HTTP, or published to an FTP server as a file. The feed and the display are independent provided the common application interfaces are adhered to. While the mocking and stubbing techniques don’t enforce this separation, they certainly help make it easier.
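
The session code itself isn’t reproduced here, but a rough sketch of its shape might look like the following, where Feed, Display, GraphData and Grapher are hypothetical stand-ins: the graphing logic depends only on two small interfaces, so a unit test can stub one and verify the other without configuring any live system.

    import static org.mockito.Mockito.*;
    import org.junit.Test;

    // Hypothetical interfaces and classes, sketched for illustration only.
    interface Feed { GraphData fetch(); }
    interface Display { void render(GraphData data); }

    class GraphData {
        private final int[] points;
        GraphData(int... points) { this.points = points; }
    }

    class Grapher {
        private final Feed feed;
        private final Display display;
        Grapher(Feed feed, Display display) { this.feed = feed; this.display = display; }
        void draw() { display.render(feed.fetch()); }
    }

    public class GrapherTest {
        @Test
        public void rendersWhateverTheFeedProvides() {
            Feed feed = mock(Feed.class);           // stands in for a database, file, FTP or HTTP source
            Display display = mock(Display.class);  // stands in for a stream, file, browser, etc.
            GraphData data = new GraphData(1, 2, 3);
            when(feed.fetch()).thenReturn(data);    // stub: canned answer to a query

            new Grapher(feed, display).draw();

            verify(display).render(data);           // mock: assert the expected interaction happened
        }
    }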

We looked specifically at the following Open Source Java mocking frameworks:

For other languages you might want to have a look at the following tools:

For further reading on Mocking I suggest the following:

In closing, it is important to note the emphasis placed on testing at the unit level using mocks and stubs. One must remember that unit testing alone is not sufficient to ensure complete and reliable integration with third-party systems; for that we must still write automated integration tests. However, if those tests take too long to execute on a developer machine, they can be isolated so that they are only executed by the continuous integration environment. Unit tests written with mocks and stubs are therefore a complement to, not a replacement for, integration and functional tests.
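
One simple way to achieve that isolation, sketched below with a made-up system property name (JUnit’s assumeTrue has been available since version 4.4), is to guard each integration test with an assumption that only the continuous integration build satisfies:

    import static org.junit.Assume.assumeTrue;
    import org.junit.Test;

    public class FeedIntegrationTest {
        @Test
        public void readsRealDataFromTheLiveFeed() {
            // Only runs when the build passes -Dintegration.tests=true,
            // which the CI server does but a developer machine need not.
            assumeTrue(Boolean.getBoolean("integration.tests"));

            // ...talk to the real database / FTP server / HTTP feed here...
        }
    }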

Many thanks to Kalun Wong for his summary of the evening’s activities.

Code Jam: Automated Unit Tests with Mocks and Stubs

Posted by Conrad Benham on October 02, 2008

According to Wikipedia, “…unit testing is a method of testing that verifies the individual units of source code are working properly. A unit is the smallest testable part of an application.” Doing this can be difficult when the code under test sits on a system boundary. A naive attempt to unit test against a boundary such as a database would mean communicating with a live database directly: the database must exist, all tables must be created, and all data must be in a known state before the test commences. Once the test completes (with a success or a failure) the database must be returned to its original known-good state. Building the infrastructure required to do this is potentially time consuming, and so is the actual execution of such a test. Further, such a test is really no longer a unit test but an integration test; it certainly defeats the goals stated in the extract above.
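
To make the alternative concrete, here is a minimal hand-rolled stub; CustomerRepository and Customer are hypothetical names used purely for illustration. The production code depends on an interface, and a unit test substitutes an in-memory implementation, so the data is in a known state by construction and no live database is involved:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical boundary interface the production code depends on.
    interface CustomerRepository {
        void save(Customer customer);
        Customer findById(String id);
    }

    class Customer {
        private final String id;
        Customer(String id) { this.id = id; }
        String getId() { return id; }
    }

    // In-memory stand-in for the real database-backed implementation.
    class StubCustomerRepository implements CustomerRepository {
        private final Map<String, Customer> customers = new HashMap<String, Customer>();

        public void save(Customer customer) {
            customers.put(customer.getId(), customer);
        }

        public Customer findById(String id) {
            // No connection, schema or fixture scripts needed, and nothing
            // to clean up after the test.
            return customers.get(id);
        }
    }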

Enter mocks and stubs. In this Code Jam we’ll look at techniques, like the stub sketched above, that we can apply to reduce our reliance on external systems. We’ll identify the differences between stubs and mocks, and when one should be used over the other. We’ll also look at a variety of open source mocking frameworks. If time permits, we’ll look at how mocking helps layer an application to aid low coupling and high cohesion (Vidar Hokstad has written a nice blog entry on this).

So, come along and learn some of the concepts of mocking. While this Code Jam is aimed at developers, anyone who is interested should come along. We tend to pair in these sessions, so bringing a laptop is not necessary, but it would be helpful if you have access to one. If you don’t have a laptop, don’t let that stop you from coming along.

When: 7:30pm, Thursday 9th of October 2008
Where: ThoughtWorks Hong Kong Office
Address: Room 1304, 13/F, Tai Tung Building, 8 Fleming Road, Wanchai
Map: ThoughtWorks Hong Kong
Contact

And the winner is…

Posted by Conrad Benham on October 02, 2008

…it’s a draw. We were very fortunate to have regular Agile Hong Kong participant Jonathan Buford give a very interesting presentation discussing his experiences as a toy engineer. He covered the major aspects of toy development and how they relate to software development. On display were some of the toys he has worked on as a consultant, including one he is busy preparing for the end-of-year sales cycle. Jonathan also introduced us to an innovative project management tool he is working on with his business partner.

Toys vs Software: Fight

Software development and toy development share a number of similarities. In both, an initial idea must be decided upon: the concept. Once chosen, a prototype is built to validate the functionality of the toy. This is similar to a spike, particularly given that a prototype instance of a toy will never reach the stores. Sometimes a toy will even be modeled in Computer Aided Design (CAD) software. These models are used to verify the operation of a toy, particularly one with moving parts that interact with other parts to form a complex mechanical system. The CAD model helps the team fail fast if the product won’t work.

A major difference between toy development and software development is that the business can be tempted to press a successful software spike into service, whereas a toy prototype cannot be used that way: it is a single item, and a new item must be engineered before there is anything of salable value. Once a design is proven, the tooling to actually build the product must be created, something that can take a considerable amount of lead time; 60 days is not unusual.

Toy development differs vastly when it comes to automated testing. The scope for automated testing is narrower than in software development, where it is possible to automate tests for many parts of the system. According to Jonathan, while it is possible to automate tests for a toy, it is not so simple. He did introduce the concept of stress testing, in which a toy is placed in a machine that shakes it; this automated test is designed to verify the strength of the product. Other tests can be developed, but they need to be devised on a case-by-case basis.

Thanks go to Jonathan for giving an interesting and insightful presentation on the processes involved in toy product development.