Two screens are bad?

Posted by steve on February 01, 2009

In the last meeting we had an interesting presentation/discussion on the Joel Test: 12 Steps to Better Code, led by Julian Harley. As expected, some of the ideas were accepted by all, but a surprising number were contentious and threw up some good debates. (Apologies if my memory doesn’t match your recollection of the evening; I’ll blame the holiday I’ve just been on and the copious amounts of sake which my parents-in-law forced me to drink. :) ) My fuzzy memory summarised the discussion as follows:

  1. Do you use source control? Heck yeah! All agreed it is crazy not to do this.
  2. Can you make a build in one step? All agreed.
  3. Do you make daily builds? Daily builds were viewed as an absolute minimum and continuous integration and immediate builds were preferred.
  4. Do you have a bug database? All agreed.
  5. Do you fix bugs before writing new code? It depends. If a bug is found by the developer then it should be fixed, but we wouldn’t drop all our plans just because a bug was reported. The bug should be weighed alongside the new features to be developed, and priorities should be chosen (ideally by the client).
  6. Do you have an up-to-date schedule? Again, it depends. How detailed should a schedule be?
  7. Do you have a spec? Mmmmm. Once more, it depends. Everyone felt requirements were important, but there was discussion about what level they should be pitched at, and I can’t say a consensus was reached.
  8. Do programmers have quiet working conditions? It was generally felt that there needs to be a balance between an environment which allows for osmotic communication, so the team can benefit from the spread of knowledge, and one which allows for quiet, so the developer can concentrate.
  9. Do you use the best tools money can buy? This one was hard to pin down. Some people argued that not providing two screens for a developer should be a crime, whereas some argued that having two screens can be distracting and they often preferred just one screen.
  10. Do you have testers? All agreed.
  11. Do new candidates write code during their interview? Again, surprisingly for me, this one had more discussion than expected. Some argued that judging people by a small coding sample could be unfair and that too much emphasis was placed on it.
  12. Do you do hallway usability testing? All would like to follow the agile practice of having a customer or a proxy in the team so that we could have even better feedback than a random passer-by.

The discussion then focused on what we felt could be added to the list:

  1. Documentation was high on most people’s list but we found it hard to agree on exactly what kind of documentation.
  2. Collaborative aspect/tool. Something as simple as a wiki is often beneficial so that the team can record plans/discussions/decisions etc.
  3. UI designer. Having the app designed by trained people instead of by developers who think they can design a UI.
  4. Client presence. As mentioned in the final point above, we felt that having a client or proxy see the work as it is being developed is of great benefit.
  5. Automated Testing. Well, we are agile after all. :) Currently you can follow an interesting debate about automated testing between Joel Spolsky (and Jeff Atwood), on their stackoverflow podcast (#38 and #39), and Uncle Bob Martin. Hopefully Bob will appear on the stackoverflow podcast next week to sort out Joel and Jeff. [If you’re looking for a good podcast, the stackoverflow podcast is usually good (even though they can’t get the TDD idea!). I’ve also enjoyed the last two Hanselminutes episodes, which discussed Uncle Bob’s SOLID principles and TDD.]

Once again, thanks to Julian for hosting a great discussion.

Competition results

Posted by Conrad Benham on January 12, 2009

The final meeting for 2008 saw us wrap up with the presentation and announcement of winners from the coding competition. The aim of the competition was to encourage competitors to think about and implement their programs using Agile and XP practices. While it was difficult for entrants to practice Agile to its fullest, some competitors took pragmatic approaches to this. Some competitors explained trade-offs they made in notes included with their submissions. XP, on the other hand, was much more practical and was practiced by all competitors to varying degrees.

There were four competitors:

Each competitor was awarded a prize as they each demonstrated different components of XP that were equally interesting. Of the solutions submitted, Alex and Michael’s solutions were command line based, implemented in Java. Steve’s solution made use of Google Web Toolkit. Francis’ (who worked in a team) solution was implemented using Ruby on Rails. Congratulations go out to each competitor for taking part in the competition.

A very big thank you goes out to JetBrains who sponsored the prizes for the night. JetBrains gave licenses away to some of their fabulous products. An equally big thank you goes out to Pinpoint Asia for their sponsorship of the pizza.

Iteration 1, 2, 3: Lego Animal

Posted by Conrad Benham on November 27, 2008

Jenny Requesting Stories

On Wednesday evening Jenny Wong did a fantastic job of leading a practical session demonstrating how to build things in an iterative, Agile fashion using Lego. The aim for the evening was to introduce participants to building a Lego product in small, incremental stages using a number of Agile concepts including stories, poker chip planning, iterations and an on-site customer, with showcases at the end of each iteration.

The group of twenty participants was divided into five groups, each with four team members. Each team member was assigned a role: Business Analyst, Developer (two, to allow for pairing) or Quality Assurance (QA). Jenny and I acted as business proxies, deciding which stories should be developed within an iteration.

The evening was divided into a single release with three iterations of 20 minutes each, with each iteration further broken down into the following stages:

  • Estimation (3 minutes)
    • Teams estimated how much effort would be needed to develop a particular story. This stage involved the entire team.
  • Sign-up (3 minutes)
    • As business proxies, Jenny and I worked with the teams to determine what should be developed within the iteration. This included defects and stories not scoped for development during previous iterations. We also gave teams the opportunity to ask us questions about what should be developed when contradictory requirements were encountered. This stage mainly involved the Business Analysts but was open to developers and QA to listen in and ask clarifying questions. As business proxies Jenny and I used poker chip planning to ensure the functionality requested did not exceed what could be built during the iteration.
  • Development (4 minutes)
    • During this stage, the developers set to work, building the Lego animal. The responsibility of the BA at this stage was to ask the business proxies questions when development became difficult due to ambiguity.
  • Testing (5 minutes)
    • At this point the QAs tested the finished product to identify any defects giving the developers a chance to fix them.
  • Showcase (5 minutes)
    • The final stage of the iteration was the showcase in which the QA and BA worked together with the business proxies to show the product as it had been developed in the iteration. This gave the business proxies a chance to provide direction on the product, to ensure it met expectations.

At the end of the three iterations, five unique Lego animals had been built which were put on display for all to admire.

Lego Parade

To finish the evening, we held a retrospective. Ordinarily a retrospective is held at the end of each iteration; due to time constraints we held a single retrospective at the end of the release. The aim of a retrospective is to gather information from people so they can share their thoughts. We asked participants to talk about the following three areas:

  • What we have done well
  • What we should have done differently
  • Puzzles, questions, ideas

The following came out of the retrospective.

What we have done well

  • Prefers to be a developer, enjoyed playing with Lego.
  • Business value was useful for development team, because usually do not see this. We can focus on most important stories.
  • Knowing previous velocity helps plan for next iteration.
  • Asking leading questions that help answer questions for the team.
  • Push back on customer.
  • Mileage out of prebuilt components.
  • Throwing out low-value stories.
  • Velocity went up during the second and third iterations.

What we should have done differently

  • Customer feedback sucked: on-site business.
  • Ask business questions before the end of iteration.
  • View all stories at the start of iteration one.
  • Not enough time spent with the business.
  • More estimation points, not just 1, 2, 3 extend to include 5 and 8
  • Testing: acceptance criteria unknown
  • Non functional requirements not known until testing/showcase
  • Should have revisited estimations
  • Additional story points implemented were not appreciated; rather they were expected
  • QA to test during development
  • We didn’t do pair development
  • Stories were out of order
  • Not all teams had Lego wheels
  • Over-engineering once an idea had been considered
  • Assumptions were made about stories

Puzzles, questions, ideas

  • Is it better to have a lower velocity?
  • What was the project name?
  • Does a BA provide insight to developers during development?
  • Can one team act as a business representative for another team?

I’d like to thank everyone that came along and made the evening a fun and energetic event. Thanks also go to Jenny for running the event and to Pinpoint Asia for their sponsorship of the pizza. Thanks go to Vince Natteri for making a formal introduction of Pinpoint Asia to Agile Hong Kong.

Steve Freeman on People and Teams

Posted by Conrad Benham on November 09, 2008

On recent travels through Asia, Steve Freeman stopped by and shared his thoughts and experiences with self-organising teams and people. In his presentation Steve introduced a number of concepts, drawing on the Agile Manifesto, and also had the group engage in some practical exercises related to teamwork.

Steve Freeman

Steve introduced a number of people in his discussion, quoting from Deming, Cockburn and Snowden. When making decisions that affect a team it is important to be inclusive: Deming was quoted as saying that the people closest to the work should make the decision, and in doing so you will have a team that is more likely to adopt the new direction. Cockburn adds that a project needs just enough process and detail; too much of either and the project will suffer. According to Snowden, it’s important to pay attention to the feedback loop to understand whether the corrective measures taken by a team are working and where they can be further refined. Steve also spoke of the concept of Forming, Storming, Norming and Performing, first introduced in a short 1965 article by Bruce Tuckman entitled ‘Developmental sequence in small groups’ (PDF).

Steve introduced Ralph Stacey’s Agreement & Certainty Matrix. This is a model used to help determine the actions that should be taken in a complex environment using two criteria: degree of certainty against level of agreement. This matrix has a number of applications: it can be used to determine the balance between leadership and management, to help make decisions by aiding in their understanding, and to communicate and justify a certain direction. Related is the Cynefin framework, developed by Snowden and colleagues while working at IBM’s Institute of Knowledge Management. Like Stacey’s model, this framework is also used to help people make decisions based on complex data. In this context, Steve asked attendees to talk in pairs about a recent project they’d worked on where there were difficulties, paying particular attention to the politics and religion that were at play on that project. He then asked each pair to identify where they fit within the two models mentioned.

For those interested, a summary of Steve’s presentation has been compiled by Steven Mak on InfoQ China. This summary is in simplified Chinese.

Testing with mocks and stubs

Posted by Conrad Benham on October 14, 2008

On Thursday we took a further look at automated testing. A couple of months ago we had a Code Jam focused on Continuous Integration; in that session we looked at automating the build as a whole, including running tests. In our last Code Jam, we paid greater attention to testing those parts of a system that rely on external or difficult-to-configure components.

In the session we looked at the differences between state-based testing and interaction-based testing, including the pros and cons of the two approaches. We identified why mock-based testing makes unit testing easier and discussed the layering that tends to occur when using mock-based approaches. Shen Tham drew an excellent comparison between stubs and mocks, briefly discussing the concept of Inversion of Control (IoC) and how it applies to mocking (something we’ll look at more closely in a future Code Jam). Aaron Farr also showed us how he uses mocks when testing the Model View Presenter pattern.

A reasonably simple example was used to demonstrate the use of Mockito. The example is a simple but incomplete graphing application that shows how data retrieved from a feed can be provided to something that can render it. The intent is to show how the various components can be split from each other in such a way that they can be unit tested separately from any live system that would need to be configured. If you take a look at the code you will see that the data used in the graph can be sourced from any appropriate system, such as databases, file systems, FTP or HTTP. Likewise, it should be easy to see that the data can be displayed in a variety of ways: it can be written to a stream (as per the example code), written to a file, sent to a browser via HTTP or published to an FTP server as a file; the options are open. It should be obvious that the feed and display are independent, provided the common application interfaces are adhered to. While the mocking and stubbing techniques don’t enforce this, they certainly help make it easier.
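The same decoupling can be sketched without any framework at all. The sketch below is not the Code Jam code; the names (DataFeed, Renderer, GraphApp) are illustrative only. It shows a stub supplying canned data in place of a live feed, and a hand-rolled mock recording the interaction in place of a real display:

```java
import java.util.Arrays;
import java.util.List;

public class GraphAppSketch {
    // The graphing code depends only on these two roles, not on any live system.
    interface DataFeed { List<Integer> fetch(); }
    interface Renderer { void render(List<Integer> data); }

    static class GraphApp {
        private final DataFeed feed;
        private final Renderer renderer;
        GraphApp(DataFeed feed, Renderer renderer) { this.feed = feed; this.renderer = renderer; }
        void draw() { renderer.render(feed.fetch()); }
    }

    // A hand-rolled mock: records the interaction so the test can verify it happened.
    static class RecordingRenderer implements Renderer {
        List<Integer> received;
        public void render(List<Integer> data) { received = data; }
    }

    static List<Integer> runOnce() {
        DataFeed stubFeed = () -> Arrays.asList(1, 2, 3); // stub: no database, FTP or HTTP needed
        RecordingRenderer mockRenderer = new RecordingRenderer();
        new GraphApp(stubFeed, mockRenderer).draw();
        return mockRenderer.received;                     // what the renderer was asked to draw
    }

    public static void main(String[] args) {
        System.out.println(runOnce()); // prints [1, 2, 3]
    }
}
```

A framework such as Mockito replaces the hand-written RecordingRenderer with a generated mock, but the design point is the same: the feed and the display only meet through the interfaces.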

We looked specifically at the following Open Source Java mocking frameworks:

For other languages you might want to have a look at the following tools:

For further reading on Mocking I suggest the following:

In closing, it is important to understand that an emphasis has been placed on testing at the unit level using mocks and stubs, and there are many reasons for doing this. One must remember, however, that unit testing alone is not sufficient to ensure complete and reliable integration with 3rd-party systems; for this we must still write automated integration tests. These tests can be isolated in such a way that, if they take too long to execute on a developer machine, they are only executed by the continuous integration environment. Unit tests written with mocks and stubs are therefore not a replacement for integration or functional tests.
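One lightweight way to keep the slow integration tests out of the developer's default run is a simple opt-in guard; the property name below is hypothetical, and real projects often use build-tool test groups or JUnit categories instead:

```java
public class IntegrationGuard {
    // True only when the CI build opts in, e.g. via
    //   java -Drun.integration.tests=true ...
    // Developer builds leave the property unset and skip the slow tests.
    static boolean integrationTestsEnabled() {
        return Boolean.getBoolean("run.integration.tests");
    }

    public static void main(String[] args) {
        if (!integrationTestsEnabled()) {
            System.out.println("Skipping slow integration tests (developer build)");
            return;
        }
        System.out.println("Running slow integration tests (CI build)");
        // ... exercise the real 3rd-party system here ...
    }
}
```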

Many thanks to Kalun Wong for his summary of the evening’s activities.

And the winner is…

Posted by Conrad Benham on October 02, 2008

…it’s a draw. We were very fortunate to have regular Agile Hong Kong participant Jonathan Buford give a very interesting presentation in which he discussed his experiences as a Toy Engineer. He covered the major aspects of toy development and how they relate to software development. On display were some of the toys he has worked on as a consultant, including one he is busy preparing for the end of year sales cycle. Jonathan also introduced us to an innovative project management tool he is working on with his business partner.

Toys vs Software: Fight

Software development and toy development share a number of similarities.  In both, an initial idea must be decided upon: the concept. Once chosen, a prototype is then built to validate the functionality of the toy. This is similar to a spike, particularly given a prototype instance of a toy will never reach the stores. Sometimes a toy will even be modeled in Computer Aided Design (CAD) software. These models are used to verify the operation of a toy, particularly one that has moving parts that interact with other parts to create a complex mechanical system. The CAD model is used to help fail fast in the case the product won’t work.

A major differentiation between product development and software development is that the business can be tempted to use a software spike in the event it is successful, whereas a toy prototype cannot be used: it is a single item, and a new item must be engineered before there is anything of salable value. Once a design is proven, the tooling must be created to actually build the product, something that can take considerable lead time; 60 days is not unusual.

Toy development differs vastly when it comes to the automated testing phase: the scope for automated testing is less than for software development. In software development it is possible to automate tests for many parts of the system; according to Jonathan, while it is possible to automate tests for a toy, it is not so simple. Jonathan did introduce the concept of stress testing, in which a toy is placed in a machine that shakes it. This automated test is designed to verify the strength of a product. While other tests can be developed, they need to be devised on a case-by-case basis.

Thanks go to Jonathan for giving an interesting and insightful presentation on the processes involved in toy product development.

Control Chaos with Scrum

Posted by Conrad Benham on August 17, 2008

On Tuesday, Aaron Farr of JadeTower gave a very entertaining and informative overview of Scrum. Drawing on experience gained from working in Scrum teams at Siemens, Aaron was able to provide a thorough insight into the process.

Aaron opened by discussing how Scrum is a lightweight process that manages the chaos associated with building software. This chaos is controlled by providing guidelines designed to limit its impact. Various roles are necessary to enable this, including a Product Owner, a Scrum Master and the Scrum Team.

Scrum is an iterative style of development in which pieces of functionality are delivered in fixed periods of time known as Sprints. Aaron introduced the notion of a Product Backlog, which is captured as user stories that can be estimated and prioritised. The product backlog is defined during the planning meeting that occurs prior to the commencement of a sprint. The sprint runway, which clearly defines the work to be completed, was also discussed.

Some interesting questions were raised as to how stories should be estimated. The two approaches discussed were estimating in hours and the somewhat more abstract idea of story points. Either way, story estimates help determine velocity, which can be used to create burndown charts.
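The arithmetic behind velocity and burndown is simple; a sketch, with made-up numbers:

```java
public class VelocitySketch {
    // Velocity: average story points completed per sprint so far.
    static double velocity(int[] completedPerSprint) {
        int total = 0;
        for (int points : completedPerSprint) total += points;
        return (double) total / completedPerSprint.length;
    }

    // Burndown projection: sprints needed to finish the remaining backlog
    // at the current velocity, rounded up.
    static int sprintsRemaining(int remainingPoints, double velocity) {
        return (int) Math.ceil(remainingPoints / velocity);
    }

    public static void main(String[] args) {
        double v = velocity(new int[]{8, 13, 9});    // (8+13+9)/3 = 10.0 points per sprint
        System.out.println(sprintsRemaining(45, v)); // prints 5
    }
}
```

Whether the points are hours or abstract story points, the same division gives the team a projected finish line, which is exactly what the burndown chart plots over time.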

Aaron ended the evening with a highly interactive session in which he demonstrated the creation of stories that were subsequently estimated. In this entertaining session he asked for input from people to determine the relative size of each story. That is, the size of each story in comparison to other stories that were estimated. The first story chosen was used as a baseline to estimate the size of subsequent stories.

Aaron has kindly put his slides on Slideshare. He also suggested people refer to the Scrum Checklists on InfoQ for more information in the way of mini-books about Scrum. Aaron also made reference during his presentation to the page You Aren’t Gonna Need It hosted on C2.

Practical Builds using Hudson

Posted by Conrad Benham on August 04, 2008

Our last Jam was both theoretical and practical. In the theoretical part we looked at what makes a quality build system. The practical part took some of those concepts and applied them, focusing specifically on continuous integration using Hudson. We also played around with some of the tools typically used on Java projects to aid project quality.

In the current context, a build system is something that automates the static checking, compilation and running of automated tests (unit, integration and functional) against production code. If the build system has been carefully designed, it may also include a process to manage the upgrade of a database and create deployable artifacts (of course, a build system may do more or fewer of the activities described). A facility to deploy these artifacts to various environments may also be included, though this is increasingly being handled by continuous integration environments as they mature.

While the choice of tool used to automate the tasks mentioned above does not really matter, there are choices that will make the job easier. Some in the industry argue that the XML approach offered by tools such as Ant is cumbersome and not as powerful as newer approaches offered by tools such as Rake or the Groovy Ant task, where the build script is actually written in code. Certainly the thing that must be avoided is starting a project using the build plugins offered by IDEs. Doing this tends to lock the building of a project into the IDE, which makes automating the build impossible (unless, of course, the IDE plugin has a scriptable equivalent). Over time it can become increasingly difficult to release the dependency on the IDE plugins. The aim is to create an automated build system that can be invoked by an external agent (a continuous integration tool or a developer) and not require any further intervention.

Simplicity is key! An easy-to-use build system will not require the definition of properties before it can be executed on a developer machine. For efficiency, it should be possible for a developer to check a code base out of a repository and immediately invoke the build system on it (provided the requisite build tool has been installed correctly).

One goal of a build system is speed, so that it is painless and easy for a developer to run a full build prior to checking code changes into the repository. The pain comes when the build is slow to complete; in such a scenario it is worth optimising the system (the code or the build system itself) so it executes faster. Sometimes the pain can be transferred to the continuous integration tool, which implies that a different build task or target may be invoked by it. It is common to have a default build executed by developers (remember our goal of making it easy for developers to get up and running) which runs a good portion of the build while omitting the slow parts; the slow parts can be transferred to the continuous integration version of the build.

We also looked at failing fast. It is important to ensure that the things that run quickly do so early, so they can also fail early. For example, static code analysis tools that check for duplicate code should be run first, as they are generally quicker to execute than a compiler or automated tests, and therefore save time by failing early and fast.
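The fail-fast ordering can be sketched as a pipeline that runs the cheapest checks first and stops at the first failure; the step names and outcomes below are illustrative, not from any real build:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

public class FailFastBuild {
    // Steps run in insertion order: cheap static checks first, slow tests last.
    static String run(Map<String, BooleanSupplier> steps) {
        for (Map.Entry<String, BooleanSupplier> step : steps.entrySet()) {
            if (!step.getValue().getAsBoolean()) {
                return "FAILED at " + step.getKey(); // stop immediately: fail fast
            }
        }
        return "SUCCESS";
    }

    public static void main(String[] args) {
        Map<String, BooleanSupplier> steps = new LinkedHashMap<>();
        steps.put("duplicate-code check", () -> true);
        steps.put("compile", () -> false);          // simulate a compile failure
        steps.put("unit tests", () -> {             // never reached
            throw new AssertionError("should have failed fast");
        });
        System.out.println(run(steps)); // prints FAILED at compile
    }
}
```

The same ordering idea applies whether the pipeline is expressed in Ant target dependencies, a Rake task graph or a CI tool's stage configuration: the quick, likely-to-fail checks go first.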

In the practical component of the evening we used Hudson to integrate a solution to the Anagrams problem that we solved in the first Code Jam. That solution was hosted on GoogleCode and can be found here; it is the same as the solution posted in the first Code Jam. The aim of the practical component was to install and configure Hudson, understanding what it and tools like it can do. We then modified the build file that came with the project to add checks to it. We also looked at a number of tools commonly used in Java build systems. All of the tools we looked at are open source:

So how does a build system affect the agility of a project? A quality build system helps verify the quality of a system. Beyond that, it aids the practice of frequent releases (one goal of Agile) by making the artifacts required for deployment easy to create and obtain, and in some cases even deploying them.

CI is a Software Development Practice

Posted by Conrad Benham on July 21, 2008

A big thank you to Chris Stevenson for giving an “Introduction to Continuous Integration with a brief introduction to Cruise Control”. Chris works for ThoughtWorks and has worked in many of its offices, most recently in Beijing; he is en route to San Francisco, where he will be lead developer of Cruise Control Enterprise.

Chris presented a solid introduction to continuous integration (CI), citing Martin Fowler’s work on CI. According to Chris, CI is about ensuring we have working, compiling code that passes unit tests. While many consider CI to be a tool, Chris made it clear that CI is actually a software development practice that is often supported by a tool, but does not have to be; it is a practice used by the whole team, though automated CI tools become important on projects with larger teams. CI is the practice of continually integrating code changes made by members of a team with the code base, ensuring these changes don’t break the build in any way. This is verified by running an automated build (as distinguished from a CI tool) that compiles, tests and creates deployable artifacts.

Automated builds must include automated tests: both unit and behavioural tests. Chris introduced the practice of behaviour driven development (BDD), with specific reference to the tools RSpec (for Ruby) and JBehave (for Java). Chris also introduced a number of tools used for user interface testing: Selenium, WebTest, Sahi, Frankenstein, White, Abbot.

Chris spoke of the importance of small, continuous commits to the code base. Small changes limit the merge conflicts that occur when checking code into source control, thereby minimising the pain associated with conflicts; it is therefore common for people to check code in many times a day. Chris also highlighted the importance of a short build, discussing the implications of a build that takes a long time to complete.

Chris ended his presentation with a sneak preview of Cruise Control Enterprise which is to be released in the next week or so.

Interestingly, Eric Minick recently wrote a blog entitled “Continuous Integration: Was Fowler Wrong?“. Eric’s premise is that CI is about tests, not builds. Given the controversial nature of this, a discussion has commenced on The ServerSide.

A Code Jam will be announced in the next day or so which will provide a practical introduction to Continuous Integration tools and how to use them. So, stay tuned…

Lean Thinking with Richard Durnall

Posted by Conrad Benham on July 06, 2008

At our last gathering, guest speaker Richard Durnall gave a very interesting presentation on Lean Thinking in IT. In his presentation, Richard covered a large spectrum of the Lean Thinking space. From a solid background of applying Lean in the Automotive industry (where Lean grew from) Richard spoke with authority on the history of Lean and its application. He then drew from that experience to explain how Lean thinking can be applied in IT.

Lean Manufacturing has had a major influence from Japan, where Toyota used it to great effect. This influence means that many Japanese words can be found in Lean, describing various components of it. One of those words, muda, for example, is used to describe waste. Another word, jidoka, as Richard explained it, is about autonomation, or automation with a human touch. It focuses on introducing technology to people in a structured and considered manner, such that those people will adopt the technology being built for them. A system built in a way that dismisses jidoka could be considered muda, particularly if the users it is built for do not adopt it.

Amongst the topics presented, he spoke briefly on the differences between Agile and Lean. In a recent blog post, Richard compares Agile and Lean. Martin Fowler has also weighed into the conversation with his own ideas.

Richard also made reference to a couple of books during his presentation:
•    Learning to See by John Shook
•    The Toyota Way by Jeffrey Liker

I’d like to thank Richard for presenting at our last meeting and for making his presentation available, which can be found in PDF form. I want to wish him well on the rest of his world tour in which he will continue to present on Lean Thinking.