Continuous Deployment changes your mind

Posted in agile, continuous delivery, continuous integration, deployment, management, software development, testing on August 14th, 2011 by Joerg

We have been practicing Continuous Deployment for about a year now. Looking back, I realize how dramatically this practice has changed the way we work.

To draw the fine line between Continuous Delivery and Continuous Deployment, I use a definition I once heard from Neal Ford: Continuous Delivery means you could deploy every commit, but you don't do it every time. With Continuous Deployment you go further and actually deploy every commit.

I have recently been asked what the benefits of Continuous Deployment are compared to Continuous Delivery. I think the biggest one is the impact on your development practice.

Our company has practiced agile development since about 2005. During that time I can remember many discussions about agile practices like:

  • Test Driven Development
  • Definition of Done
  • Potentially Shippable Product
  • Refactoring in baby steps
  • Not committing on a red build

Many of them do not easily reveal their benefits in day-to-day work, so it becomes difficult to apply them consistently.

The Definition of Done is a great example. It is easy to declare something finished or adequately tested when there is a QA department that will find your mistakes during release testing. So the common recipe is: write down your definition, post it on the wall and "be more disciplined". Force yourself to write more tests, force yourself to declare things done only when they are really done, and so on. This is not easy. In my experience it seldom works.

This changes dramatically with Continuous Deployment. Between your commit and the production system there is only an automated deployment pipeline, which usually takes around 30 minutes. There is no QA department doing an extensive release test. Your mistakes will only be found by the automated tests you wrote yourself. Well, or by your users.

Automated tests become your safety net, and this again changes the way you think about TDD. It makes total sense to have near-full coverage of your code. It also makes sense to have integration tests at the API and GUI level so that your deployment pipeline catches most mistakes. It will not catch every mistake, but in my experience the same is true for most QA departments.

You also think differently about the Potentially Shippable Product. Every commit has to be shippable, and it will (not only potentially) be shipped.

This forces you to work in baby steps. It is simply not possible to refactor a system by leaving it broken for several days.

If a build is red, the deployment pipeline is halted. Nobody can apply further changes to the system until the problem is fixed. Again, something that used to be a matter of discipline becomes a necessity: you need to help your teammates fix the problem, because otherwise you cannot work yourself. The same is true for a red build in the evening. One of my teammates even talks about being addicted to fixing the build before going home. It just feels good to finish your day in a green state.

Continuous Delivery has several immediate business benefits that I might write about another time. But most of them do not require every commit to be deployed.

But I think the impact of deploying every commit on your development practice is huge. Many agile practices make so much more sense: they fit naturally into this environment and you follow them almost effortlessly. In the long term, this translates into benefits for your business.

Continuous Delivery Slides

Posted in agile, continuous delivery, continuous integration, deployment, java, management, software development, testing, version control on July 5th, 2011 by Joerg

The most recent Java User Group event in Berlin was hosted at Hypoport and was all about Continuous Delivery. We had about 100 guests.

There were two talks. The first was by Axel Fontaine, who introduced Continuous Delivery with a special focus on the Java environment. You can find his talk on Parleys or on his blog.

I had the pleasure of presenting our own experiences with Continuous Delivery. We are currently developing a new product, and since we started about a year ago we have used Continuous Delivery consistently. We learned quite a few lessons along the way, which I presented that evening. My slides are available here. (As this was a German event, the slides are in German.)

Thanks to everyone who attended the event. It was a really nice evening. Special thanks to everybody who helped organize it.

Pre-tested “Commits” using Git

Posted in continuous delivery, continuous integration, deployment, tools, version control on May 29th, 2011 by Joerg

The ability to use pre-tested commits is a feature of certain Continuous Integration servers like Teamcity. The concept does not transfer easily to a Git infrastructure, and there are several approaches. Teamcity 6.5 now supports an approach called “Personal builds on branches” (see here). In our project we needed this feature long before that, so we created a Git workflow that fulfills the intention of pre-tested commits without requiring any special features of the CI server.

But first of all, what is the problem solved by testing a commit before it is committed? I think Geek and Poke gets it perfectly: you want to be able to get the latest check-in from the VCS and be sure that it builds. Classic CI does this in the wrong sequence. You commit first and let the CI server test afterwards. Of course you spot a problem quickly, but by then your fellow coders have already pulled your mess. A responsible coder will run the tests on his own machine first, but then what do you have a CI server for?

The solution is to let the CI server test your changes before they become available to other developers. The classic approach used by Teamcity is pretty complex. It relies on an IDE plugin or a so-called Command Line Runner and works like this:

  1. Before you commit, you start a remote run using one of these tools. The tool creates a patch of your changes and sends it to Teamcity.
  2. Teamcity checks out a fresh version from the VCS and applies the patch to it.
  3. The CI run then compiles and tests everything, including your changes.
  4. If the run is successful, the plugin proceeds to actually check your changes into version control.

Aside from its complexity, this solution did not work well with distributed VCSs like Git or Mercurial.

With version 6.5 there is another option, “Personal builds on branches”. You commit your changes to special branches that follow a certain naming pattern. When Teamcity recognizes such a commit, it starts a personal build and, if the build is successful, pushes the changes to the master branch. A very similar approach has been described for Jenkins.

We have been using Git as our VCS for more than a year, but Teamcity's new solution was only added very recently. This forced us to create our own, and to our surprise that was much easier than expected. The main reason is the distributed nature of Git. Some of the relevant points are:

  • A commit only affects your local repository as long as you don't push it. This allows testing after the commit and before the push, so you don't need a special mechanism to create patches and send them somewhere.
  • The central repository is not the only one. Besides your local copy you can create as many repositories as you like, so you can push wherever you want, for example to run tests.
  • The repository you pull from does not need to be the same one you push to.
  • Commits have globally unique identifiers. If you try to push a commit twice, Git recognizes it and does not apply it a second time.

You can probably already guess what I am up to, so let's look at the overall setup.

There is one central Git repository that contains only pre-tested changes. I call it the “Green Repository” because it should only contain changes that lead to green builds. Every developer pulls from this repository, but nobody is allowed to push to it. Instead, everybody has a personal repository (think of a fork on GitHub). The CI server watches those personal repositories. After a commit it starts compiling and testing, and if that succeeds it pushes the changes to the Green Repository. Now everybody can pull your changes and be sure they have been tested.
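
To make this more concrete, here is a rough sketch of the developer-side Git commands in such a setup. The server and repository names are just placeholder examples; the “remote-run” alias is the one we actually use (more on that below):

  # clone the Green Repository; it is the only repository you ever pull from
  git clone git@git.example.com:green-repository.git
  cd green-repository

  # register your personal repository under the alias "remote-run"
  git remote add remote-run git@git.example.com:joerg/personal-clone.git

  # work and commit locally as usual, then hand the commits to the CI server
  git commit -am "some change"
  git push remote-run master

  # the CI server builds that commit and, only if the build is green,
  # pushes it on to the Green Repository, from where everybody pulls
  git pull origin master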

You might wonder about some details of this setup, so I will try to answer a few questions here:

  • The remote server that holds the Green Repository and all personal remote repositories could be a simple Git-over-SSH setup. We are using a local installation of the Gitorious software instead; it makes it very easy to clone a central repository and to manage access rights. There is also a commercial offering by the creators of GitHub called GitHub:FI.
  • The remote run on the CI server needs two steps: running the build (e.g. Maven) and pushing the changes after a successful build. We use build dependencies in Teamcity to chain these two steps. The first step is a simple Maven goal, the second a simple command line call, “git push”.
  • To make pushing to your personal remote repository easier, create an alias with “git remote add remote-run [your remote repo url]”. Then you can push your changes simply with “git push remote-run”.
  • This solution works best for small teams that do not change often. Setting up a new team member requires good knowledge of the solution, although the setup itself can be done in less than 15 minutes.

The question now is whether we will continue using our own solution or switch to the new Teamcity feature. So far I cannot see any advantage in the Teamcity feature, and with our solution we are even more flexible with regard to branch design. So I think our layout will live on for a while.

If your CI server has no support for pre-tested commits, you might give our setup a try. I am eager to hear about your experiences and happy to answer any specific questions about the setup in the comments.

Learn your damn vocabulary!

Posted in agile, management, software development on January 29th, 2011 by Joerg

Yesterday, in a conversation with a teammate, we came up with a metaphor that turned out to be very useful: learning programming has a lot in common with learning a foreign language.

We were talking about different experiences while programming. Sometimes you are in a real flow: you can fluently express all your ideas and write hundreds of lines without stopping. Other times you have to stop at every second line to google some detail or ask other people.

When using a foreign language you find situations that feel exactly the same. The tourist with a dictionary needs to look up every second word; there is no way to hold a real conversation, just enough to find the next train station. But after some years of learning he might join the natives and enjoy a conversation, talking fluently without looking up words or thinking about grammar.

If the situations feel so similar, what can we learn from the metaphor? Let's look at what it takes to become a fluent speaker of a foreign language:

  • Learning vocabulary
  • Learning grammar
  • Using the language

Vocabulary in the world of programming corresponds to basic language constructs, keywords, and knowing what functionality your libraries offer. I would also include knowledge of your editor or IDE, keyboard shortcuts and version control.
So what do you need to do to become fluent in your language? Learn your damn vocabulary! You really need to memorize it; it is not enough to be able to look it up. That would be like looking up every second word in a dictionary, and a fluent conversation becomes impossible. The good news is that in a natural language about 1000 words are enough for a fluent conversation. Something similar is probably true for programming: you don't need to know every detail to be fluent.
But the guy with a vocabulary of 1000 words will probably never win a Pulitzer Prize. A lot of people think that knowing the basics of their programming environment is sufficient. No! You have to constantly extend your vocabulary. I am often surprised to discover new features of the JDK, my IDE or the libraries we are using, and often I realize I could have saved a lot of time if only I had known about them before. So go and discover new vocabulary, and don't forget to memorize it.

Grammar corresponds to higher-level constructs like design patterns, data structures or algorithms. When you learn a foreign language, grammar helps you understand the right way to combine the words. If you don't know the grammar, the way you use the language usually sounds funny to native speakers. The same goes for the grammar of programming, so learn it if you don't want to sound funny.
There is another interesting effect: a fluent user of a language usually does not think about grammar. Often he cannot even explain why he used the words in a certain way; it just felt right. There is a lesson in that: learn your grammar well, but after a while don't think about it too much if you want to speak fluently.

Last but not least, to become fluent in a language you have to use it. You usually start in a class with a teacher: a safe environment where you can try out your vocabulary and grammar without fear. A good way to do this for programming are code katas. You are in a safe environment with peers who support you and give you feedback, and you have to repeat a kata often to memorize all the important vocabulary.
There are many other well-known ways to practice programming in order to become fluent:

  • Read other people's code
  • Do private projects (and open-source them to get feedback)
  • And of course your daily work

Like learning a foreign language, learning programming takes many years. In the beginning you will be like the tourist with his dictionary. Just make sure you don't remain that tourist.

The next time you get stuck at every second line while programming, try to identify which vocabulary you simply did not memorize or which grammar you probably did not understand. Then memorize it and repeat …

Executing shell commands in Groovy

Posted in groovy, java, tools on September 30th, 2010 by Joerg

Groovy is a good alternative to classic shell scripting for the Java-experienced developer; I already wrote about this here. A common scenario is calling shell commands from your scripts. Groovy makes this easy, but during a recent project I learned about some pitfalls. I will explain them here, and how to avoid them.

The simple way

Executing a simple command is just a matter of calling .execute() on a string, like this:

"mkdir foo".execute()

(Please ignore for now that you could do this using standard Java functionality, which would be more efficient in this example. Believe me, there are more complex requirements where you definitely need to call a shell command; I am just keeping the examples simple.)
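
Just for reference, the plain JDK equivalent of this particular example would be the following one-liner, with no external process involved at all:

// plain Java/Groovy way to create the directory, no shell process needed
new File("foo").mkdir()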

Evaluating the exit value

What if you want to know whether this action was successful? In a shell script you usually check the exit value. That's still straightforward:

def process = "mkdir foo".execute()
process.waitFor()                    // wait until the command has finished
def exitValue = process.exitValue()
if (exitValue != 0)
  //do your error-handling

“execute()” simply creates a java.lang.Process object. Calling exitValue() on it returns the exit value as an int, which you can use in your script. Note that the process has to have finished before you can query exitValue(), which is why the example waits for it first.

Printing the output

Let's say you want to see the output of the shell command while your script is running. The easy approach would be:

println "ls".execute().text

If you want to combine printing the output with evaluating the exit value, it starts getting less elegant:

def process = "ls".execute()
process.text.eachLine {println it}
process.waitFor()
if (process.exitValue() == 0)
...

This works, but not very well. It has two major issues:

  • The output does not appear until the shell command has finished. This is fine for a simple “ls” but gets ugly with a long-running command.
  • process.text only contains standard output by default, not standard error.

To solve this you cannot use the simple “.execute()” anymore; you need to fall back to java.lang.ProcessBuilder. It offers “redirectErrorStream()”, which joins both streams together, just as they would appear when executing the command in a shell. Here is the example code:

  def process = new ProcessBuilder("ls").redirectErrorStream(true).start()
  process.inputStream.eachLine { println it }

The working directory

Another pitfall is the current working directory. In a simple shell script you would just “cd” into the new working directory and continue with your script. This does not work in a Groovy script: since Groovy uses the underlying Java mechanisms, a “cd” command has no effect on subsequent Process instances. Instead you need to tell the ProcessBuilder what the working directory should be. There are several ways to do this. The first is to add a “directory(File)” call to the ProcessBuilder from the example above, which looks like this:

  def processBuilder=new ProcessBuilder("ls")
  processBuilder.redirectErrorStream(true)
  processBuilder.directory(new File("Your Working dir"))  // <--
  def process = processBuilder.start()
  ...

Fortunately there is a more convenient way when you use the execute method: an overloaded version with two parameters. The first is a String array or a List containing environment variables, or null if you don't need any. The second is the working directory as a File object. Here is a code example:

   "your command".execute(null, new File("your working dir")) 

As a side note, if you need specific environment variables you also have to set them explicitly for each call.
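
A minimal sketch of how that can look, assuming a Unix-like system; the variable name, value and working directory are just placeholders. Note that the environment list you pass replaces the child process's environment completely, so variables like PATH have to be included again if the command relies on them:

  def env = ["FOO=bar", "PATH=/usr/bin:/bin"]             // replaces the child's environment entirely
  def workingDir = new File(System.properties.'user.dir')
  def process = "/usr/bin/env".execute(env, workingDir)   // /usr/bin/env prints the environment it sees
  process.text.eachLine { println it }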

Using wildcard characters

This pitfall is described in many Groovy books, but this post would not be complete without it. Using wildcards like in the following example will not work:

   "ls *.java".execute() 

Instead of listing all Java source files in the current directory, as you might expect, it tries to list a file literally called “*.java”, which is very unlikely to exist. Wildcards like * are interpreted by a shell, and execute() does not use one. You need to invoke the shell yourself and let it expand the wildcard characters. On OSX and Linux this is done by running the command through “sh -c”. Keep in mind that the command has to reach “-c” as a single argument; a plain string would be split on whitespace before execution, so the easiest way is to build the invocation as a list (Groovy adds execute() to lists as well). Something slightly different applies on Windows, which I leave to you to look up. The following does what you expect:

   ["sh", "-c", "ls *.java"].execute()

It does no harm to add this prefix to every shell command you want to start; you only need to take the operating system your script runs on into account. I am in the fortunate position of being able to expect a UNIX-like shell on every machine in my project, so I simply always add “sh -c”.

Putting it all together

Because of all the issues mentioned above, I ended up adding two methods called executeOnShell to my Groovy scripts. These methods solve the pitfalls described and still give a concise syntax for calling shell commands from a script.

def executeOnShell(String command) {
  return executeOnShell(command, new File(System.properties.'user.dir'))
}

private def executeOnShell(String command, File workingDir) {
  println command
  def process = new ProcessBuilder(addShellPrefix(command))
                                    .directory(workingDir)
                                    .redirectErrorStream(true)
                                    .start()
  process.inputStream.eachLine { println it }   // print the output while the command is running
  process.waitFor()
  return process.exitValue()
}

private def addShellPrefix(String command) {
  // pass the command as a single argument to "sh -c" so the shell expands wildcards etc.
  def commandArray = new String[3]
  commandArray[0] = "sh"
  commandArray[1] = "-c"
  commandArray[2] = command
  return commandArray
}
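
For illustration, a call from a script then looks like this (the command is just a placeholder example):

// run a command through the shell and check whether it succeeded
def exitValue = executeOnShell("ls *.java")
if (exitValue != 0)
  println "command failed with exit value ${exitValue}"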

The method executeOnShell is overloaded so that you can call it either with or without a working directory. It also returns the exit value, so you can easily evaluate it. This is of course not complete: it does not offer a way to set environment variables, and it only works with a Unix-like shell. But it is sufficient for my current use cases.
Please feel free to use it and adapt it to your own requirements. If you know of any other pitfalls when executing shell commands from Groovy, please leave a comment.