Test on SteroID

Jul 18


I won’t try to convince you to test your project. If you’re a techie, you’re probably tired of hearing the same words at every conference, in every slide deck and blog post (even here): TDD, CI… Instead, here’s a tip to mix my favorite tools together: Behat & atoum.

Quick introduction to atoum and behat

Skip this if you already know the tools

Atoum is a modern “unit test” framework written in PHP. I already expressed my opinion about it.

Behat is a “Behavior Driven Development” (aka “BDD”) framework written in PHP by @everzet. Initially based on Ruby’s cucumber, the tool has evolved a lot and some Ruby nerds consider it as good as cucumber (and sometimes even better; sorry, I don’t have the link to the tweet). In short, it’s IMHO the best tool for this job in PHP. Behat also uses the gherkin syntax, which lets your tests be readable by anyone involved in your project.

Atoum asserter into your Behat context

Behat is a modern tool (i.e. loosely coupled) and, best of all, it doesn’t try to reinvent the wheel. You’re free to pick any “asserter” you want! If you enjoy the atoum fluent syntax, you’ll probably want to import the same goodness inside Behat. It’s easy, you just have to follow the classic three steps: “Install, Configure, Enjoy”.


First, install Behat and atoum: pear, git clone… If you’re lazy or smart, you probably rely on composer:

"require-dev": {
    "behat/behat": "2.4@stable",
    "mageekguy/atoum": "dev-master"
}


Then, inside your Behat FeatureContext class, import atoum’s asserter namespace and don’t forget to (auto)load everything.

This is really simple, but some people have been mixing Behat and atoum concepts the wrong way. For example, it’s a really bad idea to use the atoum test class inside Behat. I don’t know what you could hope to achieve by doing that, but I can predict a nice headache.


Now you will be able to code your Behat steps using the atoum style:
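A minimal sketch of what such a context can look like (the step definitions and values here are illustrative, not taken from the original example): the atoum asserter generator is instantiated once, then used inside the Behat steps with the usual fluent syntax.

```php
<?php

use Behat\Behat\Context\BehatContext;
use mageekguy\atoum\asserter;

class FeatureContext extends BehatContext
{
    private $assert;
    private $result;

    public function __construct(array $parameters)
    {
        // Gives access to the whole atoum asserter family:
        // $this->assert->integer(), ->string(), ->object()...
        $this->assert = new asserter\generator();
    }

    /** @When /^I add (\d+) and (\d+)$/ */
    public function iAdd($a, $b)
    {
        $this->result = (int) $a + (int) $b;
    }

    /** @Then /^the result should be (\d+)$/ */
    public function theResultShouldBe($expected)
    {
        // atoum style: fluent and readable, and it throws on failure
        // exactly as Behat expects from an asserter
        $this->assert->integer($this->result)->isEqualTo((int) $expected);
    }
}
```

Any failed asserter throws an exception, which is all Behat needs to mark the step as failed.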

Apr 10

Jenkins PHP template on the edge

These last six months, I’ve had the chance to use many new and cool tools for PHP:

Previously, our Symfony projects used git submodules, PHPUnit and DocBlox, so I had to update our CI configuration (Jenkins and Sonar). In fact, it was pretty easy because all our builds are managed by a custom version of the excellent php jenkins template. Here’s a quick list of the updates for this new stack.

Add atoum dependency with composer

Since the standard edition of Symfony relies on composer, it’s trivial to add atoum. Yet William Durand suggested a nice practice: declare this “dev” dependency in the suggested packages instead of the required ones. This way, your production stage avoids useless packages and you just need an extra argument to fetch the optional packages: “php composer.phar install --install-suggest”.

Create an abstract test class

Atoum’s code convention is “full lower case” and the framework expects your tests to live in a “tests/units” namespace. Since Symfony uses capitalized namespaces, we have to make some adjustments:

This abstract class will be extended by all our test classes; it redefines the namespace “expected” by atoum to “Tests\Units”. Since there is no Symfony AtoumBundle for the moment, this class should live in a bundle (to be autoloaded).
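A sketch of such an abstract class (the vendor/bundle names are illustrative, and the parent constructor signature varies between atoum versions):

```php
<?php

namespace Acme\DemoBundle\Test;

use mageekguy\atoum;

abstract class Units extends atoum\test
{
    public function __construct()
    {
        parent::__construct();

        // atoum expects "tests\units" by default; Symfony bundles use
        // capitalized namespaces, so we remap the convention here.
        $this->setTestNamespace('Tests\Units');
    }
}
```

A concrete test class then lives in `Acme\DemoBundle\Tests\Units` and simply extends `Acme\DemoBundle\Test\Units`.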

Create the atoum configuration file

Like PHPUnit, atoum accepts a configuration file, conventionally named “.atoum.php”, at your project root. I only use this configuration file on the remote CI platform, so I named it “.atoum.ci.php” and defined some specific reports (xunit, clover, coverage).
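A sketch of such a “.atoum.ci.php” file (the paths are illustrative, and the exact report classes depend on the atoum version or branch you use):

```php
<?php

use mageekguy\atoum;

// Keep the default CLI report for the build console output
$script->addDefaultReport();

// xunit report, consumed by the Jenkins xUnit publisher
$xunit = new atoum\reports\asynchronous\xunit();
$xunit->addWriter(new atoum\writers\file(__DIR__.'/build/logs/atoum.xunit.xml'));
$runner->addReport($xunit);

// clover report, consumed by the coverage publishers (Jenkins, Sonar)
$clover = new atoum\reports\asynchronous\clover();
$clover->addWriter(new atoum\writers\file(__DIR__.'/build/logs/atoum.clover.xml'));
$runner->addReport($clover);
```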

Warning: at the moment, the xunit and clover reports are a work in progress. They should be merged soon; meanwhile, to ease our processes, Plemi provides a fork with a “clover-xunit” branch including all of these features.

Add composer step to ant

Nothing magic here, we just need to download composer and run the dependency installation. Don’t forget to update the “clean” task (to remove the composer files).
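Such a pair of ant targets could look like this (target names and paths are illustrative):

```xml
<target name="composer" description="Install the project dependencies">
    <exec executable="wget" failonerror="true">
        <arg value="--no-clobber" />
        <arg value="http://getcomposer.org/composer.phar" />
    </exec>
    <exec executable="php" failonerror="true">
        <arg value="composer.phar" />
        <arg value="install" />
    </exec>
</target>

<target name="clean" description="Cleanup build artifacts">
    <delete file="${basedir}/composer.phar" />
    <delete dir="${basedir}/vendor" />
</target>
```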

Add atoum step to ant

Again, like PHPUnit, atoum provides a CLI. The -g/--glob option was added recently and accepts patterns with wildcards. For the bootstrap file, be lazy: just use the app/bootstrap.php.cache generated by Symfony. Note: if you didn’t place the abstract test class above in a bundle, you’ll have to load this file manually. Here’s the ant target in my build.xml file:
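A sketch of such a target (the target name, glob and paths are illustrative):

```xml
<target name="atoum" description="Run unit tests with atoum" depends="composer">
    <exec executable="${basedir}/vendor/mageekguy/atoum/bin/atoum" failonerror="true">
        <!-- glob matching the tests of every bundle -->
        <arg value="-g" />
        <arg value="${basedir}/src/*/*Bundle/Tests/Units/*" />
        <!-- reuse the bootstrap generated by Symfony -->
        <arg value="-bf" />
        <arg value="${basedir}/app/bootstrap.php.cache" />
        <!-- CI-specific configuration (reports) -->
        <arg value="-c" />
        <arg value="${basedir}/.atoum.ci.php" />
    </exec>
</target>
```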

Add phpDocumentor step to ant

phpDocumentor is installed with PEAR, so the only thing we need is an Ant step to generate the code documentation. Good news: the CLI arguments are exactly the same as DocBlox’s.

That’s it! If you already used the php-jenkins template before, you should get all your previous metrics back in your Jenkins job. You may notice that the “clover HTML report” is a bit different: it’s not generated with PHP_CodeCoverage anymore but with atoum’s native html report writer.

What about Sonar? Since version 2.14, I haven’t been able to import the clover xml reports generated by atoum. This is because PHPUnit doesn’t generate a standard clover file, so the Sonar plugin isn’t able to parse the standard file produced by atoum. Hopefully, Julien Bianchi is working hard on atoum integration in Sonar with a plugin and will release it in the coming days.

Bonus track #1: to run the atoum tests in your local environment, here’s the command:
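For instance, something along these lines (the glob and paths are illustrative):

```shell
php vendor/mageekguy/atoum/bin/atoum \
    -g "src/*/*Bundle/Tests/Units/*" \
    -bf app/bootstrap.php.cache
```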

Bonus track #2: here’s a full template of an ant build.xml for a Symfony project using composer, atoum and phpDocumentor 2 (used by Plemi).

Mar 30

PHP unit testing with Atoum

It’s been a while since I last posted here; my position at Plemi and the Retentio project have required much of my time these last months. We’ve been playing with a lot of tools and I wish I could have more time to share all the goodness of our experiences.

Following my previous post about continuous integration, I’d like to talk about atoum, a new framework for PHP unit testing.

In the PHP QA world, Sebastian Bergmann is king. He’s the developer of PHPUnit, the powerful and standard framework for unit testing in PHP.

Yet a new challenger just entered the QA arena: atoum, by Frédéric Hardy. I like to have choices, so I strongly support alternative projects when it’s reasonable, meaning when the “philosophies” behind the projects aren’t the same and it’s not a waste of effort justified by ego.

I tested atoum on 3 projects these last 3 months. One word: awesome. This tool embraces the KISS principles. Is PHPUnit complex? Not really, it just looks complex. Both frameworks are really great: I like the clover-html output of PHPUnit, I like the fluent interface of atoum’s asserters, etc.

But I would advise beginners and newcomers to the TDD world to start with atoum. The learning curve is unbeatable, the tests are much more “readable” (IMHO) and, finally, writing unit tests with atoum is fun.

Here’s a little comparison of asserters usage for example:
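For instance, checking the same object both ways (the values here are illustrative):

```php
<?php

// PHPUnit style
$this->assertInstanceOf('DateTime', $date);
$this->assertSame('2012-03-30', $date->format('Y-m-d'));

// atoum style: one fluent chain
$this
    ->object($date)
        ->isInstanceOf('DateTime')
    ->string($date->format('Y-m-d'))
        ->isEqualTo('2012-03-30')
;
```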

This is a major victory for the underrated practice of unit testing, because when something is fun, people want to play. Atoum does not provide much documentation at the moment, but it’s easy to find what you need in the asserters directory.

Like PHPUnit, atoum provides standard reports:

So it will fit perfectly into your continuous integration stack; here at Plemi we’ve already switched. (We won’t migrate our existing projects off PHPUnit, since there is no point in doing such a thing.)

This post doesn’t mention all the features of PHPUnit and atoum (mocks, test isolation…), but right now, with these two cool tools, you have no excuse not to test your PHP code. Try it, love it: atoum is the missing key to the TDD gate.

Dec 14

Continuous Integration Flow

In my new position at Plemi, I want to optimize the continuous integration flow. Last week, I had a pretty interesting conversation with Mathieu Nebra; we discussed our shared quest: the best ROI through the development flow. I also had an awesome experience with Antoine Guiral (the man behind twoolr): we shared a “lean startup” friday night around a very interesting project.

In my previous article, I introduced a pragmatic git flow. Its first step focused on the local environment of the developer. I’d like to start this post with a personal, yet simple consideration: “the trust in your development team is reflected by your CI flow”.

It’s very reductive, but the github flow is a perfect example. It’s composed of only one stable master branch and several feature branches; the production flow is dead simple. However, Github uses Github (dogfooding for the win) and this flow strategically relies on its own awesome feature called the pull request. Sadly, I can’t afford to use github everywhere for many (good and bad) reasons. So I tried to set up a flow with a self-hosted git repository coupled to our continuous integration server.

Overview of the development flow

  1. A developer works on his local git repository in a dedicated feature branch
  2. He pushes as often as he can (at least twice a day) to the remote feature branch
  3. The CI builds on every push, merging the trunk and his feature branch
  4. When a feature is done, the developer merges it back into the trunk

The context and the tools

For the newcomers, a quick summary of the tools used in this flow

The flow starts on the developer side

I already wrote on that topic in my previous article. In short, the developer works in a feature branch named after this pattern: "feature.ticketNumber_description". He writes code without worrying about the SCM. Then, he dedicates some time to nicely splitting his work into several independent commits. If needed, he can rebase his local branch. Finally, he pushes the pack of commits.

On the continuous integration platform side

We have two jobs for each repository: one for the develop branch and another for all the feature branches. The feature job is configured to build every branch matching the pattern “feature.*”. Jenkins is notified after each push by a git post-receive hook.

Tip: this hook doesn’t start a build, it triggers an SCM polling in Jenkins. This way, Jenkins will start a build only if there are relevant changes in the repository. Read the official documentation: “Build by source changes”.
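Such a hook can be as small as one curl call to the Jenkins git plugin (the Jenkins and repository URLs here are illustrative):

```shell
#!/bin/sh
# Ask the Jenkins git plugin to poll the SCM; Jenkins itself decides
# whether the push contains relevant changes worth a build.
curl --silent "http://jenkins.example.com/git/notifyCommit?url=git@git.example.com:project.git" > /dev/null
```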

Continuous merge

I configured Jenkins to locally merge the latest pushed branch with the develop branch. This allows an early and cheap detection of merge conflicts.

If Jenkins can’t merge automatically, it fails the build and the developer is already aware that he needs to manually merge back the develop branch into his feature branch. He can’t delay this conflict resolution because it blocks the CI process: checkstyle, unit tests and coverage won’t be available to him until he resolves the merge.

This continuous merge process has the huge advantage of avoiding the nightmare of a messy merge while the developer responsible for a shitty conflict is unreachable for an undefined period.

Don’t trust Jenkins too much

I don’t use the "push after a successful build" option provided by Jenkins. Basically, it allows you to automate the integration process: if a build is stable, Jenkins pushes the new modifications to a remote branch. Yet it doesn’t fit our needs, since I ask the developers to push every day. Many features need more than one day to be achieved and I won’t let Jenkins push some kind of work in progress.

Don’t trust a single developer

I don’t rely too much on Jenkins, and this is also related to my first quote: I work with a pretty nice team, but “I don’t trust a single developer” (not even myself!) to build the perfect git log (i.e. the perfect code) in one shot. We are all new in the company and we still need to practice the processes or the new coding standards, for example.

Trust your team

Moreover, when I say “I don’t trust a single developer”, it’s a way to highlight the code review phase. When a feature is closed, it must be fully reviewed and discussed with a technical leader before being merged into the develop branch. This has two major consequences: first, I can’t let Jenkins automatically merge into the develop branch; second, the technical leader is free to make a final rebase of the feature branch before the merge. This ensures a stable and clean develop branch.

Finally, we have to distinguish the two usages of develop. Jenkins uses this branch locally as an integration branch for each feature, while the remote branch reflects the latest stable version, ready to be checked by QA and then deployed.

What’s next ?

Depending on your needs, you’re free:

Dec 07

Pragmatic Git flow

I’ve been using git for more than two years, yet I was just using it naively for the first months. Then I needed a reliable workflow, and git-flow was a huge revelation. I invite you to study this workflow: it’s a perfect starting point that you can customize for your needs.


First, a little reminder about the git-flow branches model:

Second, a short overview of the git-flow workflow:

  1. Every feature is developed in a dedicated feature branch.
  2. When the feature is over and stable, the related branch is merged back into develop.
  3. Then, develop is “frozen” into the release branch, allowing a deeper review/integration before production while other developers continue merging new features into develop.
  4. Finally release is merged into master which is the production exposed branch.

I won’t detail git-flow here because documentation is abundant on the internet. Be aware that there is also a lighter, feature-oriented flow (via @DavidGuyon). Scott Chacon wrote a nice article about this alternative. I find this flow really efficient but, imho, it relies on the excellent pull request feature from github. So if you’re on a self-hosted repository, it doesn’t fit medium-sized projects, except if you work with a high-level senior team.

A clean history

In my previous job at Simple IT, I tried to use the git flow with a simple objective: a clear git history (to ease code review, code management and SCM actions like revert).

So I started to ask the developers to focus on git and split their commits by task/concern. It was an awesome success and also a failure. Success: git flow was a perfect fit for our AGILE needs. Failure: the git history was still filled with wrong files or messages and, with git-flow, merge commits were new parasites.

The reason was obvious: it relied only on the developers’ goodwill and asked them for a real-time effort on the SCM. Furthermore, developers had unequal skills: from the exceptional lead developer involved in open source projects to the kid who never puts the words “best” and “practices” together. We didn’t change anything, because the flow was cool and we were busy with real business-valued tasks.

I strongly believe that an SCM must not distract or disturb the developers from their main tasks. This means that when your team works on a feature, it shouldn’t be surrounded by questions like “Do I need to commit now? Should I have committed two files ago?”. But I keep believing that an SCM history should be clean to be useful.

So I tested git and read a lot about it: the famous progit, the marvelous think-like-a-git and even git internals.

Rebase before pushing

One command solved almost all my problems: git rebase. Basically, it rewrites the git history. Wait a minute, isn’t it a bad practice to rewrite an SCM history? When you’re working with a team, the answer is definitely yes (it’s punished by the death penalty at our office).

Yet git is a distributed SCM, so developers don’t just own a local copy; they manage a full local repository (usually plugged into a remote one). I don’t care about their local repositories: a developer is free to express his rage in his local commit messages, like “Adds model”, “Fix model”, “Fix model 2”, “Fix BULLSHIT”…

More seriously, a developer will try to keep a clean history on his side, yet I don’t want to lower his productivity for an SCM purpose. He’s allowed to make mistakes in his local environment; I consider it a “draft history”.

The push is the dramatic key of the flow. Before pushing, the developers have to rewrite their local history to clean up their mess. How? Thanks to rebase.

Here’s a basic example: I have worked hard and I’m ready to push. After a "git log", my history looks like this:
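For illustration (the SHA1s and messages are invented, not the original ones), such a log could look like:

```
$ git log --oneline
f6e5d4c Fix checkstyle in MyClass     (commit F)
e5d4c3b Minor refactor of MyClass     (commit E)
d4c3b2a Add MyClass                   (commit D)
b2a1f0e Last commit of develop        (commit B)
```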

Since “MyClass” is a new file, I don’t really care about the minor refactor or the checkstyle fix. In a perfect world, I would have made only one commit; let’s correct that.

Git is based on graph theory. Basically, my local repository looks like this:
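A sketch of that graph, where letters stand for commits:

```
develop        A---B---C
                    \
feature.alpha        D---E---F
```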

This means that the branch “feature.alpha” starts from the commit B. As @ubermeda said: a branch is plugged to a commit, not to another branch, so when you say that the branch “alpha” comes from “develop”, it’s only half-true.

We will use git rebase to rewrite the feature branch history. Note: rebase can be used in many different ways, google it to find more usages.

Here’s the command:

git rebase -i SHA1.

This can be translated to: “I want to rewrite all the history of my current branch between now and the commit identified by this SHA1” (now = HEAD = the latest commit of the branch).

Careful: it’s safer to have a clean stash/state before rebasing, and you must not rebase a commit already pushed or your partners will be in big trouble.

In our example, we need to group the 3 commits (D-E-F) into one. I will type:

git rebase -i [the SHA1 of the commit B]

The -i argument means interactive; a screen will pop up:

Important: notice that the commit order is reversed. Also, a lot of information is available at the bottom of the screen (read it!). To simply group the 3 commits into a single one, I will change some values:
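The edited todo list would look roughly like this (the SHA1s are illustrative); only the first word of each line changes:

```
pick  d4c3b2a Add MyClass
fixup e5d4c3b Minor refactor of MyClass
fixup f6e5d4c Fix checkstyle in MyClass

# Rebase b2a1f0e..f6e5d4c onto b2a1f0e
#
# Commands:
#  p, pick = use commit
#  f, fixup = like "squash", but discard this commit's log message
# ...
```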

This means: start with the commit D, meld in the commit E, then the commit F. I used the keyword “fixup”, which melds the commit into the previous one and discards the commit message (a perfect command for our needs). Again, read the documentation at the bottom of the rebase screen, where each keyword is described. After saving this file, git will start the rebase.

Let’s check the git log:
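With illustrative SHA1s, the rebased log now shows a single commit on top of B (note the brand new SHA1):

```
$ git log --oneline
9a8b7c6 Add MyClass
b2a1f0e Last commit of develop
```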

And my local repository looks like this:
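Sketched as a graph, with D' being the new commit that groups D, E and F:

```
develop        A---B---C
                    \
feature.alpha        D'
```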

Exactly what we expected. We notice two things: I didn’t change the commit message (I could have done that during the rebase) and the “grouped” commit is a new commit with a new SHA1. That’s why you must not rebase a pushed commit: it would totally break the history.

My local history is now clean; it’s time to push to the remote. It may trigger a post-receive hook for a Jenkins CI which will merge branches, test your code… No? You should read my introduction to Continuous Integration.

I didn’t mention it before, but I force the dev team to push every day; this brings a lot of advantages and I’ll probably talk about it in my next post about continuous integration. This way, they don’t have to rebase hundreds of commits.

Finally, productivity isn’t affected by the SCM concern: the developer can work without worrying about the SCM stuff. He just needs to dedicate some time to clean up and arrange his repository, and the whole team benefits from a clean history.

Bonus Track

In my example I just grouped 3 commits, but you can do far more: reordering commits just by switching lines, for example… In fact, you are totally free to rewrite your whole history.

Sometimes you can’t easily remember the common ancestor of your feature branch and your develop branch. In my example, the latest common ancestor was the commit B. Here’s an example:

Note: if you merge “feature.alpha” and “develop”, the last common ancestor of both branches will be updated to G.
To find the last common ancestor, you just have to type this command:

git merge-base [branch1] [branch2]

It will output the SHA1 of the last common commit; you can use this identifier for your rebase.
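Putting the two commands together (the branch names are the ones from this example):

```shell
# Find the last common ancestor of the two branches...
base=$(git merge-base feature.alpha develop)

# ...and rewrite the feature branch history from that commit
git rebase -i "$base"
```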

Nov 30

Keynote for Continuous Integration with PHP

Continuous integration is not a new concept in software development, yet it’s a trending topic in the PHP community. PHP development is actually evolving toward a professional stage: AGILE processes are spreading across the web development world, and so continuous integration becomes a real business need.

This is a short and condensed introduction to continuous integration in PHP. My next posts will cover some parts in depth. The main purpose of this post is to provide keywords and concepts to ease your research.

I discovered test-driven development more than 3 years ago in the symfony 1 documentation. This framework embeds a rudimentary but efficient test suite called lime. Frankly, the tools don’t matter without a real will to improve your code quality.

The first question for many companies is: how much will it cost?
The long version: how much time/money will I have to spend to get a more reliable product or service? To this flawed paradigm, my answer is a reversed approach: could you estimate precisely the time/money lost without a continuous integration process?

To be honest, for lightweight projects, it’s still possible to evaluate the maintenance cost. But for huge projects built on legacy, hard-coupled code, it’s a nightmare.

The customer, the boss and the developers love a bug tracker when it’s empty. The idea behind Test Driven Development (TDD) is to test your application before the release instead of fixing it after. Yes, it’s an extra development cost, estimated at 20% by Hugo Hamon from Sensio. But try to estimate the cost of your maintenance time and compare it. With classic Blind Development you can’t predict it, since you have no reliable metrics on your application.

Test-driven development and continuous integration come to the rescue: they don’t only allow correct planning, they also reduce the number of “friday night emergency fixes of the magic bug”. In short, everybody is happy: the customer, the boss, the wife and thus you too.

Unit tests, functional tests, acceptance tests… These are answers which can and must be combined to match your needs. Below is a quick summary and a toolkit for each purpose.

If you’re new to TDD or if you have short deadlines: focus on unit tests. It’s the first mandatory step for backend development and it will reward you greatly. For PHP, consider PHPUnit, which is the current standard, but also keep an eye on atoum. I believe that Behavior Driven Development should be used for acceptance testing, yet phpspec, mentioned by @andhos on twitter, offers a different approach: it’s a BDD framework focused on unit testing.

Sometimes you will need to test your whole application through the presentation layer (i.e. almost like a user). Two solutions are available. The first one, “headless browser emulators”, are fast but reductive tests focused on the raw http output (see Goutte or Zombie.js). The second option, “browser controllers”, simulate real navigation, with advanced js for example; you can use Sahi, which replaced the famous Selenium.

To go further, look at Mink. It’s a web acceptance test framework based on the behavior test framework Behat (a PHP port of cucumber). Mink provides an abstraction layer for web acceptance tests, which can then be executed by any of the tools mentioned above.

Beyond the tests, lots of metrics can be collected about your code: cyclomatic complexity, duplicated code, coupled code… Furthermore, coding standards should be controlled by a tool rather than a human eye. See pdepend, phpmd (Mess Detector) and phpcpd (duplicated code).

Testing is one purpose, automating the tests is another one.
Let me introduce the gentleman called Jenkins (previously known as Hudson), an open source java CI server. It’s the standard in this business. Sonar is another java tool for this concern, providing a nicer web GUI than Jenkins, maybe the nicest on the market. Finally, two newcomers: first, Travis CI, a PaaS solution perfectly plugged into github; it’s a very sexy beta software. Second, mentioned in the comments by Pedro Mata-Mouros, an interesting solution called Cintient. It’s a beta but it seems to provide a cool UI with a minimal setup flow. I have to try it, so expect some feedback on this blog soon.

For the deployment, only one answer: Capistrano, an invaluable ruby tool for automated deployment processes.

Last thing: PHP is open source, like all the tools mentioned in this post. Most are hosted on github and provide a pear package. So now, you have no excuse not to start improving your developments right now.

EDIT: thanks to the comments, I’ve added the Cintient CI server.

Nov 25

Hello World

Symfony has been available for four months and most of the PHP developers around me are jumping in. My time has come with a new major project…

I’m not “coming from Symfony 1”, I’m leaving it.
I talked on twitter with some developers and we shared the same opinion: we started to hate symfony 1 for the same reasons we loved it at first: the magic of RAD.

Symfony 1 was a fullstack MVC RAD PHP framework.
Symfony 2 is a professional SoC PHP framework.

Basically, we traded some keywords for one: freedom, or maybe chaos?

People complain a lot about the lack of structure or conventions. The main reason is a business concern: interoperability. Developers were used to following rules and accepting many constraints rather than going freestyle, because it made it easy to switch between projects and/or teams.

But what if the rules appear outdated? What if the constraints are just bad for your needs? We start to adapt our business to our fantastic tool and we smell that something is wrong. That’s the exact feeling I had when I started to master symfony 1.

Symfony 2 is a philosophy relying on the developer’s brain. You can’t passively use the framework; you have to understand some concepts.

I loved symfony 1 and I think I’m very good at hacking the admin generator and some obscure parts of the tool. But what’s the point today? Nothing. In contrast, learning Symfony 2 means improving your software engineering skills, not focusing your efforts on learning a specific tool. That’s probably its best point.

So the lazy developer could be really stuck with this new tool, because it asks you to be proactive. In the Sensio path, the evolution seems logical: the first tool introduced many best practices, the second version goes beyond them.

Despite the fact that the tool brought a lot of frustration, I always supported the Symfony ideology: constantly improve, respect standards and use best practices. Even if the new version is young or incomplete, I’m still learning thanks to (and through) it: with the community, open source bundles, documentation etc.

Finally, in a few words: to me, symfony 1 was more of an “answer” and Symfony 2 is “new questions”.

Fabien Potencier wrote an article answering "What is Symfony".
On this blog, you will also find the “how I work with it” side.