Posts for the month of February 2013

Profiling the execution time of a Nagare application


In this post we'll talk about profiling a Nagare application. Here's the situation: you have an application that used to perform well but has been getting slower and slower as new features were added. So it's time for some optimization!

One strategy would be to add log or print statements to your code to display execution times and find the parts that are too slow. This strategy has several disadvantages:

  • you need to scatter debug code all around, launch the app, play through the scenario, watch your logs... and then remove all the debug code once the optimization is done, which can be painful. And you have to repeat this every time you want to optimize your code;
  • you don't really know where the bottleneck is. Code? Database? Filesystem? External system?
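Concretely, the manual approach means sprinkling throwaway code like this (the function and its body are made up for illustration):

```python
import logging
import time

logging.basicConfig(level=logging.DEBUG)

def fetch_products():
    """A hypothetical function we suspect of being slow."""
    start = time.time()
    products = [p for p in range(1000)]  # pretend this is a database query
    # debug code that has to be removed once the optimization is done
    logging.debug('fetch_products took %.3fs', time.time() - start)
    return products
```

Every suspicious function gets its own copy of this boilerplate, and all of it has to be cleaned up afterwards.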

Another strategy is to take advantage of a profiling tool. And there's a WSGI middleware out there for that! The benefits are:

  • you can plug/unplug the middleware just by changing your app configuration, so no code changes!
  • you can instantly find the slowest part of your app!
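To see why a middleware can do this without touching application code, here is a minimal hand-rolled WSGI timing wrapper (illustrative only; repoze.profile uses Python's profilers to collect far more detail):

```python
import time

def timing_middleware(app):
    """Wrap any WSGI application and record how long each request takes."""
    timings = []

    def wrapper(environ, start_response):
        start = time.time()
        response = app(environ, start_response)
        timings.append(time.time() - start)
        return response

    wrapper.timings = timings  # collected durations, one per request
    return wrapper
```

The application itself is unchanged; wrapping or unwrapping it is purely a deployment decision.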

Great! So let's dive into that latter strategy.

Creating the profiling middleware


Install the repoze.profile library using easy_install:

easy_install repoze.profile

Creating the WSGI pipe

Add the following lines somewhere in your code:

from repoze.profile import ProfileMiddleware

def profiler(app, options, config_filename, config, error):
    """Creates a WSGI pipe that profiles execution times."""
    return ProfileMiddleware(
        app,                      # app is your Nagare application
        log_filename='wsgi.prof', # file where the raw profiling data is stored
        path='/__profile__'       # URL path where the statistics are published
    )

Registering the WSGI pipe

Now tell Nagare to wrap the application into the WSGI pipe.

In your application configuration file, simply add the wsgi_pipe directive in the [application] section:

wsgi_pipe =
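For example, assuming the profiler function above lives in a (hypothetical) myapp/profiling.py module, the directive would point to it; check the Nagare configuration documentation for the exact reference syntax of your version:

```
[application]
wsgi_pipe = myapp.profiling:profiler
```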

To deactivate profiling, simply comment this line.

Collecting the stats

Launch your app using the standard nagare-admin serve command and play through the scenario you want to optimize.

While you browse your application, the profiling middleware automatically gathers execution times.

Viewing the stats

By default the profiling middleware will be accessible at http://localhost:8080/myapp/__profile__

The following screenshot shows the result. As you can see, I added a time.sleep() call in my code to simulate a bottleneck.
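The bottleneck was simulated with something along these lines (the component and method names are made up):

```python
import time

class ProductList(object):
    """A hypothetical component with an artificially slow method."""

    def render(self):
        time.sleep(0.5)  # simulate a slow operation (database call, I/O, ...)
        return '<ul><li>product</li></ul>'
```

In the profiler output, such a call stands out immediately at the top of the cumulative-time ranking.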

Further reading

More information can be found here: