Performance

Upstream OpenStack Performance and Release-Shaming

These topics may seem like strange bedfellows, but trust me: there's a method to my madness. Originally this was going to be part of my Berlin summit post, but as I was writing it, the section got rather long and I started to feel it was important enough to deserve a standalone post. Since there are two separate but related topics here, I've split the post. If you're interested in my technical thoughts on upstream performance testing, read on. If you're only interested in the click-baity release-shaming part, feel free to skip to that section. It mostly stands on its own.

Host Filesystem Impact on Tempest Performance in OpenStack

As I mentioned in a previous post, about a year ago I picked up a 1U server from eBay to use as a local single-node OpenStack environment. In general I was quite happy with it, but at some point I got tired of paying for electricity to run a fairly power-hungry server that sits idle, or close to it, about 95% of the time. The fans also picked up an annoying whine somewhere along the line, so once I discovered how much more efficient a modern desktop processor would be, while actually outperforming the old dual server CPUs, I decided it was time for a new box. This post is the story of my journey to get Tempest running in an acceptable fashion on it, and what I learned along the way.

Using pypi-mirror with devtest

A full run of TripleO's devtest takes a long time - around an hour or more on my i7/16GB box, even with a hot Squid cache. Quite a bit of that time is spent building images, and there are a few ways to speed that up, some of which are easier than others.
