Feed the Beast


Dogfooding is such a weird word. It doesn't sound anything like a good thing. Despite that linguistic issue, it is the very best thing to do. And we did it. However, we didn't do it just because we wanted to test our product a little more. The reason behind it is far nobler.

We built the Continuous Integration feature with a clear motivation: to make team development efficient. And we did it right. So, it was time to reap what we had sown. But don't get the wrong impression. We didn't wait for the world to test it for us. We simply didn't have time to switch our own process before we finished every aspect that external development teams needed. Fortunately, we reached that goal, and now it's our turn.

Kentico B.CI.

Probably not everyone knows what the development pipeline in Kentico looked like before Continuous Integration (B.CI.). We used the same pattern as all of our partners: the infamous shared database. The one thing I've been encouraging people to stop using for nearly two years. All the rage I express whenever this topic comes up is based on many years of developing this way. Not that we had a better choice.

The first task of the day for a developer was to get the latest code from the TFS repository and build the solution. A straightforward action. However, there were times when the site behaved oddly. If you were this poor developer, you could spend a good half hour debugging, only to find out that someone had changed the test data or a setting. The curious part is that you could change it back and broadcast a message to everyone to leave it be, but you still never knew if it would stay like that. We know now it wouldn't. Let's call it settings ping-pong: two developers changing one setting back and forth for a week.

Needless to say, our QAs were also using this shared database. You can imagine the exciting possibilities. Even an innocent action could create mayhem when you're performing tests on a living organism like this system. You try to delete a test page, and a malfunctioning dependency detection is enough to erase half of the database. And there's no rollback, only the database backup from the morning. Four hours of work by forty developers, thrown out the window.

There were other technical aspects that made our lives miserable. For example, our build server ran extra code to exclude the testing objects and create proper default data every time a build was baked. It started as a simple procedure, but after a few years, the code was so complicated that every time we needed to tweak it, we doubled the estimate just because we knew it would blow up in our faces. I could go on with this Series of Unfortunate Events, but you probably have a pretty good picture by now. Hooray, we could finally switch.

The Switch

The switch went pretty smoothly. Weird, right? Needless to say, we didn't go into it with all guns blazing. A good amount of brainstorming took place in the month before D-day.

Did we want a pilot run with one or two teams? A pilot run would mean two teams using local databases and synchronizing through one CI repository. That was the first important decision we had to make. We decided to go with a full development switch. A pilot would only slow us down and would probably bring additional issues, like synchronizing changes back to the shared database used by the rest of the company. We knew there was a fail-safe: a quick fallback to the shared database in case of trouble. We were prepared to restore the CI repository into a shared database and change the connection strings. One swift operation that would save the day. Operation "local development" was given the green light.

We created a TeamCity configuration which deploys the ground zero database, gets the latest version of the code and the CI repository, builds the solution, and runs the restore action. The ground zero database is the starting-point database backup: a clean Kentico 10 database with all sample sites, used for the current v11 development cycle.
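Conceptually, the build configuration boils down to four ordered steps. Here is a minimal Python sketch of that sequence; the server name, paths, and exact commands are illustrative assumptions (the real setup lives in TeamCity build steps), though Kentico does ship a ContinuousIntegration.exe utility whose -r switch runs the restore:

```python
import subprocess

# Hypothetical paths -- placeholders, not our actual environment.
SQL_SERVER = r".\SQLEXPRESS"
GROUND_ZERO_BAK = r"C:\backups\ground_zero.bak"
SOLUTION = r"C:\dev\CMS\WebApp.sln"
CI_RESTORE_EXE = r"C:\dev\CMS\bin\ContinuousIntegration.exe"

def build_steps():
    """Return the pipeline commands in the order the build runs them."""
    return [
        # 1. Deploy the "ground zero" database from a clean backup.
        ["sqlcmd", "-S", SQL_SERVER, "-Q",
         f"RESTORE DATABASE Kentico FROM DISK = '{GROUND_ZERO_BAK}' WITH REPLACE"],
        # 2. Get the latest code and CI repository from TFS.
        ["tf", "get", "/recursive"],
        # 3. Build the solution.
        ["msbuild", SOLUTION, "/p:Configuration=Debug"],
        # 4. Run the CI restore action to deserialize objects into the database.
        [CI_RESTORE_EXE, "-r"],
    ]

def run(dry_run=True):
    """Execute (or just print) the pipeline steps in order."""
    for cmd in build_steps():
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```

The point is only the ordering: the database must exist before the build, and the build output must exist before the CI restore can deserialize objects into it.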


Creating an automated process to restore a database backup and trigger the CI restore action wasn't the hard part. We faced a bigger challenge: structural changes in the database. I'm not talking about new module classes or new fields and columns; those are covered by CI by default.

Our everyday work goes far beyond standard customizations. We regularly need to change or refactor core functionality. This leads to scenarios where we need to delete or create a table outside of the standard modules. We also spend a good amount of our time on performance, which means new indexes, foreign keys, procedures, views, etc. All of these core changes fall outside of CI support, but we have to deal with them on a daily basis.

Therefore, we created a simple SQL script versioning process that we call Migrations. There's a new folder in the CI repository called @migration. Every migration is a defensive SQL script in a file named after the issue it solves. There are two types of migrations. The first is the "before" migration, executed before the CI restore action. The second is the "after" migration, executed after the CI repository is restored. Developers are obligated to create these scripts and push them within the changeset. The rest of the developers can then pull the changes and run a PowerShell script, which executes the "before" set, runs the CI restore, and then the rest of the migrations. Some developers have a special toolbar action in Visual Studio to run it with a single click.
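The ordering logic of that PowerShell script can be sketched roughly like this (translated to Python for illustration; the before_/after_ file-naming convention and the execute_sql/ci_restore hooks are assumptions for the sketch, not our actual implementation):

```python
from pathlib import Path

def run_migrations(repo: Path, execute_sql, ci_restore):
    """Run 'before' scripts, then the CI restore, then 'after' scripts.

    repo        -- root of the local CI repository checkout
    execute_sql -- callable that runs one SQL script against the local DB
    ci_restore  -- callable that triggers the Kentico CI restore action
    """
    migrations = sorted((repo / "@migration").glob("*.sql"))
    before = [m for m in migrations if m.name.startswith("before_")]
    after = [m for m in migrations if m.name.startswith("after_")]

    for script in before:      # structural changes CI restore depends on
        execute_sql(script.read_text())
    ci_restore()               # deserialize objects into the database
    for script in after:       # cleanup that depends on the restored objects
        execute_sql(script.read_text())
```

Because every script is written defensively (it checks whether its change has already been applied), the whole set can be re-run safely on any developer's database regardless of its current state.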

With all this in place, we switched.


We have been using local databases across our entire development department for two months now. Needless to say, we never had to execute the fail-safe plan. All of our developers have their own local database with the test data they need. This database is smaller and far faster than the shared one ever was. All changes are synchronized via Continuous Integration and TFS. The one downside is the extra time needed for the CI restore action: developers have to wait a little longer when they need to retrieve the current state of the solution. However, the benefits are tremendous.

We have full control over the data. Every change is stored in the version control system with a comment stating who made it and why. This solves the settings ping-pong, as well as the occasional database corruption followed by a backup restore. Local development is faster, and reviews are much easier, because the reviewer sees every data change the developer made.

There's also a nice side effect. Our database is basically mirrored in our file system. When developers need to check an object, a transformation method, or macro usage, they can run a search against the CI repository to find out, although this wasn't the reason we started using CI in the first place. Being able to search through objects without a humongous SQL query is a very fancy addition.
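As a minimal illustration of that search, here is a sketch that scans the serialized object files for a given string (a plain grep or the IDE's find-in-files does the same job; the function name and return shape are my own for the example):

```python
from pathlib import Path

def search_ci_repo(repo: Path, needle: str):
    """Return serialized object files that contain the given text.

    A stand-in for a full-text search over the CI repository,
    replacing a query against the live database.
    """
    hits = []
    for path in repo.rglob("*.xml"):  # Kentico CI serializes objects as XML
        if needle in path.read_text(encoding="utf-8", errors="ignore"):
            hits.append(path)
    return sorted(hits)
```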

I believe we are quite happy we made the switch. If anything, some of us regret we didn't do it sooner. If you haven't already taken the leap of faith with Continuous Integration, my advice is to go for it. I'm convinced it's worth the time and money.



Michal Kadák

As the guardian of the core layers of Kentico, I'm here to keep you up to date on the place where all Kentico features meet - the platform.