New import module

As you may have noticed, the import package structure and the interface of the export and import process changed in version 3.1, even though it wasn't a major version. Let me explain the reasons for this and how it influences your project development ...

One of the most frequent requirements from our customers was to extend our Staging process to metadata objects, such as page designs, users, roles, etc., and also to allow incremental deployment of the web site.

Why is that? What was wrong?

The old Staging supported only synchronization of document data; the rest of the web site changes had to be updated manually or through web site export and import. That sounds fine, we have a tool for that, until you realize that export/import was possible only for the whole web site and always had to create a new web site on the target server. I am pretty sure you will agree that this was not the best thing we could offer you, and it didn't work well for larger projects which needed to be updated (developed) on a short-term basis. That's why it had to change somehow ...

Just so you know, here is a list of the major problems that stood in the way of these new features:

  • Every object had its own special export code, and a lot of code had to be updated for every change - poor maintainability
  • Every object type had its own export queries, which mostly supported only exporting all objects; particular objects couldn't be selected - operations too specialized
  • The interface didn't allow much configuration because of API limitations - simply not supported
  • The developer had to write special export code for every new object coming with a new version - additional time to support features
Let's change it!

Most of you have probably worked on a project whose requirements weren't complete, and you had to argue with the client that you either need the complete requirements or the next requirements may drive the current solution into a dead end. Probably the hardest thing about developing a system based on customers' requirements is that the architecture must be open enough to cover everything you think might need to be implemented in the future, and even the things you never thought about. So how do we design something we never had in mind? ... With great difficulty, and very carefully. What I am trying to say is that the export/import was never designed to support such a complicated thing, and it was written for each object separately, so we basically had two options:

  • Keep the current model and spend at least half a year rewriting the code for each object, ensuring proper functionality, and do the same for every new object - not the best solution ...
  • Put away the current code, redesign the architecture and build the export/import and staging functionality on the same, unified model which can be simply updated for new objects - yes, this is the right choice!
So, in short, we decided to completely rewrite the export/import module to open the way for new features and easier implementation of future modules.

How to create a unified architecture?

We have a solution consisting of similar objects which do not look all that similar to an outside viewer. Let's define the properties they have in common ... an interface and/or base class for all of them. I don't want to go into details and bore you, so as always I will just tell you what you can see from the outside. You can see in our API that each metadata object (XXXInfo) implements the interface IInfoObject and inherits from several classes (BaseInfo, AbstractInfo, SynchronizedInfo) on the path to its own class. This interface and these base classes give all the objects the same basic functionality, so that code can work with an object without caring what type it is; it just knows it is an object with data. You may also find in our API that IInfoObject exposes the property TypeInfo with some interesting information. This is basically a description of the object's data structure, carefully prepared to define the object well enough to be used by the involved modules. For example, it contains the name of the column where the GUID of the object is stored, which the import and staging modules use to find an existing object and decide whether the object is new or just updated in the target database. There is much more to this, a lot of information that is considered during these actions, but 99.9% of you won't need it, so let's skip the details and go further ...
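To give you an idea of how such a unified model can work, here is a minimal sketch. It is written in Java for illustration, and the names (ObjectTypeInfo, WorkflowInfo, GenericImporter, the column names) are hypothetical, not the actual Kentico API. The point it demonstrates is that the import code can decide "new or updated?" purely from the type metadata, with no object-specific code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical descriptor of an object type: which logical type it is
// and which column holds the object's GUID.
class ObjectTypeInfo {
    final String objectType;   // e.g. "cms.workflow"
    final String guidColumn;   // column holding the object's GUID
    ObjectTypeInfo(String objectType, String guidColumn) {
        this.objectType = objectType;
        this.guidColumn = guidColumn;
    }
}

// Unified view of any metadata object: its type descriptor plus
// generic column access.
interface IInfoObject {
    ObjectTypeInfo getTypeInfo();
    Object getValue(String column);
}

// Example concrete object: a workflow with a name and a GUID.
class WorkflowInfo implements IInfoObject {
    static final ObjectTypeInfo TYPE_INFO =
        new ObjectTypeInfo("cms.workflow", "WorkflowGUID");
    final Map<String, Object> data = new HashMap<>();
    WorkflowInfo(UUID guid, String name) {
        data.put("WorkflowGUID", guid);
        data.put("WorkflowName", name);
    }
    public ObjectTypeInfo getTypeInfo() { return TYPE_INFO; }
    public Object getValue(String column) { return data.get(column); }
}

// Generic import logic that never needs to know the concrete type.
class GenericImporter {
    // Simulated target database: object type -> (GUID -> object)
    final Map<String, Map<UUID, IInfoObject>> target = new HashMap<>();

    // True if the incoming object already exists in the target
    // (an update), false if it is new (an insert).
    boolean exists(IInfoObject incoming) {
        ObjectTypeInfo ti = incoming.getTypeInfo();
        UUID guid = (UUID) incoming.getValue(ti.guidColumn);
        return target.getOrDefault(ti.objectType, Map.of()).containsKey(guid);
    }

    // Store the object under its type and GUID.
    void store(IInfoObject obj) {
        ObjectTypeInfo ti = obj.getTypeInfo();
        UUID guid = (UUID) obj.getValue(ti.guidColumn);
        target.computeIfAbsent(ti.objectType, k -> new HashMap<>())
              .put(guid, obj);
    }
}
```

Notice that GenericImporter touches only ObjectTypeInfo and generic column access; adding a new object type means writing a new data class, not new import logic.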

So now we have a centralized description of the objects and we can access them regardless of their type. We then divided them into several categories based on their purpose:
  • Main objects - Objects which can act as standalone objects; you can usually select them directly for staging and export/import, e.g. the Workflow object.
  • Child objects - Objects dependent on some other object, extending that object's properties. They usually cannot be selected by themselves and are automatically included in their parent object's data, e.g. the WorkflowStep object as a child of Workflow.
  • Bindings - A binding is another name for an M:N relationship between objects. Bindings are usually included in the data of one of the bound objects, typically the one with fewer records in the given configuration, e.g. WorkflowStepRole as a binding of WorkflowStep to Role.
  • Site bindings - A site binding is a special type of binding which says whether the object is assigned to a particular site. They may or may not be included in the object data, depending on whether the object is assigned to the site.
  • Non-exported / Non-staged objects - Dynamic system objects are usually not processed by export or staging because it makes no sense to transfer them between projects, e.g. an EventLog event, or a ForumPost for staging.
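These categories can be captured directly in code. The sketch below is purely illustrative (the enum and rule names are my own, not from the product API); it shows how the category alone decides whether an object is selectable on its own, travels with its parent, or is skipped:

```java
// Hypothetical category enum mirroring the five groups described above.
enum ObjectCategory { MAIN, CHILD, BINDING, SITE_BINDING, NOT_EXPORTED }

class ExportRules {
    // Is the object directly selectable in the export UI?
    static boolean selectable(ObjectCategory c) {
        return c == ObjectCategory.MAIN;
    }

    // Is the object carried along inside its parent object's data?
    static boolean includedWithParent(ObjectCategory c) {
        return c == ObjectCategory.CHILD || c == ObjectCategory.BINDING;
    }

    // Site bindings travel with the object only when it is assigned
    // to the exported site; non-exported objects never travel.
    static boolean included(ObjectCategory c, boolean assignedToSite) {
        switch (c) {
            case NOT_EXPORTED: return false;
            case SITE_BINDING: return assignedToSite;
            default:           return true;
        }
    }
}
```

One set of rules like this replaces the per-object special cases of the old model.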
We could then relatively simply create a cycle through all the metadata objects, able to export the selected objects based on their data structure description (of course with some help from the import module code, the UI, the data layer, and just a few special cases for non-standard items). The implementation details are not so important here; what matters is that the changes had to be made to keep the architecture robust enough for current and future use.
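The pay-off of that cycle is that adding a new object type means registering its description rather than writing new export code. A hedged sketch of the idea, with a hypothetical registry of child types (the type names are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class ExportDriver {
    // Hypothetical registry: main object type -> child/binding types
    // that are automatically included with it.
    static final Map<String, List<String>> CHILD_TYPES = Map.of(
        "cms.workflow", List.of("cms.workflowstep", "cms.workflowsteprole")
    );

    // One generic pass over the user's selection: each selected main
    // object type pulls its child and binding types in automatically,
    // so no per-object export code is required.
    static List<String> resolveSelection(List<String> selectedMainTypes) {
        List<String> result = new ArrayList<>();
        for (String type : selectedMainTypes) {
            result.add(type);
            result.addAll(CHILD_TYPES.getOrDefault(type, List.of()));
        }
        return result;
    }
}
```

Registering a new type in the table is all it takes for the driver to pick it up; the loop itself never changes.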

What does this mean for you, and why is it so important?

First of all, you get all the new features that help you with your development, mainly metadata staging support, which wasn't possible with the old model. You also get much less code, which means smaller DLLs, and the application modules are now ready to be made separable from the solution. This will be introduced in one of the next versions and will allow you to fully remove some modules to make the solution exactly as lightweight as your needs require. In future versions you will also be able to include your own module data in the export process.

Is this normal?

You may ask if it is normal to reimplement things this way. The answer is both yes and no. It is common for every project to have some code which is not exactly right, including very large projects of well-known companies, and there are basically two options for what developers can do:
  • Leave the project in its current state with all the bad parts - This is nice for keeping the API constant, but never brings any improvements.
  • Reimplement the code - When only a very small number of users use the API which is subject to change, reimplementation is not harmful and in general brings much more than it takes.
We always carefully consider which code can be changed and which API must be obsoleted, so changes to our code occur only when they are really necessary. If possible, we keep the obsolete versions of methods until the next major version is released. In major versions, we clean the solution of this obsolete code to reduce the amount of code and improve performance.

This is basically our strategy: you get really nice and advanced features, and you get them fast, at the cost of method headers that can change from time to time. It is always better to spend a small amount of time changing a few lines of your code than to be stressed about not having it all out of the box and having to implement it yourself, right?

I hope you all understand that the things we do aren't done to upset you but to give you an advantage over your competitors. Please add your comments; I will be glad to hear your opinion on this topic and see whether you like the cool new features more than staying with the (good/bad) old ones.

See you at my next post ...

Martin Hejtmanek

Hi, I am the CTO of Kentico and I will keep providing you with information about our current development process and other interesting technical things you might want to know about Kentico.


Martin Hejtmanek commented on

Yes, that would be useful, but it may be tricky, which is why it is a long-term plan.

The CAPTCHA code is time-limited (I think it defaults to 30 minutes), so if you spend a long time reading the post or thinking about your comment, or there is any other delay, you may find that it is no longer valid.

random0xff commented on

Yes, I too would love to see something like "export site and only things that this site uses" and then see the simplified tree of objects, so you can exclude maybe one or two things.

PS. If I read a blogpost and try to comment, I always get " Please enter a valid security code." the first time I click the add button.

Martin Hejtmanek commented on

Hi Mario, please send this to, they will help you with your issue. And please check if you have CMS.Root in Site Manager -> Development -> Document types and let them know. Thank you

Mario Duran commented on

Hi Martin, I've been trying to import a website and Kentico gives me the following error message: Error during import process, ERROR: Error creating root of the site.
Exception: [TreeProvider.CreateSiteRoot]: Class CMS.Root not found.
Stack trace: at CMS.TreeEngine.TreeProvider.CreateSiteRoot(String siteName)
at CMS.CMSImportExport.ImportProvider.CreateSiteRoot(SiteImportSettings settings, TranslationHelper th)

ERROR: Error creating site skeleton.
Exception: [TreeProvider.CreateSiteRoot]: Class CMS.Root not found.
Stack trace: at CMS.CMSImportExport.ImportProvider.CreateSiteRoot(SiteImportSettings settings, TranslationHelper th)
at CMS.CMSImportExport.ImportProvider.CreateSiteSkeleton(SiteImportSettings settings, TranslationHelper th)

I'm not sure what could be wrong and don't know where to look for help on this topic. Thanks

Martin Hejtmanek commented on

Hi Mark, the previous import was pretty similar, just less user-friendly with fewer options; what significantly changed is the export, where you can select only the parts you want to export. We don't have the resources to implement selection of objects in use (it is quite hard at the moment considering multisite support), but we will consider it for future versions.

Mark commented on

We just started using Kentico with version 3.1 so I don't know what the previous import/export looked like. The option I'd like to see is a checkbox to select all objects in use. This would save a lot of time going through the list of all object types to select the few that are actually being used. Thanks for a great product!