Web content management deployments always start off with the best intentions. Content gets migrated. Users are trained. The site is up and running, content production is decentralized, and everything is going smoothly. Then, at some point, things start to go awry.
Over time, the very thing that enables us to manage content becomes itself unmanageable. The system begins to strain under the weight of ever-growing site structure, metadata and content. Processes enshrined in workflows during the implementation phase break down. The users of the system change, and training is often forgotten.
It doesn’t have to be this way, though. Just as with other business-critical systems, we need ways to monitor and maintain the systems we’ve put in place. As content management professionals, we need to be able to recognize when our site structure, taxonomy or processes have gone rogue or become ineffective.
The “content is king” mantra may now appear only in buzzword bingo games, but the principle has survived. We’ve become quite sophisticated at determining the quality and effect of our content once it’s published. Numerous content compliance and quality tools tell us whether our sites are accessible or contain broken links, and with a wide range of web analytics tools we can determine quite accurately the effect that content has on visitors to our sites and on their actions.
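To make the broken-link half of that concrete, here is a minimal sketch of the kind of check such tools automate, using only Python’s standard library: fetch a page, collect its links, and report any that respond with an error. The URL and the single-page scope are illustrative assumptions; real tools crawl entire sites, follow redirects, respect robots.txt and rate-limit their requests.

```python
# Minimal single-page link check -- a sketch of what link-quality tools
# automate, not a production crawler.
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import Request, urlopen

class LinkCollector(HTMLParser):
    """Collects href values from <a> tags on a single page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    """Fetch page_url, then report the HTTP status of every link on it."""
    request = Request(page_url, headers={"User-Agent": "link-check-sketch"})
    html = urlopen(request).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    for href in collector.links:
        target = urljoin(page_url, href)  # resolve relative links
        if not target.startswith(("http://", "https://")):
            continue  # skip mailto:, javascript:, and similar schemes
        try:
            status = urlopen(
                Request(target, headers={"User-Agent": "link-check-sketch"})
            ).status
        except HTTPError as err:
            status = err.code  # e.g. 404 signals a broken link
        except URLError:
            status = "unreachable"  # DNS failure, refused connection, etc.
        print(status, target)

# Hypothetical starting page; substitute a page from your own site.
check_links("https://example.com/")
```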
By monitoring only the publicly facing aspects of our sites, though, we can end up with what appears to be a very successful site that is effectively crumbling around us. When a once-good content management system goes bad, content production becomes increasingly expensive and less efficient. It’s only a matter of time before the site begins to fail and another costly re-implementation is needed. If we could instead monitor and effectively manage the content production process, perhaps we could prevent good content management from going bad.