It's Dead, The Quality System As We Know It.
Summary
- Quality Systems of the past are not keeping up with today's information paradigms
- Wikis have solved some issues with flexibility but can still suffer from aging information
- Internet knowledge is vast and accessible, but the quality of the information can be questionable
- Most of our information is static
- Our systems need to evolve and include considerations for personalization, predictability, federation, and controllability.
If you were tasked with setting up a quality system, you would typically start with a standard, such as ISO, ITIL, or COPC, then define and document your processes, capture them in a document management system, and, through audits and controls, drive compliance with your quality system.
Now we have the internet, search engines, wikis, and a plethora of emerging Web 2.0 applications that put vast amounts of information at everyone's fingertips. People can find better ways to do things with a simple web search, which challenges the corporate notion of a centrally prescribed, predictable system. A classic quality system just can't keep up. So what should we do?
Creation & Consumption of Knowledge
A strong quality system had a quality library that contained the processes and procedures describing how work got accomplished. In pursuit of predictability, everyone was expected to do specific tasks in a specific way, as described in the quality library. We monitored and assessed our work processes against the documented structures, and adjusted the library and/or the people to stay in compliance.
In the past, wikis, intranets, the internet, and strong search engines were still technologies of the future. Everyone received knowledge in exactly the same way, and authors created it according to a prescribed and controlled method. Now, producers and consumers of knowledge expect to author and consume it when they need it, in whatever form is most convenient at the time. Knowledge management has become a very personal endeavor.
Wikis are increasingly popular, especially among the engineering community because of their flexibility and federated authoring capabilities. It’s a fairly simple task to create or edit a wiki page that can be available to others as soon as the author publishes the information. Add a moderator and you have an arguably robust and flexible knowledge management system.
However, as wikis age and moderators come and go, the knowledge ages much as it does in classic document management systems unless great effort is invested to keep the information relevant. Additionally, consuming wiki information is still fairly prescriptive, requiring the user to access the knowledge in a specific manner, e.g. via a browser. This method may not fit an individual's natural workflow, rendering the knowledge of limited utility. One pragmatic countermeasure is to audit the wiki for stale pages, as sketched below.
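As a minimal sketch of such an audit, assuming a MediaWiki-style API (the endpoint URL and the one-year threshold are placeholders I've invented for the example), a small script can flag pages nobody has touched recently:

```python
from datetime import datetime, timedelta, timezone

import requests  # third-party: pip install requests

# Hypothetical endpoint; any MediaWiki-style wiki exposes page
# titles and last-revision timestamps through a query like this.
API_URL = "https://wiki.example.com/w/api.php"
STALE_AFTER = timedelta(days=365)

def stale_pages():
    """Yield (title, last_edit) for pages untouched in the last year."""
    params = {
        "action": "query",
        "generator": "allpages",
        "gaplimit": "50",          # first batch only; real use would paginate
        "prop": "revisions",
        "rvprop": "timestamp",
        "format": "json",
    }
    pages = requests.get(API_URL, params=params).json()["query"]["pages"]
    now = datetime.now(timezone.utc)
    for page in pages.values():
        stamp = page["revisions"][0]["timestamp"]  # e.g. 2008-06-01T12:00:00Z
        last_edit = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
        if now - last_edit > STALE_AFTER:
            yield page["title"], last_edit

for title, last_edit in stale_pages():
    print(f"STALE: {title} (last edited {last_edit:%Y-%m-%d})")
```

A report like this doesn't replace a moderator, but it focuses the moderator's limited attention on the pages most likely to have rotted.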
Structured vs. Unstructured
Most of the knowledge available is unstructured, hence the importance and popularity of search engines such as Google, Yahoo, and Microsoft. That unstructured nature has vastly expanded the amount of information that is accessible, and it is a double-edged sword. What used to take hours can now take minutes to find with a well-formed query, and the sheer volume of accessible information has made it possible to find any fact, no matter how esoteric. However, caveat emptor: the validity and quality of the presented information are left for the finder to decipher.
Guiding the consumer to the appropriate or recommended knowledge is left to ranking, voting, and tagging, which can be useful but are not as controllable as a classic document management system: certainly more flexible and vast, but not as direct. In either case, classic document management or searching unstructured information, the quality of the knowledge requires investment from the consumer, the author, or both.
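To make the trade-off concrete, here is a toy ranking heuristic; the field names, weights, and half-life are invented for the example, not any real engine's formula. It blends community votes and tag matches with a freshness decay, so a well-voted but stale page can lose to a fresher, better-tagged one:

```python
import math
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Item:
    title: str
    votes: int          # community up-votes
    tag_matches: int    # how many of the query's tags the item carries
    last_updated: datetime

def score(item: Item, half_life_days: float = 180.0) -> float:
    """Votes and tag matches raise the score; exponential decay
    sinks aging answers so stale knowledge drifts down the list."""
    age = (datetime.now(timezone.utc) - item.last_updated).days
    return (item.votes + 2.0 * item.tag_matches) * math.exp(-age / half_life_days)

now = datetime.now(timezone.utc)
items = [
    Item("Escalation procedure (official doc)", votes=40, tag_matches=1,
         last_updated=now - timedelta(days=700)),
    Item("Escalation procedure (wiki rewrite)", votes=12, tag_matches=2,
         last_updated=now - timedelta(days=30)),
]
for item in sorted(items, key=score, reverse=True):
    print(f"{score(item):6.2f}  {item.title}")
```

Notice that no one controls this ordering directly; it emerges from crowd behavior, which is exactly what makes it flexible and, for a quality manager, uncomfortable.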
Static vs. Dynamic Knowledge
Much of the information accessible to us is static. That is, it only changes when a human touches it. The good news is that a lot of humans are touching information, giving us a large body of meaningful knowledge. The bad news is that the speed of knowledge is governed by the human condition: knowledge is created only when a human creates or modifies it. This static nature has made us a search-engine-based community, where our first instinct for finding knowledge is to go to a search facility and look for it.
Information should find us at the precise moment that we need it. It seems we don't hear enough (or really much) about artificial intelligence (AI) anymore. In the 80s, AI was a hot topic; remember the War Games movie with Matthew Broderick? While I was working on my master's thesis, there were a number of competing AI technologies, e.g. knowledge-based expert systems, heuristic-based expert systems, and neural network learning systems. By the year 2000, we were supposed to have KITT (the computer in Knight Rider's Trans Am) in all of our cars; perhaps we do, given the popularity of Global Positioning Systems.
We do have simple bots (e.g. Travelocity's fare alerts or Google's news alerts), desktop gadgets (e.g. traffic and weather monitors), and an emerging set of Web 2.0 applications in the form of browser plugins. But there is an enormous opportunity for data mining and analysis to power predictive algorithms, enabling knowledge management well beyond the current paradigms.
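Even a crude version of "information finds us" is buildable today. Here is a minimal sketch of an interest-watching bot; the feed URL and the interest keywords are placeholders, and a real deployment would draw each person's interests from a profile rather than a hard-coded set:

```python
import time

import feedparser  # third-party: pip install feedparser

# Placeholders; in practice these would come from the user's profile
# so that relevant knowledge is pushed to them, not searched for.
FEED_URL = "https://wiki.example.com/changes.rss"
INTERESTS = {"escalation", "audit", "iso"}

seen: set[str] = set()

def check_once() -> None:
    """Scan the change feed and surface entries matching the user's interests."""
    for entry in feedparser.parse(FEED_URL).entries:
        if entry.link in seen:
            continue
        seen.add(entry.link)
        text = f"{entry.title} {entry.get('summary', '')}".lower()
        if any(word in text for word in INTERESTS):
            print(f"Heads up: {entry.title} -> {entry.link}")

while True:              # poll every 15 minutes, forever
    check_once()
    time.sleep(15 * 60)
</pre>
```

It is keyword matching, not prediction, but it inverts the paradigm: the knowledge comes to the person instead of the person going to a search box.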
What should we do?
We need to evolve our current thinking to include new technology capabilities and new customer expectations regarding the creation and consumption of knowledge. Some key requirements should include:
- Personalization - We need to evaluate how the consumers of knowledge work and make sure the delivery mechanisms are consistent with the individual's natural work patterns. For example, forcing people into a browser or a 'new' application is not the best way to get folks to adopt your Quality System, because it's one more thing they need to do or master.
- Predictability - Every time a query, question, or need arises, the knowledge management system should deliver the same answer, or an updated one. Today, depending on the search engine and query structure, the returned results can vary widely. We should be able to pose a question and, through prior learning, have the technology return what we need every time.
- Federation - Multiple perspectives should be able to be included to strengthen and expand the knowledge. This has been the success story of Wikipedia and one of the major reasons for the increase in wiki deployments.
- Controllability - Since one of the main reasons Quality Systems exist is to increase the predictability of what is delivered, it is also important that some sense of control be maintained in the new information paradigm; the sketch after this list shows one way to attach control metadata to federated content.
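As a toy illustration of federation and controllability coexisting (the names and the 180-day review cycle are invented for the example), each knowledge item could carry an accountable owner and a review clock alongside its open contributor list:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class KnowledgeItem:
    """A federated article that still carries control metadata."""
    title: str
    owner: str                                             # accountable moderator
    contributors: list[str] = field(default_factory=list)  # federation: open authorship
    last_reviewed: date = field(default_factory=date.today)
    review_cycle: timedelta = timedelta(days=180)

    def needs_review(self, today: date | None = None) -> bool:
        """Controllability: flag items whose review window has lapsed."""
        today = today or date.today()
        return today - self.last_reviewed > self.review_cycle

item = KnowledgeItem("Returns process", owner="chris",
                     contributors=["pat", "sam"],
                     last_reviewed=date(2008, 1, 15))
print(item.needs_review(date(2008, 9, 1)))  # True: 230 days since review, past the 180-day cycle
```

Anyone can contribute, but someone is always on the hook for currency, which is the essence of what a classic quality system was trying to guarantee.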
We have much to learn and invent in order to evolve our Quality Systems. Personally, I'm very excited to see where this goes!