davidf wrote: ↑17 Aug 2022 15:01
Taking a break (to have lunch 16:00 BST UTC+1 !) .. to be continued.
Back to Knowledge Curation
The knowledge base has undergone massive change and looks and feels better, but has all that effort led to it
being more helpful to users and taken some of the weight off the heavy lifters?
Quantitative Measurement
That is hard to measure quantitatively, but analysis of web logs might, for instance, indicate whether we have fewer abandoned searches than before.
When you click, your click usually gets logged, typically with where (page-wise) "you" (the IP address you were using at the time) were and where you were going. A search that does not result in an onward click within the knowledge base or the forum has probably not helped the searcher, and probably means that "the curated and distilled wisdom of the members" is not being accessed.
If it turns out to be a big issue, we could then look to see whether the search phrases in abandoned searches show any patterns. (I say "we"!)
Does the FHUG "setup" have web-log analysis functionality that can easily answer that question, and if it does, do we lack a baseline from before the KB overhaul against which to make comparisons? Downloading that sort of data into Excel and doing a DIY analysis is almost certainly a deep time vacuum and an inefficient use of effort.
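To make the idea concrete, here is a rough sketch of what such a DIY check might look like (Python, since it is free and widely available). I am assuming a standard "combined"-format web server access log, that searches appear as GET requests with "search" somewhere in the URL, and a ten-minute window for an "onward click" - all guesses, since I do not know how the FHUG server actually logs things.

```python
import re
from collections import defaultdict
from datetime import datetime, timedelta

# Matches the start of a "combined" format log line (an assumption about
# the server config); only GET requests are considered.
LOG_LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "GET (?P<url>\S+)')
SEARCH_URL = re.compile(r'search')        # assumed marker of a search request
FOLLOW_UP = timedelta(minutes=10)         # arbitrary "onward click" window

def requests(path):
    """Yield (ip, timestamp, url) for each GET request in the log file."""
    with open(path) as log:
        for line in log:
            m = LOG_LINE.match(line)
            if m:
                ts = datetime.strptime(m['ts'].split()[0], '%d/%b/%Y:%H:%M:%S')
                yield m['ip'], ts, m['url']

def abandoned_searches(path):
    """Count searches with no non-search follow-up from the same IP."""
    by_ip = defaultdict(list)
    for ip, ts, url in requests(path):
        by_ip[ip].append((ts, url))
    total = abandoned = 0
    for hits in by_ip.values():
        hits.sort()
        for i, (ts, url) in enumerate(hits):
            if SEARCH_URL.search(url):
                total += 1
                # A repeated/refined search does not count as an onward click.
                if not any(t - ts <= FOLLOW_UP and not SEARCH_URL.search(u)
                           for t, u in hits[i + 1:]):
                    abandoned += 1
    return total, abandoned

total, abandoned = abandoned_searches('access.log')   # hypothetical file name
print(f'{abandoned} of {total} searches had no onward click')
```

Even something this crude, run over logs from before and after the overhaul, would give the before/after comparison mentioned above - provided the old logs still exist.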
Qualitative Assessment
Qualitatively, have we seen a reduction in the sort of mailing list queries or forum questions whose answer is already there and available in the knowledge base (if not as easily accessible as might be hoped)?
Are there things that we can do to make the process (query to workable solution) more effective and ultimately less time-consuming - even less of a grind/graft - for those who contribute time and goodwill to answering questions (one "Megastar" comes to mind!)?
Will "closing the loop" help to do this? And is doing so (an effort now), long term, worth while (less future effort)?
Closing the Loop
What do I mean by "closing the loop"? It is almost a quality management concept: can we use the result of a user query to improve the process, so that the next time someone has a similar query they can get to an answer more effectively?
We don't have the resources to do this for every user query (and review is clearly inapplicable for some queries). In this instance I am thinking "we" should include the heavy regulars - the "minor megastars" (500+ posts?) - and probably other self-selected regulars. We might initially choose to review only topics in a sample area, or only topics that run to four pages plus; we need to constrain the scope to something that is manageable yet will indicate whether the effort is worthwhile enough to be extended.
Query Review: Outline
What might be in that review? Someone may have a ready-made checklist, but in the absence of one I might propose the following (as review prompts amplifying the main headings, rather than a "must complete" form), in approximate timeline order:
- Initial Query. Was it:
  - In the sub-forum that we expected?
  - Clear enough to attract replies?
  - Accompanied by any mention of attempting to find the answer in the Knowledge Base or Help file?
- Problem statement/clarification
  - Did the problem need clarification?
  - Did respondents manage to get sufficient clarification?
- Initial Disposition. After clarification, was the problem:
  - "Solved" by the clarification itself (i.e. an issue of knowledge)?
  - "Explained away" (i.e. no longer a problem - an issue of understanding)?
  - Solvable (to the OP's apparent satisfaction) by reference to a KB article, or possibly another forum post?
  - "Addressable" only by further discussion and problem solving?
- Problem Solving
  - Did the process of problem solving solve the problem to the original poster's apparent satisfaction?
  - Subjectively, was the issue adequately covered in the Knowledge Base (i.e. is the required content "in there")?
  - Subjectively, was the problem one of a non-knowledge-base specialist being able to access the information?
  - Was the issue outside the current support system (Help file, Knowledge Base, etc.)?
- Follow-up
  - Does the Knowledge Base need revision - now, or as a note captured for a later, wider review of the article?
  - Was the issue a new one whose solution could/should be captured as new content, or as a revision of existing content, in the Knowledge Base?
  - If the issue was one of accessing information, was it:
    - The poster being unaware of the Knowledge Base?
    - The poster choosing not to use the Knowledge Base?
    - Difficulty in distilling the problem into a searchable phrase?
    - Difficulty in fitting the problem into a topic/sub-topic hierarchy?
  - Was the issue a program bug? Has a ticket been raised with CP?
  - Was the issue in the Help file? Has a ticket been raised with CP?
  - Was the issue a program shortcoming solvable by:
    - A documented "work-around"?
    - A new plug-in?
    - A wish list request?
In a commercial environment this might be systematised as part of a quality management system.
In our environment, issues of practicality, resources and other factors (confidentiality, disagreement, etc.) arise.
Breaking here to enable digestion. Practicalities to follow?