All effective systems are based on a good understanding of user requirements. “We want it to work like Google” is an aspiration, not a user requirement. In this chapter a range of approaches are suggested to help define user requirements. No single approach is better than the others, and usually a blend of several is required. However a balance needs to be kept. At one end of the spectrum is the Google approach, in which innovations are tested out on customers and, if there is a positive reaction, the innovation becomes a Google product. Apple is at the other end of the spectrum. The late Steve Jobs commented that Apple needed to provide customers with what they wanted even though they did not yet know what that was.
One of the challenges of enterprise search is that almost everyone uses Google’s public web search as the definition of best practice. In Chapter 2 I pointed out that this is not a useful approach to defining the requirements for enterprise search but any discussion about user requirements will almost inevitably migrate towards a discussion about Google.
The general lack of support for search invariably means that little attention is paid to defining user requirements, and all too often changes to either a user interface or the implementation of a new search application are largely based on anecdote and hearsay.
The value of user research is not just in defining the requirements for technology but also in setting a benchmark that can then be used in the future to prioritise search enhancement activities.
In this chapter some of the techniques that can be used to define user requirements are presented. These may help define perhaps 80% of what is required. The remaining 20% will only be discovered over time and some proportion of the 80% will be found not to be of value. This is because:
The organization itself will change over time, giving rise to new requirements and making others less important.
As users become competent in using the search application they will start to push the boundaries of what is on offer.
Software upgrades will offer new search functionality.
As new content sources are indexed additional functionality may be required to optimize search performance.
Fortunately search applications are well suited to being modified and enhanced to meet emerging requirements, unlike many other enterprise applications where a change in business practice may require substantial and costly changes to be made.
For well over thirty years there has been a great deal of research into trying to understand how users go about seeking information. It is beyond the scope of this book to try and summarize all these models. Some of them have intriguing titles such as berry-picking, information foraging, information scent and orienteering. There is a good summary of these by Marti Hearst in her book Search User Interfaces, and Peter Morville takes a fresh and pragmatic view of information seeking in his book Search Patterns. Both books are in the Essential Search Library at the end of this book. What you will gain from reading about these information seeking models is that what is being attempted is the reduction of complex cognitive processes to a single model that can be evaluated in practice.
My contribution to the discussion about information seeking is rather simplistic. I refer to it as the Eureka! Triangle (see Figure 3-1).
When looking for information we use three processes: browsing, through the navigation of an intranet or the folder structure of a document management system; searching; and being alerted, either through RSS feeds or search profiles running in the background. These need to be kept in balance. In the case of intranets there can be such a focus on information architecture that when a participant in a usability test resorts to search the intranet team feel they have failed. The same applies to document management systems. Step outside into the web world and organizations invest substantial amounts of money in designing home pages, only to find that a significant number of visitors arrive via Google or Bing, land deep inside the web site, and may never see the wonderful carousel on the home page.
It is not unusual for an organization to have more than one search engine. The organization may have grown by acquisition, a major project justified having its own search engine and many enterprise applications will have embedded search functionality. There could also be clear business cases for eCommerce search on a web site and eDiscovery search for legal and compliance purposes. If there are existing search applications then the good news is that there will hopefully be some useful search logs and user experience. The bad news is that users will have found ways to get the best of the current search applications, and if search really is important to the organization there will be some reluctance to face the prospect of learning a new application.
Before any user requirement work is undertaken it is essential to have a good communications strategy that keeps everyone informed about the progress of the project. It could be that after a lot of user research the outcome is that there is no clear business case for an investment in a new search engine. As well as managing the expectations of all the stakeholders, a news item on the intranet should make a point of inviting employees new to the organization to come forward and talk to the project team. The reasons are twofold. First, the induction period is always stressful, and newcomers are likely to have stress-tested the current applications. Second, they may have experience of other search engines and arrive with a different set of expectations about what makes a good search experience.
The work carried out on defining user requirements is also of significant value in assessing search performance. From the outset the choice of a user research approach and the way that it is carried out should also take into account the potential use of the approach in search evaluation. If a survey of requirements is going to be conducted then the questions should be chosen so that at least some of them provide benchmarks for performance assessment in due course.
Many organizations carry out what are often referred to as ‘climate surveys’ to assess the attitudes of staff towards culture, management approach and operational issues. These surveys are usually carried out annually and should include a question about whether employees feel that they can find the information they need to make decisions or carry out tasks. This is one good example of a metric that can be used to assess the post-implementation success of the search engine. If the current level of satisfaction is 60% there is certainly going to be room for improvement.
Asking people to maintain a diary of their search experiences can provide valuable information, but the design of the diary sheet needs to be developed with care, and with some pilot trials. Expecting people to complete a diary on a daily basis for a period of time is not realistic. This is at best a dip-stick test to see if there are any outlier search requirements which have not been identified using other techniques.
The information that could be collected in a diary entry would include:
The reason for the search (“Needed to find the latest version of the security policy”)
The query used (“Security policy”)
How many results were returned?
Were you successful in finding the information, and how long did it take you?
If you were unsuccessful what did you do next?
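If the diary is captured electronically rather than on paper, the fields above can be modelled as a simple record. The sketch below, in Python, is purely illustrative: the field names and the sample entry are assumptions, not part of any particular diary tool.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DiaryEntry:
    """One search-diary record; the field names are illustrative only."""
    entry_date: date
    reason: str               # why the search was carried out
    query: str                # the query as typed
    results_returned: int     # number of results reported by the engine
    successful: bool          # did the searcher find what was needed?
    minutes_taken: float      # elapsed time, self-reported
    next_step: Optional[str] = None  # follow-up action if unsuccessful

# A hypothetical entry matching the example in the text.
entry = DiaryEntry(
    entry_date=date(2024, 3, 5),
    reason="Needed to find the latest version of the security policy",
    query="security policy",
    results_returned=142,
    successful=False,
    minutes_taken=6,
    next_step="Asked a colleague for a direct link",
)
```

Keeping the structure this small mirrors the advice above: a diary that takes more than a minute or two to complete will simply not be filled in.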
The best way to get useful outcomes is to agree with volunteers on perhaps just two days in a specific week when they will use the diary, such as a day when they are planning an internal presentation or preparing a project report. A quick telephone call during the course of the day to be supportive will be welcomed by the volunteers, as will a public acknowledgement of the role that they have played. These volunteers would in particular be a good set of participants in later proof of concept or implementation tests.
It can be very tempting to run focus groups. The logic is that getting together a group of people who make extensive use of search would be a good way to start to develop a set of requirements. However it is highly likely that these people would be able to use almost any search engine and get the best out of it. Providing a good solution to people who find the current search application untrustworthy or difficult to use is just as important but it can be very difficult to find potential participants.
There is usually pressure from senior managers to set up some focus groups. These rarely have the desired effect as the participants may be unwilling to highlight problems that they find in obtaining and using information lest the other participants mark them down as incompetent. Running a focus group also requires two people, one to facilitate and one to record the comments, so some of the potential gains in interviewer time are already at risk. Then there is the challenge of making sure that all the participants turn up, so that the group is representative of a group of employees. Having someone miss the meeting and then insist on having an individual interview again wastes time and delays the conclusion of the project.
It is probably better to use focus groups later in the requirements-gathering process to validate some initial outcomes than to use them as an initial source of requirements.
The team at New Idea Engineering use ‘Development Dollars’ to prioritize requirements. They give the group $100 and ask them to buy the requirements that they need. They soon get the idea that budgets are limited and quickly allocate the $100 across perhaps just three or four requirements. The process itself can reveal a lot about the priorities of each of the members of the group that have not come out in the discussion phase of the group interview.
A review of help desk calls is a very important part of the user requirements gathering, even if there has not been a specific search help desk in the past. The help desk tickets may reveal many points of failure, even if rarely points of success. It is also important to bear in mind that reducing calls to help desks is important in terms of employee satisfaction and help desk productivity.
In 2002 Microsoft user experience researchers Joey Benedek and Trish Miner developed a set of 118 adjectives that could be used to define usability in test situations. These adjectives are often used in the initial stages of an intranet or web site implementation but are just as relevant in the early stages of defining search requirements.
Some of the adjectives in the list are directly relevant to search, including:
Comprehensive
Convenient
Customisable
Easy to use
Fast
Secure
The approach is especially useful when trying to understand the good and bad points about a current search implementation. There are various ways of using these terms in the process of starting to define user requirements. Ideally each word should be written on a card, and a set of cards given to small groups of users. The number in each group should be no more than five, because the objective is to get a discussion going about the terms that best describe the current search application, and the terms that should define the re-launched search. Initially each group should be asked to select eight cards for the current search application, and then in a second run for the new application. Once eight have been selected then the groups might be asked to bring the total down to five.
This approach is highly qualitative and its value is more in starting to gain the involvement of users than in developing a checklist of requirements based on the final outcomes of the card sorting tests.
It is possible to carry out this process remotely, just asking people to highlight the descriptions they have selected, but the best results are gained from a number of groups working together, presenting their results and then having a short discussion about the similarities and differences between the group results.
It is important to position this process as a ‘fun’ process which is just one input into defining the overall user requirements.
A widely-used technique in the design and development of web sites and intranets is the use of personas. A persona is a fictional person who represents characteristics of a group of people with similar requirements for information to undertake tasks.
Personas bring many overall user-focus benefits, including:
Users’ goals and needs become a common point of focus for the team.
The team can concentrate on designing for a manageable set of personas knowing that they represent the needs of many users.
By always asking, “Would Anne use this?” the team can avoid the trap of building what users ask for rather than what they will actually use.
Design efforts can be prioritized based on the personas, and so design and project creep can be managed.
Disagreements over implementation decisions can be sorted out by referring back to the personas.
Implementations can be constantly evaluated against the personas, where appropriate using business end-users who were involved in the development of each of the personas.
The usability consultant Donald Norman sums it up well:
Do Personas have to be accurate? Do they require a large body of research? Not always, I conclude. The Personas must indeed reflect the target group for the design team, but for some purposes, that is sufficient. A Persona allows designers to bring their own life-long experience to bear on the problem, and because each Persona is a realistic individual person, the designers can focus upon features, behaviors, and expectations appropriate for this individual, allowing the designer to screen off from consideration all those other wonderful ideas they may have. If the other ideas are as useful and valuable as they might seem, the designer’s challenge is to either create a scenario for the existing Persona where it makes sense, or to invent a new Persona where it is appropriate and then to justify inclusion of this new Persona by making the business case argument that the new Persona does indeed represent an important target population for the product.
However be aware that intranet personas may not be appropriate to the requirements of enterprise search and it is advisable to develop a set of search personas which drill down into search requirements in more detail. Figure 3-2 shows one approach to segmenting user requirements into four broad categories, each of which could be represented by one or two personas.
The term ‘current domain’ is used both in an organizational sense (my current business unit) and in an expertise sense (I am a chemist). A novel domain could be someone moving to a new business unit, or taking on different responsibilities, such as a research chemist taking on a business planning role. Precision and recall should not be taken as absolutes but as indicating either a requirement for a few specific documents or for a much larger group of relevant documents.
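For readers who want the standard information-retrieval definitions behind these two terms: precision is the fraction of retrieved documents that are relevant, and recall is the fraction of all relevant documents that were retrieved. A minimal sketch in Python follows; the document identifiers are invented for illustration.

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for one query.

    retrieved -- the set of documents the engine returned
    relevant  -- the set of documents judged relevant
    """
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Four documents returned, three judged relevant, two of them found:
p, r = precision_recall({"d1", "d2", "d3", "d4"}, {"d1", "d2", "d5"})
# p = 0.5 (half the results are relevant); r is about 0.67 (one relevant item missed)
```

A searcher who needs only a few specific documents cares mostly about the first number; a searcher in a compliance situation, who must find everything, cares about the second.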
One of the critical success factors in search is gaining an understanding of the user context. Search logs may disclose what search terms have been used, but not why they were used.
Every organization has team meetings, though increasingly these are virtual team meetings, which require substantially more planning. Teams tend to have regular tasks, such as providing monthly status reports on new projects, revising corporate policies and tracking the activities of competitors. Sitting in on these meetings can help identify the types of searches that are carried out and what would be the desirable outcomes of the search process. The benefit of teams over focus groups is that team members will feel comfortable with each other and have a collective focus on certain corporate objectives which may well determine career development opportunities or compensation awards.
However there is no point just turning up at the meeting and asking for input on search requirements in the Any Other Business section of the meeting. The programme of attendances at the team meetings needs to be highlighted on the intranet. It is also important to have the discussion about search fairly high up on the agenda, so that it is positioned as an important topic. Having the discussion on the agenda also (hopefully!) ensures that attendees come prepared.
Of course, teams increasingly work and meet on a virtual basis, and this requires more preparation as the attention span of participants may well be lower when taking part in a meeting which may have been scheduled at a time that is not totally convenient for them. On the positive side as the attendees will be participating through a networked computer it may be possible for them to demonstrate some of the aspects of the current search application which they would like to see enhanced.
Always offer members of the team the option to talk individually about their search experience and requirements. They may not wish to disclose to their colleagues that they are having difficulty with the search applications.
Sadly in many organizations the resources to carry out usability studies are very limited, and often there are no corporate usability specialists. Work on the usability of the corporate web site may well have been outsourced. Using external expertise is not ideal for internal applications because a good understanding of the business is needed in both agreeing the tasks and interpreting the results.
There is a lot of debate about how many participants should be used for each test. Jakob Nielsen suggests that five participants will highlight most of the main issues with the search application, and for the purposes of gaining an indication of user requirements for the specification of a new search application that is probably a good number to aim for.
A use case is defined as a list of steps defining interactions between a user (sometimes referred to as the ‘actor’) and a system to achieve an objective. There is no ‘correct’ way to present a use case, and the use cases set out below are very informal ones. However they can be useful in starting to translate user requirements into a specification, something that is more difficult to do with personas. Any given employee may display many use cases.
The ten use cases set out below are very pragmatic, based on my observations of people at work in organizations. They are deliberately set out in alphabetical order, as no single use case, or set of use cases, is more common or more important than the others. The use cases have titles which should be recognizable in organizations.
It is quite common in organizations to look for trends in performance, which could be financial, or measured in more complex Key Performance Indicators (KPIs). To undertake this analysis a user may want to find a defined set of reports, and some or all of these may contain a substantial element of numeric data. This is the area of content analytics and data/text mining, and on the edges of business intelligence.
In this use case there is a requirement for high recall to verify that all the critical information has been identified. Although this is typical in a compliance situation it can also occur when there is a need to locate all the project reports on a defined project, or all the products that use a specific chemical over which there is a concern about poor quality standards.
The need to locate people, and in particular people with expertise, is often overlooked in designing search. All too often there are two search boxes, one for [search] and one for [people], which is unhelpful when the user is trying to find out who knows something, or even who knows which are the relevant documents. Many searches are carried out in an effort to find people with relevant expertise, and not just to find the documents themselves.
In many organizations it is not unusual to have a staff turnover of more than 10% per annum, and there is sometimes a specific area of an intranet that supports early-stage induction into the organization. In addition there are many employees who will take on new roles and responsibilities during the year, perhaps in a different office or even in a different country. An important issue here is whether the search application will be able to provide a best bet, so that the results of a search can be placed in context, and/or some tagging from other users which rates a document as being of particular value.
The user’s search will only be satisfied by finding a specific document, perhaps a presentation to a team or a project wrap-up document.
A feature of the learning use case is that the user is not at all sure about the best way to frame a search query. They may be seeking information on the work that the organization has undertaken to reduce its carbon footprint, and this could be covered by a very wide range of terms from corporate social responsibility to green engineering.
The easy element of the Mobile use case is that the user will be using a screen format which is smaller than the average desktop. The more difficult elements are the authentication that may be required, the inability to print out the results of a search, single-tasking resulting in the need to open a different application to read an item listed in the search results, and the way in which the query is formulated. This formulation could be heavily dependent on location if GPS is used as a background search criterion, something that may not be apparent to the user, or even useful if the implicit criterion is not relevant to the search.
The main characteristic of this use case is that the search requirements are fairly consistent over a period of time, and the ability to be alerted to new information as soon as it has been indexed is usually very valuable.
When a user is searching for information on a particular product or service, either as a basis for internal review or to meet the requirements of a supplier or customer, then a near miss is not good enough. If product code AC34-345-12 does not appear on the first page of search results then the user has a problem on their hands.
Supporting standard tasks should be an important role for a search application, but few companies have any firm idea of what a task involves if it is not embodied in a workflow process. Understanding the information content of a task is going to be increasingly important in speeding the decision-making process and many organizations and search vendors are looking with considerable interest at search-based applications.
One-on-one interviews with employees can often uncover surprisingly complex tasks that depend on accessing multiple information sources. An example might be to set up a project team. This may require finding information on:
The procedures for setting up a project
Whether a project of this type, or for this client, has been carried out previously
The forms that need to be completed and forwarded to other departments
Who the members of the project team should be
The current availability of the prospective team members
Internal guidelines on this particular type of project
The project progress reporting procedures
It is very easy to spend time interviewing users and end up with little relevant information. This is because it can be so easy to move away from the core subject of the interviews and get into specifics of design and content that are then difficult to scale up into a set of user requirements.
In setting up user interviews it is easy to think in terms of departments or roles, but in specifying search requirements some lateral thinking is called for.
Some important categories of users that are often overlooked in the interview programme include:
Personal assistants to directors and senior managers
Employees who have recently joined the organization, not just because they will be coping with the usual induction issues but also because they may have experience of how search is delivered in their previous organization
Employees with a background in the sciences, law and medicine who will be familiar with large-scale information systems from their time at college, and during the course of their careers.
Excellent advice can be found in Steve Portigal’s book The Art and Craft of User Research Interviewing.
In conducting interviews I have found this diagram to be of value in getting the discussion going (see Figure 3-3).
The objective is to gain an understanding of information gathering that is carried out on a regular basis (and could be supported by search alerts) and ad hoc requirements which are almost always carried out under time pressures. This diagram also distinguishes information which has been collected and is under the management of a team or department and the need to discover information that may be anywhere in the enterprise.
I encourage interviewees to write on the diagram and collect these together as I go along. In many cases the interviews have to be carried out by telephone and sending this diagram in advance with a brief description of its purpose enables me to get quickly into the interview without wasting time. It is possible to let a face-to-face interview extend to 50 minutes but a telephone conversation needs to be limited to 30 minutes.
Conducting user surveys with web-based survey tools has transformed the effort required to carry out large-scale surveys and have the results available in a short period of time. There are some important guidelines that should be taken into account in designing the search survey:
Start out with no more than ten questions, which will probably take a user around 10 minutes (or a cup of coffee) to complete. Anything longer will need very careful design.
The questions should be intuitive, so that respondents gain an immediate understanding of why the question is being asked.
Ideally provide an indication of how far through the survey a respondent has reached.
Don’t ask questions that rely on feats of memory about what the respondent did over a past period of time. ‘Do you use search now more than you did a year ago?’ has no value at all.
Don’t expect respondents to write essays in a text box. Invite respondents to contact you if they would like to talk through issues in more detail.
Recognise that it may be better to send out different surveys to specific user groups than try to accommodate the views of the entire workforce with a single set of questions.
If using Likert or Likert-like scales do not average out the scores. Use the median.
Commit to summarizing the outcomes by a given date, and invite respondents to comment on the results.
Test the survey, and then test it again.
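The earlier point about medians deserves a concrete illustration. With ordinal Likert responses the arithmetic mean can suggest a middling opinion that no respondent actually holds, while the median stays on the scale. A small Python illustration, using invented response data:

```python
from statistics import mean, median

# Invented five-point Likert responses (1 = strongly disagree, 5 = strongly agree).
# Opinion here is polarised: a few strong detractors, many strong supporters.
responses = [1, 1, 2, 4, 5, 5, 5, 5]

average = mean(responses)     # 3.5 -- implies a lukewarm view nobody expressed
midpoint = median(responses)  # 4.5 -- shows the balance tilts strongly positive
```

Reporting the mean here would hide exactly the polarisation that the project team most needs to investigate.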
For more guidance turn to Surveys That Work by Caroline Jarrett. As with user interviews there is a substantial body of good practice about the conduct of surveys. You are only going to do it once so it is advisable to do it properly. The future of the organization could depend on the outcomes.
If the aim of an enterprise search project is to improve search performance it is important to benchmark the current application. Great care is required to ensure that the test searches that are carried out are directly comparable with those undertaken initially in the Proof of Concept tests (Chapter 9) and then after the implementation (Chapter 10). The search queries need to be ‘real’ queries, not just queries dreamt up over a cup of coffee by the project team. The content scope should also be defined; perhaps all documents associated with a particular project or product launch. This collection is sometimes referred to as the Gold Collection or Golden Collection as it will be used on a regular basis. Not only is this collection of value in benchmarking the current application against the new application but also to assess the impact of changes that are made to the ranking parameters.
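A Gold Collection benchmark needs very little machinery. The sketch below assumes a judged set of real queries and a `search()` callable wrapping whichever engine is under test; the queries, document identifiers, and the choice of precision at 10 as the metric are all illustrative assumptions rather than a prescribed method.

```python
# Each real query is mapped to the set of documents judged relevant for it.
GOLD_COLLECTION = {
    "security policy": {"doc-17", "doc-42"},
    "project alpha wrap-up report": {"doc-88"},
}

def precision_at_10(results, relevant):
    """Fraction of the top-10 results that are judged relevant."""
    top = results[:10]
    if not top:
        return 0.0
    return sum(1 for doc in top if doc in relevant) / len(top)

def benchmark(search):
    """Run every gold query through the engine and score the result lists."""
    return {
        query: precision_at_10(search(query), relevant)
        for query, relevant in GOLD_COLLECTION.items()
    }
```

Running the same harness before and after a change to the ranking parameters, or against both the incumbent and candidate engines, gives directly comparable numbers.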
Search benchmarking is especially important in the case of web site search, as here the competition is certainly going to be Google. Trying to implement a search application that is ‘better’ than Google is a waste of time unless you are prepared to invest the $10 billion that Google currently spends annually on research and development. In many organizations, such as universities, the web site is a core information resource but the queries that might be posted from academic and research staff are likely to be very different to those from prospective students.
Search logs are an invaluable source of user requirements, but they are covered in more detail in Chapter 10.
Stories about search successes and failures can be very powerful in supporting a business case, but not in defining the functionality of the search application. It is not sensible to extrapolate specific required features from even a substantial number of stories.
All search applications should encourage users to provide feedback on their search experience, be it good or bad. A simple form on the search home page that gives users an opportunity to write a brief comment is all that is needed. The form should automatically capture the query terms. Asking users to fill in a detailed questionnaire never works. Calling them personally to discuss the search outcomes always pays dividends.
Almost certainly what will emerge from this work is a classic 80/20 set of requirements; good agreement on the core requirements and quite a number of outliers. It is important to make sure that the reasons for these outlier requirements are fully understood. It is essential that the draft user requirements report is circulated widely, and certainly to anyone who was involved in any way with the user research. It may not be until these employees read the report that it becomes evident that one particular group feels they did not present their case clearly enough. Other readers, seeing the results, may be able to contribute additional insights, and perhaps a story that can be used for emphasis.
All this takes time. The overall schedule might go as follows:
Month 1:
Plan out the user research project and brief all those who will be involved about the objectives and scope of the research
Month 2 and Month 3:
Allow two months as a minimum for the user research. Setting up meetings with individual teams can often be a critical step in the timing as these may only happen on a monthly basis
Month 4:
Summarise the outcomes and check any anomalies before preparing the draft requirements report
Month 5:
Allow several weeks for a review by participants before concluding the user requirements work and writing the final report.
This suggests that work on the user requirements research probably needs to start six months before the process of writing the requirements for a new search application or for an enhancement to the current search application. This may seem quite an extended period of time but this is an application which could make a significant difference to the performance of everyone in the organization and the performance of the organization itself.
Your employees will search in many different ways. There could be one small user group for whom a search engine with a particular feature could have a significant impact on operational performance. The user experience with a search engine starts at the point that the user realizes that they need to find a piece of information and ends with the successful use of that piece of information to make a good decision. The range of use cases will mean that a range of different techniques are going to have to be employed, with consequences for the research schedule and for the resources needed. As far as possible use techniques that can be used to measure the success of the implementation. Above all remember the adage that if it can’t be measured then it can’t be managed.
You'll find some additional information regarding the subject matter of this chapter in the Further Reading section in Appendix A.