Chapter 3. Defining User Requirements

All effective systems are based on a good understanding of user requirements. “We want it to work like Google” is an aspiration, not a user requirement. In this chapter a range of approaches is suggested to help define user requirements. There is no single approach that is better than the others, and usually a blend of several is required. However, a balance needs to be kept. At one end of the spectrum is the Google approach, in which innovations are tested out on customers and, if there is a positive reaction, the innovation becomes a Google product. Apple is at the other end of the spectrum. The late Steve Jobs commented that Apple needed to provide customers with what they wanted even though they did not yet know what that was.

One of the challenges of enterprise search is that almost everyone uses Google’s public web search as the definition of best practice. In Chapter 2 I pointed out that this is not a useful approach to defining the requirements for enterprise search but any discussion about user requirements will almost inevitably migrate towards a discussion about Google.

The general lack of support for search invariably means that little attention is paid to defining user requirements, and all too often changes to either a user interface or the implementation of a new search application are largely based on anecdote and hearsay.

The value of user research is not just in defining the requirements for technology but also in setting a benchmark that can then be used in the future to prioritise search enhancement activities.

In this chapter some of the techniques that can be used to define user requirements are presented. These may help define perhaps 80% of what is required. The remaining 20% will only be discovered over time and some proportion of the 80% will be found not to be of value. This is because:

Fortunately search applications are well suited to being modified and enhanced to meet emerging requirements, unlike many other enterprise applications where a change in business practice may require substantial and costly changes to be made.

For well over thirty years there has been a great deal of research into trying to understand how users go about seeking information. It is beyond the scope of this book to try and summarize all these models. Some of them have intriguing titles such as berry-picking, information foraging, information scent and orienteering. There is a good summary of these by Marti Hearst in her book Search User Interfaces, and Peter Morville takes a fresh and pragmatic view of information seeking in his book Search Patterns. Both books are in the Essential Search Library at the end of this book. What you will gain from reading about these information seeking models is that what is being attempted is the reduction of complex cognitive processes to a single process that can be evaluated in practice.

My contribution to the discussion about information seeking is rather simplistic. I refer to it as the Eureka! Triangle (see Figure 3-1).

When looking for information we use three processes: browsing, through the navigation of an intranet or the folder structure of a document management system; searching; and alerting, either from RSS feeds or from search profiles running in the background. These need to be kept in balance. In the case of intranets there can be such a focus on information architecture that when someone uses search in a usability test the intranet team feel they have failed. The same applies to document management systems. Step outside to the web world and organizations invest substantial amounts of money in designing home pages, only to find that a significant number of site visitors arrive via Google and Bing, start deep inside the web site and possibly never see the wonderful carousel on the home page.

It is not unusual for an organization to have more than one search engine. The organization may have grown by acquisition, a major project justified having its own search engine and many enterprise applications will have embedded search functionality. There could also be clear business cases for eCommerce search on a web site and eDiscovery search for legal and compliance purposes. If there are existing search applications then the good news is that there will hopefully be some useful search logs and user experience. The bad news is that users will have found ways to get the best of the current search applications, and if search really is important to the organization there will be some reluctance to face the prospect of learning a new application.

Before any user requirement work is undertaken it is essential to have a good communications strategy that keeps everyone informed about the progress of the project. It could be that after a lot of user research the outcome is that there is no clear business case for an investment in a new search engine. As well as managing the expectations of all the stakeholders, a news item on the intranet should make a point of inviting employees new to the organization to come forward and talk to the project team. The reasons are twofold. The first is that the induction period is always stressful, and it is likely that newcomers will have stress-tested the current applications. The second is that they may have experience with other search engines and come with a different set of expectations about a good search experience.

The work carried out on defining user requirements is also of significant value in assessing search performance. From the outset the choice of a user research approach, and the way that it is carried out, should take into account the potential use of the approach in search evaluation. If a survey of requirements is going to be conducted then the questions should be chosen so that at least some of them provide benchmarks for performance assessment in due course.

Many organizations carry out what are often referred to as ‘climate surveys’ to assess the attitudes of staff towards culture, management approach and operational issues. These surveys are usually carried out annually and should include a question about whether employees feel that they can find the information they need to make decisions or carry out tasks. This is one good example of a metric that can be used to assess the post-implementation success of the search engine. If the current level of satisfaction is 60% there is certainly going to be room for improvement.
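As an illustration of how such a benchmark might be computed, the sketch below tallies a findability question on an assumed 1-to-5 agreement scale. The question wording, the scale and the responses are all invented for the example; they are not taken from any particular survey tool.

```python
# Hypothetical sketch: computing a findability benchmark from climate
# survey responses. The 1-5 agreement scale and the responses are
# assumptions made for illustration.
responses = [5, 4, 2, 5, 3, 1, 4, 4, 2, 5]  # 1 = strongly disagree ... 5 = strongly agree

# Treat a response of 4 or 5 as "satisfied" and report the percentage.
# This figure becomes the baseline against which the post-implementation
# survey can be compared.
satisfied = sum(1 for r in responses if r >= 4)
baseline = 100 * satisfied / len(responses)
print(f"Findability satisfaction: {baseline:.0f}%")  # 60% for this sample
```

Running the same question, on the same scale, after implementation makes the before-and-after comparison direct.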

Asking people to maintain a diary of their search experiences can provide valuable information, but the design of the diary sheet needs to be developed with care, and with some pilot trials. Expecting people to complete a diary on a daily basis for a period of time is not realistic. This is at best a dip-stick test to see if there are any outlier search requirements which have not been identified using other techniques.

The information that could be collected in a diary entry would include:

The best way to get useful outcomes is to agree with volunteers on perhaps just two days in a specific week when they are going to use the diary, perhaps a day when they are planning an internal presentation or preparing a project report. A quick telephone call during the course of the day to be supportive will be welcomed by the volunteers, as will a public acknowledgement of the role that they have played. These volunteers in particular would be a good set of participants in later proof-of-concept or implementation tests.

It can be very tempting to run focus groups. The logic is that getting together a group of people who make extensive use of search would be a good way to start to develop a set of requirements. However it is highly likely that these people would be able to use almost any search engine and get the best out of it. Providing a good solution to people who find the current search application untrustworthy or difficult to use is just as important but it can be very difficult to find potential participants.

There is usually pressure from senior managers to set up some focus groups. These rarely have the desired effect, as the participants may be unwilling to highlight problems that they find in obtaining and using information lest the other participants mark them down as incompetent. Running a focus group also requires two people, one to facilitate and one to record the comments, so some of the potential gains in interviewer time are already at risk. Then there is the challenge of making sure that all the participants turn up, so that the group is representative of the wider employee population. Having someone miss the meeting and then insist on having an individual interview again wastes time and delays the conclusion of the project.

It is probably better to use focus groups later in the requirements-gathering process to validate some initial outcomes than to use them as an initial source of requirements.

The team at New Idea Engineering use ‘Development Dollars’ to prioritise requirements. They give the group $100 and ask them to buy the requirements that they need. They soon get the idea that budgets are limited and quickly allocate the $100 across perhaps just three or four requirements. The process itself can reveal a lot about the priorities of each of the members of the group that have not come out in the discussion phase of the group interview.
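A group’s allocations can be tallied in a few lines of code. The requirement names and dollar amounts below are invented for illustration:

```python
# Hypothetical sketch of tallying a 'Development Dollars' exercise.
# Each dict is one participant's allocation of their $100; the
# requirement names and amounts are invented.
allocations = [
    {"better relevance": 50, "people search": 30, "document preview": 20},
    {"better relevance": 40, "faceted filters": 40, "people search": 20},
    {"faceted filters": 60, "better relevance": 40},
]

# Sum the spend on each requirement across the whole group.
totals: dict[str, int] = {}
for person in allocations:
    for requirement, dollars in person.items():
        totals[requirement] = totals.get(requirement, 0) + dollars

# Rank requirements by total spend to surface the group's priorities.
for requirement, spend in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(requirement, spend)
```

The ranking, rather than the exact figures, is what matters: it shows where the group’s money (and attention) actually went.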

A review of help desk calls is a very important part of the user requirements gathering, even if there has not been a specific search help desk in the past. The help desk tickets may reveal many points of failure, even if rarely points of success. It is also important to bear in mind that reducing calls to help desks is important in terms of employee satisfaction and help desk productivity.

In 2002 Microsoft user experience researchers Joey Benedek and Trish Miner developed a set of 118 adjectives that could be used to define usability in test situations. These adjectives are often used in the initial stages of an intranet or web site implementation but are just as relevant in the early stages of defining search requirements.

Some of the adjectives in the list are directly relevant to search, including:

The approach is especially useful when trying to understand the good and bad points about a current search implementation. There are various ways of using these terms in the process of starting to define user requirements. Ideally each word should be written on a card, and a set of cards given to small groups of users. The number in each group should be no more than five, because the objective is to get a discussion going about the terms that best describe the current search application, and the terms that should define the re-launched search. Initially each group should be asked to select eight cards for the current search application, and then in a second run for the new application. Once eight have been selected then the groups might be asked to bring the total down to five.

This approach is highly qualitative and its value is more in starting to gain the involvement of users than in developing a checklist of requirements based on the final outcomes of the card sorting tests.

It is possible to carry out this process remotely, just asking people to highlight the descriptions they have selected, but the best results are gained from a number of groups working together, presenting their results and then having a short discussion about the similarities and differences between the group results.

It is important to position this process as a ‘fun’ process which is just one input into defining the overall user requirements.

A widely-used technique in the design and development of web sites and intranets is the use of personas. A persona is a fictional person who represents the characteristics of a group of people with similar requirements for information to undertake their tasks.

Personas bring many overall user-focus benefits, including:

The usability consultant Donald Norman sums it up well:

However, be aware that intranet personas may not be appropriate to the requirements of enterprise search, and it is advisable to develop a set of search personas which drill down into search requirements in more detail. Figure 3-2 shows one approach to segmenting user requirements into four broad categories, each of which could be represented by one or two personas.

The term ‘current domain’ is used both in an organizational sense (my current business unit) and in an expertise sense (I am a chemist). A novel domain could be someone moving to a new business unit, or taking on different responsibilities, such as a research chemist taking on a business planning role. Precision and recall should not be taken as absolutes but as indicating either a requirement for a few specific documents or for a much larger group of relevant documents.
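For readers unfamiliar with the terms, a minimal worked example of precision and recall for a single query may help; the document IDs are invented:

```python
# Illustrative sketch of precision and recall for one query.
# The document IDs are invented for the example.
relevant = {"doc1", "doc4", "doc7", "doc9"}   # every document that answers the query
retrieved = ["doc1", "doc2", "doc4", "doc5"]  # what the search engine returned

hits = [d for d in retrieved if d in relevant]
precision = len(hits) / len(retrieved)  # fraction of the results that are relevant
recall = len(hits) / len(relevant)      # fraction of the relevant documents found

print(precision, recall)  # 0.5 0.5
```

A user who needs “a few specific documents” cares mostly about precision; a user who must find “a much larger group of relevant documents” (a patent or compliance search, say) cares mostly about recall.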

One of the critical success factors in search is gaining an understanding of the user context. Search logs may disclose what search terms have been used, but not why they were used.

Every organization has team meetings, though increasingly these are virtual team meetings, which require substantially more planning. Teams tend to have regular tasks, such as providing monthly status reports on new projects, revising corporate policies and tracking the activities of competitors. Sitting in on these meetings can help identify the types of searches that are carried out and what would be the desirable outcomes of the search process. The benefit of teams over focus groups is that team members will feel comfortable with each other and have a collective focus on certain corporate objectives which may well determine career development opportunities or compensation awards.

However there is no point just turning up at the meeting and asking for input on search requirements in the Any Other Business section of the meeting. The programme of attendances at the team meetings needs to be highlighted on the intranet. It is also important to have the discussion about search fairly high up on the agenda, so that it is positioned as an important topic. Having the discussion on the agenda also (hopefully!) ensures that attendees come prepared.

Of course, teams increasingly work and meet on a virtual basis, and this requires more preparation as the attention span of participants may well be lower when taking part in a meeting which may have been scheduled at a time that is not totally convenient for them. On the positive side as the attendees will be participating through a networked computer it may be possible for them to demonstrate some of the aspects of the current search application which they would like to see enhanced.

Always offer members of the team the option to talk individually about their search experience and requirements. They may not wish to disclose to their colleagues that they are having difficulty with the search applications.

Sadly in many organizations the resources to carry out usability studies are very limited, and often there are no corporate usability specialists. Work on the usability of the corporate web site may well have been outsourced. Using external expertise is not ideal for internal applications because a good understanding of the business is needed in both agreeing the tasks and interpreting the results.

There is a lot of debate about how many participants should be used for each test. Jakob Nielsen suggests that five participants will highlight most of the main issues with the search application, and for the purposes of gaining an indication of user requirements for the specification of a new search application that is probably a good number to aim for.

A use case is defined as a list of steps defining interactions between a user (sometimes referred to as the ‘actor’) and a system to achieve an objective. There is no ‘correct’ way to present a use case, and the use cases set out below are very informal ones. However they can be useful in starting to translate user requirements into a specification, something that is more difficult to do with personas. Any given employee may display many use cases.

The ten use cases set out below are very pragmatic, based on my observations of people at work in organizations. They are deliberately set out in alphabetical order, as there is no single use case, or set of use cases, that is more common or more important than the others. The use cases have titles which should be recognizable in organizations.

It is very easy to spend time interviewing users and end up with little relevant information. This is because it can be so easy to move away from the core subject of the interviews and get into specifics of design and content that are then difficult to scale up to a set of user requirements.

In setting up user interviews it is easy to think in terms of departments or roles, but in specifying search requirements some lateral thinking is called for.

Some important categories of users that are often overlooked in the interview programme include:

In conducting interviews I have found this diagram to be of value in getting the discussion going (see Figure 3-3).

The objective is to gain an understanding of information gathering that is carried out on a regular basis (and could be supported by search alerts) and of ad hoc requirements, which are almost always pursued under time pressure. The diagram also distinguishes between information which has been collected and is under the management of a team or department, and information that may be anywhere in the enterprise and has yet to be discovered.

I encourage interviewees to write on the diagram and collect these together as I go along. In many cases the interviews have to be carried out by telephone and sending this diagram in advance with a brief description of its purpose enables me to get quickly into the interview without wasting time. It is possible to let a face-to-face interview extend to 50 minutes but a telephone conversation needs to be limited to 30 minutes.

The availability of web-based survey tools has transformed the effort required to carry out large-scale surveys and have the results available in a short period of time. There are some important guidelines that should be taken into account in designing the search survey:

For more guidance turn to Surveys That Work by Caroline Jarrett. As with user interviews there is a substantial body of good practice about the conduct of surveys. You are only going to do it once so it is advisable to do it properly. The future of the organization could depend on the outcomes.

If the aim of an enterprise search project is to improve search performance it is important to benchmark the current application. Great care is required to ensure that the test searches that are carried out are directly comparable with those undertaken initially in the Proof of Concept tests (Chapter 9) and then after the implementation (Chapter 10). The search queries need to be ‘real’ queries, not just queries dreamt up over a cup of coffee by the project team. The content scope should also be defined; perhaps all documents associated with a particular project or product launch. This collection is sometimes referred to as the Gold Collection or Golden Collection as it will be used on a regular basis. Not only is this collection of value in benchmarking the current application against the new application but also to assess the impact of changes that are made to the ranking parameters.
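As a sketch of how such a benchmark might be scored, the fragment below computes mean precision at 10 over a small set of gold queries. The queries, document IDs and mock engine are all invented for illustration; a real run would call the actual search engine’s query API in place of the stand-in function.

```python
# Hypothetical sketch of benchmarking search quality against a Gold
# Collection. Queries, document IDs and the mock engine are invented.
GOLD = {
    "expenses policy": {"doc12", "doc31"},
    "project alpha status report": {"doc7", "doc19", "doc44"},
}

def precision_at_k(results, relevant, k=10):
    """Fraction of the top-k results judged relevant for this query."""
    top = results[:k]
    return sum(1 for doc in top if doc in relevant) / len(top) if top else 0.0

def benchmark(run_query, k=10):
    """Run every gold query through the engine and average precision@k."""
    scores = [precision_at_k(run_query(q), rel, k) for q, rel in GOLD.items()]
    return sum(scores) / len(scores)

# A stand-in for the current engine, so the sketch runs end to end.
def current_engine(query):
    canned = {
        "expenses policy": ["doc12", "doc90"],
        "project alpha status report": ["doc7", "doc19", "doc50"],
    }
    return canned[query]

print(f"Mean precision@10: {benchmark(current_engine):.2f}")
```

The same `benchmark` function can then be run against the proof-of-concept engine and the implemented engine, and again whenever ranking parameters are changed, which is exactly why the Gold Collection repays the effort of building it.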

Search benchmarking is especially important in the case of web site search, as here the competition is certainly going to be Google. Trying to implement a search application that is ‘better’ than Google is a waste of time unless you are prepared to invest the $10 billion that Google currently spends annually on research and development. In many organizations, such as universities, the web site is a core information resource but the queries that might be posted from academic and research staff are likely to be very different to those from prospective students.

Search logs are an invaluable source of user requirements, but they are covered in more detail in Chapter 10.

Stories about search successes and failures can be very powerful in supporting a business case, but not in defining the functionality of the search application. Extrapolating specific required features from even a substantial number of stories is not sensible.

All search applications should encourage users to provide feedback on their search experience, be it good or bad. A simple form on the search home page that gives users an opportunity to write a brief comment is all that is needed. The form should automatically capture the query terms. Asking users to fill in a detailed questionnaire never works. Calling them personally to discuss the search outcomes always pays dividends.
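A minimal sketch of such a feedback capture might look like the following. The field names and the in-memory log are assumptions made for illustration; in practice the record would be written to whatever store the search team uses.

```python
# Hypothetical sketch of the feedback capture described above. The key
# point is that the query terms are recorded automatically, so the user
# only has to write the brief comment.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SearchFeedback:
    query: str     # captured from the results page, not typed by the user
    comment: str   # the single free-text field shown to the user
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

feedback_log: list[SearchFeedback] = []

def submit_feedback(query: str, comment: str) -> None:
    """Record one feedback item against the query that prompted it."""
    feedback_log.append(SearchFeedback(query=query, comment=comment))

submit_feedback("expenses policy 2014", "None of the results were the current policy")
print(len(feedback_log), feedback_log[0].query)
```

Because the query is captured alongside the comment, the personal follow-up call recommended above can start from the exact search the user was struggling with.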

Almost certainly what will emerge from this work is a classic 80/20 set of requirements; good agreement on the core requirements and quite a number of outliers. It is important to make sure that the reasons for these outlier requirements are fully understood. It is essential that the draft user requirements report is circulated widely, and certainly to anyone who was involved in any way with the user research. It may not be until these employees read the report that it becomes evident that one particular group feels they did not present their case clearly enough. Other readers, seeing the results, may be able to contribute additional insights, and perhaps a story that can be used for emphasis.

All this takes time. The overall schedule might go as follows:

Month 1:

Plan out the user research project and brief all those who will be involved about the objectives and scope of the research.

Month 2 and Month 3:

Allow two months as a minimum for the user research. Setting up meetings with individual teams can often be a critical step in the timing, as these may only happen on a monthly basis.

Month 4:

Summarise the outcomes and check any anomalies before preparing the draft requirements report.

Month 5:

Allow several weeks for a review by participants before concluding the user requirements work and writing the final report.

This suggests that work on the user requirements research probably needs to start six months before the process of writing the requirements for a new search application or for an enhancement to the current search application. This may seem quite an extended period of time but this is an application which could make a significant difference to the performance of everyone in the organization and the performance of the organization itself.

Your employees will search in many different ways. There could be one small user group for whom a search engine with a particular feature could have a significant impact on operational performance. The user experience with a search engine starts at the point that the user realizes that they need to find a piece of information and ends with the successful use of that piece of information to make a good decision. The range of use cases will mean that a range of different techniques are going to have to be employed, with consequences for the research schedule and for the resources needed. As far as possible use techniques that can also be used to measure the success of the implementation. Above all, remember the adage that if it can’t be measured then it can’t be managed.

You'll find some additional information regarding the subject matter of this chapter in the Further Reading section in Appendix A.