1 Introduction
This chapter addresses a set of questions concerning how end-users can interact with the Internet of Things (IoT):
How can end-users be enabled to directly interact with individual sensors or sensor arrays?
How can interactive IoT devices help end-users to make sense of data?
How can we design ways for end-users to qualify sensor results and share that interpretation with other users?
We explore these questions in the context of home and office automation, one of the most prevalent application scenarios currently being discussed in the context of the IoT. In office environments in particular, the use of embedded smart sensors is well advanced. In order to efficiently control indoor climate in modern office buildings, environmental monitoring technologies have been tightly integrated with building control mechanisms. Building management systems (BMS) are commonly used to orchestrate large numbers of sensors to monitor environmental conditions and control temperature, humidity, lights and blinds accordingly.
The aim of such systems is to make buildings more responsive to dynamic environmental conditions and thus more comfortable ‘on average’ for their inhabitants. However, even if they draw on localised sensor inputs to give a more complete picture of environmental conditions across a building, they still generally rely on centralised control systems. This means that individual inhabitants’ preferences are not met, either because those preferences happen to be different from an idealised ‘average user’ or because relevant local environmental conditions are not available to the system. This is a particular issue for open plan office environments shared by large numbers of inhabitants, each with individual preferences. The resulting lack of inhabitants’ control over their environment mirrors common concerns levelled at IoT scenarios, in particular questions around privacy and the locus of control.
In our research we aim to address this problem by contributing to the design of systems which allow people in open-plan office environments to control the conditions of their indoor climate. Additionally, we seek to support inhabitants in communicating their preferences with others so that an overall consensus can be reached around desired indoor climate conditions. It is worth clarifying that we do not seek to take the extra step of using this information on desired environmental conditions to actually change the functioning of environmental control systems. Doing so presents a number of significant integration challenges with both technical and social dimensions. It is a ‘wicked problem’, consisting of many interconnected challenges, and an extended research programme will be required to address these challenges in full. Although we see the work presented here as contributing a valuable first step in this programme of research, the larger challenges of integrating with functioning environmental control systems are beyond the scope of the study.
Our focus in this chapter is to explore a range of potential interaction mechanisms, feedback modalities and personal input techniques that could be employed by such systems. We have designed, implemented and conducted an initial evaluation of a system which explores these principles. The system allows users to provide feedback about their subjective impressions of comfort in an office environment for several salient environmental factors. It employs both tangible and ambient modes of input and output and also provides for the display of data sensed by the system and an aggregated representation of group preferences.
Our system is made up of the following three parts: (1) a local Sensor Platform, which is placed near users’ work area and gathers local measurements of humidity, light, temperature, and noise levels; (2) MiniOrb: a small tangible and ambient interaction device, which displays the local environmental conditions as sensed by the local sensor platform and allows the user to submit their preferences in relation to these; and (3) MobiOrb: a mobile application, which implements an alternative interface for displaying sensor information and allows users to input preferences through touch-screen interactions as more precise values.
The purpose of this chapter is to address two pertinent questions related to end-user interaction with IoT devices. Firstly, how might tangible and ambient interaction techniques be used to support people in reflecting on and recording subjective preferences in relation to comfort levels in an office setting? Secondly, how effective are these techniques compared with more conventional screen-based touch interactions for setting the same information, but with greater numerical precision?
The rest of this chapter is organised as follows. Section 2 discusses related work, specifically in the context of ubiquitous computing, ambient interaction as well as crowdsourcing and participatory sensing. Section 3 outlines the design of the MiniOrb device, its related sensor platform as well as its mobile interface. Section 4 summarises the results of the device’s evaluation. Section 5 discusses and interprets these results in the context of end-users interacting with and through IoT devices. Section 6 concludes this chapter and summarises insights gained.
2 Background
2.1 Ubiquitous Computing and Indoor Climate
Within building studies, there has been a move towards ‘user-centred’ conceptions of how people experience buildings [24]. This raises questions concerning the ways that social relations, people’s lived experience, and their day-to-day use of buildings have an effect on how they experience indoor climate. There is an awareness and recognition that far from being simply an engineering problem, the energy efficiency of buildings is as much dependent on the lived practices, use-patterns and social relations between building inhabitants [12]. In order to improve the sustainability of buildings in terms of their energy use, we need to question the models of comfort based on pre-defined steady-state conceptions of indoor climate [23].
Alongside this, there is also increasing use of ubiquitous sensing technologies within buildings. So-called ‘smart’ sensors distributed throughout a building provide data which is used to intelligently regulate indoor climate systems (Liu and Akinci). In practice, this is often realised as increasingly automated indoor climate systems; however, building occupants have been shown to have lower levels of satisfaction if they lack control over their environment [4]. Giving control of the indoor climate to people not only has benefits in terms of improving satisfaction overall [4], it can also prove effective as a way of reducing energy consumption [2].
There are a number of interaction techniques which can be employed to facilitate user engagement in this context, but in this chapter we focus on the way that tangible and ambient interaction mechanisms linked to an IoT sensing platform could be used for this purpose.
2.2 Ambient Interaction
Ambient devices are a type of interaction mechanism, designed to unobtrusively communicate information to users. In one example, Ishii et al. [11] instrumented office environments with a range of devices which could provide ambient feedback as part of their ambientROOM environment. This environment included a range of modalities such as sounds, lights, air flow and visual projections. Ambient feedback approaches have since been studied within a range of other settings (e.g. [3, 19]).
Ambient devices typically employ simple mechanisms for output, such as glowing orbs. However, despite their outward simplicity, ambient devices present several challenges to designers. Designers must consider carefully what information should be displayed by the device, how the appropriate intensity levels for notifications should be classified, and how transitions between various states should be handled by the device [15]. Besides their use as output devices, there is also an increasing interest in finding ways to integrate tangible interaction mechanisms into ambient devices so that both input and output capabilities can be provided for users (e.g. Hausen et al. [7, 10]). For instance, AuraOrb [1] augmented a “glowing orb” display with touch input and eye contact sensing. This allowed users to trigger functions of the device simply by directing their gaze to it. Other examples in the context of presence awareness and instant messaging have also explored this approach (e.g. [5, 7, 18]).
2.3 Crowdsourcing and Participatory Sensing
The notions of crowdsourcing and participatory sensing are relevant to understanding end-user interaction with IoT devices for gathering and sharing data. Jeff Howe, who coined the term [8, 9], defines crowdsourcing succinctly as: “The application of Open Source principles to fields outside of software.” Crowdsourcing is an effective approach to collecting and analysing information from large numbers of contributors using internet-based services, either implicitly or explicitly. Participatory sensing [13, 17] is a form of crowdsourcing applied in an IoT context: it describes the gathering of data through sensors that participants carry or use, without requiring participants to intervene actively. By contrast, the related concept of citizen science [6, 16], implemented through projects such as the Berkeley Open Infrastructure for Network Computing (BOINC), not only allows researchers to harness the computing power of many computers worldwide, but also supports the active contribution and analysis of data by volunteers—an approach referred to as civic intelligence [22]. Similar approaches are being employed in non-volunteer setups, including Internet marketplaces such as Amazon’s Mechanical Turk (mturk.com), which uses micro-payments to support a broad range of analysis tasks, or IBM’s Many Eyes, which facilitates the distributed creation of visualisations from datasets [25]. These related notions show a spectrum of user engagement in sharing and interpreting data that is of relevance when considering the design of end-user enabled IoT devices.
3 Design Process
The design process was based on an existing embedded sensor platform previously developed by one of the authors [21]. The platform consisted of a range of basic environmental sensors (temperature and humidity, light sensor, and sound sensor) mounted onto a custom-built circuit board. The board was designed to be placed in indoor office environments, and was envisaged to be predominantly mounted on office workers’ desks. Sensor platforms communicated wirelessly via a ZigBee mesh network, relaying sensor readings to a central server for logging.
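The relay-and-log pattern described above can be sketched as follows. This is an illustrative reconstruction only: the record fields, node identifiers and serialisation format are assumptions, since the chapter does not describe the platform's actual packet format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    # Hypothetical schema for one reading from a desk-mounted platform.
    node_id: str          # identifies the sensor platform / desk
    timestamp: float      # Unix epoch seconds
    temperature_c: float
    humidity_pct: float
    light_lux: float
    sound_db: float

def encode_reading(reading: SensorReading) -> bytes:
    """Serialise a reading before relaying it towards the logging server."""
    return json.dumps(asdict(reading)).encode("utf-8")

def decode_reading(payload: bytes) -> SensorReading:
    """Reconstruct a reading on the server side for logging."""
    return SensorReading(**json.loads(payload.decode("utf-8")))
```

In a real deployment the encoded payload would travel over the ZigBee mesh, with the server decoding and appending each reading to its log.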
The purpose of the sensor platform was to sense and monitor indoor climate data for individual desk workstations in order to log differences in environmental conditions and detect potential problems (e.g. increased sun exposure and glare due to the way blinds are operated, or noisier workstations due to the placement of air-conditioning ducts). While the sensing process was automated, the platform allowed users to provide simple feedback by means of an on-board “joystick”. This small PCB-mounted button was chosen because its very small form-factor integrated well with the overall design of the sensor platform. The placement and shape of the initial button made it not particularly suitable as a user feedback mechanism. However, the fact that feedback had been built into the initial platform made us consider expanding on the range of interactions and feedback mechanisms provided by the platform in order to collect user-preference information alongside raw sensor data and to explore to what extent the control of indoor climate could be customised to individual user preferences.
Based on these considerations, we formulated the following design goals:
The device should be unobtrusive and very easy to interact with
The device should allow users to understand current sensor readings by showing data using an ambient display mechanism
The device should enable users to set individual preferences for each of the sensor categories
The device should allow users to compare between individual and group (average) preferences, enabling them to be aware of other users’ preferences
The device should allow users to provide feedback on their level of “social connectedness”
We introduced the additional parameter of “social connectedness” as a soft measure of the overall social environment in order to complement the sensor readings made by the platform. The purpose of this measure was to allow participants to express their perceived level of office comfort with regards to their social environment in addition to indoor climate preferences. The interpretation of the term “social connectedness” was deliberately left to participants, allowing them to interpret it according to their needs and the specific context of their office environment. Rather than specifically providing a quantitative measure for social connectedness, we wanted to open up this measure for discussion in our subsequent participant interviews in conjunction with other environmental factors (please see the “study design” section for further discussion).
Basing the design of our interaction device on the existing sensor platform introduced a number of design limitations. One particular limitation was the fact that the sensor platform only possessed a small number of free input/output ports that could be used to communicate with the interaction device. Due to this we did not focus on building an interaction device that had a large or complex set of interactive capabilities. Instead, the design of the device focussed on supporting a small but sufficiently rich set of interaction mechanisms that would address our design goals while still allowing us to base our design on the existing sensor-platform infrastructure.
To design the MiniOrb, we carried out an iterative process, which involved building working prototypes that we ‘lived with’ in our work-spaces so we could experience them directly and use these experiences to drive refinements to the functionality, usability and physical form of the devices. An important point here is that the ‘behaviour’ programmed into the devices could only be fully understood by taking time to personally experience what it was like to interact with them. This led to several improvements to the design of the devices. We added audio output to provide additional feedback to the user when preferences were being set and to communicate reminders to users to interact with the device. We also found that it was necessary to enable users to compare the current reading of a sensor with their preferred setting. We also added the ability to ‘scroll’ through the various sensor readings.
During this process, we discovered the need to consider whether people would need to be able to get precise readings of the data from sensors, relative to the more ‘ambient’ display of the device. This led us to develop and design a second prototype implemented as a mobile-optimised web app. This replicated the basic functionality of the device and also allowed users to read and enter precise values for sensor readings and preferences. This mobile web-app implementation provides an alternative approach for building an interface based on the same sensor platform. In this case, the sensor platform’s functionality is accessed through a screen-based interface. This second interaction approach is more representative of typical methods for exposing and accessing the data of IoT sensing infrastructures. It therefore provided a useful comparison point that could be evaluated alongside the tangible and ambient device design.
3.1 The MiniOrb System

MiniOrb sensor platform (right) and interaction device prototype (left)
3.1.1 Sensor Platform

MiniOrb sensor platform
3.2 The MiniOrb Interaction Device

MiniOrb breakout diagram

MiniOrb ambient interaction device

MiniOrb interaction
For example, in order to display information related to the light sensor, the device cycles through three separate modes related to this parameter. It first displays the value which the sensor platform has recorded by mapping this to a relative intensity of the colour green. The “sensor” icon is illuminated by the indicator LED to signal the current state. Next, the device transitions to a display of the last recorded user preference for light levels. Similar to the display of the sensor reading, this is indicated by mapping the value to the colour green on the RGB LED and illuminating the “user” icon indicator LED. The display cycle for the light parameter completes with a display of the value of the average “group” preference for light levels. Each state is shown for about five seconds. After the light cycle is completed, the device displays a similar cycle related to the “sound” category using blue as the output colour, and so on, continually cycling through all the sensor categories and colours.
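This continuous cycling behaviour can be sketched as a simple state generator. The sketch below is illustrative only: green for light and blue for sound are taken from the text, while the colours assigned to the remaining categories are assumptions, not the authors' actual mapping.

```python
from itertools import cycle, islice

# Category-to-colour mapping. Green (light) and blue (sound) follow the
# text; red and yellow for the remaining categories are assumptions.
CATEGORY_COLOURS = {
    "light": "green",
    "sound": "blue",
    "temperature": "red",
    "social": "yellow",
}

MODES = ("sensor", "user", "group")  # the three display modes per category
STATE_DURATION_S = 5                 # each state is shown for ~5 seconds

def display_cycle():
    """Yield (category, mode, colour) states in the order the orb shows them."""
    for category in cycle(CATEGORY_COLOURS):
        colour = CATEGORY_COLOURS[category]
        for mode in MODES:
            yield (category, mode, colour)
```

A driver loop would hold each yielded state on the LED for `STATE_DURATION_S` seconds before advancing to the next.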
The fourth “social” category is different to the categories described above in that it is not directly linked to sensor values accessed from the sensor platform. Instead, it is calculated based on overall user feedback for the category. As described in the design section above, the intention of including the “social connectedness” category was to trigger subsequent discussion with participants about their interpretations of it. Because of the way the values are calculated for this category, the “sensor” and “group” values are identical. To assist users in learning the mappings between colours, sensors and the available interactions, a “cheat sheet” was prepared to accompany each device.
Three interaction methods are provided by the device through a combination of the scroll wheel and push button inputs: (1) scroll wheel: users can move the wheel to choose between the various sensor categories manually. For example, a user could scroll the wheel to move to the sound category immediately instead of waiting for the display to finish the remaining cycles. (2) push button: pushing the button triggers the display of the user’s preference for the sensor category that is currently being displayed. When the user releases the button, the associated value read from the sensor platform is displayed on the device. This allows users to easily compare the currently sensed value against their preference in order to help them decide whether they would like the preference set slightly higher or lower. (3) scroll wheel and push button: when the scroll wheel and push button are used together, users are able to set their preference value for the currently selected sensor category. To do this, they simply keep the button pressed down and set the desired value by scrolling. The intensity of the orb adjusts continuously as they scroll the wheel. As soon as they release the button the preference setting is recorded. The design of the device is such that this interaction can be easily achieved with one hand (i.e. by pressing down the button with the index finger and simultaneously using the thumb to scroll the wheel).
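The three interaction methods can be captured in a small event-driven controller. The following Python sketch is an illustrative reconstruction under stated assumptions (the number of discrete preference levels and the default midpoint are invented), not the authors' firmware.

```python
class OrbController:
    """Sketch of the MiniOrb input logic: scroll selects a category,
    button-hold displays the stored preference, and scroll-while-held
    adjusts and records a new preference on release."""

    def __init__(self, categories, levels=11):
        self.categories = list(categories)
        self.levels = levels                    # assumed discrete step count
        self.index = 0                          # currently displayed category
        self.button_down = False
        self.adjusted = False
        # Assumed default: every preference starts at the midpoint.
        self.preferences = {c: levels // 2 for c in self.categories}

    @property
    def current(self):
        return self.categories[self.index]

    def scroll(self, steps):
        if self.button_down:
            # Button held: scrolling adjusts the preference for the
            # current category, clamped to the valid range.
            value = self.preferences[self.current] + steps
            self.preferences[self.current] = max(0, min(self.levels - 1, value))
            self.adjusted = True
        else:
            # Button up: scrolling moves between sensor categories.
            self.index = (self.index + steps) % len(self.categories)

    def press(self):
        # Device now shows the stored preference for comparison.
        self.button_down = True
        self.adjusted = False

    def release(self):
        # On release, a preference changed while held is recorded
        # (True signals that the new value would be transmitted).
        self.button_down = False
        recorded = self.adjusted
        self.adjusted = False
        return recorded
```

The one-handed press-and-scroll gesture described above corresponds to a `press()`, one or more `scroll()` calls, and a final `release()` that commits the value.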
A small set of audio cues are also employed by the device to improve the interaction. As the scroll wheel is turned, subtle “clicks” are produced to give users a sense that they are selecting discrete values. When the wheel is scrolled into the “middle” position, a slightly more pronounced click is produced to provide an audible locator for the middle of the input range. To notify the user that a preference has been successfully recorded and transmitted to the server, a separate “chirp” sound is made. Finally, once a day each device emits a short “buzz” sound to act as a reminder to users and encourage them to record their preferences at least once that day. This sound was carefully chosen to be noticeable to users without being too annoying.
3.3 The MobiOrb Mobile Application

MobiOrb mobile interface
All of the sensor readings, user preferences and group average preferences are displayed on a single screen in the mobile interface. The screen is divided up into four separate sections, each of which displays information for a single sensor. The four sections all follow a similar graphical layout. Each has a colour-coded slider corresponding to the colour categories used in the MiniOrb (described above). These sliders allow users to record their preferences by sliding left and right. The numerical value of the preference is also displayed in a textbox within the slider. Values of the sensor readings taken from the sensor platform are shown in units relevant to that sensor (e.g. Celsius, lux, decibels) at the bottom of each section. In the middle of each section is another textbox which displays the average group preference for that sensor. Despite the differences in how the information is displayed, the values for sensors and preferences shown in the two interfaces are exactly the same. The most significant difference in terms of the users’ experience of the interfaces is that the mobile interface allows more accurate assessment and setting of sensor and preference values, but does not provide the same level of ambient accessibility afforded by the MiniOrb devices placed on users’ desks.
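The average group preference shown in each section could be computed along the following lines. This is a sketch only: the chapter does not specify the aggregation used, so the assumption here is a plain arithmetic mean over each user's most recent settings.

```python
from statistics import mean

def group_averages(all_preferences):
    """Average each category's preference across users.

    `all_preferences` maps a user id to that user's latest preference
    settings per category (hypothetical structure for illustration).
    """
    collected = {}
    for user_prefs in all_preferences.values():
        for category, value in user_prefs.items():
            collected.setdefault(category, []).append(value)
    return {category: mean(values) for category, values in collected.items()}
```

Both the MiniOrb's "group" display mode and the MobiOrb's group textbox would then draw on the same aggregated values, consistent with the two interfaces showing identical data.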
4 MiniOrb Evaluation
We evaluated the MiniOrb system through a number of user studies. This book chapter summarises outcomes from a two week long trial of the MiniOrb system carried out in situ with users in their actual work environment along with the outcomes of a number of post-trial semi-structured interviews, which were carried out with participating users. A comprehensive account of the study results has been published in Rittenbruch et al. [20].
4.1 Study Design and Setup
The participants for the study were recruited from inhabitants of the Queensland University of Technology’s Science and Engineering Centre (SEC), Australia. This is a recently established research centre, situated across two newly constructed buildings. The buildings host general staff, academics as well as postgraduate students from a variety of disciplines. To recruit participants, an email was distributed to all SEC inhabitants inviting them to take part in the study. The study was planned in three parts, as follows: (1) a questionnaire which assessed existing participant attitudes and preferences in relation to indoor climate factors; (2) an in situ trial of the working MiniOrb system over two weeks; and (3) a post-trial semi-structured interview which aimed to investigate participants’ experience of using the device and interpretations of the sensor categories.
Participants’ involvement in the different stages was entirely voluntary and participation was obtained via informed consent. Participants were free to withdraw from the study. The overall study design was run twice, once in each of the buildings of the SEC. In total, 11 participants across the two trials took part all the way through to the post-trial interview stage. To categorise the interview results we conducted open coding through a grounded theory approach.
4.2 Study Results
This section presents and discusses results from the MiniOrb trial and post-trial interviews.
4.2.1 Post-trial Interviews
The post-trial interviews were organised around three sections: (1) participants’ attitude toward office comfort, (2) experiences interacting with the MiniOrb ambient device and (3) experiences interacting with the MobiOrb mobile application. The intention of the first section was twofold: first, to enrich the data on attitudes to office comfort levels collected in the earlier round of questionnaires and to provide greater detail on participants’ working context; and second, to uncover attitudinal differences between individual participants. The remaining two sections inquired into how and when people made use of the devices on their desk, and what their perceptions of usability and user experience were. The results from each of these sections of the interviews are discussed in turn below.
4.2.2 Attitudes with Regard to Office Comfort
Although many of the participants reported that they appreciated their office environments overall, we identified several concerns from participants regarding office comfort factors. The most commonly raised issues were around temperature. Several of the participants were of the opinion that the target temperature of the air-conditioning system for the building was set “a little bit” too cold. Participants also reported noticing the cold more during certain times of the day (e.g. in the afternoon). It is important to note that the study was carried out in a sub-tropical environment. Issues around the building being too cold therefore did not imply that insufficient energy was being used to warm the building, but the opposite, that energy was being wasted by cooling it more than necessary. The next most frequently raised set of issues were concerned with noise in the building. The notion of “noise” could relate to several different sources of sound, such as building noise, environmental noise, etc. In the context of the interviews, noise was almost exclusively discussed in terms of the noise resulting from conversations within the workspace. Several participants described being disturbed when nearby people chatted or carried on conversations on the telephone. Approximately half of the participants reported that they used headphones to cope with this kind of disruption. Strategies reported by other participants were to move to a quieter desk, to use a separate meeting room, or to work in the university library. Noise issues were reported exclusively by participants situated in an open office workspace. Sources of noise not related to conversation, such as general background noise, were not perceived as a problem. Some participants reported issues around lighting, particularly in relation to how window blinds were set.
Participants’ response to light as an issue was dependent on the location of their desk in relation to the windows and the direction of sunlight. Participants either perceived that their work environment was too dark and that they could not clearly see the outside environment, or the opposite, that they received too much light, which caused reflection and glare on their computer screens. Overall, however, complaints about lighting levels were fewer and less intense compared to issues raised around noise and temperature. The notion of privacy within an open office setting was also raised as an issue several times. Some participants expressed a desire for more secluded cubicles or offices so that they could carry out their work with more privacy. When asked about the degree of control they perceived over their current environments, the majority of participants expressed feelings of very low or even non-existent control. The control factor most requested by participants was the ability to adjust the temperature, followed by the ability to control the setting of the automatic window blinds. Some participants also mentioned a desire for control over privacy and noise aspects, but also reflected that this would probably require changes to be made to the physical layout of the office environment.
4.2.3 Experience Using the MiniOrb
All participants who were interviewed reported that they had used the MiniOrb device. The interviews revealed a number of common patterns participants followed when recording their comfort preferences. The first pattern indicated that many study participants used the interaction device when they first arrived at their desk in the morning, and again at other times when they returned to the desk after a temporary absence (e.g. a meeting or a break). Participants reported that the reason for this was that they perceived that the interaction device had a very ambient quality and blended into the background, so that they “forgot” it was there after a period of time. When they returned to their desk from a break, however, they commonly noticed the orb displaying sensor readings using different colours and “remembered” that the device was there. The second pattern was that participants would use the interaction device specifically to record preferences, either when they perceived the environmental conditions as uncomfortable, or when aspects of their local environment changed (e.g. a change in light levels due to the automatic blind control). The third pattern that emerged was that participants commonly provided feedback on comfort levels when the interaction device played a specific sound that had been created to periodically “prompt” participants. A large majority of participants made positive comments about this interaction mechanism. Participants felt that the mechanism prompted them to provide input in cases where they otherwise would have forgotten to do so. Participants reported that they did not perceive the interaction to be distracting or intrusive. One of the participants further reported that they were encouraged to provide feedback by hearing that other people were sending feedback from their own devices (the device issues an audible “feedback submitted” sound that could be overheard by other users who were close enough).
Once they heard this sound, they remembered that the device “existed” and used it themselves to provide feedback.
A large majority of study participants reported that they enjoyed having the interaction device sitting on their desk and felt that it was both easy to use and very unobtrusive. Not all aspects of the device were used at the same rate, however, and some functionalities were interpreted and applied differently. One aspect that differed significantly between participants was the way that they recorded their climate preferences. One group of participants frequently used the push button in order to compare the current sensor reading with their own preference setting for a specific category. These participants would commonly set the preference value “a little bit higher” or “a little bit lower” than the currently displayed value in order to indicate a gradual change in preferences. Another group of participants instead turned their preference values to the maximum or minimum setting possible in order to indicate a strong desire for the respective value to change. When asked, these participants said that they did not think that they were trying to set a specific value, but instead felt that the interaction was more like “casting a vote”. They further reported that they were most likely to use the device in this way if they felt strongly about their choice or wanted to express their discontentment (e.g. if they felt that the environment was too hot or too cold, or that aspects of the environment were too noisy for them to concentrate).
The social category was different from other categories in that it did not directly relate to a reading from the sensor platform. Instead, the “social value” was directly defined through the participants’ input. Out of our group of participants, only a small number of users reported having used the “social” category. As described earlier, our intention in adding this category was to trigger discussion in the study interviews about what our users’ interpretations of such a category were. While some participants reported that they were not sure how to use and interpret this category, others gave unanticipated examples of how they used the social category and how its use resulted in unexpected social interactions. For instance, some participants who belonged to the same department group set their social preference value to the maximum setting at the end of their working day in order to signal to other co-workers that they were available to socialise. These examples show that such a category could serve two purposes: first, to act as a measure of “social atmosphere” within the group and, second, as a way for co-workers to signal social availability.
The aspect of the device that was used the least out of all features was the display of “group averages”. A number of participants explained that they used the group functionality after submitting their preference via the interaction device in order to compare their own preference with that of other users. Many other participants however stated that they either were not sure what the purpose of the group functionality was or that the group setting was not relevant to them and that they subsequently did not pay attention to it.
Some participants highlighted the fact that providing feedback via the system made them feel like “somebody cared”. These participants were well aware that the system only collected feedback values but did not affect actual change. They nonetheless valued the opportunity to share their opinion. One participant stated: “(…) it just gave me the feeling that somebody maybe cares somewhere”.
The interviews surfaced some other, less prevalent, issues related to the system’s design and functionality. One participant thought that the push-button function would enable them to compare individual preferences with group average values, rather than sensor values. A single participant mentioned that the light emitted by the device’s orb was somewhat distracting and subsequently positioned it out of sight. This attitude was not shared by the other participants, who did not find the device distracting.
4.2.4 MobiOrb Application Experience
Seven of the eleven participants that were interviewed had used the mobile application at least once. The most common observation amongst that group was that the mobile application was less noticeable and less present in their minds. Most participants felt that they used the ambient interaction device more because it was placed on their desk and because it actively reminded and encouraged them to use it. The mobile application, by contrast, was not always turned on and visible. Participants had to remember it was there, access their phone and use it deliberately. This required more effort and was further removed from the immediacy of directly interacting with a dedicated physical device placed on the desktop.
However, once participants actually used the mobile application, they said that they appreciated the ease with which feedback values could be set and found it generally easy to use. One participant commented that the process of setting multiple values was quicker and easier to achieve on the mobile device. The fact that all readings and settings on the mobile device were displayed as numeric values rather than relative colour hues marked an obvious difference between the mobile and the ambient interface. On the whole, our participants did not seem to prefer one way of presenting values over the other. Some participants said that seeing the concrete numerical values, as well as the actual range within which these values could be changed, enhanced their experience: “It just felt like I knew more what I was saying with the range”. However, another participant mentioned that they liked being able to focus on setting their perceived comfort levels in relation to the current sensed value (e.g. “I would like the lighting to be a bit less bright”), without having to think about absolute numbers.
5 Discussion of Results
5.1 Reflection of Interview Results
The post-trial interviews gave us a nuanced insight into participants’ attitudes about office comfort and provided an overview of how they used the various parts of the system. In the discussion that follows, we highlight five areas in particular, which warrant further discussion.
5.1.1 “Protest” Vote Versus Gradual Vote
The MiniOrb device displayed values of sensor readings, as well as user input about preferred comfort levels, via changing colour intensity. This meant that what feedback values actually meant could be interpreted differently by different users. Two ways in which participants employed the feedback mechanisms stood out as notable. In one approach, participants would submit gradual changes relative to the current sensor reading to indicate preferred comfort levels compared to the current level. In the other approach, participants would submit a more radical change by giving feedback at the most extreme available minimum or maximum settings.
We refer to the latter approach as casting a “protest vote”. Participants took this approach when they wanted to express strong disapproval or discomfort. In this sense, it was more similar to a yes/no voting approach than the continuous preference-setting approach we had imagined. This was in contrast to the former, gradual approach, which aimed to convey accurate readings of desired comfort levels. Protest votes only occurred in the context of discomfort, as they allowed users to express feedback by selecting the opposite extreme value. For instance, a user who found the office environment too cold would set the temperature preference to the maximum in order to express their desire to be warmer. Both approaches constitute a valid use of the platform; in comparison, however, the mobile application was generally less suited to the protest-vote style of interaction, because users were already presented with the precise numeric value of their preference. Participants who saw how a “protest vote” mapped to particular numerical values on the MobiOrb application reported that it became clear to them that the recorded value was either very high or very low. In most cases this extreme setting did not reflect what their actual preference would be. For instance, a “protest vote” might record a desired temperature of 30 degrees Celsius, which did not match the user’s actual preference, but only their desire to “be warmer”. We believe that both are valid approaches for users to provide comfort level feedback and are worth supporting as separate interactions in future systems.
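As a minimal sketch of how a future system might support both interactions separately, the following hypothetical classifier (the function name, data layout and thresholds are our own assumptions, not part of the MiniOrb implementation) treats preference values that fall close to either end of the available range as protest votes, and everything else as a gradual adjustment relative to the current sensed value:

```python
def classify_feedback(preference, sensor_value, v_min, v_max, extreme_band=0.05):
    """Classify a submitted preference as a protest vote or a gradual vote.

    A submission is treated as a "protest" when it sits within a small band
    (default 5% of the value range) of either end of the range; otherwise it
    is read as a gradual shift relative to the current sensor reading.
    """
    span = v_max - v_min
    band = extreme_band * span
    if preference <= v_min + band or preference >= v_max - band:
        return "protest"
    # Gradual votes express a relative shift from the current reading.
    delta = preference - sensor_value
    if delta > 0:
        return "higher"
    if delta < 0:
        return "lower"
    return "satisfied"
```

Under this sketch, a submission of 30 °C against a 15–30 °C temperature range would register as a protest vote rather than a literal target temperature, while a submission of 23 °C when the sensed value is 22 °C would register as a gradual “higher” preference.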
5.1.2 The Trade-off of Minimal Design
The MiniOrb device only provided a limited number of input and output mechanisms. This minimal design was a conscious choice, made when we decided to design within the constraints of the existing sensor platform. This presented us with the challenge of designing a minimal interaction device based on a limited set of tangible interaction mechanisms with suitable ambient output modalities. The device still needed to support a sufficient level of functionality while not over-burdening the user with complexity. Based on our findings from the post-trial interviews, we are confident that this goal was achieved. Nevertheless, there are aspects of the design that could be reworked in future. Some redesign of elements of the sensor platform does seem warranted in order to expand the interaction possibilities of the device, while also retaining a simple and minimal implementation overall.
The device’s “ambient quality” was well perceived and appreciated by almost all of our participants. They felt that the device quickly faded into the background when it was not being used, but that it was equally quickly available whenever they wanted to interact with it. However, not all of the functionality built into the device was utilised to the same extent. A salient example was the display of the group average, which was used by only a limited number of participants. The relative lack of use of this feature might have been influenced by our decision to let users compare their own feedback with the value of the respective sensor reading rather than with the group average. It became clear from the interviews that this design decision was significant, because it shaped whether users could see how their preferences aligned with those of the group. This further emphasises the point that, as much as indoor comfort is a measurable physical phenomenon, it is also a social phenomenon. By making the comparison with sensor values in our design instead of with the group preference values, we de-emphasised a “social” view of indoor climate in favour of a “physical” one.
Building a small device with limited interactive capabilities always requires a trade-off. With regard to the design of the MiniOrb device, we suggest that instead of attempting to combine the comparison of group averages and sensor values in a single device, a better design approach is to extract less frequently used areas of functionality, such as the group average readings, and instantiate this functionality in a separate interface dedicated to that task. For example, we imagine complementing our system with a separate “MaxiOrb” device designed solely for the purpose of publicly displaying group averages to a group of users, such as clusters of users in an open-plan office belonging to the same work group.
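The core task of such a shared display would be to aggregate each user’s latest preference submission into per-category group averages. A minimal sketch of that aggregation is given below; the function name and the data layout (a mapping from user identifiers to their latest per-category preferences) are hypothetical illustrations, not taken from the MiniOrb implementation:

```python
from collections import defaultdict

def group_averages(submissions):
    """Compute per-category group averages from individual preference
    submissions, e.g. for display on a shared "MaxiOrb"-style device.

    `submissions` maps a user id to a dict of {category: latest preference};
    users who have not submitted a value for a category simply do not
    contribute to that category's average.
    """
    totals = defaultdict(lambda: [0.0, 0])  # category -> [sum, count]
    for prefs in submissions.values():
        for category, value in prefs.items():
            totals[category][0] += value
            totals[category][1] += 1
    return {cat: total / count for cat, (total, count) in totals.items()}
```

Keeping this aggregation in a separate, public-facing interface leaves the personal MiniOrb device free to focus on the individual comparison with sensor readings.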
5.1.3 Subjective Perceptions of Being Listened to
Our findings emphasise that how people experience office comfort depends on more than measurable factors. People care about things like “being appreciated” as well as measurable parameters like temperature. An important consideration is therefore how to design such systems so that they give users the feeling of being listened to. This further raises questions about how such mechanisms can help inhabitants share their office comfort attitudes with others and help to effect change. For instance, with regard to the “MaxiOrb” public display idea described above, we could consider how such an interface would allow users to indicate to their colleagues that the office is becoming too noisy, thus raising awareness across a wider section of the office population.
5.1.4 Encouraging Interaction
The “remember me” buzz that the device periodically issued to encourage and remind users to input their preferences had a stronger than expected influence on users’ patterns of use. Somewhat surprisingly, users did not report that they found these notifications distracting. Instead, they reported perceiving them as welcome reminders to use the system. Conceptually, this notification can be thought of as an interaction that moves the device out of an ambient “background” mode of interaction into the foreground of the user’s attention. Compared to a more conventional notification, which indicates a change to the system’s state, the “remember me” function acts as a form of reverse notification that encourages the user to interact with the device.
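This reverse-notification behaviour can be sketched as a simple inactivity timer that fires only when the user has not interacted with the device for some time. The class below is an illustrative reconstruction, assuming a polling main loop; the names, default interval and buzz mechanism are our own assumptions, not the device’s actual firmware:

```python
import time

class ReminderBuzzer:
    """Periodically nudge the user to submit preferences (a "reverse
    notification"): buzz only if no input has been recorded recently."""

    def __init__(self, interval_s=3600, buzz=lambda: print("buzz")):
        self.interval_s = interval_s
        self.buzz = buzz
        self.last_input = time.monotonic()

    def record_input(self):
        # Any user interaction resets the reminder timer, so active
        # users are never interrupted by a reminder.
        self.last_input = time.monotonic()

    def tick(self):
        # Called from the device's main loop; returns True if a buzz fired.
        if time.monotonic() - self.last_input >= self.interval_s:
            self.buzz()
            self.last_input = time.monotonic()
            return True
        return False
```

The key design point mirrored here is that the reminder is driven by user inactivity rather than by a change in system state, which is what distinguishes it from a conventional notification.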
5.1.5 Ambient Versus Mobile Interaction
The interview results indicate that the ambient and the mobile interfaces each fulfil different and complementary roles. A key characteristic of the MiniOrb device was its ambient quality. Because it was physically located at users’ desks, it was able to act as a constant reminder, while only requiring minimal interactive effort. This is a highly useful characteristic for an interface to have if the aim is to elicit user input over an extended time period. In comparison, the mobile device “MobiOrb” was appreciated for its clean user interface, which made it easier to interpret the numerical values and ranges of the sensor categories and allowed users to give more accurate feedback. Interestingly, several users expressed a preference for this interface to be installed as an application on their desktop computer, rather than on their mobile device. These users felt that such an application would better integrate with their desktop working environment and their overall work routines. In general, users felt that the MobiOrb mobile interface provided additional functionality and saw it as complementary to, rather than a replacement for, the ambient MiniOrb interaction device.
6 Conclusions
The purpose of this book chapter was to explore the notion of direct end-user interaction with and through IoT devices. To this end, we described the design, use and evaluation of MiniOrb, a system that combined a sensor platform with an interaction device. The device combines ambient output with a tangible input approach to allow users to share their subjective perceptions of the comfort of their office environments, in particular relating to temperature, lighting and noise. One specific attraction of a tangible interaction approach in this context is that it gives physical presence to a phenomenon that is normally not visible or in people’s foreground experience. The work reported here addresses two related questions. First, to what extent can ambient and tangible interaction mechanisms make it easier for office inhabitants to reflect on their subjective office comfort levels and record their preferences? Second, how do these mechanisms compare to other, more traditional approaches that enable users to see sensor information relevant to office environments and record their preferences?
The results show that it is feasible to build minimal interaction devices that use non-screen-based interaction approaches, such as ambient and tangible interaction mechanisms, and that these mechanisms are well suited to engaging users in the process of providing preferences. Indeed, even with the rather minimal interactive elements that could be supported by our constrained IoT sensor platform, surprisingly rich user interactions and behaviours emerged. This process can be further aided by complementary interfaces that offer additional options for the input and reading of accurate sensor and preference values. In our case this was achieved by providing a mobile user interface in addition to the ambient interaction device. What is particularly notable is that the system we tested with users did not actually affect the lighting, temperature or any other physical aspects of their indoor environment, but simply recorded their comfort preferences. One would expect this lack of actual physical control to significantly reduce the incentive to use the system. Nevertheless, users still made frequent use of it. The fact that users reported feeling they were being “listened to” underlines the need to explore alternative interaction approaches that allow individual control for users within these environments. While the evaluation overall points to the success of the system from a user-experience perspective, the results of our study identified many further nuances: how users provide feedback, which functionality should or could be integrated in a minimal interaction device, how to prompt for specific feedback and interactions, and how users interpret and handle the display of vague versus accurate sensor readings.
The results show that questions around how to design specific interaction elements that enable direct end-user interaction with IoT sensing platforms are an interesting and valid line of inquiry. End-user interaction devices offer additional dimensions not normally offered by standard IoT devices. First, they allow for the meaningful interpretation of sensed data by end-users. This approach makes particular sense if the data is directly relevant to end-users, as was the case in our case study on office comfort levels. Second, they allow crowdsourced end-user feedback to be collected directly at the IoT device level, rather than through separate devices (e.g. mobile phones). This allows for immediate feedback that ties together the interpretation of sensed data with subjective user feedback. We believe that the study described here is a first step towards gaining insights into the tighter integration of direct interactive capabilities in the context of the IoT, and will help to inform future research in this area.
7 Key Terms and Definitions
Ambient display/interface—Displays meant to minimise the mental effort required to perceive information
BMS—Building Management System
IoT—The Internet of Things
MiniOrb—A custom built sensing and interaction platform for indoor climate preferences
Peripheral awareness—Ability to perceive objects or actions not in the direct line of vision
Tangible interaction—Supporting interaction through direct manipulation of physical interfaces