Abstract

Next to active user involvement and a multi-method approach, a third major principle within living lab research consists of capturing the real-life context in which an innovation is used by end users. Field trials are a method to study the interaction of test users with an innovation in the context of use. However, when conducting field trials, there are several reasons why users stop participating in research activities, a phenomenon labelled as attrition. In this article, we elaborate on drop-outs during field trials by analyzing three post-trial surveys of living lab field trials. Our results show that several factors related to the innovation, as well as related to the field trial setup, play a role in attrition, including the lack of added value of the innovation and the extent to which the innovation satisfies the needs and time restrictions of test users. Based on our findings, we provide practical guidelines for managers to reduce attrition during field trials.

Introduction

Within living lab research, end users are involved actively to develop an innovation that is adapted to their needs and wants. A living lab environment is defined as “a user-driven open innovation ecosystem based on a business–citizens–government partnership which enables users to take an active part in the research, development and innovation process” (European Commission, 2009). In addition to this active user involvement, a multi-method approach and real-life interventions make up the three central characteristics of the living lab approach (Schuurman, 2015). Although questions have been raised about the extent to which living labs can achieve the necessary levels of user engagement, and although users' interests are sometimes overlooked (Dutilleul et al., 2010), users are generally seen as very important actors.

A living lab study by Ebbesson and Eriksson (2013), in the context of an online platform to gather input from end users, showed good support from the end users during the startup phase of the projects, but also showed an increasing number of users dropping out or lowering their activity level. When studying the motivations of end users participating in open innovation processes, Ståhlbröst and Bergvall-Kåreborn (2011, 2013) found a close relationship between motivational factors and the values achieved: most voluntary contributors are satisfied when they learn new things. Intrinsic motivations such as learning, being entertained, and stimulating curiosity are seen as the most important motivators to participate in an innovation intermediary context. Baccarne, Logghe, Veeckman, and Schuurman (2013) also found that the main motivator to participate in living lab research is intrinsic in nature, but that, for repeated participation, material incentives become more important as motivators. They also argue that the motivations to participate tend to differ according to the research step.

With this study, we wanted to dig deeper into the reasons why people participate or drop out during living lab research. Because there seem to be differences between research techniques (e.g., surveys, field trials, co-creation workshops) (Baccarne et al., 2013), we decided to focus on one research step in particular: field trials. Field trials can be defined as “tests of technical and other aspects of a new technology, product or service in a limited, but real-life environment” (Ballon et al., 2005). They also link up with the "real-life intervention" characteristic of living lab projects (Schuurman, 2015). Field trials enable researchers to study the use of the innovation by test users in a natural use context and allow them to discover and understand how technologies are being used and adopted in a real-life setting, which is one of the key principles within living lab research (Ballon et al., 2005; Følstad, 2008; Kjeldskov & Skov, 2014; Schuurman et al., 2013). In contrast to other research methods, participation in field trials requires a prolonged engagement of test users because they are expected to test an innovation during a specific period. Moreover, in most field trials, users are asked to actively provide feedback regarding their usage. However, following participants over a prolonged period also increases the risk of drop-out before the end of a test period (Schuurman & De Marez, 2009).

In previous research on attrition during field trials, some studies have been conducted in the field of eHealth. Within this domain, Eysenbach (2005) introduced the law of attrition, which is “the phenomenon of participants stopping usage and/or being lost to follow-up, as one of the fundamental characteristics and methodological challenges in the evaluation of eHealth applications”. Simons, Hampe, and Guldemond (2013) mention time and timing issues as reasons why people stop participating. For eHealth trials on the internet or with self-help applications, high dropout rates “may be a natural and typical feature”; however, it is important to further analyze the attrition data, because it may give an indication of real-life adoption problems (Eysenbach, 2005). Eysenbach (2005) also identified two sorts of attrition, namely dropout attrition, which is “the phenomenon of losing participants to follow-up (e.g., participants do not return to fill in follow-up questionnaires)” and non-usage attrition in which participants “have lost interest in the application and stopped using it”. In a field trial, an example of dropout attrition would be test users continuing to use the innovation but no longer providing feedback, whereas non-usage attrition occurs when test users stop using the innovation but can still give feedback regarding their non-usage. The second type of attrition provides important information for the innovation development process, whereas the first type of attrition generates less information. Therefore, it is especially relevant to minimize the rate of dropout attrition.
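To make this distinction concrete, the following minimal Python sketch (our illustration, based on hypothetical usage and feedback flags rather than data from the trials analyzed here) labels a test user according to these two attrition types.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    user_id: str
    still_uses_innovation: bool   # e.g., from usage logs near the end of the trial
    still_gives_feedback: bool    # e.g., returned the latest feedback survey

def attrition_type(p: Participant) -> str:
    """Classify a participant using Eysenbach's (2005) two attrition types."""
    if p.still_uses_innovation and p.still_gives_feedback:
        return "retained"
    if p.still_uses_innovation:
        # Keeps using the innovation but is lost to follow-up.
        return "dropout attrition"
    if p.still_gives_feedback:
        # Stopped using the innovation but can still explain the non-usage.
        return "non-usage attrition"
    return "non-usage + dropout attrition"

print(attrition_type(Participant("u42", True, False)))   # dropout attrition
print(attrition_type(Participant("u17", False, True)))   # non-usage attrition
```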

Multiple studies have illustrated the occurrence of attrition in the context of eHealth applications without digging into its causes (Grudin, 2002; Korn & Bødker, 2012). Kanstrup, Bjerge, and Kristensen (2010) argue that the stability of the ICT infrastructure and some kind of user support are factors that decrease the rate of attrition, but they do not distinguish between dropout and non-usage attrition.

More in-depth research on attrition within ICT field trials or living lab projects is lacking, despite the specific opportunities that new media innovations, given their ubiquitous nature, offer for testing in multiple real-life contexts (Grudin, 2002; Korn & Bødker, 2012). Therefore, in this article, we tackle two main research questions:

  1. To what extent can different types of attrition be distinguished within ICT living lab field trials?
  2. Which factors play a role in the decision of a test user to continue or stop participating in field trials?

Methodology

The main goal of this study is to find factors that are related, either positively or negatively, to different types of attrition during field trials. Therefore, we conducted a qualitative analysis within three living lab field trials. The field trials were carried out in living lab projects from iMinds Living Labs, a division of the iMinds ICT research institute of Flanders, Belgium. The attrition rates per field trial (based on project documents) are described in the results section.

In order to find as many factors as possible, we selected three cases that differ in multiple ways, such as sample size, type of innovation, field trial setup, and communication with test users. First, we conducted a quantitative analysis of the attrition rates. Next, a qualitative analysis was done by coding the answers test users gave to open questions in the post-trial survey. Thus, during the analysis and interpretation of the results, we must consider that the survey data only includes information about non-usage attrition, because test users subject to dropout attrition will already have dropped out. The answers to these open questions were analyzed using QSR International’s NVivo 10 qualitative data analysis software. When analyzing the factors related to attrition, we coded the factors in such a way that the same codes could be used across the three field trials.
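To illustrate how such coding translates into the factor frequencies reported below (Tables 1 and 2), the following sketch tallies a small set of invented codes and computes each percentage as the number of times a code was mentioned divided by n, the total number of mentions; the codes and counts are hypothetical, not the study's data.

```python
from collections import Counter

# One entry per reason mentioned in the open answers (hypothetical codes).
coded_answers = [
    "technical issues", "time restrictions", "technical issues",
    "did not see the benefit of the app", "time restrictions",
    "time restrictions",
]

counts = Counter(coded_answers)
n = sum(counts.values())  # the "n" reported for each field trial column

for code, count in counts.most_common():
    print(f"{code}: {count} ({count / n:.1%})")
```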

Below, the field trials are further described and the response rates of the post-trial surveys are given. One general finding is that, in all cases, the dropout attrition rate is high. This high dropout attrition rate must be kept in mind when interpreting the results.

Field trial 1

The first field trial was part of a living lab project to develop a location-based service application. The application was tested for seven weeks and participants received weekly emails with updates, tasks related to the innovation, and a feedback form where technical problems could also be reported. At the end of the trial, a survey was sent to 558 test users to receive feedback regarding their experience during the field trial (Figure 1).

Figure 1. Field trial 1: Summary of responses to the post-trial survey (n = 558)

Field trial 2

The second field trial was part of a living lab project in which an application to meet up with friends was co-created. The application was tested for five weeks and the participants received weekly emails to give feedback about the innovation, and they were given a weekly assignment. At the end of the field trial, a post-trial survey about the innovation was sent to the 55 participants (Figure 2). The test users could also send the survey to friends or family who also tested the application. In total, 35 test users and 11 contacts of the test users completed the survey.

Figure 2. Field trial 2: Summary of responses to the post-trial survey (n = 55 + 11 friends/family of test users)

Field trial 3

The third field trial was more data-driven. Participants had to read 30 news articles for the first try-out and 60 for the second and third try-outs. They could choose to participate in one or more try-outs. Within this field trial, participants did not co-create the innovation, but instead received different assignments that generated data that was needed to test an underlying technology. They did not know the exact intention of the field trial. Participants were rewarded with a cinema ticket for each finished assignment. At the end of the field trial, a survey was sent to 350 participants to get feedback about the trial (Figure 3).

Figure 3. Field trial 3: Summary of responses to the post-trial survey (n = 350)

Results

Attrition rates during field trials

Within this section, we dig deeper into the first research question: To what extent can different types of attrition be distinguished within ICT living lab field trials? Within the first field trial (Figure 4), we see two pronounced drops in participation: i) when respondents have to fill out an intake survey to take part in the trial (dropout attrition) and ii) at the end of the field trial, when many test users stopped using the innovation (non-usage attrition). Thus, many test users did not participate for the entire duration of the field trial.

Figure 4. Attrition within field trial 1

Figure 5. Attrition within field trial 2

For the second field trial, we see that the pattern of attrition (Figure 5) is similar to the first field trial. There is high dropout attrition when people have to complete the intake survey, and then there is further (non-usage) attrition during the field trial. However, at the end of the trial, 35 participants filled out the post-trial survey.

Concerning the third field trial, the highest attrition rate was observed when test users had to complete the assignments (Figure 6). For the first assignment, for which the users were asked to read 30 news articles, the non-usage attrition rate was approximately 10% lower than for the two subsequent assignments, each of which required them to read 60 articles. Thus, the lower attrition rate in the first assignment may be explained by it being less cumbersome than the other two assignments. Because the test users were not expected to give feedback about an innovation via several research methods, the dropout attrition during this field trial was rather low.

Figure 6. Attrition within field trial 3

In general, we can conclude that, within living lab field trials, dropout attrition occurs during different phases of the trial. A crucial moment for dropout attrition seems to be the intake survey. This increased attrition is most pronounced in the first and second field trials, which seems to be caused by their surveys having more than 20 questions, whereas in the third field trial, users only had to answer five questions. Within the first and the third case, there was a delay of several days between the intake survey and the start of testing; however, compared to the second trial, in which the participants received a link to test the application immediately after filling in the survey, there were no substantial differences in attrition.
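This kind of phase-by-phase funnel analysis can be sketched as follows; the phase names and participant counts are hypothetical and do not reproduce the actual numbers behind Figures 4 to 6.

```python
# Hypothetical participation funnel for a single field trial.
phases = [
    ("recruited", 600),
    ("completed intake survey", 250),   # a crucial moment for dropout attrition
    ("started testing", 210),
    ("still active at end of trial", 70),
]

# Attrition at each phase = share of participants lost relative to the previous phase.
for (prev_phase, prev_n), (phase, n) in zip(phases, phases[1:]):
    print(f"{prev_phase} -> {phase}: {1 - n / prev_n:.0%} attrition")
```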

Non-usage attrition occurs especially after the first time the test users are confronted with the innovation. In the next section, we dig deeper into the reasons why participants dropped out during living lab field trials.

Factors related to participation in field trials

Next, we examine factors that can play a role in the attrition during field trials. The data used for this analysis is based on the post-trial surveys at the end of the field trials. The respondents were asked to explain why they stopped using the application or why their use decreased, increased, or stayed constant throughout the entire field trial. These answers were coded according to the different factors related to attrition.

First, we analyzed the factors that are positively related to participation in field trials (Table 1). When analyzing the first field trial, the assignments that were given to test the innovation were seen as particularly positive, likely because of the users' curiosity: they wanted to know what the innovation was about. For the second field trial, only a few people kept on testing the innovation during the field trial; therefore, only three factors were mentioned: i) the "fun factor" of testing the innovation, ii) the added value of the app, and iii) the fact that friends also started to use and test the innovation. For the third field trial, extrinsic motivation (incentives) and an intrinsic motivation to participate in scientific research motivated test users to finish the assignments.

When comparing the factors that are related positively to participation in field trials with the motivational factors mentioned by Ståhlbröst and Bergvall-Kåreborn (2011, 2013), we see that learning new things (e.g., increasing one's own skills), being entertained (e.g., fun), and stimulating curiosity were also mentioned by the participants in the field trials. During the three field trials, the fun factor played a motivating role.

Table 1. Factors positively related to participation in field trials

| Factor | Field trial 1 (n = 23) | Field trial 2 (n = 7) | Field trial 3 (n = 142) |
|---|---|---|---|
| Factors Related to the Innovation | | | |
| Challenge to conduct the whole field trial | | | 3 (2.1%) |
| Curiosity about the innovation | 1 (4.4%) | | 2 (1.4%) |
| Increase own skills | | | 4 (2.8%) |
| Interesting study | | | 8 (5.6%) |
| Like the concept/idea | | | 14 (9.9%) |
| Like the app | 1 (4.4%) | | |
| Something new to do | | | 5 (3.5%) |
| First one to test it | 1 (4.4%) | | |
| Friends test the app | | 2 (28.6%) | |
| App helps in daily life | | 3 (42.9%) | |
| Factors Related to the Field Trial Setup | | | |
| Anonymous | 1 (4.4%) | | |
| Incentives | | | 46 (32.4%) |
| Mailing | 1 (4.4%) | | |
| Tasks | 7 (30.4%) | | |
| Test when and where you want | | | 4 (2.8%) |
| Other Factors | | | |
| Conducted by iMinds | | | 3 (2.1%) |
| Like to participate in research | 2 (8.7%) | | 18 (12.7%) |
| See the evolution of the app | | | 1 (0.7%) |
| Help in the development of an innovation | 1 (4.4%) | | 9 (6.3%) |
| Curious to know what the innovation was about | 5 (21.7%) | | |
| Try something new | | | 5 (3.5%) |
| Fun | 2 (8.7%) | 2 (28.6%) | 20 (14.1%) |
| Engagement | 1 (4.4%) | | |

n = total number of reasons to participate in field trials mentioned by the participants

% = number of times a reason was mentioned by participants, divided by n

 

Next, we analyzed the factors that are negatively related to participation in field trials (Table 2) and found that different factors are of importance for each field trial. Within the first field trial, users stopped using the innovation because they did not see the benefit of using it. Only a limited number of features were available, which made the innovation less interesting to test and made it less likely that test users would test it for a longer period, given their time restrictions. For the second field trial, participants mentioned that the innovation did not satisfy their needs. Furthermore, technical issues and a small user base generated dropout among the participants. This finding is in line with Kanstrup, Bjerge, and Kristensen (2010), who argue that users drop out when the technology is unstable. Finally, for the third field trial, time restrictions caused non-participation in the assignments, as was also found by Simons, Hampe, and Guldemond (2013).

Table 2. Factors negatively related to participation in field trials

| Factor | Field trial 1 (n = 89) | Field trial 2 (n = 44) | Field trial 3 (n = 17) |
|---|---|---|---|
| Factors Related to the Innovation | | | |
| Not interested | 21 (23.6%) | | |
| Did not like the design | | 3 (6.8%) | |
| Did not see the benefit of the app | 32 (36.0%) | 3 (6.8%) | 1 (5.9%) |
| Did not trust the app (would not install) | 1 (1.1%) | | |
| Lack of features | 8 (9.0%) | 2 (4.6%) | |
| Not innovative enough | 3 (3.4%) | | |
| Innovation did not satisfy needs | 2 (2.3%) | 5 (11.4%) | |
| Technical issues | 4 (4.5%) | 6 (13.6%) | 3 (17.7%) |
| Factors Related to the Field Trial Setup | | | |
| Problems with installing the app | 3 (3.4%) | | |
| No incentive to participate | 2 (2.3%) | | |
| Did not like the tasks | 1 (1.1%) | | |
| Not enough triggers to test | 1 (1.1%) | | |
| Unclear what was expected | | | 2 (11.8%) |
| Other Factors | | | |
| Forgot to test | 3 (3.4%) | | 2 (11.8%) |
| Lack of motivation | 3 (3.4%) | | 1 (5.9%) |
| Not enough users | | 17 (38.6%) | |
| Unforeseen circumstances | | 1 (2.3%) | 1 (5.9%) |
| Time restrictions | 5 (5.6%) | 2 (4.6%) | 7 (41.2%) |
| No opportunities to test the app | | 5 (11.4%) | |

n = total number of reasons for stopping or reducing participation mentioned by the participants

% = number of times a reason was mentioned by participants, divided by n

 

When we analyze the factors across the three field trials (Table 3) and dig deeper into the difference between dropout and non-usage attrition, our studied cases suggest that dropout attrition is mainly linked to the research setup, whereas non-usage attrition is mainly linked to factors related to the innovation itself.

When comparing the non-usage attrition across the three field trials, we see that it is high for the first and second field trials because these projects focused more on user co-creation of the innovation, which corresponds with the active user involvement characteristic of living lab research. In the third field trial, the non-usage attrition was lower. The focus in this project was more on the users generating data that allowed testing of the underlying technology, which made the co-creation aspect less important. In addition, the participants received cinema tickets after completing their assignments. In the first and second field trials, the participants were not certain they would receive a material incentive; however, they did not mention this as a factor in their decision to participate in the field trial. Thus, incentives helped when participants had to finish a certain assignment, but when test users had to co-create, intrinsic motivations became more important.

The higher non-usage attrition for the first and second field trial is interesting for the instigator of the project: it points to factors related to the innovation (e.g., usability problems or users not seeing the benefit of the application), which should lead to iteration of the innovation or of the use cases. This finding is in line with Eysenbach (2005), who argues that attrition data can give clues about real-life adoption problems.

Also, network externalities, or the nature of the innovation itself, can cause non-usage attrition. For example, during the second field trial, the testing involved an application for meeting up with friends, which implied that the friends of the test users also had to use the application. These network externalities related to the innovation had a negative influence on the sustained usage of the innovation as the factor "not enough users" scored very high for this field trial.

Table 3. Comparison across field trials

| | Number of Users at Start | Non-Usage Attrition | Dropout Attrition | Incentive | Duration |
|---|---|---|---|---|---|
| Field trial 1 | High | High | Highest | Intrinsic | 6 weeks |
| Field trial 2 | Medium | High | Lowest | Intrinsic | 4 weeks |
| Field trial 3 | Medium | Low | Medium | Extrinsic + intrinsic | 1 week |

Also, differences in the dropout attrition are noticeable between the trials; these differences are mostly related to the design of the field trial. For the first field trial, for example, we see high interest among participants in starting the field trial, but a very high attrition rate subsequently. This high level of interest in participating can be explained by the communication strategy that was used: a narrative was created for the field trial that asked the test users to help as "undercover agents" and to go on missions to test a new secret application. The mysterious nature of the narrative seemed to have a positive influence on the willingness to participate by triggering the curiosity of the test users. The long, cumbersome intake survey and the lack of perceived added value of the application caused the highest attrition rate.

For the second field trial, the participants were clearly briefed regarding the innovation and were stimulated to provide feedback that would be taken into account by the project instigator. This trial attracted a lower number of test users, but we noticed a lower rate of dropout attrition: some test users kept on giving feedback although they stopped testing the innovation itself. This relatively low rate of dropout attrition seems to be caused by the intrinsic motivations of the participants, who were involved in active co-creation, coupled with reminders that were sent for filling in the feedback surveys.

The duration of the trial can also play a role in non-usage attrition. For example, the first field trial lasted six weeks, the second trial lasted four weeks, and the third trial lasted one week per assignment. When comparing the trials, we see higher attrition rates for longer field trials.

Guidelines for Project Instigators and Managers

Although the results presented here are exploratory in nature, and further research is needed, we have summarized the main lessons learned in the form of practical guidelines related to: i) the innovation and non-usage attrition and ii) the field trial setup and dropout attrition.

Guidelines related to the innovation and non-usage attrition

  1. Introduce the innovation clearly and underline its benefits.
  2. Stress the co-creation aspect: test users can be motivated by knowing that their contributions can impact the innovation.
  3. Conduct usability testing before the start of the field trial so that any technical issues can be solved beforehand. If there are still technical issues during the field trial, then provide a clear help channel and manage the expectations of test users by, for example, reminding them that the innovation is still in its development phase.
  4. Try to anticipate network externalities, because the number of test users can impact the relevance of certain functionalities of an innovation.
  5. Communicate clearly at the beginning of the trial what is expected from the test users. Define tasks for the test users to stimulate usage.
  6. Remind test users to perform the requested tasks. Some may not otherwise set aside time for testing or they may not remember that a task is to be completed.

Guidelines related to the field trial set-up and dropout attrition

  1. Create an accessible helpdesk and make it clear who is responsible for operating it. By including a helpdesk, test users can always give useful feedback when they have the time.
  2. Ensure that the testing initiation process is clear and straightforward (e.g., by providing a clear test link at the start).
  3. Provide incentives to encourage test users to complete tasks. However, note that incentives alone do not trigger test users to give valuable feedback.
  4. Include some fun (or even funny) tasks or assignments that challenge the users or trigger their curiosity. Appeal to the motivating factor that encourages participation just for fun.

Conclusion

Within living labs, field trials help researchers study the extent to which innovations are being used by test users in a real-life environment. However, several authors have highlighted the difficulty of finding motivated and engaged (long-term) users (Ebbesson & Eriksson, 2013; Kaasinen et al., 2013; Schuurman & De Marez, 2009). This challenge can be problematic, because the setup of a field trial is very time consuming and expensive. Currently, the literature on user participation in living lab field trials is scarce. In the research domain of eHealth, Eysenbach (2005) introduced the law of attrition within field trials and the distinction between non-usage attrition and dropout attrition. Although it is difficult to extrapolate these results to field trials in a living lab context, we used this framework to analyze attrition within living lab field trials. With this study, we conducted a qualitative analysis of open questions in post-trial surveys of three living lab field trials and an analysis of attrition data from project documents.

This research has some limitations, including, for example, that dropout attrition had already occurred by the time the post-trial surveys were sent to the test users. Future research could elaborate on this aspect by exploring how to minimize dropout attrition so that there is information about why test users dropped out. Future studies should also ask why people stop testing an innovation and how many people stop testing it. Although data logging can be used to measure attrition during field trials, it does not help researchers understand why the users stopped. There are thus many opportunities within this domain for quantitative as well as qualitative research. Although the results of this research are exploratory and difficult to generalize to other field trials, we believe they are valuable for other researchers, practitioners, and idea owners of new products and services when organizing and following up field trials. Researchers can proactively take into account the factors that play a role in the attrition of test users when preparing these trials. Idea owners can also gain practically from these findings, because some attrition factors relate directly to the innovation itself.

Within this exploratory research, we can conclude that both non-usage attrition and dropout attrition occur. Whereas dropout attrition is mainly linked to the research setup, non-usage attrition is mainly linked to the innovation itself. The factors that affect attrition differ for each field trial because of differences in the innovation and the design of the trial. In this study, the main reasons participants stopped testing were time restrictions, not seeing the benefit of using the application, or the application not addressing users' needs as well as intended. We also provided practical guidelines to help instigators and managers reduce attrition in their living lab field trials. Here, the main outcome is that communication with test users plays an important role in minimizing dropout attrition, which in turn yields valuable information regarding non-usage attrition. Project instigators and managers should take care to recognize the factors that affect attrition and consider how they can predict future adoption behaviour.

 

Acknowledgements

An earlier version of this article was presented at the XXVI International Society for Professional Innovation Management (ISPIM) Conference – Shaping the Frontiers of Innovation Management, Budapest, Hungary, June 14–17, 2015.

 


References

Baccarne, B., Logghe, S., Veeckman, C., & Schuurman, D. 2013. Why Collaborate in Long-Term Innovation Research? An Exploration of User Motivations in Living Labs. In 4th ENoLL Living Lab Summer School 2013. European Network of Living Labs.

Ballon, P., Pierson, J., & Delaere, S. 2005. Test and Experimentation Platforms for Broadband Innovation: Examining European Practice. Brussels: Vrije Universiteit Brussel.
http://dx.doi.org/10.2139/ssrn.1331557

Dutilleul, B., Birrer, F. A. J., & Mensink, W. 2010. Unpacking European Living Labs: Analysing Innovation's Social Dimensions. Central European Journal of Public Policy, 4(1): 60–85.

Ebbesson, E., & Eriksson, C. I. 2013. Co-creating Innovative UGC Services with the Media Industry. In 46th Hawaii International Conference on System Sciences (HICSS): 3057–3066. New York: IEEE.
http://dx.doi.org/10.1109/HICSS.2013.133

European Commission. 2009. Living Labs for User-Driven Innovation: An Overview of the Living Labs Methodology, Activities and Achievement. Brussels: European Commission.
http://dx.doi.org/10.2759/34481

Eysenbach, G. 2005. The Law of Attrition. Journal of Medical Internet Research, 7(1): e11.
http://dx.doi.org/10.2196/jmir.7.1.e11

Følstad, A. 2008. Living Labs for Innovation and Development of Communication Technology: A Literature Review. The Electronic Journal for Virtual Organisations and Networks, 10: 99–131.

Grudin, J. 2002. Group Dynamics and Ubiquitous Computing. Communications of the ACM, 45(12): 74–78.
http://dx.doi.org/10.1145/585597.585618

Kaasinen, E., Koskela-Huotari, K., Ikonen, V., & Niemelä, M. 2013. Three Approaches to Co-Creating Services with Users. In G. Salvendy & W. Karwowski (Eds.), Advances in the Human Side of Service Engineering: 286–295. Boca Raton, FL: Taylor & Francis Group.

Kanstrup, A. M., Bjerge, K., & Kristensen, J. E. 2010. A Living Laboratory Exploring Mobile Support for Everyday Life with Diabetes. Wireless Personal Communications, 53(3): 395–408.
http://dx.doi.org/10.1007/s11277-010-9953-3

Kjeldskov, J., & Skov, M. B. 2014. Was It Worth the Hassle?: Ten Years of Mobile HCI Research Discussions on Lab and Field Evaluations. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services: 43–52. New York: ACM.
http://dx.doi.org/10.1145/2628363.2628398

Korn, M., & Bødker, S. 2012. Looking Ahead: How Field Trials Can Work in Iterative and Exploratory Design of Ubicomp Systems. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing: 21–30. New York: ACM.
http://dx.doi.org/10.1145/2370216.2370221

Schuurman, D., & De Marez, L. 2009. User-Centered Innovation: Towards a Conceptual Integration of Lead Users and Living Labs. In Proceedings of COST 298 Conference: 13–15. Ljubljana, Slovenia: University of Ljubljana.

Schuurman, D., Baccarne, B., Kawsar, F., Seys, C., Veeckman, C., De Marez, L., & Ballon, P. 2013. Living Labs as Quasi-Experiments: Results from the Flemish LeYLab. In Proceedings of The XXIV ISPIM Conference, Helsinki, Finland, June 16–19, 2013.

Schuurman, D. 2015. Bridging the Gap between Open and User Innovation? Exploring the Value of Living Labs as a Means to Structure User Contribution and Manage Distributed Innovation. Doctoral dissertation, Ghent University, Belgium.

Simons, L. P., Hampe, J. F., & Guldemond, N. A. 2013. Designing Healthy Living Support: Mobile Applications Added to Hybrid (E) Coach Solution. Health and Technology, 3(1): 85–95.
http://dx.doi.org/10.1007/s12553-013-0052-9

Ståhlbröst, A., & Bergvall-Kåreborn, B. 2011. Exploring Users Motivation in Innovation Communities. International Journal of Entrepreneurship and Innovation Management, 14(4): 298–314.
http://dx.doi.org/10.1504/IJEIM.2011.043051

Ståhlbröst, A., & Bergvall-Kåreborn, B. 2013. Voluntary Contributors in Open Innovation Processes. In J. S. Z. Eriksson Lundström, M. Wiberg, S. Hrastinski, M. Edenius, & P. J. Ågerfalk (Eds.), Managing Open Innovation Technologies: 133–149. Berlin: Springer Berlin Heidelberg.
http://dx.doi.org/10.1007/978-3-642-31650-0_9


Keywords: attrition, drop-out, field trial, living lab, open innovation, user engagement, user involvement