Thursday, July 18, 2019

Boon or Bane

Advantages and Disadvantages of Internet Research Surveys: Evidence from the Literature

Ronald D. Fricker, Jr. and Matthias Schonlau, RAND
Field Methods, Vol. 14, No. 4, 2002, 347-367.

E-mail and Web surveys have been the subject of much hyperbole about their capabilities as well as some criticism about their limitations. In this report we examine what is and is not known about the use of the Internet for surveying. Specifically, we consider evidence found in the literature regarding response rates, timeliness, data quality, and cost. In light of this evidence, we evaluate popular claims that Internet-based surveys can be conducted faster, better, cheaper, and/or easier than surveys conducted via conventional modes. We find that the reality of cost and speed often does not live up to the hype. Nonetheless, it is possible to implement Internet-based surveys in ways that are effective and cost-efficient. We conclude that the Internet will continue to grow in importance for conducting certain types of research surveys.

INTRODUCTION

With the advent of the World Wide Web (WWW or Web) and electronic mail (e-mail), the Internet has opened up new vistas in surveying. Rather than mailing a paper survey, a respondent can now be given a hyperlink to a Web site containing the survey. Or, in an e-mail survey, a questionnaire is sent to a respondent via e-mail, possibly as an attachment. As either an alternative or an adjunct to conventional survey modes (e.g., the telephone, mail, and personal interviewing), Internet-based surveys offer unique new capabilities. For example, a Web survey can relatively simply incorporate multi-media graphics and sound into the survey instrument. Similarly, other features that were once limited to much more expensive interviewer-assisted modes, such as automatic branching and real-time randomization of survey questions and/or answers, can be incorporated into self-administered Web (and some e-mail) surveys. However, not unlike when phone and mail surveys were first introduced, concerns exist about whether these Internet-based surveys are scientifically valid and how they are best conducted.
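As a concrete illustration of the real-time randomization mentioned above, the following short Python sketch shuffles a question's answer options separately for each respondent. It is a minimal, hypothetical example: the question wording, identifiers, and option list are ours and are not drawn from any survey discussed in this article.

import random

# Hypothetical question definition; the wording and options are illustrative
# only and are not taken from any survey discussed in this article.
QUESTION = {
    "id": "q7_mode_preference",
    "text": "Which survey mode would you prefer for future surveys?",
    "options": ["Web", "E-mail", "Postal mail", "Telephone"],
}

def randomized_options(question, respondent_id):
    """Return the answer options in a per-respondent random order.

    Seeding the generator with the respondent id keeps the order stable if the
    respondent reloads the page, while still varying the order across
    respondents to reduce option-order effects.
    """
    rng = random.Random(respondent_id)
    options = list(question["options"])
    rng.shuffle(options)
    return options

if __name__ == "__main__":
    print(randomized_options(QUESTION, respondent_id=12345))
    print(randomized_options(QUESTION, respondent_id=67890))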
In the late 1980s and early 1990s, prior to the widespread availability of the Web, electronic mail was first explored as a survey mode. As with the Web, e-mail offers the possibility of nearly instantaneous transmission of surveys to recipients while avoiding any postal costs. Early e-mail surveys were primarily ASCII text-based, with rudimentary formatting at best, which tended to limit their length and scope. The only significant advantage they offered over paper was a likely decrease in delivery and response times, though some also hypothesized that the novelty of the new medium might enhance response rates (Parker, 1992; Zhang, 2000).

The Web started to become widely available in the early to mid-1990s and quickly supplanted e-mail as the Internet survey medium of choice because it was easy to implement, it provided an improved user interface with the respondent, and it offered the possibility of multimedia and interactive surveys containing audio and video. For convenience samples, the Web also offered a way around the necessity of having to know respondents' e-mail addresses. As a result, quick polls and other types of entertainment surveys have become increasingly popular and widespread on the Web.

Internet-based surveys, those conducted via the Web in particular, are now in vogue because of three assumptions: (a) Internet-based surveys are much cheaper to conduct; (b) Internet-based surveys are faster; and (c) when combined with other survey modes, Internet-based surveys yield higher response rates than conventional survey modes by themselves. Yet, does the evidence in the literature confirm these assumptions? Are Internet-based surveys faster, better, cheaper, and/or easier than surveys conducted via conventional methods? What can we say about the strengths and current limitations of Internet-based surveying from the evidence in the literature?

In this report we synthesize the literature about the use of the Internet (e-mail and the Web) in the survey process. Other reviews of the literature include Schonlau, Fricker and Elliott (2002), Couper (2000), Dillman (2000), and Tuten et al. (2002). In addition, an extensive list of Web survey literature can be found on the Web at www.websm.org.

LITERATURE SUMMARY FOR INTERNET-BASED SURVEYS

In this section we summarize key characteristics of Internet-based surveys, that is, surveys using the Web and e-mail as a response mode, as documented in the literature. We engaged a professional librarian to conduct a thorough literature search in the Social Science Database and the Conference Paper Index database. The Social Science Database indexes more than 1,500 of the most important worldwide social science journals since 1972. Additional articles relevant to the social sciences are also incorporated from over 2,400 journals in the natural, physical, and biomedical sciences. The Conference Paper Index provides access to records of the more than 100,000 scientific and technical papers (since 1973) presented at over 1,000 major regional, national, and international meetings each year.

The literature search yielded 57 papers that were substantively interesting and informative. Here we report on a subset of those articles of direct relevance to this discussion. (Appendix B of Schonlau et al., 2002, lists 52 papers, and we have augmented the list here with an additional five that have appeared since Schonlau et al. was published.) We consider the following key characteristics of surveys: (1) response rate, (2) timeliness, (3) data quality, and (4) cost. We compare what has been published in the literature about Internet-based surveys to a natural conventional survey alternative: mail. While no survey mode is going to be optimal in all of these areas, we chose mail because both mail and Internet-based surveys are self-administered, mail surveys tend to be the least expensive of the conventional modes, and virtually all of the comparisons made in the literature are to mail surveys.

Response Rates

A standard way to summarize survey performance is by comparing response rates among various survey modes.
By survey mode (sometimes called response mode) we mean the mode by which the survey itself is conducted: Web, e-mail, mail, etc. In this section, we compare response rates for studies classified into one of three categories: (1) surveys employing probability samples or conducting a census that used the Web as the only response mode; (2) surveys in which respondents were allowed to choose one of several response modes, including at least one Internet-based response mode; and (3) surveys in which respondents were assigned one of several response modes, including at least one Internet-based response mode.

We begin with results for studies that used the Web as the primary or only response mode with either censuses or probability samples (Table 1). The table is ordered by year, and it shows that Web-only research surveys have so far achieved only fairly modest response rates, at least as documented in the literature.

Table 1. Response Rates for Web-Only Surveys Using Probability Samples or Censuses

Study | Sample Size | Response Rate | Population
Couper et al. (2001) | 1,602 | 42% (d) | University of Michigan students
Asch (2001) (a) | 14,150 | 8% | College-bound high school and college students
Everingham (2001) | 1,298 | 44% | RAND employees
Jones and Pitt (1999) | 200 | 19% | University faculty
Dillman et al. (1998) (b) | 9,522 | 41% | Purchasers of computer products
Dillman et al. (1998) (c) | 2,466 | 38% | Purchasers of computer products

Notes: (a) Most respondents were contacted via their parents, which lowered the response rate; a mail response mode was added late in the survey protocol. (b) A relatively plain Web survey design was used in this experimental arm. (c) A relatively fancy Web survey design was used in this experimental arm. (d) Another 5.6 percent of partially completed surveys were also received.

In fact, the results in Table 1 may overstate response rate performance for research surveys of broader populations because Dillman's results are based on participants who were initially contacted by phone and had agreed to participate in a Web survey, and Everingham's sample was of a closed population of employees at one company. Jones and Pitt (1999) sampled staff at 10 universities whose staff directories were available on the WWW, and Couper et al. (2001) surveyed 1,602 University of Michigan students. In all of these cases, the potential survey participants were likely to be more homogeneous and more disposed to respond than a random sample of the general population. In addition, university populations often tend to have greater access to the Internet, and today's college students can be expected to be more computer- and Internet-savvy.

In Table 2 we summarize the studies published in the literature that allowed the respondent to choose to respond either via the Web or through the mail, ordered in terms of the fraction that responded via the Web. Since for many populations the fraction of respondents that can or will respond via the Web may not be sufficiently large, and mail emerges as the most relevant second mode for a dual-mode survey, these studies are important.
Table 2. Studies Allowing Respondents to Choose a Web or Mail Response Mode

Study | Total Sample Size | % Chose Mail | % Chose Web | Overall Response Rate | Population
Raziano et al. (2001) (a) | 57 | 96% | 4% | 77% | U.S. Geriatric Chiefs
Sedivi Gaul (2001) and Griffin et al. (2001) (American Community Survey 2000) | 9,596 | 95% | 5% | 38% | U.S. households
Sedivi Gaul (2001) and Griffin et al. (2001) (Library Media Center Survey 1998) | 924 | 95% | 5% | 38% | Librarians
Sedivi Gaul (2001) and Griffin et al. (2001) (Library Media Center Survey 1999) | 13,440 | 81% | 19% | 63% | Librarians
Quigley et al. (2000) (DoD study) | 21,805 | 77% | 23% | 42% | U.S. military and spouses
Quigley et al. (2000) (DoD study) | 7,209 | 83% | 27% | 37% | Civilians
Raziano et al. (2001) (c) | 57 | 52% | 45% (b) | 58% | U.S. Geriatric Chiefs
Zhang (2000) | 201 | 20% | 80% | 78% | Researchers (d)
Schleyer and Forrest (2000) | 405 | 16% | 84% | 74% | Dentists

NOTE: The duplicate Quigley et al. and Raziano et al. entries represent multiple arms of the same study. (a) This arm of the study used mail as the contact mode. (b) Includes e-mail; the authors do not distinguish between e-mail and Web as a response mode. (c) This arm of the study used e-mail as the contact mode. (d) The response mode in this case was either e-mail or Web.

In Table 2 we see that for most of the studies respondents currently tend to choose mail when given a choice between Web and mail. In fact, even when respondents are contacted electronically it is not evident that they will elect to respond electronically, as in Raziano et al. (2001), which did not find a statistically significant difference in response rate. Zhang (2000) and Schleyer and Forrest (2000) are the only studies that contradict this conclusion, and they tend to represent groups of respondents that are largely or entirely computer literate and comfortable with electronic communication. In comparison, Quigley et al. (2000) and the American Community Survey (2000) study tend to represent general cross-sections of the U.S. public in terms of computer literacy and availability, and for these studies the fraction that chose Web as the response mode was quite small.

In Table 3 we present studies that compared response rates between groups assigned to one of either two or three response modes. Here we see that Internet-based modes generally do not achieve response rates comparable to mail surveys. (The table is first ordered from lowest to highest e-mail response rate and then by Web response rate.) Further, Sheehan (2001) concludes that e-mail response rates are declining over time (though the reason for the decline is unknown).

Table 3. Studies with Multiple Study Arms Comparing Response Rates for E-mail, Web, and Mail Response Modes

Study | Total Sample Size | E-mail | Web | Mail | Population
Tse et al. (1995) | 400 | 6% | -- | 27% | University staff
Tse (1998) | 500 | 7% | -- | 52% | University staff
Schuldt and Totten (1994) | 418 | 19% | -- | 57% | MIS and marketing faculty
Kittleson (1995) | 153 | 28% | -- | 78% | Health educators
Mehta and Sivadas (1995) | 262 | 40% | -- | 45% | BBS newsgroup users
Couper et al. (1999) | 8,000 | 43% (a) | -- | 71% | Federal employees
Schaefer and Dillman (1998) | 904 | 58% | -- | 53% | WSU faculty
Parker (1992) | 140 | 68% | -- | 38% | AT&T employees
Jones and Pitt (1999) (c) | 200 | 34% | 19% | 72% | University staff
Vehovar et al. (2001) (b) | 1,800 | -- | 32% | 54% | Businesses in Slovenia
Pealer et al. (2001) | 600 | -- | 58% | 62% | Undergraduates at the University of Florida
McCabe et al. (2002) | 5,000 | -- | 63% | 40% | University of Michigan students

NOTE: "--" indicates not applicable; the indicated response mode was not evaluated in the study. (a) An additional 5 percent that were returned by mail are not included in this number. (b) In the second of two study arms respondents were contacted by both mail and e-mail. (c) An additional phone study arm achieved a response rate of 63%; an additional contact-by-mail / response-by-fax study arm achieved a response rate of 43%.
Parker (1992) is the only study of which we are aware in which e-mail achieved comparable or higher response rates than postal mail. Parker conducted a survey of 140 expatriate AT&T employees on matters related to corporate policies for expatriation and repatriation, reporting a 63 percent response rate via e-mail (63 returned out of 100 sent by e-mail) compared to a 38 percent response rate for postal mail (14 returned out of 40 sent by mail). Interestingly, Parker (1992) attributed the difference in response rates to the fact that, at the time, AT&T employees received a lot of corporate paper junk mail, whereas over the internal e-mail system they received little to no electronic junk mail. Hence, recipients of the paper survey were more likely to discount its importance than e-mail survey recipients. With the spread of e-mail spam, this situation is likely to be reversed today.

In an example more typical of the current state of affairs, and in one of the few studies to randomize respondents to mode, Couper et al. (1999) obtained an average e-mail response rate of about 43 percent compared to about 71 percent with mail in a survey of employees in five federal statistical agencies. Couper et al. chose e-mail as the mode for the survey over the Web because e-mail was almost universally available in the five agencies while the Web was often not.

Turning to the Web, McCabe et al. (2002) conducted an experiment in which 5,000 University of Michigan students were randomized to receive a survey about drug and alcohol use: 2,500 potential respondents received a mail survey and 2,500 were notified of an equivalent Web-based survey. Respondents in both groups received a $10 gift certificate incentive. In this study, McCabe et al. achieved a 63 percent Web response rate compared to 40 percent for mail. In contrast, however, Pealer et al. did not find a statistically significant difference between Web and mail response rates for a survey of undergraduates at the University of Florida.
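Several of the comparisons above turn on whether an observed difference in response rates is statistically significant. As a simple illustration of how such a comparison can be checked, the following Python sketch applies a two-proportion z-test to arm sizes and rates like those reported for McCabe et al.; the exact respondent counts are assumed for the example and are not taken from the original study.

from math import erf, sqrt

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two response rates."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled response rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error under H0
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal p-value
    return p1, p2, z, p_value

# Roughly 63% of 2,500 Web invitations and 40% of 2,500 mail surveys returned
# (assumed counts for illustration only).
print(two_proportion_z_test(1575, 2500, 1000, 2500))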
The only other published study that achieved exceptional response rates with an Internet-based survey is Walsh et al. (1992), in which potential respondents were solicited by e-mail and offered the option of responding by e-mail or returning a paper survey by postal mail. While they did not conduct an equivalent postal-mail-only survey for comparison (and thus are not listed in Table 3), Walsh et al. achieved a 76 percent overall response rate for an e-mail survey of a random sample of subscribers (300 out of a total population of 1,100) to a scientific computer network. In addition to providing nonrespondents with two follow-up reminders, a lottery prize of $250 was employed as an incentive. Walsh et al. found that 76 percent of the respondents replied by e-mail and the other 24 percent responded by postal mail. They also received requests from an additional 104 subscribers (who were not chosen in the sample of 300) to participate in the survey. Of the self-selected 104, 96 percent responded by e-mail. Not surprisingly, they also found a positive correlation between propensity to respond electronically and amount of network usage.

In conclusion, there is little evidence in the literature that Internet-based surveys achieve higher response rates, as a general rule, than conventional surveys. The few Internet-based surveys that have achieved higher response rates have tended to be either of university-based populations or of small, specialized populations. The majority of results reported in the literature show that Internet-based surveys at best currently achieve response rates equal to conventional modes and often do worse. The reasons for this difference are not entirely clear and require more study. Yet, as we have seen, there are also a few examples of Web surveys outperforming mail for some specialized populations. Whether this was idiosyncratic of these few surveys, or whether it is an indication that the methodology is maturing to achieve higher response rates in the new medium, is yet to be shown.

It is important to note that, contrary to intuition, there is no evidence in the literature that concurrent fielding of a survey via a conventional mode and via an Internet-based mode results in any significant improvement in response rates. This may be because, as Table 2 shows, except in specialized populations, when given a choice between mail and Web surveys most individuals tend to respond to the mail survey. In addition, there is no evidence that those who would normally refuse to complete a mail survey would choose to respond if the survey were Internet-based. Of course, these results are specific to the current state of the art of Internet-based surveying, existing technology, and the current state of respondent attitudes toward surveys, both Internet-based and conventional. Future developments may significantly alter these findings, and more research is certainly warranted in an effort to improve the response rate performance of Internet-based surveys.

Finally, we note that while research surveys based on probabilistic survey sampling methods are generally acknowledged as being necessary to conduct statistical inference to any population outside of the sample, convenience sampling can also be useful to researchers for other purposes. For example, early in the course of research, responses from a convenience sample might be useful in developing research hypotheses. Responses from convenience samples might also be useful for identifying issues, defining ranges of alternatives, or collecting other sorts of non-inferential data. In fact, in certain types of qualitative research, convenience samples on the Web may be just as valid as other methods that use convenience samples.

There are a number of studies in the literature that used convenience samples, for which response rate comparisons do not apply (and which hence were precluded from Tables 1-3), often with respondents recruited through advertising of some form. While response rates for these studies are meaningless, we present a few of the more interesting studies here to illustrate alternative ways that Web surveys can be used. In a social science study of geographic mobility and other topics, Witte et al. (2000) recruited a large number of respondents: 32,688.
Similarly, Vehovar et al. (1999) conducted a large-scale survey targeted at the Internet population of Slovenia, which corresponds to about 13 percent of the total population of Slovenia. In both cases, similarly sized traditional mail surveys would likely have been more complicated and very expensive to field. Coomber (1997) conducted a survey about drug dealer practices, where his target population was illicit drug dealers throughout the world. Coomber solicited responses by e-mail and through advertising, and collected responses on the Web, hoping his respondents would be encouraged to respond more honestly because of a perceived anonymity.

Timeliness

In today's fast-paced world, survey timeliness is increasingly stressed. The length of time it takes to field a survey is a function of the contact, response, and follow-up modes. Decreasing the time in one or more of these parts of the survey process will tend to decrease the overall time in the field. However, it is important to keep in mind that the relevant measure is not average response time but maximum response time (or possibly some large percentile of the response time distribution), since survey analysis generally does not begin until all of the responses are in.

Most studies tend to conclude, often with little or no empirical evidence, that Internet-based surveys are faster than surveys sent by postal mail. This conclusion is usually based on the recognition that electronic mail and other forms of electronic communication can be transmitted instantly while postal mail takes more time. However, simply concluding that Internet-based surveys are faster than mail surveys naively ignores the reality that total survey fielding time is more than just the survey response time. A complete comparison must take into account the mode of contact and how long that process will take, as well as the mode of follow-up, allowing for multiple follow-up contact periods. For example, if e-mail addresses of respondents are unavailable and a probability sample is desired, then respondents may have to be contacted by mail. In this case a Web survey only saves time on the return of the completed questionnaire, and not on the contact and follow-up, so that the resulting time savings may be only a fraction of the total survey fielding time.

In the case of e-mail surveys, where the assumption is that the potential respondents' e-mail addresses are known and can therefore be used not just for delivering the survey but also for pre-notification and non-response follow-up, the time savings can be substantial. For example, one is often constrained to allow for a week of delivery time in the postal mail. With an advance letter and a single mail follow-up, this one-week delay telescopes into over a month of survey fielding when two weeks must be budgeted for initial survey delivery and return time, plus an additional two weeks for a single follow-up reminder delivery and response time. By comparison, in an all-electronic process the analogous operation has the potential to be completed in a few days or less.
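The arithmetic behind that comparison can be made explicit with a small sketch. The wave durations below are assumptions chosen to mirror the example in the preceding paragraph (roughly a week of postal delivery in each direction, plus one reminder wave) and are not drawn from any study cited here.

def fielding_days(waves):
    """Total fielding time as the sum of each wave's delivery and response windows (in days)."""
    return sum(delivery + response for delivery, response in waves)

# Postal process: initial mail-out plus one mailed reminder, each budgeted at
# roughly a week out and a week back.
mail_waves = [(7, 7), (7, 7)]

# All-electronic process: e-mail invitation plus one e-mail reminder, with
# essentially instant delivery and a few days allowed for responses.
electronic_waves = [(0, 3), (0, 3)]

print("Mail fielding period:", fielding_days(mail_waves), "days")
print("All-electronic fielding period:", fielding_days(electronic_waves), "days")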
Yet, even in an all-electronic environment it is not necessarily true that the Internet-based survey will be timelier. For example, in a comparison of response speed between e-mail and mail, Tse et al. (1995) did not find a statistically significant difference in the time between sending and receipt of an e-mail survey to university faculty and staff and an equivalent survey sent by mail. Furthermore, to achieve sufficiently high response rates, it may be necessary to leave an Internet-based survey in the field for an extended period of time. For example, a prominent commercial Internet survey company, Knowledge Networks, has indicated that to achieve 70-80 percent response rates they must leave a survey in the field for about 10 days. This period comprises one workweek with two weekends, because they find that most respondents complete their surveys on the weekend.

However, there are cases in the literature that did show more timely response. Tse (1998) found a statistically significant difference in the average initial response time between those who received an e-mail survey and those who received a paper survey in the campus mail (one day versus 2-1/2). Further, in Tse's experiment, most e-mail survey recipients either responded almost immediately (within one day) or did not respond at all, which raises the question of the effectiveness of non-response follow-up in the electronic forum. Schaefer and Dillman (1998) also documented faster e-mail response: 76 percent of all responses were received in four days or less. Pealer et al. (2001) found a statistically significant difference in the average return time between their e-mail study arm (7.3 days) and their mail study arm (9.8 days). However, the final e-mail survey was received after 24 days and the final mail survey after 25 days, a negligible difference in overall fielding time.

In conclusion, while it is certainly reasonable to conclude prima facie that the delivery time of an Internet-based survey is faster than the delivery of a survey by mail, it does not necessarily follow that the increased delivery speed will translate into a significantly shorter survey fielding period. Two points are relevant: (1) dramatic improvements are only possible with an all-electronic process, which is currently only possible for specialized populations; and (2) even for populations in which all-electronic surveys are possible, the literature is not very informative, as there is no information available about the length of fielding time needed to achieve particular response rates.

Data Quality

When the primary purpose of a survey is to gather information about a population, the information is useless unless it is accurate and representative of the population. While survey error is commonly characterized in terms of the precision of statistical estimates, a good survey program seeks to reduce all types of errors, including reporting, sampling, non-response, and measurement errors. (See Groves, 1989, for a detailed discussion of the Total Survey Error approach.) Indeed, even when a survey is conducted as a census, the results may still be affected by many of these sources of error.

Coverage error is the most widely recognized shortcoming of Internet-based surveys. Today the general population coverage for Internet-based surveys still significantly lags behind the coverage achievable using conventional survey modes. However, there are some important caveats to keep in mind. First, the coverage differential is rapidly closing and may become immaterial in the relatively near future (though this is far from a foregone conclusion).
Second, even though conventional modes have the ability to reach most of the population, it is becoming increasingly difficult to get people to respond (e.g., answering machines are routinely used to screen calls these days and, hence, screen out phone surveyors and solicitors). Third, while conventional modes have near universal coverage, there will always be special subpopulations that have little or no coverage for any mode. Fourth, in the case of Internet-based surveys, access is only one consideration. Even if the respondent in principle has Internet access (e.g., through a library), there are large portions of the population that are still computer illiterate and would have difficulty correctly responding to such a survey. Finally, access and computer literacy are necessary but not sufficient conditions for success: respondents must also have compatible computer hardware and software. However, less than universal access to the Internet can be immaterial for some studies, such as studies that focus on closed populations with equal access or on Internet users, for example.

In order to improve coverage, Dillman (2000) recommends a mixed-mode strategy for contact, using both e-mail and postal mail for pre-notification. Similarly, mixed response modes, such as Web and e-mail, can be used to increase coverage. However, as we previously mentioned, there is little evidence in the literature that concurrent mixed-mode fielding increases response rates over what would have been achieved using a single, conventional mode.

In addition to coverage, data quality is a function of a number of other dimensions, including: (1) unit and item nonresponse; (2) honesty of responses, in particular for questions of a sensitive nature; (3) completeness of responses, particularly for open-ended questions; and (4) quality of data transcription into an electronic format for analysis, if required by the survey mode.

All other things held constant (such as pre-notification and non-response follow-up), unit and item non-response are generally smaller using interviewer-assisted modes (de Leeuw, 1992) compared to self-administered survey modes. Face-to-face interviews have long been considered the gold standard of surveys and tend to result in the lowest unit and item non-response as well as minimizing respondent misunderstanding of questions and skip patterns. However, it has been shown that interviewer-administered survey modes, particularly face-to-face, yield more socially desirable answers than self-administered modes (de Leeuw, 1992; Kiesler et al., 1986, p. 409). This is particularly relevant for surveys of sensitive topics or for surveys that contain sensitive questions, such as questions about income or sexual practices. Mail and other self-administered modes tend to be the least expensive but often have higher unit and item non-response rates. On the other hand, they tend to elicit the most accurate responses to sensitive questions.

Data quality is usually measured by the number of respondents with missing items or the percentage of missing items. For open-ended questions, longer answers are usually considered more informative and of higher quality. In the studies that compared e-mail with mail, it appears that for closed-ended questions e-mail surveys may incur a higher percentage of missing items than mail surveys.
As Table 4 shows, for studies in the literature that reported the percentage of missed items, the percentage for mail respondents was less than or equal to the percentage for e-mail respondents.

Table 4. Average Percentage of Missed Items for E-mail and Postal Mail Surveys

Study | E-mail | Postal Mail | Population
Pealer et al. (2001) | 14.2 | 14.2 | Undergraduates, University of Florida
Bachman et al. (1996) | 3.7 | 0.7 | Business school deans and chairpersons
Comley (1996) (a) | 1.2 | 0.4 | Names and addresses purchased from an Internet magazine in the U.K.
Paolo et al. (2000) | 1.2 | 0.5 | Fourth-year medical students
Couper et al. (1999) (b) | 0.8 | 0.8 | Employees of five U.S. federal agencies
Mehta and Sivadas (1995) (c) | 0.3 | 0.3 | Active U.S. users of bulletin board system (BBS) news groups

Notes: (a) Based on three questions. (b) Based on 81 attitude questions. (c) Across five different study arms, one of which allowed for both mail and e-mail responses.

At the respondent level, Paolo et al. (2000) also found that 27 percent of e-mail respondents did not respond to at least one question, versus 9 percent of mail respondents. Kiesler and Sproull (1986) found the opposite: in the e-mail (contact and response) study arm only 10 percent of respondents failed to complete or spoiled one item, compared to 22 percent in the mail (contact and response) study arm. Tse (1995, 1998) found no difference in quality of responses.

For open-ended questions, studies found that e-mail responses are either longer than or of the same length as mail responses. Comley (1996) found that for the two open-ended questions e-mail respondents gave longer answers; one respondent even wrote a mini-essay. Mehta and Sivadas (1995) found that there was hardly any difference between the average completed responses for both the open and close-ended questions (Mehta and Sivadas, 1995, p. 436). Kiesler and Sproull (1986) found that the total number of words did not significantly differ for e-mail and mail respondents. If one also takes into consideration that open-ended items for mail respondents are not always coded for cost reasons, it appears that Internet-based survey modes may be better suited to open-ended questions.

Other quality issues for Internet-based surveys resulting from some sort of sampling error are generally the same as for conventional surveys. However, as the Internet becomes more ubiquitous, collecting much larger samples becomes more feasible. Indeed, we have talked to some organizations recently that have electronic access to their entire population and are considering eliminating sampling and simply conducting censuses. Often these census efforts result in much larger numbers of respondents than otherwise could have been gathered using traditional survey sampling techniques, and those larger numbers give the appearance of greater statistical accuracy. However, such accuracy may be misleading if non-response biases are not accounted for, and researchers need to carefully consider the trade-offs between smaller samples that allow for careful non-response follow-up and larger samples with little or no follow-up. The former may have larger standard errors but less bias, while the latter may have much smaller standard errors but an unknown, and potentially very large, amount of bias.
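To make the standard-error-versus-bias trade-off concrete, the following Python sketch compares the root mean squared error of an estimated proportion under two hypothetical designs; all of the numbers are illustrative assumptions, not figures from any study discussed here.

from math import sqrt

def rmse_of_proportion(p_true, n_respondents, nonresponse_bias):
    """Root mean squared error of an estimated proportion.

    Combines sampling variance, which shrinks as the number of respondents
    grows, with a fixed non-response bias, which does not shrink at all.
    """
    p_observed = p_true + nonresponse_bias
    sampling_variance = p_observed * (1 - p_observed) / n_respondents
    return sqrt(nonresponse_bias ** 2 + sampling_variance)

# A modest probability sample with careful follow-up (small residual bias)
# versus a very large census-style collection with substantial non-response bias.
print(rmse_of_proportion(p_true=0.50, n_respondents=400, nonresponse_bias=0.01))    # about 0.027
print(rmse_of_proportion(p_true=0.50, n_respondents=40000, nonresponse_bias=0.05))  # about 0.050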
Finally, we note that Web surveys offer the ability to clearly improve on other forms of self-administered surveys in terms of data validation, skip pattern automation, and the elimination of transcription errors, all of which help to minimize measurement error. Web surveys can be programmed to conduct input validation as a logical check of the respondent's answers. These types of checks improve data quality and subsequently save time in the preparation of the analysis file. As with logic checks, Web surveys can also be programmed to manage the process of skipping questions. This eliminates errors and, from the respondent's point of view, simplifies the process of taking the survey. And, while all conventional surveys require some form of conversion into an electronic format for analysis, for Web surveys respondents' answers are instantly downloaded into a database, avoiding transcription errors.
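The following Python sketch illustrates the kind of input validation and skip-pattern logic described above. The field names, ranges, and branching rules are hypothetical and are meant only to show the general idea, not the design of any survey discussed in this article.

def validate_age(raw_value):
    """Logical check applied before an answer is written to the database."""
    try:
        age = int(raw_value)
    except ValueError:
        return None, "Please enter your age as a whole number."
    if not 18 <= age <= 110:
        return None, "Please enter an age between 18 and 110."
    return age, None

def next_item(item_id, answer):
    """Automated skip pattern: route the respondent around items that do not apply."""
    if item_id == "uses_internet":
        # Skip the Internet-usage follow-ups for respondents who answer "no".
        return "household_income" if answer == "no" else "hours_online_per_week"
    return "end_of_section"

if __name__ == "__main__":
    print(validate_age("abc"))                  # fails the type check
    print(validate_age("17"))                   # fails the range check
    print(validate_age("34"))                   # passes
    print(next_item("uses_internet", "no"))     # branches past the follow-up items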
(For example, while Field Methods, Vol. 14 No. 4, 2002 347-367. 14 the e-mail survey was designed so that respondents would use the reply function of their e-mail program so the resulting replies could be automatically read into a database upon receipt. Further, almost 47 percent of the e-mail surveys required some type of clerical action to prepare them for automatic reading. On the other hand, Raziano et al. (2001) in a small study of 110 Geriatric Chiefs crosswise the U. S. , compute the cost per respondent for their mail study arm to be $7. 70 and for their e-mail study arm $10. 50. The programming time to construct the e-mail survey is factored into this calculation. However, the total programming time accounted for, two hours, may be unrealistic for a large or complicated survey operation.Also, these estimates fail to reflect the fact that their postal arm response rate from the first mail-out exceeded the e-mail arm response rate after four contact attempts. Hence, for a given desired response rate, the difference in costs would be less as fewer mailings would be required. Similarly, Schleyer and Forrest (2000) in their study received responses over the Web, by mail, and by fax and found the total costs for the Web survey turned out to be 38 percent lower than for the equivalent mail survey. Asch (as reported in Schonlau et al. 2002) found that adding a Web response option to a mail survey to be economical when about 620 responses are obtained over the Web when the Web is first used as the primary survey mode and surveys are only mailed out to non-respondents. Their calculations were based on the trade-off of the expected savings in postage, printing, and labor costs to prepare survey mailing packages and code the subsequent survey returns against the expected extra costs of programming, additional management effort, and maintaining a telephone help-line for the Web survey.This study did achieve a cost savings since it garnered over 1,000 Web responses. In two studies that essentially ignore personnel costs, Mehta and Sivadas (1995) and Jones and Pitt (1999) conclude, not surprisingly, that Internet-based surveys are less high-priced than mail surveys. These conclusions simply stem from the fact that Internetbased surveys do not incur postage and printing costs while mail surveys do. In conclusion, when only considering postage and printing costs, e-mail and Web surveys almost by definition are cheaper than mail surveys.However, when the total costs of a survey are considered, including labor and other costs, Web surveys may or may not be cheaper depending on whether the additional expenses incurred with that mode, such as programmer costs, are offset by savings, such as postage and data entry costs. When planning for and subsequently put to death a Web survey, care must be taken that unanticipated technical problems are minimized or these problems can easily eliminate all potential cost benefits. Field Methods, Vol. 14 No. 4, 2002 347-367. 15SUMMARIZING THE CURRENT PERFORMANCE OF INTERNET SURVEYS In the Introduction we said that Internet-based surveys are in vogue those conducted via the Web in particular primarily be cause of three assumptions (a) Web surveys are much cheaper to conduct (b) Web surveys are faster and, (c) combined with other survey modes, Web surveys yield a higher response rate than the other survey modes by themselves. 
SUMMARIZING THE CURRENT PERFORMANCE OF INTERNET SURVEYS

In the Introduction we said that Internet-based surveys, those conducted via the Web in particular, are in vogue primarily because of three assumptions: (a) Web surveys are much cheaper to conduct; (b) Web surveys are faster; and (c) combined with other survey modes, Web surveys yield a higher response rate than the other survey modes by themselves. That is, the usual naive generalization about Internet-based surveys is that they can be conducted faster, better, cheaper, and easier than surveys conducted via conventional methods. How do these claims stand up against what has been published in the literature?

Faster? Web surveys are thought to be much faster than conventional survey modes. While there is no question that the delivery time of an Internet-based survey is shorter than that of a survey sent via the mail, there is little to no evidence in the literature to substantiate whether this speed subsequently results in a shorter overall fielding period. We are aware of a couple of organizations that have implemented all-electronic survey processes by communicating with respondents via e-mail, but this is currently possible only for prerecruited panels or specialized subsets of the population. If respondents must be contacted through mail or phone, which generally is the case if a probability sample is required by the research, then there may be only a marginal improvement in overall response times.

Better? Response rates for Web surveys where no other survey mode is given have tended to range from moderate to poor. The reasons for this are not clear. It is possible that potential respondents simply do not respond as well to electronic solicitation or response. If true, this may improve as Internet-based communication methods continue to spread and become routine with all segments of the general population. It is also possible that the execution of the Internet-based survey experiments has been less than optimal, something that will improve with surveyor experience. There are a few examples of Web surveys outperforming mail in some of the more recent comparisons between these two media. Whether this was a unique result for these few surveys, or whether it is a leading indicator that the field is maturing and learning how to achieve higher response rates in the new medium, is not known. In either case, it is of concern that any improvements in these areas may be offset by over-saturation of the population with other forms of commercial surveys.

Setting the question of response rate aside, Web surveys offer some advantages over conventional modes. For example, if multi-media and/or interactive graphics are required, then there are few conventional alternatives (and those alternatives, such as face-to-face interviewing, would likely be significantly more costly). If a convenience sample will suffice for the research, then the Web can be an excellent medium to use, particularly if the desired respondents are geographically diverse or hard to find or identify.

A major issue for Web surveys is that their ease of execution facilitates naive misuse. The particular concern for this medium is that the easy collection of large numbers of surveys can result in surveyors and survey data consumers confusing quantity with quality. There is ongoing research about the effects of surveying via the Internet, the Web in particular, on unit and item non-response and on the effect the medium has on survey responses. Preliminary results have been reported at some conferences and symposia, but little has appeared in the literature as yet.

Cheaper? The usual claim that Web surveys are much cheaper than mail surveys is not necessarily true. Web and e-mail surveys can save on some or all mailing costs, but except for very large surveys these may be small costs in the overall survey effort.
Web surveys can also eliminate data entry costs; e-mail survey results may not, because they often require additional manipulation before they can be downloaded into an analytical database. However, savings in data entry may be partially or completely offset by higher programming costs and additional help desk staffing requirements. The literature mostly neglects labor costs, which form the highest cost component for Web surveys. Nonetheless, adding a Web survey to a mail survey can be cost-efficient if done carefully and correctly.

Easier? The implementation of Web surveys is technically more complex than that of mail or phone surveys. Survey designers need to specify many issues related to the technical control of Web surveys (e.g., how to move back and forward between questions, input validation, passwords, for which questions answers are not optional) that are simpler or not required with conventional survey modes. Web surveys also require more extensive pretesting to ensure both that the questions elicit the desired information and that the program works properly across numerous hardware and software configurations. The fielding process may or may not be made easier. Internet-based surveys have the potential to eliminate some of the more labor-intensive fielding tasks, such as survey package preparation and mailing and the subsequent data entry. Yet, if mixed modes are required to obtain sufficient population coverage and/or response rates, then these tasks cannot be completely eliminated, and the fielding process may actually become more complex, since support for two or more modes must be maintained and managed.

What is the Future of Internet-based Surveying?

The first Internet browser was introduced only about a decade ago, and early use of the World Wide Web as a survey medium only started about five years ago. The result is that significant research results about the use of this new survey medium have only recently begun to become available in the literature. Hence, there is a great deal that is still not well known about Internet-based surveys. While some predict that Web surveys will replace other survey modes, we expect Web surveys to develop into a distinct survey mode with advantages and disadvantages that will have to be weighed against the conventional alternatives.

Little is known about Web instrument design and the effects of instrument design on how survey participants respond to a survey or a particular survey question, and about what enhances response rates and response accuracy. For example, at the 2001 American Association for Public Opinion Research conference, some anecdotal evidence was presented that respondents taking surveys on the Web had shorter attention spans, tending to browse the survey as they browse other Web sites. If true, this would suggest that long surveys and/or surveys with complex questions may not perform as well on the Web as by mail. While many of the design principles from paper-based surveys may translate to Internet-based surveys, much more research is required.

To date, most Web surveys have been conducted on convenience samples or in organizations where a list of the target population readily exists. However, Internet-based surveys with probability samples can be fielded by using the mail or telephone for respondent contact and the Web for response. There is currently no equivalent to random digit dialing for e-mail.
Even though the fraction of the population having access to e-mail will continue to grow, it is unlikely that one will ever be able to construct a random e-mail address in the same way a random telephone number is constructed. However, large commercial e-mail lists may yet emerge that are of high enough quality to be useful in survey research.

A major challenge for researchers will be to distinguish themselves and their surveys from the plethora of commercial and entertainment surveys that exist and continue to appear on the Web. These other surveys will continue to proliferate because the financial and technical barriers to Web surveys are so low. Thus, just as telephone survey response rates have continued to decline because of telemarketers, it is likely to become increasingly difficult to achieve superior response rates in the new medium. Nonetheless, Internet-based surveys are here to stay. The challenge for researchers is to learn to use the new medium to their best advantage.

REFERENCES

Asch, B. (2001). RAND, Santa Monica, California. Personal communication.

Bachman, E., J. Elfrink, and G. Vazzana (1996). Tracking the Progress of E-mail vs. Snail-Mail, Marketing Research, 8, 31-35.

Bradley, N. (1999). Sampling for Internet Surveys: An Examination of Respondent Selection for Internet Research, Journal of the Market Research Society, 41, 387-395.

Cochran, W. G. (1977). Sampling Techniques, 3rd edition, John Wiley & Sons, New York, NY.

Comley, P. (1996). Internet Surveys: The Use of the Internet as a Data Collection Method, ESOMAR/EMAC Research Methodologies for the New Marketing Symposium, ESOMAR Publication Services, vol. 204, 335-346.

Coomber, R. (1997). Using the Internet for Survey Research, Sociological Research Online, 2, 14-23.

Couper, M. (2000). Web Surveys: A Review of Issues and Approaches, Public Opinion Quarterly, 64, 464-494.

Couper, M. P., J. Blair, and T. Triplett (1999). A Comparison of Mail and E-mail for a Survey of Employees in U.S. Statistical Agencies, Journal of Official Statistics, 15, 39-56.

Couper, M. P., M. W. Traugott, and M. J. Lamias (2001). Web Survey Design and Administration, Public Opinion Quarterly, 65, 230-253.

de Leeuw, E. D. (1992). Data Quality in Mail, Telephone, and Face to Face Surveys, Ph.D. dissertation, University of Amsterdam, ISBN 90-801073-1-X.

Dillman, D. A. (2000). Mail and Internet Surveys: The Tailored Design Method, 2nd ed., John Wiley & Sons, New York, NY.

Dillman, D. A., R. D. Tortora, J. Conradt, and D. Bowker (1998). Influence of Plain vs. Fancy Design on Response Rates for Web Surveys. Unpublished paper presented at the Annual Meeting of the American Statistical Association, Dallas, TX.

Dillman, D. A. (1978). Mail and Telephone Surveys: The Total Design Method, John Wiley & Sons, New York, NY.

Everingham, S. (2001). RAND, Santa Monica, California. Personal communication.

Fowler, Jr., F. J. (1993). Survey Research Methods, 2nd ed., Applied Social Research Methods Series, Volume 1, SAGE Publications, Newbury Park, CA.

Griffin, D. H., D. P. Fischer, and M. T. Morgan (2001). Testing an Internet Response Option for the American Community Survey. Paper presented at the American Association for Public Opinion Research, Montreal, Quebec, Canada.

Groves, R. (1989). Survey Errors and Survey Costs, John Wiley & Sons, New York, NY.
Hamilton, C. H. (2001). Air Force Personnel Center, Randolph Air Force Base. Personal communication.

Henry, G. T. (1990). Practical Sampling, Applied Social Research Methods Series, Volume 21, SAGE Publications, Newbury Park, CA.

Jones, R. and N. Pitt (1999). Health Surveys in the Workplace: Comparison of Postal, E-mail and World Wide Web Methods, Occupational Medicine, 49, 556-558.

Kiesler, S. and L. S. Sproull (1986). Response Effects in the Electronic Survey, Public Opinion Quarterly, 50, 402-413.

Kish, L. (1965). Survey Sampling, John Wiley & Sons, New York, NY.

Kittleson, M. J. (1995). An Assessment of the Response Rate Via the Postal Service and E-mail, Health Values, 18, 27-29.

McCabe, S. E., Boyd, C., Couper, M. P., Crawford, S., and H. d'Arcy (2002). Mode Effects for Collecting Health Data from College Students: Internet and U.S. Mail. Paper under review.

Mehta, R. and E. Sivadas (1995). Comparing Response Rates and Response Content in Mail versus Electronic Mail Surveys, Journal of the Market Research Society, 37, 429-439.

Nichols, E. and B. Sedivi (1998). Economic Data Collection via the Web: A Census Bureau Case Study, Proceedings of the Section on Survey Research Methods, American Statistical Association, 366-371.

Paolo, A. M., Bonaminio, G. A., Gibson, C., Partridge, T., and K. Kallail (2000). Response Rate Comparisons of E-mail and Mail Distributed Student Evaluations, Teaching and Learning in Medicine, 12, 81-84.

Parker, L. (1992). Collecting Data the E-mail Way, Training and Development, July, 52-54.

Pealer, L., R. M. Weiler, R. M. Pigg, D. Miller, and S. M. Dorman (2001). The Feasibility of a Web-Based Surveillance System to Collect Health Risk Behavior Data From College Students, Health Education & Behavior, 28, 547-559.

Quigley, B., Riemer, R. A., Cruzen, D. E., and S. Rosen (2000). Internet Versus Paper Survey Administration: Preliminary Findings on Response Rates, 42nd Annual Conference of the International Military Testing Association, Edinburgh, Scotland.

Raziano, D. B., R. Jayadevappa, D. Valenzula, M. Weiner, and R. Lavizzo-Mourey (2001). E-mail Versus Conventional Postal Mail Survey of Geriatric Chiefs, The Gerontologist, 41, 799-804.

Schaefer, D. R. and D. A. Dillman (1998). Development of a Standard E-mail Methodology: Results of an Experiment, Public Opinion Quarterly, 62, 378-397.

Schleyer, T. K. L. and J. L. Forrest (2000). Methods for the Design and Administration of Web-Based Surveys, Journal of the American Medical Informatics Association, 7, 416-425.

Schillewaert, N., F. Langerak, and T. Duhamel (1998). Non-probability Sampling for WWW Surveys: A Comparison of Methods, Journal of the Market Research Society, 40, 307-322.

Schonlau, M., Fricker, R. D., Jr., and M. Elliott (2002). Conducting Research Surveys via E-mail and the Web, RAND, Santa Monica, MR-1480-RC.

Schuldt, B. A. and J. W. Totten (1994). Electronic Mail vs. Mail Survey Response Rates, Marketing Research, 6, 36-44.

Sedivi Gaul, B. (2001a). Web Computerized Self-Administered Questionnaires (CSAQ). Presentation to the 2001 National CASIC Workshops, U.S. Census Bureau, Computer Assisted Survey Research Office.

Sedivi Gaul, B. (2001b). United States Census Bureau, Washington, D.C. Personal communication.

Sheehan, K. B. (2001). E-mail Survey Response Rates: A Review, Journal of Computer-Mediated Communication, 6(2). Retrieved March 9, 2002, from http://www.ascusc.org/jcmc/vol6/issue2/sheehan.html.
Tse, A. C. B., Tse, K. C., Yin, C. H., Ting, C. B., Yi, K. W., Yee, K. P., and W. C. Hong (1995). Comparing Two Methods of Sending Out Questionnaires: E-mail versus Mail, Journal of the Market Research Society, 37, 441-446.

Tse, A. C. B. (1998). Comparing the Response Rate, Response Speed and Response Quality of Two Methods of Sending Questionnaires: E-mail versus Mail, Journal of the Market Research Society, 40, 353-361.

Tuten, T. L., D. J. Urban, and M. Bosnjak (in press, 2002). Internet Surveys and Data Quality: A Review, in B. Batinic, U. Reips, M. Bosnjak, and A. Werner, eds., Online Social Sciences, Hogrefe & Huber, Seattle, 7-27.

Vehovar, V., K. Lozar Manfreda, and Z. Batagelj (1999). Web Surveys: Can the Weighting Solve the Problem? Proceedings of the Section on Survey Research Methods, American Statistical Association, Alexandria, VA, 962-967.

Vehovar, V., K. Lozar Manfreda, and Z. Batagelj (2001). Sensitivity of E-commerce Measurement to the Survey Instrument, International Journal of Electronic Commerce, 6, 31-51.

Walsh, J. P., S. Kiesler, L. S. Sproull, and B. W. Hesse (1992). Self-Selected and Randomly Selected Respondents in a Computer Network Survey, Public Opinion Quarterly, 56, 241-244.

Witte, J. C., L. M. Amoroso, and P. E. N. Howard (2000). Research Methodology: Method and Representation in Internet-based Survey Tools, Social Science Computer Review, 18, 179-195.

Zhang, Y. (2000). Using the Internet for Survey Research: A Case Study, Journal of Education for Library and Information Science, 5, 57-68.

Ron Fricker is a statistician at RAND. He has designed, managed, and analyzed many large surveys of national importance, including a survey of Persian Gulf War veterans about Gulf War illnesses and, most recently, a survey on national terrorism preparedness in the United States. Dr. Fricker holds a Ph.D. in Statistics from Yale University. In addition to his position at RAND, Dr. Fricker is the vice-chairman of the Committee on Statisticians in Defense and National Security of the American Statistical Association, an associate editor of Naval Research Logistics, and an adjunct assistant professor at the University of Southern California.

Matthias Schonlau, Ph.D., is an associate statistician with RAND and heads its statistical consulting service. Dr. Schonlau has extensive experience with the design and analysis of surveys in areas such as health care, military manpower, and terrorism. Prior to joining RAND, he held positions with the National Institute of Statistical Sciences and with AT&T Labs Research. Dr. Schonlau has co-authored numerous articles as well as a recent RAND book, Conducting Research Surveys via E-mail and the Web. In 2001, he and his group won second place in the data mining competition at KDD, the world's largest conference on data mining.

Acknowledgements. The helpful and substantive comments of three anonymous reviewers and the editor significantly improved this work. Our research was supported by RAND as part of its continuing program of independent research.
