It should be underlined that the statistical representativeness of the results must be considered in future studies. The lack of a probability‐based sample and the coverage limitations (partial coverage and the possibility of duplications) undermine the generalization of the results. New methodological solutions need to be adopted to make these interesting preliminary results representative.
The history above shows that technology changes have impacted survey taking and methods:
Paper questionnaires were exclusively used for decades until the 1970s and 1980s for both self‐completion and by interviewers. Processing the data was expensive and focused on eliminating survey‐taking mistakes.
Computer questionnaires at first were used solely for interviewing, while paper questionnaires were still used for self‐completion.
The advent of the Internet meant that self‐completion could now be computer based, but this was limited at first to browsers on PC.
Computing advances in hardware, software, and connectivity enabled and forced changes in survey taking, processing, and methods.
1.2.7 PRESENT‐DAY CHALLENGES AND OPPORTUNITIES
In the past 15 years, rapid technical and social changes have introduced a number of challenges and opportunities. The following is a high‐level list of challenges:
The respondent is much more in charge of the survey including whether and how he/she will participate.
There is such a vast proliferation of computing devices and platforms that survey takers cannot design and test for each possible platform.
Modern‐day surveys must be accessible to all self‐respondents, including the blind, visually impaired, and the motor impaired.
Few survey practitioners have all the skills needed to effectively design surveys for all platforms and to make them accessible at the same time.

Pierzchala (2016) listed a number of technical challenges that face survey practitioners. This list was developed to communicate the magnitude of the challenges. The term multis refers to the multiple ways that surveys may have to adapt for a particular study:
Multicultural surveys: There are differences in respondent understanding, values, and scale spacing due to various cultural norms. These can lead to different question formulation or response patterns.
Multi‐device surveys: There are differences in questionnaire appearance and function on desktops, laptops, tablets, and smartphones.
Multilingual surveys: There are translations, system texts, alphabetic versus Asian scripts, left‐to‐right versus right‐to‐left scripts, and switching languages in the middle of the survey.
Multimode surveys: There are interviewer‐ and self‐administered surveys such as CATI and CAPI for interviewers and browser and paper self‐completion modes (Pierzchala, 2006).
Multinational surveys: There are differences in currency, flags and other images, names of institutions, links, differences in social programs, and data formats such as date display.
Multi‐operable surveys: These are differences in how the user interacts with the software and device including touch and gestures versus keyboards with function keys. Whether there is a physical keyboard or a virtual keyboard impacts screen space for question display.
Multi‐platform surveys: These are differences in computer operating systems, whether the user is connected or disconnected to/from the server, and settings such as for pop‐up blockers.
Multi‐structural surveys: There can be differences in question structures due to visual versus aural presentation, memory demands on the respondent, and linear versus nonlinear cognitive processing.
Multi‐version surveys: In economic surveys, questionnaires can vary between industries. For example, an agricultural survey asks about different crops in different parts of the country, and different crops can have different questions.
These multis lead to changes in question wording, text‐presentation standards, interviewer or respondent instructions, location of page breaks, number of questions on a page, question format, allowed responses, whether choices for don't know (DK) or refusal (RF) are explicitly presented or are implied, and whether the user can advance without some kind of answer (even if DK or RF) or can just proceed at will to the next question or page.
There can be additional challenges. Governmental and scientific surveys can be long and complex. Surveys must be accessible and usable to the disabled.
Additionally, there are ever‐tightening constraints including not enough time, not enough people or money, unclear and late and inconsistent specifications, last‐minute changes, screens that are too small, and computers that are too slow.
1.2.8 CONCLUSIONS FROM MODERN‐DAY CHALLENGES
The description of modern‐day survey challenges leads to some conclusions:
Modern‐day surveys can be very hard.
No single person has all the answers.
New survey‐producing methods are necessary to address all the challenges within ever‐tightening constraints.
Small screen sizes often lead to adaptations of survey instruments such as using fewer points in a scale question.
With the proliferation of devices, it becomes harder to rely on unimode designs where all questions appear the same in all modes and devices (Dillman, Smyth, and Christian, 2014). Instead, the institute may strive for cognitive equivalence across all manifestations (de Leeuw, 2005).
1.2.9 THRIVING IN THE MODERN‐DAY SURVEY WORLD
Updated survey design methods may give ways to handle and even thrive in the modern‐day survey world. The idea is to use extremely powerful computer‐based specification to replace document specification and manual programming. This idea is described in the following:
Use a capable computer‐based specification system to define the questionnaire. A drag‐and‐drop specification may be adequate for simpler surveys, but when you get to surveys that must handle more of the multis mentioned above, or when you get to thousands of questions, drag‐and‐drop becomes too onerous.
Specification and survey methods research should use question structures (see below).
The institute should define its question‐presentation standards for each structure across all the multis. This requires some up‐front work and decisions.
When the specification is entered, the computer should generate the necessary source code and related configuration files for all multis. All these computer‐generated outputs should conform to the institute's standards.
Use a survey‐taking system that has evolved to cope with the modern‐day world.
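The core idea above, a single computer‐based specification from which all mode‐, language‐, and device‐specific variants are generated, can be illustrated with a toy sketch. This is a hypothetical illustration, not actual Blaise syntax; the `Question` structure and `generate` function are invented for the example.

```python
# Hypothetical sketch: one specification per question, from which
# per-mode and per-language renderings are generated automatically,
# rather than hand-writing every variant.
from dataclasses import dataclass


@dataclass
class Question:
    name: str
    texts: dict       # language code -> question wording
    responses: list   # allowed response values


def generate(question, mode, language):
    """Render one question for a given mode (CATI, web, paper) and language.

    A real system would also emit routing logic, configuration files,
    and presentation standards for each of the multis.
    """
    text = question.texts[language]
    if mode == "CATI":
        # Interviewer-administered: prepend an interviewer instruction.
        return f"INTERVIEWER: read aloud. {text}"
    return text


# A single specification covers two languages and any number of modes.
q = Question(
    name="AGE",
    texts={"en": "What is your age?", "nl": "Wat is uw leeftijd?"},
    responses=list(range(0, 121)),
)

print(generate(q, "CATI", "en"))  # interviewer version, English
print(generate(q, "web", "nl"))   # self-completion version, Dutch
```

The design point is that the specification is entered once and the institute's presentation standards are applied at generation time, so a change to the standard propagates to every output instead of requiring manual reprogramming per mode.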
The historic developments with respect to surveys described in the previous section also took place in the Netherlands. In particular, the rapid developments in computer technology have had a major impact on the way Statistics Netherlands collected its data. Efforts to improve the collection and processing of survey data in terms of costs, timeliness, and quality have led to a powerful software system called Blaise. This system emerged in the 1980s, and it has evolved over time so that it is now also able to conduct web surveys and mixed‐mode surveys. This section gives an overview of the developments at Statistics Netherlands leading to the Internet version of Blaise.
The advance of computer technology since the late 1940s led to many improvements at Statistics Netherlands for conducting its surveys. For example, from 1947 Statistics Netherlands started using probability samples to replace its complete enumerations for surveys on income statistics and agriculture. The implementation of sophisticated sampling techniques such as stratification and systematic sampling is much easier and less labor intensive on a computer than manual methods.
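To see why such techniques are cheap on a computer, consider systematic sampling: after a random start, every k-th unit of the frame is selected. The sketch below is a minimal, hypothetical illustration (the function name and toy frame are invented for the example), assuming a frame size that is an exact multiple of the sample size.

```python
# Minimal sketch of systematic sampling: choose a random start in the
# first interval, then take every k-th unit of the sampling frame.
import random


def systematic_sample(frame, n):
    """Draw a systematic sample of size n from the list `frame`.

    Assumes len(frame) is an exact multiple of n, so the sampling
    interval k divides the frame evenly.
    """
    k = len(frame) // n              # sampling interval
    start = random.randrange(k)      # random start in [0, k)
    return [frame[start + i * k] for i in range(n)]


frame = list(range(1, 101))          # a toy frame of 100 population units
sample = systematic_sample(frame, 10)
print(sample)                        # 10 units, evenly spaced k=10 apart
```

Done by hand on thousands of paper records, the same procedure requires counting off every interval manually, which is exactly the labor the early computers at Statistics Netherlands removed.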