A questionnaire usually has a coherent structure: all aspects of the concept under study are covered, and the order of the questions is well thought through, because respondents are influenced by the questions they have already read. A good questionnaire is based on a model or theory that explains the construct under study (see the Theory Toolkit), as in Al-Rahmi et al. (2020), who studied the moderating role of cyberbullying through a questionnaire based on a model of social media-based collaborative learning (Venkatesh et al., 2003).
There are two ways to initiate a study based on questionnaires: create a new questionnaire, or use a questionnaire created by other researchers (see also the Global Kids Online guide Adopting and adapting a standardised modularised survey by Kjartan Olafsson). Questionnaires developed by other researchers can be found in international peer-reviewed scientific articles and on trusted websites created by scientists. Although it might seem easy to generate questions and ask children to answer them, the questionnaire is a method that requires careful planning and preparation in order to produce meaningful results.
Step-by-step procedure for creating a new questionnaire
1. Decide on the types of questions that will be in the questionnaire. The types are identified by how the questions are answered (yes/no, strongly disagree to strongly agree, multiple choice, or open answers written freely by the respondent). Some types of answers are faster to provide and take less time to analyse, while others take considerable time to answer and analyse (e.g., open answers must first be coded). It is therefore advisable to use similar question types throughout the questionnaire.
2. Keep the questions concise and simple.
3. Include two to three times the necessary number of questions in the first version: every question should be coupled with at least two questions that duplicate its meaning but not its wording. This is necessary for analysing validity and reliability and for choosing the best questions later.
4. Ask experts on the topic of the questionnaire to evaluate the validity of the questions (how well they relate to the research problem).
5. Make a data collection plan. The best option is usually an online questionnaire where the data are recorded automatically into a specific folder on a cloud drive (consider platforms such as Google Docs, https://docs.google.com/, or LimeSurvey, https://www.limesurvey.org/). This saves many working hours and reduces possible human error when entering raw data into a digital file.
6. Conduct a pre-test on a small target group.
7. Evaluate the validity and reliability of the questionnaire (e.g., Dilci, 2019; Williford & DePaolis, 2019). Based on the evaluation, make the relevant changes: remove excess questions and those that were poorly understood or frequently left unanswered.
8. Repeat steps 6-7; when the results are satisfactory, the questionnaire is ready for use in a study.
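Reliability in the evaluation step is commonly assessed with an internal-consistency coefficient such as Cronbach's alpha. A minimal sketch in Python (the response matrix below is invented for illustration; real analyses would normally use a dedicated statistics package):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented example: 6 respondents answering 4 Likert items (1 = strongly
# disagree ... 5 = strongly agree) intended to measure the same construct.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
])
alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")  # values above roughly 0.7 are commonly considered acceptable
```

Items that lower the coefficient when included are candidates for removal in step 7.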
The data of the study are then collected and analysed, mostly with non-parametric statistical methods, since questionnaire answers are typically ordinal rather than interval data. Many statistics books cover this topic, and there are also good online resources. For example, Stephanie Glen, a former adjunct professor of mathematics at Jacksonville University and Florida State College, created Statistics How To (https://www.statisticshowto.com/), a helpful guide for statistics beginners.
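As an illustration of such a non-parametric analysis, the sketch below compares Likert-scale answers from two hypothetical groups with a Mann-Whitney U test (the data are invented for demonstration; the scipy library is assumed to be available):

```python
from scipy.stats import mannwhitneyu

# Invented Likert-scale responses (1-5) from two groups of respondents.
group_a = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]
group_b = [2, 3, 1, 2, 3, 2, 4, 1, 2, 3]

# The Mann-Whitney U test compares the two distributions without assuming
# normality, which suits ordinal questionnaire data.
stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```

A small p-value would suggest the two groups answered the item differently; with real data, the choice of test depends on the design (paired vs. independent groups, two vs. more groups).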
Using a ready-made questionnaire
Questionnaires used in published peer-reviewed journal articles have already been validated (compared with other tools) and had their reliability assessed. This also means that if a previously developed questionnaire is used, it must be exactly the same (letter for letter, word for word) as the published one. When a questionnaire is translated into another language, it is no longer the same questionnaire: after translation, it needs to be revalidated and its reliability reassessed for the culture and population in which it is being used. A questionnaire must thus be adapted to the population where it is used (see, for example, the adaptation of a cyberbullying questionnaire to the Greek population (Antoniadou et al., 2016)). This requires some effort, but it is still a good option because 1) the overall number of questions has already been reduced by removing those that were not valid or reliable; 2) the questions are clear and concise, having been refined during the questionnaire's development; and 3) the results are comparable with those of other studies that used the same questionnaire.
A questionnaire offers several advantages:
Data collection requires only a short period of time (there is no need to visit respondents personally).
Data can be collected from a large group of people (children).
Data can be collected from the same group multiple times, as in longitudinal studies.
Data can be collected from similar groups across many cultures.
Data collection can take place asynchronously, as respondents can answer when they have time; immediate responses are not required.
Researcher bias is reduced (the researcher is not writing down the responses).
Anonymity is preserved (the researcher does not see the respondent); questionnaires are therefore good for obtaining data about personal and confidential matters (for example, which social media groups one belongs to).
Research costs are reduced, as the study can be done by a single researcher without large funds; there is no need to hire people to meet respondents.
a) The biggest problem with the questionnaire/survey method is that many questionnaires are not returned. This often happens when the characteristics of the population are not considered in choosing how the questionnaire is administered. For example, with children it is important to consider whether the particular group can read well enough to understand the questions; even when they can, they might not be able to express their answers in the required ways. A good alternative can be an interview. Respondents may also fail to return questionnaires because they:
Forget to return the questionnaire. Several precautions help to attain the relevant sample size: first, double the number of respondents when sampling for the research; second, send reminders (when questionnaires are sent by e-mail); third, accompany the questionnaire with a short, captivating explanation of why the respondent's answers are valuable and necessary.
Do not trust science, the way the data is handled, or a particular researcher.
Misinterpret or misunderstand some questions.
Think that the questionnaire is too long and takes too much time to answer.
Take offence at delicate matters that are not worded carefully.
b) Another problem is inappropriate planning of the questionnaire's use. Read more in the Global Kids Online guide Survey sampling and administration by Alexandre Barbosa, Marcelo Pitta, Fabio Senne and Maria Eugênia Sózio.
c) When children respond to questionnaire questions themselves, they may do so differently from adults. Some points to consider are listed below; read more in the Global Kids Online guide Conducting qualitative and quantitative research with children of different ages by Lucinda Platt.
They are more susceptible to primacy effects (selecting the first response item in a list) and recency effects (selecting the last response item) (Fuchs, 2005).
Their memory is shorter, and they respond worse to vague response items than to specific ones (Borgers et al., 2003; Smith & Platt, 2013).
Susceptibility to social desirability may distort children's answers to a questionnaire (Waterman et al., 2000).
CO:RE Methodological practices/examples: researching children and youth online: Conducting school-based online survey during the COVID-19 pandemic: Fieldwork practices and ethical dilemmas
Internet Abuse questionnaire for children and youth (Blau, 2011)
Global Kids Online: Global Kids Online Method Guides
EU Kids Online: Report of EUKidsOnline 2020 survey results
Al-Rahmi, W. M., Yahaya, N., Alturki, U., Alrobai, A., Aldraiweesh, A. A., Omar Alsayed, A., & Kamin, Y. B. (2020). Social media-based collaborative learning: the effect on learning success with the moderating role of cyberstalking and cyberbullying. Interactive Learning Environments, 1–14. https://doi.org/10.1080/10494820.2020.1728342
Antoniadou, N., Kokkinos, C. M., & Markos, A. (2016). Development, construct validation and measurement invariance of the Greek cyber-bullying/victimization experiences questionnaire (CBVEQ-G). Computers in Human Behavior, 65, 380–390. https://doi.org/10.1016/j.chb.2016.08.032
Blau, I. (2011). Application use, Online Relationship Types, Self-Disclosure, and Internet Abuse among Children and Youth: Implications for Education and Internet Safety Programs. Journal of Educational Computing Research, 45(1), 95–116. https://doi.org/10.2190/EC.45.1.e
Borgers, N., Hox, J., & Sikkel, D. (2003). Response quality in survey research with children and adolescents: The effect of labeled response options and vague quantifiers. International Journal of Public Opinion Research, 15(1), 83–94. https://doi.org/10.1093/IJPOR/15.1.83
Del Rey, R., Casas, J. A., Ortega-Ruiz, R., Schultze-Krumbholz, A., Scheithauer, H., Smith, P., Thompson, F., Barkoukis, V., Tsorbatzoudis, H., Brighi, A., Guarini, A., Pyzalski, J., & Plichta, P. (2015). Structural validation and cross-cultural robustness of the European Cyberbullying Intervention Project Questionnaire. Computers in Human Behavior, 50, 141–147. https://doi.org/10.1016/j.chb.2015.03.065
Dilci, T. (2019). A study on validity and reliability of digital addiction scale for 19 years or older. Universal Journal of Educational Research, 7(1), 32–39. https://doi.org/10.13189/ujer.2019.070105
Fuchs, M. (2005). Children and Adolescents as Respondents. Experiments on Question Order, Response Order, Scale Effects and the Effect of Numeric Values Associated with Response Options. Journal of Official Statistics, 21(4), 701–725.
Smith, K., & Platt, L. (2013). How do children answer questions about frequencies and quantities? Evidence from a large-scale field test.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly: Management Information Systems, 27(3), 425–478. https://doi.org/10.2307/30036540
Waterman, A. H., Blades, M., & Spencer, C. (2000). Do children try to answer nonsensical questions? British Journal of Developmental Psychology, 18, 211–225.
Williford, A., & DePaolis, K. J. (2019). Validation of a Cyber Bullying and Victimization Measure Among Elementary School-Aged Children. Child and Adolescent Social Work Journal, 36(5), 557–570. https://doi.org/10.1007/s10560-018-0583-z