Dependent variables are the data obtained from the respondents, such as the time it took them to make keypresses, which indicates the speed of choice in a task. In other words, a dependent variable is the measured response of the participants in the study. There may be only one response recorded from each participant, or there may be several different tasks in the study, in which case several dependent variables are recorded.
Criteria for parametric analysis
In general, parametric analysis requires that the following three criteria be met:
The first criterion for parametric analysis is that a dependent variable which one wants to analyse parametrically must be measured on a scale with equal intervals between adjacent values, that is, an interval or ratio scale. For example, time is measured on such a scale: in a timed task, the scale consists of seconds, and every second has the same length. By comparison, when participants are asked to subjectively evaluate the difficulty of a task on a scale from 1 to 5, each participant may perceive the steps from 1 to 2 or from 2 to 3 to have different lengths. Such answers are therefore not recorded on an interval scale.
The second criterion for parametric analysis is that the dependent variable has to be normally distributed. This means that the most frequently recorded answers come from the middle of the scale, while extreme answers from either end of the scale are infrequent. In other words, the distribution of scores conforms to a bell-shaped distribution rather than some other form of distribution. The risk of deviating from the normal distribution is larger when the sample size is small; when the sample size is large (over 500 participants), the sampling distributions begin to approach the normal curve.
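As a rough illustration (assuming Python with NumPy and SciPy, neither of which is mentioned in the text), the Shapiro-Wilk test is one common way to check whether a dependent variable deviates from normality; the simulated reaction-time data below are purely hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical example: simulated reaction times (in seconds) for 40 participants.
rng = np.random.default_rng(42)
reaction_times = rng.normal(loc=1.2, scale=0.3, size=40)

# Shapiro-Wilk test: the null hypothesis is that the data are normally distributed.
statistic, p_value = stats.shapiro(reaction_times)
print(f"Shapiro-Wilk W = {statistic:.3f}, p = {p_value:.3f}")

# A p-value above the chosen alpha level (commonly .05) gives no evidence
# against normality, so this criterion would be considered met.
```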
The third criterion for parametric analysis is termed homogeneity of variance. Homogeneity of variance means that when the dependent variables are divided into groups, those groups have equal variances. Variance is usually described in terms of the standard deviation from the mean. If the scores of any group deviate from their group mean substantially more or less than the scores of the other group(s) deviate from theirs, the variances are not homogeneous.
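Continuing the same sketch (again assuming Python with NumPy and SciPy; the groups and values are hypothetical), Levene's test is a common way to check homogeneity of variance between groups.

```python
import numpy as np
from scipy import stats

# Hypothetical example: task-completion times (seconds) for two groups of participants.
rng = np.random.default_rng(7)
group_a = rng.normal(loc=30.0, scale=5.0, size=25)
group_b = rng.normal(loc=33.0, scale=5.5, size=25)

# Levene's test: the null hypothesis is that the group variances are equal.
statistic, p_value = stats.levene(group_a, group_b)
print(f"Levene W = {statistic:.3f}, p = {p_value:.3f}")

# A non-significant result (p above the chosen alpha) is consistent with
# homogeneity of variance, so this criterion would be considered met.
```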
When any one of these criteria is violated, the idea of parametric analysis should be abandoned, and non-parametric analysis (statistics such as the chi-square or binomial test) is the more appropriate choice.
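As a brief sketch of the non-parametric alternatives mentioned above (assuming Python with SciPy; the counts are invented for illustration), the chi-square and binomial tests operate on frequency data rather than interval-scale scores.

```python
from scipy import stats

# Hypothetical example: 60 participants each chose one of three interface designs.
observed_choices = [28, 18, 14]

# Chi-square goodness-of-fit test: the null hypothesis is that
# all three designs are chosen equally often.
chi2, p_value = stats.chisquare(observed_choices)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

# Binomial test: do 38 successes out of 50 trials differ from chance (p = 0.5)?
result = stats.binomtest(k=38, n=50, p=0.5)
print(f"binomial test p = {result.pvalue:.4f}")
```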
Interpretation of the results
The interpretation of the results of parametric analysis is based on a calculation of probabilities. This means that even when one gets robust results, those results never establish anything with certainty; they merely allow the researcher to infer, with a known probability of error, that the observed effect is present. Researchers are therefore able to draw conclusions only with some degree of confidence. Because probabilities are involved, it is always possible to make inference errors. A Type I error occurs when the researcher concludes that an effect exists when, in fact, it does not. A Type II error occurs when the researcher concludes that no effect exists when, in fact, it does. Increasing the strength of the study can reduce the risk of a Type II error.
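As a rough sketch of how these error probabilities are handled in practice (assuming Python with the statsmodels package, which the text does not mention), a power analysis links the sample size, the effect size, the significance level (the Type I error rate), and the risk of a Type II error; the numbers below are illustrative.

```python
from statsmodels.stats.power import TTestIndPower

# Power = 1 - beta, where beta is the probability of a Type II error.
analysis = TTestIndPower()

# Illustrative question: how many participants per group are needed to detect
# a medium effect (Cohen's d = 0.5) at alpha = .05 with 80% power?
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required sample size per group: {n_per_group:.1f}")

# Conversely, the power achieved with only 20 participants per group:
achieved_power = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"Power with n = 20 per group: {achieved_power:.2f}")
```

Larger samples and larger effects both increase power and thereby reduce the risk of a Type II error.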
Important concepts
The statistical strength (or power) of a study is the ability of a statistical test to detect an effect when one actually exists. The strength of a study increases with the sample size (number of participants) and with the effect size (e.g., the difference in outcomes between two groups). Effect size is an indication of how large an effect is and of whether a significant effect is, in fact, a meaningful effect.
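As a small illustration of effect size (assuming Python with NumPy; the data are invented), Cohen's d is one widely used measure of the standardised difference between two group means.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n_a, n_b = len(a), len(b)
    pooled_sd = np.sqrt(((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1))
                        / (n_a + n_b - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical example: task-completion times (seconds) for two interface designs.
design_a = [31.2, 28.5, 35.1, 30.0, 29.8, 33.4, 27.9, 32.6]
design_b = [36.4, 34.0, 38.2, 35.5, 37.1, 33.8, 39.0, 36.9]
print(f"Cohen's d = {cohens_d(design_a, design_b):.2f}")

# Conventional benchmarks treat d of roughly 0.2 as small, 0.5 as medium,
# and 0.8 as large, which helps judge whether a significant effect is meaningful.
```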