The social sciences, among other sciences that require use of the "scientific method", rely entirely on statistical models for publication and institutional validation. This is as true for the design of an experiment or survey in the social sciences as it is for the interpretation of the results. In other words, the design of an experiment is pre-determined by the statistical test that will be used to analyze the results, as well as by the more general model of an experiment that requires passing a threshold of relative “significance”. In the hypothesis-testing model that the social sciences use, it is the "significant differences" between or among experimental groups that are desirable, and experiments are therefore designed from the start to facilitate finding them. Whether or not there is a “significant difference” between or among groups is determined by a value, lettered p: the probability that results at least as extreme as those observed would arise by chance alone if there were truly no difference. Typically, the amount of error allowed (referred to as alpha, or the "level of significance") is either 1% or 5% (with 1% allowing for less error but reducing the chance of finding that “significant difference”). Thus, when you are reading an article from the social/behavioral sciences, the statement "(... p < .05)" often follows a finding described from a study. This indicates that the null hypothesis, which in this model always states that there is no difference between or among the experimental and strategically designated groups, has been rejected; a difference was found, and the study is therefore considered valid for circulation and peer-reviewed status.
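The decision rule described above can be sketched in a few lines of Python. This is a minimal illustration, not a substitute for a proper statistics library: it uses Welch's t statistic with a normal approximation for the two-sided p-value (reasonable only for larger samples), and the two groups of made-up measurements are hypothetical.

```python
import math
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(va / na + vb / nb)

def two_sided_p_normal(t):
    """Two-sided p-value via a normal approximation to the t distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

alpha = 0.05  # the conventional 5% level of significance
group_a = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]  # hypothetical data
group_b = [4.0, 4.2, 3.9, 4.1, 4.0, 4.3, 3.9, 4.1]  # hypothetical data

t = welch_t(group_a, group_b)
p = two_sided_p_normal(t)
if p < alpha:
    print(f"p = {p:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p:.3f} >= {alpha}: fail to reject the null (a 'null result')")
```

Note that the model is asymmetric in exactly the way the essay goes on to describe: p < alpha licenses a publishable claim of difference, while p >= alpha yields only a "failure to reject", not evidence of sameness.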
Once, overlooking the foundational rule that privileges significant differences (rejecting the “null hypothesis” that there is no difference between or among groups) over the opposite outcome of this binary design, I designed an experiment on feeling time/estimating seconds with the intent of finding no “significant difference” between conditions or among experimental groups. I had tremendous difficulty writing my paper (more than usual, even with so dry and terrible a style as is pushed on psychology students), as I was faced with discussing papers that relied on similar hypotheses but in the positive (that is, findings showing significant differences between groups), and suddenly I realized that I was doing something that, by the standards and so-called logic of behavioral-science statistical modeling, was unusable. So I was inclined to look up “null hypothesis" experiments (those that find no significant differences between experimental groups), and I stumbled upon the headstones of these unpopular studies in The Journal of Null Articles, a biannual online publication. I learned that the pleasure of the null is found in rejecting exclusion.