Statistical Significance Tests

In data processing, two or more different test results often arise. When comparing and analyzing the data, conclusions should not be drawn merely from the difference between two results, but from statistical analysis and a test of the significance of that difference. A statistical significance test first makes a hypothesis about the parameters of the population (random variable) or about the form of its distribution, and then uses the sample information to judge whether this hypothesis (the null hypothesis) is reasonable, that is, whether the true situation of the population differs significantly from the null hypothesis.

In mathematical statistics, a 5% probability (P) is generally used as the criterion of significance. If the probability that the observed difference arose from chance alone exceeds 5% (i.e., more than 5 times in 100 trials), the difference between the two results is considered insignificant; if that probability is below 5%, the difference is considered significant. When 5% is thought too lenient a criterion, 1% is used instead.
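The decision rule above can be sketched in a few lines of Python; the function name is illustrative, not part of any particular library.

```python
def significance_level(p_value):
    """Classify a p-value against the conventional 5% and 1% criteria."""
    if p_value < 0.01:
        return "highly significant (p < 0.01)"
    if p_value < 0.05:
        return "significant (p < 0.05)"
    return "not significant (p >= 0.05)"

# A p-value of 0.03 passes the 5% criterion but not the stricter 1% one.
print(significance_level(0.03))
```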

Our Services

Our statistical experts will help you develop clear and appropriate statistical significance tests. Our services take price, time, and availability into account, which can greatly increase your productivity.

  • Student’s t-test

Since there are uncertainties when sample sizes are small, determining statistical significance must take into account the difference between mean values, the scatter inherent in the data (the standard deviation), and the sample sizes. 'Student', the pen name of William Sealy Gosset, combined these quantities into a single equation that calculates a t-value from which statistical significance can be judged (Figure 1). 't' is the t-statistic: the larger the t-value, the greater the difference between the means relative to the scatter, and the more likely that difference is statistically significant. 'μ1' and 'μ2' represent the mean values of the samples being compared; the greater the difference between the means, the larger the t-value, and therefore the more likely the difference is statistically significant. 'σ' is the standard deviation of the two groups, and 'n' represents the sample size.

Figure 1. The calculation of the t-value
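The formula in Figure 1 can be sketched directly in Python. This is a minimal illustration, assuming equal sample sizes and estimating the shared σ as the pooled standard deviation of the two groups (the usual choice, though the figure does not specify the estimator):

```python
import math
import statistics

def t_value(sample1, sample2):
    """Two-sample t-statistic: t = (mu1 - mu2) / (sigma * sqrt(2 / n)).

    Assumes the two samples have the same size n.
    """
    n = len(sample1)
    mean1, mean2 = statistics.mean(sample1), statistics.mean(sample2)
    # Pooled standard deviation of the two groups.
    var1, var2 = statistics.variance(sample1), statistics.variance(sample2)
    sigma = math.sqrt((var1 + var2) / 2)
    return (mean1 - mean2) / (sigma * math.sqrt(2 / n))

a = [5.1, 4.9, 5.3, 5.0, 5.2]
b = [4.6, 4.8, 4.5, 4.7, 4.9]
print(round(t_value(a, b), 3))  # → 4.0
```

The larger this value, the less plausible it is that the gap between the two means arose by chance; in practice the t-value is compared against the t-distribution with the appropriate degrees of freedom to obtain a p-value.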

  • U test

The Mann-Whitney U test is one of the most widely used statistical tests in studies of behavior. Like many other rank-based non-parametric tests, it assumes that the two populations from which the compared samples are drawn have equal variances (more generally, distributions of the same shape), but it does not require the data to be normally distributed. The U test can be used to compare the central tendency of a sample with that of a population, or between two samples. Its application conditions largely overlap with those of the t-test: the t-test is preferred when small samples can be assumed to come from approximately normal populations, while the U test is suitable when normality cannot be assumed. When the equal-variance assumption is doubtful, the unequal-variance (Welch's) t-test is an alternative.
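As a minimal pure-Python sketch, the U statistic itself (without the p-value) counts, over all cross-sample pairs, how often an observation from the first sample exceeds one from the second, with ties counting one half; the function name is illustrative:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples.

    U1 counts pairs (xi, yj) with xi > yj, ties contributing 0.5;
    the conventional test statistic is the smaller of U1 and U2.
    """
    u1 = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
             for xi in x for yj in y)
    u2 = len(x) * len(y) - u1
    return min(u1, u2)

# Completely separated samples give the extreme value U = 0.
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # → 0.0
```

In practice the statistic is then referred to a table of critical values (or a normal approximation for larger samples) to judge significance.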

  • Analysis of variance

Analysis of Variance (ANOVA), also known as "variance analysis", was invented by R. A. Fisher. It is used to test the significance of the differences among two or more sample means. It is a statistical tool used extensively in the biological, psychological, medical, ecological, and environmental sciences. Owing to the influence of various factors, the data obtained from a study fluctuate. The causes of this fluctuation fall into two categories: uncontrollable random factors and controllable factors. When applied to generalized linear models, multilevel models, and other extensions of classical regression, ANOVA can be extended in two different directions. On the one hand, the F-test can be used (in an asymptotic or approximate fashion) to compare nested models and to verify the hypothesis that the simpler of the models is sufficient to explain the data. On the other hand, the idea of variance decomposition can be interpreted as inference for the variances of batches of parameters (sources of variation) in multilevel regressions.

  • Chi-square test

The Chi-square test is a widely used hypothesis-testing method. Its applications to the statistical inference of categorical data fall into two categories: one is the test of two rates or the comparison of two composition ratios; the other is the test of multiple rates, the comparison of multiple composition ratios, and correlation analysis of categorical data. Like all non-parametric statistics, the Chi-square is robust with respect to the distribution of the data. Specifically, it does not require equality of variances among the study groups or homoscedasticity in the data. It allows evaluation both of dichotomous independent variables and of multiple-group studies. Unlike many other non-parametric statistics and some parametric statistics, the calculations required for the Chi-square provide a great deal of information about how each of the groups performed in the study. Because the test provides this information, researchers can be fully aware of the results and extract more detail from the statistic. The advantages of the Chi-square test are: robustness with respect to the distribution of the data, simple computation, detailed information from the test, applicability to studies in which parametric assumptions cannot be met, and flexibility in handling data from studies with two or more groups.
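The computation really is simple: for a contingency table, expected counts come from the row and column totals, and the statistic sums the squared deviations of observed from expected. A minimal sketch (statistic only, no p-value):

```python
def chi_square_statistic(table):
    """Chi-square statistic for an r x c contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

# Perfectly proportional rows: no association, so the statistic is 0.
print(chi_square_statistic([[10, 20], [20, 40]]))  # → 0.0
```

The result is compared against the Chi-square distribution with (r − 1)(c − 1) degrees of freedom; a large value indicates that the observed frequencies depart from what independence would predict.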

We guarantee the confidentiality and sensitivity of our customers' data. We are committed to providing you with timely and high-quality deliverables. At the same time, we guarantee cost-effective, complete, and concise reports.

If you are unable to find the specific service you are looking for, please feel free to contact us.


Are you looking for a professional advisor for your trials?
