# statistics discussion responses

Please read the discussions and respond to them in APA format with references.

Discussion 1

The ANOVA test is a statistical method that statisticians use to determine differences between two or more groups. Specifically, "The ANOVA test is similar to the t-test because the null hypothesis (no differences between groups) is rejected when the analysis yields a smaller p value, such as p ≤ 0.05, than the α set for the study" (Grove & Cipher, 2017).

The one-way ANOVA test is the simplest test to perform. This test compares the means between groups of interest and determines whether a statistically significant difference is present. Statisticians often use this test when they take a large group and divide it into two or more smaller groups. The smaller groups are randomly assigned, equally distributed, and mutually exclusive. Each group is subjected to a different condition, and all group observations are independent and measured at an interval/ratio level.

The ANOVA test tells us that there is a significant difference among the groups, but it cannot tell us specifically which groups were different. To learn which groups were statistically different, a post hoc test is completed. The post hoc test shows us where the differences between groups actually occurred (Laerd, 2018).
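The F statistic behind a one-way ANOVA can be sketched in a few lines of plain Python. The data below are made up for illustration; real analyses would also report the p value (from the F distribution), which this sketch omits.

```python
def one_way_anova_f(groups):
    """Return the F ratio for a one-way ANOVA over a list of groups."""
    all_values = [x for g in groups for x in g]
    n = len(all_values)          # total number of observations
    k = len(groups)              # number of groups
    grand_mean = sum(all_values) / n

    # Between-group sum of squares: how far each group mean sits from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

    ms_between = ss_between / (k - 1)   # mean square between, df = k - 1
    ms_within = ss_within / (n - k)     # mean square within, df = n - k
    return ms_between / ms_within

groups = [[4, 5, 6], [6, 7, 8], [9, 10, 11]]   # hypothetical scores
print(one_way_anova_f(groups))                 # 19.0
```

A large F (far above 1) is what would then send the analyst on to a post hoc test to find which specific pairs differ.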

Grove, S. K., & Cipher, D. J. (2017). Statistics for nursing research: A workbook for evidence-based practice (2nd ed.). Saunders. VitalBook file.

Discussion 2

Analysis of variance, or ANOVA, is essentially an extension of the t-test and z-test used to analyze the differences among two or more groups. According to Grove and Cipher (2017), there are five assumptions for an analysis of variance:

1. The populations from which the samples were drawn, or the random samples, are normally distributed.
2. The groups should be mutually exclusive.
3. The groups should have equal variance, also known as homogeneity of variance.
4. The observations are independent.
5. The dependent variable is measured at the interval or ratio level.

P values greater than the alpha are found to be non-significant; p values less than the alpha are significant. Another step after the analysis of variance is breaking down the specific differences among the groups in what is called post hoc testing. For example, consider treating children's post-tonsillectomy pain with codeine, Tylenol, or a placebo, with pain measured on a 0-5 (Likert) scale. ANOVA can tell us whether the study found a significant difference overall; post hoc testing can break down the differences between the groups.
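The post hoc idea from the pain example can be sketched as comparing every pair of group means. The scores below are invented for illustration, and a real post hoc procedure (such as Tukey's HSD) would also adjust the p values for multiple comparisons, which this sketch omits.

```python
from itertools import combinations

# Hypothetical 0-5 pain scores for the three treatment groups.
scores = {
    "codeine": [1, 2, 1, 2],
    "tylenol": [2, 3, 2, 3],
    "placebo": [4, 5, 4, 5],
}

# Group means, then every pairwise difference of means.
means = {name: sum(vals) / len(vals) for name, vals in scores.items()}
for a, b in combinations(means, 2):
    print(f"{a} vs {b}: mean difference = {means[a] - means[b]:+.2f}")
```

Here both drug groups have lower mean pain than placebo, which is the kind of group-by-group breakdown post hoc testing formalizes.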

Grove, S., & Cipher, D. (2017). Statistics for nursing research: A workbook for evidence-based practice (2nd ed.). Elsevier.

Discussion 3

To explain analysis of variance to an audience that is not in a statistics class, I would first define analysis of variance and associated terms within that definition. I would then give an easily understood example so that the concept could be applied. The analysis of variance, or ANOVA, is a statistical method used to test differences between two or more means (Analysis of Variance, 2018). The mean refers to the average. For example, the mean age of a group of individuals ages 15, 17, 18, 19, 19, 22, 24, and 26 would be 20 years old. That is, the sum of all the ages divided by the number of individuals.
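The mean calculation in the example above can be checked in two lines of Python, using the same ages:

```python
# Mean age: sum of all the ages divided by the number of individuals.
ages = [15, 17, 18, 19, 19, 22, 24, 26]
mean_age = sum(ages) / len(ages)
print(mean_age)  # 20.0
```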

There can be a one-way or two-way ANOVA. The one-way ANOVA has one independent variable, and the two-way ANOVA has two independent variables to be tested. For example, if you were testing whether groups of children liked one brand of cereal over others, you would use a one-way ANOVA, with cereal brand as the single independent variable. If you were testing both the brand and the calorie content of the cereal, you would use a two-way ANOVA (ANOVA test: Definition, types, examples, 2018).

An ANOVA test is a way of finding out whether a study or experiment is significant. You are basically testing groups to see if there is a difference between them (ANOVA test: Definition, types, examples, 2018).

References

Discussion 4

How to explain the theory of variance to a novice?

To me this is difficult due to a lack of knowledge on my part. I could explain heart disease or the results of DM, since these are subjects I have studied for years through my occupation, but statistics is still a novel idea to me at this point. Simplicity is usually the way to teach someone, and even through extensive searches on the internet there still doesn't seem to be an easy explanation.

Analysis of variance, or ANOVA, is a study of the range or spread among the group means. The mean is the starting point, being the average of the data to be compared. ANOVA is used instead of the t-test when the data pool is larger than two groups; using a single ANOVA decreases the likelihood of a Type I error, also called alpha inflation. First the mean, the average of the data, is computed; next, the difference of each value from the mean gives the deviation of the data. Each deviation is then squared, and the squared deviations are added together to give the total (sum of squared) deviation. The last step is to divide that total by one less than the number of values (n - 1). This gives the variance. Simple, huh?
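Each step in the paragraph above maps to one line of Python. The data are made up for illustration:

```python
data = [2, 4, 4, 4, 5, 5, 7, 9]              # hypothetical values
n = len(data)

mean = sum(data) / n                          # step 1: the average (5.0 here)
deviations = [x - mean for x in data]         # step 2: difference from the mean
squared = [d ** 2 for d in deviations]        # step 3: square each deviation
total = sum(squared)                          # step 4: total squared deviation (32.0)
variance = total / (n - 1)                    # step 5: divide by n - 1

print(variance)                               # 32 / 7, about 4.57
```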

References:

Pezzullo, J. (n.d.). The basic idea of analysis of variance. Retrieved from http://www.dummies.com/education/science/biology/t…

Discussion 5

If I had to explain this to someone that didn't know statistics, I would rely on something like the excerpt below from a statistics-for-dummies book. I find that they have simple explanations for things that may be harder to understand.

The so-called "one-way analysis of variance" (ANOVA) is used when comparing three or more groups of numbers. When comparing only two groups (A and B), you test the difference (A - B) between the two groups with a Student t test. So when comparing three groups (A, B, and C), it's natural to think of testing each of the three possible two-group comparisons (A - B, A - C, and B - C) with a t test.

But running an exhaustive set of two-group t tests can be risky, because as the number of groups goes up, the number of two-group comparisons goes up even more. The general rule is that N groups can be paired up in N(N - 1)/2 different ways, so in a study with six groups, you'd have 6 × 5/2, or 15, different two-group comparisons.
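The N(N - 1)/2 rule is easy to verify in code; it is the same count the standard library's `math.comb` gives for choosing 2 items out of N:

```python
from math import comb

def pairwise_comparisons(n_groups):
    # N groups can be paired up in N(N - 1)/2 different ways.
    return n_groups * (n_groups - 1) // 2

print(pairwise_comparisons(6))               # 15, matching 6 x 5 / 2
print(pairwise_comparisons(6) == comb(6, 2)) # True
```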

When you do a lot of significance tests, you run an increased chance of making a Type I error: falsely concluding significance when there's no real effect present. This type of error is also called alpha inflation. So if you want to know whether a bunch of groups all have consistent means or whether one or more of them are different from one or more others, you need a single test producing a single p value that answers that question.
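Alpha inflation can be made concrete with the standard formula for the chance of at least one false positive across m independent tests, 1 - (1 - α)^m. (Real t tests on the same groups are not fully independent, so this is an approximation, but it shows the trend.)

```python
def familywise_error(alpha, m):
    # Chance of at least one Type I error across m independent tests,
    # each run at significance level alpha.
    return 1 - (1 - alpha) ** m

print(round(familywise_error(0.05, 1), 3))   # 0.05  (one test)
print(round(familywise_error(0.05, 15), 3))  # 0.537 (all 15 pairwise tests for 6 groups)
```

With 15 comparisons at α = 0.05, there is better than a coin-flip chance of at least one spurious "significant" result, which is exactly why a single ANOVA is preferred.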

The one-way ANOVA is exactly that kind of test. It doesn't look at the differences between pairs of group means; instead, it looks at how the entire collection of group means is spread out and compares that to how much you might expect those means to spread out if all the groups were sampled from the same population (that is, if there were no true differences between the groups).

The result of this calculation is expressed in a test statistic called the F ratio (designated simply as F): the ratio of how much variability there is between the groups relative to how much there is within the groups.

If the null hypothesis is true (in other words, if no true difference exists between the groups), then the F ratio should be close to 1, and its sampling fluctuations should follow the Fisher F distribution, which is actually a family of distribution functions characterized by two numbers: the between-groups and within-groups degrees of freedom.

http://www.dummies.com/education/science/biology/t…

Discussion 6

A statistical interaction occurs when an input variable has a different effect on the output variable depending on another variable; the interactions that occur among variables can look different, but their meaning is essentially the same (ICBS, 2018).

Simply put, two or more variables come together, and each variable shares a similar link. One variable may have a more powerful influence on the other variables, and the dissimilar effects the variables produce allow researchers to explain the data more thoroughly when the interactions are evaluated.

One variable cannot determine the output independently. All variables must be combined to comprehend the results.
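The "must be combined" idea can be illustrated with a tiny, hypothetical 2 × 2 table of outcome means: the effect of one variable (treatment vs. control) reverses depending on the level of a second variable, so neither variable alone explains the result.

```python
# Invented cell means for a 2 x 2 design (outcome on some score scale).
cell_means = {
    ("treatment", "young"): 8.0,
    ("control",   "young"): 6.0,   # treatment effect for young: +2
    ("treatment", "old"):   5.0,
    ("control",   "old"):   7.0,   # treatment effect for old: -2
}

effect_young = cell_means[("treatment", "young")] - cell_means[("control", "young")]
effect_old = cell_means[("treatment", "old")] - cell_means[("control", "old")]

# Opposite signs: the treatment effect depends on age, i.e., an interaction.
print(effect_young, effect_old)  # 2.0 -2.0
```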

For example, a fictional community hospital offered an incentive for bedside nurses who achieved certification in 2014. A total of 48 nurses attained medical-surgical certification during fiscal year 2014. Three nurses attended a certification review offered by the hospital, and the other 45 nurses did not attend the review. All nurses passed the exam regardless.

In this example, the dependent variable, or the outcome we are measuring, is the number of nurses achieving certification. The independent variable is whether the nurses attended the certification review or did not attend it. The sample size was 48. Overall, the incentive offer was successful, as 48 nurses obtained their specialty certification, but the hospital-provided certification preparation course was not cost effective.