BPCC 108
STATISTICAL METHODS FOR PSYCHOLOGICAL RESEARCH- II
BPCC 108 Solved Free Assignment January 2023
Assignment One
Q 1. Describe the assumptions of parametric and non-parametric statistics.
Ans. Parametric and non-parametric statistics are two broad families of statistical analysis used in data analysis.
The choice between them depends on the assumptions that can reasonably be made about the data being analyzed.
Parametric statistics assume that the data follow a particular distribution (usually the normal distribution), while non-parametric statistics make fewer assumptions about the data.
Assumptions of Parametric Statistics:
Parametric statistics are based on certain assumptions about the population from which the sample was drawn. These assumptions include:
Normal Distribution:
The data are assumed to be approximately normally distributed, following a bell-shaped curve in which most observations lie near the mean and the remaining observations are spread symmetrically on either side of it.
Homogeneity of Variance:
The variance of the data in all the groups being compared is equal. This means that the spread of data in each group is similar.
Independence:
The observations in the sample are independent of each other. This means that the value of one observation does not affect the value of another observation.
Random Sampling:
The sample was drawn from the population using a random process, which means that every member of the population has an equal chance of being selected.
Assumptions of Non-Parametric Statistics:
Non-parametric statistics make fewer assumptions about the population distribution and the characteristics of the sample. The main assumptions of non-parametric statistics include:
Independence:
The observations in the sample are independent of each other. This means that the value of one observation does not affect the value of another observation.
Random Sampling:
The sample was drawn from the population using a random process, which means that every member of the population has an equal chance of being selected.
Scale of Measurement:
Non-parametric tests are generally used for data measured on nominal or ordinal scales, where the distributional assumptions required by parametric tests cannot be made.
Lack of Normality:
Non-parametric tests do not assume that the data is normally distributed. In fact, non-parametric tests are often used when the data is not normally distributed.
Homogeneity of Variance:
Non-parametric tests do not assume that the variance of the data in all the groups being compared is equal.
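In practice, these assumptions can be checked before deciding between a parametric and a non-parametric test. The sketch below is illustrative only: group_a and group_b are hypothetical samples, and SciPy's Shapiro-Wilk and Levene tests (scipy.stats.shapiro and scipy.stats.levene) are used to examine normality and homogeneity of variance.

```python
from scipy import stats

# hypothetical samples; in a real analysis these would be the observed data
group_a = [4, 5, 6, 4, 3, 2, 1, 5, 4, 3]
group_b = [5, 4, 3, 2, 6, 7, 3, 5, 7, 5]

# Shapiro-Wilk test of normality for each group (H0: the data are normally distributed)
for name, data in (("group_a", group_a), ("group_b", group_b)):
    w, p = stats.shapiro(data)
    print(f"{name}: W = {w:.3f}, p = {p:.3f}")

# Levene's test for homogeneity of variance (H0: the group variances are equal)
stat, p = stats.levene(group_a, group_b)
print(f"Levene: statistic = {stat:.3f}, p = {p:.3f}")
```

If both tests give p-values above the chosen significance level, a parametric test is reasonable; otherwise a non-parametric alternative is the safer choice.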
Advantages and Disadvantages of Parametric Statistics:
The advantages of parametric statistics include:
Greater Statistical Power:
Parametric tests are more powerful than non-parametric tests when the data is normally distributed and the other assumptions are met. This means that parametric tests are more likely to detect a real difference between groups.
Ease of Interpretation:
Parametric tests are easy to interpret because they have well-established procedures for calculating the probability of obtaining a particular result.
The disadvantages of parametric statistics include:
Sensitivity to Assumptions:
Parametric tests are sensitive to the assumptions of normality and homogeneity of variance. If these assumptions are not met, the results of the analysis may be unreliable.
Restricted Applicability:
Parametric tests are restricted to situations where the data is normally distributed and the other assumptions are met. This means that parametric tests may not be applicable in all situations.
Q 2. Compute One Way ANOVA for the following data:
Ans. To compute One Way ANOVA, we need to calculate the following:
Total sum of squares (SST)
Between-group sum of squares (SSB)
Within-group sum of squares (SSW)
Degrees of freedom (df) for SSB and SSW
Mean squares (MSB and MSW)
F-statistic
First, let’s calculate the overall mean and the mean for each group:
Overall mean = (4+5+6+4+3+2+1+5+4+3+5+4+3+2+6+7+3+5+7+5+7+6+3+3+3+4+3+4+5+6+6+7+4+6+7+3+7+2+3+4)/40 = 4.425
Mean for Group 1 = (4+5+6+4+3+2+1+5+4+3)/10 = 3.7
Mean for Group 2 = (5+4+3+2+6+7+3+5+7+5)/10 = 4.7
Mean for Group 3 = (7+6+3+3+3+4+3+4+5+6)/10 = 4.4
Mean for Group 4 = (6+7+4+6+7+3+7+2+3+4)/10 = 4.9
Next, we calculate the sum of squares for each group:
SSG1 = 10 * (3.7 – 4.425)^2 = 5.25625
SSG2 = 10 * (4.7 – 4.425)^2 = 0.75625
SSG3 = 10 * (4.4 – 4.425)^2 = 0.00625
SSG4 = 10 * (4.9 – 4.425)^2 = 2.25625
Now, we can calculate the between-group sum of squares:
SSB = SSG1 + SSG2 + SSG3 + SSG4 = 8.275
To calculate the within-group sum of squares, we first need to calculate the deviation of each score from its group mean:
Group 1: (0.3, 1.3, 2.3, 0.3, -0.7, -1.7, -2.7, 1.3, 0.3, -0.7)
Group 2: (0.3, -0.7, -1.7, -2.7, 1.3, 2.3, -1.7, 0.3, 2.3, 0.3)
Group 3: (2.6, 1.6, -1.4, -1.4, -1.4, -0.4, -1.4, -0.4, 0.6, 1.6)
Group 4: (1.1, 2.1, -0.9, 1.1, 2.1, -1.9, 2.1, -2.9, -1.9, -0.9)
We can now calculate the within-group sum of squares:
SSW = (sum of squared deviations in Group 1) + (sum of squared deviations in Group 2) + (sum of squared deviations in Group 3) + (sum of squared deviations in Group 4)
SSW = 20.1 + 26.1 + 20.4 + 32.9 = 99.5
Now, we can calculate the degrees of freedom for SSB and SSW:
dfSB = k – 1 = 4 – 1 = 3 (where k is the number of groups)
dfSW = N – k = 40 – 4 = 36 (where N is the total sample size)
Next, we can calculate the mean squares:
MSB = SSB / dfSB = 8.275 / 3 = 2.7583
MSW = SSW / dfSW = 99.5 / 36 = 2.7639
Finally, we can calculate the F-statistic:
F = MSB / MSW = 2.7583 / 2.7639 = 0.998
To interpret this F-value, we need to compare it to the critical F-value at a given level of significance and degrees of freedom. Assuming a significance level of 0.05, and df1 = 3 and df2 = 36, the critical F-value is 2.866.
Since our calculated F-value (0.998) is smaller than the critical F-value (2.866), we fail to reject the null hypothesis and conclude that there is no statistically significant difference between the means of the four groups.
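As a cross-check, the same one-way ANOVA can be computed in software. A minimal sketch, assuming SciPy is installed, using the four groups listed above:

```python
from scipy import stats

group1 = [4, 5, 6, 4, 3, 2, 1, 5, 4, 3]
group2 = [5, 4, 3, 2, 6, 7, 3, 5, 7, 5]
group3 = [7, 6, 3, 3, 3, 4, 3, 4, 5, 6]
group4 = [6, 7, 4, 6, 7, 3, 7, 2, 3, 4]

# one-way ANOVA across the four groups
f_value, p_value = stats.f_oneway(group1, group2, group3, group4)
print(f"F(3, 36) = {f_value:.3f}, p = {p_value:.3f}")
# F is approximately 0.998 with p well above 0.05, matching the hand calculation
```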
Assignment Two
Q 3. Elucidate factorial designs with the help of suitable examples.
Ans. Factorial designs are a type of experimental design that allows researchers to investigate the effects of two or more independent variables on a dependent variable.
In a factorial design, each independent variable is called a factor, and the different combinations of levels of the factors are called conditions.
Factorial designs are often used in psychological research to study the effects of different variables on behavior or cognitive processes.
A simple example of a 2 x 2 factorial design involves two independent variables, each with two levels. For example, imagine a study investigating the effects of caffeine and exercise on mood.
In this study, caffeine and exercise are the two independent variables, each with two levels: caffeine (present or absent) and exercise (present or absent).
The dependent variable is mood, which is measured after each participant completes the experimental task.
To create the conditions for this study, the four possible combinations of caffeine and exercise are created: (1) caffeine present and exercise present, (2) caffeine present and exercise absent, (3) caffeine absent and exercise present, and (4) caffeine absent and exercise absent.
Participants are randomly assigned to one of these four conditions, and their mood is measured after completing the experimental task.
The results of this study might show that caffeine has a positive effect on mood, but only when combined with exercise.
This would suggest that caffeine alone does not have a significant effect on mood, but when combined with exercise, it can enhance mood.
Alternatively, the study might show that exercise has a positive effect on mood, but only when caffeine is absent. This would suggest that caffeine has a negative effect on mood, but when it is absent, exercise can enhance mood.
Another example of a factorial design is a 2 x 3 design, which involves two independent variables, one with two levels and the other with three levels.
For example, imagine a study investigating the effects of color and size on memory.
In this study, color and size are the two independent variables, with two levels (blue and green) and three levels (small, medium, and large), respectively.
The dependent variable is memory, which is measured after each participant completes the experimental task.
To create the conditions for this study, six possible combinations of color and size are created: (1) small blue stimuli, (2) medium blue stimuli, (3) large blue stimuli, (4) small green stimuli, (5) medium green stimuli, and (6) large green stimuli.
Participants are randomly assigned to one of these six conditions, and their memory is measured after completing the experimental task.
The results of this study might show that size has a significant effect on memory, with larger stimuli leading to better memory performance. It might also show that color has no significant effect on memory.
Alternatively, the study might show that both size and color have significant effects on memory, with larger blue stimuli leading to the best memory performance.
This would suggest that color and size interact to influence memory performance.
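In practice, data from a factorial design are analyzed with a factorial (two-way) ANOVA that tests each main effect and the interaction. The sketch below is purely illustrative: it simulates hypothetical mood scores for the 2 x 2 caffeine-by-exercise example above and fits the model with the pandas and statsmodels libraries (assumed to be installed).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(seed=1)
rows = []
for caffeine in ("absent", "present"):
    for exercise in ("absent", "present"):
        # hypothetical cell means: exercise lifts mood, and caffeine adds a further
        # boost only when it is combined with exercise (an interaction effect)
        cell_mean = 5.0
        if exercise == "present":
            cell_mean += 1.0
            if caffeine == "present":
                cell_mean += 0.8
        for score in rng.normal(cell_mean, 1.0, size=20):
            rows.append({"caffeine": caffeine, "exercise": exercise, "mood": score})

df = pd.DataFrame(rows)

# 2 x 2 factorial ANOVA: main effects of caffeine and exercise plus their interaction
model = ols("mood ~ C(caffeine) * C(exercise)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

A significant row for the C(caffeine):C(exercise) term in the output corresponds to the kind of interaction described above, where the effect of one factor depends on the level of the other.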
Q 4. Explain the fundamental concepts in determining the significance of the difference between means
Ans. Determining the significance of the difference between means involves several fundamental concepts, including hypothesis testing, the null hypothesis, the alternative hypothesis, the level of significance, and the p-value.
Hypothesis testing is a statistical procedure used to determine whether a sample result is likely to have occurred by chance, or whether it reflects a true difference or effect in the population.
The hypothesis being tested is typically framed in terms of the null hypothesis and the alternative hypothesis.
The null hypothesis is a statement that assumes there is no difference or effect in the population. It is usually denoted as H0.
The alternative hypothesis is a statement that assumes there is a difference or effect in the population. It is usually denoted as Ha.
The level of significance is a value chosen by the researcher to determine how unlikely the sample result must be to reject the null hypothesis.
It is typically denoted as alpha (α) and is set at a specific value (e.g., 0.05) prior to conducting the hypothesis test.
The p-value is the probability of obtaining a sample result as extreme as, or more extreme than, the observed result, assuming the null hypothesis is true.
If the p-value is less than the level of significance (α), then the null hypothesis is rejected, and the alternative hypothesis is accepted.
If the p-value is greater than the level of significance (α), then the null hypothesis is not rejected, and the alternative hypothesis is not accepted.
When comparing the means of two groups, the t-test is commonly used. The t-test is a parametric test that assumes the data are normally distributed and have equal variances.
There are two types of t-tests: the independent samples t-test and the paired samples t-test.
The independent samples t-test is used to compare the means of two independent groups.
The test calculates the t-statistic, which is the difference between the means of the two groups divided by the standard error of the difference.
The degrees of freedom for the t-test are calculated as the sum of the sample sizes minus two.
The t-statistic is compared to the critical t-value obtained from the t-distribution table, with the level of significance and degrees of freedom used to determine the critical value.
If the absolute value of the t-statistic is greater than the critical t-value, then the null hypothesis is rejected, and the alternative hypothesis is accepted.
The paired samples t-test is used to compare the means of two related groups. This test is typically used when the same participants are measured at two different times or under two different conditions.
The test calculates the t-statistic, which is the difference between the means of the two groups divided by the standard error of the difference.
The degrees of freedom for the t-test are calculated as the number of pairs minus one.
The t-statistic is compared to the critical t-value obtained from the t-distribution table, with the level of significance and degrees of freedom used to determine the critical value.
If the absolute value of the t-statistic is greater than the critical t-value, then the null hypothesis is rejected, and the alternative hypothesis is accepted.
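As an illustration of how these two t-tests are computed in practice, the sketch below uses SciPy's ttest_ind and ttest_rel functions on small hypothetical data sets; the numbers are invented purely for demonstration.

```python
from scipy import stats

# hypothetical scores for two independent groups
group_a = [12, 15, 14, 10, 13, 16, 11, 14]
group_b = [9, 11, 10, 12, 8, 10, 11, 9]

# independent-samples t-test (equal variances assumed, as in the classical test)
t_ind, p_ind = stats.ttest_ind(group_a, group_b)
print(f"independent samples: t = {t_ind:.2f}, p = {p_ind:.4f}")

# hypothetical scores for the same participants measured before and after a treatment
pre = [20, 22, 19, 24, 21, 23]
post = [23, 25, 20, 27, 22, 26]

# paired-samples t-test
t_rel, p_rel = stats.ttest_rel(pre, post)
print(f"paired samples:      t = {t_rel:.2f}, p = {p_rel:.4f}")

# in each case, the null hypothesis is rejected when p is less than the chosen alpha (e.g. 0.05)
```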
In conclusion, determining the significance of the difference between means involves several fundamental concepts, including hypothesis testing, the null hypothesis, the alternative hypothesis, the level of significance, and the p-value.
The t-test is commonly used to compare the means of two groups and involves calculating the t-statistic, degrees of freedom, and critical t-value.
These concepts are essential for conducting statistical analysis and making informed decisions based on data.
Q 5. Compute Chi-square for the following data:
Ans. To compute Chi-square for the given data, we need to first create a contingency table.
Contingency Table:
         Yes   No   Total
Male      5    10    15
Female   15    10    25
Total    20    20    40
The observed frequencies are given in the contingency table. Now, we need to calculate the expected frequencies.
Expected Frequencies:
         Yes    No
Male     7.5    7.5
Female  12.5   12.5
To calculate the expected frequencies, we use the formula:
Expected frequency = (row total * column total) / grand total
For example, the expected frequency for Male and Yes is:
Expected frequency = (15 * 20) / 40 = 7.5
Similarly, we can calculate the expected frequencies for all cells in the contingency table.
Now, we can use the Chi-square formula to calculate the test statistic:
Chi-square = Σ [(O – E)^2 / E]
where Σ is the sum of all cells in the contingency table, O is the observed frequency, and E is the expected frequency.
Using this formula, we get:
Chi-square = [(5-7.5)^2 / 7.5] + [(10-7.5)^2 / 7.5] + [(15-12.5)^2 / 12.5] + [(10-12.5)^2 / 12.5]
Chi-square = 0.833 + 0.833 + 0.5 + 0.5
Chi-square = 2.67
Degrees of Freedom:
To determine the degrees of freedom, we use the formula:
df = (number of rows – 1) * (number of columns – 1)
In this case, we have 2 rows and 2 columns. Therefore,
df = (2-1) * (2-1) = 1
Significance Level:
Assuming a significance level of 0.05, we can use the Chi-square distribution table to find the critical value of Chi-square with 1 degree of freedom. The critical value is 3.84.
Conclusion:
The calculated value of Chi-square is 2.67, and the critical value is 3.84. Since the calculated value is less than the critical value, we fail to reject the null hypothesis and conclude that there is no statistically significant association between gender and response at the 0.05 level.
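The same test can be cross-checked in software. A minimal sketch, assuming SciPy is installed, using the observed contingency table above:

```python
from scipy.stats import chi2_contingency

# observed frequencies (rows: Male, Female; columns: Yes, No)
observed = [[5, 10],
            [15, 10]]

# correction=False reproduces the hand calculation (no Yates' continuity correction)
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
print("expected frequencies:")
print(expected)
# chi-square is about 2.67 with p above 0.05, so the association is not significant
```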
Q 6. Compute Mann Whitney U for the following data :
Ans. To compute Mann Whitney U for the given data, we need to first rank all the scores in the combined sample, from lowest to highest.
In case of ties, we assign the average rank to all the tied scores. Then, we calculate the sum of ranks for each group and use the following formula to calculate U:
U = n1 * n2 + (n1 * (n1 + 1) / 2) – R1
where n1 and n2 are the sample sizes of group 1 and group 2 respectively, and R1 is the sum of ranks for group 1.
Ranking the data (all scores ranked together, with tied scores receiving the average rank):
Group 1
Score Rank
1 1.5
3 6
3 6
3 6
4 10
4 10
5 13.5
5 13.5
6 17.5
6 17.5
Group 2
Score Rank
1 1.5
2 3
3 6
3 6
4 10
5 13.5
5 13.5
6 17.5
6 17.5
The sum of ranks for Group 1 (n1 = 10 scores) is R1 = 101.5, and the sum of ranks for Group 2 (n2 = 9 scores) is R2 = 88.5. Now, we can calculate U for each group:
U1 = n1 * n2 + (n1 * (n1 + 1) / 2) – R1
U1 = (10 * 9) + (10 * 11 / 2) – 101.5
U1 = 90 + 55 – 101.5
U1 = 43.5
U2 = n1 * n2 + (n2 * (n2 + 1) / 2) – R2
U2 = 90 + 45 – 88.5
U2 = 46.5
As a check, U1 + U2 = n1 * n2 = 90. The test statistic is the smaller of the two values, so U = 43.5.
We can use the Mann Whitney U table to find the critical value of U with a significance level of 0.05 (two-tailed) and sample sizes of 10 and 9. The critical value is 20, and the null hypothesis is rejected only when the calculated U is less than or equal to the critical value.
Since the calculated value of U (43.5) is greater than the critical value of U (20), we fail to reject the null hypothesis and conclude that there is no significant difference between the job satisfaction scores of the two groups.
Therefore, we cannot conclude that the job satisfaction scores of the two groups come from different populations.
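As a cross-check, SciPy's mannwhitneyu function performs the same test directly on the raw scores. A minimal sketch, assuming SciPy is installed, using the scores listed above:

```python
from scipy.stats import mannwhitneyu

group1 = [1, 3, 3, 3, 4, 4, 5, 5, 6, 6]
group2 = [1, 2, 3, 3, 4, 5, 5, 6, 6]

# two-sided test; SciPy reports the U statistic associated with the first sample
result = mannwhitneyu(group1, group2, alternative="two-sided")
u_first = result.statistic
u_smaller = min(u_first, len(group1) * len(group2) - u_first)  # value compared with U tables
print(f"U = {u_smaller}, p = {result.pvalue:.3f}")
# p is well above 0.05, so the null hypothesis of no difference is not rejected
```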
Q 7. Explain the procedure for computation of correlation using Microsoft Excel.
Ans. Microsoft Excel is a popular tool for computing correlations between two sets of data. The following steps can be followed to compute the correlation using Excel:
- Enter the data into two columns in an Excel spreadsheet. For example, if you want to compute the correlation between the number of hours studied and the exam scores, you would enter the number of hours studied in column A and the corresponding exam scores in column B.
- Select a cell where you want to display the correlation coefficient. This can be any cell that is not already occupied.
- Click on the “Formulas” tab in the ribbon at the top of the screen.
- Click on the “More Functions” drop-down menu, and select “Statistical” from the list of options.
- Click on the “CORREL” function in the list of statistical functions. This will open the “Function Arguments” dialog box.
- In the “Function Arguments” dialog box, select the range of cells that contain the data for the first variable (hours studied, in this example) in the “Array1” field.
- Select the range of cells that contain the data for the second variable (exam scores, in this example) in the “Array2” field.
- Click “OK” to close the “Function Arguments” dialog box.
- Excel will now compute the correlation coefficient and display it in the cell that you selected earlier.
- If you want to interpret the correlation coefficient, you can use the following guidelines:
. If the correlation coefficient is close to +1, it indicates a strong positive correlation between the two variables.
. If the correlation coefficient is close to -1, it indicates a strong negative correlation between the two variables.
. If the correlation coefficient is close to 0, it indicates no correlation or a weak correlation between the two variables.
In addition to computing the correlation coefficient, Excel can also generate a scatter plot to visually represent the relationship between the two variables.
To create a scatter plot, select both columns of data and click on the “Insert” tab in the ribbon at the top of the screen.
Then, click on the “Scatter” chart type and choose the desired format for the chart. The scatter plot can help to identify any patterns or trends in the data, and can complement the numerical correlation coefficient.
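The correlation can also be obtained by typing the formula directly into a cell, for example =CORREL(A2:A21, B2:B21) when the data occupy rows 2 to 21. Outside Excel, the same Pearson coefficient can be verified with a short script; the sketch below is illustrative only and uses hypothetical hours-studied and exam-score data with NumPy.

```python
import numpy as np

# hypothetical data: hours studied and the corresponding exam scores
hours = np.array([2, 4, 5, 7, 8, 10, 11, 13])
scores = np.array([50, 55, 60, 65, 72, 78, 85, 90])

# Pearson correlation coefficient, the same quantity Excel's CORREL function returns
r = np.corrcoef(hours, scores)[0, 1]
print(f"r = {r:.3f}")  # a value close to +1 indicates a strong positive correlation
```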
Q 8. Explain, data view, variable view and output view of SPSS.
Ans. SPSS (Statistical Package for the Social Sciences) is a software package used for statistical analysis.
It has three main views: Data View, Variable View, and Output View. Each view serves a unique purpose in data management, analysis, and presentation. Below is an explanation of each view:
Data View: The Data View is the default view in SPSS. This view allows you to enter, edit, and view data in a spreadsheet-like format. Each row represents a case, and each column represents a variable.
The Data View allows you to enter data directly or import data from external sources. You can also sort and filter data, recode variables, and perform other basic data management tasks.
The Data View is where you will spend most of your time when working with SPSS.
Variable View: The Variable View is where you define and edit variables in your dataset. Variables are the characteristics of the cases you are studying, such as age, gender, or income.
In the Variable View, you can specify the variable name, label, type, format, and other properties. You can also define value labels for categorical variables, set missing values, and add notes.
The Variable View helps ensure that your data are accurate, consistent, and easy to analyze.
Output View: The Output View displays the results of your analysis. It presents your findings in tables, charts, and graphs, and allows you to customize the appearance and content of your output.
The Output View shows descriptive statistics, inferential statistics, regression analysis, and other types of analyses. You can also export your output to other formats, such as PDF, Excel, or Word.
The Output View is where you communicate your results to others and make informed decisions based on your analysis.
In addition to the three main views, SPSS also has several other useful features for data analysis. One of these is the Syntax Editor, which allows you to write and run scripts to automate your analyses.
This is particularly useful for complex analyses or when you need to repeat the same analysis on multiple datasets.
The Syntax Editor also allows you to save your scripts for future use, making your analysis more efficient and consistent.
Another important feature of SPSS is the Data Dictionary. This is a document that describes the variables in your dataset, including their names, labels, and definitions.
The Data Dictionary can be useful for data documentation and for communicating the meaning of variables to others.
SPSS also allows you to generate a codebook from the Variable View, which is a summary of the variables and their properties that can be exported to other formats.
Finally, SPSS also offers a wide range of statistical procedures for data analysis. These include descriptive statistics, inferential statistics, regression analysis, factor analysis, and many others.
The output of these procedures can be customized and exported for use in other programs or for publication.
SPSS also allows you to create charts and graphs to visualize your data and communicate your findings to others.
Overall, SPSS is a powerful tool for data analysis that offers a wide range of features and procedures.
Understanding the different views and features of SPSS can help you manage and analyze your data more efficiently and accurately.
Whether you are a student, researcher, or business professional, SPSS can help you make sense of your data and make informed decisions based on your analysis.