BECS 184 Solved Free Assignment (January 2023)
(a.) Compute and interpret the correlation coefficient for the following data:
X(Height) 12 10 14 11 12 9
Y (Weight) 18 17 23 19 20 15
(b) Explain step by step procedure for testing the significance of correlation coefficient.
Ans (a.) To compute the correlation coefficient, we first need to calculate the mean and standard deviation for both X (Height) and Y (Weight).
Mean of X (meanX) = (12 + 10 + 14 + 11 + 12 + 9) / 6 = 68 / 6 ≈ 11.33
Standard deviation of X (stdX) = √[((12-11.33)^2 + (10-11.33)^2 + (14-11.33)^2 + (11-11.33)^2 + (12-11.33)^2 + (9-11.33)^2) / 6] = √[2.56] ≈ 1.60
Mean of Y (meanY) = (18 + 17 + 23 + 19 + 20 + 15) / 6 = 112 / 6 ≈ 18.67
Standard deviation of Y (stdY) = √[((18-18.67)^2 + (17-18.67)^2 + (23-18.67)^2 + (19-18.67)^2 + (20-18.67)^2 + (15-18.67)^2) / 6] = √[6.22] ≈ 2.49
Next, we calculate the covariance (covXY) between X and Y:
covXY = [(12-11.33)(18-18.67) + (10-11.33)(17-18.67) + (14-11.33)(23-18.67) + (11-11.33)(19-18.67) + (12-11.33)(20-18.67) + (9-11.33)(15-18.67)] / 6 = 22.67 / 6 ≈ 3.78
Finally, we can compute the correlation coefficient (r) using the formula:
r = covXY / (stdX * stdY)
r = 3.78 / (1.60 * 2.49) ≈ 0.95
Interpretation:
The correlation coefficient (r) between height (X) and weight (Y) is approximately 0.95. This indicates a strong positive linear relationship between the two variables: as height increases, weight tends to increase.
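As a check, the correlation coefficient can be computed directly in Python. This sketch uses NumPy and the population-standard-deviation convention (dividing by n) to match the hand calculation:

```python
import numpy as np

x = np.array([12, 10, 14, 11, 12, 9])   # heights
y = np.array([18, 17, 23, 19, 20, 15])  # weights

# covariance and population standard deviations (divide by n)
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
r = cov_xy / (x.std() * y.std())
print(round(r, 2))  # → 0.95
```

The same value comes from `np.corrcoef(x, y)[0, 1]`, since the n vs. n-1 convention cancels in the ratio.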
Ans (b.) To test the significance of a correlation coefficient, you can perform a hypothesis test. Here’s a step-by-step procedure for testing the significance of a correlation coefficient:
(1) Formulate the hypotheses:
Null Hypothesis (H0): There is no significant correlation between the variables.
Alternative Hypothesis (Ha): There is a significant correlation between the variables.
(2) Set the significance level (alpha):
Choose the desired level of significance for the test, commonly denoted as alpha (α). The most common values are 0.05 (5%) or 0.01 (1%).
(3) Calculate the correlation coefficient:
Calculate the correlation coefficient (r) using the appropriate formula. You can use the Pearson correlation coefficient formula for linear relationships.
(4) Determine the critical value:
Look up the critical value corresponding to the chosen significance level and the sample size (degrees of freedom). This critical value is used to determine the threshold for rejecting the null hypothesis.
(5) Calculate the test statistic:
Compute the test statistic (t) using the formula:
t = r * sqrt((n – 2) / (1 – r^2))
where n is the sample size.
(6) Determine the p-value:
Calculate the p-value associated with the test statistic. This can be done using a t-distribution table or a statistical software package.
(7) Compare the p-value with the significance level:
If the p-value is less than the chosen significance level (p-value < α), reject the null hypothesis. There is evidence to support the alternative hypothesis, indicating a significant correlation.
If the p-value is greater than or equal to the significance level (p-value ≥ α), fail to reject the null hypothesis. There is insufficient evidence to support a significant correlation.
(8) Interpret the results:
If the null hypothesis is rejected, conclude that there is a significant correlation between the variables.
If the null hypothesis is not rejected, conclude that there is no significant correlation between the variables.
It’s important to note that correlation does not imply causation. A significant correlation only indicates a relationship between the variables but does not establish a cause-and-effect relationship.
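The procedure above can be sketched in Python (assuming SciPy is available), plugging in the data from part (a), where r ≈ 0.95 and n = 6:

```python
import math
from scipy import stats

r, n = 0.9474, 6  # correlation coefficient and sample size from part (a)

# Step 5: test statistic with n - 2 degrees of freedom
t = r * math.sqrt((n - 2) / (1 - r**2))

# Step 4: two-tailed critical value at alpha = 0.05
t_crit = stats.t.ppf(0.975, df=n - 2)

# Step 6: two-tailed p-value
p = 2 * (1 - stats.t.cdf(abs(t), df=n - 2))

# Step 7: reject H0 if p < alpha (equivalently, |t| > t_crit)
print(round(t, 2), round(t_crit, 2), p < 0.05)
```

Here t ≈ 5.92 exceeds the critical value (≈ 2.78 for 4 degrees of freedom), so the correlation is significant at the 5% level.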
Q 2. (a.) A market research firm wants to estimate the share that foreign companies have in Indian market for certain products. A random sample of 100 consumers is obtained and 34 people in the sample are found to be users of foreign made products, the rest are users of domestic products. Give a 95% confidence interval for share of foreign products in the market.
(b) A salesman made a profit of Rs. 245 on a showpiece A, for which the average profit has been Rs. 200 with a standard deviation of Rs. 50. Later on the same day, he made a profit of Rs. 620 on a showpiece B, for which the average profit has been Rs. 500 with a standard deviation of Rs. 150. For which of these two models is the salesman's profit relatively higher?
Ans (a) To estimate the share of foreign products in the market with a 95% confidence interval, you can use the formula for calculating a confidence interval for a proportion. Here’s the step-by-step procedure:
Calculate the sample proportion:
Divide the number of users of foreign-made products (34) by the sample size (100) to obtain the sample proportion (p-hat).
p-hat = 34/100 = 0.34
Determine the standard error:
The standard error (SE) represents the variability in the sample proportion and is calculated using the formula:
SE = sqrt((p-hat * (1 – p-hat)) / n)
where n is the sample size.
SE = sqrt((0.34 * (1 – 0.34)) / 100) ≈ 0.047
Determine the margin of error:
The margin of error represents the range within which the true population proportion is likely to fall. It is calculated by multiplying the standard error by the critical value from the standard normal (z) distribution for the desired confidence level.
For a 95% confidence level, the critical value is z = 1.96. (The standard normal distribution is used for a proportion with a large sample such as n = 100.)
Margin of Error = Critical Value * SE
Margin of Error ≈ 1.96 * 0.047 ≈ 0.093
Calculate the lower and upper bounds of the confidence interval:
Subtract and add the margin of error to the sample proportion to obtain the lower and upper bounds, respectively.
Lower Bound = p-hat – Margin of Error
Lower Bound = 0.34 – 0.093 ≈ 0.247
Upper Bound = p-hat + Margin of Error
Upper Bound = 0.34 + 0.093 ≈ 0.433
Interpret the confidence interval:
With a 95% confidence level, we can say that we are 95% confident that the true share of foreign products in the market lies between approximately 24.7% and 43.3%.
Therefore, the 95% confidence interval for the share of foreign products in the market is approximately 24.7% to 43.3%.
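The whole calculation fits in a few lines of Python (a sketch using only the standard library):

```python
import math

p_hat, n = 34 / 100, 100                   # sample proportion and sample size
se = math.sqrt(p_hat * (1 - p_hat) / n)    # standard error ≈ 0.047
z = 1.96                                   # standard normal critical value, 95% confidence
moe = z * se                               # margin of error ≈ 0.093

lower, upper = p_hat - moe, p_hat + moe
print(round(lower, 3), round(upper, 3))  # → 0.247 0.433
```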
Ans (b) To determine which showpiece model yields a relatively higher profit for the salesman, we need to compare the profits in terms of their relative positions within their respective distributions.
First, let’s calculate the z-scores for each showpiece using the formula:
z = (x – mean) / standard deviation
For showpiece A:
Profit = Rs. 245
Mean profit = Rs. 200
Standard deviation = Rs. 50
z_A = (245 – 200) / 50 = 45 / 50 = 0.9
For showpiece B:
Profit = Rs. 620
Mean profit = Rs. 500
Standard deviation = Rs. 150
z_B = (620 – 500) / 150 = 120 / 150 = 0.8
The z-score measures the number of standard deviations a data point is from the mean. A higher z-score indicates that the data point is relatively further from the mean.
Comparing the z-scores, we can see that the z-score for showpiece A is 0.9, while the z-score for showpiece B is 0.8.
Since the z-score for showpiece A is higher, it means that the profit of Rs. 245 for showpiece A is relatively higher within its distribution compared to the profit of Rs. 620 for showpiece B within its distribution.
Therefore, the salesman’s profit is relatively higher for showpiece A compared to showpiece B.
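The comparison can be verified with a short helper function:

```python
def z_score(x, mean, sd):
    """Number of standard deviations x lies above the mean."""
    return (x - mean) / sd

z_a = z_score(245, 200, 50)   # showpiece A: (245 - 200) / 50
z_b = z_score(620, 500, 150)  # showpiece B: (620 - 500) / 150
print(z_a, z_b)  # → 0.9 0.8
```

Since z_a > z_b, showpiece A's profit is relatively higher within its own distribution.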
Q 3. a.) Discuss different measures of Central Tendency and the specific situation in which they could be used.
b.) Marks of 20 students in Economics are: 70, 60, 80, 50, 65, 78, 81, 69, 72, 77, 58, 42, 62, 55, 82, 84, 64, 75, 59, 66 (Maximum marks are 100). Find the percentage of marks of each student. Also find out mean, median and mode using spreadsheet package. (Enclose screenshots of the spreadsheet in the assignment).
Ans (a) There are several measures of central tendency used to describe the center or average of a dataset. The choice of measure depends on the nature of the data and the specific situation.
Here are some common measures of central tendency and their specific use cases:
Mean:
The mean is calculated by summing all the values in a dataset and dividing by the total number of values.
It is commonly used when the data is normally distributed or symmetric.
It is sensitive to extreme values and can be influenced by outliers.
Example situation: Calculating the average score of students in a class.
Median:
The median represents the middle value in a dataset when it is sorted in ascending or descending order.
It is used when the data is skewed or has outliers.
It is not affected by extreme values.
Example situation: Determining the median income of a population, where a few extremely high or low incomes may exist.
Mode:
The mode represents the most frequently occurring value(s) in a dataset.
It is used when analyzing categorical or discrete data.
It can be useful when identifying the most common category or value.
Example situation: Finding the mode of transportation used by people in a city.
Geometric Mean:
The geometric mean is used for calculating the average growth rate or when dealing with multiplicative relationships.
It is commonly used in finance, economics, and investment analysis.
It is less sensitive to extreme values.
Example situation: Calculating the average annual growth rate of an investment portfolio.
Weighted Mean:
The weighted mean is used when different values in the dataset have different weights or importance.
It is used in situations where certain values have a greater impact on the overall average.
Example situation: Calculating the weighted GPA of a student, where each course is weighted differently.
It’s important to choose the appropriate measure of central tendency based on the characteristics of the data and the specific question or context of the analysis.
Ans (b) Since the maximum marks are 100, each student's percentage equals the marks obtained (for example, 70 marks = 70%). In a spreadsheet, enter the marks in cells A1:A20, compute the percentage in column B with =A1/100*100 (copied down), and then use =AVERAGE(A1:A20) for the mean (67.45), =MEDIAN(A1:A20) for the median (67.5), and =MODE(A1:A20) for the mode (it returns #N/A here because no mark repeats, so this data set has no mode). Enclose screenshots of your own spreadsheet in the assignment.
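The spreadsheet results can be cross-checked with Python's standard library:

```python
import statistics
from collections import Counter

marks = [70, 60, 80, 50, 65, 78, 81, 69, 72, 77,
         58, 42, 62, 55, 82, 84, 64, 75, 59, 66]

percentages = marks[:]              # maximum marks are 100, so percentage = marks
mean = statistics.mean(marks)       # 67.45
median = statistics.median(marks)   # 67.5
has_mode = max(Counter(marks).values()) > 1  # False: no mark repeats, so no mode
print(mean, median, has_mode)
```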
Q 4. Explain the following:
a. Non sampling errors
b. Probability density curve
c. Type I and type II errors
d. Conjoint Analysis
Ans (a) Non-sampling errors refer to errors that occur in the data collection and analysis process that are not related to random sampling. These errors can arise due to various reasons and can impact the quality and accuracy of the results.
Here are some common types of non-sampling errors:
Measurement or Response Errors:
These errors occur when there are inaccuracies or inconsistencies in the way data is measured or recorded.
Examples include errors in data entry, transcription mistakes, misunderstanding of survey questions, or respondent biases.
Non-response Errors:
Non-response errors occur when individuals selected for a survey or study do not participate or fail to provide complete information.
This can introduce bias if non-respondents differ systematically from respondents in terms of the variables being studied.
Selection Bias:
Selection bias occurs when the sample selected does not represent the entire population accurately.
It can happen when certain groups or individuals have a higher or lower chance of being included in the sample, leading to a distorted representation of the population.
Coverage Errors:
Coverage errors arise when some units in the population are not included or are improperly represented in the sampling frame.
It can occur due to errors in the sampling frame or incomplete coverage of the target population.
Processing Errors:
Processing errors occur during data processing, data cleaning, or data analysis.
These errors can result from mistakes in data entry, data manipulation, or coding errors.
Sampling Frame Errors:
Sampling frame errors occur when the sampling frame used to select the sample does not accurately represent the target population.
This can happen if the frame is outdated, incomplete, or contains inaccuracies.
Interpretation Errors:
Interpretation errors occur when data is misinterpreted or conclusions are drawn incorrectly from the data.
This can result from mistakes in statistical analysis, faulty assumptions, or miscommunication of findings.
Ans (b) A probability density curve, also known as a probability density function (PDF), is a graphical representation of the probability distribution of a continuous random variable. It provides information about the likelihood of different values occurring within a given range.
Key characteristics of a probability density curve are:
Non-negative values: The curve is always non-negative, meaning the probability cannot be negative for any particular value.
Area under the curve: The total area under the curve represents the probability of the random variable taking any value within the range. The total area is equal to 1.
Probability within a range: The probability of the random variable falling within a specific range can be determined by calculating the area under the curve within that range.
Shape and symmetry: The shape and symmetry of the curve depend on the probability distribution of the random variable. Common probability density curves include the normal distribution (bell-shaped and symmetric), exponential distribution, uniform distribution, and many others.
The probability density curve is typically plotted on a Cartesian plane, with the x-axis representing the values of the random variable and the y-axis representing the probability density.
The curve is smooth and continuous, allowing for any possible value within the range.
The precise equation or formula for the probability density curve varies depending on the specific probability distribution being modeled.
For example, the normal distribution has a well-known formula involving the mean and standard deviation.
Probability density curves are widely used in statistics, probability theory, and data analysis to understand and describe the distribution of continuous random variables.
They provide insights into the likelihood of different values and can be used to make predictions and infer statistical properties of a population.
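The two key properties above (total area equal to 1, and probabilities as areas under the curve) can be illustrated numerically for the standard normal density; this sketch approximates the areas with a Riemann sum on a fine grid:

```python
import numpy as np

# standard normal probability density evaluated on a fine grid
x = np.linspace(-6, 6, 10001)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
dx = x[1] - x[0]

total_area = pdf.sum() * dx                        # total area under the curve ≈ 1
within_1sd = pdf[(x >= -1) & (x <= 1)].sum() * dx  # P(-1 <= Z <= 1) ≈ 0.683
print(total_area, within_1sd)
```

The second number reproduces the familiar rule that about 68% of a normal population lies within one standard deviation of the mean.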
Ans (c) Type I and Type II errors are two types of mistakes that can occur in hypothesis testing, particularly in the context of statistical hypothesis testing.
Type I Error (False Positive):
A Type I error occurs when the null hypothesis is incorrectly rejected, even though it is true.
It represents the situation where the researcher concludes there is a significant effect or relationship when, in reality, there is none.
The probability of making a Type I error is denoted by alpha (α), which is the significance level set by the researcher.
A lower significance level (e.g., α = 0.05) reduces the chance of Type I errors but increases the likelihood of Type II errors.
Type II Error (False Negative):
A Type II error occurs when the null hypothesis is not rejected, even though it is false.
It represents the situation where the researcher fails to identify a significant effect or relationship when one truly exists.
The probability of making a Type II error is denoted by beta (β).
Power (1 – β) is the complement of Type II error and represents the probability of correctly rejecting the null hypothesis when it is false.
Factors that can influence the likelihood of a Type II error include sample size, effect size, and the chosen significance level.
The trade-off between Type I and Type II errors is commonly illustrated using the concept of the “power” of a statistical test.
Increasing the sample size or choosing a higher significance level (α) can decrease the likelihood of Type II errors but may increase the risk of Type I errors.
It’s important to carefully consider the consequences of both types of errors when designing hypothesis tests and interpreting the results.
The choice of significance level and the desired power of the test should be based on the specific context and consequences of each type of error.
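The meaning of alpha as the Type I error rate can be demonstrated with a small simulation (a sketch assuming NumPy and SciPy; the sample size and number of simulations are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n_sims, rejections = 0.05, 2000, 0

# Generate data under a TRUE null hypothesis (population mean really is 0),
# so every rejection of H0 is a Type I error.
for _ in range(n_sims):
    sample = rng.normal(loc=0, scale=1, size=30)
    if stats.ttest_1samp(sample, popmean=0).pvalue < alpha:
        rejections += 1

type1_rate = rejections / n_sims
print(type1_rate)  # should be close to alpha = 0.05
```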
Ans (d) Conjoint analysis is a market research technique used to understand and quantify how individuals make decisions when faced with multiple attributes or features of a product or service.
It aims to determine the relative importance of attributes and the preferences of consumers.
The basic idea behind conjoint analysis is to present respondents with a series of hypothetical product profiles or scenarios and ask them to rank or rate their preferences.
By systematically varying the levels of different attributes and analyzing the responses, researchers can derive insights into consumers’ decision-making processes and identify the key drivers of their choices.
Here are the key steps involved in conducting conjoint analysis:
Attribute Selection: Identify the relevant attributes that influence consumer decision-making for the product or service under study. Attributes can be tangible features (e.g., price, color, size) or intangible factors (e.g., brand reputation, warranty terms).
Attribute Levels: Determine the specific levels or options for each attribute. For example, if the attribute is price, the levels could be low, medium, and high.
Profile Creation: Create a set of hypothetical product profiles or scenarios by combining different attribute levels. These profiles should represent a range of realistic combinations that cover the attribute space adequately.
Questionnaire Design: Design a questionnaire or survey instrument to present the product profiles to respondents. Different conjoint analysis techniques, such as rating-based, ranking-based, or choice-based, can be employed to gather preference data.
Data Collection: Administer the questionnaire to a sample of respondents, ensuring that an adequate number of responses are collected to ensure statistical validity.
Data Analysis: Analyze the collected data using statistical techniques such as regression analysis, choice modeling, or utility analysis.
The goal is to estimate the relative importance of attributes and determine the utility or value consumers assign to different attribute levels.
Derive Insights: Interpret the results to gain insights into consumer preferences, identify key drivers of choice, and make informed decisions about product design, pricing, and marketing strategies.
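The data-analysis step can be sketched for a rating-based conjoint study. Everything here is hypothetical: two dummy-coded attributes (price level and brand) and assumed "true" part-worths (5, -2, +1) used only to simulate respondent ratings, which ordinary least squares then recovers:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # hypothetical number of rated product profiles

# Dummy-coded attribute levels: 1 = high price, 1 = brand B
price_high = rng.integers(0, 2, n)
brand_b = rng.integers(0, 2, n)

# Simulated ratings: baseline 5, high price worth -2, brand B worth +1, plus noise
rating = 5.0 - 2.0 * price_high + 1.0 * brand_b + rng.normal(0, 0.5, n)

# Estimate part-worths by regressing ratings on the attribute dummies
X = np.column_stack([np.ones(n), price_high, brand_b])
partworths, *_ = np.linalg.lstsq(X, rating, rcond=None)
print(partworths)  # estimates should be near the assumed values 5, -2, 1
```

The fitted coefficients are the estimated utilities: price matters about twice as much as brand in this simulated example, which is exactly the kind of relative-importance conclusion conjoint analysis is designed to deliver.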
Q 5. a.) Frame a short questionnaire to identify social or economic impact of Covid 19 in your locality?
b.) What are the various sources of Secondary data?
Ans (a) How has the COVID-19 pandemic affected your employment status?
a. Lost job or laid off
b. Reduced working hours
c. No impact on employment
d. Other (please specify)
Have you experienced a decline in your income or financial resources due to COVID-19?
a. Yes
b. No
How has COVID-19 impacted your daily life and routines? (Select all that apply)
a. Restricted mobility and travel
b. Closure of businesses and services
c. Increased health and safety precautions
d. Changes in educational activities
e. Other (please specify)
Have you or anyone in your household experienced challenges in accessing essential goods and services during the pandemic?
How has COVID-19 affected your mental health and well-being?
a. Increased stress and anxiety
b. Feelings of isolation and loneliness
c. No significant impact
d. Other (please specify)
Have you received any financial assistance or support from government programs, NGOs, or community initiatives during the COVID-19 pandemic?
Are you aware of any local initiatives or programs aimed at supporting individuals or businesses affected by COVID-19?
How has COVID-19 impacted local businesses and the economy in your area?
a. Closure of businesses
b. Reduced consumer demand
c. Increased unemployment
d. Other (please specify)
Have you or anyone in your household contracted COVID-19?
In your opinion, what are the most significant social or economic challenges faced by your community as a result of COVID-19? (Open-ended question)
Thank you for participating in this questionnaire. Your responses will help us better understand the social and economic impact of COVID-19 in our locality.
Ans (b) Secondary data refers to data that has been previously collected by someone else for a different purpose but can be reused for research or analysis. There are several sources of secondary data, including:
Government and Official Sources: Government agencies and official organizations often collect and publish data on various topics. Examples include national statistical agencies, government reports, census data, economic indicators, and health data.
Research Studies and Academic Sources: Published research studies, dissertations, theses, and academic papers can serve as sources of secondary data.
These sources may provide access to data collected through surveys, experiments, or qualitative research.
Publicly Available Databases: There are numerous publicly accessible databases that provide secondary data. Examples include data repositories maintained by government agencies, international organizations, and research institutions.
These databases cover a wide range of topics, such as economics, health, education, and social sciences.
Online Platforms and Websites: Online platforms and websites often offer datasets and information that can be used as secondary data. Examples include data portals, data aggregators, open data initiatives, and online archives.
Commercial and Market Research Reports: Market research firms and commercial organizations gather and publish data related to market trends, consumer behavior, industry statistics, and demographic information.
These reports can be valuable sources of secondary data for business and market research.
Historical Records and Documents: Historical records, archives, diaries, letters, newspapers, and official documents can provide valuable historical data for research in fields such as history, sociology, and anthropology.
Media Sources: Newspapers, magazines, television broadcasts, and online media platforms can be sources of secondary data, especially for analyzing public opinion, news coverage, and media content.
Organizational Data: Many organizations maintain internal records, databases, and reports that can serve as sources of secondary data. Examples include sales records, customer data, financial reports, and performance metrics.
It is important to critically evaluate the quality, relevance, and reliability of secondary data sources before using them in research.
Researchers should consider the purpose of data collection, the methodology used, the representativeness of the sample, potential biases, and the currency of the data.
Q 6. a.) What are the conditions when t test, F test or Z test are used?
b.) A random sample of 100 recorded deaths in India during the past year showed an average life span of 71.8 years with a standard deviation of 8.9 years. Does this seem to indicate that the average life span today is greater than 70 years? Use a 0.05 level of significance.
Ans (a) T-test, F-test, and Z-test are statistical tests used in different scenarios based on the characteristics of the data and the research question. Here are the conditions when each test is typically used:
T-test:
T-tests are used when comparing means between two groups or conditions.
Conditions for using a t-test include:
- The variable of interest is continuous or approximately normally distributed.
- The samples being compared are independent or unrelated.
- The population standard deviation is unknown; for a two-sample t-test, the population variances are assumed equal (pooled t-test) or may differ (Welch’s t-test).
- The sample size is relatively small (typically less than 30) and the population standard deviation is unknown.
F-test:
F-tests are used to compare variances or test for overall group differences in an analysis of variance (ANOVA) framework.
Conditions for using an F-test include:
- The variable of interest is continuous.
- The samples being compared are independent or unrelated.
- The populations being compared have approximately normal distributions.
- The samples are taken from populations with approximately equal variances (homogeneity of variance is a standard ANOVA assumption).
Z-test:
Z-tests are used when comparing means or proportions when the population standard deviation is known.
Conditions for using a Z-test include:
- The variable of interest is continuous (for mean comparison) or categorical (for proportion comparison).
- The samples being compared are independent or unrelated.
- The population standard deviation is known.
- The sample size is relatively large (typically greater than 30) or the population distribution is approximately normal.
It’s important to note that the specific conditions and assumptions for each test may vary depending on the variations and extensions of these tests.
Additionally, the choice of test also depends on the research question and the study design.
It’s recommended to consult statistical resources or a statistical expert to determine the appropriate test for a specific analysis.
Ans (b) To determine whether the average life span today is greater than 70 years, we can conduct a one-sample t-test using the provided sample mean, sample standard deviation, sample size, and the given significance level.
Null hypothesis (H0): The average life span is 70 years.
Alternative hypothesis (Ha): The average life span is greater than 70 years.
Here are the steps to perform the hypothesis test:
Set the significance level (α): α = 0.05.
Calculate the test statistic:
t = (sample mean – hypothesized mean) / (sample standard deviation / sqrt(sample size))
t = (71.8 – 70) / (8.9 / sqrt(100))
t = 1.8 / (8.9 / 10)
t = 1.8 / 0.89
t ≈ 2.02 (rounded to two decimal places)
Determine the critical value:
Since the alternative hypothesis is “greater than,” it is a one-tailed test.
With a significance level of 0.05 and 99 degrees of freedom (sample size – 1), the critical value for a one-tailed test is approximately 1.66 (obtained from t-distribution table or statistical software).
Compare the test statistic with the critical value:
If the test statistic is greater than the critical value, we reject the null hypothesis.
2.02 > 1.66
The test statistic (2.02) is greater than the critical value (1.66), which means we reject the null hypothesis.
Draw a conclusion:
Based on the test, with a 0.05 level of significance, there is sufficient evidence to conclude that the average life span today is greater than 70 years.
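The steps above can be reproduced in Python (assuming SciPy is available):

```python
import math
from scipy import stats

sample_mean, mu0, sd, n = 71.8, 70, 8.9, 100

# Test statistic: t = (x̄ - μ0) / (s / √n)
t = (sample_mean - mu0) / (sd / math.sqrt(n))

# One-tailed critical value at alpha = 0.05 with n - 1 = 99 degrees of freedom
t_crit = stats.t.ppf(0.95, df=n - 1)

# One-tailed p-value
p = 1 - stats.t.cdf(t, df=n - 1)

print(round(t, 2), round(t_crit, 2), p < 0.05)  # → 2.02 1.66 True
```

Since t ≈ 2.02 exceeds the critical value 1.66, the null hypothesis is rejected, matching the hand calculation.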
Q 7. Differentiate between:
a. Quantitative and Qualitative Research
b. Phenomenology and Ethnography
c. Techniques of univariate data analysis
d. ANOVA and MANOVA
Ans (a) Quantitative Research:
- Quantitative research involves collecting and analyzing numerical data to answer research questions and test hypotheses.
- It follows a structured and standardized approach, often using large sample sizes, to ensure generalizability and statistical reliability.
- Quantitative research employs various data collection methods, such as surveys, experiments, and existing datasets, to gather data.
- Data analysis in quantitative research involves statistical techniques, including descriptive statistics, inferential statistics, regression analysis, and data modeling.
- The goal of quantitative research is to provide objective and measurable insights, identify patterns and relationships, and make generalizations about a population.
Qualitative Research:
- Qualitative research focuses on understanding and interpreting the experiences, behaviors, and meanings of individuals or groups through non-numerical data.
- It utilizes methods such as interviews, observations, focus groups, and document analysis to gather rich and detailed information.
- Qualitative research emphasizes the exploration of subjective perspectives, context, and social interactions to gain an in-depth understanding of the research topic.
- Data analysis in qualitative research involves coding, categorizing, and interpreting the collected data to identify themes, patterns, and relationships.
- The goal of qualitative research is to generate rich descriptions, uncover complexities, and provide a deeper understanding of social phenomena, often in a specific context.
Ans (b) Phenomenology and Ethnography:
Phenomenology: Phenomenology is a qualitative research approach that aims to understand and describe individuals’ lived experiences and their subjective interpretations of those experiences.
It explores the essence of phenomena from the perspective of the participants, focusing on their thoughts, emotions, and meanings attached to specific phenomena.
Ethnography: Ethnography is a qualitative research method that involves the in-depth study of a particular culture or social group.
It involves observing and immersing oneself in the context of the group being studied, collecting data through participant observation, interviews, and document analysis.
Ethnography aims to provide a rich and detailed understanding of the culture, social practices, and perspectives of the participants.
Ans c. Techniques of univariate data analysis:
Univariate data analysis refers to the analysis of a single variable at a time. Various techniques can be employed to analyze and summarize univariate data, including:
Measures of central tendency: Mean, median, and mode are used to describe the central or typical value of a variable.
Measures of dispersion: Range, variance, and standard deviation provide information about the spread or variability of a variable.
Frequency distributions: Creating frequency tables and histograms to summarize the distribution of a variable.
Probability distributions: Analyzing data using different probability distributions, such as the normal distribution, to make inferences and calculate probabilities.
Hypothesis testing: Conducting statistical tests, such as t-tests or chi-square tests, to assess the significance of differences or associations in univariate data.
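Most of these univariate techniques are available in Python's standard library; this sketch applies them to a small hypothetical sample:

```python
import statistics
from collections import Counter

data = [4, 7, 7, 8, 9, 10, 10, 10, 12, 15]  # hypothetical sample

central = {
    "mean": statistics.mean(data),      # 9.2
    "median": statistics.median(data),  # 9.5
    "mode": statistics.mode(data),      # 10 (most frequent value)
}
dispersion = {
    "range": max(data) - min(data),         # 11
    "variance": statistics.variance(data),  # sample variance (n - 1 denominator)
    "stdev": statistics.stdev(data),        # sample standard deviation
}
freq = Counter(data)  # frequency distribution of the variable

print(central, dispersion, freq)
```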
Ans d. ANOVA and MANOVA:
ANOVA (Analysis of Variance): ANOVA is a statistical technique used to compare means across multiple groups or conditions.
It determines whether there are statistically significant differences between the means by examining the variation between groups and within groups. ANOVA is often used when there are three or more groups to compare.
MANOVA (Multivariate Analysis of Variance): MANOVA is an extension of ANOVA that allows for the analysis of multiple dependent variables simultaneously.
It is used when there are multiple outcome variables, and it assesses whether there are significant differences between groups while controlling for the interrelationships among the dependent variables.
Both ANOVA and MANOVA are parametric statistical tests and require certain assumptions to be met, including normality of the data, homogeneity of variances, and independence of observations.
They are commonly used in experimental and observational research to analyze the effects of independent variables on multiple dependent variables.
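A one-way ANOVA of the kind described above can be run with SciPy; the three groups of exam scores here are hypothetical:

```python
from scipy import stats

# Hypothetical exam scores under three teaching methods
method_a = [72, 75, 70, 74, 73]
method_b = [78, 80, 79, 82, 77]
method_c = [85, 88, 86, 84, 87]

# H0: all three group means are equal
f_stat, p_value = stats.f_oneway(method_a, method_b, method_c)
print(f_stat, p_value < 0.05)
```

The large F statistic (variation between groups far exceeds variation within groups) and small p-value lead to rejecting the null hypothesis of equal means. MANOVA would extend this by testing several dependent variables jointly (available in, e.g., statsmodels).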