

ASSIGNMENT INSTRUCTIONS

Rationale for the study and objective: Information was provided in Weeks 5-8, which you used for the qualitative practical; please refer back to this. Remember that the whole point of this exercise is to show that you know how to write up a practical, and to demonstrate your knowledge and skills in psychometrics, survey design, quantitative analysis and presentation. This is not a test of your knowledge of CFS, so in writing your rationale there is no need to search for new material; you just need to provide a focused rationale. Do not forget to include the objective of the study, which was to ascertain the factor structure and reliability of the KACFSQ, a new measure of knowledge of and attitudes to CFS.

 

Method

This ought to have the following subsections: design, participants, materials and procedure. It was a quantitative survey design, including questionnaire development, with a questionnaire as the data-collection technique.

The participants are those people who filled in the questionnaires (not just the 10 you collected); you need to include the full sample from the data file. Report the sample size, gender and age, educational qualifications, and whether participants had, or knew people with, CFS.

Remember that the questionnaire was derived from a previous focus-group study into CFS.

 

Data Analysis

You will need to give some detail on the type of analysis performed. You may wish to explain that, in the initial analysis of 42 items, items were deleted if their communality was less than .3. You could also mention that a loading below .4 was not deemed sufficiently strong, and that items loading on more than one factor (cross-loaded items) are also worth mentioning.

 

Results

What to present

How much variance did the factors account for? What criterion was used to determine the number of factors? (This will involve mentioning what happened when using eigenvalues greater than one and the scree plot.) Name the factors and list the items in each factor in your final analysis. Do likewise for the reliability analysis.

Remember that tables need to be numbered (e.g. Table 1, Table 2, etc.) and given a meaningful title, and that the title goes at the top.

You should present a table of the final factors, the standardized loadings and the reliability of each factor. To help you decide how this should be laid out, please look at Table III (p. 671) in the SLQ paper (McBride et al., on your reading list).

 

Discussion

So you need to say that a factor analysis using [you insert the type of extraction and type of rotation analysis] was conducted on the 42-item questionnaire using the Kaiser criterion; this resulted in an X-factor solution, which was rejected in favour of a newer X-factor solution accounting for X% of the variance. Then name the factors (not the individual items) and say something briefly about how they relate to previous research that has looked specifically at CFS (so you may wish to write that previous authors have found etc., etc.).

Were there any surprising results? This may not be the case, but you may be surprised that some items did not have very high communalities or did not correlate with other items on the questionnaire, and that some items cross-loaded; consider why this may be the case. Maybe they were badly worded!

The limitations of this piece of research should be a feature of your discussion. Think about the type of analysis used: should principal axis factoring or another type of analysis (e.g. principal components analysis or maximum likelihood estimation) have been used? What about the type of rotation, or the eigenvalues-greater-than-one rule versus the scree plot? (Have a look at the paper on best practices in factor analysis.) Think about the sample: was the age range typical of the general population? Was there any gender imbalance? What about the sample size: was it sufficient, and if not, why not? What about the 5-point Likert scale that was used: would a 7-point scale have been better?

In directions for future research, you may wish to discuss the potential uses of a questionnaire that measures knowledge of and attitudes to people with CFS, and issues such as stigma or acceptance of illness. Perhaps there is a need to survey healthcare professionals to identify the attitudes they hold towards people with CFS?

 

Conclusion

This should be very brief; a couple of lines.

References

Please use the 6th edition APA format and ensure that all names that appear in the text are in this section.

Data Analysis for the quantitative practical 2019/20

The first stage in analysing a new questionnaire is to test its factor structure and reliability; first, however, it is necessary to clean the data and inspect it for missing data.

 

Data Cleaning and missing data

When you enter any data into SPSS, it is essential to check that the data are ‘clean’.

Missing responses on questionnaire items: this happens quite commonly when people complete questionnaires; they can skip questions by mistake. If it is just one or two items on a large questionnaire, the researcher may choose to replace the missing items with a mean score, use an imputation method in SPSS, or simply let SPSS ignore a missing case. If there are quite a few cases where a substantial number of items are missing, the researcher will need to check whether the wording of the items is ambiguous. If that is the situation in a new scale, the researcher may decide simply to remove those items. If using a standardized scale, it is a bigger problem and may mean that the researcher has chosen a poor or inappropriate measure for the sample.

Cleaning the dataset in a sample with hundreds or thousands of cases is very tedious. Usually the best way to start is to run a frequency analysis on the demographic data and questionnaire responses and request the maximum and minimum scores; this output will also show any missing cases. The frequency output is examined to see whether there are any unusual scores. For example, in the current study the questionnaire items were scored between 1 and 5, so any other value must have been entered wrongly. If this happens in your research project, you should go back to the original questionnaire to correct the error.
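If you would like to see the logic of this range-and-missing check outside SPSS, here is a minimal Python sketch. The case IDs, item names and scores are invented for illustration; they are not from the study data file.

```python
# Hypothetical data: three respondents, two items scored 1-5.
rows = [
    {"id": 1, "item1": 3, "item2": 5},
    {"id": 2, "item1": 7, "item2": None},  # 7 is outside the 1-5 range; item2 skipped
    {"id": 3, "item1": 2, "item2": 1},
]

def check_range(rows, items, lo=1, hi=5):
    """Flag every (case, item) pair that is missing or outside [lo, hi]."""
    problems = []
    for row in rows:
        for item in items:
            value = row.get(item)
            if value is None:
                problems.append((row["id"], item, "missing"))
            elif not lo <= value <= hi:
                problems.append((row["id"], item, "out of range"))
    return problems

print(check_range(rows, ["item1", "item2"]))
# [(2, 'item1', 'out of range'), (2, 'item2', 'missing')]
```

Each flagged entry would then be checked against the original paper questionnaire, just as described above.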

Factor analysis

Please ensure that you listen to the lecture on ‘factor analysis’ and the one on ‘how to conduct factor analysis in SPSS’; both are contained in the materials in Weeks 8-11.

 

In SPSS,

  • Go to Analyze → Dimension Reduction and click on Factor; move the questionnaire items 1 to 42 into the Variables box.
  • Click Descriptives, tick Initial solution, the correlation Coefficients, and KMO and Bartlett’s test of sphericity, then click Continue.
  • Click Extraction, click on Method and choose Principal axis factoring, then tick Scree plot and click Continue.
  • Click Rotation, tick Promax, then click Continue.
  • Click Options, tick Sorted by size, tick Suppress small coefficients and change this value to .4.
  • Click Continue and then OK.

 

Note: in the SPSS Extraction dialog, the maximum number of iterations for convergence is set at 25 by default. If you get a message in the output that the solution has not converged after 25 iterations, you can increase this number before re-running the analysis. SPSS will either state that the solution has converged after x iterations, or it will state:

“Attempted to extract X factors. More than 25 iterations required. Extraction was terminated…”.

Increasing the number of iterations for the initial extraction can often solve this problem. It may not arise in this data set, but it is worth noting for future reference.

Looking at the first run of the analysis

As you go through the output, you will see that using the Kaiser criterion (eigenvalues greater than 1) results in 12 factors, accounting for 61.29% of the variance; this is not a good solution, as there are several factors containing only one or two items.
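The arithmetic behind the Kaiser criterion can be sketched in a few lines of Python. The eigenvalues below are invented for illustration (a 10-item example), not the real output from this data set:

```python
def kaiser_retained(eigenvalues):
    """Return (number of factors with eigenvalue > 1, % variance they explain)."""
    retained = [e for e in eigenvalues if e > 1]
    # For a correlation matrix, the eigenvalues sum to the number of items,
    # so each factor's share of that sum is the proportion of variance it explains.
    pct = 100 * sum(retained) / sum(eigenvalues)
    return len(retained), round(pct, 2)

eigs = [4.0, 2.0, 1.5, 0.9, 0.6, 0.4, 0.3, 0.15, 0.1, 0.05]  # 10 invented eigenvalues
print(kaiser_retained(eigs))  # (3, 75.0)
```

In the real output, SPSS reports the same two figures in the ‘Total Variance Explained’ table.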

If you look at Cattell’s scree plot, you will see several ‘elbows’ that could be chosen as cut-off points for the number of factors to retain (possibly 4, 5, 6 or 7 factors in this data set). The scree plot is only a guide and is not conclusive in this instance.

In addition, inspection of the correlation matrix indicates that some of the questionnaire items have very low correlations, and the communalities show that some items have a low level of shared variance. It is quite tedious to search through the correlation matrix, so we can use the communalities to give an indication of how much of the variance in each variable has been explained by the analysis. As a rule of thumb, a variable with a communality of less than .3 may be excluded from the analysis. For this data set we will remove items whose communality (after extraction of factors) is less than .3 in the initial analysis of 42 items.
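The rule of thumb amounts to a simple filter over the communalities table. A minimal Python sketch, with invented item names and communality values:

```python
def drop_low_communality(communalities, cutoff=0.3):
    """Return the items whose extracted communality falls below the cutoff."""
    return [item for item, h2 in communalities.items() if h2 < cutoff]

# Hypothetical extract from a 'Communalities' table (Extraction column).
comm = {"item1": 0.55, "item2": 0.12, "item3": 0.42, "item4": 0.28}
print(drop_low_communality(comm))  # ['item2', 'item4']
```

The returned items are the ones you would remove before re-running the analysis.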

 

Re-run the factor analysis, but with fewer items this time. Now that we have made these decisions, it is necessary to rerun the factor analysis, removing the questionnaire items with communalities less than .3.

Some of the 42 items produced communality values (after extraction) lower than .3, so these should be removed before re-running the next exploratory factor analysis. Because we are still not clear on the optimal number of factors, we will continue to use the default Kaiser criterion (the eigenvalues-greater-than-one rule) and the scree plot as guides.

 

Go to Analyze → Dimension Reduction and click on Factor; remove the questionnaire items with communalities below .3, then click Continue and OK.

The output from the scree plot is inconclusive, whereas the Kaiser rule has extracted 11 factors with eigenvalues greater than 1.

We started with 42 items on the questionnaire; some have been rejected because of low communalities, and some will be suppressed because they do not load at .4 or above on any factor. You will notice that some factor loadings may have minus values; you do not need to do anything to the data file to change them.
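The ‘Suppress small coefficients’ option you ticked earlier behaves like the sketch below; note that the absolute value is used, which is why minus loadings need no special handling. The loadings are invented for illustration:

```python
def suppress_small(loadings, threshold=0.4):
    """Blank out loadings whose absolute value is below the threshold.
    abs() is used: a loading of -0.62 is just as salient as 0.62."""
    return [[x if abs(x) >= threshold else None for x in row] for row in loadings]

# Hypothetical pattern-matrix rows: three items loading on two factors.
matrix = [[0.71, 0.12], [-0.62, 0.25], [0.33, 0.48]]
print(suppress_small(matrix))
# [[0.71, None], [-0.62, None], [None, 0.48]]
```

In the SPSS output the suppressed cells simply appear blank.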

 

Cross loadings of items on Factors

At this stage, you should visually inspect the items that load on each factor and look for cross-loadings (items that load on more than one factor). When an item loads on two or three factors simultaneously, it can be discarded, as there is no clear indication of which factor it loads on best. In the situation where an item has a high loading on one factor but a low loading on another, you might decide to retain it under the highest loading (this is a somewhat subjective process). If you have theoretical reasons to suggest that the factors should be uncorrelated with each other (i.e. independent), then VARIMAX is more appropriate than PROMAX; PROMAX rotation assumes that the factors can, in theory, correlate with each other. At this point you have a decision to make and justify.

  1. Does theory allow for the possibility that the extracted factors are correlated? If yes, then it is justifiable to specify a correlated rotation (using PROMAX rather than VARIMAX).

    If you decide to do this, then you should inspect and report on the rotated PATTERN matrix. The PROMAX rotated solution will also give you an additional table of correlations among the extracted factors, so bear in mind that the output from a rotated solution is a little more complicated (i.e. you see a PATTERN matrix plus a STRUCTURE matrix and a FACTOR CORRELATION matrix). (Note: you can ignore reporting the details of the STRUCTURE matrix in the write-up.)
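The visual inspection for cross-loadings can be summarised as counting, for each item, how many factors it loads on at or above the threshold. A minimal Python sketch with invented item names and loadings:

```python
def classify_items(loadings, threshold=0.4):
    """Label each item 'clean' (one salient loading), 'cross-loaded'
    (two or more) or 'non-loading' (none), using |loading| >= threshold."""
    labels = {}
    for item, row in loadings.items():
        salient = sum(1 for x in row if abs(x) >= threshold)
        if salient == 1:
            labels[item] = "clean"
        elif salient > 1:
            labels[item] = "cross-loaded"
        else:
            labels[item] = "non-loading"
    return labels

# Hypothetical PATTERN-matrix rows for three items on two factors.
pattern = {"item1": [0.72, 0.10], "item2": [0.45, 0.52], "item3": [0.21, 0.33]}
print(classify_items(pattern))
# {'item1': 'clean', 'item2': 'cross-loaded', 'item3': 'non-loading'}
```

Cross-loaded items are the candidates for discarding; the borderline cases still call for the subjective judgement described above.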

 

Face validity (interpretation) of Factors

Have a look at the remaining items within each factor: do they have something in common? If so, give each factor a name; this is where potential subjectivity creeps in as well. The PATTERN matrix is the easiest one to interpret and report. Strong factors will have high standardized loadings (>.4) and contain at least three items. Factors with fewer than three highly loading items are not statistically strong and so should be discarded from the final set of questionnaire items. The items on each factor should have some thematic overlap in their wording in order to give the factor a meaningful label. Do not worry if you cannot identify a consistent theme, as some factors may not appear consistent or intelligible. The next part of the analysis may help you decide whether a factor is meaningful and statistically consistent.

 

Reliability Analysis

Sometimes researchers will wait until they have completed the reliability analysis before naming the factors. During reliability analysis, further items can be removed if you can show that they are having an undesirable impact on the value of Cronbach’s alpha (α). Do remember that you can use the discussion board for the quantitative practical to ask questions and discuss your results with other students. That is not to say that you should rely totally on the discussion board for answers; you are encouraged to source additional information on the topic and read more broadly around the issues of factor analysis.

 

 

Reliability analysis

You should go to the Week 9-11 content and view the lectures on reliability.

 

For the next stage, we are going to test the internal reliability (consistency) of each factor, producing a statistic called Cronbach’s alpha. It can be thought of as a sort of average correlation of all the items within the factor and ranges from 0 to 1; the closer alpha is to 1, the stronger the correlation between items. A value of .7 is acceptable and .6 questionable. Some researchers reject factors with a Cronbach’s alpha below .7, although there is some debate about this, and some authors suggest that an alpha as low as .5 can be used if a good case can be made for inclusion. In addition, in the early stages of developing a scale, factors with an alpha of .6 can be acceptable, so for the purposes of the quantitative practical you can accept .6. Note that Cronbach’s alpha should really only be computed when a factor has more than two items, so factors with only one or two items should be ignored for this purpose.
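To see what SPSS is computing, here is the standard Cronbach’s alpha formula, α = k/(k−1) × (1 − Σ item variances / variance of the total score), as a minimal Python sketch. The scores are invented (one list per item, respondents in the same order):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from per-item score lists.
    Population variances are used; the variance ratio is unchanged if
    sample variances are substituted throughout."""
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # each person's total
    item_var = sum(pvariance(scores) for scores in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Hypothetical factor of 3 items answered by 5 respondents on a 1-5 scale.
items = [[1, 2, 3, 4, 5], [2, 2, 3, 4, 5], [1, 3, 3, 4, 4]]
print(round(cronbach_alpha(items), 3))  # 0.955
```

Note the formula divides by k − 1, which is another way of seeing why alpha is not computed for a one-item factor.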

 

Go back to the SPSS data file and go to Analyze → Scale → Reliability Analysis. To test the reliability of the factors, put all the items that you retained under Factor 1 into the Items box. Click on Statistics, tick Scale if item deleted under Descriptives for, then click Continue. In the Scale label box, type Factor 1 and then click OK. Now repeat the process for the remaining factors.

 

Note: the items specified for each reliability analysis will be slightly different depending on whether you opted for VARIMAX or PROMAX rotation.

 

Discussion Area for Factor Analysis and reliability

Help each other and discuss your findings on the ‘Factor Analysis and reliability’ discussion board. By now you will have realised that naming the factors is not an exact science, and it is OK for you to have slightly different names from someone else, as long as the general idea underpinning the factor is the same. For example, for a factor about why some people get CFS, one person could call it ‘blame’ while someone else may call it ‘culpability’, etc.

 

Remember that the analysis you have conducted was exploratory in nature. In the real world, if you wanted to go on to test the hypothesis that the questionnaire had five scales, you would use confirmatory factor analysis (CFA) with independent data, a point you may wish to mention in the discussion.

 
