
DE300 Investigating psychology 3
Week 6 Study Guide
Contents
2. Exploring the influence of epistemological beliefs
Children learning
Further reading: Piaget and Vygotsky
Interview with Neil Mercer
Videos about developing a questionnaire
3. Methods and skills: Questionnaire design
Psychometrics
Stages of questionnaire design
Construct validity
Reliability
4. Methods and skills: Analysing questionnaire data
Revisiting relationships
Deciding which test to use
What is multiple regression?
How do we use it?
How do we write it up?
5. Independent project
Creating an online survey
References
2 of 34 Thursday 4 April 2019
2. Exploring the influence of
epistemological beliefs
Now read Book 1, Chapter 5, ‘Developmental psychology: cognitive development and
epistemologies’ by Kieron Sheehy.
Chapter 5, which is written from the perspective of developmental psychology,
considers the standpoints of two theorists, Piaget and Vygotsky, with regard to cognitive
development and epistemology. Epistemological beliefs are important in developmental
and educational psychology interventions, and personal epistemological beliefs
can be researched using questionnaires. The chapter provides evidence that
epistemological beliefs are important in education and offer particular insights in the
field of inclusive education.
When you have finished reading the chapter, return to this study guide to work through
the rest of this week’s tasks and activities.
Children learning
The video below shows recreations of some of the classic developmental psychology
tasks that are mentioned in Chapter 5. It begins with Piagetian conservation tasks and
compares the performances of younger with older children. This is followed by older and
younger children solving Piaget’s ‘three mountains’ task. The results of these tasks
support Piagetian views of the cognitive development of young, preoperational children.
The Piagetian tasks are followed by adapted versions of these tasks which, as you read in
Chapter 5, give the tasks greater ‘human sense’. These are Hughes and Donaldson’s
‘hiding from policemen’ task (1983), McGarrigle and Donaldson’s ‘naughty teddy’ task
(Donaldson, 1982) and Paul Light’s ‘chipped beaker’ task (Light et al., 1979).
Video content is not available in this format.
Children learning
Activity 1: Children learning
Allow about 30 minutes
In the video ‘Children learning’, how did the younger children perform in the adapted
tasks in comparison with the previous Piagetian ones?
What does this reveal about the younger children’s competence and egocentricity?
You might find it useful to make some notes in the text box below.
Provide your answer…
Further reading: Piaget and Vygotsky
Now select one of the two readings listed below. If you have previously studied child
development at level 2 (e.g. ED209 Child development or E219 Psychology of childhood
and youth), then choose Reading A. If you have studied other areas of psychology at level
2, then choose Reading B.
Activity 2: Further reading
Allow about 2 hours
Now read either:
Reading A: ‘Pedagogy’ by Harry Daniels (pp. 307–20 only). This covers the sections
‘Development’, ‘Key Elements in Vygotsky’s Approach to Pedagogy’, ‘Social Contexts
for Learning: Developmental Teaching’ and ‘Scaffolding’. This reading will build on
your previous study of Vygotsky. As you read this, identify how Daniels sees the
relationship between social and cultural influences and psychological development.
Important points to consider in this respect include spontaneous versus scientific
concepts, and factors that allow successful ‘scaffolding’.
Or:
Reading B: ‘Piaget and Vygotsky: many resemblances, and a crucial difference’ by
Orlando Lourenço. This paper draws together many aspects of Piagetian and
Vygotskian psychology. It is common for psychology texts, and hence psychologists, to
present the work of Vygotsky and Piaget as simply being in opposition. Orlando
Lourenço argues that there are several nuanced points of similarity between the two
approaches, and proposes that there is also a ‘generally ignored’ difference. Having
read this article, do you feel that Lourenço’s argument gives you a deeper insight into
the relative positions of the two theorists on key aspects of developmental theory? A
key issue to consider here is the degree of influence that the theorists ascribe to
external and internal factors.
Interview with Neil Mercer
In the audio in Activity 3, Professor Neil Mercer talks about his position on the relationship
between children’s talk and their cognitive development. He describes a ‘talk-based’
classroom intervention and the effects this had on children’s social and cognitive
development.
Activity 3: Interview with Neil Mercer
Allow about 50 minutes
Now listen to the interview with Neil Mercer. As you do so, consider the following
questions.
● Can you infer Professor Mercer’s beliefs about how children’s knowledge
develops and learning occurs?
● To what extent do you feel he could be seen as following constructivist and
social-constructivist perspectives?
Jot down your ideas in the text box below; then compare them with those in the
discussion.
Audio content is not available in this format.
Interview with Neil Mercer
Provide your answer…
Discussion
Professor Mercer’s discussion of his research suggests that he holds a
social-constructivist view of learning. This can be seen in his comments about the importance
of collaboration and the impact he sees the social context of language use as having
on children’s cognitive development. However, he also stresses the importance of
children working together to solve problems, which seems much closer to Piaget’s
later research concerning the importance of ‘equal’ peers working together.
Videos about developing a questionnaire
Having read Chapter 5 you will be familiar with epistemological questionnaires and the
type of items that they use. You will now watch a series of five short videos that illustrate
the development and piloting of this type of questionnaire in a survey to measure
teachers’ beliefs about the nature of knowledge and how children learn. The videos show
part of a research project in which two researchers develop a questionnaire to gather data
on East Javan teachers’ epistemological beliefs. The researchers were particularly
interested in understanding teachers’ beliefs about sign-supported communication
(Signalong Indonesia, introduced in Chapter 5). They hoped to develop a questionnaire
that could explore the relationship between teachers’ beliefs about how children learn
(and the nature of knowledge), inclusive education and the use of key word signing.
The videos show a real-life project to develop a questionnaire step by step. You will learn
how to do this in detail in this week’s Methods and skills section.
Activity 4: The steps in developing a questionnaire
Allow about 2 hours 20 minutes (in total to watch all five videos)
As you watch the five videos that follow, identify the extent to which the following steps
identified by Burgess (2001) are evident:
● defining the research aims
● identifying the population and sample
● deciding how to collect replies
● designing the questionnaire
● running a pilot survey
● carrying out the main survey*
● analysing the data*.
*These videos illustrate the process of developing a questionnaire as far as the pilot
study stage, so you will not see evidence of Burgess’s final two steps (carrying out the
main survey and data analysis).
Although the videos are only snapshots of the process, you may be able to develop some
critical questions about the research. For example, teachers are interviewed to explore
their beliefs about signing, and how children learn in inclusive classrooms. How
successful are the interviews in capturing issues that subsequently inform the
questionnaire items? Do you feel that there are cultural issues or influences that the
researchers might have missed in their initial interviews? You may have an opinion on the
influence of the translator and presence of a film crew on what is said by the participants.
Part 1: Introduction to Signalong and Signalong Indonesia
The first video in the series gives a background and context to the research. The
Indonesian education system is aiming to develop inclusive schools that are able to
accept and teach a diversity of children, including those with learning difficulties and
physical impairments. Consequently, children with severe learning difficulties, who would
previously have been at risk of exclusion from education, are able to attend mainstream,
inclusive classrooms. These children may experience significant issues in communication
with their peers and teachers. One approach that has been helpful in this context has
been key word signing. This video introduces one particular approach to key word signing
– Signalong (UK) – and reveals some of the beliefs that teachers and parents have about
it. Signalong Indonesia was developed based on the Signalong model (i.e. a key word
signing approach with one sign per concept) and introduced to support classroom
communication.
Allow about 45 minutes to watch this first video and make some notes.
Video content is not available in this format.
Part 1: Introduction to Signalong and Signalong Indonesia
Part 2: Researching the questions
There is some evidence to suggest that teachers’ personal beliefs about the effects of
sign-supported communication on children can prevent or facilitate its use in the
classroom and across schools. If Signalong Indonesia is to be used successfully then it is
important to understand the relationship between teachers’ beliefs about the effects of
signing, classroom practice and their own epistemological beliefs. The second video in
the series shows the researchers observing the use of Signalong Indonesia in two
schools, and speaking with teachers, parents and children to gain data for developing
initial items for a pilot questionnaire.
Allow about 50 minutes to watch this second video and make some notes.
Video content is not available in this format.
Part 2: Researching the questions
Part 3: Designing the questionnaire
One requirement when developing a useful questionnaire is that the items successfully
transmit the researchers’ intended meaning. This can be an issue even when the
researchers and participants share a common language and culture. It is highlighted in
Parts 3 and 4, as there are cultural differences and translation issues that the researchers
need to consider.
Allow about 20 minutes to watch this third video and make some notes.
Video content is not available in this format.
Part 3: Designing the questionnaire
Part 4: Piloting the questionnaire
The fourth video in the series shows the questionnaire being piloted at a conference.
There are some problems with piloting a questionnaire in this way; for example, the
participants might confer in answering the questions. Another issue, for researchers, is
the time-consuming nature of coding responses to a paper questionnaire, as opposed to
collecting responses online. On the positive side, the researchers are able to gain direct
feedback and ask a sample of participants about the questionnaire and their experience of
completing it.
Allow about 20 minutes to watch this fourth video and make some notes.
Video content is not available in this format.
Part 4: Piloting the questionnaire
Part 5: Reflecting on the pilot study
On returning to the UK, the researchers coded the pilot questionnaire responses and
carried out a preliminary analysis. This preliminary analysis would be used in developing
the final research questionnaire. The analysis appeared to suggest links between the
types of epistemological beliefs that teachers hold and their attitudes towards inclusive
education and the use of key word signing. The data did not, at first glance, appear to
support separate constructivist and social-constructivist beliefs, an issue which is
discussed in Chapter 5. This type of insight from questionnaires is potentially helpful in
developing teacher training on the use of key word signing in inclusive classrooms and
schools. However, like all pilot questionnaires, it will need further refinement in relation to
its reliability and validity; these qualities are discussed in Section 3 of this week.
Allow about 15 minutes to watch the fifth and final video in this activity and make some
notes.
Video content is not available in this format.
Part 5: Reflecting on the pilot study
Ethical issues
As in all psychological research, the development and use of questionnaires raises
potential ethical issues.
Activity 5: Ethical issues
Allow about 10 minutes
Can you identify at least two ethical issues from the five videos you have just watched
in Activity 4? Jot them down and then compare your answer with the one in the
discussion.
Provide your answer…
Discussion
You may have identified the following ethical issues:
● Using a filmed conference for the collection of pilot data might mean that the
participants felt obliged to complete the questionnaire.
● The questionnaire is quite lengthy and the researchers had to be able to justify
asking all of the questions. However, in a pilot study, the validity of particular
questions is not always known.
● One of the researchers was involved in developing Signalong Indonesia, which
may have influenced the responses participants felt able to give.
● One issue with collecting questionnaire data that is related to people’s
professional practice is that it might, potentially, be used against them. It is
therefore important to ensure that data is anonymised and data protection issues
are managed appropriately.
● When filming the development of a questionnaire, the impact of potentially being
able to assign particular beliefs to particular individuals needs to be considered.
You will explore ethical issues further in Week 14.
Trying out the pilot questionnaire
Extension activity: Completing the epistemological beliefs questionnaire
If you wish to, you can now complete the questionnaire that was developed in the East
Java films and see how your beliefs compare with those of the teachers from East
Java. There is an online consent form that mirrors the one used for the teachers. The
data will be collected and presented later in the module and may also be published at a
later date. In looking at this data you might consider the extent to which this
questionnaire is able to capture your own beliefs or make comparisons between two
different cultural contexts.
3. Methods and skills: Questionnaire
design
Measuring what’s ‘in the mind’
As you have learned, psychologists have a number of different ways in which they can
collect information about people’s behaviours – from naturalistic observation to
laboratory-based experiments. But what happens when they want to explore the more
esoteric aspects of human experience? What tools can psychologists use to measure
psychological constructs that are not directly observable (e.g. attitudes, beliefs, mood
and personality)?
Chapter 5 introduced you to a specific data-collection method that is designed to help you
to measure what goes on ‘in the mind’: the self-report questionnaire. Asking participants
directly about their feelings, experiences or thoughts in a standardised way can allow
psychologists access to the unobservable and can help to shed light on these hidden
processes.
However, questionnaires aren’t limited to measuring psychological constructs. They can
also be used to collect factual information about a person (e.g. gender, age and
education) and self-reported details of their behaviour (e.g. frequency/types of
behaviours). In addition, they can be useful for a number of different study designs.
Questionnaires can be used to collect rich qualitative data about someone’s experiences
or beliefs, and to collect quantitative data that aims to measure or quantify people’s
behaviours and experience in some way. (It is this latter type of questionnaire data that will
be the focus of this week.) As such, they are considered important tools for carrying out
psychological research.
This week’s methods materials will cover two aspects of research using questionnaires.
1. In this section, you will consider important aspects of questionnaire design itself.
2. In Section 4, you will explore a statistical technique that is often used on quantitative
data collected from questionnaires: multiple regression.
Psychometrics
Questionnaires can allow a researcher to gather a large amount of information from a
potentially large number of study participants. While this is a great advantage of this
technique (larger samples generally yield more precise estimates and greater statistical power), it is important to
remember that the information garnered is only ever as good as the tool used to collect it.
When carrying out questionnaire research, it is important to make sure the data you
collect is accurate and usable; the questionnaire must therefore be both valid and reliable.
This is such an important aspect of questionnaire design that there is an entire field of
study dedicated to it: psychometrics.
Psychometrics is a branch of psychology that is explicitly interested in the design,
administration and interpretation of questionnaires and scales that seek to quantify and
measure aspects of psychological ability and experience. Psychometric researchers
rigorously test existing scales and measures across thousands of participants in different
situations to ensure their reliability and validity. This is an extremely costly and
time-consuming process, so it would obviously not be appropriate (or possible) to do at this
stage of your research career.
A good tip when it comes to designing your own questionnaire is never to start entirely
from scratch. There is a wealth of questionnaires and surveys that have already been
thoroughly tested and standardised, so base your research on them, using and improving
established measures as necessary.
Stages of questionnaire design
When carrying out questionnaire research, there are typically five distinct stages that you
must go through to ensure your questionnaire design is sound.
1. Formulate a clear and precise research question, and identify the population (and
therefore the sample) you are interested in testing.
2. Define the psychological constructs you are hoping to measure with your
questionnaire.
3. Create your items/questions (thinking about question wording and suitable response
options).
4. Pilot your questionnaire (removing/changing any question items that are potentially
problematic).
5. Finalise your questionnaire by assessing it for reliability and validity.
Let’s look at each of these stages in more depth.
Step 1: Formulating a research question
It may seem obvious, but before you do anything else it is important to think carefully
about the overall research question you are hoping to address with your research. Let’s
take an example from Chapter 5. Towards the end of this week’s reading, you were
introduced to several pieces of research that sought to measure people’s beliefs about
epistemology. This is obviously a broad topic, which could be investigated in numerous
ways. So in order to narrow it down to a researchable subject, you first need to think about
exactly what it is about people’s epistemological perspectives you are interested in, and
why; and who you want your participants to be.
In Chapter 5, the researchers explicitly justified their population of interest and carefully
defined the research question they were attempting to address. Specifically, because
beliefs about how we acquire knowledge are likely to be related to education, they were
particularly interested in the question of whether teachers’ epistemological beliefs about
teaching and learning relate to their classroom practices.
This research clearly highlights two specific variables that they were interested in
measuring:
● epistemological beliefs about teaching and learning
● classroom practices.
Both variables then had to be carefully defined.
Step 2: Defining your constructs
Once you have identified a psychological construct that you want to measure, you need to
carefully define it so it can be operationalised. This might sound obvious, but it can also be
very tricky.
Activity 6: Construct definition
Allow about 5 minutes
To illustrate the complexity of construct definition, see if you can define the construct of
‘love’.
Provide your answer…
It’s hard, isn’t it? You may have come up with some broad and varied definitions covering
different aspects of the construct. For example, you may have thought about the physical
side of it; for example, the rush of endorphins that often accompanies attraction to
someone, or the deep feeling of warmth experienced when thinking about someone you
love. You may have thought about romantic love, the platonic love of a close friend, or love
for a child, parent or sibling (and possibly even the love of something such as chocolate!).
The point is that psychological constructs rarely have one single, agreed-upon definition.
So before trying to measure it, it is important to consider all of the different and varied
facets that make up the construct, in order to identify the specific aspects of it that you
want to measure.
So how do you come up with an operational definition for your construct? The best way to
do this is by carrying out a literature search on the topic and looking at the different
definitions that already exist. Because you want to make sure your questionnaire
accurately taps into the construct you are hoping to measure, you want to make sure that
you are covering as many of the different facets of your psychological construct as
possible. It may also help you to brainstorm this stage of the design with a small group of
people.
For example, think about the research question that looked at whether teachers’
epistemological beliefs about teaching and learning relate to their classroom practices.
How would you define what is meant by ‘epistemological beliefs’ and what are the
different aspects of this that teachers might have? Again, your definition may be wide and
varied. But a simple literature search would help you identify the following important
dimensions of this construct:
● teachers’ beliefs about students’ capacity for learning (or innate ability)
● teachers’ beliefs about the contribution of hard work and effort in learning (learning
effort)
● the extent to which teachers believe knowledge is primarily transmitted by authority
figures (authority/expert knowledge)
● how stable/reliable or changeable teachers believe knowledge to be (certainty
knowledge).
As such, researchers investigating this construct would probably want to include
questions/items that tap into each of these different facets. Once you have defined your
construct, and established the different dimensions that you want to try to measure, you
can then begin to build your questionnaire. If previous surveys/questionnaires have been
developed to tap into these dimensions, then including relevant items from them would be
a good starting point.
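Once the dimensions have been established, it can help to keep draft items organised by the facet of the construct they are meant to tap. A minimal sketch in Python (the item wordings below are invented for illustration, not items from the actual research):

```python
# Illustrative only: organising candidate questionnaire items by the
# construct dimension they are intended to tap. The item wordings are
# invented examples, not items from the published research.
dimensions = {
    "innate ability": [
        "Some students will never do well, no matter how they are taught.",
    ],
    "learning effort": [
        "Any student can master difficult material with enough hard work.",
    ],
    "authority/expert knowledge": [
        "Students learn best by accepting what the teacher tells them.",
    ],
    "certainty knowledge": [
        "What counts as true knowledge rarely changes over time.",
    ],
}

# Quick check that no facet of the construct has been left uncovered.
for dimension, items in dimensions.items():
    assert items, f"No draft items yet for: {dimension}"
print(f"{len(dimensions)} dimensions covered")
```

Keeping items grouped this way makes it easy to see at a glance whether every facet of the construct definition has at least one candidate item before piloting.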
Step 3: Creating your items
If you studied DE200, you will have learned a lot about how to design a good
questionnaire, how to ask different types of questions, and how to word your question
items or questions and the different response options that you can give to respondents. If
you’d like to, you can look at the DE200 materials on questionnaires and surveys.
When thinking about how to design your questionnaire, think carefully about how you
want to measure the construct you are interested in. If you want to try to quantify it in
some way, you need to give participants questions (or items) that tap into the different
facets of the construct and that allow them to respond in a manner that can easily be
coded.
A common method used when trying to measure psychological constructs is the Likert
scale (Figure 1), which was introduced in both DE100 and DE200. In this method,
participants are given a list of statements (or items) relating to the research construct and
they have to indicate the extent to which they agree with each one. Responses are then
placed along a balanced rating scale, which allows response options to be numerically
coded (usually from 1 to 5). For example:
Each item below is rated on a five-point scale: Strongly Disagree, Disagree, Neither
Agree nor Disagree, Agree, Strongly Agree.
● Getting ahead takes a lot of work
● Our abilities to learn are fixed at birth
● Anyone can figure out difficult concepts if one works hard enough
● The really smart students do not have to work hard to do well in school
Figure 1: Example of a Likert scale
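The numeric coding described above can be sketched as follows (a minimal illustration; the labels and items follow the example scale in Figure 1, and the participant’s responses are invented):

```python
# A minimal sketch of coding Likert responses numerically (1-5).
LIKERT_CODES = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neither Agree nor Disagree": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

# One (invented) participant's responses to the four example items.
responses = {
    "Getting ahead takes a lot of work": "Agree",
    "Our abilities to learn are fixed at birth": "Disagree",
    "Anyone can figure out difficult concepts if one works hard enough": "Strongly Agree",
    "The really smart students do not have to work hard to do well in school": "Disagree",
}

# Map each verbal response onto its numeric code for analysis.
coded = {item: LIKERT_CODES[label] for item, label in responses.items()}
print(coded)
```

Once responses are coded numerically like this, they can be summed or averaged into scale scores and passed to the statistical techniques covered in Section 4.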
The key is to come up with statements that directly represent, or tap into, the different
facets of the construct you have defined. Again, coming up with these items can be tricky.
The best way to do it is to look at existing questionnaires and surveys for inspiration and
then to brainstorm the definitions you have settled on.
Activity 7: Survey items and constructs
Allow about 5 minutes
Using the example of measuring epistemological beliefs, let’s consider an existing
questionnaire that has been designed to tap into teachers’ beliefs about knowledge
attainment.
See if you can match up the survey items with the dimensions of epistemological
beliefs identified on the previous page.
Interactive content is not available in this format.
Remember, the aim of your questionnaire is to produce a measure of a construct that is
both reliable and valid. As such, you want to make sure that all of your items appear
related to the construct you are trying to measure and its various facets. In addition, you
want to avoid opportunities for bias that might sneak into the survey design. So keep in
mind that question wording can influence the responses people give, and try to eliminate
response bias by reverse-phrasing some of your items.
For example, while a high agreement score on the question ‘If we try hard enough, then
we will understand the module material’ indicates a high belief in learning effort, you could
reverse-phrase this to something like: ‘No matter how hard we try, we will never
understand the module material’. In this case, a high agreement score would suggest the
participants have little belief in the fruitfulness of learning effort. Reverse-phrasing just
means that the respondents have to think a bit more about each question; and you can
easily weed out any unreliable participants who have just ticked ‘agree’ for all of the
questions, as their answers will appear to be inconsistent.
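Reverse-scoring a negatively phrased item is simple arithmetic: on a 1–5 scale, the reversed score is 6 minus the raw score, so 1 becomes 5, 2 becomes 4, and so on. A minimal sketch, using the module-material item pair from the example above:

```python
# Reverse-scoring a negatively phrased item on a 5-point Likert scale:
# reversed score = (scale maximum + 1) - raw score.
SCALE_MAX = 5

def reverse_score(raw, scale_max=SCALE_MAX):
    """Map a raw 1..scale_max response onto the reversed scale."""
    if not 1 <= raw <= scale_max:
        raise ValueError(f"Response {raw} is outside 1..{scale_max}")
    return scale_max + 1 - raw

# "If we try hard enough, then we will understand the module material"
positively_phrased = 5  # Strongly Agree -> high belief in learning effort

# "No matter how hard we try, we will never understand the module material"
negatively_phrased = 1  # Strongly Disagree -> also high belief in effort

# After reverse-scoring, a consistent participant's two scores agree.
assert reverse_score(negatively_phrased) == positively_phrased
```

A participant who mechanically ticked ‘Agree’ (4) for both items would score 4 and 2 after reverse-scoring, and that inconsistency is exactly what flags them as unreliable.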
Step 4: Piloting your questionnaire
Once you have designed your questionnaire, the next thing to do is to pilot it using a small
group of people. Ask participants for their feedback and their experience in giving their
responses. Is there anything they would change? Is there anything they didn’t
understand? If any items prove confusing or problematic, you should remove or change
them at this point.
Step 5: Finalising your questionnaire
After you have piloted your questionnaire, you are ready to assess it for reliability and
validity. (These concepts are also covered in DE200.)
● ‘Validity’ is the extent to which a measure is measuring what it set out to measure.
● ‘Reliability’ refers to the consistency or stability of a measure.
These two concepts are individually very important: a good questionnaire should be both
valid and reliable. But before going on to consider these concepts in isolation, it’s
important to think about their relationship with one another.
Reliability is actually an important component of validity. If a measure you are using is not
reliable, then the validity of your study will also be compromised. For example, if people
score very different results on a measure every time they take it, the measure is unlikely to
be tapping into anything stable or specific.
However, just because a measure is reliable, that doesn’t necessarily mean that it is also
valid. For example, imagine that someone has adjusted your bathroom scales so that they
show your weight as being one stone (6.35 kg) lighter than you actually are. The scales
are reliable – they are consistent and show the same weight every time you weigh
yourself – but they are not showing your true weight.
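The scales analogy can be made concrete with a tiny sketch (the weights are invented for illustration):

```python
# The bathroom-scales analogy: a measure can be perfectly consistent
# (reliable) while systematically wrong (not valid).
true_weight_kg = 70.0
bias_kg = -6.35  # the scales read one stone (6.35 kg) light

# Five weigh-ins on the biased scales.
readings = [true_weight_kg + bias_kg for _ in range(5)]

# Reliable: every reading is identical...
assert len(set(readings)) == 1
# ...but not valid: no reading equals the true weight.
assert all(r != true_weight_kg for r in readings)
```

The systematic bias never shows up as inconsistency, which is why reliability alone can never guarantee validity.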
The aim is to come up with a measure that is successful in both validity and reliability.
Hitting the target for reliability and validity
Remember, the reliability and validity judgements should refer to any specific measures
you want to use in your research. If your measures are meaningless, any subsequent
analysis will also be meaningless.
It is worth noting that a questionnaire may be designed to measure a single construct, or it
may be designed to tap into a number of different constructs, beliefs or behaviours. If this
is the case, the reliability and validity of each distinct concept (or variable) should be
considered separately.
Taking the epistemological beliefs example, you may want to consider the reliability and
validity of the questionnaire as a whole. However, as you may also want to explore the
different sub-constructs separately, you would also need to consider the reliability and
validity of the sub-measures (innate ability, learning effort, authority/expert knowledge and
certainty knowledge).
Construct validity
Figure 2: The different types of construct validity (a diagram showing construct validity
encompassing face validity, content validity and criterion-related validity, with
predictive, concurrent, convergent and discriminant validity as sub-types of
criterion-related validity)
So how can you tell whether your questionnaire is valid? At a basic level, you need to be
confident that your measure is assessing the psychological construct it is supposed to
measure – in other words, that the measure has construct validity.
It is possible to check for construct validity in several ways. One option is simply to judge
whether the measure appears to researchers and participants to reflect the construct of
interest. For example, does the test look as though it is measuring what you think it is?
This subjective judgement of validity is referred to as face validity. Obviously, just
because a measure appears to have face validity, this doesn’t mean it is actually valid.
However, face validity can be very important for participants. If they feel that the test being
administered lacks face validity, they may believe that they are being asked unnecessary
questions and not want to complete the questionnaire. (On the other hand, sometimes it’s
helpful for a measure to have low face validity so that participants are naïve to the
construct under investigation.)
Another related type of construct validity is content validity. This refers to whether a test
covers all of the different facets, or content domain, of the construct of interest. For
example, a measure of general intelligence will lack content validity if it assesses only
mathematical intelligence and does not cover verbal intelligence. In some instances the
content domain of the construct is clear, but in others (e.g. creativity – which you will learn
more about next week) there is disagreement about what constitutes the construct,
making it difficult to determine whether a measure has content validity.
A more objective measure of construct validity is called criterion-related validity. This
involves examining the relationship between scores on the measure and scores on some
other outcome, or criterion. There are a number of different sub-types of criterion-related
validity:
● Sometimes the aim of research is to predict future behaviour. In these cases, where
we look at the relationship between our measure and a criterion of future behaviour,
we are assessing predictive validity. For example, if you apply for a job, you may be
given psychometric tests. The employers use these particular tests because they
have predictive validity – they are good at indicating future performance in the job.
● Alternatively, we can look at the relationship between the measure and a concurrent
criterion (i.e. something measured at the same time, or concurrently). This is known
as concurrent validity. For example, if you were developing a new test of
intelligence, you could examine the association between scores on the new test with
scores on a well-established intelligence test given at the same time. If the new test
is valid, you would expect that the scores correlate to a high degree.
● Concurrent and predictive validity are related to something called convergent
validity. Measures that are designed to assess the same construct should all be
related to each other. In other words, measures of the same, or similar, constructs
should ‘converge’. Different intelligence measures should correlate with one another,
and with related variables (e.g. academic attainment).
● In contrast, but equally importantly, we need to determine whether our measure is
distinct from other measures that it should not be related to. This type of validity is
known as discriminant validity. This may sound strange and unnecessary, but it is
important to know that our measure is distinct from measures of other constructs,
and that we can discriminate between the different measures. A measure of a
construct such as intelligence, for example, should not be highly correlated with a
measure of a theoretically distinct construct such as self-esteem.
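At heart, convergent and discriminant validity are patterns of correlations. As a rough illustration (in Python, with invented data), two tests of the same construct should correlate highly with each other but only weakly with a measure of a theoretically distinct construct:

```python
# Illustrative sketch (invented data): two intelligence tests share an
# underlying signal, while self-esteem is a distinct construct.
import numpy as np

rng = np.random.default_rng(42)
true_intelligence = rng.normal(100, 15, 200)

test_a = true_intelligence + rng.normal(0, 5, 200)   # same construct
test_b = true_intelligence + rng.normal(0, 5, 200)   # same construct
self_esteem = rng.normal(50, 10, 200)                # distinct construct

convergent_r = np.corrcoef(test_a, test_b)[0, 1]        # should be high
discriminant_r = np.corrcoef(test_a, self_esteem)[0, 1]  # should be near zero

print(f"convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")
```

The high first correlation and near-zero second correlation are exactly the 'converge' and 'discriminate' patterns described above.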
Test yourself
Activity 8: The different types of validity
Allow about 10 minutes
Match each of these definitions to its corresponding type of validity:
The degree to which a measure is not associated with measures of different
constructs.
The degree to which a measure is associated with other measures of the same
construct
The degree to which a measure is related to an outcome measure, assessed at
the same time.
The degree to which a measure can predict future behaviour.
The degree to which a measure is related to another outcome measure of the
construct of interest
How well a measure looks as though it is measuring the theoretical construct of
interest.
How well a measure is measuring the theoretical construct of interest.
Match each of the items above to an item below.
Discriminant validity
Convergent validity
Concurrent validity
Predictive validity
Content validity
Face validity
Construct validity
Reliability
Reliability refers to the consistency and stability of a measure. It should be fairly obvious
why this is such an important quality for psychological measures. What’s the point in using
a measure if it gives you different results each time you use it? In an ideal world,
psychological measures would be 100 per cent accurate at measuring a participant’s real
score on the variable of interest. However, in the real world it is common to have some
measurement error, and we try to minimise this error.
Reliability can be assessed in different ways, including test–retest reliability, internal
consistency reliability and inter-rater reliability. Each usually involves calculating a
number, called a coefficient, which indicates the strength of the reliability.
Test–retest reliability
Test–retest reliability is determined by administering a measure on two separate
occasions and looking at the relationship between the two scores. This is essentially done
using correlation. Hopefully correlational analysis is something that is quite familiar to you
now. As you learned on both DE100 and DE200, correlations allow you to investigate the
relationship between two variables that are measured at the interval or ordinal level.
Correlation directly compares the scores of one variable with the scores of another to see
if there is a relationship between them. If participants score similarly on both occasions,
the correlation will be strong – so there is high test–retest reliability. If the scores are quite
different, the correlation will be weak – so there is low test–retest reliability. (If you need to,
you can revise the material on correlation.) You may sometimes see test–retest reliability
referred to as ‘temporal consistency’ (i.e. consistency over time).
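If you work with data outside SPSS, the correlation behind test–retest reliability is a one-line calculation. A minimal sketch (scores invented for illustration):

```python
# Invented scores from two administrations of the same questionnaire.
import numpy as np

time1 = [12, 18, 25, 31, 22, 15, 28, 20]
time2 = [14, 17, 27, 30, 21, 16, 26, 22]  # similar scores at retest

r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")  # close to 1 = high temporal consistency
```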
However, there is a potential issue with repeated testing – a participant’s performance at
the second testing session may be influenced by how they performed the first time. For
example, a participant may improve their scores the second time around through practice,
or they may perhaps remember their previous responses and give those instead of
responding honestly, or maybe they have a better idea of what the test is trying to
measure and give answers to reflect this.
To address this potential issue, some psychological tests have two different, but
equivalent, versions (or forms). The individual items or questions in the two versions are
different, but they are all measuring the same underlying psychological construct. When
researchers assess the consistency between the two versions of the same test, this is
referred to as alternate or parallel forms reliability.
Internal consistency reliability
Of course, it’s not always possible to collect data from participants on two occasions, so
researchers often have to assess reliability using data collected on just one occasion. This
is possible since psychological measures are usually composed of several items (or
questions); researchers can compare participants’ performance on items with other items
in the measure that are meant to assess the same psychological construct. This is
referred to as internal consistency reliability – how reliable is the measure within itself?
One form of internal consistency reliability is split-half reliability. In this case, the items
on a questionnaire are randomly split in two. Participants’ scores on both halves of the
questionnaire are then calculated and compared – again, using correlation. If a scale is
very reliable, their scores on both sets of items should be very similar and highly
correlated. If there are large discrepancies between the scores, then the measure is
unlikely to be reliable.
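As a sketch (again with invented item data), the split-half procedure looks like this:

```python
# Split-half reliability sketch: randomly split the items in two, sum each
# half per participant, and correlate the two half scores.
import numpy as np

rng = np.random.default_rng(0)
ability = rng.normal(0, 1, 100)                           # true level per person
items = ability[:, None] + rng.normal(0, 0.8, (100, 10))  # 10 noisy items

order = rng.permutation(10)            # one random way of splitting the items
half_a = items[:, order[:5]].sum(axis=1)
half_b = items[:, order[5:]].sum(axis=1)

split_half_r = np.corrcoef(half_a, half_b)[0, 1]
print(f"split-half r = {split_half_r:.2f}")
```

Running this with a different `order` gives a somewhat different coefficient, which is exactly the problem with the split-half method described next.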
However, there is an issue with the split-half method: the items on a questionnaire can be
split in two in a variety of different ways, and the way in which they are split can result in
dramatically different split-half reliability scores. To overcome this problem, an alternative
measure of reliability can be used: Cronbach’s alpha.
If you need to, you can revisit the DE200 materials on how to use Cronbach’s alpha. But,
to recap, this measure is effectively equivalent to calculating the split-half reliability for
every possible split of the items, and then taking the average of these scores
(although in practice, that's not actually how it is calculated). As with any reliability measure,
the higher the value (which will range from 0 to 1), the more reliable the measure.
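Cronbach's alpha can also be computed directly from its textbook formula, alpha = k/(k−1) × (1 − sum of item variances / variance of total score). A short sketch with invented item data:

```python
# Cronbach's alpha from its standard formula (invented data).
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(0, 1, 100)
items = ability[:, None] + rng.normal(0, 0.8, (100, 10))  # 10 items, 100 people

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)        # variance of each item
total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")  # closer to 1 = more internally consistent
```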
An additional technique, item-total correlation, determines the correlation between
individual items and the overall total score on the test or measure. Using these
correlations allows researchers to identify which individual items are particularly
unreliable and appear to be unrelated to the measure as a whole; these unreliable items
can be removed from the questionnaire to improve its reliability. (The
Cronbach’s alpha tutorial also demonstrates how this is done.)
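A sketch of item-total correlation, with one deliberately unrelated 'bad' item included to show how it gets flagged (data invented for illustration):

```python
# Item-total correlation sketch: correlate each item with the total score.
import numpy as np

rng = np.random.default_rng(2)
ability = rng.normal(0, 1, 100)
good_items = ability[:, None] + rng.normal(0, 0.8, (100, 5))
bad_item = rng.normal(0, 1, (100, 1))          # unrelated to the construct
items = np.hstack([good_items, bad_item])

total = items.sum(axis=1)
item_total_r = [np.corrcoef(items[:, i], total)[0, 1] for i in range(6)]
# the bad item's low correlation flags it as a candidate for removal
print([round(r, 2) for r in item_total_r])
```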
4. Methods and skills: Analysing questionnaire data
As you have seen in Section 3, questionnaires can be a very useful tool when you are
interested in collecting information about natural behaviours, attitudes and psychological
constructs.
But the question is: how do you analyse this type of data? Well, of course, this will depend
on the type of research question you have and on the type of questionnaire you have
designed. But, commonly, questionnaires are used to quantify (or measure) different
aspects of behaviour and/or experience to see how they relate to one another.
For example, you might want to see:
● how teachers’ existing epistemological beliefs are related to the different teaching
methods that they use
● how different natural behaviours (such as frequency of exercise or frequency of
eating sugary foods) are related to levels of anxiety or depression
● whether someone’s IQ and hours spent revising may predict their exam score.
All of these examples involve measuring existing aspects of human experience –
psychological constructs (e.g. IQ, anxiety, happiness), or natural behaviours (e.g. how
often someone does something), or someone’s experiences/beliefs – and exploring the
relationships between them.
This is quite different from what you have been learning in the methods sections over the
last five weeks. Until now, your learning has concentrated on how experiments allow you
to investigate the effect one (or more) independent variable(s) (IV) has on a dependent
variable (DV). In these circumstances, participants’ performance (i.e. the DV) is measured
and compared under different experimental conditions. In this situation, each IV is a
categorical (or nominal) variable and the DV is measured at the interval level.
So what happens in situations when you are unable to manipulate your IVs in an
experimental way, or when both your IVs and DVs are measured at the interval or ordinal
level? Can you still investigate the relationships between these types of variables, even
when you have not manipulated them directly? Of course you can!
Revisiting relationships
You have already learned about two methods that allow you to look at the relationship
between two naturally occurring interval (or ordinal) variables on DE200:
● Correlation tells you the strength, direction and statistical significance of the
relationship between two variables.
● Simple linear regression allows you to model the relationship that exists between two
variables and make predictions based on that model.
Of the two, simple linear regression is generally more useful, as it allows you to determine
the nature and significance of the relationship between your variables and make
predictions about one variable based on the other. However, as with correlation, you are
still limited by the fact that you can only investigate two variables at a time. The problem is
that we know from general experience that human beings are complicated creatures and
things rarely happen to people in isolation. It is unlikely that you would really be able to
successfully model human behaviour or experience with only two variables. Therefore,
exploring the relationship between two variables alone is likely to oversimplify things,
which limits the usefulness of the analysis.
Activity 9: Identifying possible predictor variables
Allow about 5 minutes
Let’s illustrate this with a simple example. Imagine you are a researcher interested in
academic achievement. You might be interested in modelling and predicting likely
exam outcomes, but how do you decide which variables to measure and include in
your model? A sensible place to start is to think about the variables you believe are
likely to relate to exam scores.
Take a moment to think about this question. Why do you think some students do better
in exams than others? What factors are likely to influence exam performance?
Provide your answer…
Discussion
This is obviously a huge subject area, with a number of different possible factors
affecting exam outcome. For example, you might have said that it could have
something to do with:
● how hard students revised
● how much they enjoyed the subject
● the teaching ability of their lecturer
● their general intelligence level
● how much sleep they had the night before
● the degree of exam anxiety they experience.
The point is that how well someone does in an exam is likely to be based on more than
one single variable. In reality, all of the above factors (and many more besides) are likely
to play some role in someone’s performance in an exam. In a similar manner, psychology
researchers often want to look at how more than one IV (or predictor variable) relates to
an outcome variable (or DV).
Deciding which test to use
Imagine you wanted to investigate the relationship between some of the predictor
variables identified in Activity 9 and exam performance. You could give participants a
questionnaire comprising questions designed to measure these different variables. But
then what?
You could, in theory, use correlation or simple linear regression to investigate the
relationship between each of these predictors and the outcome (exam performance), but
you would be limited to looking at only one predictor at a time. For example, you could use
correlation to study the relationship between participants’ exam scores and how hard they
revised. You could also use correlation to investigate the relationship between
participants’ exam scores and how much they enjoyed the subject matter. However, you
could not find out how exam score is related to both their revision intensity and subject
enjoyment, or how the different predictors interact to predict exam outcome. To do this,
you need to use a different test altogether.
Activity 10: Statistical decision tree
Allow about 5–10 minutes
Looking at this decision tree, what test do you think you would need to use to establish
a predictive model of exam scores that allows for the simultaneous inclusion of several
predictor variables at once?
Click on the tree to work out the answer.
Interactive content is not available in this format.
Statistical decision tree
Discussion
You would need to use a multiple regression, because you are investigating the
relationships between more than two variables (i.e. two predictors and one outcome).
What is multiple regression?
Like correlational analysis, regression is concerned with the relationship between
variables. But while correlation is just used to describe the relationship between two
variables (i.e. description), regression can be used for prediction.
Sometimes we want to make predictions that extend beyond our current data
range (extrapolation), or estimate a value within our data range (interpolation).
Regression gives us the mathematical tools to do that.
As with ANOVA, there are many different types of regression. You have already covered
the fundamentals of regression in DE200, although this was restricted to simple linear
regression. If you wish, you can revisit the DE200 material on regression.
But this week’s focus is on multiple regression: ‘multiple’ because it looks at the effect of
multiple predictor variables on an outcome variable simultaneously. Multiple regression
builds on simple linear regression.
How do we use it?
To try to explain how and when you would use multiple regression, let’s go back to the
exam performance example. Imagine you were only interested in how revision intensity
predicted (or related to) exam outcome. You could measure the number of hours of revision
participants had done in the weeks preceding their exams and
then relate this to their actual exam performance. Imagine you ran such a study and found
the following linear relationship between these two variables:
[Scatterplot: exam score plotted against hours spent revising, with both axes running from 0 to 100 and the points showing a positive linear trend.]
Figure 3: The relationship between hours spent revising and exam score
You could investigate the relationship between these two variables by running a simple
correlation, which in this case would give you a correlation coefficient of r = 0.54 (which
represents a strong relationship between the two variables). You can use the following
cut-offs to interpret the strength of the correlations:
● 0.8 or more (or –0.8 and below) can be seen as a very strong relationship
● 0.5 or more (or –0.5 and below) represent a strong relationship
● 0.3 or more (or –0.3 and below) show a medium relationship
● between 0.3 and –0.3 shows a weak relationship.
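These rule-of-thumb cut-offs can be written as a small helper function:

```python
# The correlation-strength cut-offs above, as a simple classifier.
def correlation_strength(r):
    """Classify a correlation coefficient r using the cut-offs above."""
    if abs(r) >= 0.8:
        return "very strong"
    if abs(r) >= 0.5:
        return "strong"
    if abs(r) >= 0.3:
        return "medium"
    return "weak"

print(correlation_strength(0.54))  # -> strong
```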
But let’s think about this relationship another way. Remember, the key aim of regression is
to try to account for (or explain) the variation that you see in your data. So is there a way of
looking at the proportion of the variance in these two variables that co-vary (that is, occur
together)? In fact, yes, there is – and this is actually something you can calculate directly
from the correlation’s r value in just two simple steps:
1. First you need to square it, which in this case would be r2 = 0.54 x 0.54 = 0.29
2. Then you multiply it by 100 to give you the percentage of variance that is shared by
the two variables = 29%.
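As a quick check, the two steps can be done in a couple of lines:

```python
# The two steps above: square r, then convert to a percentage.
r = 0.54
r_squared = r ** 2                   # step 1: 0.54 x 0.54 = 0.2916
shared_variance = r_squared * 100    # step 2: percentage of shared variance
print(f"{shared_variance:.0f}% shared variance")  # -> 29% shared variance
```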
This could be illustrated in a Venn diagram as follows:
[Venn diagram: two overlapping circles labelled 'Revision Intensity' and 'Exam Score'. The overlap represents the variance accounted for by the relationship between exam score and revision intensity (29%); the remaining region of each circle represents variation in that variable that has nothing to do with the other.]
Figure 4: The relationship between exam score and revision intensity
In this example, 29 per cent of the variation in exam scores is accounted for by time spent
revising. In other words, if you know that this is how long a person spent revising for their
exam, you know about 29 per cent of what you need to know to make an accurate
prediction about what their exam score will likely be.
Multiple regression gives you a way of potentially accounting for more variance in your
data, as it allows you to add multiple predictor variables into your model. As such, it
should give you a way of making a more accurate prediction about your outcome variable
(DV) of interest. For example, imagine you added another variable into your exam score
analysis: how much sleep people had the night before the exam. If you wanted to
investigate the relationship between these three variables, you might see something like
this:
[Venn diagram: the 'Exam Score' circle overlaps separately with 'Revision Intensity' (29% of exam score is accounted for by how much revision participants had done) and with 'Hours of Sleep' (9% of exam score is accounted for by how much sleep the participants had the night before the exam); the two predictor circles do not overlap each other.]
Figure 5: The relationship between exam score and two unrelated variables: hours of
sleep and revision intensity
In this case, you can see that the amount of sleep participants had the night before the
exam accounted for about 9 per cent of the variance in exam scores. By adding this
variable to your study, you have increased the variance in exam scores you can explain
from 29 per cent to 38 per cent. In other words, by using two variables rather than one,
you have improved your ability to make accurate predictions about a person's likely exam
outcome.
Again, the proportion of variance shared between hours of sleep and exam score can be
calculated directly from the correlation coefficient between these two variables (r = 0.3,
so r2 = 0.09, i.e. 9 per cent) and then simply added to the previously obtained 29 per cent. But this
method of adding the proportions together is only possible when the two predictor
variables are not related to one another. This won’t always be the case in multiple
regression. Let’s consider a slightly different example to think about this. Instead of
investigating hours’ sleep, let’s measure how much the students enjoyed the subject they
were studying. In this case, we might get something like this:
[Venn diagram: three overlapping circles labelled 'Revision Intensity', 'Subject Enjoyment' and 'Exam Score'. 16% of exam score is accounted for by how much revision participants had done, but is unrelated to their enjoyment of the subject; 21% of exam score is accounted for by subject enjoyment, but is unrelated to revision intensity; 13% of exam score is accounted for by an interaction between subject enjoyment and how much revision participants had done; and 8% of revision intensity is accounted for by subject enjoyment, but is unrelated to exam score.]
Figure 6: The relationship between exam score and two related variables: subject
enjoyment and revision intensity
In this case, the relationships between the variables are more complicated than before.
Sixteen per cent of the variance in exam scores is related to revision intensity alone, 21 per
cent to subject enjoyment alone, and 13 per cent to an interaction between
the two predictor variables. Together, these raise our ability to predict/explain
exam scores to 50 per cent!
However, in this example you wouldn’t be able to calculate the shared variance between
all of the variables using correlation alone as there is overlap between all three, and
correlation only allows you to look at two variables at a time.
Instead, you would need to produce a statistic called ‘R’ (the capitalisation is important
here). This is a very similar statistic to r, and can be interpreted like any regular correlation
coefficient – but in this case it tells you the strength of the combined relationships between
all of the predictor variables (or IVs) and the outcome variable (DV). As before, this can be
squared to give you a single value (R2) that tells you how much of the variance in your DV
is accounted for by all of your IVs combined. Don’t worry though, you don’t have to know
how to calculate this from scratch: SPSS will do it for you!
What does regression do?
Regression does three important things:
1. It calculates how well your model fits the data. It does this by establishing the
proportion of variance in your data that can be explained by all of the predictor
variables in the model (denoted by the symbol R2). This is a number
between 0 and 1, which can be multiplied by 100 to represent the percentage of
variance that your model explains.
2. It establishes whether your model is significant. It essentially does this using
an F-ratio, similar to that used in ANOVA. The larger the F-ratio, the greater the
improvement in predictive accuracy of your model, and the smaller the amount of
unsystematic residual variance (or error). The larger the F-ratio gets, the more likely
it is that the new model is a significant improvement on simply using the mean to
predict the score of the outcome variable. (If you need to revisit the way this works,
you can access the DE200 material on regression.)
3. It produces a mathematical equation (i.e. model) that allows you to make specific
predictions about one variable from the others.
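If you want to see where R2 and the F-ratio come from, they can be computed by hand from a fitted model. The sketch below uses invented data and plain least squares (in practice SPSS does all of this for you):

```python
# Hedged sketch (invented data): fit a two-predictor regression with
# ordinary least squares, then compute R2 and the F-ratio by hand.
import numpy as np

rng = np.random.default_rng(3)
n = 50
revision = rng.uniform(0, 60, n)
enjoyment = rng.uniform(0, 100, n)
exam = 20 + 0.3 * revision + 0.5 * enjoyment + rng.normal(0, 8, n)

X = np.column_stack([np.ones(n), revision, enjoyment])  # intercept + 2 IVs
betas, *_ = np.linalg.lstsq(X, exam, rcond=None)

predicted = X @ betas
ss_model = ((predicted - exam.mean()) ** 2).sum()   # variance explained
ss_resid = ((exam - predicted) ** 2).sum()          # unexplained (error)
ss_total = ((exam - exam.mean()) ** 2).sum()

r_squared = ss_model / ss_total                      # (1) model fit
k = 2                                                # number of predictors
f_ratio = (ss_model / k) / (ss_resid / (n - k - 1))  # (2) significance test
print(f"R2 = {r_squared:.2f}, F({k}, {n - k - 1}) = {f_ratio:.1f}")
# (3) the predictive equation itself:
print(f"exam = {betas[0]:.1f} + {betas[1]:.2f}*revision + {betas[2]:.2f}*enjoyment")
```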
The regression equation
So, you have a statistic that allows you to establish whether or not your IVs are effective
predictors of your DV. Now you can take the next step and actually use your model to
make predictions.
You may remember that, with simple linear regression, as the relationship between two
variables can be modelled by a line of best fit on a graph, the formula for a predictive
model is based on the equation for a straight line:
Y = B0 + B1X + ε
where:
● Y = the predicted value of Y (which is your outcome, or DV)
● B0 represents the y intercept, and is the predicted value of Y when X = 0 (in other
words, the value at which the line meets the y axis)
● B1 represents the gradient (slope) of the line, and can be interpreted as the
change in the value of Y for a unit change in X
● X = the score on your predictor variable (or IV) from which you are trying to predict Y
● ε is added at the end to represent the error in the model (because in real life no
model is ever likely to be perfect).
The formula for multiple regression is just an extension of this, where:
Y = B0 + B1X1 + B2X2 + … + BkXk + ε
In this case, the subscript numbers assigned to the X and beta values indicate that they
represent different predictor variables.
● X1 = a score on your first IV for which you are trying to predict a value of Y (revision
intensity)
● X2 = a score on your second IV for which you are trying to predict a value of Y
(subject enjoyment)
and so on. In addition, B1, B2 and Bk are analogous to the slope of the linear relationship
between each related IV and the DV. They are known as regression coefficients and
can be interpreted in the same way as the gradient in simple linear regression. So if B1 = 2.5,
this would indicate that Y increases by 2.5 units when X1 increases by 1 unit (with the
other predictors held constant).
New beta and Xs are added into the equation for each IV in the regression model.
Making predictions
To see how a regression equation can be used to make predictions, let’s go back to our
example. Imagine you are an educational psychologist and you want to try to predict your
students’ likely exam scores based on their interest in the subject and the amount of
revision they have done.
Let’s plug these variables into the regression equation:
Y = B0 + B1X1 + B2X2 + … + BkXk + ε
Exam score = B0 + B1 revision intensity + B2 subject enjoyment + ε
While simple linear regression allowed you to identify the values of your betas yourself (by
looking at a line graph of your two variables and working out the y intercept and the
gradient of the line), multiple regression is more complicated than that. In this case, SPSS
produces these numbers for you (we’ll come back to this).
So, imagine you ran a multiple regression analysis and you found the following values of
beta:
B0 = 20.65
B1 = 0.29
B2 = 0.67
If you plugged these values into your regression equation, you would get the following
predictive model:
Y = B0 + B1X1 + B2X2 + ε
Exam score = B0 + B1 revision intensity + B2 subject enjoyment + ε
Exam score = 20.65 + (0.29*revision intensity score) + (0.67*subject enjoyment
score)
Activity 11: Making predictions
Allow about 5 minutes
Imagine that you collected revision intensity and subject enjoyment scores from one of
your students. They scored the following:
Revision intensity = 50 hours
Subject enjoyment = 75 per cent
Using the equation above, what exam score would you expect them to get?
Provide your answer…
Answer
Exam score = 20.65 + (0.29*revision intensity score) + (0.67*subject enjoyment score)
Exam score = 20.65 + (0.29*50) + (0.67*75)
Exam score = 20.65 + 14.5 + 50.25
Exam score = 85.4 per cent
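The same prediction can be expressed as a small function, using the beta values from the example:

```python
# The worked prediction above, as code (betas taken from the example).
def predict_exam_score(revision_intensity, subject_enjoyment):
    """Predictive model: exam = B0 + B1*revision + B2*enjoyment."""
    b0, b1, b2 = 20.65, 0.29, 0.67
    return b0 + b1 * revision_intensity + b2 * subject_enjoyment

score = predict_exam_score(50, 75)
print(f"predicted exam score = {score:.1f}")  # -> 85.4
```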
How to run the test in SPSS
Tutorial: Carrying out a multiple regression in SPSS
This interactive tutorial, entitled Multiple regression, shows you how to carry out a
multiple regression using SPSS. The tutorial is based on the exam score study
described in the example above.
To start the tutorial, click on the link while holding down the Ctrl key on your computer
keyboard. This will open up the tutorial in a new tab, and allow you to come straight
back to this page when you have finished.
Once you have finished, download the PDF version of the tutorial for your Methods
folder.
Now download the Week 6 data file used for this tutorial and see if you can run the test
yourself. You can revisit the tutorial, or use the Multiple regression PDF for extra help
and to check your output is correct.
How do we write it up?
When writing up your results there is a certain formula that you have to follow:
1. State what inferential test you have used (in this case, a multiple regression), and
outline the outcome and predictor variables you used.
2. State the proportion of variance that can be explained by your model. This is
represented by the statistic R2 and is a number between 0 and 1. It can either be
reported in this format (i.e. R2 = x.xx) or the value can be multiplied by 100 to
represent the percentage of variance your model explains.
3. Present your inferential statistics: the F-ratio for your model. To report the
significance of your regression model, you must report:
○ the test statistic you are reporting (in this case F)
○ then, in brackets, two degrees of freedom measures: first the regression df, then
the residual/error df
○ then an equals sign followed by the actual value of the F-ratio
○ finally the p-value (which tells you the significance of your model).
F(1, 23) = 45.67, p < .01
Here, 1 is the regression (between) df, 23 is the residual (within/error) df, 45.67 is the value of the F-statistic, and p < .01 is the p-value.
Reporting the F-ratio in APA format
4. Once you have reported all of the relevant statistics, you then need to write a brief
statement that summarises your results. Included in this should be each of
your B values (found in the unstandardised beta column of your SPSS output) and
the significance of each variable’s contribution to the model.
5. You should finish by stating your predictive model (with B values included).
Activity 12: Completing a results section
Allow about 5–10 minutes
Fill in the blanks to complete the write-up in APA format.
Interactive content is not available in this format.
Alternatively, your unstandardised betas, their standard errors, and their significance
values can be summed up in a table (rather than in prose, as above). For example:
                       Unstandardised Beta    SE       Sig.
(Constant)             20.647                 9.530
Hours spent revising   .295                   .154     .067
Enjoyment of subject   .668                   .285     .027
However, if you present your findings like this, you still need to interpret them in plain
English in your write-up.
5. Independent project
If you are using a survey approach in your research project, it is important that you are
able to generate questionnaire items and conduct a pilot study to help develop a reliable
and valid questionnaire. This week it would be useful to work on the following aspects of
your own research.
Selecting your topic and producing initial questions
Once you have identified the focus of your research you will need to begin drafting your
questions. If you are planning on using a survey or questionnaire as part of your project,
you will find it helpful to revisit the DE200 material that considers this issue in detail.
You will need to consider the format and phrasing of your questions, and may base these
on existing questionnaires or develop your own. Again you may find the DE200 material
helpful.
Ethical issues in questionnaire design
The pilot study requires the same ethical procedures to be followed as in your final study.
The pilot stage is therefore a good opportunity to develop your ethical consent form. In
questionnaires this information is often incorporated in the introduction to the
questionnaire itself.
Assessing the pilot questionnaire
You will need to consider how you will get initial feedback on your pilot questionnaire and
the most appropriate way of assessing its reliability and validity. You will find it helpful to
revisit the relevant sections of this week’s methods materials to make this choice in
relation to your own research.
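One widely used index when assessing a pilot questionnaire's reliability is Cronbach's alpha, a measure of internal consistency across items. The sketch below is a minimal illustration using invented Likert-scale responses; with real data you would normally use your statistics software's reliability analysis rather than computing it by hand.

```python
# Hedged sketch: Cronbach's alpha for a pilot questionnaire.
# The Likert-scale responses below are invented for illustration.

def cronbach_alpha(items):
    """items: one inner list of scores per questionnaire item,
    each inner list holding one score per respondent."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def variance(xs):         # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(item[p] for item in items) for p in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Five respondents' scores on three Likert items
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [3, 5, 4, 4, 1],
]
print(round(cronbach_alpha(items), 2))
```

An alpha of around .7 or above is conventionally taken to indicate acceptable internal consistency, though the appropriate threshold depends on the purpose of the scale.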
Creating an online survey
Having thought about the context and design of your questionnaire, you may wish to
develop your skills in the mechanics of producing a questionnaire and collecting data with
it. In Part 4 and Part 5 of the video activity on designing questionnaires, you saw a
paper-based questionnaire being used. Helen Kaye commented on the time-consuming nature
of distributing, collecting and coding this type of questionnaire. For this reason it is
increasingly common for researchers to use online questionnaires, which are more
efficient and allow potentially larger samples of responses to be collected. There are
many different software options available with which psychologists can do this. If you are
using a survey approach in your research, then now is a good time to familiarise yourself
with one of them, Qualtrics. As an Open University psychology student you will be able to
access and use this software in your own survey research. To begin with, make sure you
have activated your Qualtrics account as described in Week 1.
Once you’ve accessed Qualtrics, try out a ready made survey, following the steps below:
1. Log in
2. Click ‘+Create Project’ – green button
3. Select ‘From a library’ from menu to the left
4. Click on ‘Select a library’ box and choose ‘Qualtrics library’
5. Select a topic area then an interesting survey
6. Give it a name in the ‘Project name’ box
Click the green ‘Create a project’ button at the bottom.
You may find it useful to explore:
● the different types of responses that can be collected
● how to include an informed consent section as an integral part of the questionnaire
● how to change the appearance of the survey.
Once you’ve explored these aspects, construct a short survey and activate it. This will
give you a copyable link to your survey. Try it out on yourself to see what it looks like ‘live’
and the way in which data is collected for you by the software. You will need this link for
next week's study, so make sure you keep a note of it. You might find it helpful to copy it
into the box below:
Link to survey
All rights including copyright in these materials are owned or controlled by The Open
University and are protected by copyright in the United Kingdom and by international
treaties worldwide.
In accessing these materials, you agree that you may only use the materials for your
own personal non-commercial use.
You are not permitted to copy, broadcast, download, store (in any medium), transmit,
show or play in public, adapt or change in any way these materials, in whole or in part,
for any purpose whatsoever without the prior written permission of The Open
University.
