3 Types of Survey Research Reliability


It is far too easy for researchers to believe that survey takers are providing accurate responses. The reality is that people are fickle. Few respondents care about your data collection; most complete a survey for the incentive or because they feel obligated to share their opinion.

That means your data is not always going to be accurate. At the very least, there is a high probability of reliability issues in your survey. Your goal as a researcher is to minimize these issues as much as possible so that you can still draw valid conclusions from your data.

Types of Reliability

The Research Methods Knowledge Base reviews four different types of reliability. However, inter-rater reliability is not generally a part of survey research, as it refers to the ability of two human raters/observers to assign consistent quantitative scores to a given phenomenon. Inter-rater reliability may play a role in scoring qualitative answers, but most quantitative survey research doesn't involve human data collection. The three types of reliability that are important for survey research are listed below; a short code sketch after the list shows one way each can be quantified.

  • Test-Retest Reliability – Test-retest reliability refers to whether the sample that took your survey the first time gives the same answers when it takes an identical survey again. Outside factors may affect the answers, but presumably, if a survey is taken twice and nothing has changed in between, the results should be the same.
  • Parallel Forms Reliability – This refers to the ability of similar questions about the same item to produce similar or near-identical responses. You want to make sure that all questions relating to a particular item are read as equivalent by the same sample and thus generate similar results. Researchers check this by creating a pool of questions that all ask about the same thing, randomly dividing them into separate forms, giving both forms to the same sample, and correlating the results.
  • Internal Consistency Reliability – This refers to the consistency of a respondent's answers to similar questions within a single survey, such as comparing the answer to "How much do you like this product?" with the answer to "On a scale of 1 to 10, what are your general feelings about this product?" There are many ways to judge internal consistency, but the goal is to make sure that questions measuring the same thing yield similar responses.
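
Each of these checks ultimately reduces to a correlation or a correlation-like statistic. Below is a minimal Python sketch of how you might compute all three, assuming you already have per-respondent scores; the variable names and the small example datasets are hypothetical. Test-retest and parallel forms use a simple Pearson correlation, while internal consistency uses Cronbach's alpha, one common measure.

```python
# Minimal sketch of the three reliability checks.
# Assumes per-respondent scores; all data below is hypothetical.
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two paired score vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.corrcoef(x, y)[0, 1]

# 1. Test-retest: correlate each respondent's score from the first
#    administration with their score from the second.
wave1 = [7, 5, 9, 6, 8]   # same respondents, first administration
wave2 = [6, 5, 9, 7, 8]   # same respondents, second administration
print("test-retest r:", pearson_r(wave1, wave2))

# 2. Parallel forms: correlate total scores on two forms built from a
#    randomly split question pool, given to the same sample.
form_a = [22, 18, 30, 25, 27]
form_b = [21, 19, 29, 26, 26]
print("parallel-forms r:", pearson_r(form_a, form_b))

# 3. Internal consistency: Cronbach's alpha over an item matrix
#    (rows = respondents, columns = similar questions).
def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = [[7, 8, 6],   # hypothetical 1-10 ratings on three similar items
             [4, 5, 5],
             [9, 9, 8],
             [6, 7, 6],
             [8, 8, 9]]
print("Cronbach's alpha:", cronbach_alpha(responses))
```

As a rough rule of thumb that appears widely in the survey-methods literature, correlations and alpha values above about 0.7 are often treated as acceptable reliability, though the appropriate threshold depends on the stakes of the research.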

Reliability is more than just a methodological nicety. Without it, you run the risk of drawing erroneous conclusions from your data: if your data isn't reliable, your results are less meaningful.
