The Descriptive Research Strategy

Chapter 13: The Descriptive Research Strategy

Study Aids and Important Terms and Definitions

Many of you have been inquiring about how to better prepare for the quizzes.

One way is to use the online resources that accompany the textbook.

The book website is located at:

http://www.cengage.com/cgi-wadsworth/course_products_wp.pl?fid=M20bI&product_isbn_issn=9781111342258

Select the chapter you want to view.

Any item with a symbol of a lock next to it can only be viewed by instructors.

Other, non-locked, items can be viewed by students and for each chapter there is a glossary, flash cards that you can set to view either a word or its definition first, a crossword puzzle, and a practice quiz.

Chapter 13 vocabulary words and terms you should know the definitions of include:

  • descriptive research strategy
  • observational research design
  • behavioral observation
  • habituation
  • behavior categories
  • inter-rater reliability
  • frequency method
  • duration method
  • interval method
  • time sampling
  • event sampling
  • individual sampling
  • content analysis
  • archival research
  • naturalistic observation or nonparticipant observation
  • participant observation
  • contrived observation or structured observation
  • survey research design
  • Likert scale
  • anchors
  • response set
  • semantic differential
  • nonresponse bias
  • interviewer bias
  • idiographic approach
  • nomothetic approach
  • case study design
  • case history

What is Descriptive Research? (p. 364)

Descriptive research is nonexperimental in nature.

Its aim, as the name implies, is to describe variables.

In correlational designs the aim is to determine if two variables are related.

In descriptive designs the researcher is not necessarily interested in relationships among variables; instead, the goal is merely to describe a variable or set of variables.

In chapter 13 of your textbook three types of designs are considered.

  1. Observational research designs
  2. Survey research designs
  3. Case studies

These designs produce results such as:

Only 4% of those with an NCAA bracket selected Michigan to be in the final four (Go BLUE!)

The number of Americans who smoke cigarettes dropped for the fifth year in a row

The number of adults who voted in the presidential election was lower than in the previous election

Most of the new admissions to the psychiatric hospital are diagnosed with a psychotic disorder

More than 1 in 7 Americans take psychotropic medications

Depression is increasing in non-industrialized nations and will soon have a greater worldwide cost than the cost of treating schizophrenia

Notice that the intent of the study is to describe something rather than to show relationships between variables.

The Observational Research Design (p. 365)

As the name implies, observational designs involve observing behaviors, usually as they occur naturally.

The more difficult parts of doing observational research are observing in a manner that doesn't affect the behavior of those being observed, deciding how to quantify the behaviors observed, and deciding on a schedule of observation that is both practical and accurately captures the phenomenon of interest.

 

For example, if I wanted to observe dogs' social behavior, my own presence as an observer might affect their behavior, so perhaps I could use a camera or keep my distance.

If I wanted to observe social behavior I would have to decide what constitutes a social behavior. Barking? Jumping? Assuming the play posture? Begging?

Should I measure how often the behaviors occur each hour, day, or week? Or use a different measurement strategy?

And finally, I would have to decide when and how long to observe. Is an hour a day for a month enough? Or should it be more frequent? Or for a longer or shorter duration?


(image of two dogs in play posture)

Behavioral observations

The researcher observing behavior has two important decisions to make about how observations will be made: how will behavior be quantified, and how will behavior be sampled so that the behavior of interest is captured without watching the participant 24 hours a day, seven days a week?

Quantifying Observations

There are different ways to quantify behavior. For example, we can count how many times a behavior occurs, measure how long it lasts, or record whether or not it occurs in a given period of time. Which method to use depends on the behavior of interest and how frequently it occurs. For instance, 'blinking' occurs very frequently and is usually of short duration, so the frequency method is probably best, while 'disruptive behavior in school' occurs less frequently and may have a longer duration, and so may be better quantified by the duration or interval method.

Make sure you are familiar with the different methods of quantifying observations described in chapter 13:

  • frequency method
  • duration method
  • interval method
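To make the contrast between the three methods concrete, here is a minimal sketch (not from the textbook) that scores the same hypothetical observation log three ways. The episode times, the 60-minute session length, and the 5-minute interval size are all invented for illustration.

```python
# A hypothetical log of one behavior during a 60-minute observation
# session: each pair is the (start, end) time, in minutes, of one episode.
episodes = [(2.0, 2.5), (10.0, 14.0), (31.0, 31.5), (50.0, 58.0)]
SESSION_MINUTES = 60
INTERVAL_MINUTES = 5  # arbitrary interval size chosen for this sketch

# Frequency method: count how many times the behavior occurred.
frequency = len(episodes)

# Duration method: total time spent engaged in the behavior.
duration = sum(end - start for start, end in episodes)

# Interval method: divide the session into intervals and score each
# interval 1 if the behavior occurred at any point during it, else 0.
def occurred_in(interval_start, interval_end):
    return any(start < interval_end and end > interval_start
               for start, end in episodes)

intervals_with_behavior = sum(
    occurred_in(t, t + INTERVAL_MINUTES)
    for t in range(0, SESSION_MINUTES, INTERVAL_MINUTES))

print(frequency)                # 4 episodes
print(duration)                 # 13.0 minutes in total
print(intervals_with_behavior)  # 5 of the twelve 5-minute intervals
```

Notice how the three numbers describe the same hour of behavior quite differently, which is why the choice of method should match how often and how long the behavior of interest typically occurs.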

Sampling Observations (p. 367)

Observation can be very simple or complex depending on the person or group being observed and the nature of the behavior being observed. Some researchers use video recordings and then observe the video to make sure they don't miss anything. Others have multiple observers, though this can interfere with the behavior of the participants if there are too many observers. Once a period of observation is decided upon, the researcher can use one of three procedures described on p. 367 of your textbook.

Make sure you are familiar with them:

  • time sampling
  • event sampling
  • individual sampling

Content Analysis and Archival Research (p. 367)

Types of Observations and Examples (p. 369)

Naturalistic Observation

Behavior is observed as it occurs naturally; that is, the observer doesn't interfere in any way.

Ethologists often study animals in their natural environments. Anthropologists and sociologists may study human behavior as participant observers or in naturalistic settings.


(image of Jane Goodall with a chimpanzee)

Technology is allowing for more and more sophisticated observation. For example, if you watched any of the series "Planet Earth" you might have noticed that many of the most amazing observations were recorded by unmanned cameras placed in strategic locations as well as by humans sitting in the field.

Strengths of the design are that behavior is observed and recorded as it occurs in the real world.

Weaknesses are that it is very time consuming, there is the potential for the observer to influence the behavior s/he is observing, and the interpretation of the behavior can be subjective. For instance, three different observers might call the same behavior aggressive, playful, or neutral.

Participant Observation

The researcher does not observe from a distance; instead, the researcher is or becomes a member of the group being observed in order to observe up close.

Naturalistic observation is preferred since the researcher cannot be certain that his presence doesn't change the behavior of those he is observing.

However, sometimes naturalistic observation is impossible. If a researcher wanted to observe members of a cult, a gang, or a professional baseball team, they would notice her hanging around! She couldn't do naturalistic observation; instead she would become part of the group, either by secretly joining the cult or gang, or by earning the trust of members so that they permit the presence of an observer.

The advantages of participant observation are that it can be used when naturalistic observation is not possible, so information can be gathered that would otherwise be very difficult or impossible to observe, and that the participant-observer has a unique perspective on behavior.

Weaknesses are that it is time consuming, the participant observer can be less objective than a more distant observer, and the participant observer can influence the behavior of those being observed.

Contrived Observation

Behavior is observed in a setting created, or contrived, by the researcher and designed to elicit the behavior of interest.

The 'strange situation' is one of the better known contrived observation paradigms in developmental research on young children.

In this contrived setting a young child is briefly separated from his or her parent while an observer watches the child's behavior from behind a two-way mirror.

The developmental psychologist might observe how the child behaves in the absence of the parent while another adult, child, toy, or pet is introduced to the setting.

The main advantage of contrived observation is that the researcher does not have to wait for the behavior of interest to occur - he arranges conditions so it is guaranteed to occur. Thus, it tends to be less time-consuming than other observational approaches.

On the other hand, since situations are contrived the major disadvantage of this approach is that observed behavior is less natural.

Strengths and Weaknesses of Observational Designs (p. 372)

Some specific strengths and weaknesses of each observational design were described above. In this section strengths and weaknesses that are shared by all observational designs are considered.

A major strength of observational research is that actual behavior is being observed as compared to surveys where people are asked to describe their own behavior.

Observational research usually has high external validity since it is conducted in the field. The one exception is contrived observational research, which may have high or low external validity depending on how similar the contrived setting is to what would occur naturally.

A major weakness is concern about ethics. Observational research can be a little bit like spying. Usually it is considered ethical to observe public behavior, but participant observation falls into a grey area - for instance, if a scientist joins a gang or a cult under false pretenses to observe criminal or religious behavior, can we really call the behavior of the gang or cult members 'public?'

Another weakness is that the designs are descriptive; that is, behavior is merely described, and no explanations of behavior are provided.

The Survey Research Design (p. 373)

In survey research, instead of observing behavior we ask participants to describe their own behavior, beliefs, or attitudes.

Besides scientists studying behavior, survey research is often used by:

market researchers to learn about their customers or potential customers

pollsters to gauge attitudes towards political issues or candidates

organizations to find out about their members' satisfaction and interests

 

Survey research is not as simple to carry out as it might seem.

The researcher must attend to several important issues:

  • Write good questions (what makes a question 'good' or 'bad' is discussed below)
  • Organize the questions into a well-constructed survey
  • Select a representative sample of the population of interest
  • Decide how to administer the survey
  • Decide how to deal with people who do not respond to the survey

Types of Questions:

Deciding what type of question to ask is almost always a matter of deciding which is more important: getting detailed information that is time-consuming to analyze, or getting simple answers that lack detail but are easier to analyze.

Of course some topics lend themselves better to different types of questions.

For example, if a market researcher wants to know who their customers are, then they might just ask multiple choice questions such as 'circle the answer below that best describes your age/ethnicity/education/income.'

If they want to know how much customers like their products they might use a rating scale, such as: 'On a scale of 1 - 3, where 1 means dislike, 2 means neutral, and 3 means like, what did you think of the new gamemaster 3.5?' Or: 'How likely are you to recommend it to a friend?' 'How much time do you spend playing video games?' etc.

On the other hand, if they want to know about their customers, or what their customers think of the product, in more detail, they might ask open-ended questions such as: 'What are two adjectives that you would use to describe the new gamemaster 3.5?' Or: 'What are your three favorite electronic games?'

Three common types of questions are open-ended, restricted, and rating scales.


Open ended questions

Participants respond in their own words to the topic presented. For example:

  • How do you think campus safety could be improved?
  • What do you believe are the most important problems facing the country?

The one advantage of open-ended questions is that they have the greatest response flexibility: participants can answer the question in an infinite number of ways.

This is also their biggest weakness - the data are often difficult to summarize, interpret, and compare. Specifically, it is difficult to subject open-ended responses to statistical analysis, and participants may answer in ways that are hard to compare. For example, one answer to the question about campus safety might be "It can't be improved" - does this mean that it is perfect and there is no way to make it better, or that nothing can be done to improve it? Another participant might name procedures that could be implemented, say adding metal detectors, hiring more police officers, and installing panic buttons and emergency phones. Yet another student might emphasize working with existing structures, such as "have more training for all personnel." The variety of possible answers makes them difficult to compare and summarize.

Restricted Questions

Restricted questions have a limited number of responses. Multiple choice and True/False are types of restricted questions.

For example, the open ended questions above might be re-written to something like:

  • Which of the following strategies do you think would have the biggest impact on campus safety?

a. installing metal detectors in all classrooms

b. conducting weekly random searches of campus residence halls

c. hiring more campus police officers

d. installing emergency phones

 

  • What do you think is the most important problem facing the country?

a. unemployment

b. the federal deficit

c. partisanship in Washington

d. the global economy

The limited number of options makes it easy to analyze the data statistically and to summarize it. For example: "60% of respondents said that unemployment is the biggest problem facing the country." "No students suggested that searching residences would improve campus safety, and they were about equally divided as to the effectiveness of hiring more police officers or installing emergency phones."

A weakness is that respondents are forced to endorse one of the restricted items, but might believe something else is more important. For example, someone might answer 'unemployment' to the question about the biggest problem facing the country, but might think some other item that isn't on the list is the real problem, say 'tensions with North Korea.'

One solution is to add an open-ended option - "other" ___________________ - that participants can respond to in any way they choose. These data are fairly easily summarized by saying something like: "10% of respondents selected 'other' and wrote in 150 different answers. Items written in by at least 1% of respondents included:"

tensions with China, Congress, tax reform, and immigration

 

Another, though only partial, solution to the problem of a limited array of answers is to offer a 'check all that apply' option. For instance: 'Which of the following options do you think would be likely to significantly improve campus safety?' followed by several options, where the respondent could check any or all of them.

A 'none of the above' option also helps when the available answers might be too restrictive.

Rating scales

Rating scales allow participants to select a numerical response on a scale designed by the investigator.

Likert and Likert-type scales are among the most common.

These scales usually have five responses assigned a value from 1 - 5 such as "1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, 5 = strongly agree."

The instructions might be along the lines of: "Use the following scale (numbers 1 - 5) to describe how you feel about each of the statements below."

1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, 5 = strongly agree.

1. ____ unemployment is a serious problem

2. ____ taxes are too high

3. ____ I would be pleased if an immigration reform bill were passed

4. ____ I describe myself as liberal

Another common rating scale is called a subjective units of distress scale (SUDS or SUD scale) and is often used in clinical research. One version of this type of scale is something like:

Rate your level of anxiety on a scale of 1 - 10, where 1 means no anxiety at all (extremely relaxed) and 10 is the most anxiety you have ever felt.

The highest and lowest ratings are called anchors, and should be extreme - for example, 'strongly disagree,' 'none at all,' 'the best,' etc.

The person making the scale can give examples or, as in the SUD scale example above, have participants set their own anchors - there the researcher set 1 as 'no anxiety' and 10 as 'the most anxiety you have ever felt.'

When using rating scale questions it is important not to have too many categories. Most scales use 5 to 10. Respondents find it difficult to make ratings when there are more than 10 options. For instance, consider how difficult it would be to rate something on a scale of 1 - 5 vs. a scale of 1 - 25. The difference between a 3 and a 4 is fairly clear on a scale of 1 - 5; on a scale of 1 - 25 the difference between a 3 and a 4 would be harder to decide.

One disadvantage of rating scales is that participants may answer with what is called a response set, answering every item or almost every item with the same response. A common response set is to answer every or almost every item with 'neither agree nor disagree.' Sometimes this is because participants are in a hurry, sometimes it is because they do not want to think much about the items, and sometimes it is because they do not have a strong opinion and don't want to take the time to distinguish between, say, 'disagree somewhat' and 'neither agree nor disagree.'
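One simple screening step is to flag respondents whose answers are (almost) all identical before analyzing the data. The sketch below is illustrative only - the respondents, items, and the 80% cutoff are invented for this example, not a standard from the textbook.

```python
# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree)
# from four respondents to six survey items.
responses = {
    "R1": [2, 4, 1, 5, 3, 4],  # varied answers
    "R2": [3, 3, 3, 3, 3, 3],  # classic response set: all "neither agree nor disagree"
    "R3": [4, 4, 4, 4, 4, 5],  # nearly every item gets the same answer
    "R4": [1, 5, 2, 4, 5, 1],
}

def shows_response_set(answers, cutoff=0.8):
    """Flag a respondent when one answer accounts for at least `cutoff`
    of all items (0.8 is an arbitrary choice for this sketch)."""
    most_common = max(answers.count(value) for value in set(answers))
    return most_common / len(answers) >= cutoff

flagged = [who for who, answers in responses.items()
           if shows_response_set(answers)]
print(flagged)  # ['R2', 'R3']
```

Whether a flagged respondent should be dropped or simply examined more closely is a judgment call for the researcher.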

Bad Questions = Bad Research

Poorly written questions can make the results of a survey difficult to interpret, and can mislead respondents so that their answers don't necessarily reflect their views. Some do's and don'ts for writing survey questions:

Don't use...

  • unfamiliar technical terms
  • vague or imprecise terms
  • ungrammatical sentence structure
  • overly complicated phrasing
  • misleading information in the question
  • loaded questions

For example

Have any of your grandparents had a cerebrovascular accident? (unfamiliar technical term)

Do you think the candidate running for office is good? (vague, imprecise terms)

Do you or don't you find compelling Mr. Smith's decision to go to Washington? (ungrammatical)

Do you doubt or not that the real winner of the 2000 election wasn't George Bush? (overly complicated phrasing)

Do you think that the proposed tuition increase is acceptable given that faculty will be getting very large raises? (misleading information)

Do you favor eliminating the wasteful excesses in the bloated state budget that are aimed at helping lowlifes who can't help themselves? (loaded question)

Better questions:

Have any of your grandparents ever had a stroke?

Do you think Mr. John Smith (the candidate running for congress in your congressional district) will do a good job representing your interests? 

How likely are you to vote for Mr. Smith in the upcoming congressional election?

Do you believe that George W. Bush deserved to win the 2000 presidential election?

Do you think that the proposed tuition increase of 5% is acceptable?

Do you favor eliminating some services for the poor in order to balance the state budget?

Constructing a survey (p. 378)

After you have written some great questions, you still have to construct your survey.

Many researchers have a panel of experts review their questions and make suggestions to eliminate or re-word some of the questions.

The order of the questions can make a difference in how people respond.

A few rules of thumb for constructing a survey are:

  1. put demographic questions at the end of the survey
  2. put potentially sensitive or embarrassing questions in the middle
  3. group questions about the same topic or subtopic together
  4. format the survey so that the page is not too cluttered
  5. make sure the reading level is appropriate (most aim for a 4th - 8th grade reading level for a general survey, but it may be higher or lower depending on the target group of respondents)

Selecting relevant and representative individuals

How do you decide who will complete your survey?

Some of the same principles as were discussed in chapter 5 on sampling apply. Who is your population of interest, your accessible population?

For instance,

in a survey about campus safety, would you want to consider UCF safety only and survey only UCF students, or would you want to sample students from several institutions and consider opinions about safety on college campuses generally?

in an opinion poll about a candidate, do you want to survey all adults, only registered voters, or only people who describe themselves as 'likely to vote?'

in a survey about the opinions of mental health care professionals, do you want to know the opinion of all mental health care professionals or only those with advanced degrees?

If you wanted to do a survey about the opinions of Orlando residents, how would you create a sample, since surveying everyone would be too labor intensive and expensive?

Fortunately, computers and public records can help. A computer program could be created to select, say, every 10th resident from an online list of residents, or to randomly select 200 individuals from a list of registered voters.
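The two selection procedures just described - every 10th name (systematic sampling) and 200 random picks (simple random sampling) - can be sketched in a few lines. The voter list and the fixed random seed below are hypothetical, chosen only to make the example reproducible.

```python
import random

# Hypothetical sampling frame: an ordered list of 10,000 registered voters.
voters = [f"voter_{i:05d}" for i in range(10_000)]

# Systematic sampling: take every 10th person on the list.
systematic_sample = voters[::10]

# Simple random sampling: 200 people drawn at random, without replacement.
rng = random.Random(42)  # fixed seed so the sketch is reproducible
random_sample = rng.sample(voters, k=200)

print(len(systematic_sample))   # 1000 people
print(len(random_sample))       # 200 people
print(len(set(random_sample)))  # 200 - no one was selected twice
```

Systematic sampling is simpler but assumes the list has no periodic pattern; random sampling avoids that assumption at the cost of needing a random number generator.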

If the group is specific, mailing lists (including email) or phone lists are commonly used - for instance, a list of all households with a child enrolled in the local public schools, a list to contact all members (or a sample of members) of the American Psychological Association, or all licensed real estate agents in Florida, etc.

Selecting a sample can also be simpler - for instance, approaching people at the mall to have them complete a survey about any number of topics.

Administering a survey

Once you have constructed your survey and selected your sample you have to decide how to administer it. There are advantages and disadvantages to each mode.

Mail Surveys

Mailing surveys is easy enough; however, it can be expensive. (Historical note: today telephone calls are cheap, but many years ago it was far cheaper to mail surveys than to make long distance phone calls.)

A problem with mailed surveys is that many people ignore them. There are a few things a researcher can do to get more people to respond:

  • offer them a small gift - a gift of even one dollar, or a pencil, or a sticker will greatly increase the chance that someone will respond!
  • enter respondents in a raffle for a prize
  • send them a reminder
  • send them an advance notice such as, 'in one week you will receive a very important survey about...'
  • include a prepaid and addressed return envelope
  • send the survey to non-responders a second time

The most important thing to do about non-responders is to make sure they are not very different from responders - for instance, to check whether non-responders are more or less likely than responders to be female, over age 55, unemployed, etc.
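One way to do this check is to compare the demographic profile of responders with whatever is known about non-responders (for example, from the original mailing list). The tiny data set below is invented for illustration.

```python
# Hypothetical demographics for people who did and did not return a survey.
responders = [
    {"sex": "F", "age": 34}, {"sex": "M", "age": 61},
    {"sex": "F", "age": 45}, {"sex": "F", "age": 29},
]
non_responders = [
    {"sex": "M", "age": 58}, {"sex": "M", "age": 63},
    {"sex": "M", "age": 71}, {"sex": "F", "age": 66},
]

def profile(group):
    """Summarize a group by the share who are female and over age 55."""
    n = len(group)
    return {
        "pct_female": sum(p["sex"] == "F" for p in group) / n,
        "pct_over_55": sum(p["age"] > 55 for p in group) / n,
    }

print(profile(responders))      # {'pct_female': 0.75, 'pct_over_55': 0.25}
print(profile(non_responders))  # {'pct_female': 0.25, 'pct_over_55': 1.0}
```

A gap this large would suggest nonresponse bias: the completed surveys over-represent younger women, so the results may not generalize to the whole sample that was contacted.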

Telephone surveys

The main disadvantage is that it is time consuming to talk to 100 people on the phone! However, there are more and more automated options, and if a researcher has many research assistants they can make calls.

An advantage is that follow-up questions can be asked on the spot, and the researcher or assistant can clarify anything the participant may not understand.

As you may know from personal experience, many people hang up when a caller asks if they want to complete a survey!

A few ways to increase the chances that they will respond are to:

  • keep the questions and the number of answer options short
  • practice before calling anyone
  • begin by identifying yourself and the purpose of the survey, and make clear you are NOT a salesperson!!

Also, watch out for interviewer bias: make sure you or your assistants are not asking questions in a leading way, so that participants are more likely to give the answer you prefer. Use neutral wording, and keep your tone of voice the same for all questions.

Internet surveys

Internet surveys are becoming more common.

A HUGE advantage is that data can automatically be sent to a data file, allowing for easy data analysis.

Also, they are inexpensive, can be easily modified, and can be targeted by posting them on sites devoted to causes or by emailing groups - say, the NRA website for a survey of gun owners, or an email to all students, staff, and faculty about UCF dining options.

Disadvantages are that many people still don't have regular internet access, and that how you recruit people matters a great deal. For example, many people are recruited from chat rooms and sites that might appeal to specific kinds of people and not to others. It is not easy to get an organized list of email addresses unless you are emailing people on a group's mailing list. And it is hard to know how responders differ from non-responders.

Internet surveys are best when they are sent to a closed group, such as the UCF community, or members of the American Medical Association, etc.

In-person surveys and interviews

This method is very efficient - probably the quickest way to get results, since participants complete the survey on the spot rather than saving it for later as they might with a mail or internet survey. It can be more efficient than phone surveys because you can gather groups and give instructions to several people at once.

When you administer a survey to a single person, it is called an interview. Interviews are useful when the person can't read, or perhaps has a visual or physical disability that makes them unable to respond to a paper-and-pencil or online survey. Interviews are also useful if you want to ask follow-up questions.

There is a risk of interviewer bias, so interviewers should be trained to keep a neutral tone of voice and facial expression when recording and scoring responses.

Strengths and weaknesses of survey research (p. 385)

Strengths and weaknesses of each specific type of survey have already been considered above.

In general, surveys can be used to get information that would be difficult to gather in other ways, and they are a reasonably efficient way of getting a large amount of information.

Disadvantages are that the results can be difficult to interpret and analyze, and that since the information is self-report it may not be accurate. That is, participants may want to present themselves in a favorable (and in some cases an unfavorable) manner, and might distort their true behaviors, feelings, and attitudes.

The Case Study Design (p. 386)

In a case study, a single person, or a single unit such as a couple, family, or small business, is the object of study.

The study of an individual is called the idiographic approach to understanding, while the study of groups is the nomothetic approach.

Chapter 14 is about single subject designs, which can be experimental designs even with only one participant. Chapter 13 considers descriptive case studies.

Case studies are detailed descriptions of an individual. In many psychological journals case studies focus on treatment of an individual, and are sometimes called case histories.

An obvious limitation of case studies is that they have poor external validity. It is difficult to generalize from a single case to a larger population.

On the other hand, in some situations case studies are very useful.

Applications of case study designs

Rare phenomena and unusual clinical cases

Case studies in medicine and psychology are common for very rare disorders.

When a disease, disorder, or condition is very rare it is difficult to gather a large enough sample to do an experiment. Case studies may be the only way to further our understanding of rare events.

I predict that someone will write a case study about the Aurora shooter who killed and injured so many in a Colorado movie theatre last year. Mass shootings are rare (though sadly, not rare enough...), and even rarer is a shooter who survives to be interviewed. The same is true of serial killers. Forensic scientists study these 'rare cases' to learn more about identifying, treating, and preventing violent behavior.

case studies as counterexamples

Case studies are often written to describe cases that seem to be exceptions to the rule. For instance, there was a recent report of a child who tested positive for HIV at birth, and who was free of the AIDS virus after two years of treatment. Since HIV infection is thought to be incurable at this time, this case could be very important!

A person who survives well beyond the usual survival time for a disease, is cured of a disease that is thought to be incurable, or is not visibly impaired after serious brain damage would all be prime candidates for a counterexample case study.

Case studies can also be very vivid, and are used to sway public opinion. A detailed case study of a child dying of starvation in a foreign country may be more likely to persuade people to donate money or support foreign aid than a table full of statistics on starvation.

Strengths and weaknesses of the case study design (p. 388)

The single most important weakness is the lack of validity, both internal and external.

Internal validity is weak because we can never be sure which variables are important in accounting for behavior.

External validity is weak because one case is not enough to generalize to a population.

Like all descriptive research, case studies do not show causal relationships; they only describe a series of events.

Finally, case studies have a greater potential for experimenter bias to distort findings. With only a single case, the researcher can highlight some information and leave out other information - even without intending to.

The strengths of case studies allow us to live with the weaknesses.

Case studies allow us to study and get information about rare events where controlled experiments are impossible.

They are detailed, and can help us identify 'exceptions to the rule.'

And one way to overcome some of the weaknesses in case studies is to repeat them. For example, a single case study of the treatment of a rare disease may be unconvincing, but a series of several case studies where the same treatment led to the same outcome is more convincing and has improved validity.

In spite of these limitations, descriptive research, including case studies, is an important part of psychological science.