Marketing Research: specifies the information needed, designs the method for collecting it (what do we need to know and how are we going to collect it), manages the data-collection process, and analyzes and communicates the results (emphasis on communicating to the whole company)
When to conduct research (factors to consider):
1. Time Constraints: Is there time to do the research before the decision must be made?
2. Data Availability: Do you lack adequate info? Is the data already out there? If we lack proper data, should we conduct our own (primary research) or refer to someone else's (secondary research)?
3. Routine VS. Non-Routine Decisions:
Ex: whether to carry drinks in large or small bottles; you already carry your drinks in both, so it's a routine decision that doesn't require research
4. Benefits VS. Costs: Do benefits outweigh cost of research?
When NOT to conduct research:
1. Resources are lacking
2. Research would not be useful
3. Opportunity has passed
4. The decision has already been made
5. Managers cannot agree on what they need to know to make a decision
6. Decision-making info already exists
7. The costs of conducting research outweigh the benefits
Exploratory Research: when you don’t know much about problem; maybe you don’t necessarily know what is happening, so you explore it
Descriptive Research: What is the major problem?; phase to be done after exploratory; might do a survey with a representative sample to better understand what the problem is; quantify it
Causal Research: Experiment/Test Market; when you are testing a solution to the problem.
Exploratory Research: preliminary research conducted to increase understanding of a concept, to clarify the exact nature of the problem to be solved, or to identify important variables to the study
Descriptive Research: conducted to answer who, what, where, when, and how questions
Causal Research: research in which the researcher investigates whether one variable (independent) causes or influences another variable (dependent)
1. The research problem/opportunity
• What info is needed to make the decision?
• Does the info exist?
• How will the info be used?
2. creating the research design
• What type of research design best addresses the research problem? (exploratory/descriptive/causal)
3. choosing a basic method
• What methodology best addresses the research question? (exploratory = focus groups/depth interviews, descriptive = surveys/observation, causal = experiments/test markets)
4. selecting the sample procedure
• Who is to be sampled?
• How large of a sample?
• How to choose?
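For the "how large of a sample?" question, a common back-of-the-envelope formula (not from these notes, but standard in survey research) for estimating a population mean is n = (z · σ / E)², where z comes from the desired confidence level, σ is the assumed population standard deviation, and E is the acceptable margin of error. A minimal sketch, with an assumed σ:

```python
import math

def sample_size_for_mean(z, sigma, margin_of_error):
    """Minimum n to estimate a population mean within +/- margin_of_error
    at the confidence level implied by z (e.g. z = 1.96 for 95%)."""
    # n = (z * sigma / E)^2, rounded up to the next whole respondent
    return math.ceil((z * sigma / margin_of_error) ** 2)

# Hypothetical example: estimate mean weekly spend (sigma assumed to be $20)
# within $2 at 95% confidence
print(sample_size_for_mean(1.96, 20, 2))  # -> 385
```

Note how halving the margin of error quadruples the required sample size, since E is squared in the denominator.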
5. collecting the data
• The research design determines the data-collection method
• How will the data be gathered?
• Who will gather it?
6. analyze the data
• Logically summarizing the data
• statistical analysis
• determined by research design
• What are the rules for coding/editing?
• What analysis techniques to use (why)?
7. prepare research report/presentation
• Who will read the report?
• How will it be structured?
Quantitative Research:
• a research methodology that seeks to quantify data, applying statistical analysis and numbers
• used to generalize to the population
• more descriptive
• statistical results (Examples: surveys/observation)
Qualitative Research:
• exploratory research; word answers rather than numbers
• focus groups, depth interviews (Ex: suppose you are a restaurant and you are looking at competitor restaurants in the area for the food they sell, prices, types of customers they attract, etc.)
• open-ended: getting opinions/ideas to provide insight and understanding (no multiple choice or true/false)
• unstructured exploratory research methodology
• to provide insight and understanding
• to understand a universe (Ex: Coke wants to understand who its competitors are according to consumers, so it may do qualitative exploratory research with consumers)
• usually uses small samples (5-50 people)
Direct questioning:
• directly ask the respondent what you want to know
• use it when you think they'll answer truthfully
Indirect questioning:
• ask the question indirectly; the respondent doesn't know what the question is really about
• use it when you think they will NOT answer truthfully
Secondary Data:
• advantages: time, cost, and convenience
• disadvantages: unavailability of secondary research, fit/relevance of data (wrong units, out-of-date, wrong definitions), inaccuracy (consider the source, original purpose, and quality; was it collected by someone with a bias?)
Experience Surveys:
• surveys with people who are knowledgeable on the subject
• you are not talking to consumers for this
• ex: want to learn about shelf space in grocery store so you talk to grocery store managers! (people who have experience in the business)
Case Analysis:
• observing people and writing up an exemplary case of how they did it
• ex: TGIF was building a smaller restaurant, COO visited a Navy crew to explore efficient food service on deck, tried to emulate that at the new location
Focus Groups:
• a semi-structured, free-flowing interview with a small number of people
• #1 type of exploratory research
• semi-structured = have an idea of what we want to cover, but we can vary from it if we need to
A good focus group moderator is:
-Knowledgeable but not all-knowing
-A facilitator (not a performer)
-Empathetic (feel what other people feel)
-Big picture thinker
-A good writer
Cross-sectional study:
• measured once / one and done!
• not looking to do research over time
• ex: research for new logo design (don’t really need to observe customer reactions over time)
Longitudinal study:
• measuring a sample repeatedly over time:
• true panel: when you’re measuring the exact same sample of people every time, over time (Ex: Nielsen television ratings = they install a box on TV’s to measure watch time over a long period) — measuring the SAME people over time
• omnibus panel: use the same TYPE of people over time (ex: phone surveys over time with certain age group over time, different individuals)
Two types of survey error:
1) Random error: can be reduced only by increasing the sample size
2) Systematic error: can be reduced by minimizing sample-design and measurement errors
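The claim that random error shrinks as the sample grows can be sketched with a small simulation (hypothetical numbers, not from the notes): draw many samples from a known population and see how far each sample mean lands from the true mean.

```python
import random
import statistics

random.seed(42)

def mean_abs_error(n, trials=2000, true_mean=50, sd=10):
    """Average absolute distance between a sample mean of size n
    and the true population mean, over many simulated samples."""
    errors = []
    for _ in range(trials):
        sample = [random.gauss(true_mean, sd) for _ in range(n)]
        errors.append(abs(statistics.mean(sample) - true_mean))
    return statistics.mean(errors)

small = mean_abs_error(25)    # small sample: means wander farther from 50
large = mean_abs_error(400)   # large sample: means cluster near 50
print(small, large)
assert large < small  # bigger n -> less random error
```

Note this only reduces random error: a biased question or a bad sampling frame (systematic error) stays biased no matter how large n gets.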
Response bias: when people don’t answer truthfully
Non-response bias: When participants in a sample that respond to a questionnaire are different from those that don’t respond, thereby biasing the data.
Acquiescence bias: Respondents choose a response to please the interviewer
Extremity Bias: When respondent has a monotonous scaling task and just chooses all the same answers down one side or the middle
Interviewer bias: When the interviewer influences the respondent’s answer. This could also include: Body language and cheating
Auspices Bias: When knowledge of the organization conducting the survey biases how the respondent answers
• close-ended questions
• either dichotomous (two choices) or multichotomous (multiple choice)
• Ex: What is your gender? ____ M or ____ F
• open-ended question
• use when we don’t know all the possible answers
• ex: “What do you think of President Obama?” ______________________
Disguised questions:
• respondent doesn’t know what the research is about
• use when think they will not answer truthfully
• ex: Do you have friends that are prejudiced against illegal immigrants?
Undisguised questions:
• respondent knows what the research is about
• use when you think they WILL answer truthfully
• ex: What is your favorite color? r/o/y/g/b/p (people probably wouldn’t lie about this 🙂 )
Personal Interview: a survey that gathers info face-to-face
• Pros: good for sensory research, can hold long interviews, can ask complex questions or ask respondents to do complicated tasks
• Cons: most expensive, interviewer influenced (bias), no anonymity (ex: questions about medical problems and social taboos might be difficult to answer)
Telephone Interview: a survey that gathers info through telephone contact w/ individuals
• Pros: fairly fast, less costly than personal interviews, a little less pressure because you are not face-to-face
• Cons: lack of sensory signals; length limited to about 15-20 mins; random calling loses representation (Caller ID and the Do Not Call List leave you with fewer respondents)
Mail Questionnaire: a self-administered questionnaire sent to respondents through the mail
• Pros: Geographic flexibility (send anywhere), relatively inexpensive/cost effective, can be filled out at respondent’s convenience
• Cons: low response rate, long time, length of questionnaire
1. Determine survey OBJECTIVES(from Research Proposal)
2. Determine DATA COLLECTION method (telephone, email, mail, Internet, etc.)
3. Determine the question RESPONSE FORMAT (open-ended, close-ended multichotomous, scales, etc.)
4. Decide question WORDING
5. Establish questionnaire FLOW (order) and layout
6. PERSONALLY EVALUATE the questionnaire (length, does it meet objectives?, etc.)
7. PRETEST/REVISE (peer review!)
For part 5:
b. Screener Questions (qualifying Q’s)
c. 1st few questions
d. 1st 3rd (transition questions)
e. Middle Half to 2nd 3rd (difficult questions)
f. Last section (classification and demographic questions)
-Unstructured observation: open to record everything; exploratory
-Natural observation: observing things that occur naturally; use when what you want to observe happens regularly
-Scientifically contrived observation: the researcher makes the event happen; use when the event does not occur frequently enough or is difficult to observe
-Machine observation: observation done by a machine; use when a human can't observe or it is inefficient for a human to observe
-Human observation: observation by a human; use when a person can do it
-Disguised observation: respondent doesn't know they are being observed; use when observation would change the behavior
-Undisguised observation: respondent knows they are being observed; use when you don't think it will bias their actions
The Likert Scale: measure that allows respondents to rate how strongly they agree or disagree with statements
–EX: 1: strongly disagree to 5: strongly agree
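Likert items coded 1-5 can be summarized numerically; a minimal sketch with hypothetical responses (the item wording and data below are invented for illustration):

```python
import statistics

# Hypothetical responses to "I would recommend this store to a friend",
# coded 1 = strongly disagree ... 5 = strongly agree
responses = [5, 4, 4, 3, 5, 2, 4, 5, 4, 3]

# Mean score across respondents
mean_score = statistics.mean(responses)

# "Top-two-box" share: fraction who agree or strongly agree (4 or 5)
pct_agree = sum(r >= 4 for r in responses) / len(responses)

print(round(mean_score, 1))  # -> 3.9
print(pct_agree)             # -> 0.7
```

Treating Likert codes as numbers assumes the gaps between scale points are roughly equal; that assumption is common in practice but worth stating.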
Semantic differential: seven point scale consisting of bi-polar adjectives (use to measure: image)
–EX: attitude toward McDonald's
cheap __ : __ : __ : __ : __ : __ : __ expensive
Constant Sum Scale: a measure of attitudes where respondents are asked to divide a constant sum to indicate the relative importance of attributes
EX:Divide 100 points among the following characteristics
Graphic Ratings Scale: A measure of attitudes that allows respondents to rate an object by choosing any point along a graphic continuum.
EX: Which picture best describes how you feel about CSUF? (need to make sure each picture means the same thing to all respondents)
Ranking: respondents are asked to rank their preferences.
Perceptual scaling: creating a map where respondents place brands according to distinguishing dimensions
Three components of attitude:
1. Affective (feeling): “I love Burger King.”
2. Cognitive (thinking): “It’s inexpensive and tastes great.”
3. Behavior (doing): “I go to Burger King three times a week.”
Attitudes affect behavior — i.e. If you have positive attitudes toward CSUF, you’re likely to continue attending the university.
Conceptual Definition: What is the meaning of the concept (ex: define happiness as a state of well-being and contentment)
Operational Definition: How will the concept be measured? (ex: the amount of time that a person smiles in an hour [or looking at other body language such as eye movements])
Ways to measure attitudes:
Observation: (ex: seeing that you go to Starbucks 3 times per week, we might infer that you have a positive attitude toward it)
Indirect Techniques: sentence completion questions, collages, etc.
Physiological Reaction: changes in the body that show attitude (ex: pupil dilation, body electricity, etc.)
Self-Report Measures: scales we use to measure attitude that our respondents fill out (our focus for this class)
Reliability: a measure is reliable when it yields consistent results
If we have reliability, we have reduced random error
Test-retest method: administer the same measurement instrument twice and see if the results correlate
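The test-retest correlation can be computed directly; a minimal sketch with hypothetical scores (the six respondents and their answers below are invented):

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y) * len(x))

# Hypothetical: the same 6 respondents take the same attitude scale twice
test1 = [4, 2, 5, 3, 4, 1]
test2 = [4, 3, 5, 3, 4, 2]

r = pearson_r(test1, test2)
print(round(r, 2))  # a value near 1 suggests the measure is reliable
assert r > 0.9
```

The same correlation idea underlies construct validity below: there, the two score lists come from two different instruments measuring the same concept rather than the same instrument twice.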
Validity: measuring what we think we are measuring
If we have validity, we have reduced systematic error
3 ways to establish validity:
1) Face validity: Do experts or judges agree that this is a good measurement? (weakest)
2) Criterion-related validity: Does a measure that is meant to predict actually predict accurately?
3) Construct validity: Do two different measuring instruments correlate on measures of the same item?
Qualitative Research: research whose findings are not subject to quantification or quantitative analysis
Quantitative Research: research that uses mathematical analyses
Exploratory Research: preliminary research conducted to increase understanding of a concept, to clarify the exact nature of the problem to be solved, or to identify important variables to be studied
Descriptive Research: research studies that answer the questions of who, what, when, where, and how
Causal Research: research studies that examine whether the value of one variable causes or influences the value of another variable