
Module 1 – Case


Assignment Overview

You are a consultant who works for the Diligent Consulting Group. In this Case, you are engaged on a consulting basis by Loving Organic Foods. To get a better idea of what might have motivated customers’ buying habits, you are asked to analyze the ages of the customers who have purchased organic foods over the past 3 months. Past research done by the Diligent Consulting Group has shown that different age groups buy certain products for different reasons. Loving Organic Foods sent a survey to 200 customers who have previously purchased organic foods, and 124 customers responded. The survey includes age data of past customers who purchased organic foods in the previous quarter.

Case Assignment

Using Excel, create a frequency distribution (histogram) of the age data that was captured from the survey. You should consider the width of the age categories (e.g., 5 years, 10 years, or other). That is, which age category grouping provides the most useful information? Once you have created this histogram, determine the mean, median, and mode.
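The assignment itself calls for Excel, but the same computation can be sketched in Python to make the steps concrete. The ages below are invented placeholder values, not the actual survey data; the bin width of 10 years is one of the groupings you might try.

```python
# Sketch of the Case analysis (illustrative data, not the real survey file).
import statistics

ages = [23, 31, 35, 38, 42, 42, 47, 51, 55, 58, 62, 64]  # placeholder sample

# Frequency distribution with 10-year bins (20-29, 30-39, ...)
width = 10
freq = {}
for age in ages:
    lower = (age // width) * width
    label = f"{lower}-{lower + width - 1}"
    freq[label] = freq.get(label, 0) + 1

print(freq)
print("mean:", statistics.mean(ages))
print("median:", statistics.median(ages))
print("mode:", statistics.mode(ages))
```

Trying `width = 5` versus `width = 10` on the real data is a quick way to judge which grouping reveals the most useful pattern.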

After you have reviewed the data, write a report to your boss that briefly describes the results that you obtained. Make a recommendation on how these data might be used for marketing purposes. Be sure to conduct adequate research on the organic foods industry, organic market analysis, and the healthy food industry using the IBISWorld database or other databases such as Business Source Complete (EBSCO) and Business Source Complete – Business Searching Interface in our online library. Provide a brief description of the industry background and consumers’ changing attitudes and behaviors toward healthy lifestyles. Also identify the customer demographics of the organic food industry and explain how the customers of Loving Organic Foods differ from this target market.

Data: Download the Excel-based data file with the age data of the 124 customers: Data chart for BUS520 Module 1 Case. Use these data in Excel to create your histogram.

Assignment Expectations

Excel Analysis

Complete the analysis in Excel using the Histogram function. Check the following video on histograms:

If you are not familiar with Excel, refer to the following Excel training videos:

Check the professional market research reports from IBISWorld database to find the industry analysis for your cumulative Session Long Project.

IBISWorld Overview (n.d.). IBISWorld, Inc., New York, NY.

IBISWorld Forecast (n.d.). IBISWorld, Inc., New York, NY.

IBISWorld Data and Sources (n.d.). IBISWorld, Inc., New York, NY.

IBISWorld Navigation Tips (n.d.). IBISWorld, Inc., New York, NY.

Written Report

  • Length requirements: 4–5 pages minimum (not including Cover and Reference pages). NOTE: You must submit 4–5 pages of written discussion and analysis. This means that you should avoid use of tables and charts as “space fillers.”
  • Provide a brief introduction to/background of the problem.
  • Provide a brief description of organic food industry and target market characteristics such as their demographics, lifestyles and shopping behaviors.
  • Provide a written analysis that supports your Histogram age groups (bins).
  • Based on your analysis of the histogram data, provide complete and meaningful recommendations as the data relates to Loving Organic Foods’ marketing strategy.
  • Write clearly, simply, and logically. Use double-spaced, black Verdana or Times Roman font in 12 pt. type size.
  • Have an introduction at the beginning to introduce the topics and use keywords as headings to organize the report.
  • Avoid redundancy and general statements such as “All organizations exist to make a profit.” Make every sentence count.
  • Paraphrase the facts using your own words and ideas, employing quotes sparingly. Quotes, if absolutely necessary, should rarely exceed five words.
  • Upload both your written report and Excel file to the case 1 Dropbox.

Here are some guidelines on how to conduct information search and build critical thinking skills.


Module 1 – Background


Required Reading

Statistics are all around you, sometimes used well, sometimes not. We must learn to distinguish the two cases. Just as important as detecting the deceptive use of statistics is appreciating the proper use of statistics. You must also learn to recognize statistical evidence that supports a stated conclusion. When a research team is testing a new treatment for a disease, statistics allows them to conclude, based on a relatively small trial, that there is good evidence their drug is effective. Therefore, it is important to understand statistics. In this course, you will reform your statistical habits. No longer will you blindly accept numbers or findings. Instead, you will begin to think about the numbers, their sources, and most importantly, the procedures used to generate them. In this way, you can become a more rational decision maker, analyzing past performance to inform business planning.

Statistics are often presented in an effort to add credibility to an argument or advice. You can see this by paying attention to television advertisements. Many of the numbers thrown about in this way do not represent careful statistical analysis. They can be misleading, and push you into decisions that you might find cause to regret. If you cannot distinguish good from faulty reasoning, then you are vulnerable to manipulation and to decisions that are not in your best interest. Statistics provides tools that you need in order to react intelligently to information you hear or read. In this sense, statistics is one of the most important things that you can study. For these reasons, learning about statistics is essential to business intelligence. This course will help you refresh some statistical essentials that are related to business analytics and decision making.

Some Basic Terminologies

Before we begin gathering and analyzing data we need to characterize the population we are studying. The population of a study is the group the collected data is intended to describe. Sometimes the intended population is called the target population, since if we design our study badly, the collected data might not actually be representative of the intended population.

Why is it important to specify the population? We might get different answers to our question as we vary the population we are studying. First-year students at the University of Washington might take slightly more diverse courses than those at your college, and some of these courses may require less popular textbooks that cost more; or, on the other hand, the University Bookstore might have a larger pool of used textbooks, reducing the cost of these books to the students. Whichever the case (and it is likely that some combination of these and other factors are in play), the data we gather from your college will probably not be the same as that from the University of Washington. Particularly when conveying our results to others, we want to be clear about the population we are describing with our data.

If we were able to gather data on every member of our population, say the average (we will define “average” more carefully in a subsequent section) amount of money spent on textbooks by each first-year student at your college during the 2015-2016 academic year, the resulting number would be called a parameter. A parameter is a value (average, percentage, etc.) calculated using all the data from a population. We seldom see parameters, however, since surveying an entire population is usually very time-consuming and expensive, unless the population is very small or we already have the data collected. A survey of an entire population is called a census. Since surveying an entire population is often impractical, we usually select a sample to study. A sample is a smaller subset of the entire population, ideally one that is fairly representative of the whole population. For now, let us assume that samples are chosen in an appropriate manner. If we survey a sample, say 100 first-year students at your college, and find the average amount of money spent by these students on textbooks, the resulting number is called a statistic. A statistic is a value (average, percentage, etc.) calculated using the data from a sample.
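The parameter/statistic distinction above can be sketched in code. The population of first-year students and their textbook spending are invented for demonstration; only the roles of the two averages matter.

```python
# Illustrative sketch: a parameter comes from the whole population (a census);
# a statistic comes from a sample. All figures here are made up.
import random
import statistics

random.seed(1)
# Imagine 1,200 first-year students and what each spent on textbooks.
population = [random.gauss(500, 80) for _ in range(1200)]

parameter = statistics.mean(population)   # computed from everyone: a parameter
sample = random.sample(population, 100)   # 100 surveyed students
statistic = statistics.mean(sample)       # computed from the sample: a statistic

print(round(parameter, 2), round(statistic, 2))
```

A well-chosen sample gives a statistic close to the parameter, which is why sampling is a practical substitute for a census.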

Once we have gathered data, we might wish to classify it. Roughly speaking, data can be classified as categorical data or quantitative data. Categorical (qualitative) data are pieces of information that allow us to classify the objects under investigation into various categories. Quantitative data are responses that are numerical in nature and with which we can perform meaningful arithmetic calculations. Once we have collected data from surveys or experiments, we need to summarize and present the data in a way that will be meaningful to the reader. We will begin with graphical presentations of data, then explore numerical summaries of data.

Frequency Table and Histogram

Categorical, or qualitative, data are pieces of information that allow us to classify the objects under investigation into various categories. We usually begin working with categorical data by summarizing the data into a frequency table. Sometimes we need an even more intuitive way of displaying data. This is where charts and graphs come in. There are many, many ways of displaying data graphically.

Quantitative, or numerical, data can also be summarized into frequency tables. If we have a large number of widely varying data values, creating a frequency table that lists every possible value as a category would lead to an exceptionally long frequency table, and probably would not reveal any patterns. For this reason, it is common with quantitative data to group data into class intervals. In general, we define class intervals so that:

  • Each interval is equal in size. For example, if the first class contains values from 120 to 129, the second class should include values from 130 to 139.
  • We have somewhere between 5 and 20 classes, typically, depending upon the number of data we’re working with.

We can also use a histogram to present quantitative, or numerical, data. A histogram is like a bar graph, but the horizontal axis is a number line. Consider a repetitive process, for example, driving home from work. You (and your spouse) have noticed that it sometimes takes longer to get home than others. So you want to do an experiment and find out just how long it does take. You record your time to drive home for 6 weeks and get 30 data points (5 days, 6 weeks). Then you decide to analyze this statistically and see how frequently short, medium, and long trips occur. The best way to do this is with a frequency diagram (histogram).
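The commute-time example can be sketched directly. The 30 drive-home times (in minutes) below are made up; they are grouped into equal-width class intervals and drawn as a simple text histogram.

```python
# 30 invented drive-home times (minutes), grouped into 5-minute class intervals.
times = [22, 25, 27, 28, 29, 30, 30, 31, 31, 32,
         32, 33, 33, 34, 34, 35, 35, 36, 36, 37,
         38, 38, 39, 40, 41, 43, 45, 47, 50, 55]

width = 5  # each class interval is equal in size
freq = {}
for t in times:
    lower = (t // width) * width
    label = f"{lower}-{lower + width - 1}"
    freq[label] = freq.get(label, 0) + 1

# Text "histogram": one bar per class interval
for label in sorted(freq):
    print(f"{label}: {'#' * freq[label]}")
```

The tallest bar (30–34 minutes) is the most typical trip; the scattered long times on the right are the occasional slow commutes.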


Probability and Expected Value

The probability of a specified event is the chance or likelihood that it will occur. There are several ways of viewing probability. One would be experimental in nature, where we repeatedly conduct an experiment. Suppose we flipped a coin over and over again and it came up heads about half of the time; we would expect that in the future whenever we flipped the coin it would turn up heads about half of the time. When a weather reporter says “there is a 10% chance of rain tomorrow,” she is basing that on prior evidence; that out of all days with similar weather patterns, it has rained on 1 out of 10 of those days.

Another view would be subjective in nature, in other words an educated guess. If someone asked you the probability that the Seattle Mariners would win their next baseball game, it would be impossible to conduct an experiment where the same two teams played each other repeatedly, each time with the same starting lineup and starting pitchers, each starting at the same time of day on the same field under the precisely the same conditions. Since there are so many variables to take into account, someone familiar with baseball and with the two teams involved might make an educated guess that there is a 75% chance they will win the game; that is, if the same two teams were to play each other repeatedly under identical conditions, the Mariners would win about three out of every four games. But this is just a guess, with no way to verify its accuracy, and depending upon how educated the educated guesser is, a subjective probability may not be worth very much.

We will return to the experimental and subjective probabilities from time to time, but in this course we will mostly be concerned with theoretical probability, which is defined as follows: Suppose there is a situation with n equally likely possible outcomes and that m of those n outcomes correspond to a particular event; then the probability of that event is defined as m/n.

If you roll a die, pick a card from deck of playing cards, or randomly select a person and observe their hair color, we are executing an experiment or procedure. In probability, we look at the likelihood of different outcomes. We begin with some terminology. The result of an experiment is called an outcome. An event is any particular outcome or group of outcomes. A simple event is an event that cannot be broken down further. The sample space is the set of all possible simple events.

Basic Probability

Given that all outcomes are equally likely, we can compute the probability of an event E using this formula:

P(E) = (number of outcomes corresponding to the event E) / (total number of equally likely outcomes)

For example, if we roll a 6-sided die, calculate

  1. P(rolling a 1)
  2. P(rolling a number bigger than 4)

Recall that the sample space is {1,2,3,4,5,6}

  1. There is one outcome corresponding to “rolling a 1”, so the probability is 1/6
  2. There are two outcomes bigger than a 4, so the probability is 2/6=1/3

Probabilities are essentially fractions, and can be reduced to lower terms like fractions. An impossible event has a probability of 0. A certain event has a probability of 1. The probability of any event must be 0≤P(E)≤1. In the course of this module, if you compute a probability and get an answer that is negative or greater than 1, you have made a mistake and should check your work.
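The die example above can be written as a short function. Using exact fractions keeps the results in lowest terms, just as the text recommends.

```python
# Theoretical probability: m equally likely outcomes in the event, out of n total.
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}

def p(event):
    """P(E) = outcomes in E / total equally likely outcomes."""
    return Fraction(len(event & sample_space), len(sample_space))

print(p({1}))                                  # P(rolling a 1) = 1/6
print(p({x for x in sample_space if x > 4}))   # P(rolling > 4) = 2/6 = 1/3
```

`Fraction` reduces 2/6 to 1/3 automatically, and every result the function can return falls between 0 and 1.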

Expected Value

Expected value is perhaps the most useful probability concept we will discuss. It has many applications, from insurance policies to making financial decisions, and it is one thing that the casinos and government agencies that run gambling operations and lotteries hope most people never learn about. Expected Value is the average gain or loss of an event if the procedure is repeated many times. We can compute the expected value by multiplying each outcome by the probability of that outcome, then adding up the products. In general, if the expected value of a game is negative, it is not a good idea to play the game, since on average you will lose money. It would be better to play a game with a positive expected value (good luck trying to find one!), although keep in mind that even if the average winnings are positive it could be the case that most people lose money and one very fortunate individual wins a great deal of money. If the expected value of a game is 0, we call it a fair game, since neither side has an advantage. Expected value also has applications outside of gambling. Expected value is very common in making insurance decisions and other business decisions.

For example, a 40-year-old man in the United States has a 0.242% risk of dying during the next year. An insurance company charges $275 for a life-insurance policy that pays a $100,000 death benefit. What is the expected value for the person buying the insurance?

The probabilities and outcomes are:

Outcome                      Probability of outcome
$100,000 – $275 = $99,725    0.00242
–$275                        1 – 0.00242 = 0.99758

The expected value is ($99,725)(0.00242) + (-$275)(0.99758) = -$33.

Not surprisingly, the expected value is negative; the insurance company can only afford to offer policies if they, on average, make money on each policy. They can afford to pay out the occasional benefit because they offer enough policies that those benefit payouts are balanced by the rest of the insured people. For people buying the insurance, there is a negative expected value, but there is a security that comes from being insured that is worth the cost.
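The insurance calculation above follows the general recipe directly: multiply each outcome by its probability and sum the products.

```python
# Expected value of the life-insurance policy, from the figures in the example.
premium = 275
benefit = 100_000
p_death = 0.00242

outcomes = [
    (benefit - premium, p_death),   # policy pays out: net gain of $99,725
    (-premium, 1 - p_death),        # no payout: lose the $275 premium
]
ev = sum(value * prob for value, prob in outcomes)
print(round(ev))   # -33: negative for the buyer, as expected
```

Swapping in different premiums or death rates shows how the insurer prices policies so that the buyer's expected value stays negative.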

Subjective Assessments of Risk and Uncertainty

How do we humans make subjective assessments of uncertainty and how do we (should we) deal with them as probabilities? We will look at:

    • Assessing discrete probabilities
    • Various types of heuristics and biases
    • Decomposition, experts and probability assessments

Probability: A Subjective Interpretation

Often our statements involve informal, personal, and subjective assessments of uncertainty at a fundamental level. Subjective assessments of uncertainty are an important element of decision analysis. A basic tenet of modern decision analysis is that subjective judgments of uncertainty can be interpreted in terms of probability. Many public policy issues and decisions involve probabilities, often a mix of formal (e.g., computer models) and subjective (e.g., verbal statements such as “likely” or “rarely”) probability assessments. For example,

    • Weather forecasts and farmers protecting crops
    • Earthquake predictions and real estate prices
    • Environmental issues, like global warming and public policies on fuels

Probability theory may be presented in two general ways: 1) In terms of long-run frequencies, e.g., tossing a coin or collecting data from a specific process, such as driving time home from work. This approach is most useful for events that recur often and have not yet happened, e.g., gambling with cards or stock prices. And 2) in terms of subjective judgments or degrees of belief, e.g., using your gut to decide to bet on a lottery ticket. This approach is most useful for rare or unique events or events that have already happened, e.g., nuclear war or Marilyn Monroe’s death in 1962.

If decision analysis is to be precise and rigorous, it must operate with numbers, not verbal phrases, for probabilities. Calculations require some type of quantification. Also, verbal representations of uncertainty are subject to varied interpretations depending on people and contexts.

Example: Accounting for Contingent Losses

Statement of Financial Accounting Standards No. 5, “Accounting for Contingencies,” provides guidance to companies on reporting various kinds of losses that might happen. They are to be reported as “probable,” “remote,” or “reasonably possible.” These terms in turn are defined verbally, e.g., “probable” means “likely to occur.” Defining such terms verbally instead of numerically allows for a wide range of interpretations by those doing the reporting (companies) and those interpreting the reports (e.g., accountants, analysts, stockholders, etc.).

Assessing Discrete Probabilities

There are three basic methods for assessing probabilities: Method #1 – Directly as the probability of an event occurring; Method #2 – Indirectly as placing a bet; and Method #3 – Indirectly as a comparison of two lotteries, one for the event in question, the other as a benchmark.

Method #1: As Probabilities

Simply ask the decision maker (DM) to assess the probability directly. “What is your belief regarding the probability that event such and such will occur?” The disadvantages of this method are that the DM may or may not be able or willing to give a direct answer to the question and/or the DM may place little confidence in the answer given.

Method #1 (Lakers Example): Suppose that the Los Angeles Lakers are playing the Boston Celtics in the NBA finals this year. We are interested in finding the decision maker’s probability that the Lakers will win the championship. Using Method #1 directly ask, “What do you think are the chances of the Lakers winning?”

Method #2: As Bets

Ask about bets that the person would be willing to place. Find a bet with a specific amount such that the decision maker is indifferent no matter which side of the bet wins. This means that the expected value of the bet is the same regardless of which side of the bet is taken. Now solve for the probability.

Disadvantages of this method: Some people don’t like betting on principle. And people are risk averse: they weight a loss more heavily than an equivalent win, which will skew any bets.

Method #2 (Lakers Example): Continuing with the Lakers vs. the Celtics

  • Set up a general framework in which the two sides of a bet are opposites:

Bet 1:

Win $X if the Lakers win.

Lose $Y if the Lakers lose.

Bet 2:

Lose $X if the Lakers win.

Win $Y if the Lakers lose.

Start with the amounts of the bets far enough apart that the person will prefer one over the other. Then adjust X and Y until the person is indifferent, i.e., both bets are equally attractive.

For example, start with $500 and $200. After several adjustments, the person reaches this point of indifference:

Bet 1

Win $250 if the Lakers win.

Lose $380 if the Lakers lose.

Bet 2

Lose $250 if the Lakers win.

Win $380 if the Lakers lose.

Then we can solve for the probability by setting the expected values of the two bets equal, because the person is indifferent: Pr(LW)·X + Pr(LL)·(−Y) = Pr(LW)·(−X) + Pr(LL)·Y, where Pr(LL) = 1 − Pr(LW). Using algebra, we can combine and condense the equation.

Pr(Lakers Win) = Y/(X+Y). Entering the values of the bets we get:

= 380/(250+380)

= 0.603

This person believes there is a 60% chance the Lakers will win and a 40% chance the Celtics will win.
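The algebra above reduces to a one-line formula, Pr(Lakers win) = Y / (X + Y), which is easy to check in code against the example's numbers.

```python
# Probability implied by indifference between two opposite bets:
# Bet 1 wins $X / loses $Y; Bet 2 is the reverse. Equal expected values
# give Pr(win) = Y / (X + Y).
def implied_probability(win_amount, lose_amount):
    return lose_amount / (win_amount + lose_amount)

prob = implied_probability(250, 380)
print(round(prob, 3))   # 0.603
```

The formula also matches intuition: the more the person is willing to risk losing ($Y) relative to what they stand to win ($X), the more confident they must be in the outcome.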

Method # 3: As Lotteries. We will skip this method.

Assessing continuous probabilities is an advanced topic and will not be covered here.

Heuristics and Biases

The above discussion of probability assessment probably sounds fairly straightforward – very logical, systematic, and mathematical! However, these assessments are subjective judgments made by people, which raises some problems. People use simplifying techniques and rules of thumbs for making many decisions – called heuristics. We are not natural statisticians and have difficulty comprehending concepts like probability. Heuristics often lead to systematic errors, as opposed to random errors. That is, subjective judgments are frequently biased.

Heuristics are the way we process information; a heuristic is a short-cut or rule of thumb that uses limited data to make a decision in a simplified way.

Biases (systematic errors) are a result of the processing. Two ways to define bias:

  • As the systematic differences between subjective probabilities and data-based probabilities
  • The extent to which subjective probabilities are different from what they would be in the absence of the heuristics

Heuristics and Biases can come from several different sources, either cognitive or emotional, and are categorized in five groups: Memory, Statistical, Confidence, Adjustment, Motivational.

The following review of biases is not exhaustive but does cover most of the major ones.

Memory-Based Biases

  • Representativeness bias: subjective judgment made from memory by comparing the information known about the person or thing with the stereotypical member of the category. It involves ignoring base rate.
  • Availability bias: judging the probability of an event by the ease with which we can retrieve similar events from memory.
  • Imaginability bias: An event’s probability is judged by how easily it can be imagined.
  • Illusory correlation bias: the perception of a pair of events occurring simultaneously leads to an incorrect judgment regarding the probability (usually overestimated) of the two events occurring together again.

Statistical Biases

  • Base rate neglect bias: ignoring or being insensitive to base rates or prior probabilities. It leads to over-adjusting for new information.
  • Chance bias: assessing independent random events as having some inherent (non-random, even causal) relationship. We resist believing that things can really be random.
  • Conjunction bias: overestimating the likelihood of the intersection of two (or more) events – event A “and” event B occur together.
  • Disjunction bias: underestimating the likelihood of disjunctive events – either event A “or” event B can occur.
  • Sample bias: ignoring the fact that small samples have higher probabilities of leading to incorrect inferences causes a person to draw overly strong conclusions from a small sample. Small samples are especially vulnerable to Type I (false positive) and Type II (false negative) errors. This is also called the law of small numbers.

Confidence Biases

  • Desire bias: overestimating the likelihood of a desired outcome. Systematic “wishful thinking.”
  • Selectivity bias: discounting or excluding information that is inconsistent with one’s personal experience or preferences. Reducing the amount of information processed can cause probabilities to be incorrectly assessed.

Adjustment/Anchoring Biases (Individuals tend to under-adjust their initial judgments in face of uncertainty or randomness.)

  • Anchor and adjustment heuristic: choosing an initial anchor of some sort and then adjusting. The adjustment is usually insufficient relative to the anchor. This affects assessing continuous probabilities more than it affects assessing discrete probabilities.
  • Partition dependence bias: incorrectly adjusting probability assessments from the initial default assessments due to having to assess a series of related outcomes together. That is, the probability is divided or partitioned among the possible outcomes. Tend to under-adjust subsequent assessments.
  • Conservatism bias: under-adjusting a probability assessment because new information is discounted due to the decision maker anchoring on previous information. Opposite of base-rate bias where new information is weighted too heavily.
  • Regression bias: improperly assessing a probability due to difficulty comprehending that subsequent random events cause an average to regress toward the mean. If performance or measurements are random, then extreme cases will tend to be followed by less extreme ones. However, decision makers notice extreme events more often and base probability assessments on them, neglecting this regression toward the mean.