Using Wine Studies To Predict Produce Demand Based On Peer Reviews, Expert Reviews And Local/National Preferences
Jim Prevor’s Perishable Pundit, December 10, 2019
How do consumers come to value different products? It is a question the produce industry doesn’t ask often enough. All too often, producers produce what they know how to produce, or want to produce, rather than what consumers actually want.
In politics, it has become obvious that the ability of politicians to communicate directly with people has reduced the influence of those institutions that previously held a monopoly on getting information out to consumers. So President Trump doesn’t have to be as concerned with staying in the good graces of the editors of The New York Times because he has, in Twitter, a direct line to voters.
Nowadays, restaurants worry more about their Yelp reviews than the local newspaper restaurant reviewer.
When we learned that two Cornell professors were collaborating to better understand how peer reviews — common in social media — impact value perceptions, we were intrigued. We asked Pundit Investigator and Special Projects Editor, Mira Slott, to find out more:
Ruth and William Morgan Associate Professor
in Applied Economics and Management
Cornell SC Johnson College of Business
Ithaca, New York
Q: It’s nice to meet you, Aaron. We’re excited you’ll be teaming up with Brad for an educational micro-session. Brad has been a treasured veteran of the New York Produce Show since its inception, as we celebrate the show’s 10th edition. Brad, your research has run the gamut on topics important to the produce industry. Sometimes, your research extends beyond produce, but with myriad lessons for our industry embedded within.
Aaron: This is the case with the study we’ll be presenting: Dissonant opinions and the home bias — Consumer response to crowd-sourced reviews for wine.
Brad: At first, we weren’t sure this research was enough about the produce industry to be a candidate as an interesting presentation for The New York Produce Show. Then, Jim Prevor was on campus visiting and we invited him to review our study, as we were collecting some data. Jim came up afterwards and started asking some thoughtful questions, raising the idea that even though it’s not about produce, perhaps the research has a lot of implications for produce.
Q: What’s behind those thoughtful questions? Can you describe the study and how it came about?
Aaron: The way that consumers make decisions about purchases is complicated in markets that include a wide variety of choices and credence, or non-observable, attributes. In many markets (such as books, movies, hotels, automobiles and wine), we observe a substantial amount of information that is made available to consumers. This can be objective information with facts and details, or subjective information based on opinions.
In the age of the Internet and social media, we see a rise in the dissemination of non-expert opinions from peers or crowd-sourced reviews in many markets. And there is some evidence that such reviews can be very influential and have the capacity to increase or decrease demand for credence products.
Brad: We live in a world now where people are giving opinions about a lot of stuff. Not too long ago, while there were still plenty of opinions, they were mostly expert opinions; someone trained and versed in an area who we would look to for their judgement on a new hotel, for instance, or a new cultivar of apples. But recently, we’ve seen many more amateur opinion-makers in the world. These crowd-sourced reviews. We see it on Amazon. I often view what the ratings are among the consumers who purchased product previously.
Aaron: You see this in the restaurant business and in the hotel industry. You do see it with online food, although it’s a relatively small market now, but it’s growing. You see it in Whole Foods, actually within Amazon, and a little bit in Amazon Fresh.
Q: The fresh produce industry is just starting to tackle the complexities of an omni-channel future and learning quickly the myriad potential pitfalls in online selling. Last year, we ran an intense Amsterdam Produce Summit solely devoted to this important topic. World thought- and practice-leaders gathered to share retail, as well as foodservice strategies, to seize omni-channel success.
Unique issues inherent to fresh fruits and vegetables, i.e., the perishability factor, Mother Nature’s whims, quality inconsistencies, and the ability to develop and control brand status and reputation amid a commodity-heavy business, create a new layer of complications for online selling. There was some tough talk about the potential impacts of positive or negative consumer reviews on product sales... In some instances, the negative product review can have more to do with the problems that occurred during the handling and delivery process to the consumer’s home...
Brad: In examining food and beverages and where this idea is most prevalent, the wine market is an ideal case study. There are lots of experts who will give you their opinion about wine. But there are also lots and lots of crowd-sourced reviews from consumers voicing their opinions about wine. So, we’re curious about the impact of these crowd-sourced reviews on demand for food and beverages, and this study focuses on wine because that seemed like the natural choice.
In the real world, there are many expert opinions and crowd-sourced reviews that exist already for wine.
Aaron: These reviewing sites for wine are incredibly well-developed. They’re still emerging in other food and beverage fields, but certainly for wine they are well-established and have been around for a number of years now.
Q: When you say well-developed, can you provide some perspective, from a structural standpoint, do you mean consumers can find these reviews in many places, from reputable sources and wine experts, as well as from wine connoisseurs and more mainstream consumers?
Brad: Yes. I think that’s right. There are several services that offer these types of reviews. And there’s a lot of evidence that wine consumers use this information in some capacity to make their decisions.
There are many of these websites, but there’s one called Vivino. It’s a platform where people can give their opinion about a particular wine. I think there are millions of reviews on this website for thousands of wines. So, it’s very active, and that’s just one example. There are also other websites that do similar things.
Aaron: Yes. And most places where you can purchase wine have a consumer review component. Wine.com, for example, is a big one that has lots of reviews. We’re seeing this in other alcohol markets as well. Beer is another example where there’s a huge following. There are significant consumer review sites for beer.
Wine has a really nice counterpart of having a well-established expert review component, places like Wine Spectator and Robert Parker Wine Advocate.
Q: Do you think there’s also a connection with the pairings of food and wine?
Brad: I do, especially in places where food and wine are sold in the same store, when that store does the messaging. They might have reviews of a particular food and a particular wine, and those reviews could appear together.
When you go to Amazon, sometimes you’ll pick a product, and there will be some reviews for that product, and then it suggests, you might also like this product, and you can go through and see some of those reviews. I think this could be one of the areas of interest from this study for the produce industry, where there could be greater linkages between food and wine or other pairings. What we’re finding in the alcohol market may tell us insights applicable to the produce industry.
As people buy more and more produce online and these crowd-sourced reviews become more common, some of the things we’re discovering in the wine market may also be true for online sales of produce.
Q: It should be interesting to engage an audience of produce industry executives with the key findings and analyze how they translate over to the produce industry...
Brad: I do think so. I believe there are potential similarities, in particular with some of the things we’re studying here. We can give you a big overview without getting too bogged down in many of the details, if that’s helpful.
Q: OK. That would be great.
Brad: We do this experiment where we introduce consumers to three different wines. All the consumers are in the United States. One of the wines is from New York, one is from France, and one from Spain. They are sparkling wines, if that matters.
Aaron: Our experiment tests participants’ willingness to pay for products in a bidding process when they are exposed to multiple pieces of information and peer-review treatments.
Brad: We share information with consumers, each time asking what they would be willing to pay for each of the three wines. For example, we tell them where the wine is from and how the wine was made. We give them an opportunity to taste the wine. We tell them what the experts think about the wine, and what the community and their peers are saying about the wine. After each of these, we follow up with questions about what they are willing to pay for the wine.
In the first rounds of information about what the experts were saying about the wine, we use information that says these wines are similar. They have similar expert scores, and they’re made in similar ways. They’re produced in different parts of the world, but the words we use to describe these wines are mostly the same.
In the final round of information we share with them, it’s peer scores that come from the community; these are non-expert crowd-sourced reviews. Different people will get information suggesting their peers like all three wines the same, or like one of the wines more than the others, or one less than the other two. This is where we’re really trying to home in on the impact of the peer reviews on demand for the wine, or how they affect willingness to pay, when the peer scores are the same, or when one of the scores is rated higher or lower than the others.
We do this equally, where we make Spanish wine rated higher, or we make the French wine rated higher, or we make the New York wine higher or lower, etc. We try to have enough people exposed to each of these different possibilities.
Q: I’m intrigued to find out what happens...
Brad: A couple of things we’re curious about here, and I think all of these have some links to the produce industry… We can use the data we collect from this little game we play with these consumers to ask three questions. The first one: Are consumers more influenced by subjective information or by objective information and facts? Are they persuaded by the real stuff of how it’s produced, or by somebody saying something positive or negative? Which of those two, subjective or objective information, matters most?
The second question, which I think we will focus on at the show because it has interesting implications for produce, is trying to understand the relative importance of a positive or negative review. When I say relative, it’s because in our study, we had Spanish, French and New York wines in the mix.
Q: So, this could perhaps parallel consumer perceptions with local produce versus imported produce. In the case of wine, aren’t there long-established, pre-conceived notions of where the best wine comes from?
Brad: When the peers say something positive or negative about the home product or something positive or negative about the foreign product, in our case the French product probably has an established reputation. Some people might think a French champagne is the best product compared to a New York sparkling wine. And if that French champagne with the highest reputation gets a positive or negative review, what does it do to that product, but more importantly, does it also impact the other two products? What is the relative impact of a positive or negative review? Does it have any spillover effects on the other products?
I’ll give you an example: If the French champagne gets a really low score, presumably that’s going to decrease demand for the champagne, but does that increase demand for the other two wines, having a relatively negative review for the French wine, and neutral scores for the other two. What is the tradeoff between these three products?
Aaron: The way I would frame it, the idea is really this: Does a low score in one wine cause consumers to reevaluate their assessment of another wine? If the score on the New York is higher than the French wine, does that cause an absolute shift of demand for that wine? What’s the interrelation between those two wines?
Q: You mentioned that in your lab experiment, you had the consumers taste the wines to see how that influenced outcomes. There are no wine tastings when ordering online. The same occurs with buying fresh produce off a website. With fresh fruits and vegetables, consumers have been used to the visual, tactile and aromatic experience of shopping the produce department. Also, with fresh produce, there are additional issues of sizing, packaging, and quality variation, among other variables, which can be difficult to distinguish online...
Aaron: In the larger context, and why this might apply to other sectors like the produce industry: to assess a food item, you actually have to consume it. I can look at a piece of fruit and assess it, but my opinion is not fully formed until I consume the goods. This all falls under the umbrella of experience goods, and that’s why we think it might be relevant in produce as well.
Brad: The third thing I want to do is use the data we’re collecting to see whether people who are more familiar with the New York product are more or less impacted by a positive or negative crowd-sourced review. So, if you live here in New York, you probably already have an opinion about this local product; are these crowd-sourced reviews more or less important for the people who live in the local market?
You might think, oh, those people already have formed an opinion, and if it gets a positive review online, great, they’ll say, oh great, and like it even more, and if it gets a negative review online, one hypothesis might be they ignore it. Relative to someone living in California, a negative review on New York wine might lower its value, and a positive review might help its value. It might be different for someone living closer to the product.
Q: Did that hypothesis ring true? Could you share some of the key findings from your data?
Brad: To start, it’s important to say, we did this survey with some 500 people distributed in different age brackets. There were about 300 people between the ages of 21 and 35, and maybe 80 percent were under 50 years old.
In general, if you asked people before giving them too much information, they would say the French wine was worth $29, the Spanish wine was worth $24, and the New York wine was worth $21. That was the average starting point for the people who took part in this exercise. When we started to give them some information, telling them how the wine was made, the scores from Wine Spectator, and then the online peer scores, all these things had a positive impact on how much they valued the wine. This is true for all three wines on average.
And then, what we think is interesting is the following:
When the Spanish wine, for instance, got a positive, glowing report from the peers, then people on average would be willing to pay $5 extra for that wine, which brings it in line with the initial average price we had gotten for the French wine. This is a positive, glowing report for Spanish wine relative to the New York and French wines, when they got decent average peer reviews in line with what the experts were giving them.
But when the New York wine got a positive, glowing report from peers, while the Spanish and French wines got a decent grade, it increased people’s willingness to pay for New York wine by nearly $7 a bottle. That’s an average boost of 33 percent in their willingness to pay.
Q: Does the number of positive peer reviews matter?
Brad: That’s a good question. We’re giving them a lot of peer reviews; what we’re doing is summarizing them and calling it the average of recent scores from the online community. They’re under the impression this is not just one person’s opinion.
The other thing we wanted to highlight is that when the Spanish wine got a low score (again a relative test, where the French and New York wines got decent average scores in the online peer community), the value, or the willingness to pay, for the Spanish wine went down by $5.
Aaron: It’s almost symmetric. When the Spanish wine gets a good score, it goes up by $5, and when it gets a bad score it goes down by $5.
When New York gets a high score, it gets a bump of $7, but when it gets a low score it only goes down by $5.
Q: Why is New York getting a bigger bump? Is it because it’s domestic?
Brad: I think it may be that people didn’t have a great expectation for New York wine to start with, so if it gets a negative score, they almost half expected it anyway. That’s perhaps one explanation. Aaron, what are your thoughts?
Aaron: I was just going to say, I think New York is not as well-established as a wine region, so that informs expectations. French champagne is highly regarded across the world. So, a negative review is really punishing them in a sense because it’s supposed to be an excellent wine with the best reputation.
Whereas, with New York wine, people are not as familiar with it. I won’t say New York wine has a bad reputation, but it also doesn’t have a very strong reputation either. It’s just that people aren’t very familiar with it. They’re not going into it with that many expectations.
Brad: There is one other result I will share with you, because I think this is interesting.
When the Spanish wine got a relatively glowing report, it increased demand by $5. But surprisingly for me, when the French wine got a relatively low score from the online crowd-sourced reviews, that actually increased the value of the Spanish wine in consumers’ eyes by $6, whereas a good score for the Spanish wine only increased its value by $5. So, Spanish wine is better off when the competition gets a low score.
It’s almost true for New York wine too. A low score for French wine also boosts consumers’ willingness to pay for New York wine. When New York wine itself gets a high score, it gets a boost of $7, and a poor score for French wine boosts New York wine by $6. So, New York wine is better off when it gets a high score itself, but a close second is for the French wine to get a low score. In the case of Spanish wine, it’s the opposite.
Brad: I think consumers see Spanish and French wines as closer substitutes.
Aaron: These are hypotheses, and we have to dig into the data a little bit more. We collect some additional information about all of our participants after the experiment, such as how familiar they are with each of the wines, and we’ll look at how that familiarity plays into this as well.
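For readers who want the reported averages in one place, here is a small sketch that tabulates the dollar figures quoted in the interview. The numbers come straight from Brad and Aaron’s remarks above; the lookup-table framing and the function names are ours, purely for illustration.

```python
# Average willingness-to-pay (WTP) figures reported in the interview, in dollars.
# Baselines before any extra information: French $29, Spanish $24, New York $21.
baseline = {"French": 29, "Spanish": 24, "New York": 21}

# Reported average WTP shifts under the different peer-review treatments.
# Keys: (wine whose WTP moves, treatment shown to participants).
effects = {
    ("Spanish", "own high peer score"): +5,
    ("Spanish", "own low peer score"): -5,
    ("Spanish", "low peer score for French"): +6,   # spillover from the flagship
    ("New York", "own high peer score"): +7,
    ("New York", "own low peer score"): -5,
    ("New York", "low peer score for French"): +6,  # spillover helps the local wine
}

def pct_change(wine, treatment):
    """Percent change in WTP relative to the wine's baseline."""
    return 100 * effects[(wine, treatment)] / baseline[wine]

# The roughly 33 percent boost Brad mentions for New York wine: $7 on a $21 base.
print(round(pct_change("New York", "own high peer score")))  # 33
```

Laid out this way, the asymmetry is easy to see: the Spanish wine’s responses are symmetric (+$5/−$5), while New York gains more from praise (+$7) than it loses from criticism (−$5), and both gain from a low French score.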
Brad: We’ll have more of this analytic work to share at the New York Produce Show, looking at issues in the produce industry that mirror what we’re focused on in this study.
Think about the same type of story: you have a produce item that is grown domestically and a couple of international options. Maybe one of those has greater familiarity with customers, or perhaps a higher-level reputation than the others. Then we look at how people purchase and experience these goods: they taste them and then give some reviews. The idea is that a positive review for one product might have some positive implications for it, but a negative review for one of the competing products might actually have a much bigger impact, depending on how familiar that competitor is or what sort of reputation it had in the first place.
For instance, you might think of Italian apples or French apples, or U.S. apples, or New York apples compared to Washington State apples...
Aaron: There are lots of analogies we can think of, maybe San Marzano tomatoes from Italy compared to Italian-style tomatoes grown in the U.S.
Brad: That’s an even better example as it relates to our study, because there is champagne from France, and then there are these other versions of sparkling wine that are like champagne but are not. This would be especially applicable to those products for which there is a patent, a protected variety, or an appellation name associated with them.
Aaron: The other thing… there is a lot of new specialty produce that might be branded. Historically, produce has been more of a commodity, but more and more we’re seeing this specialized branding of produce, specific cultivars of fruits and vegetables. It does have some similarities to wine, in the sense that there are branded differences between some of these different products.
Brad: We’re hoping industry executives can weigh in on this and help us make these links. We’re thinking of some of these specialized produce items, or products that have an appellation or place of origin that is really distinct to them, and of other products in the world that compete with them, sharing the same category but not the same level of reputation or familiarity from the onset.
Going back to those three questions, we’ll walk through our research results with the audience, and that should make for a nice discussion, whether to get answers or just food for thought.
What matters more for consumers of wine and consumers of produce… is it more objective information or subjective information?
Q: Is it fair to say the crowd-sourced peer information is more influential than the expert information?
Brad: I think that’s a fair statement. However, it’s not simply the presence of the expert scores, but it’s the relative expert scores. We also find some evidence that expert scores matter, and we think those expert scores also might be important in produce.
On the second question, I think these relative opinions matter. How your product fares matters: what reviews you receive, but also what scores others get, in particular whatever the flagship product is in that category. Those scores seem to matter a lot, especially to the competition, and it depends how closely you’re associated with that product.
Aaron: It’s also the prestige of that competitor. If it’s just some random competitor in the category without much market share or recognition, its getting a bad score doesn’t have as significant an effect on you. Which product you’re competing with in your category seems to be an important factor.
Brad: On the third point, we have some preliminary evidence with these online peer reviews that suggests there is some sort of home bias effect. People tend to discount negative reviews when they feel more closely connected to their product, when it’s a local product, and they tend to accept positive reviews. They are influenced by negative reviews, but less so than a non-local consumer. We’ll dig into this more at the New York Show.
Aaron: There is an asymmetry in how positive and negative reviews affect consumer demand. We didn’t go into it, but there is certainly a large literature on asymmetry in how consumers respond to positive and negative reviews, so this fits nicely into that larger context.
Broadly speaking, there have been some studies, not in wine and food, but in general, on how consumers respond to negative reviews versus positive reviews. There’s some evidence to suggest that consumers tend to jump on positive reviews, paying more attention to them, and discount negative reviews. In a simplistic model, a consumer is willing to pay $10 for a product; if it gets a positive review, they’re willing to pay $15, or $5 more. But in the reverse, if that product gets a negative review, the consumer only decreases their willingness to pay by $1, rather than by $5. The asymmetry could go in either direction, but you’re not equally affected by positive and negative information.
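Aaron’s simplistic model can be written out as a toy update rule. The dollar shifts below are only his illustrative numbers from the $10 example, not estimates from the study, and the function is our own sketch of the idea:

```python
def updated_wtp(base, review, pos_shift=5.0, neg_shift=1.0):
    """Toy asymmetric-response rule: willingness to pay rises by pos_shift
    after a positive review but falls by only neg_shift after a negative one.
    The default values mirror the interview's illustration; as Aaron notes,
    the asymmetry could also run the other way."""
    if review == "positive":
        return base + pos_shift
    if review == "negative":
        return base - neg_shift
    return base  # no review / neutral information

print(updated_wtp(10, "positive"))  # 15.0
print(updated_wtp(10, "negative"))  # 9.0
```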
Brad: Aaron and I will have new data and analyses to bring to the session for an interactive discussion.
Q: We can’t wait.
There are so many interesting questions. One wonders how a pre-existing reputation influences the crowd-sourced reviews. When we do focus groups, we have to have attendees write things down on pieces of paper and then reveal all at once. Otherwise, one person’s opinions may influence other opinions.
It also would be interesting to study how durable the impact is of crowd-sourced reviews. Do people disregard them after more experience with the product?
There is a truism in politics that the influence of a recommendation is inversely related to the importance of the job. So one may well listen to one’s union, or an expert editorial page, or a politically savvy cousin when deciding whom to vote for as a State Assemblyman, where you don’t know much. But the same influencers don’t swing your vote for President, where you are more informed.
Maybe the same applies to wine – or an apple.
In any case, this will be a robust discussion. Come be a part of it at The New York Produce Show and Conference.
You can register right here.