4. Trust & Reputation Flashcards

1
Q

What two things can the platform do to ensure trust among users?

A

Reputation systems and design choices.

2
Q

How does a reputation system facilitate trust?

A

By ensuring that the behaviour of a user (e.g. a seller) will be revealed in the review system, which encourages them to behave well and maintain high product standards.

3
Q

What are the two main design decisions a platform has to make?

A
  1. What information users on each side can get about each other when they decide to interact.
  2. How much flexibility they should have in choosing who to transact with.
4
Q

What is the challenge with review systems?

A

That only a small fraction of users actually leave reviews. A review is a public good –> the private benefit of writing one is small –> there tend to be few reviews. Reviews are also often biased, since a "certain" type of person, perhaps with strong opinions, leaves them.

5
Q

How can a platform increase content generation in the review/reputation systems? (2)

A
  1. Seeding reviews - early-stage platforms can sometimes hire reviewers, e.g. Yelp when launching in a new city.
  2. Incentivising the generation of reviews - encourage users to contribute via financial or non-financial incentives (gifts, norms, status symbols).
6
Q

How can incentivising reviews via financial or in-kind incentives backfire?

A

Because monetary payments may undermine intrinsic motivation to contribute, and compensation can generate bias in favour of writing positive reviews –> if people know about this bias, they discount the value of reviews. It may also lead to a higher quantity of reviews but lower quality.

7
Q

What is an example of where non-pecuniary incentives were important for content generation?

A

The block of Wikipedia in mainland China, which led to less contribution from Chinese-speaking users in neighbouring regions who were not themselves blocked. They were driven by pro-social motivations to contribute.

8
Q

What did MovieLens do to increase content generation (movie reviews)?

A

They used personalised information in an experiment: users who received an email saying they had rated fewer movies than the median user started rating significantly more. This suggests that social norms are important for the rate of contribution.

9
Q

How can status symbols increase content generation?

A

Because individuals care about their self-identity and reputation on the platform –> the platform can therefore confer social status, e.g. by giving loyal customers "elite" status. Status can also be peer-provided ("this review was helpful"), or take the form of badges and titles.

10
Q

What is the problem that Wikipedia faces and how can they deal with it?

A

The problem is that the retention rate of new contributors is very low, and contributions are highly concentrated among a few editors. By giving new contributors symbolic awards, they are more likely to remain active afterwards.

11
Q

Why are newcomers who are given symbolic awards more likely to stay on Wikipedia and contribute? What drives the results?

A

Various mechanisms could be at work, but enhanced self-identification with the community is one explanation. This is supported by the fact that awarded newcomers are also more willing to take on boring/administrative tasks.
Status/reputation concerns, as well as the value of being recognised, are also possible mechanisms.

12
Q

What are the challenges of testing whether online reviews actually matter for people's choices?

A
  • Lack of data, since much of it is private to firms. Review data are online and available, but you need to match them with sales data etc. And even with both reviews and sales data, it is still hard to identify causal effects, because better restaurants that sell more are often also better reviewed: causation or only correlation?
13
Q

What are 3 methods to test whether review systems actually matter? (Whether reviews actually influence people's choices)

A
  1. Experiments - if possible. Often requires collaborating with the platform in question, e.g. Wikipedia.
  2. Differences across platforms - platforms typically have specific features that can be exploited to identify causal effects on an outcome of interest, but these features differ across platforms.
  3. Platform peculiarities - quirks of a platform's design. Also often specific to the platform.
14
Q

3 challenges with reputation systems and their informativeness:

A
  1. Noise - always noise in reviews. Some are just not informative about the product/seller.
  2. Distortions by users - e.g. fear of retaliation (giving a better review than deserved), the fact that people who leave reviews are different (selection bias), fake reviews (e.g. planted by competitors), etc.
  3. Distortion by the platform itself.
15
Q

What types of noise can there be in ratings?

A
  • Bad understanding from users/reviewers
  • Idiosyncratic tastes
  • Uncontrollable shocks
  • Price variations
16
Q

Why are not all platforms two-sided in their review systems?

A

Because, as we have learnt, fear of retaliation can strongly affect ratings and make them unreliable: no one dares to leave negative feedback. This is what caused eBay to switch from a two-sided to a one-sided review system.

17
Q

What happened in the Airbnb experiment where hosts in the treatment group could only see the guest's review after first submitting their own review?

A

The number of reviews submitted increased, review text became more negative, and lower star ratings became more common (except 1-star, which decreased).

18
Q

Why did 1-star ratings decrease?

A

Because many 1-star ratings were given purely due to the retaliation problem: the second reviewer punishes the first for leaving a bad review.

19
Q

Why was the time between reviews reduced?

A

Because users are eager to see the other user's review and to reveal their own: i) they are curious, and ii) it is advantageous to have a review visible on the platform as soon as possible.

20
Q

Why is median/mean rating not always the best measure to present on the platform?

A

Because ratings tend to be upward biased, since people who are afraid of leaving a bad review simply skip leaving one. Thus it can be better to present the number of positive reviews divided by the total number of transactions, so that the silent transactions are included as well.
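A minimal sketch of how the two measures can diverge when unhappy buyers stay silent (function names and numbers are illustrative, not from the course):

```python
# Compare the mean of SUBMITTED ratings with the share of positive
# reviews over ALL transactions, silent ones included.

def mean_rating(ratings):
    """Mean of the ratings that were actually submitted."""
    return sum(ratings) / len(ratings)

def positive_share(ratings, total_transactions, threshold=4):
    """Positive reviews divided by all transactions (incl. silent ones)."""
    positive = sum(1 for r in ratings if r >= threshold)
    return positive / total_transactions

# 100 transactions, but only 20 happy buyers leave a (5-star) review;
# the 80 unhappy or indifferent buyers stay silent.
ratings = [5] * 20
print(mean_rating(ratings))          # 5.0 - looks perfect
print(positive_share(ratings, 100))  # 0.2 - far less flattering
```

The second measure punishes sellers for silence, which is exactly what the upward-biased mean rating hides.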

21
Q

Can platforms themselves benefit from providing more accurate ratings?

A

Yes, for example when eBay introduced the EPP (number of positive reviews / total transactions), it led buyers to be more likely to return to eBay and shop on the platform.

22
Q

Since reviews are voluntary, selection bias is a problem in the review system. What are two main levels of self-selection?

A
  1. Reviews are left by users who chose to purchase a given product/service and/or to interact with a certain counterpart. This can lead to upward bias, because early adopters differ from late buyers.
  2. Reviews are left by users who chose to leave a review after their purchase - which is often those who experienced something really bad or really good. Extreme experiences are over-represented.
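The second selection level can be sketched with a hypothetical simulation (the review probabilities and the uniform satisfaction distribution are assumptions for illustration):

```python
# Hypothetical simulation: only buyers with extreme experiences tend to
# review, so submitted ratings are more polarised than true satisfaction.
import random
from collections import Counter

random.seed(0)

# True experiences: uniform over 1..5 stars (assumption).
true_ratings = [random.randint(1, 5) for _ in range(10_000)]

# Assumed review probabilities: extreme experiences (1 or 5 stars)
# are reviewed 60% of the time, middling ones only 5%.
review_prob = {1: 0.6, 2: 0.05, 3: 0.05, 4: 0.05, 5: 0.6}
submitted = [r for r in true_ratings if random.random() < review_prob[r]]

print(Counter(true_ratings))  # roughly uniform across 1..5
print(Counter(submitted))     # dominated by 1s and 5s: polarised reviews
```

Even though the true distribution is flat, the visible reviews cluster at the extremes, which is why raw review distributions should be read with the silent middle in mind.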
23
Q

How can we test for fake reviews? Example of a study.

A

We can compare different platforms and examine differences in the reviews posted on them for the same products. This was studied on Expedia (only verified customers can leave a review) and TripAdvisor (anyone can leave a review).
To test this, we can look at reviews of hotels close to a competitor (more fake reviews?), hotels that are part of a smaller entity (more fake positive reviews?), and hotels close to competitors owned by smaller entities (more fake negative reviews?).