This is a quick post to share a recently published paper, How Community Feedback Shapes User Behavior, which examines the effects of ratings systems and up/down voting on social networking platforms and services. I then discuss some questions it raises for online social learning.
Here’s the abstract to How Community Feedback Shapes User Behavior:
“Social media systems rely on user feedback and rating mechanisms for personalization, ranking, and content filtering. However, when users evaluate content contributed by fellow users (e.g., by liking a post or voting on a comment), these evaluations create complex social feedback effects. This paper investigates how ratings on a piece of content affect its author’s future behavior. By studying four large comment-based news communities, we find that negative feedback leads to significant behavioral changes that are detrimental to the community. Not only do authors of negatively-evaluated content contribute more, but also their future posts are of lower quality, and are perceived by the community as such. Moreover, these authors are more likely to subsequently evaluate their fellow users negatively, percolating these effects through the community. In contrast, positive feedback does not carry similar effects, and neither encourages rewarded authors to write more, nor improves the quality of their posts. Interestingly, the authors that receive no feedback are most likely to leave a community. Furthermore, a structural analysis of the voter network reveals that evaluations polarize the community the most when positive and negative votes are equally split.”
Summary of findings
- The findings of the study appear to contradict the Skinnerian behaviourist model of operant conditioning (i.e. punishments and rewards or “sticks and carrots”).
- Up/Down-votes and commenting provide a means for social interaction and “this can create social feedback loops that affect the behavior of the author whose content was evaluated, as well as the entire community.”
- Authors of down-voted comments/posts tend to post more frequently and their comments/posts tend to be of lower quality.
- Down-voted authors are also more likely to subsequently down-vote others’ comments/posts.
- Down-voting tends to percolate throughout online communities having an overall negative effect.
- Up-voting doesn’t appear to influence authors’ subsequent comments/posts in any significant way.
- If comment/post authors receive no feedback, they are more likely to disengage with the community, i.e. fewer comments/posts and less up/down-voting.
The article concludes that ignoring negative behaviour in online communities, i.e. giving no feedback whatsoever, is more effective at discouraging it than addressing it directly, e.g. by down-voting.
How does this relate to online social learning?
Firstly, we should be cautious about drawing conclusions about online discussions and learning activities in online social learning. The researchers report that, “…we have mostly ignored the content of the discussion, as well as the context in which the post appears… “, and both content and context can have significant and far-reaching effects on the behaviour of, and interactions between, participants.
Secondly, the social dynamics of social constructivist oriented online courses can be very different. The study focused on massive groups of self-selected users participating in communities based around popular media and entertainment websites. In elearning, by contrast, we’re typically dealing with smaller cohorts of learners who, at least in an ideal world, establish an atmosphere of mutual support, shared responsibility, and an explicitly shared common purpose, effectively moderated by skilled, experienced mediators/facilitators, e.g. teachers, teaching assistants, and/or moderators.
Rethinking the design of ratings systems
In my opinion, this paper raises more questions for elearning practitioners than it answers, which is a good thing:
- How do learners use ratings systems and how does this affect their future behaviour in online learning communities? Is it significantly different to the users’ behaviour on social media sites?
- Is it possible to design ratings/feedback systems that have more positive effects or at least avoid the potential negative effects reported in the paper?
- How would the range of ratings options available to users affect the way they rate and comment, e.g. if only positive rating options are offered?
- How would providing ratings options that are more specific to the learning objectives of a particular learning activity affect the quality and quantity of comments, and the quantity of ratings?
- Which factors most significantly affect learners’ behaviour in online learning communities with regard to ratings and comments? For example, does the degree of familiarity, mutual respect, and trust affect how learners respond to negative and critical ratings and comments?
Some example suggestions
In an earlier article, Implementing star-ratings in Moodle, I described how teachers and curriculum developers can create custom ratings in Moodle. As well as simple star-ratings, I listed some possible options, including Likert scales, prompts, showing interest, and expressing personal alignment, e.g. “This is(n’t) like me” statements. Most of these omit negative or neutral ratings. My reasoning is that, to give negative or critical feedback well, learners and/or teachers have to take the time and effort to write sensitively phrased, personalised, specific, reasonable, constructive criticism, ideally with some kind of “what to do next”, so that it isn’t just negative or critical but also helpful and purposeful.
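To make this concrete, a positive-only rating set of this kind can be defined in Moodle as a custom scale: a single comma-separated list of labels, ordered from least to most positive, added under Site administration → Grades → Scales and then selected when configuring ratings in a forum or other activity. The labels below are hypothetical examples, not taken from the earlier article:

```
Interesting, This helped me, This changed my thinking
```

Because no label is negative or neutral, withholding a rating remains the only “non-positive” response, which is broadly in line with the paper’s finding that no feedback is the stronger discouragement.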
One strategy that springs to mind is to use ratings systems that, rather than suggesting learners are being graded (i.e. “good vs. bad” comments), provide a set of prompts and/or questions, making them a convenient and helpful tool for encouraging further participation. If learners have little experience of social learning and/or need some initial support and guidance, having a convenient list of prompts/questions at hand could be helpful. For example:
- How do you determine this to be true?
- Why don’t you consider a different route to the problem?
- Why does that answer make sense to you?
- What if I say that’s not true?
- Why do you think this works? Does it always work? Why?
- How do you know this is true?
- Show how you might prove that.
- Why assume this?
- How might you argue against this?
- Can you explain that in another way?
- How does this relate to [discussion topic]?
- Can you be more specific?
- Can you give us an example?
- Please tell us more about this.
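To sketch how such prompt-based feedback might be wired up, here is a minimal, hypothetical Python example (not a real Moodle API; all names are invented for illustration) in which a reader attaches a chosen prompt to a post as a reply, instead of casting an up/down vote:

```python
# Hypothetical sketch: prompt-based feedback instead of up/down votes.
# A reader picks a question from a fixed list; the platform records it
# as a reply prompt inviting the author to elaborate, rather than a score.

FEEDBACK_PROMPTS = [
    "Can you give us an example?",
    "Can you be more specific?",
    "How does this relate to the discussion topic?",
    "How might you argue against this?",
]

def leave_feedback(post_replies: list, prompt_index: int, reader: str) -> list:
    """Append the chosen prompt to a post's replies; no rating is stored."""
    prompt = FEEDBACK_PROMPTS[prompt_index]
    post_replies.append({"from": reader, "prompt": prompt})
    return post_replies

replies = leave_feedback([], 0, "peer_reviewer")
# replies[0]["prompt"] -> "Can you give us an example?"
```

The design choice here is that every available action invites further discussion; there is simply no way to register a bare negative judgement.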
It’s worth mentioning that a strong characteristic of these questions and prompts is that they are intended to stimulate analytical and critical thinking, which we usually expect to hear from teachers and mentors rather than from our peers. Learners don’t automatically assume that such questions and prompts are welcome or appropriate coming from their peers. For them to be positive and productive, participants should already have been inducted into a familiar, trusting, mutually respectful and supportive group of peers who all explicitly share a common purpose, i.e. learning objectives and/or “big/essential questions”, in a collaborative climate.
Image credit: Wikimedia Commons