
Thread: Why nice people become mean online

  1. #1
    Join Date
    Mar 2017
    Posts
    61

    Default Why nice people become mean online

    Why nice people become mean online

    April 3, 2018


    On the evening of February 17, Professor Mary Beard posted on Twitter a photograph of herself crying.

    The eminent University of Cambridge classicist, who has almost 200,000 Twitter followers, was distraught after receiving a storm of abuse online.
    This was the reaction to a comment she had made about Haiti.

    She later added: "I speak from the heart (and of cource I may be wrong). But the crap I get in response just isnt on; really it isnt."

    Those tweeting support for Beard -- irrespective of whether they agreed with her initial tweet that had triggered the abusive responses -- were themselves then targeted. And when one of Beard's critics, fellow Cambridge academic Priyamvada Gopal, a woman of Asian heritage, set out her response to Beard's original tweet in an online article, she received her own torrent of abuse.
    Such constant barrages of abuse, including death threats and threats of sexual violence, are silencing people, pushing them off online platforms and further reducing the diversity of online voices and opinion. And the abuse shows no sign of abating.
    A survey last year found that 40% of American adults had personally experienced online abuse, with almost half of them receiving severe forms of harassment, including physical threats and stalking. Seventy percent of women described online harassment as a "major problem."


    Our human ability to communicate ideas across networks of people enabled us to build the modern world. The internet offers unparalleled promise of cooperation and communication between all of humanity.
    But instead of embracing a massive extension of our social circles online, we seem to be reverting to tribalism and conflict. While we generally conduct real-life interactions with strangers politely and respectfully, online we can be horrible.
    How can we relearn the collaborative techniques that enabled us to find common ground and thrive as a species?
    Being nice to other people

    "Don't overthink it, just press the button!"
    I'm playing in a so-called public goods game at Yale University's Human Cooperation Lab. The researchers here use it as a tool to help understand how and why we cooperate, and whether we can enhance our prosocial behavior.
    I'm in a team of four people in different locations, and each of us is given the same amount of money to play with. We are asked to choose how much money we will contribute to a group pot, on the understanding that this pot will then be doubled and split equally among us.
    Even though everyone is better off collectively by contributing to a group project that no one could manage alone -- in real life, this could be paying towards a hospital building or digging a community irrigation ditch -- there is a cost at the individual level. Financially, you make more money by being more selfish.
    "If you think about it from the perspective of an individual," says lab director David Rand, "for each dollar that you contribute, it gets doubled to $2 and then split four ways -- which means each person only gets 50 cents back for the dollar they contributed."
    Rand's team has run this game with thousands of players. Half of them are asked, as I was, to decide their contribution rapidly -- within 10 seconds -- whereas the other half are asked to take their time and carefully consider their decision.
    It turns out that when people go with their gut, they are much more generous than when they spend time deliberating.
    "There is a lot of evidence that cooperation is a central feature of human evolution," says Rand. Individuals benefit, and are more likely to survive, by cooperating with the group. And being allowed to stay in the group and benefit from it is reliant on our reputation for behaving cooperatively.
    Rather than work out every time whether it's in our long-term interests to be nice, therefore, it's more efficient and less effort to have the basic rule: be nice to other people. That's why our unthinking response in the experiment is a generous one.
    In a further experiment, Rand gave some money to people who had played a round of the game. They were then asked how much they wanted to give to an anonymous stranger.
    It turned out that the people who had got used to cooperating in the first stage gave twice as much money in the second stage as the people who had got used to being selfish did. So is there something about online social media culture that makes some people behave meanly?
    'Making outrage a habit'

    I trudge a couple of blocks through driving snow to find Molly Crockett's Psychology Lab, where researchers are investigating moral decision-making in society. One area they focus on is how social emotions are transformed online, in particular moral outrage.
    Brain-imaging studies show that when people act on their moral outrage, their brain's reward center is activated -- they feel good. This makes them more likely to intervene in a similar way again.
    So, if they see somebody acting in a way that violates a social norm, by allowing their dog to foul a playground, for instance, and they publicly confront the perpetrator about it, they feel good afterward. And while challenging a violator of your community's social norms has its risks -- you may get attacked -- it also boosts your reputation.
    In our relatively peaceful lives, we are rarely faced with outrageous behavior, so we rarely see moral outrage expressed. Open up Twitter or Facebook and you get a very different picture.
    Recent research shows that messages with both moral and emotional words are more likely to spread on social media -- each moral or emotional word in a tweet increases the likelihood of it being retweeted by 20%.
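    Taken at face value, that effect compounds per word. A back-of-the-envelope sketch follows; this is my own illustration, since the study reports only the per-word figure, and treating the increases as multiplicative is an assumption.

```python
# Each moral-emotional word is reported to raise a tweet's retweet
# likelihood by about 20%; assuming the boosts compound multiplicatively:
def relative_retweet_rate(n_moral_emotional_words, per_word_boost=1.20):
    return per_word_boost ** n_moral_emotional_words

for n in range(5):
    print(f"{n} words -> {relative_retweet_rate(n):.2f}x baseline")
# 0 -> 1.00x, 1 -> 1.20x, 2 -> 1.44x, 3 -> 1.73x, 4 -> 2.07x
```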
    "Content that triggers outrage and that expresses outrage is much more likely to be shared," Crockett says. What we've created online is "an ecosystem that selects for the most outrageous content, paired with a platform where it's easier than ever before to express outrage."
    Unlike in the offline world, there is no personal risk in confronting and exposing someone, so there is a lot more outrage expressed online. And it feeds itself. "When you go from offline -- where you might boost your reputation for whoever happens to be standing around at the moment -- to online, where you broadcast it to your entire social network, then that dramatically amplifies the personal rewards of expressing outrage."
    This is compounded by the feedback people get on social media, in the form of likes and retweets and so on. "Our hypothesis is that the design of these platforms could make expressing outrage into a habit," Crockett explains.
    "I think it's worth having a conversation as a society as to whether we want our morality to be under the control of algorithms whose purpose is to make money for giant tech companies," she adds. "I think we would all like to believe and feel that our moral emotions, thoughts and behaviors are intentional and not knee-jerk reactions to whatever is placed in front of us that our smartphone designer thinks will bring them the most profit."
    On the upside, the lower costs of expressing outrage online have allowed marginalized groups to promote causes that have traditionally been harder to advance.
    Moral outrage on social media played an important role in focusing attention on the sexual abuse of women by high-status men. And in February, Florida teens railing on social media against yet another high-school shooting in their state helped to shift public opinion, as well as shaming a number of big corporations into dropping their discount schemes for National Rifle Association members.
    "I think that there must be ways to maintain the benefits of the online world," says Crockett, "while thinking more carefully about redesigning these interactions to do away with some of the more costly bits."
    Influencing behavior

    Someone who's thought a great deal about the design of our interactions in social networks is Nicholas Christakis, director of Yale's Human Nature Lab, just a few more snowy blocks away. His team studies how our position in a social network influences our behavior, and even how certain influential individuals can dramatically alter the culture of a whole network.
    The team is exploring ways to identify these individuals and enlist them in public health programs that could benefit the community. In Honduras, they are using this approach to influence vaccination enrollment and maternal care, for example. Online, such people have the potential to turn a bullying culture into a supportive one.
    Christakis explores the properties of social networks by creating temporary artificial societies online. "We drop people in and see how they play a public goods game, for example, to assess how kind they are."
    Then he manipulates the network. "By engineering their interactions one way, I can make them really sweet to each other. Or you take the same people and connect them a different way and they're mean jerks to each other and they don't cooperate."
    In an attempt to generate more cooperative online communities, Christakis' team has started adding bots to their temporary societies.
    He takes me over to a laptop and sets me up on a new game. Anonymous players have to work together as a team to solve a dilemma that tilers will be familiar with: each of us has to pick one of three colors, but players who are directly connected to each other must have different colors. If we solve the puzzle within a time limit, we all get a share of the prize money; if we fail, no one gets anything.
    I'm playing with at least 30 other people. None of us can see the whole network of connections, only the people we are directly connected to -- nevertheless, we have to cooperate to win.
    I'm connected to two neighbors, whose colors are green and blue, so I pick red. My left neighbor then changes to red so I quickly change to blue. The game continues and I become increasingly tense, cursing my slow reaction times. I frequently have to switch my color, responding to unseen changes elsewhere in the network, which send a cascade of changes along the connections.
    Time's up before we solve the puzzle, prompting irate responses in the game's comments box from remote players condemning everyone else's stupidity.
    Christakis rewinds the game, revealing the whole network. I see now that I was on a lower branch off the main hub of the network. Some of the players were connected to just one other person, but most were connected to three or more. Then Christakis reveals that three of these players are actually planted bots. "We call them 'dumb AI,' " he says.
    "Some of these bots made counterintuitive choices. Even though their neighbors all had green and they should have picked orange, instead they also picked green." When they did that, it allowed one of the green neighbors to pick orange, "which unlocks the next guy over, he can pick a different color and, wow, now we solve the problem." Without the bot, those human players would probably all have stuck with green, not realizing that was the problem.
    By adding a little noise into the system, the bots helped the network to function more efficiently. Perhaps a version of this model could involve infiltrating the newsfeeds of partisan people with occasional items offering a different perspective, helping to shift people out of their social media comfort-bubbles and allow society as a whole to cooperate more.
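    The puzzle Christakis describes is essentially distributed graph coloring. Below is a minimal simulation of the idea: a small ring network, a greedy rule for the human players, and a "dumb AI" bot that occasionally picks a random color. The graph, noise level, and update rule are my own assumptions for illustration, not the lab's actual code.

```python
import random

COLORS = ["green", "blue", "orange"]

def step(graph, colors, bots, noise=0.1):
    """One asynchronous update: a randomly chosen node reacts to its neighbors."""
    node = random.choice(list(graph))
    neighbor_colors = {colors[n] for n in graph[node]}
    if node in bots and random.random() < noise:
        colors[node] = random.choice(COLORS)           # bot: inject a little noise
    else:
        free = [c for c in COLORS if c not in neighbor_colors]
        if colors[node] in neighbor_colors and free:   # human: resolve conflicts greedily
            colors[node] = random.choice(free)

def solved(graph, colors):
    return all(colors[a] != colors[b] for a in graph for b in graph[a])

# A six-node ring where everyone starts stuck on green; node 0 is a bot.
graph = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
colors = {i: "green" for i in graph}
bots = {0}

steps = 0
while not solved(graph, colors) and steps < 10_000:
    step(graph, colors, bots)
    steps += 1
print(f"solved={solved(graph, colors)} after {steps} steps")
```

    In Christakis' experiments it is the bots' occasional "mistakes" that shake human players out of locally stuck configurations; here the noise parameter plays that same role.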
    Triggers for trolling

    "You might think that there is a minority of sociopaths online, which we call trolls, who are doing all this harm," says Cristian Danescu-Niculescu-Mizil, at Cornell University's Department of Information Science. "What we actually find in our work is that ordinary people, just like you and me, can engage in such antisocial behavior. For a specific period of time, you can actually become a troll. And that's surprising."
    Danescu-Niculescu-Mizil identifies two main triggers for trolling: the context of the exchange -- how other users are behaving -- and your mood. "If you're having a bad day, or if it happens to be Monday, for example, you're much more likely to troll in the same situation," he says. "You're nicer on a Saturday morning."
    He has built an algorithm that predicts with 80% accuracy when someone is about to become abusive online. This provides an opportunity to, for example, introduce a delay in how fast they can post their response.
    If people have to think twice before they write something, that improves the context of the exchange for everyone: you're less likely to witness people misbehaving, and so less likely to misbehave yourself.
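    The article doesn't say how such a delay would be implemented. As a purely hypothetical sketch, a platform could gate submission behind a classifier score, where predict_abuse_probability stands in for a trained model like the one described; all names, the threshold, and the delay length are assumptions.

```python
import time

def predict_abuse_probability(draft_text, recent_context):
    """Hypothetical stand-in for a trained abuse classifier.

    Per the article, the real model uses conversational context and
    mood-correlated signals; this placeholder just returns a fixed score.
    """
    return 0.5

def submit_post(draft_text, recent_context, threshold=0.8, delay_seconds=60):
    """Impose a cooling-off delay when a draft looks likely to be abusive."""
    score = predict_abuse_probability(draft_text, recent_context)
    if score >= threshold:
        print(f"This post looks heated ({score:.0%} risk). "
              f"It will be sent in {delay_seconds}s; edit or cancel until then.")
        time.sleep(delay_seconds)  # a real system would schedule, not block
    print("posted:", draft_text)

submit_post("thanks for the tips on the catfish bite!", recent_context=[])
```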
    The good news is that, in spite of the horrible behavior many of us have experienced online, the majority of interactions are nice and cooperative. Justified moral outrage is usefully employed in challenging hateful tweets.
    A recent British study looking at anti-Semitism on Twitter found that posts challenging anti-Semitic tweets are shared far more widely than the anti-Semitic tweets themselves. Most hateful posts were ignored or only shared within a small echo chamber of similar accounts. Perhaps we're already starting to do the work of the bots ourselves.


    As Danescu-Niculescu-Mizil points out, we've had thousands of years to hone our person-to-person interactions, but only 20 years of social media. As our online behavior develops, we may well introduce subtle signals, digital equivalents of facial cues, to help smooth online discussions.
    If social media as we know it is going to survive, the companies running these platforms are going to have to keep steering their algorithms, perhaps informed by behavioral science, to encourage cooperation rather than division, positive online experiences rather than abuse.
    As users, we too may well learn to adapt to this new communication environment so that civil and productive interaction remains the norm online as it is offline.
    "I'm optimistic," Danescu-Niculescu-Mizil says. "This is just a different game and we have to evolve."





  2. #2

    Default

    Quote Originally Posted by commiechew View Post
    Why nice people become mean online

    ...

    I have my own personal experience on that subject. I joined this website to talk about fishing and to make some new friends. Well, as I was minding my own business writing up my park fish reports, I was suddenly and viciously attacked by some "hooligans" who thought it was funny to crap on people's fish reports (these people were big players in the GD section). Around the same time, DocRat insulted me and my park catfish reports, saying I should act like a man, join the GD section, and give my political opinion on matters. That was the Anakin Skywalker moment for me, turning me into Darth Vader in the GD section.
    Last edited by etucker1959; 04-04-2018 at 09:48 AM.

  3. #3

    Default

    Great post, commiechew.
    I guess I'll have to try to be nicer to people who want to help destroy our country.
    Oops, hey, it may take a little while.
    All kidding aside, it is a very good article and makes a lot of sense, but I'm not sure how we combat it.
    Maybe if you and I set an example and stop the BS name-calling and cheap shots, we can start some actual meaningful dialogue.
    My biggest complaint is the corporate takeover of our country and the kissing of Putin's butt. As much trepidation as I have about our government, I will ALWAYS take the government over corporate cronies (Republican donors) that care about nothing but "shareholders" and profits. Don't get me wrong, there are plenty of Democrats doing it too, and they need to go as well. We the people will lose every single time. At least with the government we can vote the crooks out, just like what's going to happen this fall. We don't have that say-so with corporations, and some things just should not be profited from, healthcare and education to name a few. We should want our kids to be the smartest and healthiest on the planet, not the dumbest and unhealthiest. Hopefully a new influx of fresh people and ideas on both sides will be positive. Only time will tell.
    Last edited by Brent; 04-04-2018 at 10:05 AM.

  4. #4

    Default

    I think the basic problem is that some people use anonymity as an enabler to release their worst impulses. It's analogous to KKK members in their white robes, or to mob behavior that people wouldn't engage in if they felt they could be identified and held accountable, except that online it's more convenient and more individualistic in nature.

    I believe most people are not like that, but when people are allowed to be anonymous, there are enough of them to spoil the pot. That's why Facebook requires people to use their real names, although many people on Facebook still use aliases in violation of that policy. At least it helps.
    Last edited by Natural Lefty; 04-04-2018 at 08:07 PM.

  5. #5
    Join Date
    Feb 2004
    Location
    Loudon TN
    Posts
    2,835

    Default

    Mean people are the result of not being breastfed as babies or not being hugged.

  6. #6

    Default

    Quote Originally Posted by muskyman View Post
    Mean people are the result of not being breastfed as babies or not being hugged.
    You may be on to something, Muskyman.
    That may explain my love of one particular part of that statement.
