
Should we regulate Facebook Messenger or Instagram?

By Shirley J. Speights
July 10, 2022

OTTAWA — The advisory panel tasked with making recommendations on Canada’s pending online safety legislation has failed to agree on the definition of online harm and on whether harmful content needs to be eliminated from the internet entirely.

On Friday, the federal government released the conclusions of the tenth and final session of the expert panel, which summarized three months of deliberations on what a future legislative and regulatory framework might look like.

The 12-person panel brought together experts on topics including hate speech, terrorism, child sexual exploitation and the regulation of online platforms. Their findings come after Ottawa released a proposed online harms bill last summer, prompting some stakeholders involved in the consultations to urge the government to go back to the drawing board.

The results highlight the daunting challenges the federal government will face when introducing the legislation, which was due within 100 days of the Liberals forming government last fall.

Heritage Minister Pablo Rodriguez is now beginning a series of regional and virtual roundtables to gather more feedback on the framework, starting with the Atlantic provinces.

Here’s what the experts – who remained anonymous in the report – concluded.

What is “online harm”?

In its proposal last year, the government identified five types of “harmful content”: hate speech, terrorist content, incitement to violence, sexual exploitation of children and non-consensual intimate images.

Most of the panel members felt that child exploitation and terrorist content should be dealt with “unequivocally by future legislation”. Others deemed the five categories “deeply problematic”, in one case challenging definitions of terrorism that focus on “Islamic terror” and omit other forms.

Rather than isolating specific types of harmful content, some experts suggested that “harm could be defined more broadly, as harm to a specific segment of the population, such as children, the elderly, or minority groups”. Panel members also disagreed on whether harms should be defined narrowly in legislation, with some saying dangerous content evolves and changes, while others said regulators and law enforcement would require strict definitions.

Misinformation, something Rodriguez has previously said needs to be addressed with “urgency,” also occupied an entire session of the panel’s review. While deliberately misleading content was not listed as a category in the government’s proposal last year, misinformation emerged as a possible classification of online harm during last summer’s consultations.

The panel concluded that misinformation “is difficult to pin down and define”, but agreed that it has serious consequences, such as inciting hatred and undermining democracy. Members ultimately argued that misinformation should not be defined in legislation, as doing so would “put the government in a position to distinguish between what is true and false – which it simply cannot do”.

Should harmful content be removed from the Internet?

Another key area where experts could not agree was whether upcoming legislation should require platforms to remove certain content.

The debate stems from long-standing issues with the government’s previous suggestion that harmful content be removed within 24 hours of being reported, and from concerns over free speech.

While experts seemed to agree that explicit calls for violence and child sexual exploitation content should be removed, some cautioned against taking down any content at all, while others “expressed a preference for over-removal of content rather than under-removal”.

Experts diverged on the thresholds that would trigger content removal, with some suggesting that harm could be sorted into two categories: a “serious and criminal” category with the possibility of appeal, and a less serious category without the possibility of appeal.

There was also disagreement over whether private communications, such as content sent via chat rooms, Facebook Messenger or Twitter and Instagram direct messages, should be regulated and removed. Some members said private services that harm children should be regulated, while others said it would be “difficult to justify from a Charter perspective” to access private conversations.

What can happen after content is flagged?

Canadian lawmakers will not only have to ask what constitutes harm online and what to do about it, but also what happens to victims – and those who posted harmful content – after the messages are reported.

It is not yet clear which body would be responsible for overseeing Ottawa’s online safety framework, although the appointment of a specialist commissioner – such as the Australian ‘eSafety’ commissioner – has been proposed as an option.

Experts agreed platforms should have a review and appeal process for all moderation decisions, with some suggesting the creation of an “internal ombudsman” to support victims.

It was noted that such a role should remain entirely independent of government, potential commissioners, online platforms and law enforcement.

“Some have suggested that the regime could start with an ombudsman as a support center for victims, and become a body that adjudicates disputes later,” the report notes.

Experts, however, disagreed on how an ombudsman would work, with some saying that users need an outside “place” to air their concerns because of distrust of social media platforms.

Others “pointed out that creating an independent body to make removal decisions would be a massive undertaking akin to creating an entirely new quasi-judicial system, with major constitutional issues related to both federalism and concerns relating to the Charter”.

Experts also expressed concern that remedies may simply not be practical, given the volume of content, complaints and appeals the legislation could generate.

Ultimately, they concluded that “instead of just scrapping the idea, it requires further development and testing.”

Raisa Patel is an Ottawa journalist who covers federal politics for the Star. Follow her on Twitter: @R_SPatel
