Regulating Social Media: An Overview

The growing power and influence of social media in global discourse is leading several countries to rethink the laws that regulate it. An important issue fuelling this debate is that the ethical privacy safeguards enshrined in existing privacy acts could be undermined if regulatory protocols are changed. Some of the ways different countries are considering regulating social media are:

  • Some governments are in favor of releasing certain categories of data to researchers to support further technological advancement.
  • Some governments are considering laws that scrutinize and restrict online content sharing to prevent the spread of misinformation.
  • Several forums and digital rights groups recommend the establishment of a “statutory building code” that mandates the protocols for safety and quality control across digital platforms.

Additionally, several independent fact-checkers have been working to protect users from falling victim to misinformation. There are also calls for all digital platforms to reassess their design and engineering to mitigate the inadvertent harm that can arise from viral content, especially content that has not been fact-checked.

Several laws that regulate social media are already in place. However, most are not universal. Some regulations already in effect include:

  • Discrimination or exclusion on the basis of race or religion is illegal in most circumstances.
  • Several countries have localized restrictions pertaining to platforms like Facebook and YouTube.
  • On many platforms, users are restricted to limited broadcasting options or are given access only to micro-targeted audiences.

Building a safer internet is the top priority of the day. Digital platform giants are now coming together to devise ways of protecting freedom of expression and the open internet even as new laws and regulations are mandated by states across the globe.

Technology is only one part of the media ecosystem, and it is essential that society as a whole takes a stance to keep its portals transparent and healthy. For instance, freedom of speech should not be abused as an excuse for hate speech or libel.

Part of what makes these decisions a long deliberation is the ambiguous nature of content sharing and the inability of platform algorithms to identify triggers in individual cases. What qualifies as misinformation is sometimes difficult to define, especially when it is presented as opinion rather than fact. How society responds to content is therefore also of crucial importance in keeping the internet safe. For instance, it is easy to identify the claim that Covid-19 is a hoax as misinformation; there is empirical data to back this up. It is not as easy to judge whether vaccines for it will be as effective as predicted, or whether the disease can have long-lasting side effects. It would be a violation of users’ freedom of speech if those skeptical of the vaccines under trial were prevented from voicing their opinion on social media. Yet such opinions can easily trigger mass dissatisfaction or even instigate mob violence against doctors and researchers. In cases like this, authorities should be able to determine at what point an opinion on the matter becomes harmful and should therefore be regulated.
