Proposed UK laws could see top managers at tech companies jailed if they fail to meet the demands of regulators. The laws, in the form of an Online Safety Bill, were introduced to Parliament on Thursday after almost a year of consultation.
The UK government commenced work on the proposed laws in May last year to push a duty of care onto social media platforms so that tech companies are forced to protect users from dangerous content, such as disinformation and online abuse.
“We don’t give it a second’s thought when we buckle our seat belts to protect ourselves when driving. Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age,” Digital Secretary Nadine Dorries said.
Under the proposed legislation, executives of tech companies could face prosecution or jail time if they fail to cooperate with information notices issued by Ofcom, the UK's communications regulator. Through the Bill, Ofcom would gain the power to issue information notices for the purpose of determining whether tech companies are performing their online safety functions.
A raft of new offences has also been added to the Bill, including making senior managers at in-scope companies criminally liable if they destroy evidence, fail to attend interviews with Ofcom or give false information in them, or obstruct the regulator when it enters company offices.
The Bill also looks to require social media platforms, search engines, and other apps and websites that allow people to post their own content to implement various measures to protect children, tackle illegal activity and uphold their stated terms and conditions.
Among these measures are mandatory age checks for sites that host pornography, criminalising cyberflashing, and a requirement for large social media platforms to give adults the ability to automatically block people who have not verified their identity on the platforms.
The proposed laws, if passed, would also force social media platforms to step up their moderation efforts, with the Bill calling for platforms to remove paid-for scam ads swiftly once they are alerted to their existence. The Bill also contains a requirement for social media platforms to moderate "legal but harmful" content, which would give large social media platforms a duty to carry out risk assessments on such content. Platforms would also have to set out clearly in their terms of service how they will deal with such content and enforce those terms consistently.
“If companies intend to remove, limit or allow particular types of content they will have to say so,” Dorries said.
The agreed categories of “legal but harmful” content will be set out in secondary legislation that will be released later this year, the digital secretary added.
While the UK government has framed the Online Safety Bill as "world-leading online safety laws", law experts have criticised the Bill for its use of vague language through its "legal but harmful" classification, which they say could create unintended consequences.
“The Online Safety Bill is a disastrous piece of legislation, doomed not just to fail in its supposed purpose but make it much harder for tech companies and make the internet less safe, particularly for kids,” said Paul Bernal, University of East Anglia IT law professor.
The UK government hasn't been alone in wanting to create laws regulating how social media platforms moderate content. Australia's federal government is currently mulling over two pieces of legislation, one focused on stopping online defamation and the other on online privacy.
The defamation laws, framed by the federal government as anti-trolling laws, seek to force social media companies into revealing the identity of anonymous accounts if they post potentially defamatory material on platforms.
Australia's proposed online defamation laws have drawn similar criticism for potentially creating unintended, adverse impacts, with online abuse victims and privacy advocates among those raising concerns.