
Addressing online harms at the source
The review recommends imposing a “digital duty of care” on large social media companies. The federal government has already committed to doing this. However, legislation to implement a digital duty of care has been on hold since November, with discussions overshadowed by the government’s social media ban for under 16s.

The digital duty of care would put the onus on tech companies to proactively address a range of specific harms on their platforms, such as child sexual exploitation and attacks based on gender, race or religion.

It would also provide several protections for Australians, including “easily accessible, simple and user-friendly” pathways to complain about harmful content. And it would position Australia alongside the United Kingdom and the European Union, which already have similar laws in place.

Online service providers would face civil penalties of 5% of global annual turnover or A$50 million (whichever is greater) for non-compliance with the duty of care.

“Will Meta roll out its new content moderation approach in Australia immediately? Or will the company first review its obligations under the Online Safety Act? #AusLaw #AusPol”
— Leanne O'Donnell (@mslods.bsky.social), 8 January 2025
Two new classes of harm – and expanded powers for the regulator
The recommendations also call for decoupling the Online Safety Act from the National Classification Scheme. The latter scheme legislates the classification of publications, films and computer games, providing ratings that help consumers make informed choices about age-appropriate content.

This decoupling would create two new classes of harm: content that is “illegal and seriously harmful” and content that is “legal but may be harmful”. The second class includes material dealing with “harmful practices” such as eating disorders and self-harm.

The review’s recommendations also include provisions for technology companies to undergo annual “risk assessments” and publish an annual “transparency report”.

The review also recommends that adults experiencing cyber abuse, and children who are cyberbullied, should wait only 24 hours following a complaint – down from the current 48 – before the eSafety Commissioner orders a social media platform to remove the content in question.

It also recommends lowering the threshold for identifying “menacing, harassing, or seriously offensive” material to material that “an ordinary reasonable person” would conclude is likely to have that effect.

Finally, the review calls for a new governance model for the eSafety Commission. This new model would empower the eSafety Commissioner to create and enforce “mandatory rules” (or codes) for compliance with the duty of care, including rules addressing online harms.

The need to tackle misinformation and disinformation
The recommendations are a step towards making the online world safer for everybody. Importantly, they would achieve this without the problems associated with the government’s social media ban for young people – including that it could violate children’s human rights.

Missing from the recommendations, however, is any mention of potential harms from online misinformation and disinformation. Given the speed of online information sharing, and the potential for artificial intelligence (AI) tools to enable online harms such as deepfake pornography, this is a crucial omission.

From vaccine safety to election campaigns, experts have raised ongoing concerns about the need to combat misinformation. A 2024 report by the International Panel on the Information Environment found experts globally are most worried about “threats to the information environment posed by the owners of social media platforms”.

In January 2025, the Canadian Medical Association released a report showing people are increasingly seeking advice from “problematic sources”. At the same time, technology companies are “blocking trusted news” and “profiting” from “pushing misinformation” on their platforms.

In Australia, the government’s proposed misinformation bill was scrapped in November last year due to concerns over potential censorship. But this has left people vulnerable to false information shared online in the lead-up to this year’s federal election. As the Australian Institute of International Affairs said last month:

“misinformation has increasingly permeated the public discourse and digital media in Australia.”