Expanding the scope of the Disinformation Code
As we previously noted, a review of the Australian Code of Practice on Misinformation and Disinformation is currently underway. CMT’s submission to the review focuses on improving accountability by ensuring the scope of the code is broad enough to capture the full range of actions that signatories take to address mis- and disinformation.
Broadly, the Australian code takes the right approach, focusing on outcomes and encouraging responses proportionate to the risk of harm. But key elements undermine this approach and limit the code's potential effectiveness. In particular, its scope is unnecessarily narrow: it sets a high threshold of serious and imminent harm for platform intervention and excludes relevant services (messaging) and content (professional news and political advertising).
As a result, the code fails to capture the full range of actions that platforms are actually taking to address misinformation, and those actions escape the code's transparency and accountability requirements. Take Google's decision to remove Sky News videos from YouTube for violating its misinformation policies: whether or not you agree with the decision, Google cannot be held accountable for it under the code, because professional news is excluded.
The rationale for the narrow scope is to avoid impinging on freedom of expression and freedom of the press; it may also serve to limit what platforms consider themselves responsible for, and the areas in which it is appropriate for them to act. In our view, however, these freedoms would be better protected from within the code. Outside the code, they are essentially subject to the whim of individual platforms.
Importantly, nowhere does the code compel platforms to censor false or misleading information; indeed, it explicitly rejects this. Expanding the scope of the code to cover the full range of platform measures would not alter this fundamental principle. Instead, it would give platforms an opportunity to engage openly with each other, with related industries and with the public on finding effective responses to mis- and disinformation that appropriately balance intervention with the protection of democratic freedoms. Collaborative development of formal decision-making frameworks, for example, would increase accountability and responsiveness to public expectations.
The strengthened EU code on disinformation provides a good example of how this can be achieved. Within a much-broadened scope, it requires signatories to establish working groups, advisory bodies and other collaborative partnerships to share information and develop best-practice measures. While the strengthened EU code was published after DIGI released the discussion paper for this review, it would be a missed opportunity if the many improvements in the European model were not considered.
Michael Davis, CMT Research Fellow