Arthur Piper

Meta and the power of tech

Facebook and its parent company, Meta, are facing growing concerns about their unchecked power and influence.

When Facebook and its subsidiary apps went down in October, it was a timely reminder of how powerful Facebook’s holding company – recently rebranded as Meta – has become. The impact was global and far-reaching.

Without access to Meta’s apps, including Facebook, Instagram and WhatsApp, about 3.5 billion people lost access to parts of the internet. For some, this meant they could not run their businesses, hold events or communicate with customers, family and friends. Others were locked out of websites, smart TVs and other internet-enabled devices that rely on a Meta account to log in.

The outage also took down many of the tools Meta’s own staff use day to day, which hindered attempts to diagnose the problem and get systems back up and running. Billions of people’s daily routines – and the collective memories stored on the platforms – were disrupted.

Power and responsibility

Not only does Meta control a huge chunk of the internet’s physical infrastructure, it also controls who can see what, when and where. That gives it enormous power, even as the contours of its legal obligations to users remain unclear.

Internal documents recently leaked by the whistleblower Frances Haugen have thrown a spotlight on the inner workings of the Facebook platform. Revelations from these documents raise difficult questions over precisely how the business makes decisions about the content it allows.

For example, Mark Zuckerberg allegedly oversaw a decision by Facebook to comply with demands from Vietnam’s communist government to increase censorship of anti-government posts ahead of the ruling party’s congress.

While Meta declined to comment on Zuckerberg’s personal role in the decision, a spokesperson told Global Insight that ‘We’ve been open and transparent about our decisions in response to the rapid rise in attempts to block our services in Vietnam. As we shared last year, we do restrict some content in Vietnam to ensure our services remain available for millions of people who rely on them every day.’

Haugen and many other critics also argue that Facebook has a duty of care to remove content harmful to the mental wellbeing of hundreds of millions of young people – a duty that clashes with the business’ stated commitment to enabling free speech. Haugen contends that Facebook puts profits over people, but a Meta spokesperson flatly denied this, saying that, in effect, harmful content hurts its profitability. ‘People don’t want to see [harmful content] when they use our apps and advertisers don’t want their ads next to it’, the spokesperson told Global Insight.

In addition, Meta highlighted that it now has 40,000 staff dedicated to removing such content across the business and is on track to spend about $5bn on safety and security in 2021. It said it has halved the amount of hate speech seen on Facebook in the last year – it now represents about 0.03 per cent of content views.

However Zuckerberg chooses to use his power, he is uniquely placed to direct the course Facebook takes. In 2022, the business’ shareholders will reportedly vote for the fourth year running to dilute Zuckerberg’s power base by splitting the roles of CEO and chair. The organisation’s dual-class share structure gives him control of 58 per cent of its voting rights, according to research by S&P Global Market Intelligence. But unless Zuckerberg personally sees sense in loosening his grip on affairs, the vote is likely to fail again. Meta did not respond to Global Insight’s request for comment on these investor concerns.

Safety or profit?

Pressure for reform is building from outside the business. In her evidence to the US Congress and the UK’s Houses of Parliament, Frances Haugen claimed that where public safety and profits clash at Facebook, profits win. That has meant, she maintained, developing algorithms in a way that ‘amplifies division, extremism, and polarization’.

‘In some cases, this dangerous online talk has led to actual violence that harms and even kills people’, she told Congress. ‘In other cases, their profit optimizing machine is generating self-harm and self-hate – especially for vulnerable groups, like teenage girls.’

Facebook rejected these claims. Responding in a blog post in early October, Zuckerberg said that ‘this idea that we prioritize profit over safety and well-being [is] just not true’. He also highlighted, among other things, Facebook’s work to ensure children’s safety and wellbeing, including work ‘on industry-leading efforts to help people’ in moments of distress.

In the US, increased oversight and regulation have been seen through the lens of proposed changes to Section 230 of the Communications Decency Act 1996, the legislation that gives social media platforms safe harbour from legal liability for the content their users post. In the wake of the outcry following the storming of the US Capitol by supporters of Donald Trump in January – after which Facebook took down Trump’s account – Zuckerberg told the US Congress in March that he favoured reform along these lines.

Haugen argued in her testimony, however, that amending Section 230 would be ineffective because the way that Facebook’s algorithms work is opaque to the outside world. ‘Facebook’s closed design means it has no oversight – even from its own Oversight Board, which is as blind as the public’, she argued.

Catalina Botero-Marino, a member of Facebook’s independent Oversight Board and a former special rapporteur for freedom of expression for the Organization of American States’ Inter-American Commission on Human Rights, tells Global Insight that the Board is continually pushing the platform towards greater transparency and to make the rules behind its decisions clear to the public.

Duty of care

Haugen’s testimony raises the important issue of whether Meta has a duty of care towards the users of its services. For such a duty to exist, the content must be capable of causing harm and the service – Facebook, for example – must be the direct cause of that harm.

In September, leaked Facebook research appeared to show that Instagram hosted content that could damage teenagers’ mental health – around eating disorders, for example. Facebook responded by saying that the results were based on the ‘subjective perceptions of research participants’ and could not be used to evaluate causal relationships between social media content and the health and wellbeing of its users.

Meanwhile, in court, any claimant would need to prove that a specific piece of content caused the harm in question. In an October ruling in the US, Godwin v Facebook, Facebook was found not to have a duty of care to prevent a murder. Although the devil is in the detail, establishing causation can be a difficult process.

Despite such issues, the UK’s Online Safety Bill – put forward in May – aims to impose a specific duty of care on social platforms, a system that would be overseen by the UK’s communications regulator, Ofcom.

As currently drafted, the Bill marks a sharp departure from existing practice. Platforms will need to protect users not only from illegal and harmful content, but also from content that the provider has ‘reasonable grounds to believe’ is illegal under UK law. Similarly, platforms will need to risk-assess each piece of content to evaluate how harmful it could be. While ‘harmful’ is yet to be precisely defined, it will include content that platforms have ‘reasonable grounds to believe’ will cause significant physical or mental harm.

The Bill has wide scope, but as it stands it is too vaguely worded to be effective. For example, it includes the notions that legal content may be harmful, and that harm may be direct or indirect. Legal content that might cause indirect harm could therefore fall within the Bill’s scope – a category that would be very difficult to define.

Yet, despite the confusion around the wording of the current draft of the Bill, decisive action to check the growing power of Meta, its Chief Executive Officer Mark Zuckerberg and other technology giants is gathering pace. And while the US remains stuck in frames of reference that revolve around Section 230, Europe and the UK are not. Just as Europe’s General Data Protection Regulation changed privacy practices around the world, it would be no surprise if the winds of regulatory change that eventually sweep through the US technology sector have their origins in Europe.

This article originally appeared in the November issue of the IBA’s Global Insight magazine, for which I am the technology correspondent.
