In Germany, lawmakers are pushing ahead with fines of up to 50 million euros, or $56 million, if Silicon Valley companies do not limit how online hate speech circulates on their social networks.
Now Britain’s politicians want to go further.
In its electoral manifesto and in speeches by senior politicians, the governing Conservative Party outlined proposals to offer security officials more ways to keep tabs on potential extremists. Theresa May, the prime minister, raised the issue at a recent Group of 7 meeting and in talks with President Emmanuel Macron of France.
But if the proposals are pushed through, there will be costs.
The Conservatives now rule with a minority in Parliament, and will most likely have to rely on other parties for support. That may necessitate compromise or horse trading.
And the additional measures could hurt Britain’s effort to court new investment from the global tech sector as it prepares to leave the European Union.
Who Should Have Access to Your Messages?
Mrs. May had a simple message after the recent deadly terrorist attack in London.
“We need to do everything we can at home to reduce the risks of extremism online,” she told the British public, echoing a similar message by her government after a previous attack in Manchester.
Part of that plan is to demand that companies such as Apple and Facebook allow Britain’s national security agencies access to people’s encrypted messages on services like FaceTime and WhatsApp.
These services use so-called end-to-end encryption, meaning that a person’s message is scrambled when it is sent from a device, so that it becomes indecipherable to anyone but its intended recipient.
British officials, like their American counterparts, would like to create a digital backdoor to this technology.
Yet an opening for intelligence agencies, experts warn, would also allow others, including foreign governments and hacking groups, to potentially gain access to people’s digital messages.
It would also most likely induce terrorist groups to move to other forms of encrypted communication, while leaving everyday Britons — and others traveling in the country — susceptible to online hacks.
“If the British government asks for a special key like this, what stops other governments from asking for the same access?” said Nigel Smart, a cryptology professor at the University of Bristol. “You need end-to-end encryption because it stops anyone from listening in.”
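The principle the experts describe can be sketched in a few lines of code. The toy Python below is not real cryptography (production messengers use vetted protocols such as Signal’s); it only illustrates the point that whoever holds the key, and no one else, can read the message — and that a “backdoor” is simply another copy of that key:

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from the key (toy construction only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the message with a key-derived stream; XOR-ing again reverses it.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # the same operation both scrambles and unscrambles

# Alice and Bob share a key; the server relaying the message does not.
shared_key = os.urandom(32)
ciphertext = encrypt(shared_key, b"meet at noon")

# The relay sees only scrambled bytes...
assert ciphertext != b"meet at noon"
# ...while the intended recipient, holding the key, recovers the message.
assert decrypt(shared_key, ciphertext) == b"meet at noon"
# A mandated "backdoor" would amount to a second copy of shared_key:
# any party holding it, legitimate or not, decrypts equally well.
```

This is why Professor Smart’s warning applies symmetrically: the mathematics cannot distinguish a British intelligence agency holding the extra key from a foreign government or a hacking group holding the same key.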
British lawmakers say law enforcement and intelligence agencies need such access to foil potential terrorist plots.
But Facebook and others respond that they already provide information on people’s online activities when required, including the I.P. address of the machines from which messages are sent, a rough identifier for a digital device.
And in a letter sent to British politicians in late 2015 — just as an earlier debate about tech regulation was bubbling to the surface — Apple made its views clear.
“We believe it would be wrong to weaken security for hundreds of millions of law-abiding customers so that it will also be weaker for the very few who pose a threat,” the company said.
Extremist Messages: What Should Be Controlled?
British politicians have another target in policing the internet: extremist messages that are circulated on Facebook, YouTube and other social media.
While other countries have taken steps to control how such material is shared across the web, tech executives and campaigners say that Britain has gone further than almost any other Western country, often putting the onus on companies to determine when to take down content that, while offensive, is not illegal or violent.
“I’d like to see the industry go further and faster in not only removing online terrorist content, but stopping it going up in the first place,” Amber Rudd, the country’s home secretary, said before meeting with tech executives this year. At the time, she called on them to take further steps to counter such extremist material.
Mrs. May also had discussions with Mr. Macron, the French president, last week about holding tech companies legally liable if they fail to remove extremist content.
The British government’s stance has put tech companies in the difficult position of having to determine what should, and should not, be allowed online.
Britain’s freedom of expression laws are not as far-reaching as those in the United States, allowing British lawmakers to push for greater control over what is circulated across the web.
In recent months, companies like Facebook and Twitter say that they have taken additional steps to remove illegal extremist material from their social networks, and are giving users ways to flag potentially offensive content.
Facebook, for example, announced on Thursday that it would use artificial intelligence to flag, and remove, inappropriate content. Google has also provided financing to nonprofit organizations aimed at countering such hate speech online.
Some other European lawmakers have warned that too-strict limits on what can be shared across the web may hamper freedom of speech, a touchy subject for many people who grew up behind the Soviet-era Iron Curtain.
“For me, freedom of expression is a basic fundamental right,” Andrus Ansip, the digital chief at the European Commission, the executive arm of the European Union, said in an interview this year. “Nobody wants to see a Ministry of Truth.”