A Peek Behind the Great Firewall of Russia
Interrogation of a native Siberian tribesman.
KGB interrogator: Where is the gold?
Translator: Where is the gold?
Tribesman: Won’t tell!
Translator: He won’t tell.
KGB interrogator: If you won’t tell, we’ll kill you.
Translator: If you won’t tell, they’ll kill you.
Tribesman: It’s hidden by the yurt’s entrance.
Translator: He says “Go ahead, I won’t tell!”
What the law actually says
As part of its counter-terrorism drive, the Russian parliament passed legislation enabling Internet surveillance on a grand scale.
The new laws, which take effect on July 1, 2018, require all Russian telecom operators and Internet service providers (ISPs) to store records of all of their users’ calls, messages, and files for six months, and information on the existence of those communications for three years. Internet providers will also be required to hand over to law enforcement agencies the keys to decrypt all such traffic in bulk, as well as individual user messages.
“The severity of Russian laws is tempered by the inconsistency of their enforcement.” This phrase is ascribed to Saltykov-Shchedrin and has comforted people since the 19th century at least.
At first glance, the law is completely unenforceable. The reasons are both economic (there is not enough storage equipment to implement it as written, and buying it would break any economy) and technical: modern encryption protocols use fresh keys for each session, or even for each message, so what exactly is there to hand over? As for the handover mechanism, the instructions hilariously mention delivery “on magnetic disk”, so this image was popular for a while among tech-savvy Russian bloggers:
Who has the last laugh?
Last week an anonymous blogger claimed to have insider information on the actual planned implementation of the surveillance, and while he understandably does not name his sources, the plan looks technically plausible. Let us look at the details.
Even before the current legislation, all ISPs were legally obliged to implement content filtering and prevent users from accessing blacklisted web sites or individual pages.
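The existing filtering is, in essence, a hostname/URL match against a government blacklist. A minimal sketch of the idea (the domain names and blocklist entries here are made up for illustration):

```python
# Toy sketch of hostname-based blocklist filtering of the kind ISPs were
# already required to run. Hypothetical blocklist entries, demo only.
from urllib.parse import urlsplit

BLOCKLIST = {"blocked.example", "banned.example"}

def is_blocked(url: str) -> bool:
    host = urlsplit(url).hostname or ""
    # Block the listed domain and any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

assert is_blocked("http://blocked.example/page")
assert is_blocked("https://cdn.blocked.example/x")
assert not is_blocked("https://allowed.example/")
```

Note that for HTTPS traffic the ISP normally sees only the hostname, not the full URL, which is exactly why filtering individual pages pushes towards the interception scheme described next.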
The plan is to extend this system on a massive scale as follows:
Distribute a certificate and instruct users to install it as a trusted root certificate. The incentive is very strong: without installing it, they won’t be able to access any HTTPS content:
Clicking on the lock icon and checking the certificate details brings up the following:
So, the browser complains about the google.com certificate because an organisation called “Content Filtering System/OblCIT” is unhappy with it.
Installing the certificate sent by the ISP solves the problem of Internet access, but digging into the certificate properties brings up the following:
The certificate of the organisation that presumes to check all of the user’s Internet connections for validity is not issued by a recognised certificate authority. Whoever OblCIT are, they didn’t even seek to get themselves a proper certificate. On the other hand, if you have a captive audience that can be made to install a self-signed certificate, then why bother?
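A self-signed certificate is simply one whose issuer field equals its own subject field, i.e. it vouches for itself rather than chaining to a recognised authority. A purely conceptual sketch (real validation also checks signatures, chains, and expiry; the field values below are simplified stand-ins, not the actual OblCIT certificate contents):

```python
# Conceptual sketch: a certificate is "self-signed" when its issuer
# equals its subject. Simplified dicts stand in for real X.509 fields.

def is_self_signed(cert: dict) -> bool:
    return cert["issuer"] == cert["subject"]

# Hypothetical root of the filtering system: issued by itself.
filtering_root = {
    "subject": "CN=Content Filtering System",
    "issuer":  "CN=Content Filtering System",
}

# Hypothetical re-signed leaf presented to the user instead of Google's own.
intercepted_leaf = {
    "subject": "CN=google.com",
    "issuer":  "CN=Content Filtering System",
}

assert is_self_signed(filtering_root)
assert not is_self_signed(intercepted_leaf)
```

Once the self-signed root is in the user’s trust store, every leaf certificate it signs, for any site, validates cleanly in the browser.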
What does it mean for traffic security?
Normally, an HTTPS connection provides end-to-end confidentiality between the sender and the recipient. (See this TechNet article for a detailed explanation. The infographics refer to email messages, but the logic applies to web traffic as well.)
When Alice talks to Bob, the traffic is encrypted with a session key, and the session key itself is encrypted with Bob’s public key. This means that only Bob, the legal owner of the matching private key, can decrypt the session key and, subsequently, the data received from Alice.
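The key-wrapping step can be illustrated with “textbook” RSA on tiny primes. This is a deliberately insecure toy, not the real TLS handshake, but the flow is the same: Alice encrypts a session key with Bob’s public key, and only the holder of the matching private key can recover it.

```python
# Toy, insecure textbook RSA with tiny primes, purely to illustrate
# wrapping a session key with a recipient's public key. Demo only.
import random

# Bob's toy key pair: n = p*q, e is public, d is private.
p, q = 61, 53
n = p * q                  # modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

def encrypt(m, pub_e, pub_n):
    return pow(m, pub_e, pub_n)

def decrypt(c, priv_d, priv_n):
    return pow(c, priv_d, priv_n)

# Alice picks a random session key and sends it wrapped with Bob's public key.
session_key = random.randrange(2, n)
wrapped = encrypt(session_key, e, n)

# Only Bob can unwrap it with his private key.
assert decrypt(wrapped, d, n) == session_key
```

Real deployments use RSA with proper padding or (more commonly today) Diffie-Hellman key agreement, but the trust assumption is identical: the public key must really belong to Bob.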
However, Eve has forced Alice to install her certificate as a trusted root, and now all the elements are in place for a classic Man-in-the-Middle (MitM) attack:
Eve can decrypt the traffic, because it is encrypted with session keys wrapped with her public key. She then has a choice: re-encrypt the data as-is and forward it to Bob, drop the packets if she does not approve of Bob, or even modify the data in transit. Not only is confidentiality compromised; integrity is gone as well. For the potential impact, see the joke in the epigraph.
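Continuing the toy textbook-RSA sketch from above (tiny primes, demo only): Alice believes she has Bob’s public key, but Eve has substituted her own, so Eve sits in the middle of the key exchange.

```python
# Toy MitM sketch with textbook RSA on tiny primes. Demo only.

def keypair(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)   # (public, private)

bob_pub, bob_priv = keypair(61, 53)
eve_pub, eve_priv = keypair(89, 97)

session_key = 42

# Alice wraps the session key with what she believes is Bob's public key,
# but the trusted-root trick means it is actually Eve's.
wrapped_for_eve = pow(session_key, *eve_pub)

# Eve unwraps it, reads (or alters) it, and re-wraps it for the real Bob.
recovered_by_eve = pow(wrapped_for_eve, *eve_priv)
rewrapped_for_bob = pow(recovered_by_eve, *bob_pub)

assert recovered_by_eve == session_key                    # Eve sees the key
assert pow(rewrapped_for_bob, *bob_priv) == session_key   # Bob is none the wiser
```

Bob receives a perfectly valid wrapped key and has no cryptographic way to tell that Eve has been in the middle; that is exactly what the substituted trusted root buys her.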
So, can Eve, the ISP, comply with the law in this setup? Not literally, as the problem of storage space remains. However, now that the ISP has access to plaintext traffic, it can run deep packet inspection and perform whatever data mining is required. This may be just what the spirit of the law intended.
“If you’ve got nothing to hide, you’ve got nothing to fear”
This is the motto of government security initiatives everywhere. There are plenty of arguments against it, but let us run with it for a moment. So you are an Internet user who subscribes to this motto. For you, Eve in the discussion above is not a bad person: she has legal authority, and you are fine with it. Eve only wants to protect everyone from the terrorist threat. Let us rename her, to remove the negative connotations. How about Vladimir?
Vladimir’s private key would now be a target for bad guys everywhere. Where is it stored? With each ISP, to ensure quick access to all the data? That would require an enormous investment in personnel vetting and secure storage facilities. Stored centrally? That means the ISPs have to pass all the encrypted traffic to a central processing facility, and we are back to the original problem of storing all the traffic in the country.
The goal of granting exceptional access for designated eavesdroppers is tempting for law enforcement agencies everywhere.
“Keys Under Doormats” is a seminal paper describing in detail all the technical and logistical reasons why this is impossible to achieve without also making it easier for adversaries to access the data.
Screenshots courtesy of https://quip.com/N07UAXChSjRR
Written by Irene Michlin
First published on 19/09/16