Threat model
Security is a process, not a product. No system is secure against all attacks. The question is whether a process is secure against some specific type of attacker that has a set of capabilities, privileges, and a certain amount of budget, time and motivation. Given enough of these resources, the security of any process will fail. The only way to ensure security is to make the attack too expensive. This page aims to explain the full threat model TFC is designed against, so that you (the user) can make an informed decision on whether the tool and the processes around it are secure enough for your needs.
Passive eavesdropping of the backbone of the Internet is the most common form of surveillance. TFC is end-to-end encrypted with 256-bit XSalsa20-Poly1305, a state-of-the-art symmetric cipher that will not be broken even by a universal quantum computer running Grover's algorithm. The weak link, however, is the key exchange: quantum computers running the so-called Shor's algorithm will eventually be able to break the X25519 key exchange used in TFC, leading to retrospective decryption of such conversations.
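As a rough illustration of the primitives named above, here is a minimal sketch using PyNaCl (one of the dependencies listed further down), not TFC's actual code: an X25519 key exchange produces a shared secret that is then used for XSalsa20-Poly1305 authenticated encryption.

```python
from nacl.public import Box, PrivateKey

# Each party generates an X25519 keypair; only the public keys travel over the network.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Box derives a shared key with X25519 and encrypts with XSalsa20-Poly1305.
alice_box = Box(alice_key, bob_key.public_key)
ciphertext = alice_box.encrypt(b"plaintext message")

bob_box = Box(bob_key, alice_key.public_key)
assert bob_box.decrypt(ciphertext) == b"plaintext message"
```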
If such threats are part of the threat model, TFC also supports password-protected pre-shared keys (PSKs), which cannot be broken by a quantum computer.
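A password-protected PSK can be pictured as a random symmetric key wrapped under a key derived from the password. The sketch below uses PyNaCl's Argon2id KDF and is illustrative only, not TFC's actual PSK file format:

```python
import nacl.pwhash
import nacl.secret
import nacl.utils

# The PSK itself is a random 256-bit symmetric key.
psk = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)

# It is protected for delivery with a key derived from a password using Argon2id.
salt = nacl.utils.random(nacl.pwhash.argon2id.SALTBYTES)
kek = nacl.pwhash.argon2id.kdf(
    nacl.secret.SecretBox.KEY_SIZE,
    b"correct horse battery staple",
    salt,
    opslimit=nacl.pwhash.argon2id.OPSLIMIT_INTERACTIVE,  # lighter parameters for the demo
    memlimit=nacl.pwhash.argon2id.MEMLIMIT_INTERACTIVE,
)

wrapped_psk = nacl.secret.SecretBox(kek).encrypt(psk)
assert nacl.secret.SecretBox(kek).decrypt(wrapped_psk) == psk
```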
TFC is native software running on top of a native IM client that communicates via XMPP servers. Private keys and plaintexts are never uploaded to any third party on the network.
Man-in-the-middle attacks against the end-to-end encryption can be detected by comparing the public key fingerprints over an end-to-end encrypted out-of-band channel, preferably a Signal call. Another way to ensure keys are actually exchanged with the intended contact is to use PSKs that are handed to the correct contact in person.
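Fingerprint comparison boils down to both parties hashing the public key they received and reading the digests aloud over the out-of-band channel. The helper below is hypothetical and does not reproduce TFC's actual fingerprint format:

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    # Hash the public key and group the hex digest so it is easy to read aloud
    # over an end-to-end encrypted call. If the digests differ, a key was swapped in transit.
    digest = hashlib.blake2b(public_key, digest_size=16).hexdigest()
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

# Each user runs this on the key they *received* and compares it with what the peer reads out.
print(fingerprint(bytes(32)))  # placeholder key for demonstration
```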
After setup, TFC cannot be remotely compromised, as the hardware data diode physically blocks either the insertion of malware, or the exfiltration of keys/plaintexts by malware that was inserted earlier. TFC is currently the only tool that offers such protection. Computer network exploitation (CNE) might sound like a non-issue as it has traditionally been considered targeted surveillance, but it turns out nation states are rapidly automating it, meaning it will become the mass surveillance of tomorrow.
As long as the user has a way (a face-to-face meeting with the developer or the web of trust) to obtain the authentic fingerprint of the PGP key used to sign the installer, installation is mostly secure against MITM attacks: TFC is installed with a one-liner that authenticates both the PGP public signature verification key by its SHA-1 fingerprint, and the installer with a 4096-bit PGP/RSA signature. If the attacker has a universal quantum computer capable of running Shor's algorithm, the authenticity of the installer cannot be reliably guaranteed with PGP signatures. The installer install.sh comes with pinned SHA256 hashes of all files downloaded from the GitHub repository, including the requirements*.txt files that contain the SHA512 hashes for dependencies downloaded with pip (Argon2, PyNaCl, pyserial, virtualenv etc.). However, TFC also requires dependencies that are downloaded with APT, and while the related public key is pinned to the OS, the security of the third parties' private keys cannot be guaranteed.
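The pinned-hash idea can be sketched as follows; the file name and digest below are hypothetical placeholders, and the real checks live in install.sh and the requirements*.txt files:

```python
import hashlib

# Hypothetical pinned value; install.sh pins one SHA256 digest per downloaded file.
PINNED = {"tfc.py": "0f1e2d3c..."}  # truncated placeholder digest

def verify_pinned_sha256(path: str) -> None:
    # Recompute the digest of the downloaded file and refuse to proceed on mismatch.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != PINNED[path]:
        raise RuntimeError(f"{path}: hash mismatch, refusing to continue installation")
```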
If the transmitter computer (TxM) is compromised before or during setup, it can later covertly transmit private keys through the serial interface to malware on NH, which forwards the keys to the adversary in the network. The risk is real, but in comparison, all end-to-end encrypted protocols such as Signal's double ratchet and OTR are vulnerable to key exfiltration attacks at all times, as operation of these tools requires that the system remains connected to the Internet. The short window of opportunity for compromising TFC's keys is a groundbreaking improvement over all other secure communication tools.
Excluding public keys, the Receiver program authenticates all received packets using Poly1305-AES MACs (see the sketch after the list below). Despite best efforts, it is impossible to guarantee that the receiver computer (RxM) cannot be infected by a sophisticated attacker, who in such a case will have the capability to:
- Destroy data: This is unavoidable, because a secure messaging system has to receive data from the network, and because the so-called security problem ("Given a security scheme for an operating system, test whether it can be broken, just by using normal commands") is an unsolvable problem similar to the halting problem.
- Display arbitrary messages: This is also part of the security problem, but luckily it is extremely hard to exploit, as the attacker has no way to learn the actual content of the conversation. Therefore, a displayed forged message is highly unlikely to fit the context. If the forged messages are logged, the attack can be detected by cross-comparing the TxM-side log of Alice with the RxM-side log of Bob and vice versa. To increase assurance against malware substituting content in displayed messages, users need an extended audit period during which the displays of both devices are either recorded or observed simultaneously.
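The packet authentication mentioned above can be illustrated with a minimal PyNaCl sketch. Note that SecretBox uses XSalsa20-Poly1305 rather than Poly1305-AES, so this is an analogy for the rejection behaviour, not TFC's actual code:

```python
import nacl.exceptions
import nacl.secret
import nacl.utils

# Illustrative key; in TFC the symmetric keys come from the key exchange or a PSK.
box = nacl.secret.SecretBox(nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE))

packet = box.encrypt(b"authentic message")           # ciphertext plus MAC
forged = packet[:-1] + bytes([packet[-1] ^ 0x01])    # flip one bit in transit

try:
    box.decrypt(forged)
except nacl.exceptions.CryptoError:
    print("Forged packet rejected before any plaintext is displayed")
```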
- The user might reuse the removable media used to deliver PSKs. In such a case, RxM could infect TxM, which could then exfiltrate sensitive data.
- The user might forget to remove wireless interfaces from TxM / RxM. The interface could then be used to infiltrate malware or exfiltrate keys. This includes Wi-Fi, Bluetooth and NFC. In addition to these channels, covert channels -- that could be used to exfiltrate keys from RxM -- have been found in wireless / optical / mechanical systems:
- A nearby smart phone could listen to emitted FM signals (GSMem, AirHopper)
- A nearby smart phone could listen to TxM/RxM keyboards ((sp)iPhone)
- Heat emissions could leak data from RxM to a nearby device (BitWhisper)
- Fan sound pitch changes could transmit data from RxM to a nearby device (Fansmitter)
- All mechanical drives (HDD, CD-drive, floppy/zip/tape drive) could produce sounds (CDitter, DiskFiltration)
- Sounds could even be inaudible, e.g. speakers could produce ultrasound (O'Malley, Choo)
- RxM screen / LEDs might blink to a camera of a nearby networked device (Guri, Hasson, Kedma, Elovici)
- Scanners could read commands from blinking light (Shamir)
- USB HID devices could exfiltrate data from system (Veres-Szentkirályi)
- USB cable could be used as an antenna to exfiltrate data (USBee)
- Monitor's LED indicator could exfiltrate data (Sepetnitsky, Guri, Elovici)
- HDD LEDs could exfiltrate the data by blinking (LED-it-GO)
TFC is designed to protect against remote data exfiltration. It does not prevent someone from eavesdropping on unintended RF emissions of your monitor's / keyboard's cables or GPIO wiring. It goes without saying that TFC also does not prevent anyone from installing keyloggers or spy cameras, or from physically observing your screen/keyboard.
TFC encrypts personal data to prevent impersonation and simple forms of physical data exfiltration, but a weak password or a keylogger remotely installed on RxM can still compromise persistent data if the system is later physically compromised. Use of full-disk encryption (FDE) is highly recommended, but protection against bootkits and evil maid attacks is out of scope for TFC.
TFC does not heal keys automatically, thus if symmetric keys are stolen and the master password is broken, all future conversations can be decrypted. To recover from such an attack (assuming the attack will not repeat), the user has to replace the TxM (now assumed to be infected) and generate new keys. The risk of close-proximity implants might render future setups insecure, thus recovery while the user is under targeted attack might be impossible.
Nation state adversaries have been caught installing malicious implants into hardware ordered online. If hardware such as the computers or optocouplers the user has bought is pre-compromised to the point where it actively undermines the user's security, TFC (or any other tool for that matter) is unable to provide security.
TFC does not protect metadata by default. Metadata reduction requires the user to add more layers to the IM client's network connection. Because this networked computer can be remotely compromised, no absolute guarantees regarding metadata can be made. The table below shows how adding different layers helps the user hide different types of metadata (Yes = hidden, No = exposed):
Configuration | Data type | Geolocation of users | IP of users | Identity of user | Quantity of communication | Schedule of communication | Message length | The fact TFC is used | The fact XMPP-accounts communicate |
---|---|---|---|---|---|---|---|---|---|
TFC | Maybe* | No | No | No | No | No | Max-len rounded up to next 254B | No | No |
+ OTR | Maybe* | No | No | No | No | No | Max-len rounded up to next 254B | Probably not*** | No |
+ Tor | Maybe* | Yes | Yes | Yes** | No | No | Max-len rounded up to next 254B | Probably not*** | No |
+ Trickle | Yes | Yes | Yes | Yes** | Yes | Yes | Yes | No**** | No |
*TFC makes it hard to distinguish between sent messages and files, but high-density bursts of messages can be assumed to indicate file transmission.
**Keeping the identity of the user secret depends on the user registering and exclusively using the XMPP account through Tor, and not storing any identifying data on NH (preferably, the user should run NH from a Tails live CD, with no persistent hard drives installed). An additional layer of anonymity can be obtained by connecting to a random public access point using a long-range parabolic/Yagi antenna.
***When OTR is enabled, the TFC packet and its headers are tunneled inside OTR, but as OTR has insufficient padding, the static ciphertext length of TFC most likely reveals that TFC is being used.
****When trickle connection is enabled, the constant stream of traffic reveals that TFC is being used, as no other software yet provides such a feature. The reduction of other metadata might however be worth it.
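To illustrate the "Max-len rounded up to next 254B" entries in the table: ciphertext length leaks only the message length rounded up to the next multiple of 254 bytes. A minimal sketch of that relationship (illustrative helper, not TFC's actual padding code):

```python
def padded_length(message_length: int, block: int = 254) -> int:
    """Return the length an observer sees: the plaintext length rounded up
    to the next multiple of the padding block size."""
    return -(-message_length // block) * block

assert padded_length(1) == 254     # a one-byte message looks like 254 bytes on the wire
assert padded_length(254) == 254
assert padded_length(255) == 508   # longer messages leak only this coarse upper bound
```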