6 comments

  • pregnenolone 1 hour ago
    They’re useful for attestation, boot measurement, and maybe passkeys, but I wouldn't trust them to securely handle FDE keys, for several reasons. Not only do you have to trust the TPM manufacturer (and there are many), but the chips also have a bad track record (look up Chris Tarnovsky’s presentation on breaking TPM 1.x chips). What's even worse: parameter encryption is often left out or not used in the first place, so cryptsetup sends the key in plaintext to the TPM, and this vulnerability remains unaddressed to this day.

    https://arxiv.org/abs/2304.14717

    https://github.com/systemd/systemd/issues/37386

    https://github.com/systemd/systemd/pull/27502

    • amluto 18 minutes ago
      My pet peeve is that the entire TPM design assumes that, at any given time, all running software has exactly one privilege level.

      It’s not hard to protect an FDE key in such a way that one must compromise both the TPM and the OS to recover it [0]. What is very awkward is protecting it so that a random user on the system who recovers the sealed secret (via a side channel, or simply by booting into a different OS and reading it) cannot ask the TPM to decrypt it. Or protecting one user’s TPM-wrapped SSH key from another user.

      I have some kludgey ideas for how to do this, and maybe I’ll write them up some day.

      [0] Seal a random secret to the TPM and wrap the actual key, in software, with the sealed secret. Compromising the TPM gets the wrapping key but not the wrapped key.
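
      A minimal sketch of the [0] scheme, with a hypothetical tpm_seal helper standing in for real TPM sealing (e.g. TPM2_Create bound to PCRs); the names and layout here are made up for illustration, not anyone's actual implementation:

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def tpm_seal(secret: bytes) -> bytes:
            # Hypothetical stand-in: seal `secret` into the TPM, bound to
            # the current PCR state, and return an opaque sealed blob.
            raise NotImplementedError

        def protect_fde_key(fde_key: bytes) -> tuple[bytes, bytes]:
            wrapping_secret = os.urandom(32)         # software-side random secret
            sealed_blob = tpm_seal(wrapping_secret)  # releasable only via the TPM
            nonce = os.urandom(12)
            wrapped = AESGCM(wrapping_secret).encrypt(nonce, fde_key, None)
            # Per [0]: compromising the TPM alone yields wrapping_secret but
            # not fde_key; an attacker also needs the wrapped blob.
            return sealed_blob, nonce + wrapped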

    • Avamander 55 minutes ago
      The root of trust for measurement (RTM) isn't foolproof either.

      https://www.usenix.org/system/files/conference/usenixsecurit...

  • amluto 2 hours ago
    > The key difference in threat models is that the device manufacturer often needs to protect their intellectual property (firmware, algorithms, and data) from the end-user or third parties, whereas on a PC, the end-user is the one protecting their assets.

    I would love to see more focus on device manufacturers protecting the user instead of trying to protect themselves.

    Prime example where the TPM could be fantastic: embedded devices that are centrally coordinated. For example, networking equipment. Imagine if all UniFi devices performed a measured boot and attested to their PCR values before the controller would provision them. This could give a very strong degree of security, even on untrusted networks and even if devices have been previously connected and provisioned by someone else. (Yes, there’s a window when you connect a device during which someone else can provision it first.)
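
    A rough sketch of what the controller-side check could look like, using Ed25519 as a simplified stand-in for real TPM2_Quote verification (the function names and golden value are made up for illustration):

      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

      GOLDEN_PCR_DIGEST = b"\x00" * 32  # placeholder; recorded from a known-good boot

      def should_provision(ak_pub: Ed25519PublicKey, nonce: bytes,
                           quoted_digest: bytes, signature: bytes) -> bool:
          # The device signs (nonce || PCR digest) with its attestation key;
          # a fresh nonce prevents replaying a quote from an earlier good state.
          try:
              ak_pub.verify(signature, nonce + quoted_digest)
          except InvalidSignature:
              return False
          return quoted_digest == GOLDEN_PCR_DIGEST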

    But instead, companies seem to obsess over protecting their IP, even though there is almost no commercial harm to them when someone inevitably recovers the decrypted firmware image.

    • direwolf20 29 minutes ago
      Many of these companies outsource manufacturing to places with weak intellectual property protection: it would be easy for the contract manufacturer to run an extra batch and sell the units directly, and firmware encryption is the only thing preventing that. I hope this explains these companies' paranoia.
    • ls612 28 minutes ago
      And I’d like a pony, but we can’t get what we want, only what we can take, and asymmetric cryptography combined with Western law lets hardware manufacturers take control of your property away from you. I’m not holding my breath for that to change anytime soon…
  • bri3d 29 minutes ago
    Note that it's really easy to conflate TPM and hardware root of trust (in part because UEFI Secure Boot was awfully named), and the two things are linked _only_ by measurements.

    What a TPM provides is a chip holding some root key material (seeds) plus registers (PCRs) that can be extended with external data in a black-box way; that black-box state can then be used to perform cryptographic operations. So essentially, it is useful only for sealing data to the PCR state or attesting that the state matches.
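
    A minimal illustration of the extend operation, assuming SHA-256 PCRs (real TPMs extend with the digest of the measured data):

      import hashlib

      def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
          # TPM2_PCR_Extend: new PCR value = H(old PCR value || measurement)
          return hashlib.sha256(pcr + measurement).digest()

      pcr = bytes(32)  # PCRs start zeroed at reset
      for stage in (b"firmware", b"bootloader", b"kernel"):
          pcr = pcr_extend(pcr, hashlib.sha256(stage).digest())
      # The final value is fully determined by the sequence of measurements,
      # so whatever feeds the TPM its measurements must itself be trusted.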

    This becomes an issue once you realize what's sending the PCR values: firmware, which needs its own root of trust.

    This takes you to Intel Boot Guard and AMD PSB/PSP, which implement a traditional secure-boot root of trust starting from a public key hash fused into the platform SoC. Without these systems, there's not much point in using a TPM, because an attacker could simply send the "correct" hashes for each PCR and reproduce the internal black-box TPM state of a "good" system.

  • dfajgljsldkjag 2 hours ago
    It is wild that session encryption is not enabled by default on these chips. I feel like most vendors just slap a TPM on the board and think they are safe without actually configuring it properly. The article is right that physical access usually means game over anyway, so it seems like a lot of effort for a small gain.
    • derekerdmann 2 hours ago
      If I remember correctly, it's up to the client program to set up the session; it's not something the vendor's implementation does for you. It's conceptually similar to how an HTTPS client has to perform the TLS handshake itself after opening a socket, before any plain HTTP content can flow over it.
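
      A loose sketch of what setting up an encrypted session buys you (heavily simplified; the real TPM2 scheme derives keys with KDFa from per-command nonces and uses XOR or CFB obfuscation, not the toy construction below):

        import hashlib

        def session_key(shared_secret: bytes, nonce_caller: bytes,
                        nonce_tpm: bytes) -> bytes:
            # Client and TPM derive the same key from the handshake nonces,
            # so sensitive parameters never cross the bus in plaintext.
            return hashlib.sha256(shared_secret + nonce_caller + nonce_tpm).digest()

        def obfuscate_param(key: bytes, param: bytes) -> bytes:
            # Toy 32-byte keystream; the real scheme stretches it as needed.
            stream = hashlib.sha256(key).digest()
            return bytes(p ^ s for p, s in zip(param, stream))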
      • bangaladore 1 hour ago
          It doesn't help that the TPM spec is so full of optional features (across the N spec versions) that it's often annoying to find out what a vendor even supports without signing an NDA and then some.

          TPMs work great when you have a mountain of supporting libraries to abstract them away for you. Unfortunately, that's often not the case in the embedded world.

        • RedShift1 1 hour ago
          Even on desktop it's terrible. I wanted to protect some private keys for a Java application, but there is no way to talk to a TPM from Java, so *shrug*.
          • Nextgrid 14 minutes ago
            The TPM needs a way to authenticate your Java application, since the TPM otherwise does not know whether it's actually talking to your application or something pretending to be it.

            This means you generally need an authenticated boot chain (via PCR measurements) and then have your Java app "seal" the key material to that.

            It's not a problem with the TPM per se; it would be no different if you were using an external smartcard or HSM. The HSM still needs to ensure it's talking to the right app and not an impersonator (and if you use keypair authentication for that, then your app must store the keypair somewhere, so you've just moved the authentication problem elsewhere).
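
            A very rough sketch of the sealing idea, mimicking the shape of TPM2 PolicyPCR's hash chain (not its exact encoding, which uses command codes rather than a text label):

              import hashlib

              def policy_pcr(policy_digest: bytes, expected_pcr_digest: bytes) -> bytes:
                  # The policy is itself a hash chain over the expected PCR
                  # state; the TPM releases the sealed key only when the
                  # live PCRs reproduce exactly that state.
                  return hashlib.sha256(policy_digest + b"PolicyPCR" + expected_pcr_digest).digest()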

    • bangaladore 1 hour ago
      In many industries, once someone has physical access to a device, all bets are off. And when used correctly, TPMs can provide tons of value even when not encrypting the bus.
  • ValdikSS 1 hour ago
    Sigma-star puts out many very high-quality embedded blog posts, and covers unpopular, rarely discussed topics in real depth.
  • jhallenworld 1 hour ago
    Do you really need a TPM if you have something like ARM TrustZone?
    • astrobe_ 13 minutes ago
      I think the general problem is that SoC-based security relies on internal "fuses" that are write-once, as the name suggests, which usually means they are usable only by the manufacturer.

      TPMs can be reprogrammed by the customer. If the device needs to be returned for repairs, the customer can remove their TPM, so that even the manufacturer cannot crack open the box and have access to their secrets.

      That's only theory though, as the box could actually be "dirty" inside; for instance, it could leak the secrets obtained from the TPM to mass storage via a swap partition (though I don't think those are common in embedded systems).

    • ValdikSS 58 minutes ago
      Sure, why not? There are reference implementations both for TrustZone/OP-TEE (from Microsoft!) and in the Linux kernel. No need to code anything; everything is already there, tested and ready to work.

      https://github.com/OP-TEE/optee_ftpm

      Or you mean dedicated TPM?

      • jhallenworld 53 minutes ago
        I mean a separate chip.
        • ValdikSS 44 minutes ago
          Well, you have much more control over the lower-level boot process on ARM chips, and each SoC manufacturer has its own implementation of trusted boot, which relies on cryptography and secrets inside the SoC rather than on a TPM as in the x86/UEFI boot process.

          In the context of trusted boot, not much. If your specific application doesn't require advanced TPM 2.0 features, like separate NVRAM or different locality levels, then it's not worth using a dedicated chip.

          However, if you want something like PIN brute-force protection with a cooldown on a separate chip, a dTPM will do that. This is more or less exactly why Apple, Google, and other major players put the most sensitive functionality on a separate chip: to prevent security bypasses when an attacker has gained code execution (or some kind of reset capability) on the application processor.
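
          A toy model of that anti-hammering logic; the point is that the counter and the clock live on the separate chip, out of reach of code running on the application processor (the limits below are made up):

            import time

            class PinGate:
                MAX_FAILURES = 5
                COOLDOWN_SECONDS = 60

                def __init__(self, pin: str):
                    self._pin = pin
                    self._failures = 0
                    self._locked_until = 0.0

                def try_pin(self, attempt: str) -> bool:
                    if time.monotonic() < self._locked_until:
                        return False  # still cooling down
                    if attempt == self._pin:
                        self._failures = 0
                        return True
                    self._failures += 1
                    if self._failures >= self.MAX_FAILURES:
                        self._locked_until = time.monotonic() + self.COOLDOWN_SECONDS
                        self._failures = 0
                    return False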

          • bri3d 34 minutes ago
            > their own implementation of Trusted Boot which relies on the cryptography and secrets inside the SoC rather than TPM as in x86/UEFI boot process.

            TPM and x86 trusted boot / root of trust are completely separate things, linked _only_ by the provision of measurements from the (_presumed_!) good firmware to the TPM.

            x86 trusted boot relies on the same SoC manufacturer type stuff as in ARM land, starting with a fused public key hash; on AMD it's driven by the PSP (which is ARM!) and on Intel it's a mix of TXE and the ME.

            This is a common mistake, and it's very important to point out, because using a TPM alone on x86 doesn't prove anything: unless you _also_ have a root of trust, an attacker could just be feeding the "right" hashes to the TPM and you'd never know.

    • zorgmonkey 44 minutes ago
      There have been many vulnerabilities in TrustZone implementations, and both Google and Apple now use separate secure-element chips. In Apple's case, they put the secure element on their main SoC, but on devices whose main chip wasn't designed in house (like the Intel Macs) they used the T2 Security Chip. On all Pixel devices, I'm pretty sure the Titan has been a separate chip (at least since they started including it at all).

      So yes, incorporating a separate secure element/TPM chip into a design is probably more secure, but ultimately the right call will always depend on your threat model.