Does trustd leak information about users' software usage to Apple and/or third parties?

When discussing a recent outage of Apple's OCSP server, people on various Twitter accounts (in the threads following this tweet) and "fefe's blog" claimed that the way trustd works on macOS would leak information about which software was used when to Apple and potentially to third parties as well. I always thought that trustd only sent hashes upstream and used OCSP stapling to prevent disclosing that sort of information.

Is there any reliable information out there about the privacy implications of trustd on macOS?


Solution 1:

Is there any reliable information out there about the privacy implications of trustd on macOS?

I don't think we need a deep dive into the privacy concerns of what trustd does and how it does it. If we just look at four points from Jeffrey Paul's blog (referenced in the first link, Jacopo Jannone's post, supplied by bmike's answer), we can see where the privacy issues stem from:

  1. The OCSP requests contain date, time, location, ISP, and application hash
  2. The OCSP requests are transmitted unencrypted.
  3. It's hosted by a 3rd party company (Akamai)
  4. Apple is a partner in PRISM that grants federal police agencies warrant-less, unfettered access to this data.

What does this tell us?

  • There is a log of what application you used, when you used it, and where, hosted by a company that has its own privacy policy and procedures.

  • This information can be easily obtained through a simple man-in-the-middle attack, or by the ISP (of the coffee shop you're hanging out in) simply sniffing the traffic as it passes through its network. Even the coffee shop itself could potentially sniff this traffic!

  • PRISM access is essentially based on the "honor system": the government "cannot" access info on Americans without first obtaining a warrant. However, recent history tells us otherwise.
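To make the first point and the sniffing concern concrete, here is a sketch of what one of these requests carries on the wire versus what a passive observer learns alongside it. Field names follow RFC 6960 (the OCSP spec); all values are placeholders, and the observer metadata is illustrative:

```python
# Sketch of an OCSP request's contents (per RFC 6960) vs. what a network
# observer sees. Values are placeholders, not real captured data.
from dataclasses import dataclass

@dataclass
class OCSPCertID:
    hash_algorithm: str      # e.g. "sha1"
    issuer_name_hash: bytes  # hash of the issuer's distinguished name
    issuer_key_hash: bytes   # hash of the issuer's public key
    serial_number: int       # identifies the certificate being checked

# What trustd actually transmits: the CertID, i.e. "is the certificate
# with this serial, from this issuer, still good?" For Developer ID
# checks this names the *developer's* certificate, which many apps from
# the same developer can share -- it is not a per-app hash.
request = OCSPCertID("sha1", b"\x00" * 20, b"\x00" * 20, 0x1234)

# Because the request goes out over plain HTTP, a passive observer on
# the path sees the CertID *plus* metadata the network itself supplies.
observed = {
    "cert_id": request,
    "client_ip": "203.0.113.7",           # reveals rough location / ISP
    "timestamp": "2020-11-12T18:05:00Z",  # when the check (app launch) happened
}
```

Note that "location" and "ISP" are not fields in the request itself; they are inferred from the source IP and capture time by whoever can see the traffic.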

A knee-jerk reaction to these points would send you down a path leading to "conspiracy theory." It's not that. It's that this information paints a picture of you and your activities, and it is neither protected nor held on Apple's own servers - a company that loves to promote its stance on "privacy."

Using trustd to validate the certificates of apps is one thing. But it is concerning that a log is created and maintained not only of a user's activity on a computer they supposedly own, using software they supposedly have full rights to use, but also of where and when they use it. IMO, this information shouldn't even exist in the first place. The fact that it exists on the servers of a company users never directly and explicitly contracted with to share this data is beyond troubling.

To whom does this computer belong, anyway? Apple, or the user?

Solution 2:

macOS (as implemented in November 2020) occasionally sends out “some opaque information about the developer certificate of apps”, subject to timeouts and only when it detects the ability to connect to Apple servers to check for certificate revocation.
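That timeout-gated behavior is known as a “soft-fail” revocation check. A minimal sketch in Python, under stated assumptions: `parse_ocsp_status` is a stub (a real client DER-decodes the OCSP response per RFC 6960), and the responder URL is the Developer ID endpoint named in the write-ups linked below; nothing here is Apple's actual implementation.

```python
# Soft-fail revocation check: ask the responder, but let the app launch
# if no answer arrives in time. Illustrative sketch, not Apple's code.
import socket
import urllib.request

OCSP_URL = "http://ocsp.apple.com/ocsp-devid01"  # Developer ID responder

def parse_ocsp_status(der_response: bytes) -> str:
    # Stub: real code would DER-decode the OCSPResponse (RFC 6960)
    # and read certStatus ("good" / "revoked" / "unknown").
    return "good"

def certificate_allowed(ocsp_request_der: bytes,
                        url: str = OCSP_URL,
                        timeout: float = 5.0) -> bool:
    try:
        req = urllib.request.Request(
            url,
            data=ocsp_request_der,
            headers={"Content-Type": "application/ocsp-request"},
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return parse_ocsp_status(resp.read()) != "revoked"
    except (socket.timeout, OSError):
        # Fail open: offline or unreachable means "allow". The November
        # 2020 outage hurt precisely because the server accepted
        # connections but answered slowly, so this fast-failure branch
        # was not reached quickly.
        return True
```

Because the check fails open, being fully offline is the benign case; a reachable-but-slow server is the worst case, which is exactly what users experienced.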

Apple has also announced some substantial improvements as a result of last week’s events. They also fixed what I termed a “rookie mistake” of inadequate documentation, and committed publicly to addressing the architecture of this feature. Time will tell how quickly these changes roll out and whether the systems become more resilient without compromising the agility to contain malware and react to revoked certificates.

  • https://support.apple.com/en-us/HT202491

Gatekeeper performs online checks to verify if an app contains known malware and whether the developer’s signing certificate is revoked. We have never combined data from these checks with information about Apple users or their devices. We do not use data from these checks to learn what individual users are launching or running on their devices.

These security checks have never included the user’s Apple ID or the identity of their device. To further protect privacy, we have stopped logging IP addresses associated with Developer ID certificate checks, and we will ensure that any collected IP addresses are removed from logs.

In addition, over the next year we will introduce several changes to our security checks:

  • A new encrypted protocol for Developer ID certificate revocation checks
  • Strong protections against server failure
  • A new preference for users to opt out of these security protections

The two best technical write-ups I have seen cover what in hindsight looks like a rookie mistake by Apple: inadequate documentation of how trustd functions made a bad day much worse for Apple and for the people who support or use macOS.

  • https://blog.jacopo.io/en/post/apple-ocsp/ (credit to Jacopo Jannone for the wording of “some opaque information”)
  • https://lapcatsoftware.com/articles/ocsp.html

The best non-technical summary (but also based on technical analysis and background) is John Gruber’s summary.

Just an embarrassing bug for Apple on a high-profile launch day.

  • https://daringfireball.net/linked/2020/11/14/macos-trustd-bug

The failed or congested OCSP service did cause a very widespread denial-of-service outage on macOS client computers on November 12, 2020, lasting more than 90 minutes.

A crucial difference between OCSP and notarization is that the latter is only checked on first launch of the app. The notarization status is cached permanently and has no expiration, unlike OCSP. Thus, notarization only affects your ability to install new apps, it doesn't affect your ability to launch already installed apps.

I am certain that security researchers are probing these services heavily, and some will publish think pieces on whether Apple has made mistakes. If Apple has, I expect some to collect a substantial bounty by responsibly reporting whether user privacy is being actively compromised.

  • https://developer.apple.com/security-bounty/