tl;dr DJB was approached with a complaint and understood it as a situation where he would give advice and his counterparty expected him to keep the matter in confidence. After he heard about the frustration the complainant was experiencing, he asked the person to file a formal complaint, or at least send a self-contained email (explicitly acknowledged as not confidential) that he could use to move forward without breaking that confidence.
Seems that's where things broke down. There's another complaint related to Tanja that seems separate (he says that she urged him to not file a complaint immediately), but that's orthogonal to DJB's side of this, I think.
EDIT: It seems, from context, that the complainant wanted the confidence revoked, and everything put on the record (not unreasonable). But DJB doesn't _keep_ records of confidential things -- hence his insistence that they start from the beginning.
EDIT2: I'm trying to summarize "What is DJB's side of this (as communicated in the linked emails)?" not the whole scenario. I don't know anything about this situation directly.
> But DJB doesn't _keep_ records of confidential things -- hence his insistence that they start from the beginning.
I call BS on this, if we're talking about adults in university positions. The reasonable response in that case is "I do not have any archives; please resend everything you've got," not silently starting from the beginning without communicating that fact clearly. Someone who fails to act properly in that position shouldn't be overseeing other people.
He should not stop because of a technicality on his side in that situation.
(Edit: reasonable response == absolute minimum here, he could do much more)
I don't disagree -- presumably some of their conversation happened verbally, so the claim 'I don't take notes or have records of confidential things' makes more sense? Seems likely, I frequently discuss things in person first.
I also agree with your characterization of the other side of this -- that it seems like he's using a technicality to excuse not doing something important. I'm not advocating anything, just trying to summarize a pretty long email chain.
You're talking about a crypto researcher here. Their behavior absolutely does include a much higher level of awareness around the handling of confidential information. He may well have a policy that all confidential communication is treated separately, including being automatically wiped after some period of time. This would need to be standard for his work as it relates to investigating 0day and other vulnerabilities that must be confidentially disclosed to third parties.
This does not make him a nice guy, and he would likely have been in violation of Title IX, which means any US govt funding for his lab is potentially at risk as a result of this case.
What do you think crypto researchers are? It's not a cloak and dagger field. It's applied mathematics research. You've never seen a group of people less wrapped up in spycraft than the attendees of an academic crypto workshop. That's one of the things that made Appelbaum's admission to Dan and Tanja's research group so weird.
I don't care who he is, or what his daily email routine is. It doesn't matter. At any level, if someone you're superior to in your organisation comes to you and reports abuse from another person in the org, you either follow up immediately, or you shouldn't be superior to them. Any kind of follow-up should produce a record. If the person talking to you doesn't want you to report it further, then it's your business to keep a record of that and never lose it. I know this from normal decency and numerous company trainings, and I've never even been a manager.
His research topic, or even whether the report is true don't matter. It's in his interest to follow up on his own and keep records. If not because it's right, at least to protect the university and himself from what's happening right now.
Sometimes your best protection is a policy that all electronic communications are automatically deleted after a retention period. Many companies have such policies, and they have them on the advice of their legal counsel, specifically to avoid discovery issues in the event of a suit. You can argue this doesn't apply here from a moral perspective and I would agree with you, but IT and legal policies often do not follow an ethical code.
Crypto research exacerbates this because the likelihood of such suits is higher than with other kinds of research, sometimes rising to the level of nation states getting grumpy at you with all that could entail. Finally, while I can't make any excuse for the behavior, he would be far from the first graduate advisor to have less than stellar management training or skills.
That's pretty disgusting, and the kind of "sneaky" you'd expect from an overly precocious child. Then again, it actually does match the combination of passive aggression and thwarted control-freak that I've come to expect from academia.
> I don't think there are any amnesic features like in Tails nor strong isolation between gateway and workstation to prevent IP leaks like in Whonix.
Subgraph sandboxes run in a network namespace with no direct access to the network or ability to view any of the physical network interfaces on the system. There is no way for an attacker to send network traffic directly or to discover the real IP address of the system without breaking out of the sandbox.
On Qubes OS the networking VM runs a standard Linux kernel with no special security hardening at all, apart from the simple fact that it runs in a separate Xen VM. If an attacker is able to compromise the NetVM, they may not have direct access to user data, but they do have dangerous access to perform further attacks:
- Attacks against the hypervisor to break isolation
- Side-channel attacks against other Qubes VMs to steal cryptographic keys
- Interception of and tampering with network traffic
- Attacks against any internal network the Qubes OS computer connects to.
So if you assume that remote attacks against the Linux kernel networking stack are an important threat, the consequences of a successful attack even against Qubes are pretty bad.
Subgraph OS hardens the Linux kernel with grsecurity, which includes many defenses against exploitation that have historically prevented local exploitation of most kernel security vulnerabilities. Exploiting kernel vulnerabilities locally is much easier than doing so remotely, probably never by less than an order of magnitude. Reliably exploiting a kernel vulnerability remotely, even against an unhardened kernel, is so rare that teams present papers at top security conferences about a single such exploit.
I know it's contentious to say so, but I don't believe that anybody will ever remotely exploit a kernel vulnerability against a grsecurity-hardened Linux kernel, especially since RAP was introduced.
The threat of remotely attacking the Linux kernel through the networking or USB stack was always low in my opinion, but as the threat approaches zero it raises some questions about how justifiable the system VMs are in Qubes OS considering the system complexity and usability impairment they introduce.
I agree with your comments about grsecurity making the kernel much more secure. However, your comments about remote exploits and Qubes are somewhat contradictory. You claim that remote kernel exploits are very rare and difficult; by that same argument the Qubes NetVM must be very difficult to attack, because it runs no applications or services: it functions as a router and does essentially nothing else. It is only the AppVMs, or any others that run applications, which are vulnerable, and if these are attacked, Qubes's design will likely prevent a permanent backdoor from being installed in that VM and make it difficult for the attacker to gain access to any of the other AppVMs.
I still think Subgraph looks promising and I look forward to your future work.
I'm answering a comment chain about how Subgraph OS does not 'isolate' the network or USB stacks which is frequently brought up as an important deficiency in comparison to Qubes OS. My point is that this isn't a significant advantage of Qubes because such attacks are rare and difficult, and because they're even harder to perform against Subgraph OS.
I wasn't talking about AppVMs at all, but you can of course persistently backdoor Qubes AppVMs in numerous ways by writing to the user home directory. In Subgraph OS we design our application sandboxes to prevent exactly this.
Thank you for your answer, I'll definitely look further into SubgraphOS and grsecurity. I nevertheless believe that the kinds of attacks you describe, especially an attack from the NetVM against the hypervisor to break isolation, are quite unlikely in Qubes.
Could you also answer my question about SubgraphOS main use case and threat model?
Is it mainly for anonymous and pseudonymous usage?
If it is designed mainly for everyday use (including non-anonymous use cases like banking, social media, and personal/work email), as it seems to me, then I don't quite understand the design choice of forcing all traffic through Tor by default. That seems unnecessary when anonymity isn't needed, and possibly even dangerous.
Yeah, we agree, actually. Tor probably won't even be the default. We are adding flexibility to network support right now. Soon you'll be able to have just cleartext SGOS, or be able to send sandboxed apps through different paths: one app might exit through a VPN, another through Tor only, another through i2p maybe, etc, enforced by the sandbox.
> imho the qubes approach is more viable and exposes far less attack surface.
I don't know what you base that opinion on, since it's not an easy comparison to reason about. One metric you could use would be actual vulnerabilities. In the last year there have been several hypervisor escape vulnerabilities that compromised Qubes OS VM isolation completely, most (all?) of which have been present in Xen for the entire lifetime of the Qubes project.
By contrast during the same period only one Linux kernel vulnerability (DirtyCow) affected Subgraph sandboxed applications, and it would only have been exploitable using techniques which have not been disclosed in any public exploit so far.
I really liked the architecture, but I don't think "container" is a good name for this kind of isolation. For the machinery to work as expected, the namespaced application must follow the Oz rules/policies. Containers nowadays are complete environments, each with its own rootfs and so on, which is very different from the kind of container Subgraph requires. The name can lead to misinterpretation, because I cannot reuse existing distros/rootfs and package managers to run other applications in Subgraph. As I understand it, Subgraph only bind-mounts the common directories from the host into the process's mount namespace, on top of a tmpfs. That makes it hard to reuse, say, an Ubuntu app.
How persistent data will be managed isn't detailed in the document.
Seccomp-bpf is a good enhancement. It can really help with the well-known Docker security issue of volumes mounted with write permissions.
Still wondering what's the plan for persistent data. If someone has more info, please share :)