
Here's a summary from the Docker perspective https://blog.docker.com/2016/04/docker-security/


The table from the report, and Docker's championing thereof, are also (nearly flagrantly) misleading, since Docker only supports image signing if you use the public Hub. You cannot (repeat: cannot) sign Docker containers any other way, so it's barely half a feature and does not work for enterprises at all, even though enterprises are exactly the ones most likely to invest in serious key infrastructure and actually use signing. Yet the table says "strong defaults" when describing this oddly useless feature:

https://docs.docker.com/engine/security/trust/content_trust/

> Content trust is currently only available for users of the public Docker Hub. It is currently not available for the Docker Trusted Registry or for private registries.

> Currently, content trust is disabled by default. You must enable it by setting the DOCKER_CONTENT_TRUST environment variable.

How is the complete lack of image verification until explicitly enabled, and only on public images, a "strong default"?
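
Concretely, the "default" is this opt-in dance in every script that pulls an image. A minimal Python sketch just to make the point; the image name is hypothetical, but the environment variable is straight from the docs quoted above:

    import os
    import subprocess

    # Without DOCKER_CONTENT_TRUST=1, `docker pull` fetches images with no
    # verification at all. With it set, the pull is refused unless the tag
    # has signed trust data -- which today means the public Hub.
    env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
    subprocess.run(["docker", "pull", "example/myimage:latest"],  # hypothetical image
                   env=env, check=True)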

I'm also mystified by the row for SELinux, where rkt has Optional in scary yellow for some reason, and the other two do not. I suspect that table was the whole point of the independent review and it is fulfilling its purpose handily for Docker. I haven't even read the report and I can identify four suspicious discrepancies in that table alone.

ETA: Compare how the author describes Docker to rkt (it's rkt, not Rkt, too): http://imgur.com/a/D6nEw


Hi, author of the paper here. A couple of things:

- I agree that requiring Docker Hub (or their private hub) is unfortunate, but I'm talking purely about the technical implementation. "Does not work for enterprises at all", well, there are a number of large enterprises that would disagree with you ;)

- For the table's SELinux row, Rkt gets the "scary" yellow Optional because it only supports a single MAC implementation, it isn't very portable, and it's not enabled by default, vs. LXC and Docker, which both have quite strong MAC policies by default. Condensing all the information into a table was honestly quite difficult (balancing readability against a million footnotes for each point). Hopefully readers don't take all their take-aways from the table alone, and read the paper in full.

- I used Rkt for the name and rkt for the command. Seemed to help consistency.

Thanks for your feedback.


(Disclaimer: I added SELinux support to CoreOS)

I'm a little confused around the SELinux issue. SELinux is inherently unportable - each distribution has its own policy (generally based on refpolicy, but sometimes fairly divergent), and it's basically impossible for an application to ship a policy that's compatible with more than one distribution. Rkt's SELinux design inherits from SVirt in such a way that in most cases it'll just work with a distribution's existing SELinux policy. It's fair to say that the number of distributions that ship policy that works with Docker is larger than for rkt, but this is fundamentally about distribution priorities rather than technological choices. On Fedora, rkt should provide identical SELinux confinement to Docker - on CoreOS it'll be better, since we support SELinux on overlayfs as well. Whether SELinux is enabled or not is (again) a distribution choice. Fedora ship with SELinux enabled by default, and both rkt and Docker will use it as a result.


Thanks for responding. I'm talking about the technical implementation, too. How is a feature hidden behind an environment variable a "strong default"?

And Docker simply screenshotted the table, so it's a noble goal, but...

I suppose I could remark upon MESOS, Rkt, and so on, and how getting names right is important because it characterizes the rest of your thoughts and analysis of the things you're studying, but I'll stick with the question I started with here.


> You cannot (repeat: cannot) sign Docker containers any other way, so it's barely half a feature and does not work for enterprises at all.

What makes you think this? It is 100%, patently false. Private notary servers can be deployed alongside private registry servers without problem. See here for docs on how to do it: https://github.com/docker/notary/blob/master/docs/running_a_...


> What makes you think this?

The docs I linked and quoted, written by your own organization and helpfully pasted into the point I made?

I'm glad to see it's possible (if cumbersome), but I ruled Docker out for this purpose based on the exact link I just pasted. I also followed up and didn't see a "hey, you can sign private registries" bullet in your blog post responding to this paper, or much of anywhere, and Googling "docker sign private registries" doesn't go anywhere.

I'm still unsure why I'd stand up several daemons to accomplish signing a file, but that's a side point.

----

ETA: I can no longer reply because I've burned my precious HN comment budget commenting upon this paper (sorry, blame HN), so here's what I would reply to you downthread:

> 1. We have to host the signatures somewhere, so we host them in a store we call the notary server.

We've had this solved for a long time with .asc files, and Docker is already shipping an HTTP server or six. Shit, extend the Docker format and put the signature on each layer. There's a lot of prior art from RPM and dpkg in particular on how this can be done without writing yet another Docker daemon to run.
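
A sketch of what I mean, assuming the layer tarballs have been exported somewhere (the paths are made up; gpg's --detach-sign and --verify are the only moving parts):

    import glob
    import subprocess

    # Detached, armored signature per layer tarball -> layer.tar.asc,
    # the same .asc convention RPM and dpkg ecosystems have used for years.
    for layer in glob.glob("image-export/*/layer.tar"):
        subprocess.run(["gpg", "--armor", "--detach-sign", layer], check=True)

    # Verification is the same thing in reverse; no extra daemon required.
    for sig in glob.glob("image-export/*/layer.tar.asc"):
        subprocess.run(["gpg", "--verify", sig, sig[:-len(".asc")]], check=True)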

I'm sorry, I have to call bullshit here. Docker has a very strong daemon-for-everything engineering culture, and that's the only reason this exists. It's also why folks are competing with you, because there are three or four different daemons in the Docker ecosystem that simply should not exist. Including dockerd.

> Think serving an outdated container with known-vulnerable software. Sadly, most artifact signing systems do not mitigate this attack today,

Because it's out of scope for a signature. That is conflating a signature with content revocation, which is a different problem altogether. A signature is an attestation of certain properties of data, and "is no longer valid content because circumstances changed after it was signed" is not one of them. The validity of the content is orthogonal to its signature. That known-vulnerable container still carries a valid signature, and it's overreaching to expect a signature system to solve that problem. It is solvable in other ways.

Known-bad OpenSSL is still signed in repositories. And valid. And that's fine, because it's a separation of concerns; you get non-repudiation, integrity, all that stuff from a signature scheme. Upgrading to gatekeeping content on top of signatures indicates to me a fundamental misunderstanding of the problem ("can I run this?" instead of "this is an authenticated, intact image that came from where I expect"), which concerns me. You can solve the problem you present in other ways.

Mixing in the term "replay attack" is extremely confusing and I think diluting your point, because it is baffling me and really does not apply to what you are saying.


Disclaimer: I work at Docker.

Most of your points are a criticism of TUF, of which Notary and Docker Content Trust are an implementation. Based on your comments I believe you're not familiar with TUF and the scope of problems it solves. Here's a good resource to learn more about it: https://theupdateframework.github.io

You clearly are not a fan of Docker and I respect your opinion, I don't really want to engage in that aspect of the discussion. Now, on the specific topic of secure content distribution, I hope you won't let your bias against Docker get in the way of understanding the benefits of TUF. It does improve the state of the art in secure content distribution, and you should really take the time to understand it and perhaps revisit some of your opinions. We're leveraging it in Docker and sharing our implementation, but you don't have to use Docker to use Notary or TUF.

If after reading about TUF you have specific criticism of it, I would be interested to hear about it.


My criticism is that a digital signature isn't enough for you. If I want to integrate with TUF, I can. If I don't, and I solve what TUF solves another way, well, Docker has said I'll use TUF. So I'll use TUF. Your position is that a digital signature is not useful in itself. This is wrong; it is.

Let's write a spec:

- Verify integrity and authenticity of a Docker image

The logical implementation:

- Digital signatures, detached or otherwise

Your implementation:

- Multiple complex, daemonic systems to reinvent software updating and, incidentally, signatures based on TUF

Your rationale:

- Digital signatures are not useful alone

So those of us who are aware of the limitations are left out in the cold, because we can't point gpg at a Docker image and just get the problem done. We have to learn this entire system Docker has created that's going to bring a grand unified software updating future. Maybe I have my own Omaha updater already. Maybe I just want dockerd to validate a signature. It is your prerogative to steer Docker toward crafting novel daemon engineering for every possible scenario, but that's the criticism I'm going to levy, whether you want to engage it or not.
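
"Point gpg at a Docker image" would look roughly like this (a sketch; docker save and gpg are the only tools involved, and the image name is hypothetical):

    import subprocess

    # Export the image to a single tarball, sign it, verify it. That's the
    # whole primitive I want dockerd to understand.
    subprocess.run(["docker", "save", "-o", "myimage.tar", "example/myimage:latest"], check=True)
    subprocess.run(["gpg", "--armor", "--detach-sign", "myimage.tar"], check=True)
    subprocess.run(["gpg", "--verify", "myimage.tar.asc", "myimage.tar"], check=True)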

The fundamental problem here is composability versus platform. My critique is not of TUF, of which I am not only familiar but excited. My critique is that organizationally at Docker, you take a problem like "sign an image," which is a perfectly useful primitive in every software distribution system on the planet, and say "that's not enough. We need a platform." You are dictating how my updating infrastructure works and then saying you've solved signatures. Which is technically accurate, I suppose.

I'm also pretty much over criticism of Docker being cast as me not getting and/or understanding it. Believe me, Solomon, I get it, and I understand that you want to caricature everyone who disagrees with your strategy as biased against you. (That's actually the third time I can recall you making my criticism of Docker personal. I have no anti-Docker bias. I believe others are implementing what you're working on better and that you've simply got the war chest, which is vastly different from having a bias. I used the shit out of ZeroRPC and I've respected a whole lot of your work since then. Come on.)[0]

We're talking about signing a file. Signing. A file. Which I cannot do without a whole shitload of infrastructure that I do not want (including MySQL, apparently), which is a systemic issue with Docker all the way back to dockerd.

Edit: [0]: 484 days ago we discussed exactly this, and here we are again, condescending criticism: https://news.ycombinator.com/item?id=8789181


Responding to your edit:

Notary, the underlying project that implements Docker's Content Trust feature, is an implementation of The Update Framework (TUF). Generally, you want a software update system to deal with a whole host of issues; just solving "is this content signed?" actually achieves very little. Surviving key compromise, freshness guarantees, and resilience against mix-and-match attacks are all critical to building a system that meets real-world use cases and withstands real-world attacks. Threshold signing and signing delegation are additional features you get when using TUF, which help with splitting the ability to sign across multiple individuals or systems.

You seem to be interested in this topic. I recommend you read a couple of papers to get some more background on why TUF exists and what problems it solves. A key point would be to understand why TUF deals with signed collections of software instead of just individual signed objects.
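
To make the "signed collections" point concrete, here is a toy sketch (not real TUF metadata; the field names are simplified from the spec). The signature covers the whole collection plus a version and expiry, so individually valid but stale artifacts can't be mixed into a newer set:

    import hashlib
    import json
    import time

    # Toy stand-in for a TUF-style targets/snapshot role: what gets signed
    # is the *collection*, not each artifact on its own.
    collection = {
        "version": 42,
        "expires": int(time.time()) + 7 * 24 * 3600,  # freshness window
        "targets": {
            "app:1.4": {"sha256": hashlib.sha256(b"layer bytes").hexdigest(), "length": 1234},
            "app:1.3": {"sha256": hashlib.sha256(b"older layer").hexdigest(), "length": 1200},
        },
    }
    payload = json.dumps(collection, sort_keys=True).encode()
    # sign(payload) with whatever key scheme you choose (elided here).
    # Verification checks the signature AND version/expires AND every
    # per-target hash; that combination, not a lone per-file signature,
    # is what defeats mix-and-match and rollback.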

Start here to get an overview of The Update Framework:

1. Overview: https://theupdateframework.github.io/

2. Specification: https://github.com/theupdateframework/tuf/blob/develop/docs/...

Existing package managers and their shortcomings are covered in these two papers:

1. https://isis.poly.edu/~jcappos/papers/cappos_pmsec_tr08-02.p...

2. https://isis.poly.edu/~jcappos/papers/cappos_mirror_ccs_08.p...


The several daemons serve two purposes:

1. We have to host the signatures somewhere, so we host them in a store we call the notary server.

2. Notary has a concept of timestamping, so we spin up a timestamping server alongside a notary server that can guarantee the freshness of the data. We use a separate server so that folks can segment the timestamp signing functionality from the signature metadata serving functionality. This helps allow separation of concerns.

Timestamping is important because it can help prevent replay attacks where old, validly signed data is served to clients. Think serving an outdated container with known-vulnerable software. Sadly, most artifact signing systems do not mitigate this attack today, but we wanted to make sure ours would.
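
Schematically, the client-side check that the timestamp role enables looks something like this (a simplified sketch, not Notary's actual code; verifying the signature on the document itself is assumed to have already happened):

    import time

    def check_freshness(timestamp_doc, last_seen_version):
        """Run after the timestamp document's signature has been verified."""
        # An expired document means the server stopped re-signing -- or
        # someone is replaying old, validly signed metadata at us. Refuse it.
        if time.time() > timestamp_doc["expires"]:
            raise ValueError("timestamp expired: possible freeze/replay attack")
        # Versions must be monotonic; anything lower is a rollback.
        if timestamp_doc["version"] < last_seen_version:
            raise ValueError("rollback: metadata older than previously seen")
        return timestamp_doc["version"]

    # Hypothetical document shape, roughly what the timestamp server emits:
    doc = {"version": 7, "expires": time.time() + 3600, "snapshot_sha256": "..."}
    check_freshness(doc, last_seen_version=6)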


Sorry about that; I will get that page of the docs fixed.

Open invitation to anyone here: Our implementation of TUF via Notary has been serving us well. If you decide to try it out and run into any snags, let me know and I can help you get it up and running. Contact info can be found in my profile.



Besides Session IDs and Session Tickets[1], which already exist in the TLS protocol, he could be referring to the Token Binding Protocol draft[2], which, quoting from its summary, "allows client/server applications to create long-lived, uniquely identifiable TLS bindings spanning multiple TLS sessions and connections".

[1] https://en.wikipedia.org/wiki/Transport_Layer_Security#Resum...

[2] https://tools.ietf.org/html/draft-ietf-tokbind-protocol-01


Not having to worry about calculating the length of the message before transferring it?


gRPC actually still has to calculate the length of a message before transferring it.
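
On the wire, each gRPC message is length-prefixed: a one-byte compressed flag, a four-byte big-endian length, then the serialized message. A minimal sketch of that framing:

    import struct

    def frame_grpc_message(payload: bytes, compressed: bool = False) -> bytes:
        # gRPC's HTTP/2 framing: 1-byte compressed flag + 4-byte big-endian
        # length + the serialized protobuf bytes. The length must be known
        # before the message goes out, which is the point above.
        return struct.pack(">BI", 1 if compressed else 0, len(payload)) + payload

    framed = frame_grpc_message(b"\x0a\x05hello")  # already-serialized protobuf bytes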


How does it work? Brute force? Rainbow tables?


My university uses them for file storage and sharing, so I imagine they have many large enterprise clients.


Has this never been posted before?


Actually, there have been two significant posts in the last year:

https://news.ycombinator.com/item?id=7604787

https://news.ycombinator.com/item?id=7682173

So by HN standards (https://news.ycombinator.com/newsfaq.html) this is unequivocally a dupe. But the community interest here seems genuine, so we won't apply the full penalty.


It was: https://news.ycombinator.com/item?id=7815993

but that was half a year ago and there was no community reaction.


No community reaction doesn't mean anything. HN is so competitive these days that any submission HAS to get 5 upvotes within 10 minutes to reach the front page, and only once it reaches the front page will enough people see it to upvote it. It's a catch-22.



Running Google Hangouts through Tor would work well as it uses https by default. As long as he's using a throwaway Google account, it seems good to me.


Exactly; the Wikipedia articles for BREACH and CRIME are good starting points, if nothing else.

[1] http://en.wikipedia.org/wiki/BREACH_(security_exploit)

[2] http://en.wikipedia.org/wiki/CRIME

