I actually do this, because I kinda like having the proxy config right next to the app config in my Compose file, but I also dislike how much manual configuration Traefik needs. Downside is you need to know how to write Caddyfile (easy enough) and then also know how to write labels so CDP translates them into the correct Caddyfile (also easy enough, but could be annoying if you're learning both at the same time). Upshot is that once you know how it translates and you know what you need to write, it works just like Traefik but with just two labels, and I think that's pretty neat.
Caddy can support a surprising amount of weird and wonderful configurations, too.
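For anyone wondering what "just two labels" looks like, here's a minimal sketch using current caddy-docker-proxy label syntax (the service name, domain, and port are all made up):

```yaml
# docker-compose.yml - hypothetical service; CDP translates these labels
# into a Caddyfile site block with a reverse_proxy directive.
services:
  whoami:
    image: traefik/whoami
    labels:
      caddy: whoami.example.com                  # becomes the site address
      caddy.reverse_proxy: "{{upstreams 8080}}"  # proxy to this container, port 8080
```

HTTPS comes along for free, since the generated site block gets Caddy's automatic certificate management like any hand-written one.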
I tried to submit the Caddy configuration for this to www.gnuterrypratchett.com, but looking at it, it doesn't seem like it was ever added to the site.
It is supported by default. Out of the box you can configure it this way.
It is not _configured_ by default. There is an important distinction.
It's not mentioned in documentation because the documentation [1] does not provide an _exhaustive_ list of the kinds of addresses that Caddy can be configured to serve.
Nobody blocks dots in email addresses, although I have had some sites in the past email me using the first part of my email address as my name; it's amusing to open an email from example.com saying "Hi, example.com!"
Funny, that's exactly how I do it! One of the reasons is, I know who lost/leaked/sold my data when the spam starts coming in. (Of course today we also have haveibeenpwned.)
> You were the one who posted the link! If the landing page isn't designed for HN audiences then maybe that's not the link you should have posted on HN?
The logical extreme of this statement is that @mholt shouldn't post a link to any website unless that link is specifically tailored to the average reader of the site he's posting to. That, or Hacker News is special among all websites @mholt could post to.
I don't think that's fair. I also don't see the defensiveness you see - instead, I see @mholt explaining his website's strategy for the benefit of your understanding (as well as that of any future readers). The alternative to which would be not responding to your feedback at all, as he already has sound reasoning not to incorporate your specific suggestion (which we know because he explained it).
It's important to read into the best possible interpretation of a comment and respond to that, assuming good faith, especially on communities like this one. Otherwise we begin to assume everyone is attacking or defending.
I wasn’t just basing my comment on his responses to me; his tone was similar throughout this thread and on the GitHub issue tracker as well (I later saw others have commented on it too). But it’s possible that’s just his writing style and we’re reading too much into it; if that’s the case, I’m sorry for doing so.
Regarding the link point, I don’t agree. If you purposely post a promotional link saying “use my tool” to a specific forum, then you can’t really backtrack and say “you’re not my intended audience for this page” when people raise questions based on incomplete information published at that link. That’s just bad product advertising. Or at the very least, you should add a disclaimer saying “this is normally a manager link (etc)”.
As it happens, I am actually the target audience for that landing page, because I am a tech lead responsible for making architecture-based decisions, and the number of HTTP endpoints we have is small because that’s not the main side of our business (so certs often get forgotten about). That’s why I was asking the questions I was asking.
It is true. The part that makes it consistent with Uber's flouting of the law is that there's another team above the legal team, and that team's job is to find ways to maximize income.
It's a function of priority: make money > minimize legal liability > satisfy customers.
Most popular ACME (Let's Encrypt) clients allow you to provide a CSR instead of generating the keys themselves. That means a bunch more work for you, but if you're worried about this, that's what you should do. Have your safe (even manual if you insist) process make keys, make CSRs for the keys, and put those somewhere readable. The ACME client will hand them over to the CA saying "I want certs corresponding to these CSRs" without needing access to your TLS private keys at all.
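As a sketch of that flow (domain and filenames are placeholders; certbot is just one client that supports this, via its `--csr` flag):

```shell
# Your own (possibly air-gapped, even manual) process generates the key
# and the CSR. The private key never leaves this environment.
openssl genrsa -out example.com.key 2048
openssl req -new -key example.com.key -out example.com.csr \
    -subj "/CN=example.com"

# The ACME client is only ever handed the CSR, never the key.
# Commented out here since it needs a live domain and network access:
# certbot certonly --standalone --csr example.com.csr
```

The CA signs a certificate matching the public key embedded in the CSR, so the client holding the CSR has nothing worth stealing.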
This is technically true, but contextually lacking.
xenolf/lego doesn't use HTTP validation unless you disable just about every other form of validation first. TLS-ALPN validation is much more likely, so port 443.
That said, it is very easy to allow software to bind to privileged ports without providing it root access; this has been solved for a very, very long time.
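For the record, two common approaches on Linux (the binary path and unit fragment below are illustrative, not any distro's official packaging):

```ini
# Option 1: grant the binary the capability directly
#   sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/caddy

# Option 2: run as an unprivileged user under systemd and grant
# the capability in the unit file
[Service]
ExecStart=/usr/local/bin/caddy -conf /etc/caddy/Caddyfile
User=caddy
AmbientCapabilities=CAP_NET_BIND_SERVICE
```

Either way the process binds 80/443 while running as a regular user, with no root involved after setup.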
I help out a lot at Caddy's community forums. I'd like to know more about your experience with Caddy.
> user-hostile behavior
Which behaviour is user-hostile? Perhaps we could address it?
> outright dangerous behavior in the way it fetches SSL certificates
For reference, Caddy uses https://github.com/xenolf/lego/ for its ACME interactions with LetsEncrypt. Could you elaborate on what part of Caddy's behaviour is _dangerous_?
I can summarize my feeling with Caddy as it's probably a good choice for solo developers with a handful of sites on a single VPS that's not terribly important. If you have something really important, I would look elsewhere.
Caddy used to put non-removable advertisements in the response headers. (That one got such bad backlash that they backed away...)
Caddy refuses to cooperate with OS packaging teams. It reeks of self-importance. It's questionable whether caddy is really FOSS with its odd licensing arrangement.
I've seen a number of backward-incompatible point-upgrades that break config files.
Caddy EXITS with error on boot if any of its HTTPS enabled sites fail to get certificates.
Which puts your webserver in a fragile state: the server may serve happily for months (with a hidden cert error) until you restart the service. Then all your sites are down.
Migrating a live website from one server to another is (or was, until recently) quite unsupported and hard to do with zero downtime. See, Caddy needs DNS pointed at it before it can get the cert -- but it can't start serving pages until it has the cert... That's just not an acceptable story in a high-availability environment.
I understand that now caddy can share its certs with a pool of other workers which may help migration processes moving forward.
I may be bitter about this as it caused a lot of damage and downtime for my business. Not inclined to ever touch Caddy again if I can help it.
I am sorry to hear about the damage done to your business. I appreciate that you took the time to list out those grievances. I'd like to respond on a few points, for the sake of clarity (if you're interested, but also for other readers here).
That header thing was indeed a bit of a fiasco; a misguided attempt to honour the few that stepped up to support Caddy monetarily. Once the depth of the issue was made clear to the developers, it was indeed walked back.
Regarding OS packaging teams - it's not the devs' responsibility to become approved package maintainers for individual distros; it's generally not done that way, either. The distro maintainers themselves decide which packages to make available, and how to package them. Caddy doesn't offer repos for the individual popular package managers because of the nature of Caddy's third-party plugin architecture - none of the package managers allow arbitrary downloads from a build server (rightly so - the package maintenance process is intended to provide much higher assurance of security), and they don't allow packages to be built on request either. Not only that, but those plugins may or may not be trusted by the users themselves; the usefulness of anyone being able to extend Caddy and publish their own plugin at any time comes with that downside.
The licensing arrangement was born out of a simple need - Caddy devs gotta eat. The code itself is Apache 2.0 - the Caddy project is as FOSS as it gets. The commercial part is the build server, which isn't open source - if you use it to build your binaries, those binaries are considered either commercial or personal in nature. I can tell you that the devs would like nothing more than to have a different method that would satisfy their monetary requirements so they could make the build server binaries free, too.
The idea behind exiting on start with an error is to ensure that when users start the server, they know straight away that there's a problem and Caddy can't do what they're asking of it (which is to manage their HTTPS certificates). There are ways to get Caddy up, even without a valid HTTPS certificate, and get your site online regardless - they're just not _automatic_.
The fragile state concept is one we come across frequently. The truth is that when people say they're restarting the server, what they mean by "restart" is "shut down, then start" rather than "reload". Caddy has graceful reload capabilities; you can swap the Caddyfile and even the binary itself out without interrupting the server (this isn't true on Windows, though, where varied signaling of the Caddy process is not possible in the same way as it is on *nix based systems).
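On *nix that reload is just a signal (USR1, in Caddy v1's case). Here's a toy sketch of the pattern - a long-running process re-reading its config on a signal instead of restarting - not Caddy itself, just the idea:

```shell
# Write an initial config for our toy "server" to read.
printf 'greeting=hello\n' > /tmp/demo.conf

# Re-source the config in place; no shutdown, no dropped state.
reload() { . /tmp/demo.conf; }
trap reload USR1

reload
echo "greeting is $greeting"

# An operator edits the config, then signals the live process:
printf 'greeting=bonjour\n' > /tmp/demo.conf
kill -USR1 $$        # for Caddy v1 this would be: kill -USR1 $(pidof caddy)
echo "greeting is now $greeting"
```

The process keeps its PID and its open listeners the whole time; only the configuration changes underneath it, which is why in-flight connections survive a reload but not a stop/start.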
I myself have posted working solutions to full live server migrations (for the entire set of websites), between two fully working and secured (HTTPS) Caddy instances, accounting for DNS propagation. It's not unsupported or difficult, just not _automatic_; it requires some specific configuration and a careful hand (like most live site migrations). The somewhat-recent filesystem clustering Caddy does isn't even related to migration - it's actually supportive of distributed fleets solving challenges for other instances. You've always been able to share the TLS assets between Caddy instances and have them be used.
I wish I (or the developers) had been given a chance to offer some guidance - I believe we would have been able to help avoid some of the downtime and losses suffered by your business.
Obviously hard to complain too much about a free product - I'm sharing my personal experience for others.
So the thing I like about AWS is that they can give you a cert before you point the DNS A record at your site. Really fool-proof and excellent. Much better than the Let's Encrypt flow, by design.
In fact on some of my sites I now run Caddy on AWS behind a load balancer with the AWS load balancer providing HTTPS. Works much better and I can sleep at night with less fear.
No worries, hope it makes the facts nice and transparent for people. If anyone reading this has questions or concerns about Caddy, I'd invite further discussion over on their forums.
I believe AWS can do this because they have proof that you own the domain (effectively DNS validation) before handing out certs. Caddy can do similar with DNS validation - fetching your cert without needing to be publicly accessible. It needs you to hook into the API of one of the supported DNS providers though, because validation is still done on a per-request basis (but it has been able to do wildcards for a while). I understand that AWS is more validate once, sign certificates many times, which is quite convenient - and it all hooks into their systems fairly automatically.
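In Caddy v1's Caddyfile that looks something like this (the provider and domain are illustrative; each DNS plugin reads its API credentials from provider-specific environment variables):

```
example.com {
    tls {
        dns cloudflare
    }
}
```

With that in place Caddy answers the ACME challenge by publishing a DNS TXT record through the provider's API, so the server never needs to be reachable on ports 80/443 to get its cert.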
Investors are gonna invest, right? They're there to make money. Presumably they believe the company's operations are legal and likely to provide returns.
https://github.com/lucaslorentz/caddy-docker-proxy