Congratulations on the 10-year anniversary. Having used Traefik for multiple years in a large microservice setup (200+ services), I must say my experience has been mixed. If your requirements match the very opinionated way Traefik does things, it's great. But as soon as they don't, you're going to have a hard time getting things to work. That's why, shortly after migrating to Traefik, I started maintaining an internal fork to add support for unique request ID headers, which I kept up for two years until we migrated to HAProxy. The GitHub issue I opened for this in 2019 is still open.
To be fair I used Traefik back when it was still version 1.7 so maybe things have improved by now.
A bit off topic, but you might want to rethink the name. It is very close to EDEKA, the largest German supermarket chain. They have a very large IT division (https://it.edeka) and judging from the name of your project I was expecting it to be one of their projects.
Well, I have had this name since 2011, and in 2018 a new disease was labeled EDKA (that is the first result you get when you google "edka"). I also became aware of the German supermarket a few years later. I could consider renaming it at some point, but it is very hard to find something available these days...
I used to maintain a self-hosted instance of Bitbucket, and its user experience was actually very nice. We shut it down when Atlassian deprecated the self-hosted licenses. Moving to GitHub and GitHub Actions felt like a downgrade in more than a few ways.
This does affect OpenBao as well. We do have a process in place for responsible disclosure but unfortunately we were not informed about those issues before they were published.
Thank you for being communicative about this and for the great work you're doing on OpenBao
I'm very disappointed to hear that the researchers didn't do their due diligence and inform the OpenBao project about this issue before publishing.
I imagine this is a stressful situation for everyone involved in the project, so I hope the researchers will reflect on how they can avoid this situation in the future.
We've made an effort to keep API compatibility with Vault wherever possible, including with the new namespaces implementation. Most of the tooling that works with Vault today will also work with OpenBao.
It is true that most of the commits in the last 12 months were made by cipherboy, but I can assure you that the project is not a one man show. Building a community and getting traction on a project is hard work and takes time.
The organization has been slowly building trust in more committers and maintainers and so he's had to personally review many a pull request of mine in the interim. :-D
Note to clarify: I wasn’t intending to disparage the project with my original comment! I appreciate that these things take time and a lot of hard work. Just wanted to share an observation, in the knowledge that it may not hold true indefinitely :)
That actually works quite well. I once built a templating engine for Terraform files based on jq that reads in higher-level YAML definitions of the resources to create and outputs valid Terraform JSON config. The main reason back then was that you couldn't dynamically create Terraform provider definitions in Terraform itself.
Later on I migrated the solution to Terramate, which made it a lot more maintainable because you write HCL to template Terraform config instead of jq filters.
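For illustration, a rough sketch of the idea (the file name and fields are made up, and it assumes mikefarah's yq for the YAML-to-JSON step):

    # accounts.yaml (hypothetical):
    #   accounts:
    #     - { name: prod, region: eu-central-1 }
    #     - { name: dev,  region: eu-west-1 }
    # Emit one aliased AWS provider block per account in Terraform JSON syntax.
    yq -o=json '.accounts' accounts.yaml |
      jq '{provider: {aws: [.[] | {alias: .name, region: .region}]}}' \
      > providers.tf.json

Terraform picks up the generated providers.tf.json right next to the regular .tf files, so the rest of the config stays plain Terraform.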
Make is definitely just my personal preference. If using bash scripts, Just, Taskfile, or something similar works better for you, then by all means use it.
The main argument I wanted to make is that it works very well to just use GitHub Actions to execute your tool of choice.
Whenever possible I now just use GitHub Actions as a thin wrapper around a Makefile, and this has improved my experience a lot. The Makefile takes care of installing all necessary dependencies and runs the relevant build/test commands. This also lets me test that stuff locally again, without the long feedback loop mentioned in other comments in this thread.
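As a rough sketch of what I mean (target names and commands are just an example, assuming a Node project):

    .PHONY: deps test build ci

    deps:            # install everything the build needs, locally or in CI
        npm ci

    test: deps
        npm test

    build: deps
        npm run build

    ci: test build

The GitHub Actions job then boils down to a single `run: make ci` step, and running `make ci` on my laptop exercises exactly the same path.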
In addition to the other comments suggesting Dagger is not the saviour due to being VC-funded, it seems like they have decided there's no money in CI, but AI... yes, there's money there! And "something something agents".
From dagger.io...
"The open platform for agentic software.
Build powerful, controllable agents on an open ecosystem. Deploy agentic applications with complete visibility and cross-language capabilities in a modular, extensible platform.
Use Dagger to modernize your CI, customize AI workflows, build MCP servers, or create incredible agents."
Hello! Dagger CEO here. Yes, we discovered that, in addition to running CI pipelines, Dagger can run AI agents. We learned this because our own users have told us.
So now we are trying to capitalize on it, hence the ongoing changes to our website. We are trying to avoid the "something something agents" effect, but clearly, we still have work to do there :) It's hard to explain in marketing terms why an ephemeral execution engine, a cross-language component system, deep observability, and an interactive CLI can be great at running both types of workloads... But we're going to keep trying!
Internally we never thought of ourselves as a CI company, but as an operating system company operating in the CI market. Now we are expanding opportunistically to a new market: AI agents. We will continue to support both, because our platform can run both.
Please be careful. I'd love to adopt Dagger, but the UI, in comparison to GHA, is just not a value add. I'd hate for y'all to go the AI route that Arc did... and lose all your users. There is A LOT to CI/CD, which can be profitable. I think there are still a lot more features needed before it's compelling, and I worry agentic AI will lead you to a hyper-configurable, muddled message.
Thank you. Yes, I worry about muddling the message. We are looking for a way to communicate more clearly on the fundamentals, then layer use cases on top. It is the curse of all general-purpose platforms (we had the same problem with Docker).
The risk of muddling is limited to the marketing, though. It's the exact same product powering both use cases. We would not even consider this expansion if it wasn't the case.
For example, Dagger Cloud implements a complete tracing suite (based on OTEL). Customers use it for observability of their builds and tests. Well, it turns out you can use the exact same tracing product for observability of AI agents too. And it turns out that observability is a huge unresolved problem for AI agents! The reason is that, fundamentally, AI agents work exactly like complicated builds: the LLM is building its state, one transformation at a time, and sometimes it has side effects along the way via tool calling. That is exactly what Dagger was built for.
So, although we are still struggling to explain this reality to the market, it is actually true that the Dagger platform can run both CI and AI workflows, because they are built on the same fundamentals.
Hmmmm... so I think the crux of the matter is here: clearly articulating why your platform (for both containers and agents) is really helpful for handling cases where there is both state and side effects.
I can understand what you're trying to say, but because I don't have clear "examples" at hand showing me why, in practice, handling such cases is problematic and why your platform makes that smooth, I don't "immediately" see the added value.
For me right now, the biggest "value add" I perceive from your platform is just the "CI/CD as code", a bit like, say, Pulumi vs Terraform.
But I don't clearly see the other differences you mention (e.g. observability is nice, but it's more "sugar" on top, not a big thing).
I have the feeling that the clean handling of "state" vs "side effects" (and what it implies for caching / retries / etc.) is probably the real value here, but I fail to perceive it clearly (mostly because I probably don't have those issues in my build pipelines yet).
If you were to give a few examples / an ELI5 of this, it would probably help convert more people (e.g. I would definitely adopt a "clean by default" way of doing things if I knew it would help me down the road when new, complex-to-handle use cases inevitably pop up).
> * Don't bind yourself to some fancy new VC-financed thing that will solve CI once and for all but needs to get monetized eventually (see: earthly, dagger, etc.)
Literally from the comment at the root of this thread.
Docker has raised money, and we all use it. Dagger is by the originators of Docker; I personally feel comfortable relying on them, and they are making revenue too.
I implemented a thing such that the makefiles locally use the same podman/docker images as the CI/CD uses. Every command looks something like:
target:
$(DOCKER_PREFIX) build
When run in GitLab, the DOCKER_PREFIX is a no-op (it's literally empty due to the CI=true var), and the 'build' command (whatever it is) runs in the CI/CD Docker image. When run locally, it's effectively a `docker run -v $(pwd):$(pwd) build`.
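For anyone curious, the prefix boils down to something like this (a simplified sketch; the image name and mounts are placeholders):

    # GitLab runners export CI=true, so the prefix disappears there
    ifeq ($(CI),true)
      DOCKER_PREFIX :=
    else
      DOCKER_PREFIX := docker run --rm -v $(CURDIR):$(CURDIR) -w $(CURDIR) builder-image:latest
    endif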
It's really convenient for ensuring that if it builds locally, it can build in CI/CD.
I don't quite understand the benefit. How does running commands from the Makefile differ from running commands directly on the runner? What benefit does the Makefile bring here?
If you have your CI runner use the same commands as local dev, CI basically becomes an integration test for the dev workflow. This also solves the “broken setup instructions” problem.
I don't have a Makefile example, but I do functionally the same thing with shell scripts.
I let GitHub Actions do things like the initial environment configuration and the post-run formatting/annotation, but all of the actual work is done by my scripts.
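As a simplified illustration (not my real scripts; the path and commands here are made up for a hypothetical Go project):

    #!/usr/bin/env bash
    # ci/test.sh -- nothing GitHub-specific in here, so it runs
    # identically on a laptop and inside the workflow.
    set -euo pipefail
    go vet ./...
    go test ./...

The workflow step itself is then little more than `run: ./ci/test.sh`, plus the setup and annotation bits around it.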