Congrats on building your custom delivery platform. We understand your pain ;-).
Any specific pain you want to share or feature you are still missing in your solution?
We’re happy to help you migrate to Distr if you decide you no longer want to maintain your solution or if you’d like to benefit from regular updates and continuous improvements.
Distr SaaS is configured for USD only (via Stripe) for simplicity, but EUR payments are definitely possible (via Stripe, SEPA, etc.).
It's all Bambi legs at the moment... I wonder if we'll ever get past this stage.
Moving it all to a 'real' product is an option before it gets too serious. I'll bring this up with the group and let's see what happens.
Thanks for sticking around! Yes indeed, Distr already has a lot of features and can replace many other tools in your deployment stack (e.g., CI, log collection, alerts, licensing, secrets, OCI registry, customer & user management).
Users definitely also use us for internal deployments, but there are some trade-offs you’ll need to make compared to the zero-config setup you get when deploying your Next.js app to Vercel on the one hand, and the hardcore customization you get when running your own GitOps/Terraform setup on the other.
Is there any specific feature you’re missing, or is it just that our website doesn’t really highlight that use case?
Sure, happy to follow up with a detailed comparison.
TL;DR: Octopus Deploy has a strong focus on CD, providing a cloud-based framework to push your software to multiple targets. Distr also supports directly and continuously deploying your software to various deployment targets, but it additionally supports the pull approach, where the customer fully self-manages their infrastructure and simply fetches the application or artifacts from Distr (see the sketch below).
Octopus Deploy also offers deep Argo CD integration (after they acquired Codefresh). On the other hand, Distr is completely open source and can be self-hosted if desired.
If you’re interested in specific features, I’m happy to go into more detail.
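To make the pull approach concrete, here's a rough sketch of what such an agent loop can look like (illustrative only; the endpoint, payload, and polling interval are made up for the example, not our actual API):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"time"
)

// DesiredState is a made-up payload describing what the agent should run.
type DesiredState struct {
	Version     string `json:"version"`
	ComposeFile string `json:"composeFile"` // rendered docker-compose.yaml
}

func fetchDesiredState(url string) (*DesiredState, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var s DesiredState
	if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
		return nil, err
	}
	return &s, nil
}

// apply writes the compose file and lets Docker Compose perform the rollover.
func apply(s *DesiredState) error {
	if err := os.WriteFile("docker-compose.yaml", []byte(s.ComposeFile), 0o644); err != nil {
		return err
	}
	fmt.Println("rolling over to", s.Version)
	return exec.Command("docker", "compose", "up", "-d").Run()
}

func main() {
	current := ""
	for {
		state, err := fetchDesiredState("https://hub.example.com/api/agent/desired-state")
		if err == nil && state.Version != current {
			if apply(state) == nil {
				current = state.Version
			}
		}
		// If polling fails, the running deployment is untouched; we just retry.
		time.Sleep(30 * time.Second)
	}
}
```

The point is that the agent only needs outbound polling; nothing has to reach into the customer's infrastructure from the outside.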
I feel you, but a huge percentage of recently funded companies are in the AI space. Software distribution for them is even more complex due to all the moving parts, and we want to make sure these companies know that our solution is a great fit for them.
Sure, several of our customers that distribute applications with a machine learning/AI component also need to distribute their models. They can use our OCI registry to distribute large images with huge layers. We specifically reworked our registry implementation to store in-transit blobs on disk to save memory, ensuring the application doesn’t run out of memory [1].
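On the client side this is just a standard OCI distribution push. A simplified Go sketch (registry host, repository, and file name are placeholders; auth, chunked uploads, and the manifest upload are omitted, and it assumes the returned Location is an absolute URL without query parameters):

```go
// Simplified sketch of pushing a large model file as an OCI blob by
// streaming it from disk, so the file is never held fully in memory.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"net/http"
	"os"
)

func pushBlob(registry, repo, path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	// Digest the file by streaming it once.
	h := sha256.New()
	size, err := io.Copy(h, f)
	if err != nil {
		return err
	}
	digest := fmt.Sprintf("sha256:%x", h.Sum(nil))

	// Start an upload session (OCI distribution API).
	resp, err := http.Post(fmt.Sprintf("https://%s/v2/%s/blobs/uploads/", registry, repo), "", nil)
	if err != nil {
		return err
	}
	resp.Body.Close()
	loc := resp.Header.Get("Location") // assumed absolute, without query parameters

	// Complete the upload with a single streamed PUT of the file.
	if _, err := f.Seek(0, io.SeekStart); err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPut, loc+"?digest="+digest, f)
	if err != nil {
		return err
	}
	req.ContentLength = size
	req.Header.Set("Content-Type", "application/octet-stream")
	put, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer put.Body.Close()
	if put.StatusCode != http.StatusCreated {
		return fmt.Errorf("unexpected status: %s", put.Status)
	}
	return nil
}

func main() {
	// Placeholder registry, repository, and file names.
	if err := pushBlob("registry.example.com", "acme/vision-model", "model.onnx"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```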
Is registry OOM protection the only advantage your registry has for large layers? Robotics has a need for Docker tooling that handles large layers/images gracefully. Even if you've done the "right" thing and sideloaded your ML models with some other management system, CUDA layers and such are gigantic.
Edit: looking at this, it's very adjacent to some problems w/ robotics deployments. Fleet management, edge deployment, key management. Neat.
I'd be curious about the multi-artifact support. Can I declare a manifest that binds together multiple services (or a service and an ML model?) Do you support ML models as an artifact?
Feel free to reach out via our website[1]. Distr does not require an internet connection to keep your application running. Update commands are fetched directly by the agent and do not require any special connectivity.
Updates are pulled before the rollover to a new version is performed, so a poor internet connection may only affect the download speed of new updates. Distr is designed to operate even when no connection is available, or when connectivity is only allowed in short time slots.
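As a rough illustration of that ordering (not our actual agent code; the image name and the use of the Docker CLI are just for the example):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// updateTo stages the new version fully before touching the running
// deployment: if the pull fails (e.g. the connection drops), the current
// version keeps serving and the update is simply retried later.
func updateTo(image string) error {
	// 1. Pull the new image while the old version keeps running.
	if out, err := exec.Command("docker", "pull", image).CombinedOutput(); err != nil {
		return fmt.Errorf("staging failed, keeping current version: %s", out)
	}
	// 2. Only after the download has succeeded, perform the rollover.
	if out, err := exec.Command("docker", "compose", "up", "-d").CombinedOutput(); err != nil {
		return fmt.Errorf("rollover failed: %s", out)
	}
	return nil
}

func main() {
	// Placeholder image reference.
	if err := updateTo("registry.example.com/acme/app:v1.2.3"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```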
SSO (Google, Microsoft, GitHub) is available on all tiers. Custom OIDC provider support is even available in our ~MIT~ (Apache2) licensed community edition if you self-host Distr.
Even TOTP MFA is available for all users and part of the community edition.
We develop one or more apps that we deploy on-prem. An app for us is a git repo with a docker compose file. On-prem is either a Linux server or a Linux vm that we have ssh access to (normally via Wireguard vpn).
For app updates, we use ansible to ssh into machines, run pre-deployment scripts, pull the git repo and docker images, restart containers, and run post-deployment scripts.
It could be better, but it works for us.
The biggest bottleneck we have now is communication with customers, scheduling of updates, letting them know of breaking changes or new features, that kind of stuff.
The apps are provided "fully managed". They don't know and don't care about the details I just described, but they do need assurances that everything is done "properly".
What we think would help us a lot is a way to easily let them know of new releases of any apps they have installed, let them read release notes, docs, and be able to either deploy on-demand or schedule a deployment at a certain time.
Although having fewer things for us to do is nice, what is crucial is to oversee deployments and make sure they are successful (and intervene if not).
We already have a customer portal where you can display any information you want for your customers. We also provide container and deployment logs, as well as alerts in the platform, so you can immediately see if an update failed and what went wrong. Release notes are already on our roadmap, and a lightweight issue tracker has also been requested.
Scheduled updates are currently not on the roadmap, but we’d be happy to scope that feature together and add it.
You can use it for IoT use cases for sure. Although our software is compatible with Windows, it does not explicitly perform operating system management or the tasks that you typically need to handle in an IoT environment.
Thanks. If you are referring to how we handle large OCI images for our OCI-compatible container registry, we create a temporary volume and stream/cache the layers there before streaming them to S3-compatible storage. This avoids keeping large layers in memory, which previously led to memory exhaustion.
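The pattern is roughly the following (an illustrative sketch, not our actual handler; uploadToS3 stands in for the real S3-compatible client and most error handling and routing is trimmed):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// handleBlobUpload spools an incoming layer to a temp file on the cache
// volume instead of buffering it in RAM, digesting it on the way through,
// and only then streams the cached file to object storage.
func handleBlobUpload(w http.ResponseWriter, r *http.Request) {
	tmp, err := os.CreateTemp("", "layer-*")
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer os.Remove(tmp.Name())
	defer tmp.Close()

	// Request body -> disk, hashed as it passes through; memory usage
	// stays flat regardless of layer size.
	h := sha256.New()
	size, err := io.Copy(io.MultiWriter(tmp, h), r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	digest := fmt.Sprintf("sha256:%x", h.Sum(nil))

	// Rewind and stream the cached file to S3-compatible storage.
	if _, err := tmp.Seek(0, io.SeekStart); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	if err := uploadToS3(digest, size, tmp); err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	w.Header().Set("Docker-Content-Digest", digest)
	w.WriteHeader(http.StatusCreated)
}

// uploadToS3 stands in for the real S3-compatible client (e.g. a multipart upload).
func uploadToS3(digest string, size int64, body io.Reader) error {
	log.Printf("uploading %s (%d bytes)", digest, size)
	_, err := io.Copy(io.Discard, body)
	return err
}

func main() {
	// Greatly simplified routing; a real registry dispatches per spec endpoint.
	http.HandleFunc("/v2/", handleBlobUpload)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```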
We are actually building in a similar space with Distr [2] and are happy to jump on a quick call.
[1] https://dokku.com/
[2] https://distr.sh/docs/