"The capability-based approach is interesting, but trusting developers to declare all the capabilities they need seems brittle. Modern OS-level sandboxing or containerisation provides strong isolation without relying on each library to opt in, which may be a more robust alternative."
Fascinating vulnerability. It shows how browser integrations can create unexpected attack surfaces. I’d love to see more discussion around hardening these kinds of local browser modules.
What worked great for me for speeding up CI was using Cachix + namespace.so. This works so well I moved all my personal and our company CI to GitHub Actions.
With "it works so well", I mean:
* CI is cheap because namespace.so runners are faster & cheaper than GitHub Actions
* the UX is exactly the same as using vanilla GitHub Actions, namespace.so's runner is a drop-in replacement
* the CI starts up very fast because `/nix/store` is stored on namespace.so's "Cached Volumes" which are like virtual disks attached to the CI runner, so there is nothing to download to prime the cache, Nix can just start running immediately
* https://github.com/cachix/cachix-action uploads whatever Nix builds to Cachix in the background. From that point on, if any of our devs pulls the latest code, they don't have to build any derivations locally and lose time waiting; everything is pulled automatically from Cachix to their machines, and they can get right to work. And if I have to purge the Cache Volume for some reason, nothing needs to be rebuilt, just pulled from Cachix.
It's taken a few years to get this dialed in, but now that it works, it's sooo good!
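For reference, a minimal sketch of what such a workflow can look like. The runner label and the Cachix cache name below are placeholders, not values from the thread; check Namespace's and Cachix's docs for the exact labels and action versions for your setup:

```yaml
# Hypothetical CI workflow combining a namespace.so runner with cachix-action.
name: CI
on: [push]
jobs:
  build:
    # Namespace runners are a drop-in replacement: only the runs-on label
    # changes. This label is illustrative; use the one for your profile.
    runs-on: nscloud-ubuntu-22.04-amd64-4x16
    steps:
      - uses: actions/checkout@v4
      - uses: cachix/install-nix-action@v27
      - uses: cachix/cachix-action@v15
        with:
          name: my-cache  # placeholder: your Cachix cache name
          authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
      # With /nix/store on a Cached Volume, this starts building immediately;
      # cachix-action pushes the resulting store paths in the background.
      - run: nix build
```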
IIUC Magic Nix Cache uses the GitHub Actions Cache, which means waiting for the cache contents to be loaded onto the runner instance. That can never be as fast as what namespace.so does: mounting a disk image that already contains /nix/store.
I'm going to have to look into this mounting option. I wanted to try making volumes very similar to this for reusing existing /target directories for internal company Rust builds. Sounds like Namespace could make this very easy to try out.
Just need to figure out what the cache size limitations are...
To be clear, I would not expect it to be as fast! But I do wonder what the percentage drop-off would be and, if small enough, if that would make staying on Actions more compelling than a bespoke solution.
I've seen the following multiple times in the past:
1. something changed in test layer setup that made all tests slower
2. something changed in application code that made all tests slower (!!)
And most of the time, they were avoidable and were fixed once we realized what had happened.
Unit tests tend to get slower with time, wasting a lot of developer hours. BlueRacer helps you keep them fast by posting a unit test performance summary on every Pull Request.
Yes, but the limitation of Twilio VoIP numbers is a show-stopper. I can already auto-forward Twilio and Google Voice SMS messages to a team/distribution list. Getting real carrier-issued numbers to do this is the larger challenge.
Which is exactly what the device at the link does. It uses a real SIM card to receive 2FA codes and then forwards them to wherever you configure.
Respectfully, this point is not clear on your website -- I thought the reference to Twilio was that you are using them for getting a number. I'm more interested now.
The trick is in knowing where to draw the line. We definitely felt we had "some" traction. We had hundreds of people sign up for our newsletter. We had a couple of users who absolutely raved about the product (or, as it turned out later, not about the product specifically, but about the support we were providing).
We asked ourselves many times: is this "too slow" or are we just being impatient?
Additionally, there is a huge difference in getting free users and getting paid users.
We tried with a very low-price plan and got a lot of interest & signups. The moment we raised the prices a bit to cover hosting costs, the interest went away.
> We tried with a very low-price plan and got a lot of interest & signups. The moment we raised the prices a bit to cover hosting costs, the interest went away.
I have to correct my partner here as his memory is a bit fuzzy. :) We tried a low-tier price ($15/m vs. the regular $49/m), but it did not make any significant difference in signups.
We targeted the high end of the WordPress market, where we competed against Kinsta, WP Engine, Flywheel, etc. The problem there was that they were "good enough" (and in some ways much better), as I mention in the blog post.
At the same time, at the low end, SiteGround had prices below $5/m, so we couldn't compete in that segment, as we were paying more than that to Google Cloud.
Did you consider trying a debt/capital backed growth strategy? Where you take on funding and use it to subsidize introductory pricing to gain market share ahead of raising prices? cf. Spotify's free 3 month trial, AWS free tier, food delivery app shenanigans.
I feel like this strategy often just delays and magnifies a failure, but if you really believe your product is better than similarly priced products and the hurdle is getting customers to switch, then offering aggressive price incentives to switch and later ratcheting prices up to parity would probably work pretty well.
We never wanted to go the funding route so this was never considered. While it might have worked, it would prolong the whole thing and make it much, much more stressful.
It's "zero trust", i.e. no one from Pareto Security, nor anyone from your company, can access or track any of your Macs.
The only thing the local agent running on a Mac does is send a list of failing checks every now and then. It does not allow any centralized pushing of configuration, no tracking, no remote access, nada.