Hacker News: solatic's comments

This is how you handle it as an individual developer, but in a corporate environment things get real difficult, real fast. You need to set up your VMs and Git host to only trust certificates signed by an SSH certificate authority, and you need to work with users to submit the public key from the hardware-backed key to IT (controlling the CA) to get the public key signed and a certificate issued. Establishing trust when dealing with remote workers is hard unless you have both the budget and leadership patience to pay for overnight shipping, and even then, most people don't have access to tamper-proof packaging. Furthermore, for SSH CA support, GitHub requires Enterprise Cloud, GitLab requires Premium, and self-hosted instances are not supported.

Would love to hear more from people getting this successfully set up at scale in corporate environments. I've seen big companies with lots of InfoSec talent not even attempt this.


I can't speak to actually setting it up, but where I work we have an IT-provided yubikey ssh-agent that handles getting all that stuff set up, and we just paste the public key from our individual yubikeys into our authorized SSH keys on our on-prem-hosted Bitbucket server. However, almost everyone I know quickly gets sick of touching the yubikey for every git remote operation and just generates their own local SSH key to use for git, since doing so is not forbidden. It's definitely not High Security, but since our git is on-prem and can only be accessed from within the corporate VPN, the risks are probably lower than if we were using something shared on the public internet.

The obvious solution is an ssh-agent integration that caches the touch-derived key for up to N hours or until the workstation is locked (as a proxy for user-is-away event), AND integrates with secure desktop (à la UAC) to securely show a software-only confirmation prompt/dialog for subsequent pushes within the timeout window.

(Tbh, a secure-desktop-integrated confirmation dialog would solve most issues that needed a hardware key to begin with.)
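The stock OpenSSH agent already approximates parts of that design, for what it's worth: `ssh-add -t` caps how long a key stays cached, and `ssh-add -c` flags the key so every use requires a confirmation prompt via the askpass program (no secure-desktop integration, though). A minimal sketch, using a throwaway key purely for demonstration:

```shell
# Approximating the proposed policy with the stock OpenSSH agent.
#   -t: drop the key from the agent after 3600 seconds
#   -c: require a per-use confirmation prompt (shown via ssh-askpass)
eval "$(ssh-agent -s)"
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$keydir/demo_key" -q
ssh-add -t 3600 -c "$keydir/demo_key"
ssh-add -l   # key is listed until the lifetime expires or the agent exits
```

This doesn't cover the "until the workstation is locked" part; you'd need something watching lock events that calls `ssh-add -D` to flush the agent.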


> almost everyone I know quickly gets sick of touching the yubikey for every git remote operation and just generates their own local SSH key to use for git since doing so is not forbidden

Yes, that's the exact problem at hand. If you generate your own local SSH key, the private key sits on the disk, and it can be stolen by malware (see article).

I'm asking how people set up the controls such that only hardware-based keys are signed by the CA.
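For concreteness, the signing step being gated here looks roughly like the sketch below (all filenames, identities, and the validity window are illustrative). One control worth noting: FIDO-backed OpenSSH keys carry distinct `sk-` key types (e.g. `sk-ssh-ed25519@openssh.com`), so CA tooling can refuse to sign any submitted public key that isn't an `sk-` type - though an `sk-` type proves a FIDO authenticator was used, not strictly a hardware one.

```shell
# Sketch of the SSH CA signing flow under discussion.
workdir=$(mktemp -d)
# CA keypair (held by IT in practice, ideally itself hardware-protected):
ssh-keygen -t ed25519 -N '' -f "$workdir/ssh_ca" -q
# User keypair; the hardware-backed variant would be `-t ed25519-sk`
# (FIDO2), which can't be generated headlessly in this sketch:
ssh-keygen -t ed25519 -N '' -f "$workdir/user_key" -q
# A type-based gate would inspect this first field and reject non-sk keys:
awk '{print $1}' "$workdir/user_key.pub"
# CA issues a short-lived certificate for principal "alice":
ssh-keygen -s "$workdir/ssh_ca" -I alice@example.com -n alice \
  -V +15m "$workdir/user_key.pub"
# Inspect the issued certificate:
ssh-keygen -L -f "$workdir/user_key-cert.pub"
```

To actually prove hardware residency (rather than just a FIDO key type), the usual approach is collecting the token's attestation data at enrollment time, which YubiKey-style tokens can provide.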


If you already have an SSH CA, why not just issue ephemeral certs lasting for several seconds or minutes? What risk would be addressed by adding hardware keys into the mix?

How do you prevent malware running on the pwned laptop from asking for an ephemeral cert to be issued? How do you know a human being is in the loop? Usually ephemeral sessions are up to 15 minutes (also to deal with misaligned clocks and unhappy users) - plenty of time for malware to ship the cert back to a command-and-control server.

This is the key advantage of hardware keys, the fact that the physical press is required prevents the keys from being exfiltrated from the machine by malware.


> How do you prevent malware running on the pwned laptop from asking for an ephemeral cert to be issued?

If you have malware capable of code execution, restricting the ability to issue one command is not going to be a meaningful control, especially with something like a physical touch which most users are just conditioned to accept, or can be trivially phished into accepting.

> plenty of time for malware to ship the cert back to a command-and-control server.

If your infrastructure cannot distinguish legitimate traffic, or you do not have a defensible network perimeter, again a physical touch is not going to be meaningful; it is not the panacea you are looking for.


I'd be phished in a heartbeat. I have to tap my key like 10 times every morning and then several more times throughout the day due to random logouts. It could be my IDE, a broken SSH connection, or an internal site that randomly decides to request it again, and of course the popup gives no indication of where the request came from. It's ridiculous.

I think things would be more secure with fewer prompts, because I wouldn't be conditioned to just tap every time it pops up.


> This is the key advantage of hardware keys, the fact that the physical press is required prevents the keys from being exfiltrated from the machine by malware.

Secure elements prevent exfiltration. Touch requirements prevent on-device reuse by local malware.


If you think too hard about this, you come back around to Alan Kay's quote about how people who are really serious about software should build their own hardware. Web applications - and, in general, loading pretty much anything over the network - are a horrible, no-good, really bad user experience, and always will be. The only way to really respect the user is with native applications that are local-first, and if you take that really far, you build (at the very least) peripherals to make it even better.

The number of companies that have this much respect for the user is vanishingly small.


>> The number of companies that have this much respect for the user is vanishingly small.

I think companies shifted to online apps because, #1, it solved the copy-protection problem. FOSS apps are in no hurry to become centralized because they don't care about that issue.

Local apps and data are a huge benefit of FOSS and I think every app website should at least mention that.

"Local app. No ads. You own your data."


Another important reason to move to online applications is that you can change the terms of the deal at any time. This may sound more nefarious than it needs to be: it just means you do not have to commit fully to your licensing terms before the first deal is made, which is tempting for just about anyone.

Software I don’t have to install at all “respects me” the most.

Native software being an optimum is mostly an engineer fantasy that comes from imagining what you can build.

In reality that means having to install software like Meta’s WhatsApp, Zoom, and other crap I’d rather run in a browser tab.

I want very little software running natively on my machine.


Web apps are great until you want to revert to an older version from before they became actively user-hostile or continue to use them past EoL or company demise.

In contrast, as long as you have a native binary, one way or another you can make the thing run and nobody can stop you.


Yes, amen. The more invasive and abusive software gets, the less I want it running on my machine natively. Native installed applications for me are now limited only to apps I trust, and even those need to have a reason to be native apps rather than web apps to get a place in my app drawer.

Your browser is acting like a condom, in that respect (pun not intended).

Yes, there are many cases when condoms are indicative of respect between parties. But a great many people would disagree that the best, most respectful relationships involve condoms.

> Meta

Does not sell or operate respectful software. I will agree with you that it's best to run it in a browser (or similar sandbox).


Desktop operating systems really dropped the ball on protecting us from the software we run. Even mobile OSs are so-so. So the browser is the only protection we reasonably have.

I think this is sad.


You mean you'd rather run unverified scripts using a good order of magnitude more resources, with a slower experience, and have an entire sandboxing contraption to keep said unverified scripts from doing anything to your machine…

I know the browser is convenient, but frankly, it's been a horror show of resource usage, vulnerabilities, and pathetic performance.


The #1 reason the web experience universally sucks today is because companies add an absurd amount of third-party code on their pages for tracking, advertisement, spying on you or whatever non-essential purpose. That, plus an excessive/unnecessary amount of visual decoration.

The idea that somehow those companies would respect your privacy were they running a native app is extremely naive.

We can already see this problem on video games, where copy protection became resource-heavy enough to cause performance issues.


Yes, because users don't appreciate this enough to pay for the time this takes.

You have to remember that companies are kind of fungible in the sense that founders can close old companies and start new ones to walk away from bankruptcies in the old companies. When there's a bust and a lot of companies close up shop, because data centers were overbuilt, there's going to be a lot of GPUs being sold at firesale prices - imagine chips sold at $300k today being sold for $3k tomorrow to recoup a penny on the dollar. There's going to be a business model for someone buying those chips at $3k, then offering subscription prices at little more than the cost of electricity to keep the dumped GPUs running somewhere.

I do wonder how usable the hardware will be once the creditors are trying to sell it - as far as I can tell, the current trend is toward ever more custom, no-matter-the-cost, power-inefficient hardware.

The situation might be a lot different than people selling ex-crypto-mining GPUs to gamers. There might be a lot of effective scrap that is no longer usable once it is no longer part of some company's technological fever dream.


(I support licensing)

Licensing never happened because its effect is to reduce the size of the labor pool and restrict what the labor pool can do as individuals. Barring the very recent aberration of a glut of new grads and not enough junior positions, even without licensing there haven't been enough engineers to fill all the open senior-level positions. Licensure would make that problem worse.

A licensure board would also get embroiled in political disputes over what is genuinely ethical. Python is a performance nightmare, should engineers be permitted to pick a language with known poor performance characteristics? Electron is a RAM hog and battery-killer, is it an ethical choice? So how could any Python or Electron shop support licensure?


> there haven't been enough engineers to fill all the open senior-level positions. Licensure would make that problem worse

The point of the licensing is to make sure they can do the job; hiring people without the licensing means you're hiring amateurs. It's not a good solution. You need more job-training programs to fix the existing lack of engineers, which still works with licensing. There's no quick fix for a lack of qualified expertise, other than H-1Bs.

Sure, a board can make things more complicated, but it's because they're trying to improve things. This is a positive.

> should engineers be permitted to pick a language with known poor performance characteristics?

In electrical work, you are restricted in what parts you can use for what work, based on their application/use-case. If it's touching a house or grid, it needs to be UL-listed (mandatory testing). If it's outdoor, it needs to be NEMA-3 (weather-resistant) or better. If it's direct burial, it needs to be UF-B (resists common outdoor issues) or better. More than 3 conductors in a raceway requires derating the conductors. You can't join dissimilar metals (aluminum, copper) without some kind of tin-plated splicer (with oxidation treatment) to prevent corrosion.

I'm sure when these standards were introduced, electricians were annoyed that they were "being limited in choice". Today we take it for granted. Our safety and stability, both as individuals and as a society, is more important than the personal preferences of engineers.


So your backups are written to the same disk?

> datacenter goes up in flames

> 3-2-1 backups: 3 copies on 2 different types of media with at least 1 copy off-site. No off-site copy.

Whoops!


> even the metrics that RDS gives you for free make the thing pay for itself, IMO. The thought of setting up grafana to monitor a new database makes me sweat.

CloudNativePG actually gives you really nice dashboards out of the box, for free. See: https://github.com/cloudnative-pg/grafana-dashboards


Sure, and I can install something to do RDS performance insights without querying PG stats, and something to schedule backups to another region, and something to aggregate the logs, and then I have N more things that can break.

> few days' work

But initial setup is maybe 10% of the story. The day 2 operations of monitoring, backups, scaling, and failover still needs to happen, and it still requires expertise.

If you bring that expertise in house, it costs much more than 10x ($3/day -> $30/day = $10,950/year).

If you get the expertise from experts who are juggling you along with a lot of other clients, you get something like PlanetScale or CrunchyData, which are also significantly more expensive.


> monitoring

Most monitoring solutions support Postgres and don't actually care where your DB is hosted. Of course this only applies if someone was actually looking at the metrics to begin with.

> backups

Plenty of options to choose from depending on your recovery time objective. From scheduled pg_dumps to WAL shipping to disk snapshots and a combination of them at any schedule you desire. Just ship them to your favorite blob storage provider and call it a day.
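To make the pg_dump end of that concrete, here's a minimal nightly-backup sketch. The database name, bucket, and retention count are all assumptions for illustration; WAL shipping (e.g. via pgBackRest or WAL-G) would give you a tighter recovery point than daily dumps.

```shell
# Hypothetical nightly backup: dump, ship off-site, prune old copies.
# Nothing runs automatically here; wire the functions into cron yourself.
backup_dir=${BACKUP_DIR:-/var/backups/pg}
stamp=$(date +%F)

dump_db() {
  # custom-format dump: compressed by pg_dump, restorable with pg_restore
  pg_dump --format=custom --file="$backup_dir/mydb-$stamp.dump" mydb
}

ship_offsite() {
  # any S3-compatible blob store works; the bucket name is illustrative
  aws s3 cp "$backup_dir/mydb-$stamp.dump" "s3://example-backups/pg/"
}

prune_local() {
  # keep only the 14 newest local dumps
  ls -t "$backup_dir"/mydb-*.dump 2>/dev/null | tail -n +15 | xargs -r rm --
}
```

The off-site copy is the part that matters for the 3-2-1 rule mentioned elsewhere in the thread; local pruning alone leaves you one disk failure from zero backups.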

> scaling

That's the main reason I favor bare-metal infrastructure. Nothing in the cloud (at a price you can afford) can rival the performance of even a mid-range server, so scaling is effectively never an issue; if you're outgrowing that, the conversation we're having is not about getting a big DB but about using multiple DBs and sharding at the application layer.

> failover still needs to happen

Yes, get another server and use Patroni/etc. Or just accept the occasional downtime and up to 15 mins of data loss if the machine never comes back up. You'd be surprised how many businesses are perfectly fine with this. Case in point: two major clouds had hour-long downtimes recently and everyone basically forgot about it a week later.

> If you bring that expertise in house

Infrastructure should not require continuous upkeep/repair. You wouldn't buy a car that requires you to have a full-time mechanic in the passenger seat at all times. If your infrastructure requires this, you should ask for a refund and buy from someone who sells more reliable infra.

A server will run forever once set up unless hardware fails (and some hardware can be redundant with spares provisioned ahead of time to automatically take over and delay maintenance operations). You should spend a couple hours a month max on routine maintenance which can be outsourced and still beats the cloud price.

I think you're underestimating the amount of tech all around you that is essentially *nix machines that somehow just... work, despite having zero upkeep or maintenance. Modern hardware is surprisingly reliable, and most outages are caused by operator error when people are (potentially unnecessarily) messing with stuff, rather than by the hardware failing.


I'm totally with you on the core vs. context question, but you're missing the nuance here.

Postgres operations are part of the core of the business. It's not a payroll management service where you should comparison shop once the contract comes up for renewal and haggle on price. Once Postgres is the database for your core systems of record, you are not switching away from it. The closest analog is how difficult it is/was for anybody who built a business on top of an Oracle database to switch away from Oracle. But Postgres is free ^_^

The question at heart here is whether the host for Postgres is context or core. There are a lot of vendors for Postgres hosting: AWS RDS and CrunchyData and PlanetScale etc. And if you make a conscious choice to outsource this bit of context, you should be signing yearly-ish contracts with support agreements and re-evaluating every year and haggling on price. If your business works on top of a small database with not-intense access needs, and can handle downtime or maintenance windows sometimes, there's a really good argument for treating it that way.

But there's also an argument that your Postgres host is core to your business as well, because if your Postgres host screws up, your customers feel it, and it can affect your bottom line. If your Postgres host didn't react in time to your quick need for scaling, or tuning Postgres settings (that a Postgres host refuses to expose) could make a material impact on either customer experience or financial bottom-line, that is indeed core to your business. That simply isn't a factor when picking a payroll processor.


Ignoring the assumption that you will automatically have as good or better uptime than a cloud provider, I just feel like you aren't being thoughtful enough with the comparison. Like, in what world is payroll not as important as your DBMS - if you can't pay people, you don't have a business!

If your payroll processor screws up and you can't pay your employees or contractors, that can also affect your bottom line. This isn't a hypothetical - this is a real thing that happened to companies that used Rippling.

If your payroll processor screws up and you end up owing tens of thousands to ex-employees because they didn't accrue vacation days correctly, that can squeeze your business. These are real things I've seen happen.

Despite these real issues that have jammed up businesses before, rarely do people suggest moving payroll in-house. Many companies treat payroll like cloud, with no need for multi-year contracts; Gusto lets you sign up monthly with a credit card, and you can easily switch to Rippling or Paychex.

What I imagine is you are innately aware of how a DBMS can screw up, but not how complex payroll can get. So in your world view payroll is a solved problem to be outsourced, but DBMS is not.

To me, the question isn't whether or not my cloud provider is going to have perfect uptime. The assumption that you will achieve better uptime and operations than cloud is pure hubris; it's certainly possible, but there is nothing inherent about self-hosting that makes it more resilient. The question is whether your use case is differentiated enough that something like RDS doesn't make sense. If it's not, your time is better spent focused on your business - not on setting up dead-man switches to ensure your database backup cron is running.


> Like in what world is payroll not as important as your DBMS - if you can't pay people you don't have a business!

Most employees, contractors, and vendors are surprisingly forgiving of one-time screw-ups. Hell, even the employees who are most likely to care the most about a safe, reliable paycheck - those who work for the US federal government - weren't paid during the recent shutdown, and not for the first time, and still there wasn't some massive wave of resignations across the civil service. If your payroll processor screws up that badly, you fire them and switch processors.

If your DBMS isn't working, your SaaS isn't working. Your SLA starts getting fucked and your largest customers are using that SLA as reason to stop payments. Your revenue is fucked.

Don't get me wrong, having working payroll is pretty important. But it's not actually critical the way the DBMS is, and if it was, then yeah you'd see more companies run it in-house.


>Most employees, contractors, and vendors are surprisingly forgiving of one-time screw-ups.

If you are a new business, that isn't true. Your comparison to the US federal government is not apt at all - the USG is one of the longest-running, most stable organizations in the country. People will have plenty of patience for the USG, but they won't have it for your incorporated-last-month business.

Secondly, I could make the same argument for AWS. AWS has plenty of downtime - way more than the USG has shutdowns - and there has never been a massive wave of customers moving off of AWS.

Finally, as a small business, if your payroll gets fucked, your largest assets will use that to walk out the door! The second you miss payroll is the second your employees start seeing the writing on the wall; it's very hard to recover morale after that. Imagine being Uber and not paying drivers on time - they will simply drive more often for a competitor.

That said, I still see the parallels with the hypothetical "Accountant forums". The subject matter experts believe their shiny toy is the most critical to the business and the other parts aren't. Replace "US federal government" with "Amazon Web Services", and you will have your "Accountant forums" poster arguing why payroll should be done in house and SLA doesn't matter.


> But maybe the local network interface is down, maybe there's a local firewall rule blocking it,...

That's exactly why you log it as a warning. People get warned all the time about the dangers of smoking. It's important that people be warned about smoking; these warnings save lives. People should pay attention to warnings, which let them know about worrisome concerns that should be heeded. But guess what? Everyone has a story about someone who smoked until they were 90 and died in a car accident. It is not an error that somebody is smoking. Other systems will make their own bloody decisions and firewalling you off might be one of them. That is normal.

What do you think a warning means?


The argument I'm hearing you make is that Hetzner needs to license a white-labelled version to distributors. If the servers are really commoditized then why aren't the data scientists in this "AWS is too expensive for data science" market going to Hetzner?
