Hacker News | nextos's comments

I prefer X11 as well, but it has some security issues. Notably, all applications can read your input at any time. It's really hard to sandbox.

Wayland brought some irritations, including increased latency and an architecture that requires rethinking all window managers. A rewrite is not enough. Very annoying.


I will never understand why "the computer can tell what input it is receiving" has turned into an accepted threat model.

I understand that we have built a computer where our primary interface depends on running untrusted code from random remote locations, but it is absolutely incredible to me that the response to that is to fundamentally cripple basic functionality instead of fixing the actual problem.

We have chosen to live in a world where the software we run cannot be trusted to run on our computers, and we'd rather break our computers than make another choice. Absolutely baffling state of affairs.


Defense in depth. One compromised application may do a lot of harm if it has access to your keyboard inputs. Supply chain attacks are not that uncommon. While you can trust software developers, you cannot completely trust their builds.

Agreed, which is why I build everything from source, but even that is not sufficient as there's no way to audit all that code at this time or for many years to come.

The X11 keylogger problem is a serious issue. However, I do not agree with transitioning to a completely new and different system (especially one designed the way Wayland is) rather than just doing a backwards-incompatible "version 2.0 protocol" type of fix and moving on.

We could have done this in 2005 and been long past this by now. Why not? Failed leadership. Same thing with 64-bit time, and plenty of other stuff.


I have doors between rooms in my house, despite its being inhabited by members of the same family who trust each other.

And when someone violates that trust, do you then tear the house down and build one with only external doors, requiring inhabitants to walk around through the yard to move between rooms? The point of the Wayland security model is that the inhabitants of the house do not trust each other, and the architecture of the house must change to accommodate that.

I'm not impressed with the analogy. I am not confused about the goals of Wayland's security model. I am dismayed at the poor judgment elsewhere in computing that has led to its necessity.


I could have done without the "any stranger can run foreign code on my machine" bit, personally. I'm OK with doing away with Javascript forever, along with the whole "my computer just randomly downloads binary code silently in the background at intervals of its choosing" thing (aka "critical updates") that has come to be seen as a necessity.

Nobody needed constant code updates streaming to their PC in the BBS days, when it was just ANSI or RIPscrip over a TTY. With complicated HTML/XML parsers the game started changing. Then Javascript came along and threw the posterior doors wide open.


Deep SSMs, including the entire S4 to Mamba saga, are a very interesting alternative to transformers. In some of my genomics use cases, Mamba has been easier to train and scale over large context windows, compared to transformers.
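
If anyone is curious what that actually looks like, here is a minimal, purely illustrative sketch of the linear recurrence that S4-style layers are built on (Mamba additionally makes B, C, and the step size input-dependent); none of the values below come from a real model:

    import numpy as np

    # Toy single-channel diagonal state-space layer: the recurrence behind
    # S4-style deep SSMs, x_t = A_bar x_{t-1} + B_bar u_t, y_t = C x_t.
    # All parameter values are illustrative.

    def ssm_scan(u, A, B, C, dt):
        """Discretize a diagonal continuous-time SSM (zero-order hold) and run it over u."""
        A_bar = np.exp(dt * A)            # diagonal state transition
        B_bar = (A_bar - 1.0) / A * B     # ZOH-discretized input projection
        x = np.zeros_like(A)
        ys = []
        for u_t in u:                     # naive sequential scan; real models use a parallel scan
            x = A_bar * x + B_bar * u_t
            ys.append(C @ x)
        return np.array(ys)

    rng = np.random.default_rng(0)
    n = 16                                 # state size
    A = -np.abs(rng.standard_normal(n))    # stable (negative real) diagonal dynamics
    B, C = rng.standard_normal(n), rng.standard_normal(n)
    u = rng.standard_normal(4096)          # a "long context" of 4096 steps
    y = ssm_scan(u, A, B, C, dt=0.1)
    print(y.shape)                         # (4096,)

The practical appeal for long genomic contexts is that the state is a fixed-size summary, so compute and memory grow linearly with sequence length instead of quadratically as with attention.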

True, but Google Translate was already "AI". They previously used LSTMs. And before LSTMs, it used ML-like statistical machine translation.

Maybe formal methods have a chance of becoming mainstream now [1]?

This would increase the rigor of software engineering and put it on par with civil engineering.

Some niches like real-time embedded systems are already pretty much the same.

[1] https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...


I doubt it; I feel like it might improve shops that already care and are already creating with rigor. I don't think it'll raise the bar for the avg shop. However, perhaps that's just me being cynical. By real-time embedded being the same, do you mean the same in the sense that they are just as poor in quality?

> [...] the same in the sense that they are just as poor in quality?

I mean some real-time software for critical embedded systems has an incredible level of rigor, making heavy use of static analysis, model checking, and theorem proving.
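
To make the model-checking bit concrete, here is a toy, self-contained sketch in that spirit; the protocol and invariant are invented for illustration, and real projects use mature tools such as SPIN, TLA+, or CBMC rather than anything hand-rolled like this:

    from collections import deque

    # Toy explicit-state model checker: exhaustively explore the reachable states
    # of a two-process flag-based lock and check a mutual-exclusion invariant.
    # State: (pc0, pc1, flag0, flag1), with pc in {"idle", "want", "crit"}.

    def successors(state):
        pc, flags = list(state[:2]), list(state[2:])
        for i in (0, 1):
            other = 1 - i
            if pc[i] == "idle":                         # raise my flag, ask for the lock
                npc, nfl = pc[:], flags[:]
                npc[i], nfl[i] = "want", True
                yield (*npc, *nfl)
            elif pc[i] == "want" and not flags[other]:  # enter only if the other flag is down
                npc = pc[:]
                npc[i] = "crit"
                yield (*npc, *flags)
            elif pc[i] == "crit":                       # leave and lower my flag
                npc, nfl = pc[:], flags[:]
                npc[i], nfl[i] = "idle", False
                yield (*npc, *nfl)

    def check(init, invariant):
        """Breadth-first search over all reachable states; return a counterexample or None."""
        seen, queue = {init}, deque([init])
        while queue:
            s = queue.popleft()
            if not invariant(s):
                return s
            for t in successors(s):
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
        return None

    def mutual_exclusion(s):
        return not (s[0] == "crit" and s[1] == "crit")

    print(check(("idle", "idle", False, False), mutual_exclusion))  # None -> invariant holds

Real tools explore vastly larger state spaces symbolically, but the core idea, exhaustive exploration of reachable states against a stated property, is the same.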


Noted, perhaps I'll investigate as a possible next career step. Thanks!

Medicaid holds previous addresses, household details, previous diagnoses, ethnicity, etc.

It is quite trivial to infer whether someone is likely to have immigrated to the US due to obvious gaps in their records or in their relatives' records.

This is what Palantir does, essentially. Simple inference and information fusion from different sources.


Ironically, the Danish Government is a heavy user of Palantir systems, including creepy predictive policing solutions.

I would be keen to know if citizen data is being handled correctly, following GDPR/LED.

Given previous Danish client-state-like cooperation with the NSA to spy on other EU countries, I can imagine the answer.


> including creepy predictive policing solutions.

Minority Report coming right up.


Which predictive policing solution from Palantir are they using?

Their local Palantir implementation is called POL-INTEL. This thesis presents a good critical overview [1].

[1] https://en.itu.dk/-/media/EN/Research/PhD-Programme/PhD-defe...


GDPR has carveouts for governments and law enforcement so they can do whatever for those purposes.

The framework is the one I referred to (EU LED). In Denmark, LED is implemented in the Danish Law Enforcement Act.

However, LED has some purpose limitations, which critics argue the Danish Law Enforcement Act has bypassed. Some are trying to challenge it.


I love how powerful the GDPR marketing was: it made people forget that there are massive exceptions for crime prevention and for the government.

Linux comes in a wide range of distributions, so it is hard to make universal claims. One area where security defaults need to improve is sandboxing.

If security is a major concern, bwrap or firejail can easily provide that extra sandboxing.

NixOS and GuixSD make it quite trivial to sandbox applications in a declarative fashion using firejail.

An alternative is to use e.g. Flatpak, which gets you sandboxing for free via bwrap. But I am not a fan of application images that bypass package management.
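
For a flavour of what Flatpak builds on, here is a minimal sketch of launching a command under bwrap. The mounts and flags are illustrative choices, not a hardened profile, and the paths may need adjusting per distribution:

    import subprocess

    # Minimal bubblewrap (bwrap) sandbox sketch: read-only system dirs, fresh /tmp,
    # no $HOME, and all namespaces (including network) unshared.
    # Illustrative starting point only, not a hardened profile.

    def run_sandboxed(cmd):
        bwrap = [
            "bwrap",
            "--ro-bind", "/usr", "/usr",       # read-only system directories
            "--ro-bind", "/etc", "/etc",
            "--symlink", "usr/bin", "/bin",    # typical merged-/usr layout; adjust per distro
            "--symlink", "usr/lib", "/lib",
            "--symlink", "usr/lib64", "/lib64",
            "--proc", "/proc",
            "--dev", "/dev",
            "--tmpfs", "/tmp",
            "--unshare-all",                   # new PID, network, IPC, ... namespaces
            "--die-with-parent",
        ]
        return subprocess.run(bwrap + list(cmd), check=True)

    run_sandboxed(["ls", "/"])                 # the sandboxed process sees no home directory

Flatpak essentially generates a much longer bwrap command line of this kind per application, plus portals for controlled access to files and devices; firejail achieves similar isolation with its own profile format.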


I heard about the sandboxing being especially sketchy; thanks for the pointer in the right direction for mitigation.

Additionally, any thoughts on snap? (presently looking into Flatpak)


Functionally, it is very similar to Flatpak. The main reasons people dislike it (setting aside objections to sandboxed applications in general) are that Canonical controls the store, that the store is not open source, and that snap is very difficult to remove on Ubuntu setups (a major pain point for people who need an unsandboxed Firefox).

I wouldn't use snap or Flatpak; I'd just sandbox using bwrap or firejail. They are really easy to use.

Containers also provide good development sandboxing. With distrobox you can run many distributions inside your own within a clean and isolated environment.


Just use flatpak. Let's not steer newbies towards barely maintained, untested, bespoke solutions.

Flatpak itself uses bwrap; it's not esoteric folklore software. The OP asked a serious question, and they're entitled to a serious answer.

True. The UK is somewhere in the middle and worth considering.

London has decent investors, and while they are not as risk-taking as SF ones, they are still quite reasonable.

EU investors, with some tiny exceptions, are terrible in that regard.


Regulators are generally really conservative. Spiegelhalter et al. already wrote a fantastic textbook on Bayesian methods for trial analysis back in 2004. It is a great synthesis, and used by statisticians from other fields. I have seen it quoted in e.g. DeepMind presentations.

Bayesian methods enable using prior information and fancy adaptive trial designs, which have the potential to make drug development much cheaper. It's also easier to factor in utility functions and look at cost-benefit trade-offs. But things move slowly.

They are used in some trials, but they are not the norm, and they require rowing against the current. This is actually a great niche for a startup: leveraging prior knowledge to make target discovery, pre-clinical work, and clinical trials more adaptive and efficient.
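
As a toy illustration of the prior-information part, here is the simplest possible conjugate Beta-Binomial case; all numbers are invented:

    from scipy import stats

    # Toy Beta-Binomial sketch: an informative prior (say, encoding ~20 historical
    # patients with a ~30% response rate) combined with new trial data.
    prior_a, prior_b = 6, 14          # Beta(6, 14) prior on the response rate
    responders, n = 14, 30            # new trial: 14 responders out of 30 patients

    posterior = stats.beta(prior_a + responders, prior_b + n - responders)
    print(f"posterior mean: {posterior.mean():.3f}")
    print(f"95% credible interval: {posterior.ppf([0.025, 0.975]).round(3)}")
    print(f"P(response rate > 0.35): {1 - posterior.cdf(0.35):.3f}")

An adaptive design then layers pre-specified decision rules on top of posteriors like this one, updating them at planned interim looks.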

Journals are also conservative. But Bayesian methods are not that niche anymore. Even mainstream journals such as Nature or Nature Genetics include Bayesian-specific items in their standard submission checklists [1]. For example, they require you to indicate prior choice and MCMC parameters.

[1] https://www.nature.com/documents/nr-reporting-summary-flat.p...


Bayesian methods are incredibly canonical in most fields I've been involved with (cosmology is one of the most beautiful paradises for someone looking for maybe the coolest club of Bayesian applications). I'm surprised there are still holdouts, especially in fields where the stakes are so high. There are also plenty of blog articles and classroom lessons about how frequentist trial designs kill people: if you are not allowed to deviate from your experiment design, but you already have enough evidence to form a strong belief about which treatment is better, is that unethical? Maybe the reality is a bit less simplistic, but I've seen many instantiations of that argument around.
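
A toy version of that interim question, with invented counts and flat priors:

    import numpy as np

    # Toy interim look: with the (invented) data so far, how probable is it that
    # treatment A has a higher response rate than treatment B?
    rng = np.random.default_rng(0)
    a_success, a_n = 18, 40            # arm A: 18/40 responders so far
    b_success, b_n = 9, 40             # arm B: 9/40 responders so far

    # Posterior draws under flat Beta(1, 1) priors.
    theta_a = rng.beta(1 + a_success, 1 + a_n - a_success, size=100_000)
    theta_b = rng.beta(1 + b_success, 1 + b_n - b_success, size=100_000)

    print(f"P(A better than B | data so far) = {(theta_a > theta_b).mean():.3f}")
    # A prespecified adaptive rule might stop or rebalance enrolment once this
    # probability crosses an agreed threshold, rather than waiting for a fixed n.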


A classic way to bridge the gap is to put a great academic brand on your CV. For example, you could work as a research assistant in CS for a famous university (e.g. Cambridge, Imperial, ETH).

Since the salaries they offer are low, the competition won't be so intense, and they will offer support to relocate. Once you have a foot in the door, you can apply to great industry jobs.

A more elaborate plan would be to obtain a PhD at one of those institutions, but that is quite time-consuming and the benefits might not offset the costs.


That sounds like solid advice!! Thanks for pointing it out!! What if they reject me there too? What is plan B in this situation?


Well, you have many options. Given that you seem to have a good publication record, I'd expect a couple of offers after making 5 or 10 applications.

If you are a good programmer, avoiding areas that are too hot (e.g. ML) and focusing on things that can make best use of your skills (e.g. compilers, verification) could be a good idea.

Research assistant positions are also a great backdoor to PhD offers, in case you are interested in that.

