TransAtlToonz's comments

I don't understand why rust bindings imply a freezing (or chilling) of the ABI—surely rust is bound by roughly the same constraints C is, being fully ABI-compatible in terms of consuming and being consumed. Is this commentary on how Rust is essentially, inherently more committed to backwards compatibility, or is this commentary on the fact that two languages will necessarily bring constraints that retard the ability to make breaking changes?


Obviously the latter, which is already the point of contention that has started this entire discussion.


Can you explain why you think this? I don't understand the reasoning and it's certainly not "obvious". There's certainly no technical reason implying this, so is this just resistance to learning rust? C'mon, kernel developers can surely learn new tricks. This just seems like a defeatist attitude.

EDIT: The process overhead seems straightforwardly worth it—rust can largely preserve semantics, offers the potential to increase confidence in code, and can encourage a new generation of contributors with a faster ramp-up to writing quality code. Notably, nowhere here is a guarantee of better code quality, but presumably the existing quality-guaranteeing processes can translate fine to a roughly equivalently-capable language that offers more compile-time mechanisms for quality guarantees.


You phrased it rather well -- "increased constraints will retard the ability to make breaking changes". You are adding a second layer of abstraction that brings very little generalization, but still doubles the mental load; there's no way it doesn't put significant additional pressure on making breaking changes. The natural reaction is that there will be less such breaking changes and interfaces will ossify. One can even argue this is what has already happened here.

In addition, depending on the skill of the "binding writer", the second set of interfaces may simply be easier to use (and this is generally true, since the rust bindings are actually designed rather than evolved organically). This is yet another mental barrier. There may not even be a point to evolving one interface, or the other. Which just further contributes to splitting the project into two worlds.
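To make the "second layer" concrete: a hand-written binding is roughly a safe wrapper like the sketch below (the foo_* names are made up for illustration, not any real kernel interface). Every time the C side changes shape, this layer has to be revisited in lockstep.

    use std::os::raw::{c_int, c_void};

    // Hypothetical C interface being wrapped (illustrative names only).
    extern "C" {
        fn foo_alloc() -> *mut c_void;
        fn foo_write(handle: *mut c_void, val: c_int) -> c_int;
        fn foo_free(handle: *mut c_void);
    }

    // The "designed" interface: ownership and cleanup live in the type system.
    pub struct Foo(*mut c_void);

    impl Foo {
        pub fn new() -> Option<Foo> {
            // SAFETY: foo_alloc returns a valid handle or null.
            let p = unsafe { foo_alloc() };
            if p.is_null() { None } else { Some(Foo(p)) }
        }

        pub fn write(&mut self, val: i32) -> Result<(), i32> {
            // SAFETY: self.0 is non-null for the lifetime of Foo.
            match unsafe { foo_write(self.0, val) } {
                0 => Ok(()),
                rc => Err(rc),
            }
        }
    }

    impl Drop for Foo {
        fn drop(&mut self) {
            // SAFETY: the handle is valid and Drop runs exactly once.
            unsafe { foo_free(self.0) }
        }
    }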


> The natural reaction is that there will be less such breaking changes and interfaces will ossify. One can even argue this is what has already happened here.

I don't think I'd agree with that. Current kernel policy is that the C interfaces can evolve and change in whatever way they need to, and if that breaks Rust code, that's fine. Certainly some subsystem maintainers will want to be involved in helping fix that Rust code, or help provide direction on how the Rust side should evolve, but that's not required, and C maintainers can pick and choose when they do that, if at all.

Obviously if Rust is to become a first-class, fully-supported part of the kernel, that policy will eventually change. And yes, that will slow down changes to C interfaces. But I think suggesting that interfaces will ossify is an overreaction. The rate of change can slow to a still-acceptable level without stopping completely.

And frankly I think that when this time comes, maintainers who want to ignore Rust completely will be few and far between, and might be faced with a choice to either get on board or step down. That's difficult and uncomfortable, to be sure, but I think it's reasonable, if it comes to pass.


> You are adding a second layer of abstraction that brings very little generalization

Presumably, this is an investment in replacing code written in C. There's no way around abstraction or overhead in such a venture.

> there's no way it doesn't put significant additional pressure on making breaking changes

This is the cost of investment.

> The natural reaction is that there will be less such breaking changes and interfaces will ossify.

A) "fewer", not "less". Breaking changes are countable.

B) A slower velocity of changes does not imply ossification. Furthermore, I'm not sure this is true—the benefits of formal verification of memory-safety constraints seem as if they would naturally lead to higher long-term velocity (a concrete example of what the compiler catches is sketched below). Finally, I can't speak to the benefits of a freely-breakable kernel interface (I've never had to maintain a kernel for clients myself, thank god), but again, this seems like a worthwhile short-term investment for long-term gain.
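As one deliberately non-compiling example of those compile-time mechanisms (standard borrow-checker fare, not kernel code), rustc rejects the dangling reference below outright, where C would happily let the equivalent pointer escape:

    fn main() {
        let r;
        {
            let x = 5;
            r = &x; // error[E0597]: `x` does not live long enough
        }
        // `r` would dangle here; the program never compiles.
        println!("{r}");
    }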

> In addition, depending on the skill of the "binding writer" (and generally, since the rust bindings are actually designed rather than evolved organically), the second set of interfaces may simply be easier to use. There may not even be a point to evolving one interface, or the other. Which just further contributes to splitting the project.

Sure, this is possible. I present two questions, then: 1) what is lost if the allegedly less-stable C interface becomes less popular, and 2) is the stability, popularity, and confidence in the new interface worth it? I think it might be, but I have no clue how to reason about the politics of the Linux ABI.

I have never written stable kernel code, so I can't offer confident guidance myself. But I can say that if you put a kernel developer of genius ability in front of me, I would still trust and be more willing to engage with rust code. I cannot conceive of a C programmer skilled enough that they would not benefit from the additional tooling and magnification of ability. There seems to be some attitude that if C is abandoned, something vital is lost. I submit that what is lost may not be of technical, but rather cultural (or, eek, egoist), value. Surely we can compensate for this if it is true.

EDIT, follow-up: if an unstable, less-used interface is desirable, surely this could be solved in the long term with two rust bindings.

EDIT2: in response to an aunt comment, I am surely abusing the term "ABI". I'm using it as a loose term for compatibility of interfaces at a linker-object level.


>Presumably, this is an investment in replacing code written in C. There's no way around abstraction or overhead in such a venture.

Nobody is proposing replacing code right now. Maybe that will happen eventually, but it's off limits for now.

R4L is about new drivers. Not even kernel subsystems, just drivers, and only new ones. IIRC there is a rule against having duplicate drivers for the same hardware. I suppose it's possible to rewrite a driver in-place, but I doubt anyone plans to do that.


There is a binder driver rewrite in rust. Companies who care are certainly rewriting drivers. If there is pushback on upstreaming them, that will cause a lot of noise.


[flagged]


> Why not? That's the really juicy part of the pitch.

For now, it's because for logistical and coordination reasons, Rust code is allowed to be broken by changes to C code. If subsystems (especially important ones) get rewritten in Rust, that policy cannot hold.

> yes i get there are linux vets we need to be tender with. This shouldn't obstruct what gets committed.

Not sure why you believe that. We're not all robots. People need to work together, and pissing people off is not a way to facilitate that.

> if this is what linux conflict resolution looks like, how the hell did the community get anything done for the last thirty years?

Given that they've gotten a ton done in 30 years, I would suggest that either a) your understanding of their conflict-resolution process is wrong, or b) your assertion that this conflict-resolution process doesn't work is wrong.

I would suggest you re-check your assumptions.

> You quarter-assed this reply so I'm sure your next one's gonna be a banger.

Please don't do this here. There's no reason to act like this, and it's not constructive, productive, interesting, or useful.


[flagged]


you're trolling. please stop


> A) "fewer", not "less". Breaking changes are countable.

this just makes you look pedantic and passive aggressive


A sledgehammer doesn't need to be able to turn a screw. Perhaps states might take advantage of this, but the incompetency of toadies at technology won't impact their competency at wreaking destruction.


A lack of empirical support does not amount to empirical support for the negation. In fact, it seems like you can reasonably draw whatever conclusion you please with about equivalent (zero) evidence. Calling these "myths" seems like a bit of a stretch—perhaps "popular conception" might be more accurate.


The appropriate restrictions are relative to the size and momentum of the organization. It's easy to spend months setting up safeguards that won't return proportionally, rather than working on product development.

Of course, this involves being honest with yourself about risk and reward, and we all have implicit incentives to disregard the risk until we get burned and learn to factor that in.


I had no idea people took hackerrank as a serious signal rather than as a tool for recent graduates to track interview prep progress. Surely it has all the same issues AI does: you have no way of verifying that the person who takes the interview actually is responsible for that signal.

I don't see AI as a serious threat to the interview process unless your interview process looks a lot like hackerrank.


Your “unless” covers a huge swath of this industry, at the low end and at the high end. Excluding places that do that leaves you with what exactly? Boutique shops filled with 20 year veterans?


What do you mean by the "high" end? I would consider this sort of interview style to necessarily preclude a place from being considered a high-quality workplace. Not only is it a miserable way to interview, it's not an effective signal for engineer quality beyond rapid code-snippet production.

> Excluding places that do that leaves you with what exactly? Boutique shops filled with 20 year veterans?

We are on a VC forum—I imagine small shops focused on quality are quite common here.


“High end” was meant as shorthand for FAANG …high comp, not necessarily high tech complexity.

You are correct about the deficiencies of the whiteboard interview. It is not a sane way to hire an individual. It makes sense as a way to hire someone in the top 20% from a large unfiltered pool. So wrt high/low, that’s what FAANG companies have to do, and for many nontechnical companies they outsource this work or emulate FAANG practices for no good reason.

My point was that there are very few places that don’t do this.


Ah, makes sense.

I just abandoned the code interview altogether and ask candidates questions about process. It's a very simple workaround, but very effective. I'll admit it helps that, outside of specific problem domains, there are very few problems these days that require a high degree of technical competency to tackle.


For the more senior candidates that I interview, I've seen people who talk a great game but cannot deliver, and others who can. Having them write fizzbuzz-type code is a primitive check that has led to surprising results: 25-year industry veterans with all kinds of great projects delivered, and...they can't write a for loop? They can't declare a variable? They don't know what final/const really does? I hate asking them to do it, but I can't really stop.
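For anyone who hasn't run into it: the check really is as basic as it sounds. A fizzbuzz in Rust, for instance:

    // Print 1..100, substituting Fizz/Buzz/FizzBuzz for multiples of 3/5/15.
    fn main() {
        for i in 1..=100 {
            match (i % 3, i % 5) {
                (0, 0) => println!("FizzBuzz"),
                (0, _) => println!("Fizz"),
                (_, 0) => println!("Buzz"),
                _ => println!("{i}"),
            }
        }
    }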


> There weren't signals like attending Harvard.

Oh trust me it's still not that great a signal.


Surely the title should indicate somewhere that this is about selecting partners (which I find to be a terribly dull topic).


Updated.


I miss how clear the sound was. Cellphones sound like absolute crap in comparison. I don't understand why.


These days it's not uncommon to stack multiple lossy compressors in a chain. Your VoIP phone -> service provider is one codec. Then the phone company to the other phone company - quite possibly re-encoded - then on the other end they re-encode it yet again, to send to the VoIP or cell phone.
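You can simulate that generational loss yourself. A rough sketch (assuming an ffmpeg build with the opencore AMR-NB codec on $PATH; original.wav and the gen* file names are made up for illustration):

    use std::process::Command;

    // Round-trip a WAV through the AMR-NB speech codec several times,
    // mimicking the re-encode that can happen at each network boundary.
    fn tandem_encode(input: &str, generations: u32) {
        let mut current = input.to_string();
        for gen in 1..=generations {
            let amr = format!("gen{gen}.amr");
            let wav = format!("gen{gen}.wav");
            // Encode: 8 kHz mono at 12.2 kbps, a GSM-era speech rate.
            Command::new("ffmpeg")
                .args(["-y", "-i", current.as_str(), "-ar", "8000", "-ac", "1",
                       "-c:a", "libopencore_amrnb", "-b:a", "12.2k", amr.as_str()])
                .status().expect("ffmpeg encode failed");
            // Decode back to PCM, as the next hop would before re-encoding.
            Command::new("ffmpeg")
                .args(["-y", "-i", amr.as_str(), wav.as_str()])
                .status().expect("ffmpeg decode failed");
            current = wav;
        }
        println!("compare {input} against {current}");
    }

    fn main() {
        tandem_encode("original.wav", 3);
    }

A couple of generations in, the artifacts are usually easy to hear.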

It doesn't have to be that way. Ideally we'd stream the data (whether compressed or uncompressed) directly to the other end. There are standards for this but, well, you know how standards are: so many to choose from.

In hindsight I think lossy compression of telephony was a mistake. Classic narrowband is 64 kbps (8 kHz sampling × 8-bit companded samples, i.e. G.711 PCM). GSM and other early digital cellular technologies could provide perhaps 5 - 10 kbps per handset and voice just had to be crammed into that. It made sense in the early 90s in that one application. It makes little sense in other applications, either then or now.

The long distance network of the late 70s into the early 00s was mostly uncompressed digital PCM, while the local loop was analog. The result was a basically distortion-free channel from about 200 Hz to 3 kHz. Oh, and it was mostly synchronous too, so the delay was generally under 100 ms even cross-continent. You used to be able to immediately interrupt the person while talking, just as in a real-life conversation. Some telephony systems running over packet switching with buffering end up with such significant delays in practice that you have to take turns like it's a walkie-talkie.


Oh, and you missed an important one for me, which is that many phones will only use half duplex for the codec (possibly made worse by noise cancelling that will cut the speaker when the mic is active). This is annoying because I used to be able to talk to people on land lines, interrupt them, and then wait for them to finish talking so I can have a turn. Now, as soon as I talk, their audio cuts out, and now I can't tell if they've actually paused to let me talk. I find it very maddening, and difficult to hold a conversation.

Luckily, at least on my phone, when I attach a bluetooth headset, I can still get full duplex audio through the headset.


Transferring from one network to another happened back in the last millennium with landlines, too.

I'm from Iowa, a state where that made lots of money, globally, for decades.


The rise of Big Latency is trashing our long distance relationships. It is absolutely infuriating.


I am not sure which landlines you remember, but VoLTE voice quality is better than every phone I've ever used, from landlines in the 80s to VoIP Vonage phones. POTS systems ran at a lower quality than current phones.


I've never had a cell phone that can match the latency of a 80s or 90s landline for local calls. Maybe the audio is as good, but that delay makes calls distinctly worse.


Latency is a different thing, though. Yes, latency is worse in some cases than it was on copper-connected phones, though that depends heavily on the connection. But quality alone has never been higher than it is now, and it was much lower on landline phones.


The audio is awful. The old landlines advertised hearing a pin drop. And no one laughed at that concept. Imagine trying to hear something comparable with today's mobile phones.


It wasn't just POTS but also the kind of switches you went through for a given call. POTS over the right lines could be amazing.


It's entirely possible I have rose-colored glasses on. Still, VoLTE is terrible compared to any other audio service I've used.


Yeah, I think so too; modern cell phone voice quality has been pretty good in my experience. Granted, I'm mostly only calling other cell phones, so maybe there could be a quality downgrade if you call something truly analog that's still attached to a landline?


One time I heard someone say that people used to fall in love over the phone, but they don’t anymore.

I think it’s basically true.


Old phones definitely have a distinctive sound that I do like, but I haven't really noticed "bad" audio quality in phones?

If I call with Signal or Skype or something, usually the audio is pretty clear, and doesn't seem too crappy to me. Even "regular" phone service on my iPhone seems to be pretty clear to me.

You could argue that it's "too" clear I suppose, but I don't really think it's a bad thing personally.


Most of the time, videocalls sound too... stuffy for me. There's something off about the frequencies. I'm not sure if that's some kind of tight windowing and aggressive compression, or noise cancelling eating into the signal, or both, but it's missing the high-order terms, so to speak.

(I've recently switched to using my headphones over audio jack again, and the problem persists, so it's not the Bluetooth headset profile - though in general, when HSP kicks in, the audio quality goes to the gutter.)


I went through my contacts and changed the default calling modality from phone to FaceTime Audio for some of my frequently contacted family/friends. It's curious to me that iPhones don't really surface the option to choose between "cellular" and "FaceTime Audio" more prominently.


SIP trunks typically use very dated codecs like GSM at super low quality.

https://en.m.wikipedia.org/wiki/Full_Rate

> The quality of the coded speech is quite poor by modern standards

> The codec is still widely used in networks around the world.


it's a bit like LED vs incandescent light bulbs, or rice cookers vs the kamado-san: sure, the new thing is efficient and full of add-on functionality, but somehow it's still being compared with the old, and still playing catch-up

we got "HD" sound for phones, but the latency is still there. we got energy efficiency from LEDs, but they're blasting us with blue light (i got some chicken light bulbs lol). we got keep-warm and scheduled cooking from rice cookers, but the pot is tainted with PFAS coating and the rice still doesn't taste as good as from a kamado-san

makes me wonder what other inventions beat the old in every possible way.. silicon chips i guess?


it's a death-by-a-thousand-cuts sort of situation: optimizing for voice clarity while not always maintaining voice quality (compression, background noise filtering, etc), plus the method of transmitting the audio is significantly more complicated


Is that the cellphone itself or the environment where the phone is used?


Neither? It's however the voice is encoded over the cell network. Again, I don't understand why, because there's more than enough signal to stream digital audio. It's like they haven't upgraded voice quality in 30 years, despite this being an obvious market advantage.

Hell, you can still rig a physical handset to work with bluetooth + cellphone, and it's guaranteed to sound terrible.

EDIT: phrasing, wording.


> Neither?

As someone who had to make sure that call audio was properly processed on a phone I worked on to make it match today's standards, I can say without hesitation: it's all three.

The codec you get can vary from okay to terrible; the way mobile phones are built these days requires you to do echo cancellation; and the environment phones are used in requires you to do noise reduction.

Just disable audio processing on your phone, feed the network with raw microphone input and notice the complaints from your interlocutors. I've been there :)
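For a sense of the simplest possible piece of that processing chain, here's a toy noise gate; real pipelines use adaptive echo cancellation and spectral noise suppression, so treat this as the crudest cousin of those:

    // Toy noise gate: silence any frame whose RMS energy falls below a
    // threshold, the most primitive form of background-noise suppression.
    fn noise_gate(frame: &mut [i16], threshold_rms: f64) {
        let energy: f64 = frame.iter().map(|&s| (s as f64) * (s as f64)).sum();
        let rms = (energy / frame.len() as f64).sqrt();
        if rms < threshold_rms {
            frame.fill(0); // below the gate: treat it as background noise
        }
    }

    fn main() {
        let mut quiet = [3i16, -2, 4, -1]; // low-energy frame: gets gated
        let mut loud = [9000i16, -8500, 9200, -8800]; // speech-level: passes
        noise_gate(&mut quiet, 100.0);
        noise_gate(&mut loud, 100.0);
        assert_eq!(quiet, [0; 4]);
        assert_eq!(loud, [9000, -8500, 9200, -8800]);
    }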


> Neither? It's however the voice is encoded over the cell network. Again, I don't understand why because there's more than enough signal to stream digital audio.

Something's gone badly wrong in your memory; landline phones intentionally drop important vocal frequencies and automatically prevent everyone from sounding like themselves. Cell phones don't do that and have always had much, much, much better audio than landlines.


It's why 24.4 kbps is about the max you can get from a modem without your phone line being a fancy one. Compression (in the musical sense, not the Information Theory sense, though they overlap) plays a part too.


it's the phone. between the compression, the impression of it being half duplex all the time, and glitches and drops, having conversations on cell phones is so frustrating that I tend to avoid them altogether, to the detriment of my long distance relationships.

I experience these problems even when both of the participants are at home using WiFi calling.

I have been lamenting this problem for ages.


I use wifi calling and it still sounds bad compared to facetime. Like, exactly as bad as over cell.


Wow. This is fantastic. This is the first time in nearly fifteen years they've had a feature I've actually wanted to purchase.


Why not simply mandate providing code? It doesn't make any sense to wait until the vendor dies to buy.


Because the perfect is the enemy of the good.

Escrow triggered by insolvency or product termination would substantially improve on the status quo.


Mandating source code isn't remotely near perfect either. In an ideal world we wouldn't need currency to organize an economy. People gotta demand more.

