Validark's comments | Hacker News

> Swift doesn’t have a match statement or expression. It has a switch statement that developers are already familiar with. Except this switch statement is actually not a switch statement at all. It’s an expression. It doesn’t “fallthrough”. It does pattern matching. It’s just a match expression with a different name and syntax.

Are there people who see a "match" statement, smash both hands on the table, and shout, "WHAT THE ___ is a ------- MATCH STATEMENT?!!! THIS IS SO $%^&@*#%& CONFUSING!! I DON'T KNOW THAT WORD!! I ONLY KNOW SWITCH!!"


TL;DR — it seems to me that it is less anger from devs at being confused over a Case construct and more an attempt to preemptively soothe any ruffled feathers for devs wanting a traditional Switch.

I think your comment was probably rhetorical, but it does address/raise a fairly common issue in designing programming languages. My position on this is that it is less like "WHAT THE ___ is a ------- MATCH STATEMENT?!!! THIS IS SO $%^&@*#%& CONFUSING!! I DON'T KNOW THAT WORD!! I ONLY KNOW SWITCH!!" and instead more like the following (from the language designers' POV):

Okay, we want a Case construct in the language, but programmers coming from or preferring imperative syntax and semantics may not like the Case concept. But they often like Switch, or at least are familiar with it appearing in code, sooooooo: first, we will alter the syntax of the traditional Switch to allow a more comfortable transition to using this functional-inspired construct; then second, we wholesale replace the semantics of that Switch with the semantics of a Case. This is underpinned by the assumption that the syntax change is small enough that devs won’t recoil from the new construct, and that the larger divergence in semantics will hopefully not cause issues because it arrives coated in mostly familiar syntax.

Interestingly, the author of TFA seems to be operating under the assumption that the Case construct is an unqualified positive change and sneaking the corresponding semantics into that unfortunate imperative code is a wholly positive goal for the language design.

Without taking a position on the above positivity, I think the maneuvers language designers take while designing syntax and semantics (as exhibited in Swift’s Switch syntax for a Case Expression) are motivated by divergent, and oftentimes strange, priorities and prior assumptions. So, from the 10,000’ view, is enshrining these priorities and assumptions, and others like them, as a hard-coded facet of the language the right path for languages generally? Should a language instead seek to be an overall more general framework for programming, leaving the vast majority of the syntax and higher-level semantics to be chosen and instantiated by devs where fit-for-purpose and pros/cons direct their inclusion? Or are opinionated languages, with or without accompanying sugar to help smooth over differences from other languages, the better path? Is there a ‘happy’ medium where:

1) design goals and forward-thinking or experimental syntax/semantics get put in the language as an ‘it’s for your own good’ method for advancing the field as a whole and advancing/optimizing a single dev’s programs in particular;

2) the default position of a language should be as generalized as possible, but with abilities and options for users to specify what advanced, uncommon, or divergent syntax/semantics are utilized in a given program?


We're talking about fallthrough happening by default or not by default. You could call it a "map" construct or a "choose" statement for all I care.

Whether or not you have to write the "case" keyword 10 times is an aesthetic choice.
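
For concreteness, a minimal C sketch of the fallthrough-by-default behavior being debated here (the function and values are made up, just for illustration):

    #include <stdio.h>

    /* Classic C switch: control falls through to the next label unless you
     * write break, so a missing break is a silent bug. A match-style
     * construct makes each arm independent by default, no break needed. */
    void describe(char grade) {
        switch (grade) {
        case 'A':
        case 'B':
            printf("good\n");
            break; /* omit this and 'A'/'B' would also print "passing" */
        case 'C':
            printf("passing\n");
            break;
        default:
            printf("failing\n");
        }
    }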

I don't think this has anything to do with program optimization. On all non-theoretical ISAs I'm aware of, you don't need a JUMP instruction to go to the next instruction. We're debating names.

I'm a Ziguana so my answer to the programming philosophy questions would be that we need a language where the complexity emerges in the code, not in the language itself, and we generally want a shared language that can be read and used by anyone, anywhere. If everyone has their own subset of the language (like C++) then it's not really just one language in practice. If every project contains its own domain specific language, it may be harder for others to read because they have to learn custom languages. That's not to say you should never roll your own domain specific language, or that you should never write a program that generates textual source code, but the vast, vast majority of use cases shouldn't require that.

And, yes, be opinionated. I'm fine with some syntactic sugar that makes common or difficult things have shortcuts to make them easier, but again, if I learned a language, I should generally be able to go read someone's code in that language.

What do you consider "advancing the field as a whole"?



Why did you restrict yourself to mobile development only?


It's because I only have a phone to use for coding. Though, I am planning to make it more general. Mobile development is just one of the main goals of this language.


It might have more value than you think. If you look up SCEV in LLVM, you'll see it's primarily used for analysis, and it enables other optimizations beyond the math loops that, by themselves, probably don't show up very often.


You might be right.


What's actually way cooler about this is that it's generic. Anybody could pattern match the "sum of a finite integer sequence" but the fact that it's general purpose is really awesome.
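
For anyone curious, here's a minimal C sketch of the kind of loop this applies to (the function is made up; the transformation is the textbook one LLVM's SCEV-based passes perform with optimizations enabled):

    #include <stdint.h>

    /* LLVM's scalar evolution can recognize the induction variable here
     * and replace the whole loop with a closed-form expression roughly
     * equivalent to n * (n - 1) / 2, eliminating the loop entirely. */
    uint64_t sum_below(uint64_t n) {
        uint64_t total = 0;
        for (uint64_t i = 0; i < n; i++)
            total += i;
        return total;
    }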


"we wouldn’t even need to bother looking at the AI-generated code any more, just like we don’t bother looking at the machine code generated by a compiler."

2020: I don't care how it performs

2030: I don't care why it performs

2040: I don't care what it performs


I liked the article, but I found the random remark about RISC vs CISC to be very similar to what the author is complaining about. The difference between the Apple M series and AMD's Zen series is NOT a RISC vs CISC issue. In fact, many would argue it's fair to say that ARM is not RISC and x86-64 is not CISC. These terms were used to refer to machines vastly different from what we have today, and the RISC vs CISC debate, like the LISP machine debate, really only lasted like 5 years. The fact is, we are all using out-of-order superscalar hardware where the decoders of the CPU are not even close to being the main thing consuming power and area on these chips. Under the hood they are all doing pretty much the same thing. But because it has a name and a marketable "war", and because people can easily understand the difference between fixed-width and variable-width encodings, people overestimate the significance of the one part they understand compared to the internal engineering choices and process node choices that actually matter, which people don't know about or understand. Unfortunately, a lot of people hear the RISC vs CISC bedtime story and think there's no microcode on their M series chips.

You can go read about the real differences on sites like Chips and Cheese, but those aren't pop-sciencey and fun! It's mostly boring engineering details like the size of reorder buffers and the TSMC process node, and it takes more than 5 minutes to learn. You can't just pick it up one day like a children's story with a clear conclusion and moral of the story. Just stop. If I can acquire all of your CPU microarchitecture knowledge from a Linus Tech Tips video, you shouldn't have an opinion on it.

If you look at the finished product and you prefer the M series, that's great. But that doesn't mean you understand why it's different from the Zen series.


There seem to be very real differences between x86 and ARM not only in the designs they make easy, but also in the difficulty of making higher-performance designs.

It's telling that ARM, Apple, and Qualcomm have all shipped designs that are physically smaller, faster, and consume way less power vs AMD and Intel. Even ARM's medium cores have had higher IPC than same-generation x86 big cores since at least the A78. SiFive's latest RISC-V cores are looking to match or exceed x86 IPC too. x86 is quickly becoming dead last, which shouldn't be possible if ISA doesn't matter at all, given AMD and Intel's budgets (AMD, for example, spends more on R&D than ARM's entire gross revenue).

ISA matters.

x86 is quite constrained by its decoders, with Intel's 6- and 8-wide cores being massive and sucking an unbelievable amount of power, and AMD choosing a hyper-complex 2x4 decoder implementation with a performance bottleneck in serial throughput. Meanwhile, we see 6-wide

32-bit ARM is a lot simpler than x86, but ARM claimed a massive 75% reduction in decoder size when switching to 64-bit-only in the A715, while increasing throughput. Things like a uop cache aren't free. They take die area and power. Even worse, somebody has to spend a bunch of time designing and verifying these workarounds, which balloons costs and increases time to market.

Another way the ISA matters is memory models. ARM uses barriers/fences, which are only added where needed. x86 uses a much tighter memory model that implies a lot of things the developers and compiler didn't actually need/want and that impact performance. The solution (not sure if x86 actually does this) is doing deep analysis of which implicit barriers can be provably ignored and speculating on the rest. Once again though, wiring all these various proofs into the CPU is complicated and error-prone, which slows things down while bloating circuitry, using extra die area/power, and sucking up time/money that could be spent in more meaningful ways.
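
A minimal C11 sketch of that difference from the programmer's side (the names here are just illustrative): with release/acquire ordering, a compiler targeting ARM only needs ordering instructions (load-acquire/store-release or barriers) at the two marked operations, while x86's stronger model already implies most of this ordering on every load and store whether you asked for it or not.

    #include <stdatomic.h>
    #include <stdbool.h>

    static int payload;
    static atomic_bool ready;

    void publish(int value) {
        payload = value;                    /* plain store, no ordering needed */
        atomic_store_explicit(&ready, true,
                              memory_order_release); /* ordered store */
    }

    bool try_consume(int *out) {
        if (atomic_load_explicit(&ready, memory_order_acquire)) { /* ordered load */
            *out = payload;                 /* plain load, no ordering needed */
            return true;
        }
        return false;
    }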

While the theoretical performance mountain is the same, taking the stairs with ARM or RISC-V is going to be much easier/faster than trying to climb up the cliff faces.


> It's telling that ARM, Apple, and Qualcomm have all shipped designs that are physically smaller, faster, and consume way less power vs AMD and Intel.

These companies target different workloads. ARM, Apple, and Qualcomm are all making processors primarily designed to be run in low power applications like cell phones or laptops, whereas Intel and AMD are designing processors for servers and desktops.

> x86 is quickly becoming dead last which should be possible if ISA doesn't matter at all given AMD and Intel's budgets (AMD for example spends more in R&D than ARM's entire gross revenue).

My napkin math is that Apple’s transistor volumes are roughly comparable to the entire PC market combined, and they’re doing most of that on TSMC’s latest node. So at this point, I think it’s actually the ARM ecosystem that has the larger R&D budget.


> These companies target different workloads.

This hasn't been true for at least half of a decade.

The latest generation of phone chips run from 4.2GHz all the way up to 4.6GHz with even just a single core using 12-16 watts of power and multi-core hitting over 20w.

Those cores are designed for desktops and happen to work in phones, but the smaller, energy-efficient M-cores and E-cores still dominate in phones because they can't keep up with the P-cores.

ARM's Neoverse cores are mostly just their normal P-cores with more validation and certification. Nuvia (designers of Qualcomm's cores) was founded because the M-series designers wanted to make a server-specific chip and Apple wasn't interested. Apple themselves have made mind-blowingly huge chips for their Max/Ultra designs.

"x86 cores are worse because they are server-grade" just isn't a valid rebuttal. A phone is much more constrained than a watercooled server in a datacenter. ARM chips are faster and consume less power and use less die area.

> So at this point, I think it’s actually the ARM ecosystem that has the larger R&D budget.

Apple doesn't design ARM's chips and we know ARM's peak revenue and their R&D spending. ARM pumps out several times more cores per year along with every other thing you would need to make a chip (and they announced they are actually making their own server chips). ARM does this with an R&D budget that is a small fraction of AMD's budget to do the same thing.

What is AMD's excuse? Either everybody at AMD and Intel suck or all the extra work to make x86 fast (and validating all the weirdness around it) is a ball and chain slowing them down.


How does that square with the fact that there is no dramatic performance loss for x86 emulation on ARM?


Probably because most “emulation” is more like “transpilation” these days: there is a hit up front to translate into native instructions, but they are then cached and repeatedly executed like any other native instructions.


But only on Apple ARM implementations, which have specific hardware built into the chip to do so (emulate the x86 memory model), and which won't be available in the future because they're dropping Rosetta 2.


Apple isn't dropping Rosetta 2. They say quite clearly that it's sticking around indefinitely for older applications and games.

It seems to me that Apple is simply going to require native ARM versions of new software if you want it to be signed and verified by them (which seems pretty reasonable after 5+ years).


> In fact, many would argue it's fair to say that ARM is not RISC

It isn't now... ;-)

It's interesting to look at how close old ARM2/ARM3 code was to 6502 machine code. It's not totally unfair to think of the original ARM chip as a 32-bit 6502 with scads of registers.

And, for fairly obvious reasons!


But even ARM1 had some concessions to pragmatics, like push/pop of many registers (with a pretty clever microcoded implementation!), shifted registers/rotated immediates as operands, and auto-incrementing/decrementing address registers for loads/stores.

Stephen Furber has extended discussion of the trade-offs involved in those decisions in his "VLSI RISC Architecture and Organization" (and also pretty much admits that having PC as a GPR is a bad idea: hardware is noticeably complicated for rather small gains on the software side).


You don't believe that Intel chips have more instructions and complexity and ARM chips have fewer?

Neither is "simple" but the axis is similar.


I hate the idea of having one "Software Discipline". Something is lost when people are constrained by OOP or TDD or "Clean Code". Obviously, as with the example of TDD in the article, a lot of these terms mean different things to different people. Hence whenever "Clean Code" is criticized, people who think their code is "clean" take up arms.

I tend to disagree with most of these rulesets that are meaningless to "engineering". The idea that a function should only be 40 lines long is offensive to me. Personally, I would rather have one 400 line function than ten 40 line functions. I'm a Ziguana. I care about handling edge cases and I think my programming language should be a domain specific language to produce optimal assembly.

I would not constrain other people who feel differently. I read an article where some project transitioned from Rust to Zig, even though the people on the team were all Rustaceans. Obviously their Rust people hated this and left! To me, that's not a step in the right direction just because I prefer Zig to Rust! That's a disaster because you're taking away the way your team wants to build software.

I think hardly any of the things we disagree on actually have much to do with "Engineering". We mostly aren't proving our code correct, nor defining all the bounds in which it should work. I personally tend to think in those terms and certain self-contained pieces of my software have these limits documented, but I'm not using tools that do this automatically yet. I'd love to build such tools in the coming years though. But there's always the problem that people build tools that don't notice common use-cases that are correct, and then people have to stop doing correct things that the tool can't understand.


I don't remember so much of it now, but as a kid I did a History project on this where I went to the local state University and read all the references in the archives related to Celluloid and other names it went by. A really interesting subject, for sure!


According to Wikipedia, Alexander Parkes created the first celluloid (later called "Parkesine") on purpose in 1855 (as mentioned in the article, Collodion already existed and, when dried, created a celluloid-like film). John Wesley Hyatt apparently acquired Parkes's patent.

Daniel Spill, who worked with Parkes directly in England, founded several companies with Parkes selling Celluloid in England.

Spill and Hyatt spent the better part of a decade in court against each other over who invented it first and who had the right to the patents. The judge ultimately ruled that both of them could continue their businesses, and that Parkes had invented it first.

