
The cheapest possible Hetzner VPS (2 vCPU, 40GB SSD) plus a Hetzner storage box (1TB) work alright for cheap (less than EUR 10/mo total). I store my database on the SSD, and the `/uploads` folder on the storage box attached as a CIFS drive. Put it behind Tailscale and it's worked fine for the past few months.
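
For reference, the CIFS mount boils down to one fstab line, roughly like this (the sub-account hostname, paths, and ids are placeholders):

    //uXXXXXX.your-storagebox.de/backup  /srv/uploads  cifs  credentials=/etc/storagebox.cred,uid=1000,gid=1000,_netdev  0  0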


Wouldn’t you want your photos to be encrypted at rest on the Hetzner storage box?


I don't really care about that, since my threat model doesn't involve Hetzner looking through my photos and training an AI model on them. If/when I move this off to my own hardware, then I'll do full disk encryption, since my threat model may involve someone stealing my hardware.


Docker could be run on the VPS, and the storage leg could be encrypted.

I'm presuming some VPS providers allow converting your VPS disk image to something that supports encryption.
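
Even without provider support, a file-backed LUKS volume is one way to get encryption at rest on a stock VPS - a rough sketch, with sizes and paths as placeholders:

    # create a container file, format it as LUKS, and mount it
    fallocate -l 50G /var/photos.img
    cryptsetup luksFormat /var/photos.img
    cryptsetup open /var/photos.img photos
    mkfs.ext4 /dev/mapper/photos
    mount /dev/mapper/photos /mnt/photos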


Is that something that docker can do?

I presume gocryptfs can be used to wrap an SMB mounted Hetzner storage box. Haven’t tried it myself though.

I would be careful storing any personal data on it unencrypted.
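
If it works, the sketch would be something like this, assuming the storage box is already mounted at /mnt/storagebox (untested, per the above; paths are illustrative):

    gocryptfs -init /mnt/storagebox/crypt        # one-time: create the encrypted directory
    gocryptfs /mnt/storagebox/crypt /mnt/photos  # mount a decrypted view locally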


rclone.

Just use rclone if you need to turn object storage into an encrypted mount.

It doesn't do well with non-object-storage access patterns, but we're not putting an SQLite database on it here, so that should be fine.

rclone has a `crypt` layer you can paper over any of its backends, and you can still access it through any of its usual interfaces.

I'd personally likely bind mount the database folder over the rclone mount or the other way around, as needed to keep that database on a local filesystem.
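
Roughly like this - a sketch where the remote names are made up, and "storagebox" is an sftp/smb remote you've configured beforehand:

    # wrap the existing remote in a crypt layer
    rclone config create storagebox-crypt crypt \
        remote=storagebox:encrypted \
        password=$(rclone obscure 'your-passphrase-here')

    # mount the decrypted view
    rclone mount storagebox-crypt: /mnt/uploads --vfs-cache-mode writes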


In my experience, mounting an SMB share inside Docker containers has been very, very unreliable...


I didn't make this clear enough in the article, sorry for the mix-up! Yes, iPhones support Osaifu-Keitai, and it's Android phones which have this problem. I've now updated the article to clarify this.


Some writing feedback: after you added the "Clarification" update, you could also have put something like "Update (read below)" in the bold part "NFC-F support is not enough to use your phone as an IC card".

Otherwise, thanks for writing this article; it is very insightful, especially the whole story of Japanese companies like Sony developing parallel NFC standards in the early days of the technology.


Author here, this is my fault for not proofreading this part properly! The part about non-Japan SKUs is generally true for Android phone manufacturers, but Apple eats the cost and gives all phones Osaifu-Keitai. You do not need to root an iPhone to get this functionality, even on a non-Japan unit.

I will write a correction for this section to clear up the confusion.


Author of the "Osaifu-Keitai-Google-Pixel" article here.

There's a big chance that Apple does not actually eat the cost for Osaifu-Keitai, as they may have a sweetheart deal, hinted at by an article from watch.impress [1], which I found a very long time ago via Twitter.

So the fee is either waived by FeliCa Networks, covered by Japanese carriers, or (as an educated guess) paid only upon enrollment of the first FeliCa-compatible card on the device.

I think it would be naive to believe that Apple, of all companies, would be the one willing to pay a couple of cents per device in order to offer a feature that, at best, only a single digit percentage of their users would use.

[1] https://www.watch.impress.co.jp/docs/series/suzukij/1297656....


AFAIK, many Android phones outside of the US have Osaifu-Keitai support just sitting there. I think if there is a key generation fee, it's charged at wallet setup time, not for the phone's mere existence.

I rooted my US model Pixel 9 Pro on my Japan trip last year to enable it. :D Literally a boolean in a config file.

https://github.com/kormax/osaifu-keitai-google-pixel

(The author's write up has more theories on why Google blocks it on non-Japan SKUs)


This is an interesting find and the author's ideas make sense to me. I can't confirm them of course, this is all probably hidden behind legal documents, but I've updated the article with a link to this repo. Thanks for the link!


The magnitude to which FeliCa was faster shocked me as well when I found out. And the latency is not insignificant: it's obvious how much faster people can get through a Tokyo metro gate than a London one. So clearly it must have some kind of financial impact as well, if an entire city's public transport system works slower because of it. Even ignoring empathy for a second, isn't this the kind of thing that Western capitalist ideology is supposed to improve? Some food for thought.


It's not just capital; interpersonal and bureaucratic factors matter too.

Technically the way to think about latency is that a process has N serial steps and you can (a) reduce N, (b) run some of those serial steps in parallel, and (c) speed up the steps.

For one thing, different parts of the organization own the N steps. You might have one step that is difficult to improve because of organizational issues, and then the excuses come in... Step 3 takes 2.0 sec, so why bother reducing Step 5 from 0.5 sec to 0.1 sec? On top of that, we valorize "slow food" [1], have sayings like "all good things come to those who wait", and tend to think people are morally superior for waiting, as opposed to "get your ass out of the line so we can serve other customers quickly" (which is truly empathetic, compassionate, etc.)

Maybe the ultimate expression of the American bad attitude is how you have to wait 20 minutes to board a plane because they have a complicated procedure with 9 priority levels, and they have to pay somebody to explain that if you are a veteran you are in zone 3 and if you have this credit card from another airline that this airline acquired you are in zone 5, etc... Meanwhile they are paying the flight crew to wait, paying the ground crew to wait, etc. Southwest Airlines used to have a reasonable and optimized boarding scheme but they gave up on it; I guess the revenue from those credit cards is worth too much.

[1] It's a running gag when I go to a McDonald's in a distant city that it takes forever compared to, I dunno, Sweetgreen; even "fast" food isn't fast anymore. When I worked at a BK circa 1988, we cooked burgers ahead of time, stored them in a steam tray for up to ten minutes, then put condiments on them and put them in a box on a heat chute for up to another ten minutes. Whether you ordered a standard or customized burger you'd usually get it quickly, whereas burger restaurants today all cook the beef to order, which just plain takes a while - longer than it takes to assemble a burrito at Chipotle.


US govt transportation agency central planners will happily spend billions to bulldoze a neighbourhood for a freeway lane, all to shave a few hypothetical seconds off a car commute, so I don’t think the issue is that US culture isn’t interested in speed, latency, or throughput.

Airline boarding is not the only class system in play. At every level of government, even within transit agencies, transit and its customers are seen as and treated as second class citizens. The idea of investing money, time or energy to shave even scores of minutes off the commute of someone who uses a bus, often seems as if it’s an unthinkable thought in these organizations.


> American bad attitude is how you have to wait 20 minutes to board a plane because they have a complicated procedure with 9 priority levels

The purpose of the many boarding groups is, IMHO, to make those in groups > 1 feel as though they're missing out on some perk that they could get if they paid more. It's an intentional class system where some are encouraged to look down on those who paid less, and vice versa. It's good for revenue, bad for people.


I doubt airlines complicate boarding groups to reinforce classes. It's likely all about the bottom line, and nickel-and-diming you at every opportunity.


I think the point is that creating a class system is one way to maximize revenue. The social aspects of that system - looking down on people in economy, or aspiring to be the people in first class - aren't necessarily the first-order effects, but I suspect they contribute.


Exactly, that's the point. Creating an envy structure in order to increase revenue.


> meanwhile they are paying the flight crew to wait

Most flight attendant and pilot union contracts only pay them based on the hours with the door closed or in flight. (This is changing, but it's how it's been for a long time.) This reduces the incentives for quick boarding, as most of the flight crew is not being paid for that time.


> On top of that, we valorize "slow food" [1], have sayings like "all good things come to those who wait"

Japan has its own versions of these things so I doubt it's this. The whole culture is, in general, not built for efficiency either.


> So clearly it must have some kind of financial impact as well, if an entire city's public transport system works slower because of it.

Unlikely; most cities' transport systems will run into issues with capacity long before they run into issues with ticket gate latency. No point getting people through the gates faster if they're just gonna pile up on the platform and cause a crush hazard.

At peak hours in London, the inbound gates are often closed periodically to prevent crowding issues in major stations. If you look at normal TfL stations you'll notice there's usually a 2:1 ratio of infrastructure for people leaving a station vs entering, because crowding is by far the biggest, most dangerous risk in a major metro system, and also the biggest bottleneck.


The departure times dominate latency and throughput in metros. The gates are not the bottleneck.


When a full train empties out at a specific station you can get massive delays. Euston platforms 8-11 come to mind: two arrivals of 600+ person (including standing) trains within a minute or so at, say, 8 and 11 can cause chaos.


It depends. Usually you'd be right, but for some big events, the stations and platforms can be incredibly packed. In those cases the extra delay from gates could really hurt. One example is Comiket, where you have thousands of attendees all coming to the same few stations around the venue. Both times I was there, there was a massive crowd spanning from the platform to the outside. Having to wait the extra few hundred milliseconds on each card tap would have been painful.

Here's an example video to show the gates in action: https://www.youtube.com/watch?v=YffjxN3KsD4



This opinion that "GNOME is killing customization" is something I see quite a lot, but which I think people take the wrong way. It's absolutely true that GNOME is designed to be less themeable than other DEs like KDE, or individual WMs - and by extension, GTK apps and apps designed for GNOME are harder to customize and break more when you do theme them. But I disagree that "customization of Linux [being] half-dead" is a bad thing; on the contrary, I support the lack of theming options, and I like that there's someone on the Linux desktop who pushes this hard for consistency.

To make my biases clear: I'm a software developer who uses GNOME daily and is developing a GTK/Adwaita app. I used to rice a lot back in the i3 days, but I don't particularly care about that nowadays, and I stick to the defaults when I can. For my purposes, GNOME and Adwaita are perfect, since they're very opinionated by default and you can make good-looking apps with minimal effort. Since all Adwaita apps are supposed to look similar and follow the same HIG, most of my desktop apps have the same look - but more importantly, the developers of those apps can be confident that their apps look correct on my desktop. This is something that developers in the GTK space generally want, and for good reason[0].

One argument is that you as a user of the desktop should be able to have the final say on how your apps look, which is a totally valid take! And there are DEs, WMs, and apps which give you this freedom like Hyprland. But this doesn't guarantee that those apps will look good, or look consistent with each other, or even act consistently across apps. On the other hand, I as an app developer want to guarantee that my app looks good on your desktop, and the easiest way to achieve that is to target a single desktop environment, rather than an infinite combination of possibly-similar-but-maybe-completely-different desktops. Every preference has a cost[1][2], and when you take this philosophy beyond just preferences and expand it to color schemes, padding, margin, iconography, typography, it becomes unmanageable.

This isn't to say that GNOME is perfect, and I disagree with the project on some fundamental technical things like not supporting xdg-layer-shell[3], and refusing to accommodate server-side decorations for apps which don't want to render decorations themselves. (On the cultural side I can't comment, since I have no experience with that.) But in my opinion, this is the project that can deliver a usable and consistent Linux desktop to the average person the most effectively.

[0]: https://stopthemingmy.app/

[1]: https://blogs.gnome.org/tbernard/2021/07/13/community-power-...

[2]: https://ometer.com/preferences.html

[3]: https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/1141


Much of the frustration inspired by GNOME/GTK's unthemability comes down to not having a few very simple knobs for users to tweak. Case in point: one of the primary reasons I used to theme GNOME desktops was to clean up Adwaita's padding, which is utterly egregious for desktop usage. If GNOME just had a padding slider with 3-5 notches, that would go a long way and wouldn't impair developers' ability to build consistent apps in the least. Affordances like these are rarely given, however, and have to be fought for.

Aside from that, consistency and themability are not at all mutually exclusive. Back in the early days of OS X, theming by hacking system resource files (or patching them in memory via haxies[0]) was quite popular and, for the most part, worked very well. Generally, the only apps that didn't play nice with themes were those sitting in the uncanny valley between native and custom, using bits of both, which tended not to be the highest quality applications anyway. This was way before Apple started pushing devs to parameterize their apps, too, so similar theming capabilities today would work even better, since themes can just tweak the parameterized fonts, colors, etc. as needed to maintain coherence and usability.

The real problem with GNOME/GTK is simply that it wasn’t designed with user customization in mind even as a remote possibility. A UI framework that did keep these things in mind combined with a strong dev culture of parametrization would make for a desktop that’s both customizable and consistent.

[0]: https://en.wikipedia.org/wiki/Unsanity


Interesting, I didn't know there was a theming scene on OS X! I agree that consistency and themability can coexist (and I suppose your example proves that), and that had GNOME decided to prioritize themability, we could have had something like that on the Linux desktop. I suppose this is a question of priority and where to allocate effort, rather than of what is technically possible. Building a UI framework and HIG is already not an easy task, and making it customizable in the way you describe would be an even bigger burden on developers - many of whom are, I assume, doing this work for free. But admittedly I haven't looked much into GNOME's funding or organizational structure, so maybe they are capable of it but just haven't bothered.


> I as an app developer want to guarantee that my app looks good on your desktop

When your app doesn't follow how my desktop looks, it doesn't look good on my desktop. And unsurprisingly, most modern GTK3 and especially GTK4 apps do not look good on my desktop.

What you actually mean here is that you want to guarantee that your app looks good on your desktop, not mine.


Yeah, I should have clarified that by "your desktop" I mean "your GNOME desktop" - i.e. "if you run GNOME, it'll look good no matter your preferences". But wrt "your desktop" - even if I wanted to, I couldn't guarantee that my app looks good on your desktop specifically, because I have no clue what your desktop looks like! Which is why I want to target a large common denominator of desktops instead, where I know it can look good.

The counter argument to that is "so let the user theme the app, to suit their own desktop", which would be a decent solution, but:

1. My vision for my app might conflict with your vision for your desktop. Maybe I want this button to be a light blue because it meshes well with some other elements in the app, but you want it to be a darker blue because it fits with your desktop's color scheme. What happens then?

2. This still doesn't guarantee that the app will look good. If you theme my app's home page, but don't theme the rest of the pages, then sure it'll look good at the home page - but as soon as you start using it, the look will fall apart. Or, what if I push an update to my app which adds a new page with a new kind of UI element? Do you really want to be maintaining your desktop theme for every single app you have?

3. This adds a burden on me as the developer to make parts customizable. This is the least convincing argument in this list IMO, since if there was better tooling and infrastructure for theming in GTK this wouldn't be a problem - but there isn't, so it is still a problem.

As a practical example, my app makes use of a WebKitGTK web view to display some info. I inject some custom CSS into this web view to make it look like Adwaita. This touches on points 2 and 3:

2. The webview has some UI widgets which aren't present in the rest of GTK, like a sticky header bar. You would have to manually maintain a stylesheet for this single element.

3. I now need to write a way to let users theme the custom CSS inside the webview, rather than just the CSS of the GTK widgets themselves. (I have already written this, but it's still a maintenance burden.)
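
For the curious, a user-level override in GTK4 ultimately comes down to a CSS provider like this (a sketch using gtk4-rs; the selector and values are made up):

    use gtk::{gdk, prelude::*};

    // Sketch: after GTK is initialized, layer a user stylesheet
    // over the app's own styles.
    fn apply_user_css() {
        let provider = gtk::CssProvider::new();
        // recent gtk4-rs takes a &str of CSS here
        provider.load_from_data("headerbar { padding: 2px; }");
        gtk::style_context_add_provider_for_display(
            &gdk::Display::default().expect("no display found"),
            &provider,
            gtk::STYLE_PROVIDER_PRIORITY_USER,
        );
    }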


> My vision for my app might conflict with your vision for your desktop. Maybe I want this button to be a light blue because it meshes well with some other elements in the app, but you want it to be a darker blue because it fits with your desktop's color scheme. What happens then?

Probably something similar to how Apple platforms handle colors. Instead of providing a single static light blue, you have a couple options:

1. Use a “system color”, which is pre-tuned for optimal contrast, appearance, and usability and adjusts automatically when e.g. the user switches between light/dark mode or enables an accessibility setting related to color or vision

2. Define a light blue that’s actually multiple variants of the color bundled together, with each being optimal to various environments, with the UI framework choosing the right one depending on the situation

Arguably developers should be doing these things anyway for accessibility reasons. It’s not been good practice to use e.g. bare color hexes for quite some time now.


Isn't this what the accent colours do in newer GTK/Libadwaita?


> My vision for my app might conflict with your vision for your desktop. Maybe I want this button to be a light blue because it meshes well with some other elements in the app, but you want it to be a darker blue because it fits with your desktop's color scheme. What happens then?

The user trying to make your app match their desktop should 'win'. Your responsibility is to ship out an app and make sure it works in the way you want it to work.

If people need to do more work to make it look good on their desktop (as I likely would, running awesomewm), that shouldn't be prevented, but it also need not be encouraged. It should at least be facilitated, though, certainly to a better extent than it is.


I generally prefer to live with defaults, so the lack of theming and user interface customization in Gnome is a good thing in my opinion. I tend to take the view that software that requires too much customization is poorly suited to my needs.

I suspect the thing that rubs a lot of people the wrong way with Gnome isn't so much the lack of customization or theming, but actual decisions behind how the user interface should work. People simply complain about the inability to customize Gnome as a reflection of how powerless they are to do anything about those decisions.


> People simply complain about the inability to customize Gnome as a reflection of how powerless they are to do anything about those decisions.

But also that the Gnome philosophy "leaks" into the wider ecosystem, thanks to the dominance of Gtk.

(I use GtkWave as a waveform viewer. I have it installed systemwide, and it has proper menus. If I activate the oss-cad-suite environment [which is superb - this isn't a dig at that project] I have to remember to specify its full path when running it, otherwise I get a newer build shipped with oss-cad-suite, which has hamburger menus.)


Interesting app, I hadn't heard of Manabi before! How does it compare to other apps like Jidoujisho? And to more general desktop tools like Yomitan? On mobile, I'm currently using Yomitan on Firefox for mining, but I'm curious about other mobile-specific approaches and apps that people have made.


Compared with Yomitan, a couple quick differences that come to mind:

- Manabi tracks the words and kanji you've read to show you which are new to you, and which you have as flashcards. You can see this visually on the page, and in a vocab listing

- Review flashcards that appear in whatever you're trying to read. Soon I will also have it auto-review flashcards passively as you read and encounter them naturally

- Add flashcards to Manabi Flashcards or to Anki including AnkiMobile on iOS

- One-tap words to look up instead of mouseover from starting boundary

- Manabi packages reading tools such as RSS, EPUB and soon manga (via Mokuro) with user-editable curated libraries of content. Yomitan is less of a standalone-capable tool

I am working on adding Yomitan dictionaries now (to also make the app multilingual) as well as more integrations such as 2-way sync with Anki, WaniKani, JPDB

I think Jidoujisho has a lot of similarities but it's not an iOS/macOS app

I should put up some product comparison material as there are a lot of tools out there


Rust already asserts that a match is exhaustive at compile time - if you don't include a branch for each option, it will fail to compile. This extends to integer range matching and string matching as well.

It's just that with #[non_exhaustive], you must specify a default branch (`_ => { .. }`), even if you've already explicitly matched on all the values. The idea being that you've written code which matches on all the values which exist right now, but the library author is free to add new variants without breaking your code - since it's now your responsibility as a user of the library to handle the default case.
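
A minimal sketch (the names here are made up):

    // In the library crate:
    #[non_exhaustive]
    pub enum Event {
        Click,
        KeyPress,
    }

    // In a downstream crate, this match covers every variant that exists
    // today, but still needs a wildcard arm to compile:
    fn handle(event: Event) {
        match event {
            Event::Click => println!("click"),
            Event::KeyPress => println!("key press"),
            _ => {} // required: the library may add variants later
        }
    }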


Library users can force a compile error when new variants get added, using a lint from rustc. It's "allow" by default, so it's opt-in.

https://doc.rust-lang.org/rustc/lints/listing/allowed-by-def...


Does this require nightly? If so, #[warn(clippy::wildcard_enum_match_arm)] will do the same thing with no need for nightly, from clippy instead of rustc natively.


That's pretty neat. I still don't completely understand why #[non_exhaustive] is so desirable in the first place though.

Let's say I am using a crate called zoo-bar. Let's say this crate is not using non-exhaustive.

In my code where I use this crate I do:

  let my_workplace = zoo_bar::ZooBar::new();
  
  let mut animal_pens_iter = my_workplace.hungry_animals.iter();
  
  while let Some(ap) = animal_pens_iter.next() {
      match ap {
          zoo_bar::AnimalPen::Tigers => {
              me.go_feed_tigers(&mut raw_meat_that_tigers_like_stock).await?;
          }
          zoo_bar::AnimalPen::Elephants => {
              me.go_feed_elephants(&mut peanut_stock).await?;
          }
      }
  }
I update or upgrade the zoo-bar dependency and there's a new variant of AnimalPen called Monkeys.

Great! I get a compile error and I update my code to feed the monkeys.

  diff --git a/src/main.rs b/src/main.rs
  index 202c10c..425d649 100644
  --- a/src/main.rs
  +++ b/src/main.rs
  @@ -10,5 +10,8 @@
             zoo_bar::AnimalPen::Elephants => {
                 me.go_feed_elephants(&mut peanut_stock).await?;
             }
  +          zoo_bar::AnimalPen::Monkeys => {
  +              me.go_feed_monkeys(&mut banana_stock).await?;
  +          }
         }
     }

Now let's say instead that the AnimalPen enum was marked non-exhaustive.

So I'm forced to have a default match arm. In this alternate universe I start off with:

  let my_workplace = zoo_bar::ZooBar::new();

  let mut animal_pens_iter = my_workplace.hungry_animals.iter();

  while let Some(ap) = animal_pens_iter.next() {
    match ap {
      zoo_bar::AnimalPen::Tigers => {
        me.go_feed_tigers(&mut raw_meat_that_tigers_like_stock).await?;
      }
      zoo_bar::AnimalPen::Elephants => {
        me.go_feed_elephants(&mut peanut_stock).await?;
      }
      _ => {
        eprintln!("Whoops! I sure hope someone notices this default match in the logs and goes and updates the code.");
      }
    }
  }
When the monkeys are added and I update or upgrade the dependency on zoo-bar, I don't notice the warning in the logs right away after we deploy to prod, because the logs contain too many things and no one can read everything.

One week passes and then we have a monkey starving incident at work.

After careful review we realize that it was due to the default match arm and we forgot to update our program.

So we learn from the terrible catastrophe with the monkeys and I update my code using the attributes from your link.

  diff --git a/src/main.rs b/src/main.rs
  index e01fcd1..aab0112 100644
  --- a/wp/src/main.rs
  +++ b/wp/src/main.rs
  @@ -1,3 +1,5 @@
  +#![feature(non_exhaustive_omitted_patterns_lint)]
  +
   use std::error::Error;
   
   #[tokio::main]
  @@ -11,6 +13,7 @@ async fn main() -> anyhow::Result<()> {
     let mut animal_pens_iter = my_workplace.hungry_animals.iter();
   
     while let Some(ap) = animal_pens_iter.next() {
  +    #[warn(non_exhaustive_omitted_patterns)]
       match ap {
         zoo_bar::AnimalPen::Tigers => {
           me.go_feed_tigers(&mut raw_meat_that_tigers_like_stock).await?;
  @@ -18,8 +21,12 @@ async fn main() -> anyhow::Result<()> {
         zoo_bar::AnimalPen::Elephants => {
           me.go_feed_elephants(&mut peanut_stock).await?;
         }
  +      zoo_bar::AnimalPen::Monkeys => {
  +        // Our monkeys died before we started using proper attributes. If they are hungry it means they have turned into zombies :O
  +        me.alert_authorities_about_potential_outbreak_of_zombie_monkeys().await?;
  +      }
         _ => {
  -        eprintln!("Whoops! I sure hope someone notices this default match in the logs and goes and updates the code.");
  +        unreachable!("We have an attribute that is supposed to tell us if there were any unmatched new variants.");
         }
       }
     }
And next time we update or upgrade the crate version to latest, another new variant exists, but thanks to your tip we get a lint warning and we happily update our code so that we won't have more starving animals.

  diff --git a/wp/src/main.rs b/wp/src/main.rs
  index aab0112..4fc4041 100644
  --- a/wp/src/main.rs
  +++ b/wp/src/main.rs
  @@ -25,6 +25,9 @@ async fn main() -> anyhow::Result<()> {
           // Our monkeys died before we started using proper attributes. If they are hungry it means they have turned into zombies :O
           me.alert_authorities_about_potential_outbreak_of_zombie_monkeys().await?;
         }
  +      zoo_bar::AnimalPen::Capybaras => {
  +        me.go_feed_capybaras(&mut whatever_the_heck_capybaras_eat_stock).await?;
  +      }
         _ => {
           unreachable!("We have an attribute that is supposed to tell us if there were any unmatched new variants.");
         }
But what was the advantage of marking the enum as #[non_exhaustive] in the first place?


It lets you have a middle ground, with the decision of when breaking happens left up to library users. Without non_exhaustive, all consumers always get your second scenario. With non_exhaustive, individual zoos get to pick their own policy of when/if animals should starve.

Each option has its place, it depends on context. Does the creator of the type want/need strictness from all their consumers, or can this call be left up to each consumer to make? The lint puts strictness back on the table as an opt-in for individual users.


Swift does this with `@unknown default`.


Consider a bit of a different case. I run a service that exposes an API, and some fields in some response bodies are enums. I've published a Rust client for the API for my customers to use, and (among other things) it has something like this:

    #[derive(serde::Serialize, serde::Deserialize)]
    pub enum SomeEnum {
        AValue,
        BValue,
    }
My customers use that and all is well. But I want to add a new enum value, CValue. I can't require that all my customers update their version of my Rust client before I add it; that would be unreasonable.

So I add it, and what happens? Well, now whenever my customers make that API call, instead of getting some API object back, they get a deserialization error, because that enum's Deserialize impl doesn't know how to handle "CValue". Maybe some customer wasn't even using that field in the returned API object, but now I've broken their code.

Adding #[non_exhaustive] means I at least won't break my customers' code when I add a new enum value.
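
If I also want older clients to tolerate new values gracefully, one pattern (a sketch, not necessarily what my client does) is a catch-all variant routed through `From<String>`:

    #[derive(serde::Deserialize)]
    #[serde(from = "String")]
    pub enum SomeEnum {
        AValue,
        BValue,
        Unknown(String), // anything this client version doesn't know yet
    }

    impl From<String> for SomeEnum {
        fn from(s: String) -> Self {
            match s.as_str() {
                "AValue" => SomeEnum::AValue,
                "BValue" => SomeEnum::BValue,
                _ => SomeEnum::Unknown(s),
            }
        }
    }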


It's really nice when doing networking protocols and other binary formats. Lots of things are defined as "This byte signifies X : 0 == Undefined, 1 == A, 2 == B, 3 == C, 4-127 == reserved for future use, 128-255 vendor specific options".

This allows you to do something like:

    #[derive(Clone, Copy)]
    #[repr(u8)]
    #[non_exhaustive]
    pub enum Foo {
        A = 1,
        B,
        C,
    }
    
    impl Foo {
        pub fn from_byte(val: u8) -> Self {
            unsafe { std::mem::transmute(val) }
        }
    
        pub fn from_byte_ref(val: &u8) -> &Self {
            unsafe { std::mem::transmute(val) }
        }
    }
    
    #[cfg(test)]
    mod tests {
        use super::*;
    
        #[test]
        fn conversion_copy() {
            let n: u8 = 1;
            let y = Foo::from_byte(n);
            assert!(matches!(y, Foo::A));
    
            let n: u8 = 4;
            let y = Foo::from_byte(n);
            assert!(!matches!(y, Foo::A) && !matches!(y, Foo::B) && !matches!(y, Foo::C));
            let n2 = y as u8;
            assert_eq!(n2, 4);
        }
    
        #[test]
        fn conversion_ref() {
            let n: u8 = 1;
            let y = Foo::from_byte_ref(&n);
            assert!(matches!(*y, Foo::A));
    
            let n: u8 = 4;
            let y = Foo::from_byte_ref(&n);
            assert!(!matches!(*y, Foo::A) && !matches!(*y, Foo::B) && !matches!(*y, Foo::C));
            let n2 = (*y) as u8;
            assert_eq!(n2, 4);
        }
    }
This lets you have simple, fast parsing of types without needing a bunch of logic - particularly in the ref example. Someone else sent you data over the wire and is using a vendor-defined value, or a newer version of the protocol that defines Foo::D? No big deal, you can ignore it or error, or whatever else is appropriate for your case.

If you instead define Reserved and Vendor as enum variants, now you have to have logic that runs all the time - and if you want to preserve the original value for error messages, logs, etc., you can't #[repr(u8)], so you take up more memory, have to do copies, etc.

    #[non_exhaustive]
    pub enum Foo {
        Undefined, // = 0 on the wire
        A,         // = 1
        B,         // = 2
        C,         // = 3
        Reserved(u8),
        Vendor(u8),
    }
    
    impl Foo {
        pub fn from_byte(val: u8) -> Self {
            match val {
                0 => Foo::Undefined,
                1 => Foo::A,
                2 => Foo::B,
                3 => Foo::C,
                4..=127 => Foo::Reserved(val),
                128.. => Foo::Vendor(val),
            }
        }
    }
You also need logic to convert back to a u8 now too.
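
Something like this, to match the enum above:

    impl Foo {
        pub fn to_byte(&self) -> u8 {
            match self {
                Foo::Undefined => 0,
                Foo::A => 1,
                Foo::B => 2,
                Foo::C => 3,
                Foo::Reserved(v) | Foo::Vendor(v) => *v,
            }
        }
    }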

It's not strictly necessary, but it certainly makes some things far more ergonomic.


Looking at my code that works on this stuff - the above is just wrong. I was looking at my failed experimental branch, not the actual code that does this. The above is a fun way to introduce all sorts of UB.

Apologies for my pre-coffee brainfarts.


How does the working code look?


Importantly, #[non_exhaustive] applies to your users but not to you. In the defining crate we can write exhaustive matches and those work - the rationale is that we defined this type, so we should know how to handle it properly. Our users, however, must assume it may have been extended in a newer version.

#[non_exhaustive] is most popular on enums (it means we may add variants), but it is also permissible on published structure types (it means we promise these published fields will exist, but maybe we will add more, and thus change the size of the structure overall) and on the individual variants of a sum type (it means the inner details of that variant may change: you can pattern match it, but we might add more fields, and your matches must cope).


Wait what. I thought it existed for FFI purposes, regardless of whether that's with C or network protocols. The defining crate getting away with it undermines this.


No. If you mean "I don't know", that's not non_exhaustive; that's "I don't know".

For a network protocol or C FFI you probably want a primitive integer type, not one of Rust's fancier types such as enum, because while you might believe this byte should have one of six values, 0x01 through 0x06, maybe somebody decided the top bit is a flag now, so 0x83 is "the same" as 0x03 but with a flag set.

Trying to unsafely transmute things from arbitrary blobs of data to a Rust type is likely to end in tears, this attribute does not fix that.
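
For example, a newtype over the raw byte with named constants keeps every value representable without any unsafe - a sketch of that approach, reusing names from upthread:

    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    pub struct Foo(pub u8);

    impl Foo {
        pub const A: Foo = Foo(1);
        pub const B: Foo = Foo(2);
        pub const C: Foo = Foo(3);

        // Any byte is a valid Foo; these just classify the ranges.
        pub fn is_reserved(self) -> bool {
            (4..=127).contains(&self.0)
        }

        pub fn is_vendor(self) -> bool {
            self.0 >= 128
        }
    }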

