
Quality is 2nd best to what? And people haven't switched to what? Android? The situation is no better on Google OS.

Apple's App Store ad initiatives have always been woeful, and I doubt they make enough revenue to warrant a separate line item on their public accounting reports. Some executive has seen yet another overfunded company potentially making bank with an ad-based business model (OpenAI, et al.), and has thought they could extract Google-level ad revenue due to the App Store's exclusivity. It could also be a response to potentially competing App Stores, given their rocky relationship with the EU.

It will have little effect on either revenue or user experience. The greater tragedy is the organisational decay that led to this being greenlit in the first place.


> And people haven't switched to what? Android? The situation is no better on Google OS

Agree. Even GrapheneOS is hell to use. I tried both PixelOS and GrapheneOS on a Pixel 9 and ended up returning it. If I was not homeless I would switch to a flip phone and just use a Linux desktop.


> Even GrapheneOS is hell to use.

This is not my experience. GrapheneOS is great and has absolutely zero bloatware/malware. The base system is just a couple of basic apps: the phone app, the messages app, and a web browser. That's it. All the rest is up to the user to set up. You can be completely Google-free if you don't install sandboxed Google Play Services and other GApps.

Without GApps, the setup is extremely private and ideal to use with self-hosted solutions like Nextcloud and Home Assistant as substitutes for the typical commercial malware found on most "smart" phones.


> The greater tragedy is the organisational decay that led to this being greenlit in the first place.

Is it? I feel like that would only be tragic if you expected the App Store monopoly company to care about users instead of profits.

For most of us on the sidelines this is a real "told you so" moment.


The company cares about neither. People inside the company will care about a great many things. The people who care about users either don't have the power to act or no longer care enough to do so.

If the company was trying to extract as much profit as possible, it would be doing so at every level; it would be a company-wide strategy. This just looks disjointed. It speaks to Apple's loss of social cohesion, the signals of which have been apparent for some time.

This isn't an "I told you so" moment, as this initiative is meaningless without context, and it's a poor attitude to take.


When you mentioned offline sync and graft, I mistook you for the author of this project:

https://github.com/orbitinghail/graft

However, they're clearly two different projects.

I don't want to take away from the work you've done, as you're clearly knowledgeable, but as someone else observed, heavy use of AI assistance is evident in all your public projects. It's worth explicitly addressing, especially considering the foundational nature of your project: it's not easily replaced if it turns out to have subtle bugs.

Though I rarely use it myself, I'd like to know, simply because I'm curious as to how other engineers have incorporated such assistance into their process.


I was interested in provisioning one of these a few months back through Scaleway, but couldn't navigate their sign-up process without it dumping me back to the start every time. Nor did I receive a reply when I e-mailed their support address.

I don't know if that's changed (they had odd pricing too, like Startup vs. Business, the difference between which wasn't clear), but be aware. I hope someone has more success than I did.




Scaleway's site and support are horrible, as you've found. But their pricing and offerings are solid and their network is... OK. For the price, they're better than Hetzner... if you can get signed up!


Totally different experience here! For a project I wanted to try “euro cloud”. Something comparable to digital ocean. No need for hyperscaler functionality.

It has been great. Good Terraform provider and reliable service. I like their console, although the design feels very vaporwave to me.

Of course stuff can be better, but it is rapidly improving. The way they implemented Grafana + user management was shit. But that’s fixed. Grafana still feels bolted on, however. And login is a bit weird with their Dedibox-or-whatever button next to the cloud offering. But nowhere near as confusing as AWS is!

Also bumped into a bug in their Terraform provider, found a related bug report, contributed some info, and it was fixed within two weeks.

Quite happy so far. Running serverless SQL, serverless containers, secrets management, and some IAM config. No big stuff, but I'm quite sure it's capable of running a decently sized SaaS.


I won't comment on the reliability of their services, as I've not experienced them. I was signing up specifically to provision an M1 Mac mini, and couldn't get through the sign-up process. It was unfortunate, but worth a comment in case others experience the same issue (or someone could point me in the right direction).

Glad to know you've got good experiences though.


It was specifically for the M1 Mac minis, so I'm not too fussed about everything else, but judging from the comments, experiences are mixed.

I'll try to sign up again in a week or two.


If you have further difficulties, give us a shout on our community Slack: https://www.scaleway.com/en/docs/tutorials/scaleway-slack-co...

There's an #apple-silicon channel there that the team behind the product monitors.


I don't think wpa_supplicant.conf has been used for some time, as they moved to cloud-init for bootstrapping. It requires the network-config file instead, the format of which is documented in the cloud-init documentation.

I happen to have been experimenting with this for the past few weeks, and the most persistent issue was getting wi-fi to work correctly. It's quite a common issue, with any number of hacks. I offer my own network-config below, though I've only tested it when provisioning Ubuntu Server on RPis so far (I have two 3B+s).

  network:
    version: 2
    renderer: networkd
    wifis:
      wlan0:
        regulatory-domain: "AU"
        dhcp4: true
        dhcp6: false
        optional: false
        access-points:
          "<access-point-name>":
            password: "<password>"
The important parts are:

1. The renderer, as the default is NetworkManager, which doesn't work correctly with RPis (at least on Ubuntu Server). It may work with RPiOS, but I haven't tested it yet.

2. The regulatory domain, the lack of which is what disables wi-fi in the first place. I forget how much testing I did with the format, but I believe it must be uppercase (I don't remember about quoting the string, however).

3. Disabling IPv6 may be relevant, though it's unlikely. It was just in a working configuration I found, and I haven't had time to confirm whether it matters. The relevant line in my user-data file is as follows:

  bootcmd:
  - sysctl net.ipv6.conf.all.disable_ipv6=1
The rest of the configuration is standard, though I purposefully made the wi-fi non-optional so I could confirm that wi-fi worked (my only Internet at the moment is through my iPhone hotspot, which was another source of issues, but that's a whole other story).

NB. According to someone else, the imager has the respective command line options for user-data and network-config, which I didn't know.
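For completeness: once both files are written, getting them onto the card is just a copy onto the boot partition after flashing. A rough sketch of one way to do it (the mount point assumes Ubuntu Server's system-boot partition and will differ on your machine):

  # Copy the cloud-init seed files onto the freshly flashed card
  BOOT=/media/$USER/system-boot   # adjust to wherever the card mounts
  cp network-config "$BOOT/network-config"
  cp user-data "$BOOT/user-data"
  sync && umount "$BOOT"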


What were some of the technical challenges you experienced while reverse-engineering the wallpaper system? I've been reverse-engineering (for lack of a better term) some of macOS's and Xcode's poorly-documented functionality while prototyping a personal developer tool. My investigation isn't sophisticated by any means; it's just been trial and error, but I haven't found much online in the way of resources for people going down this route.


Reverse engineering is hard! I use Hopper (https://www.hopperapp.com) to disassemble related binaries and frameworks. It's a great way to explore what's actually happening within macOS or Apple apps.

You can also export assembly files and throw various agents (Gemini, Claude, etc.) at them to learn more. It's surprisingly effective!
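For a quick first pass before loading something into Hopper, the command-line tools that ship with macOS can also tell you a lot. A rough sketch (the TextEdit binary is just an arbitrary example target; swap in whatever you're investigating):

  # Quick triage of a Mach-O binary with stock macOS tools
  BIN="/System/Applications/TextEdit.app/Contents/MacOS/TextEdit"
  otool -L "$BIN"                  # frameworks and dylibs it links against
  nm -gU "$BIN"                    # exported (defined, external) symbols
  otool -tV "$BIN" | head -n 40    # start of the disassembled text section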


I'm no pro rev. engineer or anything, but did you try alternatives to Hopper at all? I've never had much luck with Hopper compared to Radare2 and IDA.


I haven’t tried those, but Hopper has been working pretty well for me. Although I mostly just sift through the assembly and pseudocode, and don’t use many advanced features.


Cool project!

This article is yet another reminder that I need to learn Haskell (I've been meaning to for a decade), although the code from this article is approachable considering the topic. However, I've just started using Rust for professional projects, so the code you've posted is a bit easier for me to read, if more verbose, though the concepts are still unfamiliar to me.

I'm assuming this isn't your first go at writing a compiler?


Glad someone found it useful! It at least represents a more fleshed-out working example, and it's in a little module, so it's pretty self-contained and easy to read through.

> I'm assuming this isn't your first go at writing a compiler?

Not quite, the first real language I worked on was called Eve: https://witheve.com


Ignoring its negative connotation, it's more likely to be a highly advanced "stochastic parrot".

> "You don't do that without some kind of working internal model of mathematics."

This is speculation at best. Models are black boxes, even to those who make them. We can't discern a "meaningful internal representation" in a model any more than we can in a human brain.

> "There is just no way you can get to the right answer by spouting out plausible-sounding sentence completions without understanding what they mean."

You've just anthropomorphised a stochastic machine, and this behaviour is far more concerning, because it implies we're special, and we're not. We're just highly advanced "stochastic parrots" with a game loop.


> This is speculation at best. Models are black boxes, even to those who make them. We can't discern a "meaningful internal representation" in a model any more than we can in a human brain.

They are not pure black boxes. They are too complex to fully decipher, but that doesn't mean we can't look at activations and get some very high-level idea of what is going on.

For world models specifically, the paper that first demonstrated that an LLM has some kind of world model corresponding to the task it is trained on came out in 2023: https://www.neelnanda.io/mechanistic-interpretability/othell.... Now you might argue that this doesn't prove anything about generic LLMs, and that is true. But I would argue that, given this result, and given what LLMs are capable of doing, assuming that they have some kind of world model (even if it's drastically simplified and even outright wrong around the edges) should be the default at this point, and people arguing that they definitely don't have anything like that should present concrete evidence to that effect.

> We're just highly advanced "stochastic parrots" with a game loop.

If that is your assertion, then what's the point of even talking about "stochastic parrots" at all? By this definition, _everything_ is that, so it ceases to be a meaningful distinction.


Swift evolution and governance is open and publicly documented. It will always be dominated by Apple engineers, but the evolution of the language is largely orthogonal to Apple's OS releases.

I'm not sure how much of the standard library is available on the server side. However, it's more about the engineers' interest than it is Apple's, and in that respect, the Swift ecosystem has been evolving constantly, e.g., the Swift toolchain was entirely divested from Xcode a month ago.

I can't speak for the .NET ecosystem, but your fears are unfounded. Whether Swift is useful in a cross-platform context is another question, however.


Orthogonal? Odd thing to say given Swift's evolution and release timeline are entirely constrained by Apple's OS release schedule. We're currently going through the spike in evolution proposals by Apple engineers in preparation for the branching of Swift 6.2 next month before WWDC in June.

As for server side, the standard library is entirely available on other platforms, with a subset available for embedded Swift. However, it's fairly limited when compared to something like Python, and cross platform support for the other libraries like swift-foundation or SwiftNIO is more limited (IIRC SwiftNIO still doesn't support Windows properly).

I'm not sure what you're talking about with the toolchain. Apple has been producing toolchains that can run on macOS outside Xcode for years. Do you mean the integration of swiftly? I think that just brought swiftly support to macOS for the first time.

Ultimately I agree with jchw; Swift would be in a much better position if it wasn't controlled by Apple's process. Features could get more than a few months work at a time. We could have dedicated teams for maintenance intensive parts of the compiler, like the type checker or the diagnostics engine, rather than a single person, or a few people that switch between focus areas.


Firstly, I believe the fears are founded; these fears are a good starting point, since learning and adopting a programming language is a big investment, and you should be careful when making big investments. To say they're unfounded suggests that they have no basis. Disagreed.

Secondly, I don't really feel like this sort of analysis does much to assuage fears, as Apple's business strategy is always going to take priority over what its engineers individually want. The Apple of today doesn't have any obvious reason to just go and axe cross-platform Swift, but if that ever changes in the future, they could do it overnight, like it was never there. They could do it tomorrow. It's not much different than an employee getting laid off abruptly.

This is especially true because, in truth, Apple doesn't really have a strong incentive in the grand scheme of things to support Swift on non-Apple platforms. Even if they use it in this way today, it's certainly not core to their business, and it costs them to maintain, costs they may eventually decide benefit their competitors more than they help them.

There's no exact heuristic here, either. Go is entirely controlled by Google and does just fine, though it has the advantage of no conflict of interest regarding platforms. Nobody writing Go on Linux servers really has much reason to be concerned about its future, partly because Google has quite a lot of Go running on Linux today, and given how long it took them to e.g. transition to Python 3 internally, I can just about guarantee that if Go died it would probably not be abrupt. Even if it were, because of the massive number of external stakeholders, it would quickly be picked up by some of the large orgs that have adopted it, like Uber or DigitalOcean. The risk analysis with Go is solid: Google has no particular conflict of interest here, as they don't control the platforms Go is primarily used on; Google has business reasons not to abruptly discontinue it, especially not on Linux servers; and there are multiple massive stakeholders with a lot of skin in the game who could pick up the pieces if Google called it quits.

I believe Apple could also get to that point with Swift, but they might need a different route to get there, as Swift is still largely seen as "That Apple Thing" by a lot of outsiders for now, and that's why I think they need to cede some control. Even if they did fund a Swift foundation, they could still remain substantially in control of the language; but at least having other stakeholders with skin in the game and a seat at the table would do a lot to assuage fears about Swift's future and decouple aspects of governance from Apple in ways that would ultimately benefit Swift for everyone.

P.S.: And I'm not singling Apple out here, because I think any rational company has to make tough decisions sometimes, but it's obvious from their past that they definitely don't fear changes of plan. Look all the way back to OpenDoc. Being willing to make bold changes of plan feels like it's a part of their company DNA.


My understanding of CVE is superficial at best. I thought it was just an acronym publicly identifying vulnerabilities; I didn't realise there was a political structure behind it all.

While the article presents good food for thought, certification isn't a practical solution to the problem at hand. This database seems like a reasonable alternative.


It is “just” that, but: How are numbers assigned? How can others find details? Who determines when these details are public? (note: full CVE details can be used to exploit critical software) If they’re not always public, who gets to see them? And who handles that dissemination? Who takes care of duplicates?

Lots of work does go into this, even if it’s “just” an identifier.


This is fascinating, but I don't know enough biology to understand the concepts at play. Could someone knowledgeable in the field explain it further?

