
> We no longer have Google, effectively

Veering off-topic a bit... Google lost its (search) way years ago. See "The Man Who Killed Google Search" [1], and the room they left for alternatives like DuckDuckGo.

At work, we have full access to Claude, and I find that I now use that instead of doing a search. Sure it's not 100% reliable, but neither is search anyhow, and at least I save the time I'd spend sifting through a dozen crappy content farms.

[1] https://www.wheresyoured.at/the-men-who-killed-google/


When you use a search engine, you can evaluate the trustworthiness of the source (the webpage); that essentially disappears when using a chatbot

Only if you're dumb about it. Asking for source links is one thing I do all the time, and ChatGPT gives citations by default.

What's the benefit of using a chatbot if you still have to go and read all of its sources on your own?

The same, I suppose, as using Wikipedia to get an overview of a topic, a surface understanding, before following the citations to dig deeper and fully validate the summary.

You can get precise citations supporting the facts of interest to you, so that you don't have to dig through the sources on your own.

At least for the time being the chatbot isn't optimised to deliver as many ads or sponsored results to your eyeballs as physically possible.

> it do get pretty out of hand after a while if you wanted to adopt any newer or stricter language features.

How does it get out of hand?

FWIW, I just do `use v5.32;` or similar to opt-in to everything from that version.

https://perldoc.perl.org/functions/use#use-VERSION

Of course, if you instead want to pick-and-choose features, then I can see the list growing large.


it's extra funny to me because the Raspberry Pi SoC is basically a little CPU riding on a big GPU (well, the earlier ones were. Maybe the latest ones shift the balance of power a bit). In fact, to this day the GPU is still the one driving the boot sequence.

So plugging a RasPi into a 5090 is "just" swapping the horse for one 10,000x bigger (someone correct my ratio of the RasPi5 GPU to the RTX5090)


I'm not very familiar with this layer of things; what does it mean for a GPU to drive a boot sequence? Is there something massively parallel that is well suited for the GPU?

The Raspberry Pi contains a Videocore processor (I wrote the original instruction set coding and assembler and simulator for this processor).

This is a general purpose processor which includes 16 way SIMD instructions that can access data in a 64 by 64 byte register file as either rows or columns (and as either 8 or 16 or 32 bit data).

It also has superscalar instructions which access a separate set of 32-bit registers, but these are tightly integrated with the SIMD instructions (like in ARM Neon cores or x86 AVX instructions).

This is what boots up originally.

Videocore was designed to be good at the actions needed for video codecs (e.g. motion estimation and DCTs).

I did write a 3d library that could render textured triangles using the SIMD instructions on this processor. This was enough to render simple graphics and I wrote a demo that rendered Tomb Raider levels, but only for a small frame resolution.

The main application was video codecs, so for the original Apple Video iPod I wrote the MPEG4 and h264 decoding software using the Videocore processor, which could run at around QVGA resolution.

However, in later versions of the chip we wanted more video and graphics performance. I designed the hardware to accelerate video, while another team (including Eben) wrote the hardware to accelerate 3d graphics.

So in Raspberry Pis, there is both a Videocore processor (which boots up and handles some tasks), and a separate GPU (which handles 3d graphics, but not booting up).

It is possible to write code that runs on the Videocore processor - on older Pis I accelerated some software video decode codecs by using both the GPU and the Videocore to offload bits of the transform, deblocking and motion compensation, but on later Pis there is dedicated video decode hardware to do this instead.

Note that the ARMs on the later Pis are much faster and more capable than before, while the Videocore processor has not been developed further, so there is not really much use for the Videocore anymore. However, the separate GPU has been developed more and is quite capable.


You have the most interesting job!

Thank you, I've used your work quite a number of times now.


> what does it mean for a GPU to drive a boot sequence

It's a quirk of the Broadcom chips that the RPi family uses; the GPU is the first bit of silicon to power up and do things. The GPU specifically is a bit unusual, but the general idea of "smaller thing does initial bring-up, then powers up $main_cpu" is not unusual once $main_cpu is roughly powerful enough to run Linux.


That’s interesting, particularly since as far as I can tell, nothing in userland really bothers to make use of its GPU. I would really like to understand why, since I have a whole bunch of Pi’s and it seems like their GPUs can’t be used for much of anything (not really much for transcoding nor for AI).

> their GPUs can’t be used for much of anything (not really much for transcoding nor for AI)

It's both funny and sad to me that we're at the point where someone would (perhaps even reasonably) describe using the GPU only for the "G" in its name as not "much of anything".


Is video transcoding not “graphics”? Is it doing meaningful graphics work?

The Raspberry Pi GPU has one of the better open source GPU drivers as far as SBCs go. It's limited in performance but it's definitely being used for rendering.

There is a Vulkan API, and they can run some compute. At least the 4 and 5 can: https://github.com/jdonald/vulkan-compute-rpi . No idea if it's worth the bus latency though. I'd love to know the answer to that.

I'd also love to see the same done on the Zero 2, where the CPU is far less beefy and the trade-off might go a different way. It's an older generation of GPU though so the same code won't work.


One (obscure) example I know of is that the RTLSDR-Airband[1] project uses the GPU to do FFT computation on older, less powerful Pis, through the GPU_FFT library[2] (rough sketch below).

1: https://github.com/rtl-airband/RTLSDR-Airband

2: http://www.aholme.co.uk/GPU_FFT/Main.htm
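
For a sense of what that looks like from the ARM side, here is a rough C sketch modeled on the hello_fft sample bundled with GPU_FFT; the entry-point names and signatures are recalled from that sample, so treat them as assumptions and check against the real headers:

  /* Rough sketch of driving GPU_FFT from the ARM side (hello_fft-style). */
  #include "mailbox.h"   /* mbox_open / mbox_close */
  #include "gpu_fft.h"   /* gpu_fft_prepare / gpu_fft_execute / gpu_fft_release */

  int main(void) {
      int log2_N = 12;                  /* 4096-point FFT */
      int mb = mbox_open();             /* handle to the VideoCore mailbox */
      struct GPU_FFT *fft;

      if (gpu_fft_prepare(mb, log2_N, GPU_FFT_FWD, 1 /* jobs */, &fft) == 0) {
          /* fill fft->in[0..N-1].re / .im with samples here */
          gpu_fft_execute(fft);         /* the FFT runs on the VideoCore QPUs */
          /* read results back from fft->out[0..N-1] */
          gpu_fft_release(fft);         /* frees the VideoCore memory */
      }
      mbox_close(mb);
      return 0;
  }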


You can play Quake on 'em.

This experiment really does feel like a poetic inversion

Thanks! In your change https://github.com/tailscale/tailscale/pull/18336 you mention:

> There's also tailscaled-on-macOS, but it won't have a TPM or Keychain bindings anyway.

Do you mean that on macOS, tailscaled does not, and never has, leveraged equivalent hardware-attestation functionality from the SEP? (Assuming such functionality is available.)


On macOS we have 3 ways to run Tailscale: https://tailscale.com/kb/1065/macos-variants Two of them have a GUI component and use the Keychain to store their state.

The third one is just the open-source tailscaled binary that you have to compile yourself, and it doesn't talk to the Keychain. It stores a plaintext file on disk like the Linux variant without state encryption. Unlike the GUI variants, this one is not a Swift program that can easily talk to the Keychain API.


You don't need Swift to use the Keychain API. It's doable from pure C.

In fact, the Security framework doesn't have a real Swift/Obj-C API. The relevant functions are all direct bindings to C ABIs (just with wrappers around the CoreFoundation types).
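
For illustration, a minimal C sketch of storing a secret as a generic-password item with SecItemAdd; the service/account strings are made up and error handling is elided:

  /* Minimal sketch: store a secret as a generic-password Keychain item from C. */
  #include <CoreFoundation/CoreFoundation.h>
  #include <Security/Security.h>

  int main(void) {
      CFDataRef secret = CFDataCreate(NULL, (const UInt8 *)"node-key", 8);

      const void *keys[] = { kSecClass, kSecAttrService, kSecAttrAccount, kSecValueData };
      const void *vals[] = { kSecClassGenericPassword, CFSTR("example-daemon"),
                             CFSTR("state"), secret };
      CFDictionaryRef item = CFDictionaryCreate(NULL, keys, vals, 4,
          &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

      OSStatus status = SecItemAdd(item, NULL);   /* errSecSuccess (0) on success */

      CFRelease(item);
      CFRelease(secret);
      return status == errSecSuccess ? 0 : 1;
  }
  /* build: cc keychain.c -framework Security -framework CoreFoundation */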

> The third one is just the open-source tailscaled binary that you have to compile yourself, and it doesn't talk to the Keychain.

I use this one (via nix-darwin) because it has the nice property of starting as a systemwide daemon outside of any user context, which in turn means that it has no (user) keychain to access (there are some conundrums between accessing such keychains and needing a "GUI", i.e. a user login, irrespective of C vs Swift or whatever).

Maybe it _could_ store things in the system keychain? But I'm not entirely sure what the gain would be when the intent is to have tailscale access through fully unattended reboots.


Good to know, my understanding of the macOS system APIs is fairly limited. I'm sure it's doable, with some elbow grease and CGO. We just haven't prioritized that variant of the client due to relatively low usage.

If you want to avoid Cgo, you can use https://github.com/ebitengine/purego or Goffi to call the native functions. It's a bit cursed, but works.

Only one of the ways uses Keychain per that page.

Ah, looks like another KB update is needed, thanks for calling it out!

> And all of it was free.

It's important to take the perspective that Silicon Valley isn't an incubator of technology, but of business models.

Yahoo Pipes being free is exactly what killed them. Without a sustainable business model, they could not last.


those do not appear to be, er... legitimate ways of activation.

Microsoft has a history of creating new UI frameworks. IMHO it's the result of Ballmer's "Developers, developers, developers!" attitude, which I think is a good thing at core (court the developers that add value to your platform!)

But this results in chasing a new paradigm every few years to elicit new excitement from the developers. It'll always be more newsworthy to bring in a new framework than add incremental fixes to the old one.

React has had tremendous success in the web world, so why not try and get those developers more comfortable producing apps for your platform?

(Tangentially, see also the mixed reaction to Mac native apps switching from AppKit to SwiftUI)


The software biz in general has a major "out with the old, in with the new" attitude, which gets paired with an attitude of "we're going to build what we know, instead of learning the old stuff, which is new to us".

I've seen time and again, things like apps rewritten from scratch because nobody knew C++, and they only had C# devs. Or a massive runaround because the last guy on the team who knew C++ wrote a bunch of stuff and left a couple years back, and now nobody really knew how any of that stuff worked.

> React has had tremendous success in the web world, so why not try and get those developers more comfortable producing apps for your platform?

IMO, this is worth talking about. Zune, Windows Phone, and some others died when they did not, in fact, suck, and were pretty good products which, while late to the game, could have competed if there had just been a decent app ecosystem.


Out with the old, in with the new, doesn't have to be bad, but it depends on what your old and new are. I'd be a lot less skeptical about migrating OS-level stuff from C to Rust than from C to React.

If the motivation is "Because I refuse to learn C", then both approaches will be bad. You can't avoid understanding what you're migrating, but seemingly Microsoft thinks they're above that. Fits with the average mindset of developers within the Windows ecosystem, at least from my experience.

Totally agreed. I have learned a lot of technologies to understand legacy systems, either to keep running them or to migrate away from them. If you do not learn and respect the legacy system, your migration is bound to fail.

I maintain to this day that the Zune was one of the best designed hardware and software platforms I've ever used. Probably the only truly design forward product that MS ever produced.

The Zune hardware was slick, particularly the solid state players. The music store worked great and their music licensing was so much better than Apple's - $10 a month for unlimited streaming, unlimited downloads (rentals) to Zune devices and 10 free MP3 downloads to own.

Their only misstep was making one of their colorways poop brown! That, and being too late to market with a phone that used the same design language.


There was also the fact that Microsoft introduced it 3 months before Apple announced the product that would kill the iPod, leading with the HDD model (a direct competitor to what would become known as the iPod Classic line) when Apple’s real flagship was the iPod nano.

There was also the crap that was Windows Media Player 11 which I tried to like for about a month.

There was also the incompatibility with Microsoft's own DRM ecosystem, PlaysForSure, which was full of subscription music services, some of which were quite popular with the kind of people inclined to buy a Zune: folks in Microsoft's ecosystem who had passed up on using an iPod and used something from SanDisk, Creative, Toshiba or iRiver instead. This is because Microsoft wanted to replicate the entire iPod+iTunes model.

The 2006 lineup of iPods was also particularly strong, and included the first aluminum iPod nanos. When Microsoft announced and released the Zune, they were counter-programming against that, right into the holiday season, with a new brand that had no name recognition and a product that was just like the iPod but couldn't play any of your music from iTunes or Rhapsody, and with… HD radio.

More than a few missteps were made.


> Their only misstep was making one of their colorways poop brown
I think the other big issue was calling it a 'Zune', but that's just me...

“…you’re absurd, what’s a Zune?!”

https://youtube.com/watch?v=Jkrn6ecxthM


Name or color had nothing to do with it imho (I like the brown personally). It was all timing. They were entering a market with a well established leader (iPod) that was nearly as good, as good, or better depending on who you ask. On top of it, phones themselves were taking over the music player market at the same time, which is where Microsoft really dropped the ball.

I mean, iPhone is a really ridiculous name as well if you stop to think about it.


You think having a dumb name would be a negative, but one of the biggest bands in the world is called Metallica.

The Zune software 2.0 remains the pinnacle of Microsoft design

Windows Phone was actually doing well and adoption was taking off when Nadella came in and killed it. It didn't help that they changed the app framework and then blamed lack of apps. Such a brain-dead decision.

Windows Phone was dead in the water because many services did not have first party support, and the third party clients kept getting killed / people banned from said services.

Google was extremely aggressive in muscling Microsoft out. Not only did they refuse to release a Gmail, YouTube or Maps client for Windows Phone, they made sure those services did not work (properly) there either.

And indeed on top of that, Microsoft switched UI frameworks 3 or 4 times. And they left phones behind on the old OS releases repeatedly, that then couldn't run the new frameworks.

Still, Windows Phone's UI concept was really great, and I sorely miss the durability of polycarbonate bodies versus the glass, metal and standard plastic bodies of today.


What burned me was that there was no upgrade path from WP7 to WP8 - after playing around with one and genuinely enjoying the experience, I convinced myself to buy a Lumia 900 in April of 2012, just for Nokia/Microsoft to effectively say "that was stupid, wasn't it?" when the Lumia 920 and WP8 launched just 7 months later. Releasing a so-called flagship device that they knew would be incompatible with their upcoming OS, effectively killing software support before the year was even finished, really doesn't inspire confidence in the longevity of a product.

It was always going to be difficult, but classic Microsoft blunders and extreme arrogance set back Windows Phone dramatically.

They basically couldn't stick to a strategy and alienated every potential audience one by one. I was trying to make a Windows Phone app back then, and developers were forced through an extremely difficult series of migrations where some APIs were supported on some versions and others on other versions, and Microsoft was extremely unhelpful in the process.

They had a great opportunity with low-end phones because Nokia managed to make a very good ~$50 Windows Phone. Microsoft decided there was no money in that; after they bought Nokia, they immediately wanted to hard-pivot to compete head-to-head with Apple, at Apple-like prices. They then proceeded to churn through 'flagships' that suffered updates that broke and undermined them shortly after release, thus alienating high-end users as well.

Having worked at Microsoft, I think the greatest problem with the culture there is that everyone is trying to appeal to a higher-up rather than to customers, and the higher-ups don't care because they're doing the same. I think that works out OK when defending incumbency, but when battling in a competitive landscape Microsoft has no follow-through, because most shot-callers are focused on their career trajectory over a <5 year time frame.


Oh, this was like the Windows 11 before Windows 11. I didn't realize Microsoft made the same mistake twice.

Windows Phone 7 was doing well; for some reason they did a breaking change with Windows Phone 8 and broke app compatibility. I will never understand that, they kneecapped themselves despite being multiple laps behind Apple and Google already…

The reason was moving from the CE kernel to the NT kernel between WP7 and WP8. This was supposed to make developers' lives much easier when porting Windows 8 apps. The minimum hardware requirements had to be bumped, and old WP7 devices could never meet them.

At the very least they could've created some kind of translation / compatibility layer, to ease the transition

The decisions regarding UI frameworks are largely due to internal political conflicts, mostly between DevDiv and Windows.

They have a lot of staff turnover too, and each generation of new SDE has less of a clue how the old stuff worked. So when they're tasked with replacing the old stuff, they don't understand what it does, and the rewrite ends up doing less.

That was my impression of one of the major problems when I worked there 2008-2011. But I don't think it's just one problem.


I think that because their total compensation is lower than FAANG, especially at senior levels, and they are seen as uncool, they sometimes have issues retaining top-notch talent. It's paradoxical, because MS Research is probably the best PLT organization in the world. But they have failed to move a lot of that know-how into production.

Besides, because it's an older company, it might have more organizational entropy, i.e. dysfunctional middle-management. As you say, there are probably several other causes too. But still, it's hard to understand how they can create F#, F*, and Dafny, just to name a few, and fail with their mainstream products.


> dysfunctional middle-management

I thought about this a lot while working at a high-growth company recently.

Decided that regular (quarterly) manager rankings (HR-supported, anonymous) by 2-3 levels of subordinates is the only way to solve this at scale.

The central problem is: assuming a CEO accidentally promoted a bad middle manager, then how do they ever find out?

Most companies (top-down rankings-only) use project success as their primary manager performance signal.

Unfortunately, this has 3 problems: (1) project success doesn't prove a manager isn't bad, (2) above-managers only hear from managers, and (3) it incentivizes managers to hack project success metrics / definitions.

Adding a servant/leader skip-level metric provides a critical piece of information: "Oh, this person is toxic and everyone thinks poorly of them, despite the fact that they say everyone loves them."


Sounds like a great solution, adding random skip connections so that information flows from the bottom to the top of the hierarchy.

Certainly, few companies have managed to avoid this trap. It's largely an unsolved problem.

I've often met managers and execs two levels above me that had a completely delusional view of what was going on below them due to lies spread by middle-management.


> completely delusional view of what was going on below them due to lies spread by middle-management

Corporate dysfunction made more sense to me when I realized higher execs, because of span of control, are too busy to dig into any issue themselves.

Consequently, it's trivial in most orgs for the only information path to be through managers.

Also why I think more effective execs tend to have parallel investigation resources, e.g. a do-anything assistant whom they task with fact-finding.


You also probably couldn't pay me enough to work in the kind of environment that produces such buggy software as Microsoft teams. A message based app which can't even guarantee delivery of messages, or synchronization across devices isn't a good sign for management and delivery.

I was a unix head at the time and ran OpenBSD on my personal Thinkpad. I figured a stint on the Windows team would broaden my horizons and expose me to differences. It did that. I don't regret it. I did in the end feel that the company was not my vibe, but I respect and appreciate some of what came out of there.

Back when I was there, part of my calculus was that cost of living in Seattle was cheaper than the bay. It was about 35% cheaper back then, according to regional CPI data I looked at at the time. Not sure what the difference is today. I believe housing is still substantially cheaper.

I think a few years after I left when more Big Tech opened offices in Seattle, competing companies started paying Bay Area salaries for Seattle living, removing this argument. I haven't watched this closely in recent years.

But fwiw, I was able to save and invest a lot in my Seattle days, despite a salary that was lower than in the bay.


Seattle cost of living is still significantly cheaper than the Bay Area. A lower salary goes even farther given the lack of state income tax, too.

But in a world where Amazon prices are the same, car and gas are the same, cost of living is just rent?

Housing makes a huge difference, but there is also the cost of groceries, dining out, etc.

Basically the housing price difference can mean buying a nice house close to your job vs renting a room in a share-apartment.

Best of both worlds is to save in a high-cost area then move to a cheaper area.


Not sure what you're getting at. Housing is the main cost, and is drastically cheaper in Seattle. Food in Seattle is a bit more pricey.

Housing is just one component; a lot of other stuff costs the same everywhere: if you order stuff from Amazon the price is the same, and if you buy a new car the price is the same.

State and local taxes can make a significant difference for general goods, and especially car purchases.

Amazon also isn't a restaurant, and while they do sort of sell groceries through Whole Foods and Amazon Fresh, those are again priced locally.


Because those languages were created at Microsoft Research, not DevDiv nor Windows.

All different business units.


Is compensation really the issue? Like, people earning 160k simply can’t take a dive into the OS source code and make proper fixes, but people earning 250k magically can?

I don't know. I know there are a lot of people who want to work on the OS source code, given the chance, but need some hand holding in the beginning. Companies in general are not willing to give them the chance, because they don't want to hand hold them.


I think uncompetitive compensation is the dominant factor in Microsoft’s decline. Up there with stack ranking. They claim that it’s 30% cheaper to live there but then they go and capture most of that 30% for themselves.

It is my opinion that developer ability follows a Pareto distribution, like the 80/20 rule where 80% of the work is done by 20% of the people. The job market is more liquid for those who are extremely productive, so it's pretty easy for them to get a pay rise of 30% by switching companies. In the worst case you can often come back with a promotion because, like many companies, Microsoft is more likely to promote you when trying to poach you back. Doing a 2-year stint at Amazon was quite common. The other problem is that when your best people leave, the process is iterative: not only are you getting paid less, but you are now working with people who couldn't easily switch jobs. You start being surrounded by incompetence. Stack ranking, which I hear is still being done unofficially, also means that you put your promotion and career in danger by joining a highly productive team. So it is rather difficult to get highly productive people to work on the same team.

Being paid less, being surrounded by incompetence, and being forced to engage in constant high stakes politicking really sucks.


I still think there are ways to hand-hold people a bit and grow an ordinary engineer into a better one who is fit for systems programming in maybe 12 months.

Otherwise, as you said, the only way is to offer the best compensation so that people don't leave. But again, those people would probably leave for different reasons (e.g. culture).


Compensation is the easiest way and probably the most essential. It is hard to maintain a good culture when your best keep getting poached away with large sums of money. If Microsoft was the only game in town then sure they could get away with paying less, but they're not so they cannot.

Yeah you have a point. I wonder what the NT kernel team looks like nowadays.

Compensation can be the issue if the cost of living is creating problems. If you need 150k to just live in an area, 160k is not motivating while 250k gives you the peace of mind to focus on the work, not just on surviving. If you live in Bangladesh, the difference between 160k and 250k is almost meaningless.

Also, compensation is a sign of respect and influences motivation. If you position yourself lower in the market, there is no reason to deliver top results for less money, correct? This attracts mediocrity, especially in management, and slowly kills companies. Usually there is no way back; no large company can replace its entire management at once, and the mediocre ones will reject new, better ones.


It's not about the amount, but the type of people who stay when they could move to a higher paying job.

And the fact that it's impossible to poach people from companies offering a higher salary than you do. Unless you give them something more, like better conditions, or "mission", or the chance to work on something cool, but I don't think any of those apply to Microsoft.


If you could earn $250k, why would you settle for $160k? There are reasons people do, but still, money is a powerful signal.

A kernel engineering job is much more fun than yet another backend web gig. In large part because in typical web coding, people do not want you to do actual software engineering.

But the actual issue is that if you underpay people they will not feel respected and valued so they will either not be motivated or leave. So you cannot pay below market, but you do not need to pay FB salaries either.


Theoretically (it has never happened to me), I'd definitely take a $100K Windows kernel job, or whatever kernel work, over the $150K DE job that I currently have (I used to have a $220K DE job too, and I wouldn't hesitate to switch).

I can confirm: the guys still around on the WinUI team and related frameworks always appear clueless when posed questions about Windows features they are supposed to know about.

Just go watch a few recordings on their YouTube channel.


Raymond Chen tries to document it, but he's just one person.

https://devblogs.microsoft.com/oldnewthing


From the outside looking in, one wonders why this is allowed to continue. Microsoft's old-school "developer tools for money" business is slowly dying (because Visual Studio proper is less popular than it's ever been, since so much is targeting the web). You would think they'd reorganize, move .NET and GitHub and such into their cloud team, and yeet whatever toxic leadership is preventing Windows from using Microsoft's own frameworks.

IIRC .NET was banned from core Windows components after Longhorn died, but it's been 20 years. .NET is fast now, and C++ is faster still. Externally developed web frameworks shouldn't be required for Windows.


It's a largely dysfunctional org creating largely dysfunctional software, i.e. Conway's law. Dysfunctional orgs tend not to be capable of fixing themselves, especially without an external threat. Satya Nadella, like many CEOs, seems mostly interested in impressing his peers, and these days that means fancy AI; before that it was quantum chips.

Microsoft has produced some great technology and when I was last there I was definitely focusing on getting as much of the good stuff out into open source as possible.

Back in the early V8 days the execs imagined JavaScript would keep getting exponentially faster. I tried to explain that with a similar investment, anything V8 could do, .NET could do better, as we had more information available for optimization.


Yeah, .NET is actually an impressive piece of tech. They have F# too which is a really solid programming language. And then they chose React of all things to build core OS UI.

Because .NET is under DevDiv, F# came from Microsoft Research, and the OS is under Windows team.

Windows team even refuses to have managed bindings for DirectX, like Apple and Google do on their platforms.

Managed DirectX and XNA were pushed by highly motivated individuals, and lasted only as long as they stayed at Microsoft.


Longhorn was politics, then Google ate their lunch on mobile with Java and JavaScript userspace, across two platforms.

DevDiv is a "here C++ rules!" silo, even the Rust adoption is being widely embraced at Azure, less so on Windows team.


Yeah, as far as I understand it, that politics is: Sinofsky entrenched NIH on every team that he touched.

Just curious, what is DevDiv? The tools division?

As I understand it, .NET, developer tools, and VS.

Basically you have tight OS integration vs developer-friendly cross-platform.


I think it also includes GitHub now.

I believe GitHub is under a different group (CoreAI), not DevDiv.

Thank you!

I think the reason they keep trying new UI frameworks is that no one really adopts them. Developers know that Microsoft won’t kill off backward compatibility and break all the enterprise apps, so why rewrite? When one framework fails, they start working on the next one. I question if they understand the corner they’ve painted themselves into.

I stopped writing Windows applications back in the early 00s

my Windows API knowledge (essentially: just Win32) is still exactly as useful as it was then, having missed the 7 or 8 different UI frameworks in the interim


Win32 is basically frozen on Windows XP.

Since Vista most newer APIs are done in COM, or WinRT nowadays.


I remember a book describing the changes to the API in Vista and 7 compared to XP, and it was really thin. Just a few extra APIs for showing controls in the taskbar preview and things like that. Win32 is a stable API, and I hope they don't let anyone from the Windows 11 modernization team touch it.

> Win32 is a stable API and I hope they don't let anyone from the Windows 11 modernization team touch it.

I've heard a Microsoft executive talk about Win32 as legacy that they want to replace. I don't think that's realistic though; it's probably the last piece of technology keeping people on the platform.


It was the goal with UAP and UWP, but they clearly messed up the execution.

https://learn.microsoft.com/en-us/uwp/win32-and-com/win32-an...

Win32, the C API, has been stagnant since Windows XP, other than some ...Ex and ...ExN kinds of additions.

As mentioned above, the new APIs are mostly delivered as COM, occasionally with some .NET bindings.

There is still a silo trying to push WinRT on the Win32 side, although given how they made a mess of the developer experience, only those with Microsoft salaries care about it.

This oldie shows some of the background,

https://arstechnica.com/features/2012/10/windows-8-and-winrt...


Last year I ran into the issue of the .NET network bindings not returning all NICs. [0] This issue had been present in .NET since the beginning and was only resolved in .NET 9. I had to create my own Win32 wrapper so everything works properly on the .NET 4 frameworks ... I still need to maintain Windows 7 pre-SP1 support in some applications.

Smells like Microsoft was trying to create APIs based on assumptions versus a 1:1 method that exposes managed code and hides unmanaged.

[0] https://github.com/dotnet/runtime/pull/100824


Except that for anything that came after XP, you need to at least make use of COM.

WinRT can be avoided if you don't do any modern stuff like the new context menu, WinUI, or Windows ML.


And those new APIs (at least the context menu API) apparently require a "package identity", which requires a signed MSIX installer, which requires paying for a code-signing certificate, unless I'm missing something in the docs.

That is because people claiming UWP is dead haven't gotten the whole picture.

UWP as a separate subsystem is indeed deprecated, and no one should be using it, although Microsoft was forced to introduce .NET 9 support on UWP, because many refuse to move away from UWP to WinUI 3.0, given the lack of feature parity.

Now, when WinRT was made to also work on the Win32 side, it brought with it the concept of package identity, which forms part of the UWP-style app isolation (similar to Android), now on Win32 as well; hence the MSIX.

https://learn.microsoft.com/en-us/windows/win32/secauthz/app...

In the specific case of the context menu, it depends on what is being done:

https://blogs.windows.com/windowsdeveloper/2021/07/19/extend...

You can work around app identity when using the suggested alternative of unpackaged Win32 apps with sparse manifests.


Me too, back then I wrote applications either using the raw Win32 API (GetMessage, TranslateMessage, DispatchMessage, etc), or using MFC.
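
For anyone who never wrote one, the heart of a raw Win32 program was the message pump built from exactly those calls; a minimal sketch (the wrapper function name is just illustrative):

  /* The classic Win32 message pump: pull queued messages and hand each one
     to the window procedure registered for its target window. */
  #include <windows.h>

  int run_message_loop(void) {
      MSG msg;
      BOOL ret;
      while ((ret = GetMessage(&msg, NULL, 0, 0)) != 0) {
          if (ret == -1)
              return -1;                /* error, e.g. invalid window handle */
          TranslateMessage(&msg);       /* generate WM_CHAR from key messages */
          DispatchMessage(&msg);        /* invoke the target window's WndProc */
      }
      return (int)msg.wParam;           /* exit code set by PostQuitMessage */
  }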

I think MFC is now long dead and buried, but at the time I liked it despite the ugly macros.


MFC is actually still supported by MS, works in the most recent MSVS, and even occasionally receives updates.

> the result of Ballmer's "Developers, developers, developers!" attitude

I think Microsoft's framework chasing has been a betrayal of that philosophy. Internal divisional politics have been major drivers behind fracturing and refusing to unify the stack and its UI approach, and without clear guidance from the Office team, the direction of the entire platform's UI is opaque. Short-term resume and divisional wins at the expense of the whole ecosystem.

A developer-centric platform would let developers easily create platform-specific UIs that look modern and normal. As it is, the answer to how to 'hello world' a Windows desktop app is a five-hour "well, akshully…" debate that can reasonably resolve to using Meta's stack. "VB6 for the buttons, C++ for the hard stuff" is a short conversation, at least.


I think it's more a result of "okay, we made it and it works, how can we now justify keeping the same head-count" development. And the answer is of course "rewrite, rewrite, rewrite". UI works well, no major bugs? TIME TO CHANGE IT TO BE "MODERN"

Operating systems should always use C/C++ UI frameworks, and as little costly abstraction as possible, period. Anything else is wasting resources.

It’s not so much about wasting resources as it is about the added latency, jankiness, and inconsistency in look & feel hurting usability.

And how about reliability? Having to start a web browser and a web framework to display core OS functionality adds a lot of moving parts that can break.

Latency is literally wasting resources

The point was that users mostly don’t care about wasting resources, but about usability. If usability wasn’t impacted, few people would care about resources being wasted. But since usability is very much impacted, people (rightfully) complain.

Wasting resources affects usability, and not just through latency.

It doesn’t necessarily. A lot of resources are being wasted without impacting usability. Users only start noticing and complaining about it when it does.

It does, period. Fewer things can run in parallel, fans get louder, batteries don't last as long, and devices feel old years before they should, all of which directly affects usability. The user not noticing the direct cause, or having enough spare resources to at least perform some tasks acceptably, doesn't change anything.

> React has had tremendous success in the web world, so why not try and get those developers more comfortable producing apps for your platform?

Because web stuff utterly sucks for making UIs on the desktop. Microsoft either doesn't know this (bad sign), or is choosing to use the trendy thing even though it'll make their software worse for customers (a worse sign). Either way it's a very bad look from MS.


probably trying to repro the crazy success of vscode, surely electron is the magic sauce and not the dream team of devs. azure data studio should've proved that you can't just sprinkle electron dust and get a winner.

sadly I loved azure data studio despite its being afflicted with electron, but it became so bug infested they had to completely abandon it.


vscode is successful despite electron, not thanks to electron. The electron part is the worst of it.

No, it does not suck.

I attempted to use WinUI 3, and could not even get PNGs to render the colors correctly, no matter what setting I tried.

Then I gave Tauri a try, and everything worked out of the box, with the PNGs rendering even better than in the Windows Photos app.

Building the UI was much easier, it looked better, and you get to build the "backend" in Rust.

Nothing about this sucked.


WinUI 3 is basically utterly pathetic bul_sh_t attempting to pretend that it is a UI framework. A wet paper plane passing itself off as a passenger aircraft. Please compare with a real desktop UI framework like GTK or Qt. Or just a more modern one like Rust Iced or gpui/slint

Changing UI frameworks all the time is fine; so is not changing anything for decades. Different strategies that both have value. Reasonably you want to be somewhere in the middle, depending on the use case. In an industrial setting, production infrastructure, etc., you generally want to change as little as possible: "if it isn't broken, don't fix it". On emerging, consumer-facing technology such as mobile in the 2000s, "move fast and break things" makes sense.

But anyways, it is not the problem. The problem is just that Microsoft today is doing a terrible job.

The best example, I think, is the control panel. Starting from Windows 8, they changed it. Ok fine, you may like it or hate it, but to be honest, it is not a big deal. The problem is that they never finished their job, more than a decade later! Not all options are available in the new UI, so sometimes the old control panel pops up in a completely different style with many overlaps in features. And every now and then, they shuffle things around in hope that one day, the old control panel won't be needed anymore.

If you make a change, commit to it! If you decide to replace the old control panel, I don't want to see it anymore. It is certainly easier said than done, but if you are a many-billion dollar company and that's your flagship product, you can afford to do hard things!

Using a web engine to build UIs is fine too. As an OS-wide component, a web engine is not that big in terms of resource use, if properly optimized. The problem with Electron is that every app ships with its own engine, so if you have 10 apps, you have 10 engines loaded. But if everything uses the same engine, and apps don't abuse it, then the overhead is, I think, almost negligible. It is rare not to have a browser loaded nowadays, so system components can take advantage of this. But again, you need to do it right, and it takes skills, effort and resources.


Blaming this on Ballmer is a serious stretch; I can't see how you would come to that conclusion. "Developers, developers, developers" was for the launch of .NET, and it brought us a platform that is still considered cutting-edge 25 years later.

UX was fine in the Windows Forms days, and WPF was a large step forward (responsive layouts, etc.). The problem was that after that it all fell apart, with Windows 8 and the attempt to switch to Metro, followed by the Windows Store fiasco and the attempt to move to a sandboxed application model.

It all comes down to Microsoft's failure to adapt to mobile/tablets in so many ways. Which is kind of hilarious, because they had some really cool tech for the time (the PocketPCs were pretty fun back in the day, before touch came along).


Remember when Silverlight was _the_ future?

How long did it last? Ironically it still gives me the shits, because you can't select text on Netflix's front end.


Problem with SwiftUI is that it only works well on macOS 26, maybe one version prior. AppKit works well on all macOS versions.

Building a macOS 26 only app in SwiftUI today is a great UX, just as fast as AppKit.

But it takes quite some effort to turn an iOS SwiftUI app into a real macOS experience. Though most macOS optimizations help for iPadOS as well.


When I was a developer I was not amused at all by the constantly changing APIs, to be honest. And UWP was really sucky. Way too aligned with mobile and tablet, which nobody actually uses on Windows. Even as a user I'm glad it didn't take off.

> Tangentially, see also the mixed reaction to Mac native apps switching from AppKit to SwiftUI

I'll take AppKit -> SwiftUI over Win32 -> windowsx.h -> VB6 -> MFC -> ATL -> WTL -> WFC -> WinForms -> WPF -> WinRT -> UWP -> WinUI3 -> MAUI.

Even with all that Microsoft still went outside and used React Native for the start menu and Electron for the Visual Studio installer and Visual Studio Code.


Torturers Inc, that operates in $country_i_hate & tortures over 10,000 people each day, is an outlier and should not have been counted

didn't you do something similar for Datasette, Simon?


Nothing smart with HTTP range requests yet - I have https://lite.datasette.io which runs the full Python server app in the browser via WebAssembly and Pyodide but it still works by fetching the entire SQLite file at once.



yay no C strings!

