For anyone wanting to learn about OS development, nothing beats MIT 6.828.
I finished the assignments in that course, and it covers all the important aspects: processes, context switching, CPU modes, page tables and virtual memory, plus other relevant topics like file systems and device drivers. And it's free.
From the table of contents, this course gets too involved in ancillary matters like bootloaders or the Rust language itself, whereas the focus of any OS development tutorial should be on core concepts: how processes are implemented, how context switching works, and how paging and, consequently, multi-level page tables work (actually, in code).
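For anyone wondering what that looks like concretely, here's a toy sketch in Python (my own simplification, not from any course: two levels of dict-based tables standing in for the real thing, which xv6 implements in C over physical memory with more levels):

```python
PAGE_SIZE = 4096        # 4 KiB pages: the low 12 bits of an address are the offset
INDEX_BITS = 9          # 9 bits of virtual page number consumed per level

def translate(root, va):
    """Walk a toy two-level page table and return the physical address, or None."""
    vpn1 = (va >> (12 + INDEX_BITS)) & 0x1FF   # index into the top-level table
    vpn0 = (va >> 12) & 0x1FF                  # index into the second-level table
    offset = va & (PAGE_SIZE - 1)

    second_level = root.get(vpn1)              # a missing entry is a page fault
    if second_level is None:
        return None
    frame = second_level.get(vpn0)
    if frame is None:
        return None
    return frame * PAGE_SIZE + offset

# Map the virtual page at 0x200000 to physical frame 42, then translate an
# address inside that page.
root = {1: {0: 42}}                            # va 0x200000 -> vpn1=1, vpn0=0
print(hex(translate(root, 0x200123)))          # -> 0x2a123
```

The whole trick is just a pointer chase: each level of the table consumes another slice of the virtual address bits until you reach a physical frame.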
I work at MSFT. I understand why they migrated to LibreOffice. Outside of work, I don't use any MSFT products.
I do have some burning questions though:
1. How are they saving their work to the cloud if they use LibreOffice? I don't think it offers the same functionality that the M365 suite does.
2. How are they handling IT security? Are they using a different vendor?
Well, now they can handle it more seriously, which they couldn't quite do before. That's because Microsoft, your company, is one big security breach. You are known to pass information that gets into your hands to US federal intelligence agencies, and you probably use it for all sorts of commercial purposes, like training AI models, directing advertising, etc. So by installing Microsoft Office, especially Office 365 and its cloud facilities, they were ensuring a security failure.
Sad news. HBO has a veritable treasure trove of TV shows like The Sopranos, The Wire, Six Feet Under, Silicon Valley, etc. Even their more recent ones, like The White Lotus and How To with John Wilson, are leagues above Netflix. Only HBO can bet on artists' vision like that.
I cancelled my Netflix subscription 7 years ago; 99% of their content is algorithmic drivel. Mindhunter, Dahmer, and House of Cards were shows I liked, but nothing beyond that. I knew they were trash once I saw the sheer number of spinoffs they have just on Pablo Escobar. They had one decent run with Narcos, but then they just tried to extract every drop of juice out of that one persona. Most Netflix dramas are the equivalent of abhorrent and ugly graffiti. Their shows are Exhibit A in what happens when you give in to algorithmic drivel and have no human touch to curate them.
HBO has some timeless TV classics that I keep rewatching every year, even though I have seen them multiple times. Netflix can't produce TV dramas like that; it ain't in their blood. Completely different DNAs.
Netflix does deserve all the plaudits with respect to their streaming experience, though.
I bought a ThinkPad T14s a couple of months ago. It's lightweight and has great build quality. I installed Ubuntu, and it almost ran out of the box, but I ended up having to tinker with it to get my Dell docking station and the i3 window manager to work. That is something I was willing to live with, and so far I have had no complaints. If you're using Linux, sleep and standby performance aren't good, but they're much better than on my previous laptop.
As for my previous laptop, which I still have: I bought a ThinkPad L480 in 2018. It was a dirt-cheap ThinkPad at the time, but it did the job with no complaints. I had to replace the battery after 4 years, but that wasn't an issue. It did everything a daily driver is supposed to do: it was reliable and never threw a fit. I only replaced it because I felt I needed a better screen and more performance; the Intel processor was showing its age.
I have only minor complaints running a ThinkPad with Ubuntu. But if you start moving away from popular distros, then you have to accept that you will occasionally have to tinker to get things to work.
Thanks. Yeah, my 470S is still holding strong; I only upgraded the RAM to 24 GB and replaced the two batteries. Now the battery lasts around 4 hours and I'm happy. I do agree that it's showing its age, e.g. having too many tabs open in Chrome while playing HD videos on YouTube may stress it a bit, but so far no complaints.
I'll check out the T14s. One of my concerns is that it seems more difficult to replace batteries in modern laptops. I tried to remove the battery of the Dell 5550 last night and found it harder than on older models. How about the T14s?
As long as you have the small screwdrivers used to tinker with electronics, you’re good.
You can easily open up the laptop.
On my L480, I opened up the laptop to change the battery and also install more RAM. Even for a hardware neophyte such as myself, this was straightforward.
ThinkPads are modular, so you can easily get components such as a replacement battery. My T14s comes with a 3-year warranty as well.
It will eliminate some particular employees; the roles may be retained and filled with other personnel who are probably lower-cost (because of location, experience, or a combination of the two), since otherwise there would be no business purpose.
Or, even more cynically, the roles can be nominally retained but kept open, due to an "inability to find the right talent" or something like that, through the current economic downturn (outside of AI, for now), and then filled again later.
It’s not doublespeak. Multiple people can share the same role. Downsizing reduces redundancy. The number of roles can remain the same even with a smaller headcount.
And who manages the HR department employees? Not being facetious, I've wondered this for a while now. It seems like they have almost unchecked power to run a corp into the ground.
If a company is the Soviet Union and the CEO is Stalin, HR would be the NKVD. How did Stalin keep the NKVD in line? They were also being murdered themselves while filling their murder quotas.
I work at MSFT. Everything the author says is 100% true not only at MSFT but probably at every Mag-7 company.
It’s also the same reason why MSFT doesn’t have a blockbuster AI product.
1. At work or for personal use, I use GPT-5 or Claude Code. I am forced to use Copilot because it has access to internal company data, but it's nowhere close to GPT-5 or Claude.
2. MSFT open-sourced VS Code but couldn't, on its own, engineer products like Cursor or Windsurf. Let's leave aside the economics of these products for now.
A regular down-in-the-trenches engineer like me is so busy thinking about how to advance my career by playing political games and currying favour with my manager or my manager's manager that little time gets spent on building products.
Good thing MSFT has all the cash in the world to invest in companies like OAI, GitHub, etc., because the bureaucracy at MSFT is stifling.
Go do the xv6 labs from the MIT 6.828 course, like yesterday. Leave all textbooks aside (even though there are quite a few good ones), forget the GitHub tutorials with patchy support and the blogs that promise you pie in the sky.
The good folks at MIT were gracious enough to make it available for free, free as in free beer.
I did this course over ~3 months and learnt immeasurably more than I did from reading any blog, tutorial, or textbook. There's broad coverage of topics like virtual memory, trap processing, and how device drivers work (at a high level), which are core to any modern OS.
Most of all, you get feedback on your implementations in the form of tests that tell you whether you have a working, effective solution.
I work at MSFT. There's top-down pressure to use LLMs everywhere. At this point, if you can convince your management to use LLMs anywhere, they will happily nod and let you go do it. And management themselves are not that technical with respect to LLMs; they are being fed the same AI hype slop that we are.
Most of these efforts have questionable returns, and most projects usually involve increasing test coverage or categorising customer incidents for better triage; apart from that low-hanging fruit, not much comes out of it.
People still play the visibility game though. Hey, look at what we did using LLMs. That's so cool, now where's my promotion? In terms of business outcomes, some low-hanging fruit has been plucked, but otherwise it doesn't live up to the hype.
Personally, I find it helpful in a few scenarios:
1. A much better search interface than traditional search engines. If I want to ramp up on some new technology or product, it gives me a good broad overview and references to dive deeper into. No more 10 blue links.
2. Better autocomplete than before, but it's still not as groundbreaking as the AI hype hucksters make it out to be.
3. If I want to learn some concept (say, how the ext4 filesystem works), it can give a good breakdown of the high-level ideas, and then I go study and come back with more questions. This is the only genuine use case that I really like: I can iteratively ask questions to clarify and cement my understanding of a concept. I have used Claude Code and ChatGPT for this and can barely see any difference between the two.
I have a similar mandate and a similar take, but slightly different use cases.
As to the search engine, my searches are often very narrow, like I want to recall a specific message from a mailing list, so I don't use that too much. On the other hand, I found Google's NotebookLM to be really good at recalling concepts from both source code and manuals (e.g. processor manuals in my case).
Code generators are incredible refactoring machines. In one case (not so easy to reproduce in general, but it did work), Claude Code did a Python-to-decently-idiomatic-Rust conversion in a matter of minutes. It also added mypy annotations to 2000 lines of Python code (with roughly 90% accuracy) in half an hour and, with my assistance, got the entire job done in about an hour. For the actual writing and debugging, where the logic matters, they're still not there even for small code bases (again, 2000 lines of code ballpark). They're relatively good at writing and debugging test cases, but IMO that's also where there's a risk of copyright taint. Anyhow, it's something I would use maybe 2-3 times a month.
In one case I used it for natural-language translation, with pretty good results, but I knew both languages, since I needed to check the result. Ask it first to develop a glossary and then to translate.
For studying they're interesting too, though for now I have mostly tried that outside work. At work, Google Deep Research worked well relative to the time it takes, and it's able to find a variety of sources (including Hacker News comments in one case :)), which is useful for cross-checking.
So what does 90% accuracy mean here? Is this like you ran it through a linter or language server and 90% came back clean? Or did you just feel, from a quick glance, that it was about that accurate?
I've found incorrect type hints to be one of the biggest issues when trying to use Python type-safely. Mostly (entirely?) with packages that get their own hints wrong, meaning methods aren't shown as existing on the class, or the returned instance isn't the class it was said to be.
The 90% accuracy figure was just my own hunch, but it is true that, out of 2000 lines, roughly 200 were changed, and roughly 20 mypy errors were left after the first fully automatic pass.
Most of these were due to code that used duck typing; it turned out this was dead code, so it's understandable that the LLM got confused. Once the dead code was removed, the remaining 5 or so issues were annotations that were too loose in either variable or argument declarations. Doing it by hand would have taken at least a couple of hours.
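To illustrate the kind of code that trips up an automatic annotation pass (a made-up example, not from the codebase discussed above): duck-typed call sites accept anything with the right methods, so a first pass tends to fall back to overly loose types, whereas a Protocol can spell the duck type out.

```python
import io
from typing import Any, Protocol

# Duck-typed original: anything with a .read() method is accepted.
def head(source, n=100):
    return source.read()[:n]

# What a loose first pass often produces: valid, but it won't catch
# mistakes at the call sites.
def head_loose(source: Any, n: int = 100) -> Any:
    return source.read()[:n]

# A tighter annotation spells the duck type out, so mypy can check
# callers without tying them to one concrete class.
class Readable(Protocol):
    def read(self) -> str: ...

def head_typed(source: Readable, n: int = 100) -> str:
    return source.read()[:n]

print(head_typed(io.StringIO("hello world"), 5))   # -> hello
```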
I am an employee and a stockholder both. Obviously, I would rather have a job than not, but at the same time a significant portion of my personal wealth is in MSFT stock.
I have seen engineering teams at MSFT that provide questionable value to the business, so trimming the fat does make sense. Also, these multiple rounds of layoffs have made me internalise that we need to be working on something valuable rather than merely useful.
Can someone discern why these layoffs are being done? Driven by large shareholders? More runway until GenAI starts generating revenue while we keep burning money on GPUs?
> I am an employee and a stockholder both. Obviously, I would rather have a job than not, but at the same time a significant portion of my personal wealth is in MSFT stock.
There's something especially perverse about compensation for employment coming in a form that grows in value when the company terminates employment. Ostensibly the point of stock compensation is to reward work that helps the company, but instead it provides cover for the company to leech the compensation employees would otherwise get as salary, growing the value of the stock with less pushback from the very people it is taking that compensation from.
People often decry stock-based compensation as "golden handcuffs" that stop you from leaving, but perhaps the more apt metaphor would be a golden guillotine; they'll have you cheering for the executions right up to the point when your own head is on the chopping block.
> Can someone discern why these layoffs are being done
They spent too much on AI without making enough revenue from it. So they need to cut elsewhere so that in quarterly reports, it looks like the AI investment returned the right numbers.
Between roughly 2012 and 2022, pay and benefits for engineers at US tech companies skyrocketed. This happened for a lot of reasons, and it is not something that investors and management like.
Starting around 2022, tech companies began feeling pressure from investors to control costs, at the same time that the COVID-era froth was dying down and the economy was becoming more unstable. The first round of layoffs pushed back against over-hiring, but by laying people off in concert, tech companies managed to halt (and somewhat reverse) the growth in employee compensation. Regular small layoffs since then have kept the hiring market awful, so companies can hire for less and keep pay flat over time.
These layoffs are part of a broader reaction by capital against the power gains that labor made over the past decade (especially during the COVID era).
When you invent accounting terms to hide how much money Azure is really losing, it catches up to you eventually. Even Ballmer, still one of the largest shareholders, called them bullshit.
And history has shown that nothing juices executives' stock-price-based bonuses like layoffs.
> I am an employee and a stockholder both. Obviously, I would rather have a job than not, but at the same time a significant portion of my personal wealth is in MSFT stock.
> I have seen engineering teams at MSFT that provide questionable value to the business, so trimming the fat does make sense. Also, these multiple rounds of layoffs have made me internalise that we need to be working on something valuable rather than merely useful.
> Can someone discern why these layoffs are being done? Driven by large shareholders? More runway until GenAI starts generating revenue while we keep burning money on GPUs?
I'm quoting this post in its entirety. I wonder whether you personally will be impacted by this wave of layoffs and, if so, how your opinion will change.
Somehow we've gotten away from management's role being to create great teams. Instead, the ICs get thrown under the bus, rather than the managers who created and hired the teams.