Our developers once managed to rack up around 750MB per open website.
They had put in a ticket with ops saying the server was slow and could we look at it. So we looked. Every single video on a page with a long video list pre-loaded a part of itself. The only reason the site didn't run like shit for them was that the office had direct fiber to our datacenter a few blocks away.
We really shouldn't allow web developers more than 128kbit of connection speed; anything more and they just make nonsense out of it.
PSA for those who aren’t aware: Chromium/Firefox-based browsers have a Network tab in the developer tools where you can dial down your bandwidth to simulate a slower 3G or 4G connection.
Combined with CPU throttling, it's a decent sanity check to see how well your site will perform on more modest setups.
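If you want the same sanity check in an automated run rather than by hand, a rough sketch using Playwright's raw CDP session could look like this (Chromium only; the throttling numbers and the URL are just placeholder assumptions, not recommendations):

```ts
import { chromium } from 'playwright';

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // A raw CDP session gives access to the same throttling DevTools uses.
  const cdp = await page.context().newCDPSession(page);

  // Roughly "slow 3G": ~400 kbit/s each way with 400 ms of added latency.
  await cdp.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 400,                          // added round-trip latency in ms
    downloadThroughput: (400 * 1024) / 8,  // bytes per second
    uploadThroughput: (400 * 1024) / 8,
  });

  // 4x CPU slowdown, like the DevTools Performance panel setting.
  await cdp.send('Emulation.setCPUThrottlingRate', { rate: 4 });

  await page.goto('https://example.com'); // placeholder URL
  // ...measure load times, run whatever checks you care about
  await browser.close();
})();
```

Playwright only exposes CDP on Chromium, so for Firefox you'd rely on the DevTools UI itself.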
The main justification floated is that the car was "going fast" and thus made the undercover Israeli soldiers feel unsafe.
The New York Times describes it as such:
"Ali Bani Odeh’s wife and four young boys hadn’t seen him in a month and a half when he came home to Tammun, in the West Bank, from his construction job in Israel late on Friday to spend the last few days of Ramadan with his family.
On Saturday night, the boys persuaded him to take them out for a drive. Eid al-Fitr, the end of Ramadan, was coming, so there were new clothes to buy. The day’s fast had been broken, so there were sweets to be had, too.
They picked up fried doughnut holes in Tubas, saving them for later, but the clothing shop they went to in Nablus was closed. It was already past midnight, so they headed back to Tammun: Khaled, 11, the oldest, in the back with Mustafa, 8, and Muhammad, 5. Othman, 6, blind and incapable of walking or feeding himself, was in his mother’s lap in front.
As they rounded a corner slowly, a few minutes from home, young Khaled and Mustafa recounted on Sunday, their mother, Waad, 35, asked her husband to pull over and take Othman from her so she could get something from her bag on the floor. Suddenly, the boys said, they saw laser pointers shining on their family from every direction, heard their mother scream, heard their father say “God is great” — and then heard a deafening fusillade of gunfire."
Haven't seen this mentioned yet, but the worst part for me is that a lot of management LOVES to use Claude to generate 50-page design documents, PRDs, etc., and send them to us to "please review as soon as you can". Nobody reads them, not even the people making them. I'm watching some employees just generate endless slide decks of nonsense and then waffle when asked any specific questions. If any of that is read, it is by other people's Claude.
It has also enabled a few people who haven't written code or planned out implementation details in a long time (sometimes a decade or more) to do so again, and so I'm getting some bizarre suggestions.
Otherwise, it really does depend on what kind of code. I hand-write prod code, and the only thing AI can do there is review it and point out bugs to me. But for other things, like a throwaway script to generate a bunch of data for load testing? Sure, why not.
I use Playwright to intercept all requests and responses and have Claude Code navigate to a website like YouTube and click and interact with all the elements and inputs while recording all the requests and responses associated with each interaction. Then it creates a detailed, strongly typed API client for interacting with any website via its underlying API.
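A minimal sketch of the recording half looks roughly like this (function and variable names here are illustrative, and the typed-API generation is a separate post-processing step):

```ts
import { chromium, type Request, type Response } from 'playwright';

// Record every request/response pair observed while a page is driven
// (by hand, by a script, or by an agent doing the clicking).
async function recordTraffic(url: string) {
  const browser = await chromium.launch({ headless: false });
  const page = await browser.newPage();

  type Entry = { method: string; url: string; status?: number; body?: string };
  const log: Entry[] = [];

  page.on('request', (req: Request) => {
    log.push({ method: req.method(), url: req.url() });
  });

  page.on('response', async (res: Response) => {
    const entry = log.find((e) => e.url === res.url() && e.status === undefined);
    if (!entry) return;
    entry.status = res.status();
    try {
      entry.body = await res.text(); // may throw for redirects or binary bodies
    } catch {
      /* ignore bodies we can't read */
    }
  });

  await page.goto(url);
  // ...click around / let the agent interact, then dump the log for analysis
  await page.waitForTimeout(30_000);
  await browser.close();
  return log;
}
```

From a log like that you can work backwards to the JSON endpoints the page actually calls and generate typed wrappers around them.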
Yes, I know it likely breaks everybody's terms of service, but at the same time I'm not loading gigabytes of ads, images, and markup just to accomplish things.
If anyone is interested, I can take some time and publish it this week.
It's well known that in authoritarian regimes (which autocracies generally are), corruption is not so much a problem as a necessary element of society that keeps things going.
Anyone with the slightest amount of official power, like a government officer, has the ability to stop things from moving forward on their end. In this kind of society, most people are poor, and it would be considered stupid not to demand a small (or large) bribe from the citizen in order to unlock the process. Everyone does it, more with outsiders and to a lesser extent within one's circle of acquaintances (because the social fabric between known parties is the other way to unlock things). Corruption surely is one thing that really does trickle down from the top.
So things like obediently waiting in the queue for your turn or complaining about the officer won't help, unlike in high-trust societies. If you try that in a low-trust society, there will be additional documents, stamps, acknowledgements, or signatures you need, and keep needing, in order to complete your request, until you get the drift and bring a little something. Corruption gets things going, and in a society that has no trust it is a positive trait.
In Western democracies this sounds unimaginable because there's a stronger sense that things will work out right just because of the rules. Western corruption happens on a different level: a regular Western citizen has nothing to gain from giving bribes and would object to police or government officials demanding one. Western corruption is mostly about the powerful and rich making friendly mutual agreements to bend governing bodies and the law so they can become even more powerful and richer.
Regarding warrantless searches and access... reading the text of the bill (OP link), warrants seem to be required. Simple, right?
Well, no, this is a recently inserted block of text in the bill (confirm at the link above):
Exception
(2.7)(b) However, a copy of the warrant is not required to be given to a person under subsection (2.6) if the judge or justice who issues the warrant sets aside the requirement in respect of the person, on being satisfied that doing so is justified in the circumstances.
That's a pretty big, subjective loophole to bypass civil liberties IMO.
Wild. I have been eagerly awaiting this refresh, but this doesn't address either of the main issues with the original AirPods Max:
1. Still just as heavy. The AirPods Max sound quite good, but they are very heavy, to the point of being fairly uncomfortable after listening for any length of time. This release has the exact same weight as the originals (13.6 oz).
2. Still no off button/position. They stay partially on unless you put them in the awkward and useless "case", which means they're constantly out of power when you want to use them. There's even an obvious fix: the ear cups swivel flat, so they could just make that the "power off" position. Solved. But they didn't, so presumably these still have the same problem. There's also no mention of magnetic charging via a stand, which would be another way to alleviate the problem.
If these were even a few ounces lighter and powered off properly, I would buy them for sure. Given this announcement, I guess I will look for something else to replace the old AirPods Max.
It makes my work suck, sadly. Team dynamics also contribute to that, admittedly.
Last year I was working on implementing a pretty big feature in our codebase. It required a lot of focus to get the business logic right, and at the same time you had to be very creative to make it feasible to run without hogging too many resources.
When I was nearly done and working on catching bugs, team members grew tired of waiting and started taking my code from x weeks ago (I have no idea why), feeding it to Claude or whatever, and then coming back with a solution. So instead of finishing my code, I had to go through their versions of my code.
Each one of the proposals had one or more business requirements wrong and several huge bugs. Not one was any closer to a solution than mine was.
I would have appreciated any contribution to my code, but thinking it would be so easy to just take my code and finish it by asking Claude was rather insulting.
Input: I am starting a new job at Google next Monday. I will work as a contractor cleaning toilets.
Output: I’m thrilled to announce that I’m starting a new chapter at Google this coming Monday! I’ll be joining the team as a specialized Environmental Maintenance Contractor, dedicated to optimizing facility hygiene and ensuring a world-class onsite experience. Grateful for this opportunity to contribute to such an innovative ecosystem! #NewBeginnings #GoogleLife #FacilitiesManagement #CareerUpdate
> And it’s not just execs, but the whole corporate machinery that takes 3–6 weeks after quarter end to churn out reports.
Release early, release often.
If you want corporate machinery to run more smoothly with less effort, force it to operate more frequently, not less: when TLS certs had 2-3 year lifespans, there were all sorts of manual steps that people forgot how to do; then the maximum was one year. Then we got free certs from LE (using ACME), but they were 90 days, which made automation much more necessary.
Now, with certs from public CAs soon having a max lifetime of 47 days (not that I'm necessarily a fan), automation is all but a must.
So if you want less onerous effort on corporate reporting, your workflows and processes need to be much more automated: that's one of the reasons computers were invented, after all, to make computations faster.
And one way to force automation is to insist on more frequent reporting, not less; Barry Ritholtz:
> This is exactly backward: More frequent reporting makes the data less significant. In the real world, human behavior emphasizes what occurs less often—meaning doing something less frequently gives it an even greater significance than something that becomes routine or common.
> That is the difference between a New Year’s Eve celebration and a married couple’s weekly date night.
> Twice-a-year earnings reporting will make the event so momentous, with such focus on it, that any company that misses analysts’ forecasts will find their stock price shellacked. The twice-yearly focus on making the per-share number will become overwhelmingly intense.
Move from quarterly (every 3 months) to monthly reporting: companies will be forced to automate their "corporate machinery", and each report will be much less 'momentous' because the time between samples will be much shorter.
What a strange article, from somebody who should understand the underlying technology (click on the “books” tab - the author is a technologist).
This is not about AI, the author is mostly just pointing out that Spotify was not designed for classical music.
This is a product issue. Spotify DJ is essentially “shuffle with some voice interludes”. There’s probably some non-AI code in there to explicitly prevent it from playing an album end to end.
Besides, AI is not one thing. It's weird to generalise from "this beta Spotify feature doesn't serve me" to "AI is useless". For example, when the author says "if it can't do this, how could it compose music?", that's a category error.
Honestly, the whole post and tone are just baffling. It's mixing up all sorts of opinions and trying to put them under one umbrella, and about 50% of the text is just name-dropping specific classical pieces.
I happen to agree that the Spotify DJ feature is terrible, but I think this is a very ineffective way of presenting the argument.
I work as a DevOps/SRE and have been doing it in FinTech (banks, hedge funds, startups) and crypto (an L1 chain) for almost 20 years.
My thoughts on vibe coding vs production code:
- vibe coding can 100% get you to a PoC/MVP probably 10x faster than pre-LLM
- This is partly b/c it is good at things I'm not good at (e.g. front end design)
- But then I need to go in and double check performance, correctness, information flow, security etc
- The LLM makes this easier, but the improvement drops to about 2-3x b/c there is a lot of back and forth + me reading the code to confirm etc (yes, another LLM could do some of this, but then that needs to get set up correctly etc)
- The back-and-forth part can be faster if e.g. you have scripts/programs that deterministically check outputs (a minimal sketch of such a check follows after this list)
- Testing workloads that take hours to run still take hours to run with either a human or LLM testing them out (aka that is still the bottleneck)
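As an illustration of the deterministic-check idea: a tiny golden-file comparison like this (file paths and names here are made up) gives the LLM, or the human babysitting it, an unambiguous pass/fail instead of eyeballed output:

```ts
import { readFileSync } from 'node:fs';
import { deepStrictEqual } from 'node:assert';

// Compare a pipeline's JSON output against a committed "golden" file.
// Exits non-zero on mismatch, so it can gate a CI step or an agent loop.
function assertMatchesGolden(actualPath: string, goldenPath: string): void {
  const actual = JSON.parse(readFileSync(actualPath, 'utf8'));
  const golden = JSON.parse(readFileSync(goldenPath, 'utf8'));

  try {
    deepStrictEqual(actual, golden); // deep, key-order-insensitive comparison
    console.log(`OK: ${actualPath} matches ${goldenPath}`);
  } catch (err) {
    console.error(`FAIL: ${actualPath} does not match ${goldenPath}`);
    console.error(err);
    process.exit(1);
  }
}

// Usage: ts-node check-golden.ts out/pipeline-output.json testdata/expected.json
const [actualPath = 'out/pipeline-output.json', goldenPath = 'testdata/expected.json'] =
  process.argv.slice(2);
assertMatchesGolden(actualPath, goldenPath);
```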
So overall, this is why I think we're getting wildly different reports on how effective vibe coding is. If you've never built a data pipeline and an LLM can spin one up in a few minutes, you think it's magic. But if you've spent years debugging complicated trading or compliance data pipelines, you realize the LLM is saving you some time, but not 10x time.
I've encountered an even more nightmarish version of this recently: AI-generated tickets. Basically dumping the output of "write a detailed product spec for a clinical trial data collection pipeline" into a Jira ticket and handing it off.
It doesn't match any of our internal product designs and adds tons of extraneous features. When I brought this up with said PM, they basically responded that these inaccuracies should just be brought up in sprint review and resolved by "partnering" with the engineering team. AI etiquette is something we'll all have to learn in the coming years.
> One thing I’ve noticed is that different people get wildly different results with LLMs, so I suspect there’s some element of how you’re talking to them that affects the results.
It's always easier to blame the prompt and convince yourself that you have some sort of talent in how you talk to LLMs that others don't.
In my experience, the differences are mostly in how the code produced by the LLM is reviewed. Developers who have experience reviewing code are more likely to find problems immediately and complain that they aren't getting great results without a lot of hand-holding. And those who rarely or never reviewed code from other developers will invariably miss stuff and rate the output they get higher.
I don't understand how this isn't an immediate open-and-shut case for the police, assuming certain facts are verified independently. At the point that you're making death threats to strangers, you should be removed from civil society.
It has made my job an awful slog, and my personal projects move faster.
At work, the devs up the chain now do everything with AI – not just coding – then task me with cleaning it up. It is painful and time-consuming, and the codebase is a mess. In one case I had to merge a feature from one team into the main codebase, but the feature was AI-coded, so it did not obey the API design of the main project. It also included a ton of stuff you don't need in a first pass - a ton of error checking, hand-rolled parsing, etc. - that I had to spend over a week unwinding so I could trim it down and redesign it to work in the main codebase. It was a slog, and it also made me look bad because it took me forever compared to the team that originally churned it out almost instantly. AI tools are not good at this kind of design-deconfliction work, so while it's easy to get the initial concept out the gate almost instantly, you can't just magically fit it into the bigger codebase without facing the technical debt you've generated.
In my personal projects, I get to experience a bit of the fun I think others are having. You can very quickly build out new features, explore new ideas, etc. You have to be thoughtful about the design because the codebase can get messy and hard to build on. Often I design the APIs and then have Claude critique them and implement them.
I think the future is bleak for people in my spot professionally – not junior, but also not leading the team. I think the middle will be hollowed out and replaced with principals who set direction, coordinate, and execute. A privileged few will be hired and developed to become leaders eventually (or strike gold with their own projects), but everyone in between is in trouble.
Wild misunderstanding of Smith. He considered it a moral defect, wrote several pieces criticizing gambling, and criticized state-run gambling.
"The over-weening conceit which the greater part of men have of their own abilities, is an ancient evil... their absurd presumption in their own good fortune, is even more universal."
> We plan to deliver improvements to [..] purging mechanisms
During my time at Facebook, I maintained a bunch of kernel patches to improve jemalloc purging mechanisms. It wasn't popular with the kernel or security communities, but it was definitely more efficient on benchmarks.
Many programs run multiple threads, allocating in one and freeing in another. jemalloc's primary mechanism used to be: madvise the page back to the kernel, then have it allocated again into another thread's pool.
One problem: this involves zeroing memory, which has an impact on cache locality and overall app performance. It's completely unnecessary if the page is being recirculated within the same security domain.
The problem was getting everyone to agree on what that security domain is, even if the mechanism was opt-in.
You have to understand how the gears shift from there. Trust is essential for business transactions, and specifically for long-term investments. You can't make massive leaps in technology or medicine or many other areas without trust (a leap costs a lot of money, and if you don't trust the other side or the government to keep conditions stable, you won't see a return).
Now, if you are in a high-trust society, you may have a lot of leveraged businesses or governments that have gotten loans or permission to do something based on a history of trust. If that trust degrades systematically, investors may want returns faster, interest rates go up, or partnerships don't happen. That's why low-trust places don't grow as fast - trust is the oil for growth engines, and the lack of it is sand in the same.
Corruption also does a lot of small-profit-for-the-corrupt that leads to massive damage to the overall society via second- and third-order effects (example: someone stealing copper cables and cutting off electricity to entire cities for a while).
This is just a Nextcloud rebrand with a confusing domain name. It claims "Core is [100%] Open Source", but no source code is provided beyond what's already available in the upstream projects, and it's unlikely any ever will be (this happens a lot). It's a one-man project without a track record or certifications, based out of a shared office space [1].
And don't get me wrong: there's nothing wrong with starting a business rebranding Nextcloud and keeping your development closed source, as long as you're honest about that, which this initiative is not.
If you're looking for a Nextcloud hoster, there's a long list of partners here [2] that have contractually obligated themselves to contribute back to Nextcloud for every user they onboard.
"The Pentagon has released a modernization plan for Stars and Stripes that affirms the publication’s independence while expanding Defense Department oversight, introducing new restrictions on content"
Seems like this sentence contains contradictory statements.
Lots of people are saying nonsense here. The actual reason commercial insurers pay more is that it's the only way they can make more profit.
Because Obamacare requires 80% of the premiums they collect to be spent on care, insurance companies only get to keep 20%. So insurers are happy to spend more, because higher spending justifies higher premiums, and 20% of a bigger number is more money (keeping 20% of $100 in premiums is $20; to keep $40, you need $200 in premiums and $160 in spending). That's how they make more money.
I sense a large number of Polymarket apologists in the comments. Polymarket's existence is a symptom of the ubiquity of Adam Smith's libertine, some would even label satanic ("Do what thou wilt"), "free" market thinking. We ought to take it to its natural extreme -- where Polymarket encourages gambling on when specific celebrities, politicians, or even random individuals might die (there is already a name for this: "death pools"). I am sure that if they followed through on this openly, there would still be advocates and defenders of the practice, and counter-claims that "there wasn't unequivocal evidence that Polymarket influenced their murder", etc.