One interesting feature of DuckDB is that it can run queries against HTTP ranges of a static file hosted via HTTPS, and there's an official WebAssembly build of it that can do that same trick.
So you can dump e.g. all of Hacker News in a single multi-GB Parquet file somewhere and build a client-side JavaScript application that can run queries against that without having to fetch the whole thing.
DuckDB is an open-source column-oriented Relational Database Management System (RDBMS). It's designed to provide high performance on complex queries against large databases in an embedded configuration.
"DICT FSST (Dictionary FSST) represents a hybrid compression technique that combines the benefits of Dictionary Encoding with the string-level compression capabilities of FSST.
This approach was implemented and integrated into DuckDB as part of ongoing efforts to optimize string storage and processing performance."
https://homepages.cwi.nl/~boncz/msc/2025-YanLannaAlexandre.p...
It is very similar to SQLite in that it can run in-process and store its data as a file.
It's different in that it is tailored to analytics, among other things storage is columnar, and it can run off some common data analytics file formats.
Hey jacquesm! No, I just forgot to make it public.
BUT I did try to push the entire 10GB of shards to GitHub (no LFS, no thanks, money), and after 20 minutes of compressing objects etc.: "remote hung up unexpectedly".
To be expected, I guess. I did not think GH Pages would be able to do this. So I have been repeating:
Pretty neat project. I never thought you could do this in the first place; very much inspiring. I've made a little project that stores all of its data locally but still runs in the browser, to protect against takedowns and because I don't think you should store your precious data online more than you have to; eventually it all rots away. Your project takes this to the next level.
That's really cool, man. The music notation is beautiful. I hit play but couldn't get it to progress past the first note. Maybe I need to plug in a midi keyboard? It would be cool if I could "play" with my ASCII keyboard.
Listen was nice. That's really cool, actually. I encourage you to do it.
I was thinking more of the numeric columns, which have pre-built compression mechanisms to handle incrementing columns or long runs of identical values. For sure less total data than the text, but my prior is that the two should perform equivalently on the text, so the better compression on numbers should let duckdb pull ahead.
I had to run a test for myself. Using sqlite2duckdb (no research, first search hit) on randomly picked shard 1636, the sqlite.gz was 4.9MB, but the duckdb.gz was 3.7MB.
The uncompressed sizes favor sqlite, which does not make sense to me, so I'm not sure whether duckdb keeps around more statistics information: uncompressed, sqlite is 12.9MB and duckdb is 15.5MB.
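For anyone who wants to repeat the comparison, the measurement itself is just "gzip both files and compare sizes"; a stdlib sketch (the shard filenames in the comment are placeholders, not real paths):

```python
import gzip, os, shutil

def gzipped_size(path: str) -> int:
    """gzip-compress a file and return the compressed size in bytes."""
    gz = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return os.path.getsize(gz)

# e.g., for the same shard exported in both formats:
#   gzipped_size("shard-1636.sqlite")  vs  gzipped_size("shard-1636.duckdb")
```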
Not the author here. I’m not sure about DuckDB, but SQLite allows you to simply use a file as a database, and for archiving that’s really helpful. One file, that’s it.
At a glance, that is missing (at least) a `parent` or `parent_id` attribute which items in HN can have (and you kind of need if you want to render comments), see http://hn.algolia.com/api/v1/items/46436741
I recall clickbait meaning "A way of describing what's behind a link, often inaccurately, so that you click on it". The completely non-controversial article seems to me to have a very hook-y headline which is exactly what the phrase refers to, at least to me. What does clickbait mean to you? Perhaps the meaning of the phrase has changed in different groups over time.
It’s not a buried sentence. It’s a section heading in large font saying “The 777-200 Problem Is Not Safety. It Is Economics.”
Then there’s a whole paragraph stating “The Boeing 777-200 is not an unsafe airplane. As far as I can tell, that is not the issue even after the incident over Dulles over the weekend.”
Then just in case the reader jumped to conclusions, the first sentence of the conclusion again says it’s safe.
You are explaining exactly why the headline is clickbait: The article does not support the conclusions implied by the headline.
> just in case the reader jumped to conclusions
The author is correcting a problem of his own creation. He has already misled the reader with his headline. He means for the reader to misunderstand... and click.
I respectfully disagree. VAERS can absolutely be used to establish causality when followed by proper expert investigation (which is exactly its purpose as a signal-detection system). The IOM has relied on VAERS data to confirm causal links in 158 vaccine-adverse event pairs, including rotavirus vaccine and intussusception.
Here, FDA career scientists conducted that follow-up: they reviewed 96 child death reports and concluded at least 10 were caused by COVID vaccine myocarditis. That expert finding, not politics, is what triggered the stricter protocols. Healthy skepticism means demanding the full data for review, not preemptively calling it invalid.
The FDA memo citing 10 vaccine-caused myocarditis deaths in kids came _after_ the Sept. 2025 ACIP vote. ACIP had already dropped routine vaccination for healthy kids 6 mo-17 yr and moved everyone under 65 to "shared decision-making" (high-risk only) [1]
The detailed FDA analysis still isn't public. That's exactly why we should demand it instead of dismissing the claim.
Blame NYTimes for leaking the internal memo. In all honesty they should be fined for doing this.
We have no information about how highly motivated anti-vaxxers in positions of power over the FDA arrived at this conclusion except "the team has performed an initial analysis"[1]. That's literally it. Your claim that "FDA career scientists" conducted the follow-up can't be based on a statement this flimsy. Moreover, these deaths have already been investigated by FDA career scientists, who found these conclusions unwarranted.
Prasad spends the rest of the memo politically grandstanding (including claiming it was the FDA commissioner that was the hero here, forcing this issue, not FDA career scientists) and dismissing any objections to very obvious arguments against his claim (that have been made and published multiple times over the past five years) without any evidence, while providing no evidence of his own, in a memo addressing FDA career scientists.
Seriously, everyone should go read his memo. It's basically just a shitty antivax substack post, yet will apparently be FDA policy going forward. Another win for meritocracy.
> The detailed FDA analysis still isn't public. That's exactly why we should demand it instead of dismissing the claim.
The only "claim" here just sounds more official because RFKjr got a bunch of his best antivax buddies to be in charge of the FDA (same with the ACIP). There's no way to even consider it without evidence, so there's nothing to dismiss. Come back when you have something real.
The NYT shouldn't get a free pass for publishing a half-baked internal draft memo that even says "initial analysis" and then framing it as settled science. That's how you create panic and confusion, not transparency. Leaking unfinished work and splashing it on the front page is reckless. This should not be allowed.
Calling everyone "anti-vaxxers" is lazy. Most people I know who are skeptical of the covid shots (including plenty of doctors and scientists) are fully vaccinated against measles, polio, tetanus, etc. They just don't trust a product that skipped the usual 5–10 year safety window and got pushed with emergency authorization. That's not "anti-vax", that’s pattern recognition.
The memo is short on data and long on rhetoric, sure. That's exactly why we need the actual underlying review released in full.
You sound really invested in keeping those covid shots on the childhood schedule. Got a big Pfizer position in the 401k? Kidding, obviously. But the "anyone who asks questions is an anti-vaxxer" reflex is exactly why people stopped trusting the institutions in the first place. I respect every real skeptic, on any side. Asking questions is what moves science forward. Blind trust is stagnation.
Since it's now accepted, I guess I can also share the accompanying paper [1] about cloud hardware evolution; the idea is that every plot in the paper is clickable and opens an interactive version of itself. WebR was perfect for this use case.
(Disclosure: I work on https://quarto.org, for the same company the author of WebR works for.) Thanks for sharing that PDF link. It's so good! Would you be willing to write a bit about how you produced that PDF? It's a great example of what places like CIDR should be encouraging in terms of academic publications.
I didn't know Quarto, it looks interesting, thanks for sharing!
cloudspecs encodes the entire state (SQL code, R code, view state) in the URL, compressed and base64-encoded, since we wanted to be able to send links around to share interesting plots/tables with each other and revisit old plots if the data changes, e.g., if new EC2 instances come out.
The PDF is produced by good old LaTeX, and the state-in-URL mechanism allows us to just use regular hyperlinks for the clickable plots. The limit is the max URL length browsers allow, but we haven't hit it.
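That compress-and-base64 round trip is easy to sketch. cloudspecs itself runs client-side in JavaScript, so this Python version is only an illustration of the idea, with a made-up state object:

```python
import base64, json, zlib

def encode_state(state: dict) -> str:
    """JSON-encode, deflate-compress, and URL-safe-base64 the app state."""
    raw = json.dumps(state, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(zlib.compress(raw, 9)).decode().rstrip("=")

def decode_state(token: str) -> dict:
    """Inverse: re-pad base64, decode, decompress, parse JSON."""
    pad = "=" * (-len(token) % 4)
    return json.loads(zlib.decompress(base64.urlsafe_b64decode(token + pad)))

state = {"sql": "SELECT * FROM instances", "r": "ggplot(...)", "view": "plot"}
token = encode_state(state)        # safe to embed in a URL fragment
assert decode_state(token) == state
```

Because the token is just a string, a plain hyperlink in a LaTeX-produced PDF can carry the full plot state with no server involved.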
Since we use R+ggplot for research anyway in the local environment (emacs+ESS), we just copied the code into cloudspecs, then copied the resulting link into LaTeX. So a bit of manual work if we want to change the plots in the paper.
Let me know if you're curious about specific things or want to collaborate. Cheers!
Thanks! We hope other papers will adopt the idea as well. I think most use either python+matplotlib or R+ggplot for figures, so WebR is a real win.
Since it's only static files, you can also imagine "reproducibility archives" that you can just run in the browser (hopefully) years later w/o installing anything.
Well, based on your username, thanks for WebR! It took an hour or two to integrate with our DuckDB-Wasm prototype and just worked(TM). Really fantastic.
To add more information, the intervention was guidance about caffeine intake. From the Methods:
> If allocated to caffeinated coffee consumption, patients were encouraged to drink at least 1 cup of caffeinated coffee (or at least 1 espresso shot) and other caffeine-containing products every day as per their usual lifestyle. It was recommended that patients in the coffee consumption group not intentionally increase or decrease consumption of coffee or other caffeine-containing products.
> If allocated to the abstinence group, patients were encouraged to completely abstain from coffee, including decaffeinated coffee, and other caffeine containing products.
Waymo was on a roll in San Francisco. It still is, but it used to be, too. (With apologies to Mitch.) This is utter sensationalism. Fortunately, the state's regulations have liberated the good people of SF from being able to shoot ourselves in our own foot on this particular issue.
Key Takeaway: Get a CT or CTA scan, and if you can afford it go for the CTA with Cleerly.
There is a reason that we don't recommend getting imaging for everyone, and that reason is uncertainty about the benefit vs the risks (cost, incidentalomas, radiation, etc., all generally minor). Most guidance recommends calcium scoring for people with intermediate risk who prefer to avoid taking statins. This is not a normative statement meant to stand the test of time: it may well be the case that these tests are valuable for a broader population, but the data haven't really caught up to this viewpoint yet.
The central point of his article is that he went to a doctor who followed the guidelines, tested him and found he wasn't at risk for heart disease.
But then he went to another, very expensive concierge doctor, who did special extra tests, and discovered that he was likely to develop heart disease and have a heart attack.
Therefore he is arguing that THE STANDARD GUIDELINES ARE WRONG AND EVEN IF YOU DO EVERYTHING RIGHT AND YOUR DOCTOR CONFIRMS IT YOU MAY BE LIKELY TO DIE OF HEART DISEASE ANYWAY, SO ONLY THE SPECIAL EXTRA TESTS CAN REVEAL THE TRUTH.
I want a second opinion from a doctor. Is this true? Is this for real? Because it smells funny.
I strongly suspect the truth is both are "right", but they're both optimized answers to slightly different problems.
Mainstream medicine is hyper-optimized for the most common 80% of cases. At a glance it makes sense: optimize for the common case. There are some flaws in this logic, though: the most common 80% also conveniently overlaps heavily with the easiest 80%. If most of the problems in that 80% solve themselves, then what actual value is provided by a medical system hyper-focused on solving non-problems? The real value from the medical system isn't telling people "it's probably just a flu, let's give it a few days and see"; it's providing a diagnosis for a difficult-to-identify condition.
So if your question is "how do we maximize value and profit in aggregate for providing medical care to large groups of people", mainstream medicine is maybe a good answer.
But if your question is "how do we provide the best care to individual patients" then mainstream medicine has significant problems.
Part of providing good care is not burdening the patient with tests or treatments that are very unlikely to yield benefit. Put another way, the mission of healthcare is not "health at any cost."
The mission of healthcare, in the eyes of those who provide it, isn't "health at any cost".
For the people on the other side, "health at any cost" is pretty much the goal, usually limited by the "cost" side of things, especially in the parts of the world where they haven't yet figured out the whole "healthcare for the public" thing.
Cost here doesn't just include financial cost, but also time. As an extreme example, you could surely catch diseases earlier by visiting a doctor for an hour or two every day - getting tests for all sorts of things you might have conceivably developed. But that would make your life worse, and so most people wouldn't do that even if it was free.
Research science in this area has been in agreement for a long time now that ApoB is a more informative indicator than just LDL-C, because there are a variety of different atherogenic particles, not all LDL particles are created the same, etc.
His ApoB numbers are quite clearly out of range. Hell, even his LDL is out of range for the two largest lab providers in the US: Labcorp and Quest both have <100 as their reference range. But the science shows that plaque progression generally still occurs at levels above 70 LDL-C, even with low Lp(a) and other atherogenic particles; the reference ranges are likely to be moved lower and lower as practice catches up with research.
His numbers are well within the range of concern based on pretty universal consensus across the research in this area over the past couple of decades. Preventative cardiologists and lipidologists would almost certainly agree with this concierge doctor.
Thanks for the astute and informed comment. So re-reading that portion of the article, it seems to me the answer to my question is not that any general or consensus guidelines are wrong, but that a company called Forward Health is doing lipid panels and providing an incorrect interpretation of the results.
OP's LDL-C was 116, which is at the very top end of what Forward Health's report says is OK. Their report is wrong; this number is bad.
All the stuff about needing to measure ApoB, needing a high-end concierge doctor, and the very long article about measuring 10-20 different numbers, doing more exercise than the guidelines recommend, and being at risk of heart attack if you don't do amounts of exercise the author himself considers unreasonable, etc.: some of this may have value, but it all seems to be a lot of very lengthy personal opinion by the techbro author of the post. The key insight is simply that your LDL-C becomes a cause for concern over 100, perhaps even over 70, and he was not as healthy as some tech company told him he was. No surprise there; I will talk to actual doctors instead of using services from "tech forward" startups any day of the week.
I would agree that this article overstates a lot of things.
ApoB is still a reasonable thing to check though, at least once - Lp(a) is the primary cause of atherogenic particle counts being high when LDL-C isn't the culprit, and it's usually a genetic factor. Having a high Lp(a) will bounce your ApoB up and give you a better understanding of the total atherogenic particle load. You could have fine LDL-C or Lp(a) on their own but the total amount could be enough to be worrisome.
Lp(a) being problematic is definitely less common than it being more or less fine, but it's certainly not incredibly rare, either.
The claim on an individual level is not objectionable to me. The question is that if we extrapolate it out to the population and actually take this action for everyone, do we make people better off? This is what clinical trials (or at least large observational studies) try to achieve. Right now, it is not clear.
His evidence is also kinda weak. And appeal to authority largely about someone who he's paying to tell him he has health problems. The incentives aren't aligned.
I also disagree that the 50th percentile is the breakpoint between healthy and unhealthy. There's a lot more to deciding those ranges besides "well, half of the population has better numbers".
If you think a 100+ LDL-C is normal you're basing things off of significantly outdated information.
Expect the normal range to drop in the coming years as well - the AHA and NLA have both been talking about how this needs to go lower, and the science is robust. See my other comments for study links, the NLA's latest guidance, etc.
If your doctor is only getting concerned at 160+, find a new doctor.
If I die at 90 of a heart attack having maintained the ability to live independently up until then, I’d take that as a massive win compared to my relatives suffering through a decade of me with worsening dementia.
Cardiovascular diseases are huge risk factors for dementia, so if your goal is to avoid dementia you should try to have a healthy cardiovascular system.
If health science were as simple as health outcomes being proportional to one or two measurement percentiles, sure. But that's hardly true. Health is a lot more complex than that, and disease risk cannot be quantified by a small number of parameters.
Maybe he got missed; let's concede that. What about the other 10 or 100 or 1000 who subjected themselves to tests and didn't find anything? Where are their stories?
If you have enough people, the tests themselves are eventually going to harm somebody.
For example, certain scans require contrasts like gadolinium that bioaccumulates. That's not a big deal if we only pump it into people 2 or 3 times in their lives when something in their body is about to explode. It's a lot bigger deal if we're doing that to them every year.
The bottom line is these tests aren't some sort of one-size-fits-all panacea, and nor can they perfectly predict the future. In fact Oprah herself backtracked on it, via an article by Dr. Oz in her magazine in 2011: https://www.oprah.com/health/are-x-rays-and-ct-scans-safe-ra...
A good rule of thumb is don't take medical advice from Oprah or Dr. Oz. But in the case of the latter article, he wasn't wrong.
> But then he went to another, very expensive concierge doctor, who did special extra tests, and discovered that he was likely to develop heart disease and have a heart attack.
It’s scarily common in medicine for doctors to start specializing in diagnosing certain conditions with non-traditional testing, which leads them to abnormally high diagnosis rates.
It happens in every hot topic diagnosis:
When sleep apnea was trending, a doctor in my area opened her own sleep lab that would diagnose nearly everyone who attended with apnea. Patients who were apnea negative at standard labs would go there and be diagnosed as having apnea every time. Some patients liked this because they became convinced they had apnea and frustrated that their traditional labs kept coming back negative, so they could go here and get a positive diagnosis. Every time.
In the world of Internet Lyme disease there’s a belief that a lot of people have hidden Lyme infections that don’t appear on the gold standard lab tests. Several labs have introduced “alternate” tests which come back positive for most people. You can look up doctors on the internet who will use these labs (cash pay, of course) and you’re almost guaranteed to get a positive result. If you don’t get a positive result the first time, the advice is to do it again because it might come back positive the second time. Anyone who goes to these doctors or uses this lab company is basically guaranteed a positive result.
MCAS is a hot topic on TikTok where influencers will tell you it explains everything wrong with you. You can find a self-described MCAS physician (not an actual specialist) in online directories who will use non-standard tests on you that always come back positive. Actual MCAS specialists won’t even take your referral from these doctors because they’re overwhelmed with false cases coming from the few doctors capitalizing on a TikTok trend.
The same thing is starting to happen with CVD risks. It’s trendy to specialize in concierge medicine where the doctor will run dozens of obscure biomarkers and then “discover” that one of them is high (potentially according to their own definition of too high). Now this doctor has saved your life in a way that normal doctors failed you, so you recommend the doctor to all of your friends and family. Instant flywheel for new clients.
I don’t know where this author’s doctor fits into this, but it’s good to be skeptical of doctors who claim to be able to find conditions that other doctors are unable to see. If the only result is someone eating healthier and exercising more then the consequences aren’t so bad, but some of these cases can turn obsessive where the patient starts self-medicating in ways that might be net negative because they think they need to treat this hard to diagnose condition that only they and their chosen doctor understand.
It's important to note that there's geographic variability in guidelines. Also, the article doesn't give enough information about the author's other risk factors. For a similar patient (based on the initial lab results), treated by a doctor adhering to the European guidelines, at least the following items would have been considered:
- Lipid lowering drugs
- ApoB testing
- Coronary CT (if the pre-test likelihood of obstructive coronary artery disease was estimated to be > 5%)
The year is 1846, and a doctor has a radical new idea: doctors should wash their hands between performing autopsies and delivering babies!
You're not sure of whether this is a good idea or not, so you ask various physicians, and the consensus is unanimous: the very suggestion is offensive, do you think doctors are unclean?
Not sure I follow or maybe you skipped typing a word.
You listed the risks and concluded “all generally minor.” The benefit is absolutely nonzero. So, what’s the hold up?
And how have the data not caught up? People outside the US are getting the CT scans, while US doctors prefer to lick their finger to guess the weather.
My wife’s last interaction with a doctor: patient presents with back and chest pain accompanied by occasional shortness of breath at the age of 39; the doctor reluctantly asks for an EKG, which takes 5-10 minutes, is done in the next room right away, and is covered by insurance with a small copay, and has the gall to be surprised when the EKG shows subtle abnormalities. If she hadn’t advocated for herself, as the OP argues you should, the doctor would just have skipped the EKG.
This experience left me thinking maybe doctors are discouraged from asking for imaging and guidelines are there to protect their criminally negligent behavior. I have no proof or even proxy data for the claim about doctors being discouraged from asking for imaging. But it is objectively criminally negligent to not ask for imaging in a case like this.
"Smaht" people continuously parrot things they read elsewhere, usually in a contrarian way, to assert themselves in a futile and shallow way.
There is absolutely nothing wrong with getting one CT at a specific point in your life to detect a disease which, as TFA states, has a 25% incidence rate.
The smaht ones will now point me to that study of 1-5% of cancers being linked to CT scans. Yeah, sure, but those are not from people who got one or two in their lives.
One thing that wasn't mentioned is the max sustained screen brightness for SDR, which is higher on the M4 Pro (1000 nits) compared to the M4 Air or M1 Pro (500 nits).
There’s an awesome app called Vivid which just opens the HDR max brightness. I use it all the time with my M3 Pro when working outside and I believe it also works on earlier models.
There are so many base features that are inexplicably relegated to 3rd-party apps. Like a better Finder experience. Or keeping the screen on. Or NTFS writing.
NTFS writing isn't that inexplicable. NTFS is a proprietary filesystem that isn't at all simple to implement and the ntfs-3g driver got there by reverse engineering. Apple doesn't want to enable something by default that could potentially corrupt the filesystem because Microsoft could be doing something unexpected and undocumented.
Meanwhile if you need widespread compatibility nearly everything supports exFAT and if you need a real filesystem then the Mac and Windows drivers for open source filesystems are less likely to corrupt your data.
I'll take ntfs-3g over the best implementation of exFAT in a heartbeat. Refusing to write to NTFS for reliability purposes, and thereby pushing people onto exFAT, is shooting yourself in the foot.
At which point you're asking why Apple doesn't have default support for something like ext4, which is a decent point.
That would both get you easier compatibility between Mac and Linux and solve the NTFS write issue without any more trouble than it's giving people now because then you'd just install the ext4 driver on the Windows machine instead of the NTFS driver on the Mac.
> There are so many base features that are inexplicably relegated to 3rd party apps.
> Like a better finder experience.
> Or keeping screen on.
Do you mind linking or naming which tools you use for those 2 purposes?
Asking out of pure curiosity, as for keeping the screen on, I just use `caffeinate -imdsu` in the terminal. Previously used Amphetamine, but I ended up having some minor issues with it, and I didn't need any of its advanced features (which could definitely be useful to some people, I admit, just not me). I just wanted to have a simple toggle for "keep the device and/or display from sleeping" mode, so I just switched to `caffeinate -imdsu` (which is built-in).
As for Finder, I didn't really feel the need for anything different, but I would gladly try out and potentially switch to something better, if you are willing to recommend your alternative.
I use the Finder and Raycast heavily. Raycast is not, and does not sell itself as, a Finder equivalent.
OP: I've tried all the Finder replacements. Path Finder, for example. At the end of the day, I went back to Finder. I always have a single window on screen with the tabs that I use all day. This helps enormously. I show it on YouTube here (direct timestamp link): https://youtu.be/BzJ8j0Q_Ed4?si=VVMD54EJ-XsxkYzm&t=338
Finder is the number one reason it boggles my mind that people claim macOS as head and shoulders above other OSes "for professionals". Finder is a badly designed child's toy that does nothing at all intuitively and, in fact, actively does things in the most backwards ways possible. It's like taking the worst of Explorer (from Windows XP), and smashing it into the worst of Dolphin or Nautilus; and, to top it off, then hiding any and all remaining useful functionality behind obscure hot keys.
It has been more or less the same as long as I've used it (20 years or so). Familiarity is a plus. It is a pretty simple and straightforward tool. I'm not sure what you might find perplexing about Finder.
Who said it was perplexing? If anything, it's the opposite. It's so simple and rudimentary as to be antithetical to filesystem navigation.
Back/forward operate on history, not on hierarchy; at least have an "Up" button. There's no easy way to navigate the non-prescribed folders without adding every folder to the favorites list; hell, there's not even a "Home" link by default. Simple location navigation is hidden behind Cmd+G versus being evident. Easily jumping up the tree from your current location is hidden. Etc, etc, etc. It acts like the iPhone file manager, except the filesystem isn't a sandbox on macOS and you regularly need to navigate around it.
I'm sure if it's the only FS manager you ever use then it's just fine and you've learned all the quirks. But for people that regularly use other (better) managers on other OSes, it's severely lacking in ergonomics and functionality.
Cmd+G is not hidden; it is a menu bar item under "Go". You can navigate the hierarchy with the path in the footer. I believe the home link is in fact default; it's been there on the sidebar as long as I can remember (only it is called your user folder, not "Home").
Eh, I feel the opposite. Finder is much more usable to me, but of course I use the shortcuts like cmd-up to go up or down instinctively now. It is a bit ironic for such a mouse oriented OS everywhere else.
Still, alt-clicking on the window title to see the whole folder hierarchy is easy to remember and doesn't clutter up the UI (err, cmd-clicking? It's muscle memory, so I forget). The fact that it works in most native apps with file titles as well is awesome.
Welcome to the Mac ecosystem, where basic functionality is gated behind apps that Apple fans will tell you "are lifesavers and totally needed in Windows/Linux/etc." for $4.99-14.99 apiece. And, when they get popular enough, Apple will implement that basic functionality in its OS and silently extinguish those apps.
And that's when they let you modify/use your OS the way you want.
As far as I understand, Windows only has a toggle for HDR on vs off, and that's not what we're talking about here; this is about forcing the full brightness of HDR always, even outside videos. It's something that manufacturers don't allow, as it reduces display life; it would actually be an anti-feature for a consumer OS to expose as a setting. It'd be like exposing some sort of setting to allow your CPU to go well beyond normal heat limits.
I don't mind that. 3rd party Mac utilities are nice: well designed, explained and do what they're supposed to because someone makes a living of it. I'm happy to pay these prices.
I would personally be afraid of using that in case it causes damage long-term to the screen either due to temperature or power draw or something. Idk if there are significant hardware differences but in this case I would guess there’s a real hardware reason for it?
I imagine what those custom brightness apps do is not magically increase the brightness, but change the various pixels' brightness in accordance to some method/algorithm such that you see what appears to be brighter whites when placed next to certain other colors.
It's not what is implied by the parent post - where the mac is limiting the brightness only to have the app unlock it.
No, I believe the issue is Apple limits the top half or so of the brightness/backlight level for HDR content only. The apps allow it to be used for normal non-HDR content.
...I'd have to say that seems like a radical reading of the text.
No; you can adjust screen brightness just fine with the built-in settings, including with the F1 and F2 keys (plus the Fn key if you've got them set that way).
This Vivid app is specifically for extra HDR levels of brightness. I've never had a problem with my M1 or M4 MBPs, either inside or outside, with the built-in brightness levels. (But, to be fair, I don't use it outside a lot.)
Question - did you consider tradeoffs between duckdb (or other columnar stores) and SQLite?