Fun fact about silver, besides its heavy industrial footprint, which you mentioned: the supply is dominated by Mexico. There have been some, uh, erratic words about Mexico from the people in a position to affect trade policy and foreign policy.
For those of you with this handy technology, the mobile phone, in the United States: you have an IPv6 address without NAT. Some of you are even on a network that uses 464XLAT to carry IPv4 traffic over IPv6, because it's an IPv6-only network (T-Mobile). These mobile phone providers do not let the gazillion consumer smartphones act as servers for obvious reasons.
This is all to underscore the author's point: NAT may necessitate stateful tracking, but firewalls without translation have been deployed at massive scale for one of the most numerous types of device in existence.
> These mobile phone providers do not let the gazillion consumer smartphones act as servers for obvious reasons.
FWIW, I was interested so I tested this on my phone here in Finland (Elisa, the largest carrier here): IPv6 inbound TCP connections work just fine, unlike IPv4 which is behind CGNAT.
On mobile broadband (no calls) plans they also offer an optional free public IPv4 address, but not on the regular phone plans.
(I did the test by installing Termux from the Play Store, then in it running "pkg install netcat-openbsd" and "nc -6 -l 9956", and then connecting to that port from the internet using telnet, while the phone was not connected to WiFi.)
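For reference, a minimal Python equivalent of that listener, in case netcat isn't handy (this assumes Python is installed in Termux; the port number is arbitrary):

    import socket

    # Listen for a single inbound IPv6 TCP connection on an arbitrary port.
    srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("::", 9956))
    srv.listen(1)
    print("waiting for an inbound IPv6 connection on port 9956...")
    conn, addr = srv.accept()  # returning here means the carrier let the connection through
    print("connection from", addr)
    conn.close()
    srv.close()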
In the case of T-Mobile, unsolicited inbound IPv6 connections are blocked, but direct P2P is still possible. I successfully established a WireGuard tunnel over IPv6 between two phones. With IPv6, since the internal addresses and ports are the same end-to-end, all that is needed is a dynamic DNS service; STUN isn't necessary. I did need to set a persistent keepalive of 25 seconds on both sides of the tunnel to keep the firewall holes open.
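For anyone wanting to replicate it, a rough sketch of one phone's wg-quick config under that setup (keys, addresses, and the dynamic DNS hostname are placeholders, not what I actually used):

    [Interface]
    PrivateKey = <this phone's private key>
    Address = 10.8.0.1/24              # tunnel-internal address
    ListenPort = 51820

    [Peer]
    PublicKey = <other phone's public key>
    # Resolved via dynamic DNS to the other phone's current IPv6 address.
    Endpoint = other-phone.example.org:51820
    AllowedIPs = 10.8.0.2/32
    PersistentKeepalive = 25           # keeps the firewall pinholes open on both carriers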
Interestingly, Verizon Wireless blocks connections to other Verizon Wireless IPv6 addresses. T-Mobile-to-T-Mobile connections work, Verizon-to-T-Mobile connections work, but Verizon-to-Verizon connections do not work. Given the way Verizon's network has stagnated while T-Mobile's network has been rapidly improving, it may be time to move away from Verizon.
Slightly off-topic, but if you have a modern Google Pixel phone, Google includes "free" VPN service (which probably collects/sells your data). This service uses endpoint-independent filtering, so if you send an outbound packet from the source port you want mapped, regardless of the destination IP/port, you can effectively receive unsolicited inbound connections from any host on the internet that contacts your IP:port, as long as you keep sending a periodic keepalive packet from that source port to anywhere.
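A minimal sketch of what that looks like in practice (Python; the source port, interval, and destination are illustrative -- 192.0.2.1 is just a documentation address -- and this assumes the filtering behavior described above):

    import socket, time

    LOCAL_PORT = 51000  # the source port we want the mapping to preserve

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", LOCAL_PORT))

    while True:
        # Any outbound packet from this port refreshes the mapping; with
        # endpoint-independent filtering, inbound packets from any host to
        # the public IP:port will then be delivered to this socket.
        sock.sendto(b"keepalive", ("192.0.2.1", 9))
        time.sleep(25)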
This is how researchers were able to remotely hack several Chrysler models. They used a Sprint hotspot to get an IP on the cell network and were able to connect to any other Sprint device. The cellular modems on these cars were on Sprint so they just had to be on the same network. I wonder if Verizon intentionally blocks this.
What would be the obvious reasons? (I'm not being flippant here -- I'm genuinely interested in what arguments people have to not allow servers on that network)
A high concentration of technically inept users with hardware that no longer receives security updates and has plenty of well-known, easily exploitable vulnerabilities. Which is naturally used to run banking apps and travels with users close to 24/7 while tracking their location.
From a business perspective you'd want to charge extra. Just because you can, but also because you want to discourage excess bandwidth use. The internet APs the carriers sell get deprioritized relative to phones when necessary and the fine print generally forbids hosting any services (in noticeably stronger language than the wired ISPs I've had).
> From a business perspective you'd want to charge extra. Just because you can, but also because you want to discourage excess bandwidth use
Isn't that already the case with limited plans?
For example, mine has 40 GB and I'm pretty sure it counts both upload and download: I generally consume very little, except for one week when I was on holiday with no other internet access and wanted to upload my pictures to my home server, and that week made a visible dent in the allowance even though I didn't otherwise use the phone more than usual.
Facebook would start listening on port X, and then their embedded SDK in other websites or apps would query that IP and port, get a unique id, and track users much better.
The most common use case for mobile data servers is probably pwned cheap/old phones forming DDoS swarms. Pure P2P over the internet is very rare on mobile, so from the ISP's perspective there's no reason not to block ingress.
However, for that threat, making the phone's IP unreachable has at best marginal benefits. The DDoS itself is an outgoing connection, and for command and control, having the compromised phone periodically fetch instructions from a server is simpler to implement than having the phone offer a port on which it is reachable to receive instructions.
I kind of doubt this, as the rapidly changing nature of mobile IP addresses would mean that a periodic outbound connection would still be necessary to keep the attacker up to date on the compromised device's current IP address. At that point, you may as well have the compromised device periodically poll an attacker-controlled server for instructions rather than jump through a bunch of hoops to get things working over inbound connections.
I think it should vary based on the type of service being provided. For truly mobile service, I think it can make sense to not allow servers. If it's being sold as a home internet solution (a more fixed kind of plan), I think it should allow at least some level of server hosting.
The main difference is there's usually limited airtime capacity for clients, especially highly mobile ones. A server could easily hog quite a bit of the airtime on the network serving traffic to people not even in the area, squeezing out the usefulness of the network for all the other highly mobile people in the area. This person moves around, pretty much doing the equivalent of swinging a wrecking ball through network performance everywhere they go.
When it's being sold as a fixed endpoint, though, capacity plans can be more targeted to properly support this kind of client. They're staying put, so it's easier to target that particular spot for more capacity.
The phone providers oversell bandwidth. They also limit the use of already purchased bandwidth when it gets legitimately used.
Similar to many industries, their business model is selling monthly usage, while simultaneously restricting the actual usage. They are not in the business of being an ISP for people running software on their phones.
Being allowed to serve data from your own device should be seen as a natural human right.
If the networks don't have capacity or something then we need networks that can support that.
The idea that all of that has to go in the Fediverse on a server or something is just gatekeeping.
Wait a few years as IPv6 becomes truly ubiquitous. This will become very obvious to everyone and standard. People must be allowed to communicate directly, even if they have a lot of clients.
The opinions are slightly similar to remote work. Telecommuting was an obvious next step for a long time, it just took a certain number of decades for society to realize it.
I use Beads quite a bit, but not as Steve intended, and definitely the opposite of "Gas Town": I use the note-taking capability and integration with Git (that is, as something of a glorified Makefile and database) to debug contexts, to close the loop and increase accuracy over time. Nevertheless, it has been useful for large batch runs over my code base: the record is thirty hours of straight processing while still producing something useful, plus enough trace data to make further improvements.
Steve has gone "a bit" loopy, in a (so far) self-aware manner, but he has some kind of insight into the software engineering process, I think. Still, I predict Beads will break under the weight of no supervision eventually if he keeps churning it, but some others will pick up where he left off, with more modest goals. He did, to his credit, kill off several generations of projects in a similar category before this one.
I’m pro LLM and use them, but crikey: if they’re so good at code, why are these people, with all the attention, branding, and connections in the world, unable to capitalize on them?
I believe that Google uses their internal Gemini, trained on their internal infrastructure, to generate boilerplate and insights for older, less mature code in one of the world’s biggest and most complicated anythings, ever. But I don’t see them saying anything to the effect of “neener neener, we’re using Markov chains, so 10x our stock ‘cause of the otherwise impossible face-melting Google Docs 2026.”
OpenAI is chasing ads, like Reddit, to regurgitate Reddit content. If this stuff is worth the squeeze I need to see the top 10 LLM-fluencers refusing to bend over for $50K. The opposite is on display.
So, a hypothesis: Google’s S-tier geniuses and PMs are already expressing the mature, optimal application. No silver bullets; more gains to be had ditching bad tech and extraneous vendor entanglements (Copilot, 365).
That entire article sounds like my friends who think AI is real and keep sending their parents money into crypto scams.
I think I’ll just develop a drinking problem if this Gas Town thing becomes something real in the industry and this kind of person is now one of our thought leaders.
That's one reason I am less worried about him than some are, although I don't want to say that only to have something bad happen to him; that would be a form of complacency. Just because (say) Boltzmann and Cantor had useful insights along the way didn't mean people shouldn't have been looking to support them.
The main area where I'd like to see some departure from Beads is using markdown files (or something similar) to make the issue context/comments easier to see in a diff generated by git.
The other area where I'd like to see some more open-ended software engineering thinking is regression testing: ways of storing or referencing old versions of texts to check whether the agent can still complete old transformations properly after a context change that patches up a weakness in a desirable transformation. This is tricky because it interacts with something essential in software engineering: the ability to run test suites and respond to the outcome. I don't think we know yet when to apply what fidelity of testing, e.g. one-shot runs on snippets versus a more realistic test based on git worktrees.
This is not something you'd want for every context, but a lot of my effort is spent building up prompt fragments to normalize and clean up the code coming out of a model that did some ad-hoc work meeting the test coverage bar, which constrains it decently into having achieved "something", kind of like a prototype. But often, a lot of ungratifying massaging is required just to cover the annoying-but-not-dangerous tics of the LLM, to bring clarity to where it wrote, well, very bad and unprincipled code...as it does sometimes.
I was disappointed to see that this is still 10x the code needed for the feature set and that it still insists on duplicating state into a SQLite index for such minuscule amounts of data.
I've seen 25-30 similar efforts to make a Beads alternative and they all do this for some reason.
You don't even need to trust CPI alone when looking at history, where things have evened out a bit: we have historical short-term bond yield data, even the full yield curve, i.e. people bidding on short maturities from the safest debtor, pricing in expected changes in nominal value.
Not to suggest CPI is redundant; there's a reason central bankers read it, after all. For one, it's the most timely data they have. But it's impossible to nudge it year after year -- cumulative error -- without it becoming obviously decoupled from other data, including long-term bond market data. It just so happens commodities are the wrong yardstick.
It's not very convincing, though: there's a huge run-up in gold prices between 2023 and the present (as is often the case), and a long do-nothing period before that (also often the case). The major consumers of gold are roughly 50% jewelry, 10% industrial, and 20% central banks, the last a large run-up from about 10% in the 2010s.
I like to think about the inherent contradictions of goldbugs going long on central bank portfolio policy: they tend to distrust the central bank, yet central bank activity partially endorses their habits and is the source of recent appreciation, and thus of accusations of "hidden" inflation. But central banks operate in an anarchic world system where they need something independent even of reserves held in other sovereign currencies. I presume most goldbugs are holding ETFs within an existing financial system (which is non-orthogonal: if you assume a financial system, why not avail yourself of the superior alternatives?) or have it in a safe in the house, which has some other obvious problems.
I hold no gold. If I want hydraulic and non-volatile inflation compensation, it's quite simple: short-dated sovereign debt, aka the humble money market fund, which can be seen as the lower-fee version of the checking account. Nobody likes being a sucker, holding debt for below the time value of money, including changes in nominal value. It has immense price discovery pressure, and it finds its level nicely. If I were to hold gold, I would need some viable theory about how much to hold for its de-correlation from other assets to be worthwhile. Maybe if I were exposed to jewelry costs and wanted to hedge them.
When people talk about inflation, I don't think they're referring to just CPI, but asset inflation too. Things like equities, real estate, gold/silver/platinum, bitcoin, etc.
These have been outpacing CPI because they're levered by cheap debt, brought to you by central bank actions that keep rates low so governments can play the same levered games with their own runaway fiscal policies.
That's a lot of financial instruments painted with a broad brush, and I think the charge that so many central banks have knuckled under to fiscal dominance is simply not sustainable. The ones that have, we tend to hear about.
Because there's a lot one could write about each of equities, real estate, gold, silver, and platinum (which have very different industrial exposures), and bitcoin, all of which have many price drivers.
So let's try something more parsimonious: what do you make of the people, institutions, etc. that bid on short- and even long-dated sovereign debt around the globe and come up with a collectively discovered price of, say, 3.5% annualized for a one-month maturity? https://www.treasurydirect.gov/auctions/announcements-data-r...
Public Sans seems like a good candidate for a new "web safe" font. Perhaps one new web safe font per twenty-five years is not too much. From there, it can percolate to the word processor and PDFs, and finally to a government standard for government workers who just want to open their word processor and get to work, for whom sourcing even a free font to meet a standard is just an annoying snag.
No chance they're going to take risks to share that hardware with anyone given what it does.
The scaled down version of El Capitan is used for non-classified workloads, some of which are proprietary, like drug simulation. It is called Tuolumne. Not long ago, it was nevertheless still a top ten supercomputer.
Like OP, I also don't see why a government supercomputer does it better than hyperscalers, CoreWeave, neoclouds, et al., who have put in a ton of capital even compared to the government. For loads where institutional continuity is extremely important, like weather -- and maybe one day a public LLM model or three -- maybe. But we're not there yet, and there's so much competition in LLM infrastructure that it's quite likely some of these entrants will be bag holders. Not a world of juicy margins at all...rather, playing chicken with negative gross margins.
That also popped out at me: binding that many parameters is cursed. You really gotta use COPY (in most cases).
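A rough sketch of the difference with psycopg2 (the table, columns, and connection string are made up):

    import io
    import psycopg2

    conn = psycopg2.connect("dbname=test")  # placeholder connection string
    cur = conn.cursor()

    # Build a tab-separated buffer in memory...
    buf = io.StringIO()
    for i in range(100_000):
        buf.write(f"{i}\tvalue_{i}\n")
    buf.seek(0)

    # ...and load it in a single COPY round-trip instead of binding 200,000 parameters.
    cur.copy_from(buf, "items", columns=("id", "label"))
    conn.commit()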
I'll give you a real cursed Postgres one: prepared statement names are silently truncated to NAMEDATALEN-1. NAMEDATALEN is 64. This goes back to 2001...or rather, that's when NAMEDATALEN was increased from 32. The truncation behavior itself is older still. It's something ORMs need to know about -- few humans are writing prepared statement names of sixty-plus characters.
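A quick way to see the collision, if you want to try it (psycopg2; the names and connection string are invented for illustration):

    import psycopg2

    conn = psycopg2.connect("dbname=test")  # placeholder connection string
    cur = conn.cursor()

    # The two names differ only after the 63rd character.
    name_a = "stmt_" + "x" * 70 + "_one"
    name_b = "stmt_" + "x" * 70 + "_two"

    cur.execute(f"PREPARE {name_a} AS SELECT 1")
    # Fails with "prepared statement ... already exists", because both
    # identifiers were truncated to the same first 63 characters.
    cur.execute(f"PREPARE {name_b} AS SELECT 2")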
I think it's pretty funny because, for example, Katalin Karikó was thought to be working in some backwater, on this "mRNA" thing that could barely get published before COVID...and the original LLM/transformer people were well qualified but weren't pulling a quarter billion dollars while kicking around trying to improve machine translation of languages, a time-honored AI endeavor going back to the 1950s. Then they came upon something with outstanding empirical properties.
For whatever reason, remuneration seems more concentrated than the fundamentals. I don't begrudge those involved their good luck, though: I've had more than my fair share of good luck in my life, so I'm hardly the one with the standing to complain.
> Katalin Karikó was thought to be working in some backwater, on this "mRNA" thing, that could barely get published
There are a ton of examples like this, and it is quite common in Nobel-level work. You don't make breakthroughs by maintaining the status quo. Unfortunately, that means that to do great things you can't just "play it safe".
One of the things I think about sometimes, a specific example rather than a rebuttal to Carmack:
The Electron application is somewhere between tolerated and reviled by consumers, often on grounds of performance, but it's probably the single innovation that made using my Linux laptop in the workplace tractable. And it is genuinely useful to, for example, drop into an MS Teams meeting without installing anything.
So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
> So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
I would far, far rather have Windows-only software that is performant than the Electron slop we get today. With Wine there's a decent chance I could run it on Linux anyway, whereas Electron software is shit no matter the platform.
Wine doesn't even run Office; there's no way it'd run whatever native video stack Teams would use. Linux has Teams purely because Teams decided to go with the web as their main technology.
Even the Electron version of Teams on Linux has a reduced feature set because there's no Office to integrate with.