2) copy the unencrypted SSH host key from it (which necessarily cannot be stored in the encrypted data volume) to a new computer configured with the network identity of the original
3) leave the new computer in place of the original to capture a remote SSH-to-unlock attempt
4) use the captured password to unlock the original's FileVault at your leisure somewhere offsite
I'm still on Sonoma on my Mac, but I've recently been splitting my time between macOS and Linux and I'm starting to be pretty happy with Linux.
The main problem I had living in a Gnome desktop environment was the keyboard. I'm not willing to abandon my Emacs control+meta sequences for cursor movement and editing everywhere in the GUI. On macOS this works because the command key (super/Win on Linux/Windows) is used for common shortcuts, leaving the control key free for editing shortcuts.
I spent a day or so hacking around with kanata[0], a kernel-level keyboard remapping tool that lets you define keyboard mapping layers much as you might with QMK firmware. When I press the super/win/cmd key, it activates a layer that maps certain sequences to their control equivalents, so I can create tabs, close windows, copy and paste (and much more) the way my macOS muscle memory wants to. Other super-key sequences (like Super-L for lock desktop or Super-Tab for window cycling) are unchanged. And when I hit the control or meta/alt/option key, it activates a layer where Emacs editing keys are emulated using the Gnome equivalents: C-a and C-e are mapped to home/end, and so on.
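The shape of the config looks roughly like this (a trimmed, illustrative sketch in kanata's Lisp-like format, not my actual file; the key list and mappings are just examples):

    ;; physical keys being remapped (defsrc and each deflayer line up 1:1)
    (defsrc
      lmet l c v t w
    )

    (defalias
      ;; holding Super activates the macOS-style shortcut layer
      cmd (layer-while-held macish)
    )

    (deflayer base
      @cmd l c v t w
    )

    (deflayer macish
      ;; Super-C/V/T/W become Ctrl-C/V/T/W; Super-L is mapped back to a
      ;; real Super-L so desktop lock still works
      _ M-l C-c C-v C-t C-w
    )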
After doing this, and tweaking my Gnome setup for another day or so, I am just as comfortable on my Linux machine as I am on my Mac.
The number of people in the cooking-enthusiast world who dismiss chemistry and assume it makes the end product worse is just a socially acceptable form of Luddism. Meanwhile, in the professional world, Sysco does 80 billion dollars of business.
Extremely interesting use case: LLMs as a modding tool to recontextualize virtual spaces. I can see this becoming a tool for artistic intervention in the same vein as plunderludic tools like Unity Hawk, which lets you run emulator save states in Unity3D.
https://plunderludics.github.io/tools/unityhawk.html
There are multiple long-form text inputs: one set provided by User A, another by User B. User A's inputs act as a prompt for User B, and then User A analyzes User B's input according to the original User A inputs, producing an output.
My system takes the User A and B inputs and produces the output with more accuracy and precision than User As do, by a wide margin.
Instead of trying to train a model on all the history of these inputs and outputs, the solution was a combination of goal->job->task breakdown (like a fixed agentic process), and lots of context and prompt engineering. I then test against customer legacy samples, and inspect any variances by hand. At first the variances were usually system errors, which informed improvements to context and prompt engineering, and after working through about a thousand of these (test -> inspect variance -> if system mistake improve system -> repeat) iterations, and benefiting from a couple base-model upgrades, the variances are now about 99.9% user error (bad historical data or user inputs) and 0.1% system error. Overall it took about 9 months to build, and this one niche is worth ~$30m a year revenue easy, and everywhere I look there are market niches like this... it's ridiculous. (and a basic chat interface like ChatGPT doesn't work for these types of problems, no matter how smart it gets, for a variety of reasons)
So to summarize:
Instead of training a model on the historical inputs and outputs, the solution was to use the best base model LLMs, a pre-determined agentic flow, thoughtful system prompt and context engineering, and an iterative testing process with a human in the loop (me) to refine the overall system by carefully comparing the variances between system outputs and historical customer input/output samples.
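In pseudo-Python the skeleton is something like the sketch below; every name is a stand-in, since the real job/task breakdown and prompts are the valuable (and proprietary) part:

    # Bare sketch of the shape of the system; each function body is a
    # trivial placeholder for proprietary prompts and breakdowns.

    def plan_jobs(user_a_inputs: str) -> list[str]:
        # goal -> jobs: a fixed, predetermined breakdown, not model-chosen
        return ["job-1", "job-2"]

    def plan_tasks(job: str) -> list[str]:
        # job -> tasks: likewise predetermined
        return [f"{job}/task-1", f"{job}/task-2"]

    def llm_call(task: str, context: str) -> str:
        # stand-in for a base-model call with an engineered system
        # prompt and curated context for this specific task
        return f"<output of {task}>"

    def run_pipeline(user_a_inputs: str, user_b_inputs: str) -> str:
        parts = []
        for job in plan_jobs(user_a_inputs):
            for task in plan_tasks(job):
                parts.append(llm_call(task, user_b_inputs))
        return "\n".join(parts)

    def variances(legacy_samples: list[dict]) -> list[dict]:
        # test -> inspect each variance by hand -> classify it as a
        # system error (improve prompts/context) or a user/data error
        return [s for s in legacy_samples
                if run_pipeline(s["user_a"], s["user_b"]) != s["expected"]]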
Eco: Global Survival (https://play.eco/) bypassed the distortion problem entirely by using an undistorted flat voxel grid and rendering the globe view as a torisphere.
It still has the tradeoff of making travel close to the center take longer than it would on a sphere (worked around by limiting diggable height), but I find it a more elegant solution.
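The reason the flat grid maps onto a torus with no distortion is that a grid wrapping in both axes already has toroidal topology; a sketch of the standard embedding (radii are arbitrary and have nothing to do with Eco's actual code):

    # map fractional grid coordinates u, v in [0, 1) onto a torus surface
    from math import tau, cos, sin

    def torus_point(u, v, R=2.0, r=1.0):
        x = (R + r * cos(tau * v)) * cos(tau * u)
        y = (R + r * cos(tau * v)) * sin(tau * u)
        z = r * sin(tau * v)
        return (x, y, z)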
I suppose life is very different inside the citadel. You get curated and triaged feedback from users, Tim Cook doesn't really have opinions about usability and design choices, so there's no one in charge of the classroom.
The reality is in spite of nice touches like call filtering, software quality and usability are both clearly going down.
And Apple's moat, which is a combination of ecosystem lock-in and graphic design, is threatened from one side by AI and from the other by whatever Liquid Glass is supposed to be.
The ecological and societal costs of data centers are hidden from the FAANG companies. It's very important to be well informed about it so society can regulate it. This podcast series, "Data Vampires" is really informative about the subject:
https://www.youtube.com/playlist?list=PLm-sqZXTqq9oIG_d0P7aT...
You can find it in your favorite podcast player. Everybody should listen to it.
Cursive as taught in schools today is useless at best and dangerous for your health at worst.
The cursive that made the world run between 1850 and 1925 was called business penmanship and it lets you write at 40 words per minute for 14 hours every day for decades on end without pain or injury.
>following lessons will make of you a good penman, if you follow instructions implicitly. The average time to acquire such a handwriting is from four to six months, practicing an hour or so a day. Practice regularly every day, if you want the best results. Two practice periods of thirty minutes each are better than one period of sixty minutes.
After two months I can comfortably write at 20 words per minute for four hours without stopping.
The YouTube channel of ADHD science researcher Russell Barkley gave me the push to get a diagnosis in my last year of undergrad, and it was like lightning to see all my symptoms since childhood laid out in the context of the underlying brain science. He does a lot of debunking of bad research too. Great channel.
"Little Signals considers new patterns for technology in our daily lives. The six objects in the series keep us in the loop, but softly, moving from the background to foreground as needed.
Each object has its own communication method, like puffs of air or ambient sounds. Additionally, their simple movements and controls bring them to life and respond to changing surroundings and needs."
I've been wanting to build these since the project came out, but never found the time. Has anyone else here built them with success? I'd love to hear your story about how you used them!
Check out SmolXP [1]. The one I set up had a 340MB qcow2 image, took about 10 seconds to boot to desktop with qemu, and used only 58MB of RAM idling on the desktop.
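For reference, booting it was just a plain qemu invocation along these lines (image name assumed; the flags are standard qemu ones):

    qemu-system-i386 -enable-kvm -m 256 -hda smolxp.qcow2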
There's actually a very good reason to implement a delay in switching submenus.
Recent versions of Apple's Human Interface Guidelines don't mention it, because those decisions are baked into the toolkit and not under the control of application designers, but the earlier editions of Apple's guidelines went into some detail about why and how pop-up submenus were delayed.
1995 edition of Macintosh Human Interface Guidelines:
>pp. 79: Hierarchical menus are menus that include a menu item from which a submenu descends. You can offer additional menu item choices without taking up more space in the menu bar by including a submenu in a main menu. When the user drags the pointer through a menu and rests it on a hierarchical menu item, a submenu appears after a brief delay. To indicate that a submenu exists, use a triangle facing right, as shown in Figure 4-36.
The original 1987 version of the Apple Human Interface Guidelines can be checked out from the Internet Archive, and should be required reading for serious user interface designers, the same way serious art students should contemplate the Mona Lisa and serious music students should listen to Mozart. Even though it's quite dated, it's a piece of classic historical literature that explicitly explains the important details of the design and the rationale behind it, in a way that modern UI guidelines just gloss over, because so much is taken for granted and not under the control of the intended audience (macOS app designers using off-the-shelf menus -vs- people rolling their own menus in HTML, who do need to know about those issues):
>pp. 87: The delay values enable submenus to function smoothly, without jarring distractions to the user. The submenu delay is the length of time before a submenu appears as the user drags the pointer through a hierarchical menu item. It prevents flashing caused by rapid appearance-disappearance of submenus. The drag delay allows the user to drag diagonally from the submenu title into the submenu, briefly crossing part of the main menu, without the submenu disappearing (which would ordinarily happen when the pointer was dragged into another menu item). This is illustrated in Figure 3-42.
>I run into this problem all the time on the Web. Web site designers forget to incorporate a menu show delay, resulting in frustration when trying to navigate around them. For example, let's look at the navigation bar on the home page of The Discovery Channel. Hover over TV Shows, and the menu appears. Suppose you want to go to Koppel on Discovery, but instead of moving the mouse straight downward, the way you hold your arm on the desk moves the mouse in an arc that happens to swing to the right before it arcs downward. You touch TV Schedules and your navigation is screwed up. You have to start over and make sure to move the mouse exactly straight down.
You can even solve the problem with CSS and without JavaScript, by using ":hover":
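For instance, a grace period before a submenu closes can be done with a delayed visibility transition (a minimal sketch; the selectors and timing are illustrative):

    /* keep the submenu open briefly after the pointer strays, no JS */
    li .submenu {
      visibility: hidden;
      transition: visibility 0s linear 0.3s;  /* 0.3s close delay */
    }
    li:hover .submenu {
      visibility: visible;
      transition-delay: 0s;  /* but open immediately on hover */
    }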
>This is a fairly old UX concept that I haven't heard talked about in a while, but is still relevant in the case of multi-level dropdown menus. A fine-grained pointer like a mouse sometimes has to travel through pretty narrow corridors to accurately get where it needs to in a dropdown menu. It's easy to screw up (have the mouse pointer leave the path) and be penalized by having it close up on you. Perhaps we can make that less frustrating.
Set up your own WireGuard rather than Tailscale; relying on Tailscale is too much like Authy delegating AAA to a third party. (A minimal config sketch follows the list below.)
- Store your SSH public keys and host keys in LDAP.
- Use real Solaris ZFS (which works well) or stick with mdraid10+XFS, and/or use Ceph. ZoL (ZFS on Linux) bit me by creating unmountable volumes and offered zero support when their stuff borked.
- Application-notified, quiesced backups to some other nearline box.
- Do not give all things internet access.
- Have a pair (or a few) of bastion jumpboxes, preferably running one of the BSDs, like OpenBSD. WG and SSH+YubiKey as the only ways inside, both protected by SPA port knocking.
- Divvy up hardware with a type 1 hypervisor and run Kubernetes inside the guests.
- Standardize as much as possible.
- Use configuration and infrastructure management tools checked into git. If it ain't automated, it's just a big ball of mud no one knows how to recreate.
- Have extra infrastructure capacity for testing and failure hot replacements.
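On the WireGuard point above: a minimal point-to-point config really is small. A sketch with placeholder keys and addresses:

    # /etc/wireguard/wg0.conf on the server side
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    PublicKey = <laptop-public-key>
    AllowedIPs = 10.8.0.2/32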
One of the things that I found most frustrating about USB-C hubs is how hard it is to find one that actually gives you multiple USB-C ports. I have several USB-C devices but most hubs just give you one USB-C port and a bunch of USB-A ports. At most it’s 2 USB-C ports but only with the hub that plugs into both USB-C ports on my MacBook Pro (so I’m never able to get more ports than I started with). The result is I end up having to keep swapping devices. For a connector that was supposed to be the "one universal port," it's weird that most hubs assume you only need one USB-C connection. Has anyone found a decent hub with multiple USB-C data outputs?
I recently migrated to Linkwarden [0] from Pocket, and have been fairly happy with the decision. I haven't tried Wallabag, which is mentioned in the article.
Linkwarden is open source and self-hostable.
I wrote a Python package [1] to ease the migration of Pocket exports to Linkwarden.
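The gist of it is parsing Pocket's classic HTML export into link records that can then be pushed into Linkwarden. A toy illustration of the idea (not the package's actual code):

    # Pocket's classic export (ril_export.html) is an HTML bookmark list
    # of <a href=... time_added=... tags=...> entries.
    from html.parser import HTMLParser

    class PocketExport(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                a = dict(attrs)
                self.links.append({"url": a.get("href"),
                                   "tags": a.get("tags", "").split(",")})

    parser = PocketExport()
    with open("ril_export.html") as f:
        parser.feed(f.read())
    # parser.links can now be reshaped into whatever import format
    # Linkwarden accepts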
I don't want to get into politics, but to angle things slightly differently --- what technological and business structures might help change things for the better?
since it was set up as a public benefit corporation.
Similarly, there are still electric co-operatives --- how are they handling solar? Do they offer an option to use one's share of the profits to purchase solar panels and batteries?
What would be an equivalent structure for an AI company which would actually be meaningful (and since circling back to politics is inevitable, enforceable)?
"Let the problem fester until the negative externalities build up so much it overcomes the coordination problem and companies are subject to the same coercion (but through 'market forces' so it's good), one day, eventually, maybe" isn't a meaningful argument against legislation.