
We have tons of space on earth. Cooling in space would be so expensive.


Falcon Heavy is only $1,500/kg to LEO. This rate is considerably undercut here on Earth by me, a weaselly little nerd, who will move a kilogram in exchange for a pat on the head (if your praise is desirable) or up to tens of dollars (if it isn't).


In exchange for what benefit? There is literally no benefit to having a datacenter in space.


The benefit is capturing a larger percentage of the output of the sun than what hits the earth.


Can that really work? The datacentre will surely be measurably smaller than the earth.


Does your transportation system also have a risk of exploding catastrophically mid-flight? 'cause otherwise no deal. /s


Could the Rust code be transpiled to readable C?


> readable

No, because some things that are UB in C are not in Rust, and vice versa, so any codegen has to account for that and will result in additional verbosity that you wouldn't see in "native" code.


Thank you for the explanation


There’s this GitLab incident: https://www.youtube.com/watch?v=tLdRBsuvVKc


This author assumes that open sourcing a package only delivers value if it is added as a dependency. Publicly sharing code with a permissive license is still useful, and still a radical idea.


Yep. Even just sharing code under any license is valuable. I have learned much from reading implementations of code I have never run even once. Solutions to tough problems are an under-recognized form of generosity.

This is a point where the lack of alignment between the free beer crowd and those they depend on is all too clear. The free beer enthusiast cannot imagine benefiting from anything other than a finished work. They are concerned about the efficient use of scarce development bandwidth without consciousness of why it is scarce or that it is not theirs to direct. They view solutions without a hot package cache as a form of waste, oblivious to how such solutions expedite the development of all other tools they depend on, commercial or free.


Yeah if I find some (small) unmaintained code I need, I will just copy it (then add in my metrics and logging standards :)

It shouldn't be a radical idea; it is how science overall works.

Also, on the educational side, I find that in the modern software ecosystem I don't want to learn everything. Excellent new things or dominantly popular new things, sure, but there are a lot of branching paths of what to learn next, and having Claude Code whip up a good-enough solution is fine and lets me focus on less, more deeply.

(Note: I tried leaving this comment on the blog, but my phone keyboard never opened despite a lot of clicking, and on Mastodon, but I hit the length limit.)


A copyleft license is much better because it ensures the code will remain open, and in most cases large companies won't use it, so they will have to shell out the money to hire someone to do it instead.


I do agree with this, but there are some caveats. At the end of the day, it is time people invest in a project. And that is often unpaid time.

Now, that does not mean it has no value, but it is a trade-off. After about 14 years, for instance, I retired permanently from rubygems.org in 2024 due to the 100k download limit (and now I wouldn't use it after the shameful moves Ruby Central made, as well as the new shiny corporate rules I couldn't operate within anyway; it is now a private structure owned by Shopify. Good luck finding people who want to invest their own unpaid spare time into anything tainted by corporations here).


A lot of times those 5% of people go blind :/


I was trying to track down what the 5% failure looks like. The article's source for the 95% stat is here[1], which then references here[2], but I honestly can't find it. [2] refers to "Recent studies reveal that in most of the cases, the prognosis is excellent after surgery, almost 70 to 80%", which refers to here[3], which seems to refer to patient happiness(?), so I haven't the foggiest where that 5% stat comes from.

Ultimately, I have no idea what the 5% failure means or looks like or if it even is a real statistic. Maybe I'm just thick and there's an obvious link or passage I missed.

[1] https://www.ncbi.nlm.nih.gov/books/NBK559253/

[2] https://www.ncbi.nlm.nih.gov/books/NBK539699/

[3] https://pmc.ncbi.nlm.nih.gov/articles/PMC10243645/


I was shocked when I saw a 5% failure rate; I would not take those odds.


Well the alternative is to go blind due to cataracts... So those are pretty good odds.


It's probably per-eye, right?


1 in 400 go blind, 19 in 400 become pirates, I assume something along the lines of The Crimson Permanent Assurance.


I hope humans are like cyanobacteria in that, in destroying the environment, we create the substrate for something grander.


I fear that


What language do you think they should have based Mojo off of? I think Python syntax is great for tensor manipulation.


I wouldn't mind a Python flavor that has a syntax for tensors/matrices that is a bit less bolted-on in parts vs Matlab. You get used to Python and numpy's quirks, but it is a bit jarring at first.

Octave has a very nice syntax (it extends Matlab's syntax to provide the good parts of numpy broadcasting). I assume Julia uses something very similar. I have wanted to work with Julia, but it's so frustrating to have to build so much of the non-interesting stuff that just exists in Python. And back when I looked into it, there didn't seem to be an easy way to just plug Julia into Python things and incrementally move over. Like, you couldn't swap the numerics and keep the matplotlib things you already had; you had to go learn Julia's ways of plotting and doing everything. It would have been nice if there were an incremental approach.
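
For anyone unfamiliar, the numpy broadcasting being referred to looks roughly like this (a minimal sketch):

    import numpy as np

    # A (3, 1) column and a (1, 4) row combine elementwise into a
    # (3, 4) result, no explicit loop needed.
    col = np.arange(3).reshape(3, 1)
    row = np.arange(4).reshape(1, 4)
    print((col + row).shape)  # (3, 4)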

One thing I am on the fence about is indexing with '()' vs '[]'. In Matlab, both function calls and indexing use '()', which is Fortran style; the ambiguity lets you swap functions for matrices to reduce memory use, which can sometimes be nice (though all of that is possible with '[]' in Python too, as sketched below). Anyway, with something like Mojo you're wanting to work directly with indices again, and I haven't done that in a long time.
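
To sketch that swap in Python (the LazyTable class is made up purely for illustration), __getitem__ lets an on-demand object stand in for a stored matrix behind the same '[]' syntax:

    import numpy as np

    # A precomputed table and an on-demand stand-in, both indexed
    # with [], the memory-saving swap Matlab's () ambiguity allows.
    class LazyTable:
        def __getitem__(self, idx):
            i, j = idx
            return i * j  # computed when asked for, never stored

    dense = np.arange(16).reshape(4, 4)
    lazy = LazyTable()
    print(dense[2, 3], lazy[2, 3])  # 11 6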

Ultimately, I don't think anyone would care if Mojo and Python just played nicely together with minimal friction. (Think: "hey, run this Mojo code on these numpy blobs".) If I can build GUIs, interact with the OS, parse files, and interact with the web in Python to prep data while simultaneously crunching in Mojo, that seems wonderful.

I just hate that Julia requires immediately learning all the dumb crap that doesn't matter to me. Although it seems like LLMs are very good at the dumb crap, so some sort of LLM translation for the dumb crap could be another option.

In summary: all Mojo actually needs is to be better than numba- and cython-type things, with performance that at least matches C++, Fortran, and the GPU libraries. Once that happens, things like a Mojo version of pandas will be developed (and will replace things like polars).
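
For a sense of the numba baseline (this assumes numba is installed; the kernel is arbitrary and just for illustration):

    import numpy as np
    from numba import njit

    # One decorator JIT-compiles the loops to machine code; this is
    # the bar a Mojo-style tool would need to clear.
    @njit
    def rowsum(a):
        out = np.zeros(a.shape[0])
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i] += a[i, j]
        return out

    print(rowsum(np.ones((1000, 1000)))[:3])  # [1000. 1000. 1000.]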


The issue with R is that there is too much DSL. This is great for one-off analysis but makes building a cohesive large code base really difficult.


Yeah, that's def part of it. As fun as it is, there is just too much of it, and people jump for it too readily, tidyverse included.


I know about Ghost Ship, but I don't immediately associate the term "ghost ship" with that disaster. I've always thought of ghost ships as the spooky abandoned ships that float around, and I think it's a good name.


One of the best books I’ve ever read is The Making of the Atomic Bomb by Richard Rhodes. If you want an extremely in-depth history of the science and people behind the Manhattan Project, I would highly recommend reading it.


Seconded. I tell people it's several books in one, all of which are brilliantly executed:

- Biographies of the preeminent scientists of the 20th century

- A history of late 19th and early 20th century physics and chemistry. Much more technical than many history books, which is a drawback for some audiences, but probably an attraction for a lot of people here.

- A history of World War I and World War II

- A history of the engineering and operation of the Manhattan Project

Highly, highly recommended for this audience.

One caveat: I tried the audiobook and couldn't stand the narrator. Your mileage may vary, but I recommend reading it.


Don’t forget the very last chapter: a gruesome moment-by-moment portrayal of the effects of the atomic bomb on the people of Hiroshima.


> Biographies of the preeminent scientists of the 20th century

These were the only parts of the book I skimmed over or skipped. While interesting, many of them go back to the scientists' parents and childhood upbringing, which, again, is interesting, but being more interested in the science/engineering, I would skip ahead until their story was more relevant.


Learned about that book from HN, so thanks, HN. Of late, I've been reading The Alchemy of Air, which revolves around the Haber-Bosch process, and it's been a delight so far. Highly recommend it if you love a mix of non-fiction, history, and science.


> The Making of the Atomic Bomb Book by Richard Rhodes

A good book.

May I also recommend the In Our Time episode on the Manhattan Project.

https://www.bbc.co.uk/programmes/m00108h1

(The Richard Rhodes book is on the recommended reading list for this episode on the linked website, as are other very good books on the Manhattan Project worth a read.)


If you want a book that is more technical and really gives a sense of the scope of the project, I'd highly recommend The Los Alamos Primer by Serber, which was the intro lecture given to scientists when they arrived. Serber did a great job of annotating the lecture to explain each section in more accessible detail. A quick read, and well worth it.


I read and enjoyed The Making of the Atomic Bomb and Dark Sun, but another book by Rhodes made me question his veracity. <https://www.goodreads.com/review/show/4413437417>


100% agree. Also, if you liked that, try his follow-on, "Dark Sun", focusing on the fusion bomb development after the war. There is probably a much greater focus on politics, especially involving Teller.


Dark Sun is not bad, but it is definitely overshadowed by Rhodes' magnum opus.

I recommend Igniting the Light Elements for people who want a keystone piece about the early thermonuclear period. https://www.osti.gov/servlets/purl/10596 - it's an extensive thesis on the history of the early thermonuclear period, and one of the last comprehensive looks before classification fully obscured the plurality of the programs.


Thanks for posting that reference. I came to do the same after finding that thesis while searching for another book I remember reading. The book covered Wheeler's (I think it was Wheeler) work simulating the first thermonuclear device on borrowed IBM calculating machines in the basement of some place in NYC (I think it was a commercial organization), basically beginning the HPC industry. Anyway, the Fitzpatrick thesis begins by asking why it took so long for thermonuclear devices to be developed. I haven't yet had time to read to the conclusion, but presumably "not fast enough computers" is the answer.

Update: I tracked down the book. The guy was Ford, who worked for Wheeler: https://pubs.aip.org/physicstoday/article/68/7/46/415213/Bui...


The first half of this book is kind of a slog, focusing on the minutiae of the Soviets' espionage effort, which, to be fair, was the basis for the Soviets' rapid development of fission and fusion weapons. I just wasn't expecting a (rather boring) spy book. The second half is much more interesting as they get into the truly genius science and engineering of the hydrogen bomb. And boy, Teller really does come off as a complete jerk who wasted a lot of time on his preferred Super design.


The Soviets also benefited a lot from German scientists pulled out of post-WW2 Germany in their own version of a Paperclip-like program. Recommended reading: Forgotten Creators by Todd Rider. It's free and online, over 4,000 pages including references and important appendices, so one has to navigate to the chapter/section of interest.

https://riderinstitute.org/revolutionary-innovation/


This is the same Todd Rider whose PhD work at MIT (advised by the late Lawrence Lidsky) showed aneutronic fusion was unlikely to be workable. Lidsky had previously argued DT fusion wasn't going to cut it because of inherently low volumetric power density and had argued aneutronic fusion should be pursued. Between those two approaches lies lower neutronicity D-3He fusion, which may be fusion's only real hope. Helion has the lead in pursuing this approach, with a design focusing on highly efficient energy recirculation that feels informed by Rider's analysis.


His analysis of German fusion advances during WW2 is very much present in that work.


He was definitely trying to impart more of a lesson with Dark Sun.

