
I don’t think the production model will look like that; I believe it’s a wrap. I also suspect it’s a bit of a joke on the model being called the R2 (the colors and patterns are reminiscent of R2-D2).


Have you really never bought a product or service for any reason other than having seen an ad for it?

People have plenty of other ways of finding out about useful products and services. You can talk to your friends and family, or go to a store and talk to a salesperson, or look up product reviews online, or even pay for something like a Consumer Reports subscription.


Friends and family can be influenced, although I'd still trust them above anyone else. But salespeople are incentivized to lie to you (sorry, it's true). Product reviews are astroturfed by bots now. Consumer Reports, too, has been captured by industry, and is largely useless now.

When the metric is "make sales and make as much money as possible", it will be incredibly difficult to avoid bias from people with a vested interest in selling you something. This is why advertising (admittedly, mixed with our current society) is so insidious: it's very hard to find a third party that isn't trying to profit off of you buying something.


> Consumer Reports, too, has been captured by industry, and is largely useless now.

Any evidence of this?


video game "journalism", car magazines, travel reviews, restaurant reviews


So, no? Just naming a few of their products isn't evidence.


I think this is a dangerously half-true way of thinking.

Yes, there are times when hard work feels great, and it's absolutely worth seeking out this kind of work.

But any serious endeavor is going to have times that are a slog, and your ability to stick with it through the bad times will very directly dictate your ability to get back to the good times.


> But any serious endeavor is going to have times that are a slog

You misspelled "many" :-)


This is completely false; the psychiatric effects of isotretinoin are well studied and significant, with a plausible mechanism of action no less.

Many people assumed it was just the acne making people depressed, because that’s a nice, plausible explanation, but it’s verifiably wrong.


Why would banks keep giving PE firms loans for these kinds of deals if the companies inevitably collapse and default on those loans?

Not trying to defend PE here, but this narrative doesn't make sense to me.


First of all, investment banks are awash in capital thanks to 14 years of ZIRP and massive profitability. They don't like keeping cash on hand, so that means they dole it out into investments, some of which will flop.

Second, banks are the primary creditor in these deals, meaning they get paid first. They don't do these deals without ensuring that the company has enough saleable assets to ensure they get their pound of flesh. Lots of companies have billions in pension-earmarked reserves they don't have to pay out on if they declare bankruptcy. Guess who gets first dibs on that cash.

Third, they can shift the risk by selling their interest in these companies to another party. They are not stuck with it forever.


Imagine you have a goose that lays golden eggs. You could just keep selling the eggs every year, but somebody comes up to you and offers you 2 billion dollars now, and since the public market values your golden-egg business at 1.5 billion dollars, it seems fair.

It turns out that if you kill the goose there's a cache of 3 billion dollars' worth of eggs within it.

The goose is gone and everybody made money off of its demise.

---

PE is often (though not always) effective at finding undervalued companies and ensuring that the value ends up recorded on the PE firm's books.
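
To make the parable's arithmetic explicit, here is a minimal sketch in Python; the numbers are the hypothetical ones from the goose example above, not figures from any real deal:

  purchase_price = 2.0    # billions: what the buyer pays for the "goose"
  market_cap = 1.5        # billions: what the public market values the business at
  asset_value = 3.0       # billions: the cache of "eggs" recovered by liquidating it
  print(purchase_price - market_cap)   # 0.5B premium the seller receives over market value
  print(asset_value - purchase_price)  # 1.0B gain the buyer books by killing the goose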


Because it sometimes works. But it also sometimes destroys the companies.

But “works” here just means making the PE firm richer in the short term, not actually improving the company in the long term. That short-term thinking leads to many impractical decisions that have caused bankruptcies.


SWE salaries are a massive cost. Improving productivity is one way of offsetting that cost.

In a lot of businesses you get praise and look important if you’re responsible for leading a large group of highly paid employees, more so than if you have a smaller team.

Thus the motivation is frequently to spend as much money as possible, not to improve efficiency.

If you improve efficiency then maybe you just get your team size cut and people ask hard questions about why you needed all those resources in the first place.


This is true only so long as employees are actually producing things. Case in point: I'm working for a tiny company building an internal tool for a very very large company because their internal team isn't getting the job done. Hard questions are being asked, but not of us.

Maybe things were different in the days of zero APR free money being thrown left and right at companies to keep growing, but I don't think we'll see a return to that any time soon.


The trouble is that making progress in leaps is often mutually exclusive with being productive in the short term. It’s hard to think big and plan long-term when you’re constantly overwhelmed with what’s in front of you.

Slow Productivity by Cal Newport talks about this trade-off extensively and provides interesting points of reference: famous historical figures who achieved incredible things in ways that would seem slow and lazy by modern standards.


But that is part of my point. This is not actually the conflict. Progress by leaps is often only possible in places that are also making repeated incremental progress.

It is tempting to think of this in terms of sports. As an easy example, home runs have a larger impact for teams that are good at getting people on base. Of course, you can argue that baseball has a ceiling on how much you can gain from a single home run, which is not true for most businesses.

But even sports somewhat miss one of the main things that is hard to communicate. What feels like small progress is often needed just to stay afloat. I suppose the sports nature of it would be that you have an offense and a defense, usually. In business, that daily short term progress would be the defense.

It is frustrating, because we do want to focus on the big ideas. But so many of the big ideas needed a TON of little ideas around them to be viable. And by nature, when we discuss one as a thing that we want, we almost necessarily ignore the other. We can really only focus on one thing at a time.


I disagree that the current generation of AI has "solved" artistic fields any more than it's solved math or programming.

Just as an LLM may be good at spitting out code that looks plausible but fails to work, diffusion models are good at spitting out art that looks shiny but is lacking in any real creativity or artistic expression.


> "looks shiny but is lacking in any real creativity or artistic expression."

My experience with that is that artistic milieus now sometimes even explicitly admit that the difference is who created the art.

"Human that suffered and created something" => high quality art

"The exact same thing but by a machine" => soulless claptrap

It's not about the end result.

A lot could be written about this but it's completely socially unacceptable.

Whether an analogous thing will happen with beautiful mathematical proofs or physical theories remains to be seen. I for one am curious, but as far as art is concerned, in my view it's done.


Truly great art, the kind that expands the field of artistry and makes people think, requires creativity; if you make something that's just a rehashing of existing art, that's not truly creative, it's boring and derivative.

This has nothing to do with whether a human or AI created the art, and I don't think it's controversial to say that AI-generated art is derivative; the models are literally trained to mimic existing artwork.


Creativity in AI art production is a fancy term for temperature that adds no semantic value.

Your "creativity" is just "high temperature" novel art done by the right person/entity.

This was something already obvious to anyone paying attention. Innovation from the "wrong people" was just "sophomoric", derivative or another euphemism, but the same thing from the right person would be a work of genius.


I think you might be missing the point of the article: the study being cited isn't trying to establish the existence of a "language brain" or a "math brain", that's just the way the headline editorialized it to help people understand the conclusions.

The conclusion of the study was that linguistic aptitude seemed to be more correlated with programming aptitude than mathematical aptitude, which seems fairly interesting, and also fairly unconcerned with which specific physical regions in the brain might happen to be involved.


I understood it.

  > The conclusion of the study was that linguistic aptitude seemed to be more correlated with programming aptitude than mathematical aptitude
And this is what I'm pushing back against and where I think you've misinterpreted.

  > They found that how well students learned Python was mostly explained by general cognitive abilities (problem solving and working memory), while how quickly they learned was explained by both general cognitive skills and language aptitude.
I made the claim that these are in fact math skills, but most people confuse math with arithmetic. Math is a language. It is a language we created to help with abstraction. Code is math. There's no question about this. Go look into lambda calculus and the Church-Turing Thesis. There is much more in this direction too. And of course, if you're able to see the abstraction, there's a clear connection tying it all together.
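
As a small illustration of the lambda calculus point, here is a sketch of Church numerals in Python, where arithmetic is expressed purely as functions (the encoding is standard; the snippet is only illustrative):

  zero = lambda f: lambda x: x                       # Church numeral 0
  succ = lambda n: lambda f: lambda x: f(n(f)(x))    # successor
  add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
  to_int = lambda n: n(lambda k: k + 1)(0)           # decode to a native int for display
  two = succ(succ(zero))
  three = succ(two)
  print(to_int(add(two)(three)))                     # 5, computed with nothing but functions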


> Math is a language.

Language is not math, therefore math is not language.


Logic doesn't follow.

There is no problem with A -> B ∧ B -/-> A

Here's an example. "I live in San Francisco" would imply "I live in the US". But "I live in the US" does not mean "I live in San Francisco".

Here's a more formal representation of this: https://en.wikipedia.org/wiki/Bijection,_injection_and_surje...
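
A tiny sketch of that asymmetry with toy sets (the names are made up purely for illustration):

  san_francisco = {"alice", "bob"}                    # made-up residents
  united_states = {"alice", "bob", "carol", "dave"}   # made-up residents
  print(san_francisco <= united_states)    # True:  living in SF implies living in the US
  print(united_states <= san_francisco)    # False: the reverse does not hold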


The word "is", maps to the logical "equals" operator. I agree with the example, but I don't agree it is relevant. There is no implies operator.

The statement "Math is Language", where A is Math and B is Language, maps to the logical assertion: "A = B".

If we are going to be really twisty and non-standard, we could interpret the English "is" to be "is an equivalence class of", which would map to your example pretty well: language is indeed an equivalence class of math, but math is not an equivalence class of language. Though nobody is talking about an implies operator or equivalence classes here. It's an "is" relationship, logical *equals*.


> The word "is", maps to the logical "equals" operator.

It very obviously doesn't. A square is a rectangle. seadan83 is (probably) a mammal. Math is a language.


You point out the "is a" relationship, not the "is" relationship; they are different. [0]

Find examples with two singular nouns and just the word 'is'.

The phrase in question, 'Math is language', is an example, and something like 'food is love' is too. I concede you could interpret those last few sentences with poetic license to be read more like "A is a form of B" or "A is a B" - though that is not what was written, and this is not a place to expect that much poetic license.

*edit*: a minute later, thought of a good example. "ice is water". True that "ice is a form of water", but strictly speaking no, "ice is not water". I'll concede there could exist an implied "is a", or an implied "is a form of", but that is poetic license IMO.

[0] Google AI summarized it pretty well: google 'logical "is a" vs logical "is"'

> In logic, "is" typically represents an equality relation, while "is a" (or "is of the type") represents an inclusion relation. "Is" indicates that two things are the same or identical, while "is a" indicates that one thing is a member of a larger class or set of things


> You point out the "is a" relationship, not the "is" relationship, they are different.

Well, what you reacted to was, let me copy'n'paste, "Math is a language". It was you who insisted that "is" in this sentence maps to "equals" relation, so thanks for agreeing that you were wrong.


I'm reacting to: "Math is language. 'Everything' is language. Language is the image of reality."

There are other discussions which say:

- Math is a subset of language, surely

- It's easily argued that languages are subsets of math.

Given that context, the distinction seems to be very important.

I find the following idea (paraphrasing) to be very interesting: "not only is math a subset of language, but language and math are equal sets." I also think it's not true, but am curious how a person would support this assertion. So, my challenge is: because the logical "is" relationship is symmetric and symmetry does not hold here, how can this be true? The most satisfying answer has been (paraphrasing) "cause I'm using non-precise language and you should just infer what I meant." Which is fine, I guess.


I literally copied "Math is a language." from your quote that started this subthread. Nobody here has typed "Math is language" - except you. Just open https://news.ycombinator.com/item?id=43873113, press CTRL+F and see for yourself. I can't fathom how can you still deny being so obviously wrong.


Honestly I don't think his point even stands. We were using English to communicate and English doesn't have the strict rules of mathematics. That's literally why we created math (which I'll gladly call "a class of languages"). He's right, "is" maps to "equivalent" but he's also wrong because "is" also maps to "subset" and several other things. "Is" is a surjection.

The problem here all comes down to seadan83 acting in bad faith and using an intentional misinterpretation of my words in order to fit them to their conclusion. I'm not going to entertain them more because I won't play such a pointless game. The ambiguity of written and spoken language always allows for such abuse. So either they are a bad-faith actor "having fun" (trolling), finding intentional misinterpretations to frustrate those who wish to act in good faith, or they are dumb. Personally, I don't think they're dumb.


> We were using English to communicate and English doesn't have the strict rules of mathematics.

Agree.

> He's right, "is" maps to "equivalent" but he's also wrong because "is" also maps to "subset" and several other things. "Is" is a surjection.

I agree. So, why can't either interpretation be valid? Perhaps, because one is obviously not true? Yet, it seemed like there was a clarification that the obviously not true relationship was the intended one!!!

Godelski previously wrote: "Coding IS math. Not 'coding uses math'."

I interpreted that clarification to mean you intended "is" to be a strict "is". Particularly given the other context and discussion of "is a" in other threads. I suspect now you were perhaps emphasizing "uses a" vs "is a", rather than "uses a" vs "is". Not a satisfying conclusion here. It would be a lot more interesting if the precision could have been there and had we been able to instead talk about whether all coding languages form an abstract algebra or not. Or perhaps use that line of reasoning to explain why all coding is a form of math. That would have been far more interesting..


Thinking about this a bit more, I think I can refute your statement that coding "is" math rather than "uses" math.

I'm sorry the conversation got so caught up in pedantry.

Previously I would have quite readily agreed that at least "coding is a subset of math"; now I'd only agree in the sense that coding is applied math, just like physics is applied math.

So, it does seem to be clearly a 'uses' relationship, and I'll support the assertion. To explain, coding is the act of composing a series of boolean expressions (governed by boolean algebra) to produce a desired output from a given input. To really explain, code is translated to assembly, which is then translated to binary, which then directly maps to how electrical signals flow out of CPU registers into a series of logical circuits. Assuming no faulty circuits, that flow is completely governed by boolean algebra. We therefore use boolean algebra to create our programs: we define a series of boolean operations to achieve a certain goal. We are _using_ boolean algebra to arrange a series of operations that maps a given set of inputs to a desired output. In the colloquial sense, coding is applied math, though it is not pure math. We use boolean algebra to create our programs; the programs are not boolean algebra themselves, but an application of it.
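
As a small sketch of that claim, assuming nothing beyond standard boolean operators, here is a one-bit full adder written purely as boolean expressions and checked against ordinary integer addition:

  def full_adder(a, b, carry_in):
      sum_bit = a ^ b ^ carry_in                  # XOR chain gives the sum bit
      carry_out = (a & b) | (carry_in & (a ^ b))  # AND/OR give the carry bit
      return sum_bit, carry_out
  for a in (0, 1):
      for b in (0, 1):
          for c in (0, 1):
              s, cout = full_adder(a, b, c)
              assert 2 * cout + s == a + b + c    # matches ordinary integer addition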

Now, tying it all back to the article and its implications: the data collected indicated that the language parts of the brain are more responsible for whether we are able to learn programming. That seems to imply that the math part of programming is so far abstracted away that the parts of the brain used for math are no longer the most salient.

I wonder how the experiments and results in the article would have gone had the topic been electrical circuits and electrical engineering, which is far closer to the underlying math than coding.


It's such an absurd thing to argue about that I just assumed that some massive brainfart happened there. It happens to everyone, not everyone doubles down on it though.


Oh, they did it much more than twice: https://news.ycombinator.com/item?id=43873381

But then again, isn't a good portion of this thread non-mathematicians arguing about what math is? I really thought ndriscoll put it succinctly[0]

  > It's like trying to argue about the distinction between U(1), the complex numbers with magnitude 1, and the unit circle, and getting upset when the mathematicians say "those are 3 names for the same thing". 
I fear the day some of these people learn about Topology.

[0] https://news.ycombinator.com/item?id=43882197
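
For anyone who wants to poke at the U(1) example concretely, here is a quick numeric sketch using ordinary complex arithmetic: points e^(i*theta) all have modulus 1, and multiplying two of them stays on the unit circle with the angles adding, which is the group structure being named three ways:

  import cmath
  z1 = cmath.exp(1j * 0.7)                   # a point on the unit circle
  z2 = cmath.exp(1j * 2.1)                   # another one
  print(abs(z1), abs(z2), abs(z1 * z2))      # all ~1.0: closed under multiplication
  print(cmath.phase(z1 * z2), 0.7 + 2.1)     # the angles simply add (mod 2*pi)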


> But then again, isn't a good portion of this thread non-mathematicians arguing about what math is?

No, a good chunk is clarification of "WTF do you mean?"

The abstract arguing, I suspect, we all find uninteresting and absurd. Let's get to the substance here.

The article states there is evidence that the math-related regions of the brain are not nearly as heavily used when coding as the language regions are. The "mathematicians" seem to be arguing that this can't be true because coding and math are so closely related.

This is why the article and evidence are interesting. Coding and math are clearly and very closely related in many ways. Yet the way the brain handles and interprets coding is more akin to pure language than to pure math.

Which I suppose makes it all the more interesting that math, language, and coding are so related, yet (per the evidence in the article) the brain does not see it that way.


The point is that linguistic aptitude _is_ math aptitude, and vice versa.

From my experience, my ability to articulate myself well is bound up with my ability to abstract and detect patterns. It is the same thing I apply to crafting software, the same thing I apply to creating visual art.

I think high-cognitive-ability people segregating themselves into artsy vs. mathy camps has more to do with their experiences in their formative years.


I will never buy another Tesla again for political reasons, but regarding reliability: their new models have always had reliability problems, but then reliability has always gotten much better within a year or so.

I don't know if the Cybertruck will follow the same pattern, or if the whole company has jumped the shark, but if we're looking for non-political opinions I would not necessarily write them off on quality issues alone.

