paulsutter's comments | Hacker News

Just make borrowings above the basis taxable as gains; it's not hard

Nonsense

There will however be a gigantic gulf between kids who use AI to learn vs those who use AI to aid learning

Objective review of Alpha school in Austin:

https://www.astralcodexten.com/p/your-review-alpha-school


> There will however be a gigantic gulf between kids who use AI to learn vs those who use AI to aid learning

yeah, but not the way you are thinking

you think the rich are going to abolish a traditional education for their kids and dump them in front of a prompt text box for 8 years

that'll just be for the poor and (formerly) middle-class kids


In the rosiest view, the rich give their children private tutors (and always have), and now the poor can give their children private tutors too, in the form of AIs. More realistically, what the poor get is something which looks superficially like a private tutor, yet instead of accelerating and deepening learning, it is one that allows the child to skip understanding entirely. Which, from a cynical point of view, suits the rich just fine...


I think what?


What is the distinction between using "AI to learn" and using "AI to aid learning?"


Imagine a tutor that stays with you as long as you need for every concept of math, instead of the class moving on without you and that compounding over years.

Rather than 1 teacher spread across 30 students, 1 teacher can effectively give each of those 30 students tutor-level attention, better addressing Bloom's 2 sigma problem, which found that students with a full-time tutor at roughly a 1:2 tutor-to-student ratio reliably ended up outperforming 98% of conventionally taught students.

LLMs are capable of delivering this outright, or providing serious inroads to it for those capable and willing to do the work beyond going through the motions.

https://en.wikipedia.org/wiki/Bloom's_2_sigma_problem (1984)
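
For reference on the "2 sigma" name: the tutored group in Bloom's study averaged about two standard deviations above the conventional-classroom mean, and under a normal curve +2 sigma sits at roughly the 98th percentile. A minimal sketch of that arithmetic (standard library only):

    from math import erf, sqrt

    def normal_cdf(z):
        # Fraction of a standard normal distribution below z
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    print(round(normal_cdf(2.0), 3))  # 0.977 -> the average tutored student
                                      # outperforms ~98% of the control class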


> Imagine a tutor that stays with you as long as you need for every concept of math, instead of the class moving on without you and that compounding over years.

I remember that when I was at university, the topics I learned best were the ones I put effort into studying by myself at home. Having a tutor with me all the time would actually have made me do the bare minimum, as there were always other things to do and I would have loved to skip the hard parts and move forward.


The tutor is available to you all the time, whenever you want to learn.

If you read the article the other post shared, I think you might be surprised to find it's exactly what you are describing.


I don't think this answers the question in the comment you're replying to.


This is absolutely not an objective review. The person who wrote this is a very particular type of person, to whom Alpha School strongly appeals. I'm not saying anything in particular is wrong with the review, but calling it unbiased is incorrect.


Calling the Alpha school "AI" or even "AI to aid learning" is a massive stretch. I've read that article and nothing in there says AI to me. Data collection and on-demand computer-based instruction, sure.

I don't disagree with your premise, but I don't think that article backs it up at all.


Watched the video - thanks

Ioannu is saying the paper's idea for training a dense network doesn't work in non-toy networks (the paper's method for selecting promising weights early doesn't improve the network)

BUT the term "lottery ticket" refers to the true observation that a small subset of weights drives functionality (see all pruning papers). It's great terminology because those winning subsets truly are coincidences of the random initialization.

All that's been disproven is that paper's specific method for creating a dense network based on this observation.


False. It really was just intended to coalesce packets.

I’ll be nice and not attack the feature. But making that the default is one of the biggest mistakes in the history of networking (second only to TCP’s boneheaded congestion control that was designed imagining 56kbit links)


TCP uses the worst congestion control algorithm for general networks except for all of the others that have been tried. The biggest change I can think of is adjusting the window based on RTT instead of packet loss to avoid bufferbloat (Vegas).

Unless you have some kind of special circumstance you can leverage, it's hard to beat TCP. You would not be the first to try.


For serving web pages, TCP is only used by legacy servers.

The fundamental congestion control issue is that after you drop to half, the window is increased by /one packet/ per round trip, which for all sorts of artificial reasons is about 1500 bytes. Which means performance gets worse and worse the greater the bandwidth-delay product (which has increased by many orders of magnitude). Not to mention head-of-line blocking etc.
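
To put numbers on that, here is a minimal sketch (idealized classic AIMD, one flow, an assumed 100 ms RTT, ignoring slow start and modern variants) of how long it takes the halved window to climb back at one ~1500-byte packet per round trip, as the bandwidth-delay product grows:

    MSS = 1500   # bytes added to the window per round trip
    RTT = 0.1    # seconds (assumed 100 ms round trip)

    for bandwidth_bps in (56e3, 10e6, 1e9, 10e9):
        bdp_bytes = bandwidth_bps / 8 * RTT       # window needed to fill the pipe
        rtts = (bdp_bytes / 2) / MSS              # +1 MSS per RTT, starting from half
        print(f"{bandwidth_bps/1e6:10.3f} Mbit/s: {rtts:10.0f} RTTs "
              f"(~{rtts * RTT:,.0f} s) to refill the window")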

The reason for QUIC's silent success was the brilliant move of sidestepping the political quagmire around TCP congestion control, so they could solve the problems in peace


TCP Reno fixed that problem. QUIC is more about sending more parts of the page in parallel. It does do its own flow control, but that's not where it gets the majority of the improvement.


TCP Reno, Vegas, etc. all addressed congestion control with various ideas, but were all doomed by the academic downward-spiral pissing contest.

QUIC is real and works great; they sidestepped all of that, just built it and tuned it, and it has basically won. As for QUIC "sending more parts of the page in parallel", yes, that's what I was referring to re: head-of-line blocking in TCP.


There is nothing magic about the congestion control in QUIC. It shares a lot with TCP BBR.

Unlike TLS over TCP, QUIC is still not able to be offloaded to NICs. And most stacks are in userspace. So it is horrifically expensive in terms of watts/byte or cycles/byte sent for a CDN workload (something like 8x as expensive the last time I looked), and it's primarily used and advocated for by people who have metrics for latency, but not server-side costs.


> Unlike TLS over TCP, QUIC is still not able to be offloaded to NICs.

That's not quite true. You can offload QUIC connection steering just fine, as long as your NICs can do hardware encryption. It's actually _easier_ because you can never get a QUIC datagram split across multiple physical packets (barring the IP-level fragmentation).

The only real difference from TCP is the encryption for ACKs.


From a CDN perspective, what's missing is that there is no kernel stack on FreeBSD / Linux, no support for sendfile/sendpage, and no support for segmentation offload entirely in hardware. So you can't just send an entire file (or a large range) and forget about it, like you can with TCP.
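
For contrast, a minimal sketch of the TCP-side path being described (Linux sendfile via Python's os module; serve_file is a hypothetical helper, not anything from a real CDN stack):

    # Serving a file over TCP: hand the kernel the file and socket descriptors
    # and let it stream the bytes (zero-copy); no userspace copy loop needed.
    # Userspace QUIC stacks have no equivalent "fire and forget" path.
    import os
    import socket

    def serve_file(conn: socket.socket, path: str) -> None:
        with open(path, "rb") as f:
            size = os.fstat(f.fileno()).st_size
            sent = 0
            while sent < size:
                sent += os.sendfile(conn.fileno(), f.fileno(), sent, size - sent)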

Some NICs, like Broadcom's newer ones, support crypto offloads, but this is not enough to be competitive with TCP / TLS. Especially since support for those offloads is not in any mainline Linux or BSD kernel.


> (second only to TCP’s boneheaded congestion control that was designed imagining 56kbit links)

What would you change here?


Normalizing for language and culture seems like the hardest part of any global survey. How is that one question translated, and are there any cultural implications?


Surprised at the downvotes to your excellent comment.

Good insight - that people dunk on the author as a cope to help the dunker feel less powerless


Meta-critical or self-critical comments on Hacker News get downvoted. It's about 1.5° above Reddit


> we won’t work on product marketing for AI stuff, from a moral standpoint

Can someone explain this?


Some folks have moral concerns about AI. They include:

* The environmental cost of inference in aggregate and training in specific is non-negligible

* Training is performed (it is assumed) with material that was not consented to be trained upon. Some consider this to be akin to plagiarism or even theft.

* AI displaces labor, weakening the workers across all industries, but especially junior folks. This consolidates power into the hands of the people selling AI.

* The primary companies who are selling AI products have, at times, controversial pasts or leaders.

* Many products are adding AI where it makes little sense, and those systems are performing poorly. Nevertheless, some companies shoehorn AI everywhere, cheapening products across a range of industries.

* The social impacts of AI, particularly generative media and shopping in places like YouTube, Amazon, Twitter, Facebook, etc., are not well understood and could contribute to increased radicalization and Balkanization.

* AI is enabling an attention Gish-gallop in places like search engines, where good results are being shoved out by slop.

Hopefully you can read these and understand why someone might have moral concerns, even if you do not. (These are not my opinions, but they are opinions other people hold strongly. Please don't downvote me for trying to provide a neutral answer to this person's question.)


I'm fairly sure the first three points are all true for each new human produced. The environmental cost vs output is probably significantly higher per human, and the population continues to grow.

My experience with large companies (especially American tech) is that they always try to deliver the product as cheaply as possible, are usually evil, and have never cared about social impacts. And HN has been steadily complaining about the declining quality of search results for at least a decade.

I think your points are probably a fair snapshot of people's moral issues, but I think they're also fairly weak when you view them in the context of how these types of companies have operated for decades. I suspect people are worried for their jobs and cling to a reasonable-sounding morality point so they don't have to admit that.


Plenty of people have moral concerns with having children too.

And while some might be doing what you say, others might genuinely have a moral threshold they are unwilling to cross. Who am I to tell someone they don't actually have a genuinely held belief?


"Please don't downvote me for trying to provide a neutral answer to this person's question"

Please note that there are some accounts that, on principle, downvote any comment talking about downvoting.


These points are so broad and multidimensional that one must really wonder whether they were looking for reasons to be concerned.


Let's put aside the fact that the person you replied to was trying to represent a diversity of views and not attribute them all to one individual, including the author of the article.

Should people not look for reasons to be concerned?


I can show you many instances of people or organisations representing diversity of views. Example: https://wiki.gentoo.org/wiki/Project:Council/AI_policy


Okay. Why are we comparing a commenter answering a question to a FOSS organization that wants to align contributors? You seem to have completely sidetracked the conversation you started.


I'm not sure it's helpful to accuse "them" of bad faith, when "them" hasn't been defined and the post in question is a summary of reasons many individual people have expressed over time.


I have noticed this pattern too frequently: https://wiki.gentoo.org/wiki/Project:Council/AI_policy

See the diversity of views.


Explanation: this article is a marketing piece trying to appeal to the anti-AI crowd.


You have to design for scale AND deploy gradually


Yes, absolutely. Knowing that it will need to get big eventually is important, but not at all the same as deploying at scale initially.


> When a system uses very short intervals, such as sending heartbeats every 500 milliseconds

500 milliseconds is a very long interval, on a CPU timescale. Funny how we all tend to judge intervals based on human timescales

Of course the best way to choose heartbeat intervals is based on metrics like transaction failure rate or latency


Top shelf would be noticing an anomaly in behavior for a node and then interrogating it to see what’s wrong.

Automatic load balancing always gets weird, because it can end up sending more traffic to the sick server instead of less, because the results come back faster. So you have to be careful with status codes.
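
A minimal sketch of that failure mode (made-up servers and numbers): a node returning instant errors looks "fastest" to a purely latency-based balancer, so it attracts more traffic unless status codes are taken into account.

    servers = {
        "healthy-1": {"latency_s": 0.120, "ok": True},
        "healthy-2": {"latency_s": 0.135, "ok": True},
        "sick":      {"latency_s": 0.005, "ok": False},  # fails fast with 500s
    }

    def pick_by_latency_only():
        # Naive: fastest responder wins, even if its "responses" are errors
        return min(servers, key=lambda name: servers[name]["latency_s"])

    def pick_healthy_by_latency():
        healthy = {n: v for n, v in servers.items() if v["ok"]}
        return min(healthy, key=lambda name: healthy[name]["latency_s"])

    print(pick_by_latency_only())     # "sick" -- the broken node gets the traffic
    print(pick_healthy_by_latency())  # "healthy-1"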


You have to consider the tail latencies of the system responding plus the network in between. The p99 is typically much higher than the average. Also, you may have to account for GC, as was mentioned in the article. 500ms gets used up pretty fast.


500ms is actually a very short interval for heartbeats in modern distributed systems. Kubernetes nodes out of the box send heartbeats every 10s, and Kubernetes only declares a node as dead when there's no heartbeat for 40s.

The relevant timescale here is not CPU time but network time. There's so much jitter in networks that if your heartbeats are on CPU scale (even, say, 100ms) and you wait for 4 missed before declaring dead, you'd just be constantly failing over.
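
The usual shape of such a detector, as a minimal sketch (the interval and miss count are illustrative values in the Kubernetes ballpark, not its actual implementation):

    import time

    HEARTBEAT_INTERVAL_S = 10.0   # how often each node sends a heartbeat
    MISSES_BEFORE_DEAD   = 4      # 4 x 10 s => declared dead after 40 s of silence

    last_seen = {}                # node name -> time of last heartbeat received

    def record_heartbeat(node):
        last_seen[node] = time.monotonic()

    def is_dead(node):
        # A node that has never reported is not (yet) considered dead here
        seen = last_seen.get(node)
        if seen is None:
            return False
        return time.monotonic() - seen > HEARTBEAT_INTERVAL_S * MISSES_BEFORE_DEAD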


Speak for your own network. On a densely interconnected datacenter network there would be no appreciable jitter.

4 x 10s heartbeats sounds like an incredibly conservative decision by whoever chose the default, and I can't imagine any critical service keeping those timeouts.


Well, it is called a heartbeat after all, not a oscillator beat :-)


    cat /proc/sys/net/ipv4/tcp_keepalive_time
    7200

That is two hours in seconds.
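
If two hours is far too long for your application, you can override it per socket rather than system-wide; a minimal Linux-specific sketch (the TCP_KEEP* socket options are not portable to every platform):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)      # turn keepalives on
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)    # idle seconds before first probe
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)   # seconds between probes
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)      # failed probes before the connection is dropped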


Language interoperability is a material question. Outputting JavaScript, Python, or C++ vs assembler/machine code has very different implications for calls to/from other languages

Is JIT also meaningless?

But ultimately, if you don't want to use a word, don't use it. Not wanting to hear a word says more about the listener than the speaker.

