Hacker News | mooktakim's comments

Teaching is a terrible example. Teaching is actually more efficient when it is decentralised, as teachers can adapt to the local environment and to changes. With centralisation you get a bad feedback loop.


Your dev setting up the local environment and getting it running is a great filter. Get rid of them if they can't.


Why didn't they send out new relays as Voyager travelled out?


Voyager's path required multiple flybys. You can't just send something on the same path later on - everything has moved.

Plus, no budget for relays.


This could be a nice UI to let you create the test visually.


I like this idea, it's even on my future roadmap: "Record browser interactions to save as ruby code"

Is there an example of a tool doing this well?


I watched the demo for a comment made above, and it looks like the gem can do what you're talking about:

> Slightly different approach, but appears to have the same overarching goals: https://github.com/bullet-train-co/magic_test
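The idea of recording interactions and emitting test code can be sketched in a few lines. This is a hypothetical illustration, not how magic_test actually works: the event shapes and the emitted Capybara-style calls are assumptions made up for the example.

```python
# Sketch: turn a recorded browser-event log into Capybara-style Ruby
# test lines. The event format and emitted calls are invented for
# illustration; a real recorder would hook into the browser driver.

def event_to_ruby(event):
    kind = event["type"]
    if kind == "visit":
        return f'visit "{event["url"]}"'
    if kind == "click":
        return f'click_on "{event["target"]}"'
    if kind == "fill":
        return f'fill_in "{event["target"]}", with: "{event["value"]}"'
    raise ValueError(f"unrecorded event type: {kind}")

def events_to_test(name, events):
    body = "\n".join("  " + event_to_ruby(e) for e in events)
    return f'test "{name}" do\n{body}\nend'

recorded = [
    {"type": "visit", "url": "/login"},
    {"type": "fill", "target": "Email", "value": "user@example.com"},
    {"type": "click", "target": "Sign in"},
]
print(events_to_test("signing in", recorded))
```

The hard part in practice isn't the code generation, it's capturing stable selectors from a live page; the mapping step itself stays this simple.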


I thought the same


Aren't LLMs already trained on the whole web? No need for RAG, in theory.


Training doesn't work like that. Just because a model has been exposed to text in its training data doesn't mean the model will "remember" the details of that text.

Llama 3 was trained on 15 trillion tokens, but I can download a version of that model that's just 4GB in size.

No matter how "big" your model is, there is still scope for techniques like RAG if you want it to be able to return answers grounded in actual text, as opposed to often-correct hallucinations spun up from the giant matrices of numbers in the model weights.
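The grounding step is easy to show in miniature. Here a toy bag-of-words cosine similarity stands in for a real embedding index (a real system would use embeddings and a vector store; everything here is a simplified sketch):

```python
# Toy RAG retrieval: score documents against a query by cosine
# similarity over word counts, then splice the best match into the
# prompt so the model answers from actual text, not memory.
from collections import Counter
import math

def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    return max(docs, key=lambda d: cosine(vec(query), vec(d)))

docs = [
    "Llama 3 was trained on 15 trillion tokens.",
    "Voyager 1 left the heliosphere in 2012.",
]
context = retrieve("how many tokens was llama 3 trained on", docs)
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
```

The point of the sketch: the model never has to "remember" the fact, because the retrieved passage is placed in front of it at answer time.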


They're only trained up to a certain point in time, so adding RAG should hypothetically allow such LLMs to access the most up-to-date information.


GPT-2 was launched in 2019, followed by GPT-3 in 2020, and GPT-4 in 2023. RAG is necessary to bridge informational gaps in between long LLM release cycles.


The important thing you learn in life is you don't know sh*t


I've been getting annoyed with Neovim and now trying out Zed.


Could we not just count the pixels and group hex values into certain colour words? DL doesn't seem like it's needed.


I was thinking the same thing, and I believe there are trade-offs with both methods. If you count and group, you have to pick your own hexcode buckets. So orange and gold are their top two colors in the example, and on our end we would just have to decide what range of hex values correlates to orange, what range correlates to gold, and what range applies to everything else. With deep learning these ranges are effectively learned, so it's more computational and feels like overkill, but I can see the benefits.

If I were at work I would probably just choose my buckets and group by the hex counts, though; that's a lot less computation and you get consistent results. If I were having fun I would fit a deep net.
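The counting approach fits in a few lines. The palette anchors below are arbitrary hand-picked values, which is exactly the trade-off described above: you have to choose them yourself.

```python
# Sketch: map each pixel to the nearest named colour and tally counts.
# The palette and its RGB anchors are arbitrary choices; picking them
# well is the part deep learning would otherwise absorb.
PALETTE = {
    "orange": (255, 140, 0),
    "gold": (255, 215, 0),
    "black": (0, 0, 0),
    "white": (255, 255, 255),
}

def nearest(pixel):
    # Euclidean distance in RGB; a perceptual space like CIELAB would
    # group colours more like a human does.
    return min(PALETTE, key=lambda n: sum((a - b) ** 2
                                          for a, b in zip(pixel, PALETTE[n])))

def colour_counts(pixels):
    counts = {}
    for p in pixels:
        name = nearest(p)
        counts[name] = counts.get(name, 0) + 1
    return counts

pixels = [(250, 150, 10), (255, 210, 5), (10, 10, 10), (240, 240, 240)]
print(colour_counts(pixels))  # one pixel lands in each bucket here
```

Deterministic, cheap, and easy to tune per application, at the cost of the buckets never being smarter than whoever picked them.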


You should repeat to avoid coupling code that is contextually different. You want to avoid too many abstractions. There's no black-and-white rule; you'll get a feel for when to repeat and when not to.
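A contrived sketch of the point: two functions that happen to look identical today but live in different contexts, so merging them into one helper would couple changes that have nothing to do with each other.

```python
# Kept deliberately separate: billing and logistics merely coincide
# today. A shared "sum_of_products" helper would force both contexts
# to change together when only one of them evolves.

def invoice_total(items):
    # Billing context: may later grow tax, discounts, rounding rules.
    return sum(i["price"] * i["qty"] for i in items)

def shipment_weight(items):
    # Logistics context: may later grow packaging weight, unit conversion.
    return sum(i["weight"] * i["qty"] for i in items)
```

When one of them changes, like adding tax to invoices, the duplication pays for itself: the other function is untouched.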


Is it good journalism to watch YouTube interviews and report on that?

I would expect they'd go first-hand to various people who know him and dig up stories from the past.


They did.

