hubraumhugo's comments | Hacker News

Merry Christmas HN! You're one of the few constants among the many variables in my life; please never change :)

"The Thinking Game" is an absolutely fascinating and inspirational documentary about DeepMind and Demis Hassabis: https://www.youtube.com/watch?v=d95J8yzvjbQ

Makes you really optimistic about the future of humanity :)


Discussed here if anyone's curious:

The Thinking Game Film – Google DeepMind documentary - https://news.ycombinator.com/item?id=46097773 - Nov 2025 (141 comments)


Would love to learn more about how this is built. I remember a similar project from 4 years ago[0] that used a classic BERT model for NER on HN comments.

I assume this one uses a few-shot LLM approach instead, which is slower and more expensive at inference, but so much faster to build since there's no tedious labeling needed.

[0] https://news.ycombinator.com/item?id=28596207
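
For reference, the classic setup from [0] would look roughly like this (the model here is a generic NER checkpoint as a stand-in; that project fine-tuned its own labels for book titles):

    # Sketch of the classic approach: a fine-tuned BERT model doing NER
    # via the Hugging Face pipeline. "dslim/bert-base-NER" is a generic
    # stand-in; the project in [0] hand-labeled books to train its own.
    from transformers import pipeline

    ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

    comment = "I really enjoyed The Pragmatic Programmer, highly recommend it."
    for entity in ner(comment):
        print(entity["entity_group"], entity["word"], round(float(entity["score"]), 2))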


> Would love to learn more about how this is built. I remember a similar project from 4 years ago[0] that used a classic BERT model for NER on HN comments

Yes, I saw that project; pretty impressive! Hand-labeling 4000 books is definitely not an easy task. Mad respect to tracyhenry for the passion and hard work that was required back then.

For my project, I just used the Gemini 2.5 Flash API (since I had free credits) with the following prompt:

"""You are an expert literary assistant parsing Hacker News comments. Rules: 1. Only extract CLEARLY identifiable books. 2. Ignore generic mentions. 3. Return JSON ARRAY only. 4. If no books found, return []. 5. A score from -10 to 10 where 10 is highly recommended, -10 is very poorly recommended and 0 is neutral. 6. If the author's name is in the comment, include it; otherwise, omit the key. JSON format: [ {{ "title": "book title", "sentiment": "score", "author" : "Name of author if mentioned" }} ] Text: {text}"""

It did the job quite well. It really shows how far AI has come in just 4 years.
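
If it helps anyone reproduce this: a minimal sketch of the call, assuming the google-genai Python SDK and an API key in the environment. The response handling is my assumption, and the prompt is a condensed version of the one above:

    # Hedged sketch: extract book mentions from one HN comment with
    # Gemini 2.5 Flash via the google-genai SDK.
    import json
    from google import genai

    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    PROMPT = (
        "You are an expert literary assistant parsing Hacker News comments. "
        "Only extract CLEARLY identifiable books; ignore generic mentions. "
        "Return a JSON ARRAY only; if no books are found, return []. "
        "Score sentiment from -10 (very poorly recommended) to 10 (highly recommended). "
        'JSON format: [ {{ "title": "book title", "sentiment": "score", "author": "if mentioned" }} ] '
        "Text: {text}"
    )

    def extract_books(comment: str) -> list:
        response = client.models.generate_content(
            model="gemini-2.5-flash",
            contents=PROMPT.format(text=comment),
        )
        try:
            return json.loads(response.text)
        except json.JSONDecodeError:
            return []  # occasional malformed JSON from the model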


Thanks. I now run a two-step process: first pass reads through all posts and comments to extract patterns, second pass uses those to generate the content. Should be much more representative of your full year now :)
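
A rough sketch of how that two-step flow could be structured (function shape and prompts are illustrative, not the actual implementation):

    # Hypothetical two-pass pipeline: pass 1 distills year-wide patterns,
    # pass 2 generates the wrapped content from those patterns instead of
    # from raw comments, which is where recency bias can creep in.
    from typing import Callable

    def generate_wrapped(posts: list, llm: Callable) -> str:
        # Pass 1: read everything and extract recurring themes.
        patterns = llm(
            "Extract the recurring topics, opinions, and quirks from these "
            "Hacker News posts and comments:\n\n" + "\n---\n".join(posts)
        )
        # Pass 2: generate from the distilled patterns, so no single
        # recent post dominates the result.
        return llm(
            "Using only these observed patterns, write a personalized "
            "HN Wrapped summary with roast-style humor:\n\n" + patterns
        )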

My impression was the same as the poster: it still over-indexes on a couple of recent posts.

Of course, it's possible that we've both been repeating ourselves all year long! I mean, I know I do that, I just think I've ridden more hobby horses than it picked up. :-)

It's fun, though. Thanks for sharing - a couple of my "roasts" gave me a genuine chuckle.


Grüezi! Is there a way to re-generate my wrapped?

https://hn-wrapped.kadoa.com/aschobel


It was quite different when I tried it again. Still fairly fixated on the last month, but it is definitely better.

My roasts are substantially more well done now. Well done.

Appreciate the feedback; I'll keep iterating it toward greatness. It's still a bit hit or miss, but I've made a few improvements:

- improved prompts with your feedback

- added post/comment shuffling to remove recency bias (quick sketch after this list)

- tried to fix the speech attribution errors in the xkcd
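
On the shuffling point: a tiny sketch of the idea, assuming the comments come back newest-first from the HN API:

    import random

    def sample_history(comments: list, limit: int = 200) -> list:
        # Shuffle before truncating to the model's context limit, so the
        # sample spans the whole year instead of just the newest posts.
        shuffled = list(comments)
        random.shuffle(shuffled)
        return shuffled[:limit]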


Perhaps it should also avoid putting too much emphasis on several comments on the same story: there was a story about VAT changes in Denmark where I participated with several comments, but the generator decided that I apparently had a high focus on VAT, when I just wanted to provide some clarifying context on that story. I wonder how comments are weighted: individually, or per story?

Specifically this roast:

> You have commented about the specific nuances of Danish VAT and accounting system hardcoding at least four times, proving you are the only person on Earth who finds tax infrastructure more exciting than the books being taxed.

Yeah, but I did that on the same story (i.e., in the same context).

I can't really argue with the other details it picked up, though; the VAT bit just stood out to me.
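
One way the weighting could work per story instead of per comment (purely a sketch; the field names and the 1/n scheme are assumptions, not how the site actually works):

    from collections import defaultdict

    def weight_per_story(comments: list) -> list:
        # Give each comment weight 1/n within its story, so four comments
        # on one Danish-VAT thread count about as much as a single comment
        # on any other story.
        by_story = defaultdict(list)
        for c in comments:
            by_story[c["story_id"]].append(c)
        weighted = []
        for group in by_story.values():
            w = 1.0 / len(group)
            weighted.extend((c["text"], w) for c in group)
        return weighted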


That’s a poorly written roast.

Thanks, you got a great tagline!

The xkcd should be saved and cached once generated; I'll look into the issue.

How would you improve saving?


I went to https://hn-wrapped.kadoa.com/AndrewDucker again, and got a third xkcd just now.

Actually, other than that, it works just fine. I just wanted to cache mine, for when your site gets melted by HN overload. So I did it manually.

Thanks again!


Ah, I've found the issue. Turns out I didn't account for case-insensitive HN usernames like yours :) Should be fixed now. Love your current xkcd :D
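
For the curious, that kind of fix is probably as simple as normalizing the cache key (a sketch; `cache` and `generate_xkcd` are hypothetical stand-ins):

    def get_or_create_xkcd(username: str, cache: dict, generate_xkcd) -> str:
        # HN usernames are case-insensitive for lookup, so /AndrewDucker and
        # /andrewducker must map to the same cache entry, or each visit
        # regenerates a fresh (and different) comic.
        key = username.lower()
        if key not in cache:
            cache[key] = generate_xkcd(username)
        return cache[key]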

Server is melting a bit, looking into it.

EDIT: a retry worked. Enjoy: https://hn-wrapped.kadoa.com/jaggs


Haha, thanks. Got it a bit wrong, but some bits were excellent. :)

The salt race condition comic made me laugh :D

Me too! AI is gonna put Randall out of business!

You can get your HN profile analyzed and roasted by it. It's pretty funny :) https://hn-wrapped.kadoa.com


I didn't feel roasted at all. In fact I feel vindicated! https://hn-wrapped.kadoa.com/onraglanroad


This is hilarious. The personalized pie charts and XKCD-style comics are great, and the roast-style humor is perfect.

I do feel like it's not an entirely accurate caricature (recency bias? limited context?), but it's close enough.

Good work!

You should do a "Show HN" if you're not worried about it costing you too much.



This is exactly why you keep your personal life off the internet


Pretty fucking hilarious, if completely off-topic.


That cut deep

This is great. I literally "LOL'd".


No gemini-3-flash yet, right? Has an ETA been mentioned? 2.5-flash has been amazing in terms of cost/value ratio.


I've found gemini 2.5-flash works better (for agentic coding) than pro, too.

