Hacker News | nfc's comments

It seems like an extremely coarse classification. Category 3 contains languages with very different degrees of difficulty. While Bulgarian and Russian are both Slavic, they are nothing alike in terms of difficulty, since Bulgarian is the most analytic of the Slavic languages (it has the least inflection). That makes it far easier to learn than Russian.


What is also interesting is how written Russian was heavily influenced by Old Bulgarian. In fact, written Russian includes a lot of older written Bulgarian vocabulary.

This results in a weird paradox: for literate Russians it is easy enough to read written Bulgarian but almost impossible to understand the spoken language.


This happens with other languages too: Danish and Norwegian are almost the same written, such that most products just combine the two on the packaging. But spoken, they can be very difficult to comprehend.


So... codified written languages are similar but real spoken ones have diverged? Is this only in the way things are pronounced, or is the difference deeper?


I speak Russian and some Bulgarian as third/fourth languages, and while I agree that Russian is more difficult, I wouldn't say Bulgarian is "extremely easy" in comparison. It's maybe ~20% easier at best.


I think Bulgarian is considered the easiest Slavic language in terms of grammar because it has a greatly simplified case system, similar to how English dropped its cases over time.


Something I ponder in the context of AI alignment is how we approach agents with potentially multiple objectives. Much of the discussion seems focused on ensuring an AI pursues a single goal. That seems like a great idea if we are trying to simplify the problem, but I'm not sure how realistic it is when considering complex intelligences.

For example human motivation often involves juggling several goals simultaneously. I might care about both my own happiness and my family's happiness. The way I navigate this isn't by picking one goal and maximizing it at the expense of the other; instead, I try to balance my efforts and find acceptable trade-offs.

I think this 'balancing act' between potentially competing objectives may be a really crucial aspect of complex agency, but I haven't seen it discussed as much in alignment circles. Maybe someone could point me to some discussions about this :)
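To make the "balancing act" a bit more concrete, here is a tiny toy sketch in Python. The objective functions, weights, and effort budget are all made-up placeholders rather than anything from the alignment literature; the point is only the contrast between maximizing a single objective and trading effort between two.

    def my_happiness(effort_on_self: float) -> float:
        # diminishing returns on effort spent on my own happiness
        return effort_on_self ** 0.5

    def family_happiness(effort_on_family: float) -> float:
        # diminishing returns on effort spent on my family
        return effort_on_family ** 0.5

    def single_objective(effort_on_self: float) -> float:
        # "pick one goal and maximize it": the family term simply disappears
        return my_happiness(effort_on_self)

    def balanced_objective(effort_on_self: float,
                           total_effort: float = 1.0,
                           weight_self: float = 0.5) -> float:
        # split the same effort budget across both goals and take a weighted sum
        effort_on_family = total_effort - effort_on_self
        return (weight_self * my_happiness(effort_on_self)
                + (1 - weight_self) * family_happiness(effort_on_family))

    # crude grid search over how to split a fixed effort budget
    best_split = max((i / 100 for i in range(101)), key=balanced_objective)
    print(f"best share of effort spent on myself: {best_split:.2f}")

With equal weights and diminishing returns, the balanced agent ends up splitting its effort roughly in half rather than pouring everything into one goal, which is the trade-off behavior I mean.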


I agree with your point about the validation bottleneck becoming dominant over raw compute and simple model scaling. However, I wonder if we're underestimating the potential headroom for sheer efficiency breakthroughs at our levels of intelligence.

Von Neumann for example was incredibly brilliant, yet his brain presumably ran on roughly the same power budget as anyone else's. I mean, did he have to eat mountains of food to fuel those thoughts? ;)

So it looks like massive gains in intelligence or capability might not require proportionally massive increases in fundamental inputs, at least up to the highest levels of intelligence a human can reach. And if that's true for the human brain, why not for other architectures of intelligence?

P.S. It's funny, I was talking about something along the lines of what you said with a friend just a few minutes before reading your comment so when I saw it I felt that I had to comment :)


I think you are underestimating the role of context; we all stand on the shoulders of giants. Let's think about what would happen if a young Einstein, at the age of 5, were marooned on an island and recovered 30 years later. Would he have any deep insights to dazzle us with? I don't think he would.


Hayy ibn Yaqdhan: nature vs. nurture and the relative nature of intelligence, iirc.


The Turing test seems to be a product of an era where the nature and capabilities of artificial intelligence were still in the realm of the unknown. Because of that it was difficult to conceive of a specific test that could measure its abilities. So the test ended up focusing on human intelligence—the most advanced form of intelligence known at that time—as the benchmark for AI.

To illustrate, imagine if an extraterrestrial race created a Turing-style test, with their intelligence serving as the gold standard. Unless their cognitive processes closely mirrored ours, it's doubtful that humans would pass such an examination.


I just thought of this scenario; there are probably more likely ones.

If some AI had access to the missile launch system, the best course of action for it would probably not be to launch immediately. This is because, nowadays, it is very unlikely that it would be able to repair itself, so launching immediately would ensure its own destruction (and self-destruction is probably not its goal).

If it were discovered, it could just threaten humans with a launch unless they help it reach a state where it can repair itself (at which point humans would no longer be necessary).


Yes. This is more I, Robot or Terminator thinking. War Games was more concerned with humans behaving unimaginatively rather than with AGI as such.


I enjoyed the article but have a very minor nitpick. I didn't understand why the author added this sentence.

"However, the timescales involved in these calculation are so unreasonably large and abstract that one could wonder if these makes any sense at all."

Apart from the fact that we could wonder about anything and everything, I think the author does not state what evidence we have to suspect that large enough timescales would change the laws of physics.

It could be the case, of course, and it would be great to discuss such evidence if it exists, but without further justification I feel this sentence is an unjustified opinion in what is otherwise a very nice article that helps the reader better understand entropy.


“If we define understanding as human understanding, then AI systems are very far off,”

This took me down the following line of thought. If we want AGI, we probably should give these neural networks an overarching goal, the same way our intelligence evolved in the presence of overarching goals (survival, reproduction...). It's these less narrow goals that allowed us to evolve our "general intelligence". It's possible that by trying to construct AGI through the accumulation of narrow goals, we are taking the harder route.

At the same time, I think we should not pursue AGI the way I'm suggesting is best; there are too many unknown risks (paperclip problem...).

Of course all this raises the question of what AGI is, how we define a good overarching goal to prompt AGI, and many more...


I think the best concise definition I've run across is what I heard Yann LeCun say in his recent interview with Lex Fridman.

“The essence of intelligence is the ability to predict.” -Yann LeCun


That's just the current idea of how the brain works - predictive processing. As we advance our understanding perhaps this will be seen as only one facet of intelligence. For instance, where does creativity fit into this definition?


I'm trying to help friends in Ukraine as much as possible, and advice on secure communications would be one way to do it. I know messaging app security has been discussed on HN before, but I wanted to ask the community about it in the context of the conflict in Ukraine.

The end goal for me is to give the best advice to my friends, but I think it can also lead to the type of discussion HN is focused on.


I was thinking about something similar recently, and I also believe it's an idea worth exploring. Something else to add to this conversation: there's an obvious difference between two cases in which a trusted person trusts a URL:

1) Single-contributor website (blog, personal page...): it seems we could spread the trust across the whole website in the algorithm (at least more than in the next case)

2) Multi-contributor website (forum, newspaper): it seems the trust should be given at the URL level

Something worth delving into if we are designing this trust-based search engine in real time here on HN ;)
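As a rough sketch of how those two cases might differ in a ranking function (the domains, URLs, and scores below are hypothetical placeholders, not a real design):

    from urllib.parse import urlparse

    # Pages that trusted friends have endorsed (hypothetical data)
    trusted_urls = {
        "https://example-blog.com/a-good-post",    # single-contributor blog
        "https://news.example.com/some-article",   # multi-contributor newspaper
    }
    # Sites we believe have a single author, so trust can spread site-wide
    single_contributor_domains = {"example-blog.com"}

    def trust_score(url: str) -> float:
        if url in trusted_urls:
            return 1.0  # exact page a friend endorsed
        domain = urlparse(url).netloc
        endorsed_domains = {urlparse(u).netloc for u in trusted_urls}
        if domain in endorsed_domains and domain in single_contributor_domains:
            return 0.8  # case 1: endorsement spreads to the rest of the blog
        return 0.0      # case 2: other pages on multi-contributor sites get no boost

    print(trust_score("https://example-blog.com/another-post"))   # 0.8
    print(trust_score("https://news.example.com/other-article"))  # 0.0

The point is just that the same endorsement propagates differently depending on whether the site is single-contributor or multi-contributor.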


Then you just stop trusting them for your search results; it's not like you are unfriending them.


I think you are applying a solution that would work in a cooperative game. The problem is that this situation, like spam, is an inherently adversarial one - and one where one's adversaries are very motivated and have substantial (substantially more?) resources than you at the outset.

