
I think the chair is the most important part. It looks so comfortable.


Exactly. Is there any benefit to running MySQL instead of SQLite for WordPress? Feels like the default DB should have been SQLite all along.


When did this happen? Is it new?


Are they using four photos or more?


I don't think so, but I don't have a source. Please correct me if I'm wrong: https://news.ycombinator.com/item?id=30810885


I'm guessing at least six photos, judging from the six tripods.

(They could be stands for lights, but like I said, just guessing.)


This should be improved further. Add a scanner to read the handwritten answer and post it as the response.


https://github.com/tezc/sc/tree/master/map

For those who are interested in faster hashmaps: I tried a bunch of hashmaps, and this one performed better than the others. This is for C; maybe C++ has better hashmaps.
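If it helps to see the general shape, here's a minimal open-addressing lookup in C. This is a generic illustration of the technique this style of map uses, not the sc API (names and layout are my own):

    #include <stddef.h>
    #include <stdint.h>

    /* Generic open-addressing map with linear probing (illustrative only). */
    struct entry {
        uint64_t key;
        uint64_t value;
        int used;
    };

    struct map {
        struct entry *slots;
        size_t cap;   /* power of two, so we can mask instead of using modulo */
    };

    /* Returns 1 and writes *out on hit, 0 on miss.
       Assumes the table is never completely full, so probing terminates. */
    static int map_get(const struct map *m, uint64_t key, uint64_t *out)
    {
        size_t i = (size_t) (key * 0x9E3779B97F4A7C15ULL) & (m->cap - 1);

        /* Probe consecutive slots: cache-friendly, no pointer chasing,
           which is a big part of why this style of map is fast. */
        while (m->slots[i].used) {
            if (m->slots[i].key == key) {
                *out = m->slots[i].value;
                return 1;
            }
            i = (i + 1) & (m->cap - 1);
        }
        return 0;
    }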


I would pay for a search engine service if it blocked paywalls, bloated sites, and fake bullshit clickbait content. It's time-consuming to find what you're looking for with Google. Unfortunately, there is no alternative.


How are they going to make money and support the development?


> And how much impact does it have on runtime performance? Well even computing a sum for a relative small number 1000, according to cppbench, the signed version is 430X faster.

Although you can find this kind of example in small code snippets, the question is what the impact is on the overall program. Nowadays, I'd guess it has almost no impact for most programs, because memory access patterns, system call overhead, operating system interaction, etc. have a much bigger impact on overall performance than the optimizations enabled by undefined behavior.
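For reference, the quoted 430x figure presumably comes from something shaped like the following micro-benchmark (my reconstruction, the original code isn't shown here): with a signed index the compiler may assume the counter never wraps and can reduce the loop to a closed form, while defined unsigned wraparound blocks that.

    /* My reconstruction of the kind of benchmark quoted above, not the original. */
    long sum_signed(int n)
    {
        long s = 0;
        for (int i = 0; i < n; i += 3)   /* signed overflow is UB, so the compiler
                                            may assume i never wraps and can turn
                                            this into a closed-form expression */
            s += i;
        return s;
    }

    long sum_unsigned(unsigned n)
    {
        long s = 0;
        for (unsigned i = 0; i < n; i += 3)  /* unsigned wraparound is defined, so
                                                the same transformation is not
                                                always valid */
            s += i;
        return s;
    }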


You are wrong.

Optimizations that directly exploit a specific undefined operation are only a negligible part of the performance benefit; the real value of UB is that the compiler gets to assume it never happens.

For example, consider the fact that array access out of bounds is UB. Because of this a compiler can assume (without proof) that all accesses are actually going to be in range. This enables a boatload of loop optimizations.

All non-trivial optimizations done by a compiler usually rely on assumptions like this.
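A concrete sketch of the array case (mine, for illustration): since an out-of-bounds access would be UB anyway, the compiler may assume every access below stays inside the objects the pointers refer to, which lets it strength-reduce the indexing and unroll or vectorize without extra checks.

    /* Illustrative only: the in-bounds assumption is what licenses the
       usual loop transformations here. */
    void scale(float *dst, const float *src, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = 2.0f * src[i];   /* assumed in bounds; an out-of-bounds
                                         access would be UB, so the compiler
                                         doesn't have to reason about it */
    }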


The problem has always been that in practice any nontrivial codebase had UB somewhere (invalidating the entire program!) and diagnosing any particular instance was generally painful until recently. Compilers didn't point most things out, sanitizers didn't exist, and prior to 2011, I don't think there was even a list of UB in C besides the entire standard. C++ is still largely in that position AFAIK.

It's a complete disaster on all sides.
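(The "until recently" part: UBSan, enabled with -fsanitize=undefined in both GCC and Clang, now catches a fair amount of this at runtime. A minimal example:)

    /* Build with: cc -g -fsanitize=undefined overflow.c */
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int x = INT_MAX;
        x += 1;                 /* signed overflow: UBSan reports this at runtime */
        printf("%d\n", x);
        return 0;
    }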


Invalidating the entire program is theoretically correct, but not really a useful statement.

In practice, weird miscompilations due to UB are just slightly more difficult to debug than your usual segfault. You can generally keep reducing your problem to localize the issue in the code.

Also, such issues are not very common, because the value obtained from a UB operation is usually nonsense (shifting past the bit width, an out-of-bounds array element, etc.), so a compiler switching things around is just garbage-in-garbage-out. It is of course a serious issue if a program actually depends on such a value for a crucial operation; that's how you get exploits, with or without the compiler doing something clever.
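For instance (my own toy example of the garbage-in-garbage-out cases mentioned above):

    #include <stdio.h>

    int main(void)
    {
        int x = 1;
        int r = x << 40;        /* shift past the width of int: UB, result is nonsense */

        int a[4] = {1, 2, 3, 4};
        int i = 7;
        int y = a[i];           /* out-of-bounds read: UB, whatever happens to be there */

        printf("%d %d\n", r, y);
        return 0;
    }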


All I can do is point out that very big names in CS (including Linus) disagree with you: http://www.yodaiken.com/wp-content/uploads/2018/05/ub-1.pdf

This is probably because they've seen the effects of the footgun firsthand. They also recall what C/C++ looked like before the standards bodies made exploiting UB in compilers open season. There's also a reason why Rust doesn't have any UB in its safe dialect, even though it sits on infrastructure (LLVM) capable of most/all of the same optimizations.

There was in fact a direct security issue in the Linux kernel caused by the compiler exploiting UB, where without that optimization there would have been no security issue. If I recall correctly, the code looked something like:

    some_value = ptr->value;     /* dereference happens first */
    if (!ptr) { return; }        /* null check comes after the dereference */
This worked correctly in all cases before the compiler added the optimization. It worked less well afterwards: dereferencing a null ptr is UB, so the compiler was free to conclude that ptr must be non-null and elide the !ptr check that followed. It also wasn't immediately obvious the code was broken, because there was no diagnostic nor any indication that upgrading the compiler would suddenly elide the check. The value-add of such optimizations at scale, versus the correctness issues exploiting them causes, is questionable.
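In other words, the compiler is allowed to treat the snippet above roughly as if it were written like this (my paraphrase, not the actual kernel source):

    some_value = ptr->value;   /* dereference => compiler infers ptr != NULL */
    /* if (!ptr) { return; }      check elided as provably dead */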

The problem wasn’t that it’s not fixable. It’s that it takes time to find and you may not even find it until after it’s being exploited.


That code is buggy if ptr can be null. There isn't an especially rigorous distinction between a bug and a vuln. So saying "this was definitely a safe bug and the compiler elevated it to a vuln" can be a thing in very specific situations, but it also isn't generalizable in any way.

Linus is an important figure, but there are plenty of world experts on the other side of this argument. And there is even a third side that states that not only should the compiler avoid optimizations based on UB assumptions, C implementations should instead function more like a virtual machine for a PDP-11 and have everything under the sun be defined (at great cost of performance).

If you take an especially pessimistic view inside the compiler, then really basic stuff becomes almost impossible. A store through a pointer is almost equivalent to a full program havoc: it invalidates almost all the facts the compiler knows and prevents virtually every optimization near that write. Heck, technically this can interfere with things like vtables and invalidate any sort of devirtualization, which is hugely valuable for performance.
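A small sketch of what I mean (my example): after the store through p, the compiler can no longer assume it knows the value of anything p might alias, so it has to re-read it.

    int counter;

    int f(int *p)
    {
        counter = 1;
        *p = 7;            /* p may legally alias &counter, so ...          */
        return counter;    /* ... the compiler must reload counter here
                              rather than just returning 1 */
    }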


No, unfortunately in kernel land this isn't buggy. It's perfectly "valid" to dereference null. You'll read garbage but it won't panic & the null check on the following line prevents any issues (assuming the compiler isn't exploiting UB optimizations).

Sure. I agree with you in principle that it's cleaner to have the nullptr check first. However, it's also important to remember that UB, and the compiler optimizing around it, is a relatively "new" phenomenon, and the compiler going out of its way to punish you for it (for a perf improvement that likely doesn't matter in many/most cases) seems punitive & not helpful to most C/C++ codebases (with the exception of some heavy math or HFT applications that could probably be better served by their own language/dialect). IIRC the standard was relaxed to allow this in C99 & the consequences weren't well understood until about 10-15 years later, once compiler authors realized what the standard was allowing them to do. This is also the most innocuous case; there are plenty of even more subtle issues that UB can cause.

While I agree there can be nuance in viewpoints, I'm not sure what the three you're talking about are. Generally there's the pro-UB camp, which favors optimizing around these assumptions & is largely populated by compiler authors & the "performance at all costs" crowd. On the other side, I'd say Chris Lattner, Linus & DJ Bernstein hold a largely similar point of view, that UB was a bad decision by the C/C++ standards bodies, & probably only differ on the solution (Chris = "switch to a different language", Linus = "give me back the behavior before UB was introduced", DJB = "define all current UB").

Personally, while I'm a fan of performance, I'm not convinced that UB has proven enough of a performance gain that it wouldn't be better off sitting behind a flag like `ffast-math`. Most software would benefit from removing most of the UB-based optimizations, & it's not clear to me that the performance impact would be all that significant.


One of my least favorite bugs came from a loop that iterated from *p to *(p+n) (exclusive). For n=0 and values of p other than null, the loop body never executes. For n=0 and p=null, the loop body is allowed to execute, because null+0 is allowed to be any value.
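The shape of the loop, roughly (my reconstruction, with made-up names):

    #include <stddef.h>

    long sum(const int *p, size_t n)
    {
        long s = 0;
        for (const int *it = p; it != p + n; ++it)  /* p + n is UB when p is NULL,
                                                       even for n == 0 */
            s += *it;
        return s;
    }

    /* sum(NULL, 0): you'd expect the body never to run, but because NULL + 0
       is UB in C, the compiler makes no such promise. */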


Nice example! Seems solvable by printf debugging, though.


The fact that it’s fixable doesn’t mean it’s fun…


Miscompilations due to UB are normally silent and can remove checks written into the code for security purposes; e.g. checks for overflowing signed integers, or for null pointers.

They're not harder to debug. Debugging them isn't the issue. The problem is knowing that the code in the editor doesn't correspond to the code under execution.
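The canonical example of a check that can silently disappear (a sketch, not from any particular codebase):

    int bump(int x)
    {
        if (x + 100 < x)      /* intended as a wraparound check */
            return -1;        /* but signed overflow is UB, so the compiler may
                                 assume it can't happen and remove the branch */
        return x + 100;
    }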


> Because memory access patterns, system call overhead, operating system interaction etc. have much more impact on overall performance

In most media apps, the actually processing-intensive part definitely does not do syscalls or OS interaction. It's pure computation for as long as possible (and often non-parallelizable, e.g. x[i] *= x[i-1] sorts of things). Disabling those optimisations is a killer.
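The kind of inner loop I mean, as a toy example:

    /* Toy version of the hot path: a serial dependency chain, no syscalls,
       no OS interaction; just arithmetic the compiler needs to be free to
       optimize aggressively. */
    void feedback(float *x, int n)
    {
        for (int i = 1; i < n; i++)
            x[i] *= x[i - 1];   /* each iteration depends on the previous one */
    }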


I'm not so sure. While these certainly have a large impact on performance, I know that compiler optimizations have a huge impact as well; what part of that is enabled by assuming the absence of undefined behavior, I don't know.


Too much marketing. Don't get me wrong, but I'm lost among the “fast, scalable, low latency, high throughput” marketing paragraphs and images.

There are links to other websites (the company website, I guess), but it's the same there, unfortunately.

Please consider adding sections/pages covering “how did you do it”, benchmarks supporting your claims, what is different in your product compared to NDB, etc. If there are links to these somewhere, you may want to make them more visible on your front page.


Great points. Great claims require great evidence. We just haven't gotten around to building out the site yet.


The release notes for RonDB 21.04.0, found in the RonDB documentation at docs.rondb.com, list all the differences compared to NDB.


Thanks, I’ll take a look.

