Either you want fixed point for your minimum unit of accounting or you want floating point because you’re doing math with big / small numbers and you can tolerate a certain amount of truncation. I have no idea what the application for floating point with a weird base is. Unacceptable for accounting, and physicists are smart enough to work in base 2.
I'm pretty confident that dfp is used for financial computation, both because it has been pushed heavily by IBM (who are certainly very involved in the financial industry) and because many papers describing dfp use financial applications as a motivating example. For example, this paper: https://speleotrove.com/decimal/IEEE-cowlishaw-arith16.pdf
> This extensive use of decimal data suggested that it would be worthwhile to study how the data are used and how decimal arithmetic should be defined. These investigations showed that the nature of commercial computation has changed so that decimal floating-point arithmetic is now an advantage for many applications.
> It also became apparent that the increasing use of decimal floating-point, both in programming languages and in application libraries, brought into question any assumption that decimal arithmetic is an insignificant part of commercial workloads.
> Simple changes to existing benchmarks (which used incorrect binary approximations for financial computations) indicated that many applications, such as a typical Internet-based ‘warehouse’ application, may be spending 50% or more of their processing time in decimal arithmetic. Further, a new benchmark, designed to model an extreme case (a telephone company’s daily billing application), shows that the decimal processing overhead could reach over 90%
Wow. OK, I believe you. Still don’t see the advantages over using the same number of bits for fixed point math, but this definitely sounds like something IBM would do.
Edit: Back of the envelope, you could measure 10^26 dollars with picodollar resolution using 128 bits
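A quick sanity check of that estimate, using the unsigned __int128 type GCC and Clang provide (not standard C):

```c
#include <stdio.h>

int main(void) {
    unsigned __int128 max = ~(unsigned __int128)0;       /* 2^128 - 1, about 3.4e38 */
    unsigned __int128 dollars = max / 1000000000000ULL;  /* 1e12 picodollars per dollar */

    /* Count decimal digits of the whole-dollar range. */
    int digits = 0;
    for (unsigned __int128 d = dollars; d > 0; d /= 10)
        digits++;
    printf("whole-dollar range: %d decimal digits\n", digits);  /* prints 27, i.e. ~3.4e26 dollars */
    return 0;
}
```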
Decimal128 has exact decimal rounding rules and preserves trailing zeros.
I don’t think Decimal64 has the same features, but it has been a while.
But unless you hit the limit of 34 decimal digits of significand, Decimal128 will work for anything you would use fixed point for, and it is much faster if you have hardware support like on the IBM CPUs or some of the SPARC CPUs from Japan.
OLAP aggregate functions are one example application.
> I don’t think Decimal64 has the same features, but it has been a while.
Decimal32, Decimal64, and Decimal128 all follow the same rules, they just have different values for the exponent range and number of significant figures.
Actually, this is true for all of the IEEE 754 formats: the specification is parameterized on (base (though only 2 or 10 is possible), max exponent, number of significant figures), although there are a number of issues that only exist for IEEE 754 decimal floating-point numbers, like the exponent quantum or BID/DPD encoding.
You are correct; the problem is that Decimal64 has 16 digits of significand, while items like apportioned per-call taxes need to be calculated with six digits past the decimal before rounding, which requires about 20 digits.
Other calculations like interest rates take even more, and COBOL requires 32 digits.
As the Decimal128 format supports 34 decimal digits of significand and has emulated exact rounding, it can meet that standard.
While it is more complex, requiring ~15-20% more silicon area in the ALU plus a larger data size, it is more efficient for business applications than arbitrary-precision libraries like BigNum.
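A minimal sketch of the digit-capacity point, assuming GCC's _Decimal128 extension (ISO TS 18661-2 / optional in C23) on a target that supports it; the values are made up, and the final cast to double is only there because printing decimal floats portably needs a decimal-aware libc:

```c
#include <stdio.h>

int main(void) {
    /* ~20 significant digits: too many for Decimal64 (16 digits) to hold
       exactly, but comfortably inside Decimal128's 34 digits. */
    _Decimal128 charge = 12345678901234.567890DL;  /* six digits past the decimal */
    _Decimal128 rate   = 0.071250DL;               /* exact in decimal, not in binary */
    _Decimal128 tax    = charge * rate;

    /* Cast to double only for display; real code would use a decimal-aware
       printf (e.g. %DDf where supported) or a library like decNumber. */
    printf("approx tax: %.6f\n", (double)tax);
    return 0;
}
```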
I want signal to act as a transport bus. In particular, I want to give certain contacts permission to ask my phone for its location, so I can give my wife that ability without sharing it with Google.
Signal has solved the identity part; now encourage others to build apps on it.
(2fa via Signal would be better than SMS, too, though I know this may be controversial!)
I'm not seeing how you could draw that conclusion. The more likely explanation is that they are telling people not to build apps around it (and I assume the APIs thus aren't well designed for adoption by other apps).
> This repository is used by the Signal client apps (Android, iOS, and Desktop) as well as server-side. Use outside of Signal is unsupported.
Also, where are the test vectors? Because when I implement this, that's the first thing I have to write, and you could save me a lot of work here. Bonus points if it's in JSON and UTF-8 already, though the invalid UTF-8 in an RFC might really gum things up: hex encode maybe?
Andrew Tridgell's KnightCap did this differently: it's a network chess server, and it would dump its data to a file and re-exec. The trick here is that it would keep the (network) fds open for zero downtime. IIRC he used a Perl script called datadumper to generate the code to marshal/demarshal the structures.
This has the advantage that reboots can be handled fairly seamlessly too (though there will be reconnections then, of course).
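A rough sketch of the fd-across-exec part (not KnightCap's actual code; the flag name and state handling are made up): an fd without FD_CLOEXEC survives execve(), so the new image can pick the listening socket back up and clients never see the port close.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

/* Dump state, then replace ourselves with a fresh binary that inherits
   the listening socket. Error handling trimmed for brevity. */
static void reexec(int listen_fd, const char *self)
{
    char fd_str[16];
    snprintf(fd_str, sizeof fd_str, "%d", listen_fd);

    /* Make sure the socket is not closed on exec. */
    int flags = fcntl(listen_fd, F_GETFD);
    fcntl(listen_fd, F_SETFD, flags & ~FD_CLOEXEC);

    /* ... write the marshalled state to a file here ... */

    execl(self, self, "--listen-fd", fd_str, (char *)NULL);
    perror("execl");        /* only reached if the exec failed */
    exit(1);
}
```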
Eh, you could probably get away with it if you use BearSSL[0]. The only difficulty would be:
> These elements can be allocated anywhere in (writable) memory, e.g. heap, data segment or stack. They must not be moved while in use (they may contain pointers to each other and to themselves).
Which you could probably get around by just keeping track of offsets and using mmap.
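A small sketch of the offsets-plus-mmap idea (illustrative node type and layout, not BearSSL's actual structures; Linux/POSIX mmap, error checks omitted): keep everything in one mmap'd arena and store offsets from the arena base instead of raw pointers, so the blob can be dumped and mapped back at a different address.

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <sys/mman.h>

typedef struct {
    uint32_t next_off;   /* offset of the next node from the arena base; 0 = none */
    uint32_t value;
} node;

static char *arena;      /* base of the mmap'd region */

static node *at(uint32_t off)   { return (node *)(arena + off); }
static uint32_t to_off(node *p) { return (uint32_t)((char *)p - arena); }

int main(void)
{
    arena = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* Build a tiny linked structure addressed only by offsets. */
    node *a = at(sizeof(node));          /* skip offset 0 so it can mean "none" */
    node *b = at(2 * sizeof(node));
    a->value = 1; a->next_off = to_off(b);
    b->value = 2; b->next_off = 0;

    /* The whole arena could now be written to disk and mmap'd back at any
       address; following next_off instead of pointers keeps links valid. */
    printf("%u -> %u\n", a->value, at(a->next_off)->value);

    munmap(arena, 4096);
    return 0;
}
```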
Absolutely. Wrapping the distinguished entry point in a new structure type equipped with a thin type-safe wrapper API that covers the most common use case is the way to go.
In recent years I've come to rely on this non-initialization idiom, both because the compiler can warn for simple cases as code paths change, and because running tests under Valgrind catches it.
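For instance, a sketch of the idiom (the mode strings are invented):

```c
#include <string.h>

int parse_mode(const char *s)
{
    int mode;                              /* deliberately left uninitialized */

    if (strcmp(s, "read") == 0)
        mode = 0;
    else if (strcmp(s, "write") == 0)
        mode = 1;
    /* If a new mode is added later and this chain isn't updated, the
       compiler's -Wmaybe-uninitialized can flag the return below, and
       Valgrind will report the uninitialized use at runtime. */
    return mode;
}
```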
> cast a `struct Foo*` into a `struct Bar*` and access the Foo through it (in practice we teach this as the "strict aliasing" rules, and that's how all(?) compilers implement it, but that's not what §6.5 paragraph 7 of the standard says!)
Use the union type. Abusing it for aliasing violates the standard too, but GCC and Clang implement an extension that permits this. Alternatively, just allocate a char array and cast it as you please. Strict aliasing does not apply to char arrays if I recall.
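A sketch of the union approach (which GCC and Clang document as supported), alongside the memcpy route that is defined everywhere:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

union pun {
    float    f;
    uint32_t u;
};

/* Read a member other than the one last written: the punning that
   GCC/Clang document as supported. */
static uint32_t bits_via_union(float f)
{
    union pun p;
    p.f = f;
    return p.u;
}

/* Portable alternative: copy the representation byte by byte. */
static uint32_t bits_via_memcpy(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void)
{
    printf("%08" PRIx32 " %08" PRIx32 "\n",
           bits_via_union(1.0f), bits_via_memcpy(1.0f));   /* both print 3f800000 */
    return 0;
}
```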
> allow a signed integer to overflow
Is this still true? I thought the reason for this was that C left it to the implementation to define how signed arithmetic worked, meaning you could not assume two's complement, but the most recent C standard was supposed to mandate two's complement.
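For what it's worth, C23 mandates a two's-complement representation for signed integers, but overflow of signed arithmetic is still undefined behavior, which is what optimizers exploit; a minimal illustration:

```c
/* The representation being two's complement doesn't make this wrap:
   signed overflow is still UB, so the compiler may assume x + 1 never
   overflows and fold the comparison to "always true". */
int increment_is_larger(int x)
{
    return x + 1 > x;   /* gcc/clang at -O2 typically emit "return 1" */
}
```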
>> pass a NULL pointer to memcpy, even if the length is zero
> There is a reason for this. memcpy is allowed to start reading early as a performance optimization, before it does a branch that checks whether there is anything to read.
Where did you get this idea from? It's not possible, since you can hand it an address one past the end of an array and length 0, where the array ends at the end of a mapped page.
Handing memcpy() the address at the end of an array and length 0 is undefined behavior. It is often said that the reason for this is to allow memcpy() to read before it branches to make it fast.
This led me to think of the case where you hand it the address of the last byte of a byte array, where the byte after it is on an unmapped page, and tell it to copy 1 byte. I suspect implementations with such an optimization would read beyond that 1 byte into invalid memory. This is my criticism of the idea that memcpy(NULL, NULL, 0) is undefined in order to make that speed trick legal: by the same logic, sufficiently small nonzero lengths would also have to be undefined, yet they are not under the standard.
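A sketch of that scenario, using mprotect to put an unmapped guard page right after the one source byte (Linux/POSIX assumed, error checks omitted); a memcpy that speculatively read past its length would fault here, which is why conforming implementations don't:

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    unsigned char *buf = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    mprotect(buf + page, page, PROT_NONE);   /* guard page right after the source */

    unsigned char dst;
    buf[page - 1] = 0x42;                    /* the last readable byte */
    memcpy(&dst, buf + page - 1, 1);         /* must not touch the guard page */
    printf("copied: 0x%02x\n", dst);

    munmap(buf, 2 * page);
    return 0;
}
```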
An AI might be more likely to find it...