I like the design concept of SFML a lot, but it's been neglected for so long that it's way behind modern C++. I see they plan to move to C++17 in SFML 3, and start using std::unique_ptr and such.
Ultimately I just moved to Godot to get stuff done, which is not pretty C++ at all, but it's about as lightweight as you can get for a fully featured game engine, as opposed to a framework like SFML.
The naming is not misleading, because the article says a lot about parallel processing.
It is talking about memory consistency models, for heaven's sake. If you have a uniprocessor, or any kind of concurrency without parallelism, it makes zero sense to talk about memory consistency models because you automatically have sequential consistency and don't have to think about it.
If you don't have a multiprocessor, you don't need to think about memory consistency models. If you have a multiprocessor without parallelism, that's your problem, not the author's.
Sure, and the article discusses that. I was thinking at the instruction level. I tend to think of convincing the hardware to do what you want and convincing the compiler to do what you want as separate problems, an effect of having written code before the languages had specified memory models.
EDIT:
Though as I think about it I'm having a hard time thinking of a system in which those compiler optimizations are problematic on a uniprocessor system. It is definitely true that optimizations that are safe on a uniprocessor break on a multiprocessor, but I can't think of any systems that don't (effectively) have a memory barrier when switching between threads. SMT wouldn't see a problem because both threads are on the same processor.
Imagine implementing a simple spinlock, but then substituting relaxed atomics for acquire/release synchronization. The compiler can reorder taking the lock relative to the protected operations and render the lock useless.
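Roughly what I have in mind, as a quick sketch with C++11 atomics (my own illustration, not code from anything being discussed): with acquire/release the protected accesses are pinned inside the critical section; swap those orders for memory_order_relaxed and both the compiler and the CPU are allowed to move them out.

```cpp
#include <atomic>

// Minimal spinlock sketch (hypothetical, not from the article). With the
// acquire/release orders below, the protected operations cannot be moved
// outside the critical section. Replace them with std::memory_order_relaxed
// and the compiler (and CPU) may reorder the protected accesses past the
// lock and unlock, making the lock useless.
class Spinlock {
    std::atomic<bool> locked{false};
public:
    void lock() {
        // acquire: nothing after lock() may be reordered before it
        while (locked.exchange(true, std::memory_order_acquire)) {
            // spin until the current holder releases
        }
    }
    void unlock() {
        // release: nothing before unlock() may be reordered after it
        locked.store(false, std::memory_order_release);
    }
};

Spinlock lk;
int shared_counter = 0;   // data protected by lk

void increment() {
    lk.lock();
    ++shared_counter;     // must stay between lock() and unlock()
    lk.unlock();
}
```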
But a language that is concurrent and not parallel will usually (always? can't think of a counter-example) have a full memory barrier on a context switch, so the memory model is simple. I think that's why it's relevant to many parallel programming models, but not concurrent programming ones, and why the title is as it is.
First, I tried proof no. 3. I can offer a nice 4-step proof, but you wouldn't let me provide it.
Second, I tried proof no. 2, where I'm offered words like "given" and "algebra" that I have no use for.
To make a system like this right, it needs to be deductive and have a large library of concepts. Getting that right is complex enough to be a PhD project; properly scoped, it could be done as an MSc project.
For proof 2, "Given" is the reason for "ABCD is a kite" and "Algebra" is the reason for "BD = BD". I found that relatively obvious after I understood the purpose of the "statement" and "reason" columns.
If you treat it as a kind of puzzle where you have to arrange the given blocks to form a valid proof, it's not so bad. However, you should at least be able to reorder independent statements, and reasons should include references to the previous statements they depend on, e.g. in proof 4, "<A = <D" follows by "Substitution" only because the previous statement is "<EBC = <A and <ECB = <D". In longer proofs you may have to refer to something proved further up, and requiring the student to give the reference would be a further check of their understanding.
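For concreteness, here's the sort of numbering-and-references I have in mind, as a made-up two-column fragment (my own example, not one of the site's proofs):

```latex
\documentclass{article}
\begin{document}
% Made-up two-column proof fragment: each reason cites the numbered
% statements it relies on.
\begin{tabular}{c l l}
    & Statement                                    & Reason \\
  1 & $ABCD$ is a kite with $AB = CB$, $AD = CD$   & Given \\
  2 & $BD = BD$                                    & Reflexive property \\
  3 & $\triangle ABD \cong \triangle CBD$          & SSS (1, 2) \\
\end{tabular}
\end{document}
```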
If you haven't done a two-column high school geometry proof in a while, this might seem a little strange. I'm a geometry teacher and created this mostly for my students to practice stringing together math statements in a logical order.
"Given" is a "reason", pretty commonly seen in mathematical solutions. "Given" indicates that the statement was indicated in the specification of the problem.
Turning a vaguely defined task into something tangible is usually the most difficult, and most important, part of the job, but it requires some experience.
In the beginning you need to learn the internal know-how, and that requires getting prompt answers on a regular basis. Otherwise, you will stay blocked and frustrated most of the time.
Remote work is fine when you already know the domain, but that's not really what intern programs are for.
That said, you can try to write design docs where you put your questions. Note that it's not really about writing a specification, but about writing down what you know and what you don't. You can speculate about answers to your questions, and this way make your manager understand what you're missing.
> Remote work is fine when you already know the domain, but that's not really what intern programs are for.
I'd argue that it depends on the rest of the team. The OP didn't mention whether he was the only remote or not, but if so then I agree; it's a lost cause, and there's no way this is going to work. But if he's joining a team of other remotes they can make it work if others are willing to try.
It so happens that a large part of my PhD was on this very subject. The result I got is N log(N); this becomes more visible as you get to larger RAM (I had 0.5 TB of RAM at the time).
We have an empirical result, a justification and a rigorous predictive model.
The reason has to do with hashing, but a different type: TLB.
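To make that concrete, here is a toy sketch of the kind of measurement involved (my own illustration, not the actual experiment from the thesis): time N random accesses into an array of N elements and watch the per-access cost keep climbing once the working set outgrows the caches and the pages the TLB can cover, instead of flattening out.

```cpp
// Toy sketch (my illustration, not the thesis experiment): N random reads
// into an array of N elements. Once the working set outgrows the caches and
// the pages the TLB can cover, the per-access time keeps climbing with N
// instead of flattening out.
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    std::mt19937_64 rng(42);
    for (std::size_t n = std::size_t{1} << 16; n <= (std::size_t{1} << 26); n <<= 2) {
        std::vector<std::uint64_t> data(n, 1);
        std::uniform_int_distribution<std::size_t> pick(0, n - 1);

        auto t0 = std::chrono::steady_clock::now();
        std::uint64_t sum = 0;
        for (std::size_t i = 0; i < n; ++i)
            sum += data[pick(rng)];   // random access; likely a TLB miss at large n
        auto t1 = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        std::printf("n = %zu  ns/access = %.2f  (checksum %llu)\n",
                    n, ns / n, static_cast<unsigned long long>(sum));
    }
    return 0;
}
```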
Thanks, I came across your research before and thought it was quite cool. In Section 8.5, when discussing whether hash tables would be suitable for handling TLB misses, wouldn't denial of service attacks also be a concern? If an attacker knows the hash function and can control the addresses being looked up, they might be able to trigger worst-case behaviour on every lookup, couldn't they?
> Moreover, an adversary can try to increase a number of necessary rehashes.
It seems to me that the section is a bit too dismissive, though, as there are hash tables and hash functions that mitigate these concerns. In particular, collisions can be replaced with trees, like in Java, limiting the worst case to O(log n) again.
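A rough sketch of what I mean (my own illustration in C++, analogous to but not the same as Java's HashMap treeification): buckets that are balanced trees instead of lists, so even if an adversary steers every key into one bucket, lookups degrade to O(log n) rather than O(n).

```cpp
// Sketch of a chained hash table whose buckets are balanced trees
// (std::map) rather than linked lists. Even if an adversary forces every
// key into a single bucket, a lookup costs O(log n) instead of O(n).
// Hypothetical illustration; Java's HashMap achieves something similar by
// treeifying buckets that grow too large.
#include <cstddef>
#include <functional>
#include <map>
#include <string>
#include <vector>

template <typename K, typename V>
class TreeBucketMap {
    std::vector<std::map<K, V>> buckets;   // each bucket is a balanced tree
    std::hash<K> hasher;

public:
    explicit TreeBucketMap(std::size_t n_buckets = 1024) : buckets(n_buckets) {}

    void insert(const K& key, const V& value) {
        buckets[hasher(key) % buckets.size()][key] = value;
    }

    const V* find(const K& key) const {
        const auto& b = buckets[hasher(key) % buckets.size()];
        auto it = b.find(key);             // O(log bucket_size) even under attack
        return it == b.end() ? nullptr : &it->second;
    }
};

int main() {
    TreeBucketMap<std::string, int> m;
    m.insert("page", 1);
    m.insert("frame", 2);
    const int* v = m.find("page");
    return (v && *v == 1) ? 0 : 1;
}
```

The catch is that tree buckets need some ordering on the keys (Java falls back to hash codes and Comparable), so it's a mitigation with requirements rather than a free lunch.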
Very interesting and informative. I would have expected the TLB to incur an additional cost beyond exhausting the cache hierarchy, but I would have expected that to be a single additional cliff, at around a gig at most.