It's not just awful mods, it's awful paid admins. Banning hate speech was a step in the right direction, but the people they hired have decided that being mean to racists is “hate speech”, while ignoring actual hate speech and letting racist spaces thrive. This is not what people meant when they asked reddit to do something about hate speech.
That's another reason, indeed. I get that moderation is hard, but the way reddit handles it is awful. There are so many potentially conflicting interests among reddit's stakeholders, but the admins seem to be single-minded and narrow-minded. Recently they closed "tumblr in action", which has now moved to another site, a reddit clone called saidit.net. But don't even try to visit that site: it is a truly pestilent rat's nest. The net effect is that the echo chambers get stronger on all sides. Great outcome, for no reason at all. Well, there is a reason: reddit wants free moderators. Paying would lower their dreamt-of IPO value.
This just sounds like punishing people, not incentivizing them. People just want to see others punished; they would never want to use such a system themselves.
Reddit hired more paid moderators, to whom being mean to racists and other people they like is “hate speech”, while actual racists and nazis run amok being shitty to actual protected groups. Reddit is turning into the next Stormfront.
This was my experience too. I got banned from my city’s subreddit for the stated reason “don’t move here” and then harassed across several local subreddits. Paid reddit admins determined that is not in fact harassment. Then later these admins banned me because they said being mean to racists is hate speech. Seems like racist assholes run the place from mods up to the paid admins.
What can be done about the civil wars in Syria, Afghanistan, and Myanmar? The drug wars in South America? The invasion of Ukraine? The re-education camps in China? The drought in Eastern Africa? Etc., etc. The list goes on and on. I honestly have made peace with the fact that it's not my responsibility to change the world. Whatever happens, happens. Let me make money and enjoy life.
These problems that embedded Rust is trying to solve are not a big deal. In C you try to minimize what happens in any interrupt: usually you just set a flag, save a result, and return. You generally use atomic instructions on GPIO. Setting or clearing a bit mask is atomic, so there is not much reason to read-modify-write GPIO in an interrupt or anywhere else. I think these solutions would create more work than they save.
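For illustration, the usual C shape of that advice, as a sketch (ADC1_DR here is an illustrative memory-mapped register, not taken from any particular part):

    #include <stdint.h>

    #define ADC1_DR (*(volatile uint32_t *)0x4001204Cu)   /* illustrative */

    volatile uint32_t adc_result;
    volatile uint8_t  adc_ready;      /* flag polled by the main loop */

    /* Keep the ISR minimal: save the result, set a flag, return. */
    void ADC_IRQHandler(void)
    {
        adc_result = ADC1_DR;
        adc_ready  = 1;
    }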
Keep reading; they also cover dealing with exclusive access to subsystems. I could totally see this being applied to DMA and other long-running peripherals to great success.
I'll also disagree that this is a small problem. We had one of these bugs that was so hard to track down that it took 100+ devices running a stress loop for over 24 hours, with cameras watching for the regression. It ended up being a timing sequence that could have been caught by a system like this.
The repro was incredibly infrequent, but when you've got millions of units, even a 0.01% chance of something happening is too often.
I think the project is neat, but I've also written stuff to do all of this for embedded C projects, where you'd allocate pins on the board for the peripherals you wanted to enable and it would complain if you double-allocated.
That it's auto-generated is the neat part, but then again, you could auto-generate C code that was just as robust (though admittedly, a lot of that robustness would be pushed to run-time checks with C code).
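As a sketch of how that kind of C-side check can work (everything here is hypothetical, not from the parent's codebase): have each allocation define a symbol named after the pin, so a double allocation becomes a compile error instead of a run-time check:

    /* Claiming a pin defines an enumerator for it; claiming the same pin
     * twice redeclares the enumerator and the build fails. */
    #define CLAIM_PIN(port, pin) enum { claimed_##port##_##pin = 1 };

    CLAIM_PIN(A, 5)      /* e.g. SPI1 SCK */
    /* CLAIM_PIN(A, 5)      error: redeclaration of 'claimed_A_5' */

This only catches collisions within one translation unit, though; cross-file checking needs a generated master pin map, which is exactly what auto-generation buys you.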
First, you can always use unsafe and access low-level registers, ignoring all the synchronization. You still get typed access to the bits (well, contingent on SVD file quality), which is an improvement over hand-writing bit-manipulation code (or using the wrong constant for a shift/mask in the STM32 HAL, for example).
Second, look at the port-splitting example. If you split a port into pins, then you can independently move these pins into different execution contexts, and you won't need any synchronization either. A write to a pin becomes a single write to the BSRR register: no read-modify-write. So you get safety and about the same generated code as hand-written code (well, except if you want to do multiple pins at once).
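For comparison, the two hand-written C versions of a pin write (STM32-style register names; the addresses are illustrative):

    #include <stdint.h>

    #define GPIOA_ODR  (*(volatile uint32_t *)0x40020014u)  /* illustrative */
    #define GPIOA_BSRR (*(volatile uint32_t *)0x40020018u)  /* illustrative */

    void example(void)
    {
        /* Read-modify-write on the output register: racy if an interrupt
         * touches another pin of the same port in between. */
        GPIOA_ODR |= (1u << 3);

        /* Single store to the set/reset register: atomic by construction.
         * The low half sets a pin, the high half resets it. */
        GPIOA_BSRR = (1u << 3);          /* set PA3   */
        GPIOA_BSRR = (1u << (3 + 16));   /* reset PA3 */
    }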
What if you want to reconfigure a port? This, actually, is what was particularly annoying for me with the old version of svd2rust I/O. Even though I know that my two subsystems operate on completely different pins, I still had to pass CRL/CRH around to avoid a potential race when reconfiguring pins.
This new version has the same property, though, according to the article.
However, the solution is pretty straightforward. By using the ARM bit-banding feature, you can have the same "split", share-nothing-style atomic API for port reconfiguration, too. Similarly, you would split, for example, GPIOA into 8 pins, and each pin would allow you both to input/output data and to change that pin's direction.
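For reference, bit-banding on the Cortex-M3/M4 maps every bit of the peripheral region to its own word address, so a single store reads or writes one bit with no read-modify-write (note it doesn't exist on, say, M0 or M7). A sketch, with an illustrative GPIO address:

    #include <stdint.h>

    /* Cortex-M3/M4: peripheral space 0x40000000-0x400FFFFF is aliased at
     * 0x42000000, one 32-bit word per bit. */
    #define BITBAND_PERIPH(addr, bit)                                   \
        (*(volatile uint32_t *)(0x42000000u +                           \
            (((uint32_t)(addr) - 0x40000000u) * 32u) + ((bit) * 4u)))

    void set_direction_bit(void)
    {
        /* Flip one bit of a port configuration register atomically. */
        BITBAND_PERIPH(0x40010800u, 3) = 1;   /* address illustrative */
    }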
So, bottom line, I think you can get the best of both worlds most of the time: safety, with the performance of hand-written code. And in corner cases, you can still ignore all the safety and access the registers without any overhead.
I don't really understand what this approach is lacking for the career firmware engineer.
(Disclaimer: I haven't ported my firmware yet, so I could be wrong in my assumptions about how this new I/O works.)
Yeah, the current way of developing these things is slow AF, so I'm not buying it. I've done Haskell->Verilog; it's an amazing development speed-up, and people give the same "but that's not the hard part" complaints.
The fact is, no one's development process is that parallel, so critical-path-style arguments don't work. Making even just a few of the easy parts free does reduce dev costs, and in this case we're clean-sweeping basically all the low-hanging fruit.
Like all bugs, these are avoidable, but having a high-speed language that helps you avoid them seems like a good thing. Having had to fix this kind of stuff before, it sure felt like a "big deal" at the time. :-)
I can't say that I agree. I would rather thread through some boilerplate than have a hard-to-debug issue on a platform I have low visibility into.
It sounds like this would work for beginners or certain people who know Rust and not C. A new firmware language could never be widely adopted if it is not made for the career firmware engineer to use.
You will want to code on the AVR directly. The problem with audio processing on Arduino is that there is a 1 ms system tick that runs at higher priority than user code. It makes audio signals sound scratchy, even a solid tone, because the Arduino PWM is all interrupt-driven.
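If you do stay on the Arduino core, you can at least kill that tick for testing: on AVR boards the core drives millis()/delay() from the Timer0 overflow interrupt, so masking it removes that source of jitter (and breaks those timing functions). A sketch for an ATmega328P board:

    #include <avr/io.h>

    void setup(void)
    {
        /* Mask the Timer0 overflow interrupt the Arduino core uses for
         * its ~1 ms tick. millis()/micros()/delay() stop working. */
        TIMSK0 &= ~_BV(TOIE0);
    }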
Interesting. For my first experiments I don’t really mind - will just be happy to get sane output driven by midi!
From what I gather, the technique is to continuously change the PWM in an interrupt to match the amplitude of your output wave. During that 1 ms, what happens? Does the PWM still output a regular wave that I just can’t adjust? Or does the PWM wave itself stop?
The first problem with doing this on Arduino is that the PWM is interrupt-based, so that limits the frequency. There is a tone library that can generate a PWM for a specific tone, but these are interrupt PWMs and they cannot work correctly with the 1 ms tick causing jitter.

I see Arduino projects every now and then as a consultant, and one of them was to generate tones. It sounds scratchy out of the box if all you do is generate a tone. I had to modify the Arduino core to make it use hardware PWM. Why didn't they use hardware PWM in the first place? It's because they needed PWM to work on any pin, not just the 8 or so PWM pins, and they sacrificed a lot of performance to do it.

I love Arduino for inventors getting something running, but for coders and engineers: you guys are smart, just program the AVR in C, or better yet program a modern ARM Cortex in C. If you want a blazing-fast MCU, check out the FRDM-K64F: 32 bits at 150 MHz vs. the AVR's 8 bits and 20 MHz, plus it has a nice DAC so you won't have to do PWM modulation, not to mention 256 KB of RAM vs. the Arduino's 8 KB.
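To make the hardware-PWM point concrete, here's roughly what it takes on a bare ATmega328P: Timer1 in fast PWM driving OC1A (Uno pin 9) with zero CPU time once configured. A sketch; the 40 kHz carrier is just an example choice:

    #include <avr/io.h>

    void pwm_init(void)
    {
        DDRB  |= _BV(DDB1);                 /* OC1A (Uno pin 9) as output      */
        ICR1   = 399;                       /* TOP: 16 MHz / 400 = 40 kHz      */
        OCR1A  = 200;                       /* ~50% duty to start              */
        TCCR1A = _BV(COM1A1) | _BV(WGM11);  /* non-inverting, fast PWM mode 14 */
        TCCR1B = _BV(WGM13) | _BV(WGM12) | _BV(CS10);  /* TOP=ICR1, no prescale */
    }

    /* For audio, a sample-rate interrupt just writes the next sample to
     * OCR1A; the PWM carrier itself never stops or jitters. */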
That's assuming that you have a functional ICE[1] and aren't dealing with a timing issue in another subsystem :).
ICE pins are usually configured with exactly the registers this article talks about. It's common for UART/SPI serial-port configuration to be part of enabling ICE. Some chips also let you disable the ICE pins so that you can reuse them as I/O and pick the cheapest chip possible.
Because ICEs are overrated. Many systems can't be debugged by single stepping (think servos or anything with physical hardware being controlled). ICEs rarely work well—every single one I've ever used was completely unreliable and required lots of fiddling to make it work. And then the next day you had to start the whole fiddling process over again. In the end they aren't very productive except for very specific types of bugs (they are invaluable in the very beginning of a project when you are bringing up a board). Once everything is generally up and running I find them to be pretty useless.
Yeah if you can get that to work. It's usually a serious hassle.
Another issue is that with microcontrollers you are usually debugging really low-level stuff, like interrupts, where you can't even do printf debugging. Or you are setting registers on some black-box subsystem and it just won't work, and the only way to fix it is to keep randomly changing registers until you find the one you got wrong, or, if you're lucky, to find working example code and bisect from that.
Or you've got some timing-sensitive code that you can't stop, and the only debugging channel that is fast enough is toggling a GPIO connected to an oscilloscope.
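For anyone who hasn't seen the trick: bracket the code under test with stores to a spare pin and measure the pulse on the scope. A sketch (STM32-style set/reset register with an illustrative address; handle_event() stands in for whatever you're timing):

    #include <stdint.h>

    #define GPIOA_BSRR (*(volatile uint32_t *)0x40020018u)  /* illustrative */
    #define DBG_HIGH() (GPIOA_BSRR = (1u << 8))          /* set PA8   */
    #define DBG_LOW()  (GPIOA_BSRR = (1u << (8 + 16)))   /* clear PA8 */

    void handle_event(void);   /* hypothetical timing-sensitive work */

    void TIM2_IRQHandler(void)
    {
        DBG_HIGH();
        handle_event();
        DBG_LOW();    /* pulse width on the scope = execution time */
    }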