Anyone have any experience using this? I've been managing most of my homelab infrastructure with a combination of saltstack and docker compose files and I'm curious how this would stack up.
I used to run it and generally liked it, but eventually felt limited in the things I could do. At the time it was a hassle to run non-tipi apps behind the traefik instance and eventually I wanted SSO.
I ended up in a similar place with proxmox, docker-compose, and portainer but I have it on my backlog to try a competitor, Cosmos, which says many of the things I want to hear. User auth, bring your own apps and docker configs, etc.
I tried it for a few months and it was nice. But I think it lacks a way to configure mount points for the apps' storage.
By default, each app has its own storage folder, which isn't really a useful default for a home lab use case: you probably want, idk, Syncthing, Nextcloud and Transmission to be able to access the same folders.
It's doable, but you have to edit the yaml files yourself, which I thought removed most of the interest of the project.
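For reference, the manual workaround looks something like this: point each app's compose file at the same host folder. A minimal sketch (the service name and the /data/media path are hypothetical, just for illustration):

    # snippet from a hypothetical app's docker-compose.yml
    services:
      syncthing:
        image: syncthing/syncthing
        volumes:
          - /data/media:/media   # same host folder also mounted into Nextcloud, Transmission, etc.

You'd repeat the same bind mount in the Nextcloud and Transmission compose files, which works but means maintaining the yaml by hand, which is exactly the hassle being described.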
Ran into the same problem with umbrel. Wanted to use photoprism for a gallery but nextcloud and syncthing to backup the photos. Was easier to just manage the containers myself.
I've been evaluating it alongside Cosmos and Umbrel, in addition to tools I've used before like CapRover. I like it but I don't have any strong feelings yet. I will probably do some sort of writeup after I do more evaluations and tests and play with more things but I haven't had the time to dedicate to it.
If you're already familiar with setting things up with Salt/Ansible/whatever and Docker compose, you might not need something like this -- especially if you're already using a dashboard like Dashy or whatever.
The biggest thing is that these types of tools make it a lot easier to set things up. There are inherent security risks if you don't know what you are doing, though I'd argue this is a great way to learn (and simply knowing how to use Salt or Ansible or another infrastructure-as-code tool is no guarantee that any of the stuff you deploy is more secure) and a good entryway for people who don't want to do the Synology thing.
I like these sorts of projects because even though I can do most of the stuff manually (I'd obv. automate using Ansible or something), I often don't want to if I just want to play with stuff on a box and the app store nature of these things is often preferable to finding the right docker image (or modifying it) for something. I'm lazy and I like turnkey solutions.
> there are inherent security risks too if you don't know what you are doing
It's actually worse. Even if you know what you are doing, there is some amount of work and monitoring you need to do just to follow basic guidelines [0].
What we actually need is a "self-hosting management" platform or tool that at least helps you manage basic security around what you run.
I’ve been using it for a few months on a raspberry pi 4. I installed (via runtipi) pi hole for dns, Netdata for monitoring and Tailscale for remote access. Works great for me and my family, I can stream videos from jellyfin for my kids when we are on the go and all the family devices use pihole.
I tried to do things myself in the past but this is so much easier if you don’t have particular needs.
I'll answer from my own experience (with no inner voice):
1) I have to be very explicit when reading with an inner voice (which goes to your second point). Generally, it is a kind of recognition that a word has a meaning and I "feel" that meaning, if that makes sense?
But then how do you do what I'd consider simple things like sanity checks?
For example, hooking up jumper cables. I suppose I could hook them up without an inner dialog.
But then at any point if I get confused or remember that dangerous things can happen, I can go back and compare against a list in my head of the steps to hook them up. E.g., don't let clamps touch, start with the dead battery, positive first, crank and rev the working car, etc.
How can you get the same type of sanity check without an inner dialog? Do you memorize an ordered sequence of "feels" that correspond to the steps?
You know, I've never really thought about it. Yeah, I guess I do just remember the feeling and know that it's a step that I've completed. And when I recall the steps that need to be done, it's not really a cohesive spoken thought. It's like impulses of sensations? My brain is just like "Order's important. Red red, black black." without the words.
For really difficult mental tasks, I'll actually talk to myself or write things down so that they are less chaotic since the feelings/impulses in my brain may not line up. Kind of like a buffer filling up.
Now what really bothers my wife is when we talk about how we have conversations with others. I don't necessarily think about what I'm going to say verbatim when I'm talking to someone. It's impulses like I mention above where things kind of "flash" into my brain the moment before I say it. The best way that I can think of it is that it's similar to playing a sport where you have moments to change your trajectory and movement to accomplish what you want. I know thematically what I want to say, but I don't actually build the sentence in my brain.
> The best way that I can think of it is that it's similar to playing a sport where you have moments to change your trajectory and movement to accomplish what you want.
If I had to speculate I'd say a lot of people have the experience of both this and the inner voice, depending on context.
It does make me wonder: are there people who can't spontaneously "surf" sentences in realtime and instead rely wholly on an inner voice? :)
Edit: I've noticed that reflecting on this kind of stuff can wreak havoc on young musicians. I remember a colleague who learned the Grieg concerto but was having a small number of wrong notes (say, 5% or so) creep into various sections of the piece, seemingly at random. The teacher started to drill down on each section, having the student write down which finger plays what, subdivide tricky rhythms, and so forth.
What was wild was that each time the teacher asked the student to notate the chord at the point where a mistake occurred, this would generate a cascade of more mistakes. She'd essentially been playing mostly by feel without thinking about the harmony at all. The added anxiety about harmony increased her error rate for relying on feel; pretty soon she wasn't able to make it through the piece. They then went back methodically through the entire thing, having her notate each and every harmony and describe how each one relates to the next, which were primary vs. ancillary, and so forth. Only then could she play the entire piece with precision.
So sometimes to get from 95% to 100% you have to go back down to zero!
I'm a huge fan of Squarofumi. I've bought two Watchy kits and have absolutely loved coding all kinds of interesting things for them. Given that it's powered by ESP32, it makes development super easy.
I've had a very similar experience (both in terms of coming from the southeast and the "you don't need air conditioning here" lines). I don't know why people still say that AC isn't needed. Year over year, it seems to get worse. Add in the smoke from the fires that settles in every other year, and it just doesn't make sense _not_ to have AC.
A couple of summers ago, when the heat was so bad in the Seattle area, I called a local HVAC installation and repair company to talk about getting AC installed at my home (built sometime in the 80's). There was a wait list of _hundreds_ ahead of me and unit shortages. It was a nightmare. Luckily, folks seemed to reconsider installation over the winter and I was able to work my way up to the top of the list and get things installed during the winter. Best decision I've made.
Yeah I think the 9 months of winter tends to make people forget precisely how miserable summer can be without at least a little bit of AC. With the climate heat pumps are effectively ideal here and electricity is relatively cheap and renewable. There’s literally no reason other than stubbornness at this point.
Part of me wonders if it's similar to the "never carry an umbrella" dynamic? Maybe it's just a "well, this is just how it is, carry on"?
Anyway, it's probably different in our minds because in the southeast, the heat + the humidity can literally kill you. Not having AC isn't just an inconvenience there; it's an emergency.
Yeah absolutely. I spent so many nights with the bedsheets plastered to my body that now I can't sleep if it's even warm; I start flashing back to the trauma of my youth.
Trying to figure out what's unique here, but it seems like Proxmox is being used to create VMs that then run docker. Then docker on these VMs is used to spin up a bunch of containers. So really, it's just Proxmox -> VM -> Docker -> Containers. So it's dedicated docker VMs to coordinate containers...
I was expecting Proxmox's LXC capabilities to be used to scale up to 13000, but this is just VMs + docker allowing that. Seems like the same thing could be done with any KVM hypervisor and VMs? Can someone correct me if I'm missing something?
AFAIK it’s the only way to run docker containers on proxmox. You can run lxc containers directly but docker requires an intermediate vm… which is one more reason to avoid docker altogether :)
As others have mentioned, I think the interesting thing here would be understanding the latency for processing the signal. Anything in the single digit milliseconds would be fantastic! I know at one point I was looking into Raspberry Pi and ended up on Pedal Pi[0], though I couldn't get the parts I needed to make it work.
I ended up using Teensy[1] and related audio shields[2] to get things working from a sound/acceptable-delay perspective. But being able to get things going on a Pi would probably make the more advanced input controls much simpler to implement, simply from an OS-support perspective (like the WebUI in this project). The UI I'm seeing in this project looks great and it would be cool to potentially see something like kits/preinstalled images roll out for this!
You've done a fantastic job of describing what I couldn't put my finger on! One of the issues I had when developing this code was what I referred to as a "jarring" noise when distortion gain was turned up. It's like you describe: almost "metallic." I alluded to the "jarring" sound in the article, and it's great to know the culprit! There weren't a lot of repos, articles, or forum posts that had examples of solid, straightforward distortion implementations, and now I can see why.
Hopefully, we'll see someone get some code out there that helps clean this up!
OP here! I'm admittedly new to the world of digital signal processing and don't have a math background. But what I _do_ have is a background in programming and guitar effect pedal building.
My hope with this site and series is to get people like me exposed to the possibilities a little bit of code and a Teensy can unlock for them. The lower the barrier of entry, the more cool things people can create. :)
That being said, I'd love to get feedback on good sources to learn more about signal processing/theory or any resources folks would suggest for covering the topic even better!
I've recently started to work through implementing DSP as VST plugins. But I never really thought that a pedal-type DSP unit was tractable for a hobbyist. I'm pretty excited to follow your work and see if I can do some pedal-style effects or basic synth work in hardware.
Thanks for the post! Right before I saw this on HN I was watching a video on how to build a Klon clone, so this write-up couldn't have come at a better time.
Hey there, no aliasing in this case. As for the clipping, this is effectively just hard clipping like you'd see from two alternating diodes in series on an analog gain circuit.
Admittedly, I'm early into moving from analog circuits to digital signal processing, so I could be off the mark on my answer. :) Hope it helps, though.
When you clip in continuous time you're pushing additional energy into the harmonics of the baseband signal being clipped. Since the spectrum in continuous time is infinite, you don't get any aliasing (a better way to say it is that "aliasing" isn't as meaningful in continuous time).
When you clip in discrete time, the spectrum is finite (more technically, it's periodic, with a period of the sample rate). That means the energy that would go into harmonics past Nyquist gets "wrapped" around.
This is the big difference between analog and digital distortion. In analog, it's really quite difficult to create energy at non-harmonic frequencies of the signal. In digital clippers like you have here, it's trivial, and the design problem is figuring out how to deal with it. Most products will use some kind of anti-aliasing strategy (usually oversampling before clipping) to handle it.
Oversampling won't help with the brickwall clipping being attempted here.
This circuit is not a diode emulator, it's a comparator. It's the worst-sounding of all distortions. It sounds even worse in digital because of the aliasing.
And it will always alias, no matter how much you oversample it, because a vertical edge - aka "Heaviside Step Function" - has an infinite harmonic series. If you oversample it enough it won't alias much because the series terms become smaller. But they never disappear.
A better way to do this kind of clipping is with a tanh (logistic/s-curve) approximation. That can give you a variety of valve-like [1] smooth clipping curves. Unfortunately tanh is pretty expensive computationally, so a more practical alternative is a piecewise curve, perhaps with some interpolation.
Although if you only have 8-bit or 16-bit resolution you may as well just use a lookup table.
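For what it's worth, a minimal sketch of the lookup-table approach (the table size and the tanh curve here are arbitrary illustrative choices, not anything from the article):

    #include <cmath>
    #include <cstdint>

    // Precompute an arbitrary clipping curve once; per-sample waveshaping
    // is then a single table read. 64K entries of int16 is 128KB, so on a
    // small MCU you'd shrink the table and interpolate between entries.
    static int16_t shaper[65536];

    void build_table(float drive) {
        for (int32_t i = 0; i < 65536; ++i) {
            float x = (i - 32768) / 32768.0f;  // map index to [-1, 1)
            shaper[i] = (int16_t)(32767.0f * std::tanh(drive * x) / std::tanh(drive));
        }
    }

    inline int16_t shape(int16_t s) {
        return shaper[(uint16_t)s ^ 0x8000];   // re-bias signed sample to table index
    }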
OP might want to consider learning a little more about signal theory and practical DSP before posting more how-tos.
[1] Not really because real valves are more complicated. But it will do for a first approximation.
I appreciate your feedback and sharing your knowledge! I'll definitely be digging more into some of the things you mentioned to get a better understanding. The topic is a super deep one, though my goal (at least at this point) is to stay high level enough that anyone could get started in making their own sounds/effects.
The lower the barrier of entry, the more cool things that people can come up with! I'm hoping that more people without a math/EE/audio background, like myself, can get started and explore some more of these deeper topics :)
It will help, though! Commercial products will use 4-16x oversampling internally. Most useful distortions will have infinite harmonic series, but the hard clipper is certainly one of the worst because of the high amount of total harmonic distortion (THD). There's a lot of energy in those upper harmonics.
The classic soft clipper is something like
    tanh(kx) / tanh(k)
which gives you a normalized output. It's not that expensive in the grand scheme of things, and is certainly cheaper than oversampling.
If you want something even more valve-like, a bit of DC bias on the signal before the tanh will give you even-order harmonic distortion, which sounds warmer (it beefs up the signal and puts energy into the octave harmonics, whereas hard clippers usually only shove energy into the odd harmonics, which sound harsher).
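Putting those two ideas together, here's a rough per-sample sketch (the bias amount is an arbitrary illustration; a real design would tune it by ear and block the DC afterwards):

    #include <cmath>

    // Normalized tanh soft clipper (k = drive) with a little DC bias
    // ahead of the nonlinearity to generate even-order harmonics.
    float soft_clip(float x, float k, float bias = 0.1f) {
        float y = std::tanh(k * (x + bias)) / std::tanh(k);
        // Subtract the output the bias alone would produce, so silence stays silent.
        return y - std::tanh(k * bias) / std::tanh(k);
    }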
Interesting, I was under the impression that aliasing was something on the presentation layer (e.g. plotting on an oscilloscope). Do you have some breadcrumbs/links to share that would be a good resource for understanding aliasing in your context? Would be excited to learn more!
Take a sine wave below your system's Nyquist frequency. Chop off the top. Take the continuous Fourier transform. You will notice that there are now frequency components above the Nyquist limit of your system. Those will now be aliased down to lower frequencies.
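To make that concrete: at a 48kHz sample rate, symmetrically clipping a 5kHz sine puts odd harmonics at 15, 25, 35, 45kHz and beyond. The first two are fine, but 35kHz is above the 24kHz Nyquist limit and folds down to 48 - 35 = 13kHz, and 45kHz lands at 3kHz: inharmonic content right in the audible band.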
One trick for doing nonlinear waveshaping without introducing too much aliasing is to perform the wave shaping at a higher sample rate than the rest of your system and then downsampling with a low pass filter. Thankfully, the high frequency components introduced by nonlinearity tend to decrease in magnitude reasonably quickly.
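A crude sketch of that structure, just to show the shape of it (linear interpolation up and a boxcar average down are stand-ins for the proper interpolation/decimation filters a real implementation would use):

    #include <cmath>

    // Waveshape at 4x the base rate, then average back down to 1x.
    // 'prev' is the previous input sample; keep it between calls.
    float oversampled_clip(float prev, float curr, float drive) {
        float acc = 0.0f;
        for (int i = 1; i <= 4; ++i) {
            float t = i / 4.0f;
            float up = prev + t * (curr - prev);  // upsample: linear interpolation
            acc += std::tanh(drive * up);         // nonlinearity at the higher rate
        }
        return acc * 0.25f;                       // lowpass + decimate: boxcar average
    }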
In the analogue realm (or more correctly, the continuous-time realm, because an analogue sample-and-hold will show the same issues), it doesn't. It just introduces more harmonics at high frequencies.
In the digital realm (or again more correctly discrete-time: you have a sample rate), it totally makes a difference, because many of the harmonics you generate will extend above the Nyquist frequency (half the sample rate) and "reflect" back down.
What I don't get is: isn't that an inevitability of working with a digital signal? That is, if you were to sample a signal that was clipped in continuous time, wouldn't you get the same pattern of samples as if you had clipped the signal after sampling?
There's basically an illusion going on... we think we're working with individual points in time, but because we're sampling, we're really working with sinc() functions that have a specific shape to them.
So for example, one might think drawing a basic rectangle in time would create a pulse waveform... but in the digital domain a square wave has ripples in it. Recall that a rectangle in time is built from ever-higher and ever-smaller sinusoids out to infinity; in sampled signals, though, only frequencies below Nyquist are available. So the sampled version of the waveform will have ripples that the missing higher-frequency components would have smoothed out. The requirement then is to work with a higher Nyquist/sampling frequency than the audible range, so that the audible part of the square is correct.
No. Imagine the simplest possible case, a naive sawtooth wave. An easy way to do this in a microprocessor is to count samples, and let's say we have an 8-bit counter that goes from 0 to 255, one count per sample. This then feeds a DAC and we get a sawtooth wave out.
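In code the naive version is about as simple as it gets (dac_write is a hypothetical stand-in for whatever your hardware actually provides):

    #include <cstdint>

    extern void dac_write(uint8_t value);  // hypothetical DAC hook

    uint8_t counter = 0;

    void on_sample_tick() {      // called once per sample period
        dac_write(counter++);    // 255 wraps to 0: an instant negative-going edge
    }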
From 0 to 1, 1 to 2, 127 to 128, 254 to 255, nothing surprising happens; the output voltage slowly rises. When the counter rolls over from 255 to 0, something surprising *does* happen though: we get a massive and instant negative-going spike.
Because this jump from 255 to 0 is instant, it has to have a lot of harmonics, up into high frequencies. If you plot the spectrum of a sawtooth wave you'll see that there's a sine wave at every harmonic, reducing in amplitude as 1/f: the first one at full level, the second harmonic at twice the frequency and half the level, the 3rd at 1/3, the 4th at 1/4, and so on.
Now here's your problem - some of those harmonics, which extend off to infinity, are going to be beyond the Nyquist frequency. As you increase the frequency of the sawtooth relative to the sample rate, you'll start to creep into an area where those harmonics are actually still quite loud compared to the lower ones, and that gives you a problem.
Aliasing occurs because those harmonics "reflect" off the Nyquist frequency. Think about watching a Western, where the wagon wheels appear to spin backwards at certain speeds - they are aliasing. If they were turning slowly they'd move a little before the next frame, but at some point - where one spoke moves exactly into the space left by the previous spoke - they will appear to stand perfectly still! With a bit of thought, you'll see that the fastest the wheel can go puts the spoke exactly in the middle of the gap between spokes in the previous frame, and any faster will make it look like it's going backwards.
This is exactly what's happening with aliasing. The sinewave that's just above half your sampling rate is coming back down towards you, backwards (a component at frequency f, with fs/2 < f < fs, shows up at fs - f). What goes up must come down.
In a real-life analogue sawtooth oscillator this is of course a huge problem because that big negative-going spike has infinite energy, and we're lucky that it's also infinitely short because anything that requires infinite energy is going to have terrible battery life. In practice, the maximum frequency of the harmonics in the sawtooth is limited by other parameters in the circuit (mostly how quickly the capacitor discharges through the reset transistor). And herein lies our major clue.
In order to generate a sawtooth that doesn't alias, we must calculate that step so it is no longer infinitely fast. There are a bunch of ways we can do this, but a nice simple way is to calculate an amount to "correct" the sample just before and just after the step so that you've got a rough approximation of passing the signal spectrally through your sinc window. This is crude but effective, and can be done with a couple of adds and multiplies making it ideal for real-time computation on even fairly crappy chips.
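This correction trick is usually called polyBLEP. A minimal sketch, assuming a phase accumulator in [0, 1) and dt = frequency / sample rate:

    // Polynomial band-limited step: a two-sample correction around the wrap.
    float poly_blep(float t, float dt) {
        if (t < dt) {                  // sample just after the discontinuity
            t /= dt;
            return t + t - t * t - 1.0f;
        }
        if (t > 1.0f - dt) {           // sample just before the discontinuity
            t = (t - 1.0f) / dt;
            return t * t + t + t + 1.0f;
        }
        return 0.0f;
    }

    float saw_sample(float &phase, float dt) {
        float naive = 2.0f * phase - 1.0f;         // naive saw in [-1, 1]
        float out = naive - poly_blep(phase, dt);  // smooth the step
        phase += dt;
        if (phase >= 1.0f) phase -= 1.0f;
        return out;
    }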
You can actually generate a sinc wavelet and "paste" that over the step, and when you get that right it sounds perfect. If you've got a fairly chunky DSP or general-purpose CPU with good math support to play with, this will give best results.
It's important to note that if you take a sinewave and clip it, then sample it, you'll still get aliasing because you'll still have a big infinitely fast step somewhere, which is why audio equipment is so big on having steep lowpass filters before and after digital (or, sampled, at least) bits. A great example is in the resolutely analogue Roland Juno 106, where the chorus board has three 24dB/octave Butterworth filters at around 10kHz, realised as cascaded pairs of Sallen-Key filters. The first rips off any signal above about 12kHz so it cannot be passed to the chorus chip; the other two are "reconstruction filters" that remove the fizzy-sounding sampled steps, which would otherwise be apparent as a swooshing sound as the chorus went through the "long" end of its delay.
The reason they need this is that the analogue "Bucket Brigade Delay" chips are a kind of analogue dynamic RAM, where on every clock pulse a capacitor charges up from the input to its left, and on the next discharges into the output on its right. Stick a few hundred of these capacitors together and it will "pass buckets of signal" down the line, delaying it by whatever the clock rate is divided by the number of buckets. It's all analogue, but it's still sampling!
>It's important to note that if you take a sinewave and clip it, then sample it, you'll still get aliasing because you'll still have a big infinitely fast step somewhere, which is why audio equipment is so big on having steep lowpass filters before and after digital (or, sampled, at least) bits.
Ah-ha. Gotcha. So my intuition was more or less correct. It's not that clipping in digital is especially susceptible to aliasing; it's that when people clip in analog, they brickwall-filter the signal before sampling to get rid of the out-of-band harmonics.
Exactly, and if you synthesize a signal in discrete time you must take care to bandlimit it by calculating what it would be if you'd already brickwall filtered it.
You can't create a naive sawtooth (for example) and *then* filter it because the damage has already been done. That being said, the "supersaw" oscillator in the Roland JP8000 generated a bunch of naive saws and *highpass* filtered them just below their fundamental to remove the gurgly "beat note" from aliased partials.
Take the Fourier transform of a clipped signal: it will have high frequency components.
In general, the more pointy edges you introduce to a waveform, the more high frequency artifacts you get.
This aspect of Pontryagin duality (narrow in one domain means wide in the other) is also what underlies the Heisenberg uncertainty principle. If you "hard clip" a photon's position (with a slit) you get a lot of frequency-domain (momentum) noise, leading to a spread-out beam.
Why isn't there any aliasing? You're flattening off the peaks of the signal which absolutely must generate more harmonics, and at some point those are going to extend far beyond Nyquist.
I don't see you doing anything in particular to bandlimit your waveshaping, but I might well have missed it.
This is exactly what I've been looking for, but unable to find in such an approachable format! I recently picked up guitar after almost a decade without, and I've realized that guitar tutorials online follow a similar structure as cooking recipes: full of fluff at the top and you have to scroll a while before you get to the content. Well done!