Hacker News | eonpi's comments

Resources will vary depending on what you want to use it for:

- To use it as a general computer, downloading the official OS and following the provided instructions to image it onto a microSD card (or a USB flash drive, for those Pis capable of booting from USB) is probably the easiest route. Alternatively, a kit that includes a microSD card with the OS preinstalled (or buying such a card separately) removes the need to create your own. BTW, you may want to try a lighter distro if you have an older Pi (3 or older) or a Pi Zero, because the full version can be slow to load on those Raspberries.

Any particular use case that you are interested in? Also, what Raspberry Pi are you using?


Have you tried <Control-Left Square Bracket>? aka Ctrl-[, or ^[, as it sometimes appears in the terminal. It works for me in most terminals as an alternative to Escape when using vi/vim.


I've seen several similar situations, and regardless of the demographics, the behavior seems to stem from the instigator's perception of being superior to the person the behavior is directed at.

Usually there is also a feeling from the instigator that the targeted individual doesn't deserve X or Y, and therefore deserves disdain, scorn, or whatever they consider appropriate.

The best thing you can do is demonstrate firmly that you are as good as or better at whatever they are using to make you feel bad or discredit you. You also have to be firm that you don't welcome or tolerate such behavior, and neither should your coworkers, manager, or higher-ups. If they aren't explicit about not tolerating such behavior, then depending on the applicable jurisdiction, they might be putting the company in legal jeopardy.


This is something that is usually taken care of by the App that's receiving the input from the microphone (Google Meet, Teams, etc.). The App breaks the audio into frequencies; the ones that correspond to the human voice range are accepted, and everything else is rejected. This is referred to as voice isolation, and it has been turned on by default in all major meeting Apps for a little while now.
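As a rough, hypothetical sketch of that frequency-splitting idea (not what any meeting App actually ships), here is a NumPy pass that keeps only the FFT bins inside an assumed voice band of roughly 85–3000 Hz and discards the rest:

```python
import numpy as np

def voice_band_filter(samples, sample_rate, low_hz=85.0, high_hz=3000.0):
    """Zero out FFT bins outside a rough human-voice band.

    A crude illustration of frequency-based isolation; the band
    limits here are assumptions, not what any real app uses.
    """
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return np.fft.irfft(spectrum * mask, n=len(samples))

# Example: a 200 Hz "voice" tone survives, a 6 kHz whine is removed.
rate = 16000
t = np.arange(rate) / rate
voice = np.sin(2 * np.pi * 200 * t)
whine = np.sin(2 * np.pi * 6000 * t)
cleaned = voice_band_filter(voice + whine, rate)
```

Real voice isolation is far more involved (overlapping windows, spectral masks, and increasingly ML models), but this is the gist of accepting some frequencies and rejecting the rest.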

Surprised to hear that it doesn't seem to work for you when the audio is generated by a different browser; this shouldn't make a difference.


Assuming OP is correct, your last sentence implies this isn't the solution being used.

Additionally, many (citation needed) Youtube videos have people talking in them; this method wouldn't help with that.

Isolating vocals in general is significantly more difficult than just relying on frequency range. Any instrument I can think of can generate notes that are squarely in the common range of a human voice (see: https://www.dbamfordmusic.com/frequency-range-of-instruments...)


I was trying to informally describe the use of Fourier transforms to achieve the isolation. Success will vary with the situation, but ML is also used in more recent implementations, with more uniform results for this particular use case.

The initial question may be specific, to a certain degree, to the way one particular browser handles things, but the comment was also trying to communicate that this can go beyond the browser and can actually be handled by the application. The microphone itself can also participate at some level if it features noise suppression or other enhancements.

The surprise about things being different when using a separate browser comes from assuming that any audio reaching the microphone should be processed equally when using FTs (or machine learning if applicable), so the audio source shouldn't matter.

References:

- https://www.nti-audio.com/en/support/know-how/fast-fourier-t...

- https://pseeth.github.io/public/papers/seetharaman_2dft_wasp...


If you are starting a new App, you should find it quite easy to get going with SwiftUI. Once you get the hang of it, you can really appreciate its simplicity for most things, especially compared with the way things were, and still are, done in UIKit with UIViewControllers (viewDidLoad, willSomethingOrOther); some other things, though, you may find tricky.

The documentation within Xcode is good enough to figure most things out by yourself, and there are plenty of resources online when you need some help.

I like the updates they are making to SwiftData for the next round of OS updates; these should make certain Apps even easier and more straightforward to build.

But yes, there are things to watch out for. Make sure you learn the foundations well (e.g. @Observable, @State, modifiers), or you may find unexpected things happening with no clear explanation.

Also keep in mind that not everything available in UIKit is available in SwiftUI, but there is plenty to cover lots of use cases nowadays.

You can also do some SwiftUI stuff outside Macs. There was an effort to bring SwiftUI to the terminal on Linux; Ashen was the name of the project, but I believe it hasn't received many updates lately. There are also recent efforts for Gnome, if I remember well. And there are other general Swift updates that are very interesting, like Swift for microcontrollers (Embedded Swift, or whatever they are calling it), and being able to create static Linux binaries (directly from your very own Mac) without any dependencies, which can run on any distro (or so they said, IIRC), built with musl for good measure apparently... again, or so they mentioned in one of the WWDC videos.

So... not so bad. With all that said, though, I really would prefer not to have any AI forced onto any of my devices with the next update.


Definitely give Stripe a chance. Besides payment links, you can have your product catalog and create pricing tables that you can embed in a static web page, and Stripe takes care of the rest.

Documentation here: https://docs.stripe.com/no-code/pricing-table


The best choice usually comes down to:

1) References: you may want to prioritize freelancers or a dev shop you have worked with before and had a good experience with, or that comes recommended by someone you really trust.

2) Cost: can you afford the option you trust the most? If not, you might need to go with the next one.

3) Commitment: in the case of an extended engagement, will your choice be available, and will the terms remain the same? If not, same as above: go with the next on your list.


Thank you for the advice. However, if I run out of references, where else would I be able to find talent? Are online freelancing websites like Upwork and Toptal worth it?


Freelancing websites sometimes have talented folks, but long-term projects may be tricky. The nice thing is that the funds are usually held in escrow, so if the work isn't done to your satisfaction you may be able to recover some, or even most, of your money.

Since we are also in a validation phase, you are welcome to send a DM on Mastodon or Twitter with some of the details, and if the idea fits our current effort, after proper evaluation, we might be able to get you a POC at no charge.


1. Ender printers are usually a good deal, but in many cases they come partially assembled, so you have to do at least some work before you can use them. Some even include bed auto-leveling, which is nice to have. One important thing to consider is the size of the things you want to print; printing volume is part of the specs. For example, 300mm x 300mm x 300mm (about one cubic foot) can be versatile enough to print plenty of different items, but keep in mind the actual printer dimensions are a bit larger than the printing volume, so you'll need some space to place it, and beyond size, weight can also be a consideration since these printers are made of aluminum for the most part. I highly recommend getting something that comes already assembled; Monoprice used to have small printers (100mm x 100mm x 100mm) for $150 or so, a bit slow but pretty much ready to go, you only needed to level the bed and add the filament. Most printers include sample PLA filament that you can use for test prints. You should also try to get a heated bed (most should have one these days) to ensure proper print adhesion, and ideally a removable printing surface that makes it easy to remove prints once finished, like Ender's flexible magnetic ones.

2. You can definitely use Blender; just make sure you're using the proper dimensions (e.g. millimeters). You can then export your model as STL, and use Cura or any other slicer you like to get a gcode file, which you can send to your printer or load onto a microSD card and select from the printer menu. Quick tip: if you just want to do a quick test, select the draft profile in your slicer; the generated gcode won't take as long to print. The quality may be rough, but you can make sure that what you want to print will print successfully without waiting two or three times as long. Quick Blender starter: the default cube shown on start measures 20mm per side. Go into edit mode, remove the top face, select all the faces, use the solidify tool with a thickness of at least 2mm, leave edit mode, export STL, slice in Cura, and you should have a quick little test box to print. Scale appropriately for a different size, and use your design skills to round the corners to avoid sharp edges.

3. Thingiverse has plenty to choose from, look out for the dimensions of the objects, and for any material recommendations or printing notes.
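If you'd rather skip the GUI for a very first test, the STL format itself is simple enough to write by hand. Here's a Python sketch that emits an ASCII STL for a solid 20mm calibration cube (a solid cube rather than the open box above, to keep the triangle list short; the `test_cube` name and function layout are my own choices, just for illustration):

```python
# Sketch: build an ASCII STL for a solid cube by hand, to show the
# triangle-mesh format that slicers like Cura consume. STL units are
# unitless; slicers usually interpret them as millimeters.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def cube_stl(size=20.0, name="test_cube"):
    s = size
    # 8 corners: bottom face first, then top face.
    V = [(0,0,0),(s,0,0),(s,s,0),(0,s,0),(0,0,s),(s,0,s),(s,s,s),(0,s,s)]
    # 6 quads, wound counter-clockwise as seen from outside.
    quads = [(0,3,2,1),(4,5,6,7),(0,1,5,4),(2,3,7,6),(1,2,6,5),(0,4,7,3)]
    lines = [f"solid {name}"]
    for q in quads:
        # Split each quad into two triangles, preserving winding.
        for tri in ((q[0], q[1], q[2]), (q[0], q[2], q[3])):
            p = [V[i] for i in tri]
            n = cross(sub(p[1], p[0]), sub(p[2], p[0]))
            length = max(sum(c*c for c in n) ** 0.5, 1e-12)
            n = tuple(c / length for c in n)
            lines.append(f"  facet normal {n[0]:.6f} {n[1]:.6f} {n[2]:.6f}")
            lines.append("    outer loop")
            for x, y, z in p:
                lines.append(f"      vertex {x:.6f} {y:.6f} {z:.6f}")
            lines.append("    endloop")
            lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

stl_text = cube_stl()  # save as e.g. test_cube.stl and open in Cura
```

A cube is 12 triangles total (2 per face), which is why hand-written STL stops being practical the moment your model has any curves; that's what Blender's exporter is for.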


Not sure what kind of processing you need to do on the video stream, but have you considered giving `ffmpeg` a try if you just need a plain pass-through from video input to output? `ffmpeg` might be built with support for the Mali libraries you mention on the OS you are using. If you are able to run `weston`, `ffmpeg` should be able to output directly to the DRM card through SDL2 (assuming it was built with it).

If the HDMI-USB capture card that outputs `mjpeg` exposes a `/dev/video` node, then it might be as simple as running:

`SDL_VIDEODRIVER=kmsdrm ffmpeg -f video4linux2 -input_format mjpeg -i /dev/video0 -f opengl "hdmi output"`

An alternative, if you can get a Raspberry Pi 3 or even a 2, is to find a distro where `omxplayer` can still be installed. You can then use `omxplayer` to display your mjpeg stream on the output of your choice; just make sure the `kms/fkms` dtoverlay is not loaded, because `omxplayer` works directly with DispmanX/GPU (BTW, not compatible with the Pi 4 and above, much less the 5), which includes a hardware `mjpeg` decoder, so for the most part bytes are sent directly to the GPU.

Hope some of this info can be of help.


Looks helpful! I assume ffmpeg needs to be built with SDL for this to work? I couldn't get it to work with my current minimal compile, but I don't think the board I'm working on has SDL, so I might need to install that and recompile.


That's correct, `ffmpeg` needs to be built with `SDL` (SDL2, really, is what's used in all recent versions). When `ffmpeg` is built and the dev files for SDL2 are present, ffmpeg's build configuration picks them up automatically and links against the library unless instructed otherwise by a configuration flag. When you run `ffmpeg`, the first lines usually show the configuration it was built with, so they may hint at what it includes, but if you want to confirm what it links against you can do a quick:

$ ldd `which ffmpeg`

And you should get the list of dynamic libraries your build is linked against. If SDL2 is indeed included, you should see a line starting with "libSDL2-2...".
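If you'd rather check programmatically, here's a small sketch that scans `ldd` output for SDL2. The sample output below is invented for illustration; in practice you'd capture the real output of `ldd "$(which ffmpeg)"`:

```python
# Sketch: detect whether SDL2 appears in ldd output. The sample text
# here is made-up example output, not from any real ffmpeg build.
sample_ldd_output = """\
\tlinux-vdso.so.1 (0x00007ffd8a9f2000)
\tlibSDL2-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libSDL2-2.0.so.0 (0x00007f1e4c000000)
\tlibc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1e4ba00000)
"""

def has_sdl2(ldd_output: str) -> bool:
    """True if any linked library name starts with libSDL2."""
    return any(
        line.strip().startswith("libSDL2")
        for line in ldd_output.splitlines()
    )
```

To use it on a real build, feed in `subprocess.run(["ldd", shutil.which("ffmpeg")], capture_output=True, text=True).stdout` instead of the sample string.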

If I remember correctly you should be able to output to the framebuffer even if there is no support for SDL2, you just have to change the previous line provided from using `-f opengl "hdmi output"` to `-pix_fmt bgra -f fbdev /dev/fb0`.

You can also use any other framebuffer device if present and you prefer it (e.g. /dev/fb1, /dev/fb2). Also, you might need something different from `bgra` on your board, but `ffmpeg` will usually drop a hint as to what.


I wrote a fairly complete POC in 2012 with kivy; it was able to render rather detailed floor plans, which was the most important feature of the POC, since the idea was that, given the complexity, it should be written once and run on multiple platforms with minimal changes, with mobile platforms as the priority.

Most impressively, it was running very well on a first generation iPad, not to mention Android tablets, and of course, Mac, Windows, and Linux workstations.

It was ultimately dismissed by the stakeholders because there was no way to render a web page inside the App, which was something kivy couldn't do back then.


Thanks, I will try it out then.

