Ed's main critique is about business sustainability -- it's true that there are many articles about AI covering IP issues or ethics, but he is unique in actually crunching the numbers on profit.
There's the Financial Times, Forbes, tons of Reddit posts, YouTube videos... I suppose it's possible that he's the only blogger doing this, but as far as I can see he is not the only one crunching profit numbers on the most "visible" company in the world.
FUTO is an organization dedicated to developing, both through in-house engineering and investment, technologies that frustrate centralization and industry consolidation. Through a combination of in-house engineering projects, targeted investments, generous grants, and multi-media public education efforts, we will free technology from the control of the few and recreate the spirit of freedom, innovation, and self-reliance that underpinned the American tech industry only a few decades ago. Our principles are here: https://www.futo.org/about/what-is-futo/
There is consensus today that social media is broken. At FUTO we aim to fix this. In this role you will act as product owner of Polycentric (https://gitlab.futo.org/polycentric/polycentric), our open-source decentralized social network, which also forms the basis of FUTO ID. You will also provide support for our next-generation social media project.
No need. Obtainium already supports downloading from third-party F-Droid repositories, so users can add Grayjay this way:
1. Enter the URL "https://app.futo.org/fdroid/repo/"
2. In "Override Source", select "F-Droid Third-Party Repo"
3. For "App ID or Name", enter "grayjay"
4. Press "Add"
5. Done
Hey, we're aware and a little embarrassed! We use nested CSS selectors that aren't supported by some browsers, and we haven't gotten around to fixing it yet, sorry!
FUTO | Austin, TX | On-site / Hybrid / Remote | Full Time / Hourly
FUTO is an organization dedicated to developing, both through in-house engineering and investment, technologies that frustrate centralization and industry consolidation. Through a combination of in-house engineering projects, targeted investments, generous grants, and multi-media public education efforts, we will free technology from the control of the few and recreate the spirit of freedom, innovation, and self-reliance that underpinned the American tech industry only a few decades ago. Our principles are here: https://futo.org/what-is-futo
FUTO is also hiring for hourly engineering roles on the revolutionary cross-platform video streaming app Grayjay https://grayjay.app/
Recommendation Engine Developer - Grayjay aims to introduce a feature that recommends creators and videos to users based on their interests. The initial task is to develop an algorithm that tags channels so they can be correlated with similar channels. https://futo.org/jobs/recommendation-engine-developer-grayja...
FCast Developer - We seek to create an open-source casting protocol for wide adoption. Some work has already been done (see https://fcast.org/). You can help bring FCast to a wide developer audience by writing libraries for multiple languages and by bringing FCast to new platforms (Roku, AppleTV, TizenOS, WebOS, …). https://futo.org/jobs/fcast-developer-grayjay
To apply, send your resume and cover letter to jobs at futo dot org with the title of the position you're interested in as the subject line.
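For anyone curious what the channel-tagging task in the Recommendation Engine role might look like, here is a minimal, purely illustrative sketch: represent each channel as a set of tags and rank other channels by Jaccard similarity of their tag sets. All channel names and tags below are made up; the actual approach is open-ended.

```python
# Illustrative sketch only: channels as tag sets, ranked by Jaccard similarity.

def jaccard(a: set, b: set) -> float:
    """Similarity of two tag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical channels with hand-assigned tags.
channels = {
    "chan_cooking": {"cooking", "recipes", "food"},
    "chan_baking":  {"baking", "recipes", "food"},
    "chan_gaming":  {"gaming", "speedruns"},
}

def similar_channels(name: str):
    """Rank all other channels by tag overlap with `name`, most similar first."""
    ref = channels[name]
    return sorted(
        ((other, jaccard(ref, tags))
         for other, tags in channels.items() if other != name),
        key=lambda pair: pair[1],
        reverse=True,
    )

# chan_baking shares {recipes, food} with chan_cooking: 2 / 4 = 0.5
print(similar_channels("chan_cooking"))
```

In practice the hard part is producing the tags themselves (from metadata, descriptions, or co-viewing signals) rather than the similarity metric.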
"Build, don't train" is poor advice for a prospective "AI Engineer" title. It should be "Know when to reach for a new model architecture, know when to reach for fine-tuning / LoRA training, know when to use an API." Relying only on an API will drastically reduce your product differentiation, to say nothing of the fact that any AI Engineer worth that title should know how to build and train models.
Fair point! I think my main idea was "prefer building with an API over training your own model" but that isn't as pithy.
The jury's still out on how much training and fine-tuning will matter in the long run - my belief is that many great products can exist without a new model architecture, or without owning the model at all.
That advice makes sense if we're talking about models with 800B+ parameters that require a gigantic investment of capital and time. For models that fit on a consumer GPU, you're leaving chips on the table if you don't take advantage of training and fine-tuning. It's just too easy and powerful not to.
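The "too easy not to" point rests on the low-rank trick behind LoRA: instead of updating a full weight matrix W, you train two small matrices A and B and compute y = (W + BA)x, so the trainable parameter count shrinks by orders of magnitude. A minimal NumPy sketch of just that arithmetic (all shapes and names are illustrative, not any library's API):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8        # rank r << d_in keeps the adapter tiny

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def adapted_forward(x):
    # Frozen path plus low-rank correction; only A and B would get gradients.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter starts as an exact no-op.
assert np.allclose(adapted_forward(x), W @ x)

full_params = W.size            # 512 * 512 = 262,144
lora_params = A.size + B.size   # 8 * 512 + 512 * 8 = 8,192 (32x fewer)
```

This is why fine-tuning a consumer-GPU-sized model is cheap: the frozen weights need no optimizer state, and the adapter is a tiny fraction of the model.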