They have failed to adopt a sensible industrial and energy policy, leading to roughly zero net GDP growth since 2019. I'm sure for the degrowth elites, though, this is not a failure; it is working as intended.
1) A website to measure and detect coil whine. It's been bugging me on my new Dell screen, but Dell says "it's within specs".
2) An AI-generated-artwork platform with open firmware for E Ink frames.
3) Server Radar: https://radar.iodev.org
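On the first idea: coil whine usually shows up as a narrow, persistent tone somewhere in the audible high-frequency range, so detection largely comes down to spotting one spectral peak that towers over everything else. A minimal numpy sketch (the frequency band and the peak-to-mean threshold are illustrative guesses, not calibrated values):

```python
# Rough sketch of coil-whine detection: look for a single narrow spectral
# peak in the ~1-20 kHz band. Thresholds here are guesses for illustration.
import numpy as np

def dominant_tone(samples: np.ndarray, sample_rate: int):
    """Return (frequency_hz, peak_to_mean_ratio) of the strongest spectral peak."""
    windowed = samples * np.hanning(len(samples))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak = int(np.argmax(spectrum))
    return float(freqs[peak]), float(spectrum[peak] / (spectrum.mean() + 1e-12))

def looks_like_coil_whine(samples, sample_rate, band=(1_000, 20_000), min_ratio=50.0):
    freq, ratio = dominant_tone(samples, sample_rate)
    return bool(band[0] <= freq <= band[1] and ratio >= min_ratio)

# Synthetic check: one second of an 8 kHz tone buried in mild noise.
rate = 48_000
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 8_000 * t) + 0.05 * np.random.default_rng(0).normal(size=rate)
print(looks_like_coil_whine(signal, rate))  # → True
```

A real version would read from the microphone in short chunks and require the peak to persist across several of them, since a one-off spike is more likely a transient than whine.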
Worked a little on Server Radar [1] again, the Hetzner Auction price tracker.
It's my fun little project to fall back on. Implemented dark mode, sorting, grouping, and various layout improvements. Also added a drawer with an auction view the other week. UI work is finally fun again with component libraries and LLMs.
Oh, and I added a Cloud Server Availability [2] page after noticing people on /r/hetzner complaining about a lack of resources. Looks like their cloud offerings are doing quite well.
I'm still working on Server Radar, a price tracker for dedicated servers from the Hetzner Auction and recently added alerting. That included adding a backend to a previously completely static website. I decided to migrate to Cloudflare Pages (from GitHub Pages).
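For what it's worth, the core of such alerting can stay a tiny pure function that the new backend runs on each data refresh. A hypothetical sketch (the data shape, field names, and match criteria are my inventions, not Server Radar's actual code):

```python
# Hypothetical price-alert check, the kind a small scheduled backend job
# could run against freshly scraped auction data. Shapes are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Auction:
    server_id: int
    cpu: str
    price_eur: float

@dataclass(frozen=True)
class Alert:
    cpu_substring: str    # e.g. "Ryzen"
    max_price_eur: float

def matching_auctions(auctions: list[Auction], alert: Alert) -> list[Auction]:
    """Return auctions that satisfy a user's alert criteria."""
    return [
        a for a in auctions
        if alert.cpu_substring.lower() in a.cpu.lower()
        and a.price_eur <= alert.max_price_eur
    ]

auctions = [
    Auction(1, "AMD Ryzen 5 3600", 34.0),
    Auction(2, "Intel Xeon E5-1650", 28.5),
    Auction(3, "AMD Ryzen 9 5950X", 61.0),
]
hits = matching_auctions(auctions, Alert("Ryzen", 40.0))
print([a.server_id for a in hits])  # → [1]
```

Keeping the matching logic pure like this makes it easy to unit-test, with notification delivery (email, webhook) bolted on separately.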
How is that different to a human reading your code and building up their experience? Is reading code now also covered by a license? It does not reproduce your code 1:1.
I think this is a bad-faith argument, if you know anything at all about how machine-learning systems work (in general, and LLMs in particular), and I wish people would stop trotting it out.
First, there are repeated, documented examples of prompts being designed to cause an LLM to output its training data (first link from a quick Google search: https://www.darkreading.com/cyber-risk/researchers-simple-te... but there are other examples, including some well-discussed ones on this site involving GitHub's own Copilot).
Second, the "it's just like a person learning" argument has been applied to all sorts of machine learning, and it rests on several fallacies:
1. That these systems learn the way humans do, and innovate on that learning
2. That their output constitutes any kind of original thought (related, LLM output is not copyrightable; human output is)
3. Most importantly, the scale is totally different. I think we can agree on the trivial example that training an image-generation model on an artist's style and using it at scale to undercut the market for their work would constitute a kind of technologically enabled competition that normal humans learning and copying styles could not equal, in either speed or cost -- even if the quality were orders of magnitude better.
As soon as something can be automated, people start acting irrationally upset, especially if the thing is seen as even remotely "creative". Those people are going to have a bad time moving forward.
I don't see this being raised as an automation issue. The GP comment is concerned with their work, including licensed code, being used by Microsoft to train LLMs without any kind of agreement or even compensation.
There isn't clear legal precedent yet whether training models is an acceptable use of licensed work, but it has nothing to do with automation.
If the legal precedent comes out as "yes, it is legal", as some jurisdictions have already decided (e.g. Japan), would the GP really change their mind and become okay with it? (Note: the Copilot lawsuit has also mostly been dismissed, so this is already close to reality.)
I doubt it. I think the GP would still be concerned about it, and would petition to change the law. So it has nothing to do with legal precedent either, unless the GP's concern genuinely comes from the legalities rather than using them as an argument. But why would the GP continue to be concerned?
The first possible reason I can think of is automation. That is, "no one cared until the models became good enough". The GP might fear for their job, or have "artist envy".
The second possible reason is a distaste for corporations, and wanting one's due for contributing to it in any way, regardless of what the law says (note: assuming that the training is legal as aforementioned). So this is more of a personal morals issue, one that I disagree with but must acknowledge. I must also point out that open weight models exist.
Legal precedent doesn't generally cross national borders like that, unless you're talking specifically about international law and precedent set by bodies like the UN or ICC.
When it comes to petitioning to change the law, that's a step further than precedent. Precedent is really just legal cases that had to be hashed out because the laws either don't exist or aren't clear. A person would be well within their right to disagree with legal precedent and try to get lawmakers to clarify or create laws that overrule court decisions.
Automation is a reasonable guess for why some may be worried about LLMs and how they're trained; I just didn't see that in the GP comment. They commented specifically on concerns about their content being used to train models without any kind of agreement or financial compensation.
I should clarify, I meant "will the GP be less concerned if the precedent says training is legal in the U.S.?" The Japan one was just an example of what could happen.
My point was that, if the GP will disagree with such a precedent, then their worry goes further than mere legalities. Specifically, "there isn't clear legal precedent yet whether training models is an acceptable use of licensed work" felt like a non-sequitur to me, because it is unlikely they would care about the legal precedent.
This is the first time I'm building a proper website, and I used a lot of AI. Things that changed since last time:
* Switched the charting library from D3 to ApexCharts. D3 was too low-level for my purposes.
* Reworked the design and contents of a lot of pages (with the help of AI).
* Various bugfixes for the database queries.
* Tried to come up with some kind of pricing-signal detection, but it's currently not working well.
* Linked to the actual auction results.
* Added minimal E2E validation using Playwright. What a pleasure to use!
I'm planning to add alerting, though I'm not keen on running a backend.
What exactly is failing in Germany, and why is it important in this context?