> "And if he can't sell it, why would he create it (or at least, why would he publish and market it) in the first place?"
And this is the crux of my biggest issue with people who are against AI art. The question "if artists can't make money on it, why would they make or share art in the first place?" is so incredibly depressing, dystopian, and frustrating to me. I understand we live in a largely capitalistic world, for better or for worse (mostly worse), but making a cash profit should not be the primary motivation to create and share art. The joy of creation and aesthetic appreciation should be. Art is only a human endeavor if it's done for intrinsic value or to share an idea or an emotion. Once the main -- or indeed only -- reason for creating something becomes "how many dollar bills will people put in my bank account if I let them see this?", it ceases to be art, in my opinion, and becomes conditioned capitalistic greed.
Don't get me wrong, if you make art that people want to pay you for, that's great! But if you remove the paycheck and suddenly can't think of any good reasons to continue creating, then it is my humble opinion that you were never making art in the first place. You were simply chasing currency, the nature of which you almost certainly don't understand anyway.
First up, I'm not against AI art - I think it's incredible what we can do with computers nowadays, and AI art generation is a fantastic idea. However, I would say your statement that 'Art is only a human endeavor if it's done for intrinsic value or to share an idea or an emotion' casts an interesting pall over AI art, and its role in terms of human culture.
Overall though, perhaps I've oversimplified. The question I should have asked was: 'if artists can't make money on it, then how can they afford to dedicate time to developing their craft and producing their art?'
Personally, my suggestion would be something like Universal Basic Income, but there are a lot of people who would be against that.
>Art is only a human endeavor if it's done for intrinsic value or to share an idea or an emotion. Once the main -- or indeed only -- reason for creating something becomes "how many dollar bills will people put in my bank account if I let them see this?", it ceases to be art, in my opinion, and becomes conditioned capitalistic greed.
Truly, go live in the real world of being an artist who's struggling to get by at even a basic level and doesn't have all the soft cushions in life that provoke absurdly rigid and privileged sentiments like the ones you wrote here.
Creativity and the desire to express it don't exempt a person from the essential pressures of economic need and its emotional roller coasters. On the contrary, when those needs aren't satisfied, it can quickly become damn hard to be creative, even if you're sincerely passionate about your creative expression. It's easy to spout crap about how people shouldn't create art just for "capitalistic greed", but for someone straining just to cover monthly rent and basic expenses, there's nothing greedy about having their creative motivation sustained by decent sales or financial sponsorship. What nonsense to assume otherwise.
As someone giving money to a number of creators via Patreon… it’s been absolutely heartbreaking to hear them resort to adding adverts, one by one like dominoes, as the cost of living has kept rising.
Some have been able to make a living as niche podcast hosts for years, but now it’s not enough… and they aren’t even in a field where they’re competing against AI art. Yet. (I struggle to imagine how pointless and devoid of meaning an AI-generated podcast would sound.)
I dunno how long the house of cards can keep stacking up, but eventually it will collapse and we’re going to have to change how we do capitalism… or replace it. If a domestic robot costing 25k up front, plus 1k per year in maintenance and some electricity (solar & wind just keep getting cheaper with amortisation), can do basic factory work, how many people at the bottom tier of society can be left with no job prospects before it collapses? By definition, half the population sits below the median IQ; there are a lot of people who aren’t going to get robot-programmer jobs who would have been drivers or low-skill tradesmen/general contractors…
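The back-of-envelope economics here can be sketched out. Only the 25k purchase price and 1k/year maintenance come from the comment above; the lifespan, electricity cost, and worker wage are my own illustrative assumptions, not real figures:

```python
# Back-of-envelope cost comparison for the domestic-robot scenario above.
# ASSUMPTIONS (not from any real source): a 5-year robot lifespan,
# 500/year in electricity, and a 30,000/year wage for the displaced worker.

def annual_robot_cost(upfront=25_000, lifespan_years=5,
                      maintenance=1_000, electricity=500):
    """Amortise the purchase price and add the yearly running costs."""
    return upfront / lifespan_years + maintenance + electricity

def years_to_break_even(upfront=25_000, maintenance=1_000,
                        electricity=500, worker_wage=30_000):
    """Years until the up-front cost is recouped by wage savings."""
    yearly_saving = worker_wage - (maintenance + electricity)
    return upfront / yearly_saving

print(annual_robot_cost())    # 6500.0 per year
print(years_to_break_even())  # under a year at these assumed numbers
```

At these (assumed) numbers the robot pays for itself in under a year, which is the whole point of the comment: once that math works at scale, the pressure on low-skill employment is structural, not cyclical.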
I’m genuinely not sure what will happen, but it seems inevitable without the same sort of changes that would help artists make a living while producing art.
I maintain that the short-form limitation of Twitter is exactly what amplifies the shitstorm compared to other social media platforms. Being anonymous makes people act like assholes, but when you only have 140 (now 280) characters to speak your mind, it guarantees that conversations across the site will be less clear and more often misunderstood. And miscommunications between people who are already anonymity-shielded dickwads lead to a positive feedback loop of Syfy's new worst movie, Shitnado.
I'd like to start a petition that we engineer a chimp with human TKTL1 and human FOXP2 genes! Sounds like we're only about 3 or 4 SNPs away, and if a chimp had a denser neocortex and a brain structure more suitable to human-like speech (and the corresponding spatial organization/awareness)... I'm just saying, a real-life Planet of the Apes doesn't have to end in disaster, we could just embrace our cousins as equals :)
I got interested in genetic engineering based on this specific idea from a Ben Bova sci fi book some 35 years ago (https://www.goodreads.com/book/show/302503.Exiled_from_Earth). There's a primate that has been genetically engineered to speak (they also had to edit some genes to make the throat more capable of human vocalizations). I read that and then looked at the front cover of my Biology book, which had a picture of a tobacco plant with a luciferase gene cloned in (a firefly gene that, in plants, makes the plant glow).
Turns out doing actual experiments like this is far more complicated, ethically as well as technologically, than even many scientists appreciate.
As someone with a background in biomedical engineering, who once planned to research the development of gene therapies for cancer treatments: yes, definitely :)
But we do know the basics, and our ability to synthesize, excise, and replace specific genetic sequences is quite sophisticated nowadays. So while there is some technical consideration regarding implementation details, the bigger barrier to gene editing experiments is the ethics. For instance, I mentioned FOXP2, and as it turns out, we've already done the experiments of engineering mice with human FOXP2 genes (which is how we learned that, in addition to language, it's quite important in spatial awareness).
While modifying a chimp comes with its own technical considerations, we've already modified monkeys in 2001 and then again in 2014, and FOXP2 is more compatible with chimps than mice, so... it's really far more about the ethics of such a thing.
Now, I was joking in my comment, but part of me absolutely would love to say "screw the anthropocentric ethics, as long as you're not hurting anyone, MAKE THE CHIMP SMARTER!" :D
Here's the thing about truth: the "who" doesn't matter. There shouldn't even be a question of "who determines if something is untrue?", nor should we be advocating for every individual to decide the truth for themselves. There is one objective reality that exists outside of our own minds, desires, and decisions, whether we like that reality or not.
What determines truth is empirical evidence. If something has no empirical evidence, there is no reason to believe it. If something has empirical evidence, it is true. If there is conflicting evidence, then some of that evidence is invalid, and it must be re-analyzed using math, existing knowledge from provable things, and formal logic. After doing so, you will either determine which of those things is true, or arrive at the conclusion that there isn't enough evidence either way, and stop at "I don't know" rather than deciding which version you prefer.
It's not about appeals to authority. Expertise is about people who have more practice at finding, testing, and analyzing the evidence in their field than random Joe Schmo on the street; and it's about nothing more than that.
We should not let people "make up their own minds" on conflicting evidence -- which is another way of saying "let people make up their own reality and expect to live in it" -- we should encourage everyone to stop at "I don't know" when they aren't sure where the evidence actually leads, and defer to people who can follow the evidence, if such a person exists. And if no such person exists, then we as a species should all stop at "I don't know (yet)".
Not stolen, just a terrible consequence of a poor voting system (i.e. electoral college) and a larger-than-comfortable minority of voters who were also shitty humans.
That's a poor analogy. You're going to the website, it's not coming to you, and you're given recommendations, not having them load and play the videos for you automatically. This is more like you buy a frozen yogurt at McDonald's in the afternoon, and when you return to McDonald's, the cashier asks if you'd like a McFlurry this time, and maybe to try a Big Mac combo?
Even humans have trouble deciding (a) what constitutes "suitable content" and (b) whether we should be deciding that for other people. So of course our current algorithms don't take that into account, since we can't define it in the first place.
Maybe one day AI will get to the point where people will accept its decisions on such matters (knowing humanity, I deem this unlikely), but for now, there's no way to do that beyond just removing that content in the first place. And then you get into a huge debacle where people complain about censorship, and... yeah, it's not really a solvable problem.
I agree with you that clicks and view time do not necessarily equate to desired content, but I disagree with you that Google uses those parameters because they're "evil". They make money when people continue using their products, regardless of whether that's because they're being sucked into a rabbit hole of engaging things they don't want, or because they're being sucked into a rabbit hole of things they actually do want.
I think the reason Google (and Amazon, and Netflix, and every other major tech company) uses clicks/view time as recommendation engine inputs is because... well, what else can they use? What quantifiable metric could possibly be used for large-scale, automated recommendations that more accurately indicates what someone actually wants to see more of? (This isn't rhetorical: if you have any ideas, I'd love to hear them.)
I don't think clicks/view time are the best metrics at all, and I don't think they're extremely accurate. But I also think they're the most accurate measure we've got, with the only other options being either (a) remove recommendations entirely, or (b) have humans manually monitor everything you do on the website, occasionally ask you why you clicked or watched things, and then make personal recommendations to you based on your answers. The latter of which is slow, more expensive, less scalable, more invasive to the user, and more tedious for the employees who would have to sit there monitoring you.
Maybe one day we'll have an AI method of using your comments and search terms as a better indication of desire (I dare say some of the recent LLMs are close to being ready for that task), but we're not there yet.
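To make the clicks/view-time approach concrete, here is a minimal, hypothetical ranking heuristic. No platform publishes its actual algorithm; the weights, field names, and toy data below are all invented for illustration:

```python
# Hypothetical engagement-based ranking: score each item by a weighted
# blend of click-through rate and how much of the video was watched.
# The 0.4/0.6 weights and the data fields are invented, not any real system.

def engagement_score(clicks, impressions, watch_seconds, duration_seconds,
                     click_weight=0.4, watch_weight=0.6):
    """Blend click-through rate with completion rate into one score."""
    ctr = clicks / impressions if impressions else 0.0
    completion = (min(watch_seconds / duration_seconds, 1.0)
                  if duration_seconds else 0.0)
    return click_weight * ctr + watch_weight * completion

def rank(items):
    """Return items sorted by descending engagement score."""
    return sorted(items, key=lambda it: engagement_score(
        it["clicks"], it["impressions"],
        it["watch_seconds"], it["duration_seconds"]), reverse=True)

items = [
    # Clickbait-y thumbnail: many clicks, but viewers bail after 30 seconds.
    {"id": "a", "clicks": 50, "impressions": 100,
     "watch_seconds": 30, "duration_seconds": 600},
    # Fewer clicks, but viewers who do click watch nearly the whole thing.
    {"id": "b", "clicks": 10, "impressions": 100,
     "watch_seconds": 540, "duration_seconds": 600},
]
# Item "b" outranks "a" despite far fewer clicks.
```

The toy data shows both the appeal and the failure mode: blending watch time penalizes pure clickbait, but neither metric says anything about whether the viewer actually *wanted* the content, which is exactly the gap being discussed here.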
FWIW, my position today (and for many years now) is that recommendation systems are inherently problematic, so you aren't exactly trapping me in some kind of contradiction or paradox here by asking me "but how else could it be done?!"... I'd argue that some things simply shouldn't be done.
In the case of social networks, I think they served a positive function to both society and the people who used them back when they didn't have the recommendation algorithms, and your feeds were curated by you choosing to follow people explicitly; this, however, was not profitable, so we are now here.
If you are going to do it, then yes: I think you probably need to do what TikTok is doing and have humans heavily involved in the recommendation system, in a way that keeps a hand on the wheel, rather than the Google way of approaching problems: algorithms on top of algorithms, with no humans anywhere.
"They don't recommend people anti-democratic indoctrination videos because they actually want you to become anti-democratic; they recommend them because they don't care, and just want to make money" is not a solid argument against them being evil.
Discussion of fraud is not beyond the Overton window. Continued discussion of dangerous ideas which have already been proven false should be, for precisely those two reasons: they are false, and they are dangerous. False balance is bad; false balance where one of the ideas on the scale is dangerous to human beings is worth removing.
No one call the forest rangers, I'm running down that hiking trail! :D
No, but legitimately, my motivation is "I know what I'm good at, how do I apply that to create new things?" If I'm forced into things I'm not good at, I falter motivationally. And if I'm doing what I'm good at, but not creating anything new or interesting with it, I burn out and become miserable. So in the context of this framework, sign me up to start running up that (hiker's) hill!