
It's cheaper than many other additives.

Graphene flakes are all that's needed for concrete reinforcement, and they're trivial to produce, even at scale. Carbon chunks are thrown into industrial blenders with water and detergent; the shear forces exfoliate graphene flakes, which stay in suspension. The flakes are then filtered out, washed, and dried.

Large graphene sheets are hard. Tiny flakes are trivial - grade-school kitchen science.


This is where you might use a variation of that "healing cracks" bacterium in conjunction with the "eats plastic" bacterium. If you could get the plastic-digesting bacteria to excrete rigid minerals (the self-healing concrete bacteria precipitate calcium carbonate), then most of the plastic would be replaced with an interlocking 3D mesh of concrete and rock-like bacterial waste.

https://www.sciencedirect.com/science/article/abs/pii/S09500...

It'd eliminate the plastic waste and leave much more environmentally friendly remains, and it'd be self-healing.


https://youtu.be/uFhBd5mMkU8

This is one of thousands of videos of cats and dogs using buttons to talk.

Cats, and all mammals, have a neocortex. Theirs is not as deeply layered or as large as humans', but they most definitely have the ability to reason abstractly, are aware of themselves, think emotionally, and engage in complex, time-aware planning over long periods.

Your views are wrong. Language areas like Broca's region in the human brain are a consequence of physical position relative to the connectome and sensory endpoints. If you were to rewire the millions of connections from the lips, tongue, mouth, ears, and other body parts to different locations on the neocortex, Broca's region would end up somewhere different. You have roughly a quarter of a square meter of neocortex responsible for all of your perception and cognition, and almost all of it is uniform. Neurons aren't differentiated by function, and animal experiments show that plasticity allows for arbitrary rewiring.

The literature in the field suggests that human cognition exceeds that of other species mainly in the depth of cortical layering and the size of the organ. It's likely the only reason elephants, whales, and other animals with larger brains can't compete with humans is the mere absence of hands and vocal organs. Our color vision and hearing range are important, but many animals surpass us there.

Give an orca hands and human speech and there's nothing we know about neuroscience to imply that the animal wouldn't be smarter and more capable than humans. There's a lot of evidence that the killer whale would be more intelligent than humans in many ways.

The cortical layering and columnar architecture of neuron clusters differs between species, and seems to dictate the cognitive depth of abstract reasoning. There may be different algorithmic constructions in neural connections that favor human level cognition.

In principle, however, human brains aren't terribly different from those of many other large mammals, and elephants certainly display complex, emotional, symbolic, and abstract reasoning well within a range comparable to human experience.

Your notion of animal cognition is unscientific and biased toward an assumption of human superiority that isn't grounded in fact. Neuroscience is slowly and tirelessly marching toward reverse engineering the brain. The more we learn, the more similarity we find in the basic functions of mammal brains, from mice to humans to blue whales.


I think that video is terribly cute and the cat is adorable, but I also remain unconvinced that a cat can understand "morning now, before night". That is too abstract. For context, it took my (then) 4-year-old human child many months to understand "before" and "after". He would use them interchangeably to mean "not now".


I watched the video and I want to believe, but... I don't know. I have cats; I know cats are intelligent, cats can be trained to understand single words, etc. But I'm just not convinced any cat can string together English sentences like this. Also, the concept of time might be a bit abstract for a cat that is always living in the present moment. And then there's the most viewed video on that channel[1], where he presses the "cuddle" button, and then... doesn't cuddle. He seems to know that pressing a button gets a reaction, but not which button is which. Good morning human, time to play with the disembodied voice buttons again.

Like I said though, I want to believe. Can you think of any longer videos that might convince a skeptic?

[1]: https://www.youtube.com/watch?v=pvgfI9P377U


This one [1] isn't very long, but I think it shows that the cat (Billi) does associate the buttons with a meaning.

The owner is playing with Billi with a toy on a string, then drops it on her back. She then quickly (within 10 seconds) presses "No", then "Back", to tell her owner to get the toy off of her back. If she was just pressing buttons to get a reaction, this would be no more likely than "Morning" "Love you", or "Want" "Hello", or any other arbitrary pair of buttons.

[1] https://www.youtube.com/watch?v=AYPFnDOPTQo


That is more compelling, but I'm still not convinced. Out of all the videos I've watched it's the only one that comes close. Smells like broken clock theory to me.

"No" is another higher-level abstract concept that I wouldn't expect anyone to master before more basic concepts. If the cat could say "ears, no" or "tummy, no" with similar confidence, then I would be convinced it understands combining the concept of "no" with another word. And those seem easy to test too. Cats don't like you messing with those areas.


> Give an orca hands and human speech and there's nothing we know about neuroscience to imply that the animal wouldn't be smarter and more capable than humans.

Humans have one more difference from other animals: human infants are born helpless. They don't even know how to use their eyes. They need to learn everything themselves. All the pre-wired genetic firmware was discarded by evolution. It's very inconvenient for mothers, of course, but on the other hand this shift from hardware to software gives an unprecedented ability to adapt to different environments. It probably also provides a lot of early experience in making sense of messy input signals, which can be useful later. And of course it's a selection factor: if you cannot learn how to use your eyes, you are not sapient enough to breed.


Those videos are comically misleading.

The author acts as if cats can understand abstract concepts and combine words into an SVO grammar. And with dozens of words!

"later morning" "later play [with] dad" "love you mommy"

This is simply far outside the cognitive abilities of cats.

Edit: Confirmation bias is so strong that I get downvotes for writing this.


You're getting downvoted because you are making a strong contrary claim with no evidence whatsoever. It's the equivalent of two kids going:

"Yes it is!"

"No it's not!"

"Yeahuh it is!"


Not only is there widely available scientific literature on the topic, it is also stuff that people should have learned in high school, if not before.

Do I also need to provide evidence that the earth is not flat?


Then please cite it.

Citations here should be trivial.

I can absolutely cite and argue that the Earth is not flat - and win.

I suspect you will not be able to do so for this claim you are making.


You can search on Google or Google Scholar for 10 minutes.

https://en.wikipedia.org/wiki/Animal_language

https://en.wikipedia.org/wiki/Human%E2%80%93animal_communica...

https://www.nature.com/articles/s41598-022-10261-5

https://owlcation.com/stem/The-difference-between-animal-and...

Besides, extraordinary claims require extraordinary evidence. Here the extraordinary claim is that cats can understand grammar and abstract concepts.

Otherwise I could claim that birds can play chess and demand that you provide papers proving the opposite.


I encourage you to read some of these nice sources you have selected. Allow me to quote:

> Such signing may be considered complex enough to be called a form of language if the inventory of signs is large, the signs are relatively arbitrary, and the animals seem to produce them with a degree of volition (as opposed to relatively automatic conditioned behaviors or unconditioned instincts, usually including facial expressions). In experimental tests, animal communication may also be evidenced through the use of lexigrams (as used by chimpanzees and bonobos).

and

> Seyfarth, Cheney and Marler reported that vervet monkeys (now called Chlorocebus pygerythrus) responded differently to different types of alarm calls [2] (although some of the calls overlap acoustically [3] and this view is currently debated [4]). More recently, west African green monkeys (Chlorocebus sabaeus) rapidly learned the novel referent of an alarm call that was given in response to a drone [5]. Referential signaling is not limited to primates.

You'll notice the parent did share sources! They presented a bundle of them, of actual cats using actual signifiers to refer correctly to signified objects.


Was. They've sold out to the ad churn, laying off journalists, and some middle-management genius is steering the company toward profits over quality journalism.

They're a dead outlet, unless they restructure their management to preserve journalistic integrity. That's expensive and they seem more interested in cashing out their reputation.


Yeah. It might as well be in Flash.


Advertising incentivizes lying. The best presentation wins, in this current ad market.

Metrics that track good-faith interactions are needed, like eBay reputation: if someone isn't at 98% or higher, they're going to be overlooked or bypassed in favor of someone with a higher score.
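
As a rough sketch of the kind of score I mean (the math is trivial; the 98% threshold and the zero-history rule below are my own made-up choices, not eBay's actual formula):

    # Hypothetical eBay-style feedback score: share of positive interactions.
    def feedback_score(positive, negative):
        total = positive + negative
        if total == 0:
            return 0.0  # no track record, no trust (an assumed policy choice)
        return 100.0 * positive / total

    def worth_trusting(positive, negative, threshold=98.0):
        return feedback_score(positive, negative) >= threshold

    print(worth_trusting(981, 19))  # 98.1% -> True
    print(worth_trusting(950, 50))  # 95.0% -> False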

Product reviews and ratings get gamed, because current systems don't reward good-faith transactions - Amazon and Google customers purchase attention and shuffle facts around to maximize purchases. If quality reviews and curation were incentivized, there would be a thriving class of reviewers and experts playing a role in the marketplace. Their absence is glaring, and the horde of product influencers and professional reviewers underscores the deep corruption of adtech. Those people leech money from the market by selling the ability to lie. The lies are sanctioned by adtech firms, and often laundered through otherwise reliable data sources.

Any legitimate attempt to compete threatens the entire adtech ecosystem, so a majority of all consumer marketplaces are incentivized to cultivate the corruption and prevent any changes or reform that threaten the sanctified lies.

Things like Angie's List and product review vlogs and expert podcasts are stuck within the system, regardless of their intent or functionality when they start. They eventually converge into niches that support the system as a whole. Even reddit, requiring individual human dialog and interaction, has been infested by professional reviewers shilling crappy products.

You can't trust the data sources because trustworthy sources are incompatible with adtech. Google has sufficient data to fix it, but they'd lose money by allowing reform, so they ferociously defend the ethical gray areas. Their business is not quality search, it's maximizing advertising profits, and it's more profitable to have 50 people paying a premium for scraps than 5 high-quality vendors with vetted products earning those spots through quality and service.

The system is working as intended.


They shouldn't be doing stupid things like hosting anything important on Facebook.

I have zero sympathy for Australia's inability to elect competent government.

They signed up for free services like every other chump on the planet, with the same terms of service, with the same naive, zero-fucks-given attitude toward privacy, service continuity, or responsibility, and it bit them in the ass while highlighting their stupidity. This story is going to get repeated over and over until western governments catch up to social media.


Steelmanning is your own good faith attempt to understand the opposing argument so well that you can articulate it as coherently as your own position.

If you're simply repeating what the other person said, you're not using the concept to full effect.

If you're framing your own argument as a strawman instead of clarifying the opposing argument, you've missed the plot entirely (unless your opponent is arguing for the use of strawmen in debate?)

The utility of steelmanning is to minimize assumptions. Everyone has to demonstrate their comprehension. You can take it further: in order to 'pass' the steelman stage, you have to agree with your opponent's steelman of your argument, or keep refining it in dialog until you're satisfied that they understand your position.


A cluster of many $8000+ GPUs. You're looking at around 350GB of VRAM, so 30 12GB GPUs - a 3090 will cost around $1800, so $54k on the GPUs, probably another $15k in power, cooling, and infrastructure, $5k in networking, and probably another $20k in other costs to bootstrap it.

Or wait 10 years: if GPU capacity scales with Moore's law, consumer hardware should be able to run a ~400GB model locally.


One could use $4.5k RTX A6000 48GB cards instead. They can be joined in pairs with a common 96GB memory pool via NVLink. That's 7 x $4.5k = $31.5k in GPUs to get 336GB of memory, or 8 x $4.5k = $36k in GPUs to get 384GB.

Add, say, $3k per GPU pair for the surrounding computer (motherboard, CPU, RAM, PSU): 4 x $3k = $12k.

$48k total budget.
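
A quick sketch of the arithmetic behind both configurations, assuming NVLinked pairs and the ballpark prices above (none of these numbers are quotes):

    import math

    MODEL_GB = 350  # approximate VRAM needed to hold the model

    def cluster_cost(vram_gb, gpu_price, host_price_per_pair):
        # GPUs are paired via NVLink, one host machine per pair.
        n_gpus = math.ceil(MODEL_GB / vram_gb)
        n_gpus += n_gpus % 2  # round up to a whole pair
        pairs = n_gpus // 2
        return n_gpus, n_gpus * gpu_price + pairs * host_price_per_pair

    print(cluster_cost(24, 1800, 3000))  # RTX 3090s  -> (16, 52800)
    print(cluster_cost(48, 4500, 3000))  # RTX A6000s -> (8, 48000)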


> so 30 12GB GPUs - a 3090 will cost around $1800

The 3090 has 24GB, thus 15 GPUs x $1800 = $27,000 in GPUs.


Can 3090 GPUs share their memory with one another to fit such a large model, or is enterprise-grade hardware required?


Yes, two 3090s ($1.7k each) can be connected via NVLink to form a common 48GB memory pool.

Two RTX A6000s ($4.5k each) can form a 96GB memory pool.


Almost no one does this on-prem. What would this cost on AWS?


This is not true. On-prem is extremely common for things like this, because after ~6 months you'll have paid more in cloud costs than it would have cost to purchase the GPUs. And you don't need to purchase new GPUs every 6 months.

AWS would cost $50-100k/mo for something comparable.


The smaller models, yes. I'd bet dollars to donuts that GPT-Neo and other EleutherAI models outperform most, if not all, of Facebook's.

Check out Hugging Face; you'll be able to run a 2.7B model or smaller.

https://huggingface.co/EleutherAI/gpt-neo-2.7B/tree/main
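
A minimal sketch of trying the 2.7B model locally with the transformers library (the prompt and sampling settings are just placeholders):

    # pip install transformers torch
    from transformers import pipeline

    # First run downloads ~10GB of weights; the 2.7B model fits on a
    # single consumer GPU, or runs (slowly) on CPU.
    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-2.7B")
    out = generator("The meaning of life is", max_length=50, do_sample=True)
    print(out[0]["generated_text"])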

