Hacker News | TuringTest's comments

Wikipedia is and has always been a wiki; reverting bad or controversial edits has always been expected from day one.

Also, Wikipedia has developed an editorial line of its own, so it's normal that edits that go against that line will be called into question; if that happens to you, you're expected to collaborate on the talk pages to explain the intent of your changes, and possibly get recommendations on how to tweak them so that they stick.

It also happens that most contributions by first-timers are indistinguishable from vandalism or spam; those are so obvious that an automated bot can recognize and revert them without human supervision, with a very high success rate.

However if those first contributions are genuinely useful to the encyclopedia, such as adding high quality references for an unverified claim, correcting typos, or removing obvious vandalism that slipped through the cracks, it's much more likely that the edits will stay; go ahead and try that experiment and tell us how it went.


> The main issue with neutral people is that we do not know in which camp they are.

And that's a good thing, 'cause it means they're living up to their standards.


Oh dear, you need to learn about the GamerGate incident which started August 2012. All the extreme division and online manipulation through the collaborative creation of false narratives started right there, with that issue, before contaminating the entire political landscape.

It's the Eternal September of our generation, and it's not recognised enough as such. Before that, the internet was a different place.

https://en.wikipedia.org/wiki/Gamergate


That was 2014, not 2012; and I was trying not to mention it.

You're right, I mistyped it.

> Gamergate or GamerGate (GG) was a loosely organized misogynistic online harassment campaign motivated by a right-wing backlash against feminism, diversity, and progressivism in video game culture

Okay, what the actual fuck? IIRC it was people whining about the absolute state of games journalism in the 2010's.


GamerGate was about ethics in games journalism roughly as much as the Arab Spring was about a street vendor having his cart confiscated.

That was their initial spark, but it kicked off a ding-dong battle for years. You could argue it's still going today, given places like /v/ and ResetEra are still fighting it, games like Dustborn and Concord are pilloried, and the "Sweet Baby Inc. detected" Steam curator exists to list games that have taken that company's advice.


Basically, Wikipedia has a failure point: if the media creates a narrative, that's what passes as valid.

I was there, it was as Wikipedia describes it. Read the talk page.

Edit: the replies to this comment demonstrate why this problem is intractable: people are very emotionally invested into their idea of how things unfolded, and outright reject other perspectives with little more than a "nuh-uh!".


I was there as well. It was absolutely not as Wikipedia describes it. If the claim was that some people participating in GG did so because they were sexist, fair enough. That was true and unavoidable because you get crazies in every group. But that was not some kind of universal thing, such that Wikipedia should be describing the movement unambiguously as "misogynistic".

It absolutely matched the Wikipedia summary. There is a ton of evidence linked supporting each point: it was a hate mob from the moment Eron Gjoni decided his ex should be punished for breaking up with him.

> from the moment Eron Gjoni decided his ex should be punished for breaking up with him.

There was no such moment in the first place.


It all started with his post, attacking her relationship with Grayson, who never reviewed her games. Even he later admitted that the original claims were fictitious but that did nothing to stop the attacks – if you look at the threats she received or the online statements the attackers made, they cared a LOT more about her alleged infidelity or what they perceived as unfair privileges for women in the gaming industry than anything about journalism.

This was later added to his post:

> To be clear, if there was any conflict of interest between Zoe and Nathan regarding coverage of Depression Quest prior to April, I have no evidence to imply that it was sexual in nature.

He even told Boston Magazine that this was the hook he used to get attention, with what he knew was a high likelihood of attacks:

> As Gjoni began to craft “The Zoe Post,” his early drafts read like a “really boring, really depressing legal document,” he says. He didn’t want to merely prove his case; it had to read like a potboiler. So he deliberately punched up the narrative in the voice of a bitter ex-boyfriend, organizing it into seven acts with dramatic titles like “Damage Control” and “The Cum Collage May Not Be Accurate.” He ended sections on cliffhangers, and wove in video-game analogies to grab the attention of Quinn’s industry colleagues. He was keenly aware of attracting an impressionable readership. “If I can target people who are in the mood to read stories about exes and horrible breakups,” he says now, “I will have an audience.”

> One of the keys to how Gjoni justified the cruelty of “The Zoe Post” to its intended audience was his claim that Quinn slept with five men during and after their brief romance. In retrospect, he thinks one of his most amusing ideas was to paste the Five Guys restaurant logo into his screed: “Now I can’t stop mentally referring to her as Burgers and Fries,” he wrote. By the time he released the post into the wild, he figured the odds of Quinn’s being harassed were 80 percent.

https://www.bostonmagazine.com/news/2015/04/28/gamergate/2/


> Even he later admitted that the original claims were fictitious

No, he did not. And nobody was claiming that Grayson reviewed Quinn's games beyond like a day or two of confusion, and none of the arguments made relied on that being the case.

> what they perceived as unfair privileges for women in the gaming industry than anything about journalism.

This is a false dichotomy. The entire point was that the journalism had a role in creating those privileges.


> No, he did not

Those were his words, I’m not sure why you’d expect your assertion to be more credible.

> nobody was claiming that Grayson reviewed Quinn's games beyond like a day or two of confusion

They spent a year lying about her “unethical” actions justifying all of the abuse, and it all traced back to that foundational lie.


> Those were his words

No, they aren't. They're your interpretation of Boston Magazine's spin (and it's really, really obvious purely from the style of the prose that it's a complete hit piece that chose its conclusion ahead of time). The article provides no evidence of any such words. Because there is no such evidence, because he said nothing of the sort.

> They spent a year lying about her “unethical” actions justifying all of the abuse

That is, again, objectively not what happened. Any claims WRT Quinn were evidenced, and were also irrelevant to the large majority of what was going on. (What was actually going on, not what sources like the ones you prefer chose to focus on.)


> No, they aren't

They’re literally the words he updated his blogpost to add.

> That is, again, objectively not what happened.

Cool story, do you have any sources? You keep saying every period source is wrong, based on what?


How would you characterize the initial blog post?

Well no, I was also around but not particularly interested at the time. This looks like a classic case of the media trying to close ranks and smear their critics.

Funny enough, the movement stopped being about harassing women the moment the media stopped writing about it; advocates kept on going, criticizing the ideological push into videogames to this day. At the same time, by now both Brianna Wu and Anita Sarkeesian have been shown to be grifters who knew jackshit about anything but how to play a crowd.

I was also there, and I say it was very much not as Wikipedia describes it; the narrative is practically libelous. I would tell you to read as much of the archived back-room nonsense as I did (not just the talk page archives but internal Wikipedia governance stuff), but even if it could be unearthed this much later, nobody deserves the trauma.

Sigh. Well, now that it's come up....

Fun thing about that. Whenever someone starts going off about how Zoe Quinn was supposedly mistreated and how that supposedly launched a "right-wing backlash against feminism" and a "misogynistic online harassment campaign", quiz them about the "jilted boyfriend" (as they typically put it) who wrote the post that supposedly set everything off. With remarkable consistency, they don't know his name (Eron Gjoni) or anything about his far-left political views, and will refuse to say the name if you ask. They have never read the post and have no idea what it says, and will at most handwave at incredibly-biased third-hand summaries.

I'm pretty sure I've even had this happen on HN.


GG wasn't constrained to Gjoni, it was the reaction to his posting. One guy saying "I'm on this team" does not define the characteristics of the resulting events.

You miss the point. It's about those people being misinformed, unwilling to look into matters independently, and selective in the application of their supposed ideological principles.

Does "someone doesn't know trivia about the inflection point" really demonstrate any of those things?

Like, if I asked you whether the anger at Depression Quest was downstream of a long-standing meme-feud on /v/ about whether visual novels are videogames and you didn't know that, that doesn't really mean anything about your understanding of anything other than /v/ culture wars of the 2010s.

I mean, c'mon, "five guys burgers and fries"?

The whole thing springs out of "someone who made a thing we don't like" and "an excuse to attack" - the lack of any actual ethical breaches in the coverage of Depression Quest should be immediately disqualifying.


Among other things, I think it suggests that my opinion about what happened, as someone who does know those things from distinctly remembering them and having had them be personally relevant at the time, should be taken more seriously than that of people telling me over a decade later what happened based on some combination of { the Wikipedia article, their own worldview, what their friends have said about it, more recent news articles from aggrieved people who cite it as part of a grand conspiracy theory about contemporary right-wing politics }.

If your grievance is "people don't take me seriously in arguments", then you could try deploying sources. There's probably still plenty of /v/ archives from back in the day, right?

But I think "people trust contemporary and retrospective reporting more than me, a guy who self-identifies as having a skin-in-the-game perspective" shouldn't be very surprising.

And, if it means anything, I was reading /v/ at the time, too, was initially sympathetic, and eventually realized it was all just an extension of existing /v/ grievance politics (from my perspective) - "people who disagree with us or make things we don't like are getting attention, which is evil".

I was there for threads where people were seething about positive coverage around Depression Quest before the "Zoe Post" blow-up, which was purely "we don't like that people enjoy experiences that don't suit our tastes".

At some point I realized that there just weren't any actual ethical issues to speak of around the Depression Quest coverage, and it was just more /v/ seething about outlets liking things they didn't.


> If your grievance is "people don't take me seriously in arguments", then you could try deploying sources. There's probably still plenty of /v/ archives from back in the day, right?

I spent years trying to do this. It took inordinate amounts of time and mental energy, made exactly zero difference to the beliefs of my interlocutors no matter how well reasoned and evidenced, and additionally got me dismissed as some weirdo who cares too much (by people who clearly cared too much, but were annoyed that I disagreed with them).

I am not getting back into that now and am only willing to discuss this in the most top-level generalities. It was genuinely traumatic.

> At some point I realized that there just weren't any actual ethical issues to speak of around the Depression Quest coverage, and it was just more /v/ seething about outlets liking things they didn't.

You keep talking about /v/. I don't understand why. The main discussion was on Reddit. And they showed concrete evidence of new ethical issues regularly.


>You keep talking about /v/. I don't understand why.

Given that you find not knowing the blog post guy's name disqualifying, this is extremely funny. The ground level of the whole shitshow wasn't /r/KiA.

(I'd love to see a scrap of evidence that /r/KiA did anything beyond "we did it reddit"-style conspiracy posting and going "hmm this dev is queer, is this an ethics issue?", but given that this was apparently traumatic for you, I won't force the issue)


The GamerGate article is probably the best example of Wikipedia's blatant political bias.

There are many biased articles out there, of course, but not many manage to misrepresent past events to such an extreme that it borders on comical. It reads like it was written by Zoe Quinn herself. Maybe it was.


GamerGate was about journalism in the same way that the Russian invasion of Ukraine was to protect the rights of ethnic Russian minorities in that country. The GamerGate people used ethics as an excuse because that sounds a lot more reasonable than “hate mob riled up by a bitter ex”, but it fell apart as soon as you looked at the evidence (e.g. they were most focused on attacking a developer over a relationship with someone who never reviewed her games), where they went for support (right-wing agitators with low journalistic ethics), and all of the real issues they ignored between huge gaming companies and the major media outlets.

The excuse was as believable as someone saying they were super concerned about ethics in tech journalism, but then never said a word about a huge tech company and spent all of their time badgering the Temple OS guy for sharing a meal with an OS News writer.


Or, you know, people had long been unhappy with the poor state of game reviews and the incident in question prompted broad complaints. Rather than accept criticism the journalists in focus instead decided to use their platforms to smear their critics as a sexist hate mob.

If that was really their motive, they sure picked an odd way to express it by focusing their efforts on attacking one woman with very little power in the industry while ignoring the actual game media outlets and huge companies. It’s like claiming you’re an environmental activist but instead of even talking about Exxon you’re busy making death threats to the local pet store claiming their organic kibble isn’t really organic.

That's it though, at the time there were plenty of complaints about the media outlets and publishers. Your problem is that the only people reporting on this to the wider public were the very journalists that the group were criticising.

GamerGate wasn’t doing that work and they distracted mightily from it. For example, the much-hated Kotaku was actually doing reporting which got them blacklisted, so not only was GamerGate not contributing there, they were actually harming the people who were:

https://kotaku.com/a-price-of-games-journalism-1743526293

Even if they had also been involved, it would not excuse the abusive behavior.


That Kotaku piece came a full year after GamerGate; if anything, people might question whether it'd have happened if GamerGate hadn't drawn attention to these problems.

GamerGate was still going strong in 2015, and did absolutely nothing to help stories like that. The people attacking journalists don’t get to take credit for their targets’ work.

I'm sure that the journalists involved would never admit that people putting a spotlight on their bad behaviour made them clean up their act.

There's no obvious way to change the location of the prediction. Can it be done, to support the "travelling soon" use cases?

Yes, but currently it's only possible via a URL param:

- https://weather-sense.leftium.com/?n=nyc

n is short for "name" and uses the Open Meteo geocoding API[1].

[1]: https://open-meteo.com/en/docs/geocoding-api
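As a rough sketch of how such a lookup could work (this is an illustration based on the Open Meteo geocoding docs, not the app's actual code; the function names are made up):

```python
from urllib.parse import urlencode

# Documented Open Meteo geocoding search endpoint.
GEOCODING_ENDPOINT = "https://geocoding-api.open-meteo.com/v1/search"

def geocoding_url(name: str, count: int = 1) -> str:
    """Build the lookup URL for a place name like 'nyc'."""
    return f"{GEOCODING_ENDPOINT}?{urlencode({'name': name, 'count': count})}"

def pick_coordinates(response: dict) -> tuple[float, float]:
    """Extract (latitude, longitude) from the first geocoding result."""
    first = response["results"][0]
    return first["latitude"], first["longitude"]

# Example with a response shaped like the documented API output:
sample = {"results": [{"name": "New York", "latitude": 40.71, "longitude": -74.01}]}
print(geocoding_url("nyc"))      # → .../v1/search?name=nyc&count=1
print(pick_coordinates(sample))  # → (40.71, -74.01)
```

Fetching that URL returns a JSON body with a `results` list; the app presumably feeds the first hit's coordinates into the forecast request.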


Alignment is a marketing concept put there to appease stakeholders; it fundamentally can't work more than at a superficial level.

The model stores all the content on which it is trained in a compressed form. You can change the weights to make it more likely to show the content you ethically prefer; but all the immoral content is also there, and it can resurface with inputs that change the conditional probabilities.

That's why people can get commercial models to circumvent copyright, give instructions for creating drugs or weapons, encourage suicide... The model does not have anything resembling morals; to it, all text is the same: strings of characters that appear when following the generation process.
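A toy numerical sketch of the point about conditional probabilities (this is an illustration, not a real LLM; the scores and penalty are invented):

```python
import math

def softmax(scores: dict) -> dict:
    """Turn raw scores into a probability distribution."""
    mx = max(scores.values())
    exps = {t: math.exp(s - mx) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: v / total for t, v in exps.items()}

# Learned scores for an undesired continuation in two contexts.
scores_by_context = {
    "neutral prompt": {"safe": 2.0, "unsafe": 1.0},
    "crafted prompt": {"safe": 1.0, "unsafe": 4.0},
}

PENALTY = 2.0  # "alignment" modeled as a flat score penalty on the unsafe token

for context, scores in scores_by_context.items():
    tuned = dict(scores)
    tuned["unsafe"] -= PENALTY
    probs = softmax(tuned)
    # Neutral prompt picks "safe"; the crafted prompt still picks "unsafe".
    print(context, max(probs, key=probs.get))
```

The penalty lowers the a priori probability of the undesired output, but a context that boosts its conditional score enough still surfaces it; the knowledge was never removed, only made less likely.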


I'm not so sure about that. The incorrect answers to just about any given problem are in the training set as well, but you can pretty reliably predict that the correct answer will be given, granted you have a statistical correlation in the training data. If your training data is sufficiently moral, the outputs will be as well.

> If your training data is sufficiently moral, the outputs will be as well.

Correction: if your training data and the input prompts are sufficiently moral. Under malicious queries, or given the randomness introduced by sufficiently long chains of input/output, it's relatively easy to extract content from the model that the designers didn't want their users to get.

In any case, the elephant in the room is that the models have not been trained with "sufficiently moral" content, whatever that means. Large Language Models need to be trained on humongous amounts of text, which means that the builders need to use many different, very large corpora of content. It's impossible to filter all that diverse content to ensure that only 'moral content' is used; and even if it were possible, the model would be far less useful for the general case, as it would have large gaps in its knowledge.


The idea of the ethical reasoning dataset is not to erase specific content. It is designed to present additional thinking traces with an ethical grounding. So far, it is only a fraction of the available data. This doesn't solve alignment, and unethical behaviour is still possible, but the model gets a profound ethical reasoning base.

>Alignment is a marketing concept put there to appease stakeholders

This is a pretty odd statement.

Let's take LLMs alone out of this statement and go with a GenAI-style guided humanoid robot. It has language models to interpret your instructions, vision models to interpret the world, and mechanical models to guide its movement.

If you tell this robot to take a knife and cut onions, alignment means it isn't going to take the knife and chop up your wife.

If you're a business, you want a model aligned not to give away company secrets.

If it's a health model, you want it to not give dangerous information, like conflicting drugs that could kill a person.

Our LLMs interact with society and their behaviors will fall under the social conventions of those societies. Much like humans LLMs will still have the bad information, but we can greatly reduce the probabilities they will show it.


> If you tell this robot to take a knife and cut onions, alignment means it isn't going to take the knife and chop up your wife

Yeah, I agree that alignment is a desirable property. The problem is that it can't really be achieved by changing the trained weights; alleviated yes, eliminated no.

> we can greatly reduce the probabilities they will show it

You can change the a priori probabilities, which means that the undesired behaviour will not be commonly found.

The thing is, then the concept provides a false sense of security. Even if the immoral behaviours are not common, they will eventually appear if you run chains of thought long enough, or if many people use the model approaching it from different angles or situations.

It's the same as with hallucinations. The problem is not that they are more or less frequent; the most severe problem is that their appearance is unpredictable, so the model needs to be supervised constantly; you have to vet every single one of its content generations, as none of them can be trusted by default. Under these conditions, the concept of alignment is severely less helpful than expected.


>then the concept provides a false sense of security. Even if the immoral behaviours are not common, they will eventually appear if you run chains of thought long enough, or if many people use the model approaching it from different angles or situations.

Correct, this is also why humans have a non-zero crime/murder rate.

>Under these conditions, the concept of alignment is severely less helpful than expected.

Why? What you're asking for is a machine that never breaks. If you want that, build yourself a finite state machine; just don't expect you'll ever get anything that looks like intelligence from it.


> Why? What you're asking for is a machine that never breaks.

No, I'm saying that 'alignment' is a concept that doesn't help solve the problems that will appear when the machine ultimately breaks; in fact it makes them worse, because it gives no account of when that will happen, as there's no way to predict that moment.

Following your metaphor of criminals: you can get humans to behave according to the law through social pressure, with others watching your behaviour and influencing it. And if someone nevertheless breaks the law, you have the police to stop them from doing it again.

None of this applies to an "aligned" AI. It has no social pressure, its behaviours depend only on its own trained weights. So you would need to create a police for robots, that monitors the AI and stops it from doing harm. And it had better be a humane police force, or it will suffer the same alignment problems. Thus, alignment alone is not enough, and it's a problem if people depend only on it to trust the AI to work ethically.


What precise process do you suggest to tell them apart at Google scale? That's the crux of the matter.

It is not the crux!!

Large companies don't get to say they're too big, so therefore it is hard.

Too damned bad!

They can take advantage of scale, but not at the cost of breaking the law, or just doing their job improperly.

If it makes service at scale difficult, well that's just too bad. Sucks to be them. Maybe a competitor will do better.

No excuses because "oh poor widdle me, I'm too big"


Who is going to stop them when they do the current shenanigans, and how are they going to enforce it?

The legal system is supposed to…

Too bad it’s working only for the powerful.

Marx was right about some things…


Why does scale matter?

If I have 100 customers and I have to spend 1 hour a week dealing with legal compliance requests then if I have 200 customers I have to spend 2 hours a week dealing with legal compliance requests, but I also have more resources to do it with.

In fact, scale usually makes it easier rather than harder because you can take advantage of economies of scale to streamline the process.

And, in the end, if you aren't able to comply with the law then you shouldn't be in that business regardless of your scale.


The only way to guarantee compliance with the DMCA is to remove any content the moment a complaint is submitted.

Copyright can only be determined in court. The fact that not all copyright complaints lead to a video going down is because Google is willing to take on some liability when they believe a complaint is not legit, and leave the video up.


I'm not sure how this is a reply to my comment. What you said applies whether you are hosting 1 video a month or 1,000,000 videos a month. My point was that scale isn't an excuse. What applies to large applies to small and vice versa.

The point is that regardless of the size of the company, copyright is such a shitshow that there are only less bad ways of handling it. The only way for a company to guarantee that they never violate copyright law is to do a takedown every time there is a complaint.

Obviously, this is not something they can do, because offering random people the ability to take down random videos with only the courts as recourse would be a disaster. Neither do these companies want to be in the business of deciding if a complaint is valid or not, because if they decide one way and then a judge decides the other, they get screwed.

Google tries to take a measured stance and evaluate complaints for obvious issues, but otherwise they do generally just act on them, and if the other parties involved can't agree on whether or not there is infringement, they just throw their hands up and tell them to take it to court.

Copyright is so complicated and fraught that it's virtually impossible to manage it in a way that satisfies everyone, regardless of how big or small a player is.


That's not the debate we're having here. See the comment I originally replied to:

> What precise process do you suggest to tell them apart at Google scale?

The suggestion is that scale makes a difference. I was refuting that.


> And, in the end, if you aren't able to comply with the law then you shouldn't be in that business regardless of your scale.

Again, you're talking from a moral standpoint, but it's not practical. Who's going to stop Google or other corporations from handling DMCA claims the way they currently do?

> Why does scale matter?

Because of resources. Any defined process needs resources to be implemented; law enforcement is no different.

Google provides services at scale by automating the shit out of them. The only way to tell legit from fake claims at that volume is to also create an automated resolution process, with the results we see.

You may want to limit Google size by forcing them to perform human reviews for all their customer service interactions; but again, how are you going to force them into compliance? You'd need a US judiciary system the size of Google to do it.


> You may want to limit Google size by forcing them to perform human reviews for all their customer service interactions

You've inferred that, but I didn't make this claim. A sensible strategy would involve automating as much as possible while allowing for the ones that matter (e.g. OP's example) to be escalated.

Clearly you can't do that if, as in OP's case, you don't even perform any automated ID checks before telling the complainant that their ID hasn't been verified.

> Again, you're talking from a moral standpoint

Not at all. I'm taking the legal standpoint. I say nothing about whether this particular law, or any other law, is moral or not. Complying with the law is a basic requirement that any company has to satisfy. Why should Google be any different just because it's big? You seem to be suggesting that laws should only apply to small entities and that once you go above a certain scale, you are above the law.

Again, if you simply cannot comply with the law for some reason (as you seem to be suggesting applies to Google) then you shouldn't be running that business at all because, after all, doing so implies doing something illegal.


If you have 100 customers, they are all authentic. If you have 100,000,000 customers, 15,000,000 are bad actors racking their brains on how to game your system.

It depends. Sometimes the joy is in discovering what problem you are solving, by exploring the space of possibilities for features and workflows in a domain.

For that, having elegant and simple software is not needed; getting features out fast to try how they work is the basis of the pleasure, so having to write every detail by hand reduces the fun.


Sounds like someone who enjoys listening to music but not composing or performing music.

Or maybe someone DJing instead of creating music from scratch.

Or someone who enjoys playing music but not building their own instrument from scratch.

No.

Building the instrument would be electrical engineering.

Playing the instrument would be writing software.


Came here to say the same. The game offers too little with just one question; once you get the gist of how it works, it's over.

> I know a decent bit of both worlds so that disconnect in perceptions always amuses me.

Double-entry bookkeeping was from its inception an error-correction code that could be calculated by hand.

Modern databases contain much more powerful error correction methods in the form of transactional commits, so from a pure technical point, double-entry bookkeeping is no longer needed at all; that's why programmers have a hard time understanding why it's there: for us, when we store a value in the DB, we can trust that it's been recorded correctly and forever as soon as the transaction ends.

The thing is, accounting culture still relies on the concepts derived from double-entry bookkeeping as described in the article; all those assets and debts and equity are still used by finance people to make sense of the corporate world, so there's no chance they'll fall out of use anytime soon, at least 'in the context of a company' as you put it.

Now, would it be possible to create a new accounting system from scratch that relied on DB transactions and didn't depend on double entry? Sure; in fact, cryptocurrencies are exactly what happens when computer engineers design a money system unconstrained by tradition. But in practical terms it still needs to relate to the traditional categories in order to be understood and used.


> from a pure technical point, double-entry bookkeeping is no longer needed at all

Just because databases are transactional doesn't mean the entire system is. Double-entry accounting still helps catch errors.

A concrete example, since people like to think databases dealing with money are especially transactional, when they're not ...

I used to work at a small regional bank. In the course of some network maintenance, I accidentally disrupted the connectivity to an ATM while a customer was doing a transaction.

The next day, our accounting folks caught a problem with reconciliation, and the customer called to follow up as well. My interruption caused a deposit to proceed far enough to take their checks and money, but failed to credit the customer's account.

It's very hard to orchestrate transactions perfectly across multiple organizations and systems. You can't hand-wave this away by pointing at db consistency guarantees. Traditional accounting techniques will catch these errors.

I'm not sure that ATMs even have the ability to communicate certain failure classes back to the acquiring bank. E.g., a cash-dispenser malfunction is common enough to be mentioned explicitly by VISA's network rules, but as far as I know it will almost always require manual reconciliation between the ATM operator and the network.


The crypto model of single entries with "from" and "to" fields works well for transactions. For example, if you move $100 from a checking to a savings account, something like the following will capture it perfectly.

```json
{ "from": "Checking", "to": "Savings", "amount": 100 }
```

This is basically what a crypto ledger does.

But the main reason we need double-entry accounting is that not all accounting entries are transfers. For example, if we are logging a sale, cash increases by $100 and revenue increases by $100. What's the "from" here? Revenue isn't an account that money is taken from; it is the "source" of the cash increase. So something like the following doesn't capture the true semantics of the transaction.

```json
{
  "from": "Revenue",
  "to": "Cash",
  "amount": 100
}
```

Instead, in accounting, the above transaction is captured as follows.

```json
{
  "transaction": "Sale",
  "entries": [
    { "account": "Cash",    "debit": 100,  "credit": null },
    { "account": "Revenue", "debit": null, "credit": 100 }
  ]
}
```
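One nice property of this shape is that it can be checked mechanically. A minimal sketch (account names and amounts illustrative) of the invariant every journal entry must satisfy:

```python
def is_balanced(entry):
    """A journal entry is valid only if total debits equal total credits."""
    debits = sum(e["debit"] or 0 for e in entry["entries"])
    credits = sum(e["credit"] or 0 for e in entry["entries"])
    return debits == credits

sale = {
    "transaction": "Sale",
    "entries": [
        {"account": "Cash", "debit": 100, "credit": None},
        {"account": "Revenue", "debit": None, "credit": 100},
    ],
}
print(is_balanced(sale))  # True
```

The "from"/"to" model gets this invariant for free but only for transfers; the debit/credit model keeps it even for entries that aren't transfers at all.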

It gets worse with other entries like:

- Depreciation: Nothing moves. You're recognizing that a truck is worth less than before and that this consumed value is an expense.
- Accruals: Recording revenue you earned but haven't been paid for yet. No cash moved anywhere.

The limitation of ledgers with "from" and "to" is that it assumes conservation of value (something moves from A to B). But accounting tracks value creation, destruction, and transformation, not just movement. Double-entry handles these without forcing a transfer metaphor onto non-transfer events.
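For instance, a depreciation entry can be sketched as a balanced pair of entries even though nothing is transferred between parties (account names and the $500 amount are illustrative):

```python
# Depreciation as a double-entry record: no party-to-party transfer,
# but the entry still balances, pairing the cause (an expense) with
# the effect (a reduction in the truck's book value).
depreciation = {
    "transaction": "Monthly depreciation",
    "entries": [
        {"account": "Depreciation Expense",     "debit": 500, "credit": None},
        {"account": "Accumulated Depreciation", "debit": None, "credit": 500},
    ],
}

debits = sum(e["debit"] or 0 for e in depreciation["entries"])
credits = sum(e["credit"] or 0 for e in depreciation["entries"])
print(debits == credits)  # True
```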


But you don't need double entry to register increases or decreases in value; you could just as well use single-entry accounting and add or remove money from a single account, using transactions that are not transfers.

In the past this was problematic because you lost error-checking through redundancy, but nowadays you can trust a computer to do all the checking with database transactions (which perform more complex checks than double entry, though they don't need to be exposed in the business domain). Any tracking you'd want to do with double-entry accounting could be done by creating single-entry accounts with similar meanings and registering each transaction once, if you know you can trust the transaction to be recorded correctly.
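A minimal sketch of what that looks like in practice, using an in-memory SQLite database (schema and account names are made up for illustration): each event is a single row, and atomicity comes from the database transaction rather than from a matching second entry.

```python
import sqlite3

# Single-entry recording: one row per event, with the database's
# transaction machinery guaranteeing all-or-nothing writes.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE entries (account TEXT, kind TEXT, amount INTEGER)")

with con:  # commits on success, rolls back on any exception
    con.execute("INSERT INTO entries VALUES ('Cash', 'sale', 100)")

total = con.execute(
    "SELECT SUM(amount) FROM entries WHERE account = 'Cash'"
).fetchone()[0]
print(total)  # 100
```

Whether this is enough depends on whether all writers go through the same database; the reconciliation problems described upthread arise precisely when they don't.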


I think this depends on what you consider to be the fundamental trait of double-entry accounting: the error-checking or the explanatory power.

It is true that by enforcing that every value movement has both a source and a target, we make it possible to add a useful checksum to the system. But I believe this is a side benefit of the more fundamental trait of requiring all value flows to record both sides, in order to capture the cause of each effect.

I agree with your general perspective though: technology has afforded us new and different tools, and thus we should be open to new data models for accounting. I don't agree with other commenters that we should tread lightly in trying to decipher another field, nor do I agree with the view that the field of Accounting would have found a better way by now if there were one. Accountants are rarely, if ever, experts in CS or software engineering; likewise software developers rarely have depth in accounting.

Source: just my opinions. I've been running an accounting software startup for 5 years.


You should stop presuming that you know more about what’s needed than an entire field of business practice and study. There is actual theory behind accounting. Accountants understand that there can be different views on the same underlying data, and the system they continue to choose to use has a lot of benefits. You seem to be really stuck on the idea that the mental model accounting as a profession finds useful doesn’t seem useful to you. But it doesn’t need to make sense to you.

I'm not presuming I know anything about accounting. I know a great deal about data recording and management though, and my analysis is done from that perspective.

I explicitly recognized that the practice of accounting as a discipline keeps using traditional concepts that are culturally adequate for its purpose. What my posts are pointing out is that the original reason for double-entry records, which was having redundant data checks, is no longer a technical need in order to guarantee consistency because computers are already doing it automatically. From the pure data management perspective I'm analysing, that's undeniable.

The most obvious consequence of this analysis is that traditional bookkeeping is no longer the only viable way of tracking accountability; new tools open the possibility of exploring alternative methods.

Compare it to music notation: people keep proposing new ways to write music scores, some of them computer-assisted; and though none of them is going to replace the traditional way any time soon, there are places where alternative methods prove useful, such as guitar tablature or piano-roll sheets for digital samplers. The same could be true for accounting (and in fact some people in this thread have pointed to different ways to build accounting software, such as the Resources, Events, Agents model).

https://en.wikipedia.org/wiki/Resources,_Events,_Agents


> can't two errors cancel each other out and you still wind up at zero?

They can, but the probability of two opposite errors of exactly the same magnitude is much lower than that of any individual random error.

It's the same as with any other error-correction encoding. You don't have a guarantee that all errors will be caught by it, but most of them can be, so it's useful overall.

https://en.wikipedia.org/wiki/Error_correction_code
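A rough simulation of the point (error magnitudes and ranges are arbitrary): a trial balance catches any single nonzero error, but a pair of errors slips through only when they cancel exactly, which is rare.

```python
import random

# Count how often two independent random errors sum to exactly zero,
# i.e. how often a balance check would miss both of them.
random.seed(1)
trials = 100_000
undetected = 0
for _ in range(trials):
    e1 = random.randint(-100, 100)
    e2 = random.randint(-100, 100)
    if e1 != 0 and e1 + e2 == 0:
        undetected += 1  # equal and opposite: the totals still balance
print(undetected / trials)  # a small fraction of all error pairs
```

With these ranges the undetected fraction is well under one percent, while any single error is always caught.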

