I feel like one quickly hits a partial observability problem similar to the one with, e.g., light sensors. How often do you wave around, annoyed, because the light turned off?
To get _truly_ self-driving UIs you need to read your users' minds.
It's heavy-tailed distributions all the way down.
An interesting research problem in its own right.
We already have adaptive UIs (profiles in VS Code, anyone? Vim, Emacs?). They're mostly under-utilized because they take time to set up, and because most people aren't better at designing their own workflow than the sane default is.
> The average number of pathologies observed in this population is 2.7 (median 2, standard deviation 1.6)
The above seems to pose another question: is diabetes just a confounder for other underlying condition(s)?
Further, if most of the patients suffered from type 2 diabetes, it would likely correlate with older age, in which case higher fatality rates are to be expected.
I was unable to find this info publicly available, but diabetes being one of the "2+ underlying conditions" seems probable.
This doesn't seem like too controversial an opinion; haven't small modules already been at the core of the Unix philosophy? Once usage patterns emerge, these smaller libraries are composed into frameworks, which ultimately build end-user applications.
The entirety of engineering is premised upon leveraging powerful abstractions; cf. the reason we're not still coding via electric charge.
Edit: To expand on the analogy: "A two-person startup already uses twenty-eight other tools" [1].
> ... haven't small modules already been at the core of the Unix philosophy?
I assume you're talking about userland programs such as "ls", "cut", "grep", etc.? [1]
But the C language is also part of UNIX, and programmers typically link against a larger, more comprehensive "standard library" such as glibc:
libc.a, libc.so <-- includes printf
... instead of linking a hundred little object files specified explicitly like this:
printf.o
strcat.o
strncat.o
malloc.o
leftpad.o
... <hundred or thousand more .o files>
What happened with JavaScript is that the language didn't have a comprehensive "standard library" like C, C++, Java, C#, Python ("batteries included"), Go, etc. Therefore, NPM became an ad hoc "standard library" via bottom-up crowdsourcing. There are both positives and negatives to that approach. One negative is that leftpad() wasn't produced by a canonical entity like Netscape or Mozilla but by a random programmer. When he wanted to take his ball and go home, he broke everyone's build that depended on it. That type of event didn't happen with "printf()" in the C language world.
Even so, what's the worst-case scenario in biasing towards lower speed and
assisting your vision system with a knowledge base?
You perform the task of vision 100% of the time while driving. From a purely probabilistic standpoint, even assuming highly accurate ML, the relatively infrequently updated database still makes for a sensible prior.
This is somewhat adjacent to what Gary Marcus has been arguing, and I think it makes a lot of sense; there seems to be no compelling reason to rely exclusively on primitives (i.e. vision) when good priors are easily accessible.
I personally didn't encounter any problems whatsoever installing a clean i3, but for anyone looking for a batteries-included version of the window manager, I've heard a lot of good things about i3-Manjaro (cf. the 30-minute review on YouTube [0]).
The article seems to be a bit light on details for an "overview" of GNNs.
It's an area I've recently been researching and they do seem to be gaining
a significant amount of traction. If anyone is interested in additional reading
material, I can suggest the very recent GNNs: Models and Applications (slide deck available on the website) [0].
There is also a fairly comprehensive GitHub repo at [1], though I
personally haven't given it a detailed look yet.
Note-taking seems to have gained popularity on HackerNews over the past few weeks (or my attention has been biased towards these submissions, at least).
I've long been interested in the domain of "personal knowledge
engineering" and this clearly seems a common thread within our
community. As a brief overview of the "SOTA:"
* Emacs and Vim users skew towards Org-mode or Vim wiki.
* Roam Research is sort of a recent web-based alternative.
* There's a lot of competition in the domain of fully-fledged note taking apps.
Evernote has long been viewed as the king of note-taking but lost its edge over
time (bad editor, bugs, lack of attention to its users).
Memex seems to be gaining popularity even though their
software looks somewhat buggy at the moment.
* Otherwise, people naturally develop their own (similar) systems.
I myself had independently developed a custom Vim wiki before starting to research
this topic.
It basically consists of a few grepping/Vim aliases to search and create Markdown files in
a `~/.notes` folder, backed up to GitHub. `mod+-` is bound to an i3 scratchpad
where Vim is constantly open to `~/.notes/index.md`. This drastically reduces
the friction of making new notes.
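For anyone curious, here is a rough sketch of what the shell side of such a setup could look like; the alias/function names and exact flags are illustrative rather than a verbatim copy of my config (the `mod+-` scratchpad binding itself lives in the i3 config, so it isn't shown):

    # ~/.bashrc -- hypothetical helpers for a ~/.notes Markdown wiki
    alias ngrep='grep -rin --include="*.md"'     # search notes: ngrep "keyword" ~/.notes
    note() { vim ~/.notes/"${1:-index}".md; }    # open (or create) a note by name
    notesync() {                                 # push the whole folder to the GitHub backup
        ( cd ~/.notes && git add -A && git commit -m "notes backup $(date +%F)" && git push )
    }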
In any case, it seems interesting that a lot of personal note-taking systems
have independently adopted principles similar to what the Zettelkasten method proposes:
1. Heavy reliance on tagging
2. Some sort of deep linking
3. A preference for making small, independent "knowledge chunks"
There's also the category of "infinite outliner apps". The OG of these is Workflowy[1] which is a minimalist's take on the idea. Personally, I use Dynalist[2] which is Workflowy with more features.
My current problem with these tools is that I tend to treat them as mostly a "write-only" medium. i.e., I don't really refer back to them that often. What I'm really looking for is a tool that will let me _serendipitously_ encounter ideas from the past. For that, I think that Roam[3] with its Mediawiki style "backlinks" might be the next thing that I spend some serious time with.
Either that or a tool that somehow encourages me to refactor thoughts from the past into more (currently) useful content.
Interesting, I hadn't heard of (1) and (2), but as I've mentioned, Roam Research also seems like a good take on this idea. From my (recent) experience, it appears their app is not yet stable enough to use as a daily driver, though. Notion [0] seems like a more mature product built around similar principles.
As for idea rediscovery I'm personally happy with the setup I devised for myself
so I can only suggest trying to emulate something similar. If you're interested,
some guiding principles I follow:
1. My note-taking app (Vim, but could also be the browser or anything else) is constantly open in the
background, and I have a keyboard shortcut to show/hide.
2. I maintain an index document of sorts (with TODOs, recent thoughts, outward
links). This is frequently updated and I don't think too much about
categorization.
3. Either Markdown `#` headers or `*` are mapped to Vim folds,
which allows me to use `zm` and `zr` to quickly expand and collapse document
outlines.
4. Treat tags as a "brain dump": just quickly come up with some keywords before
starting to write the note. If you do that, I find even simple tools such as ctags
and grep help immensely with future rediscovery (see the sketch after this list).
5. Backups are on GitHub, so I get the double convenience of tracking document
history and being able to access all notes from mobile.
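To illustrate (4): assuming each note starts with a keyword line such as `tags: vim zettelkasten` (that exact convention is just an example, not something the tools require), rediscovery can be as simple as:

    # list every note whose tag line mentions "zettelkasten"
    grep -ril --include='*.md' '^tags:.*zettelkasten' ~/.notes

    # or index the folder for jumping around from within Vim
    # (assumes a ctags build that understands Markdown, e.g. universal-ctags)
    ctags -R ~/.notes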
It's served me well thus far; most commercial note-taking apps will handle (4)
and (5), while (1) is easy to resolve (e.g. AutoHotkey on Windows) so there's
probably no need to make drastic changes to your setup if you wish to try it out.
I use a small subset of Org Mode myself. But I think people focus too much on tools vs. method. Plain text plus ripgrep can work great. Luhmann used pen and paper, and he built a hugely successful Zettelkasten. Many users at Zettelkasten.de write super simple notes in Markdown.
Actually, I'd never move away from plain text or a plain-text format like Org, Markdown or similar, as it is really future-proof. Will Roam or Evernote be around and well maintained in 20 years? Probably not. I'd recommend using your favorite text editor plus Markdown. If your favorite text editor is Emacs, then consider Org.
I've been guilty of this too. A note is a way to get something out of your head. I always return to plain text files on my computer because that's the fastest, lowest-resistance option, and there just aren't any additional benefits for me in using other approaches. Linux command line tools are a simple yet effective way to work with text.
And he collected and refined his knowledge base for nearly four decades, because this was actually his job and he had nothing better at hand. Most people today use knowledge bases as just a tool on the side, and they have computers now. Naturally they strive for faster workflows and higher quality, something you can't get with pen & paper.
> Will Roam or Evernote be around and well maintained in 20 years?
Probably not, but maybe they will be. Either way, they are not good competition there. Evernote is mostly tooling and servers, with just a bit of interface and mostly just fancy rich text. The advantage over Markdown is quite thin these days. Roam might be a bit better, but I think it's just a fancy wiki with autolinking, so not really good either.
The actual competition is apps like Notion and Airtable and office suites: interfaces and tooling which go above and beyond simple text with fancy colors. Markdown is not there and probably never will be. Org-mode goes part of the way there, but has other problems.
> I'd recommend you to use your favorite text editor plus Markdown. If your favorite text editor is Emacs, then consider Org.
My favorite editor is Vim, but a big part of why I learned to use Emacs was to try org-mode. Now, org-mode is pretty much the only thing I use Emacs for (email too, but not as often).
As a long-time emacs org mode user, I would say that both vim/emacs users (and others) should check out asciidoc/asciidoctor. I have been doing more and more things in them and find myself really enjoying the simplicity (and shareability with those who don't use emacs/vim), mainly when targeting html or pdf exporting. Lately I find myself wishing there was an org mode to asciidoc exporter.
I've been obsessing about this for a long time and concluded, as you have, that AsciiDoc is the One True markup format for me.
I don't have anything to offer an Emacs user, but I have created my own Vim wiki plugin[1] which is heavily inspired by VimWiki and uses AsciiDoc as the native syntax. The plugin is pretty much brand spanking new, but I use it all day every day.
I've already converted some of my note extraction tools to pull from this wiki format and the dream is, ultimately, to have everything including my website pull from this one source.
Yeah, note taking posts have been ramping up in the last few days/weeks. I’ve noticed that as well. I’m enjoying it, to be honest. I’m always looking to make my process (which, currently, is rather hodgepodge) more streamlined.
I wish I had the link, but last week someone linked to something called `bash_log` which was someone’s bash function that created time stamped md files for simple journaling. Nothing groundbreaking, but for me it was a step up.
Currently, I'm using StandardNotes, the aforementioned bash utility with GitHub pushing, and (god help me) random Moleskines. I've also tried Dynalist, which was nice, but I didn't like that I couldn't lock it down and link it to GitHub.
If anyone has any suggestions in the context of what I’ve posted, I’m all ears. The article gives me some ideas, as does your response. I’d be interested in more concrete examples of what you’re doing if you wouldn’t mind.
I imagine it would integrate with bash_log quite well.
Back your data up first, and I'm guessing you could just `cd` into your notes folder and then use bash_log straight from there. You could also set LOG_DIRECTORY in your .bashrc to be wherever you mount Standard Notes.
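For example, something along these lines in `~/.bashrc` (the path is purely illustrative, and I'm only guessing that bash_log reads this variable):

    # point bash_log's output at wherever Standard Notes keeps its files
    export LOG_DIRECTORY="$HOME/standard-notes"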
My personal goal with this idea is to make everything quickly searchable, preferably by voice at some point in the future.
“How do I ... in Swift|Scala|Rust”
I built a small Swift Cookbook on my website a few years ago. I recently moved it to Github to update to Swift 5.x and to move towards my goal.
I find this to be the case with most pop-science books; the author's intention is to leave the reader with a feeling of having learned something rather than providing a full overview of the field, which may be too complex for the situation anyway. I often compare it to my response when non-CS people ask me "what I do."
Do you have specific examples of what the people you spoke to frequently criticize about Harari?
No specific examples I'm afraid (what they said put me off reading him entirely, so I don't have any personal examples either - too many other books I want to read!), although the two people who mentioned it to me recently were a neuroscientist and an economist with a keen interest in evolutionary biology.
I do wonder about the value of pop science. It feels like it's almost always oversimplified to the point of being misleading. I have a background in psychology / neuroscience and find it difficult to read any pop neuroscience without grinding my teeth.
I agree. Random data makes the tests less specific, so I'd wager the authors would probably also argue against it.
Assuming you trust your unit tests, you can claim a passing test suite means:
(1) given current understanding, the code is most likely correct and
(2) based on the same assumption, other developers agree that the code is most likely correct, for the current version of the program
I personally believe randomness has a place (fuzzing), but it should stay semantically distinct from unit testing for the above reasons.