You can activate the uv venv from anywhere just fine, just do source path_to_sandbox/.venv/bin/activate. It probably makes sense to define a shortcut for that, like activate sandbox. Your conda env is also linked to a directory, it's just a hidden one, and you can likewise create the uv one somewhere hidden. But I get it to some extent: conda has these large pre-filled envs with a lot of stuff in them that already works together. Still, if you then end up needing anything else, you wait ages for the install. I find conda so unbearable by now that I voluntarily switch every conda thing I have left over to uv the second I need to touch the conda env.
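
For example, something like this in your shell config (the path and alias name here are just placeholders):

    # one-off activation from anywhere
    source ~/sandbox/.venv/bin/activate   # adjust the path to wherever the env lives

    # or define a shortcut in ~/.bashrc / ~/.zshrc
    alias activate-sandbox='source ~/sandbox/.venv/bin/activate'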


Because the GP comment is not about GrapheneOS but about its creator, and could be taken as insulting (considering none of them is likely to actually be schizophrenic).


Oh ok I wasn't aware. I don't follow the Android scene and didn't know GrapheneOS was mostly a one-man show.


That might take a few days or weeks; it seems like they put some decent effort into it this time. From skimming the supplement I wouldn't be surprised if the speedup is only 100x though. That's still significant, but clearly less than they claim. For example, I am not entirely convinced that 20% FLOPS efficiency is really the upper limit, or that the slicing overhead of 5x is really needed here.


If you replace an n^2 algorithm with a log(n) lookup you get an exponential speedup. Although a hashmap lookup is usually O(1), which is even faster.


That is not true unless n^C / e^n = log(n) where C is some constant, which it is not. The difference between log(n) and some polynomial is logarithmic, not exponential.


But if you happen to have n=2^c, then an algorithm with logarithmic complexity only needs c time. That's why this is usually referred to as an exponential speedup in complexity theory, just like going from O(2^n) to O(n). More concretely, if the first algorithm needs 1024 seconds, the second one will need only 10 seconds in both cases, so I think it makes sense.
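
Spelled out (my own arithmetic sketch, with base-2 logs and constants ignored):

    n = 2^c \;\Rightarrow\; \log_2 n = c,
    \quad\text{i.e. } T_{\text{new}} = \log_2 T_{\text{old}}
    \;\Longleftrightarrow\; T_{\text{old}} = 2^{T_{\text{new}}}

    \text{e.g. } T_{\text{old}} = 1024 = 2^{10} \;\Rightarrow\; T_{\text{new}} = 10,
    \text{ just as in going from } O(2^n) \text{ to } O(n).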


N is a variable in what I posted, not a constant.


It depends on whether you consider "speedup" to mean dividing the runtime or applying a function to the runtime.

I.e. you are saying an f(n) speedup means T(n)/f(n), but others would say it means f(T(n)) or some variation of that.


The man, or LLM, used the mathematically imprecise definition of exponential in a sentence with big-O notation. I don't think he's going to be writing entire arguments formally.


They're still using the map in a loop, so it'd be O(n log n) for a tree-based map or O(n) for a hash map.
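
The shape being described is roughly this (a toy Python sketch, not the actual code under discussion):

    # n lookups inside a loop: with a hash map (Python dict) each lookup is
    # O(1) on average, so the whole loop is O(n); with a tree-based map each
    # lookup would be O(log n), making the loop O(n log n).
    pairs = [(i, i * i) for i in range(1_000)]    # toy data
    lookup = dict(pairs)                          # build the map once
    total = sum(lookup[k] for k in range(1_000))  # n constant-time lookups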


The knowledge is stored in the model. The one mentioned here is rather large: the full version needs over 700GB of disk space. Most people use compressed versions, but even those are often 10-30GB in size.


    ...the full version needs over 700GB of disk space.
THAT is rather shocking. Vastly smaller than I would expect.


I think you probably don’t want the application user to inherit from the superuser you may use for migrations since some migrations require a superuser.


The chat part (mostly) uses GPT-4; you can also see which model is called in the request logs. Here is the official announcement: https://github.blog/changelog/2023-11-30-github-copilot-nove...


Okay thanks for pointing that out.

I figure if they do this, they have to throttle or nerf it somehow, since it is cheaper than ChatGPT Plus, which also gives access to GPT-4.


It won't answer questions that are not somehow related to code or computing. I usually don't need anything else, so I haven't really tested the limits of that so far.


For me it really is extremely close to 100% for timers; I barely remember it being wrong, and I use it several times per day. Finding my phone via the HomePod also works pretty much every time, maybe 90% for me, but it doesn't recognize my wife, so for her it basically never works. The others I don't use enough. But timers and reminders work really well for me, and that's also what I need most from an assistant.


I've seen similar. They really don't have two-person households down pat: timers work great for me (as long as I never have to ask how much time is remaining; I'd die for a "count down from 30 seconds"), but for my wife, nothing.


There are tools for Postgres unit testing: https://wiki.postgresql.org/wiki/Test_Frameworks. Which is not to say there isn't room for improving them.


Write more Postgres functions to unit test Postgres functions. :))


Isn't that normally the premise of testing? Eg writing Python functions to test other Python functions?


Yes but Python is nice to write. Postgres functions are not. At least not to me.


Depending on what kind of deployment you have, you could use Tcl, Rust, or even Python if you can use untrusted extensions. (Not a comment on this particular testing framework, but on Postgres server-side programming more generally.)

But I hear you, PgSQL can be very annoying and unergonomic, and it's not a language most people you're hiring will know upfront. Pushing things onto the backend isn't unreasonable. When I write tests for PgSQL, I write them in Python and run them from the client side, not on the server.
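
As a rough illustration of that client-side approach (a minimal sketch; the connection string, the add_one() function, and the choice of pytest with psycopg are placeholders I'm assuming, not anything from this thread):

    # Test a Postgres function from the client side with pytest + psycopg (v3).
    import psycopg
    import pytest

    DSN = "postgresql://localhost/test_db"  # placeholder connection string

    @pytest.fixture
    def conn():
        with psycopg.connect(DSN) as c:
            yield c
            c.rollback()  # don't let a test leave state behind

    def test_add_one(conn):
        # add_one(int) is a hypothetical function defined in the database
        row = conn.execute("SELECT add_one(%s)", (41,)).fetchone()
        assert row[0] == 42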


pl/pgSQL is very good (and ergonomic) as a logical extension to SQL, i.e. set-theory programming with intermediate state. It isn't, and was never, targeted at general-purpose programming the way Python is.

That said, 100% agree that unit tests should live outside the DB. Querying for sets inside or outside makes no functional difference, and your DB doesn't need all that extra cruft.


It's extremely hard to root; I think that's why it got removed from the supported robot page. As for iOS, just saving the page to the Home Screen works fine, no need for a native app here imho.

