Hacker News | plasma's comments

GitLab's write-up mentions a dead man's switch where "The malware continuously monitors its access to GitHub (for exfiltration) and npm (for propagation). If an infected system loses access to both channels simultaneously, it triggers immediate data destruction on the compromised machine."

https://about.gitlab.com/blog/gitlab-discovers-widespread-np...
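
For illustration, a minimal Python sketch of that kind of connectivity-based dead man's switch, with the destructive payload replaced by a harmless placeholder (the hostnames, port and polling interval here are my assumptions, not details from the write-up):

    import socket
    import time

    def reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
        # True if a TCP connection to host:port succeeds within the timeout.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def dead_mans_switch(poll_seconds: float = 60.0) -> None:
        # Trips only when *both* channels are unreachable at the same time.
        while True:
            github_ok = reachable("github.com")        # exfiltration channel
            npm_ok = reachable("registry.npmjs.org")   # propagation channel
            if not github_ok and not npm_ok:
                print("both channels lost - destructive action would fire here")
                return
            time.sleep(poll_seconds)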


Neat! How do you handle state changes during tests? For example, in a todo app the agents are (likely) working on the same account in parallel, or on a subsequent run some test data has been left behind, or the data a test expects hasn't been set up.

I’m curious whether you’d also move into API testing using the same discovery/attempt approach.


This is one of our biggest challenges, you're spot on! What we're working on to tackle this includes a memory layer that agents have access to - so state changes become part of their knowledge and are accounted for while conducting a test.

They're also smart enough not to be frazzled by things having changed; they still have their objectives and will work to understand whether the functionality is there or not. Beauty of non-determinism!
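
Purely as a sketch of how such a memory layer might be shaped (every name below is hypothetical, not their actual API): agents record the state changes they cause and consult that record before deciding whether leftover data is expected residue or a genuine failure.

    from dataclasses import dataclass, field

    @dataclass
    class StateChange:
        entity: str   # e.g. "todo"
        action: str   # "created" / "updated" / "deleted"
        detail: str

    @dataclass
    class TestMemory:
        changes: list[StateChange] = field(default_factory=list)

        def record(self, entity: str, action: str, detail: str) -> None:
            self.changes.append(StateChange(entity, action, detail))

        def relevant_to(self, entity: str) -> list[StateChange]:
            # Consulted by an agent before asserting on data it didn't create itself.
            return [c for c in self.changes if c.entity == entity]

    memory = TestMemory()
    memory.record("todo", "created", "item left behind by a parallel run")
    print(memory.relevant_to("todo"))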


I gave the demo a try and was able to run a search that showed "51 results" - great start! A few things I noticed though:

On the Data tab it says "no schema defined yet."

The Schema tab doesn’t seem to have a way to create a schema.

Most of the other tabs (except for Sources) looked blank.

I did see the chat on the right and the "51 items" counter at the top, but I couldn’t find any obvious way to view the results in a grid or table.


Could you share the session URL via the feedback form, if you still have access to it?

That's really strange; it sounds like Webhound for some reason deleted the schema after extraction ended, so although your data should still be tied to the session, it just isn't being displayed. Definitely not the expected behavior.


Note you also need to raise the max upload configuration limit to 200 MB in settings after the plan change.


Yep, tried that (ridiculous that it doesn’t auto-update to this, though).


Some of the slowdown will come from not indexing the FK columns themselves, as they need to be searched during updates / deletes to check the constraints.
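
A small self-contained example of the point (SQLite via Python here just for illustration; the same applies to Postgres and friends, where FK columns aren't indexed automatically): without an index on the referencing column, every parent delete forces a scan of the child table to verify the constraint.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
    conn.execute(
        "CREATE TABLE orders ("
        " id INTEGER PRIMARY KEY,"
        " customer_id INTEGER REFERENCES customers(id))"
    )

    # Without this index, checking the constraint on a customer delete means
    # scanning all of orders; with it, the check is an index lookup.
    conn.execute("CREATE INDEX idx_orders_customer_id ON orders(customer_id)")

    conn.execute("INSERT INTO customers (id) VALUES (1)")
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")
    conn.execute("DELETE FROM orders WHERE customer_id = 1")  # uses the index
    conn.execute("DELETE FROM customers WHERE id = 1")        # FK check uses it too
    conn.commit()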


Project looks interesting; I'd welcome an API (or C# client) to be able to use it.


Unfortunately it seems the underlying search API is throwing '{ "message": "Not Ready or Lagging"}' for every search


Just woke up (in Madrid currently) and seeing all the errors. Working on getting the product back live.


We're back live now. Had to set up a new search server and am quickly importing 100k products at a time. The results should get better by the minute now (as the data set increases).


Neat. I noticed it gave me wrong data though: when I asked for the top 3 rows it provided the wrong values, due to not using UTF-8. Asking ChatGPT to use UTF-8 support fixed it; perhaps update its prompt.

Check out https://chat.openai.com/share/34091576-036a-4e82-b4e3-a8798d...
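
For what it's worth, the kind of fix that request amounts to is just being explicit about the encoding when the file is read, e.g. (assuming a CSV upload; the filename is made up):

    import csv

    # Reading with an explicit UTF-8 encoding ("utf-8-sig" also strips a BOM)
    # instead of the platform default avoids garbled non-ASCII values.
    with open("data.csv", encoding="utf-8-sig", newline="") as f:
        rows = list(csv.DictReader(f))

    print(rows[:3])  # the "top 3 rows", now decoded correctly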


You can download Google Maps areas for offline use while on WiFi and later access them without Internet (or to avoid roaming charges).


OsmAnd is much better for this. The maps are far more detailed, especially for hiking and cycling.


Worth noting the difference between an AWS Application Load Balancer (ALB), which is HTTP-request aware, and a Network Load Balancer (NLB), which is not, when load balancing HTTP traffic.

AWS ALB (and others, I'm sure) can balance by "Least outstanding requests" [1], which means the app server with the fewest in-flight HTTP requests (not keep-alive network connections!) is chosen.

If the balancer operates at the network level (e.g. NLB) and maintains keep-alive connections to servers, the balancing won't be as even from an HTTP-request perspective: a keep-alive connection may or may not be processing a request right now, so requests are routed based on the number of TCP connections to app servers, not their current HTTP request activity.

[1] https://docs.aws.amazon.com/elasticloadbalancing/latest/appl...
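
A toy sketch of the "least outstanding requests" idea (not AWS code; an in-memory counter stands in for whatever the ALB tracks internally): the target with the fewest in-flight HTTP requests wins, regardless of how many keep-alive TCP connections it holds.

    from dataclasses import dataclass

    @dataclass
    class Target:
        name: str
        in_flight: int = 0  # outstanding HTTP requests, not open TCP connections

    def pick_least_outstanding(targets: list[Target]) -> Target:
        # Route to the server currently doing the least HTTP work.
        return min(targets, key=lambda t: t.in_flight)

    targets = [Target("app-1", 3), Target("app-2", 1), Target("app-3", 5)]
    chosen = pick_least_outstanding(targets)
    chosen.in_flight += 1   # request dispatched
    print(chosen.name)      # -> app-2
    # ...and decrement in_flight again when the response completes.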


It's surprising that "least requests" is still so much less commonly used than random or round-robin. It's a simple metric, and yet it handles work-sharing remarkably well without any need for servers to communicate their load back to the LB. (As you correctly say, the LB needs to be aware of idle/keep-alive connections in order to balance properly.)

[Also, to be pedantic, it should really be called "fewest requests" - but I've never seen a product call it that!]

