GitLab's write-up mentions a dead man's switch where "The malware continuously monitors its access to GitHub (for exfiltration) and npm (for propagation). If an infected system loses access to both channels simultaneously, it triggers immediate data destruction on the compromised machine."
Neat! How do you handle state changes during tests? For example, in a todo app the agents are (likely) working on the same account in parallel, or on a subsequent run some test data has been left behind or the data a test expects hasn't been set up.
I’m curious whether you’d also move into API testing using the same discovery/attempt approach.
This is one of our biggest challenges, you're spot on! What we're working on to address this includes a memory layer that agents have access to - state changes become part of their knowledge and are accounted for while conducting a test.
They're also smart enough not to be frazzled by things having changed; they still have their objectives and will work to understand whether the functionality is there or not. The beauty of non-determinism!
Could you share the session url via the feedback form if you still have access to it?
That's really strange, it sounds like Webhound for some reason deleted the schema after extraction ended, so although your data should still be tied to the session it just isn't being displayed. Definitely not the expected behavior.
Some of the slowdown will come from not indexing the FK columns themselves, as they need to be searched during updates / deletes to check the constraints.
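To make that concrete, here's a minimal sketch (Python with the built-in sqlite3 module; the table and index names are made up for illustration) of adding the missing index on the FK column so constraint checks on the parent side don't have to scan the child table:

    import sqlite3

    # SQLite, like PostgreSQL, indexes the referenced (parent) key but does
    # not automatically index the referencing (child) FK column.
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")
    conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
    conn.execute(
        "CREATE TABLE child ("
        " id INTEGER PRIMARY KEY,"
        " parent_id INTEGER REFERENCES parent(id))"
    )

    # Without this index, every DELETE (or key UPDATE) on parent has to scan
    # child.parent_id to verify no rows still reference the old value.
    conn.execute("CREATE INDEX idx_child_parent_id ON child (parent_id)")

    # The constraint-style lookup now shows an index search rather than a
    # full table scan.
    plan = conn.execute(
        "EXPLAIN QUERY PLAN SELECT 1 FROM child WHERE parent_id = ?", (1,)
    ).fetchall()
    print(plan)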
We're back live now. We had to set up a new search server and are quickly importing 100k products at a time. The results should be getting better by the minute now (as the data set grows).
Neat, I noticed it gave me wrong data though: when I asked for the top 3 rows it provided the wrong value because it wasn't using UTF-8. Asking ChatGPT to use UTF-8 support fixed it - perhaps update its prompt.
Worth noting the difference between an AWS Application Load Balancer (ALB), which is HTTP-request aware, and a Network Load Balancer (NLB), which is not, when load balancing HTTP traffic.
AWS ALB (and others, I'm sure) can balance by "Least outstanding requests" [1], meaning the app server with the fewest in-flight HTTP requests (not keep-alive network connections!) is chosen.
If the balancer operates at the network level (e.g. NLB) and maintains keep-alive connections to servers, the balancing won't be as even from an HTTP-request perspective: a keep-alive connection may or may not be processing a request at any given moment, so requests are routed based on the number of TCP connections to each app server, not on its current HTTP request activity.
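For illustration, a toy sketch (hypothetical names, not ALB's actual implementation) of what "least outstanding requests" selection amounts to - the balancer tracks in-flight HTTP requests per backend, which is exactly the signal a pure TCP-level balancer doesn't have:

    import random

    class LeastOutstandingRequests:
        """Pick the backend with the fewest in-flight HTTP requests,
        breaking ties at random."""

        def __init__(self, backends):
            # Requests currently being processed, per backend. Note this is
            # requests, not open TCP connections: an idle keep-alive
            # connection contributes nothing here.
            self.in_flight = {b: 0 for b in backends}

        def acquire(self):
            fewest = min(self.in_flight.values())
            candidates = [b for b, n in self.in_flight.items() if n == fewest]
            choice = random.choice(candidates)
            self.in_flight[choice] += 1
            return choice

        def release(self, backend):
            # Called once the HTTP response finishes; the underlying
            # keep-alive connection may stay open.
            self.in_flight[backend] -= 1

    lb = LeastOutstandingRequests(["app-1", "app-2", "app-3"])
    target = lb.acquire()   # route the request to `target`
    lb.release(target)      # when the response completes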
It's surprising that "least requests" is still so much less commonly used than random or round-robin. It's a simple metric, and yet it handles work-sharing remarkably well without any need for servers to communicate their load back to the LB. (As you correctly say, the LB needs to be aware of idle/keep-alive connections in order to balance properly.)
[Also, to be pedantic, it should really be called "fewest requests" - but I've never seen a product call it that!]
https://about.gitlab.com/blog/gitlab-discovers-widespread-np...