Hacker News
Show HN: Superset – Terminal to run 10 parallel coding agents (superset.sh)
96 points by avipeltz 1 day ago | 90 comments
Hey HN, we’re Avi, Kiet, and Satya. We’re building Superset, an open-source terminal made for managing a bunch of coding agents (Claude Code, Codex, etc) in parallel.

- Superset makes it easy to spin up git worktrees and automatically set up your environment

- Agents and terminal tabs are isolated to worktrees, preventing conflicts

- Built-in hooks [0] to notify you when your coding agents are done or need attention

- A diff viewer to review the changes and make PRs quickly

We’re three engineers who’ve built and maintained large codebases, and kept wanting to work on as many features in parallel as possible. Git worktrees [1] have been a useful solution for this task, but they’re annoying to spin up and manage. We started Superset as a tool that uses the best practices we’ve discovered running parallel agents.
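For anyone who hasn't used worktrees: the manual workflow being automated here looks roughly like this (repo and branch names are illustrative):

```shell
# Demo in a throwaway repo: one worktree + branch per agent,
# each an isolated checkout of the same repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q main-repo
cd main-repo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# The repetitive part a tool can automate:
git worktree add -q -b agent-1 ../wt-agent-1   # agent 1's checkout
git worktree add -q -b agent-2 ../wt-agent-2   # agent 2's checkout

git worktree list    # three entries: main-repo + the two worktrees
```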

Here is a demo video:

https://www.youtube.com/watch?v=pHJhKFX2S-4

We all use Superset to build Superset, and it more than doubles our productivity (you’ll be able to tell from the autoupdates). We have many friends using it over their IDE of choice or replacing their terminals with Superset, and it seems to stick because they can keep using whatever CLI agent or tool they want while Superset just augments their existing set of tools.

Superset is written predominantly in TypeScript and based on Electron, xterm.js, and node-pty. We chose xterm+node-pty because it's a proven way to run real PTYs in a desktop app (used by VSCode and Hyper), and Electron lets us ship fast. Next, we’re exploring features like running worktrees in cloud VMs to offload local resources, context sharing between agents, and a top-level orchestration agent for managing many worktrees or projects at once.

We’ve learned a lot building this: making a good terminal is more complex than you’d think, and terminal and git defaults aren’t universal (svn vs git, weird shell setups, complex monorepos, etc.).

Building a product for yourself is way faster and quite fun. It's early days, but we’d love for you to try Superset across all your CLI tools and environments; we welcome your feedback! :)

[0] https://code.claude.com/docs/en/hooks

[1] https://git-scm.com/docs/git-worktree





The real bottleneck isn’t human review per se, it’s unstructured review. Parallel agents only make sense if each worktree has a tight contract: scoped task, invariant tests, and a diff small enough to audit quickly. Without that, you’re just converting “typing time” into “reading time,” which is usually worse. Tools like this shine when paired with discipline: one hypothesis per agent, automated checks gate merges, and humans arbitrate intent—not correctness.

Agreed. I generally see much better results for smaller, well-scoped tasks. Since there's very little friction to spinning up a worktree (~2s), I open one for any small tasks, something I couldn't do while working on a single branch.

I currently prefer Cursor to CC; does Superset play well with Cursor too? Is this a replacement for their worktree feature?

I haven’t set up worktrees yet, so if I have a quick task while working in main, I currently just spin up another agent in plan mode, and then execute them serially. In parallel would be really nice though. I often have 5-10 agents with completed plans, and I’m just slogging through executing them one at a time.


Running agents in parallel is easy. Making sense of what they did is the hard part. We ran into this while building GTWY when multiple agents were working on different steps of the same workflow. Without a shared execution view, productivity gains quickly turned into coordination overhead. Orchestration and visibility ended up mattering more than adding more agents.

There is something you are not explaining (at least I couldn't find it; sorry if I missed it): how do you manage app state? Basically, databases?

Most of these agent solutions focus on git branches and worktrees, but none of them mention databases. How do you handle them? For example, in my projects this means I would need ten different copies of my database. What about the other services that are used, like Redis, Celery, etc.? Are you duplicating (10-plicating) all of them?

If this works flawlessly it would be very powerful, but I think it still needs to solve more issues than just filesystem conflicts.


Great question! Currently Superset manages worktrees and runs setup/teardown scripts you define on project setup. Those scripts can install dependencies, transfer env variables, and spin up branching services.

For example:

- if you’re using Neon/Supabase, your setup script can create a DB branch per workspace

- if you’re using Docker, the script can launch isolated containers for Redis/Postgres/Celery/etc

Currently we only orchestrate when they run, and have the user define what they do for each project, because every stack is different. This is a point of friction we are also solving by adding some features to help users automatically generate setup/teardown scripts that work for their projects.
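A per-worktree setup script of the kind described here might look something like this sketch. Everything in it is illustrative (paths, commands, the Neon CLI call), and `run` just echoes instead of executing so the sketch is runnable without Docker or npm installed:

```shell
# Hypothetical setup script that worktree tooling would invoke after
# creating a worktree, with the worktree path as $1. "run" echoes the
# commands instead of executing them so this demo has no dependencies.
run() { echo "+ $*"; }

WORKTREE="${1:-/tmp/wt-agent-1}"
NAME=$(basename "$WORKTREE")

run cp .env "$WORKTREE/.env"                  # carry over untracked secrets
run npm install --prefix "$WORKTREE"          # per-worktree dependencies
run docker compose -p "$NAME" up -d           # isolated service stack
run neonctl branches create --name "$NAME"    # DB branch per workspace
```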

We are also building cloud workspaces that will hopefully solve this issue for you and not limit users by their local hardware.


I have my agent run all docker commands in the main worktree. Sometimes this is awkward but mostly docker stuff is slow changing. I never run the stuff I’m developing in docker, I always run on the host directly.

For my current project (Postgres proxy like PGBouncer) I had Claude write a benchmark system that’s worktree aware. I have flags like -a-worktree=… and -b-worktree=… so I can A/B benchmark between worktrees. Works great.


Awesome, that sounds really cool! Yeah, we have some friends who found a lot of success with just a custom CLI (something Avi did some tests with too); it's definitely a viable approach :)

Just docker compose and spin up 10 stacks? Shouldn't be too much for a modern laptop. But it would be great if a tool like this could manage the ports (allocate a unique set for each worktree and add those to .env).

For some cases test-containers [1] is an option as well. I’m using them for integration tests that need Postgres.

[1] https://testcontainers.com/
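The port-management idea a couple of comments up could be sketched like this: derive a stable port block from each worktree's name and write it into that worktree's .env. The range and variable names are arbitrary, not anything a real tool prescribes:

```shell
# Sketch: hash a worktree name into the 20000-29999 range so each
# worktree gets a deterministic, non-overlapping pair of ports.
port_base() {
  # cksum gives a deterministic 32-bit checksum of the name (POSIX)
  n=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  echo $((20000 + n % 10000))
}

for wt in wt-agent-1 wt-agent-2; do
  base=$(port_base "$wt")
  printf 'APP_PORT=%s\nREDIS_PORT=%s\n' "$base" $((base + 1)) > "/tmp/$wt.env"
  echo "$wt -> $(tr '\n' ' ' < "/tmp/$wt.env")"
done
```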


That’s what our setup/teardown scripts are for, but we plan on making their generation automatic.

Why aren’t you mocking your dependencies? I should be able to run a microservice without third-party services and have it still work. If it doesn’t, it’s a distributed monolith.

For databases, if you can’t see a connection string in env vars, use sqlite://:memory and make a test db like you do for unit testing.

For redis, provide a mock impl that gets/sets keys in a hash table or dictionary.

Stop bringing your whole house to the camp site.
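The Redis mock described above can be as small as a get/set pair over a hash table. A toy POSIX-shell version (files under a temp dir standing in for the dictionary; names invented):

```shell
# Toy in-process "redis" for tests: get/set keys backed by files in a
# temp dir, a stand-in for the hash-table mock described above.
RDIR=$(mktemp -d)
rset() { printf '%s' "$2" > "$RDIR/$1"; }
rget() { cat "$RDIR/$1" 2>/dev/null; }

rset user:1 avi
rget user:1    # prints: avi
```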


Because the real thing is higher fidelity, but it can be expensive to boot up many times.

Higher fidelity?

What does that mean in this context?

What higher fidelity do you get with a real postgres over a SQLite in memory or even pglite or whatever.

The point isn’t that you shouldn’t have a database; the point is: what are your concerns? For me and my teams, we care about our code, the performance of that code, and the correctness of that code, and we don’t test against a live database so that we understand the separation of concerns between our app and its storage. We expect a database to be there. We expect it to have such and such schema. We don’t expect it to live at a certain address or with a certain configuration, as that is the database's concern.

We tell our app at startup where that address is or we don’t. The app should only care whether we did or not, if not, it will need to make one to work.

This is the same logic with unit testing. If you’re unit testing against a real database, that isn’t unit testing, that’s an integration test.

If you do care about the speed of your database and how your app scales, you aren’t going to be doing that on your local machine.


There is your idealization, and there is reality. Mocks are to be avoided. I reserve them for external dependencies.

> What higher fidelity do you get with a real postgres over a SQLite in memory or even pglite or whatever

You want them to have the same syntax and features, to the extent that you use them, or you'll have one code path for testing and another for production. For example, sqlite does not support ARRAYs or UUIDs natively, so you'll have to write a separate implementation. This is a vector for bugs.


You're right that SQLite doesn't support arrays or UUIDs natively. SQLite was only a suggestion for how one might go about separating your database-engine concerns from your data-layer concerns.

If you fail to understand why this separation is important, you'll fail to see why you'd do it in the first place, and you'll keep building apps like it's 1999: tightly coupled, needing the whole stack just to run your thing. God forbid you expand beyond just one team.


pglite might be an option.

For Postgres you can use that near-instant database copy feature they have (template databases): make a copy per git worktree, work on it, then tear it down. With SQLite you'd obviously just copy the file.
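Assuming the Postgres trick described above is template databases, the per-worktree copies sketch out like this. The Postgres commands assume a running server, so only the SQLite-style file copy actually runs in this demo; all names are illustrative:

```shell
# Postgres: template copies are near-instant (needs a running server):
#   createdb -T appdb appdb_wt1    # copy for worktree 1
#   dropdb appdb_wt1               # tear it down afterwards
#
# SQLite: one database per worktree really is just a file copy.
printf 'seed' > /tmp/app.db
for wt in wt1 wt2; do
  cp /tmp/app.db "/tmp/app-$wt.db"
done
ls /tmp/app-wt*.db    # one copy per worktree
```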

Congrats on the launch!

Recently I gave Catnip a try and it works very smoothly. It works on web via GitHub workspaces and also has mobile app. https://github.com/wandb/catnip

How is this different?


Thanks, Catnip looks pretty cool! Honestly it's pretty similar; I think ours is a bit more lightweight (it seems they have remote sandboxes where they host your code, whereas we host your code locally using git worktrees).

The mobile app is a pretty cool feature though - will definitely take a peek at that soon.


Haven't tried it yet, but I just signed up, so I'll get back to you on that :)

After setting up Catnip it seems pretty sweet. The major difference is that they run as a cloud sandbox and we currently run as a local terminal. Say you don’t want to use any worktrees or do stuff in parallel: Superset still works as a classic terminal. Eventually we plan on adding cloud workspaces like Catnip, though.

How are people productive using 10 parallel agents? Doesn’t human review time become a bottleneck?

Even if you are not gaining a big productivity boost from parallel agents, the paradigm has these positive externalities:

  - I am no longer chained to my laptop. With the right setup, you can make real progress just using your phone as a thin client to your agents.
  - They can easily outrun me when it comes to raw typing speed. For certain tasks, this does make things faster since the review is easy but the editing is tedious. It also helps if you have RSI.
  - They are great for delegating small things that can potentially become distracting rabbit holes.

Yep, there's a set of tasks I never have to babysit anymore and it's very freeing. Our desktop builds are 15m sometimes, and instead of checking and waiting I have Claude watch the job, download the build from GitHub when it's done, and then open it in Finder so I get a little popup.

Gives me 15 more minutes to work on another task!


It’s not for building 10 complex or overlapping features at a time.

Parallel agents are useful for:

1. Offloading minor refactoring work

2. Offloading non-overlapping features

3. Offloading unit or integration test additions

4. Exploring multiple ways to build a feature with maybe different coding models

5. Seeing how Claude code thinks for extremely ambitious ideas and taking only the gist of it

Most of these tools don’t make merging or resolving conflicts back to main any simpler in their UX. Even in Cursor, it helps to be good at using git from the command line to use parallel agents effectively (diff, patch, cherry-pick from worktree to main, etc.)


Hey there, I'm another member of the superset team! I think it's definitely something you have to get used to, and it is somewhat task dependent.

For bug fixes and quick changes I can definitely get to 5-7 in parallel, but for real work I can only do 2-3 agents in parallel.

Human review still remains the /eventual/ bottleneck, but I find even when I'm in the "review phase" of a PR, I have enough downtime to get another agent the context it needs between agent turns.

We're looking into ways to reduce the amount of human interaction next. I think there are a lot of cool ideas in that space, and the goal is that over time tools improve to require less and less human intervention.


Use review bots (CodeRabbit, Sourcery and CodeScene together work for me). This is for my own projects outside of work, of course. I use Terragon for this. 10 parallel Rust builds would kill my computer. Got a Threadripper on its way, though, so Superset sounds like something I need to give a go.

Yeah we're looking into ways to give users access to these tools in Superset too!

And yeah the next frontier is definitely offloading to agents in sandboxes, Kiet has that as one of his top priorities.


Running into this issue with just 1 agent. I have plenty of tokens to spare. Just not enough time to iterate and bugfix.

I really think git worktrees are a bad approach. You're better off, in my view, with one shared state, dealing with conflicts live: divide tasks ahead of time using beads and let agents communicate with each other using Agent Mail and file reservations.

I’ve been able to productively run 12+ agents from CC, Codex, Gemini-cli at the same time this way and it works really well.


That's a pretty interesting approach, would love to see a demo of your setup :) my email is avi@superset.sh if you're down to chat!

I recorded this around a month ago, which is funny because it's already pretty obsolete since my tooling has advanced so much since then:

https://www.youtube.com/watch?v=68VVcqMEDrs

My full stack is detailed here on this site I made recently:

https://agent-flywheel.com/


Agent orchestration CLI tools are the new Javascript frameworks

LOL yeah I agree, we're definitely building in a crowded space. I am very hopeful for the amount of utility that'll be created in the agent orchestration space though! There's a lot that can be built if we successfully make developers 10x more productive.

The incentives here, though, are much stronger. If I make it just as easy for my customers to use 10X tokens as 1X tokens, the world is my oyster.

Nice work! You might also want to look at Vibe Kanban, which supports similar features across multiple projects and multiple coding CLIs - https://www.vibekanban.com/

Superset will be a good alternative for someone who is using only Claude Code or CLIs. But for someone using Cursor, how does this differ from Cursor’s Agents UI, which supports local background agents using Git worktrees?


YES their project is great, there's a lot in the planning space that would be extremely useful (grabbing your Circleback notes -> creating tickets, having playbooks like Devin in ticket form so you can choose what to build first, etc. etc.)

I'd need to do a refresher, but with Cursor agents you can choose any model while staying tied to their tooling, right? I've heard they're really solid; I just find people have their CLI preferences, and being terminal-first lets anyone bring their favorite agent along for the ride.


I can see lots of tools exploding around CC, but the majority still use Cursor, and Cursor supports multiple agents, branching, multiple models, etc.

It is really hard to justify tools like these, where you need CC + this tool + some other tools to be more productive, and you need to deal with billing, whereas Cursor gives you access to all possible models + BYOK.

Not trying to be negative ... but why hustle?


No, you're good, it's fair feedback! I think a fair description of where we're at is "if you use a CLI agent for 90+% of your work, this is a drop-in terminal replacement that'll make it easier for you to run them in parallel". It definitely depends on you preferring a CLI agent like Claude Code, but for us, since it's all we use, the worktree management was a missing component!

I come here for an Apache Superset demo and I get this?! (I'm mostly kidding but man, what a name collision)

IDK what everyone is doing anymore. Just why do you need 10 parallel agents doing things. How is this even a possible workflow for a person.

I am thinking the same. Is the bottleneck for many people just how many different tasks they can press through a certain window of time?

I feel that maybe a couple of things in parallel could be useful at certain times, but more often the need is not for "one more jira ticket in the pipeline" but rather things like meetings, discussing strategy, clarifying things so they can be built at all as opposed to actually having ten crystal clear tasks to unleash the bot army on.


LOL it definitely can get a little trippy but it's pretty doable! I can't get to 10 regularly but the space is moving in that direction (more agents in parallel hopefully equals more work done).

I liked this video a lot for a general idea of how it's possible. The main thing we need for 10 agents at once is less need for human intervention, and I think that'll happen sooner rather than later (it may even be possible now with the right tools).

https://youtu.be/o-pMCoVPN_k?si=cCBqufdg3nWcJDHD


Ask a Manager.

I am a manager. I am the head of engineering at my company. I still don't understand what is going on.

If you start leaning more on coding agents, you quickly realize there are a lot of 2–30 minute windows where you’re just waiting for an agent to implement something or finish a review. In those pockets I'm generally spinning up small tasks or running a few parallel experiments with different models or approaches. Once you’re juggling multiple threads, having isolated working environments becomes pretty essential. We're just trying to make the environment management and that whole workflow much less of a headache. But I don't think this is the best workflow for everyone; it's just what we've been seeing more people converge towards.

Funny ... I have a 50-line bash script that does this, but it also runs each agent in a sandbox so the agents can't write to disk outside their designated git worktree. I'm happy to skip the TS+NodeJS, but I will admit my version might not be as portable.

Yeah we have friends that have done the same! Definitely quick to build a custom CLI if you're willing to roll up your sleeves.

I do tend to like the niceties of our GUI tho, if you get a chance to compare your cli / our GUI would love to hear what you think!


Could you share this script as a gist?

https://gist.github.com


I’ve been a career programmer for almost two decades but have stopped for a while to parent my young kids. Is this what I’m coming back to? Because honestly I hate it.

Agree.

This kind of workflow feels a lot like "making the horse ten times faster", instead of using the power of AI to make developers stronger to build things that were previously too difficult or not worth the effort.

I guess I don't really see the intersection of "simple enough for parallel agents" vs "valuable enough to be worth the parallelization overhead".


I'm obviously biased but in my bubble this is where things are shifting. Similar to tabbing vs typing debate, just another way to move faster.

I’ve been following this space and a lot of good apps:

Conductor

Chorus

Vibetunnel

VibeKanban

Mux

Happy

AutoClaude

ClaudeSquad

All of these allow you to work on multiple terminals at once. Some support work trees and others don’t. Some work on your phone and others are desktop only.

Superset seems like a great addition!


My issue with most of them is xterm.js, which can't handle it when the terminals get too large. Even Conductor (great app, I love Conductor and the team behind it) had to drop their "big-terminal" mode. I'm hacking together a native solution for this, which I personally like, built on Ghostty + SwiftTerm.

Same my friend, I'm building a custom frontend for Vibetunnel that uses ghostty-web[0] instead of xterm!

0. https://github.com/coder/ghostty-web


ghostty-web looks interesting. full xterm compatibility as well. will give it a try!

Interesting, I'd love to hear more about this! Were you/they experiencing performance bottlenecks with xterm?


Thanks! We're totally open source too, so you can check us out on GitHub: https://github.com/superset-sh/superset

Do any of these work with hg?

Noticed this is built with electron (nice job with the project architecture btw, I appreciate the cleanness), any particular reason a Windows build isn't available yet?

We do plan to ship Windows (and Linux) builds; Electron makes that feasible. But for the first few releases we focused on macOS so we could keep the surface area small and make sure the core experience was solid, since none of us use Windows or Linux machines to properly test the app in those environments.

But it's on the roadmap, and glad to know there's interest there :)


Glad to help with the Windows build if you’re open to it!

Thanks for the offer! We'd be open to chatting, want to ping us on our discord?

In the past I've worked with devs who complain about the cost of context switching when they're asked to work on more than one thing in a sprint. I have no idea how they'd cope with a tool like this. They'd probably complain a lot and just not bother using it.

I think that paradigm is shifting a bit with agents, where there's more downtime waiting for things to run. It's definitely not for everyone, and the switching cost is real. We're trying to make that better with better UX, auto-summaries, etc.

Is there such a tool which is composable?

I have my own VMs with agents installed inside. Is there a tool which supports calling Codex/Claude in a particular directory through a particular SSH destination?

Basically BringYourOwnAgentAndSandbox support.

Or which supports plugins so I can give it a small script which hooks it up to my available agents.


Check out Catnip, fully based on GitHub workspaces and uses your own Claude subscription.

Since it’s open source and based on GitHub workspaces, it’s free and works very smoothly.


We have the BringYourOwnAgent part! For sandboxes, we may try to use just one provider if I had to guess, as I'm not sure what the effort would look like to support a bunch of them. Which provider do you use for your VMs?

I run my own VMs on my local machine. But if you just allow SSHing into an arbitrary host, that would work with both cloud and local VMs, right?

To be more clear, I'm talking about supporting attaching to and using existing VMs, not about your app creating/destroying the VMs.


True, yeah, that would be a good feature! We took a peek at it in the past and it's not too bad; no promises on when it'll ship though.

We are working on that, but it's not there just yet.

I have a question: how do you manage web servers running in parallel for 10 coding agents?

Thanks for the question. Most traditional web apps using frameworks like Next.js, Vite, etc. will automatically try the next port if theirs is in use (3000 -> 3001 -> 3002). We give a visualization of which ports are running from each worktree so you can see at a glance what's where.

For more complex setups, if your app has hardcoded ports or multiple services that need coordination, you can use setup/teardown scripts to manage this: either dynamically assigning ports or killing the previous server before starting a new one (you can also kill the previous server manually).

In practice most users aren't running all 10 agents' dev servers at once (yet); you're usually actively previewing 1-2 at a time while the others are working (writing code, running tests, reviewing, etc). But please give it a try and let me know if you encounter anything you want us to improve :)
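The "try the next port" behavior described above can be sketched in a few lines of shell (illustrative, not any framework's actual code; `nc -z` probes for an existing listener):

```shell
# Probe localhost starting at 3000 and advance until a port has no
# listener; dev servers do roughly this before binding.
port=3000
while nc -z 127.0.0.1 "$port" 2>/dev/null; do
  port=$((port + 1))
done
echo "free port: $port"
```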


Run them on different ports?

Yep, for now that's how we do it! We're looking into remote sandboxes and tunneling soon though :)

I’ve used superset at work this last week, and it’s great! Excited to see what’s next!

Thanks, glad to hear it! Let us know if you have any feedback :)

Thanks! love to hear it :)

What if my job uses hg and not git?

Hmm, probably out of scope unfortunately, as it's a pretty high maintenance burden to support (hg share is not 1:1 with git worktrees). It's possible our sandbox offering may work out of the box for hg, as we'll probably just clone your repo and an AI agent will take it from there! We'll have to see, but no promises.

But then you've got to merge and PR it. Just have the agents work in the same directory at the same time and have them commit only their changes.

I think you get dirty PRs like that which makes partial rollback more difficult. Isolating changes in separate PRs is much cleaner in my experience.

ah so now you can be a "10x" engineer -- 10x the cost with 0 to show. Where do I sign up?

I wonder what will be the next git feature we are going to (re)discover and build dozens of shiny glorified user interfaces on top.

I've played with git worktrees a few years back but until agents it was never that practical to have more than 2 worktrees at once. Now that it is practical, solving the poor usability makes sense for us.

Appreciate your input but I don't understand what you mean here.

You had use cases a few years back in which you wanted more than 2 worktrees at once and needed to juggle between them at the speed of "practical"?

What were you doing back then (a few years back and before agents as you implied) that required this and that your solution now solves?

What poor usability are you referring to here? Is the rate of utilizing git-worktrees a metric you are measuring? That does not make sense to me.

I also watched the demo video, and I don't understand the value here. Perhaps I am not your target demographic, but I am trying to understand who is. Is this for vibe-coding side projects only? Would be nice if you had a more practical/real stakes example. The demo video did not land for me.


Hmm, I guess two good questions to check whether this tool is useful for you: 1) do you use a CLI coding agent for the vast majority of your work, and 2) are you interested in using more than one at once? If both are true, I think our UI is a nice way to make the git worktree workflow (a very common path to running multiple agents) a bit easier to manage. It basically handles copying over environment variables for you, setting up containers, and a lot of the other small things you need to think about when using git worktrees.

Having a more practical video is a great call-out though, we should probably have a more deep dive video of an actual session!


Not to be confused with Apache Superset (data visualization solution)

https://superset.apache.org/


correct :)


