I am a Lead/Staff Backend Engineer with 15+ years of experience. I specialize in reducing cloud costs, scaling large databases, and debugging systems at the syscall level.
Highlights:
* Saved $500k/year at Automattic: reduced S3 storage costs by 75% for a 2 PB dataset.
* 26x Performance Increase: Architected a bespoke binary protocol for WordPress restores, eliminating the #1 product churn reason.
* Scale: Managed a 30 TB+ PostgreSQL cluster (Citus) and NLP pipelines for MIT/Harvard for 10 years.
* Recent Build: Solo-founded https://aero.zip - a privacy-first file transfer service with E2E encryption and OPAQUE authentication.
I debug GnuPG with strace, migrate petabytes without downtime, and replace brittle cron jobs with robust workflow systems. Looking for complex backend challenges.
I'm working on https://aero.zip, a file transfer service designed to handle massive workflows (like raw video or huge project directories) directly in the browser without the usual speed caps or browser crashes.
The goal was to build something as fast as a native app but with the convenience of a web link. Some of the technical bits:
* Instant Streaming: Recipients can start downloading a file the moment the first chunk leaves the sender's computer. No waiting for the full upload to finish.
* Bespoke Chunking Protocol: To handle "unlimited" file counts (tested into the millions), we group small files into larger chunks and split massive files into smaller pieces, so many small files and a few large files transfer at similar speeds.
* Auto-Resume: Both uploads and downloads automatically resume from the exact byte where they left off, even if you switch networks or close/reopen your laptop.
* E2EE via Web Crypto API: Everything is encrypted with AES-256-GCM. The secret key stays in the URL fragment (after the #), so it is never sent to our servers.
* Zero-Knowledge Auth: We use the OPAQUE protocol for logins, meaning we can authenticate users and store their encrypted Data Encryption Keys (DEKs) without ever seeing their password or having the ability to decrypt their files.
* Passkey + PRF: We support the WebAuthn PRF extension. If your passkey supports it (like iCloud Keychain or YubiKeys), you can decrypt your account's metadata without the password.
* On-the-fly Zipping: When a recipient selects multiple files, the browser decrypts and zips them locally in real-time as they stream from our server.
* Performance: Optimized to saturate gigabit-and-faster connections (up to 250 MB/s) by maintaining persistent streams and minimizing protocol overhead.
Everything is hosted in Germany (EU) for GDPR compliance. I'd love to hear any feedback on the streaming architecture or the OPAQUE implementation!
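The chunking idea from the list above can be sketched roughly like this. This is a simplified model, not aero.zip's actual protocol; the names (`packIntoChunks`, `Segment`) and the 8 MiB chunk size are my own illustrative assumptions:

```typescript
// Simplified model of the chunking scheme: small files are packed
// together into fixed-size chunks, and large files are split across
// several chunks, so the transfer unit is always roughly the same
// size. All names and sizes here are illustrative, not aero.zip's
// real protocol.

const CHUNK_SIZE = 8 * 1024 * 1024; // assumed 8 MiB chunk size

interface FileEntry {
  name: string;
  size: number;
}

interface Segment {
  name: string;   // which file this piece belongs to
  offset: number; // byte offset within that file
  length: number; // bytes of the file carried in this chunk
}

type Chunk = Segment[];

function packIntoChunks(files: FileEntry[]): Chunk[] {
  const chunks: Chunk[] = [];
  let current: Chunk = [];
  let used = 0;

  for (const file of files) {
    if (file.size === 0) {
      // Empty files still get an entry so they can be restored.
      current.push({ name: file.name, offset: 0, length: 0 });
      continue;
    }
    let offset = 0;
    while (offset < file.size) {
      const room = CHUNK_SIZE - used;
      const length = Math.min(room, file.size - offset);
      current.push({ name: file.name, offset, length });
      used += length;
      offset += length;
      if (used === CHUNK_SIZE) {
        // Chunk is full; seal it and start a new one.
        chunks.push(current);
        current = [];
        used = 0;
      }
    }
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```

Under this model, 10,000 tiny files coalesce into a handful of chunks while one huge file fans out into many, so both workloads move at a similar per-chunk rate.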
I'm building https://aero.zip, an E2E encrypted, resumable file transfer tool (think WeTransfer but encrypted and not P2P). I just posted it to Show HN:
* Streaming ZIP: To allow downloading multiple files as a single archive without buffering, I implemented a custom streaming ZIP64 archiver. A Service Worker intercepts the request, fetches encrypted chunks, decrypts them, and constructs the ZIP stream on the fly in the browser.
* OPAQUE auth: I used the OPAQUE protocol (via serenity-kit) for the password-authenticated key exchange. It ensures the server never learns the password and protects weak passwords against offline attacks if the DB leaks.
* Passkey PRF auth: If your passkey provider supports PRF (like iCloud Keychain or Windows Hello), the app derives the data encryption key directly from the passkey, allowing a login flow that doesn't require entering a master password.
From what I understand, croc is P2P, i.e. both computers have to be online at the same time for the transfer to happen (the "relay" they mention only helps negotiate the connection between the two peers). With aero.zip, you upload your files to a server, and the recipient can download them whenever: either in real time while you're still uploading (mimicking the P2P/croc model), or at a later date. This is a more universal approach IMHO.
Also, aero.zip is a web app, i.e. there's nothing to install, and you don't even need to sign up to send small files. Meanwhile, croc is a CLI utility, which mom-and-pop users will find hard to use.
> I uploaded a file and now I can't download it because the download endpoint is a 404.
Weird; looking at the logs, it appears that the service worker didn't manage to register in your browser. Are you using an aggressive ad blocker, by any chance?
I have to resort to registering a service worker and routing downloads through it to make decryption + download-as-ZIP work for very large streams. A page controlled by the registered SW is then embedded as an iframe, and that iframe triggers the download. In your case, it seems the SW never registered, so the embedded iframe led nowhere.
> Except there is, it's 2GB or 100GB, you said it yourself.
Fair point; my phrasing was poor there. I meant that the architecture has no technical limits (unlike browser-based encryption schemes that often exhaust RAM and crash the tab on large files), whereas the 2GB/100GB caps are just business quotas to keep the lights on.
The architectural difference is actually why I built this. Standard E2EE services often choke on thousands of small files (because they attempt to upload everything with individual HTTP PUTs to S3) or struggle with massive single files (due to memory limits). By streaming encrypted chunks via WebSockets, aero.zip's setup handles 10k 1KB files or one 10GB file with roughly the same performance.
Remote: Yes (12+ years remote experience)
Willing to relocate: No
Technologies: Python, TypeScript/JavaScript, AWS (S3/EC2 Cost Optimization), PostgreSQL (Citus/sharding), C/C++, Temporal.io, Rust.
Résumé/Web: https://valiukas.dev/resume-linas-valiukas.pdf
Email: linas@valiukas.dev