

Drop #289 (2023-07-05): 🚨 Watch-Out Wednesday 🚨
HODOR; Mind The [SSH] Gap; 2023 Top 25 CWE List
I took advantage of an alliterative tagline opportunity to talk about “security”, a term I am loath to use. I prefer the more verbose “safety and resilience”, and today's sections provide resources that discuss said topics in various modes.
HODOR
I'm not a fan of “information coincidences”, since they imply all sorts of unseen, higher-level machinations of the daily zeitgeist. But, when they happen, I will not shy away from them. Earlier this week I was having a short 🐘 convo with @coolbutuseless on system call safety in R, then came across the item in this section, which discusses the same thing, but in Node.js land.
Node.js — like virtually every other scripting language/system — provides a means for applications to interact with the underlying system via system calls. However, this super convenient feature comes with a potential risk: an arbitrary code execution vulnerability in JavaScript can be escalated into arbitrary system calls on the host. Current protection methods in JavaScript code, whether code debloating or restriction of read-write-execute permissions, fall short of protecting against breaches at the system call level. In response, the authors of a recent paper have developed HODOR, a lightweight system focused on providing runtime protection through precise system call limitations for Node.js applications.
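To make that risk concrete, here is a tiny, hypothetical sketch (mine, not the paper's) of how a JavaScript-level injection becomes system-call-level activity; the route, the parameter name, and the ping command are all made up for illustration:

```ts
// Illustrative only: a deliberately vulnerable handler, not code from HODOR.
import { exec } from "node:child_process";
import http from "node:http";

http.createServer((req, res) => {
  const host =
    new URL(req.url ?? "/", "http://localhost").searchParams.get("host") ?? "";

  // BAD: user input is interpolated straight into a shell command.
  // A request like ?host=example.com;id injects a second command that rides
  // the very same execve()/clone() system calls as the intended ping.
  exec(`ping -c 1 ${host}`, (_err, stdout) => {
    res.end(stdout);
  });
}).listen(8080);
```

HODOR's pitch is that even when code like this slips through, a kernel-enforced system call allowlist caps what the injected command can actually do.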
Accomplishing this task presented HODOR with several intricate technical hurdles. Initially, HODOR needed to build accurate call graphs for both the Node.js application and the underlying Node.js framework. The authors weren't satisfied with existing call graph builders, so they improved upon them by introducing key optimizations in both the high-level JavaScript and low-level C/C++ layers. Subsequently, using the mappings from these call graphs, HODOR assembled two whitelists: one for the main thread and another for the thread pool, each listing the recognized necessary system calls.
Ultimately, using these whitelists, HODOR implemented a lightweight system call limitation via the Secure Computing Mode (seccomp) feature in the Linux kernel, which significantly reduced the attack surface. The authors tested HODOR against over 160 real-world Node.js applications susceptible to arbitrary code/command execution attacks. The results showed the potential attack surface reduced to an average of nineteen percent, while incurring negligible runtime overhead of less than three percent.
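I haven't poked at HODOR's code myself, so what follows is purely a conceptual sketch of the allowlist idea, not the tool's actual interface. The syscall names and the applySeccompAllowlist() helper are hypothetical stand-ins for a native binding that would install a seccomp-bpf filter before the app starts handling input:

```ts
// Conceptual sketch only; syscall names are examples, not HODOR's derived lists.
const mainThreadAllowlist = ["read", "write", "futex", "epoll_wait", "mmap"];
const threadPoolAllowlist = ["openat", "read", "write", "fstat", "close"];
// (the thread-pool list would be installed on the libuv worker threads, not here)

// Hypothetical stand-in: a real implementation would be a C/C++ addon using libseccomp.
function applySeccompAllowlist(syscalls: string[]): void {
  console.log(`(stub) kernel would now deny all syscalls except: ${syscalls.join(", ")}`);
}

applySeccompAllowlist(mainThreadAllowlist);
// From here on, any syscall outside the list is denied by the kernel
// (e.g., SECCOMP_RET_KILL_PROCESS), so an injected execve("/bin/sh", ...)
// dies even if the JavaScript-level exploit succeeds.
```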
A version of Node.js for embedded systems — mininode — has some of this baked in, and Deno ships with a very nice sandboxing feature. I hope future iterations of existing scripting languages (cough R cough) and every new scripting language bake some of these safety features in.
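For a taste of what that looks like in practice, here is a tiny Deno example (written from memory, so double-check the flags against the Deno docs): subprocess access is denied unless you opt in, and the opt-in can be scoped to a single binary.

```ts
// whoami.ts: subprocesses are off-limits in Deno unless explicitly allowed.
const { stdout } = await new Deno.Command("whoami").output();
console.log(new TextDecoder().decode(stdout));

// deno run whoami.ts                     -> PermissionDenied (or a prompt)
// deno run --allow-run=whoami whoami.ts  -> works, and only for `whoami`
```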
Mind The [SSH] Gap
I've been doing the “cybersecurity thing” for a few decades now. If there is one thing I've learned, it's that most humans — whether in meatspace or cyberspace — assume some level of safety has been baked into the processes/systems they use and depend on. Attackers — whether the IRL snake oil salesfolk and tumbler-twisters of old, or modern-day digital scammers and hackers — use this “safety default” to their advantage. With this in mind, a recent post from Akamai's Allen West felt important enough to share with y'all, since many of you do operate SSH servers on the scary internets (even if you don't know you do).
You are not the only entity that wants to log in to your system(s). When I talk to IRL humans about this, I usually get some sort of “why would attackers want access to my [RaspberryPi | janky old linux box | thermostat | router]?”. Allen's article (and this section) should help answer that, at least in part.
The section header is a graph of the daily number of IP addresses dedicated to launching SSH bruteforce attacks in my massive internet sensor network at work. It shows two “bumps” at the tail end of the chart, and overall elevated attack infrastructure dedication. These correspond well with Akamai's findings (at least I can explain this coincidence).
Proxyjacking — in the form Akamai notes — is a relatively new phenomenon that has emerged in the last couple of years due to the growth and use of proxyware services. These services — which I will not link to, and you should not use — such as IPRoyal, Honeygain, and Peer2Profit, are “legitimate” applications that allow users to share their internet bandwidth with others who pay to use their IP address. However, cybercriminals have found a way to exploit this system for their own gain.
In a recent campaign discovered by Akamai, attackers use a compromised web server to distribute core dependencies, actively search for and remove competing mal-instances, and monetize the victim's extra bandwidth. This method requires fewer resources than, say, cryptocurrency mining and has a lower chance of discovery.
The overall concept of proxyjacking can be traced back to the early days of crimeware, where attackers added hacked servers to commercial proxy networks for profit. However, the recent rise in proxyware services has made this attack vector more lucrative and accessible for cybercriminals.
One example of a proxyjacking attack is the exploitation of the Log4j vulnerability. In this case, attackers installed an agent that turned the compromised account into a proxy server, allowing them to sell the IP to a proxyware service and collect the profit. This type of attack may not directly result in data destruction or intellectual property theft, but it could have indirect consequences, such as negatively impacting an organization's reputation or resources.
While the list of proxyware services used for proxyjacking is currently small, researchers believe that this attack vector will continue to grow as it offers a low-effort, high-reward opportunity for threat actors. To mitigate the risk of proxyjacking attacks, organizations should stay vigilant, keep their systems updated, and monitor their network traffic for any suspicious activity.
Keep all your internet-facing services patched, and consider relying on multi-factor authentication or certificate-based access where possible. Limiting access to only trusted IP addresses can also reduce exposure, though such allowlists are generally more of a pain to keep accurate.
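As a starting point, a hardened sshd_config can look something like the sketch below. Treat it as a checklist to adapt rather than a drop-in config (the usernames and the 203.0.113.0/24 range are placeholders), and test from a second session before you lock yourself out:

```
# /etc/ssh/sshd_config : illustrative hardening sketch; adapt before use
PermitRootLogin no
PasswordAuthentication no            # key/certificate auth only
PubkeyAuthentication yes
KbdInteractiveAuthentication no      # ChallengeResponseAuthentication on older OpenSSH
MaxAuthTries 3
# Only these (hypothetical) accounts, and only from a trusted range
AllowUsers alice@203.0.113.* deploy@203.0.113.*
```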
2023 Top 25 CWE List
MITRE released its annual Common Weakness Enumeration (CWE) Top 25 Most Dangerous Software Weaknesses, which highlights the most common and impactful software weaknesses that can lead to serious vulnerabilities.
The list is calculated by analyzing public vulnerability data in the National Vulnerability Database (NVD) and mapping each vulnerability's root cause to a CWE. There are, unfortunately, many familiar friends in the 2023 version.
Any of us can introduce the weaknesses below into the things we build. This can take many forms: failing to sanitize input in a Shiny or Flask app; leaving important resources hanging on the internet without proper authentication; using old/unpatched equipment or software; or just defaulting to that assumption of safety noted in the first section.
You can hit up the MITRE link for more info, but here are this year’s biggest baddies:
CWE-787: Out-of-bounds Write
CWE-79: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')
CWE-89: Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')
CWE-416: Use After Free
CWE-78: Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')
CWE-20: Improper Input Validation
CWE-125: Out-of-bounds Read
CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
CWE-352: Cross-Site Request Forgery (CSRF)
CWE-434: Unrestricted Upload of File with Dangerous Type
CWE-862: Missing Authorization
CWE-476: NULL Pointer Dereference
CWE-287: Improper Authentication
CWE-190: Integer Overflow or Wraparound
CWE-502: Deserialization of Untrusted Data
CWE-77: Improper Neutralization of Special Elements used in a Command ('Command Injection')
CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer
CWE-798: Use of Hard-coded Credentials
CWE-918: Server-Side Request Forgery (SSRF)
CWE-306: Missing Authentication for Critical Function
CWE-362: Concurrent Execution using Shared Resource with Improper Synchronization ('Race Condition')
CWE-269: Improper Privilege Management
CWE-94: Improper Control of Generation of Code ('Code Injection')
CWE-863: Incorrect Authorization
CWE-276: Incorrect Default Permissions
SQL Injection is STILL ON THE LIST. And, to add insult to injury, IT’S IN THE TOP THREE. Which means my profession has truly failed you all.
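If you take exactly one thing away from the list, make it this: stop assembling queries with string concatenation. Here is a minimal before/after sketch using node-postgres; the table, column, and input source are made up for illustration:

```ts
import { Client } from "pg";

const client = new Client(); // connection details come from PG* environment variables
await client.connect();

const name = process.argv[2] ?? ""; // stand-in for user-supplied input

// BAD (CWE-89): `name` is spliced into the SQL text, so `' OR '1'='1` rewrites the query
await client.query(`SELECT * FROM users WHERE name = '${name}'`);

// BETTER: parameterized query; the driver sends `name` as data, never as SQL
await client.query("SELECT * FROM users WHERE name = $1", [name]);

await client.end();
```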
I'll be formally blogging about these on the work blog at the tail end of July. When that happens, I’ll drop a note here (we're on a mandatory paid shutdown this week and next week, so I am doing my best to “not work” even though I may have written said work post already).
It may take some time for the ponderous R wheel to turn, but in the meantime, is it too much to ask that the more nimble organizations making a living by providing R front ends, such as IDEs, do something like maybe blocking system calls? VSCode, RStudio, anyone?