Wednesday, October 25, 2017

0patching the Office DDE / DDEAUTO Vulnerability... ehm... Feature

When "Dynamic Data Exchange" Becomes "Dynamic Data Execution"


by Mitja Kolsek, the 0patch Team

Introduction

Two weeks ago SensePost's Etienne Stalmans and Saif-Allah El-Sherei published an interesting analysis of a Microsoft Office feature that can be easily exploited for running arbitrary code on the user's computer. Within days, Cisco Talos reported detecting in-the-wild attacks exploiting this very issue, while SANS reported it being exploited by the Necurs and Hancitor malware campaigns. Endgame's Bill Finlayson and Jared Day subsequently wrote an excellent root-cause analysis of this issue.

The feature in question is called Dynamic Data Exchange, introduced in early Windows systems but still used in many places today. For instance, if you want the revenue value from a specific cell in your financial statement Excel worksheet mirrored in a Word document - so that updating the worksheet is reflected in Word - you can use DDE for that. The field that does that would look like this:

{ DDEAUTO Excel "statement.xlsx" revenue }

The "AUTO" in DDEAUTO instructs Word to automatically use DDE for updating the external value when the document is opened. Instead of DDEAUTO, one can use DDE, but it will require the user to manually trigger updating of the value.

The documentation on DDEAUTO and DDE fields describes their behavior, including that "the application name shall be specified in field-argument-1; this application must be running." But what happens when the application (e.g., Excel from the above example) is not running? This happens:



Word helpfully offers to launch the application for you, which is undoubtedly user-friendly. It is also where a feature turns into a vulnerability: the application name can be a full path to a malicious executable (on a USB drive or a network share), or to a benign executable that will do malicious things based on the arguments it is provided. So when the user agrees to Word launching the DDE application, the attacker's code gets executed on his computer.

According to SensePost, "Microsoft responded that as suggested it is a feature and no further action will be taken, and will be considered for a next-version candidate bug." Microsoft is full of smart people who try very hard to keep their users secure (we personally know many of these people) and we believe they have good reasons for such a decision - or for subsequently reverting it, should that happen. In the case of DDEAUTO (which is the better attack vector), the user has to provide two non-default answers in popup dialogs, so even if these dialogs are not security warnings, some amount of social engineering would certainly be required in an attack. While we're seeing reports of this feature being used by attackers, we currently don't have any data on how successful they are.

Regardless of Microsoft's position on this, both the defense and offense sides sprang into action. The former started looking for mitigations and creating signatures and detection rules for blocking attacks, while the latter kept coming up with new attack vectors (Outlook email, Calendar invites) and ideas on how to bypass detection. A fun game to play, no doubt, but we already know that offense will always be winning. Putting guards around the hole can only ever slow the attackers down, as they will always find ways to bypass them. The only definitive way to prevent the hole from being exploited is to close it. In code. And that's what we do.


Analysis

The entire problem seems to stem from Word deviating from the documentation and helpfully attempting to launch the application named in a DDE or DDEAUTO field. This is implemented by calling CreateProcess with the provided application name, which covers two cases:

  1. If the application name is a full path to an executable, such as "C:\Windows\System32\cmd.exe", that executable is launched with the arguments provided in the DDE/DDEAUTO field;
  2. If the application name is not a full path, such as "Excel", CreateProcess starts looking for it in the system search path, starting with the current working directory, followed by three system folders, and ending with all locations specified in the PATH environment variable (see the sketch below).
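
To make the second case concrete, here is a minimal sketch (ours, not Word's actual code; the command line is hypothetical) of launching a bare application name via CreateProcess:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;

    // No full path given: CreateProcess resolves the bare name by searching
    // the standard locations - including the current working directory - and
    // finally the directories listed in PATH.
    char cmdline[] = "excel.exe statement.xlsx";

    if (CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        printf("launched, PID %lu\n", pi.dwProcessId);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    } else {
        printf("CreateProcess failed: %lu\n", GetLastError());
    }
    return 0;
}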

This has two security-related implications. One can obviously just specify a full-path malicious executable, even from a network path, and have it executed on the user's computer. (Note that a network path can point to the Internet beyond the user's firewall, and Windows, with the WebClient service running by default, will download the executable via HTTP - it will just take longer.)

Less obviously, there is binary planting potential here: Word sets its current working directory to the user's Documents folder upon launching (and let's assume the attacker can't plant his malicious executable there), which is good. However, the user can inadvertently change the current working directory by opening a file via the File Open dialog, which by default changes it to the opened file's location. So an attacker could trick the user into opening a Word document from his network share or USB key, which would set Word's current working directory to an attacker-controlled location. Subsequently, a DDEAUTO field could launch a malicious executable without providing a full path to it.

We described this second attack vector to explain why simply blocking DDE/DDEAUTO fields that use a full path would not be enough to fully neutralize exploitation. In fact, this shows that even a benign case of {DDEAUTO Excel "statement.xlsx" revenue} in a trusted (even signed!) document could be used to launch a malicious Excel.EXE from the attacker's location.

DDE-related application launching in Word is apparently pretty dangerous, and we were tempted to put some security around it. But the more we thought about what to do, the more it became clear that it's impossible to distinguish between a malicious and benign use of DDE/DDEAUTO. We could disallow any forward- and back-slashes in the application name, but as shown above, even single-word application names can be misused. We could also change the current working directory to a safe location before CreateProcess gets called (and we actually already had a patch candidate doing that) but what is a universally safe location on millions of differently-configured computers worldwide?
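
For illustration, here is a sketch of that discarded patch candidate's idea (ours, not the actual candidate, which worked inside wwlib.dll; the "safe" directory is just an example):

#include <windows.h>

// Sketch only: pin the current working directory to a known location before
// the DDE launch, so a bare application name cannot be resolved from an
// attacker-controlled folder. "C:\Windows\System32" is merely illustrative -
// the problem is that no single directory is universally safe.
static BOOL launch_dde_application(char *cmdline)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;

    SetCurrentDirectoryA("C:\\Windows\\System32");

    if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return FALSE;
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return TRUE;
}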

Then we decided to do some functional testing, and discovered that we were unable to get Word (either 2010 or 2013) to properly launch the requested application in any meaningful way. For instance, the following did launch Excel (binary planting concerns aside):

{ DDEAUTO Excel " " }


But that's clearly useless, as no file and range are specified. However, the following, while useful, did not launch Excel. Process Monitor showed a single attempt to launch excel.exe from the user's Documents folder (probably because it was the current working directory):

{ DDEAUTO Excel "statement.xlsx" revenue }

In contrast, the above worked great for getting the revenue value from statement.xlsx when the latter was already opened in Excel on the same computer. And that was it. We decided to simply amputate the app-launching capability for the purpose of DDE.


Patching

We located the code block with the CreateProcess call in wwlib.dll, removed the call and simulated a failed CreateProcess call by putting 0 in eax. After creating a patch like this, it turned out that Word still launched the specified application. What was happening? It turned out there is a fallback mechanism in place that tries to launch the app again from mso.dll if CreateProcess in wwlib.dll fails. Word apparently really tries to help the user launch the app.

So we also removed the fallback code. The image below shows the relevant code graph, including the CreateProcess and Fallback blocks, which we effectively amputated.








Our first micropatch was for 64-bit Word 2010 with the following source code:


; Patch for WWLIB.DLL_14.0.7189.5001_64bit.dll
MODULE_PATH "C:\Program Files\Microsoft Office\Office14\wwlib.dll"
PATCH_ID 294
PATCH_FORMAT_VER 2
VULN_ID 3081
PLATFORM win64

patchlet_start
 PATCHLET_ID 1
 PATCHLET_TYPE 2

 PATCHLET_OFFSET 0x92458c ; at the beginning of the CreateProcess block
 JUMPOVERBYTES 68 ; jump over everything including CreateProcess call

 code_start

   mov eax, 0 ; we say that CreateProcess failed to prevent original code
              ; from trying to close the handle of the non-existent process

 code_end
patchlet_end

patchlet_start
 PATCHLET_ID 2
 PATCHLET_TYPE 2

 PATCHLET_OFFSET 0x92463f ; at the beginning of the fallback block
 JUMPOVERBYTES 40 ; remove the entire block
 N_ORIGINALBYTES 5

 code_start

   nop ; no added code here, we just need to remove the original code

 code_end
patchlet_end



After applying this micropatch, Word no longer attempts to launch the application specified in DDE and DDEAUTO fields. The dialogs remain the same, and Word still asks if you want to launch the application - but even if you agree, the application doesn't get launched and Word proceeds as if the attempt to launch it had failed (notifying you that it cannot obtain DDE data).


Demonstration





Conclusion

We created micropatches for Office 2007, 2010, 2013, 2016 and 365 (32-bit and 64-bit builds). To have the micropatches applied, all you need to do is download, install and register our free 0patch Agent. (Glitch warning: due to being sandboxed in Word 2016 and 365, 0patch Console either shows an empty line for Word in the Applications list with only the on/off button, or doesn't show Word at all. While said button works as it should, it looks confusing. We're working on fixing this by the end of our beta.)


Note that we only make micropatches for fully updated systems, so make sure to have your Office updated (latest service pack plus all subsequent updates from Microsoft) if you want to use them. We'll be tweeting out notifications about additional micropatches so follow us on Twitter if you're interested.

Our micropatch can co-exist with existing mitigations for this same issue, so you can still disable automatic updating of DDE fields (either manually, or using Will Dormann's collection of Registry settings for various Office versions) and use malware detection/protection mechanisms - although the latter will add no value once the hole has been closed.

If active exploitation of this feature continues to spread, Microsoft will likely respond with an update or mitigations. When they do, make sure to apply them - as the original vendor they know their code best, and they are also aware of many use cases none of us could think of.

Should you experience any problems with our micropatches, please let us know. It's unlikely that our injected code would be flawed (being a single CPU instruction), but you may have a DDE use case that doesn't agree with our solution. Send us an email to support@0patch.com if you do. One great thing about micropatches is that in addition to getting applied in-memory while applications are running, they can also get revoked in-memory. So we can revoke a micropatch and issue a better version without your users even knowing that anything happened. Zero user disturbance.

Finally, let's quickly rehash the benefits of micropatching:

  1. A micropatch, just like the original vendor's update, actually closes the hole instead of putting guards (signature-based detection or prevention) on the attacker's paths towards the hole.
  2. In contrast to typical vendor updates, which replace a huge chunk of a product, a micropatch brings a minuscule change to the code on the computer. This means the minimal possible risk of error, as well as the ability for everyone to actually review the new code (anyone familiar with assembly language can quickly understand the source code above).
  3. A micropatch gets instantly applied in memory, even while the vulnerable application is running, so the user doesn't have to restart the application or even reboot the computer.
  4. If something is wrong with a micropatch (while the risk is minimal, it's still there), it can be just as instantly removed from running applications, and replaced with a corrected one. Again, users don't even notice anything. (You want them to focus on their work, not on security updates, don't you?)
  5. With the low risk of error and the ability to quickly apply and un-apply micropatches, you can afford to simplify your updating process. Except in the most critical of cases, you don't need to test a micropatch for weeks or months before applying it, as the cost of un-applying is approximately zero if you can do that from a central location (we're currently working on that, by the way). In comparison, what was your cost of un-applying a "fat update" from thousands of computers the last time something went wrong with an update? 
We'll probably always have "fat updates", as there will always be a need for significant functional changes and new features. But fat updates are really not fit for patching vulnerabilities - these need rapid deployment, not weeks or months of testing in user environments that keep the window wide open for attackers even though official patches are available. Micropatching is the only approach we're aware of that can change that. And that's why we're doing it.

Keep your feedback coming! Thank you!

[Update 3/11/2017: Windows 10 introduced a new mitigation called Attack Surface Reduction with the Fall Creators Update. We took a look at how it helps block DDE-borne attacks and how it compares to our micropatches.]

Wednesday, October 4, 2017

Micropatching a Hypervisor With Running Virtual Machines (CVE-2017-4924)

The Now and the Future of Hypervisor Patching

by Luka Treiber and Mitja Kolsek of 0patch Team


Introduction

Those of you following our micropatching initiative already know that micropatching makes it possible to fix vulnerabilities without restarting the computer or even relaunching the patched application that a user might currently be using. In other words, no disruption for users and servers.

Now let's take it a step further. Do you know which component of an IT environment one least wants to restart? You guessed it: a hypervisor. Especially if dozens or hundreds of critical virtual machines are running, which all need to be stopped or suspended, and can't get back online until the hypervisor is patched. And then if the patch turns out to be broken... you get the picture - and it's not a pretty one.

When the next Heartbleed, Shellshock, or a "guest-to-host escape" vulnerability comes out, you can be pretty sure that hypervisors all around the World will get massively patched - and restarted. And lots of people, from hypervisor vendors to CIOs, admins and end users, will go through various levels of unhappiness.

A few weeks ago, Comsecuris published a detailed report on three vulnerabilities in VMware Workstation that allowed a malicious guest to cause a memory corruption in the hypervisor (vmware-vmx.exe) running on host (and we all know what that leads to). Nico and Ralf cleverly patched a guest component of VMware's graphics-related DLL in a guest machine to make it send a malformed data structure to the hypervisor - which then crashed because it lacked the sanitization checks. (Interestingly, the debug version of the hypervisor did have assert statements with these checks, and these turned out to be quite helpful for both vulnerability analysis and patching.)

All three of Comsecuris' vulnerabilities have been patched by VMware, two in Workstation 12.5.5, and one in Workstation 12.5.7. We decided to write a micropatch for the latter to show how a hypervisor can be patched without stopping the virtual machines running on top of it.


Reproducing the PoC

We reproduced the Comsecuris' "dcl_resource" PoC on a 64-bit Windows 10 machine running on VMware Workstation 12.5.5. (Fun fact: you can run a VMware Workstation inside another VMware Workstation.)

For those of you who might want to play with this PoC: install Python for Windows and Visual C++ 2015 Redistributables, place PoC files in a folder, launch exec_poc.py and wait for the virtual machine to crash. If it doesn't crash (as it didn't for us at first), make sure your virtual machine's Hardware Compatibility is set to "Workstation 12.x" or something else reasonably new, otherwise DirectX 10 will not be supported, and the PoC relies on that.


Vulnerability Analysis

Once we got it working, the PoC crashed the vmware-vmx.exe running our virtual machine. We attached a debugger but even though Comsecuris' report was quite detailed we found it hard to match our crash context to their analysis, as there is a lot of convoluted code there to look at.

So using a hint from Comsecuris' report, we replaced vmware-vmx.exe with vmware-vmx-debug.exe to use the debug version of the hypervisor, and repeated the procedure. The result was this:





An assert message popped up, revealing the assert's source code line. It seemed odd that an exploit apparently not anticipated in the release version would be stopped by an assert, but hey, it looked really promising. So we searched the disassembly for the shaderTransSM4.c string and the hex equivalent of 1856 (the line number) - 740h. And found a match (lower left orange block):




When scrolling up the code graph we got a view of a whole chain of asserts originating in a case clause named case 88 (upper right orange block):




Next we disassembled the release vmware-vmx.exe (the one that crashed) and found a matching case 88 clause there - but no checks resembling the assert chain which we found in the debug version.

We then made a diff with the fixed release 12.5.7 version of vmware-vmx.exe and found the exact same cascade of checks following the case 88 clause. So VMware developers apparently took the assert statements from the debug version and turned them into actual release-version checks. The image below shows the patched case 88 code branch on the left, and its vulnerable match on the right.




The last block in the cascade (dark red) routes either to the green default case clause (also present in the vulnerable version on the right) or to a red block on the left which directs execution towards an error handler if rbp+1A8h points to a value of 80h or greater. That dark red block was the one that stopped the PoC. Rereading the original report also revealed a parallel to our conclusion. It said the disclosed vulnerabilities "were fixed with the exact same code as in the debug version".
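
In C-like form, the added check boils down to something like this (a sketch only; the base pointer and field offset interpretation are our reading of the disassembly):

#include <stdint.h>

// Sketch of the check VMware added in 12.5.7: reject the shader declaration
// when the value stored at [rbp+1A8h] (the same value our patch below reads
// at [r12+50h] in 12.5.5) is 80h or greater.
static int dcl_resource_value_ok(const uint8_t *shader_state)
{
    uint32_t value = *(const uint32_t *)(shader_state + 0x1A8);
    return value < 0x80;   // >= 0x80 routes execution to the error handler
}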


Writing a Micropatch

With that information in our hands we could create a micropatch. We chose to set our patch location one instruction before the last jmp instruction in the grey box - on mov [r12+1Ch], ecx. We could theoretically inject it after the jmp but 0patch Agent currently does not support patching a relative jmp instruction (it will in the future). In the patch we implemented an equivalent of the buffer overflow check from the dark red block above; we only had to replace rbp+1A8h with an equivalent that worked in our context: r12+50h. If an attempted overflow is detected, our patch code calls up an "Exploit Attempt Blocked" dialog, sets rcx to string "0patch: Exploit Blocked for CVE-2017-4924!", and jumps to the error handler in the original code that writes our custom message string to vmware.log and terminates the processing of the malformed data structure.

So the nice thing about this patch is not only that it fixes the vulnerability without disrupting running virtual machines, but it also records an error in VMware Workstation's log for subsequent inspection.

This is the resulting 0pp file, the source code for our micropatch.



MODULE_PATH "..\AffectedModules\vmware-vmx.exe-12.5.5.17738\vmware-vmx.exe"
PATCH_ID 291
PATCH_FORMAT_VER 2
VULN_ID 2971
PLATFORM win64

patchlet_start
 PATCHLET_ID 1
 PATCHLET_TYPE 2
 PATCHLET_OFFSET 0x0021223f
 PIT vmware-vmx.exe!0x002122b5

 code_start
  cmp dword[r12+50h], 80h ; r12+50h equals rbp+50h in 12.5.5. debug
                          ; and rbp+1A8h in 12.5.7. release

  jb skip
  call PIT_ExploitBlocked
  call arg1                   ; push string
  db "0patch: Exploit Blocked for CVE-2017-4924!",0Ah,0;
 arg1:
  pop rcx
  jmp PIT_0x002122b5          ;jump to the error handler @ loc_1402122A3+12
 skip:
 code_end
patchlet_end


Going Live

See a video of our micropatch in action. As you can see, after we demonstrate the crash, we patch vmware-vmx.exe while the virtual machine is running, and the PoC gets blocked.



Closing Remarks

We wanted to demonstrate what patching a vulnerability in a hypervisor could look like in the future: instant, without disturbing the running virtual machines, strictly targeted at a particular vulnerability (as opposed to replacing megabytes of code), and also instantly un-patchable in case of a flawed patch. While VMware Workstation is clearly unlikely to host critical machines that are costly to stop or suspend, this same vulnerability also affected ESXi - which is running thousands of just such machines around the world. Unfortunately we can't micropatch ESXi (yet), but if there's interest, there is no reason why that couldn't be done.

You might have noticed that our patch only addresses the exact flaw demonstrated by Comsecuris' PoC, while the debug hypervisor version has a cluster of assert statements, each likely to be triggered on a different invalid value. So an exploit writer could probably walk through these assert statements and compile PoCs for additional cases not addressed by our micropatch. That said, we're not here to create new PoCs, and we only make micropatches for vulnerabilities we can prove (i.e., for which we have a PoC). If VMware were using micropatching, they could have easily implemented all these checks as micropatches (or even a single micropatch, albeit a bit larger than usual).

We're hoping to get the idea of micropatching to all product development groups who know that applying patches can be really costly for their users - and want to do something about it.

Finally, thanks to Nico Golde of Comsecuris for helpful hints on getting their PoC to work, and useful ideas about patching.

If you have 0patch Agent installed (it's still free!), all the magic is already there: this micropatch is already on your computer and is getting automatically applied whenever you launch VMware Workstation 12.5.5. Contact us if you want to have this same patch for some other version of  VMware Workstation.

@0patch


Thursday, September 21, 2017

Exploit Kit Rendezvous and CVE-2017-0022

How to Micropatch a Logical Flaw

by Luka Treiber, 0patch Team

This time I chose to take a look at Microsoft's XML Core Services Information Disclosure Vulnerability (CVE-2017-0022), which allows a malicious web site loaded in Internet Explorer to determine whether specific executable modules or applications are present on the user's system. It has also been exploited in the wild, as some exploit kits (Astrum, Neutrino) were using it to fingerprint victims' machines. Although an official fix was already released in March, what made it interesting was that it is a logical vulnerability, while so far our team had mostly patched memory corruptions.

Based on the TrendLabs Security Intelligence Blog post I managed to reproduce the PoC. The PoC (image below) checks for the existence of files by calling XMLParser::LoadDTD with a binary file's resource URL as the input parameter. If the resource (such as version info) is found, the vulnerable XMLParser::LoadDTD tries to parse it as a DTD but fails and returns 0x80070485, because version info is not a DTD. If the binary file or the resource within it does not exist, 0x80004005 is returned.








There were enough pointers in the report to easily spot the vulnerable function XMLParser::LoadDTD and make a diff between the vulnerable (left) and the patched version (right).






TrendLabs' report reads as if removing the check if (v3>=0) alone fixed CVE-2017-0022, so I located this check in the assembly above:


test    edi, edi
jl      short loc_728C183C



and created a micropatch that simply jumps over it. However, that did not change the outcome of the PoC. With my micropatch applied, the PoC still returned errorCode 0x80070485 for non-existing files. I then debugged Internet Explorer on a patched machine and found that the function IsDownloadExternal, which follows the skipped if, also behaves differently. Diffing the code graphs for this function showed that quite a few changes had been introduced to its implementation:








The gray conditional blocks have been added as part of the official patch. After adding this logic to my micropatch for the vulnerable msxml3.dll, the vulnerability was finally neutralized.

The micropatch has been published and distributed to all installed 0patch Agents. If you want to see it in action, check the video below.




Black box analysis of a logical vulnerability like this one can turn out to be quite challenging because unlike memory corruptions, no exceptions are thrown that could be caught during debugging and help pinpoint the culprit. But it all turned out well and the released micropatch is a good proof of that.

Friday, September 1, 2017

0patching the RSRC Arbitrary NULL Write Vulnerability in LabVIEW (CVE-2017-2779)

Whether Vendors Patch Their Products or Not, We Have Your Back

by Mitja Kolsek, the 0patch Team


Three days ago, Cisco Talos published a post about a code execution vulnerability in LabVIEW, whereby opening a malformed VI file with LabVIEW results in writing NULL bytes at chosen memory locations. This can most likely be used for executing arbitrary code by carefully placing NULLs in various data structures or on the stack. Nothing unusual so far.

According to Talos' post, the producer of LabVIEW, National Instruments, initially* refused to patch this vulnerability, stating that "National Instruments does not consider that this issue constitutes a vulnerability in their product, since any .exe like file format can be modified to replace legitimate content with malicious."

(* Subsequently, National Instruments stated that they would produce a patch.)

A VI file is not a Windows executable that would run on any Windows computer. However, if you have LabVIEW installed, a VI file will get opened by it, and can be made to automatically run its embedded code. This code is very powerful and by design has the ability to access your file system and launch native executables. So a malicious VI file, say, received via email or found on the Internet, could attack your computer if opened in LabVIEW - even without the vulnerability described here.

This is not entirely different from, say, a Microsoft Word document, which is also not an executable file, but can contain powerful damaging macros. (Although Word does warn you about macros and you have to explicitly allow their execution.) 

National Instruments provides Security Best Practices stating that you should exercise the same precautions with a VI file as you would with an EXE or DLL file. This makes sense - if an attacker can get you to open his malicious VI file, he can simply put malicious VI code in it that will attack you, just as if he could get you to open a malicious EXE. Importantly, he does not gain any additional benefit from the memory corruption issue described here, as he would still need you to open his VI file - and in contrast to Word and macros, LabVIEW does not ask your permission to execute VI code.

However, the Security Best Practices document further states that if you want to safely inspect a suspect VI before running it, you should add that VI as a sub-VI to a blank VI, and inspect its code before running it.

In this case, however, there is a difference between a legitimately-formatted VI with malicious VI code (which does not get executed as a sub-VI) and a malformed VI causing memory corruption when loaded (which executes malicious code even if loaded as a sub-VI).

This vulnerability therefore allows an attacker to mount an attack with a malicious VI file against a user following National Instruments' Security Best Practices. Since the vendor initially stated that they would not issue a fix (it's still not available at the time of this writing), we decided to make one ourselves.


Analysis

In order to fix this vulnerability, we needed to first understand it. We started with a sample VI file.


A .VI file (example shown above) is a data file in a publicly undocumented format. It gets opened with labview.exe, which, among other things, parses the file's RSRC segments into in-memory RSRC data structures. You can see one RSRC segment at the beginning of the file above, but there can be others further down in a file.

Talos' detailed vulnerability report provided useful details on where their malformed .VI file caused a crash. Apparently, a method called ClearAllDataHdls (yes, the affected DLL comes with some symbols) walks through an array of what we can assume are "data handles". Each data handle has an offset to its own array of some 20-byte objects, and the count of these objects. The code simply walks through all objects of all handles, and writes a NULL to each one of them. Manipulating the said offset allows for writing one or more NULLs at arbitrarily chosen locations in memory.

It was trivial to create a malformed .VI file from a sample file based on this information. And, as expected, it crashed LabVIEW with an access violation. However, it did not crash it in ClearAllDataHdls, but in a method called StandardizeAndSanityChkRsrcMap (actually in a small helper function called by it). What happened? Was our PoC different? Did we find another bug?

It turned out we were using LabVIEW 2017, while Talos did their testing on version 2016. It appears that in version 2017, LabVIEW added some RSRC sanitization code, and indeed, looking at this method revealed that some sanity checks are being done on the RSRC data, whereby a .VI file is rejected if these checks fail. Unfortunately, these checks are not for the malformed data in question; in fact, StandardizeAndSanityChkRsrcMap also performs initialization of the above-mentioned 20-byte objects by reversing their byte order to little-endian format, and this very action is what resulted in our crash due to accessing an invalid memory address.

It was time to take a closer look at StandardizeAndSanityChkRsrcMap and understand the RSRC data structure. The following image shows the most important part of StandardizeAndSanityChkRsrcMap, where the outer loop walks through all the handles, and the inner loop walks through all objects of a given handle and byte-reverses them.





Now let's look at a sample RSRC structure in the memory, after all the values have been byte-reversed.


52 53 52 43 0d 0a 03 00 4c 56 49 4e 4c 42 56 57  RSRC....LVINLBVW
c4 28 00 00 50 03 00 00 20 00 00 00 a4 28 00 00  .(..P... ....(..
00 00 00 00 00 00 00 00 20 00 00 00 34 00 00 00  ........ ...4...

38 03 00 00 17 00 00 00 4c 56 53 52 00 00 00 00  8.......LVSR....
24 01 00 00 52 54 53 47 00 00 00 00 38 01 00 00  $...RTSG....8...
76 65 72 73 00 00 00 00 4c 01 00 00 43 4f 4e 50  vers....L...CONP
00 00 00 00 60 01 00 00 4c 49 76 69 00 00 00 00  ....`...LIvi....
74 01 00 00 42 44 50 57 00 00 00 00 88 01 00 00  t...BDPW........
49 43 4f 4e 00 00 00 00 9c 01 00 00 69 63 6c 38  ICON........icl8
00 00 00 00 b0 01 00 00 54 49 54 4c 00 00 00 00  ........TITL....
c4 01 00 00 43 50 43 32 00 00 00 00 d8 01 00 00  ....CPC2........
4c 49 66 70 00 00 00 00 ec 01 00 00 46 50 48 62  LIfp........FPHb
00 00 00 00 00 02 00 00 46 50 53 45 00 00 00 00  ........FPSE....
14 02 00 00 56 50 44 50 00 00 00 00 28 02 00 00  ....VPDP....(...
4c 49 62 64 00 00 00 00 3c 02 00 00 42 44 48 62  LIbd....<...BDHb
00 00 00 00 50 02 00 00 42 44 53 45 00 00 00 00  ....P...BDSE....
64 02 00 00 56 49 54 53 00 00 00 00 78 02 00 00  d...VITS....x...
44 54 48 50 00 00 00 00 8c 02 00 00 4d 55 49 44  DTHP........MUID
00 00 00 00 a0 02 00 00 48 49 53 54 00 00 00 00  ........HIST....
b4 02 00 00 50 52 54 20 00 00 00 00 c8 02 00 00  ....PRT ........
56 43 54 50 00 00 00 00 dc 02 00 00 46 54 41 42  VCTP........FTAB
00 00 00 00 f0 02 00 00 00 00 00 00 00 00 00 00  ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ff ff ff ff 00 00 00 00 a4 00 00 00 00 00 00 00  ................
04 00 00 00 ff ff ff ff 00 00 00 00 b8 00 00 00  ................
00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00  ................
c8 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff  ................
00 00 00 00 d0 00 00 00 00 00 00 00 00 00 00 00  ................
ff ff ff ff 00 00 00 00 84 01 00 00 00 00 00 00  ................
00 00 00 00 ff ff ff ff 00 00 00 00 b8 01 00 00  ................
00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00  ................
3c 02 00 00 00 00 00 00 00 00 00 00 ff ff ff ff  <...............
00 00 00 00 40 06 00 00 00 00 00 00 00 00 00 00  ....@...........
ff ff ff ff 00 00 00 00 5c 06 00 00 00 00 00 00  ........\.......
00 00 00 00 ff ff ff ff 00 00 00 00 64 06 00 00  ............d...
00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00  ................
74 06 00 00 00 00 00 00 00 00 00 00 ff ff ff ff  t...............
00 00 00 00 50 09 00 00 00 00 00 00 00 00 00 00  ....P...........
ff ff ff ff 00 00 00 00 58 09 00 00 00 00 00 00  ........X.......
00 00 00 00 ff ff ff ff 00 00 00 00 60 09 00 00  ............`...
00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00  ................
84 0a 00 00 00 00 00 00 00 00 00 00 ff ff ff ff  ................
00 00 00 00 08 24 00 00 00 00 00 00 00 00 00 00  .....$..........
ff ff ff ff 00 00 00 00 10 24 00 00 00 00 00 00  .........$......
00 00 00 00 ff ff ff ff 00 00 00 00 9c 24 00 00  .............$..
00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00  ................
a4 24 00 00 00 00 00 00 00 00 00 00 ff ff ff ff  .$..............
00 00 00 00 ac 24 00 00 00 00 00 00 00 00 00 00  .....$..........
ff ff ff ff 00 00 00 00 d8 24 00 00 00 00 00 00  .........$......
00 00 00 00 ff ff ff ff 00 00 00 00 5c 25 00 00  ............\%..
00 00 00 00 80 00 00 00 ff ff ff ff 00 00 00 00  ................
38 28 00 00 00 00 00 00                          8(......



The structure begins with a 30h-byte header (purple), followed by a DWORD structure length (blue), which is the size of the entire structure as shown - in our case 338h. After that, a DWORD handle count (green), 17h, tells us that there are 23 handles in the handle array that follows (red). Each handle consists of three DWORDs: a seemingly user-readable keyword, the count of the handle's objects (decremented by 1, so 0 means 1 object), and the offset of its first object; the offset is measured from the handle count field (green). Finally, the rest of the structure is the object data area (black). Each object takes 20 bytes, and if a handle has n objects, they occupy n * 20 consecutive bytes at the specified offset.
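
Expressed as a C sketch (the field names are ours - the format is publicly undocumented - and the trailing flexible array is only indicative):

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t tag;             // seemingly user-readable keyword, e.g. "LVSR"
    uint32_t count_minus_1;   // number of 0x14-byte objects, minus 1
    uint32_t offset;          // offset of the first object, measured from
                              // the handle count field
} rsrc_handle;

typedef struct {
    uint8_t     header[0x30]; // 30h-byte header (purple)
    uint32_t    length;       // size of the entire structure (blue)
    uint32_t    handle_count; // number of handles (green)
    rsrc_handle handles[1];   // handle array (red), followed by the object
                              // data area (black), 0x14 bytes per object
} rsrc_map;
#pragma pack(pop)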

Clearly, a valid RSRC structure would have all handles' objects located neatly inside the object data area. But a malformed RSRC structure can specify an arbitrary offset, and thus tamper with chosen memory locations.


Patching

Our goal at this point was to add the missing sanity check to the original code: we should not allow accessing any object data outside the object data area.

We needed to find a good location for injecting the patch, and we chose one right after a handle's offset is obtained, at which point we had all the information available to implement the sanity check. The following image shows the location of our patch.



 

We have the following information available at the patch injection point:
  1. esi holds the offset of the current handle's first object
  2. dword [ebp+10h] holds the number of objects for this handle (reduced by 1)
  3. dword [ebp-4] holds the address of the handle count value, which is right next to the structure length value in memory.
The existing sanitization code exits the function with return value 6 (in eax) when the existing sanity checks fail, indicating to the caller that the structure is invalid. When this happens, LabVIEW tells the user that the file is invalid. We decided to do the same in our sanity check. 

In pseudo-code, this is what we needed to do:

  1. if offset of the current handle is negative or ridiculously large, we return with error 6
  2. if the number of objects for the current handle is negative or ridiculously large, we return with error 6
  3. multiply the number of objects by 20 to get the size of the object array
  4. add offset to the size of object array to get the offset immediately after the array
  5. calculate the maximum allowed offset by subtracting 34h (offset of handle count) from the structure length
  6. if the last byte of object array is beyond the maximum allowed offset, return with error 6 
  7. Otherwise continue
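
For readers who prefer C, roughly the same check looks like this (a sketch only - the actual injected code is the assembly below; the 0xFFF00000 mask is the "negative or ridiculously large" test):

#include <stdint.h>

// Returns 1 when the handle's objects stay inside the object data area,
// 0 when the file should be rejected with error 6.
static int rsrc_handle_in_bounds(uint32_t offset,          /* esi             */
                                 uint32_t objects_minus_1, /* dword [ebp+10h] */
                                 uint32_t struct_length)
{
    if (offset & 0xFFF00000)              return 0;  /* step 1 */
    uint32_t count = objects_minus_1 + 1;
    if (count & 0xFFF00000)               return 0;  /* step 2 */
    uint32_t end = offset + count * 0x14;            /* steps 3 and 4 */
    uint32_t max = struct_length - 0x34;             /* step 5 */
    return end <= max;                               /* step 6 */
}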

This is the source code of the actual patch:

MODULE_PATH "C:\Program Files (x86)\National Instruments\LabVIEW 2017\resource\mgcore_SH_17_0.dll"
PATCH_ID 276
PATCH_FORMAT_VER 2
VULN_ID 2892
PLATFORM win32


patchlet_start
 PATCHLET_ID 1
 PATCHLET_TYPE 2

 PATCHLET_OFFSET 0x30c94
 N_ORIGINALBYTES 5

 PIT mgcore_SH_17_0!0x30c09

 code_start

    ; esi is offset of the handle's object data
    test esi, 0FFF00000h   ; is offset negative or too huge?
    jnz error             ; if so, exit with error

    mov eax, dword [ebp+10h] ; eax = number of objects in this handle (-1)
    inc eax                  ; eax = actual number of objects in this handle
    test eax, 0FFF00000h     ; is number of objects negative or too huge?
    jnz error

    imul eax, 14h         ; size of object data for this handle
                          ; (1 object is 14h bytes)
    add eax, esi          ; eax = offset right after this handle's
                          ; last object

    mov edx, dword [ebp-4] ; stored address of handles_num
    mov edx, [edx-4]      ; structure length is stored right before
                          ; handles_num
    sub edx, 34h          ; edx is the maximum allowed offset
    cmp eax, edx          ; are we out of bounds?
    jg error              ; if so, exit with error

    jmp continue

 error:
    call PIT_ExploitBlocked
    jmp PIT_0x30c09       ; jmp to epilogue with error code 6

 continue:

 code_end
patchlet_end



Our micropatch has been published and distributed to all installed 0patch Agents yesterday (two days after Talos published vulnerability details), and you can see it in action in this video.





The benefits of micropatching

This story is a common one: a software vendor creates a product, many users use it, then someone finds a vulnerability. The vendor is notified but it's expensive for them to create and distribute a patch outside their schedule. Even with an updating mechanism in place, the so-called "fat updates" (updates that replace huge chunks of the product) are risky; many things can go wrong and expensive full-blown testing has to be done. And then the update has to be delivered to users, who have to waste their precious time with updating. And all that just for a single vulnerability? Understandably, vendors are inclined to try postponing such unwanted updates and bundle them with scheduled ones, often buying their time by downplaying the issue. When that happens, the security community likes to drop the details ("hey, if the vendor says it's not an issue, there's no harm in publishing"), and that usually pushes the vendor to issue a fix after all. They do it under pressure, and the risk of error is higher than usual. Finally, since un-updating is not really a thing, a botched fix could mean a nightmare for users to just get back to the vulnerable functional state.

In contrast, in-memory micropatching can fix a vulnerability with minimal and extremely controlled code modification (usually a dozen or so machine instructions) with no unwanted side effects. In addition, a micropatch can be applied to a product instantly, while the product is running, and just as instantly removed if suspected to be causing problems. All this allows the testing to be less rigorous, and only focused on the modified code - therefore cheaper.

Now imagine National Instruments had micropatching integrated in LabVIEW. It would be inexpensive to create and distribute a highly reliable micropatch for a vulnerability like this - especially with their intimate knowledge of the product - and they could stay on their original release schedule while users would get their LabVIEW installations micropatched without even knowing it. No PR mess, no unhappy users, and very little disruption of business. What's not to like?

Software vendors are welcome to approach us about saving money, grief, and their users' time with micropatching.


If you have 0patch Agent installed (it's free!), this micropatch is already on your computer and is getting automatically applied whenever you launch LabVIEW 2017. 

@mkolsek
@0patch

Thursday, August 24, 2017

0patching Foxit Reader's saveAs "0day" (CVE-2017-10952)

3rd-Party Patching a Logical Bug


By Mitja Kolsek, the 0patch team

A bit of introduction: last week we could all witness a familiar "It's a vuln - no it's not" dance, this time featuring Zero Day Initiative and Foxit Software. In short, ZDI reported to Foxit two security issues in its Reader and PhantomPDF products, discovered by Steven Seeley and Ariele Caltabiano. Foxit said they would not fix these issues, as their exploitation requires the user to disable Secure Mode and thus allow unsafe JavaScript code to execute. ZDI then moved to publish proof-of-concept details, which resulted in Foxit deciding to address these issues anyway.

Finally, Foxit stated that they would "add an additional guard in PhantomPDF/Reader code where when opening a PDF document contains these powerful ( and thus potentially insecure) JavaScript functions, the software will check if the document is digitally signed by a verifiable/trustworthy person of entity. Only certified documents can run these powerful JS functions even when “Safe Reading Mode” is turned off."

They also announced their plan to "release a Reader/PhantomPDF 8.3.2 patch update this week (ETA Aug 25th) with additional guard against misuse of powerful (potentially insecure) JavaScript functions — this will make Foxit software equivalent to what Adobe does."

We at 0patch like a challenge as much as the next guy. While we're usually patching memory corruption bugs (most critical remotely exploitable vulns are of that sort), we're happy to demonstrate that in-memory micropatching can just as well be used for fixing logical bugs - at least temporarily, until the official vendor fix is applied.

So we set upon creating a micropatch for CVE-2017-10952, which allows a script inside a PDF document to use the saveAs function to save itself to an arbitrarily chosen location on the user's computer, using an arbitrarily chosen file extension. For example, a PDF document containing an <html> block with some script anywhere in it could simply save itself as an HTA file (a locally executable HTML file) in the user's StartUp folder like this:

this.saveAs("/c/Users/”+ identity.loginName + ”/AppData/Roaming/Microsoft/Windows/STARTM~1/Programs/Startup/si.hta");

As a result, when the user logged in to Windows the next time, this HTA file would get executed.


Reproducing the issue


Reproducing the issue was simple: we downloaded and installed the latest version of Foxit Reader (8.3.1.21155) and put the above code into a sample PDF file to get our POC.

Opening the POC in Foxit Reader with default/recommended configuration resulted in the Safe Reading Mode warning. Pressing "OK" was enough to disable Safe Reading Mode and get our code executed. Indeed, si.hta got created in the StartUp folder.


Analysis

It's generally not trivial to find code that implements a JavaScript function, especially if the result of the POC is not a crash. However, in this case the saveAs function writes a file to disk so it can be intercepted at a call to one of the disk-writing Windows API functions. While the ZDI article mentions the WriteFile function, putting a breakpoint on it resulted in way too many hits. So we went with CreateFile.

However, CreateFile also got called constantly with some BMP file that Foxit Reader seems to be loading all the time for some reason. Making the breakpoint conditional did the trick by simply checking two letters of the file path and not breaking if they matched the said BMP file path:

bp kernel32!CreateFileW "j poi(poi(esp+4)+6)!=0x00730055 ''; 'gc'"

Bingo! The first break occurred right in our saveAs call, confirmed by our HTA path being passed to CreateFile. The call stack tail looked like this:


0030eba8 01703c5e kernel32!CreateFileW
0030ebdc 016f7d4f FOXITREADER+0x823c5e
0030ebfc 011c5a22 FOXITREADER+0x817d4f
0030efac 010bd155 FOXITREADER+0x2e5a22
0030f068 0238ad23 FOXITREADER+0x1dd155
0030f230 02377492 FOXITREADER+0x14aad23
0030f2e4 012f17c7 FOXITREADER+0x1497492
0030f31c 0217568e FOXITREADER+0x4117c7



Time for IDA. As FOXITREADER.EXE is a 54MB beast, it took IDA quite a while to sort it all out. After that, looking at the functions in the call stack revealed that the one containing address FOXITREADER+0x14aad23 holds the bulk of the saveAs implementation. There are references to all saveAs arguments there (cPath, cConvID, cFS, bCopy and bPromptToOverwrite), and one can see the logic of the bPromptToOverwrite value, which, if true, calls PathFileExistsW and sets a local variable we called allowed_to_save_file to 0 if the user declines the overwrite.

The image below shows the place where allowed_to_save_file is set to 0, and a bit further down where allowed_to_save_file is checked, whereby a bunch of code is bypassed if it is 0. The bypassed code includes the green block where the execution proceeds towards the CreateFile call.






Patching

It would be very difficult for us to do exactly what Foxit is about to do to address this issue (only allowing properly signed documents to use dangerous functions like saveAs), because digging deeper to find a way to the data on document signatures could easily take us days. So we decided to simplify the fix in a way that still allows saveAs, but not in an RCE-style dangerous way.

We noticed (and tested) that in contrast to Adobe's products, Foxit Reader's saveAs function doesn't seem to support document conversion. Consequently, it makes no sense to allow a document to save itself with any other extension than ".pdf". So our logic would be to check the file name passed to saveAs, and only allow files with a ".pdf" extension to proceed, while silently blocking all other extensions.

Having this patching logic in place, we already knew how to get the file path in the function shown above, so we only needed a way to sabotage the file creation for disallowed extensions without causing any unwanted side effects.

The implementation of bPromptToOverwrite logic came in handy as we saw that when the user doesn't allow overwriting, the execution simply bypasses a code block that otherwise leads to CreateFile, and that surely has no unwanted side effects.

So we did a similar thing: We decided to inject our patch code at the beginning of the green block and do the following there:

1) Get the address of the last "." in cPath_string (if any).
2) If no dot, jump out of the block.
3) Make a case-insensitive _wcsicmp with ".pdf".
4) If we don't have a match, jump out of the block.
5) Otherwise, continue.
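
In C, the same logic looks roughly like this (a sketch; the actual micropatch below reuses the _wcsrchr and _wcsicmp implementations found inside FoxitReader.exe rather than calling the CRT):

#include <wchar.h>

// Sketch of the patch logic: allow saveAs to proceed only for ".pdf" paths.
static int saveas_extension_allowed(const wchar_t *cPath)
{
    const wchar_t *dot = wcsrchr(cPath, L'.');  // 1) last '.' in the path
    if (dot == NULL)                            // 2) no dot -> block
        return 0;
    return _wcsicmp(dot, L".pdf") == 0;         // 3)-5) only ".pdf" may proceed
}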

IDA was really helpful in identifying implementations of _wcsrchr and _wcsicmp inside FOXITREADER.EXE for us so we didn't have to implement them in the patch.

Furthermore, we decided to identify any attempt to save a file with some other file extension as an exploit attempt, which results in an "Exploit Attempt Blocked" popup.

This is the micropatch that came out of this:


; target: Foxit Reader 8.3.1.21155
MODULE_PATH "C:\Program Files (x86)\Foxit Software\Foxit Reader\FoxitReader.exe"
PATCH_ID 275
PATCH_FORMAT_VER 2
VULN_ID 2891
PLATFORM win32

patchlet_start
PATCHLET_ID 1
PATCHLET_TYPE 2

PIT FoxitReader.exe!0x5CA2A1,FoxitReader.exe!0x5C9A22,FoxitReader.exe!0x14AAD23

PATCHLET_OFFSET 0x14AACF2
N_ORIGINALBYTES 5

code_start

    mov edx, dword [ebp-1A0h]
    push '.'
    push edx
    call PIT_0x5CA2A1           ; (_wcsrchr) find the last dot in cPath
    add esp, 8                  ; restore esp
    test eax, eax               ; was a dot found?
    jz Abort                    ; no dot found, we don't allow that
  
    call GetLocalPdfString      ; a trick to get the address of a local
                                ; string on stack
    dw __utf16__(".pdf"),0

    GetLocalPdfString:          ; at this point, the address of string
                                ; ".pdf" is on the stack
    push eax                    ; eax points to the found dot in cPath
    call PIT_0x5C9A22           ; (_wcsicmp) compare the two strings,
                                ; case insensitive
    add esp, 8                  ; restore esp
    test eax, eax               ; do strings match?
    jz Continue                 ; they do match, we allow saveAs to continue
  
    Abort:
    call PIT_ExploitBlocked     ; show the "Exploit Blocked" popup
    jmp PIT_0x14AAD23           ; jmp out of the block, sabotaging saveAs
  
    Continue:

code_end
patchlet_end


Micropatch in Action


We made a video to quickly show you how 0patch Agent applies this micropatch to a running Foxit Reader, effectively patching it without disturbing the user.




Closing Remarks

We made this micropatch to show how logical vulnerabilities can also be patched, and to demonstrate the amount of effort required for creating a micropatch. It took us 6 man-hours from installing the vulnerable Foxit Reader to having a working micropatch. Mind you, the majority of time was spent on analyzing the vulnerability and Reader's code, and deciding on a good way to patch. The original product developers already know the product inside out and would skip most of this process.

We believe that with Foxit's intimate knowledge of their product, and our knowledge of micropatching, a high-quality micropatch could be made in less than an hour. With our distribution system, it would be on everyone's computer within another hour, and applied automatically. Even users using Foxit Reader at that time would not notice anything - Reader would instantly go from "vulnerable" to "fixed" while they scroll through the pages of some document. And that's how we believe most software vulnerabilities should (and could) be fixed.

If you have 0patch Agent installed (it's free!), this micropatch is already on your computer and is getting automatically applied when you launch the vulnerable Foxit Reader. You can use this POC to play with it. Have fun, write some micropatches and if you're a software vendor interested in equipping your products with self-micropatching ability, ping us at support@0patch.com.

@mkolsek
@0patch

Update 8/29/2017: The latest Foxit Reader (8.3.2.25013) fixes CVE-2017-10951, CVE-2017-10952, and both vulnerabilities reported by



Monday, July 10, 2017

0patching the Quick Brown Fox of CVE-2017-0283


By Luka Treiber, 0patch Team.

Among many other things, last Patch Tuesday brought a fix for an RCE vulnerability named "Windows Uniscribe font processing heap-based memory corruption in USP10!MergeLigRecords". A couple of days later Mateusz Jurczyk of Google Project Zero published a report with a PoC attached. Out of the eight vulnerabilities reported at the same time in that Windows library, I chose to patch this one for being the most severe.


Reproduction

Opening font files from the PoC.zip package in Windows Font Viewer displayed the "quick brown fox" text in weird constellations, but without a crash. With no exact PoC instructions given, this is the procedure that worked for me on Windows 7 x64:
  1. unpack the PoC
  2. install signal_sigsegv_313372b5_210_42111ccffd2e10aba8b5e6ba45289ef3.ttf (or any other TTF from the package) on the system
  3. run Notepad
  4. select Format->Font...>[Font] 4000 Tall and [Script] Arabic
Immediately, a crash occurs at:

(1410.378): Unknown exception - code c000041d (!!! second chance !!!)
msvcrt!memcpy+0x1c0:
000007fe`fd651444 8a040a          mov     al,byte ptr [rdx+rcx] ds:00000000`025230e8=??

The call stack matches the one from the report:

msvcrt!memcpy+0x1c0
USP10!MergeLigRecords+0x7f
USP10!LoadTTOArabicShapeTables+0x5f8
USP10!LoadArabicShapeTables+0x148
USP10!ArabicLoadTbl+0xf0
USP10!UpdateCache+0xf5


Analysis of the PoC

It is usually worth taking a look at the PoC and trying to understand its payload before diving into debugging - knowing what data patterns to look for in the application's memory can save a lot of time. So I searched the Internet for the original TTF file the PoC was derived from. I quickly found one and started diffing the two files in order to locate malformed font attributes. The comparison showed a common structure of the files. However, it became clear that an automatic fuzzer was used and too many attributes were polluted with suspicious values to sift through by hand. That task was definitely not a time-saver, so I had to give it up.


Analysis of the official patch

Extracting and analyzing the official patch is the next reasonable thing to do after a Patch Tuesday. So let's see how this goes.

The vulnerable version 1.626.7601.23688 of usp10.dll got replaced by the patched 1.626.7601.23807. The files are both approximately the same size (788KB), and when compared using BinDiff, a match of 733 functions is found. Among those are also the ones from the crash call stack above. It makes sense to check differences in those functions first, as the patch is often a sanity check on input data inserted before a vulnerable call. The first one - USP10!MergeLigRecords from the immediate crash context - is identical, but the next one - USP10!LoadTTOArabicShapeTables - shows some interesting changes in the function graph. Left is the old (vulnerable) and right is the new (patched) version of usp10.dll.




We can see that the vulnerable call MergeLigRecords is enclosed in a loop.



Within that loop, the only significant change in the patched version is the gray cmp-jnz block that has been added. Could this be the patch? With limited analysis done so far it is hard to say what the compared data (rsp+var_9C) means, but it is clear that the introduction of that block makes the criteria for reaching the problematic call more refined than in the old version. This is also behavior I would expect of a patch.

The only way to make sure is to debug. So we start WinDbg and attach it to notepad.exe on a vulnerable system. We set three breakpoints: one at the vulnerable MergeLigRecords, one before the call to ttoGetTableInfo, and one after. The ttoGetTableInfo call is interesting because it takes two struct parameters - TTOOutput and TTOInput. The gray if block that follows right after checks whether one of the TTOOutput attributes (rsp+var_9C) is 4 and, in case of a mismatch, exits the loop. So what we're interested in is where (and if) rsp+var_9C changes inside ttoGetTableInfo and how it is related to the crash inside MergeLigRecords. Before we hit [F5] in WinDbg and click through the PoC, we also need to locate a match for the rsp+var_9C attribute from the patched version in the vulnerable code.




We see that, conveniently, a reference to the same attribute can be found in the old LoadTTOArabicShapeTables, named rsp+var_A4 (it resolves to @rsp+34 in the snippets below).

Now we can observe our breakpoints in WinDbg:

0:002> bu0 @!"usp10"+1e32f "dw @rsp+34 L1";
0:002> bu1 @!"usp10"+1e334 "dw @rsp+34 L1";
0:002> bu2 @!"usp10"+1e3f3;
0:002> g

In the first two passes, the value of var_A4 is 0004 and does not change. Also the USP10!MergeLigRecords call executes without a crash.

USP10!LoadTTOArabicShapeTables+0x52f:
000007fe`fd59e32f e8bcfd0000      call    USP10!ttoGetTableInfo (000007fe`fd5ae0f0)
0:000> g
00000000`001edd14  0004
USP10!LoadTTOArabicShapeTables+0x534:
000007fe`fd59e334 85c0            test    eax,eax
0:000> g
Breakpoint 2 hit
USP10!LoadTTOArabicShapeTables+0x5f3:
000007fe`fd59e3f3 e8b8010000      call    USP10!MergeLigRecords (000007fe`fd59e5b0)
0:000> g
00000000`001edd14  0004
USP10!LoadTTOArabicShapeTables+0x52f:
000007fe`fd59e32f e8bcfd0000      call    USP10!ttoGetTableInfo (000007fe`fd5ae0f0)
0:000> g
00000000`001edd14  0004
USP10!LoadTTOArabicShapeTables+0x534:
000007fe`fd59e334 85c0            test    eax,eax
0:000> g
Breakpoint 2 hit
USP10!LoadTTOArabicShapeTables+0x5f3:
000007fe`fd59e3f3 e8b8010000      call    USP10!MergeLigRecords (000007fe`fd59e5b0)
0:000> g


In the third pass, var_A4 gets changed by the ttoGetTableInfo call from 0004 to 0001 and MergeLigRecords crashes the app.

00000000`001edd14  0004
USP10!LoadTTOArabicShapeTables+0x52f:
000007fe`fd59e32f e8bcfd0000      call    USP10!ttoGetTableInfo (000007fe`fd5ae0f0)
0:000> g
00000000`001edd14  0001
USP10!LoadTTOArabicShapeTables+0x534:
000007fe`fd59e334 85c0            test    eax,eax
0:000> g

Breakpoint 2 hit
USP10!LoadTTOArabicShapeTables+0x5f3:
000007fe`fd59e3f3 e8b8010000      call    USP10!MergeLigRecords (000007fe`fd59e5b0)
0:000> p
(14a8.189c): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
msvcrt!memcpy+0x1c0:
000007fe`fd651444 8a040a          mov     al,byte ptr [rdx+rcx] ds:00000000`03bf5018=??

At this point we can conclude that the gray cmp-jnz block would've prevented the crash so we found a patch candidate and can make a 0patch of it.


Patching

So far we've assembled all the required pieces to build one - it's all in the gray block in the diff above. And below is the resulting .0pp file I made.

MODULE_PATH "C:\Windows\System32\usp10.dll"
PATCH_ID 273
PATCH_FORMAT_VER 2
VULN_ID 2536
PLATFORM win64

patchlet_start
PATCHLET_ID 1
PATCHLET_TYPE 2
PATCHLET_OFFSET 0x0001e32f
PIT USP10!0x2e0f0      ; import USP10!ttoGetTableInfo
JUMPOVERBYTES 5
N_ORIGINALBYTES 1

 code_start
  call PIT_0x2e0f0     ; call USP10!ttoGetTableInfo
  cmp word[rsp+34h], 4 ; var_A4
  je skip              ; inverse condition from original patch
  or eax, 01h          ; set piggyback condition
  skip: 
 code_end
patchlet_end

When turning the official patch into a 0patch I used a "jump condition piggy-backing" technique already described in a previous post (only this time it is the quick brown fox I'm piggy-backing). Conveniently, the official patch is right next to a jump (jnz) that exits the problematic loop, so this is also the location for our 0patch. I placed it before the original jnz. Due to lack of space (jnz is only 2 bytes long, while we always need 5 bytes to jump to our patch code), I had to place it at the ttoGetTableInfo call, which 0patch Agent will substitute with the 5-byte jump. The first instruction in the patch is therefore the substituted original call to ttoGetTableInfo. What follows it is the check from the official patch: first, var_A4 is compared to 4 to check whether the loop can continue; otherwise eax, which holds the result of ttoGetTableInfo, is set to 1 so that the test eax,eax instruction from the original code following the patch sets the jump condition (zf=0) for the jnz that will exit the loop.

After compiling this .0pp file with 0patch Builder, the patch gets applied and the PoC crashes no more. Our team also tested it against the other TTF files from the PoC.zip package and all of them seem to be disarmed. It is worth noting, however, that this patch only covers the execution route discovered in the Reproduction section above. In the diff of LoadTTOArabicShapeTables there is one more cmp-jnz gray block added to the patched usp10.dll that exits a loop similar to the one covered above. But since there is no public PoC available that would trigger that branch of execution, we won't include it in a 0patch. The only safe way for 0patch to work is to cover testable execution paths.

Now, if you haven't so far, install 0patch Agent  and check the PoC to test it for yourself. Next time you see a quick brown fox jumping, make sure that you have 0patch Agent enabled.
