In an earlier blog post, I talked about on-the-fly patch management. While I expressed my apprehensions about on-the-fly patching, I noted how newer technologies in the trusted execution and virtualization space can be leveraged to make it more resilient than it is today. I also acknowledged that on-the-fly patch management has its place, in spite of some misgivings. I recently stumbled on a vulnerability disclosure called Ripple20 that further accentuated the potential value of on-the-fly patch management.
It is nothing new for security researchers to go on bug hunts, looking for holes in software, whether with a bug bounty in mind or for other reasons. As a starting point, they generally focus on software that is popular, widely used, or generally known to be buggy, and/or software that is likely to sit on internet-facing systems. At times, I have noticed that they also look for arcane or offshoot software that might be pervasive but not necessarily popular. Ripple20 appears to fall under the latter kind, zeroing in on embedded TCP/IP software by Treck Inc. Apart from the fact that the researchers were perhaps more methodical and far-sighted about how they went about their disclosure, presumably in an attempt to positively impact their future course, there are several things about the disclosure that got my attention, especially from an on-the-fly patch management perspective. For example, to quote from that link, "In the case of Ripple20, the starting point was embedded into Treck’s TCP/IP low-level Internet protocol suite library. The library could be used as-is, configured for a wide range of uses, or incorporated into a larger library. The user could buy the library in source code format and edit it extensively. It can be incorporated into the code and implanted into a wide range of device types. The original purchaser could decide to rebrand, or could be acquired by a different corporation, with the original library history lost in company archives. Over time, the original library component could become virtually unrecognizable. This is why, long after the original vulnerability was identified and patched, vulnerabilities may still remain in the field, since tracing the supply chain trail may be practically impossible." This disorganized, at times chaotic, nature of Treck's software distribution could make on-the-fly patch management indispensable!
The following quote from the same link further confirms this: "Over the course of the disclosure process we found that while patching was difficult for some vendors, it could potentially be even more difficult or close to impossible for some end users to install the patches. (For example, if the library is on a separate physical component or the company that produced the component has ceased operations.)"
This link, also pertaining to Ripple20, makes the following comparable statements - "Experts now fear that all products using this library will most likely remain unpatched due to complex or untracked software supply chains." ... "Problems arise from the fact that the library was not only used by equipment vendors directly but also integrated into other software suites, which means that many companies aren't even aware that they're using this particular piece of code, and the name of the vulnerable library doesn't appear in their code manifests."
I hope patch management solutions like 0patch are taking notice of these kinds of disclosures and positively impacting the situation by creating micropatches where applicable. While creating such patches, it should also be remembered that on-the-fly patch management could be taken to the next level with trusted execution and virtualization, as mentioned in the earlier blog post.
I talked about the importance of the end-to-end trusted execution path in an earlier post. In it, I alluded to the possibility of follow-up posts, because of the potential this topic holds. I recently stumbled on a blog post that reminded me of its relevance.
The blog post I am referring to pertains to mitigations against Mimikatz-style attacks. Among other things, it talks about how Mimikatz found a way to circumvent Credential Guard. To quote the blog directly, "While prior to this Mimikatz could harvest hashes directly from memory, what this bypass does is harvest credentials as they are entered - before they get to that protected memory area." I would suggest reading the referenced blog for further details.
Another blog post on Credential Guard/Mimikatz also talks about how Mimikatz circumvents Credential Guard. To quote that article, "When these credentials are typed, they can still be intercepted and stolen, e.g. with a key logger or with a custom SSP, as illustrated here." And here is another blog post delving into the same issue.
Based on the above-mentioned blog posts, it is clear that irrespective of the level of protection afforded by newer technologies, tools like Mimikatz find a way to circumvent those protections. In the case in question, the SSP interface enabled the circumvention. The attack vector opened up by such APIs was provided by the same vendor that also provided the added protection! Such scenarios can perhaps be avoided by enforcing more stringent use of those APIs, at least in production environments. However, that still leaves the environment vulnerable by way of HID devices, through which sensitive information is oftentimes fed, and the vulnerabilities they introduce. This situation all the more accentuates the need for an end-to-end trusted execution path, as described in the initial post.
Taking Patch Management To The Next Level By Leveraging Hardware Virtualization And Trusted Execution For On-The-Fly Patching
I have explored use cases for trusted execution, Intel® SGX in particular, in several blog posts. This one talks about running SQLite within an Intel® SGX secure enclave, and this one about running a cryptography library like Cryptlib within a secure enclave. The internet is full of such examples, but they are the more obvious use cases. There are varied non-obvious, and at times out-of-the-box, use cases that could greatly benefit from leveraging trusted execution. Patch management is one such area that may not seem like an obvious fit, yet stands to gain from leveraging trusted execution technologies.
On-the-fly patch management in particular is neither that prevalent nor likely to be perceived as good practice from a conventional software management perspective. However, it does have its place. The following are just two of several examples that come to mind -
The above-mentioned use cases are nothing new; on-the-fly patching software exists for exactly that reason. However, on-the-fly patching of binaries is still vulnerable. Not only is it more prone to stability issues, but if not implemented with extra care, it could create more problems than it solves. Even with a meticulous implementation, it is fragile by nature. Moreover, unlike re-spun binaries, on-the-fly patches don't necessarily secure the fix with a signed patch that is verified and loaded by the platform/OS loader, and they could give away the vulnerability they patch in a more obvious way. In other words, while on-the-fly patching fixes vulnerabilities in real time, it could potentially leave the system no less vulnerable!
Techniques like hooking and patching used by existing on-the-fly patching technologies are by now near relics, so this type of patch management risks being seen as outdated. By leveraging hardware virtualization and trusted execution, it can be brought forward to keep up with the fast-paced technology landscape, and the entire process can be made more secure and far more obscure to prying eyes. It could redefine the way vulnerable binary code is scanned, guarded, and substituted in real time, right when the vulnerable code is about to be executed - just-in-time and on-the-fly. Features provided by hardware virtualization can be leveraged to scan and guard the vulnerable regions in far more powerful ways than current ones. Trusted execution can be used not only to obscure which vulnerable region is replaced and with what; the entire process could also be driven using secure remote computation, software attestation, and more. So, at several levels, combining hardware virtualization with trusted execution could make on-the-fly patch management exponentially more secure and powerful.
While I consider exploring this further myself, I also wanted to invite others into the discussion. So, I thought I would share this insight and seek others' thoughts on the matter. I am especially keen on 0patch's insight, as they appear to use the equivalent of on-the-fly patching. If they or anyone else would like to discuss this further, please use the comments below or reach out to email@example.com.
Founder of KryptoGuard™ technology initiative, product and services.