In an earlier blog post, I talked about on-the-fly patch management. While I expressed my apprehensions about on-the-fly patching, I mentioned how newer technologies in the trusted execution and virtualization space can be leveraged to make on-the-fly patching more resilient than it is today. I also acknowledged that, in spite of some misgivings, on-the-fly patch management has its place. I recently stumbled upon a vulnerability disclosure called Ripple20 that further accentuated its potential value.
It is nothing new for security researchers to go on bug hunts, looking for holes in software, either with a bug bounty in mind or for other reasons. As a starting point, they generally focus on software that is popular, widely used or generally known to be buggy, and/or likely to be found on internet-facing systems. At times, I have noticed that they also look for arcane or offshoot software that might be pervasive but not necessarily popular. Ripple20 appears to fall under the latter kind, zeroing in on embedded TCP/IP software by Treck Inc. Apart from the fact that the researchers were perhaps more methodical and far-sighted about how they went about their disclosure, presumably in an attempt to positively impact their future course, there are several things about the disclosure that got my attention, especially from an on-the-fly patch management perspective. For example, to quote from that link, "In the case of Ripple20, the starting point was embedded into Treck’s TCP/IP low-level Internet protocol suite library. The library could be used as-is, configured for a wide range of uses, or incorporated into a larger library. The user could buy the library in source code format and edit it extensively. It can be incorporated into the code and implanted into a wide range of device types. The original purchaser could decide to rebrand, or could be acquired by a different corporation, with the original library history lost in company archives. Over time, the original library component could become virtually unrecognizable. This is why, long after the original vulnerability was identified and patched, vulnerabilities may still remain in the field, since tracing the supply chain trail may be practically impossible." This level of disorganization and chaos in Treck's software distribution could at times make on-the-fly patch management indispensable!
The following quote from the same link further confirms this - "Over the course of the disclosure process we found that while patching was difficult for some vendors, it could potentially be even more difficult or close to impossible for some end users to install the patches. (For example, if the library is on a separate physical component or the company that produced the component has ceased operations.)"
This link, also pertaining to Ripple20, makes the following comparable statements - "Experts now fear that all products using this library will most likely remain unpatched due to complex or untracked software supply chains." ... "Problems arise from the fact that the library was not only used by equipment vendors directly but also integrated into other software suites, which means that many companies aren't even aware that they're using this particular piece of code, and the name of the vulnerable library doesn't appear in their code manifests."
I hope patch management solutions like 0patch are taking notice of these kinds of disclosures, to positively impact the situation by creating micropatches where applicable. While creating such patches, it is also worth remembering that on-the-fly patch management could be taken to the next level with trusted execution and virtualization, as mentioned in the earlier blog post.
I talked about the importance of an end-to-end trusted execution path in an earlier post. In it, I alluded to the possibility of follow-up posts, given the potential this topic holds. I recently stumbled upon a blog post that reminded me of its relevance.
The blog post I am referring to pertains to mitigations against Mimikatz-style attacks. Among other things, it talks about how Mimikatz found a way to circumvent Credential Guard. To quote the blog directly, "While prior to this Mimikatz could harvest hashes directly from memory, what this bypass does is harvest credentials as they are entered - before they get to that protected memory area." I would suggest reading the referenced blog for further details.
Another blog post on Credential Guard and Mimikatz also talks about how Mimikatz circumvents Credential Guard. To quote that article, "When these credentials are typed, they can still be intercepted and stolen, e.g. with a key logger or with a custom SSP, as illustrated here." And here is another blog post delving into the same issue.
Based on the above-mentioned blog posts, it is clear that irrespective of the level of protection afforded by newer technologies, tools like Mimikatz find a way to circumvent those protections. In the case in question, the SSP interface enabled the circumvention. The attack vector opened up by such APIs was provided by the same vendor that also provided the added protection! Such scenarios can perhaps be avoided by enforcing more stringent use of those APIs, at least in production environments. However, that still leaves the environment vulnerable by way of HID devices, through which sensitive information is oftentimes fed, and the vulnerabilities they introduce. This situation all the more accentuates the need for an end-to-end trusted execution path, as described in the initial post.
Taking Patch Management to the Next Level by Leveraging Hardware Virtualization and Trusted Execution for On-the-Fly Patching
I have explored use cases for trusted execution, Intel® SGX in particular, in several blog posts. This one talks about running SQLite within Intel® SGX's secure enclave. And this one talks about running a cryptography library like Cryptlib within a secure enclave. The internet is full of such examples. But those are the more obvious use cases. There are varied non-obvious and at times out-of-the-box use cases that could greatly benefit from leveraging trusted execution. Patch management is one such area that may not seem like an obvious fit, yet stands to gain from leveraging trusted execution technologies.
On-the-fly patch management in particular is neither that prevalent nor likely to be perceived as good practice from a conventional software management perspective. However, it does have its place. The following are just two of several examples that come to mind -
The above-mentioned use cases are nothing new; on-the-fly patching software exists for just that reason. However, on-the-fly patching of binaries is still vulnerable. Not only is it more prone to stability issues, but, if not implemented with extra care, it could create more problems than it solves. Even with meticulous implementation, such patches are fragile by nature. Plus, unlike re-spun binaries, the fix is not necessarily secured as a signed patch that is verified and loaded by the platform/OS loader, and an on-the-fly patch could give away the vulnerability it fixes in a more obvious way. In other words, while on-the-fly patching patches vulnerabilities in real time, it could potentially leave the system no less vulnerable!
Techniques like hooking and patching used by existing on-the-fly patching technologies are old relics by now, and this type of patch management risks being seen as outdated. By leveraging hardware virtualization and trusted execution, it can be brought forward to keep up with the fast-paced technology landscape, and the entire process can be made more secure and far more obscure to prying eyes. It could redefine the way vulnerable binary code is scanned, guarded and substituted in real time, right when the vulnerable code is about to be executed - just-in-time and on-the-fly. Features provided by hardware virtualization can be leveraged towards scanning and guarding the vulnerable regions in far more powerful ways than current ones. Trusted execution can be used not only to obscure which vulnerable region is replaced and with what; the entire process could also be driven using secure remote computation, software attestation and more. So, at several levels, combining hardware virtualization with trusted execution could make on-the-fly patch management exponentially more secure and powerful.
While I consider exploring this further myself, I also wanted to invite others to this discussion. So, I thought I would share this insight and seek others' thoughts on the matter. I am especially keen on 0patch's insight, as they appear to use the equivalent of on-the-fly patching. If they or anyone else would like to discuss this further, please use the comments below or reach out to email@example.com.
When it comes to security, you are only as secure as your weakest link. Thus, providing end-to-end security is imperative for any technology that claims to provide a secure execution environment. This post is the first of possibly several, given the potential this topic holds. It briefly summarizes the current state of end-to-end security, and Intel® SGX technology within that. Taking one of several areas of application as an example, it then explains how the same technology could help fill a void, if it were to stretch itself to strive towards end-to-end security.
Providing comprehensive security and supporting trusted execution throughout the execution cycle requires all layers of the software stack working in tandem to provide an equal and uncompromising level of security, preferably by leveraging security-aware hardware. While there are hardware-supported security technologies to enable security at one or several layers of the software stack, perhaps with hierarchical protection, there is no consolidated hardware-enabled security technology that provides comprehensive end-to-end security for an otherwise general-purpose system. The disjoint nature of the current state of hardware-enabled security can largely be attributed to the timeline in which each relevant technology was introduced, perhaps in some cases without a definitive vision of how future technologies would play along with existing ones. Within that, Intel® SGX is especially different, as it was built to distrust the environment outside its scope! While the reasoning for that built-in distrust is understandable, it considerably constrains the scope in which the technology can be used.
Use cases that require interfacing with human interface devices, like modules requiring password or other input from users, or towards secure display, generally tend to fall outside the scope of that trust boundary. Banking and e-commerce applications are examples where the above-mentioned constraint is more obvious. Yet it is exactly in these sectors that there is a pressing need for technologies like Intel® SGX to step up, because of the introduction of more sophisticated technologies in their own vertical, like EMV in the credit card industry, which has shifted the risk to e-commerce, as mentioned here! In fact, as mentioned here, online card-not-present fraud is now 81% more likely than point-of-sale fraud, just because of the switch to EMV.
While Intel® SGX is currently constrained by its built-in distrust model and its scope of use, it can in fact help offset that shift in risk. For that, Intel® SGX will have to broaden its scope and/or provide tighter integration with a few other existing hardware-level security technologies. Specifically, Intel is going to have to focus on adding more seamlessness, if not full-blown integration, along with extending SGX's own feature set, when technologies like Intel VT-d/VT-x/TPM/TXT are used alongside Intel® SGX, to provide a more comprehensive end-to-end trusted execution path for scenarios like the one mentioned above. What is likely to make this harder, if not infeasible, is the decentralized nature of the handling of peripherals, starting from the manufacturer. That is going to come in the way of the single point of authority and autonomy that may be needed to provide an uncompromising level of security, architecturally and by design.
While Intel works out that problem, stop-gap solutions are needed to fill the void in the interim. It looks like academia is tackling this problem and has come up with potential solutions. SGXIO, by researchers at Graz University of Technology, is one such solution that combines multiple technologies to provide an end-to-end trusted execution path. It enlists Intel's TPM/TXT, SGX and VT-d towards that goal. Combining technologies that weren't necessarily designed to work harmoniously could make the implementation somewhat clumsy, if not less feasible for the field. And it is mainly for this reason that it might be worthwhile for Intel to solve this problem at the hardware level, end to end, with little to no burden at the software level.
This post discusses several possible ways in which an Intel® SGX application could be designed. First, the most obvious and simplest of options - choosing a design that is most conducive to the targeted environment. This can be done by studying the technology itself: its design, what it is targeted towards and built for. At that level, with the CPU boundary as its security perimeter, Intel® SGX is designed to provide the smallest possible attack surface, along with hardware-level access checks. Thus, applications that have isolated a small subset of code as sensitive enough to warrant a secure execution environment are best suited for this option. This way, existing functionality can remain nearly intact while porting just a small portion of sensitive code to work within a secure enclave. The downside of this approach is its restricted scope, which won't open up the technology for wider use.
The second design option would allow for a scenario where considerable parts of an application, or a module, or several modules enlisted by the application, call for secure execution, while the rest of the application runs outside that secure context. Such a design requires a good understanding of the application and intimate knowledge of all its dependencies and paths of execution, to decide what to port for secure execution and what not to. Also, interfaces need to be designed for secure enclave code to communicate with the outside, and vice versa, and precautions taken to avoid introducing vulnerabilities in the process.
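For illustration, in the Intel® SGX SDK those enclave boundary interfaces are declared in an EDL (Enclave Definition Language) file, from which the edger8r tool generates the trusted and untrusted bridge code. A minimal sketch of such an interface (the function names here are hypothetical) might look like this:

```
enclave {
    trusted {
        /* ECALL: runs inside the enclave; the buffer is copied in
         * across the boundary per the [in, size=len] attributes. */
        public int ecall_process_secret([in, size=len] const uint8_t *buf,
                                        size_t len);
    };
    untrusted {
        /* OCALL: callable from enclave code to reach the outside world. */
        void ocall_log([in, string] const char *msg);
    };
};
```

The pointer attributes are exactly where the precautions mentioned above matter: getting them wrong can leak enclave memory or let untrusted input bypass validation.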
The third design option, and the most straightforward, would be to port an existing application or library, in its entirety, to run within a secure enclave environment. While relatively trivial to implement, it is neither ideal for significant scenarios nor comparatively more secure. But this is the kind one is likely to encounter widely, especially among freely ported and published software.
What is more interesting is the fourth design option, where a contained version of the OS subsystem itself is ported, along with the targeted application, to run within a secure enclave. This minimizes the dependency on the classic subsystem, adding to the overall application security in one way while reducing it in another, due to the increased attack surface at the software level. This is what projects like Haven and Graphene-SGX attempt, via their own flavors of library OS. While it is really interesting to experiment with this design model, I am not sure of its viability unless such a design is implemented by and within the underlying platform itself.
Lastly, the most interesting of the design options is just-in-time trusted execution. This would require no source-level change at all to the targeted applications, but would introduce a complex just-in-time execution engine, built to switch to the secure execution environment on the fly. The decision as to what is run within the secure environment is likewise made on the fly at execution time, perhaps by studying the context and activities at any point in time, but possibly more than that. This, while non-trivial to implement, is what is likely to make Intel® SGX ubiquitous. Otherwise, the prospects of the technology are likely to rely on the niche sectors currently leveraging it, along with technologies like blockchain dabbling with it lately, especially for its consensus model, but possibly more.
How many is one too many when it comes to migrating cryptography libraries to run within a hardware backed secure environment?
The following is a long-pending post on the topic of running cryptography libraries within a secure environment.
There are quite a number of innovative ways to use Intel® SGX technology. The low-hanging fruit usually gets traction right away, not only because it is easier to attain but also because it satisfies an immediate need. At that level, porting classic libraries to work within a secure environment has received a fair amount of attention. I myself discussed the topic of migrating SQLite to the Intel® SGX environment, along with a proof of concept. An "SGX" keyword search within GitHub alone returns several hundred results. Areas like cryptography and database technologies get far more attention than the rest when it comes to migrating to a secure environment - the former more so than the latter. And that is understandable, mainly because the point of enlisting cryptography is somewhat, if not fully, lost if security cannot be guaranteed in the process. Thus cryptography makes an ideal candidate for technologies like Intel® SGX to embrace, or more importantly, the other way around.
Then again, one might ask: how many is one too many when it comes to porting cryptography libraries, or more specifically, services that heavily enlist cryptography, to run within a secure environment like an Intel® SGX enclave? Intel itself has ported OpenSSL to work within an enclave environment. Then there is WolfSSL, the TaLoS project, LibTomCrypt (source here), mbedTLS and more. So, when I ventured into porting Cryptlib (commercial version here), a security toolkit that provides varied services in its area, to work within an Intel® SGX enclave, I had to ask - is it really necessary? That is, to add support for yet another library, albeit one with its own differentiating points compared to the rest of its kind. I won't distract from the conversation by getting into why Cryptlib is different and why someone might choose it over the rest. I would suggest looking into their website.
That said, the question as to whether or not it is worth porting Cryptlib - yet another cryptography library or, more specifically in this case, a security toolkit with a considerable amount of cryptography functionality - needs addressing. I believe the answer is yes. Any library with an active customer base is worth consideration. To the customers of those libraries, it really comes down to a choice: switch to a tool that provides a more secure way to execute the much-needed functionality, or keep the same library for the differentiating benefits over which it was chosen in the first place, but at the cost of better security. Forcing the active customer base to choose between the two constrains them one way or the other, and there is no need for that, as porting existing libraries to work within a secure enclave is not that hard! Cryptlib, being comparatively vast and more versatile in terms of the services it offers, might make porting a more involved task than for most others. Nevertheless, I believe it is certainly worth the effort.
I have ported Cryptlib to the point where it makes for a functional prototype, which acts as a proof of concept that could be turned production-ready, should there be enough interest among its users in the field. If there are Cryptlib users and customers considering migrating the library to run within a secure enclave environment powered by Intel® SGX technology, please feel free to reach out to firstname.lastname@example.org for further information.
I learnt of Google's decision to block code injection in Chrome processes, and McAfee's reaction to its impact on DLP software providers, including themselves, via Brian Reed's tweet. Code injection is a topic that is viewed as a nightmare by software platform providers, and as something inevitable by some ISVs, like security software makers and developer tool builders. That was a decade or two back - or at least it should have been that way! The fact that we are still stagnating, using classic means to inject, hook and patch, is why we are still having this tug of war between platform providers and other ISVs on this matter.
In their defense, platform providers have tried to provide extensions and APIs as alternatives, to dissuade ISVs from injecting code the way we do. However, these are not nearly powerful enough for ISV needs, and thus ISVs ultimately resort to much cruder means like code injection. And the Google Chrome team, as Microsoft has realized for some time now, is right in thinking that approaches like code injection are a significant cause of instability introduced into their environment. ISVs, on the other hand, have tried to make the injection process more stable by navigating away from chasing byte-code patterns, which are likely to break even with the release of a service pack, towards relying on more static regions that are less likely to break. Nevertheless, it is not 100% fail-safe, and thus the tug of war between platform providers and ISVs continues.
Rather than having to sacrifice useful features because of platform changes that leave them crippled, ISVs ought to have caught up with more sophisticated means of achieving the equivalent of code injection. As long as we are in a headlock working at the same level of the software stack, platform providers, as those hosting that layer, are bound to have their way, and for their own good. Security software makers ought to have moved one layer down already, to be able to better monitor the platform they are trying to secure. Having a thin microvisor or hypervisor layer to accomplish just this is inevitable for any security software maker. In fact, McAfee itself has, or had, DeepSAFE technology that could have helped with just this kind of situation. Of course, as the use of such technologies becomes ubiquitous, we are going to have to battle problems relating to the chaining of microvisors/hypervisors, bottlenecks in that area, and other problems as that layer gets more attention. At that point, hardware support for and awareness of such needs is likely to gain traction. Nevertheless, we should by now have moved out of the layer in which we are fighting this code injection problem, and the fact that we haven't fully is why there is this tug of war. DLP and other software shouldn't have to suffer because we are not catching up to this need fast enough.
In an earlier post I talked about in-memory data protection and how PCI-DSS could do a lot more to impose a specific requirement in this area. An RFE submitted earlier on the topic can be found here and a follow-up on that here.
We don't ignore endpoint protection because perimeter protection is in place. In the same way, we shouldn't ignore data-in-use protection because data-at-rest and data-in-transit protection are in place. One reason data-in-use protection is getting short shrift is the lack of ubiquity of sophisticated technologies in this area. We may have now crossed that chasm and entered a phase where the availability of such technology and its seamless adoption are within sight.
Microsoft's unveiling of "Azure Confidential Computing" is one sign that we have entered that phase. More information on it, from Mark Russinovich, CTO, Microsoft Azure, is here. If a cloud platform, with all its complexities, has adopted technologies towards data-in-use protection, there is no excuse for other platforms, environments and sectors to give this topic, or the technologies pertaining to it, short shrift.
I have talked about Intel® SGX, one such technology, and its use here, here and here, and a proof of concept of the technology at work is here. If you would like to discuss this topic further or need help adopting this technology towards protecting your business assets, feel free to reach us at email@example.com or via the contact form here.
A feasibility study of KryptoGuard™ brand leveraging Intel® SGX using SQLite
In the last post, I mentioned I would take a scenario to explain how the KryptoGuard™ brand leverages Intel® SGX to better protect sensitive data. This post is dedicated to just that.
As part of the feasibility study, I wanted to take a database application and provide it the security and benefit of running in the context of an Intel® SGX enclave. I chose database software because, more often than not, that's where sensitive data finds its home. And within that, I chose SQLite for the proof of concept because it makes for a perfect fit: it is a lightweight, low-footprint open database that has withstood the test of time.
The very nature of doing anything security-centric dictates opting for the smallest possible attack surface. Intel® SGX provides that at the hardware level by reducing the attack surface to the CPU boundary. It makes sense for us to follow suit by doing the same at the software level as well. And for that, SQLite makes a good candidate.
To be able to store certain kinds of sensitive data, businesses are required to abide by relevant regulations, and encryption becomes a mandatory requirement in such cases. It so happens that Intel® SGX's built-in cryptography can be leveraged to enforce that requirement more easily. Leveraging it not only helps meet a need, it also helps make the process simpler. To that end, I wanted to enlist Intel® SGX's protected file system API to demonstrate how easy it is to encrypt and secure a database, and SQLite's design was a seamless fit to demonstrate that as well.
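As a rough, pseudocode-level sketch of what that looks like (this would only build inside an actual SGX SDK enclave project, and the file name and buffer variables are illustrative), the SDK's protected file system library deliberately mirrors the familiar stdio API:

```
#include "sgx_tprotected_fs.h"

/* Inside the enclave: writes through this handle are transparently
 * encrypted with a key derived for this enclave, so a classic fopen
 * of the same file from outside would only ever see ciphertext. */
SGX_FILE *db = sgx_fopen_auto_key("records.db", "w");
sgx_fwrite(sensitive_buf, 1, sensitive_len, db);
sgx_fclose(db);
```

That stdio-like shape is a large part of why retrofitting it under SQLite's file I/O layer is comparatively painless.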
Intel® SGX provides the PSW and SDK software to exercise its hardware features. SQLite's design made it easier for me to use both pieces of software in a mutually complementary way, to show the added security enjoyed by an SQLite database while storing, loading and processing sensitive data, all within an enclave, out of reach of any other layer of software, including higher-privilege software! For clarity, I refer to SQLite software running within an enclave, powered by Intel® SGX, as trusted SQLite, and otherwise as classic SQLite.
It is important to note that a database created with trusted SQLite can only be reopened and processed by trusted SQLite. Thus it enjoys all the security provided by Intel® SGX. For example, that database cannot be opened in a hex editor to get at its content through roundabout means, because it is encrypted (leveraging Intel® SGX in our case), with other Intel® SGX features like sealing applied as and when appropriate.
Also, if a classic SQLite database were to be reopened in the same environment, it is susceptible to memory scraping attacks. Whereas a trusted SQLite database, which can only be loaded within the same trusted environment, is not susceptible to similar attacks. This is because sensitive database data is earmarked as Intel® SGX resources when loaded by trusted SQLite, and hardware-level access control checks are applied to such resources when they are accessed in memory. So, when memory scraper software tries to access them, the hardware-level access checks by Intel® SGX forbid that software from gaining access to the sensitive data, irrespective of the privilege at which the scraper runs, as it is not code running within the expected enclave and so cannot pass those checks.
As you might have inferred from the above, Intel® SGX not only protects data in memory at the hardware level, it also provides the added benefit of making data-at-rest protection simpler in this case! This should fairly explain why I chose SQLite to demonstrate the use of Intel® SGX in protecting sensitive data, which aligns with our KryptoGuard™ brand goals and the use cases we target.
In the previous post, I talked about leveraging Intel® SGX towards data loss prevention. In this post, I will talk about the relevance of Intel® SGX to our KryptoGuard™ brand.
The KryptoGuard™ brand is currently focused on providing services towards enhancing data security, leveraging the latest technologies. This also sets the stage for delivering products focused on data loss prevention. To that end, we have already covered some of the use cases the KryptoGuard™ brand targets.
We expect our potential clients/customers to handle sensitive data like payment card data, health information and personally identifiable information (PII), all of which are subject to varied regulations. In an earlier post, we talked about how PCI-DSS is woefully inadequate in enforcing in-memory requirements for payment card data, which is a frequent target.
Sensitive and/or confidential data could use more sophisticated technologies to better protect it, and Intel® SGX makes a perfect candidate to capitalize on to accomplish just that. Not only can sensitive data be earmarked as resources for secure access within an Intel® SGX enclave, the access control checks that enforce that restriction are performed at the hardware level, thus forbidding compromised software at any other layer, including higher-privilege software, from accessing those resources. This helps protect sensitive data from infractions that target it while it is being processed - the stage where it is most vulnerable, given the lack of maturity in current protection systems at handling this stage.
To top it off, Intel® SGX also provides relatively seamless and easy ways to encrypt sensitive data before it is stored on disk. Cryptographic key maintenance, which would otherwise be a hassle, is alleviated by its built-in cryptography features, which can be leveraged towards protecting sensitive data at rest.
As you might have realized by now, all of this is very conducive to our KryptoGuard™ targeted use cases! In a future post, possibly the next, I will take a scenario to better explain how we were able to use Intel® SGX towards better protecting sensitive data as it is being generated and processed.
Founder of the KryptoGuard™ technology initiative, products and services.