Secure SD-WAN is a great idea, but how much risk can you tolerate?

Author: Jason Pappalexis

In a world of multi-page product specification sheets, SD-WAN technology can be described in just one sentence: Network technology providing WAN link resilience, bandwidth, and control using network edge appliances (existing firewalls and routers or dedicated controllers) as termination points. The devil is in the details, however—especially considering the complexity of network routes and policies within a modern enterprise IT security architecture. In spite of this complexity, the “basic” SD-WAN is by many accounts a relatively straightforward technology to implement with a clear ROI.

Secure SD-WAN is an entirely different story. Add full security stack capabilities to the SD-WAN feature mix (think NGFW + WAN circuit management + WAN optimization), and deployment complexity ramps to a whole new level. Complexity considerations aside, it isn’t surprising that NSS Labs enterprise clients are investigating replacing branch office NGFW appliances with secure SD-WAN technology. Purchase drivers include vendor consolidation (i.e., enterprises want to simplify multi-vendor firewall deployments) and capex reduction (if secure SD-WAN meets organizational security requirements, hardware can be removed at branch offices without increasing risk).

It is important to maintain perspective amidst the hype. Whether an enterprise should replace its NGFW with secure SD-WAN technology depends on its tolerance for risk. To succeed as an NGFW replacement, secure SD-WAN technology must meet both firewall and network routing requirements in full, not just high-level anti-threat or basic routing capabilities. Here, the “secondary” features (which are really the primary features when it comes to production use) make the difference: interoperability, policy management, diagnostics, alert handling, logging, management console workflow, signature quality, firmware development speed, capacity planning, etc. We can leave QoS off the table; if an SD-WAN product cannot meet basic throughput, latency, jitter, and packet loss requirements, it isn’t an option.

Clearly, an organization must consider many factors before making the switch to secure SD-WAN technology. Even the definition should be pinned down prior to initiating a proof of concept, because “security” in SD-WAN varies by vendor: some market it as encryption, others describe it as service chaining, and still others define it as full stack security. Our assessment is that, at least for now, SD-WAN products offered by firewall vendors are the safest choice for organizations intolerant to risk.

While secure SD-WAN technology offers today’s enterprises an enormous opportunity for cost savings, consolidation, and resilience, enterprises must understand all of the factors associated with a successful deployment—the cost advantages of this technology cannot be considered justification for increasing an organization’s risk tolerance. Our new Intelligence Brief on SD-WAN takes a closer look at this technology.

NSS Labs has published a series of Intelligence Briefs on security controls in the US enterprise. The NSS Labs 2019 Intelligence Brief on SD-WAN offers visibility into current enterprise requirements for the technology. The paper is available to subscribers to our research library.

Does your NGFW or NGIPS provide resilient vulnerability protection?

By Ty Smith, Jason Pappalexis

NSS Labs’ test methodologies are constantly evolving to reflect the needs of enterprise consumers. Last year, NSS introduced a “resilience” category to our security testing for inline devices (e.g., next generation firewalls [NGFWs], next generation intrusion prevention systems [NGIPS]). NSS defines resilience as a product’s capability to continue providing protection for a vulnerability against a known exploit after various modifications have been made to the original exploit.

Resilience is an important capability since exploit modifications often require little technical skill to implement. For example, a script-based drive-by exploit delivered in HTML may include a great deal of content that can be easily modified: the names of variables and functions can be changed, the order in which they are declared can be rearranged, whitespace can be added or removed, payloads can be swapped, comments can be changed, and these techniques can be combined.

Enterprises rely on network security products to provide protection for unpatched and/or vulnerable applications in their environments, picking off exploitation attempts “on-the-wire.” The goal of NSS’ resilience testing is to evaluate the quality of a product’s vulnerability protection signatures in order to determine whether they can adequately detect or match the essential exploitation elements of a vulnerability (in other words, the “trigger”), or can be easily bypassed by a relatively novice adversary making simple changes to a readily available POC exploit.

Inline security products are largely dependent on pattern-matching signatures (e.g., “match if this string of bytes is seen within x bytes of this other string of bytes”) to identify malicious content after normalization of network streams. These products are unlikely to further process the content (e.g., decode, deobfuscate, or render the HTML and associated script) [1] prior to inspection due to the impact it would have on throughput.
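The quoted matching rule can be sketched in a few lines of Python. This is a toy illustration, not any vendor’s engine; the marker strings and the 64-byte window below are invented for the example (a real rule set compiles thousands of such patterns into automata over the normalized stream):

```python
def proximity_match(stream: bytes, first: bytes, second: bytes, window: int) -> bool:
    """Return True if `second` occurs within `window` bytes after `first` ends."""
    start = stream.find(first)
    while start != -1:
        end = start + len(first)
        # Only accept the second marker if it begins inside the window.
        if stream.find(second, end, end + window) != -1:
            return True
        start = stream.find(first, start + 1)
    return False

# Hypothetical marker strings for a script exploit.
sig_a, sig_b = b'CreateObject(', b'RunShellcode'
near = b'x = CreateObject(y): RunShellcode()'
far = b'x = CreateObject(y)' + b' ' * 500 + b'RunShellcode()'
print(proximity_match(near, sig_a, sig_b, 64))  # True
print(proximity_match(far, sig_a, sig_b, 64))   # False: markers too far apart
```

Note that the `far` case is exactly why whitespace insertion (discussed below) can defeat proximity-bound rules.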

I wanted to share some resilience results from our most recent NGFW v8.0 and NGIPS v4.0 public tests. In these tests, we used a well-known public drive-by exploit for CVE-2014-6332 (a.k.a. “Windows OLE Automation Array Remote Code Execution Vulnerability”). We chose this exploit because of the variety of content available for modification and for its reliability in delivering payloads to our intended victim (Microsoft Windows 7 SP1 with Internet Explorer 11).


Our set of test cases started with basic content transformations in which a single technique was applied. Techniques were then stacked together, increasing the likelihood that we would modify or break up signature strings used by the device under test. Techniques utilized for this particular exploit included:

  • Adding whitespace (spaces and linefeeds)

  • Renaming of VBScript procedures and variables

  • Varying chr()/chrw()/chrb() VBScript function usage for character conversion

  • VBScript obfuscation with online tool (character/command conversion utilizing Execute()/chr()/CLng() functions)

  • Payload swapping/obfuscation

  • Combining/reordering of VBScript functions

  • String splitting/combining

  • Numeric value/equation modifications

In order to confirm that our traffic was being properly classified and normalized prior to inspection, we also delivered each of our test cases in chunked and/or compressed streams, and/or over a different TCP port.

During resilience testing, 10 out of 10 NGFW products (10 of 10 participating vendors) were bypassed, and six out of seven NGIPS products (five of six participating vendors) were bypassed. Representative subsets of the resilience cases tested, along with total product bypass counts for each test case delivered over TCP port 80, follow (‘P’ = pass/block; ‘F’ = fail/miss):


Figure 1 – NSS Labs’ NGFW 8.0 Resilience Test Results


Figure 2 – NSS Labs’ NGIPS v4.0 Resilience Test Results [2]

As you can see, individual techniques generally had little impact. But as techniques were combined, the contents of the original exploit were obfuscated enough that pattern-matching signatures began to fail, as illustrated by the increase in bypasses. The results also indicate that some vendors’ signatures were more resilient than others.
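The stacking effect can be sketched with a toy example. The PoC fragment, the two literal signatures, and both transformations below are hypothetical, but they mirror the variable renaming and numeric value/equation modification techniques listed above:

```python
# Hypothetical VBScript PoC fragment; both transformations below preserve
# the script's behavior (chrw(1024+1024) evaluates the same as chrw(2048)).
poc = 'aa(1) = chrw(2048)'
signatures = ['aa(1)', 'chrw(2048)']  # strings lifted verbatim from the "PoC"

def caught(src):
    return any(sig in src for sig in signatures)

def rename(src):
    # Technique: rename VBScript variables.
    return src.replace('aa', 'zz')

def numeric(src):
    # Technique: numeric value/equation modification.
    return src.replace('2048', '1024+1024')

print(caught(poc))                   # True:  the original PoC is blocked
print(caught(rename(poc)))           # True:  one technique alone is still caught
print(caught(numeric(poc)))          # True:  one technique alone is still caught
print(caught(numeric(rename(poc))))  # False: stacked techniques bypass both strings
```

Each individual technique leaves at least one literal string intact; only the combination evades both, which is the pattern the test results above reflect.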

While possible, further processing beyond normalization and categorization of streams prior to inspection is historically a relatively expensive proposition for these inline security products. An NGFW or NGIPS could go so far as to send files (including HTML) to the cloud or another on-premises sandbox for execution/deobfuscation and/or more extensive analysis before rendering a verdict to block or allow, but the delay would likely be considered unacceptable to the average user, particularly for something like web browsing. More extensive analysis to detect and block more complex attacks is generally reserved for other layers of a “defense-in-depth” security posture, such as endpoint, proxy, and/or sandbox security technologies, where the content can be more thoroughly normalized and/or detonated prior to inspection.

For NGFW and NGIPS devices, short of additional processing, pattern-matching signatures that identify heavily or suspiciously obfuscated content (rather than the underlying malicious content itself) can provide some additional protection, but they are also prone to false positives. Signatures that match delivery techniques and/or payloads fingerprinted from various exploit kits and threat groups can also be utilized. But the most resilient vulnerability protection signatures require extensive research into and understanding of a CVE in order to write intelligent signatures targeting the essence/trigger of the vulnerability (relatively expensive but resilient), as opposed to simply adding strings seen in public exploits to a list to pattern-match against (relatively cheap but not resilient).
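The difference between the two signature-writing approaches can be illustrated with a sketch. Both patterns below are invented for the example; the trigger-oriented one loosely targets the ReDim Preserve array resizing central to public exploits for CVE-2014-6332:

```python
import re

# A literal signature lifted verbatim from a hypothetical public PoC,
# versus a signature aimed at the vulnerability's trigger construct.
literal_sig = re.compile(re.escape('redim Preserve aa(a2)'))
trigger_sig = re.compile(r'redim\s+preserve\s+\w+\s*\(\s*\w+\s*\)', re.IGNORECASE)

original = 'redim Preserve aa(a2)'
modified = 'ReDim  Preserve  qq( j9 )'  # renamed variables, case/whitespace changes

for sig in (literal_sig, trigger_sig):
    print(bool(sig.search(original)), bool(sig.search(modified)))
# literal: True False -- bypassed by trivial edits
# trigger: True True  -- survives them
```

The trigger-oriented pattern costs more to research and write, but it keys on what the exploit must do rather than on how one particular PoC happens to spell it.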

So, what’s the takeaway? While our resilience testing reveals some limitations, NGFW and NGIPS technologies remain an important part of a “defense-in-depth” security strategy. Our testing indicates which vendors are making the requisite investment in time and expertise to write signatures that offer the most resistance to exploit modifications. Combined with performance and other evasions testing with the same device configurations, we can determine which products offer not only the most effective security, but also which are most efficient at doing so. For additional details, including which vendors missed which resilience cases, individual test reports are available at

[1] Additional NGFW JavaScript obfuscation testing data is available here: Wait, I thought my NGFW Detected That

[2] While the NGIPS device from Vendor 5 blocked all test cases delivered over TCP port 80, nearly every test case delivered over a specific non-standard TCP port bypassed the device.

The NGFW today: A staple of network security in spite of challenges

By Jason Pappalexis

Some technologies are revolutionary—so much so that in less than a generation, it is difficult to imagine living without them. Take, for instance, the automobile. In 1908, the first production Model T Ford was completed. By 1927, close to 15 million had been built, steadily approaching the “ . . . a car in every garage” goal set by Henry Ford himself (in the 1920s, there were likely far fewer garages). Fast forward to the smartphone—first released in the early 2000s, they are now ubiquitous—in the hands of everyone from toddlers to the elderly.

Today, cars and cellphones are pretty much considered essential for modern living. Can the same be said for current next generation firewall (NGFW) technology within IT security architectures?

As the logical evolution of the traditional packet inspection firewall, the NGFW grabbed hold of the prime spot at the network edge, and it hasn’t let go. And for good reason—its broad feature set provides value across organizational teams; for example, network operations (tunneling, routing, etc.), IT security (inspection, application control, decryption, etc.), and even human resources (per user behavior control through URL categorization, etc.). NGFWs are flexible products, which makes them “sticky” in terms of deployment.

However, the technology is not without challenges. When multi-function systems are deployed within latency-sensitive environments, accurate capacity planning is critical. Given the number of variables within enterprise traffic and general uncertainty about how activating NGFW features will impact performance, many organizations oversize. (Nearly one third of respondents in the 2018 NSS Labs Network Security Study (1) indicated that their organization targets 50% above peak throughput requirements for sizing.) This can be expensive. What’s more, enterprises are so latency averse that it is not uncommon for devices to be configured in monitor mode in order to further reduce their risk, but this is at the cost of security.

There is also uncertainty regarding a product’s actual security effectiveness. The largest proportion of respondents in the 2018 NSS Labs Network Security Study reported their minimum acceptable security efficacy is in the range of 95% to 99%,(2) but expectations of protection and actual protection do not appear to always align. NSS’ 2018 testing reveals inconsistent average exploit block rates over time (3) and mishandling of exploits delivered by web-based scripts.(4) Expect to see the number of script threats increase as their success rate becomes more widely known. (e.g., Trend Micro recorded a spike in malicious JavaScript in January 2019; the spike reached 55.4% in Japan and 14.7% in the US.)

Cloud form factors (both virtual appliance and as a service) introduce more unknowns. Do these form factors provide the same protection as traditional on-premises appliances? Perhaps more importantly, do they protect threat vectors that are uniquely associated with the enterprise use of cloud resources? While cloud NGFW technologies are being considered by enterprises for deployment in their IT security architectures, the market is still young and actual deployments vary. In the future, enterprise requirements for cloud-based and cloud-delivered NGFWs are likely to have a significant impact on the growth of this technology.

While the challenges discussed here are not inhibiting NGFW adoption, they do provide an opportunity for vendors to set realistic expectations and inform enterprise customers of product gaps, which will help the customers plan properly and reduce risk. Network inspection technology remains essential in today’s IT security architectures, and it is important for enterprises to understand the technology’s capabilities as it continues to evolve.

NSS Labs has published a series of Intelligence Briefs on security controls in the US enterprise. The NSS Labs Intelligence Brief on NGFW offers visibility into current enterprise requirements for the technology. The paper is available to subscribers to our research library.

  1. 2018 NSS Labs Network Security Study was conducted in the Fall of 2018 and targeted 151 full-time US enterprise IT security professionals representing 28 US industries with a median IT security budget of US$10M – $50M.

  2. The largest proportion of respondents (31.1%) in the 2018 NSS Labs Network Security Study reported minimum efficacy in range 95% to 99%.

  3. 2018 NSS Labs Evolution of Product Testing: Firewall

  4. 2018 NSS Labs Investigative Report: The Impact of Code Obfuscation and Web Delivery Encoding on NGFW Scanning Accuracy

Cats, Time Travel, Advanced Endpoint Protection, and Product Selection

By Jason Pappalexis


In 1957, science fiction author Robert A. Heinlein published The Door Into Summer, a story about suspended animation, time travel, patent law, and a cat. The protagonist is an inventor who is forced into a 30-year sleep by his business partners so they can steal his valuable patents. Once he’s awake, the protagonist uncovers the deception and travels back in time to 1970, where he is able to foil his antagonists’ plans. It’s a classic example of Heinlein’s work, with fast-paced dialog, a compelling plot, and a view into the future as perceived by authors in the mid-twentieth century.

Two underlying themes of the story are compromise and visibility—compromise, in that one often has to choose between two outcomes, neither of which is ideal; and visibility, in that with hindsight, obstacles are much more easily navigated.

This is not unlike the challenges that come with products deployed within IT security architectures. There is a long-recognized trade-off between security and business continuity. There is the hypothetical goal of a completely secure system, 100% protected from threats and data misuse; all files are scanned, data in motion and data at rest are encrypted, browsing is secured, applications are controlled, file system changes are tracked, memory space is inspected, identities are validated with multi-factor authentication, etc. But this (still hypothetical) fully protected system is often restrictive and, unfortunately, difficult to use. Most security practitioners accommodate this dichotomy by carefully balancing visibility and usability with the cost of security (see more in this blog on endpoint visibility).

Advanced endpoint protection (AEP) products offer a mix of capabilities designed to support modern requirements for security effectiveness, threat visibility, and system visibility. NSS Labs’ clients often downselect endpoint security product candidates based on advanced threat protection capabilities—for example, resistance to layered evasions, fileless malware, and ransomware.

However, as stated in the NSS Labs AEP Test Methodology v3.0, “current products and techniques are generally unable to stop even the least capable of the advanced threats, let alone the truly determined advanced persistent threat.” This can make product selection choices easier, but in some cases can also make it more difficult; for example, what if a product ticks the boxes for all requirements except security effectiveness? For this reason, we often work with our clients to prioritize and weight their requirements to enable an easier and more defendable decision.

We are observing a trend in which enterprises acknowledge that they cannot easily verify artificial intelligence (AI) or machine learning (ML) engine technologies, and so instead they create lab-based proofs of concept (PoCs) focused on measurable capabilities such as manageability, interoperability, agent anti-tampering, and threat event reporting (visibility into threats and systems; i.e., data available through the console, API, and logs). This includes exploring ancillary features such as firewall, data at rest encryption, data loss prevention, and device control (all of which are reportedly in use by respondents to the 2018 NSS Labs Network Security Study). Enterprises can then tally the scores from the features they have observed firsthand, add in effectiveness scoring (obtained either internally or from a third party), and gain a strong idea of which product will fit their needs.
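Tallying prioritized, weighted requirements can be sketched as follows. All requirement names, weights, and per-product scores are illustrative, not NSS data:

```python
# Hypothetical requirement weights (summing to 1.0) and PoC scores on a 0-10 scale.
weights = {'manageability': 0.25, 'interoperability': 0.20,
           'anti_tampering': 0.15, 'threat_reporting': 0.15,
           'security_effectiveness': 0.25}

scores = {
    'Product A': {'manageability': 8, 'interoperability': 7, 'anti_tampering': 9,
                  'threat_reporting': 6, 'security_effectiveness': 7},
    'Product B': {'manageability': 6, 'interoperability': 9, 'anti_tampering': 7,
                  'threat_reporting': 8, 'security_effectiveness': 9},
}

def weighted_total(product_scores, weights):
    # Each requirement contributes its score scaled by its priority weight.
    return sum(product_scores[req] * w for req, w in weights.items())

# Rank products by weighted total, highest first.
for name, s in sorted(scores.items(), key=lambda kv: -weighted_total(kv[1], weights)):
    print(f'{name}: {weighted_total(s, weights):.2f}')
```

Adjusting the weights to match organizational priorities (e.g., raising `security_effectiveness` for a risk-intolerant enterprise) can change the ranking, which is exactly why agreeing on weights up front makes the final decision more defendable.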

Visibility, effectiveness, and usability go hand in hand. The endpoint security product space must continue to evolve in order to match the needs of the enterprise, including higher detection capabilities and the provision of an appropriate level of forensic data. NSS is looking forward to further evaluating the product space and reporting on differences that will help enterprise security teams understand which products will best fit their environments. 

It would be nice to time travel 30 years into the future and see how cyberthreats have evolved in order to better guide today’s product roadmap. Unfortunately, we have to let this technology play out in its own due time.

NSS Labs has published a series of Intelligence Briefs on security controls in the US enterprise. The NSS Labs 2019 Intelligence Brief on Advanced Endpoint Protection (AEP) offers visibility into current enterprise requirements for the technology. The paper will be available to subscribers in our research library.

NSS Labs test results using the AEP Test Methodology v3.0 were released earlier this week.

On the Safe Handling of a Double-Edged Sword: The NSS Labs 2018 Encryption Security Study

By Will Fisher

The etymology of the metaphor “double-edged sword” is in contention—some suggest it originates from the Arabic, سَيْف ذُو حَدَيْن (sayf ḏū ḥadayn), while others argue for an English origin circa 15th century CE.[1] Regardless of its actual origin, it is a good analogy for the dilemma organizations face as encrypted communications become more and more commonplace.

On the one hand, encrypting data supports its confidentiality, which satisfies privacy advocates. However, encrypted traffic also provides new opportunities for threat actors. The NSS Labs series, The Encrypted Web,[3] discusses the advantages of encrypted channels for cybercriminals: encrypted traffic is scanned less frequently, infection success rates are higher, less sophisticated attacks succeed more often, and domain validation certificates are available at low or no cost. Additionally, it has been reported that attackers employ encrypted channels specifically to obfuscate malicious code from network security devices.[4]

Earlier this year, we conducted the NSS Labs 2018 Encryption Security Study, which aimed to a) enumerate the proportion of US enterprises that terminate, decrypt, and scan SSL/TLS traffic, b) determine the types of traffic commonly terminated, decrypted and scanned, c) resolve TLS/SSL versions currently in use, and d) quantify the type and frequency of threats using encrypted channels discovered in US enterprises in the past six months.

The study was part of a quantitative, two-arm study conducted through a survey of 141 role-verified full-time IT security professionals with a minimum of three years in role. Qualified respondents actively managed security technologies for organizations with a minimum of 500 employees.

When participants were asked if their organization terminated, decrypted, and scanned encrypted traffic, the majority of respondents (n=133; 94.3%) indicated their organization did. Those who did not do so cited performance impacts as the primary deterrent. Inbound traffic scanned via reverse proxy was the predominant method of decryption and scanning in our sample (n=114, 80.9%), with outbound decryption and scanning via forward proxy in lesser proportion (n=60, 42.6%). The most commonly decrypted and scanned traffic protocol reported by survey participants was HTTPS, followed by SMTPS and IPSec.

Participants were asked which technologies their organizations employed to decrypt and scan traffic. Results suggest a number of technologies are employed for this purpose with dedicated SSL appliances the most frequently reported in our sample.

Several reports[5][6][7] have revealed that attackers are using encrypted channels to deliver and obfuscate malicious code; therefore, we asked participants which threats using encrypted channels their organizations had detected in the last six months. Cross-protocol attacks were the most commonly reported (by 50.4% of participants), followed by renegotiation attacks (45.4%) and downgrade attacks (36.2%). Next, we asked respondents who reported that they have detected such threats to describe how frequently the threats are detected. Our results showed a substantial proportion of organizations detect these threats daily, or even hourly.

Employing effective strategies and technology to block threat actors from exploiting data encryption is more important than ever—and the pressure is mounting. For example, Australia just passed the Assistance and Access Bill,[8] which can effectively force ISPs, telcos, and other organizations to build encryption back doors for law enforcement use. For the enterprise, encryption security is indeed a double-edged sword.

The NSS Labs Security Insight Study includes results from both the NSS Labs 2018 Encryption Security Study and the NSS Labs 2018 Data Center Security Study. If you’re interested in encryption or data center security, we recommend giving it a read, and if you’d like to discuss these topics, we’d love to speak with you! Email us and reference this blog.

Will Fisher is a Senior Research Analyst for the NSS Labs Enterprise Architecture Research Group (EARG), whose charter is to help enterprises solve security challenges. Will is a research scientist who holds a PhD in Experimental Psychology. He has worked at NSS Labs for the last two and a half years performing and analyzing qualitative and quantitative research into enterprise IT security.

Gain access to NSS Labs’ group test reports and Analyst Briefs in our Research Library. Follow us on Twitter (@NSSLabs) to keep informed as new research is released.

[1] double-edged sword. (n.d.). In Wiktionary, The Free Dictionary. Retrieved December 7, 2018, from

[2] Helme, S. (August 24, 2018). Alexa Top 1 Million Analyses – August 2018. Retrieved from

[3] Pappalexis, J. (2016). The Encrypted Web Series. NSS Labs. Retrieved from

[4] Basu, S. (2017). The fight within encryption. Cyber Security: A Peer-Reviewed Journal, 1(1), 44–47.

[5] Ponemon Institute. (2016). Hidden Threats in Encrypted Traffic: A Study of North America & EMEA. Retrieved from

[6] Cisco. (2018). Encrypted Traffic Analytics. Retrieved from

[7] ZScaler. (2018). February 2018 Zscaler SSL Threat Report. Retrieved from

[8] Moon, M. (December 7, 2018). Australia’s controversial anti-encryption bill passes into law. Retrieved from



NSS Labs' 2019 Cyber Predictions

By Will Fisher, Jason Pappalexis, Mike Spanbauer, David Thomason, John Whetstone

Those of you in the trenches of cybersecurity likely share a perspective held by many of us at NSS Labs: "the only constant is change." In other words, as threats continue to evolve, so must the employees who use the data and the products deployed to protect the data. Without evolution, the battle on cybercrime is lost.

At NSS, we are exposed to a broad range of cybersecurity products almost daily, and we have opinions on how these products meet enterprise requirements as well as thoughts on the cybersecurity industry in general. In keeping with NSS tradition, we are sharing some of these opinions, and we hope you’ll find them as interesting to read as they were to write.


Security technology vendors must walk a fine line between false positives and detection accuracy; the more sensitive a detector is, the more type I errors it will generate. The volume of alerts generated by sensitive instruments designed to detect a growing number of threats is unmanageable for most organizations. If a vendor’s detection engine is highly aggressive, the operational burden of hunting down false positives could be costlier than mitigating the malware infection itself. Additionally, these false positives shift responder attention away from real incidents.
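The operational cost of an aggressive detector follows directly from the base rate of malicious events. A quick calculation with illustrative numbers (not survey data) shows why even a 1% false positive rate can swamp responders:

```python
# Illustrative numbers: a detector inspecting 1,000,000 events/day
# where only 0.1% of events are actually malicious.
events_per_day = 1_000_000
base_rate = 0.001  # true malicious fraction
tpr = 0.99         # detection rate (sensitivity)
fpr = 0.01         # type I error rate

true_alerts = events_per_day * base_rate * tpr          # ~990
false_alerts = events_per_day * (1 - base_rate) * fpr   # ~9,990
precision = true_alerts / (true_alerts + false_alerts)

print(f'alerts/day: {true_alerts + false_alerts:,.0f}')
print(f'precision: {precision:.1%}')  # roughly 9%: most alerts are false positives
```

Because malicious events are rare relative to total traffic, false positives dominate the alert queue even with a highly accurate detector, which is the base-rate effect driving the operational burden described above.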

The cyberskills gap (1.8 million unfilled positions predicted in 2022[1]) has led to security vendors offering more and more automated security processes that leverage machine learning and AI technologies (the AI industry is predicted to be a $1 trillion market by 2050[2]). While it is somewhat unrealistic to expect a security product to have zero false positives, the truth is that improvements in heuristics and behavioral analyses can help considerably. I predict significant relief with the application of advanced modeling in 2019. The vendor that can offer a product that provides excellent security efficacy and reliable performance while minimizing the operational burden will have a significant competitive advantage.

"SSL/TLS DOMAIN-VALIDATED CERTIFICATES WILL BEGIN TO BE CONSIDERED AS HIGHER RISK" – Jason Pappalexis, Managing Director, Enterprise Architecture Research Group

I predict that in 2019, domain-validated (DV) certificates (as opposed to organization-validated certificates and extended-validated certificates) will begin to be considered indicators of elevated risk. Furthermore, within the next three years, cybersecurity technologies that scan web traffic (e.g., secure web gateways [SWGs], next generation firewalls [NGFWs]), and embedded URLs within SMTP traffic (e.g., SWGs) will expand their policies to more precisely control traffic according to certificate type.

Why do I predict this? Many certificate authorities (CAs) offer free DV certificates, primarily in response to the mid-2015 initiative by the open-source certificate authority, Let’s Encrypt. These free DV certificates remove the financial barriers to entry both for legitimate website designers and criminals. Organization validation (OV) certificates and extended validation (EV) certificates offer lower risk than DV certificates because they require background checks and must be processed manually, both of which are considered important in the effort to reduce crime. This also means they take longer to complete and are more expensive. While DV certificates are not on their own indicators of malicious intent, they will be part of the equation potentially as early as the end of 2019.
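One way a scanning product could begin to distinguish certificate types is by inspecting subject fields: DV certificates generally omit the organization, while OV and EV certificates include it (and EV certificates additionally carry jurisdiction and serial-number subject fields). The sketch below applies that heuristic to the subject structure returned by Python’s `ssl.SSLSocket.getpeercert()`; it is an approximation only, since the authoritative signal is the CA-specific certificate-policy OID:

```python
# Rough classifier over the subject fields of a getpeercert()-style dict.
# Heuristic, not authoritative: real products would check policy OIDs.
def validation_level(cert: dict) -> str:
    # Flatten the tuple-of-RDN-tuples subject into a simple dict.
    subject = {k: v for rdn in cert.get('subject', ()) for k, v in rdn}
    if 'jurisdictionCountryName' in subject or 'serialNumber' in subject:
        return 'EV (likely)'
    if 'organizationName' in subject:
        return 'OV (likely)'
    return 'DV (likely)'

# Hypothetical certificates in getpeercert() format.
dv_cert = {'subject': ((('commonName', 'example.org'),),)}
ov_cert = {'subject': ((('countryName', 'US'),),
                       (('organizationName', 'Example Corp'),),
                       (('commonName', 'example.com'),),)}
print(validation_level(dv_cert))  # DV (likely)
print(validation_level(ov_cert))  # OV (likely)
```

A policy engine could then treat `DV (likely)` results as an elevated-risk input alongside other signals, rather than as a block/allow verdict on its own.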


Enterprises deploy NGFWs as the first and last line of defense for systems that are on premises. Enterprise expectations for their efficacy in protection against threats are high; the largest percentage of respondents in the 2018 NSS Labs Network Security Study (31.1%) indicated the minimum acceptable security efficacy for these devices is in the range of 95 – 99%.

However, enterprise expectations for protection do not appear to be aligned with the reality of protection. NSS test results for NGFW show inconsistent historical average exploit block rates over time: 89.5% in 2012 (NGFW Test Methodology v6.0); 97.2% in 2016 (v7.0), and 92.7% in 2017 (v8.0).[3] NSS testing has also revealed specific gaps in detection: a 2018 investigation[4] conducted by NSS revealed that ten leading NGFW products were negatively affected, and some quite significantly so, when exploits delivered by JavaScript were transformed by one or more common code obfuscation techniques and web transport encoding mechanisms.

While NGFWs crossed the chasm long ago, I predict that an increased awareness of gaps in efficacy will drive many enterprises to closely evaluate the capabilities of their NGFWs and either tune where necessary or bolster adjacent technologies.


Over the last few years, a technology has emerged that is designed to ease the challenges of managing branch to headquarters WAN links as well as the operational challenges associated with provisioning a new site. Software-defined wide area networking (SD-WAN) technology potentially can simplify the way in which administrators manage policies to ensure business resilience and consistent application experiences across WAN links.

In addition to WAN connection feature sets, some SD-WAN vendors are expanding their offerings to include network security technology. At a high level, this potentially reduces the number of appliances at branch sites, which in theory may reduce failure rates and enable configuration parity across all devices. While the market is still relatively young, the appeal is clear. NSS has facilitated a number of enterprise architecture discussions on the legitimacy, efficacy, and value of SD-WAN technology. These discussions include whether the technology is mature enough to collapse WAN link management and security into a single offering.

In my opinion, 2019 will be the year that SD-WAN technology makes its mark, accelerating the enterprise move to a converged edge. And yes, this is bigger than WAN management. The vendors that succeed here will also become challengers in the WAN optimization and load balancing spaces, as well as to CPE connection equipment vendors. Stay tuned.


IoT security challenges are going to hit a new high in 2019. This will further raise privacy concerns in the United States, turning the spotlight on a number of issues, including how the intelligence community exploits IoT vulnerabilities, why these vulnerabilities exist, and why the commercial sector has not developed a satisfactory bolt-on solution to mitigate them. Pressure may also mount on service providers to protect their customers.

I predict an industrial IoT event in 2019 that will impact critical infrastructure. Many predictions were made about an attack on the US electric grid in the wake of the 2016 Ukraine incident. 2019 could be the year those predictions come true. If such an attack should occur, the likelihood of a wide-scale electrical outage is extremely low, but unfortunately, an attack on a single or even a few generation systems could cause an outage that puts human life at risk and thus should not be treated lightly.

Other industries within critical infrastructure that rely on the IoT are easier targets and more likely to be hit. For example, the oil and gas industry is highly dependent on SCADA systems for the operation of refineries, oil platforms, and pipelines. A successfully coordinated attack that causes physical destruction of even a single refinery would at a minimum generate uncertainty and doubt in the industry, and at worst cause loss of life.


It's no secret that organizations subject to stringent regulatory mandates have been inhibited from adopting the cloud. In many cases, this isn't due to a lack of desire, but rather to factors such as poor visibility into what data is stored, who is accessing it, and who is sharing it, as well as limitations in the log details provided by cloud security products.

To address these challenges, many organizations are turning to cloud access security broker (CASB) technology. CASB products facilitate the governance of cloud services by enabling visibility into which cloud services are being used and how, by applying organizationally defined policies across all cloud services, and by reducing the risk associated with malware and data loss/exfiltration in the cloud.
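At its simplest, that policy layer is a set of rules matched against observed cloud activity. The sketch below is a hypothetical illustration (the event fields, rule set, and default-deny behavior are invented for this example, not any CASB product's schema) of how organizationally defined policies might be applied uniformly across cloud services:

```python
# Hypothetical CASB-style policy check: each rule names a cloud service
# category, an action, and whether that action is allowed for a given
# data classification.
RULES = [
    {"service": "sanctioned_storage",   "action": "upload", "classification": "pii",    "allow": True},
    {"service": "unsanctioned_storage", "action": "upload", "classification": "pii",    "allow": False},
    {"service": "unsanctioned_storage", "action": "upload", "classification": "public", "allow": True},
]

def evaluate(event: dict) -> str:
    """Return 'allow' or 'block' for a cloud activity event; default deny."""
    for rule in RULES:
        if (rule["service"] == event["service"]
                and rule["action"] == event["action"]
                and rule["classification"] == event["classification"]):
            return "allow" if rule["allow"] else "block"
    return "block"  # unmatched activity is denied by default

verdict = evaluate({"service": "unsanctioned_storage",
                    "action": "upload",
                    "classification": "pii"})
```

The value for compliance comes from centralization: one rule table governs every cloud service, and every verdict can be logged, which addresses the visibility and audit gaps described above.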

I predict that in 2019, more enterprises subject to regulatory compliance will adopt CASBs as the means to enable their organization's migration to the cloud. This prediction is supported by a recent NSS Labs study,[5] in which 29.2% of respondents reported that their enterprises were subject to regulatory mandates and had turned to CASB technology to address their challenges. 91.5% of these respondents indicated that their organization's CASB product was either "extremely effective" or "very effective" at accomplishing regulatory compliance goals for the cloud. This is a great sign for both enterprises and CASB vendors.

The predictions listed are the opinions of the contributors and do not necessarily reflect positions held by NSS Labs.

[1] 2017 Global Information Security Workforce Study: Benchmarking Workforce Capacity and Response to Cyber Risk, report can be found here:

[2] Helfstein, S. Investing in Artificial Intelligence and Automation. Morgan Stanley. Available online at

[3] 2017 NSS Labs Evolution of Product Testing - Firewall

[4] 2018 NSS Labs Investigative Report: The Impact of Code Obfuscation and Web Delivery Encoding on NGFW Scanning Accuracy

[5] NSS Labs 2018 Cloud Access Security Broker Study

Follow us on Twitter (@NSSLabs) to keep informed as new research is released.

Getting to Know Data Center Security: The NSS Labs 2018 Data Center Security Study

By Will Fisher

Richard Feynman, the American theoretical physicist, talked about the difference between knowing the name of something and knowing something. He described a conversation he had with his father:

"See that bird? It's a brown-throated thrush, but in Germany it's called a halsenflugel, and in Chinese they call it a chung ling and even if you know all those names for it, you still know nothing about the bird—you only know something about people; what they call that bird. Now that thrush sings, and teaches its young to fly, and flies so many miles away during the summer across the country, and nobody knows how it finds its way."1

As Feynman explained, in science, the best way to learn about something is to take it apart. I consider this an excellent analogy to our work at NSS Labs testing data center security products. While there are many products on the market today that carry the name “data center security,” we can’t know if they are what they purport to be until we take them apart.

In order to understand data center security products and evolve our test methodology (i.e., how we will take the products apart), we have to understand the current environmental and operational realities of these products.

Earlier this year, we conducted the 2018 NSS Labs Data Center Security Study, the aim of which was to gather information on how organizations are using security technologies to protect their data centers: which technologies they are deploying, in what form, and where. It also aimed to determine the volume and composition of data center traffic and to establish which performance factors enterprises consider most important.

It was a quantitative, two-arm study conducted through a survey of 141 role-verified IT security professionals, each with a minimum of three years in role. Qualified respondents were employed full time at organizations with a minimum of 500 employees and actively managed the security technologies used to protect their data centers.

Results reinforce the maxim that there is no one-size-fits-all when it comes to security architecture; however, some interesting commonalities were observed. For example, the majority of respondents reported their organizations deploy anti-malware agents, web application firewalls, and stateful firewalls to protect their data centers (90+% of study respondents reporting across all verticals), with DDoS appliances/services and intrusion prevention systems also quite common (80+% of respondents reporting across all verticals).

Another interesting finding was that more than 70% of study participants reported their data center security capabilities were cloud-delivered, and more than 50% indicated their organizations still deploy physical appliances on premises dedicated to data center security.

The report also includes data on the types of threats detected at data centers (e.g., HTML injection was the most frequently reported), how often these threats were detected, and respondents’ organizational priorities for remediating those threats.

Our study provided us with valuable insights into data center security products and their environments. And, much like the brown-throated thrush, we found that the environment a data center security product resides in can significantly influence its behavior. We hope you will find this data as useful as we did in learning about data center security.

The NSS Labs Enterprise Architecture Research Group’s mission is to provide research and advisory services that are accurate, reliable, and actionable. The NSS Labs 2018 Security Insight Study can be found here and includes results from both the NSS Labs 2018 Data Center Security Study and the NSS Labs 2018 Encryption Study. Stay tuned for our blog on the encryption study. If you wish to discuss data center or encryption security, we’d love to speak with you. Email us and reference this blog.

Will Fisher is a Senior Research Analyst for the NSS Labs Enterprise Architecture Research Group (EARG), whose charter is to help enterprises solve security challenges. Will is a research scientist who holds a PhD in experimental psychology and has worked for NSS Labs for the last two and a half years performing and analyzing qualitative and quantitative research into enterprise IT security.

Gain access to NSS Labs’ group test reports and Analyst Briefs in our Research Library.

1 Feynman, R. P. (1969). What is science? The Physics Teacher, 7(6), 313–320. Full text available here.

TAGS: EARG, Data Center Security, Security Insight, Primary research, SSL/TLS
