Each January on the SEI Blog, we present the 10 most-visited posts of the previous year. This year's list of top 10 posts highlights our work in deepfakes, artificial intelligence, machine learning, DevSecOps, and zero trust. The posts, all published between January 1, 2022, and December 31, 2022, are presented below in reverse order based on the number of visits.
#10 Probably Don't Rely on EPSS Yet
by Jonathan Spring
Vulnerability management involves discovering, analyzing, and handling new or reported security vulnerabilities in information systems. The services provided by vulnerability management systems are essential to both computer and network security. This blog post evaluates the pros and cons of the Exploit Prediction Scoring System (EPSS), a data-driven model designed to estimate the probability that software vulnerabilities will be exploited in practice.
The EPSS model was initiated in 2019, in parallel with our criticisms of the Common Vulnerability Scoring System (CVSS) in 2018. EPSS was developed alongside our own attempt at improving on CVSS, the Stakeholder-Specific Vulnerability Categorization (SSVC); 2019 also saw version 1 of SSVC. This post focuses on EPSS version 2, released in February 2022, and on when it is and is not appropriate to use the model. The latest release has generated a lot of excitement around EPSS, especially since improvements to CVSS (version 4) are still in development. Unfortunately, the applicability of EPSS is far narrower than people might expect. This post offers my advice on how practitioners should and should not use EPSS in its current form.
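As a concrete illustration (not drawn from the post itself), the sketch below queries the public EPSS API hosted by FIRST for the current score of a single CVE. The endpoint and field names reflect FIRST's published API, but treat this as a minimal sketch rather than production code; the CVE identifier is just an example.

```python
# Minimal sketch: look up an EPSS probability via FIRST's public API.
# Assumes network access and the `requests` package (pip install requests).
import requests

def epss_score(cve_id: str) -> dict:
    """Return the EPSS probability and percentile for one CVE."""
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": cve_id},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    if not data:
        raise ValueError(f"No EPSS record found for {cve_id}")
    record = data[0]
    return {"epss": float(record["epss"]), "percentile": float(record["percentile"])}

if __name__ == "__main__":
    # Prints the current probability and percentile for Log4Shell.
    print(epss_score("CVE-2021-44228"))
```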
Read the post in its entirety.
#9 Containerization at the Edge
by Kevin Pitstick and Jacob Ratzlaff
Containerization is a technology that addresses many of the challenges of operating software systems at the edge. Containerization is a virtualization method in which an application's software files (including code, dependencies, and configuration files) are bundled into a package and executed on a host by a container runtime engine. The package is called a container image, which becomes a container when it is executed. While similar to virtual machines (VMs), containers do not virtualize the operating system kernel (usually Linux) and instead use the host's kernel. This approach removes some of the resource overhead associated with virtualization, though it makes containers less isolated and portable than virtual machines.
While the concept of containerization has existed since Unix's chroot system was introduced in 1979, its popularity has surged over the past several years, following Docker's introduction in 2013. Containers are now widely used across all areas of software and are instrumental in many projects' continuous integration/continuous delivery (CI/CD) pipelines. In this blog post, we discuss the benefits and challenges of using containerization at the edge. This discussion can help software architects analyze tradeoffs while designing software systems for the edge.
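As a hedged illustration (mine, not the post's), the snippet below uses the Docker SDK for Python to run a small container with explicit CPU and memory caps, the kind of resource limits that matter on constrained edge hardware. It assumes a local Docker engine is running and the `docker` package is installed.

```python
# Minimal sketch: run a resource-capped container, as one might at the edge.
# Assumes a running Docker engine and `pip install docker`.
import docker

client = docker.from_env()
output = client.containers.run(
    "alpine:3.18",                    # small base image suited to constrained devices
    "echo hello from a container",
    mem_limit="128m",                 # cap memory use
    nano_cpus=500_000_000,            # cap at 0.5 CPU
    remove=True,                      # clean up the container after it exits
)
print(output.decode())
```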
Read the post in its entirety.
#8 Tactics and Patterns for Software Robustness
by Rick Kazman
Robustness has traditionally been thought of as the ability of a software-reliant system to keep working, consistent with its specifications, despite the presence of internal failures, faulty inputs, or external stresses, over a long period of time. Robustness, along with other quality attributes, such as security and safety, is a key contributor to our trust that a system will perform in a reliable manner. In addition, the notion of robustness has more recently come to encompass a system's ability to withstand changes in its stimuli and environment without compromising its essential structure and characteristics. In this latter notion of robustness, systems should be malleable, not brittle, with respect to changes in their stimuli or environments. Robustness, consequently, is a highly important quality attribute to design into a system from its inception, because it is unlikely that any nontrivial system could achieve this quality without conscientious and deliberate engineering. In this blog post, which is excerpted and adapted from a recently published technical report, we explore robustness and introduce tactics and patterns for understanding and achieving robustness.
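By way of illustration, here is a minimal sketch of one classic robustness tactic, retry with exponential backoff; the report's catalog of tactics and patterns is much broader, and the `TransientError` type here is just a stand-in for any recoverable fault.

```python
# Minimal sketch of the "retry" robustness tactic with exponential backoff.
import random
import time

class TransientError(Exception):
    """Stand-in for a recoverable fault (timeout, dropped connection, ...)."""

def call_with_retry(operation, max_attempts: int = 5, base_delay: float = 0.1):
    """Invoke operation, retrying on transient faults with backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # retry budget exhausted: surface the failure to the caller
            # Exponential backoff plus jitter avoids synchronized retry storms.
            time.sleep(base_delay * 2 ** (attempt - 1) * (1 + random.random()))
```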
Read the post in its entirety.
View a podcast on this work.
#7 The Zero Trust Journey: 4 Phases of Implementation
by Timothy Morrow and Matthew Nicolai
Over the past several years, zero trust architecture has emerged as an important topic within the field of cybersecurity. Heightened federal requirements and pandemic-related challenges have accelerated the timeline for zero trust adoption within the federal sector. Private sector organizations are also looking to adopt zero trust to bring their technical infrastructure and processes in line with cybersecurity best practices. Real-world preparation for zero trust, however, has not caught up with existing cybersecurity frameworks and literature. NIST standards have defined the desired outcomes for zero trust transformation, but the implementation process remains relatively undefined. Zero trust cannot simply be implemented through off-the-shelf solutions, since it requires a comprehensive shift toward proactive security and continuous monitoring. In this post, we outline the zero trust journey, discussing four phases that organizations should address as they develop and assess their roadmap and associated artifacts against a zero trust maturity model.
Overview of the Zero Trust Journey
As the nation's first federally funded research and development center with a clear emphasis on cybersecurity, the SEI is uniquely positioned to bridge the gap between NIST standards and real-world implementation. As organizations move away from the perimeter security model, many are experiencing uncertainty in their search for a clear path toward adopting zero trust. Zero trust is an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets, and resources. The CERT Division at the Software Engineering Institute has outlined several steps that organizations can take to implement and maintain a zero trust architecture, which uses zero trust principles to plan industrial and enterprise infrastructure and workflows. Together, these steps form the basis of the zero trust journey.
Read the post in its entirety.
View a podcast on this work.
#6 Two Categories of Architecture Patterns for Deployability
by Rick Kazman
Competitive pressures in many domains, as well as development paradigms such as Agile and DevSecOps, have led to the increasingly common practice of continuous delivery or continuous deployment: rapid and frequent changes and updates to software systems. In today's systems, releases can occur at any time, possibly hundreds of releases per day, and each can be instigated by a different team within an organization. Being able to release frequently means that bug fixes and security patches do not have to wait until the next scheduled release; they can be made and released as soon as a bug is discovered and fixed. It also means that new features need not be bundled into a release but can be put into production at any time. In this blog post, excerpted from the fourth edition of Software Architecture in Practice, which I coauthored with Len Bass and Paul Clements, I discuss the quality attribute of deployability and describe two associated categories of architecture patterns: patterns for structuring services and patterns for deploying services.
Continuous deployment is not desirable, or even possible, in all domains. If your software exists in a complex ecosystem with many dependencies, it may not be possible to release just one part of it without coordinating that release with the other parts. In addition, many embedded systems, systems residing in hard-to-access locations, and systems that are not networked are poor candidates for a continuous deployment mindset.
This post focuses on the large and growing number of systems for which just-in-time feature releases are a significant competitive advantage, and just-in-time bug fixes are essential to safety, security, or continuous operation. Often these systems are microservice and cloud based, although the techniques described here are not limited to those technologies.
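To make the deployment-pattern category concrete, here is a small sketch of my own devising (not from the book excerpt) of the decision logic behind a canary-style release: a sliver of traffic goes to the new service version, which is promoted only if its observed error rate stays within budget.

```python
# Illustrative sketch of a canary deployment gate; thresholds are arbitrary.
import random

CANARY_FRACTION = 0.05   # share of traffic routed to the new service version
ERROR_BUDGET = 0.01      # maximum tolerated canary error rate

def route_request(request, stable_handler, canary_handler):
    """Send a small, random fraction of traffic to the canary version."""
    handler = canary_handler if random.random() < CANARY_FRACTION else stable_handler
    return handler(request)

def canary_verdict(errors: int, requests_served: int) -> str:
    """Promote the canary only if its observed error rate stays within budget."""
    if requests_served == 0:
        return "keep observing"
    return "promote" if errors / requests_served <= ERROR_BUDGET else "roll back"
```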
Read the post in its entirety.
View an SEI podcast on this topic.
#5 A Case Study in Applying Digital Engineering
by Nataliya Shevchenko and Peter Capell
A longstanding challenge in large software-reliant systems has been to provide system stakeholders with visibility into the status of systems as they are being developed. Such information is not always easy for senior executives and others in the engineering path to acquire when needed. In this blog post, we present a case study of an SEI project in which digital engineering is being used successfully to provide visibility of products under development, from their inception in a requirement to their delivery on a platform.
One of the standard conventions for communicating about the state of an acquisition program is the program management review (PMR). Because of the accumulation of detail presented in a typical PMR, it can be hard to identify the tasks most urgently in need of intervention. The promise of modern technology, however, is that a computer can augment human capacity to identify counterintuitive aspects of a program, effectively increasing its accuracy and quality. Digital engineering is a technology that can
- increase the visibility of what is most urgent and important
- identify how introduced changes affect the whole system, as well as parts of it
- enable stakeholders of a system to retrieve timely information about the status of a product moving through the development lifecycle at any point in time
Read the post in its entirety.
#4 A Hitchhiker's Guide to ML Training Infrastructure
by Jay Palat
Hardware has made a huge impact on the field of machine learning (ML). Many of the ideas we use today were published decades ago, but the cost to run them and the data they required made them impractical. Recent advances, including the introduction of graphics processing units (GPUs), are making some of those ideas a reality. In this post we look at some of the hardware factors that influence training artificial intelligence (AI) systems, and we walk through an example ML workflow.
Why Is Hardware Important for Machine Learning?
Hardware is a key enabler for machine learning. Sara Hooker, in her 2020 paper "The Hardware Lottery," details the emergence of deep learning following the introduction of GPUs. Hooker's paper tells the story of the historical separation of the hardware and software communities and the costs of advancing each field in isolation: many software ideas (especially in ML) were abandoned because of hardware limitations. GPUs enable researchers to overcome many of those limitations because of their effectiveness for ML model training.
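As a minimal illustration of this point (mine, not the post's), the PyTorch snippet below runs one training step on a GPU when one is available and falls back to the CPU otherwise; on typical deep learning workloads the GPU path is dramatically faster. The tiny linear model is just a stand-in for a real network.

```python
# Minimal sketch: one training step on a GPU if present, otherwise the CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"training on: {device}")

model = torch.nn.Linear(1024, 10).to(device)    # toy stand-in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.randn(64, 1024, device=device)        # one synthetic mini-batch
y = torch.randint(0, 10, (64,), device=device)  # synthetic class labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                                  # gradients computed on `device`
optimizer.step()
print(f"loss: {loss.item():.4f}")
```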
Read the post in its entirety.
#3 A Technical DevSecOps Adoption Framework
by Vanessa Jackson and Lyndsi Hughes
DevSecOps practices, including continuous-integration/continuous-delivery (CI/CD) pipelines, enable organizations to respond to security and reliability events quickly and efficiently and to produce resilient and secure software on a predictable schedule and budget. Despite growing evidence and recognition of the efficacy and value of these practices, the initial implementation and ongoing improvement of the methodology can be challenging. This blog post describes our new DevSecOps adoption framework, which guides you and your organization in planning and implementing a roadmap to functional CI/CD pipeline capabilities. We also provide insight into the nuanced differences between an infrastructure team focused on implementing a DevSecOps paradigm and a software-development team.
A previous post presented our case for the value of CI/CD pipeline capabilities and introduced our framework at a high level, outlining how it helps set priorities during the initial deployment of a development environment capable of executing CI/CD pipelines and leveraging DevSecOps practices.
Read the post in its entirety.
#2 What Is Explainable AI?
by Violet Turri
Consider a production line in which workers run heavy, potentially dangerous equipment to manufacture steel tubing. Company executives hire a team of machine learning (ML) practitioners to develop an artificial intelligence (AI) model that can assist the frontline workers in making safe decisions, hoping that the model will revolutionize their business by improving worker efficiency and safety. After an expensive development process, manufacturers unveil their complex, high-accuracy model to the production line, expecting to see their investment pay off. Instead, they see extremely limited adoption by their workers. What went wrong?
This hypothetical example, adapted from a real-world case study in McKinsey's The State of AI in 2020, demonstrates the crucial role that explainability plays in the world of AI. While the model in the example may have been safe and accurate, the target users did not trust the AI system because they did not know how it made decisions. End users deserve to understand the underlying decision-making processes of the systems they are expected to use, especially in high-stakes situations. Perhaps unsurprisingly, McKinsey found that improving the explainability of systems led to increased technology adoption.
Explainable artificial intelligence (XAI) is a powerful tool for answering critical How? and Why? questions about AI systems and can be used to address rising ethical and legal concerns. As a result, AI researchers have identified XAI as a necessary feature of trustworthy AI, and explainability has experienced a recent surge in attention. Despite the growing interest in XAI research and the demand for explainability across disparate domains, however, XAI still suffers from a number of limitations. This blog post presents an introduction to the current state of XAI, including the strengths and weaknesses of this practice.
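As one small, hedged example of XAI in practice (an illustration of ours, not a technique from the post), the snippet below uses scikit-learn's permutation importance, a model-agnostic method, to rank which input features most influence a trained classifier's predictions.

```python
# Minimal sketch: model-agnostic explanation via permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, drop in ranked[:5]:
    print(f"{name}: {drop:.3f}")
```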
Read the post in its entirety.
View an SEI podcast on this topic.
#1 How Easy Is It to Make and Detect a Deepfake?
by Catherine A. Bernaciak and Dominic Ross
A deepfake is a media file (an image, video, or speech recording, typically representing a human subject) that has been altered deceptively using deep neural networks (DNNs) to change a person's identity. This alteration typically takes the form of a "faceswap," in which the identity of a source subject is transferred onto a destination subject. The destination's facial expressions and head movements remain the same, but the appearance in the video is that of the source. A report published this year estimated that more than 85,000 harmful deepfake videos had been detected as of December 2020, with the number doubling every six months since observations began in December 2018.
Determining the authenticity of video content can be an urgent priority when a video pertains to national security concerns. Evolutionary improvements in video-generation methods are enabling relatively low-budget adversaries to use off-the-shelf machine learning software to generate fake content at increasing scale and realism. The House Intelligence Committee discussed the growing risks presented by deepfakes at length in a public hearing on June 13, 2019. In this blog post, we describe the technology underlying the creation and detection of deepfakes and assess current and future threat levels.
The sheer volume of online video presents an opportunity for the United States government to enhance its situational awareness on a global scale. As of February 2020, Internet users were uploading an average of 500 hours of new video content per minute on YouTube alone. However, the existence of a wide range of video-manipulation tools means that video discovered online cannot always be trusted. What is more, as the idea of deepfakes has gained visibility in popular media, the press, and social media, a parallel threat has emerged from the so-called liar's dividend: challenging the authenticity or veracity of legitimate information through the false claim that something is a deepfake even when it is not.
Read the post in its entirety.
View the webcast on this work.
Looking Ahead in 2023
We publish a new post on the SEI Blog every Monday morning. In the coming months, look for posts highlighting the SEI's work in artificial intelligence, digital engineering, and edge computing.