As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes some recently published SEI reports, podcasts, and webcasts highlighting our work in security incident response, DevSecOps, and cybersecurity training. We have also included a series of three white papers that outline the pillars of AI engineering: scalable, robust and secure, and human-centered.
These publications highlight the latest work of SEI technologists in these areas. This post lists each publication and its author(s), with links to where they can be accessed on the SEI website.
Human-centered AI systems are designed to work with, and for, people. As the desire to use AI systems grows, human-centered engineering principles are critical to guide system development toward effective implementation and minimize unintended consequences. We identify three specific areas of focus to advance human-centered AI:
- the need for designers and systems to understand context of use and sense changes over time
- development of tools, processes, and practices to scope and facilitate human-machine teaming
- methods, mechanisms, and mindsets to engage in critical oversight
For each area, we identify ongoing work as well as challenges and opportunities in developing and deploying AI systems with confidence.
Download the white paper.
Robust and secure AI systems reliably operate at expected levels of performance, even when faced with uncertainty and in the presence of danger or threat. These systems have built-in structures, mechanisms, or mitigations to prevent, avoid, or provide resilience to dangers from a particular threat model. We identify three specific areas of focus to advance robust and secure AI for defense and national security:
- improving the robustness of AI components and systems
- designing for security challenges in modern AI systems
- developing processes and tools for testing, evaluating, and analyzing AI systems
For each area, we identify ongoing work as well as challenges and opportunities in developing and deploying AI systems with confidence.
Download the white paper.
by Hollen Barmer, Rachel Dzombak, Matt Gaston, Jay Palat, Frank Redner, John Wohlbier
Scalable AI is the ability of AI algorithms, data, models, and infrastructure to operate at the size, speed, and complexity required for the mission. Scalability is a critical concept in many engineering disciplines and is crucial to realizing operational AI capabilities. We identify three areas of focus to advance scalable AI:
- scalable management of data and models to overcome data scarcity and collection challenges
- enterprise scalability of AI development and deployment
- scalable algorithms and infrastructure
For each area, we identify ongoing work as well as challenges and opportunities in developing and deploying AI systems at scale.
Download the white paper.
by Justin Novak, Brittany Manley, David McIntire, Sharon Mudd, Angel Luis Hueca, Tracy Bills
The U.S. Department of State, Office of the Coordinator for Cyber Issues, commissioned the Software Engineering Institute (SEI) to create the Sector CSIRT Framework for (1) developing a sector-based computer security incident response and coordination capability and (2) integrating this capability into a larger national cybersecurity ecosystem as applicable. The framework is a guide for helping interested parties develop the policies, processes, and procedures necessary to operationalize a sector computer security incident response team (CSIRT), a uniquely adapted, specialized incident response team. The framework outlines a process that moves the sector CSIRT from concept to reality. It helps the team developing the sector CSIRT understand the current conditions of incident response in the sector (i.e., the as-is state) and how to move it to a robust operating state (i.e., the to-be state). It bridges the gap between these two states using a well-defined roadmap and implementation process.
The Sector CSIRT Framework is intended for individuals and organizations—including CSIRT managers, national CSIRTs, and others—who are developing or implementing a sector CSIRT. Using this framework, these individuals or organizations can move a sector CSIRT from a concept to the reality of a fully operational team.
by Rachel Dzombak, Jay Palat
In this SEI Podcast, Rachel Dzombak and Jay Palat, both researchers with the SEI’s Emerging Technology Center, discuss growth in the field of artificial intelligence (AI), how organizations can hire and train staff to take advantage of the opportunities afforded by AI and machine learning, and the critical need for an AI engineering discipline to grow the AI workforce.
Is Your Organization Ready for AI?
by Rachel Dzombak, Carol Smith
In this SEI Podcast, digital transformation lead Dr. Rachel Dzombak and research scientist Carol Smith, both with the SEI’s Emerging Technology Center at Carnegie Mellon University, discuss how AI engineering can help organizations implement AI systems. The conversation covers the steps that organizations need to take (as well as the hard conversations that need to occur) before they are AI ready.
Download/view/listen to the podcast.
by Aaron K. Reffett and Richard Laughlin
In a DevSecOps world, the software supply chain extends beyond the libraries on which developed software depends. In this webinar, we look at the SolarWinds incident as a worst-case example of the breadth of software supply chain issues confronting complex DevSecOps programs. We explore the architectural aspects of DevSecOps that the software supply chain affects, why they require attention, and potential mitigations for detecting and responding to incidents.
What attendees will learn:
- The software supply chain issue is broad and impacts multiple aspects of DevSecOps.
- Programs need to be aware of how the software they leverage presents risks.
- Mitigation strategies must be put in place to address potential issues at the architectural level.
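One common architectural-level mitigation along these lines is verifying that build artifacts and third-party dependencies match known-good cryptographic digests before they enter the pipeline. The sketch below is illustrative only (the function name and usage are assumptions, not from the webinar), using Python's standard `hashlib` module:

```python
import hashlib


def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare an artifact's SHA-256 digest to a pinned, known-good value.

    Returns True only if the file on disk hashes to the expected digest,
    so a tampered or substituted dependency fails verification.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large artifacts don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256.lower()
```

A CI step might call this check against a pinned manifest and fail the build on any mismatch; package managers offer the same idea natively (e.g., pip's hash-checking mode for requirements files).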
by Rotem D. Guttman and Josh Hammerstein
How do you teach cybersecurity to a middle school student? To a soldier? To some of the best hackers in the country? How do you evaluate all of these audiences’ skills? Cybersecurity training has been an ongoing challenge for decades. The key to making the best use of your training dollar is to craft training that matches your audience’s needs and engages them in a meaningful manner. When you create an experience so enthralling that your audience is logging in on nights and weekends just to continue participating, the value of immersive training truly shines. Join us during this webinar as Rotem Guttman shares the lessons he’s learned over a decade of developing engaging, immersive training and evaluation environments for a variety of audiences.
What attendees will learn:
- how to make cybersecurity training engaging
- what motivates different types of learners
- the history of enhanced cybersecurity training at the SEI