Jupysec: Auditing Jupyter to Improve AI Security

How can you tell if your Jupyter instance is secure? The NVIDIA AI Red Team has developed a JupyterLab extension to automatically assess the security of Jupyter environments. Jupysec evaluates the user’s environment against nearly 100 rules covering configurations and artifacts the AI Red Team has identified as potential vulnerabilities, attack vectors, or indicators of compromise.

Against the backdrop of attacks on artificial intelligence and machine learning development operations, this presentation will illustrate how jupysec can help audit Jupyter environments against known risks. The Jupyter ecosystem consists of many interconnected components designed to execute Julia, Python, or R code in a client-server model, and it is favored among machine learning researchers for its power and flexibility. During engagements to proactively assess the security of machine learning development pipelines, the NVIDIA AI Red Team frequently encounters a wide variety of Jupyter configurations and deployments.

This presentation will begin by introducing attendees to the Jupyter architecture and its role in machine learning development, then move into demonstrations of Jupyter configurations that may introduce unintentional risks or be deliberately targeted by threat actors. We will demonstrate how, with sufficient access, these vulnerabilities and misconfigurations may be used to impact machine learning development and systems.
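To make the idea of configuration auditing concrete, here is a minimal sketch (not jupysec's actual implementation) of how risky settings might be flagged in a Jupyter server config file. The specific rules, patterns, and messages below are illustrative assumptions; Jupyter configs are Python files (e.g. `jupyter_server_config.py`) whose settings a tool like this can scan line by line:

```python
import re

# Illustrative rules only: each maps a regex over config lines to a finding.
# These mirror the kind of risky settings an auditor might look for.
RULES = {
    r"c\.(NotebookApp|ServerApp)\.token\s*=\s*['\"]['\"]": "Authentication token disabled",
    r"c\.(NotebookApp|ServerApp)\.allow_root\s*=\s*True": "Server allowed to run as root",
    r"c\.(NotebookApp|ServerApp)\.ip\s*=\s*['\"]0\.0\.0\.0['\"]": "Server bound to all interfaces",
}

def audit_config(text: str) -> list[str]:
    """Return findings for risky settings in Jupyter config file contents."""
    findings = []
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#"):  # commented-out settings are inactive
            continue
        for pattern, finding in RULES.items():
            if re.search(pattern, line):
                findings.append(finding)
    return findings
```

For example, `audit_config("c.ServerApp.token = ''")` would report that token authentication is disabled, while the same line commented out would produce no findings.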

Finally, we’ll demonstrate jupysec, a set of rules and a JupyterLab extension that can be used to audit environments against the risks we’ve identified. Using the feedback from jupysec, we’ll harden the environment against the previously demonstrated attacks.

About the Speaker