In this blog, we will discuss AI Platform Notebooks. Continuous innovation and speed to market are two critical success requirements for most businesses. Much of that continuous innovation is now driven by the ability to build intelligent applications through machine learning. Teams must also be able to collaborate and reuse each other's work, which shortens time to market.
However, the biggest obstacle to empowering end users is that it is fundamentally difficult for a data scientist to get started building machine learning models: a development environment must be created by installing all of the required packages, libraries, and CUDA drivers needed to execute code on graphics processing units (GPUs).
This procedure is laborious and error-prone, leading to package inconsistencies that can hamper model development. Even after the initial friction is overcome, people discover that they are working in individual silos and can seldom reuse the work of their team members easily.
The idea of shared fate is fundamental to Google Cloud's ambition to be the most trusted cloud in the market: taking an active part in helping customers achieve better security outcomes on our platforms. To help customers build security into their deployments, we publish guidance in the form of security blueprints.
We previously released the Google Cloud security foundations guide and its deployable blueprint to help customers build security into their initial Google Cloud deployment. Today, with the release of our guide and deployable blueprint for securing confidential data in AI Platform Notebooks, we extend that set of blueprints, helping you implement data governance and security policies that protect sensitive data.
What are AI Platform Notebooks?
AI Platform Notebooks is a managed service that provides data scientists and machine learning engineers with a JupyterLab environment in which to experiment, build, and deploy models into production.
Security and privacy are essential for AI because sensitive data is frequently at the heart of AI and machine learning efforts. This blog post discusses how a typical high-level notebook workflow can be secured at each appropriate layer.
AI Platform Notebooks offers enterprises an integrated and secure JupyterLab environment. Enterprise data scientists use AI Platform Notebooks to experiment, write code, and deploy models.
With a few clicks, you can start a running notebook preloaded with key deep learning frameworks (TensorFlow Enterprise, PyTorch, RAPIDS, and many others). Today, AI Platform Notebooks can run on Deep Learning VM images or Deep Learning Containers.
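As an illustration, a notebook instance with a preinstalled framework can be created from the command line. This is a sketch, not the only workflow; the instance name, zone, machine type, and image family below are placeholder values, and it assumes the `gcloud notebooks` commands are available in your Cloud SDK.

```shell
# Create an AI Platform Notebooks instance from a Deep Learning VM image.
# All names and values here are examples; substitute your own.
gcloud notebooks instances create example-notebook \
    --location=us-central1-a \
    --machine-type=n1-standard-4 \
    --vm-image-project=deeplearning-platform-release \
    --vm-image-family=tf2-ent-2-3-cu110-notebooks
```

The `--vm-image-family` flag selects which framework comes preinstalled; Google publishes image families for TensorFlow, PyTorch, and others in the `deeplearning-platform-release` project.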
Enterprise customers may wish to run their JupyterLab notebooks inside secure perimeters and control access to both the notebooks and the data, particularly in highly regulated industries such as financial services, healthcare, and life sciences. AI Platform Notebooks has been built with these customers in mind, with security and access control as foundations of the service.
Recently, we announced that several AI Platform Notebooks security features, including VPC Service Controls (VPC-SC), customer-managed encryption keys (CMEK), and more, are generally available. However, security involves more than features; it is also about how you operate. Let's look at the blueprint, which offers a step-by-step method to protect your data and notebook environment.
AI Platform Notebooks supports standard Google Cloud enterprise security architectures through VPC, Shared VPC, and private-IP restrictions. You can use a Shielded VM for the AI Platform Notebooks compute instance and use CMEK to encrypt the data on your disks.
AI Platform Notebooks can be accessed in one of two predefined user access modes: single user or service account. You can also tailor access based on your Cloud Identity and Access Management (IAM) configuration. Let's look at these security considerations more closely in the context of AI Platform Notebooks.
Compute Engine Security
Shielded VM instances for AI Platform Notebooks offer a set of security features that help defend against rootkits and bootkits. This capability, available through the Notebooks API with Debian 10 DLVM images, helps you protect enterprise workloads from threats such as remote attacks, privilege escalation, and malicious insiders.
This capability relies on advanced platform security technologies such as Secure Boot and Measured Boot, a virtual Trusted Platform Module (vTPM), UEFI firmware, and integrity monitoring. By default, Compute Engine enables the vTPM and integrity-monitoring settings on a Shielded VM notebook instance. Additionally, the Notebooks API provides an upgrade endpoint that lets you update the operating system to the latest DLVM image, either manually or automatically.
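Because a notebook instance is backed by a Compute Engine VM, its Shielded VM settings can be adjusted with standard Compute Engine tooling. A minimal sketch, using a hypothetical instance name and zone (the VM must be stopped before Shielded VM options can be changed):

```shell
# Stop the underlying VM before changing Shielded VM settings.
gcloud compute instances stop example-notebook --zone=us-central1-a

# Turn on Secure Boot in addition to the default vTPM and
# integrity monitoring, then restart the instance.
gcloud compute instances update example-notebook \
    --zone=us-central1-a \
    --shielded-secure-boot
gcloud compute instances start example-notebook --zone=us-central1-a
```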
When you enable CMEK for an AI Platform Notebooks instance, the key you supply, rather than a Google-managed key, is used to encrypt the data on the instance's boot and data disks.
CMEK is the best fit if you require full control over the keys used to encrypt your data. With CMEK, you manage your own Cloud KMS keys: for example, you can rotate or disable a key, or establish a rotation schedule, using the Cloud KMS API.
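For example, a key with an automatic rotation schedule can be created with the Cloud KMS CLI. The key ring name, key name, location, and rotation period below are illustrative:

```shell
# Create a key ring, then a key that rotates automatically every 90 days.
# Replace the --next-rotation-time value with a date in the future.
gcloud kms keyrings create notebook-keyring --location=us-central1
gcloud kms keys create notebook-disk-key \
    --location=us-central1 \
    --keyring=notebook-keyring \
    --purpose=encryption \
    --rotation-period=90d \
    --next-rotation-time=2030-01-01T00:00:00Z

# Disable a specific key version if it should no longer be used.
gcloud kms keys versions disable 1 \
    --location=us-central1 \
    --keyring=notebook-keyring \
    --key=notebook-disk-key
```

Disabling a key version makes any data encrypted with it unreadable until the version is re-enabled, so treat that operation with care.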
Data exfiltration mitigation
VPC Service Controls (VPC-SC) improves your ability to mitigate the risk of data exfiltration from Google Cloud services such as Cloud Storage and BigQuery.
AI Platform Notebooks supports VPC-SC, which prevents data from being read or copied by a service operation to a resource outside the perimeter, such as copying to a public Cloud Storage bucket with `gsutil cp` or to a permanent external BigQuery table with `bq mk`.
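A service perimeter that restricts the relevant APIs can be sketched with the Access Context Manager CLI. The policy ID, project number, and perimeter name below are placeholders:

```shell
# Create a perimeter around a project so that Notebooks, Cloud Storage,
# and BigQuery data cannot be copied to resources outside it.
gcloud access-context-manager perimeters create notebooks_perimeter \
    --policy=POLICY_ID \
    --title="Notebooks perimeter" \
    --resources=projects/123456789012 \
    --restricted-services=notebooks.googleapis.com,storage.googleapis.com,bigquery.googleapis.com
```

With this perimeter in place, a `gsutil cp` from inside the perimeter to a bucket outside it fails with a VPC-SC violation rather than silently exfiltrating data.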
Access control and audit logging
AI Platform Notebooks has its own set of Identity and Access Management roles. Each role is linked to a set of permissions. When you add a new member to a project, you can assign one or more IAM roles to that member through an IAM policy.
Each IAM role carries permissions that allow the member to access specific resources. The IAM permissions for AI Platform Notebooks are used to manage notebook instances; you can create, delete, and modify notebook instances through the Notebooks API. (See this troubleshooting page for details on configuring JupyterLab access.)
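For instance, role bindings can be granted at the project level with the gcloud CLI. The project ID and member addresses below are placeholders:

```shell
# Grant a user full control over notebook instances in the project.
gcloud projects add-iam-policy-binding my-project \
    --member="user:alice@example.com" \
    --role="roles/notebooks.admin"

# Grant a group read-only visibility into notebook instances.
gcloud projects add-iam-policy-binding my-project \
    --member="group:data-science@example.com" \
    --role="roles/notebooks.viewer"
```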
AI Platform Notebooks produces Admin Activity audit logs, which record operations that modify the configuration or metadata of a resource.
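These entries can be queried with Cloud Logging. A sketch that filters the Admin Activity log down to the Notebooks service:

```shell
# Read the ten most recent Admin Activity audit-log entries
# produced by the Notebooks API.
gcloud logging read \
    'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.serviceName="notebooks.googleapis.com"' \
    --limit=10 \
    --format=json
```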
With these security features in mind, consider the following scenarios for the use of AI Platform Notebooks:
- Customers expect the same degree of security and monitoring as their IT infrastructure for their data and notebook instances.
- Customers expect uniform, easy-to-apply security policies when their data science teams access data.
- Customers want to restrict access to sensitive data to specific individuals or teams without limiting broader access.