Are Google Vertex AI’s Security Flaws Putting Your Data at Risk?

November 18, 2024

Researchers at Palo Alto Networks Unit 42 have disclosed two critical security vulnerabilities in Google’s Vertex AI machine learning (ML) platform. The flaws could allow attackers to gain unauthorized access to and control over ML models and data in the cloud, opening the door to privilege escalation and data exfiltration. Vertex AI, launched in May 2021, is Google’s managed platform for training and deploying custom ML models and AI applications at scale; it includes features for automating and monitoring MLOps workflows, with Vertex AI Pipelines as a central component. The findings underline the risk of using the platform without appropriate safeguards: if exploited, the vulnerabilities could compromise the integrity and security of entire AI environments, and they argue for increased vigilance and tighter security controls.

Exploiting the Vulnerabilities in Vertex AI

The first vulnerability involves Vertex AI’s Pipelines feature: by manipulating custom job pipelines, an attacker can create a job that runs a malicious container image. That job opens a reverse shell, effectively a backdoor into the environment. Because the job runs inside a tenant project with broad permissions, attackers can abuse those permissions to reach internal Google Cloud resources and services. The flaw points to a gap in permission management and creates a significant privilege escalation risk.

To exploit the flaw, an attacker submits a maliciously crafted custom job pipeline that bypasses the platform’s intended controls. Once the job executes, it connects back to the attacker, providing a foothold for unauthorized access to sensitive data and systems. The broad permissions available inside the tenant project then let the attacker traverse further layers of the Google Cloud environment, putting otherwise well-secured resources at risk. The finding calls for a reevaluation of permission policies and stricter control over who can submit custom jobs on the Vertex AI platform.
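To make the mechanism concrete, the sketch below (Python, using the google-cloud-aiplatform SDK) shows how a Vertex AI custom job is submitted and how the caller, not the platform, chooses the container image the job runs. The project, region, bucket, and image URI are placeholders, and this is an illustration of the API surface involved rather than the researchers’ proof of concept: whoever can submit such a job controls code that executes with the job’s service account permissions inside the tenant project.

    # Minimal sketch of a Vertex AI custom job submission; project, region,
    # bucket, and image URI are placeholders, not real resources.
    from google.cloud import aiplatform

    aiplatform.init(
        project="my-project",              # placeholder project ID
        location="us-central1",            # placeholder region
        staging_bucket="gs://my-staging-bucket",
    )

    job = aiplatform.CustomJob(
        display_name="example-custom-job",
        worker_pool_specs=[{
            "machine_spec": {"machine_type": "n1-standard-4"},
            "replica_count": 1,
            "container_spec": {
                # Whatever image is referenced here executes inside the tenant
                # project with the job's service account permissions, which is
                # why image provenance and submission rights are the key controls.
                "image_uri": "us-docker.pkg.dev/my-project/my-repo/my-image:latest",
            },
        }],
    )

    job.run()  # blocks until completion; job.submit() runs asynchronously

Restricting job-submission rights and pinning jobs to vetted, internally built images are therefore the natural control points.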

Poisoned Models and Lateral Movements

The second vulnerability concerns the deployment of poisoned models within a tenant project. A malicious actor can deploy a compromised model that opens a reverse shell when it runs. The attack abuses the read-only permissions of the “custom-online-prediction” service account to enumerate and access Kubernetes clusters and their credentials. With that access, attackers can move laterally from the Google Cloud Platform (GCP) environment into Google Kubernetes Engine (GKE), effectively compromising the Kubernetes environment.
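The enumeration step relies on ordinary GKE APIs rather than anything exotic. The short sketch below, with a placeholder project ID, shows the kind of cluster listing a read-only identity can perform; defenders can run the same call under each service account to see exactly what it exposes.

    # Hedged illustration of GKE cluster enumeration; the project ID is a
    # placeholder. Running this under a given service account shows which
    # clusters that identity can see.
    from google.cloud import container_v1

    client = container_v1.ClusterManagerClient()
    response = client.list_clusters(parent="projects/my-project/locations/-")
    for cluster in response.clusters:
        print(cluster.name, cluster.location, cluster.endpoint)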

This vulnerability demonstrates the risk of deploying unverified or compromised models and underscores the importance of stringent security measures and auditing practices. Lateral movement lets attackers escalate from their initial foothold in the Google Cloud environment, potentially leading to widespread impact and data compromise. The scenario highlights the need for developers to verify and validate models before deployment to guard against such breaches.
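The deployment path itself is straightforward, which is exactly why model provenance matters. The sketch below shows the standard Vertex AI upload-and-deploy flow with placeholder names; the serving container image referenced at upload time runs inside the tenant project, so an unvetted image is effectively unvetted code.

    # Minimal sketch of the Vertex AI model upload/deploy flow; names and URIs
    # are placeholders. The serving container image is the trust boundary:
    # whatever it contains runs inside the tenant project.
    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    model = aiplatform.Model.upload(
        display_name="example-model",
        artifact_uri="gs://my-bucket/model/",   # exported model artifacts
        serving_container_image_uri="us-docker.pkg.dev/my-project/my-repo/serving:latest",
    )

    endpoint = model.deploy(machine_type="n1-standard-2")
    print(endpoint.resource_name)

Verification before this step, checking where the serving image was built, who pushed it, and what it contains, is the safeguard the research argues should be standard practice.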

Google’s Response and the Importance of Security Audits

Following the responsible disclosure of these vulnerabilities, Google has addressed the identified security gaps, reflecting its commitment to securing its AI and ML platforms and protecting user data. Preventing similar incidents, however, still requires organizations using Vertex AI to run comprehensive security audits and keep strict control over model deployments, so that risks are identified and mitigated before malicious actors can exploit them.

Moreover, thorough auditing of permissions required for deploying models in tenant projects is essential to prevent attackers from leveraging unverified models and compromising entire AI environments. Security audits should regularly evaluate the permissions associated with various service accounts and ensure that they align with the principle of least privilege. This approach minimizes the risk of extensive permissions being abused and helps maintain the integrity and security of the AI infrastructure.
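As one concrete starting point, the IAM bindings on a project can be pulled programmatically and scanned for service accounts holding overly broad roles. The sketch below uses the Resource Manager client with a placeholder project ID and an assumed shortlist of roles to flag; a real audit would compare each binding against the permissions the workload actually needs.

    # Hedged sketch of a permissions audit: pull the project's IAM bindings and
    # flag service accounts holding broad roles. The project ID and the role
    # shortlist are assumptions for illustration only.
    from google.cloud import resourcemanager_v3

    client = resourcemanager_v3.ProjectsClient()
    policy = client.get_iam_policy(resource="projects/my-project")

    for binding in policy.bindings:
        if binding.role in ("roles/owner", "roles/editor"):
            for member in binding.members:
                if member.startswith("serviceAccount:"):
                    print(f"{member} holds broad role {binding.role}")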

Broader Implications and Moving Forward

Taken together, the two flaws show how a single malicious pipeline job or poisoned model can become a foothold for escalating across an entire cloud AI environment. For organizations running Vertex AI, the practical takeaways are the ones the research itself points to: restrict who can submit custom jobs, validate models before they reach tenant projects, audit service account permissions against the principle of least privilege, and treat model deployment with the same scrutiny as any other code running in production. Google’s fixes close these particular gaps, but the broader lesson is that ML platforms inherit the access-control risks of the cloud environments they run in, and securing them demands the same sustained vigilance.
