Even after a fix was issued, lingering prompt injection risks in GitLab's AI assistant could allow attackers to indirectly deliver malware, malicious links, and more to developers.
Author: Nate Nelson, Contributing Writer