Israel’s Legit Security Raises $40M to Fortify AI-Based Applications
Legit Security, a cybersecurity startup with Israeli roots, recently raised $40 million in a Series B funding round. This injection of capital, led by US venture capital firm CRV, aims to bolster protection for businesses against threats targeting their generative AI-based applications. The funding round also saw participation from previous investors including Cyberstarts, TCV, and Bessemer Venture Partners, pushing Legit Security’s total funding to $77 million.
Ahead in Cybersecurity Innovation
Established in 2020, Legit Security has developed a security platform that helps organizations and businesses protect their software supply chains and applications from attack. The platform works by identifying and mitigating vulnerabilities at every stage, from code development through cloud deployment. This approach has attracted a roster of high-profile clients, including Google, NYSE, and Kraft Heinz.
Legit Security’s platform is engineered to scan software development pipelines for leaks and security gaps, along with the infrastructure and systems those pipelines rely on. Industry estimates suggest that by 2025, approximately 45% of enterprises and organizations worldwide will have experienced attacks on their software supply chains, a significant increase from 2021.
Investing in Expansion and Tackling AI Threats
The startup will use the latest round of funding to expand its sales, marketing, and R&D operations. Perhaps more importantly, the funds will also enable the company to address the security risks that arise as artificial intelligence and large language models (LLMs) are used to build new applications.
As more software development teams incorporate AI-generated code and LLMs, new security threats have surfaced rapidly. AI-based code assistants such as GitHub Copilot and Tabnine, widely used by developers in cloud environments, have introduced a variety of risks around data privacy and the protection of sensitive information. In response to this evolving threat landscape, several tech giants, including Apple and Samsung, are reportedly restricting their employees’ use of generative AI tools such as OpenAI’s ChatGPT and AI code assistants to prevent leaks of private information.