Darktrace (LSE: DARK) provides cybersecurity solutions. It uses mathematics to automatically detect abnormal behavior in organizations in order to manage risks from cyber-attacks. The company was founded in 2013 and is based in Cambridge, United Kingdom.
Research containing Darktrace
CB Insights Intelligence Analysts have mentioned Darktrace in 2 CB Insights research briefs, most recently on Oct 7, 2021.
Expert Collections containing Darktrace
Expert Collections are analyst-curated lists that highlight the companies you need to know in the most important technology spaces.
Darktrace is included in 3 Expert Collections, including AI 100.
Winners of CB Insights' annual AI 100, a list of the 100 most promising AI startups in the world.
Companies developing artificial intelligence solutions, including cross-industry applications, industry-specific products, and AI infrastructure solutions.
These companies protect organizations from digital threats.
Darktrace has filed 62 patents.
Patent categories: Computer security, Cyberwarfare, Cyberattacks, Cybercrime, Hacking (computer security)
Latest Darktrace News
Nov 27, 2023
The NCSC and its US counterpart CISA have brought together tech companies and governments to countersign a new set of guidelines aimed at promoting a secure-by-design culture in AI development.

The UK's National Cyber Security Centre (NCSC) has published a set of guidelines designed to help ensure that artificial intelligence (AI) technology is developed safely and securely, written alongside tech sector partners and developed with crucial assistance from the US Cybersecurity and Infrastructure Security Agency (CISA).

The Guidelines for secure AI system development are said to be the first of their kind in the world and, besides the UK and US, were developed with input from other G7 nations, international agencies, and government bodies from a number of countries, including voices from the Global South.

"We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up," said NCSC CEO Lindy Cameron. "These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.

"I'm proud that the NCSC is leading crucial efforts to raise the AI cyber security bar: a more secure global cyber space will help us all to safely and confidently realise this technology's wonderful opportunities," she said.

Cameron's American counterpart, CISA director Jen Easterly, added: "The release of the Guidelines for secure AI system development marks a key milestone in our collective commitment – by governments across the world – to ensure the development and deployment of artificial intelligence capabilities that are secure by design.
"As nations and organisations embrace the transformative power of AI, this international collaboration, co-developed by the UK NCSC and CISA, underscores the global dedication to fostering transparency, accountability, and secure practices.

"The domestic and international unity in advancing secure-by-design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology evolution. This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of cross-border collaboration in securing our digital future."

The set of guidelines has been designed to help the developers of any system that incorporates AI make informed decisions about cyber security during the development process, whether the system is being built from scratch or as an add-on to an existing tool or service provided by another entity. The NCSC believes security to be an "essential pre-condition of AI system safety", integral to the development process from the outset and throughout. Given that the secure-by-design principles alluded to by CISA's Easterly are increasingly being applied to software development, the cognitive leap to applying the same guidance to the world of AI should not be too difficult to make.

The guidelines, which can be accessed in full via the NCSC website, break down into four main tracks: Secure Design, Secure Development, Secure Deployment, and Secure Operation and Maintenance. They include suggested behaviours to help improve security, such as taking ownership of security outcomes for customers and users, embracing "radical transparency", and baking secure-by-design practices into organisational structures and leadership hierarchies.
The document has already been endorsed and co-sealed by a number of key organisations in the field, including tech giants Amazon, Google, Microsoft and OpenAI, and representatives from 17 other countries. Besides the Five Eyes intelligence alliance and the G7, these include Chile, Czechia, Estonia, Israel, Nigeria, Norway, Poland, Singapore and South Korea.

For the UK government, the document's creation builds on discussions held at its AI Safety Summit at the beginning of November, which, while not explicitly a cyber security-focused event, nevertheless sought to begin the necessary conversation around how society should manage the risks of AI.

For the Americans, it follows on from a recently published CISA roadmap on AI, which supports an October executive order signed by President Joe Biden that aims to build a foundation of standards that might one day underpin state or federal-level legislation in the US, with inevitable global impact.

CISA's roadmap sets out five "lines of effort" that it means to pursue: to responsibly use AI to support CISA's core cyber mission; to assess and assure secure-by-design AI tech across the private and public sectors; to protect critical national infrastructure (CNI) from malicious use of AI; to collaborate and communicate on AI efforts in the US and across the rest of the world; and to expand AI expertise and skills.

Industry reacts

"These early days of AI can be likened to blowing glass: while the glass is fluid it can be made into any shape, but once it has cooled, its shape is fixed. Regulators are scrambling to influence AI regulation as it takes shape," said WithSecure cyber security advisor Paul Brucciani. "Guidelines are quick to produce since they do not require legislation; nonetheless, NCSC and CISA have worked with impressive speed to corral this list of signatories. Amazon, Google, Microsoft and OpenAI, the world-leading AI developers, are signatories.
A notable absentee from the list is the EU."

Darktrace global head of threat analysis Toby Lewis said: "Security is a pre-requisite for safe and trustworthy AI, and today's guidelines from agencies including the NCSC and CISA provide a welcome blueprint for it. I'm glad to see the guidelines emphasise the need for AI providers to secure their data and models from attackers, and for AI users to apply the right AI for the right task.

"Those building AI should go further and build trust by taking users on the journey of how their AI reaches its answers. With security and trust, we'll realise the benefits of AI faster and for more people," added Lewis, who prior to his appointment at Darktrace served as deputy technical director of incident management at the NCSC for four years.
Darktrace Frequently Asked Questions (FAQ)
When was Darktrace founded?
Darktrace was founded in 2013.
Where is Darktrace's headquarters?
Darktrace's headquarters is located at Maurice Wilkes Building, St John’s Innovation Park, Cambridge.
What is Darktrace's latest funding round?
Darktrace's latest funding round is IPO.
How much did Darktrace raise?
Darktrace raised a total of $232.3M.
Who are the investors of Darktrace?
Investors of Darktrace include KKR, Ten Eleven Ventures, Vitruvian Partners, Balderton Capital, Summit Partners and 9 more.
Who are Darktrace's competitors?
Competitors of Darktrace include Bastazo, Stellar Cyber, MixMode, Recorded Future, Mimecast and 7 more.
Compare Darktrace to Competitors
Vectra offers a platform to detect and respond to cyberattacks in real time. Its product uses artificial intelligence (AI) to automate threat detection across environments, from network users and internet-of-things (IoT) devices to data centers and the cloud. It also enables users to continuously track and monitor all internal traffic to detect hidden attacks in progress. The company was formerly known as TraceVector. It was founded in 2011 and is based in San Jose, California.
ExtraHop focuses on cloud-native network detection and response. It offers a platform that provides 360-degree visibility for detecting and responding to cyber threats, using network data and machine learning (ML) to identify network and application performance issues. It primarily serves sectors such as financial services, healthcare, e-commerce and retail, education, and the US public sector. The company was founded in 2007 and is based in Seattle, Washington.
Cybereason develops software to track the actions of cyber attackers. Its automated platform collects data, learns to discern anomalies, and analyzes them using algorithms. The company was founded in 2012 and is based in Boston, Massachusetts.
CyCraft is an AI cybersecurity company focused on developing autonomous systems and fostering human-AI collaboration. It offers services including managed detection and response, incident response, compromise assessment, and risk intelligence, all aimed at enhancing cybersecurity resilience. CyCraft primarily serves financial institutions, government agencies, and high-tech manufacturers. It was founded in 2017 and is based in New Taipei City, Taiwan.
GoSecure delivers managed detection and response (MDR) cybersecurity and expert advisory services. Its GoSecure Titan managed security solutions deliver multi-vector protection to counter modern cyber threats through a suite of offerings that extend the capabilities of customers' in-house teams. GoSecure Titan Managed Detection & Response offers a best-in-class mean time to respond, with comprehensive coverage across customers' networks, endpoints, and inboxes. The company was formerly known as CounterTack and rebranded after its acquisition of GoSecure in June 2018. GoSecure was founded in 2002 and is based in San Diego, California.
Securonix specializes in threat detection and response for hybrid-cloud, data-driven enterprises. Its SIEM product is designed to reduce noise, prioritize high-fidelity alerts, and enable precise responses to insider threats, cyber threats, and more. Its services are primarily used in the healthcare, manufacturing, and financial services industries. It was founded in 2007 and is based in Addison, Texas.