
Founded Year

2017

Stage

Unattributed VC - III | Alive

Total Raised

$100M

Valuation

$0000 

Last Raised

$100M | 1 yr ago

About DeepL

DeepL is a company that specializes in neural machine translation. The company offers a range of services including text translation in over 30 languages, document translation for various file formats, and an AI-powered writing tool that helps improve grammar, punctuation, and tone of voice. Its primary customers are individuals and businesses that require translation services. DeepL was formerly known as Linguee. It was founded in 2017 and is based in Cologne, Germany.

Headquarters Location

Maarweg 165

Cologne, 50825,

Germany


ESPs containing DeepL

The ESP matrix leverages data and analyst insight to identify and rank leading companies in a given technology landscape.

[ESP matrix: companies plotted by execution strength and market strength into quadrants: Challenger, Highflier, Outperformer, Leader]
Enterprise Tech / Enterprise Applications

The translation tools market offers software and technologies that support translation, localization, and interpretation tasks. Translation tools include machine translation software, translation memory systems, and terminology management systems. The market has seen significant growth in recent years, driven by the increasing demand for localization and translation services across …

DeepL was named a Highflier among 15 other companies, including Lilt, Lionbridge Technologies, and Phrase.


Research containing DeepL

Get data-driven expert analysis from the CB Insights Intelligence Unit.

CB Insights Intelligence Analysts have mentioned DeepL in 1 CB Insights research brief, most recently on Apr 12, 2023.

Expert Collections containing DeepL

Expert Collections are analyst-curated lists that highlight the companies you need to know in the most important technology spaces.

DeepL is included in 3 Expert Collections, including Unicorns- Billion Dollar Startups.


Unicorns- Billion Dollar Startups

1,229 items


Artificial Intelligence

11,383 items

Companies developing artificial intelligence solutions, including cross-industry applications, industry-specific products, and AI infrastructure solutions.


AI 100

100 items

Latest DeepL News

4 paths to sustainable AI

Jan 31, 2024

Smaller models, better chips, and renewable power help. So does only using AI when you need to.

Regulators, investors, customers, and even employees are pushing companies to minimize the climate impact of their AI initiatives. Everything from geothermal data centers to more efficient graphics processing units (GPUs) can help. But AI users must also get over the urge to use the biggest, baddest AI models to solve every problem if they truly want to fight climate change.

Concerns that AI contributes to global warming stem from estimates that GPUs used to develop and run AI models use four times as much energy as those serving conventional cloud applications, and that AI could be on track to use as much electricity as Ireland. In response, regulators in Europe and the US are moving to require large users of AI to report on its environmental impact.

Credit rating agencies and customers are paying closer attention to environmental, social, and governance (ESG) issues such as carbon emissions, says Faith Taylor, VP of global sustainability and ESG officer at global infrastructure services provider Kyndryl. In addition, she says, "Employees, especially the younger generation, say they're not going to work at a company that doesn't have certain environmental goals. We see it as a recruiting and retention factor."

As sustainability becomes a greater priority, here are four ways companies are succeeding in streamlining their AI efforts.

Use more efficient processes and architectures

Boris Gamazaychikov, senior manager of emissions reduction at SaaS provider Salesforce, recommends using specialized AI models to reduce the power needed to train them. "Is it necessary for a model that can also write a sonnet to write code for us?" he asks. "Our theory is no. Our approach has been to create specific models for specific use cases rather than one general-purpose model."

He also recommends tapping the open-source community for models that can be pre-trained for various tasks. As one example, he cites Meta's Llama-2, from which he says more than 13,000 variants have been created. "All those 13,000 new models didn't require any pre-training," he says. "Think about how much compute and carbon that saved."

Salesforce's AI Research team has also developed methods such as maximum parallelism, he adds, which split up compute-intensive tasks efficiently to reduce energy use and carbon emissions. Rather than training the model on all the training data at once, Salesforce trains it in multiple "epochs," with a portion of the data slightly modified in each one based on the results of the earlier training. This reduces power consumption, he says.

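The piece doesn't spell out how Salesforce selects or modifies each epoch's data, but the general pattern of training each epoch on a small, shifting subset chosen from the previous epoch's results can be sketched in a few lines. Everything below (the toy logistic model, the synthetic data, and the highest-loss selection rule) is an illustrative assumption, not Salesforce's actual method:

    # Hypothetical sketch: train on a shifting subset of the data each epoch,
    # picking the next subset from the examples the current model gets most wrong.
    # Toy logistic model, synthetic data, and selection rule are all assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))             # synthetic features
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # synthetic labels

    w = np.zeros(20)
    subset = rng.choice(len(X), size=200, replace=False)  # initial sample

    for epoch in range(10):
        Xs, ys = X[subset], y[subset]
        for _ in range(50):                     # plain gradient descent on the subset
            p = 1 / (1 + np.exp(-Xs @ w))
            w -= 0.1 * Xs.T @ (p - ys) / len(ys)
        # Re-score the full set, then keep the 200 highest-loss examples
        p_all = 1 / (1 + np.exp(-X @ w))
        loss = -(y * np.log(p_all + 1e-9) + (1 - y) * np.log(1 - p_all + 1e-9))
        subset = np.argsort(loss)[-200:]
        print(f"epoch {epoch}: mean loss {loss.mean():.3f}")

Because each epoch touches only a fifth of the examples, the arithmetic per epoch, and with it the energy, drops roughly in proportion; the cost is a more complicated data pipeline.
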
Some hyperscalers offer tools and advice on making AI more sustainable. Amazon Web Services, for example, provides tips on using serverless technologies to eliminate idle resources, along with data management tools and curated datasets. AWS also has models to reduce data processing and storage, and tools to "right size" infrastructure for AI applications. Used properly, such tools can minimize the compute resources needed for AI, and thus its environmental impact.

Use less data

Reducing the size of the dataset used to train a model is one of the most effective ways to minimize the energy use and carbon emissions involved in AI. "You can reduce the size of many AI models by an order of magnitude, and only lose two to three percent of your accuracy," says professor Amanda Stent, director of Colby College's Davis Institute for Artificial Intelligence. "These techniques are well known but not as well used as they could be because people are enamored with the idea of size." There's also the matter of all the attention massive models have received in the press.

Gamazaychikov says the latest version of Salesforce's CodeGen model, which lets users generate executable code from natural language, performs just as well as models twice its size. As a rough rule of thumb, he says, about a 50% drop in size means an equivalent drop in carbon emissions.

At video and music streaming service Plex, head of data science Scott Weston cuts the size of his training data by focusing on a specific need. "We don't just want to find users who are going to subscribe or leave the platform, but those who should subscribe and how to make sure they do," he says. Model training is simpler because the dataset is more focused and confined to the specific business problem it's trying to solve, he adds. "Then the environment wins because we're not using all this extra computing to train the models," he says.

Weston uses uplift modeling, running a series of A/B tests to determine how potential customers respond to different offers, and then builds the model from the results of those tests. The size of the datasets is limited by business concerns. "We're careful when conducting sizable tests as we don't want to interrupt the regular communications flow with our customers."

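The article doesn't describe Weston's uplift model in detail; a common construction is the two-model ("T-learner") approach, which fits separate response models on the treatment and control arms of the A/B test and ranks customers by the difference in predicted response. Here is a minimal sketch on synthetic data; nothing in it reflects Plex's actual pipeline:

    # Two-model ("T-learner") uplift sketch: fit one response model on customers
    # who saw the offer and one on those who didn't, then score each customer by
    # the predicted difference in subscribe probability. Data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 8))                        # customer features
    treated = rng.integers(0, 2, size=5000).astype(bool)  # A/B assignment
    base = 0.8 * X[:, 0]
    lift = np.where(X[:, 1] > 0, 1.0, 0.0) * treated      # offer persuades some
    y = (rng.random(5000) < 1 / (1 + np.exp(-(base + lift)))).astype(int)

    m_t = LogisticRegression().fit(X[treated], y[treated])
    m_c = LogisticRegression().fit(X[~treated], y[~treated])

    uplift = m_t.predict_proba(X)[:, 1] - m_c.predict_proba(X)[:, 1]
    target = np.argsort(uplift)[::-1][:500]  # most persuadable customers
    print("mean predicted uplift of targeted group:", uplift[target].mean().round(3))

Targeting only the customers with the highest predicted uplift keeps both the offer volume and the training data small, which is exactly the economy Weston describes.
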
Use renewable energy

Hosting AI operations at a data center that uses renewable power is a straightforward path to reducing carbon emissions, but it's not without tradeoffs. Online translation service DeepL runs its AI functions from four co-location facilities: two in Iceland, one in Sweden, and one in Finland. The Icelandic data center uses 100% renewably generated geothermal and hydroelectric power. The cold climate also eliminates 40% or more of the total data center power needed to cool the servers, because operators can open the windows rather than run air conditioners, says DeepL's director of engineering Guido Simon. Cost is another major benefit, he says, with prices of about five cents per kilowatt-hour compared to roughly 30 cents or more in Germany.

The network latency between the user and a sustainable data center can be an issue for time-sensitive applications, says Stent, but only in the inference stage, where the application provides answers to the user, rather than in the preliminary training phase. DeepL, headquartered in Cologne, Germany, found it could run both training and inference from its remote co-location facilities. "We're looking at roughly 20 milliseconds more latency compared to a data center closer to us," says Simon. "During the inference process, making the initial connection to the AI engine might take 10 round trips, resulting in roughly a 200 to 300 millisecond delay due to distance, but you can optimize the application to reduce that initial time." (Ten round trips at an extra 20 to 30 milliseconds each accounts for the 200 to 300 millisecond figure.) The speed of the internet connection to the remote site can, of course, mitigate latency issues. Verne Global Iceland, one of DeepL's Icelandic providers, claims to be the interconnect site for all submarine cable systems to and from Iceland, with redundant, high-capacity fiber connectivity to Europe and the US.

Another consideration, says Stent, is whether a "renewable" data center is running the latest and most efficient GPUs or tensor processing units (TPUs). If not, it might end up using more power than a conventionally powered but more modern data center. That isn't an issue for DeepL, though, because it houses its own "super state-of-the-art" servers in its co-location facilities, says Simon.

Don't use AI at all

While AI generates buzz among employees and customers, it can be overkill when other approaches are easier to implement and have less impact on the environment. "Always ask if AI/ML is right for your workload," recommends AWS in its sustainability guidelines. "There's no need to use computationally intensive AI when a simpler, more sustainable approach might succeed just as well. For example, using ML to route IoT messages may be unwarranted; you can express the logic with a rules engine." (A short sketch of that alternative appears at the end of this article.)

Along with environmental considerations, Plex isn't able to throw millions of dollars of compute at training the largest models. "It's all about being scrappy and making sure you think through everything and not just throw dollars at the problem," says Weston. Online gaming company Mino Games uses DataGPT, which integrates analytics, a caching database, and extract, transform, and load (ETL) processes to speed queries, such as which new features to offer players. Data analytics lead Diego Cáceres urges caution about when to use AI. "Phrase the business problem carefully and determine whether simple math is good enough," he says.

Ongoing challenges

Besides the cost of implementing sustainable AI within a distributed cloud-based workload, simply finding out which workload is consuming power is a problem, says Yugal Joshi, a partner at consulting firm Everest Group. As a result, he says, most companies focus first on business results from AI, and only then on sustainability.

Another challenge, says Salesforce's Gamazaychikov, is getting information from developers about the carbon footprint of their foundational AI models. With added regulation from sources such as the European Union and the US Securities and Exchange Commission, "if companies don't disclose the numbers already, they'll have to start doing so soon," he says.

Yet another is the lure of dramatic AI-powered breakthroughs, whatever the cost to the environment. "Some companies say 'I want to be sustainable,' but they also want to be known for the excellence of their AI, and their employees want to do something transformational," says Colby College's Stent. "Until financial pressures force their AI efforts to become more efficient," she says, "something else will drive them away from sustainability."

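To make the rules engine alternative concrete: the sketch below routes hypothetical IoT messages with an ordered list of predicates instead of a trained classifier. The message fields and routing rules are invented for illustration.

    # Toy rules engine for routing IoT messages, illustrating AWS's point that
    # simple, auditable logic can replace an ML classifier for some workloads.
    # Message fields and routing rules are invented for illustration.
    RULES = [  # (predicate, destination), checked in order; first match wins
        (lambda m: m["type"] == "alarm", "pagerduty"),
        (lambda m: m["battery_pct"] is not None and m["battery_pct"] < 10,
         "maintenance-queue"),
        (lambda m: m["type"] == "telemetry", "timeseries-db"),
    ]

    def route(message, default="dead-letter"):
        for predicate, destination in RULES:
            if predicate(message):
                return destination
        return default

    print(route({"type": "alarm", "battery_pct": 80}))      # -> pagerduty
    print(route({"type": "telemetry", "battery_pct": 55}))  # -> timeseries-db
    print(route({"type": "ping", "battery_pct": None}))     # -> dead-letter

A table of predicates like this is cheap to evaluate, trivially auditable, and needs no GPU time; the tradeoff is that someone maintains the rules by hand.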

DeepL Frequently Asked Questions (FAQ)

  • When was DeepL founded?

    DeepL was founded in 2017.

  • Where is DeepL's headquarters?

    DeepL's headquarters is located at Maarweg 165, Cologne.

  • What is DeepL's latest funding round?

    DeepL's latest funding round is Unattributed VC - III.

  • How much did DeepL raise?

    DeepL raised a total of $100M.

  • Who are the investors of DeepL?

    Investors of DeepL include Benchmark, Bessemer Venture Partners, b2venture, Institutional Venture Partners, Atomico and 3 more.

  • Who are DeepL's competitors?

    Competitors of DeepL include Unbabel and 6 more.


Compare DeepL to Competitors

ModelFront

ModelFront specializes in AI-driven machine translation quality prediction within the language services industry. The company offers an API that provides segment-level quality scores, enabling prioritization of human translation efforts where necessary. ModelFront's technology integrates with existing translation management systems and supports over 100 languages, aiming to enhance translation workflows for high-volume or time-sensitive projects. It was founded in 2017 and is based in Palo Alto, California.

New Tranx Information Technology

New Tranx Information Technology develops an artificial intelligence-powered translation browser plugin that offers instant online translations, computer-assisted translation, and localization services for web portals, with translation capabilities in 37 languages.

United Language Group

United Language Group is a company that specializes in translation and localization, operating within the language services industry. The company offers a range of services including translation, interpretation, and localization, which involve converting content into different languages, facilitating communication between different language speakers, and adapting products or services to suit different cultures and languages. United Language Group primarily serves sectors such as Life Sciences, Manufacturing, Legal, Healthcare, Government, and Insurance. It is based in Minneapolis, Minnesota.

Global Tone Communication Technology

Global Tone Communication Technology, dba GTCOM, provides global cross-language big data solutions, developing machine translation, speech recognition, image recognition, semantic search, knowledge graph, and big data analysis and visualization technologies, and has built the "YeeCloud" language ecosystem and "YeeSight" big data ecosystem. The company has also developed JoveBird, which offers financial investment solutions based on a set of financial analysis models and cross-language big data processing, giving investors up-to-date analysis of investment opportunities and helping them develop strategies from a global perspective.

Language I/O

Language I/O provides software that enables customer support in multiple languages. Its AI technology lets LIO quickly generate accurate, company-specific translations of all user-generated content (UGC), including jargon, slang, abbreviations, and misspellings, into over 100 languages via chat, email, article, and social support channels. LIO is accessible directly via API and integrates with CRMs including Salesforce, Oracle, and Zendesk. The company was founded in 2011 and is based in Cheyenne, Wyoming.

Intento

Intento focuses on machine translation and multilingual generative artificial intelligence (AI). The company offers a platform that provides immediate, tailored, and personalized language experiences, supporting over 650 languages. The platform is used by global businesses to translate documents, support tickets, and enterprise apps, and to provide real-time machine translation for customer support teams. It was founded in 2016 and is based in San Francisco, California.

