

Founded Year

2017

Seed VC - II | Alive

Total Raised

$4.24M

Last Raised

$2M | 2 yrs ago



About PerceptiLabs

PerceptiLabs is a dataflow-driven, visual API for TensorFlow, carefully designed to make machine learning (or deep learning) modeling as intuitive as possible.
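The "dataflow-driven, visual API" idea can be made concrete with a sketch: such a tool represents a model as a graph of named components, and running the model means evaluating that graph. The following is an illustrative toy in plain Python, not PerceptiLabs' actual API; the `Node` class and its methods are invented for the example.

```python
class Node:
    """One block in a dataflow model: a named operation plus its inputs.
    A visual tool would render these as draggable, connectable components."""
    def __init__(self, name, fn, *inputs):
        self.name, self.fn, self.inputs = name, fn, inputs

    def run(self, feed):
        """Evaluate this node, recursively evaluating upstream nodes first.
        `feed` maps source-node names to raw input values."""
        if not self.inputs:
            return feed[self.name]
        return self.fn(*(n.run(feed) for n in self.inputs))

# Wire up a tiny "model": scale an input, then shift it.
data    = Node("data",  None)
scaled  = Node("scale", lambda x: [2 * v for v in x], data)
shifted = Node("shift", lambda x: [v + 1 for v in x], scaled)

print(shifted.run({"data": [1, 2, 3]}))  # [3, 5, 7]
```

The appeal of the dataflow view is that each box can be inspected or swapped independently, which is what a visual modeling tool exposes graphically.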

Headquarters Location

San Francisco, California, 94109, United States


Research containing PerceptiLabs

Get data-driven expert analysis from the CB Insights Intelligence Unit.

CB Insights Intelligence Analysts have mentioned PerceptiLabs in 2 CB Insights research briefs, most recently on Mar 3, 2020.

Expert Collections containing PerceptiLabs

Expert Collections are analyst-curated lists that highlight the companies you need to know in the most important technology spaces.

PerceptiLabs is included in 3 Expert Collections, including Digital Health.


Digital Health

8,838 items

Startups recreating how healthcare is delivered


Artificial Intelligence

9,442 items

This collection includes startups selling AI SaaS, using AI algorithms to develop their core products, and those developing hardware to support AI workloads.


AI 100

100 items

The winners of the 4th annual CB Insights AI 100.

PerceptiLabs Patents

PerceptiLabs has filed 1 patent.

The 3 most popular patent topics include:

  • Classification algorithms
  • Debuggers
  • Diagrams

Related Topics: Graphical user interface elements, Machine learning, Debuggers, Diagrams, Classification algorithms



Latest PerceptiLabs News

Automobile Repair Self-Diagnosis and Traffic Light Management Enabled by AI

Oct 16, 2020

By AI Trends Staff

Looking inside and outside, AI is being applied to the self-diagnosis of automobiles and to the connection of vehicles to traffic infrastructure.

A data scientist at BMW Group in Munich, while working on his PhD, created a system for self-diagnosis called the Automated Damage Assessment Service, according to an account in Mirage. Milan Koch was completing his studies at the Leiden Institute of Advanced Computer Science in the Netherlands when he got the idea. “It should be a nice experience for customers,” he stated.

The system gathers data over time from sensors in different parts of the car. “From scratch, we have developed a service idea that is about detecting damaged parts from low speed accidents,” Koch stated. “The car itself is able to detect the parts that are broken and can estimate the costs and the time of the repair.”

Milan Koch, data scientist, BMW Group, Munich

Koch developed and compared different multivariate time series methods, based on machine learning, deep learning and also state-of-the-art automated machine learning (AutoML) models. He tested different levels of complexity to find the best way to solve the time series problems. Two of the AutoML methods and his hand-crafted machine learning pipeline showed the best results. The system may have application to other multivariate time series problems, where multiple time-dependent variables must be considered, outside the automotive field.

Koch collaborated with researchers from the Leiden University Medical Center (LUMC) to use his hand-crafted pipeline to analyze Electroencephalography (EEG) data. Koch stated, “We predicted the cognition of patients based on EEG data, because an accurate assessment of cognitive function is required during the screening process for Deep Brain Stimulation (DBS) surgery. Patients with advanced cognitive deterioration are considered suboptimal candidates for DBS as cognitive function may deteriorate after surgery.
However, cognitive function is sometimes difficult to assess accurately, and analysis of EEG patterns may provide additional biomarkers. Our machine learning pipeline was well suited to apply to this problem.” He added, “We developed algorithms for the automotive domain and initially we didn’t have the intention to apply it to the medical domain, but it worked out really well.”

His models are now also applied to Electromyography (EMG) data, to distinguish between people with a motor disease and healthy people. Koch intends to continue his work at BMW Group, where he will focus on customer-oriented services, predictive maintenance applications and optimization of vehicle diagnostics.

DOE Grant to Research Traffic Management Delays Aims to Reduce Emissions

Getting automobiles to talk to the traffic management infrastructure is the goal of research at the University of Tennessee at Chattanooga, which has been awarded $1.89 million from the US Department of Energy to create a new model for traffic intersections that would reduce energy consumption. The UTC Center for Urban Informatics and Progress (CUIP) will leverage its existing “smart corridor” to accommodate the new research. The smart corridor is a 1.25-mile span on a main artery in downtown Chattanooga, used as a test bed for research into smart city development and connected vehicles in a real-world environment.

“This project is a huge opportunity for us,” stated Dr. Mina Sartipi, CUIP Director and principal investigator, in a press release. “Collaborating on a project that is future-oriented, novel, and full of potential is exciting. This work will contribute to the existing body of literature and lead the way for future research.” UTC is collaborating with the University of Pittsburgh, the Georgia Institute of Technology, the Oak Ridge National Laboratory, and the City of Chattanooga on the project.

Dr. Mina Sartipi, Director, UTC Center for Urban Informatics and Progress

In the grant proposal for the DOE, the research team noted that the US transportation sector accounted for more than 69 percent of petroleum consumption, and more than 37 percent of the country’s CO2 emissions. An earlier National Traffic Signal Report Card found that inefficient traffic signals contribute to 295 million vehicle hours of traffic delay, making up to 10 percent of all traffic-related delays.

The project intends to leverage the capabilities of connected vehicles and infrastructures to optimize and manage traffic flow. While adaptive traffic control systems (ATCS) have been in use for a half century to improve mobility and traffic efficiency, they were not designed to address fuel consumption and emissions. Inefficient traffic systems increase idling time and stop-and-go traffic. The National Transportation Operations Coalition has graded the state of the nation’s traffic signals as D+.

“The next step in the evolution [of intelligent transportation systems] is the merging of these systems through AI,” noted Aleksandar Stevanovic, associate professor of civil and environmental engineering at Pitt’s Swanson School of Engineering and director of the Pittsburgh Intelligent Transportation Systems (PITTS) Lab. “Creation of such a system, especially for dense urban corridors and sprawling exurbs, can greatly improve energy and sustainability impacts. This is critical as our transportation portfolio will continue to have a heavy reliance on gasoline-powered vehicles for some time.”

The goal of the three-year project is to develop a dynamic feedback Ecological Automotive Traffic Control System (Eco-ATCS), which reduces fuel consumption and greenhouse gases while maintaining a highly operable and safe transportation environment. The integration of AI will allow additional infrastructure enhancements including emergency vehicle preemption, transit signal priority, and pedestrian safety.
The ultimate goal is to reduce corridor-level fuel consumption by 20 percent. Read the source articles and information in Mirage, and in a press release from the UTC Center for Urban Informatics and Progress.

By AI Trends Staff

Data governance in data-driven organizations is a set of practices and guidelines that define where responsibility for data quality lives. The guidelines support the operation’s business model, especially if AI and machine learning applications are at work.

Data governance is an operations issue, existing between strategy and the daily management of operations, suggests a recent account in the MIT Sloan Management Review. “Data governance should be a bridge that translates a strategic vision acknowledging the importance of data for the organization and codifying it into practices and guidelines that support operations, ensuring that products and services are delivered to customers,” stated author Gregory Vial, an assistant professor of IT at HEC Montréal.

To prevent data governance from being limited to a plan that nobody reads, “governing” data needs to be a verb and not a noun phrase as in “data governance.” Vial writes, “The difference is subtle but ties back to placing governance between strategy and operations — because these activities bridge and evolve in step with both.”

Gregory Vial, assistant professor of IT at HEC Montréal

An overall framework for data governance was proposed by Vijay Khatri and Carol V. Brown in a piece in Communications of the ACM published in 2010. The two suggested a strategy based on five dimensions that represent a combination of structural, operational and relational mechanisms.
The five dimensions are:

  • Principles at the foundation of the framework that relate to the role of data as an asset for the organization;
  • Quality to define the requirements for data to be usable and the mechanisms in place to assess that those requirements are met;
  • Metadata to define the semantics crucial for interpreting and using data — for example, those found in a data catalog that data scientists use to work with large data sets hosted on a data lake;
  • Accessibility to establish the requirements related to gaining access to data, including security requirements and risk mitigation procedures;
  • Life cycle to support the production, retention, and disposal of data on the basis of organization and/or legal requirements.

“Governing data is not easy, but it is well worth the effort,” stated Vial. “Not only does it help an organization keep up with the changing legal and ethical landscape of data production and use; it also helps safeguard a precious strategic asset while supporting digital innovation.”

Master Data Management Seen as a Path to Clean Data Governance

Once the organization commits to data quality, what’s the best way to get there? Naturally entrepreneurs are in position to step forward with suggestions. Some of them are around master data management (MDM), a discipline where business and IT work together to ensure the accuracy and consistency of the enterprise’s master data assets.

Organizations starting down the path with AI and machine learning may be tempted to clean the data that feeds a specific application project, a costly approach in the long run, suggests one expert. “A better, more sustainable way is to continuously cure the data quality issues by using a capable data management technology. This will result in your training data sets becoming rationalized production data with the same master data foundation,” suggests Bill O’Kane, author of a recent account on master data management.
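The Metadata dimension above mentions data catalogs that record the semantics of each data set. A minimal sketch of what one catalog entry might capture, with invented, illustrative field names rather than any particular product's schema:

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One data set's governance metadata: who owns it, what it means,
    and the quality, lifecycle, and access rules that apply."""
    name: str
    owner: str
    description: str
    quality_checks: list = field(default_factory=list)   # Quality dimension
    retention_days: int = 365                            # Life cycle dimension
    access_roles: list = field(default_factory=list)     # Accessibility dimension

orders = CatalogEntry(
    name="sales.orders",
    owner="data-engineering",
    description="One row per customer order, loaded nightly from the OLTP store.",
    quality_checks=["order_id is unique", "amount >= 0"],
    access_roles=["analyst", "data-scientist"],
)
print(orders.name, orders.retention_days)
```

Even a record this small turns "governing" into a verb: the checks and retention rules are attached to the data itself rather than living in a plan nobody reads.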
Formerly an analyst with Gartner, O’Kane is now the VP and MDM strategist at Profisee, a firm offering an MDM solution. If the data feeding into the AI system is not unique, accurate, consistent and timely, the models will not produce reliable results and are likely to lead to unwanted business outcomes. These could include different decisions being made on two customer records thought to represent different people, but that in fact describe the same person. Or, recommending a product to a customer that was previously returned or generated a complaint.

Perceptilabs Tries to Get in the Head of the Machine Learning Scientist

Getting inside the head of a machine learning scientist might be helpful in understanding how a highly trained expert builds and trains complex mathematical models. “This is a complex time-consuming process, involving thousands of lines of code,” writes Martin Isaksson, co-founder and CEO of Perceptilabs, in a recent account in VentureBeat. Perceptilabs offers a product to help automate the building of machine learning models, what it calls a “GUI for TensorFlow.”

Martin Isaksson, co-founder and CEO, Perceptilabs

“As AI and ML took hold and the experience levels of AI practitioners diversified, efforts to democratize ML materialized into a rich set of open source frameworks like TensorFlow and datasets. Advanced knowledge is still required for many of these offerings, and experts are still relied upon to code end-to-end ML solutions,” Isaksson wrote.

AutoML tools have emerged to help adjust parameters and train machine learning models so that they are deployable. Perceptilabs is adding a visual modeler to the mix. The company designed its tool as a visual API on top of TensorFlow, which it acknowledges as the most popular ML framework. The approach gives developers access to the low-level TensorFlow API and the ability to pull in other Python modules.
It also gives users transparency into how the model is architected and a view into how it performs.

By AI Trends Staff

The ability to add automation to an existing marine vessel to make it autonomous is here today and is being proven by a Boston company. Sea Machines builds autonomous vessel software and systems for the marine industry. Founded in 2015, the company recently raised $15 million in a Series B round, bringing its total raised to $27.5 million since 2017. Founder and CEO Michael G. Johnson, a licensed marine engineer, recently took the time to answer via email some questions AI Trends poses to selected startups.

Describe your team, the key people

Sea Machines is led by a team of mariners, engineers, coders and autonomy scientists. The company today has a crew of 30 people based in Boston; Hamburg, Germany; and Esbjerg, Denmark. Sea Machines is also hiring for a variety of positions, which can be viewed at

Michael Johnson, Founder and CEO, Sea Machines

What business problem are you trying to solve?

The global maritime industry is responsible for billions in economic output and is a major driver of jobs and commerce. Despite the sector’s success and endurance, it faces significant challenges that can negatively impact operator safety, performance and profitability. Sea Machines is solving many of these challenges by developing technologies that are helping the marine industry transition into a new era of task-driven, computer-guided vessel operations.

How does your solution address the problem?

Autonomous systems solve for these challenges in several ways: Autonomous grid and waypoint following capabilities relieve mariners from manually executing planned routes. Today’s autonomous systems uniquely execute with human-like behavior, intelligently factoring in environmental and sea conditions (including wave height, pitch, heave and roll); change speeds between waypoints; and actively detect obstacles for collision avoidance purposes.
Autonomous marine systems also enable optionally manned or autonomous-assist (reduced crew) modes that can reduce mission delays and maximize effort. This is an important feature for anyone performing time-sensitive operations, such as on-water search-and-rescues or other urgent missions.

Autonomous marine systems offer obstacle detection and collision avoidance capabilities that keep people and assets safe and out of harm’s way. These advanced technologies are much more reliable and accurate than the human eye, especially in times of low light or in poor sea conditions. Because today’s systems enable remote-helm control and remote payload management, there is a reduced need for mariners (such as marine fire or spill response crews) to physically man a vessel in a dangerous environment. A remote-helm control beltpack also improves visibility by enabling mariners to step outside of the wheelhouse to whatever location provides the best vantage point when performing tight maneuvers, dockings and other precision operations.

Autonomous marine systems enable situational awareness with multiple cameras and sensors streaming live over a 4G connection. This real-time data allows shoreside or at-sea operators a full view of an autonomous vessel’s environment, threats and opportunities. Minimally manned vessels can autonomously collaborate to cover more ground with fewer resources required, creating a force-multiplier effect. A single shoreside operator can command multiple autonomous boats with full situational awareness.

These areas of value overlap for all sectors, but for the government and military sector, new on-water capabilities and unmanned vessels are a leading driver. By contrast, the commercial sector is looking for increased productivity, efficiency, and predictable operations. Our systems meet all of these needs. Our technology is designed to be installed on new vessels as well as existing vessels.
Sea Machines’ ability to upgrade existing fleets greatly reduces the time and cost to leverage the value of our autonomous systems.

How are you getting to the market? Is there competition?

Sea Machines has an established dealer program to support the company’s global sales across key commercial marine markets. The program includes many strategic partners who are enabled to sell, install and service the company’s line of intelligent command and control systems for workboats. To date, Sea Machines dealers are located across the US and Canada, in Europe, in Singapore and the UAE. We have competition for autonomous marine systems, but our products are the only ones that are retrofit ready, not requiring new vessels to be built.

Do you have any users or customers?

Yes, we have achieved significant sales traction since launching our SM series of products in 2018. Just since the summer, Sea Machines has been awarded several significant contracts and partnerships: The first allowed us to begin serving the survey vessel market with the first announced collaboration with DEEP BV in the Netherlands. DEEP’s vessel outfitted with the SM300 entered survey service very recently. Next, we partnered with Castine-based Maine Maritime Academy (MMA) and representatives of the U.S. Maritime Administration (MARAD)’s Maritime Environmental and Technical Assistance (META) Program to bring valuable, hands-on education about autonomous marine systems into the MMA curriculum. Then we recently announced a partnership with shipbuilder Metal Shark Boats, of Jeanerette, Louisiana, to supply the U.S. Coast Guard (USCG)’s Research and Development Center (RDC) with a new Sharktech 29 Defiant vessel for the purposes of testing and evaluating the capabilities of available autonomous vessel technology. USCG demonstrations are happening now (through November 5) off the coast of Hawaii. Finally, just this month, we announced that the U.S.
Department of Defense (DOD)’s Defense Innovation Unit (DIU) awarded us a multi-year Other Transaction (OT) agreement. The primary purpose of the agreement is to initiate a prototype that will enable commercial ocean-service barges as autonomous Forward Arming and Refueling Point (FARP) units for an Amphibious Maritime Projection Platform (AMPP). Specifically, Sea Machines will engineer, build and demonstrate ready-to-deploy system kits that enable autonomous, self-propelled operation of opportunistically available barges to land and replenish military aircraft. In the second half of 2020 we are also commencing onboard collaborations with some crew-transfer vessel (CTV) operators serving the wind farm industry.

How is the company funded?

The company recently completed a successful Series B round, which provided $15M in funds, for a total amount raised of $27.5M since 2017. The most recent funds we were able to raise are going to significantly impact Sea Machines, and therefore the maritime and marine industries as a whole. The funds will be put to use to further strengthen our technical development team as well as build out our next level of systems manufacturing and scale our operations group to support customer deployments. We will also be investing in some supporting technologies to speed our course to full dock-to-dock, over-the-horizon autonomy.

The purpose of our technology is to optimize vessel operations with increased performance, productivity, predictability and ultimately safety. In closing, we’d like to add that the marine industries are a critically significant component of the global economy and it’s up to us to keep them strong and relevant. Along with people, processes and capital, pressing the bounds of technology is a key driver. The world is being revolutionized by intelligent and autonomous self-piloting technology and today we find ourselves just beyond the starting line of a busy road to broad adoption through all marine sectors.
If Sea Machines continues to chart the course with forward-looking pertinence, then you will see us rise up to become one of the most significant companies and brands serving the industry in the 21st century.

Any anecdotes/stories?

This month we released software version 1.7 on our SM300. That’s seven significant updates in just over 18 months, each one providing increased technical hardening and new features for specific workboat sectors. Another interesting story is about our Series B funding, which, due to the pandemic, we raised virtually. Because of where we are as a company, we have been proving our ability to retool the marine industry with our technology, and therefore we are delivering confidence to investors. We were forced to conduct the entire process by video conference, which may have increased the overall efficiency of the raise, as these rounds traditionally require thousands if not tens of thousands of miles of travel for face-to-face meetings, diligence, and handshakes. Remote pitches also proved to be an advantage because they allowed us to showcase our technology in a more direct way. We did online demos where we had our team remotely connected to our vessels off Boston Harbor. We were able to get the investors into the captain’s chair, as if they were remotely commanding a vessel in real-world operations.

Finally, in January, we announced the receipt of ABS and USCG approval for our SM200 wireless helm and control systems on a major class of U.S.-flag articulated tug-barges (ATBs); the first unit has been installed and is in operation, and we look forward to announcing details around it. We will be taking the SM200 forward into the type-approval process.

By John P. Desmond, AI Trends Editor

Web applications are the primary focus of many cybercrime gangs engaged in data breaches, a primary security concern to retailers, according to the 2020 Data Breach Investigations Report (DBIR) recently released by Verizon, the 13th edition of the report.
Verizon analyzed a total of 157,525 incidents; 3,950 were confirmed data breaches. “These data breaches are the most serious type of incident retailers face. Such breaches generally result in the loss of customer data, including, in the worst cases, payment data and log-in and password combinations,” stated Ido Safruti, co-founder and chief technology officer of PerimeterX, a provider of security services for websites, in an account in Digital Commerce 360.

Among the report’s highlights: Misconfiguration errors, resulting from failure to implement all security controls, top the list of the fastest-growing risks to web applications. Across all industries, misconfiguration errors increased from below 20 percent in the 2017 survey to over 40 percent in the 2020 survey. “The reason for this is simple,” Safruti stated. “Web applications are growing more and more complex. What were formerly websites are now full-blown applications made up of dozens of components and leveraging multiple external services.”

Ido Safruti, co-founder and chief technology officer, PerimeterX

External code can typically comprise 70 percent or more of web applications, many of them JavaScript calls to external libraries and services. “A misconfigured service or setting for any piece of a web application offers a path to compromise the application and skim sensitive customer data,” Safruti stated.

Cybercriminal gangs work to exploit rapid changes on web applications, as development teams build and ship new code faster and faster, often tapping third-party libraries and services. Weaknesses in version control and monitoring of changes to web applications for unauthorized introductions of code are vulnerabilities. Magecart attacks, from a consortium of malicious hacker groups who target online shopping cart systems, especially on large ecommerce sites, insert rogue elements as components of web applications with the goal of stealing the credit card data of shoppers.
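One widely used defense against the third-party-script tampering described here (including Magecart-style injections) is Subresource Integrity: the page embeds a cryptographic hash of each external script so the browser refuses a file that has been modified. A small sketch of computing that integrity value with Python's standard library; the sample script bytes are made up for illustration:

```python
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Compute a Subresource Integrity value for a script, suitable for
    <script src="..." integrity="sha384-..."> so the browser rejects the
    file if a third party tampers with it in transit or at the CDN."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

print(sri_hash(b"console.log('hello');"))
```

The value is recomputed and re-pinned whenever the dependency is deliberately upgraded, which is exactly the kind of audited configuration-change process the quote above recommends.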
“Retailers should consider advanced technology using automated and audited processes to manage configuration changes,” Safruti advises. Vulnerabilities are not patched quickly enough, leaving holes for attackers to exploit. Only half of vulnerabilities are patched within three months of discovery, the 2020 DBIR report found.

These attacks offer hackers the potential of large amounts of valuable customer information with the least amount of effort. Attacks against web application servers made up nearly 75% of breached assets in 2019, up from roughly 50% in 2017, the DBIR report found. Organized crime groups undertook roughly two-thirds of breaches, and 86% of breaches were financially motivated. The global average cost of a data breach is $3.92 million, with an average of over $8 million in the United States, according to a 2019 study from the Ponemon Institute, a research center focused on privacy, data protection and information security.

Another analysis of the 2020 DBIR report found that hacking and social attacks have leapfrogged malware as the top attack tactic. “Sophisticated malware is no longer necessary to perform an attack,” stated the report in SecurityBoulevard. Developers and QA engineers who develop and test web applications would benefit from the use of automated security testing tools and security processes that integrate with their workflow. “We believe developers and DevOps personnel are one of the weakest links in the chain and would benefit the most from remediation techniques,” the authors stated.

Credential Stuffing Attacks Exploit Users with Same Password Across Sites

Credential stuffing is a cyberattack where lists of stolen usernames and/or email addresses are used to gain unauthorized access to user accounts through large-scale automated login requests directed against a web application.
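A standard mitigation for both the password dumpers and the credential-stuffing attacks discussed in this report is to store only salted, deliberately slow password hashes, so a stolen database does not yield reusable passwords. A minimal sketch using Python's standard library; the iteration count is illustrative and real deployments tune it to their hardware:

```python
import hashlib
import secrets

ITERATIONS = 600_000  # illustrative work factor; tune for your hardware

def hash_password(password, salt=None):
    """Return (salt, digest). Store both; never store the password itself."""
    salt = salt if salt is not None else secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("hunter2", salt, digest))                        # False
```

The per-user random salt also blunts credential stuffing indirectly: even identical passwords hash differently across accounts, so one cracked record reveals nothing about the others.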
“Threat actors are always conducting credential stuffing attacks,” found a “deep dive” analysis of the 2020 DBIR report from SpyCloud, a security firm focused on preventing online fraud. The SpyCloud researchers advise users never to reuse passwords across online accounts. “Password reuse is a major factor in credential stuffing attacks,” the authors state. They advise using a password manager and storing a unique complex password for each account.

The 2020 DBIR report found this year’s top malware variant to be password dumpers, malware that extracts passwords from infected systems. This malware is aimed at acquiring credentials stored on target computers, or involves keyloggers that acquire credentials as users enter them. Some 22 percent of breaches found were the result of social attacks, which are cyberattacks that involve social engineering and phishing. Phishing – making fake websites, emails, text messages, and social media messages to impersonate trusted entities – is still a major way that sensitive authentication credentials are acquired illicitly, SpyCloud researchers found. Average consumers are each paying more than $290 in out-of-pocket costs and spending 16 hours to resolve the effects of this data loss and the resultant account takeover, SpyCloud found.

Business Increasing Investment in AI for Cybersecurity, Capgemini Finds

To defend against the new generation of cyberattacks, businesses are increasing their investment in AI systems to help. Two-thirds of organizations surveyed by Capgemini Research last year said they will not be able to respond to critical threats without AI. Capgemini surveyed 850 senior IT executives from IT information security, cybersecurity and IT operations across 10 countries and seven business sectors.
Among the highlights was that AI-enabled cybersecurity is now an imperative: Over half (56%) of executives say their cybersecurity analysts are overwhelmed by the vast array of data points they need to monitor to detect and prevent intrusion. In addition, the types of cyberattacks that require immediate intervention, or that cannot be remediated quickly enough by cyber analysts, have notably increased, including:

  • cyberattacks affecting time-sensitive applications (42% saying they had gone up, by an average of 16%);
  • automated, machine-speed attacks that mutate at a pace that cannot be neutralized through traditional response systems (43% reported an increase, by an average of 15%).

Executives interviewed cited benefits of using AI in cybersecurity:

  • 64% said it lowers the cost of detecting breaches and responding to them – by an average of 12%.
  • 74% said it enables a faster response time: reducing time taken to detect threats, remedy breaches and implement patches by 12%.
  • 69% also said AI improves the accuracy of detecting breaches, and 60% said it increases the efficiency of cybersecurity analysts, reducing the time they spend analyzing false positives and improving productivity.

Budgets for AI in cybersecurity are projected to rise, with almost half (48%) of respondents saying they are planning 29 percent increases in FY2020; some 73 percent were testing use cases for AI in cybersecurity; only one in five organizations reported using AI in cybersecurity before 2019. “AI offers huge opportunities for cybersecurity,” stated Oliver Scherer, CISO of Europe’s leading consumer electronics retailer, MediaMarktSaturn Retail Group, in the Capgemini report.
“This is because you move from detection, manual reaction and remediation towards an automated remediation, which organizations would like to achieve in the next three or five years.”

Geert van der Linden, Cybersecurity Business Lead, Capgemini Group

Barriers remain, including a lack of understanding of how to scale use cases from proof of concept to full-scale deployment. “Organizations are facing an unparalleled volume and complexity of cyber threats and have woken up to the importance of AI as the first line of defense,” stated Geert van der Linden, Cybersecurity Business Lead at Capgemini Group. “As cybersecurity analysts are overwhelmed, with close to a quarter of them declaring they are not able to successfully investigate all identified incidents, it is critical for organizations to increase investment and focus on the business benefits that AI can bring in terms of bolstering their cybersecurity.”

PerceptiLabs Frequently Asked Questions (FAQ)

  • When was PerceptiLabs founded?

    PerceptiLabs was founded in 2017.

  • Where is PerceptiLabs's headquarters?

    PerceptiLabs's headquarters is located in San Francisco, California.

  • What is PerceptiLabs's latest funding round?

    PerceptiLabs's latest funding round is Seed VC - II.

  • How much did PerceptiLabs raise?

    PerceptiLabs raised a total of $4.24M.

  • Who are the investors of PerceptiLabs?

    Investors of PerceptiLabs include Hans Victor, STING Accelerator, Henrik von Schoultz, Luminar Ventures, Brightly Ventures and 3 more.

  • Who are PerceptiLabs's competitors?

    Competitors of PerceptiLabs include RapidMiner and 5 more.

Compare PerceptiLabs to Competitors

Dataiku

Dataiku is a late-stage technology firm that develops a centralized data platform that democratizes the use of data science, machine learning, and AI in the enterprise. Dataiku helps businesses move along their data journey from data preparation to analytics at scale to enterprise AI by providing a common ground for data experts and explorers, a repository of best practices, shortcuts to machine learning and AI deployment/management, and a centralized, controlled environment. The firm primarily serves the retail, financial, pharmaceutical, and manufacturing industries. It was founded in 2013 and is based in New York, New York.

DataRobot

DataRobot offers an enterprise machine learning platform that empowers users of all skill levels to make better predictions faster. Incorporating a library of hundreds of open-source machine learning algorithms, the DataRobot platform automates, trains, and evaluates predictive models in parallel, delivering more accurate predictions at scale.

Instrumental

Instrumental offers a testing platform to manufacturers of electronics, to head off complicated problems before they start costing companies thousands of dollars per minute.

MoBagel

MoBagel's main product, Decanter AI, helps enterprises automatically establish various predictive analysis models and convert enterprise data into business value.

Canvass Analytics

Canvass Analytics is a provider of AI-based, industrial, advanced analytics. Canvass Analytics' platform enables intelligent, industrial operations by putting AI in the hands of plant operators, empowering them with data-driven insights to improve complex, operational processes and optimize assets. Developed for the industrial sector, the company's AI utilizes machine learning to continuously adapt to operational variables, enabling industrial operators to increase yield, improve quality, and lower energy consumption. Canvass Analytics' customers span the oil and gas, chemical and petrochemical, metals and mining, and energy sectors. The company was formerly known as Dat-uh IoT and rebranded in August 2017. Canvass Analytics was founded in 2016 and is based in Toronto, Canada.


Conundrum offers a platform for the metals and mining industry that provides a comprehensive toolset to build enterprise-scale industrial AI applications to help increase yield and efficiency of production. The platform enables users to access process-flow diagrams and dashboards and provides tools to quantify, annotate, and transform previously unmeasurable sources of useful data. The company was founded in 2017 and is based in Cambridge, England.

