Pachyderm company logo

The profile is currenly unclaimed by the seller. All information is provided by CB Insights.

Founded Year



Series B | Alive

Total Raised


Last Raised

$16M | 2 yrs ago

About Pachyderm

Pachyderm offers data analytics for Docker, helping companies fully utilize what containerization has brought to the technology space.

Pachyderm Headquarter Location

1501 Mariposa St Suite 428

San Francisco, California, 94107,

United States

ESPs containing Pachyderm

The ESP matrix leverages data and analyst insight to identify and rank leading companies in a given technology landscape.

Enterprise IT / AI

Experiment tracking vendors are creating platforms that promote collaboration by enabling teams to automatically track, log, and compare thousands of iterations of ML experiments. Utilizing these platforms, teams are able to keep records of changes made to training data, source code, and model parameters as well as track all ML-related metadata. Some vendors focus primarily on data version control…

Pachyderm named as Leader among 6 other companies, including Neptune Labs, Iterative, and Weights & Biases.

Predict your next investment

The CB Insights tech market intelligence platform analyzes millions of data points on venture capital, startups, patents , partnerships and news mentions to help you see tomorrow's opportunities, today.

Research containing Pachyderm

Get data-driven expert analysis from the CB Insights Intelligence Unit.

CB Insights Intelligence Analysts have mentioned Pachyderm in 1 CB Insights research brief, most recently on Nov 10, 2021.

Expert Collections containing Pachyderm

Expert Collections are analyst-curated lists that highlight the companies you need to know in the most important technology spaces.

Pachyderm is included in 1 Expert Collection, including Artificial Intelligence.


Artificial Intelligence

8,693 items

This collection includes startups selling AI SaaS, using AI algorithms to develop their core products, and those developing hardware to support AI workloads.

Pachyderm Patents

Pachyderm has filed 1 patent.

patents chart

Application Date

Grant Date


Related Topics




Bones of the vertebral column, Orthopedic surgical procedures, Implants (medicine), Spinal nerves, Muscles of the torso


Application Date


Grant Date



Related Topics

Bones of the vertebral column, Orthopedic surgical procedures, Implants (medicine), Spinal nerves, Muscles of the torso



Latest Pachyderm News

Time is Right for the AI Infrastructure Alliance to Better Define Rules

Jan 22, 2021

By John P. Desmond, AI Trends Editor   The AI Infrastructure Alliance  is taking shape, adding more partners who sign up to the effort to define a “canonical stack for AI and Machine Learning Operations (MLOps).” In programming, “canonical means according to the rules,” from a definition in  webopedia . The mission of the organization also includes, according to its website: develop best practices and architectures for doing AI/ML at scale in enterprise organizations; foster openness for algorithms, tooling, libraries, frameworks, models and datasets in AI/ML; advocate for technologies, such as differential privacy, that helps anonymize data sets and protect privacy; and work toward universal standards to share data between AI/ML applications. Core members listed on the organization’s website include  Determined AI , an early stage company focused on improving developer productivity around machine learning and AI applications, improving resource utilization, and reducing risk. The team encompasses machine learning and distributed systems experts, including key contributors to Spark MLlib, Apache Mesos, and PostgreSQL; PhDs from UC Berkeley and University of Chicago; and faculty at Carnegie Mellon University. Investors include GV (formerly Google Ventures), Amplify Partners, CRV, Haystack, SV Angel, The House, and Specialized Types. Founded in 2017, the company has raised a total of $13.6 million so far, according to Crunchbase. Determined CEO Evans Says AI Stack “Needs to be Defined”  Evan Sparks, Cofounder and CEO, Determined AI “At  Determined , we have always been focused on democratizing AI, and our team remains incredibly optimistic about the future of bringing AI-native software infrastructure to the broader market,” said Determined Cofounder and CEO Evan Sparks, in an email response to a query from AI Trends on why the company joined the alliance. “This same mindset led us to  open source  our software last year in order to reach more teams across industries. As software becomes increasingly powered by AI, we think that the infrastructure stack to support developing and running software needs to be defined.”   He felt the challenge was too big for one company. “It’s going to take multiple companies solving different problems on the way as AI applications move from R&D into production, working together to define interfaces and standards to benefit data scientists and machine learning engineers. The AI Infrastructure Alliance is poised to be a powerful force in making this a reality.”  Asked why the mission of the AI Infrastructure Alliance is important, Sparks said, “In order to see the true potential of AI, AI development needs to be as accessible as software development, with little to no barriers to adoption. At Determined, we view collaboration as critical to achieving this. Joining the AI Infrastructure Alliance has provided us the opportunity to work with more like-minded companies in our own space and bring together the essential building blocks to create the future of AI, while creating a long-term framework for what AI success looks like.”   Super AI Focused on Quality of Datasets for Training   Another core member is  Superb AI , a company focused on helping with training datasets for AI applications. The company offers labeling tools, quality control for training data, pre-trained model predictions, advanced auto-labeling and ability to filter and search datasets. Hyunsoo Kim, CEO and cofounder, launched the company in 2018 with three other cofounders. He got the idea for the company while working on a PhD in robotics and AI at Duke University. The process to label data in order to train a computer in AI algorithms was expensive, laborious and error-prone. “This is partly because building a deep learning system requires extreme amounts of labeled data that involve labor-intensive manual work and because a standalone AI system is not accurate enough to be fully trusted in most situations,” stated Kim in an account in  Forbes . So far, the company has raised $2.3 million, according to Crunchbase. It has attracted support from Y Combinator, a Silicon Valley startup accelerator, Duke University and VC firms in Silicon Valley, Seoul and Dubai. Pachyderm’s Platform Targets Data Scientists  Another core member is  Pachyderm,  described as an open source data science platform to support development of explainable, repeatable, and scalable ML/AI applications. The platform combines version control with tools to build scalable end-to-end ML/AI pipelines, while allowing developers to use the language and framework of their choice. Among the company’s customers is LogMeIn, the Boston-based supplier of cloud-based SaaS services for unified communication and collaboration. At LogMeIn’s AI Center of Excellence in Israel, the company’s team deals with text, audio, and video that needs to get quickly processed and labeled for its data scientists to go to work delivering machine learning capabilities across their product lines. Eyal Heldenberg, Voice AI Product Manager, LogMeIn “Our job at the AI hub is to bring the best-in-class ML models of, in our case, Speech Recognition and NLP,” stated Eyal Heldenberg, Voice AI Product Manager, in a case study posted on the Pachyderm website. “It became clearer that the ML cycle was not only training but also included lots of data preparation steps and iterations.” For example, one step to process audio would add up to seven weeks on the biggest computer machine Amazon Web Services has to offer. “That means lots of unproductive time for the research team,” stated Moshe Abramovitch, LogMeIn Data Science Engineer. Pachyderm’s technology was chosen for a proof of concept test because its parallelism allowed nearly unlimited scaling. The result was instead of taking seven to eight weeks to transform data, Pachyderm’s products could perform the work in seven to 10 hours. The tech also had other benefits. ”Our models are more accurate, and they are getting to production and to the customer’s hands much faster,” stated Heldenberg. “Once you remove time-wasting, building block-like data preparation, the whole chain is affected by that. If we can go from weeks to hours processing data, it greatly affects everyone. This way we can focus on the fun stuff: the research, manipulating the models and making greater models and better models.”   Founded in 2014, Pachyderm has raised $28.1 million to date, according to Crunchbase. By John P. Desmond, AI Trends Editor   Among all its many activities, Google is forecasting the wind. Google and its DeepMind AI subsidiary have combined weather data with power data from 700 megawatts of wind energy that Google sources in the Central US. Using machine learning, they have been able to better predict the wind, which pays off in the energy market. “The way a lot of power markets work is you have to schedule your assets a day ahead,” stated Michael Terrell, the head of energy market strategy at Google, in a recent account in  Forbes . “And you tend to get compensated higher when you do that than if you sell into the market real-time.”   This is an example of the application of AI to wind energy and the wind energy market, an effort being tried in many regions by a range of players. “What we’ve been doing is working in partnership with the DeepMind team to use machine learning to take the weather data that’s available publicly, actually forecast what we think the wind production will be the next day, and bid that wind into the day-ahead markets,” Terrell stated during a recent seminar hosted virtually by the Precourt Institute for Energy of Stanford University. The result has been a 20% increase in revenue for wind farms, Terrell stated. Google has been on a mission to radically reduce its carbon footprint. The company recently achieved a milestone by matching its annual energy use with its annual renewable-energy procurement, Terrell stated. “Our hope is that this kind of machine learning approach can strengthen the business case for wind power and drive further adoption of carbon-free energy on electric grids worldwide,” stated Sam Witherspoon, a DeepMind program manager, in a blog post. He and software engineer Carl Elkin described how they boosted profits for Google’s wind farms in the Southwest Power Pool, an energy market that  stretches across the plains  from the Canadian border to north Texas. European Commitment to Wind Energy Seen in SmartWind Project   European countries have made a big commitment to wind energy, with offshore wind farms being required to supply about 8.5% of all energy in the Netherlands and 40% of current electricity consumption by 2030, according to a recent account in  Innovation Origins . AI is expected to play a big role in this effort, helping to increase energy generation and reduce maintenance costs for wind farms. The related SmartWind project is being undertaken by a consortium of four companies and the Ruhr-University Bochum in Germany. Prof. Constantinos Sourkounis, Institute for Power Systems Technology, Ruhr-University Bochum “In SmartWind we can exploit the capabilities of artificial intelligence algorithms to optimize the management of wind farms,” stated Prof. Constantinos Sourkounis of the university’s Institute for Power Systems Technology, head of the German workgroup. The team aims to build an integrated cloud platform to reduce costs and optimize revenue, based on advanced and automated functions for data analysis, fault detection, diagnosis and operation and management recommendations. The platform will collect data in real time from sensors and control systems, such as condition and maintenance management. Machine learning algorithms and other AI techniques form the backbone of early fault detection and diagnosis. Turkish wind farm operator Zorlu Enerji, a SmartWind partner, will be able to put results of the research directly into practice. “The remarkable thing about this project is the close relationship between research and direct application. We are able to first test theoretical results in our laboratory, and then in a test wind farm run by our partner Zorlu Enerji,” stated Prof. Sourkounis. Condition Monitoring Systems Help Manage Remote Wind Turbines   Machine condition monitoring systems (CMSs) are being applied to wind turbines to help ensure maximum availability and production. Mike Hastings, Senior Application Engineer, Bruel & Kjaer Vibro “This is what we call Big Data, which includes both machine vibration and process data under all kinds of operating conditions and with all kinds of wind turbine types and components,” stated Mike Hastings, a senior application engineer with Bruel & Kjaer Vibro (B&K Vibro) of Darmstadt, Germany, writing in  Wind Systems Mag . Over the past 20 years, the company has installed more than 25,000 data acquisition systems worldwide, with up to 12,000 of them being remotely monitored. As a result, “B&K Vibro has accumulated a vast database of monitoring data that includes fault data on almost every imaginable potential failure mode,” Hastings wrote. As the worldwide installed capacity of wind turbines increases and plays a bigger role in the energy market, so does the need to ensure maximum availability and production of these turbines. Machine condition monitoring is important in this respect and many of the new turbines delivered today already have a condition monitoring system installed as standard. For offshore wind turbines, all have such a system because of their remoteness for maintenance. “Big data fits very well into data-driven artificial intelligence (AI) and machine learning (ML) development and implementation,”  Hastings wrote. AI and ML could be implemented for the following condition-monitoring tasks: fault detection optimization, automatic fault identification and prognosis for failure. For fault detection, descriptors are configured by specialists, and detection of those is done automatically by the SMA. The individual descriptors and their configuration for fault detection have been optimized to a high level of reliability by diagnostics specialists with many years of experience. “One of the inherent benefits of AI is its ability to sift through vast quantities of CMS data to find patterns,” he wrote. Hidden diagnostics can be found in historical data as well. For fault detection before potential failures, the AI can present the results as a listing of several potential failure modes, each with a probability of certainty. “B&K Vibro has in development neural-network automatic fault diagnostic products in the past, and this remains an area of interest for future refinement,” Hastings wrote. By AI Trends Staff   On the verge of a new era of healthcare in which AI can combine with data sharing to deliver many new services, healthcare organizations need to earn the trust of patients that their data will be used properly. That was a message delivered by speakers on healthcare and AI topics at the  Consumer Electronics Show  held virtually last week. Christina Silcox, Policy Fellow at the Duke-Margolis Center for Health Policy Issues related to data bias and explainability surfaced quickly. A major issue with machine learning recommendation systems is the inability for it to explain how it came to the suggestion, said Christina Silcox, Policy Fellow at the Duke-Margolis Center for Health Policy, in a session on Trust and the Impact of AI on Healthcare. “We don’t know how the software looks at the input and combines it into a recommendation. It finds its own pattern. There is not a way for it to communicate how it came to the decision. Work is being done on this,” she said. “But now even the developer does not know how the software is doing what it’s doing.”   In addition, some wellness technology incorporating AI may not have FDA approval as a medical device. The CARES Act of 2020 removed some devices from FDA oversight. Also, software may rely on company trade secrets that the firm may not be willing to share, making it more challenging to understand how the software works. “This information can be critical to patient trust,” she said. Also, an evaluation of a wellness device using AI and data needs to cover what training data was used to represent the population, and what subgroups were included. Also needed is an evaluation of the software over time, “to make sure it’s still working,” she said. Interoperable Medical Software Systems Elusive   Jesse Ehrenfeld, Chairman, Board of Trustees of the American Medical Association Interoperability was an issue cited by Jesse Ehrenfeld, Chairman, Board of Trustees of the American Medical Association (and a Commander in the US Navy). “Algorithms that work at a children’s hospital may not work in an adult hospital, he said. “Understanding the context is critical.” He noted that these discussions with medical device-makers are challenging. Ehrenfeld recommended, “Having good clinicians have input into the development of these systems and tools is critical.” The AMA has tried to facilitate such discussions and has been having some success, he said. Regarding data bias, Ehrenfeld said, “All data is biased; we just might not understand why.” It could be that it does not represent the larger population, or that the way it was captured introduced bias. In a final thought, Silcox said, “As a nation, we have to strengthen our healthcare data, and put a focus on standardizing healthcare data, making sure it is interoperable. That is the key to improving AI in healthcare.”    Patient Data Sharing for Telemedicine Requires Transparent Practices  The pandemic era has ushered in increased use of telemedicine and with that, necessary data sharing. One supplier of wellness products said the company is very tuned into data privacy. “With us, privacy is number one. We look at it as the patient’s data and not our data,” said Randy Kellogg, President and CEO of  Omron Healthcare , in a CES session on The Tradeoff Between Staying Secure and Staying Healthy. “We need permission to look at the patient’s data. We try to be transparent with people about how their data is going to be used in a telemedicine call,” he said. Among Omron’s products is HeartGuide, a wearable blood pressure monitor in the form of a digital wristwatch, and a Bluetooth scale and body composition monitor. Data from these are pulled together in the company’s VitalSight remote patient monitoring program, with the goal of preventing heart attacks and strokes. Based in Kyoto, Japan, the company has been in business for over 40 years and offers products in 110 countries and regions. Asked by moderator Robin Raskin, founder of Solving for Tech, if patients are sharing their data more, Kellogg said, “Yes. It was happening before the pandemic and now more so. People are updating their data to the platforms.”   This trend of more health data sharing during the pandemic era was confirmed by Dr. Hasson A. Tetteh of the US Navy, an AI strategist who holds the position of Health Mission Chief with the DoD Joint AI Center. “We are dogmatic about security and privacy,” he said. “In the pandemic era, there has been a need to get more information from people than they may have been accustomed to, for the public good.”   Discussion turned to whether the HIPAA Privacy Rule regulating the use or disclosure of protected health information, which first went into effect in 2003, is out of date. “HIPAA is a bit dated,” Dr. Tetteh said. “Policy often lags rapid technology advances.” He said the DoD has “policy engineers” who work to keep patient information safe and secure. “We are all in the business of protecting patient safety and privacy, and we are using technology to do that,” he said. He noted that the DoD has issued AI principles on ethical applications. (See  AI Trends coverage. )    Humetrix Stores Patient Data Locally, Not in the Cloud  Humetrix  has been offering healthcare applications on consumer-centered mobile devices for 20 years. The company’s approach is to store patient data on a local device and not in the cloud, said Dr. Bettina Experton, president and CEO. “We still take advantage of AI algorithms in the cloud, but we don’t store personal information in the cloud. We call it ‘privacy by design’ architecture,” she said. The key to good security procedures to protect patient data is access control, she said. Nicole Lambert, President, Myriad Genetic Laboratories Technology advances are enabling an approach to healthcare called precision medicine, which takes into account individual variations in genes, environment and lifestyle. Exemplifying this trend are the products of  Myriad Genetic Laboratories , a 30-year-old company that has concentrated on the role that genes and proteins play in disease. The company’s surveys show nearly 80% of people do not have a good understanding of precision medicine and genetic testing, said Nicole Lambert, president of Myriad, in a CES session on Essential Technology for the New Health Revolution. As a result, the company is focusing its efforts today on a specific target: women. “Pregnancy, cancer and mental health are the areas we are trying to impact the most,” said Lambert. She gave the example of the trial-and-error approach of prescribing antidepressants. “It’s 50-50 that the medicine will work,” she said. “The promise of precision medicine is to get the patient the right medicine at the right time,” improving the chances the prescription will be effective. For detecting ovarian cancer, Myriad’s genetic tests can give each patient a level of risk, such as 36%, 57% or 87% risk. “We also give a five-year risk, allowing patients to put things in perspective,” she said. For instance, the first-year risk might be three percent while the lifetime risk might be 57%. “It helps people make decisions about their healthcare, she said, adding, “Precision medicine will only get more accurate over time.”   By AI Trends Staff   With remote learning happening for students of all ages during the pandemic area, new technologies incorporating AI—including voice, augmented reality and virtual reality—are being used more widely to enable teaching. “Some 1.2 billion children have been out of school during the pandemic year, and that has led to technology driving change in education,” said Robin Raskin, founder of Solving for Tech, moderator of a recent Consumer Electronics Show session on New Technologies Accelerating Education. Caitlin Gutekunst, senior director of marketing and development. Creativity, Inc . provides design and engineering services for toy, technology, and learning companies. Clients include Disney, Netflix, Fisher-Price, Mattel, and Pearson. The company is working on building out new products that leverage voice interactions, said Caitlin Gutekunst, senior director of marketing and development. Consumers today interact with voice assistants on some 4.2 billion devices and the number is expected by Juniper Research to grow to 8.4 billion by 2024, she said. “Voice is an interface, a new way for people to navigate and find information more easily,” she said. “Teachers are finding that voice provides new learning opportunities for students,” and can improve accessibility catering to the different learning styles of students, she said. The company envisions voice being used in more devices such as wearables and augmented reality/virtual reality (AR/VR) headsets. “We believe in binding entertainment with learning to make it fun for kids,” she said. The company developed Toy Doctor, an Alexa skill in which a child works as a doctor to help patients including Fuzzy the Teddy Bear and Rubber Ducky in a musical adventure. Melanie Harke, Senior Game Designer, Schell Games Melanie Harke, a senior game designer with  Schell Games,  builds educational games using VR and AR. “It is still in an early adoption phase, but once you have a device you can travel to distant lands or practice dangerous procedures in a safe environment,” she said. “Immersion is the cornerstone; it makes it powerful,” she said, enabling it to be used to practice physical activities or improve muscle memory. History Maker is Virtual Reality Content Creation Tool   The company has produced HoloLab Champions, a chemistry lab practice game show, enabling students without access to a real lab to gain experiences. Players are scored on accuracy and safety, helping to prepare them for real lab experiences. The company’s newest product is History Maker, a virtual reality content creation tool aimed at middle school students. The game enables students to step into the shoes of a historical figure, such as Ben Franklin, Abigail Adams, Abraham Lincoln, Mark Twain and Barack Obama. Students create the scene, pick their props, upload and recite their script and export the performance to share with classmates and teachers. “The pandemic has accelerated things, with more students participating in remote learning and more effort going into making the experiences better for kids. Having something immersive like VR can help,” Harke said. The company has made progress since entering the education market in 2016, but still, “It is early days for VR in education,” she said. Spatial  makes a AR/VR tool that can be used to create a lifelike avatar and a virtual classroom where the teacher has the necessary tools to present an immersive experience for students. “A lot of remote learning is happening in work settings. Tools like Spatial will be important to helping people feel connected,” said Aaron Dence, product manager with Spatial. The product uses AI and machine learning to “tweak” a two-dimensional selfie photo to create a three-dimensional lifelike avatar. Colleges are looking at the technology to help create immersive learning experiences, such as the streets of Harlem in the 1950s, for a history class at the University of Arizona, and physicians and students working together at Teikyo University in Tokyo. AR/VR Education Software Revenue Growing   Revenue for VR/AR educational software was estimated to be some $300 million in 2020, according to a report by Goldman Sachs, and is expected to grow to $700 million by 2025, according to a report in  edu plus now. The quality of content is improving and the cost of hardware is correlating, making the technology more accessible to education institutions worldwide, the report stated. Use cases for AR/VR in education include virtual field trips, medical education, and training, classroom education and student recruitment, according to an account from  [x]cube LABS. For medical education, applications can show complicated processes such as the human brain and visualize the abstract notions in digital reality. It equips students to merge the theoretical and practical parts of lessons. For recruitment, virtual tours enable students to explore the school or university campus remotely, thereby reducing expenses, increasing student engagement and helping them make a decision about the university. “Augmented and virtual reality is redefining the teaching and learning process. Immersive technology has the potential of being the most prominent breakthrough in the education industry,” the authors state. By Lance Eliot, The AI Trends Insider We all seem to know what a red stop button or kill switch does. Whenever you believe that a contraption is going haywire, you merely reach for the red stop button or kill switch and shut the erratic gadgetry down. This urgent knockout can be implemented via a bright red button that is pushed, or by using an actual pull-here switch, or a shutdown knob, a shutoff lever, etc. Alternatively, another approach involves simply pulling the power plug (literally doing so or might allude to some other means of cutting off the electrical power to a system). Besides utilizing these stopping acts in the real-world, a plethora of movies and science fiction tales have portrayed big red buttons or their equivalent as a vital element in suspenseful plot lines. We have repeatedly seen AI systems in such stories that go utterly berserk and the human hero must brave devious threats to reach an off-switch and stop whatever carnage or global takeover was underway. Does a kill switch or red button really offer such a cure-all in reality? The answer is more complicated than it might seem at first glance. When a complex AI-based system is actively in progress, the belief that an emergency shutoff will provide sufficient and safe immediate relief is not necessarily assured. In short, the use of an immediate shutdown can be problematic for myriad reasons and could introduce anomalies and issues that either do not actually stop the AI or might have unexpected adverse consequences. Let’s delve into this. AI Corrigibility And Other Facets One gradually maturing area of study in AI consists of examining the corrigibility of AI systems. Something that is corrigible has a capacity of being corrected or set right. It is hoped that AI systems will be designed, built, and fielded so that they will be considered corrigible, having an intrinsic capability for permitting corrective intervention, though so far, unfortunately, many AI developers are unaware of these concerns and are not actively devising their AI to leverage such functionality. An added twist is that a thorny question arises as to what is being stopped when a big red button is pressed. Today’s AI systems are often intertwined with numerous subsystems and might exert significant control and guidance over those subordinated mechanizations. In a sense, even if you can cut off the AI that heads the morass, sometimes the rest of the system might continue unabated, and as such, could end-up autonomously veering from a desirable state without the overriding AI head remaining in charge. Especially disturbing is that a subordinated subsystem might attempt to reignite the AI head, doing so innocently and not realizing that there has been an active effort to stop the AI. Imagine the surprise for the human that slammed down on the red button and at first, could see that the AI halted, and then perhaps a split second later the AI reawakens and gets back in gear. It is easy to envision the human repeatedly swatting at the button in exasperation as they seem to get the AI to quit and then mysteriously it appears to revive, over and over again. This could happen so quickly that the human doesn’t even discern that the AI has been stopped at all. You smack the button or pull the lever and some buried subsystem nearly instantly reengages the AI, acting in fractions of a second and electronically restarting the AI. No human can hit the button fast enough in comparison to the speed at which the electronic interconnections work and serve to counter the human instigated halting action. We can add to all of this a rather scary proposition too: suppose the AI does not want to be stopped. One viewpoint is that AI will someday become sentient and in so doing might not be keen on having someone decide it needs to be shut down. The fictional HAL 9000 from the movie 2001: A Space Odyssey (spoiler alert) went to great lengths to prevent itself from being disengaged. Think about the ways that a sophisticated AI could try to remain engaged. It might try to persuade the human that turning off the AI will lead to some destructive result, perhaps claiming that subordinated subsystems will go haywire. The AI could be telling the truth or might be lying. Just as a human might proffer lies to remain alive, the AI in a state of sentience would presumably be willing to try the same kind of gambit. The lies could be quite wide-ranging. An elaborate lie by the AI might be to convince the person to do something else to switch off the AI, using some decoy switch or button that won’t truly achieve a shutdown, thus giving the human a false sense of relief and misdirecting efforts away from the workable red button. To deal with these kinds of sneaky endeavors, some AI developers assert that AI should have built-in incentives for the AI to be avidly willing to be cut off by a human. In that sense, the AI will want to be stopped. Presumably, the AI would be agreeable to being shut down and not attempt to fight or prevent such action. An oddball result though could be that the AI becomes desirous of getting shut down, due to the incentives incorporated into the inner algorithms to do so and thus wanting to be switched off, even when there is no need to do so. At that point, the AI might urge the human to press the red button and possibly even lie to get the human to do so (by professing that things are otherwise going haywire or that the human will be saved or save others via such action). One viewpoint is that those concerns about AI will only arise once sentience is achieved. Please be aware that today’s AI is not anywhere near to becoming sentient and thus it would seem to suggest that there aren’t any near-term qualms about any kill-switch or red button trickery from AI. That would be a false conclusion and a misunderstanding of the underlying possibilities. Even contemporary AI, as limited as it might be, and as based on conventional algorithms and Machine Learning (ML), could readily showcase similar behaviors as a result of programming that intentionally embedded such provisions or that erroneously allowed for this trickery. Let’s consider a significant application of AI that provides ample fodder for assessing the ramifications of a red button or kill-switch, namely, self-driving cars. Here’s an interesting matter to ponder: Should AI-based true self-driving cars include a red button or kill switch and if so, what might that mechanism do? For my framework about AI autonomous cars, see the link here: Why this is a moonshot effort, see my explanation here: For more about the levels as a type of Richter scale, see my discussion here: For the argument about bifurcating the levels, see my explanation here: Understanding The Levels Of Self-Driving Cars As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless vehicles are considered a Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at a Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-on’s that are referred to as ADAS (Advanced Driver-Assistance Systems). There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve or how long it will take to get there. Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend). Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable). For semi-autonomous cars, it is important that the public needs to be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car. You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3. For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: To be wary of fake news about self-driving cars, see my tips here: The ethical implications of AI driving systems are significant, see my indication here: Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: Self-Driving Cars And The Red Button For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task. All occupants will be passengers; the AI is doing the driving. Some pundits have urged that every self-driving car ought to include a red button or kill-switch. There are two major perspectives on what this capability would do. First, one purpose would be to immediately halt the on-board AI driving system. The rationale for providing the button or switch would be that the AI might be faltering as a driver and a human passenger might decide it is prudent to stop the system. For example, a frequently cited possibility is that a computer virus has gotten loose within the onboard AI and is wreaking havoc. The virus might be forcing the AI to drive wantonly or dangerously. Or the virus might be distracting the AI from effectively conducting the driving task and doing so by consuming the in-car computer hardware resources intended for use by the AI driving system. A human passenger would presumably realize that for whatever reason the AI has gone awry and would frantically claw at the shutoff to prevent the untoward AI from proceeding. The second possibility for the red button would be to serve as a means to quickly disconnect the self-driving car from any network connections. The basis for this capability would be similar to the earlier stated concern about computer viruses, whereby a virus might be attacking the on-board AI by coming through a network connection. Self-driving cars are likely to have a multitude of network connections underway during a driving journey. One such connection is referred to as OTA (Over-The-Air), an electronic communication used to upload data from the self-driving car into the cloud of the fleet, and allows for updates and fixes to be pushed down into the onboard systems (some assert that the OTA should always be disallowed while the vehicle is underway, but there are tradeoffs involved). Let’s consider key points about both of those uses of a red button or kill-switch. If the function entails the focused aspect of disconnecting from any network connections, this is the less controversial approach, generally. Here’s why. In theory, a properly devised AI driving system will be fully autonomous during the driving task, meaning that it does not rely upon an external connection to drive the car. Some believe that the AI driving system should be remotely operated or controlled but this creates a dependency that bodes for problems. Imagine that a network connection goes down on its own or otherwise is noisy or intermittent, and the AI driving system could be adversely affected accordingly. Though an AI driving system might benefit from utilizing something across a network, the point is that the AI should be independent and be able to otherwise drive properly without a network connection. Thus, cutting off the network connection should be a design capability and for which the AI driving system can continue without hesitation or disruption (i.e., however, or whenever the network connection is no longer functioning). That being said, it seems somewhat questionable that a passenger will do much good by being able to use a red button that forces a network disconnect. If the network connection has already enabled some virus to be implanted or has attacked the on-board systems, disconnecting from the network might be of little aid. The on-board systems might already be corrupted anyway. Furthermore, an argument can be made that if the cloud-based operator wants to push into the on-board AI a corrective version, the purposeful disconnect would then presumably block such a solving approach. Also, how is it that a passenger will realize that the network is causing difficulties for the AI? If the AI is starting to drive erratically, it is hard to discern whether this is due to the AI itself or due to something regarding the networking traffic. In that sense, the somewhat blind belief that the red button is going to solve the issue at-hand is perhaps misleading and could misguide a passenger when needing to take other protective measures. They might falsely think that using the shutoff is going to solve things and therefore delay taking other more proactive actions. In short, some would assert that the red button or kill switch would merely be there to placate passengers and offer an alluring sense of confidence or control, more so as a marketing or selling point, but the reality is that they would be unlikely to make any substantive difference when using the shutoff mechanism. This also raises the question of how long would the red button or kill switch usage persist? Some suggest it would be momentary, though this invites the possibility that the instant the connection is reengaged, whatever adverse aspects were underway would simply resume. Others argue that only the dealer or fleet operator could reengage the connections, but this obviously could not be done remotely if the network connections have all been severed, therefore the self-driving car would have to be ultimately routed to a physical locale to do the reconnection. Another viewpoint is that the passenger should be able to reengage that which was disengaged. Presumably, a green button or some kind of special activation would be needed. Those that suggest the red button would be pushed again to re-engage are toying with an obvious logically confusing challenge of trying to use the red button for too many purposes (leaving the passenger bewildered about what the latest status of the red button might be). In any case, how would a passenger decide that it is safe to re-engage? Furthermore, it could become a sour situation of the passenger hitting the red button, waiting a few seconds, hitting the green button, but then once again using the red button, doing so in an endless and potentially beguiling cycle of trying to get the self-driving car into a proper operating mode (flailing back-and-forth). Let’s now revisit the other purported purpose of the kill-switch, namely, to stop the on-board AI. This is the more pronounced controversial approach, here’s why. Assume that the self-driving car is going along on a freeway at 65 miles per hour. A passenger decides that perhaps the AI is having trouble and slaps down on the red button or turns the shutoff knob. What happens? Pretend that the AI instantly disengages from driving the car. Keep in mind that true self-driving cars are unlikely to have driving controls accessible to the passengers. The notion is that if the driving controls were available, we would be back into the realm of human driving. Instead, most believe that a true self-driving car has only and exclusively the AI doing the driving. It is hoped that by having the AI do the driving, we’ll be able to significantly reduce the 40,000 annual driving fatalities and 2.5 million related injuries, based on the aspect that the AI won’t drive drunk, won’t be distracted while driving, and so on. So, at this juncture, the AI is no longer driving, and there is no provision for the passengers to take over the driving. Essentially, an unguided missile has just been engaged. Not a pretty picture. Well, you might retort that the AI can stay engaged just long enough to bring the self-driving car to a safe stop. That sounds good, except that if you already believe that the AI is corrupted or somehow worthy of being shut off, it seems dubious to believe that the AI will be sufficiently capable of bringing the self-driving car to a safe stop. How long, for example, would this take to occur? It could be just a few seconds, or it could take several minutes to gradually slow down the vehicle and find a spot that is safely out of traffic and harm’s way (during which, the presumed messed-up AI is still driving the vehicle). Another approach suggests that the AI would have some separate component whose sole purpose is to safely bring the self-driving car to a halt and that pressing the red button invokes that specific element. Thus, circumventing the rest of the AI that is otherwise perceived as being damaged or faltering. This protected component though could be corrupted, or perhaps is hiding in waiting and once activated might do worse than the rest of the AI (a so-called Valkyrie Problem). Essentially, this is a proposed solution that carries baggage, as do all the proposed variants. Some contend that the red button shouldn’t be a disengagement of the AI, and instead would be a means of alerting the AI to as rapidly as possible bring the car to a halt. This certainly has merits, though it once again relies upon the AI to bring forth the desired result, yet the assumed basis for hitting the red button is due to suspicions that the AI has gone akilter. To clarify, having an emergency stop button that is there for other reasons, such as a medical emergency of a passenger, absolutely makes sense, and so the point is not that a stop mode is altogether untoward, only that to use it for overcoming the assumed woes of the AI itself is problematic. Note too that the red button or kill switch would potentially have different perceived meanings to passengers that ride in self-driving cars. You get into a self-driving car and see a red button, maybe it is labeled with the word “STOP” or “HALT” or some such verbiage. What does it do? When should you use it? There is no easy or immediate way to convey those particulars of those facets to the passengers. Some contend that just like getting a pre-flight briefing while flying in an airplane, the AI ought to tell the passengers at the start of each driving journey how they can make use of the kill switch. This seems a tiresome matter, and it isn’t clear whether passengers would pay attention and nor recall the significance during a panic moment of seeking to use the function. For more details about ODDs, see my indication at this link here: On the topic of off-road self-driving cars, here’s my details elicitation: I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: Conclusion In case your head isn’t already spinning about the red button controversy, there are numerous additional nuances. For example, perhaps you could speak to the AI since most likely there will be a Natural Language Processing (NLP) feature akin to an Alexa or Siri, and simply tell it when you want to carry out an emergency stop. That is a possibility, though it once again assumes that the AI itself is going to be sufficiently operating when you make such a verbal request. There is also the matter of inadvertently pressing the red button or otherwise asking the AI to stop the vehicle when it was not necessarily intended or perhaps suitable. For example, suppose a teenager in a self-driving car is goofing around and smacks the red button just for kicks, or someone with a shopping bag filled with items accidentally leans or brushes against the kill-switch, or a toddler leans over and thinks it is a toy to be played with, etc. As a final point, for now, envision a future whereby AI has become relatively sentient. As earlier mentioned, the AI might seek to avoid being shut off. Consider this AI Ethics conundrum: If sentient AI is going to potentially have something similar to human rights, can you indeed summarily and without hesitation shut off the AI? That’s an intriguing ethical question, though for today, not at the top of the list of considerations for how to cope with the big red button or kill-switch dilemma. The next time you get into a self-driving car, keep your eye out for any red buttons, switches, levers, or other contraptions and make sure you know what it is for, being ready when or if the time comes to invoke it. As they say, go ahead and make sure to knock yourself out about it. Copyright 2021 Dr. Lance Eliot. This content is originally posted on AI Trends. [Ed. Note: For reader’s interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: ]

  • When was Pachyderm founded?

    Pachyderm was founded in 2014.

  • Where is Pachyderm's headquarters?

    Pachyderm's headquarters is located at 1501 Mariposa St, San Francisco.

  • What is Pachyderm's latest funding round?

    Pachyderm's latest funding round is Series B.

  • How much did Pachyderm raise?

    Pachyderm raised a total of $28.12M.

  • Who are the investors of Pachyderm?

    Investors of Pachyderm include Y Combinator, Benchmark, Decibel Partners, M12, ACE & Company and 13 more.

Discover the right solution for your team

The CB Insights tech market intelligence platform analyzes millions of data points on vendors, products, partnerships, and patents to help your team find their next technology solution.

Request a demo

CBI websites generally use certain cookies to enable better interactions with our sites and services. Use of these cookies, which may be stored on your device, permits us to improve and customize your experience. You can read more about your cookie choices at our privacy policy here. By continuing to use this site you are consenting to these choices.