
Founded Year: 2018

Stage: Series A | Alive

Total Raised: $18M

Last Raised: $18M | 2 yrs ago

About Expedera

Expedera provides scalable neural engine semiconductor intellectual property (IP) for the artificial intelligence (AI) industry. The company's main offerings are neural processing unit (NPU) products designed to improve performance, power, and latency in AI applications while reducing cost and complexity. These products are used in a wide range of applications, from wearables and smartphones to automotive systems and data centers. Expedera was founded in 2018 and is based in Santa Clara, California.

Headquarters Location

3211 Scott Blvd. Suite 204

Santa Clara, California 95054,

United States

650 887 0815



Expedera's Products & Differentiators

    Origin IP

    Expedera’s Origin™ is a line of neural engine IP products that reduces memory requirements to the bare minimum, dramatically cutting overhead to unlock performance and power efficiency.


Expert Collections containing Expedera

Expert Collections are analyst-curated lists that highlight the companies you need to know in the most important technology spaces.

Expedera is included in 2 Expert Collections, including Semiconductors, Chips, and Advanced Electronics.


Semiconductors, Chips, and Advanced Electronics

6,635 items

Companies in the semiconductors & HPC space, including integrated device manufacturers (IDMs), fabless firms, semiconductor production equipment manufacturers, electronic design automation (EDA) firms, advanced semiconductor material companies, and more.


Artificial Intelligence

11,383 items

Companies developing artificial intelligence solutions, including cross-industry applications, industry-specific products, and AI infrastructure solutions.

Expedera Patents

Expedera has filed 5 patents.

The 3 most popular patent topics include:

  • artificial neural networks
  • integrated circuits
  • artificial intelligence

Related Topics: Artificial neural networks, Machine learning, Classification algorithms, Artificial intelligence, Image processing





Latest Expedera News

Expedera Proposes Stable Diffusion as Benchmark for Edge Hardware for AI – Semiwiki

Feb 5, 2024

A recent TechSpot article suggests that Apple is moving cautiously toward release of some kind of generative AI, possibly with iOS 18 and the A17 Pro. This is interesting not just for Apple users like me but also as broader validation of a real mobile opportunity for generative AI, which honestly had not seemed like a given, for multiple reasons. Finding a balance between performance and memory demand looks daunting for models baselining at a billion or more parameters. Will power drain be a problem? Then there are legal and hallucination issues, which perhaps could be managed through carefully limited use models. Despite the apparent challenges, I find it encouraging that a company which tends to be more thoughtful about product releases than most sees a possible path to success. If they can, then so can others, which makes a recent blog from Expedera enlightening for me.

A quick recap on generative image creation

Generative imaging AI is a field whose opportunities are only just starting to be explored. We’re already used to changing our backgrounds for Zoom/Google Meet calls, but generative AI takes this much further. Now we can re-image ourselves in different costumes with different features in imaginary settings, a huge market for image-conscious consumers. More practically, we should be able to virtually try on clothing before we buy, or explore options when remodeling a kitchen or bathroom. This technology is already available in the cloud (for example, Bing Image Creator) but with all the downsides of cloud-based services, particularly in privacy and cost. Most consumers want to interact with such services through mobile devices; a better solution would be local AI embedded in those platforms. Generative AI through the open-source Stable Diffusion model is a good proxy for hardware platforms to serve this need, and more generally for LLM models based on similar core technologies.

Can on-board memory and performance be balanced at the edge?
First, we need to understand the Stable Diffusion pipeline. It starts with a text encoder that processes a prompt (“I want to see a pirate ship floating upside down above a sea of green jello”). That step is followed by a de-noising neural net, which handles the diffusion part of the algorithm, creating information for a final image from trained parameters over multiple iterations. I think of this as a kind of inverse of conventional image recognition, matching the prompt requirements against the training to create a synthesized match to the prompt. Finally, a decoder stage renders the image from the data constructed in the previous step. Each of these stages is a transformer model. The Expedera blog author, Pat Donnelly (Solutions Architect), gives a detailed breakdown of the parameters, operations, and data moves required throughout the algorithm, which I won’t attempt to replicate here. What stood out for me was the huge number of data moves. Yet he assumes only an 8 MB working memory, based on requirements he’s seeing with customers rather than on optimal throughput. When I asked him about this, he said that operation would clearly depend on a DDR interface to manage the bulk of this activity. This is a switch from one school of thought I have heard: that model execution must keep everything in local memory to meet performance requirements. But that would require an unreasonably large on-chip SRAM. DRAM makes sense for handling the capacity, yet another school of thought holds that no one would want to put that much DRAM in a mobile device: too expensive, and also slow and power hungry. DRAM or some other kind of off-chip memory makes more sense, but what about the cost problem? See the above reference on Apple; apparently they may be considering flash memory, so perhaps this approach isn’t so wild.

What about performance?
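The three-stage flow described above can be sketched in skeleton form. The stage functions below are hypothetical stand-ins (not Expedera's or anyone's real model code), just enough to show how the text conditioning, the iterative de-noising loop, and the final decode fit together:

```python
import random

# Hypothetical stand-ins for the three Stable Diffusion stages; the real
# stages are large neural networks, stubbed here to show the data flow.
def text_encoder(prompt):
    """Turn the prompt into a conditioning representation (stubbed as tokens)."""
    return prompt.lower().split()

def denoise_step(latent, conditioning, step):
    """One de-noising iteration: refine the latent toward something matching
    the conditioning (stubbed as shrinking the noise magnitude each step)."""
    return [x * 0.5 for x in latent]

def decoder(latent):
    """Render the final image from the refined latent (stubbed)."""
    return {"pixels": latent}

def stable_diffusion_pipeline(prompt, num_steps=20):
    conditioning = text_encoder(prompt)
    latent = [random.gauss(0, 1) for _ in range(4)]  # start from pure noise
    for step in range(num_steps):  # the iterative diffusion loop
        latent = denoise_step(latent, conditioning, step)
    return decoder(latent)

image = stable_diffusion_pipeline("a pirate ship above a sea of green jello")
```

The point of the skeleton is the loop in the middle: the de-noiser runs many times per image, which is why it dominates the data-movement and memory-bandwidth discussion below.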
Pat told me that for Stable Diffusion 1.5, assuming an 8K MAC engine with 7 MB of internal memory, running at 750 MHz with 12 GB/s of external memory bandwidth, they can process 9.24 images/second through the de-noiser and 3.29 images/second through the decoder network. That is very respectable, consumer-ready performance. Power is always tricky to pin down since it depends on so many factors, but numbers I have seen suggest this should also be fine for expected consumer use models. A very useful insight: it seems we should lay to rest the theory that big transformer AI for the edge cannot depend on off-chip memory. Again, you can read the Expedera blog HERE.
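Assuming the de-noiser and decoder run sequentially on each image (and ignoring the comparatively cheap text encoder), the two quoted rates can be combined into a rough end-to-end estimate. This is back-of-envelope arithmetic on the numbers above, not a figure from the blog:

```python
denoiser_rate = 9.24  # images/second through the de-noiser (quoted)
decoder_rate = 3.29   # images/second through the decoder (quoted)

# Sequential stages add their per-image latencies:
time_per_image = 1 / denoiser_rate + 1 / decoder_rate  # seconds per image
end_to_end_rate = 1 / time_per_image                   # images per second

print(f"{end_to_end_rate:.2f} images/second end to end")  # prints "2.43 images/second end to end"
```

Roughly 2.4 images/second end to end under that sequential assumption, which is still comfortably interactive for the consumer use models discussed above.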

Expedera Frequently Asked Questions (FAQ)

  • When was Expedera founded?

    Expedera was founded in 2018.

  • Where is Expedera's headquarters?

    Expedera's headquarters is located at 3211 Scott Blvd., Suite 204, Santa Clara, California.

  • What is Expedera's latest funding round?

    Expedera's latest funding round is Series A.

  • How much did Expedera raise?

    Expedera raised a total of $18M.

  • Who are the investors of Expedera?

    Investors of Expedera include Weili Dai and Sehat Sutardja.

  • Who are Expedera's competitors?

    Competitors of Expedera include Cadence and 6 more.

  • What products does Expedera offer?

    Expedera's products include Origin IP.


Compare Expedera to Competitors

Q is a company focused on artificial intelligence computing, specifically in the domain of machine learning inference. The company offers a General Purpose Neural Processing Unit (GPNPU), a licensable processor that is optimized for on-device machine learning inference and can run complex C++ code. This product is primarily used in the technology industry, particularly in sectors that require on-device artificial intelligence computing. It was founded in 2016 and is based in Burlingame, California.


Amsimcel is a deep tech startup that operates in the semiconductor industry. The company develops a GPU-powered Physical Verification framework for the next generation of Integrated Circuits (IC), aiming to establish the next generation of Electronic Design Automation tools and deliver methodologies for demanding projects. Amsimcel primarily serves the semiconductor industry. It was founded in 2017 and is based in Suceava, Romania.


Alsemy is an electronic design automation (EDA) company. It specializes in machine learning-based semiconductor modeling solutions. It was founded in 2019 and is based in Seoul, South Korea.


NeoLogic focuses on the development of next-generation processors. The company's main offering is its patent-pending Quasi-CMOS technology, which reduces the transistor count of digital cores by up to three times, resulting in up to a 50% reduction in power dissipation and up to 40% area savings, enhancing performance-per-watt efficiency. NeoLogic primarily serves the semiconductor industry. It was founded in 2021 and is based in Netanya, Israel.


Chainguard is a supply chain security company. It offers a software product known as Chainguard Enforce that manages, monitors, and secures the software supply chains by default. The company was founded in 2021 and is based in Kirkland, Washington.


Wabbi is a company that focuses on application security within the software development sector. The company offers a platform that manages and orchestrates the full lifecycle of vulnerabilities, translates security policies into development processes, and provides end-to-end management of application security programs. Wabbi primarily serves the software development industry. It was founded in 2018 and is based in Boston, Massachusetts.

