News

Ainekko launches AI Foundry, an open hardware–software stack for AI

Company introduces an open, composable AI infrastructure stack

David Harold

An open-source initiative arrives promising RTL, emulation, APIs, and community tooling under one roof. Ainekko positions AI Foundry as a composable stack spanning lightweight edge devices to inference servers. The company is debuting at the RISC-V Summit; licensing, maturity level, and performance baselines have not yet been specified.

AI is all about cats. (Source: Ainekko)

Going beyond the single core, Ainekko has announced AI Foundry as an open platform that extends from silicon RTL through emulation and developer APIs to system-level inference software. The company says initial resources are live at aifoundry.org and on GitHub, with community coordination via Discord. The launch coincides with the RISC-V Summit (Santa Clara, October 22, 2025).

According to the release, the project aims to make chip-level innovation “composable and community-driven,” targeting use cases from lightweight edge devices to higher-performance inference systems.

AI Foundry is presented as a hub for collaboration across the robotics, retail, industrial, and smart-device sectors. The company highlights open development and a do-ocracy governance approach. It did not reveal software or hardware license details, verification status, PPA (performance, power, area) goals, supported toolchains, or a silicon roadmap. No third-party benchmarks, conformance suites, or reference designs were provided beyond the commitment to modular building blocks and emulation tools.

What do we think?

There is a lot left undisclosed in the launch announcement, but there is growing demand for providers that can offer more than just cores. Teams want enablement—reference RTL, emulators, firmware, drivers, compilers, kernels, sample workloads—to shrink time to first silicon and time to useful inference. On that axis, AI Foundry's "stack, not parts" pitch is on trend.

Success will hinge on three things:

  1. Licensing and governance clarity. Openness only scales if licenses are explicit and compatible across RTL, software, and docs, and if contribution rules are predictable.
  2. Verification depth. Reusable IP needs regression, coverage, and reproducible flows; otherwise integrators pay the tax later. Publishing test plans early would build trust.
  3. Opinionated reference platforms. Composability is attractive, but most customers want a known-good path: a couple of curated configurations with measured PPA and a bill of materials.

If Ainekko can package credible reference designs and keep the community active, AI Foundry could earn adoption among startups and OEMs that want to escape closed, vertically integrated stacks. For now, it plays directly to the market's appetite for enablement beyond cores—and that may be enough to get some to take a look.