AI and the future of semiconductor design
SiS talks to Lorenzo Servadei, Head of AI for Chip Design, Sony AI.
SIS: How do you see AI-powered EDA redefining the chip design process from initial concept through to fabrication — compared to traditional approaches?
LS: AI-powered EDA has the potential to augment the entire design and manufacturing flow. Take, for instance, the capability to fully connect requirements, design, and fabrication stages using cross-functional data. In this case, the conventional waterfall paradigm is substituted by a comprehensive data infrastructure.
This methodology can give engineers an understanding of what is going to happen in later stages, identify bottlenecks in the design stages, and accelerate the chip design process. Additionally, AI-powered EDA has the potential to assist in evaluating complex trade-offs, learning complex surrogates that can encompass multi-physics effects, and predicting the best design parameter choices.
SIS: In your view, what are the biggest bottlenecks in semiconductor design today that AI has the potential to alleviate or even eliminate?
LS: Design centers and fabs generate a large amount of data daily. This consists of information about products under development, manufacturing data, legacy technologies, and even deprecated products. In the AI-powered EDA era, more attention should be paid to these design and manufacturing processes, as well as to how these types of data are collected and reused for future projects. AI models can be trained and used not only to automate and optimize designs, but also to support decisions and help in the creation of new product roadmaps. Bottlenecks such as productivity constraints, long design cycles, and lengthy iterations in design optimization can be reduced by these tools, which collect experience and outputs from previous flows and design choices.
SIS: How might AI-driven design tools change the skill sets and day-to-day responsibilities of engineers across the semiconductor supply chain?
LS: The ability to use automation software and AI tools will be essential for engineers across the semiconductor supply chain, as it is already very relevant today in software development, among other fields. For quick development and the exploration of ideas, products, and potential designs, engineers should be empowered to express the intent of their design flows with AI copilots. Engineers will need to master reward shaping for AI surrogates and optimizers so that models can gain a deeper understanding of the design intent and high-level functionalities.
Fine-grained knowledge of code will still be relevant, though to a lesser extent than it is today. We will also see continued growth in demand for engineers with a high-level understanding of AI flows and their constraints.
SIS: EDA has long been central to chip design — what’s fundamentally different now that AI is being integrated into these tools?
LS: While traditional EDA tools have been used to speed up and automate manual design flows and relieve engineers of computationally heavy tasks, AI-powered EDA aims to expand that horizon into areas such as coding, review, and architectural design-space exploration, which were previously reserved for experts in the field. Today, AI tools can support engineers by augmenting and accelerating creative and explorative tasks, and they have already had a positive impact on many roles inside design R&D centers.
Research Impact: GENIE-ASI and Schemato
SIS: Your recent work on GENIE-ASI introduces a training-free, LLM-based method for analog subcircuit identification. What inspired you to explore this “training-free” direction, and what does it enable that previous methods couldn’t?
LS: We wanted a solution that mirrors how engineers often work: first explaining the reasoning, and then turning that reasoning into a tool. GENIE-ASI leverages the one- and few-shot capabilities of LLMs to produce human-readable instruction steps followed by executable Python code from just a handful of examples, sometimes even a single one. This avoids the heavy cost of large, labeled datasets and brittle, hand-crafted rules. Practically, it enables rapid adaptation to new or uncommon subcircuit variants, allows teams to quickly bootstrap detection logic, and yields reusable code that can be applied at scale without repeated LLM inference.
SIS: How does GENIE-ASI’s ability to generate executable Python code from a few examples change the workflow for analog designers?
LS: GENIE-ASI moves subcircuit identification from a largely manual or offline ML task into a lightweight coding loop that the designer can run and inspect immediately. Designers get a plain-language detection procedure, along with Python that runs against large netlist collections. This allows for faster iteration, reproducible tooling, and the ability to treat the generated detectors as first-class automation assets rather than opaque model outputs.
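As a purely illustrative sketch of what such a generated, inspectable detector could look like (the netlist format, parsing rules, and the current-mirror heuristic below are all simplifying assumptions, not GENIE-ASI's actual output):

```python
# Hypothetical sketch of the kind of detector an LLM might emit:
# find simple NMOS current mirrors in a SPICE-style netlist.
# The netlist format and matching rules here are illustrative assumptions.

def parse_netlist(text):
    """Parse MOSFET lines 'Mname drain gate source bulk model' into dicts."""
    devices = []
    for line in text.strip().splitlines():
        tok = line.split()
        if tok and tok[0][0].upper() == "M" and len(tok) >= 6:
            devices.append({"name": tok[0], "d": tok[1], "g": tok[2],
                            "s": tok[3], "model": tok[5]})
    return devices

def find_current_mirrors(devices):
    """A simple current mirror: a diode-connected reference transistor
    (gate tied to drain) plus a mirror transistor sharing gate and source."""
    mirrors = []
    for ref in devices:
        if ref["g"] != ref["d"]:
            continue  # reference device must be diode-connected
        for out in devices:
            if out is ref:
                continue
            if out["g"] == ref["g"] and out["s"] == ref["s"]:
                mirrors.append((ref["name"], out["name"]))
    return mirrors

netlist = """
M1 nbias nbias gnd gnd nmos
M2 nout  nbias gnd gnd nmos
M3 vdd   nin   nout gnd nmos
"""
print(find_current_mirrors(parse_netlist(netlist)))  # → [('M1', 'M2')]
```

Because the output is ordinary Python rather than model weights, a designer can read, edit, and version such a detector like any other automation script.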
SIS: You also developed Schemato, which translates netlists into human-readable schematics. Why is interpretability such a crucial challenge in ML-generated circuit design, and how does Schemato address it?
LS: Interpretability is crucial because designers often evaluate and trust circuits visually. A netlist is exhaustive, but not human-friendly. If an ML system produces a design that an engineer cannot readily inspect in schematic form, adoption could stall. Schemato tackles this by translating netlists into LTSpice .asc schematics with attention to connectivity and layout fidelity. By fine-tuning a domain model and using prompt and in-context examples, Schemato produces schematics that engineers can load into LTSpice for immediate verification. This helps restore the human-in-the-loop: experts can see topology, run simulations, and judge intent quickly.
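To make the netlist-to-schematic direction concrete, here is a minimal, hypothetical sketch of the idea: emit an LTSpice .asc file that places one symbol per resistor in a flat netlist. Real placement and routing (and Schemato itself) are far more involved; the coordinates, spacing, and input format below are arbitrary assumptions.

```python
# Minimal illustrative netlist-to-.asc writer; not Schemato's method.
# Coordinates and column spacing are arbitrary assumptions.

def netlist_to_asc(lines):
    out = ["Version 4", "SHEET 1 880 680"]   # standard .asc header lines
    x = 80
    for line in lines:
        name, _n1, _n2, value = line.split()
        out.append(f"SYMBOL res {x} 96 R0")   # place a resistor symbol
        out.append(f"SYMATTR InstName {name}")  # instance name, e.g. R1
        out.append(f"SYMATTR Value {value}")    # component value, e.g. 1k
        x += 128                                # step to the next column
    return "\n".join(out)

asc = netlist_to_asc(["R1 in mid 1k", "R2 mid out 2k"])
print(asc.splitlines()[0])  # → Version 4
```

Even this toy version shows the payoff: the output is a tool-compatible artifact an engineer can open and judge visually, not an opaque model state.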
SIS: In your experiments, Schemato achieved higher compilation success and structural similarity than other LLMs. What does this suggest about the future role of foundation models in bridging the gap between machine-generated and human-interpretable designs?
LS: The metric gains show that foundation models can do more than generate text that looks plausible. They can produce syntactically correct, structurally faithful artifacts that plug directly into engineering tools. This suggests that foundation models can act as translators and codifiers in the design flow, bridging machine-generated ideas and human-interpretable artifacts, reducing manual conversion steps, and increasing trust.
Remaining gaps, such as larger circuits or very rare components, point to the next steps: broader datasets, decomposition strategies, and hybrid pipelines that combine LLM reasoning with deterministic post-processing. In short, foundation models will be collaborators that produce verifiable, reusable outputs rather than black-box recommendations.
Innovation and efficiency
SIS: Can AI-assisted analog and mixed-signal design bring about measurable gains in performance, power efficiency, or time-to-market for semiconductor products?
LS: Yes, AI-assisted analog and mixed-signal design can produce measurable gains in performance, power efficiency, and time-to-market. Surrogate models and physics-inspired neural networks can accelerate the exploration of large parameter spaces, enabling designers to find better trade-offs more quickly.
Generative and optimization tools can reduce iterations in layout and sizing, cutting development time while improving metrics like noise, linearity, and power.
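The surrogate idea can be sketched in a toy form: replace an "expensive" simulator with a cheap fitted model, then search the design space on the surrogate. The simulator here is a stand-in analytic function, not a real EDA flow, and the single "width" parameter is an illustrative assumption.

```python
# Toy surrogate-assisted design-space search (stand-in simulator, not EDA).
import numpy as np

def expensive_sim(w):
    """Stand-in for a slow simulation: figure of merit vs. device width."""
    return (w - 3.2) ** 2 + 1.0

# 1. Sample the simulator sparsely (the costly step).
train_w = np.linspace(1.0, 5.0, 5)
train_y = expensive_sim(train_w)

# 2. Fit a cheap quadratic surrogate to the samples.
surrogate = np.poly1d(np.polyfit(train_w, train_y, deg=2))

# 3. Search densely on the surrogate instead of the simulator.
candidates = np.linspace(1.0, 5.0, 1001)
best_w = candidates[np.argmin(surrogate(candidates))]
print(round(float(best_w), 2))  # → 3.2
```

The pattern is the same at scale: a handful of real simulations train the surrogate, and the optimizer then evaluates thousands of candidate sizings at negligible cost.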
SIS: How close are we to seeing AI-based EDA methods like GENIE-ASI or Schemato integrated into commercial design toolchains?
LS: Today, we are closer than many expect. Early deployments already show that AI tools can plug into existing flows when they produce verifiable artifacts that engineers can inspect, edit, and reuse. Methods such as GENIE-ASI and Schemato point to how this integration will happen: by generating human-readable logic, executable code, and tool-compatible schematics rather than opaque outputs.
Several vendors are introducing similar capabilities for estimation, topology detection, and layout assistance. Full integration into commercial suites will still arrive in stages, but the trajectory is clear. Assisted features are landing now, and domain-specific automation that can be validated, versioned, and audited is following quickly.
SIS: Do you see AI as primarily augmenting human engineers, or could it eventually automate significant portions of circuit design on its own?
LS: Over the next few years, AI will primarily augment engineers, amplifying creativity and throughput. Over time, for well-bounded tasks with abundant data and clear objectives, AI could automate significant portions of design. However, full end-to-end automation across all analog and mixed-signal tasks remains unlikely without strong verification and interpretability advancements.
SIS: How does your team evaluate the trade-off between automation and human oversight in the design process — particularly when reliability and verification are so critical?
LS: At Sony AI, we treat automation as a force multiplier, not a replacement for human oversight. We have defined safe automation envelopes, which allow us to determine which steps can be trusted to run autonomously, which require human review, and which must remain manual. Reliability is enforced by conservative validation, cross-checking with physics-based models, and mandatory human sign-off for release-critical decisions.
Cross-disciplinary and industry implications
SIS: Your work mentions “multi-physics device technologies.” Could you elaborate on how AI models help optimize across electrical, thermal, and mechanical domains simultaneously?
LS: AI models accelerate multi-physics co-optimization by providing fast surrogates that link electrical, thermal, and mechanical responses. This makes it feasible to search joint design spaces where, for example, thermal gradients affect electrical behavior. The key is embedding physics constraints so the model respects conservation laws and known couplings while remaining fast enough for optimization loops.
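A minimal numerical sketch of one such coupling, with illustrative constants: a resistor's dissipation raises its temperature, which raises its resistance, which in turn changes the dissipation. A fixed-point iteration between the two domains finds the self-consistent operating point.

```python
# Toy electro-thermal coupling; all constants are illustrative.

def electro_thermal_operating_point(v=5.0, r0=100.0, alpha=0.004,
                                    theta=50.0, t_amb=25.0, iters=100):
    """Fixed-point iteration between electrical and thermal domains.
    r0: resistance at t_amb [ohm]; alpha: temperature coefficient [1/K];
    theta: thermal resistance [K/W]. Returns (temperature, resistance)."""
    t = t_amb
    for _ in range(iters):
        r = r0 * (1 + alpha * (t - t_amb))  # electrical: R depends on T
        p = v * v / r                       # power dissipated
        t = t_amb + theta * p               # thermal: T depends on P
    return t, r

t, r = electro_thermal_operating_point()
print(round(t, 1), round(r, 1))  # → 36.9 104.8
```

A learned surrogate plays the same role as this iteration, but amortized: once trained, it returns the coupled operating point in one cheap forward pass instead of a simulation loop.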
SIS: Beyond analog design, what other domains of chip design (e.g., digital layout, verification, photonics) stand to benefit most from AI-powered EDA?
LS: Many domains stand to benefit from AI-powered EDA, as applications are expanding as quickly as the computing technology itself. Photonics and heterogeneous integration stand out because they involve time-consuming, computationally expensive electromagnetic and optical simulations as well as complex multi-physics coupling (optical, thermal, electrical). Other top-of-mind areas are 3D stacking, chiplet partitioning, and thermal management.
SIS: How might AI-enhanced design tools impact collaboration between design houses, foundries, and system integrators across the semiconductor ecosystem?
LS: AI-enhanced tools can standardize and speed up handoffs across design houses, foundries, and system integrators by codifying best practices into reproducible flows. They can enable higher-fidelity virtual prototypes that reduce back-and-forth. That said, IP protection, data-sharing agreements, and validation standards will determine how closely these parties can collaborate.
SIS: What kind of data infrastructure or standardization do you think is needed to make AI-powered EDA scalable across the industry?
LS: Scalable AI-powered EDA needs curated, interoperable datasets, standardized metadata, and agreed-upon interfaces for models and flows. Versioned model registries, common exchange formats for design intents and measurement data, and privacy-preserving federated learning constructs will be important. Equally important are benchmarks and shared verification suites to measure gains reliably.
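As an illustrative sketch only (the field names below are assumptions, not an existing industry schema), a versioned, auditable metadata record for a design artifact might look like this:

```python
# Hypothetical versioned metadata record for a design artifact.
# Field names are illustrative assumptions, not a standard schema.
import hashlib
import json

def make_artifact_record(name, version, payload, tool, metrics):
    return {
        "artifact": name,
        "version": version,              # semantic version for the registry
        "tool": tool,                    # which flow/model produced it
        "metrics": metrics,              # agreed-upon benchmark results
        "sha256": hashlib.sha256(payload.encode()).hexdigest(),  # lineage
    }

rec = make_artifact_record(
    name="opamp_a1.netlist",
    version="1.3.0",
    payload="M1 out in gnd gnd nmos\n",
    tool="sizing-optimizer@2.1",
    metrics={"gain_db": 62.4, "power_mw": 1.8},
)
print(json.dumps(rec, indent=2))
```

Records like this, stored in a versioned registry with a common exchange format, are what would let models trained at one site be benchmarked and reused at another.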
Ethics, trust, and the human element
SIS: As AI takes on a larger role in chip design, how do we ensure transparency and trust in AI-generated design decisions?
LS: Transparency starts with explainable models, traceable decision logs, and interfaces that can articulate why a recommendation was made. Combining data-driven suggestions with physics-based checks makes outputs easier to trust. Documentation, reproducible training data lineage, and the ability to reproduce or audit a design step are critical.
SIS: What challenges do you foresee in validating or certifying AI-generated designs, particularly for safety-critical applications like automotive or aerospace?
LS: Certification for AI-generated designs will be challenging because regulators and customers demand deterministic guarantees. There will need to be hybrid validation strategies, which include formal verification where possible, exhaustive simulation of critical corners, and independent audits of model training and test sets. For safety-critical systems, AI-generated designs will likely require conservative constraints and a documented pedigree, similar to what is used today for tool qualification.
SIS: How do you balance the excitement of rapid AI innovation with the practical realities of tool qualification and adoption in semiconductor workflows?
LS: Balancing the excitement of rapid AI innovation with practical realities requires discipline. Running rigorous experiments, quantifying gains, and exposing failure modes early is critical. Short-term pilots should focus on high-ROI problems where validation is tractable. At the same time, engineers should invest in tooling for traceability, test suites, and risk assessment so that adoption does not outpace qualification.
Looking ahead
SIS: What’s next for your research — are there areas where you see AI pushing the boundaries of what’s possible in semiconductor design in the next five years?
LS: While I am not able to share any specific details at this time, we will have some work focused on integrated, multi-objective systems that link physics, circuit implementation, and system-level metrics in a single optimization loop.
Advances in physics-inspired models, generative physical layout, and faster verification flows will push boundaries. Over the next five years, we expect AI to enable faster exploration of novel technologies and to unlock design points that were previously infeasible because of complexity or simulation time.