My Thoughts on Agentic AI

What's next for us FPGA developers?


Imagine for a minute that you wake up one day and you realize that you have a superpower — like the ability to fly. Well, that’s a bit like how I felt when I started using Claude Code about a month ago. Since then, when I talk and write about my experiences with this amazing new tool, I find myself trying to hold back my excitement, trying not to look like a crazy person. It’s like I feel the ground shaking beneath my feet while I see other people around me going about their lives as usual. I would tell people that this is huge, that it has upended my work and the whole tech industry — but I’d see that they were not as shocked as I was. Claude Code was blowing my mind. The truth is that it has completely changed the way I work, at least when it comes to one big part of my work, and I struggle to imagine ever going back.

In the first few days, I had this feeling of being in a dream, like I was about to wake up and this would all be taken away from me. This changes everything, I thought. This can’t be real, this is too good to be true. But days passed and of course it didn’t go away. All of a sudden I felt like I was in a race. I was fixing everything that was broken and improving everything that wasn’t perfect. I was so eager to point this powerful tool at everything that I had built, was building, or was planning to build. The documentation website that I had been putting aside for years. The dev board repository that I’d been wanting to develop for ages. That tool I had been trying to build but couldn’t quite get right. All of a sudden, everything was possible. I was going to bed late and getting up early because my head wouldn’t stop thinking of new things to work on.

Then I kind of realized that I needed to focus. That side-project I’ve had on the back-burner for years is now possible — but that doesn’t make it the priority, even though my excitement tries to convince me otherwise. I still have a job to do, and the intelligent thing for me to do right now is to: (1) figure out how these tools affect my business, so that I can make sure I’m still working on things that are relevant, and (2) use the tools to help me actually perform that relevant work.

I’m aware that most people reading this blog work with FPGAs, so I’ll try to keep this relevant to that world — but I’ll admit this post is also self-serving, and I’m using it to gather my thoughts on how this technology changes my particular situation. I’m also aware that these tools are not yet able to help everyone do their jobs, so it’s important that I explain what my work actually involves. Although my blog is called FPGA Developer, I’m actually more of a hardware designer — think circuit design, schematic drawing, PCB layout. Over the years I’ve developed expertise in designing boards for FPGAs, hence the blog and Opsero. But as most of you know, these days an “FPGA developer” needs to get their hands dirty across a lot of different fields. Any single FPGA-based project can involve not only hardware design, but coding in HDL, Tcl, Python, C/C++, Makefiles, writing drivers, understanding the Linux kernel, using simulation and software tools, and debugging physical hardware. The typical FPGA developer is, in some ways, a jack of all trades. In my case, I do hardware design very well — but on the software side of FPGA design, I am competent but certainly not an expert. Now that you know my background, the rest should make more sense.


How do I use Claude Code?

My main PC runs Windows 11, and I use Claude Code in the Git Bash console. On my secondary Ubuntu machine, I use it in the command terminal. I know there are ways to use it inside IDEs like VS Code, but I’m not accustomed to those tools anyway.

My workflow for a new Claude Code project is generally:

  • Create a new Git repo
  • Open Git Bash, clone the repo somewhere
  • cd to the repo and run claude
  • Tell Claude what I want to do, have some back-and-forth to nail down the details, then ask it to write up a comprehensive CLAUDE.md file (sometimes I do this part through claude.ai instead)

A lot of what I’ve done with Claude Code has involved pre-existing projects that already have their own Git repos. For those, I’ll sometimes just cd into the existing repo and run claude from there. Other times I’ll create a new wrapper repo — especially when I know Claude will be writing a lot of convenience scripts that I’ll want to put in version control. In that wrapper repo, I’ll add a repos folder and clone the existing project repo(s) into it (or ask Claude to do it). Then I’ll explain what we need to accomplish.

So basically — no fancy tools. Just Git and Git Bash.


What am I using it for?

So far I’ve used it on a few different projects:

  • Blender scripts for generating images of Opsero products with standardized angles, lighting, and resolution
  • A documentation site for all Opsero products (previously the docs were scattered across multiple domains with different branding)
  • An FPGA dev board repository (not public yet) that helps people compare boards and find the one that best meets their needs - similar to the dev board lists I published a while ago, but with proper sorting, comparison, and search capabilities
  • Updating all Opsero reference designs from version 2024.1 to 2025.2 of the AMD Xilinx tools
  • Lots of smaller fixes here and there not worth listing individually

The most significant project on that list — for me at least — is the reference design updates. This is a task I try to do once a year. It’s always tedious and frustrating work that brings me no joy whatsoever, but it’s necessary. To tackle it with Claude Code I gave Claude a list of the project repos and described the objectives in the CLAUDE.md file. Then we jumped into it step by step:

  1. Clone all repos into a repos folder, create a new dev branch in each
  2. Update all 2024.1 references to 2025.2
  3. Build all Vivado projects, fix any issues that arise
  4. Build all Vitis workspaces, fix any issues that arise
  5. Test on hardware, fix any issues that arise
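The sweep in steps 1 and 2 is easy to script. Here’s a minimal sketch of what that automation might look like; the repo URLs, branch name, and file extensions below are all hypothetical placeholders, not my actual repo list:

```python
import subprocess
from pathlib import Path

# Hypothetical list of reference design repos (not the real Opsero URLs)
REPOS = [
    "https://github.com/example/ref-design-a.git",
    "https://github.com/example/ref-design-b.git",
]
OLD, NEW = "2024.1", "2025.2"
WORK = Path("repos")

def clone_and_branch(url: str) -> Path:
    """Clone a repo into the repos folder and create a dev branch for the update."""
    dest = WORK / Path(url).stem
    subprocess.run(["git", "clone", url, str(dest)], check=True)
    subprocess.run(["git", "-C", str(dest), "checkout", "-b", f"update-{NEW}"], check=True)
    return dest

def update_version_refs(repo: Path) -> int:
    """Replace old tool version strings in likely text files; return files touched."""
    touched = 0
    for path in repo.rglob("*"):
        # Only touch file types that typically reference the tool version
        if path.suffix not in {".tcl", ".md", ".mk", ".py", ".sh"} or not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        if OLD in text:
            path.write_text(text.replace(OLD, NEW))
            touched += 1
    return touched
```

In practice I let Claude write (and refine) scripts like this itself, which is the point of the workflow.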

For steps 1–4, Claude Code was able to work quite autonomously, but I was still there working along with it. “I’ll handle that Vivado block diagram error, you keep going with the Vitis builds.” It felt like we were a real team, and it works so quickly that you always feel like you’re in a race. It probably could have tackled this job all by itself, but maybe I just find it hard to let go 😄. Along the way I learned a lot — some lessons the hard way, others made my jaw drop in disbelief (in a good way).

Lessons learned

Claude Code uses a fresh terminal for each command. This means that every time it invokes the AMD Xilinx tools, it needs to source the settings64.sh script — even if you’ve already sourced it in your own terminal session. If you don’t account for this, Claude won’t break anything, but it will try and fail at least once. The fix is simple: add an explicit reminder to your CLAUDE.md file so it never forgets to source the settings script before invoking any Xilinx tooling.
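Another option is to give Claude a small wrapper so the sourcing step is baked into every tool invocation. A sketch, assuming a typical install path (yours will differ):

```python
import subprocess

# Hypothetical install path: adjust to your Vivado/Vitis version and location
SETTINGS = "/tools/Xilinx/Vivado/2025.2/settings64.sh"

def xilinx_cmd(tool_cmd: str) -> list[str]:
    """Build a bash command that sources the settings script first.

    Claude Code runs each command in a fresh shell, so environment set up
    by a previous `source` never carries over; bake it into every call.
    """
    return ["bash", "-lc", f"source {SETTINGS} && {tool_cmd}"]

def run_xilinx(tool_cmd: str) -> subprocess.CompletedProcess:
    """Run a Xilinx tool command with the environment set up."""
    return subprocess.run(xilinx_cmd(tool_cmd), capture_output=True, text=True)
```

Point CLAUDE.md at a wrapper like this and the “forgot to source settings” failure mode disappears.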

Its training can lag behind the latest tool releases. At one point, Claude Code jumped straight into writing scripts using XSCT — which is deprecated and being removed in future Vitis releases. It wasn’t aware of this because it simply hadn’t been trained on the latest version yet. Fair enough, but you don’t want it to go too far down the rabbit hole before it realizes that it’s making a mistake. What to do instead: use a dedicated section in your CLAUDE.md to list known deprecations and their replacements. Something like:

## Tool Deprecations (as of 2025.2)
- XSCT is deprecated. Use the Vitis Python API instead.
- ...

This gives Claude a heads-up before it charges off in the wrong direction. You’ll discover more of these as you work with CC, and you can also add a directive telling it to update the CLAUDE.md file with other tool deprecations (and other hurdles) it discovers along the way. By the way, for my updates project I had CC keep an UPDATES.md file in which it stored information about version differences it stumbled upon while updating the designs. That way, from one session to the next it would always remember those differences and never have to stumble over the same thing twice.

If it can’t fetch a web page, it may carry on regardless. I ran into a case where I asked Claude Code to read a specific page for information that was critical to the task. It hit a bot-blocker, failed silently, and then proceeded anyway rather than flagging the problem, even though the information was critical to doing the task the way I wanted it done. The fix here is to be explicit in your CLAUDE.md or your instructions: if a URL cannot be accessed, stop and ask for help rather than proceeding without the information. If it can’t access a page, you can always copy and paste the content into the terminal, or save it somewhere and tell CC where you put it.

Most of the AMD Xilinx documentation isn’t accessible to Claude Code. This is a persistent limitation; I guess they don’t want bots training on their documentation. However, there are often third-party resources, forum posts, and community docs online that it can access and that turn out to be pretty useful. Just let it look around before you step in.

Sometimes Claude Code knows better than you do. I wanted Claude to write some scripts using the new Python flow in Vitis. As I just mentioned, it couldn’t access the online docs. If I were doing this task myself, I would have read the docs, so I couldn’t see a way around it. CC started probing the actual tool installation folder — running executables in there. What are you doing?? ESC! I stopped it a couple of times, thinking that this was not the way forward, that it was wasting time (and tokens) and that we needed to find a way to get those docs. Eventually I gave up on reaching the docs. I let CC do what it wanted to do and of course — it worked. It was running the tool executables with the -help flag to figure out the API details. It got exactly what it needed. I was the idiot, not Claude Code.
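For the curious, the trick boils down to something like this. The helper below is my own illustration, not Claude’s actual code, and the bin directory and help flag are assumptions:

```python
import subprocess
from pathlib import Path

def probe_help(exe: str, flag: str = "-help", timeout: int = 10) -> str:
    """Run an executable with a help flag and return whatever it prints.

    Many CLI tools document their options this way even when the online
    docs are unreachable, which is how CC worked out the API details.
    """
    result = subprocess.run(
        [exe, flag], capture_output=True, text=True, timeout=timeout
    )
    # Some tools print help text to stderr instead of stdout
    return result.stdout or result.stderr

def probe_install_dir(bin_dir: str, flag: str = "-help") -> dict[str, str]:
    """Probe every executable in a (hypothetical) tool bin directory."""
    results = {}
    for exe in Path(bin_dir).iterdir():
        if exe.is_file():
            try:
                results[exe.name] = probe_help(str(exe), flag)
            except (OSError, subprocess.TimeoutExpired):
                results[exe.name] = "<failed to run>"
    return results
```

Crude, but it recovers exactly the information the missing docs would have provided.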

Things Claude Code is particularly good at

Debugging. Working on the 2025.2 update, while running the echo server on hardware, I ran into some issues with lwIP. Before breaking out the debugger and getting my hands dirty, I just asked Claude Code to take a look at the code. It found the bug immediately. Problem solved. That one really blew my mind. Since then I have generally found that CC is amazing for debugging code. Start off by asking it to check out the code. If that doesn’t immediately lead to the solution, give it access to the UART and the debugger via JTAG. In my experience, it won’t take long before CC figures out what’s going wrong. ⚠️ One warning though… keep an eye on it while it’s debugging external hardware and try to follow what’s going on at all times. At one point my FPGA board had gotten itself into a lock-up somehow and CC was not able to reset it over JTAG. Instead of saying to me: “hey, could you do a power cycle on the board please?”, it just kept pushing away, trying to find a solution through the debug tools. If I had gone to mow the lawn, I’m guessing this would have cost me a lot of tokens, but it also makes me wonder to what lengths CC would have gone to crack this problem via software means.

Writing automation scripts up front. One thing I learned is that it pays to ask Claude to write utility scripts at the start of a project rather than relying on it to run ad hoc commands every time. For the ref design update, we regularly needed to sweep across all repos and all projects to check which ones had built successfully and which had errors. Claude could do this manually each time with a bunch of Linux commands, but it’s far cleaner to have it write a proper status-check script once and then call that script consistently throughout the project — more repeatable, more consistent. Other useful scripts you can have it write are ones to program the FPGA board via JTAG, or to open a UART terminal and log the output. The goal is to make Claude as autonomous as possible.
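As an illustration, here’s roughly what such a status-check script could look like. The repos layout and log file name are assumptions based on my setup; the one solid fact it relies on is that Vivado prefixes failures with ERROR: in its logs:

```python
from pathlib import Path

def build_status(repos_dir: str, log_name: str = "vivado.log") -> dict[str, str]:
    """Summarize build results across all repos in a folder.

    Assumes each repo leaves a Vivado log behind after a build; we flag a
    repo as FAILED if any of its logs contain an 'ERROR:' line.
    """
    status = {}
    for repo in sorted(Path(repos_dir).iterdir()):
        if not repo.is_dir():
            continue
        logs = list(repo.rglob(log_name))
        if not logs:
            status[repo.name] = "NOT BUILT"
        elif any("ERROR:" in log.read_text(errors="ignore") for log in logs):
            status[repo.name] = "FAILED"
        else:
            status[repo.name] = "OK"
    return status
```

With this in the repo, Claude can call `build_status("repos")` after every sweep instead of improvising a new pile of grep commands each time.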


How I think it will affect the FPGA market

FPGAs have always been notoriously difficult to work with, and I’ve long felt that this was the single biggest factor limiting their application to real-world problems. I now have no doubt that agentic AI tools will change that.

The most obvious near-term impact is on the barrier between software engineers and FPGA development. Software engineers typically have a fair understanding of the hardware their code runs on — they’re not starting from zero. HLS has made some progress toward bridging that gap over the years, but in my opinion AI will be a 10x improvement on that. It might even make HLS redundant as a concept. When you can describe what you want in a high-level way and have an AI agent figure out the implementation details — including HDL, timing, resources, and synthesis — the HLS abstraction is no longer relevant.

Some people say that FPGAs will lose ground to AI-optimized silicon, but I don’t see it that way. In my opinion, FPGAs don’t (and won’t) compete with dedicated AI accelerators — but they don’t need to. The more interesting role is as a complement. Think of using an FPGA to pre-process high-bandwidth sensor or video data before feeding it into an AI inference engine. That’s a natural fit, and it’s a use case that grows as AI workloads become more common at the edge. In that sense, I think FPGAs only benefit from the AI boom.

I think there’s a bright future ahead for FPGAs, and for the companies like AMD Xilinx, Altera, and Lattice that make them. It’s my honest opinion, but of course you could say that I am somewhat biased!


Now let’s get brutally honest. From here on this post is more a self-serving organization and recording of my thoughts, related to my professional situation and how I expect to be affected by agentic AI.

How I think it will affect Opsero

If you don’t know anything about Opsero, it’s enough to know that we’re a small team selling FPGA Mezzanine Cards (FMCs) and FPGA design services. There are two sides to the business — products and design services — and I expect both to be significantly affected.

Products

My strategy has always been to back products with exceptional software support and documentation. Specifically, I’ve invested heavily in making Opsero products compatible with — and providing reference designs for — the widest range of development boards possible. That investment has allowed Opsero to charge a premium over competitors who ship hardware with minimal support (I won’t name names! 😄).

Here’s the honest reality though: AI tools give my competitors the means to develop the same level of software support and documentation for their own products. The overhead of producing and maintaining high-quality reference designs and documentation is about to drop for everyone. That means the premium I’ve been able to charge will eventually compress as competitors catch up — or as customers simply build their own support using AI tools. The customer will say: why would I buy the expensive Opsero product when I can buy the lower-cost product and develop my own ref designs using AI?

But here’s the other side of that coin: the overhead drops for me too. Lower margins become more sustainable when the cost of supporting them is lower. And I still have something that can’t be replicated overnight — trust. My existing customers know Opsero, know the products, know that when they buy from Opsero they get something that works and is maintained. That relationship doesn’t disappear because AI got better.

The way I see it now, this is actually an opportunity if I’m smart enough to take it. More on that in the strategy section.

Design services

I’ll be blunt. I know that I’ll get some flak for this. I think the long-term future of hardware design services is that most of it gets done by agentic AI tools. Right now, companies like Anthropic haven’t focused their energy on tools specifically built for hardware design — but I think that’s inevitable. A lot of hardware engineers feel the way software engineers felt a few years ago, that their work is too creative, too physical, too domain-specific to be automated. I think they’re wrong, for the same reasons the software engineers turned out to be wrong.

As for the “software” side of FPGA design (HDL, simulation, timing closure, etc.), I think the same is true. Claude Code has already shown me that it understands a lot about the AMD Xilinx tools and the FPGA design stages. Yes, it probably still needs an FPGA engineer to drive it to deliver something safe, reliable, and within spec, but I think that eventually (months, maybe years from now) it will be able to do all of that with a non-technical person behind the wheel.

Something that my experience with Claude Code has really driven home is that all of us professionals need to re-evaluate the value we bring to our employers and our customers. There was a time when being really good with an abacus was highly valued, but today, if you persistently used an abacus in your work, your employer would probably fire you for wasting time and money. We used to spend a lot of time typing code and hunting for bugs in lines of text, but that activity doesn’t deliver value anymore. Unless we swallow our pride and realize this quickly, we become like the stubborn guy with the abacus.

Over the last few years I’ve been deliberately shifting focus toward the product side of the business. Not in preparation for all this (I’m not that smart); I just wanted a more scalable business. Luckily, design services is now a small part of what I do. Losing it wouldn’t sink anything, but the idea of not having it as a backup is scary.


My strategy from here on

The way I see it, the right move is to lean hard into these tools.

Raise the bar on reference designs and documentation. CC lets me do more, faster. I want to develop reference designs for more application areas and more dev boards, and push the documentation quality to another level.

Prepare the reference designs for use with AI tools. I’m excited about this one. The idea is to add structured instructions, CLAUDE.md files, and “skills” directly into my reference design repos — so that when a customer brings an AI agent to work on one of my designs, the agent has what it needs to be immediately useful. Making my products AI-agent-ready feels like the right direction. I imagine a customer cloning the git repo of one of our reference designs and asking CC to “add a UDP packet parser to do so and so” or “add this and that processing to the existing video pipeline”. I want Opsero ref designs to be ready for that, and produce the best user experience possible.
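To make that concrete, a CLAUDE.md for one of these repos might look something like the following. Everything in it (board names, paths, make targets) is hypothetical, sketched purely to show the shape of the idea:

```markdown
## Project overview
Reference design for the [hypothetical] Ethernet FMC on the ZCU104 and ZCU106 boards.

## Build instructions
- Always source /tools/Xilinx/Vivado/2025.2/settings64.sh before invoking any tool
- Build the Vivado project with: make -C Vivado zcu104
- Build the Vitis workspace with: make -C Vitis workspace

## Common tasks
- Porting to a new board: start from Vivado/scripts/build.tcl, then see PORTING.md
- Modifying the video pipeline: the block design is documented in docs/architecture.md
```

The point is that the customer’s agent lands in the repo already knowing how to build, test, and modify the design.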

Produce content showing customers how to use AI with my reference designs. Blog posts and videos demonstrating Claude Code workflows on Opsero reference designs — showing, for example, how to port a design to a new board or how to debug a specific issue with AI assistance. In my opinion, this is the new relevant content for blogs like mine. It’s no longer about “here’s how you create a design with so-and-so IP core”; it’s about “here’s how to use AI to make this device do so-and-so”.

Grow the product range. If margins go down, maybe it’s a good idea to try to grow volume — or maybe I should just sell my car and switch to a low-cost diet of rice and instant noodles? 😄 AI tools make it practical for a small team to manage a larger product catalog — more products, more compatibility targets, more applications. I see this as doing more of what I do best.


What I’ve been up to — and why it’s been a while

If you’re a regular reader, you’ve noticed I haven’t posted in over a year. Let me explain.

A lot of last year went to a client’s custom board design for a Kintex UltraScale+ project (lots of fun). A quick mention that I worked on this design with the very talented Tomas Chester of Chester Electronic Design Inc. I highly recommend his company for custom FPGA board designs, and I’ll be doing more with them in the future. Also, a significant amount of my time last year went into something I’m very excited about and hope to write up properly: a tool for running electromagnetic simulations on Altium designs using OpenEMS. I had to shelve that one halfway through to deal with other priorities, but I will get back to it this year and I do intend to publish that work. It will indeed be open source. Last year I also wrote a couple of articles for the FPGA Horizons Journal and another for Hackster.

So I was busy. But that’s not the whole story. Part of my reluctance to post has been a nagging feeling that blogging itself was becoming less relevant. I now turn to Claude or ChatGPT where I would previously have searched Google or someone’s blog (probably Adam Taylor’s), so I assume my readers are doing the same.

I’ve come around on this though. I think that people still like to read from real humans that they feel a connection to. People tend to gravitate toward people they feel something in common with. I think my readers will still want to hear from me because they know me now, and because we’re in similar situations. So I intend to get back into writing posts on a regular basis and I hope that it still reaches people through the noise of this crazy AI driven world.

By the way, I’ll be attending FPGA Horizons US East 26 which is looking like it’s going to be a lot of fun. If you would like to connect for a face-to-face talk or for a drink, just let me know.