Top of mind as of December '25 (likely to change significantly within weeks or months)
This is about AI within the product management role, NOT about delivering AI in products. Plenty of the latter is currently shipping across Cisco's security portfolio: AI-enabled threat protection that has been in the works for many years, like visibility into malicious traffic without decrypting it via Cisco's market-leading encrypted visibility engine (EVE), or inline machine-learning models protecting customers from zero-day vulnerabilities with snortML.

I wanted to document and share my experience with AI in the daily job tasks of a product manager, as it's been somewhat challenging to find useful learning material. There are many very useful foundational courses that are a must for understanding GenAI and machine learning. However, when it comes to real productivity gains, my experience has been that the application depends significantly on the specifics of the job. There is no 'silver bullet' that delivers 10x productivity; it's more like a tool belt: a collection of smaller gains that must be applied to the right situation and that, in aggregate, add up.
I've structured the utility of AI into the following levels. (I'm currently between Levels 2 & 3)
- Level 0: Re-defining how (re)search works - minor time savings
- Level 1: Prompt-engineering for meaningful results - discrete, possibly significant time savings
- Level 2: Agents doing tasks - recurring time savings, reduced human error
- Level 3: Agents identifying improvement opportunities for the work they are managing - added value
- Level 4: Agents working with each other for more complex solutions - significant time savings
- Level 5: Agents independently addressing emerging issues, freeing up mind-space for things that matter
Level 0: New world in (re)search
Level 0 is really "just" using GenAI as a complement to, or replacement for, search engines. Rapid advancements in LLM-based GenAI provided immediate benefits here, in personal life just as much as in professional settings, making search more contextual. The danger of LLM hallucinations also manifested quickly, and it requires continuous diligence from the user to validate sources and cross-check assumptions. In one case, ChatGPT made up competitor capabilities that the competitor didn't actually have (yet). To this day, the ability to prompt an AI for basic email templates or to identify areas to research is a useful addition that generates immediate value.
Caution: My personal challenge is balancing the benefit against assumption checking and validation. Despite seemingly logical and professional responses, many times the first response from a GenAI tool like OpenAI's ChatGPT or Anthropic's Claude Sonnet is at best misleading, or worse, just plain wrong.
Level 1: Accelerating specific tactical asks
With prompt engineering, AI can provide some dramatic time savings. I've personally used one AI tool to come up with a prompt for another AI tool to then generate a result, numerous times. This becomes tremendously valuable when I also provide the AI tool with additional contextual information, such as research papers, publications, and even roadmap visibility. At Cisco we are able to use numerous AI models with up to 'highly confidential' datasets, which really makes a difference in the quality of results. Just recently I leveraged a plethora of documents to equip a Claude model with contextual information and then generate a webinar script. After (in this case minor) tweaks, the output is being used for upcoming recordings. This is also the level that helps with the classic product management responsibility of creating PRDs (product requirements documents). From my conversations with peers, many use AI to accelerate the creation of these otherwise time-intensive documents so they can focus on what actually matters: the details of the requirements and how they advance the business strategy.
Tip: The more information you can provide the AI tool, the better the result will be. Not only can you proactively equip the AI model with information, you can also probe it for what information might be needed to deliver a more meaningful or reliable output.
Caution: Just like at Level 0, hallucinations are real. The model might make up data or facts that sound reasonable but don't stand up to reality. ALWAYS validate assumptions and statements made. This is particularly true for anything data-related: I've yet to find an AI tool that reliably understands structured data (like CSV or XLSX files). This doesn't seem to be a limitation only I'm facing; colleagues mention it repeatedly, to the extent that they had to explain to an AI model what a dataset actually contained after it made some blatant misstatements. I'll be exploring AI agents paired with Python code for more accurate and reliable data intelligence.
Ask: I'd be curious if you have had positive experiences with AI processing data! Comment or reach out to me on LinkedIn.
Level 2: Agents performing discrete tasks

This is a meaningful step-up from just using GenAI tools. I was drowning in meeting notes from the last 10 years without taking real advantage of that significant knowledge base, so I recently built an agent that now manages my meeting notes for me. Tremendous thanks go to Jason Cyr, VP of Design for Cisco's cybersecurity practice. He suggested the combination of Cursor (coding AI) and Obsidian (linked notes management), and boy did it deliver! (Check out his YouTube channel @Jason_Cyr for tons more insights on how to use AI in a professional setting.)
Previously, my notes 'management' was just a digital version of a journal. Each day got its own note with meeting minutes and ToDos that either needed to get done that day or came up for tracking down the line. Being digital added the benefit of copy-pasting ToDos to the next day if I wasn't able to complete them, plus search across previous notes, particularly helpful when preparing for the coming day's meetings or when validating previous discussion points during a meeting. But it still left a significant amount of untapped potential, and it still required the recurring effort of drafting out the day's meetings and remembering to copy-paste uncompleted ToDos.
To take this to the next level, I followed Jason's advice and exported my notes into Obsidian. Once they were there, I explained their structure to Cursor and asked it to 'take a moment' to review ~10 years' worth of meeting notes. It quickly identified recurring meetings and 1:1s, and helped build a structure of common meetings and the people I interface with. From there, I asked it to track all historic occurrences for each of these people and meetings and add a quick summary of what had been discussed (leveraging a basic GenAI summary). It further created links between notes so I can easily jump to the previous occurrence of a 1:1 or a meeting to review what was discussed. All of this used AI to build Python scripts that leverage Obsidian's functionality and add GenAI summaries, without me having to write a single line of code.

The 'agent' element comes in where it now automatically connects the current day's work with the knowledge base, grabs my next day's meetings to draft an outline of that day, and brings over any uncompleted ToDos. As I went through this process, I asked Cursor to review its own work, identify and perform optimizations, and, importantly, document not only how the program is built but also the learnings from where it made mistakes or ran unnecessary scripts.
Tip: Jason shared this in his videos as well, and I can completely relate - don't rely on AI summaries. Instead, use summaries as pointers and use AI to prioritize what content to review, but ALWAYS go to the actual content to learn. The moment you create dependency on summaries, levels of abstractions will separate understanding from reality and might result in painful and avoidable mistakes down the line.
ROI: Effort breaking even within days. This particular agent took me probably 4-5 hours to build end-to-end (including the optimizations), but it's now easily saving me 15 minutes every day of note templating, improves my accountability in avoiding lost ToDos, and enables me to have more meaningful conversations in each meeting.
Level 3: Agents identifying how to do things better
Now that I have an agent managing my notes and proactively preparing the day ahead, I'm working to have it generate insights and learnings from my meeting patterns and my relationships with the people I meet. I'm excited about the potential to not only offload what I previously did manually, but also to think differently, informed by my individual work habits and 10 years of meeting history.
Level 4: Agents working with each other
As I build more agents, I'm looking forward to having them interact and work together. Working with the Level 2 & 3 agents already feels more like managing a resource than coding a program: I can tell them where to improve, they remember their mistakes, and they even remind me when learnings from previous mistakes are expected to affect what I'm asking for. Level 4 takes this to the next stage of 'managing a team' of agents.
Level 5: Agents proactively addressing issues they are familiar with
While a little further out, I can see a future where my team of agents is able to address issues proactively without my intervention (likely with my review at first), so that I can focus primarily on strategic decision-making.
I'm curious to learn if/what AI implementations you find useful in your daily work. Feel free to hit me up on LinkedIn.