How We Used Claude AI Skills to Build a Smarter, More Efficient Search Partner for Our Splunk System

durnan13
Explorer

The Problem We Were Solving

Our organization runs Splunk Cloud with a mix of index families, a fairly complex multi-layer application architecture, and a user base that ranges from power users writing tstats queries to people who are brand new to SPL. The challenge wasn't just "can AI write a Splunk search" — it was:

  • Can it write the right search for our environment?
  • Can we trust that it won't kick off a runaway scan against a production index with 90 days of data and no scope filter?
  • Can it know that realm= is the correct scoping identifier in our kube indexes but host= is what you want in our ivue indexes — and that these two values are not interchangeable?

Out-of-the-box AI assistants don't know any of that, and without some assistance our end users were struggling to get off the ground. We needed a way to encode that knowledge.
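To make the "right search for our environment" idea concrete, here is an illustrative sketch of the difference between a search our guardrails would reject and ones they would approve. The index names and scope values are hypothetical, invented to match the conventions described above:

```
Rejected (no time bounds, no scope filter — a potential runaway scan):
    index=kube_prod error

Approved (bounded time range, correct scoping identifier for a kube index):
    index=kube_prod realm=checkout earliest=-4h latest=now error

Approved (the ivue family scopes by host=, not realm=):
    index=ivue_prod host=ivue-app-01 earliest=-4h latest=now error
```

The point is not the specific values but that the assistant needs to know which scoping field belongs to which index family, and that every search needs explicit time bounds.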


What Claude Skills Are

Claude (Anthropic's AI) has a feature called Skills — essentially structured knowledge files you attach to a Claude Project or Plugin. Skills are Markdown documents that the model reads as authoritative instructions and reference material. They're not training — they're live context loaded into every conversation.

This matters because it means you can give Claude your organization's actual knowledge, rules, and constraints, and it will apply them consistently. You're not hoping the model "remembers" best practices from its training data. You're telling it, explicitly, what your environment looks like and how you want it to behave.
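To show what "telling it explicitly" can look like, here is a sketch of how a safety rule might be written inside a skill file. The rules and names below are illustrative examples, not our actual file:

```markdown
## Search safety rules (always apply)

- Never run a search without an explicit `earliest=` bound; default to `earliest=-4h`.
- Never search `index=*`. Ask the user which index family they mean.
- `kube_*` indexes scope by `realm=`; `ivue_*` indexes scope by `host=`.
  These identifiers are not interchangeable.
- Before running any search that would scan more than 24h of a production
  index, show the SPL to the user and ask for approval first.
```

Because the skill is loaded as live context in every conversation, rules written this plainly get applied consistently rather than being left to the model's general training.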


What We Built: A Modular Skill File Architecture

We built a Claude Project with a set of skill files organized around a master router (SKILL.md) that tells Claude which specialized skills to load based on the type of request.

The architecture looks like this:

SKILL.md                              ← Master router
├── splunk-core.md                    ← Always loaded — safety rules, approval framework
├── splunk-indexes.md                 ← Index inventory, retention, "is this index active?"
├── splunk-search-writing.md          ← SPL patterns, macros, field reference
├── splunk-dashboards-viz.md          ← Dashboard Studio JSON generation
├── splunk-member-investigations.md   ← Member scoping, lookup workflows
├── splunk-operations.md              ← Alerts, reports, rehydration processes
├── splunk-training.md                ← Adaptive onboarding for new users
└── references/
    ├── connector-limitations.md      ← MCP connector quirks and guardrails
    ├── cross-layer-investigation.md  ← index pivot workflow
    └── user-token-setup.md           ← Token provisioning guide

Each file is scoped to a specific domain. The router tells Claude which combination to load based on what the user is asking. Asking to write a search loads splunk-core + splunk-search-writing. Investigating a member issue loads splunk-core + splunk-member-investigations + splunk-search-writing. Building a dashboard adds splunk-dashboards-viz. We are also rolling this into a plugin for use with Claude Code.
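The routing itself is just instructions in the master file. A simplified sketch of what a SKILL.md routing section might contain (the wording and exact combinations here are illustrative):

```markdown
## Routing

| Request type          | Skills to load                                                     |
|-----------------------|--------------------------------------------------------------------|
| Write/modify a search | splunk-core + splunk-search-writing                                |
| Member investigation  | splunk-core + splunk-member-investigations + splunk-search-writing |
| Build a dashboard     | splunk-core + splunk-search-writing + splunk-dashboards-viz        |
| New-user onboarding   | splunk-core + splunk-training                                      |

Always load splunk-core first; its safety rules override the other files.
```

Keeping the router small and pushing detail into the specialized files is what keeps the context manageable as the skill set grows.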


From this we are aiming at two parts:
Part 1: Guardrails on best practices
Part 2: Encoding our Splunk structure (what each index or sourcetype is for)
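For Part 2, the encoding is mostly plain-language reference material. A hypothetical excerpt from an index inventory file like splunk-indexes.md (all values invented for illustration) might read:

```markdown
| Index      | Purpose                        | Retention | Scope by | Active? |
|------------|--------------------------------|-----------|----------|---------|
| kube_prod  | Production Kubernetes app logs | 90 days   | realm=   | Yes     |
| ivue_prod  | iVue application tier logs     | 30 days   | host=    | Yes     |
| legacy_web | Decommissioned web tier        | 7 days    | host=    | No      |
```

A table like this is what lets the assistant answer "is this index active?" and pick the right scoping field without guessing.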

We are running the Splunk MCP server here, connected to our Claude system. I won't lie, the initial setup took a while: we had to provision each user with an individual token in the MCP server, unlike setups at other companies that let end users manage their own tokens without gaining access to global admin settings.
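For reference, on Splunk Enterprise an admin can mint per-user authentication tokens against the REST API. This is a generic sketch with a hypothetical host, user, and credentials; on Splunk Cloud, token management typically goes through the UI or the Admin Config Service instead, so verify the endpoint and parameters against your version's documentation:

```shell
# Create an authentication token for one user (Splunk Enterprise REST API).
# Hypothetical host/credentials; run once per user who needs MCP access.
curl -k -u admin:changeme \
  https://splunk.example.com:8089/services/authorization/tokens \
  -d name=jdoe \
  -d audience=claude-mcp \
  -d expires_on=+90d
```

Even scripted, doing this per user is the administrative overhead mentioned above; it is why self-service token management would be the friendlier model at scale.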

All in all, this has helped our users grow from basic searchers into more informed and efficient ones. Their confidence is growing as they learn from the training aspect of this setup. We are still in the testing phase, but for us it is a game changer. Curious how many other Splunk users are exploring this part of the AI world!

durnan13
Explorer

Hi @inventsekar,

Now, I will say that this project on our end is large, with all our information, so I did have it summarize the points. I then copied them into Word, reviewed every point, and added details where I felt they were important. I will take all the help I can get with the initial typing, but it is always important to review before posting. I even had a teammate who has been working with me on this effort double-check it to make sure it didn't include incorrect information.

0 Karma

inventsekar
SplunkTrust
SplunkTrust

Dear @durnan13 

my initial reply was a "just kidding" kind of message, pls do not take it seriously, no hard feelings please. 

i really appreciate the good and formatted write-up. keep it up, thanks. 

0 Karma

durnan13
Explorer

Oh, I completely laughed at your reply haha. No hard feelings here 😀! Thanks for the feedback! We have hit a couple of issues with the connection between Splunk and Claude and are working through those, and as I mentioned in the post, the MCP server setup isn't the friendliest for an admin setting up 500 tokens, but hey, it's a start!

0 Karma

inventsekar
SplunkTrust
SplunkTrust

Hi @durnan13 

may we know how we can make sure this post is "NOT" written by Claude AI itself 😉

0 Karma