Our organization runs Splunk Cloud with a mix of index families, a fairly complex multi-layer application architecture, and a user base that ranges from power users writing tstats queries to people who are brand new to SPL. The challenge wasn't just "can AI write a Splunk search" — it was:
How do we teach it that realm= is the correct scoping identifier in our kube indexes, but host= is what you want in our ivue indexes, and that the two values are not interchangeable? Out-of-the-box AI assistants don't know any of that, and without some help our end users were struggling to get off the ground. We needed a way to encode it.
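To make that concrete, here is the kind of rule a skill file can encode. The index and field values below are illustrative, not our real ones:

```markdown
## Index scoping rules
- kube indexes: always scope with `realm=`, e.g. `index=kube_app realm=prod-east error`
- ivue indexes: always scope with `host=`, e.g. `index=ivue_app host=ivue-web-01 error`
- Never swap these: filtering a kube index on `host=` runs without error
  but quietly returns the wrong events.
```

Once this lives in a skill file, the assistant applies it to every search it writes instead of guessing from generic Splunk knowledge.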
Claude (Anthropic's AI) has a feature called Skills — essentially structured knowledge files you attach to a Claude Project or Plugin. Skills are Markdown documents that the model reads as authoritative instructions and reference material. They're not training — they're live context loaded into every conversation.
This matters because it means you can give Claude your organization's actual knowledge, rules, and constraints, and it will apply them consistently. You're not hoping the model "remembers" best practices from its training data. You're telling it, explicitly, what your environment looks like and how you want it to behave.
We built a Claude Project with a set of skill files organized around a master router (SKILL.md) that tells Claude which specialized skills to load based on the type of request.
The architecture looks like this:
SKILL.md ← Master router
├── splunk-core.md ← Always loaded — safety rules, approval framework
├── splunk-indexes.md ← Index inventory, retention, "is this index active?"
├── splunk-search-writing.md ← SPL patterns, macros, field reference
├── splunk-dashboards-viz.md ← Dashboard Studio JSON generation
├── splunk-member-investigations.md ← Member scoping, lookup workflows
├── splunk-operations.md ← Alerts, reports, rehydration processes
├── splunk-training.md ← Adaptive onboarding for new users
└── references/
    ├── connector-limitations.md ← MCP connector quirks and guardrails
    ├── cross-layer-investigation.md ← Index pivot workflow
    └── user-token-setup.md ← Token provisioning guide

Each file is scoped to a specific domain, and the router tells Claude which combination to load based on what the user is asking. Asking to write a search loads splunk-core + splunk-search-writing. Investigating a member issue loads splunk-core + splunk-member-investigations + splunk-search-writing. Building a dashboard adds splunk-dashboards-viz. We're also rolling this into a plugin for Claude Code.
The skill set breaks down into two parts:
Part 1: Guardrails on Best Practices
Part 2: Encoding Our Splunk Structure (the purpose of each index and source type)
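For Part 1, the always-loaded core file is where guardrails live. A sketch of the kind of rules it can hold (ours are more detailed; this wording is illustrative):

```markdown
## Search safety rules (splunk-core.md)
- Never produce an unbounded search: every search must name an index
  and a time range (`earliest=`/`latest=` or the time picker).
- Prefer `tstats` over raw searches against high-volume indexes.
- Ask the user for approval before running any search that scans
  more than 24 hours of data.
```

Because splunk-core.md loads on every request, these rules apply no matter which specialized skill handles the rest of the task.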
We are running the Splunk MCP server connected to our Claude setup. I won't lie: the initial setup took some effort. Provisioning each user with an individual token in the MCP server is more work than at other companies, where end users can manage their own tokens without gaining access to global admin settings.
All in all, this has taken our users from basic users to more informed and efficient ones. Their confidence is growing as they learn from the training aspect of this setup. We are still in the testing phase, but for us it is a game changer. Curious how many other Splunk users are exploring this part of the AI world!
Hi @inventsekar,
Now, I will say that this project on our end is large with all our information, so I did have it summarize points, but then I copied it into Word, reviewed all the points, and added details where I felt they were important. I will take all the help I can get with the initial typing, but it is definitely important to review before posting. I even had a teammate who has been working with me on this effort double-check it to make sure it didn't include incorrect information.
Dear @durnan13
my initial reply was a "just kidding" kind of message, please do not take it seriously, no hard feelings.
i really appreciate the good, well-formatted write-up. keep it up, thanks.
Oh, I completely laughed at your reply, haha. No hard feelings here 😀! Thanks for the feedback! We have hit a couple of issues with the connection between Splunk and Claude and are working through those, and as I mentioned in the post, the MCP server setup isn't the friendliest for an admin setting up 500 tokens. But hey, it's a start!
Hi @durnan13
may we know how we can make sure this post is "NOT" written by Claude AI itself 😉