All Posts

Hey, thanks for your response. With this Azure cluster, we receive latency alerts all the time. Comparing the installed and available resources to the recommended specs, we are under-resourced. We have a clustered environment, with clusters at two different locations for both the search heads and the indexers. We have 10 physical indexers, each with about 60 TB of storage, 48 cores, and 96 logical CPUs. Compare those to the Azure indexers, which have 4 cores and 8 logical CPUs.

Our Azure cluster has been giving us latency warnings for a few years now, even after a few upgrades. Now that I am more comfortable with our environment, I want to finally upgrade the CPU and memory to the recommended values.

We have 6 Azure indexers, all of which have latency issues at some point or another. We have about 100 indexes, our top 3 sources ingest 600 GB daily, and we average about 1.7 TB a day.

To sum up, these indexers are under-resourced and need more CPU. Thanks
Yeah, I have played with those modes and they help in many cases; nowadays you can do things that weren't possible with e.g. the 7.x versions. They are described here: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/PropagateSHCconfigurationchanges But out of curiosity, how do you manage them in this case? That is, how do you push all configurations from the old SHC initially, and later, when a user removes e.g. an alert from the SHC via the GUI, keep it from popping up again after the next apply without you first removing it from the deployer?
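For reference, the push mode can be set per app on the deployer; a minimal sketch, assuming the deployer_push_mode setting described on the page linked above (check that page for the exact values your version supports):

    # app.conf inside the app directory under $SPLUNK_HOME/etc/shcluster/apps/<app_name>/ on the deployer
    [shclustering]
    deployer_push_mode = merge_to_default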
Hello, this is Krishna. I have been doing a POC on accessing Splunk logs through the REST APIs. I was able to call the REST APIs on a Splunk Enterprise instance, but in my company we have Splunk Cloud, and I am unable to call the REST APIs the way I did on Splunk Enterprise. I would like to know how I can call the Splunk REST APIs on the Cloud edition.

Below are my findings: on my local instance of Splunk, when I hit https://localhost:8089/services it lists all the services available (it asked me for admin credentials, which I provided). I am interested in https://localhost:8089/services/search/jobs and would like to call the equivalent endpoint on the Cloud version.

Thanks in advance.
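For illustration, a search-job call against the endpoint mentioned above looks roughly like this; the hostname, credentials, and search are placeholders, and on Splunk Cloud the management port (8089) typically has to be opened for your deployment first (e.g. via a support request or the Admin Config Service IP allow list) before such calls will work:

    curl -k -u 'apiuser:password' \
      https://<deployment-name>.splunkcloud.com:8089/services/search/jobs \
      -d search="search index=_internal | head 5" \
      -d output_mode=json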
Hello Splunk Community! Welcome to another week of fun curated content as part of our Splunk Answers Community Content Calendar! This week's posts are about regex extraction, Search & Reporting, and Splunk Enterprise. We will be highlighting some of the Splunk users and experts who contributed on the Splunk Search board.

Splunk Community Spotlight: Extracting FQDN from JSON Using Regex

Regex can be a tricky beast, especially when you're trying to extract specific values from structured fields like JSON. Fortunately, the Splunk community continues to come together to solve these common challenges.

The Challenge
Here we're highlighting a helpful interaction from the Splunk Search board on Splunk Answers, where Karthikeya asked for assistance refining a regex pattern to extract a fully qualified domain name (FQDN) from a JSON field.

The Solution
The solution was simple, effective, and more broadly applicable. Gcusello explains that the regex captures everything after the prefix (such as v-) up until the trailing port number, assigning it to the field fqdn. The solution was also accompanied by a link to Regex101, allowing others to test and validate the expression. This improved version is more readable and reliable across similar patterns.

Field extraction is a foundational task in Splunk, whether you're indexing logs, building dashboards, or writing alerts. Getting it right the first time helps ensure your data is usable and searchable in meaningful ways. Regex plays a crucial role in this, but without best practices, you may end up with inconsistent results. Thanks to contributors like gcusello, users gain cleaner, more maintainable solutions that benefit the entire community.

Splunk Community Spotlight: Joining Data Across Indexes with Field Coalescing

In the fast-paced world of network monitoring and security, Splunk users often need to correlate data across multiple indexes. In this week's Splunk Search board, we're diving into a great question raised by MrGlass and how yuanliu and gcusello helped unlock the solution with a clean and efficient approach.

The Challenge
While trying to correlate data between two indexes, the user found that running the search over a larger time window produced inconsistent results, likely due to the structure of the join and differences in field naming.

The Solution
Gcusello provided a great solution and yuanliu refined it to help the user. Rather than renaming fields individually or relying on joins (which can break or jumble data across time windows), both of them offered a streamlined approach using conditional logic to cleanly unify the data.

Working with multiple data sources in Splunk is common, especially in environments where network telemetry and authentication logs live in different places. Field normalization using coalesce() ensures your queries remain flexible, efficient, and easier to manage at scale (see the illustrative sketch after this post).

Key Takeaways
Splunk's community is a powerful learning resource, from regex optimizations to full-scale deployment strategies. If you're ever stuck, don't hesitate to ask a question on Splunk Answers, and you might just find a simpler, better way to solve your challenge.

Shout-Out
Big thanks to Karthikeya & MrGlass for raising the issues and to gcusello & yuanliu for providing such clean and effective solutions. This is the kind of collaborative problem-solving that makes the Splunk Community thrive! Would you like to feature more solutions like this?
Reach out @Anam Siddique on Slack or @Anam on Splunk Answers to highlight your question, answer, or tip in an upcoming Community Content post! Beyond Splunk Answers, the Splunk Community offers a wealth of valuable resources to deepen your knowledge and connect with other professionals! Here are some great ways to get involved and expand your Splunk expertise:

Role-Based Learning Paths: Tailored to help you master various aspects of the Splunk Data Platform and enhance your skills.
Splunk Training & Certifications: A fantastic place to connect with like-minded individuals and access top-notch educational content.
Community Blogs: Stay up-to-date with the latest news, insights, and updates from the Splunk community.
User Groups: Join meetups and connect with other Splunk practitioners in your area.
Splunk Community Programs: Get involved in exclusive programs like SplunkTrust and Super Users where you can earn recognition and contribute to the community.

And don't forget, you can connect with Splunk users and experts in real-time by joining the Slack channel. Dive into these resources today and make the most of your Splunk journey!
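The coalescing approach mentioned in the spotlight above looks, in rough outline, like the following; the index and field names here are placeholders rather than the original thread's search:

    index=network_telemetry OR index=auth_logs
    | eval user=coalesce(src_user, user_name)
    | stats count latest(_time) as last_seen by user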
"Then you must remember that if/when you have deploy those apps via deployer into SHC members you push everything into default directories. This means that if users have previously created those e.g.... See more...
"Then you must remember that if/when you have deploy those apps via deployer into SHC members you push everything into default directories. This means that if users have previously created those e.g. alerts by GUI then they cannot remove those what you have deployed by deployer. They can change those but admins must remove those from deployer and then push again into members." That's not entirely true. Whether the settings get pushed into default or local depends on the push mode. But yes, migrating contents is something that's best done with a bit of thinking, not just blindly copying files from point A to B.
No. You have 8 vCPUs, whereas the minimum indexer specs show 24 vCPUs. Actually it's a bit more complicated than that. Since most if not all modern CPUs include hyper-threading, the OS shows more "CPUs" than the die actually has cores. It's not easy to calculate the performance of such a setup, though, since hyper-threading works pretty well with multithreaded applications but doesn't give much of a performance boost with many single-threaded ones. Anyway, you most probably have a virtualized 4-core, 2-threads-per-core CPU, which is really low-end for a production indexer. Yes, you can run a Splunk lab at home on 4 cores, but if you want to put any reasonable load on the box, you'll have a lot of problems. As a rule of thumb, Splunk uses 4-6 vCPUs just for a single indexing pipeline, so there's not much juice left in your hardware for searching. This box is really undersized.
I want to get the Splunk log as the actual log message on the Splunk server. I am currently getting all the fields (instant, thread, level, message attributes) in Splunk, but I don't want to receive it like that.
Hi, I have a JSON message with 4-5 JSON key/value pairs. I want to remove some of the fields and modify the body before sending it to the Splunk server. In the OTel collector I tried using the filelog receiver to modify the body, and transform log statements to set the body. My JSON contains:

Body: Str({"instant":{"epochSecond":1747736470,"nanoOfSecond":616797000},"thread":"ListProc-Q0:I0","level":"DEBUG", "message":"{actual log message}})

My requirement is: I want to remove the instant, thread, and level fields and send only the JSON value of the message field, which comes dynamically. The updated body is printed in the debug log, but the Splunk server is still showing the original body as is. The transform I used:

    transform:
      log_statements:
        - context: log
          statements:
            - 'set(body, ParseJSON(body)["message"])'
            - 'set(body, "extracted")'

But my Splunk server is still showing the original body as is. Can someone please help me with this issue?
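One thing worth checking, sketched below under the assumption that the filelog receiver and the splunk_hec exporter are in use: the transform processor only takes effect if it is listed among the processors of the logs pipeline, so wiring along these lines is needed (the name transform/logbody is a placeholder):

    processors:
      transform/logbody:
        log_statements:
          - context: log
            statements:
              - set(body, ParseJSON(body)["message"])

    service:
      pipelines:
        logs:
          receivers: [filelog]
          processors: [transform/logbody]
          exporters: [splunk_hec]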
Thank you for the insight! I made this modification on my dashboard. However, the drilldown is still not accurately setting the token. 
You said that the indexers are slow, but how did you reach that conclusion? And does it mean that there is a lack of CPU, memory, or IO resources? Before you start to increase the size of the servers, you must understand what exactly your situation is:

How much are you indexing daily?
How many queries do you have?
How many indexers do you have?
What kind of topology do you have?
Do you have SmartStore in use?
What kind of nodes do you have, and what node types?
What storage do you have?
What are your metrics for CPU, memory, and IO?
How many indexes do you have?
etc.
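For the first point, daily ingest can usually be checked with a standard license-usage search along these lines (a sketch; run it where the license/monitoring data is available and adjust the time range as needed):

    index=_internal source=*license_usage.log type=Usage
    | timechart span=1d sum(b) as bytes
    | eval GB=round(bytes/1024/1024/1024,2)
    | fields _time GB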
I had a few questions about how the DMC gathers its information for server specifications and how to extract it. I am trying to add resources to our slower indexers, and I am trying to follow the official resource docs here.

To begin, I am viewing my infrastructure through the DMC view: Monitoring Console -> Settings -> General Setup. In this view, my server is said to have 4 cores and 15884 MB of memory. These are Azure VMs, so they use vCPUs. This one specifically is an Azure VM indexer. We have latency issues with these, and I believe it's due to them being under-resourced.

Looking at the CPU specs of my server and comparing the other hosts from both sources, I have reached the conclusion that total cores = cores per socket x sockets (this makes sense). From some references I was looking at, it seemed like I needed to be looking at the CPU(s) value from the lscpu command. For this example, this server has 8 vCPUs, and it's recommended we have at least 12. When upgrading, do I need to focus on the cores, or do I just need to specify how many vCPUs I need?

I also wanted to know how I could extract this view from the DMC. I want to table all of the resource metrics so I can get a full picture of what I have in my Splunk environment. Thank you for any guidance or insight.
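Regarding extracting the view, the per-instance hardware figures that the DMC displays can usually be pulled with a REST search along these lines; treat this as a sketch, since the exact field names can vary between versions:

    | rest splunk_server=* /services/server/info
    | table splunk_server numberOfCores numberOfVirtualCores physicalMemoryMB cpu_arch os_name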
first is "closer" to dedup since it keeps the first event in the event pipeline for each unique value of the dedup'd field(s)
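As a rough illustration of that comparison (the index and field names are placeholders), the two forms are along the lines of:

    index=web | dedup host
    index=web | stats first(_raw) as _raw first(_time) as _time by host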
As the Edge Processor is not based on the Splunk Enterprise source code / the same framework, there isn't the same kind of diag as Enterprise or the UF have. Usually Splunk Support should also have instructed you on how that diagnostic package should be created when they asked for it. Here are general instructions on how you can create it: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/EdgeProcessor/Troubleshooting
That was just what I said earlier: you must set it manually in the smaller SHC. But I still propose that you install another SHC environment from scratch and migrate the needed apps and users from the old SHC. That way you will probably avoid some issues later on!
Hi @hv64  Yes, you can connect Splunk to SAP HANA using Splunk DB Connect by placing the SAP HANA JDBC driver (ngdbc.jar) in $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers.

Download the SAP HANA JDBC driver (ngdbc.jar) from SAP.
Copy ngdbc.jar to $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers.
Restart Splunk.
In Splunk DB Connect, create a new database connection - I believe you need to select one of the "Generic JDBC" database types from the list, as HANA isn't specifically listed.
Edit the JDBC URL to: jdbc:sap://<host>:<port>
Choose the ngdbc.jar as the driver.

Make sure the driver version matches your SAP HANA server version. There are some other community posts that might also help, but be warned that some of these are 10 years old!
https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-DB-Connect-connection-to-Hana/m-p/311647
https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-integrate-Splunk-with-SAP-HANA/m-p/157993

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing

Relevant documentation:
Splunk DB Connect: Install database drivers
Splunk DB Connect: Set up database connections
That's a good point @PickleRick - for some reason I've always used latest, mainly in case there is any reason that events don't get returned with the most recent first (e.g. sorting of some sort, changes to _time, lookups, appends etc), but I suppose stats can stop looking after the first event when using first(), whereas it could read all events to check which is still the "latest". I might try this on a big dataset to see if it makes much difference!
There are clear and simple steps for how to migrate users, too, from an old SHC or a single node to a new SHC via the deployer: https://docs.splunk.com/Documentation/Splunk/9.4.1/DistSearch/Migratefromstandalonesearchheads I have done this a couple of times in several environments, both for apps and users. And don't migrate anything from Splunk's own system apps like search! Then you must remember that if/when you deploy those apps via the deployer onto the SHC members, you push everything into the default directories. This means that if users have previously created e.g. alerts via the GUI, they cannot remove the ones you have deployed with the deployer. They can change them, but admins must remove them from the deployer and then push again to the members. That seems to be quite a common question from the users' side in these kinds of cases!
@PickleRick @isoutamo @livehybrid @kiran_panchavat  The test found that when some nodes are isolated, the isolated nodes will not elect a new captain, because election requires more than half of the total number of nodes. And the captain cannot be manually specified. The following is the returned information: "This node is not the captain of the search head cluster, and we could not determine the current captain. The cluster is either in the process of electing a new captain, or this member hasn't joined the pool." https://docs.splunk.com/Documentation/Splunk/9.4.2/DistSearch/SHCarchitecture#Captain_election
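As a side note on the loss-of-majority case, the SHC documentation also describes a static-captain recovery mode in which a captain is designated manually until a majority can be restored; a rough sketch of the documented commands (URIs are placeholders, and dynamic election should be switched back on once the cluster is healthy again):

    # on the member that should act as captain
    splunk edit shcluster-config -mode captain -captain_uri https://sh1.example.com:8089 -election false
    # on each remaining member
    splunk edit shcluster-config -mode member -captain_uri https://sh1.example.com:8089 -election false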
As a side note, completely irrelevant to the original problem - I'm wondering whether there will be any noticeable performance difference between first(something) and latest(something) in case of a default base search returning results in reverse chronological order.
Also, could you add how you did this update? Did you just update node by node, or did you install new nodes and migrate Splunk to them somehow?