All Posts

Okay, one last thing I want to know about all these solutions: when I perform them, will I get any new or different data compared with what I am getting now? As of now I am getting timestamp, hostname, kubernetes.container name, etc.
Thanks for the help @tscroggins. I was able to get the result by calling the API, but I had to fill in the {search_id} manually. Is there a way to get the {search_id} through the endpoint, or do I have to retrieve it from a parameter in another GET request? I need this because it's a daily alert, and I would need to fetch the result through the API endpoint daily as well, in BTP IS.
Hello, if you have app-specific configuration (for example, after configuring an app through the HF web GUI for a specific site), is it still recommended to use a deployment server? That requires syncing/copying the HF's app/local configuration back to the deployment server's etc/deployment-apps/app/local to avoid deletions when the deployment server reloads or pushes an app update. I assume using a DS is good for centralizing the same configuration across HFs? https://docs.splunk.com/Documentation/Splunk/9.3.0/Updating/Createdeploymentapps "The only way to allow an instance to continue managing its own copy of such an app is to disable the instance's deployment client functionality. If an instance is no longer a client of a deployment server, the deployment server will no longer manage its apps."   Thanks.
@bishida  - Thanks for the details. 
The way auto-retry works is this: say your test run at 9:00 fails on the first attempt. If auto-retry is enabled, the test immediately tries again; this helps prevent a failed result from a very brief condition, like a network connectivity blip that lasts only a few seconds. If that second attempt also fails, the test run is marked as failed. Either way, the next test run takes place at 9:30, per your 30-minute schedule. It's also nice that the retry attempt doesn't count against your entitlement usage.
This sounds like a base use case for the OpenTelemetry Collector. When you run the OTel Collector on an EC2 instance, you stream host metrics like CPU, memory, disk, and network directly to Splunk Observability Cloud. Since the EC2 instance runs in AWS, you can also collect most of those same metrics through CloudWatch. The big difference is that the OTel Collector gives you high-resolution streaming metrics, which matters when correlating infrastructure metrics with application performance; the metrics coming from CloudWatch are much lower resolution by default. But yes, this approach means metrics go to two different places: Splunk Observability Cloud and CloudWatch. The Data Management tab in Observability Cloud provides guided instructions for installing the OTel Collector.
Stupid form editor adds extra CRs. Having trouble getting this search to work as desired. I've tried these two methods and can't get them to work:

eventtype="x" Name="x"
| fields Name, host
| dedup host
| stats count by host
| appendpipe [stats count | where count=0 | eval host="Specify your text here"]

and using the fillnull command. Here is my search:

index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106 (field2="20005") OR (field2="20006") OR (field2="20007") OR (field2="666")
| stats count(field2) by field2, field3
| sort count(field2)

In this case the value field2="666" does not exist in the results. Here are the results I get:

field2   field3                   count(field2)
20005    This is field3 value 1   2
20006    This is field3 value 2   6
20007    This is field3 value 3   13

To summarize, I want to search for all the values of field2 and return the count for each field2 value even if that value is not found in the search; count(field2) for field2=666 would then be 0, as follows:

field2   field3                   count(field2)
666      <empty string>           0
20005    This is field3 value 1   2
20006    This is field3 value 2   6
20007    This is field3 value 3   13

This is a simplified example. The actual use case is that I want to search one data set to return all the field2 values and then search for those values in the first data set. The actual search I'm running looks like this:

index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106 [search index=idx1 host=host1 OR host=host2 source=*filename*.txt field1=20250106 | fields field2 | dedup field2 | return 1000 field2]
| stats count(field2) by field2, field3
| sort count(field2)

I want to find all the field2 values when field1=20250106 and then find the counts of those values in the field1!=20250106 events, even when some field2 values have count=0 in the results.
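One common pattern for surfacing zero counts is to append a zero-count row per expected field2 value and then re-aggregate. This is only a sketch under the post's own index/host/source names; the exact field3 handling for the zero rows is an assumption:

```
index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106
    [search index=idx1 host=host1 OR host=host2 source=*filename*.txt field1=20250106
     | fields field2 | dedup field2 | return 1000 field2]
| stats count as count by field2, field3
| append
    [search index=idx1 host=host1 OR host=host2 source=*filename*.txt field1=20250106
     | stats count by field2
     | eval count=0]
| stats sum(count) as count, values(field3) as field3 by field2
| sort count
```

The appended subsearch contributes a count=0 row for every field2 value seen when field1=20250106, so values missing from the main search still appear with a total of 0 after the final stats.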
I need help with the Splunk query below:

index=XXX_XXX_XXX
| eval job_status=if( 'MSGTXT' = "*ABEND*","ko","ok")
| where job_status="ko"

If I change it to job_status="ok" it works, but not for the above condition. I'd appreciate any suggestions on this.   Regards
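A likely cause (an assumption, not confirmed in the post) is that eval's = operator compares against the literal string "*ABEND*" rather than treating * as a wildcard, so the "ko" branch never fires. A minimal sketch using match() for a containment test instead:

```
index=XXX_XXX_XXX
| eval job_status=if(match('MSGTXT', "ABEND"), "ko", "ok")
| where job_status="ko"
```

Alternatively, like('MSGTXT', "%ABEND%") expresses the same containment test with SQL-style wildcards.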
See https://community.splunk.com/t5/Dashboards-Visualizations/What-causes-quot-Search-auto-canceled-quot/m-p/452421 Also, check search.log for the canceled search to see if any messages explain why it was canceled.
Thanks @kiran_panchavat  So you are suggesting a fresh installation from the tgz file. Not sure why it worked for two hosts and now it won't, but I will give it a try. Also, I am assuming the command "chown -R splunk:splunk" can be replaced with "chown -R splunkfwd:splunkfwd", as that's the user name I am running the Splunk forwarder with.
So I am 99% there. New search:

index=xxxxx "Starting iteration" OR "Stopping iteration"
| timechart count span=15m by Series
| rex "Starting\siteration[\s\-]+(?<start_reg_id>[^\s]+)"
| rex "Stopping\siteration[\s\-]+(?<stop_reg_id>[^:\s]+)"
| eval Start_Reg_ID=start_reg_id
| eval Stop_Reg_ID=stop_reg_id

When I run it I get a count of 2, which is the start and the stop of the same ID. It shows the timestamp and a count of 2, and when I look at the events it is correct. What I need now is for it to tell me if the iteration took over 15 minutes. Maybe I need to redo the timespan or add more time commands; sorry, I am a newbie. I have got the result and it correlates the start and finish, but now how do I say "over 15 minutes is too long"?
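One way to flag long iterations is to skip timechart, extract the shared iteration ID from both event types, and measure the time spread per ID with stats range(_time). This is a hedged sketch assuming each ID has exactly one start and one stop event; 900 seconds is 15 minutes:

```
index=xxxxx "Starting iteration" OR "Stopping iteration"
| rex "(?:Starting|Stopping)\siteration[\s\-]+(?<reg_id>[^:\s]+)"
| stats range(_time) as duration by reg_id
| eval duration_min=round(duration/60, 1)
| where duration > 900
```

The final where clause keeps only iterations whose start-to-stop span exceeded 15 minutes; drop it to see durations for all IDs.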
Hello @KashifIbrahim , multiple things can cause this problem. First, check your Splunk and add-on versions: the latest Splunk versions only support Python 3.9, and the add-on's latest versions are only compatible with Python 3.9. Apart from that, check the splunkd logs to see what they say.
I am getting the same exception. Is there any solution to this mess?
Receiving a "Search auto-canceled" error while executing a one-month episode review. Please let us know if there is any quick solution.
Hi @ITWhisperer , please find the current query:

index="index1"
| search "slot"
| rex field=msg "VF\s+slot\s+(?<slot_number>\d+)"
| dedup msg
| sort _time, host
| stats range(_time) as downtime by host, slot_number
That sounds right 
What is your current query?
So I have looked at my events, and they do have a common unique ID on each start and stop event. Example:

Starting iteration - 17000000
Stopping iteration - 17000000

So I guess I need to extract that number and calculate a duration from it.
And the splunkd logs have the following errors:

WARN MongoClient [999733 KVStoreUpgradeStartupThread] - Disabling TLS hostname validation for localhost
ERROR KVStorageProvider [999733 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on '127.0.0.1:8191']
Hi @gcusello , how can this query help me exclude events from the particular IP addresses that have threat messages? In the lookup table, the IP addresses will be filled in manually by a user on a daily or weekly basis, but those IPs are to be excluded from the search. I am confused; please help me with the relevant query.
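A common exclusion pattern is a NOT subsearch over the lookup. This is only a sketch: the lookup name threat_ips.csv, the lookup field ip, and the event field src_ip are all assumed names to adapt to your environment:

```
index=your_index NOT [| inputlookup threat_ips.csv | rename ip as src_ip | fields src_ip]
```

The subsearch expands to (src_ip=... OR src_ip=...) over the lookup's current contents, so events matching any IP in the maintained lookup are excluded each time the search runs.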