All Posts

Hi everyone, I'm running a query in Splunk using the dbxquery command and received the following error:

Error in 'script': Getinfo probe failed for external search command 'dbxquery'.

When I check Apps -> Manage Apps -> Splunk DB Connect, I see the version is 2.4.0. Please help me identify the cause and how to fix this error. Thank you!
Thank you for your response, it has solved my problem!
Not sure if I fully understand the requirement. But in general, you can assign a non-null string to those fields. For example:

| eval MX = coalesce(MX, "MX is null")

The issue, I suspect, is that when you transpose, all those values representing null will collapse and skew the format. Is this the problem? If so, you can force these values to be different, e.g.,

| eval MX = coalesce(MX, "MX is null for " . FQDN)

Hope this helps.
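If many of the lookup fields need the same treatment, foreach can save repetition. A minimal sketch, assuming MX, NS_IP and ASN stand in for your actual column names:

| foreach MX NS_IP ASN
    [ eval <<FIELD>> = coalesce('<<FIELD>>', "null for " . FQDN) ]
| transpose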
Try using the match function to test the field value.

index=XXX_XXX_XXX
| eval job_status=if(match('MSGTXT', "ABEND"), "ko", "ok")
| where job_status="ko"
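If you prefer SQL-style wildcards instead, like() does the same job. A quick sketch with the same field names:

index=XXX_XXX_XXX
| eval job_status=if(like('MSGTXT', "%ABEND%"), "ko", "ok")
| where job_status="ko"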
How can this be used to find the last 10 events in chronological order?   
Hi, can anyone please help in creating a regex to extract the first 12 words (words with letters/characters only) from the beginning of the field? Sharing a few samples with the required output:

1) 00012243asdsfgh - No recommendations from System A. Message - ERROR: System A | No Matching Recommendations
Required output: No recommendations from System A. Message - ERROR: System A | No Matching Recommendations

2) 001b135c-5348-4arf-b3vbv344v - Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1; ::: Other details - 001sss-445-4f45-b3ad-gsdfg34 - Incorrect page and placement found: Channel1;
Required output: Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1;

3) 00assew-34df-34de-d34k-sf34546d :: Invalid requestTimestamp : 2025-01-21T21:36:21.224Z
Required output: Invalid requestTimestamp

4) 01hg34hgh44hghg4 - Exception while calling System A - null
Required output: Exception while calling System A - null
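Not a full solution, but one possible starting point is to strip the leading id token and its separator and keep the rest. A hedged sketch, assuming the field is called message (samples 2 and 3 would still need extra trimming of the trailing ":::" / " : " parts):

| rex field=message "^\S+\s*(?:-|::)+\s*(?<clean_text>.+)"
| table message, clean_text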
Hello, I have a question about the SH deployer and search heads. We have three search heads in a cluster, and at some point the deployer connection got disconnected; now I am trying to reconnect it. Let me know what needs to be done. Is it just that we need to match the password of all search heads with the deployer?

Configurations I currently see:

On each search head (1/2/3), /opt/splunk/etc/system/local/server.conf:

[shclustering]
conf_deploy_fetch_url = https://XXXXXX:8089
disabled = 0
mgmt_uri = https://XXXXXXX:8089
replication_factor = 2
shcluster_label = shcluster1
id = 1F81D83B
manual_detention = off

On the deployer, /opt/splunk/etc/system/local/server.conf:

[shclustering]
shcluster_label = shcluster1
pass4SymmKey = XXXXXXX

Thanks in advance for your help!
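For reference, one possible path (assuming the deployer's plain-text pass4SymmKey is known): add the same key, alongside the fetch URL, to each member's server.conf under [shclustering], then restart the member. A sketch with placeholder values:

# $SPLUNK_HOME/etc/system/local/server.conf on each search head member
[shclustering]
pass4SymmKey = <same plain-text value as set on the deployer>
conf_deploy_fetch_url = https://<deployer-host>:8089

The same thing can usually be done from the CLI with splunk edit shcluster-config -conf_deploy_fetch_url and -secret on each member.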
We have a lookup that has all kinds of domain (DNS) information in it, with about 60 fields like create date, ASN, name server IP, MX IP, many of which are usually populated. But there are several fields which have no data - 10 to 20 on any given search (assuming they are 'null'). The empty fields are likely to vary on each search. In other words, some domains will have an MX record and some will not, but if they are in this lookup, they will always have a create date. I am presenting this data on a domain lookup dashboard, using "|transpose" so that you have a table with the field name and value on the dashboard. I would like to show only the fields and values where there is returned data, and filter out (not show) any field which is null. Is there a way to do this?
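One approach that might fit: transpose first, then filter out rows whose value came back empty. A sketch assuming the search returns a single domain row before the transpose:

... your search returning one domain ...
| transpose column_name=field
| rename "row 1" as value
| where isnotnull(value) AND value!=""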
OK, the last thing I want to know about all these solutions: when I perform them, will I get any new or different data from what I am getting now? Because as of now I am getting timestamp, hostname, kubernetes.container name, etc.
Thanks for the help @tscroggins. I was able to get the result calling the API, but I had to fill in the {search_id} manually. Is there a way to get the {search_id} through the endpoint, or do I have to retrieve it from a parameter in another GET request? I need this because it's a daily alert and I would need to get the result through the API endpoint daily as well, in BTP IS.
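If the daily alert runs as a scheduled saved search, one way that might avoid hard-coding the sid is the saved search's dispatch history endpoint. A sketch with placeholder host, credentials and alert name:

# each entry returned here is a dispatched job; its name is the search_id (sid)
curl -k -u <user>:<password> "https://<splunk-host>:8089/services/saved/searches/<alert_name>/history?output_mode=json"

# then pull that job's results with the sid picked from the response
curl -k -u <user>:<password> "https://<splunk-host>:8089/services/search/jobs/<sid>/results?output_mode=json"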
Hello, if you have app-specific conf on a heavy forwarder (for example after configuring it through the HF web GUI for a specific site), is it still recommended to use the deployment server, given that this requires syncing/copying the HF's app/local conf back to the deployment server's etc/deployment-apps/app/local to avoid any deletion when reloading the deployment server or pushing an app update from the DS? I guess using the DS is good for centralizing the same configurations across HFs? https://docs.splunk.com/Documentation/Splunk/9.3.0/Updating/Createdeploymentapps "The only way to allow an instance to continue managing its own copy of such an app is to disable the instance's deployment client functionality. If an instance is no longer a client of a deployment server, the deployment server will no longer manage its apps." Thanks.
@bishida  - Thanks for the details. 
The way auto-retry works is like this: Let’s say your test run at 9:00 fails on the first attempt. If auto retry is enabled, the test will immediately try again. This is to help prevent a failed test from some very brief condition like a network connectivity blip that only lasts a few seconds. If that second attempt fails again, then that test run is marked as failed. Either way, the next test run will take place at 9:30 based on your 30 minute schedule. It’s also nice that the “retry attempt” doesn’t count against your entitlement usage.
This sounds like a base use-case for the OpenTelemetry collector. When you run the OTel collector on an EC2, you’ll be streaming host metrics like cpu, memory, disk, and network directly to Splunk Observability Cloud. Since the EC2 is running in AWS, you’re also able to collect most of those same metrics through CloudWatch. The big difference is the OTel collector gives you the ability to collect high-resolution streaming metrics. This is important when correlating infrastructure metrics to application performance. The metrics coming from CloudWatch will be much lower resolution by default. But yes, this approach means that metrics are going to 2 different places: Splunk Observability Cloud and CloudWatch. The Data Management tab in Observability Cloud will give you guided instructions for installing the OTel collector.
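For reference, a minimal sketch of the host-metrics part of a collector config, assuming the signalfx exporter from the Splunk distribution and placeholder token/realm values (the guided install in the Data Management tab normally generates this for you):

receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      cpu:
      memory:
      disk:
      network:

exporters:
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"   # placeholder
    realm: "${SPLUNK_REALM}"                 # placeholder, e.g. us1

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [signalfx]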
Stupid form editor adds extra CRs. Having trouble getting this search to work as desired. I've tried these 2 methods and can't get them to work:

eventtype="x" Name="x"
| fields Name, host
| dedup host
| stats count by host
| appendpipe [stats count | where count=0 | eval host="Specify your text here"]

and using the fillnull command. Here is my search:

index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106 (field2="20005") OR (field2="20006") OR (field2="20007") OR (field2="666")
| stats count(field2) by field2, field3
| sort count(field2)

In this case the value field2="666" does not exist in the results. Here are the results I get:

field2    field3                   count(field2)
20005     This is field3 value 1   2
20006     This is field3 value 2   6
20007     This is field3 value 3   13

To summarize, I want to search for all the values of field2 and return the counts for each field2 value even if the field2 value is not found in the search; so count(field2) for field2=666 would be 0. As follows:

field2    field3                   count(field2)
666       <empty string>           0
20005     This is field3 value 1   2
20006     This is field3 value 2   6
20007     This is field3 value 3   13

This is a simplified example. The actual use case is that I want to search one data set, return all the field2 values, and then search for those values in the first data set. The actual search I'm running looks like this:

index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106
    [search index=idx1 host=host1 OR host=host2 source=*filename*.txt field1=20250106 | fields field2 | dedup field2 | return 1000 field2]
| stats count(field2) by field2, field3
| sort count(field2)

I want to find all the field2 values when field1=20250106 and then find the counts of those values in the field1!=20250106 events (even when the count for some field2 values is 0 in the results).
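One way that might get you the zero rows, reusing your own subsearch: append one zero-count row per expected field2 value and then sum. A sketch with the field names from your example:

index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106
    [search index=idx1 host=host1 OR host=host2 source=*filename*.txt field1=20250106
     | fields field2 | dedup field2 | return 1000 field2]
| stats count as hits by field2, field3
| append
    [search index=idx1 host=host1 OR host=host2 source=*filename*.txt field1=20250106
     | dedup field2 | eval hits=0 | fields field2, hits]
| stats sum(hits) as count, values(field3) as field3 by field2
| sort count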
I need help with the below Splunk query:

index=XXX_XXX_XXX
| eval job_status=if('MSGTXT' = "*ABEND*", "ko", "ok")
| where job_status="ko"

If I change it to job_status="ok" it works, but not for the above condition; I'd appreciate any suggestion on this.

Regards
See https://community.splunk.com/t5/Dashboards-Visualizations/What-causes-quot-Search-auto-canceled-quot/m-p/452421 Also, check search.log for the canceled search to see if any messages explain why it was canceled.
Thanks @kiran_panchavat. So you are suggesting a fresh installation from the tgz file. Not sure why it worked for 2 hosts and now it won't, but I will give it a try. Also, I am assuming the command "chown -R splunk:splunk" can be replaced with "chown -R splunkfwd:splunkfwd", as that's the user name I am running the Splunk forwarder with.
So I am 99% there.

New search:

index=xxxxx "Starting iteration" OR "Stopping iteration"
| timechart count span=15m by Series
| rex "Starting\siteration[\s\-]+(?<start_reg_id>[^\s]+)"
| rex "Stopping\siteration[\s\-]+(?<stop_reg_id>[^:\s]+)"
| eval Start_Reg_ID=start_reg_id
| eval Stop_Reg_ID=stop_reg_id

When I run it I get a count of 2, which is the start and the stop of the same ID. It shows the timestamp and a count of 2, and when I look at the events it is correct. What I need to do is tell whether it was over 15 mins - maybe I need to redo the timespan or put more time commands in... sorry, I am a newbie. I have got the result and it correlates the start and finish, but now how do I say "over 15 mins, that's too long"?
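To answer the "over 15 mins" part: rather than timechart, you could pair each start/stop by its ID and compute the duration directly. A sketch reusing your rex patterns (it assumes each ID has one start and one stop in the time range):

index=xxxxx "Starting iteration" OR "Stopping iteration"
| rex "Starting\siteration[\s\-]+(?<reg_id>[^:\s]+)"
| rex "Stopping\siteration[\s\-]+(?<stop_id>[^:\s]+)"
| eval reg_id=coalesce(reg_id, stop_id)
| stats min(_time) as start_time, max(_time) as stop_time by reg_id
| eval duration_min=round((stop_time - start_time)/60, 1)
| where duration_min > 15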
Hello @KashifIbrahim, multiple things can cause this problem. First, you should check the Splunk and add-on versions, as the latest Splunk versions only support Python 3.9 and the add-on's latest versions are only compatible with Python 3.9. Apart from this, you can check the splunkd logs to see what they say.
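To check the splunkd logs from the search bar, something like this might help narrow it down (access to index=_internal required):

index=_internal sourcetype=splunkd log_level=ERROR dbxquery
| table _time, component, _raw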