All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


See https://community.splunk.com/t5/Dashboards-Visualizations/What-causes-quot-Search-auto-canceled-quot/m-p/452421 Also, check search.log for the canceled search to see if any messages explain why it was canceled.
Thanks @kiran_panchavat. So you are suggesting a fresh installation from the .tgz file. I'm not sure why it worked for 2 hosts but now won't, but I will give it a try. Also, I am assuming the command "chown -R splunk:splunk" can be replaced with "chown -R splunkfwd:splunkfwd", as that's the user I run the Splunk forwarder as.
So I am 99% there. New search:

index=xxxxx "Starting iteration" OR "Stopping iteration"
| timechart count span=15m by Series
| rex "Starting\siteration[\s\-]+(?<start_reg_id>[^\s]+)"
| rex "Stopping\siteration[\s\-]+(?<stop_reg_id>[^:\s]+)"
| eval Start_Reg_ID=start_reg_id
| eval Stop_Reg_ID=stop_reg_id

When I run it I get a count of 2, which is the start and the stop of the same ID. It shows the timestamp with a count of 2, and when I view the events it is correct. What I need to do now is tell whether it took over 15 minutes. Maybe I need to redo the timespan or add more time comparisons. Sorry, I am a newbie. I have the result and it correlates the start and the finish, but how do I now say "over 15 minutes is too long"?
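A hedged sketch of one way to answer the "over 15 minutes" part: extract the common iteration ID from both events, compute the elapsed time per ID with stats, and flag anything over 900 seconds. The index, message text, and ID pattern are taken from the post above; the field names (reg_id, duration, too_long) are illustrative assumptions:

```
index=xxxxx "Starting iteration" OR "Stopping iteration"
| rex "(?:Starting|Stopping)\siteration[\s\-]+(?<reg_id>[^:\s]+)"
| stats earliest(_time) AS start latest(_time) AS stop BY reg_id
| eval duration=stop-start
| eval too_long=if(duration>900, "yes", "no")
```

Note that this sketch drops the timechart: timechart transforms the events, so rex commands placed after it no longer see the raw messages.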
Hello @KashifIbrahim, Multiple things can cause this problem. First, check your Splunk and add-on versions: the latest Splunk versions only support Python v3.9, and the add-on's latest versions are only compatible with Python v3.9. Apart from this, you can check the splunkd logs to see what they say.
I am getting the same exception. Is there any solution to this mess?
Receiving a "Search auto-canceled" error while executing a one-month episode review. Please let us know if there is any quick solution.
Hi @ITWhisperer, please find the current query:

index="index1"
| search "slot"
| rex field=msg "VF\s+slot\s+(?<slot_number>\d+)"
| dedup msg
| sort _time,host
| stats range(_time) as downtime by host,slot_number
That sounds right 
What is your current query?
So I have looked at my events, and they do have a common unique ID on each start and stop event. Example:

Starting iteration - 17000000
Stopping iteration - 17000000

So I guess I need to extract that number and compute a duration from it.
And the splunkd logs have the following errors:

WARN MongoClient [999733 KVStoreUpgradeStartupThread] - Disabling TLS hostname validation for localhost
ERROR KVStorageProvider [999733 KVStoreUpgradeStartupThread] - An error occurred during the last operation ('replSetGetStatus', domain: '15', code: '13053'): No suitable servers found (`serverSelectionTryOnce` set): [connection closed calling hello on '127.0.0.1:8191']
Hi @gcusello, How can this query help me exclude events from those particular IP addresses that have threat messages? The IP addresses in the lookup table will be filled in manually by a user on a daily or weekly basis, and those IPs are to be excluded from the search. I am confused. Please help me with the relevant query.
Hello Splunkers, After I upgraded to version 9.4, the KV store does not start. I generated a new certificate by renaming server.pem and restarting Splunk, and now I see the following error in mongod.log:

[conn937] SSL peer certificate validation failed: self signed certificate in certificate chain
NETWORK [conn937] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: self signed certificate in certificate chain. Ending connection from 127.0.0.1:38268 (connection id: 937)

Does anyone have any idea what could be missing? Appreciate your inputs in this regard. Thank you, Moh
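One thing worth checking (a hedged sketch, not a confirmed fix for this error): "self signed certificate in certificate chain" usually means the verifying side does not trust the CA that signed the presented certificate, so the CA bundle referenced by sslRootCAPath in server.conf should contain the chain that signed the regenerated server.pem. The paths below are the defaults and are assumptions for this sketch:

```
# $SPLUNK_HOME/etc/system/local/server.conf (sketch; paths assumed)
[sslConfig]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
```

If server.pem was regenerated as self-signed while sslRootCAPath still points at the old CA, the mongod handshake would fail with exactly this kind of validation error.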
Go to Settings->User Interface->Navigation menus and edit the menu for the app in question.
Hi @Obsidian_RS400, let me understand: you have a lookup with a list of IPs but without hostnames, and you want to complete the lookup by taking the hostnames from the events. Is this correct? If this is your requirement, you could run something like the following (assuming that in the lookup you have two fields, IP and hostname, and that the events have the same fields):

index=your_index
| dedup hostname IP
| fields IP hostname
| append [ | inputlookup ipaddress.csv | fields IP hostname ]
| stats values(hostname) AS hostname BY IP
| outputlookup ipaddress.csv

Ciao. Giuseppe
Hi @Karthikeya, follow these few steps:

1. create a lookup called as you prefer (e.g. whitelisted_ips.csv),
2. create a lookup definition with the same name, adding CIDR as the match_type in the Advanced options,
3. create a search that extracts the IPs to whitelist, adding the command | outputlookup whitelisted_ips.csv at the end of the search,
4. using this search, create an alert, scheduling it with the frequency you like (e.g. once a day at night).

In this way, you can use the lookup to exclude the IPs from your results by running a search like the following (if the field in the lookup is "ip" and you want to search the IPs in a field called ip):

<your_search> NOT [ | inputlookup whitelisted_ips.csv | fields ip ]
| ...

If instead you want to search the IPs anywhere in the event, you can run:

<your_search> NOT [ | inputlookup whitelisted_ips.csv | rename ip AS query | fields query ]
| ...

In this way, you execute a full-text search on the _raw of your events, excluding events that contain the whitelisted IPs.

Ciao. Giuseppe
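For reference, the lookup definition from step 2 can also be expressed in configuration files. A minimal sketch, assuming the lookup file and field names from the post above (the stanza name is an illustrative choice):

```
# transforms.conf -- CIDR-matching lookup definition (names assumed)
[whitelisted_ips]
filename   = whitelisted_ips.csv
match_type = CIDR(ip)
```

With match_type = CIDR(ip), entries in the lookup such as 10.0.0.0/8 match any address in that range, which is why the CIDR option matters for a whitelist.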
@jkamdar Here are the steps to install the Splunk_TA_nix add-on: 1. Download the add-on and place it in the `/tmp` directory or any preferred directory. 2. Extract the contents using the command: `tar -zxvf <.tgz> -C /opt/splunkforwarder/etc/apps` 3. Update the ownership with the command: `chown -R splunk:splunk /opt/splunkforwarder` 4. Restart the Splunk forwarder to apply the changes.
Hi @woodman2, if you have a field whose name begins and ends with $, the search works because it finds the field, but a Splunk dashboard interprets that formalism not as a field but as a token that has not been passed, and therefore it remains hanging. As I said: you cannot use this format for your field names. Ciao. Giuseppe
Hi @ITWhisperer, Thanks for the reply. Let me explain my exact requirement. I am trying to create a dashboard visualizing and calculating downtime in the VMs I manage, calculated from the log messages the servers send to Splunk. The logs have messages like:

<timestamp> <nic-card-id> slot 1 removed
<timestamp> <nic-card-id> slot 3 added

I am calculating the difference between the two timestamps as the downtime and visualizing it. The output dashboard I am expecting shows hostname, date, slot, and the difference in time (downtime). The current query calculates the difference, but it adds the previous downtime as well. I want it to show the downtime for a host on 2 different dates separately instead of adding them together. Can you please help me with this?
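One hedged way to keep downtime on different dates separate, building on the query posted earlier in this thread (the index and field names come from that post; the date field is illustrative), is to add the calendar date as a grouping key before the stats:

```
index="index1" "slot"
| rex field=msg "VF\s+slot\s+(?<slot_number>\d+)"
| eval date=strftime(_time, "%Y-%m-%d")
| stats range(_time) AS downtime BY host, slot_number, date
```

Note that range(_time) still assumes one removed/added pair per host, slot, and date; pairing each "removed" event with the following "added" event (e.g. with streamstats or transaction) would be more robust if a slot cycles more than once, or if an outage spans midnight.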
Hi @Kashinath.Kumbharkar, Thank you for asking your question on the community. It looks like it's been a few days with no reply. Have you found any new information or an answer you can share here? ... See more...
Hi @Kashinath.Kumbharkar, Thank you for asking your question on the community. It looks like it's been a few days with no reply. Have you found any new information or an answer you can share here? If you still need help, you can contact AppDynamics Support: How do I open a case with AppDynamics Support?