All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi,
1- I want a search that returns everything after a specific event until now. For example: index=main | search "start service now". The expected result is all events after this event until now.
2- After returning all events that follow the specific string, in the next step I want to add a field whose count increments over time. For example, I have a field called "Module" whose count increments over time, like below:
index=main | table _time Module
08:30 10
08:37 15
08:38 30
08:40 40
08:58 43
...
Any idea? Thanks
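A minimal sketch of one possible approach, reusing index=main, the "start service now" marker, and the Module field name from the question; the subsearch trick with return earliest=_time and the use of streamstats for a running count are assumptions about the intent:
index=main [ search index=main "start service now" | head 1 | return earliest=_time ]
| sort 0 _time
| streamstats count as Module
| table _time Module
The subsearch finds the most recent "start service now" event and hands its timestamp to the outer search as the earliest time, so only events from that point until now are returned; streamstats then attaches an incrementing count in time order.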
I have a field like the one below. message: "{"\payement":"xxx", "\account:" xxx"}" I want to remove the first and last quotes. How can I do that?
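A minimal sketch of one way to strip just the leading and trailing double quote, assuming the field is literally named message:
... | eval message=trim(message, "\"")
trim() only removes the listed characters from the ends of the string, so the quotes inside the value are left alone.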
I am trying to compare one field against itself from a previous day. The use case is that I'm trying to see whether the value changes from day to day; the field is a file hash. I run a search for today and rename the field I want to compare, then run a subsearch and rename the field again so I can compare them after the subsearch finishes, but the eval always evaluates to false and displays the last option in the eval line. My code:
index=my_index RuleName="Monitor The File" FileName="file.exe" earliest="06/11/2021:00:00:00" latest="06/11/2021:24:00:00"
| rename FileHash as "todays_hash"
| append [ search index=my_index RuleName="Monitor The File" FileName="file.exe" earliest="06/12/2021:00:00:00" latest="06/12/2021:24:00:00" | rename FileHash as "yesterdays_hash"]
| eval description=case(todays_hash=yesterdays_hash,"Hash has not changed", todays_hash!=yesterdays_hash,"Hash has changed")
| table description todays_hash yesterdays_hash
I have tried changing the order of the eval, putting the != condition first, and it always takes the second option. The table shows the eval results and the two hashes. Thanks!
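One likely explanation is that append leaves todays_hash and yesterdays_hash on separate result rows, so the case() never sees both values in the same row. A minimal sketch of one way to collapse them onto a single row first, reusing the field names from the question and assuming a single hash value per day:
index=my_index RuleName="Monitor The File" FileName="file.exe" earliest="06/11/2021:00:00:00" latest="06/11/2021:24:00:00"
| rename FileHash as todays_hash
| append [ search index=my_index RuleName="Monitor The File" FileName="file.exe" earliest="06/12/2021:00:00:00" latest="06/12/2021:24:00:00" | rename FileHash as yesterdays_hash ]
| stats values(todays_hash) as todays_hash values(yesterdays_hash) as yesterdays_hash
| eval description=if(todays_hash==yesterdays_hash, "Hash has not changed", "Hash has changed")
| table description todays_hash yesterdays_hash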
Hi, just by adding command.arg.1 = '...' to commands.conf I get the following error: "Command test appears to be statically configured for search command protocol version 1 and static configuration is unsupported by splunklib.searchcommands. Please ensure that default/commands.conf contains this stanza:...." The stanza is in place with all the mentioned arguments. This question has apparently been asked several times, with no answer so far. What is the problem? All I want is to pass some arguments to a streaming command. Thanks
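For what it's worth, command.arg.<N> belongs to the legacy protocol version 1 static configuration, which splunklib.searchcommands does not support; SDK-based commands use the chunked (version 2) protocol and read their arguments from the search string instead. A minimal sketch of a commands.conf stanza for the command named test from the error message, assuming the script sits in the app's bin directory:
[test]
filename = test.py
chunked = true
python.version = python3
Arguments would then be passed on the search line (for example | test some_arg=value) and declared as Option fields in the Python command class, rather than via command.arg.1.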
Hi, I am testing out Splunk Fundamentals 1, and on Module 5 of the lab portion, after running the search, I am not getting any events. I have tried both the admin and user roles, and have tested the time range presets for All time, Previous year, Previous month, etc.
Hi! I am trying to create a search that displays zero values in my chart. However, my current search has multiple calculated fields (| stats sum(count) as Count, avg(days) as avg_days, avg(time) as avg_time by category time). I have done this by creating a dummy search with zero values and then using the max command. I would like to show zero values only for Count. Thank you for your help in advance!
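Building on the dummy-row idea already described, a minimal sketch in which only Count is set to 0 in the back-fill rows, so the average fields stay blank where there is no data; the base search (index=your_index) is a placeholder, all_categories.csv is a hypothetical lookup listing every category value, and the grouping is simplified to category only:
index=your_index
| stats sum(count) as Count, avg(days) as avg_days, avg(time) as avg_time by category
| append [| inputlookup all_categories.csv | fields category | eval Count=0 ]
| stats max(Count) as Count, max(avg_days) as avg_days, max(avg_time) as avg_time by category
Since the dummy rows never set avg_days or avg_time, max() ignores them for those fields, while categories with no real data still end up with Count=0.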
Hello, we are planning to upgrade our Splunk deployment to version 8.1.4. We have two separate indexer clusters for two different clients, all running version 8.0.8. We plan to upgrade all of our Splunk Enterprise instances except for one client's indexer cluster (composed of 3 Splunk indexer servers), because an app there is not supported on 8.1.4. Will the 8.0.8 indexer cluster still function normally even though the rest of the Splunk servers (search head, heavy forwarders, and the other client's indexer cluster) are on version 8.1.4? Thank you.
Hi, I have 3 searches that give results for errors and journey length. I want to add all these searches together and send an alert when the threshold values are breached. Can you please help me combine these three searches so that we get them in a single alert? The search queries I want to combine:
Journey completion time
index=nextgen sourcetype=lighthouse_json sourcetype=lighthouse_json datasource=webpagetest | timechart span=1h avg(duration) AS "Journey completion time"
Errors
index=nextgen sourcetype=lighthouse_json sourcetype=lighthouse_json datasource=webpagetest errorORstatuscode=500 OR errorORstatuscode=4* NOT url="*sentry*" | timechart span=1h count(step) by step
Error status codes
index=nextgen sourcetype=lighthouse_json sourcetype=lighthouse_json datasource=webpagetest errorORstatuscode=500 OR errorORstatuscode=4* NOT url="*sentry*" | table _time, step, url, errorORstatuscode
Thanks, Swetha. G
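A minimal sketch of one way to fold the searches into a single alert search: compute each metric separately, append the result sets, merge them onto one row, and alert when either figure crosses its limit. The threshold values (3000 ms and 5 errors) are placeholders, and the duplicated sourcetype term has been dropped for brevity:
index=nextgen sourcetype=lighthouse_json datasource=webpagetest
| stats avg(duration) AS journey_completion_time
| append [ search index=nextgen sourcetype=lighthouse_json datasource=webpagetest (errorORstatuscode=500 OR errorORstatuscode=4*) NOT url="*sentry*" | stats count AS error_count ]
| stats max(journey_completion_time) AS journey_completion_time, max(error_count) AS error_count
| where journey_completion_time > 3000 OR error_count > 5
The alert would then trigger on "number of results > 0", while the third (table) search could stay as a separate drill-down, since it lists individual error rows rather than a threshold figure.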
Hello, I cannot run Splunk alerts or send emails, despite having configured the email settings.
I use the built-in ES notables and incidents to create tickets for the team to work on issues. All the tickets are saved in the separate index=notable; however, the drill-down events of these notables are not saved separately. The drill-down search runs on normal indexes such as windows/unix etc. with shorter retention periods. Sometimes I need those drill-down events for research purposes, and then I have to unfreeze a whole bucket to get a single event of the appropriate data. I want to create a new index that automatically stores the drill-down events for all raised notables, so I can adjust its retention period as needed. Kindly advise.
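A minimal sketch of the general idea, assuming a hypothetical index named notable_drilldown has been created with the desired retention period: run the drill-down search (or a scheduled copy of it) and pipe the results to collect, so they are summary-indexed and survive after the original buckets freeze:
index=windows EventCode=4625 user=some_user
| collect index=notable_drilldown
The base search here is only an illustrative stand-in for whatever drill-down search the notable actually defines; automating this for every notable would mean scheduling a saved search per drill-down or generating them from the correlation search configuration.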
Hello, we just updated ES from 6.4 to 6.6. The new Incident Review dashboard completely ignores suppressions and shows the suppressed events in the list. Is this a known issue or something caused by the upgrade? Does anyone have a solution for this? (Splunk Enterprise version 8.1.3) Thanks
Hi all, a security scan on our Splunk server has thrown up CVE-2018-11409. I've verified that we are affected: I can access info on /en-US/splunkd/__raw/services/server/info/server-info?output_mode=json without being logged in. https://www.splunk.com/en_us/product-security/announcements-archive/SP-CAAAP5E.html claims this was fixed for unauthenticated users in Splunk 6.6.0+, but we're running Splunk 8.1.3 (that version info is even printed in /en-US/splunkd/__raw/services/server/info/server-info?output_mode=json!). Any idea why we could still be affected on Splunk 8.1.3? Thanks
Is there any possibility to overwrite the data in an index? For example, the data is indexed by the query below.
| inputlookup sample_Data.csv | collect index=Collected_data
If I index some other data into the same index, in this scenario the old data in the index should be overwritten by the new data. If this is possible, can you please explain how to do it?
| inputlookup sample_Data2.csv | collect index=Collected_data
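For context, Splunk indexes are append-only, so collect always adds events rather than replacing them. The closest workaround is usually to mask the old events with the delete command (which requires the can_delete role) before collecting the new file. A minimal sketch, reusing the index name from the question and run as two separate searches:
index=Collected_data | delete
| inputlookup sample_Data2.csv | collect index=Collected_data
Note that delete only hides the old events from searches; it does not reclaim disk space.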
Hi Team, please let us know the specific IP/domain whitelisting requirements for the SaaS and on-prem Controller. Thanks in advance!!
Hello, I would like to know if it is possible to send the reports generated in Splunk On Call (like the Response Metrics and Incident Frequency) to Splunk Enterprise. I would like to retrieve these reports to use them in dashboards created in Enterprise.
Hi all, my question is: is there any way we can reload a changed Python script without restarting Splunk every time? I understand we have _bump for JavaScript and CSS and debug/refresh for .conf files, but do we have a similar command for changed Python scripts as well? Thanks & Regards, AG.
Hi everyone, I installed Splunk and then deleted it. Now when I go to /splunk/bin and run ./splunk start I see the error message "ERROR: Couldn't determine $SPLUNK_HOME or $SPLUNK_ETC; perhaps one should be set in environment". What can I do? Thank you very much!
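A minimal sketch of one way to set the variable before starting, assuming the install actually lives at /opt/splunk (adjust the path to wherever your splunk directory is):
export SPLUNK_HOME=/opt/splunk
$SPLUNK_HOME/bin/splunk start
If the installation was deleted rather than just moved, the contents under $SPLUNK_HOME will be missing and a fresh install is needed before this will work.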
I generated reports for a certain dataset from Splunk and saved them as a PDF. In the future I wish to import that same PDF into Splunk, plot different graphs, and do other processing on it. How do I import the PDF into Splunk?
Hi Team, we are getting duplicate events at index time. Log ingestion method: UF. What can be done to stop the duplicate events?