All Topics

Hello Everyone, I recently installed the Splunk DB Connect app (3.16.0) on my Splunk heavy forwarder (9.1.1). As per the documentation I installed the JRE and the MySQL add-on, and created identities, connections, and inputs. But when I check for the data, it is not being ingested. So I enabled debug mode, checked the logs, and found a HEC token error. The HEC token is configured in inputs.conf, and I can see the same token in the Splunk web GUI under Data inputs -> HTTP Event Collector. Could you please help if anyone has faced this error before?

Error:
ERROR org.easybatch.core.job.BatchJob - Unable to write records java.io.IOException: There are no Http Event Collectors available at this time.
ERROR c.s.d.s.dbinput.recordwriter.CheckpointUpdater - action=skip_checkpoint_update_batch_writing_failed java.io.IOException: There are no Http Event Collectors available at this time.
ERROR c.s.d.s.task.listeners.RecordWriterMetricsListener - action=unable_to_write_batch java.io.IOException: There are no Http Event Collectors available at this time.
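In case it helps with troubleshooting: "There are no Http Event Collectors available" from DB Connect usually means the app cannot reach a usable HEC endpoint at all, not that the token value itself is wrong. A minimal sketch of what the relevant inputs.conf might look like on the heavy forwarder (the stanza and token names here are hypothetical):

```
# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
[http]
disabled = 0
enableSSL = 1

[http://dbx_hec_token]
disabled = 0
token = <your-token-guid>
```

HEC must be enabled globally ([http] disabled = 0), not just per token, and the SSL setting and port (8088 by default) must match what DB Connect is configured to use under Configuration -> Settings.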
Dear All, I want to set up an alert on an event. The event contains three timestamps: New Event time, Last update, and startDate time. These logs come from MS365. I want the alert to fire if the field "Incident Resolved = False" is still satisfied even 4 hours after the startDate time. So we receive the first event at startDate, but we don't want an alert until 4 hours after startDate.

startDateTime: 2024-07-01T09:00:00Z
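A hedged sketch of a scheduled-alert search for this (the index and field names are assumptions; adjust them to your MS365 data):

```
index=ms365 "Incident Resolved"="False"
| eval start_epoch = strptime(startDateTime, "%Y-%m-%dT%H:%M:%SZ")
| where now() - start_epoch > 14400
```

14400 seconds is 4 hours. Schedule the alert to run periodically (e.g. every 15 minutes) and trigger when the result count is greater than 0; you may also want to dedup on an incident ID field so each incident alerts only once.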
Our network devices send data to a syslog server and then up to our Splunk instance. I have a few TAs that I've asked our SysAdmin team to install on my behalf so the logs can be parsed out. However, the TAs aren't parsing the data, and furthermore the network device logs come in under the sourcetype "syslog" rather than the sourcetypes defined in the respective TAs. Where do I need to look, or have the SysAdmins look? (I'm just a power user.)
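For background, most TAs key their parsing rules on the sourcetype, so if events arrive as sourcetype=syslog the TA's props and transforms never fire. The sourcetype is normally assigned where the data is read, e.g. in inputs.conf on the forwarder that monitors the syslog server's files. A hedged sketch (paths and sourcetype are placeholders):

```
# inputs.conf on the forwarder reading the syslog server's log files
[monitor:///var/log/syslog-ng/firewalls/*.log]
sourcetype = cisco:asa
index = network
```

The TAs also need to be installed on the first full Splunk instance that parses the data (heavy forwarder or indexers), not only on the search head, so that is worth having the SysAdmins confirm as well.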
Hello, I have a problem with the dropdown menu limit, which displays a maximum of 1000 values. I need to display a list of 22,000 values and I don't know how to do it. Thank you so much.
Hi team, We have currently implemented the standalone servers below:

Zone-1
Environment | Server Name | IP | Splunk Role
DEV         | L4          |    | Search Head + Indexer
QA          | L4          |    | Search Head + Indexer
            | L4          |    | Deployment Server

Zone-2
Environment | Server Name | IP | Splunk Role
DEV         | L4          |    | Search Head + Indexer
QA          | L4          |    | Search Head + Indexer

In our environment there are only 2 Search Head + Indexer servers, each running on a single instance. How do we implement high-availability servers? Please help me with the process.
On my Splunk on Windows the add-on is very slow and I get some error messages:

07-01-2024 13:47:27.491 +0200 ERROR ScriptRunner [82504 TcpChannelThread] - stderr from 'D:\apps\Splunk\bin\Python3.exe D:\apps\Splunk\bin\runScript.py setup': cfg = cli.getConfStanza("ta_databricks_settings", "logging")
07-01-2024 13:47:27.491 +0200 ERROR ScriptRunner [82504 TcpChannelThread] - stderr from 'D:\apps\Splunk\bin\Python3.exe D:\apps\Splunk\bin\runScript.py setup': File "D:\apps\Splunk\etc\apps\TA-Databricks\bin\log_manager.py", line 32, in setup_logging
07-01-2024 13:47:27.491 +0200 ERROR ScriptRunner [82504 TcpChannelThread] - stderr from 'D:\apps\Splunk\bin\Python3.exe D:\apps\Splunk\bin\runScript.py setup': _LOGGER = setup_logging("ta_databricks_utils")

These errors happen for about 60 seconds, then the connection is established and I receive the data.
Hi Team, An alert is scheduled to run every 2 hours, but it is getting skipped. Per day the alert should run 12 times; for a week, 12*7 = 84 times. Yet in the skipped search results we can see that the alert was skipped 3000 times in the last 7 days. How is that possible? The search below is used to find the skipped searches:

splunk_server=*prod1-heavy index="_internal" sourcetype="scheduler" host=*-prod1-heavy
| eval scheduled=strftime(scheduled_time, "%Y-%m-%d %H:%M:%S")
| lookup search_env_mapping host AS host OUTPUT tenant
| stats count values(scheduled) as scheduled values(savedsearch_name) as search_name values(status) as status values(reason) as reason values(run_time) as run_time values(dm_node) as dm_node values(sid) as sid by savedsearch_name tenant
| sort -count
| search status!=success
| table scheduled, savedsearch_name, status, reason, count, tenant
Hi, I would like to create a time chart for a specified time window, say 8AM to 2PM, every day for the last 30 days. I am able to chart it; however, in the visualisation the line from 2PM to the next day's 8AM is drawn as a straight connecting line. How can we exclude that segment (2PM to next day 8AM) and just show the chart for 8AM to 2PM each day as a single line? Can we exclude the green box line? Query used (just the conditions):

| eval hour=tonumber(strftime(_time,"%H"))
| where hour >=8
| where hour <=14
| fields - hour
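One possible approach (the base search is a placeholder): let timechart produce the excluded hours as null buckets and configure the chart not to connect across them:

```
index=your_index earliest=-30d@d
| eval hour=tonumber(strftime(_time,"%H"))
| where hour>=8 AND hour<=14
| timechart span=1h count
| eval count=if(count==0, null(), count)
```

Then set the visualization option charting.chart.nullValueMode to gaps (in Simple XML: <option name="charting.chart.nullValueMode">gaps</option>), so the null buckets between 2PM and the next day's 8AM are drawn as breaks instead of a connecting line.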
How do I run a search against a sourcetype (which is very low volume) and display custom text when 0 events are found? The search should run for 30 days with a span of 1 day. The output should be:

_time        results
04-23-2024   "No events found"
04-23-2024   "No events found"
...
06-30-2024   23
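A sketch of one way to do this (index and sourcetype are placeholders): timechart fills the empty days with a count of 0, which can then be relabelled:

```
index=your_index sourcetype=your_sourcetype earliest=-30d@d latest=@d
| timechart span=1d count
| eval results=if(count==0, "No events found", count)
| table _time results
```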
Hey, can anybody help with this task: how to find the account with the most login attempts among the 4624 events within a time span of 10 minutes?
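Assuming Windows Security event logs where the account is in a field such as Account_Name (index and field names vary by environment and add-on), a sketch:

```
index=wineventlog EventCode=4624
| bin _time span=10m
| stats count as attempts by _time Account_Name
| sort - attempts
| head 1
```

This buckets the logons into 10-minute windows, counts attempts per account per window, and keeps the single busiest account/window pair.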
After installing Splunk on a Windows or Linux server, we are able to see the logs on the server, but we are not able to see the logs in Splunk HI, and we are getting the error message below:

07-01-2024 05:21:16.653 -0500 ERROR TcpOutputFd [2997818 TcpOutEloop] - Connection to host=<ip address>:9998 failed
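For anyone debugging the same message: TcpOutputFd connection failures generally mean the forwarder cannot reach the receiver on the configured port. The two sides look roughly like this (the IP and output-group name are placeholders matching the error):

```
# outputs.conf on the forwarding side
[tcpout:primary_indexers]
server = <ip address>:9998

# inputs.conf on the receiving side
[splunktcp://9998]
disabled = 0
```

Confirm the receiver is actually listening on 9998 (Settings -> Forwarding and receiving -> Receive data) and that no firewall blocks TCP 9998 between the hosts.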
Hello, I have a dashboard with a checkbox input and a text input. If you choose Group and type 'something' into the text input, the search looks for category="something". If you choose Any field, the search looks for "something". I want to set it up so that if I choose Any field, the search does not add the quotes and only searches for something, while of course keeping the quoting for the other checkbox selections, like category="something".

The main goal is to be free to use the Any field option. Right now, if I type e.g. something OR anything, the search does not interpret it correctly because it becomes "something OR anything", i.e. it is treated as a single value. I'd like to see something OR anything instead. Could you please help me modify my dashboard?

<form version="1.1" theme="light">
  <label>Multiselect Text</label>
  <init>
    <set token="toktext">*</set>
  </init>
  <fieldset submitButton="false">
    <input type="checkbox" token="tokcheck">
      <label>Field</label>
      <choice value="Any field">Any field</choice>
      <choice value="category">Group</choice>
      <choice value="severity">Severity</choice>
      <default>category</default>
      <valueSuffix>=REPLACE</valueSuffix>
      <delimiter> OR </delimiter>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <change>
        <eval token="form.tokcheck">case(mvcount('form.tokcheck')=0,"category",isnotnull(mvfind('form.tokcheck',"Any field")),"Any field",1==1,'form.tokcheck')</eval>
        <eval token="tokcheck">if('form.tokcheck'="Any field","REPLACE",'tokcheck')</eval>
        <eval token="tokfilter">replace($tokcheck$,"REPLACE","\"".$toktext$."\"")</eval>
      </change>
    </input>
    <input type="text" token="toktext">
      <label>Value</label>
      <default>*</default>
      <change>
        <eval token="tokfilter">replace($tokselect$,"REPLACE","\"".$toktext$."\"")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <title>$tokfilter$</title>
        <search>
          <query>index=* $tokfilter$</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>

Thank you very much in advance!
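Two things stand out in the XML above, offered as a hedged sketch rather than a confirmed fix. First, the text input's change handler references $tokselect$, which is never defined anywhere; it presumably should be $tokcheck$. Second, for the "Any field" case the quoting can be skipped entirely in the same eval:

```
<change>
  <eval token="tokfilter">if($tokcheck$="REPLACE", $toktext$, replace($tokcheck$,"REPLACE","\"".$toktext$."\""))</eval>
</change>
```

With this, choosing Any field and typing something OR anything would pass the text through unquoted, while the Group/Severity selections still produce category="..." style filters. Untested against this exact dashboard, so treat it as a starting point.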
We all know that Splunk Enterprise calculates license usage at index time, and that the "| delete" command essentially just hides data from search, so it doesn't free up license usage. My question is whether this works the same way for Splunk Cloud / DDAS: if I run "| delete" from search, will it free up space in my DDAS entitlement?
Hi Team, How can I check the expiry date of a certificate in Splunk on Windows using the command line? Also, the user has local admin access but is not able to delete the server.pem file (is there any other way to delete it?).
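For the expiry date, Splunk ships its own OpenSSL that can read the certificate's validity directly; a sketch for a default Windows install path (adjust the paths to your environment):

```
"C:\Program Files\Splunk\bin\splunk.exe" cmd openssl x509 -enddate -noout -in "C:\Program Files\Splunk\etc\auth\server.pem"
```

This prints a notAfter= line with the expiry date. On deleting server.pem: the file is typically held open while splunkd is running, so stopping the Splunk service first is usually the missing step rather than additional permissions, and Splunk regenerates a default self-signed server.pem on restart if it is absent.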
Hi there, I have the query below to search for the top policies that have been used:

type="request" "request.path"="prod/" | stats count by policies{} | sort -count | head 10

By default all the policies are generated with "default", which I want to get rid of when searching so it properly shows the top 10 policies only. Example results of the search above:

policies:
default
policies_1
policies_2
policies_3
....

I want to get rid of the "default" entry showing in my results. Any idea or help is really appreciated.
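A sketch of one way to drop it: rename the multivalue field so it is easier to reference, then filter before taking the top 10:

```
type="request" "request.path"="prod/"
| stats count by policies{}
| rename policies{} as policy
| where policy!="default"
| sort - count
| head 10
```

If "default" instead appears as one of several values inside a single event's policies{} field and just that value needs removing before counting, mvexpand or mvfilter would be the things to look at.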
I need to display priority data for 7 days with the percentage; however, I am unable to display it broken out by day. My query below works for a single day's search but doesn't display per-day results across 7 days. Could you please help with fixing the query? Below is my query:

| multisearch
    [ search index=myindex source=mysoruce "* from *" earliest=-7d@d latest=@d | fields TRN, tomcatget, Queue ]
    [ search index=myindex source=mysoruce *sent* earliest=-7d@d latest=@d | fields TRN, TimeMQPut, Status ]
    [ search index=myindex source=mysoruce *Priority* earliest=-7d@d latest=@d | fields TRN, Priority ]
| stats values(*) as * by TRN
| eval PPut=strptime(tomcatput, "%y%m%d %H:%M:%S")
| eval PGet=strptime(tomcatget,"%y%m%d %H:%M:%S")
| eval tomcatGet2tomcatPut=round((PPut-PGet),0)
| fillnull value="No_tomcatPut_Time" tomcatput
| fillnull value="No_tomcatGet_Time" tomcatget
| table TRN, Queue, BackEndID, Status, Priority, tomcatget, tomcatput, tomcatGet2tomcatPut
| eval E2E_5min=if(tomcatGet2tomcatPut<=300,1,0)
| eval E2E_20min=if(tomcatGet2tomcatPut>300 and tomcatGet2tomcatPut<=1200,1,0)
| eval E2E_50min=if(tomcatGet2tomcatPut>1200 and tomcatGet2tomcatPut<=3000,1,0)
| eval E2EGT50min=if(tomcatGet2tomcatPut>3000,1,0)
| eval Total = E2E_5min + E2E_20min + E2E_50min + E2EGT50min
| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by Priority
| eval bad = if(Priority="High", sum_20min + sum_50min + sum_50GTmin, if(Priority="Medium", sum_50min + sum_50GTmin, if(Priority="Low", sum_50GTmin, null())))
| eval good = if(Priority="High", sum_5min, if(Priority="Medium", sum_5min + sum_20min, if(Priority="Low", sum_5min+ sum_20min + sum_50min, null())))
| eval per_cal = if(Priority="High", (good / sum_total) * 100, if(Priority="Medium", (good / sum_total) * 100, if(Priority="Low", (good / sum_total) * 100, null())))
| table Priority per_cal

I'm looking to get output in the format below.
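One hedged way to get per-day results is to bucket _time before aggregating and carry it through the by clauses. Only the changed steps are sketched below: add the bin right after the multisearch, then add _time to the second stats and the final table (the rest of the query stays as-is):

```
| bin _time span=1d
| stats values(*) as * by TRN _time

| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by _time Priority

| table _time Priority per_cal
```

This assumes each TRN's events fall within the same day; if a transaction can span midnight, the grouping would need more care.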
Hi folks, I am trying to get Defender logs into the Splunk Add-on for Microsoft Security but I am struggling a bit. It "appears" to be configured correctly, but I am seeing this error in the logs:

ERROR pid=222717 tid=MainThread file=ms_security_utils.py:get_atp_alerts_odata:261 | Exception occurred while getting data using access token : HTTPSConnectionPool(host='api.securitycenter.microsoft.com', port=443): Max retries exceeded with url: /api/alerts?$expand=evidence&$filter=lastUpdateTime+gt+2024-05-22T12:34:35Z (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fe514fa1bd0>, 'Connection to api.securitycenter.microsoft.com timed out. (connect timeout=60)'))

Is this an issue with the way the Azure Connector App is permissioned, or something else entirely? Thanks in advance.
I have used the two events below to test SOURCE_KEY:

<132>1 2023-12-24T09:48:05+00:00 DCSECIDKOASV02 ikeyserver 8244 - [meta sequenceId="2850227"] {Warning}, {RADIUS}, {W-006001}, {An invalid RADIUS packet has been received.}, {0x0C744774DF59FC530462C92D2781B102}, {Source Location:10.240.86.6:1812 (Authentication)}, {Client Location:10.240.86.18:42923}, {Reason:The packet is smaller than minimum size allowed for RADIUS}, {Request ID:101}, {Input Details:0x64656661756C742073656E6420737472696E67}, {Request Type:Indeterminate}
<132>1 2023-12-24T09:48:05+00:00 DCSECIDKOASV02 ikeyserver 8244 - [meta sequenceId="2850228"] {Warning}, {RADIUS}, {W-006001}, {An invalid RADIUS packet has been received.}, {0xBA42228CB3604ECFDEEBC274D3312187}, {Source Location:10.240.86.6:1812 (Authentication)}, {Client Location:10.240.86.19:18721}, {Reason:The packet is smaller than minimum size allowed for RADIUS}, {Request ID:101}, {Input Details:0x64656661756C742073656E6420737472696E67}, {Request Type:Indeterminate}

Using the regex below:

[xmlExtractionIDX]
REGEX = .*?"]\s+\{(?<Severity>\w+)\},\s+\{\w+\},\s+\{(?<DeviceID>[^}]*)\},(.*)
FORMAT = Severity::$1 DeviceID::$2 Last_Part::$3
WRITE_META = true

Up to that point it works fine. Then I want to add a more precise extraction and extract more info from the Last_Part field using SOURCE_KEY:

[xmlExtractionIDX]
REGEX = .*?"]\s+\{(?<Severity>\w+)\},\s+\{\w+\},\s+\{(?<DeviceID>[^}]*)\},(.*)
FORMAT = Severity::$1 DeviceID::$2 Last_Part::$3
SOURCE_KEY = MetaData:Last_Part
REGEX = Reason:(.*?)\}
FORMAT = Reason::$1
WRITE_META = true

But it doesn't work now. Is there any advice on how to do that using SOURCE_KEY?
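As far as I know, a single transforms.conf stanza only honours one REGEX/FORMAT pair, so the second extraction needs its own stanza chained after the first in props.conf. Recent Splunk versions also let a chained index-time transform read a previously written indexed field via the field: prefix on SOURCE_KEY. A hedged sketch (verify the field: syntax against the transforms.conf spec for your Splunk version; the props stanza name is a placeholder):

```
# transforms.conf
[xmlExtractionIDX]
REGEX = .*?"]\s+\{(?<Severity>\w+)\},\s+\{\w+\},\s+\{(?<DeviceID>[^}]*)\},(.*)
FORMAT = Severity::$1 DeviceID::$2 Last_Part::$3
WRITE_META = true

[xmlExtractionReason]
SOURCE_KEY = field:Last_Part
REGEX = Reason:([^}]*)\}
FORMAT = Reason::$1
WRITE_META = true

# props.conf - order matters, so Last_Part exists before the second transform runs
[your_sourcetype]
TRANSFORMS-ikey = xmlExtractionIDX, xmlExtractionReason
```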
Hi Team, What I'm trying to achieve: find consecutive failure events followed by a success event.

| makeresults
| eval _raw="username,result
user1,fail
user2,success
user3,success
user1,fail
user1,fail
user1,success
user2,fail
user3,success
user2,fail
user1,fail"
| multikv forceheader=1
| streamstats count(eval(result="fail")) as fail_counter by username,result reset_after="("result==\"success\"")"
| table username,result,fail_counter

Outcome: the counter (fail_counter) gets reset for a user (say user1) if the next event is a success event for a different user (say user2).

username  result   fail_counter
user1     fail     1
user2     success  0
user3     success  0
user1     fail     1   <- counter reset for user1; it should be 2
user1     fail     2   <- it should be 3
user1     success  0
user2     fail     1
user3     success  0
user2     fail     1
user1     fail     1

Expected: the counter should not reset when a success event for user2 follows a failure event for user1. I would appreciate any help on this; not sure what I'm missing here.
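I believe this is a known streamstats quirk: reset_after resets the counter without honouring the by clause, so a success from any user resets every user's count. A workaround that stays per-user is to number each user's "success blocks" first and then count fails within the current block:

```
| makeresults
| eval _raw="username,result
user1,fail
user2,success
user3,success
user1,fail
user1,fail
user1,success
user2,fail
user3,success
user2,fail
user1,fail"
| multikv forceheader=1
| streamstats count(eval(result="success")) as success_block by username
| streamstats count(eval(result="fail")) as fail_counter by username success_block
| table username result fail_counter
```

Here success_block increments on each success per user, so the fail counter naturally restarts after that user's own success and is untouched by other users' events; with the sample data, user1's first three fails count 1, 2, 3 as expected.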
Hi everyone, I'm currently working on integrating Kaspersky CyberTrace with Splunk and have encountered a couple of issues I need help with:

1. Converting the Indicators Lookup dashboard: I successfully converted the "Kaspersky CyberTrace Matches" and "Kaspersky CyberTrace Status" dashboards to Dashboard Studio. However, the "Indicators Lookup" dashboard does not have an option for conversion and throws an error when I try to open it: "HTML Dashboards are no longer supported. You can recreate your HTML dashboard using Dashboard Studio." The code for this dashboard is quite extensive. Does anyone have any suggestions or best practices on how to convert it to Dashboard Studio effectively?

2. Data not displaying in dashboards: Even though data is being received from Kaspersky and stored in the main index, the dashboards are not displaying any information. Has anyone faced similar issues, or could anyone provide insight into what might be going wrong here?

Any guidance or solutions to these problems would be greatly appreciated. Thanks in advance for your help!