All Topics

Hi all, I would like some help with an incorrect time value in the Threat Intelligence KV store lookup "ip_intel". Every entry has a value of "1970/01/20 02:45:00" or similar; the date is the same for all of them. I assume this is an issue with parsing epoch time, but I am having a hard time identifying how it could be fixed. I would be happy with just the approximate time of upload to "ip_intel". If anyone has suggestions I would appreciate it. Thanks.
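A date in late January 1970 usually means an epoch value in seconds was divided by 1000 somewhere along the way (i.e. something treated the seconds value as milliseconds). As a sketch, assuming the lookup stores the timestamp in a field named time and the indicator in a field named ip (both names are assumptions), multiplying back by 1000 should recover a plausible date:

```
| inputlookup ip_intel
| eval fixed_time = strftime(time * 1000, "%Y/%m/%d %H:%M:%S")
| table ip, time, fixed_time
```

If fixed_time lands near the expected upload dates, the fix is to correct the field at ingestion rather than in the lookup itself.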
Hi everyone, I'm pretty new to Splunk and would really appreciate your insight on my current project. I am creating a dashboard where a timepicker (Date Range - Between) should change the values in my charts based on the period the user selects. I am currently having problems formatting my _time value to combine DATE and eventHour. Below is my search query for reference. Thank you in advance.

index=mainframe-platform sourcetype="mainframe:cecmaverage" EXPRSSN=D7X0
| dedup DATE EXPRSSN MIPS
| eval DATE=strftime(strptime(DATE,"%d%b%Y"),"%Y-%m-%d")
| eval HOUR=if(isnull(HOUR),"0",HOUR)
| eval eventHour=substr("0".HOUR,-2,2).":00:00"
| eval _time=strptime(DATE." ".eventHour,"%Y-%m-%d %H:%M:%S")
| table DATE eventHour _time EXPRSSN MIPS
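Because _time here is re-derived with eval, the dashboard timepicker will not filter on it automatically. One common approach (a sketch) is to pull the picker's boundaries into the search with addinfo and filter explicitly after the eval:

```
| addinfo
| where _time >= info_min_time AND (info_max_time = "+Infinity" OR _time <= info_max_time)
| fields - info_min_time info_max_time info_search_time info_sid
```

The "+Infinity" check covers the all-time case, where info_max_time is not numeric.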
Hi, I have events with three fields — timestamp, servername, cpu_usage — two events per server:

22-Mar-2022 00:00:00, server1, 18
23-Mar-2022 00:01:00, server1, 82
22-Mar-2022 00:00:00, server2, 78
23-Mar-2022 00:01:00, server2, 14

I want to calculate the difference between the 2nd and 1st event for each server. Can you please suggest how this can be done?
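One way to get the per-server difference (a sketch, assuming the fields are already extracted as servername and cpu_usage, and with index/sourcetype names as placeholders) is to take the earliest and latest value per server with stats:

```
index=your_index sourcetype=your_sourcetype
| stats earliest(cpu_usage) as first_cpu latest(cpu_usage) as last_cpu by servername
| eval cpu_diff = last_cpu - first_cpu
| table servername first_cpu last_cpu cpu_diff
```

With the sample data above, this would yield 64 for server1 and -64 for server2.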
I have a scheduled report that runs once every 12 hours, but each run generates the same email alert multiple times during the scheduled window. Is there any way to compress/throttle this to just one report/email per run?

| tstats min(_time) as first_time max(_time) as last_time values(sourcetype)
    where TERM(121.121.1.165) OR TERM(876.234.11.214) OR TERM(192.176.30.196) by index
| convert ctime(first_time) ctime(last_time)
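If the search itself is fine and only the alert fires repeatedly, throttling can be enabled in the alert's trigger settings ("Throttle") or directly in savedsearches.conf. A sketch, with the stanza name assumed:

```
[My 12-hourly report]
alert.suppress = 1
alert.suppress.period = 12h
```

This suppresses further triggers for 12 hours after the first one in each window.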
Hi, I have two indexes and am performing a join between them to get the top 10 categories per region; categories come from one index and region from the other. The join works, but I am unable to incorporate the top function to get the top 10 categories per region. Here is my query:

Can you please help? Many thanks, Patrick
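Since the query itself didn't come through, here is a generic sketch of the "top N per group" idiom; the index names and the join field (host) are assumptions:

```
index=index_with_categories
| join type=inner host
    [ search index=index_with_regions | fields host region ]
| stats count by region category
| sort 0 region -count
| streamstats count as rank by region
| where rank <= 10
```

The streamstats counter restarts for each region (because of the by clause), so keeping rank <= 10 gives the top 10 categories within every region, which plain top cannot do per group.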
I am able to perform a search for disk space and can see the results. However, I am not getting an alert when I set it up in the alert options. Below are the settings I have used.

Search script:
===============
index=perfmon host=XXXXXX OR host=YYYYYYY sourcetype="Perfmon:LogicalDisk" counter="% Free Space" instance="C:" OR instance="D:" OR instance="E:" Value earliest=-1m latest=now
| dedup instance host
| sort host
| eval Value=round(Value,0)
| where Value<50
| stats list(host), list(instance), list(Value)
| rename list(host) as Servers, list(instance) as Drives, list(Value) as FreeSpaceLeft%

Cron expression:
=====================
*/5 * * * *

Trigger alert condition:
=========================
search Value <= 50

Can you please help me figure out where this went wrong? I am not getting an alert for this condition.
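One likely culprit is the custom trigger condition: after the stats and rename, the field Value no longer exists in the results, so the condition `search Value <= 50` never matches. A sketch that keeps the filtering inside the search and lets the alert trigger on "Number of Results > 0" instead:

```
index=perfmon (host=XXXXXX OR host=YYYYYYY) sourcetype="Perfmon:LogicalDisk"
    counter="% Free Space" (instance="C:" OR instance="D:" OR instance="E:") earliest=-1m latest=now
| dedup instance host
| eval Value = round(Value, 0)
| where Value < 50
| stats list(host) as Servers list(instance) as Drives list(Value) as "FreeSpaceLeft%"
```

The added parentheses around the OR-ed hosts and instances also make the boolean grouping explicit, which the original relied on implicitly.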
Hi, I have two problems with my eval clause below.

1) When I look at the events collected, they don't correspond to the specified domain and url, so the sum over the tpscap field is wrong:

| eval tpscap =if(domain="stm" AND url="*%g6_%*" OR url="*WS_STOMV2_H55*" AND web_dura > 50, 1, 0)
| chart sum(tpscap) as tps

What is wrong, please?

2)

Thanks
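Two things in that eval are worth checking. In eval expressions, AND binds tighter than OR, so without parentheses the condition groups as (domain AND url1) OR (url2 AND web_dura); and eval's = does not expand * wildcards, so the url comparisons need match() or like(). A sketch of the likely intended grouping:

```
| eval tpscap = if(domain="stm"
    AND (match(url, "%g6_%") OR match(url, "WS_STOMV2_H55"))
    AND web_dura > 50, 1, 0)
| chart sum(tpscap) as tps
```

match() takes a regular expression, in which % and _ are literal characters here; adjust the patterns if the originals were meant as wildcards rather than literals.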
I want Splunk to show me the geolocation of incoming traffic. As everyone knows, syslog lines can vary a lot, and mine are not parsed at all beyond the time and date. After downloading a day's worth of syslog traffic and using "extract fields" to highlight the IP address to use for the location, I could see on the world map where the traffic came from. What I need is this exact feature, but for real-time data: I want to see this information from the syslog file in real time. So far it hasn't worked, and I don't know how to fix it. I use the same search on both the real-time syslog and the downloaded syslog file, but it only works with the downloaded file:

index=_* OR index=* sourcetype=syslog
| iplocation clientip
| geostats count by Country
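If the field extraction built from the downloaded file is tied to a different sourcetype or source, it will not apply to the live data, and clientip will simply be missing there. A sketch that extracts the IP inline instead, so the search no longer depends on the saved extraction (the IPv4 regex here is deliberately naive and will match the first dotted quad on each line):

```
index=* sourcetype=syslog
| rex "(?<clientip>\d{1,3}(?:\.\d{1,3}){3})"
| iplocation clientip
| geostats count by Country
```

If this works on live data, the underlying fix is to scope the saved extraction to the sourcetype the real-time feed actually uses.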
Hi, I have a dropdown with 3 options. When I select one of the options, its value should be set in a token and passed to a base search. However, the panel that uses this base search never appears to receive the input. Here is the XML code for the dropdown, base search, and panel:

Can you please help? Many thanks, Patrick
Hello. Given these logs:

2022-03-16 16:08:43.991 traceId="7890" svc="Service1" duration=132
2022-03-16 16:10:43.279 traceId="1234" svc="Service1" duration=132
2022-03-16 16:38:43.281 traceId="5678" svc="Service3" duration=219
2022-03-16 16:43:43.284 traceId="1234" svc="Service2" duration=320
2022-03-16 17:03:44.010 traceId="1234" svc="Service2" duration=1023
2022-03-16 17:04:44.299 traceId="5678" svc="Service3" duration=822
2022-03-16 17:19:44.579 traceId="5678" svc="Service2" duration=340
2022-03-16 17:32:44.928 traceId="1234" svc="Service1" duration=543

I would like, in a single search, to:
1) extract all traceIds that occurred between 17:00 and 17:05
2) search for the captured traceIds in a larger range (say between 16:00 and 18:00)

Is that possible? Thank you!
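This is a natural fit for a subsearch: the inner search collects the traceIds from the narrow window, and its results become an OR-ed filter for the outer search over the wider window. A sketch, with index and sourcetype names as assumptions:

```
index=main sourcetype=app earliest="03/16/2022:16:00:00" latest="03/16/2022:18:00:00"
    [ search index=main sourcetype=app earliest="03/16/2022:17:00:00" latest="03/16/2022:17:05:00"
      | fields traceId ]
```

The subsearch's `| fields traceId` expands to a condition like (traceId="1234" OR traceId="5678"), and the time modifiers inside the subsearch override the outer range.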
Hi, how can we ingest Security Hub logs into Splunk without using a HEC token? Is there an add-on for ingesting Security Hub logs into Splunk? GuardDuty findings will be integrated into Security Hub first, then sent from Security Hub together with other events into Splunk.

Thanks, Vijay Sri S
When an array of dictionaries is assigned to the output variable of a code block, only the whole array can be used as input to the following blocks; using a data path selector for key selection throws an error.

Value of the output variable:

[
  { "tags": [ "tag1" ], "domain": "example1.com" },
  { "tags": [ "tag2" ], "domain": "example2.com" }
]

Data paths:

Working: function_name:custom_function:result_domains
Not working: function_name:custom_function:result_domains.*.domain

The non-working selection throws the following error:

Python Error: Traceback (most recent call last):
  File "../pylib/phantom/decided/internals.py", line 445, in call_playbook_action_callback
  File "../pylib/phantom/decided/internals.py", line 159, in invoke_callback_for_callable
  File "../pylib/phantom/decided/internals.py", line 268, in _invoke_callback_for_callable
  File "<playbook>", line 93, in get_url_calltrace_callback
  File "<playbook>", line 399, in domains
  File "<playbook>", line 499, in code_11
  File "/opt/phantom/usr/python36/lib/python3.6/json/__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "/opt/phantom/usr/python36/lib/python3.6/json/decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/opt/phantom/usr/python36/lib/python3.6/json/decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

Is this a known limitation of code blocks (as opposed to custom functions) in Splunk SOAR?
My requirement is to run this alert with a time range of 12 hours and send an email twice a day (every 12 hours) based on what it finds. Here is my configuration:

Cron Expression: * */12 * * *
Time Range: Last 12 hours
Schedule Priority: Default
Schedule Window: 5 minutes

In my local time it runs between 9:30 AM - 10:30 AM and 9:30 PM - 10:30 PM. But within those windows (say between 9:30 AM and 10:30 AM), it triggers multiple email alerts, roughly one every 2 minutes. What I want is one email per run, i.e. one email every 12 hours. Can anyone suggest what to change in the scheduling options to achieve this?
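The cron expression itself may explain the repeated emails: a * in the minute field matches every minute, so the schedule fires every minute during the matching hours, which fits the one-email-every-couple-of-minutes symptom. A sketch of the difference:

```
# fires every minute of hours 00 and 12:
* */12 * * *

# fires once, at 00:00 and at 12:00:
0 */12 * * *
```

The 9:30/10:30 local offset would then just be the timezone difference from the scheduler's clock.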
I am getting the error message below after configuring the alert. Could you please suggest the next step?

Pathname [9188 AlertNotifierWorker-0] - Pathname 'C:\Program Files\Splunk\bin\Python3.exe C:\Program Files\Splunk\etc\apps\search\bin\sendemail.py

These are the steps I followed:
1) configured the email settings
2) enabled the port number
Hi Splunkers, I have a question. We are currently using Splunk Enterprise 8.2.5. This morning the etc/passwd file was auto-updated, which was detected by third-party software (confidential). I never changed the file, so my question is: does Splunk auto-update the $SPLUNK_HOME/etc/passwd file? Please point me to any Splunk documentation.
I would like to transfer data from a data source to a forwarder via syslog over TLS. Is it possible to use the default SSL certificate provided by Splunk for this transfer? And can that default certificate be used on non-Splunk equipment?
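On the forwarder side, receiving syslog over TLS is configured in inputs.conf. A sketch, with the port and certificate path as assumptions; note that the default certificates are shipped with every Splunk download (the default sslPassword is literally "password"), so they are generally not recommended for production, and whether third-party syslog senders will trust or present them depends on the equipment:

```
# inputs.conf on the forwarder (a sketch)
[tcp-ssl:6514]
sourcetype = syslog

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
```

For production, generating your own certificates and distributing the CA to the syslog sources is the usual approach.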
Hi community, I am new to Splunk and am evaluating it as our enterprise log collection and SIEM setup. If I forward logs to a Splunk forwarder, which then forwards them to a Splunk server, will the Splunk server be able to see the real IP address of the original log source, or will it see the forwarder's IP as the source? We want to forward all our server logs through this forwarder and then on to the server, and being able to see the real IP addresses is our main concern. Thanks.
Hi, I would like to implement a Splunk alert that checks whether a particular event happened after a certain other event, where all the events are grouped by the same request id (rid). I wonder if you could help with this, thanks.

Query A:

index=app class=ClassA conditionA=aVal | fields rid, _time | table rid, _time

Each result (rid, _time) is unique.

Query B (what I want, informally):

index=app class=ClassB conditionB=bVal rid=queryA.rid and _time > queryA._time

I would like the alert to fire if query B has a result. Expressed as SQL, it would look like this:

select field1, field2 ...
from queryB as B,
     (select id, _time from queryA where afield1=someval and afield2=val2) as A
where B.id = A.id and B._time > A._time

Any help would be greatly appreciated, thanks.
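One way to express this without join (a sketch, assuming both event classes live in index=app and share the rid field, as described above) is to pull both classes in one search and compare their times per rid with stats:

```
index=app ((class=ClassA conditionA=aVal) OR (class=ClassB conditionB=bVal))
| stats min(eval(if(class="ClassA", _time, null()))) as timeA
        max(eval(if(class="ClassB", _time, null()))) as timeB
        by rid
| where isnotnull(timeA) AND timeB > timeA
```

Any row surviving the where clause is a rid whose latest ClassB event came after its ClassA event, so the alert can simply trigger on "Number of Results > 0".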
Hi all, I have 12 months' worth of data, each month in a separate .csv lookup file, e.g. | inputlookup JanStats.csv, | inputlookup FebStats.csv, etc. I have a dashboard with about 12 panels, each showing different stats and visuals, all drawn from the one lookup table. For example, one panel shows how many hours were spent in each category:

| inputlookup JanStats.csv | stats sum(Duration) as total by Entry

Instead of creating 12 per-month dashboards with the same panels, I want a dropdown input to select the month and have that month's lookup data display across all 12 panels of a single dashboard. I started experimenting with the XML below, which works as far as it goes, but it only drives the one panel's search, whereas I have 12 panels that should all change when the dropdown selection changes (the other searches are similar, e.g. stats sum(Recovery) by User). I am struggling to figure out how to wire in the extra panels.
<form>
  <label>Drop Down Testing</label>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <input type="dropdown" token="Month" searchWhenChanged="true">
        <choice value="1">January</choice>
        <choice value="2">February</choice>
        <choice value="3">March</choice>
        <default></default>
        <change>
          <condition value="1">
            <set token="new_search">| inputlookup JanStats.csv | stats sum(Duration) AS total by Entry | table total</set>
          </condition>
          <condition value="2">
            <set token="new_search">| inputlookup FebStats.csv | stats sum(Duration) AS total by Entry | table total</set>
          </condition>
          <condition value="3">
            <set token="new_search">| inputlookup MarchStats.csv | stats sum(Duration) AS total by Entry | table total</set>
          </condition>
        </change>
      </input>
      <single>
        <search>
          <query>$new_search$</query>
          <earliest>-4h@m</earliest>
          <latest>now</latest>
        </search>
        <option name="colorMode">block</option>
        <option name="drilldown">all</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
</form>
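One way to avoid storing a whole search per month (a sketch) is to put only the lookup filename in the token; every panel can then reference the same token in its own query, so all 12 panels follow the dropdown without needing per-panel change conditions:

```
<input type="dropdown" token="month_file" searchWhenChanged="true">
  <choice value="JanStats.csv">January</choice>
  <choice value="FebStats.csv">February</choice>
  <choice value="MarchStats.csv">March</choice>
  <default>JanStats.csv</default>
</input>
...
<search>
  <query>| inputlookup $month_file$ | stats sum(Duration) AS total by Entry</query>
</search>
```

Each of the other panels uses the same token with its own stats clause, e.g. | inputlookup $month_file$ | stats sum(Recovery) by User.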
We have synthetic tests and use the availability metric in our SLA reporting. Sometimes our tests fail due to a false positive. It would be great to be able to click a Failed Session in the controller UI and mark it as OK.