All Topics


I have HEC output arriving at my receiver on services/collector/event?auto_extract_timestamp=true, and I want to extract the event time from the field named "time". The events are formatted like this:

{
  "event": {
    "@timestamp": "2022-05-05T10:22:44.965Z",
    "time": 1651746176018,
    "my_text": "Pony 1 has left the barn"
  }
}

I also have a props.conf with the following configuration:

CHARSET = UTF-8
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 13
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
TIME_FORMAT = %s%3N
TIME_PREFIX = \"time\":

The result is that the timestamp is still extracted from the "@timestamp" field, even after a lot of experimenting with TIME_PREFIX. But when I upload the same JSON manually from a file, the field I need is parsed correctly and "@timestamp" is ignored.

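One alternative worth knowing about: the HEC event endpoint also accepts a top-level "time" key in the payload envelope (epoch seconds, with optional decimal milliseconds), which sets the event time directly with no timestamp extraction at all. A minimal sketch, assuming placeholder host and token, with the millisecond value converted to seconds:

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec_token>" \
  -d '{"time": 1651746176.018, "event": {"my_text": "Pony 1 has left the barn"}}'

If changing the sender is not an option, note that the props above must live on the instance hosting the HEC input, and that TIME_PREFIX = \"time\":\s* tolerates any whitespace between the colon and the epoch value.
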
Hello, I have data that looks like this:

Name  | Type  | Value
----------------------
Name1 | TypeA | 2
Name1 | TypeB | 4
Name1 | TypeC | 6
Name2 | TypeA | 4
Name2 | TypeB | 8
Name2 | TypeC | 3
Name3 | TypeA | 1
Name3 | TypeB | 5
Name3 | TypeC | 7

Is it possible to add a column that sums the values by Name while keeping the Value column, like this?

Name  | Type  | Value | SumByName
----------------------------------
Name1 | TypeA | 2     | 12
Name1 | TypeB | 4     | 12
Name1 | TypeC | 6     | 12
Name2 | TypeA | 4     | 15
Name2 | TypeB | 8     | 15
Name2 | TypeC | 3     | 15
Name3 | TypeA | 1     | 13
Name3 | TypeB | 5     | 13
Name3 | TypeC | 7     | 13

Thanks for the help.

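A minimal sketch of one way to do this with eventstats, assuming Name, Type, and Value are already extracted fields:

... your base search ...
| eventstats sum(Value) AS SumByName BY Name
| table Name Type Value SumByName

Unlike stats, eventstats adds the aggregate to every row without collapsing them, which is exactly the "keep the Value column" requirement.
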
I have two events, a request and a response, bound by a unique transaction ID. I need to calculate the total duration between request and response, plus the average, max, and min response time across all transactions per day/per hour. The query below extracts the request and response times, but the duration is not calculated or displayed when I run it:

search query
| stats earliest(dateTime) AS request latest(dateTime) AS response BY TransactionID
| eval duration=response-request

Result of the above query:

TransactionID                          Request                  Response
000877d43ef8778123243454bda780c5e5     2022-05-05 01:36:12.916  2022-05-05 01:36:13.27

Please help with a query that calculates the duration and the avg/max/min response time for all transactions that happened in a day.

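A sketch of one likely fix: duration comes back empty above because earliest(dateTime)/latest(dateTime) return string values, and eval can only subtract numbers. Working from _time avoids the string problem entirely, assuming _time carries the event timestamp:

search query
| stats earliest(_time) AS request latest(_time) AS response BY TransactionID
| eval duration=response-request
| eval _time=request
| timechart span=1h avg(duration) max(duration) min(duration)

Change span=1h to span=1d for the daily figures. If dateTime must be used instead, convert it first with strptime(dateTime, "%Y-%m-%d %H:%M:%S.%3N").
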
Hi All, I was hoping that setting KV_MODE = xml in props.conf under the [xmlwineventlog] stanza on the standalone indexer\search head\deployment server would properly parse the Event Viewer data from servers, but unfortunately I am not seeing all the message data that should be included. Here is a sample of the data; please see the Event.EventData.Binary field:

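A minimal sketch of the relevant stanza, with the caveat that KV_MODE is a search-time setting and therefore needs to live on the search head (or the standalone instance doing the searching), not on forwarders:

[XmlWinEventLog]
KV_MODE = xml

The stanza name here assumes the events carry the XmlWinEventLog sourcetype; adjust it to whatever sourcetype they actually have, matching case exactly. Also note the Binary element of Windows XML events is hex-encoded by Windows itself, so even with correct XML extraction it may not render as readable text.
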
Hello, I recently set up a test environment (clustered deployment) on AWS to monitor and get data into the peer nodes. My environment includes: a cluster master (hosting the license), 3 indexers, 1 deployer, 3 search heads, 1 deployment server, and 2 universal forwarders. I configured the deployment server to push configuration to the forwarders, and all seems to be working fine; the forwarders are phoning home and there is sync between the DS and the UFs. But the peer nodes are not receiving the data, even though I set up the listening port (9997). I troubleshot on the UFs to see if they are pushing; an excerpt of the output from the UFs:

05-05-2022 11:02:11.118 +0000 INFO HttpPubSubConnection [3276 HttpClientPollingThread_71E59550-AD46-4814-8460-DB66C1DD0BAD] - Running phone uri=/services/broker/phonehome/connection_172.31.28.182_8089_ip-172-31-28-182.ec2.internal_uf01_71E59550-AD46-4814-8460-DB66C1DD0BAD
05-05-2022 11:02:25.767 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:02:31.076 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.21.254:9997, pset=0, reuse=0. using ACK.
05-05-2022 11:02:55.767 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:03:01.006 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.22.208:9997, pset=0, reuse=0. using ACK.
05-05-2022 11:03:11.118 +0000 INFO HttpPubSubConnection [3276 HttpClientPollingThread_71E59550-AD46-4814-8460-DB66C1DD0BAD] - Running phone uri=/services/broker/phonehome/connection_172.31.28.182_8089_ip-172-31-28-182.ec2.internal_uf01_71E59550-AD46-4814-8460-DB66C1DD0BAD
05-05-2022 11:03:25.597 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:03:30.890 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - After randomization, current is first in the list. Swapping with last item
05-05-2022 11:03:30.891 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.21.254:9997, pset=0, reuse=1.
05-05-2022 11:03:55.596 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:04:00.813 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.18.160:9997, pset=0, reuse=0. using ACK.
05-05-2022 11:04:11.124 +0000 INFO HttpPubSubConnection [3276 HttpClientPollingThread_71E59550-AD46-4814-8460-DB66C1DD0BAD] - Running phone uri=/services/broker/phonehome/connection_172.31.28.182_8089_ip-172-31-28-182.ec2.internal_uf01_71E59550-AD46-4814-8460-DB66C1DD0BAD
05-05-2022 11:04:25.596 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:04:30.704 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.22.208:9997, pset=0, reuse=0. using ACK.
05-05-2022 11:04:55.596 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:05:00.613 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.18.160:9997, pset=0, reuse=1.
05-05-2022 11:05:11.129 +0000 INFO HttpPubSubConnection [3276 HttpClientPollingThread_71E59550-AD46-4814-8460-DB66C1DD0BAD] - Running phone

Any ideas on a solution to this?

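A couple of checks that might narrow this down, sketched in SPL and run from a search head. First, confirm the indexers are registering forwarder connections at all:

index=_internal source=*metrics.log* group=tcpin_connections
| stats count BY sourceIp, hostname

Second, confirm the UF's own internal logs are reaching the indexers (host=uf01 is taken from the phonehome URI above; adjust to the UF's actual host name):

index=_internal host=uf01
| stats count BY source

If the second search returns events, the forwarding pipeline itself works — the "Connected to idx=...:9997 ... using ACK" lines in the excerpt already point that way — and the gap is more likely in the deployed inputs.conf or the target index names on the peers.
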
I am trying to extract a single section from within some JSON. (The original event is wrapped in even more JSON.) I have built a regex and tested it, and everything seems to work:

index=* sourcetype=suricata
| rex field=_raw "\"original\":(?<originalMsg>.+?})},"

BUT once I put it into the config files, nothing happens.

Props:
[source::http:kafka_iap-suricata-log]
LINE_BREAKER = (`~!\^<)
SHOULD_LINEMERGE = false
TRANSFORMS-also = extractMessage

Transforms:
[extractMessage]
REGEX = "original":(.+?})},
DEST_KEY = _raw
FORMAT = $1

Inputs:
[http://kafka_iap-suricata-log]
disabled = 0
index = ids-suricata-ext
token = tokenyNumbersGoHere
sourcetype = suricata

Sample event (copied from _raw):

{"destination":{"ip":"192.168.0.1","port":80,"address":"192.168.0.1"},"ecs":{"version":"1.12.0"},"host":{"name":"rsm"},"fileset":{"name":"eve"},"input":{"type":"log"},"suricata":{"eve":{"http":{"http_method":"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0GET","hostname":"7.tlup.microsoft.com","url":"/filestreamingservice/files/eb3d","length":0,"protocol":"HTTP/1.1","http_user_agent":"Microsoft-Delivery-Optimization/10.0"},"event_type":"http","flow_id":"841906347931855","tx_id":4,"in_iface":"ens3f0"}},"service":{"type":"suricata"},"source":{"ip":"192.168.0.1","port":57576,"address":"192.168.0.1"},"network.direction":"external","log":{"offset":1363677358,"file":{"path":"/data/suricata/eve.json"}},"@timestamp":"2022-05-05T09:29:05.976Z","agent":{"hostname":"xxx","ephemeral_id":"5a1cb090","id":"bd4004192","name":"ram-nsm","type":"filebeat","version":"7.16.2"},"tags":["iap","suricata"],"@version":"1","event":{"created":"2022-05-05T09:29:06.819Z","module":"suricata","dataset":"suricata.eve","original":{"http":{"http_method":"\\0\\0\\0\\0\\0\\0\\0\\00\\0\\0GET","hostname":"7.t.microsoft.com","url":"/filestreamingservice/files/eb3d","length":0,"protocol":"HTTP/1.1","http_user_agent":"Microsoft-Delivery-Optimization/10.0"},"dest_port":80,"flow_id":845,"in_iface":"ens3f0","proto":"TCP","src_port":57576,"dest_ip":"192.168.0.1","event_type":"http","timestamp":"2022-05-05T09:29:05.976989+0000","tx_id":4,"src_ip":"192.168.0.1"}},"network":{"transport":"TCP","community_id":"1:uE="}}

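A sketch of one thing worth checking, assuming the data arrives via that HEC input: TRANSFORMS-* rules run at index time, so they must sit on the instance that first parses the data (the HEC receiver, heavy forwarder, or indexer), and that instance needs a restart to pick them up. And since inputs.conf already assigns sourcetype = suricata, binding the props to the sourcetype may be more reliable than guessing the exact source:: string:

[suricata]
LINE_BREAKER = (`~!\^<)
SHOULD_LINEMERGE = false
TRANSFORMS-also = extractMessage

The search-time rex working while the transform does nothing is consistent with the props stanza simply never matching the incoming events.
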
Hi, I'm a newbie. I have duration values in the format "1d hh:mm:ss", but I haven't found a thread that discusses summing values in that format. Example:

hostname=hostA outage="1d 21:49:48"
hostname=hostA outage="10:30:50"

I want the result to look like: total outage = 2d 08:20:38 (or 56:20:38). Happy for any help, thanks.

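A sketch of one way to sum these, converting each outage to seconds first (assuming the field is named outage and the optional day prefix always uses "d"):

... base search ...
| rex field=outage "(?:(?<d>\d+)d\s+)?(?<h>\d+):(?<m>\d+):(?<s>\d+)"
| eval outage_secs = coalesce(d,0)*86400 + h*3600 + m*60 + s*1
| stats sum(outage_secs) AS total_secs
| eval total_outage = tostring(total_secs, "duration")

tostring(x, "duration") renders seconds back into day+HH:MM:SS form, so the example above comes out as 2+08:20:38.
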
Hi all, I am using Splunk SOAR Community Edition and have a general question on how to correctly trigger a playbook. I am thinking of a scenario where an external alert from a SIEM like QRadar or Elastic should trigger a playbook: a bruteforce alert should trigger a bruteforce playbook, a portscan alert should trigger a portscan playbook, and so on. Unfortunately, it is only possible to assign the same label to all incoming SIEM alerts, and playbooks are executed based on these labels. Is there any way to assign labels based on the type of the incoming alarm (e.g. a field of the alarm), or to distinguish between alarms in another way? As a workaround, I was thinking of a "general" playbook that distributes incoming alerts to specific playbooks — but is that really the best way to solve the problem? I'm looking forward to ideas or hints. Thank you very much in advance. Kind regards, Simon

CSV files are forwarded from a Linux UF agent to Splunk as and when they are created, e.g. at 2:50 CET. But a file created at 2:50 CET on that same UF agent is not showing up in Splunk. The app's inputs.conf does not have any interval configured.

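A sketch of a monitor stanza worth comparing against, assuming the files land in one directory (the path, index, and sourcetype below are placeholders): CSV files that share an identical header block can be skipped by the UF as apparent duplicates of already-read files, which crcSalt works around:

[monitor:///var/data/csv_drop/*.csv]
index = main
sourcetype = csv
crcSalt = <SOURCE>

splunkd.log on the UF (grep for the file name) usually states why a given file was skipped, which is the quickest way to confirm the cause.
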
Hi guys, looking for some support on this. We are trying to set up alerts on CPU metric data, to raise an incident when average CPU usage stays over 90% for the last 2 hours. We created the following base search:

| mstats avg(cpu_metric.pctIdle) as cpu_idle where index=lxmetrics earliest=-4h latest=now() span=2h by host
| eval cpu_used=round(100-cpu_idle,2)

Problem: incidents are created as soon as CPU goes over 90%, whenever the KPI search schedule fires (every 15 mins). It is not waiting for a full 2 hours of data before taking the average. Need some light on this. Thanks.

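A sketch of one fix: the span=2h buckets include a partial, still-open bucket at the tail of the window, so the "average" can be a few minutes of data. Snapping the window to whole hours and dropping the span makes each scheduled run average exactly the last two completed hours (metric and index names as in the original):

| mstats avg(cpu_metric.pctIdle) AS cpu_idle WHERE index=lxmetrics earliest=-2h@h latest=@h BY host
| eval cpu_used=round(100-cpu_idle,2)
| where cpu_used>90

With the alert condition set to "number of results > 0", an incident should only fire when a host's full 2-hour average exceeds 90%.
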
Hello Splunkers, per the details below I need to create an alert based on a threshold value. But every offeringID has a different rate, so how can I calculate a threshold for each offeringID, and how can I map this into an alert as a generic threshold value? Please suggest some ideas. Also, is anyone aware of "in dat -incache-memory" in Splunk?

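A sketch of the usual pattern for per-group thresholds — derive each offeringID's own baseline and compare against it, rather than hard-coding one number (rate is an assumed field name here):

... base search ...
| eventstats avg(rate) AS avg_rate, stdev(rate) AS sd_rate BY offeringID
| eval threshold = avg_rate + 2*sd_rate
| where rate > threshold

The baseline could also be precomputed into a lookup on a schedule and pulled in with the lookup command, which keeps the alert search itself cheap and the thresholds auditable.
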
Hi All, I am investigating the possibility of consolidating our separate standalone ES search heads into a single clustered ES instance. Due to network segmentation rules, my indexer clusters will have to remain separate. My question is: can Splunk ES utilise separate indexer clusters from the same ES search head cluster, in the same way that non-ES search head clusters can? If it is possible, are there any gotchas to be aware of? Thanks in advance. Waja1n0z1

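For reference, a sketch of the generic multi-cluster search plumbing this would build on — a search head can be attached to more than one cluster master (URIs and secrets below are placeholders):

splunk edit cluster-config -mode searchhead -master_uri https://cm-a.example.com:8089 -secret idxcluster_a_secret
splunk add cluster-master https://cm-b.example.com:8089 -secret idxcluster_b_secret

Whether ES itself behaves well on top of that (data model acceleration and correlation searches spanning both clusters) is the part to confirm with Splunk support; treat the above as the connectivity sketch only, not an ES supportability statement.
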
I am preparing a SNOW incident trend that should show the percentage of tickets reduced/increased in the current month compared to the previous month, along with the count of currently open tickets. But when I compare them using the timechart command with a span, the current value comes back as 0. Ideally it should show the total number of open tickets; because it only counts the current day's data, it shows 0. How do I make sure it shows the data for all open incidents?

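A sketch of one way to frame the month-over-month part without timechart, assuming each incident event carries a unique number field (index, sourcetype, and field names are assumptions):

index=snow sourcetype=incident earliest=-2mon@mon
| eval month=strftime(_time, "%Y-%m")
| stats dc(number) AS opened BY month
| delta opened AS change
| eval pct_change = round(change / (opened - change) * 100, 2)

The "all currently open" figure usually needs its own search over a wider time range (or a lookup of open tickets) rather than a time-bucketed one — which is also why the timechart bucket for the current period comes back as 0.
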
I have created a pie chart with 3 values in it. I want to create a drilldown for each value using the click-value token mechanism, but since I am using dbxquery it is not letting me add the token to the query. Please help me with a solution.

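A sketch of the usual Simple XML pattern, on the assumption that the pie chart's drilldown sets a token which a second dbxquery panel then consumes (connection name and SQL are placeholders):

<drilldown>
  <set token="selected_slice">$click.value$</set>
</drilldown>

and in the target panel's search:

| dbxquery connection="my_db" query="SELECT * FROM tickets WHERE status='$selected_slice$'"

If the token is rejected inside query=, quoting is the usual culprit; wrapping the whole query in double quotes and the token value in single quotes, as above, is the combination that generally behaves.
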
Hello, Would it be possible for UFs to forward/send logs/events to other HFs/UFs? Thank you!  
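
Yes in principle — intermediate forwarding is a supported topology. A minimal outputs.conf sketch on the sending UF (host and port are placeholders):

[tcpout]
defaultGroup = intermediate

[tcpout:intermediate]
server = hf1.example.com:9997

The receiving HF (or UF acting as an intermediate) just needs a matching splunktcp input in inputs.conf:

[splunktcp://9997]
disabled = 0

A UF can receive and relay splunktcp traffic, but it cannot parse it; any parsing still happens at the HF or the indexers.
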
My log appears as:

1;1;laptop-rdvt90t4;http://update-software.xxx.com/WeatherFix03_SP03120.exe;C:\Windows\SysWOW64\DynamicWeather.exe;NT AUTHORITY\SYSTEM;2022-05-02 09:23:25;192.168.1.8;;;
1;1;laptop-rdv7446p;http://update-software.xxx.com/qatherFixP00190.exe;C:\Windows\SysWOW64\Der.exe;ScWhJ\lizonghao;2022-05-02 09:25:27;192.168.1.8;;;

I use strptime() with %H:%M:%S and the regex "202\d+-\d+\-\d+\s" to get the time, but the result looks wrong (see the attached screenshot). How should I write the regex to get the time?

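One likely cause: strptime() with only %H:%M:%S ignores the date portion entirely. A sketch of timestamp extraction via props.conf instead — the timestamp is the 7th semicolon-delimited field, so TIME_PREFIX can skip the first six (the stanza name is a placeholder for your sourcetype):

[my_sourcetype]
TIME_PREFIX = ^(?:[^;]*;){6}
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

If this has to stay in search-time SPL, the equivalent would be:

| rex "^(?:[^;]*;){6}(?<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
| eval _time=strptime(ts, "%Y-%m-%d %H:%M:%S")
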
So I have this:

(index=* OR index=_*) (index="GA2014" EventCode=4625)
| dedup RecordNumber
| rename Account_Name AS EventObject.Account_Name EventCode AS EventObject.EventCode
| stats dedup_splitvals=t count AS "Count of Event Object" by "EventObject.Account_Name"
| sort limit=100000 "EventObject.Account_Name"
| fields - _span
| rename "EventObject.Account_Name" AS Account_Name
| fillnull "Count of Event Object"
| fields Account_Name, "Count of Event Object"
| search NOT Account_Name="-"

resulting in this:

+--------------+-----------------------+
| Account_Name | Count of Event Object |
+--------------+-----------------------+
| SQLSERVICE   | 1                     |
| STAFF        | 1                     |
| STUDENT      | 1                     |
| SUPORTE      | 1                     |
| SUPPORT      | 2                     |
| SYMANTEC     | 1                     |
+--------------+-----------------------+

WITH these 3 over here:

+---------------+-----------------------+
| Account_Name  | Count of Event Object |
+---------------+-----------------------+
| АДМИН         | 8                     |
| АДМИНИСТРАТОР | 8                     |
| ПОЛЬЗОВАТЕЛЬ  | 8                     |
+---------------+-----------------------+

BUT when I do a search like this:

(index=* OR index=_*) (index="GA2014" EventCode=4625)
| dedup RecordNumber
| rename Account_Name AS EventObject.Account_Name EventCode AS EventObject.EventCode Workstation_Name AS EventObject.Workstation_Name
| bucket _time span=1s
| stats dedup_splitvals=t values("EventObject.EventCode") AS "Distinct Values of EventCode" by _time, "EventObject.Account_Name", "EventObject.Workstation_Name", "EventObject.EventCode"
| sort limit=10000000 _time
| rename "EventObject.Account_Name" AS Account_Name "EventObject.EventCode" AS EventCode "EventObject.Workstation_Name" AS Workstation_Name
| fields _time, Account_Name, Workstation_Name, "Distinct Values of EventCode"
| search NOT Account_Name="-"

I get this:

+---------------------+--------------+------------------+------------------------------+
| _time               | Account_Name | Workstation_Name | Distinct Values of EventCode |
+---------------------+--------------+------------------+------------------------------+
| 2020-02-21 01:03:48 | Demo         | workstation      | 4625                         |
| 2020-02-21 01:05:57 | Reception    | workstation      | 4625                         |
| 2020-02-21 01:09:06 | User11       | workstation      | 4625                         |
| 2020-02-21 01:10:34 | Ieuser       | workstation      | 4625                         |
+---------------------+--------------+------------------+------------------------------+

without АДМИН, АДМИНИСТРАТОР, or ПОЛЬЗОВАТЕЛЬ anywhere in sight. I searched for them with Ctrl+F in the browser and found nothing, so I don't know yet whether this affects only these 3 accounts or others too. Honestly, I don't know what name to give this thread/question — advice on that is welcome too, if I'm able to rename it. P.S.: It's 2 in the morning over here, so any typos must be the late hour...

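A diagnostic sketch that might help isolate whether the Cyrillic rows are lost in the time bucketing or in field extraction — filter for the non-ASCII account names directly and look at their raw fields (the Unicode range below assumes the names are Cyrillic):

(index="GA2014" EventCode=4625)
| regex Account_Name="[\x{0400}-\x{04FF}]"
| table _time Account_Name Workstation_Name RecordNumber

If these rows come back here but vanish after the bucket/stats pipeline, a likely culprit is a null field in the BY list (e.g. a missing Workstation_Name for those events), since stats silently drops any row where a BY field is null.
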
Is there a way to do a search like this?

if Eventid=1111
    only do these statements
elseif Eventid=2222
    only do these statements
elseif Eventid=3333
    only do these statements
do these extra statements ...

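A sketch of the closest SPL idiom — branch per event with eval's case(), then apply the shared commands to everything (index, field names, and the branch labels below are placeholders taken from the pseudocode):

index=main (Eventid=1111 OR Eventid=2222 OR Eventid=3333)
| eval branch=case(Eventid==1111, "first", Eventid==2222, "second", Eventid==3333, "third")
| eval note=case(branch=="first", "handled by rule 1", branch=="second", "handled by rule 2", branch=="third", "handled by rule 3")
| stats count BY branch, note

For branches that genuinely need different commands (not just different per-event field logic), separate searches combined with append, or a multisearch, would be the usual alternative.
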
Hello Splunkers, I have a client that already has an IBM QRadar SIEM and wants to integrate it with Splunk SOAR (formerly named Splunk Phantom). Can you tell me whether this is possible and how to implement it? Regards, Marcos Pereira

I want to get the QID list from yesterday's published data. For that I'm using the PUBLISHED_DATETIME field with yesterday's date. The field's values are in GMT format (e.g. 2005-11-11T08:00:00Z). For example, when running this search on May 4th, I need the QID fields with a published date of 05/03/2022 (May 3rd):

| table QID PUBLISHED_DATETIME

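A sketch of the filter, parsing PUBLISHED_DATETIME into epoch time and keeping only yesterday (the field name and format are taken from the question):

... base search ...
| eval pub_epoch=strptime(PUBLISHED_DATETIME, "%Y-%m-%dT%H:%M:%SZ")
| where pub_epoch >= relative_time(now(), "-1d@d") AND pub_epoch < relative_time(now(), "@d")
| table QID PUBLISHED_DATETIME

relative_time(now(), "-1d@d") is midnight at the start of yesterday and "@d" is midnight today, so the window covers exactly yesterday's dates regardless of when the search runs.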