All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Splunkers, greetings! I have data coming from a syslog server, for which the sourcetype is "syslog". I have split the data across three different indexes in transforms.conf using MetaData:Index and a regular expression like (abc* | xyz* ), and it is working fine. Now I need to hardcode the sourcetype for each set of data going to a different index: the sourcetype currently comes in as "syslog", but I want a separate sourcetype name for every separate index. Can you please help?
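For reference, a minimal transforms.conf/props.conf sketch of the kind of sourcetype override being asked about, sitting alongside the existing MetaData:Index routing; the stanza names and the resulting sourcetype names (syslog_abc, syslog_xyz) are hypothetical placeholders:

transforms.conf:
[set_sourcetype_abc]
REGEX = abc
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::syslog_abc

[set_sourcetype_xyz]
REGEX = xyz
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::syslog_xyz

props.conf:
[syslog]
# The existing index-routing transforms would be listed in the same TRANSFORMS chain
TRANSFORMS-set_sourcetype = set_sourcetype_abc, set_sourcetype_xyz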
A script is running fine on the UF agent but is not sending data to the indexer. The UF agent forwards data to a HF and then to the IDX.
What Splunk Enterprise version could I use to capture all the logs, including Windows XP, Windows 7, Windows Server 2008, and Solaris 9? We currently have Splunk 6.5.3.
My webhook endpoint needs to retrieve the results of the alert that was triggered. Am I correct in thinking that the payload's "sid" value is the same as the Enterprise REST API's {search_id} value in the search/jobs/{search_id}/results endpoint? I'm a little surprised the webhook docs don't say anything about this since it seems like the logical next step. Normally I'd just try it myself, but we're in a gigantic corporate environment, there's tons of paperwork to get permission to do anything, etc. etc. -- much faster to just ask if I'm on the right track. And, I guess, the other obvious question is, if I'm not on the right track, how do I retrieve the search results based on a webhook payload? Thanks in advance!
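If the sid from the payload does map to {search_id} (which is the assumption here), retrieving the results would look roughly like this sketch; the host, credentials, and sid are placeholders:

curl -k -u <user>:<password> \
  "https://splunk.example.com:8089/services/search/jobs/<sid_from_payload>/results?output_mode=json"

Note that the results remain available only while the triggered alert's search artifacts are retained (the job's TTL).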
Hi Team, we want to set up an IP allowlist in CrowdStrike, so we need to know the IP address range that Splunk uses to communicate with CrowdStrike through the Falcon Event Streams Add-on.
I'm in the middle of a historical data migration from on-prem indexers to S3 in Splunk Cloud. Some of the data is making it through, but I'm getting a ton of these types of messages in splunkd.log on the on-prem indexers: WARN S3Client - Error getting object name = <...GUID/receipt.json(0,-1,) to localPath = /opt/splunk/var/run/splunk/cachemanager/receipt-(some numbers.json>
Hello all, I have run into an interesting issue with an ingest-time extraction:
[extract]
REGEX = ^([^\r\n]+)$
FORMAT = message::$1
DEST_KEY = _raw
Truncation is not the case: I set it to zero and the whole event is no longer than 5000 characters, BUT the message field is truncated after exactly 4023 characters. Where is it documented that an extracted field cannot be longer than this many characters? Thanks.
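One hedged guess: index-time transforms only scan the first LOOKAHEAD characters of an event, and LOOKAHEAD defaults to 4096 in transforms.conf, which is in the neighbourhood of the 4023-character cutoff described above. A sketch of the same stanza with the scan window raised (the value 10000 is arbitrary):

[extract]
REGEX = ^([^\r\n]+)$
FORMAT = message::$1
DEST_KEY = _raw
# Assumption: the observed truncation comes from the default 4096-character
# regex scan window, not from TRUNCATE
LOOKAHEAD = 10000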
Hello Splunkers! I'm pretty new to Splunk and I have inherited an old Splunk project that I didn't set up at all. I'm trying to train myself on it, but I have some problems I couldn't solve alone. I have one Search Head, one Indexer, and between 3 and 5 forwarders depending on my needs. Here is the VM of my indexer: almost all the logs that I collected went to /dev/vda1, which is not supposed to be the case. I've overridden the default storage location, but I guess it doesn't matter... /opt/splunk/etc/system/local/indexes.conf:
[main]
homePath = /mnt/data/$_index_name/db
I assume that's the reason why I still get those messages. Please let me know if I did something wrong or if I missed something. Thanks in advance for your help! Regards, Antoine
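For comparison, a minimal indexes.conf sketch of a fully relocated index, assuming /mnt/data is the intended storage mount; homePath alone leaves cold and thawed buckets on their defaults, and buckets already written under /opt/splunk do not move by themselves:

[main]
homePath   = /mnt/data/$_index_name/db
coldPath   = /mnt/data/$_index_name/colddb
thawedPath = /mnt/data/$_index_name/thaweddb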
Hi, I'm quite new to Splunk and coming from Elasticsearch, so my knowledge is biased. However, I did notice that Elastic performs faster on large datasets, and I think one of the main reasons is the on-the-fly field extraction Splunk performs when searching. Hence we created a sourcetype with ingest-time field extraction. Now I would expect these fields to always be available, even when choosing Fast mode, but this does not seem to be the case. So my questions: (How) are fields stored in a Splunk index when they are extracted during ingest? Can I tell Splunk NOT to extract extra fields for a certain index when in Fast or Smart mode, but just show the fields extracted during ingest? Thanks.
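As a rough sketch of how an ingest-time (indexed) field is wired up and then queried without search-time extraction, assuming a hypothetical field ms_code and sourcetype my_sourcetype; indexed fields are written into the index's tsidx files next to the raw data, which is what makes them available to tstats:

props.conf:
[my_sourcetype]
TRANSFORMS-indexed_fields = add_ms_code

transforms.conf:
[add_ms_code]
REGEX = ms_code=(\w+)
FORMAT = ms_code::$1
WRITE_META = true

fields.conf:
[ms_code]
INDEXED = true

A search that uses only the indexed copy of the field, regardless of search mode:
| tstats count WHERE index=my_index BY ms_code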
Hello everyone, I am currently working on the integration of Citrix NetScaler with Splunk. I'd like to see the AppFlow/NetFlow data in Splunk to use it for traffic balancing. My setup is as follows: Splunk v8.2.4, Splunk App for Stream v8.0.2 (and the TAs as well), Splunk Add-on for Citrix NetScaler v8.1.1. I was following the docs and installed as described, and the files from the Citrix TA are copied to the Stream app (https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/ConfigureIPFIXinputs). Even though the netflow elements appear, they are not getting decoded, and I am seeing this:
Following IANA, I was able to figure out that "5951" is the enterprise ID of the manufacturer (https://www.iana.org/assignments/enterprise-numbers/enterprise-numbers), which is NetScaler in this case. Unfortunately I did not find any documentation on the decoding procedure for those bytes. While trying to understand what the streamfwd binary does and how the solution is embedded into the Python scripts, I stumbled over one interesting fact: in $SPLUNK_HOME/etc/apps/splunk_app_stream/bin/splunk_app_stream/models/vocabulary.py there is a reference to the URL "http://purl.org/cloudmeter/config", which seems to be involved in decoding somehow. However, when I try to open it, it returns a 404.
Coming back to the original issue: those AppFlows are not decoded. Is there a known solution for this? If not, does anyone know where those element definitions may be found? Many thanks in advance! Best, Stan
PS: Seems to be related to https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-decode-netflow-elements-Key-Values-pair/m-p/595345
Hi, I'm quite new to Splunk, so please bear with me if I ask obvious questions. Things that were relatively simple in Grafana (which we are coming from) seem like huge tasks here in Splunk, so I do hope someone can help me with the following. I have an index in which a field, ms_result, is extracted. This field can have numerous result codes; only two of them are good ("OK" and "200-10000"), all other codes are error codes. Now I would like the total of the events with an error code to appear as a percentage of the overall total of events within this search (per minute). So let's say we have 1000 events and 100 of them have an error code, then 10% should be shown on the (area) graph. Below is the picture I would like to recreate. Is this by any means possible? Thanks.
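A minimal SPL sketch of the percentage-per-minute idea, assuming a hypothetical index/sourcetype and the two "good" ms_result values described above:

index=my_index sourcetype=my_sourcetype
| eval is_error=if(ms_result="OK" OR ms_result="200-10000", 0, 1)
| timechart span=1m sum(is_error) AS errors count AS total
| eval error_pct=round(errors*100/total, 2)
| fields _time error_pct

Plotted as an area chart, error_pct gives the per-minute percentage of events carrying an error code.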
I have a HEC output coming to my HEC receiver at services/collector/event?auto_extract_timestamp=true, and I want to extract the timestamp from the field named "time". The format of the event is like:
{
  "event": {
    "@timestamp": "2022-05-05T10:22:44.965Z",
    "time": 1651746176018,
    "my_text": "Pony 1 has left the barn"
  }
}
I also have a props.conf with the following configuration:
CHARSET = UTF-8
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
MAX_TIMESTAMP_LOOKAHEAD = 13
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
TIME_FORMAT = %s%3N
TIME_PREFIX = \"time\":
As a result, my timestamp is extracted from the "@timestamp" field, even though I experimented a lot with TIME_PREFIX. But when I manually upload the JSON as a file, the field I need is parsed OK and "@timestamp" is ignored.
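A hedged props.conf sketch that anchors TIME_PREFIX on the quoted key and widens the lookahead a little; the sourcetype name is a placeholder, and whether these settings take effect also depends on them being deployed on the instance that parses the HEC traffic (HF or indexer) and on auto_extract_timestamp applying to the /event endpoint:

[my_hec_sourcetype]
KV_MODE = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = "time"\s*:\s*
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 16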
Hello I have data that looks like this :    Name | Type | Value ------------------------------------------ Name1 | TypeA | 2 Name1 | TypeB | 4 Name1 | TypeC | 6 Name2 | TypeA | 4 Name2 | TypeB | 8 Name2 | TypeC | 3 Name3 | TypeA | 1 Name3 | TypeB | 5 Name3 | TypeC | 7    Is it possible to add a column that sums the values by Name while keeping the Value column, like this :    Name | Type | Value | SumByName --------------------------------------------------------- Name1 | TypeA | 2 | 12 Name1 | TypeB | 4 | 12 Name1 | TypeC | 6 | 12 Name2 | TypeA | 4 | 15 Name2 | TypeB | 8 | 15 Name2 | TypeC | 3 | 15 Name3 | TypeA | 1 | 13 Name3 | TypeB | 5 | 13 Name3 | TypeC | 7 | 13    Thanks for the help.
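A minimal SPL sketch of one way to do this, assuming Name, Type, and Value already exist as fields; eventstats adds the per-Name sum as a new column without collapsing the rows:

... base search ...
| eventstats sum(Value) AS SumByName BY Name
| table Name Type Value SumByName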
I have two events, request and response, and a unique ID that binds each transaction. I have to calculate the total duration between request and response, and the average, max, and min response time across all transactions triggered per day/per hour. The query below works for extracting the request and response, but the duration is not calculated or displayed when I run it:
search query | stats earliest(dateTime) AS request latest(dateTime) AS response BY TransactionID | eval duration=response-request
Result for the above query:
TransactionID                                                                          Request                                              Response
000877d43ef8778123243454bda780c5e5     2022-05-05 01:36:12.916      2022-05-05 01:36:13.27
Please help with writing a query that calculates the duration and the avg, max, and min response times for all the transactions that happened in a day.
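A hedged sketch of why the subtraction comes back empty and one way around it: earliest() and latest() of the string field dateTime return strings, so response - request has nothing numeric to work with. Using _time (or converting dateTime with strptime) makes the math work; the hourly bucketing at the end is one interpretation of "per day/per hour":

search query
| stats earliest(_time) AS req_time latest(_time) AS resp_time BY TransactionID
| eval duration=resp_time-req_time
| bin req_time span=1h
| stats avg(duration) AS avg_response max(duration) AS max_response min(duration) AS min_response count AS transactions BY req_time

If dateTime must be used instead of _time, an eval with strptime(dateTime, "%Y-%m-%d %H:%M:%S.%Q") before the stats would convert it to epoch seconds (format guessed from the sample values).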
Hi All, I was hoping that setting KV_MODE = xml in props.conf under the [xmlwineventlog] stanza on the standalone indexer/search head/deployment server would properly parse the Event Viewer data from servers, but unfortunately I am not seeing all the message data that should be included. Here is a sample of the data; please see the Event.EventData.Binary field:
Hello, I recently set up a test environment (clustered deployment) on AWS to monitor and get data into the peer nodes. My environment includes: a cluster master (hosting the license), 3 indexers, 1 deployer, 3 search heads, 1 deployment server, and 2 universal forwarders. I configured the deployment server to push configuration to the forwarders, and all seems to be working fine; the forwarders are phoning home and there is sync between the DS and the UFs. But the peer nodes are not receiving the data, even though I set up the listening port (9997). I troubleshot on the UFs to see if they are pushing; an excerpt of the output from the UFs:
05-05-2022 11:02:11.118 +0000 INFO HttpPubSubConnection [3276 HttpClientPollingThread_71E59550-AD46-4814-8460-DB66C1DD0BAD] - Running phone uri=/services/broker/phonehome/connection_172.31.28.182_8089_ip-172-31-28-182.ec2.internal_uf01_71E59550-AD46-4814-8460-DB66C1DD0BAD
05-05-2022 11:02:25.767 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:02:31.076 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.21.254:9997, pset=0, reuse=0. using ACK.
05-05-2022 11:02:55.767 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:03:01.006 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.22.208:9997, pset=0, reuse=0. using ACK.
05-05-2022 11:03:11.118 +0000 INFO HttpPubSubConnection [3276 HttpClientPollingThread_71E59550-AD46-4814-8460-DB66C1DD0BAD] - Running phone uri=/services/broker/phonehome/connection_172.31.28.182_8089_ip-172-31-28-182.ec2.internal_uf01_71E59550-AD46-4814-8460-DB66C1DD0BAD
05-05-2022 11:03:25.597 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:03:30.890 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - After randomization, current is first in the list. Swapping with last item
05-05-2022 11:03:30.891 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.21.254:9997, pset=0, reuse=1.
05-05-2022 11:03:55.596 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:04:00.813 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.18.160:9997, pset=0, reuse=0. using ACK.
05-05-2022 11:04:11.124 +0000 INFO HttpPubSubConnection [3276 HttpClientPollingThread_71E59550-AD46-4814-8460-DB66C1DD0BAD] - Running phone uri=/services/broker/phonehome/connection_172.31.28.182_8089_ip-172-31-28-182.ec2.internal_uf01_71E59550-AD46-4814-8460-DB66C1DD0BAD
05-05-2022 11:04:25.596 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:04:30.704 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.22.208:9997, pset=0, reuse=0. using ACK.
05-05-2022 11:04:55.596 +0000 INFO TailReader [3323 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
05-05-2022 11:05:00.613 +0000 INFO AutoLoadBalancedConnectionStrategy [3316 TcpOutEloop] - Connected to idx=172.31.18.160:9997, pset=0, reuse=1.
05-05-2022 11:05:11.129 +0000 INFO HttpPubSubConnection [3276 HttpClientPollingThread_71E59550-AD46-4814-8460-DB66C1DD0BAD] - Running phone
Any idea on a solution to this?
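Given that the UF log above shows successful connections to port 9997, one hedged verification step is to check from the search head what, if anything, has actually been indexed from that forwarder (the host value is a placeholder):

| tstats count WHERE index=* host="<uf_hostname>" BY index sourcetype

If nothing comes back, possible causes worth checking are the index named in the UF's inputs.conf not existing on the peers, or the search heads not being connected to the indexer cluster, rather than the forwarding itself.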
I am trying to extract a single section from within some JSON (the original event is wrapped in even more JSON). I have built a regex and tested it, and everything seems to work:
index=* sourcetype=suricata | rex field=_raw "\"original\":(?<originalMsg>.+?})},"
BUT once I put it into the config files, nothing happens.
Props:
[source::http:kafka_iap-suricata-log]
LINE_BREAKER = (`~!\^<)
SHOULD_LINEMERGE = false
TRANSFORMS-also = extractMessage
Transforms:
[extractMessage]
REGEX = "original":(.+?})},
DEST_KEY = _raw
FORMAT = $1
Inputs:
[http://kafka_iap-suricata-log]
disabled = 0
index = ids-suricata-ext
token = tokenyNumbersGoHere
sourcetype = suricata
Sample Event (copied from _raw):
{"destination":{"ip":"192.168.0.1","port":80,"address":"192.168.0.1"},"ecs":{"version":"1.12.0"},"host":{"name":"rsm"},"fileset":{"name":"eve"},"input":{"type":"log"},"suricata":{"eve":{"http":{"http_method":"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0GET","hostname":"7.tlup.microsoft.com","url":"/filestreamingservice/files/eb3d","length":0,"protocol":"HTTP/1.1","http_user_agent":"Microsoft-Delivery-Optimization/10.0"},"event_type":"http","flow_id":"841906347931855","tx_id":4,"in_iface":"ens3f0"}},"service":{"type":"suricata"},"source":{"ip":"192.168.0.1","port":57576,"address":"192.168.0.1"},"network.direction":"external","log":{"offset":1363677358,"file":{"path":"/data/suricata/eve.json"}},"@timestamp":"2022-05-05T09:29:05.976Z","agent":{"hostname":"xxx","ephemeral_id":"5a1cb090","id":"bd4004192","name":"ram-nsm","type":"filebeat","version":"7.16.2"},"tags":["iap","suricata"],"@version":"1","event":{"created":"2022-05-05T09:29:06.819Z","module":"suricata","dataset":"suricata.eve","original":{"http":{"http_method":"\\0\\0\\0\\0\\0\\0\\0\\00\\0\\0GET","hostname":"7.t.microsoft.com","url":"/filestreamingservice/files/eb3d","length":0,"protocol":"HTTP/1.1","http_user_agent":"Microsoft-Delivery-Optimization/10.0"},"dest_port":80,"flow_id":845,"in_iface":"ens3f0","proto":"TCP","src_port":57576,"dest_ip":"192.168.0.1","event_type":"http","timestamp":"2022-05-05T09:29:05.976989+0000","tx_id":4,"src_ip":"192.168.0.1"}},"network":{"transport":"TCP","community_id":"1:uE="}}
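One hedged variation worth trying: index-time TRANSFORMS only run on the first instance that parses the data (a heavy forwarder or indexer, not a universal forwarder), and source-based stanza matching for HEC inputs can be fragile, so attaching the same transform to the sourcetype is a common alternative. A sketch reusing the existing extractMessage transform:

[suricata]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TRANSFORMS-also = extractMessage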
Hi, I am a newbie. I have a duration value with the format "1d hh:mm:ss", but I haven't found a thread that discusses summing values in that format. Example:
hostname=hostA outage="1d 21:49:48"
hostname=hostA outage="10:30:50"
I want the result to be something like: total outage = 2d 08:20:38 or 56:20:38. Happy for any help, thanks.
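A minimal SPL sketch of one way to sum that format, splitting the optional "Nd " prefix from the hh:mm:ss part (field and host names as in the example):

... base search ...
| rex field=outage "^(?:(?<days>\d+)d\s+)?(?<hours>\d+):(?<mins>\d+):(?<secs>\d+)$"
| fillnull value=0 days
| eval outage_secs = days*86400 + hours*3600 + mins*60 + secs
| stats sum(outage_secs) AS total_secs BY hostname
| eval total_outage = tostring(total_secs, "duration")

tostring(..., "duration") renders the total back as days plus HH:MM:SS.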
Hi all, I am using Splunk SOAR Community Edition and have a general question on how to correctly trigger a playbook. I am thinking of a scenario where an external alert from a SIEM like QRadar or Elastic should trigger a playbook. For example, a brute-force alert should trigger a brute-force playbook, a port-scan alert should trigger a port-scan playbook, and so on. Unfortunately, it is only possible to assign the same labels to all incoming SIEM alerts, and based on these labels a playbook is then executed. Is there any way to assign the labels based on the type of the incoming alarm (e.g. a field of the alarm), or to handle the differences between alarms in another way? As a workaround, I was thinking of a "general" playbook that dispatches incoming alerts to specific playbooks, but is that really the best way to solve the problem? I'm looking forward to some ideas or hints. Thank you very much in advance. Kind regards, Simon
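As a rough illustration of the "general dispatcher playbook" workaround, a custom code block could inspect a field of the incoming artifact and launch a specific child playbook. The alert_type CEF field and the repository/playbook names are hypothetical, and this assumes the phantom.playbook() call that SOAR exposes to playbook code:

import phantom.rules as phantom

def route_alert(container=None, **kwargs):
    # Hypothetical field on the first artifact carrying the SIEM alert type
    results = phantom.collect2(container=container,
                               datapath=["artifact:*.cef.alert_type"])
    alert_type = results[0][0] if results else None

    # Map alert types to (hypothetical) playbooks in the local repository
    mapping = {
        "bruteforce": "local/bruteforce_response",
        "portscan": "local/portscan_response",
    }
    target = mapping.get(alert_type)
    if target:
        # Hand the container off to the matching specialised playbook
        phantom.playbook(playbook=target, container=container)
    return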
The CSV files are forwarded from a Linux UF agent to Splunk as and when they are created, i.e. at 2:50 CET. But a file created at 2:50 CET on the same UF agent is not showing up in Splunk. The inputs.conf file of the app does not have any intervals configured.