All Topics



Can someone point me to the capability that needs to be granted for ES users to be able to view the Adaptive Responses section on the ES Incident Review page? A few of my ES users from the security team have access to ES but can't see this section, whereas admin users can. Non-admin users get an error message saying: "Adaptive Responses: Error: No adaptive response actions found." Admin users, meanwhile, see the section populated as expected.
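
A hedged starting point (my suggestion, not part of the original post): comparing the capabilities of the admin role against the affected users' role via the REST API can show what differs. The endpoint is standard Splunk; the role name ess_analyst is an assumption about your setup.

| rest /services/authorization/roles splunk_server=local
| search title="admin" OR title="ess_analyst"
| table title capabilities imported_capabilities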
I have a correlation search in Splunk ES that computes some statistics and returns a table with the fields "src_ip", "dest_ip", "count", "latest", "action", "app", "src", "dest", and "dest_port". The search looks something like the following:

| tstats count latest(_time) as latest values(All_Traffic.action) as action values(All_Traffic.app) as app values(All_Traffic.src) as src values(All_Traffic.dest) as dest values(All_Traffic.dest_port) as dest_port from datamodel=Network_Traffic where All_Traffic.dest_ip=1.2.3.4 by All_Traffic.src_ip All_Traffic.dest_ip
| rename All_Traffic.* as *

When this correlation search triggers, it writes an event to the notable index, and that notable event contains the fields output by the search, except src_ip and dest_ip. Note that I'm talking about the notable index here, not the incidents shown in Incident Review. I've looked in the documentation for an explanation of this behaviour, but can't find anything. Can someone explain how Splunk picks which fields are written to the notable index events, and if possible, how one can force Splunk to write all fields from the search to the notable index?
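
A hedged workaround sketch (an assumption on my part, not confirmed behaviour): copying the values into fields with other names at the end of the correlation search, so they reach the notable event regardless of any special handling applied to src_ip and dest_ip. The orig_* field names are my choice.

| eval orig_src_ip=src_ip, orig_dest_ip=dest_ip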
Hello Team, this is the first time I am posting a question, and I hope I have explained it thoroughly. I am trying to create a regex for a log file which contains multiple values throughout the log that need the same field name, but Splunk does not allow using the same field name again. Here is the sample log:

/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9

Note: text values are 4 characters and numbers contain 10 digits. How can I achieve a field extraction and format like this?

/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9
/TXT1/TXT2/NMBR1/NMBR2/NMBR3/NMBR4/NMBR5/NMBR6/NMBR7/NMBR8/NMBR9

Thank you in advance.
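
A hedged sketch of one approach (the field name "record" and the exact pattern are mine, based on the stated lengths of 4 characters and 10 digits): rex with max_match=0 captures every repetition into a multivalue field, and mvexpand then puts each repetition on its own row.

| rex max_match=0 "(?<record>/\w{4}/\w{4}(?:/\d{10}){9})"
| mvexpand record
| table record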
I want to extract the two characters 78 from barvalue and have them in a separate column in my table:

deltavalue = 890(11%) sigmavalue=334(56%) barvalue=445(78%)
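
A hedged sketch (the field name bar_pct is mine): a rex that anchors on the barvalue key and captures the percentage inside the parentheses.

| rex "barvalue=\d+\((?<bar_pct>\d+)%\)"
| table bar_pct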
Hi there, I created multiple field extractions, extracting values from different sourcetypes into the same field:

sourcetype0: "field0":"(?<geolocation_code>.{7})
sourcetype1: "field1":"(?<geolocation_code>.{7})
sourcetype2: "field2":"(?<geolocation_code>.{7})

They are populated as expected, all looking like ABCDEF0, CDEFGH3 or ZDEGFH9. But when searching with geolocation_code=ABCDEF0 I get zero hits, even though the preview in the fields pane on the left shows me plenty of those values. Using geolocation_code!=ABCDEF0, on the other hand, works exactly as intended. Also, geolocation_code=ABCDEF0* gives the result I expected from geolocation_code=ABCDEF0, even though this field only contains exactly the value I'm looking for. I don't really understand what is happening here, and why it happens only with this extraction and not with others.
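
A hedged diagnostic (my suggestion, not from the post): filtering with an explicit where clause evaluates the extracted field directly, bypassing the indexed-token optimisation that field=value in the base search can rely on, so comparing the two can narrow down the cause. The index name is a placeholder.

index=my_index sourcetype=sourcetype0
| where geolocation_code="ABCDEF0"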
100 * sum([x]) / sum([y] - [z])  
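
Assuming this fragment is meant to compute a percentage of sum(x) over sum(y - z) in SPL, a hedged sketch (field names assumed):

| stats sum(x) as sum_x sum(eval(y - z)) as sum_y_minus_z
| eval pct = 100 * sum_x / sum_y_minus_z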
How to loop over the array values after splitting with a delimiter?

| eval json="{\"key1\":\"key1value\",\"key2\":\"key2value\",\"key3\":\"key3value\",\"key4\":\"key4Value\"}"
| eval keyNames="key1,key2,key3,key4" ``` key names can be added or removed based on the requirement ```
| eval keys=split(keyNames, ",")

How do I loop over these keys and perform some operation on each? I have tried some MV commands, but no luck. Example:

| eval count = mvcount(keys)
| streamstats count as counter
| eval jsonKey = mvindex(keys, count)
| eval keyValue = json_extract(json, jsonKey)

I am not sure how to achieve this use case; can someone please help me with it?
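
A hedged sketch of one way to do this (assuming Splunk 8.0+ for mvmap): mvmap evaluates an expression once per multivalue element, which avoids explicit looping.

| eval json="{\"key1\":\"key1value\",\"key2\":\"key2value\",\"key3\":\"key3value\",\"key4\":\"key4Value\"}"
| eval keys=split("key1,key2,key3,key4", ",")
| eval keyValues=mvmap(keys, keys . "=" . json_extract(json, keys))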
Hi, I have to find the IP addresses hitting the firewall, and for that I need to implement a whois lookup for those IPs, but with no luck: I tried the Whois app and it's not working. Is there another way? Thanks.
How do I make each column of a column chart drill down to a new dashboard in Dashboard Studio?
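
A hedged sketch (the app name, dashboard name, and token name are placeholders): in Dashboard Studio, drilldowns are configured as eventHandlers in the chart's JSON definition, and $row.<field>.value$ tokens can pass the clicked column's value to the target dashboard.

"eventHandlers": [
  {
    "type": "drilldown.customUrl",
    "options": {
      "url": "/app/my_app/my_target_dashboard?form.selected=$row.category.value$",
      "newTab": true
    }
  }
]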
I want to align the <input> fields vertically in the same row and the same panel. Is there any way to do that?

<row depends="$field_show$">
  <panel>
    <input type="dropdown">
      <label>----</label>
      <choice value="true">true</choice>
      <choice value="false">false</choice>
      <default>false</default>
      <initialValue>false</initialValue>
    </input>
    <input type="time" searchWhenChanged="true">
      <label>Select time range</label>
      <default>
        <earliest>-0m</earliest>
        <latest>+1d@h</latest>
      </default>
    </input>
    <input type="dropdown">
      <label>----</label>
      <choice value="true">true</choice>
      <choice value="false">false</choice>
      <default>false</default>
      <initialValue>false</initialValue>
    </input>
    <input type="time" searchWhenChanged="true">
      <label>Select time range</label>
      <default>
        <earliest>-0m</earliest>
        <latest>+1d@h</latest>
      </default>
    </input>
  </panel>
</row>
Hi, I am a beginner in Splunk. I am trying to search multiple lines in the log and generate an alert if certain values match. For example, in the log below, I want to compare the IP address (10.0.46.173) when the error code is "ERROR_CREDENTIAL". If the IP addresses are the same, then an alert should be created.

2022-12-13 19:05:48.247 ERROR:ip-10-0-46-173.ap-northeast-1.compute.internal,10.0.46.173,19:05:48,ERROR_CREDENTIAL,sfsdf
2022-12-13 19:06:00.580 ERROR:ip-10-0-46-173.ap-northeast-1.compute.internal,10.0.46.173,19:06:00,ERROR_CREDENTIAL,kjsadasd
2022-12-13 19:06:17.537 ERROR:ip-10-0-46-173.ap-northeast-1.compute.internal,10.0.46.173,19:06:17,ERROR_CREDENTIAL,opuio

I wonder if anyone could help and point me to the right documentation.

Thanks,
Junster
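
A hedged sketch (field names and the threshold are mine): extract the comma-separated fields with rex, keep only the ERROR_CREDENTIAL events, and alert when the same IP appears more than once in the search window.

| rex "ERROR:(?<error_host>[^,]+),(?<error_ip>[^,]+),(?<error_time>[^,]+),(?<error_code>[^,]+)"
| search error_code="ERROR_CREDENTIAL"
| stats count by error_ip
| where count > 1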
Hi! I have various syslog clients sending me logs about their current state (a certain process), e.g.

[timestamp] host1 is in state "yes"
[timestamp] host2 is in state "yes"
[timestamp] host1 is in state "no"
[timestamp] host2 is in state "no"

When a state transition occurs randomly and at a low rate I don't care, but I want to find and match when more than x hosts have logged the same state transition from "yes" to "no" (or vice versa) within a certain time interval, e.g. 60s. It's kind of a "sliding window" idea, where only chunks/groups of the same value during a short period of time are of interest to me. So far I've only managed to visualize the peaks, but unfortunately the peaks are rather anonymous; they don't stand out. Any ideas where to start? Which parts of the Splunk search language might provide such a function?
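
A hedged sketch of one approach (field names and the threshold of 5 are mine): streamstats can carry each host's previous state forward, leaving only the transition events, which can then be bucketed into 60s windows and counted per window. Note that bin produces fixed 60-second buckets rather than a true sliding window; streamstats with time_window=60s is an alternative if overlap matters.

| rex "(?<log_host>\S+) is in state \"(?<state>\w+)\""
| sort 0 _time
| streamstats current=f last(state) as prev_state by log_host
| where prev_state="yes" AND state="no"
| bin _time span=60s
| stats dc(log_host) as hosts_transitioned by _time
| where hosts_transitioned > 5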
I am running some diags to isolate some problems, but I'm getting the following error message:

LSOF: Failed
Result Message: Could not detect `lsof` for the scottj1 user. Process splunkd - splunkd server (32362)

The application is on the host and its location is in my .bashrc file. When I run the command "which lsof" over ssh, it correctly finds it. What might be the problem here?
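
A hedged check (my assumption about the cause): splunkd typically runs with a non-interactive environment that does not source .bashrc, so the PATH the diag sees may differ from the one your ssh session sees. The user name is a placeholder.

sudo -u splunkuser sh -c 'echo "$PATH"; command -v lsof'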
Hi, I have seen yellow and red health warnings for TCPOutAutoLB-0 for some time. We identified a few issues, such as a HF that was sending loads of data and locking onto one single indexer for a long time. After some HF config changes (i.e. parallel pipelines and asynchronous forwarding) the warning frequency has been reduced, but I still see these yellow warnings randomly.

How worried should you be if you see yellow or red health warnings like this: TCPOutAutoLB-0 with root cause "more than <20% or 50%> of forwarding destinations have failed"? These warnings eventually clear. When I check indexer saturation, a few indexers will be at 100%, but I cannot tell for how long; the 100% saturation seems to be very short-lived. Do these alerts fire based on count alone, or is there also a saturation-time check, e.g. the 100% saturation state must be sustained for more than 30 seconds? Are these typical warnings to just live with, or something you never want to see, and when is it a problem? Is there a setting to increase the indexers' hardware utilization so they process events faster and clear blocks? Thank you.
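
A hedged example of forwarder-side load-balancing settings that are sometimes tuned for this symptom (server names and values are illustrative assumptions, not recommendations):

# outputs.conf on the heavy forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997
autoLBFrequency = 30
autoLBVolume = 10485760
useACK = true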
Hello, I'm using the MS Teams add-on to collect call records (https://splunkbase.splunk.com/app/4994). The webhook port opened for the collection offers TLS 1.0 and TLS 1.1. How can I disable them? server.conf and web.conf already contain the sslVersions = tls1.2 setting, and none of the other ports offer the deprecated protocols.
Hello, I have installed the Hyper-V TA on our search head and our heavy forwarders. There was a Splunkbase app called Splunk for Server Virtualization, but it was retired. What have Splunkers used for creating visualizations of this data? Thanks and God bless, Genesius
Hi. I have an issue but can't find the solution, nor anyone who has had the same issue, so I'm posting it here. I want to downsample the training part of a dataset using the sample command. To do so, I get the count of the minority class with a subsearch and use it to sample my dataset by class value.

| sample [ | search index=main sourcetype="heartstroke_csv"
    | sample partitions=10 seed=29
    | where partition_number<8 and stroke=1
    | stats count as samples
    | return $samples ] seed=29 by stroke

This works fine, but when I use it inside an accelerated data model it raises the following error:

Error in 'sample' command: Unrecognized argument: [ | search index=main sourcetype="heartstroke_csv" | sample partitions=10 seed=29 | where partition_number<8 and stroke=1 | stats count as samples | return count=samples ]

So I thought about explicitly assigning the count argument:

| sample count= [ | search index=main sourcetype="heartstroke_csv"
    | sample partitions=10 seed=29
    | where partition_number<8 and stroke=1
    | stats count as samples
    | return $samples ] seed=29 by stroke

This doesn't work even in a regular unaccelerated dataset. It returns:

Error in 'sample' command: Invalid value for count: must be an int

I'm using Splunk Enterprise 8.2.5 and Splunk Machine Learning Toolkit 5.3.3. I don't know if it's a version issue or if my syntax is wrong.

EDIT: I have also tried:

| sample [ | search index=main sourcetype="heartstroke_csv"
    | sample partitions=10 seed=29
    | where partition_number<8 and stroke=1
    | stats count as samples
    | return count=samples ] seed=29 by stroke

This works only in the unaccelerated data model. In the accelerated one it fails to recognize the argument again.

EDIT2: As a final attempt, I put explicit integer values into the count parameter ( ... | sample count=193 ... ). It seems my model is unaccelerable. I have 2 datasets, one for the data and the other for the ids that correspond to the regular and downsampled versions of the train and test sets.

dataset: DATA
index=main sourcetype="heartstroke_csv"

dataset: SETS
index=main sourcetype="heartstroke_csv"
| sample partitions=10 seed=29
| eval set=if(partition_number<8,"train","test")
| appendpipe [ | where set="train"
    | appendpipe [ | where stroke=0 and bmi!="N/A" or stroke=1
        | sample count=193 ```[ | search index=main sourcetype="heartstroke_csv" | sample partitions=10 seed=29 | where partition_number<8 and stroke=1 | stats count as samples | return count=samples ]``` seed=29 by stroke
        | stats values(id) as ids by set
        | eval downsample=1 ]
    | appendpipe [ | stats values(id) as ids by set
        | eval downsample=0 ]
    | search ids=* ]
| appendpipe [ | where set="test"
    | appendpipe [ | sample count=56 ```[ | search index=main sourcetype="heartstroke_csv" | sample partitions=10 seed=29 | where partition_number>=8 and stroke=1 | stats count as samples | return count=samples ]``` seed=29 by stroke
        | stats values(id) as ids by set
        | eval downsample=1 ]
    | appendpipe [ | stats values(id) as ids by set
        | eval downsample=0 ]
    | search ids=* ]
| search ids=*
| rename ids as id
| table id set downsample
Hi, I have an edge server with a Splunk forwarder shipping a log file to the indexer. The log is being indexed, but Splunk is swapping days and months. The events start like this example:

17:00:16,965;06-12-2022 17:00:16.740;10.129.150.83;

This event is from 6 December but is indexed as 12 June. The time field is OK, but _time is not. I added props.conf in app/local on the edge server with the following config, but it did not resolve the issue:

[mbe-cdr]
TIME_PREFIX = \d+:\d+:\d+\,\d+\;
TIME_FORMAT = %d-%m-%Y %H:%M:%S.%Q

Thanks in advance.
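
A hedged possibility (my assumption, not confirmed by the post): if the edge server runs a universal forwarder, timestamp extraction happens at the parsing tier (indexer or heavy forwarder), so props.conf on the forwarder would be ignored and the same stanza may need to be deployed there instead.

# props.conf on the indexer or heavy forwarder (a sketch; same stanza as above)
[mbe-cdr]
TIME_PREFIX = \d+:\d+:\d+\,\d+\;
TIME_FORMAT = %d-%m-%Y %H:%M:%S.%Q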
Hello Splunkers, I recently created a custom alert on my Search Head, and for this alert to run I needed to install a pip library (here HttpNtlmAuth). I used this command:

/opt/splunk/bin/python3.7 -m pip install <my_package>

Afterwards my script & alert ran correctly, but the health check "Integrity check of installed files" failed because of this install (Splunk is complaining that my Python bin and lib have changed, and some other files are missing). I have read that I can install the Python package manually, but I would have the same integrity check problems, right? Thanks for your answers, GaetanVP
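
A hedged alternative sketch (the paths, the app name my_alert_app, and the use of pip's --target flag are assumptions on my part): installing the package into the app's own directory leaves Splunk's bundled Python untouched, which should keep the integrity check clean. HttpNtlmAuth is provided by the requests_ntlm package on PyPI.

/opt/splunk/bin/python3.7 -m pip install --target=/opt/splunk/etc/apps/my_alert_app/bin/lib requests_ntlm

Then, at the top of the alert script:

import os
import sys
# make the app-local copy importable before anything else
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "lib"))
from requests_ntlm import HttpNtlmAuth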
Hi everyone, I have a field called "User" that contains similar values, and I was wondering how to remove or merge similar values. For example, "Tony W" and "Anthony W" (both values of the same field) should be merged together. I was looking at the fuzzy search and jellyfish apps on Splunkbase, but couldn't find a solution to the problem. My search query:

(index="abc" Name=*) OR (index="xyz" department=* displayName=*)
| eval User=if(isnull(Name), upper(displayName), upper(Name))
| stats values(department) as department by User
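
A hedged sketch of one approach (the lookup name user_aliases and its columns alias/canonical_user are my invention): maintain a small alias lookup and normalize through it before the stats, so known variants collapse into one canonical name.

(index="abc" Name=*) OR (index="xyz" department=* displayName=*)
| eval User=if(isnull(Name), upper(displayName), upper(Name))
| lookup user_aliases alias as User OUTPUT canonical_user
| eval User=coalesce(canonical_user, User)
| stats values(department) as department by User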