All Topics


Hi. I need to find the IP addresses hitting our firewall, and for that I want to run a whois lookup on those IPs, but with no luck so far. I tried the Whois app and it's not working. Is there any other way to do this? Thanks.
How do I make each column of a column chart drill down to a new dashboard in Dashboard Studio?
I want to align the <input> fields vertically in the same row and same panel. Is there any way to do this?

<row depends="$field_show$">
  <panel>
    <input type="dropdown">
      <label>----</label>
      <choice value="true">true</choice>
      <choice value="false">false</choice>
      <default>false</default>
      <initialValue>false</initialValue>
    </input>
    <input type="time" searchWhenChanged="true">
      <label>Select time range</label>
      <default>
        <earliest>-0m</earliest>
        <latest>+1d@h</latest>
      </default>
    </input>
    <input type="dropdown">
      <label>----</label>
      <choice value="true">true</choice>
      <choice value="false">false</choice>
      <default>false</default>
      <initialValue>false</initialValue>
    </input>
    <input type="time" searchWhenChanged="true">
      <label>Select time range</label>
      <default>
        <earliest>-0m</earliest>
        <latest>+1d@h</latest>
      </default>
    </input>
  </panel>
</row>
Hi, I am a beginner with Splunk. I am trying to search multiple lines in a log and generate an alert if certain values match. For example, in the log below, I want to compare the IP address (10.0.46.173) when the Error Code is "ERROR_CREDENTIAL". If the IP addresses are the same, then an alert should be created.

2022-12-13 19:05:48.247 ERROR:ip-10-0-46-173.ap-northeast-1.compute.internal,10.0.46.173,19:05:48,ERROR_CREDENTIAL,sfsdf
2022-12-13 19:06:00.580 ERROR:ip-10-0-46-173.ap-northeast-1.compute.internal,10.0.46.173,19:06:00,ERROR_CREDENTIAL,kjsadasd
2022-12-13 19:06:17.537 ERROR:ip-10-0-46-173.ap-northeast-1.compute.internal,10.0.46.173,19:06:17,ERROR_CREDENTIAL,opuio

I wonder if anyone could help and point me to the right documentation.

Thanks,
Junster
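A minimal sketch of one way to approach this (untested; the index name and the alert threshold of 3 are assumptions, and the rex pattern is based only on the sample lines above):

index=your_index "ERROR_CREDENTIAL"
| rex "ERROR:(?<src_host>[^,]+),(?<src_ip>[^,]+),(?<err_time>[^,]+),(?<err_code>[^,]+),"
| search err_code="ERROR_CREDENTIAL"
| stats count by src_ip
| where count >= 3

Saved as an alert, this would trigger whenever a single IP produces three or more ERROR_CREDENTIAL events within the search window.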
Hi! I have various syslog clients sending me logs about their current state (a certain process), e.g.:

[timestamp] host1 is in state "yes"
[timestamp] host2 is in state "yes"
[timestamp] host1 is in state "no"
[timestamp] host2 is in state "no"

When a state transition occurs randomly and at a low rate I don't care, but I want to find and match when more than x hosts have logged the same state transition from "yes" to "no" (or vice versa) within a certain time interval, e.g. 60s. It's kind of a "sliding window" idea where only chunks/groups of the same value during a short period of time are of interest to me. So far I've only managed to visualize the "peaks", but unfortunately the peaks are rather anonymous; they don't stand out. Any ideas where to start? Which Splunk search commands might provide this kind of function?
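One place to start might be streamstats to detect the per-host transition, then a 60-second bucket to count how many hosts transitioned together. A hedged sketch: the index, the rex pattern and the threshold of 5 are assumptions, and bin gives fixed 60s buckets rather than a true sliding window.

index=syslog "is in state"
| rex "is in state \"(?<state>\w+)\""
| sort 0 host _time
| streamstats current=f last(state) as prev_state by host
| where prev_state="yes" AND state="no"
| bin _time span=60s
| stats dc(host) as transitioning_hosts by _time
| where transitioning_hosts > 5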
I am running some diags to isolate some problems, but I'm getting the following error message:

LSOF
Failed
Result Message: Could not detect `lsof` for the scottj1 user. Process splunkd - splunkd server (32362)

The application is on the host and its path is in my .bashrc file. When I run the command "which lsof" over ssh it correctly finds it. What might be the problem here?
Hi, I have seen yellow and red health warnings for TCPOutAutoLB-0 for some time. We identified a few issues, like a HF that was sending loads of data and locking to one single indexer for a long time. After some HF config changes (i.e. parallel pipelines and asynchronous forwarding) the warning frequency has been reduced, but I still see these yellow warnings randomly.

How worried should you be if you see yellow or red health warnings like this?

TCPOutAutoLB-0 with RootCause "more than <20% or 50%> of forwarding destinations have failed.."

These warnings eventually clear. When I check indexer saturation a few will be at 100%, but I cannot tell for how long; the 100% saturation seems to be very short-lived. Do these alerts fire based on count alone, or is there a saturation time check as well, i.e. does the 100% saturation state need to be sustained for more than 30 seconds? Are these typical warnings to just live with, or is it something you never want to see? When is it a problem? Is there a setting to increase the indexer HW use to process the events faster and clear blocks?

Thank you
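To see how long the output queues actually stay full, one option is to chart queue fill percentage from the forwarders' own metrics.log. A hedged sketch, assuming the default queue instrumentation in metrics.log and run against the hosts that show the warning:

index=_internal source=*metrics.log group=queue name=tcpout*
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=1m max(pct_full) by host

Sustained plateaus at 100% usually matter more than brief spikes.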
Hello, I'm using the MS Teams add-on to collect call records (https://splunkbase.splunk.com/app/4994). The webhook port opened for this collection offers TLS 1.0 and TLS 1.1. How can I disable them? The server.conf and web.conf already contain the sslVersions=tls1.2 setting, and no other ports offer the deprecated protocols...
Hello, I have installed the Hyper-V TA on our search head and our heavy forwarders. There was a Splunkbase app called Splunk for Server Virtualization, but it was retired. What have Splunkers used for creating visualizations of this data? Thanks and God bless, Genesius
Hi. I have an issue but I can't find the solution, nor anyone who has had the same issue, so I'm posting it here. I want to downsample the training part of a dataset using the sample command. To do so, I get the count of the minority class using a subsearch and use it to sample my dataset by class value:

| sample [ | search index=main sourcetype="heartstroke_csv"
    | sample partitions=10 seed=29
    | where partition_number<8 and stroke=1
    | stats count as samples
    | return $samples ] seed=29 by stroke

This works fine, but when I use it inside an accelerated data model it raises the following error:

Error in 'sample' command: Unrecognized argument: [ | search index=main sourcetype="heartstroke_csv" | sample partitions=10 seed=29 | where partition_number<8 and stroke=1 | stats count as samples | return count=samples ]

So I thought about explicitly assigning the count argument:

| sample count= [ | search index=main sourcetype="heartstroke_csv"
    | sample partitions=10 seed=29
    | where partition_number<8 and stroke=1
    | stats count as samples
    | return $samples ] seed=29 by stroke

This doesn't work even in a regular unaccelerated dataset. It returns:

Error in 'sample' command: Invalid value for count: must be an int

I'm using v8.2.5 of Splunk Enterprise and v5.3.3 of the Splunk Machine Learning Toolkit. I don't know if it's a version issue or my syntax is wrong.

EDIT: I have also tried:

| sample [ | search index=main sourcetype="heartstroke_csv"
    | sample partitions=10 seed=29
    | where partition_number<8 and stroke=1
    | stats count as samples
    | return count=samples ] seed=29 by stroke

This works only in the unaccelerated data model. In the accelerated one it fails to recognize the argument again.

EDIT2: As a final trial, I put the integer values explicitly into the count parameter ( ... | sample count=193 ... ). It seems my model is unaccelerable. I have 2 datasets, one for the data and the other for the ids that correspond to the regular and downsampled versions of the train and test sets.

dataset: DATA
index=main sourcetype="heartstroke_csv"

dataset: SETS
index=main sourcetype="heartstroke_csv"
| sample partitions=10 seed=29
| eval set=if(partition_number<8,"train","test")
| appendpipe [ | where set="train"
    | appendpipe [ | where stroke=0 and bmi!="N/A" or stroke=1
        | sample count=193 ```[ | search index=main sourcetype="heartstroke_csv" | sample partitions=10 seed=29 | where partition_number<8 and stroke=1 | stats count as samples | return count=samples ]``` seed=29 by stroke
        | stats values(id) as ids by set
        | eval downsample=1 ]
    | appendpipe [ | stats values(id) as ids by set
        | eval downsample=0 ]
    | search ids=* ]
| appendpipe [ | where set="test"
    | appendpipe [ | sample count=56 ```[ | search index=main sourcetype="heartstroke_csv" | sample partitions=10 seed=29 | where partition_number>=8 and stroke=1 | stats count as samples | return count=samples ]``` seed=29 by stroke
        | stats values(id) as ids by set
        | eval downsample=1 ]
    | appendpipe [ | stats values(id) as ids by set
        | eval downsample=0 ]
    | search ids=* ]
| search ids=*
| rename ids as id
| table id set downsample
Hi, I have an edge server with a Splunk forwarder shipping a log file to the indexer. The log is being indexed, but Splunk is swapping days and months. The events start like this example:

17:00:16,965;06-12-2022 17:00:16.740;10.129.150.83;

This event is from 6 December but is indexed as 12 June. The time field is OK but _time is not. I added a props.conf in the app's local directory on the edge server with the following settings, but it did not resolve the issue:

[mbe-cdr]
TIME_PREFIX = \d+:\d+:\d+\,\d+\;
TIME_FORMAT = %d-%m-%Y %H:%M:%S.%Q

Thanks in advance
Hello Splunkers, I recently created a custom alert on my search head, and for this alert to run I needed to install a pip library (here HttpNtlmAuth). I used this command:

/opt/splunk/bin/python3.7 -m pip install <my_package>

Afterwards my script & alert ran correctly, but the health check "Integrity check of installed files" failed because of this install (Splunk is complaining that my Python bin and lib have changed, and some other files are missing). I have read that I can install the Python package manually, but I would have the same integrity check problems, right? Thanks for your answers, GaetanVP
Hi everyone, I have a field called "User" that contains similar values and I was wondering how to remove or merge similar values. For example, "Tony W" and "Anthony W" (both values of the same field) should be merged together. I was looking at the fuzzy search and jellyfish apps on Splunkbase, but couldn't find a solution to the problem. My search query:

(index="abc" Name=*) OR (index="xyz" department=* displayName=*)
| eval User=if(isnull(Name), upper(displayName), upper(Name))
| stats values(department) as department by User
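One pragmatic approach is a small normalization lookup that maps known aliases to a canonical name. A hedged sketch: user_aliases.csv, with columns alias and canonical_user, is a hypothetical lookup you would maintain yourself (e.g. alias="TONY W", canonical_user="ANTHONY W").

(index="abc" Name=*) OR (index="xyz" department=* displayName=*)
| eval User=upper(coalesce(Name, displayName))
| lookup user_aliases.csv alias AS User OUTPUTNEW canonical_user
| eval User=coalesce(canonical_user, User)
| stats values(department) as department by User

True fuzzy matching (edit distance) would need a custom command or app, but an explicit alias table is often easier to audit.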
I have daily user login/logout data like this:

date,user,action
2020-04-14 01:00:00,user1,login
2020-04-14 01:05:00,user2,login
2020-04-14 01:10:00,user3,login
2020-04-14 02:40:00,user2,logout
2020-04-14 02:50:00,user3,logout
2020-04-14 03:10:00,user2,login
2020-04-14 03:10:00,user1,logout
2020-04-14 03:30:00,user3,login
2020-04-14 04:20:00,user2,logout

Users can log in and out multiple times in a day; a session closes and then a new session opens (like user2). I need to get the duration for every session, and there is no session id. How can I merge these two events into one row: a login and the first logout after that login? Like this:

login_date,logout_date,user
2020-04-14 01:00:00,2020-04-14 03:10:00,user1
2020-04-14 01:05:00,2020-04-14 02:40:00,user2
2020-04-14 01:10:00,2020-04-14 02:50:00,user3
2020-04-14 03:10:00,2020-04-14 04:20:00,user2
2020-04-14 03:30:00,-,user3
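A hedged sketch of one way to pair each logout with the preceding login per user using streamstats (the sourcetype is a placeholder; open sessions with no logout yet, like the last user3 login, will not produce a row with this approach):

sourcetype=your_login_data
| sort 0 user _time
| streamstats current=f last(_time) as login_time last(action) as prev_action by user
| where action="logout" AND prev_action="login"
| eval login_date=strftime(login_time, "%Y-%m-%d %H:%M:%S")
| eval logout_date=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval duration_sec=_time-login_time
| table login_date logout_date duration_sec user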
As the title says, the Splunk Web GUI does not come up after I restart the Splunk server from the CLI. Help?
Hi, I'm looking for a way to change the hour of a time variable. Example: myTime="2022-11-20 05:23:42", and I want myTime to equal "2022-11-20 08:00:00". How can I do this, please? Thanks
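A hedged sketch using eval time functions, assuming myTime is a string in the format shown ("@d" snaps to midnight of that day and "+8h" then gives 08:00:00):

| makeresults
| eval myTime="2022-11-20 05:23:42"
| eval epoch=strptime(myTime, "%Y-%m-%d %H:%M:%S")
| eval myTime=strftime(relative_time(epoch, "@d+8h"), "%Y-%m-%d %H:%M:%S")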
I am new to Splunk. I am creating a report to count the successful and failed logins to my system. The report shows the two columns, but when I put the report on my dashboard it adds two extra columns, span and span_days. I can't figure out how to remove those two extra columns. Any advice?
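If the extra columns are coming through as ordinary result fields, excluding them at the end of the report search may be enough. A hedged sketch; if they are actually the internal _span/_spandays fields, use those names instead:

... your existing report search ...
| fields - span span_days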
I have a lookup table with three columns: Endpoints, Rate, Window. I want to get the Window value for a particular endpoint, which I will then use in my main query. The query looks like this:

sourcetype="blabla" http_url = "some endpoint" minutesago=
| inputlookup SomeFile.csv
| search Endpoint = "Some endpoint"
| return Window

I get an error running this query:

Error in 'search' command: Unable to parse the search: Comparator '=' is missing a term on the right hand side.

Can anyone help?
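The lookup part needs to run as a subsearch in square brackets, and `return $Window` (with the dollar sign) returns just the bare value so it can sit on the right-hand side of the comparison. A hedged sketch, assuming minutesago is a field you can filter on in the outer search and that the column header in the CSV really is "Endpoint" (it must match exactly, e.g. Endpoint vs. Endpoints):

sourcetype="blabla" http_url="some endpoint" minutesago=[ | inputlookup SomeFile.csv
    | search Endpoint="Some endpoint"
    | return $Window ]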
I have a search with a subsearch, and I run into the limitation on the maximum number of subsearch results (50,000). Now I am trying to figure out how to rebuild my search, but I am stuck. Can anyone point me in the right direction? My search now:

index=TEST
| search logger="success - Metadata:*"
    [ search index=TEST
    | search logger="Response: OK]"
    | fields message.messageId ]
| stats dc(message.messageId)
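One way to avoid the subsearch limit entirely is to pull both event sets in a single search and correlate with stats. A hedged sketch; mvfind is used to test whether each message.messageId saw both logger values:

index=TEST (logger="success - Metadata:*" OR logger="Response: OK]")
| stats values(logger) as loggers by message.messageId
| where isnotnull(mvfind(loggers, "^success - Metadata:")) AND isnotnull(mvfind(loggers, "^Response: OK"))
| stats count as distinct_message_ids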
Hi, I am doing Boss of the SOC v1 and I am stuck on a question where I need to use a lookup. I imported the .csv file and here are my commands:

index=botsv1 dest=192.168.250.70 src="23.22.63.114" http_method=POST
| rex field=form_data "passwd=(?<passwd>[a-zA-Z]{6})"
| lookup coldplay.csv Song as passwd OUTPUTNEW song

The error I get:

Error in 'lookup' command: Could not find all of the specified destination fields in the lookup table.

Here I found a writeup with a similar command: https://www.aldeid.com/wiki/TryHackMe-BP-Splunk/Advanced-Persitent-Threat##7_-_One_of_the_passwords_in_the_brute_force_attack_is_James_Brodsky%E2%80%99s_favorite_Coldplay_song._Which_six_character_song_is_it? I tried to run it and received the same error. Do you know how I can solve it?
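That error usually means the OUTPUTNEW field name does not match a column header in the CSV; lookup field names are case-sensitive. It may help to inspect the actual headers first and then match them exactly. A hedged sketch, assuming the column turns out to be lowercase "song":

| inputlookup coldplay.csv | head 5

index=botsv1 dest=192.168.250.70 src="23.22.63.114" http_method=POST
| rex field=form_data "passwd=(?<passwd>[a-zA-Z]{6})"
| lookup coldplay.csv song AS passwd OUTPUTNEW song
| where isnotnull(song)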