All Topics


Hi, I want to know how long and when either of two games is being played on a PS4 or a laptop, and be notified via email of the IP address, when the game play started, when it stopped, and the duration of the session. There are multiple game play sessions during the day, and I also want to be able to graph game play by day and by week. I am using a squid proxy, and the destination traffic for both games is known, for example api.gamesite1.com for game 1 and api.gamesite2.com for game 2. The traffic is initiated from the PS4 or laptop every 14 seconds on average, and when the game stops being played the traffic stops appearing. Since multiple sessions of either game could be played during the day, I want to capture for each game session the source IP address, the start and finish times, and the duration between them. Can anyone help with how to do this?
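One possible approach, as a sketch: since requests arrive roughly every 14 seconds during play, events can be grouped into sessions with transaction and a maxpause slightly above that interval. The index name (squid) and field names (src_ip, dest_host) are assumptions; substitute whatever your squid sourcetype actually extracts.

```
index=squid (dest_host="api.gamesite1.com" OR dest_host="api.gamesite2.com")
| transaction src_ip, dest_host maxpause=60s
| eval start_time=strftime(_time, "%F %T")
| eval end_time=strftime(_time + duration, "%F %T")
| table src_ip, dest_host, start_time, end_time, duration
```

transaction fills in duration automatically, so each row is one game session. For the daily and weekly graphs, the same base search followed by something like `| timechart span=1d sum(duration) by dest_host` should work.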
Hi, team! I have a rule: index=example source="Rule" | fields user, src_time, src_app, src, src_lat, src_long, src_city, src_country, dest_time, dest_app, dest, dest_lat, dest_long, dest_city, dest_country, distance, speed | stats count by dest, dest_app And there is a lookup in which the IP addresses and apps are listed in two columns. How can I exclude from the rule the IP/application pairs that are in the lookup? Thank you for your time!
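One way to do this, as a sketch: bring the lookup's columns into the events and keep only the rows where no match was found. The lookup name (exclusion_lookup) and its column names (ip, app) are assumptions; substitute your real names.

```
index=example source="Rule"
| lookup exclusion_lookup ip AS dest, app AS dest_app OUTPUT ip AS excluded_ip
| where isnull(excluded_ip)
| stats count by dest, dest_app
```

Because lookup matches on both columns at once, only exact ip-and-app pairs from the lookup are excluded, not every event that happens to share just the IP or just the app.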
I have a search that counts the number of times a user runs a program, and then returns the usernames of the users who run it more than a threshold 'x_times'. The search requires information from two sourcetypes that have one 'event_id' in common.     index=blah (sourcetype=example_1 OR sourcetype=example_2) earliest=-1d@d | stats values(username), values(type), values(program_name), values(hostname) by event_id | search type=filter_1 program_name=filter_2 | eventstats count as user_count by username | search user_count >= 5 | table username      My goal is that, after I have the usernames I'm interested in, I want to be able to run predictions on each username. I have tried this two ways: Way 1     | map search="search index=blah (sourcetype=example_1 OR sourcetype=example_2) earliest=-90d@d | stats values(username), values(type), values(program_name), values(hostname) by event_id | search username=$username$ type=filter_1 program_name=filter_2 | timechart span=1d count as distinct_count | predict distinct_count algorithm=LLP5 | table output1, output2, prediction"     Way 2     | map search="search index=blah sourcetype=example_1 earliest=-90d@d | join event_id max=0 [ search index=blah sourcetype=example_2 earliest=-90d@d | table event_id, program_name] | search username=$username$ type=filter_1 program_name=filter_2 | timechart span=1d count as distinct_count | predict distinct_count algorithm=LLP5 | table output1, output2, prediction"      Experimentation has shown me that the map search can't seem to process past the stats or the join. Without the stats or the join, and then only being able to use one filter, I can get it to work. But with them it fails, and I'm unable to get an accurate prediction. Is there a solution, or do I need to approach this a different way?
Hi, can someone please guide me on how to schedule PDF delivery for a Splunk Dashboard Studio dashboard? Thanks in advance.
I recently set up Security Essentials for reporting on common ransomware extensions. I received my first alert, but it tagged a bunch of MP3s as TeslaCrypt3.0+. I have confirmed the files are not ransomware. Is there a reason for this? How can I limit the number of false positives?
Hey guys, so I have two lookup tables, table1 and table2.   Table 1: ID Username Fname Lname Table 2: Username   What I want to do is have my search result look like this:   ID, Username(from table one), Fname, Lname, Username(table two) 54, User1, John, Smith, User1   The reason I want it that way is that I want to compare the username from table 1 to table 2, so that I can know whether the user is missing from the source we're getting table 2 from. I was able to get append to work, but the issue I run into is that it won't place the usernames in the same row. It shows all the values for table 1 filling the columns, and then shows all the values for table 2 below them. Ex:   ID, Username(from table one), Fname, Lname, Username(table two) 54, User1, John, Smith, (blank) 55, User2, Jane, Smith, (blank) (blank),(blank),(blank),(blank),User1 (blank),(blank),(blank),(blank),User2     I just want the usernames from table 1 to match and be in the same row as the username in table 2.
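A sketch of one way to line the rows up, assuming the files are named table1.csv and table2.csv and the Username values match exactly: use table1 as the base and look each username up in table2, so a match lands in the same row and a miss stays null.

```
| inputlookup table1.csv
| lookup table2.csv Username OUTPUT Username AS Username_table2
| eval status=if(isnull(Username_table2), "missing from table 2", "present")
| table ID, Username, Fname, Lname, Username_table2, status
```

This avoids append entirely: lookup is a row-by-row enrichment, so the table2 value is attached to the matching table1 row instead of being stacked underneath it, and the status column makes the missing users easy to filter.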
Hello team, please join us on Saturday, September 25th at 11:00 AM for our next Mumbai Splunk User Group meet-up. I will be presenting on using Splunk Observability to collect all metrics and metadata logs from AWS. https://usergroups.splunk.com/e/m8sas8/
There is no data on Mondays, so my timecharts always dip to 0.   {search string} | eval date_wday=lower(strftime(_time,"%A")) | where date_wday!="monday" | timechart span=1d count by ColName   Is there any way to make the timechart skip Mondays entirely (not just set them to 0)?
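One way to skip Mondays entirely, as a sketch: avoid timechart's continuous time axis and chart over a string date instead. With a categorical x-axis the filtered-out days simply don't appear, rather than showing as zero.

```
{search string}
| eval date_wday=lower(strftime(_time, "%A"))
| where date_wday!="monday"
| eval day=strftime(_time, "%Y-%m-%d")
| chart count over day by ColName
```

The trade-off is that the x-axis is now a list of labels, not a time axis, so zooming and time-based drilldown behave differently than with timechart.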
I have an alert that joins raw events with a lookup containing thresholds (and yes, it has to be a join).  I would like to take one field from the alert details, last_file_time, and outputlookup only that field back to the root lookup table. Question: Is there a way to outputlookup only a single field from the table output? Example: | inputlookup MyFileThresholds.csv | join type=left file_name [ search ....... eval last_file_time=strftime(_time, "%x %T") ] | table monitor_status current_time current_day file_name file_cutoff_time host last_file_time | outputlookup append=false MyFileThresholds.csv  <- I only want last_file_time going back to the root lookup table.
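One pattern that may help, as a sketch using the lookup and field names above: read the lookup as the base, join in only last_file_time keyed by file_name, then write the result back. Matched rows pick up the new last_file_time, unmatched rows keep their old value, and every other column passes through untouched because it was never dropped.

```
| inputlookup MyFileThresholds.csv
| join type=left overwrite=true file_name
    [ search <your existing alert search>
      | eval last_file_time=strftime(_time, "%x %T")
      | fields file_name, last_file_time ]
| outputlookup MyFileThresholds.csv
```

The `| fields file_name, last_file_time` inside the subsearch is what limits the write-back to that single field; without it the join would pull every field of the alert search into the lookup.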
Hey guys. I have multiple events combined into transactions. I'd like to view the duration of each transaction on a timechart, to get an overview of when and for how long each transaction occurred. My search so far is: searchterms | eval start_time = if(like(_raw, "%START%"), "start", null()) | eval end_time = if(like(_raw, "%END%"), "end", null()) | transaction JobDescription startswith=(LogMessage="* START *") endswith=(LogMessage="* END *") maxevents=5000 | timechart [pls help] I'm pretty lost on this case, so help is very much appreciated.
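One thing that may simplify this: transaction already computes a duration field (seconds between the first and last event), so the two eval lines aren't needed. A sketch of the last stage, with the span being an assumption to adjust:

```
searchterms
| transaction JobDescription startswith=(LogMessage="* START *") endswith=(LogMessage="* END *") maxevents=5000
| timechart span=1h max(duration) AS max_duration by JobDescription
```

Each transaction is timestamped at its start, so the chart shows when transactions began and how long the longest one in each bucket ran, with one series per JobDescription.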
Hi folks, I am getting the status of my applications (Server-001 and Server-002) every 15 mins, like the example below, in JSON format, and pushing it to a Splunk forwarder. I want to create a line chart, or maybe some other visualization, based on the events below. Say, for PASS we assign the value 1 and for FAIL we assign the value 0, and plot a chart over _time. Thanks Event 1                  "name": "Server-001",                   "status": "PASS" Event 2                   "name": "Server-002",                   "status": "PASS" Event 3                  "name": "Server-001",                  "status": "PASS" Event 4                 "name": "Server-002",                 "status": "PASS"
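A sketch, assuming the events land in an index called app_status and the JSON fields extract automatically as name and status (adjust both names to your environment):

```
index=app_status
| eval status_value=if(status="PASS", 1, 0)
| timechart span=15m latest(status_value) by name
```

latest() keeps the most recent reported value per 15-minute bucket, giving one line per server that sits at 1 while it passes and drops to 0 on a FAIL, which reads naturally as an up/down availability chart.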
After onboarding was done, the logs are reporting to Splunk, but most of the events show up as binary, as below. The file's CHARSET is UTF-16LE; I have tried AUTO also, but still:   x00E\x00B\x00U\x00G\x00:\x00g\x00e\x00t\x00D\x00S\x00M\x00U\x00s\x00e\x00r\x00s\x00:\x00G\x00e\x00t\x00t\x00i\x00n\x00g\x00 \x00i\x00n\x00f\x00o\x00r\x00m\x00a\x00t\x00i\x00o\x00n\x00 \x00f\x00o\x00r\x00 \x00R\x00U\x007\x006\x007\x001\x003\x001\x00 \x00 \x00 \x00 \x00D\x00E\x00B\x00U\x00G\x00:\x00g\x00e\x00t\x00D\x00S\x00M\x00U\x00s\x00e\x00r\x00s\x00:\x00S\x00u\x00c\x00c\x00e\x00s\x00s\x00   Props is written as below: SHOULD_LINEMERGE=false LINE_BREAKER=([\r\n]+) NO_BINARY_CHECK=true CHARSET=UTF-16LE disabled=false DATETIME_CONFIG=CURRENT
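One thing worth checking: CHARSET, like the other parsing settings shown, is applied at the parsing tier, so if this props stanza only exists on a universal forwarder it will have no effect. A sketch of where it should live, with your_sourcetype standing in for the real sourcetype name:

```
# props.conf on the indexers or a heavy forwarder (the parsing tier), not the UF
[your_sourcetype]
CHARSET = UTF-16LE
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
DATETIME_CONFIG = CURRENT
```

After deploying and restarting, verify against newly indexed data; events that were already indexed with the wrong charset will stay garbled.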
Hi all. We have been using version 2.0.0 of SPL Rehab on our Splunk Cloud search head for the past 12 months. We have just been upgraded to the Victoria experience and it no longer works. The error we are getting is a 403: you don't have permission to invoke DEBUG. Any ideas, or should I contact Splunk support?
Hello Splunk community, we are facing memory issues with our Splunk deployment. The problem is connected with the installation of a new add-on app. The environment is a Windows server. I understand that each new app can run as a separate Python process on the server. Is there a way to associate these apps with their specific Python processes? I need to see which one is using the most memory so I can possibly disable it. Thank you, Fuzzylogic
Hi, I want to create a dashboard where a user has a drop-down input to select a named time frame ($value$). The start and end dates of each time frame are defined in a lookup table.  Each of my events has a milestone date. I want to filter to those events where the milestone date is between the start and end date from the lookup table. I tried something like this: index=my_index | where milestone_date_epoch > [inputlookup mapping_lookup WHERE time_frame = $value$ | eval startdate = strptime(Start_date, "%Y-%m-%d") | return startdate] | where milestone_date_epoch < [inputlookup mapping_lookup WHERE time_frame = $value$ | eval enddate = strptime(End_date, "%Y-%m-%d") | return enddate] But I get an error message. Can you help me get this fixed?
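One likely fix, as a sketch: `return startdate` emits `startdate=<value>`, which is not a valid operand in a where comparison, while `return $startdate` emits just the bare value. It may also help to quote $value$ in case the time frame names contain spaces:

```
index=my_index
| where milestone_date_epoch > [| inputlookup mapping_lookup where time_frame="$value$"
    | eval startdate=strptime(Start_date, "%Y-%m-%d")
    | return $startdate ]
| where milestone_date_epoch < [| inputlookup mapping_lookup where time_frame="$value$"
    | eval enddate=strptime(End_date, "%Y-%m-%d")
    | return $enddate ]
```

The leading pipe before inputlookup inside the subsearch makes it explicit that the subsearch starts with a generating command.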
Hello. I have 3 SHs. When I switch the captain to another SH, data disappears on it. In its normal state the SH has 20 million events, but when it becomes captain it has a maximum of 400-500 events. There is no such problem with the other two SHs. What is the problem? Please help me!
Dear community, I am struggling with how to allow different formats in a search input while still finding the corresponding events. In my events I have MAC addresses in this format: 84-57-33-0D-B4-A8. I have built a dynamic dashboard where the MAC addresses are found if the user types in exactly this format. However, the user might search for a MAC address like 8457330DB4A8 or 84:57:33:0D:B4:A8, so in order to find results successfully I have to recalculate the inputs so that they are changed to the expected format. A test query like this recalculates the first format: |makeresults | eval m = "aab2c34be26e" | eval MAC2 = substr(m,1,2)."-".substr(m,3,2)."-".substr(m,5,2)."-".substr(m,7,2)."-".substr(m,9,2)."-".substr(m,11,2) | fields MAC2   And a test query like this recalculates the second format: |makeresults | eval m = "aa:c3:4b:e2:6e" | eval MAC2 = replace (m,":","-") | fields MAC2   But I am failing to combine these into a joint query dependent on the input. If my $mac$ address can be in all three formats, then I have to choose the recalculation depending on the input. My idea would be to write a condition with a regex match of $mac$: if it matches ([0-9A-Fa-f]{2}[-]){5}, then no recalculation; if it matches ([0-9A-Fa-f]{2}[:]){5}, then replace as shown above; if it matches ([0-9A-Fa-f]{2}){5}, then substring as shown above. I tried several variations of case and if, but never got it to work... any help is highly appreciated! Thanks
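Instead of branching per format, one sketch is to normalize whatever the user typed: strip every non-hex character first, then rebuild the dashed form. A single expression then handles all three input variants:

```
| makeresults
| eval m=replace("$mac$", "[^0-9A-Fa-f]", "")
| eval MAC2=upper(substr(m,1,2)."-".substr(m,3,2)."-".substr(m,5,2)."-".substr(m,7,2)."-".substr(m,9,2)."-".substr(m,11,2))
| fields MAC2
```

The upper() is an assumption based on the sample 84-57-33-0D-B4-A8; drop it (or use lower()) if your events store MAC addresses in lower case.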
Hi, I have a key-value pair called duration in my application log that shows the duration of each job done. Each day when I look for the maximum duration I get lots of false positives, because it is natural for the duration to become high at some point. It is only abnormal when the duration stays high. E.g.  normal condition: 00:01:00.000 WARNING duration[0.01] 00:01:00.000 WARNING duration[100.01] 00:01:00.000 WARNING duration[0.01]   abnormal condition: 00:01:00.000 WARNING duration[0.01] 00:01:00.000 WARNING duration[100.01] 00:01:00.000 WARNING duration[50.01] 00:01:00.000 WARNING duration[90.01] 00:01:00.000 WARNING duration[100.01] 00:01:00.000 WARNING duration[0.01]   1. How can I detect the abnormal condition with Splunk? (Ideally with minimum false positives on huge data.) 2. Which visualization or chart is most suitable to show this abnormal condition daily? This is a huge log file and it is difficult to show all the data for each day on a single chart. Any ideas?  Thanks,
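For detecting "sustained high" rather than "one high value", a moving average smooths out isolated spikes: only a run of high durations pushes the average over a threshold. A sketch, where the index name, the rex pattern, the window of 10 events, and the threshold of 50 are all assumptions to tune:

```
index=app_logs "duration["
| rex "duration\[(?<duration>[\d.]+)\]"
| streamstats window=10 avg(duration) AS avg_duration
| eval abnormal=if(avg_duration > 50, 1, 0)
| timechart span=1h max(avg_duration) AS rolling_avg, max(abnormal) AS abnormal_flag
```

For the daily view, charting the hourly rolling average (rather than every raw event) keeps the volume manageable on a single chart, and the abnormal_flag series makes the sustained-high periods stand out.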
Dear Splunk community, I need help extracting a string (CTJT) plus any 6 characters after it. CTJT is the start of an error code and is always the same; the 6 characters after it differ but are always 6 characters, meaning the full error code is 10 characters, like this: CTJTAAB013. The error codes in the events are always at random positions, never fixed! I need to extract the error code and evaluate it into a field:   CTJT* | table errorcode | eval errorcode = "I want to fetch the error code here"     I have tried substr, but I can't find a method for fetching the first index of CTJT. Can anyone help me create a regex that does the above, or suggest some other way?   Thanks in advance
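A sketch using rex, which scans the raw event for the pattern wherever it occurs, so the position doesn't matter:

```
CTJT*
| rex field=_raw "(?<errorcode>CTJT[A-Z0-9]{6})"
| table errorcode
```

The [A-Z0-9]{6} assumes the 6 trailing characters are upper-case letters and digits, as in CTJTAAB013; use \w{6} instead if they can vary more than that.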
Hello, I am trying to connect the NetBackup app to Splunk using the REST API Modular Input app (https://splunkbase.splunk.com/app/1546/). Our use case is slightly complicated. The request can be fulfilled in 2 steps: 1. Send a POST request, which returns a token value. 2. Send a GET request using that token to get the required data from the NetBackup server. Has anyone had a similar situation, or does anyone have suggestions for implementing this scenario? Update: I am able to implement the 2 requests separately. Splunk is on a Windows platform. If it were on Linux, I would have written a script that executes the first request using curl and copies the token value into the input config of the 2nd request. I am not sure how to handle this on Windows. Thanks