All Topics

Looking for the most efficient way to find two-way traffic in flow data for a particular set of IP/port/protocol combinations:

index=flow protocol=6 AND (src_port IN (94, 407, 1417, 1418, 1419, 1420) OR dest_port IN (94, 407, 1417, 1418, 1419, 1420)) AND NOT src_port IN (21, 22) AND NOT dest_port IN (21, 22)

This gets us the initial data set, but we're having trouble formulating an efficient way to find the matching reversed events from the same query - that is, events where src_ip equals the first event's dest_ip and dest_ip equals the first event's src_ip. For example:

src_ip=10.1.1.10, src_port=94, dest_ip=10.1.1.1, dest_port=407

would match:

src_ip=10.1.1.1, src_port=94, dest_ip=10.1.1.10, dest_port=407
src_ip=10.1.1.1, src_port=1418, dest_ip=10.1.1.10, dest_port=407
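A minimal sketch of one way to pair reversed flows, assuming the field names above: build a direction-agnostic key from the two IPs, tag each event's direction, and keep only keys seen in both directions.

index=flow protocol=6
    (src_port IN (94, 407, 1417, 1418, 1419, 1420) OR dest_port IN (94, 407, 1417, 1418, 1419, 1420))
    NOT src_port IN (21, 22) NOT dest_port IN (21, 22)
| eval ip_pair=if(src_ip < dest_ip, src_ip."<->".dest_ip, dest_ip."<->".src_ip)
| eval direction=if(src_ip < dest_ip, "forward", "reverse")
| stats dc(direction) AS directions values(src_port) AS src_ports values(dest_port) AS dest_ports count BY ip_pair
| where directions=2

The string comparison on the IPs is only there to make the key order-independent; it sorts addresses lexicographically, which is fine for grouping purposes.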
Can anyone advise whether we should upgrade our clustered Splunk environment from 8.0.3 to 8.2.2?
I want to add the its:translate attribute to a few elements in the source XML of a dashboard. Is that possible? When I tried, Splunk threw a warning stating that the node attribute is unknown.
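For illustration, a sketch of the kind of markup being attempted (the element and attribute value here are assumptions, not taken from the poster's dashboard):

<label its:translate="no">Keep this text untranslated</label>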
Hi, I have set up a Splunk Enterprise instance (version 8.2.1) and a Universal Forwarder instance on Docker on the same machine, and I'm trying to forward data into the Splunk indexer. Here's what I have so far:

On the Splunk Enterprise instance (1.1.1.1):
- Created an app named "abc"
- Created an index named "abc_idx" in app "abc"
- Created a sourcetype named "abc_data" in app "abc"

On the Splunk forwarder:
- Added the indexer: ./bin/splunk add forward-server 1.1.1.1:9997
- My very next command was ./bin/splunk add monitor /splunk_forward/log

Then I realized I wanted the monitored logs to go to the index "abc_idx" with the sourcetype "abc_data", so I removed the monitor and restarted the container. At that point I saw events appearing in the "main" index, so I believe the files did get forwarded. I then ran this on the forwarder:

./bin/splunk add monitor /splunk_forward/log -index abc_idx -sourcetype abc_data

But I did not see any events in the index "abc_idx". However, if I run the "oneshot" command, the events do show up in "abc_idx". Is Splunk refusing to (re)index the same files, even though they are going to a different index?

Also, I thought the commands I typed would end up in /opt/splunkforwarder/etc/system/local/inputs.conf, but I only see [splunktcp://9997] in it, not the folder I'm monitoring. Am I looking at the wrong file? I do see the following in /opt/splunkforwarder/etc/system/local/outputs.conf:

[tcpout:default-autolb-group]
server = 1.1.1.1:9997

[tcpout-server://1.1.1.1:9997]

So why did my indexer configuration become part of that config file? Ideally I would like to configure the forwarder using the config files, but I'm not sure exactly which ones to modify - local/inputs.conf and anything else? Thank you.
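For reference, a minimal sketch of the file-based equivalent of those CLI commands (the path, index, and sourcetype are taken from the question; the stanza shapes are standard inputs.conf/outputs.conf syntax, not copied from the poster's files):

# /opt/splunkforwarder/etc/system/local/inputs.conf
[monitor:///splunk_forward/log]
index = abc_idx
sourcetype = abc_data
disabled = false

# /opt/splunkforwarder/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 1.1.1.1:9997

A restart of the forwarder is needed after editing these files by hand.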
Hi folks, I was wondering what the best way is to collect audit logs from VMware ESXi. Regards, Alessandro
Hi, I am trying to filter rows out of a table based on their content. For example:

LP.  NAME  SURNAME  STREET       CITY
1.   Bob   Smith    mainst       florida
2.   Greg  Obama    secondaryst  miami

I want to remove the second row from the table based on the fact that it has "Greg" and "Miami" in the name and city fields. So far I tried:

| search Name NOT "Greg" AND City NOT "Miami"

but it didn't work for me. Any ideas?
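A sketch of one way to express that filter, assuming the field names are NAME and CITY as in the table; both conditions must hold on the same row, so the negation has to wrap the pair:

| search NOT (NAME="Greg" AND CITY="Miami")

The search command matches field values case-insensitively, so "Miami" also matches the lowercase "miami" in the data; a case-sensitive equivalent would be | where NOT (NAME=="Greg" AND lower(CITY)=="miami").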
Yes, I know that filtering has been discussed many times here, but my case is slightly different. I have a UF pulling events using WMI. It then pushes the events to upstream HFs. On the HFs I tried to do filtering similarly to https://docs.splunk.com/Documentation/Splunk/8.2.2/Forwarding/Routeandfilterdatad#Filter_WMI_and_Event_Log_events

But in my case there are two differences:

1) I didn't want to filter out particular events. I wanted to filter out everything and keep just two kinds of events. Seems easy - just do a default transform with REGEX=. to set the queue to nullQueue, then match the ones you want to index and set the queue to indexQueue. Well, it doesn't work. Maybe because:

2) I didn't want to apply this to the whole sourcetype. And here's where I suspect something might have gone wrong, because if it were just that my transforms are bad, the default one sending to the nullQueue should still work. But it seems they don't work at all. My definitions:

props.conf:

[host::TEST...]
TRANSFORMS-routing = TEST_default_drop,TEST_index

(The hosts I'm getting the data from are called TEST01.domain.com, TEST02.domain.com and so on; I already tried host::TEST*.domain.com.)

transforms.conf:

[TEST_default_drop]
REGEX=.
DEST_KEY=queue
FORMAT=nullQueue

[TEST_index]
REGEX=(?m)^EventCode=(103|104)
DEST_KEY=queue
FORMAT=indexQueue

Everything seems reasonable, but it doesn't work - I'm getting all the data in my index, with no filtering at all. I wouldn't want to configure filtering for the whole sourcetype, because I might use WMI in the future for other things, and this particular filtering is only for this one kind of source. Does the UF set the host field to something other than what I'm expecting? Can I debug this somehow reasonably? (The UF is more or less a production one, so it would not be wise to turn fully blown debugging of everything on.)
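For comparison, a sketch of the same filter scoped with a plain * wildcard, which is the pattern form props.conf host stanzas document; whether the ... form matches here is exactly what's in question, so treat this as a variant to test rather than a confirmed fix:

# props.conf
[host::TEST*]
TRANSFORMS-routing = TEST_default_drop, TEST_index

# transforms.conf (unchanged from the question)
[TEST_default_drop]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[TEST_index]
REGEX = (?m)^EventCode=(103|104)
DEST_KEY = queue
FORMAT = indexQueue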
We have two devices, AUSTDPVPN1 and AUSTDPVPN2, with current logged-in user counts of 0 and 2867. Whenever the AUSTDPVPN1 user login count is 0, I want it replaced with the login user count of AUSTDPVPN2. Below are the query and its output.

Query:

index="pulse_secure_index" STS20641
| rex field=host "(?<device>\w+).*"
| rex field=msg "STS20641: Number of concurrent users logged in to the device: (?<currUser>\d+)"
| eval device=upper(device)
| search device=AUSTDPVPN*
| stats max(currUser) as currentUser BY device

Output:

device      currentUser
AUSTDPVPN1  0
AUSTDPVPN2  2867

Please suggest how we can get the output below instead:

device      currentUser
AUSTDPVPN1  2867
AUSTDPVPN2  2867
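A sketch of one way to do the substitution, appended after the stats line of the query above (assuming only these two devices are in play): compute the maximum count across devices with eventstats and use it wherever a device reports 0.

| eventstats max(currentUser) as peakUser
| eval currentUser=if(currentUser=0, peakUser, currentUser)
| fields - peakUser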
In detail: I have a dashboard that reads log files to monitor a list of hosts' statuses, UP or DOWN. But when some hosts are offline, there are no log files from them. I have a CSV file that lists all the host names, and I want to compare the two lists to show the offline hosts - that is, the host names that appear in the CSV file but not on the dashboard I already have.
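A sketch of the usual pattern for that comparison (the lookup name all_hosts.csv, its host column, and the index name are placeholders, not taken from the question): start from the full host list and subtract the hosts that have reported in.

| inputlookup all_hosts.csv
| fields host
| search NOT [ search index=your_index earliest=-24h | stats count by host | fields host ]
| eval status="DOWN"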
I have a regex in my dashboard that validates whether the input value in a field is numeric or not. The code/regex works perfectly well. I need some help modifying it so that it also checks whether the input contains a special character such as a double quote or a space at the start, end, or middle of the string (basically padding, spacing, or quotes). That would really help. The regex I have, which only checks numeric, is in the if(match(value ...)) section below:

<input type="text" token="selText">
  <label>Enter Only Digits</label>
  <change>
    <eval token="validationResult">if(match(value, &quot;^[0-9]+$&quot;), &quot;Numeric&quot;, &quot;Not Numeric&quot;)</eval>
  </change>
</input>
</fieldset>
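A sketch of one way to report those cases separately (an assumption about the intent: a pure-digits value passes, a value containing a quote or a space anywhere is flagged as such, and everything else stays "Not Numeric"); quotes are XML-escaped the same way as in the original:

<eval token="validationResult">case(match(value, &quot;^[0-9]+$&quot;), &quot;Numeric&quot;, match(value, &quot;[\&quot; ]&quot;), &quot;Contains quote or space&quot;, true(), &quot;Not Numeric&quot;)</eval>

Note that ^[0-9]+$ on its own already rejects any value containing quotes or spaces; the case() form only adds a distinct message for that situation.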
Hello,

<row>
  <panel>
    <title>Manage alerts</title>
    <input type="checkbox" token="detail_input_token">
      <label></label>
      <choice value="hide">Hide detail</choice>
      <initialValue>hide</initialValue>
    </input>
    <html>debug detail_input_token:$detail_input_token$</html>
    <html rejects="$detail_input_token$">
      <body>
        <p>
          <A HREF="/manager/myapp/data/lookup-table-files?ns=myapp&amp;pwnr=-&amp;search=mySearch&amp;count=100" target="_blank">
            Manage <B>lookup mySearch</B> (csv files).
          </A>
        </p>
      </body>
    </html>
  </panel>
</row>

This works in Splunk 6.4.2. We migrated to Splunk 8.2 and it no longer works. I have tried many changes without success. Does anybody have an idea? Thanks in advance.
We have an issue wherein every time we attempt to create a search macro, create a lookup definition, create a new lookup, update a lookup file name, or clone any of those knowledge objects, Splunk responds with:

"Your entry was not saved. The following error was reported: server abort. splunk"

Can you let us know the cause of this issue in our Splunk instance? We are currently unable to create any new search macros in this environment. We were advised that there is a workaround of updating the .conf files in the backend; however, our clients don't have access to the backend, and every time they want to update something the request comes directly to us. Does anyone know how to resolve this issue? We need the UI to function properly, as this is causing delays in delivery.
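For reference, a sketch of the backend workaround being described - creating a macro directly in macros.conf (the app, macro name, and definition are placeholders):

# $SPLUNK_HOME/etc/apps/your_app/local/macros.conf
[my_macro]
definition = index=main sourcetype=my_sourcetype

The change is picked up after a configuration reload or restart.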
Hi team, when I search the switch logs for the last 7 days, I get the errors "Search auto-canceled" and "DAG execution error". I am able to get the last 15 or 60 minutes of logs. Could you please suggest how I can resolve this issue? I'm using Splunk Enterprise version 8.1.3. Thanks, Sridevi M
Before the update (we were on v6.4.1), we would edit the incident in Incident Review - add a comment or change some status - click 'save changes', and the pop-up window would disappear and the incident list would refresh. All good and dandy. But after the update (v6.6.0), when we click 'save changes' we have to wait about 4 seconds for the close button to become clickable, and then we have to click it to dismiss the window. It is a bit of a productivity killer. I would like to find a way to remove the need to click the close button. Can anybody point me in the right direction on where I can find some documentation on this?
Hello, I use the search below to calculate the average of the field "diff":

index=toto
| eval diff=strptime('Fin',"%d/%m/%Y %H:%M:%S")-strptime('Debut',"%d/%m/%Y %H:%M:%S")
| eval diff=round(diff, 2)
| stats avg(diff) as diff

I am a little surprised, because I get the same results if I add a | search to the query to restrict the type of machine:

index=toto
| eval diff=strptime('Fin',"%d/%m/%Y %H:%M:%S")-strptime('Debut',"%d/%m/%Y %H:%M:%S")
| eval diff=round(diff, 2)
| search PPOSTE = *
| stats avg(diff) as diff

or:

index=toto
| eval diff=strptime('Fin',"%d/%m/%Y %H:%M:%S")-strptime('Debut',"%d/%m/%Y %H:%M:%S")
| eval diff=round(diff, 2)
| search VPOSTE = *
| stats avg(diff) as diff

Is this the correct way to do that, please?
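A note on why the results can be identical: field = * only keeps events where the field exists at all, so if every event carries PPOSTE (or VPOSTE), nothing is filtered out. If the intent is an average per machine type, a sketch (assuming PPOSTE holds the type):

index=toto
| eval diff=strptime('Fin',"%d/%m/%Y %H:%M:%S")-strptime('Debut',"%d/%m/%Y %H:%M:%S")
| stats avg(diff) as avg_diff BY PPOSTE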
Hi experts, I'm having some difficulty extracting the correct information from a file that was added to Splunk. I tried to read and understand as much as I could, but I'm still struggling to extract the information correctly. Here is a snippet of my file:

call_type: "I" alert_id: "8626530 " data_center: "XYZ2 " memname: "QWERTPX " order_id: "1OOUZ" severity: "R" status: "Not_Noticed " send_time: "20210928070008" last_user: " " last_time: " " message: "ASDFGH STARTUP OF REGION QWERTPX" run_as: "USER01 " sub_application: "QWERT " application: "HOUSEKEEPING " job_name: "JOBASDF " host_id: " " alert_type: "R" closed_from_em: " " ticket_number: " " run_counter: " " notes: " "

call_type: "I" alert_id: "8626531 " data_center: "XYZ2 " memname: "QWERTZD " order_id: "1OOVH" severity: "R" status: "Not_Noticed " send_time: "20210928070009" last_user: " " last_time: " " message: "ASDFGH STARTUP OF REGION QWERTZD" run_as: "USER01 " sub_application: "QWERT " application: "HOUSEKEEPING " job_name: "JOBASDF " host_id: " " alert_type: "R" closed_from_em: " " ticket_number: " " run_counter: " " notes: " "

call_type: "I" alert_id: "8626533 " data_center: "XYZ2 " memname: "QWERTZU " order_id: "1OOVV" severity: "R" status: "Not_Noticed " send_time: "20210928070009" last_user: " " last_time: " " message: "ASDFGH STARTUP OF REGION QWERTZU" run_as: "USER01 " sub_application: "QWERT " application: "HOUSEKEEPING " job_name: "JOBASDF " host_id: " " alert_type: "R" closed_from_em: " " ticket_number: " " run_counter: " " notes: " "

call_type: "I" alert_id: "8626532 " data_center: "XYZ2 " memname: "QWERTZE " order_id: "1OOVJ" severity: "R" status: "Not_Noticed " send_time: "20210928070009" last_user: " " last_time: " " message: "ASDFGH STARTUP OF REGION QWERTZE" run_as: "USER01 " sub_application: "QWERT " application: "HOUSEKEEPING " job_name: "JOBASDF " host_id: " " alert_type: "R" closed_from_em: " " ticket_number: " " run_counter: " " notes: " "

What I need is to have these 21 fields extracted properly. I tried delimiters, but that doesn't work with the colon. I believe I will have to write a regular expression (this is where I got stuck, as I have no clue how). Basically, I need the fields below extracted from the file so I can build dashboards, reports, alerts, etc.:

Field_1 - call_type: "I"
Field_2 - alert_id: "0000007 "
Field_3 - data_center: "XYZ2 "
Field_4 - memname: "ABCABC01 "
Field_5 - order_id: "1OO59"
Field_6 - severity: "R"
Field_7 - status: "Not_Noticed "
Field_8 - send_time: "20210923210008"
Field_9 - last_user: " "
Field_10 - last_time: " "
Field_11 - message: "MSG SHUTDOWN OF REGION ABCDEF"
Field_12 - run_as: "USER01 "
Field_13 - sub_application: "QWERT "
Field_14 - application: "HOUSEKEEPING "
Field_15 - job_name: "JOBASDF "
Field_16 - host_id: " "
Field_17 - alert_type: "R"
Field_18 - closed_from_em: " "
Field_19 - ticket_number: " "
Field_20 - run_counter: " "
Field_21 - notes: " "

Really appreciate any help achieving this. Thank you!
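Since every field follows the same name: "value" shape, a single generic key-value extraction can cover all 21 at once. A sketch as a search-time props/transforms pair (the sourcetype name is a placeholder):

# props.conf
[your_sourcetype]
REPORT-kv_pairs = extract_kv_pairs

# transforms.conf
[extract_kv_pairs]
REGEX = (\w+):\s+"([^"]*)"
FORMAT = $1::$2
MV_ADD = true

The FORMAT = $1::$2 form names each extracted field from the first capture group and takes its value from the second, so new keys in the data are picked up without touching the regex.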
Hi team, when I run the eval command below, I often get an error message. I wrote this command to find the number of Samsung devices used in a month.

eval Next=if(match(cs_user_agent, "SM-G980F"),"Samsung Galaxy S20-5G",
    if(match(cs_user_agent, "SM-G975W"),"Samsung Galaxy S10+",
    if(match(cs_user_agent, "SM-G935F"),"Samsung Galaxy S7 edge ",
    if(match(cs_user_agent, "SM-T350"),"Samsung Galaxy Tab",
    if(match(cs_user_agent, "SM-G950"),"Samsung Galaxy S8",
    if(match(cs_user_agent, "SM-G998"),"Samsung Galaxy S21 Ultra-5G",
    if(match(cs_user_agent, "SM-J120Z"),"Samsung Galaxy J1",
    if(match(cs_user_agent, "SM-A217F"),"Samsung Galaxy A21s",
    if(match(cs_user_agent, "SM-G988"),"Samsung Galaxy S20 Ultra 5G",
    if(match(cs_user_agent, "SM-A105G"),"Samsung Galaxy A10",
    if(match(cs_user_agent, "SM-A525"),"Samsung Galaxy A52",
    if(match(cs_user_agent, "SM-G991"),"Samsung Galaxy S21 5G",
    if(match(cs_user_agent, "SM-A225F"),"Samsung Galaxy A22",
    if(match(cs_user_agent, "SM-A725"),"Samsung Galaxy A72",
    if(match(cs_user_agent, "SM-G781"),"Samsung Galaxy S20 FE 5G",
    if(match(cs_user_agent, "SM-F900U"),"Samsung Galaxy Fold",
    if(match(cs_user_agent, "SM-A326"),"Samsung Galaxy A32 5G",
    if(match(cs_user_agent, "SM-F700"),"Samsung Galaxy Z Flip3 5G",
    if(match(cs_user_agent, "SM-A226"),"Samsung Galaxy A22 5G",
    if(match(cs_user_agent, "SM-N986"),"Samsung Galaxy Note20 Ultra 5G",
    if(match(cs_user_agent, "SM-A526"),"Samsung Galaxy A52 5G",
    if(match(cs_user_agent, "SM-A515"),"Samsung Galaxy A51",
    if(match(cs_user_agent, "SM-A217"),"Samsung Galaxy A21s",
    if(match(cs_user_agent, "SM-M326"),"Samsung Galaxy M32 5G",
    if(match(cs_user_agent, "SM-T7"),"Samsung Galaxy Tab S7 FE",
    if(match(cs_user_agent, "SM-T50"),"Samsung Galaxy Tab A7 10.4",
    if(match(cs_user_agent, "SM-T50"),"Samsung Galaxy Tab A7 10.4",
    if(match(cs_user_agent, "SM-T50"),"Samsung Galaxy J7 Prime",
    if(match(cs_user_agent, "SM-M515"),"Samsung Galaxy M51",
    if(match(cs_user_agent, "SM-A505"),"Samsung Galaxy A50",
    if(match(cs_user_agent, "SM-T22"),"Samsung Galaxy Tab A7 Lite",
    if(match(cs_user_agent, "SM-G930"),"Samsung Galaxy S7",
    if(match(cs_user_agent, "SM-N960"),"Samsung Galaxy Note9",
    if(match(cs_user_agent, "SM-J700"),"Samsung Galaxy J7",
    if(match(cs_user_agent, "SM-G970"),"Samsung Galaxy S10e",
    if(match(cs_user_agent, "SM-M127"),"Samsung Galaxy M12",
    if(match(cs_user_agent, "SM-N970"),"Samsung Galaxy Note10",
    if(match(cs_user_agent, "SM-A115"),"Samsung Galaxy A11",
    if(match(cs_user_agent, "SM-T87"),"Samsung Galaxy Tab S7",
    if(match(cs_user_agent, "SM-A315"),"Samsung Galaxy A31",
    if(match(cs_user_agent, "SM-M315F"),"Samsung Galaxy M31",
    if(match(cs_user_agent, "SM-A205"),"Samsung Galaxy A20",
    if(match(cs_user_agent, "SM-J500"),"Samsung Galaxy J5",
    if(match(cs_user_agent, "SM-T97"),"Samsung Galaxy Tab S7+",
    "other"))))))))))))))))))))))))))))))))))))))))))))

Note: could someone please help me find the best way to get the expected outcome from the user agent, or help me avoid the error?
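A sketch of the usual way around deeply nested if() chains, which is typically what triggers the error here: extract the model code once and map it through a lookup. The lookup file name and field names are placeholders; the three sample rows are taken from the mappings above, and the full table would carry all of them.

samsung_models.csv:
model_code,model_name
SM-G980F,Samsung Galaxy S20-5G
SM-G975W,Samsung Galaxy S10+
SM-G935F,Samsung Galaxy S7 edge

Search:
... | rex field=cs_user_agent "(?<model_code>SM-[A-Z0-9]+)"
| lookup samsung_models model_code OUTPUT model_name
| eval Next=coalesce(model_name, "other")

The lookup shown is exact-match; since several of the original patterns are prefixes (e.g. "SM-G950" covering SM-G950F and friends), the lookup definition may need match_type = WILDCARD(model_code) with trailing wildcards in the table. An eval case() with one condition per branch is another option, but the lookup keeps the mapping out of the search entirely.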
How can I add a description to every input (metric, syslog, SNMP, etc.)? I tried adding _meta = description::test_description to inputs.conf on a UF; in that case the description is added to every log, but it doesn't work in the HF case. So I thought: what if it could be applied on the heavy forwarder? I retried adding the same _meta setting in the HF's inputs.conf, but it does not work. What can I do?
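For reference, a sketch of the UF-side setting that did work, in context (the monitor path is a placeholder; _meta values are space-separated key::value pairs applied to every event from the stanza):

# inputs.conf on the UF
[monitor:///var/log/example.log]
_meta = description::test_description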
I need to monitor users' or a group's activities, or the amount of bandwidth they are using, on an index assigned to them. Thanks a ton.
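A sketch of one common starting point for the volume side of this (assuming access to the internal license usage logs; the index name is a placeholder): daily ingest volume for an index from license_usage.log.

index=_internal source=*license_usage.log type=Usage idx=your_index
| timechart span=1d sum(b) as bytes

Per-user search activity, as opposed to ingest volume, would come from index=_audit instead.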
I have created a calculated field which parses _time from a date stamp in the data. However, it does not set _time correctly. If I point the calculated field at a different field name, it works fine. So I was wondering whether there is any documentation anywhere that talks about being able to override _time with a calculated field. NB: I can't set the event _time at ingestion to the correct date from the data, because I'm ingesting a complete data set every day, where historical results may change, so I'm just using a 24h search and then changing _time.
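For illustration, a sketch of the kind of eval being described, run ad hoc in a search rather than as a calculated field (the date field name and format string are placeholders); whether a calculated field may target _time at all is exactly the open question:

... | eval _time=strptime(report_date, "%Y-%m-%d %H:%M:%S")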