All Posts


Hi @muradgh, I’m having the same issue with my FortiGate logs over TCP, but we’re on Splunk Cloud, so modifying props.conf isn’t a straightforward task for us, and I’m planning to use UDP instead. Are you able to share your syslog-ng.conf for FortiGate logging, if that’s OK with you? I also need input on setting up the correct filters to make the raw output readable, one line per event. Did you also set the log format on the FortiGate firewall to RFC 5424 when sending to syslog-ng? Thank you in advance!
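Not the original poster's configuration, but a minimal syslog-ng sketch for this use case might look like the following (the port, output path, and flags are assumptions to adjust for your environment):

```conf
# Sketch: receive FortiGate syslog over UDP and write one event per line.
# Port 514, the file path, and no-parse are placeholder choices.
source s_fortigate {
    udp(ip(0.0.0.0) port(514) flags(no-parse));
};

destination d_fortigate {
    # One raw message per line, split out per sending host.
    file("/var/log/fortigate/$HOST.log" template("${MESSAGE}\n"));
};

log {
    source(s_fortigate);
    destination(d_fortigate);
};
```

With flags(no-parse) the incoming payload is kept intact in ${MESSAGE}, which tends to keep each FortiGate event on its own line for Splunk to monitor.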
Thanks - that is a lot more detailed than my solution, and I like the intersection - it will be useful for helping people know what was in there. We often have hundreds of keys returned, and seeing which ones were returned is really useful.    Thanks,  Steven
And I have managed to solve it.. I should have fetched a coffee before posting, I guess. I just needed to add a | search with IN after the split:

index="PreProduction" source="Transactions"
| eval KeysSplit=split(Keys, ", ")
| search KeysSplit IN($ObjectRefs$)

I can then | table my results. Hopefully this will be useful to someone else.
Doing some SPL like this may lead you in the right direction, if I am understanding your question correctly. Note: the top portion of this code just generates sample data; the meat of the solution is where the comments start ``` <comment> ```

| makeresults
| eval input_value="83, 9123, 272529, 1234"
| append [ | makeresults | eval input_value="851056, 714062, 6234, 91258,272476" ]
| append [ | makeresults | eval input_value="28, 10001, 18, 99923,1027385" ]
``` Generate a field with the comma-delimited list of Keys ```
| eval Keys="272476, 272529, 274669, 714062, 714273, 845143, 851056, 853957, 855183"
``` Split both Keys and the simulated user input fields into multivalue fields ```
| eval mv_Keys=trim(split(Keys, ","), " "), mv_input_value=trim(split(input_value, ","), " ")
``` Loop through each entry in the multivalue field 'mv_input_value' and check if it exists in the list of Keys ```
| eval intersecting_keys=case(
    isnull(mv_input_value), null(),
    mvcount(mv_input_value)==1, if('mv_input_value'=='mv_Keys', 'mv_input_value', null()),
    mvcount(mv_input_value)>1, mvmap(mv_input_value, if('mv_input_value'=='mv_Keys', 'mv_input_value', null()))
    )

Results are shown in the screenshot. You can split the comma-delimited lists into multivalue fields and then loop through one of them, individually checking whether each number exists in the other multivalue field. In this example I did that and created a new field 'intersecting_keys' that returns the numbers that exist in both fields.
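Outside of SPL, the same any-match intersection is easy to sanity-check in plain Python (a sketch of the logic only, not part of the Splunk solution; the function name is illustrative):

```python
# Split two comma-delimited strings into values and return the ones
# that appear in both - the "intersecting keys" from the SPL above.
def intersecting_keys(keys: str, input_value: str) -> list:
    key_set = {k.strip() for k in keys.split(",")}
    seen = set()
    result = []
    for v in (x.strip() for x in input_value.split(",")):
        if v in key_set and v not in seen:
            seen.add(v)
            result.append(v)
    return result

keys = "272476, 272529, 274669, 714062, 714273, 845143, 851056, 853957, 855183"
print(intersecting_keys(keys, "83, 9123, 272529, 1234"))              # ['272529']
print(intersecting_keys(keys, "851056, 714062, 6234, 91258,272476"))  # ['851056', '714062', '272476']
```

Any input value that matches at least one key produces a non-empty result, which is exactly the "ANY of the keys, in any order" behaviour the question asks for.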
It's a good app, but not good enough. It's missing a few additional fields - for example, Parent_Process_Label (at least). Also, Parent_Process_PID is always "folder name".
I'm sending $phrase$ tokens in an email notification, but they don't make it through because Splunk assumes they are variables. Is there a way to send these without Splunk recognizing them as variables?  Thanks
Hi, we encountered the same issue after upgrading Splunk ES to 7.2.0. Could you kindly give more detail about what you mean by: "I removed the stanza from the default folder" (which file in the default folder?) and "I added a stanza with disabled = 1 in local folder" (again, in which file did you add the stanza?). Also, are you referring to this recommendation (Ref: hxxps://docs.splunk.com/Documentation/ES/7.2.0/RN/KnownIssues)?

Add the following at the end of the file:

Conf File Check for Bias Language
[confcheck_es_bias_language_cleanup://default]
debug = <boolean>
I have an index set up that holds a number of fields, one of which is a comma-separated list of reference numbers, and I need to be able to search within this field via a dashboard. This is fine for a single reference, since we can search within the field and prefix/suffix the dashboard parameter with wildcards, but for multiple values, which can be significant, I cannot see a way of searching. I have looked at split and IN, but neither seems to provide what I need, though that may be down to what I tried.

Example data:  Keys="272476, 272529, 274669, 714062, 714273, 845143, 851056, 853957, 855183"

I need to be able to enter any number of keys, in any order, and find any records that contain ANY of the keys - not all of them in a set order. So for the above, it should return a match if I search for (853957) or (855183, 714062) or (272476, 714062, 855183). Is anyone able to point me towards a logical solution? This will be a key aspect of our use of Splunk, enabling users to copy/paste a list of reference numbers and assess where they occur in our logs.
Not sure if this is exactly what you are looking for, but I think it is pretty close. I got this output by stringing together a couple of streamstats with window=<int> and reset_before=<criteria> parameters:

| sort 0 +Machine, +time
| streamstats count as row
| eval TimeStamp=strftime(time, "%m/%d/%Y %H:%M:%S")
| fields - _time
| fields + row, Machine, TimeStamp, time
| streamstats window=3 count as running_count, min(time) as min_time, max(time) as max_time by Machine
| eval seconds_diff='time'-'min_time', duration_diff=tostring(seconds_diff, "duration")
| streamstats window=3 reset_before="("seconds_diff>300")" count as running_count by Machine
| eval Occurrence=if('seconds_diff'<=300 AND 'running_count'==3, "TRUE", "FALSE")
| fields + row, Machine, TimeStamp, Occurrence

Here is the full SPL I used to generate the screenshot (results may vary because of the use of relative_time()):

| makeresults | eval Machine="machine 1", time=relative_time(now(), "-2h@s")
| append [ | makeresults | eval Machine="machine 1", time=relative_time(now(), "-2h+18s@s") ]
| append [ | makeresults | eval Machine="machine 1", time=relative_time(now(), "-2h+34s@s") ]
| append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d@d+20h@h+31m@m+48s@s") ]
| append [ | makeresults | eval Machine="machine 1", time=relative_time(now(), "-2h+52s@s") ]
| append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d+5h+5m+2s") ]
| append [ | makeresults | eval Machine="machine 1", time=relative_time(now(), "-2h+302s@s") ]
| append [ | makeresults | eval Machine="machine 1", time=relative_time(now(), "+2d-5h+18s@s") ]
| append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d+5h+18s@s") ]
| append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d+5h+2m+1s") ]
| append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d+5h+2m+34s") ]
| append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d+5h+4m-12s") ]
| append [ | makeresults | eval Machine="machine 2", time=relative_time(now(), "+4d@d+20h@h+43m@m+5s@s") ]
| sort 0 +Machine, +time
| streamstats count as row
| eval TimeStamp=strftime(time, "%m/%d/%Y %H:%M:%S")
| fields - _time
| fields + row, Machine, TimeStamp, time
| streamstats window=3 count as running_count, min(time) as min_time, max(time) as max_time by Machine
| eval seconds_diff='time'-'min_time', duration_diff=tostring(seconds_diff, "duration")
| streamstats window=3 reset_before="("seconds_diff>300")" count as running_count by Machine
| eval Occurrence=if('seconds_diff'<=300 AND 'running_count'==3, "TRUE", "FALSE")
| fields + row, Machine, TimeStamp, Occurrence
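For comparison, the sliding three-event window that the streamstats pipeline implements can be sketched in plain Python (timestamps as epoch seconds; the function and field names are illustrative, not from the original answer):

```python
# For each machine, flag an event as True when it and the two events
# before it (on the same machine) all fall within a 300-second window.
from collections import defaultdict

def flag_occurrences(events, window=300, count=3):
    """events: list of (machine, epoch_seconds) tuples, any order."""
    events = sorted(events, key=lambda e: (e[0], e[1]))
    history = defaultdict(list)   # per-machine timestamps seen so far
    flags = []
    for machine, t in events:
        h = history[machine]
        h.append(t)
        # True if the current event and the (count-1) preceding ones
        # span no more than `window` seconds.
        ok = len(h) >= count and (t - h[-count]) <= window
        flags.append((machine, t, ok))
    return flags

occ = flag_occurrences([("machine1", 0), ("machine1", 18), ("machine1", 34),
                        ("machine1", 52), ("machine1", 302)])
# The 5th event is flagged True: 302 - 34 = 268 seconds, within the window.
```

Because the window slides per event rather than using fixed 5-minute bins, the fifth event in this sample is correctly flagged True, matching the behaviour the question asks for.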
Hello all, I'm writing my first Modular Input app, and I'm wondering what's the best way to store a REST API key for my Python script? I've seen mention that the key can be stored within Splunk and retrieved by the script, but no solid explanation of how to do that. Can anyone suggest a secure method?   Thank you
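The usual pattern is Splunk's storage/passwords REST endpoint: store the key there once, then have the modular input read it back using the session key that splunkd passes on stdin when the input starts. A hedged sketch of the request construction (the app name, realm, and host below are placeholder assumptions, not values from this thread):

```python
# Sketch: build the storage/passwords requests a modular input would use.
# Placeholders: "my_app", "my_realm", "api_key" - substitute your own.
from urllib.parse import urlencode, quote

def store_secret_request(base, app, realm, name, secret):
    """Return (url, body) for the POST that creates the credential."""
    url = f"{base}/servicesNS/nobody/{app}/storage/passwords"
    body = urlencode({"name": name, "password": secret, "realm": realm})
    return url, body

def read_secret_request(base, app, realm, name):
    """Return the URL for the GET that retrieves the stored credential."""
    cred_id = quote(f"{realm}:{name}:", safe="")  # id is realm:name:
    return f"{base}/servicesNS/nobody/{app}/storage/passwords/{cred_id}"

url, body = store_secret_request(
    "https://localhost:8089", "my_app", "my_realm", "api_key", "XYZ")
```

An actual call adds the header `Authorization: Splunk <session_key>`; the Splunk Python SDK wraps the same endpoint via `splunklib.client.Service.storage_passwords`, which is usually the cleaner route inside an app.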
It is not clear why row 5 should be true, since you haven't shared the data (the number of errors in each event). Having said that, are you trying to implement a sliding 5-minute window, or are you using time bins? If you are using time bins, row 5 is in a different bin from rows 1-4.
Additionally, you can use one of several apps implementing such source tracking. For example - https://splunkbase.splunk.com/app/4621 On the other hand, you can use Forwarder Monitoring in the Monitoring Console to see "lost" forwarders (but this relies on _internal logs from the forwarder, not on the actual "production" events forwarded from a given UF).
Hi @subasm, I'm quite sure that the issue is in the data. Open a case to Splunk Support to be sure. Ciao. Giuseppe
Hi @maede_yavari, the best approach is having a lookup (called e.g. perimeter.csv) containing the list of all UFs to monitor (at least one column: host). Then you could run (e.g. every 15 minutes) a search like this:

| tstats count WHERE index=_internal BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If you don't want to have this lookup, you could run this search every 15 minutes instead:

| tstats count WHERE index=_internal earliest=-30d latest=now BY host _time
| eval period=if(_time<now()-900,"Previous","Last")
| stats dc(period) AS period_count values(period) AS period BY host
| where period_count=1 AND period="Previous"

This second solution has the advantage that you don't need to maintain the lookup, but it gives you less control, because you don't check servers that haven't been sending logs for more than 30 days, and it's heavier. Ciao. Giuseppe
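The logic of that second search - "seen in the previous period but not in the last 15 minutes" - can be sketched outside SPL like this (a toy model of the idea, not a replacement for the tstats search; names are illustrative):

```python
# Toy model of the two-period check: a host is "missing" if it has
# events before the cutoff but none after it.
def missing_hosts(events, now, window=900):
    """events: list of (host, epoch_seconds). window: seconds (15 min)."""
    cutoff = now - window
    previous, last = set(), set()
    for host, t in events:
        (last if t >= cutoff else previous).add(host)
    # Hosts seen earlier but silent in the last window.
    return sorted(previous - last)
```

A host that never logged at all (e.g. one silent for over 30 days in the SPL version) never appears in either set, which is exactly the blind spot the lookup-based approach closes.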
Hi, I have installed the Splunk Universal Forwarder on several Windows servers, and they send their Windows logs to the indexers. All Windows logs are saved in the 'windows-index.' However, sometimes some of the Universal Forwarders get disconnected, and I have no logs from them for a period of time. How can I find which Universal Forwarders are disconnected? I should mention that the number of UFs is more than 400.
Apparently the transfer of source files to the folder is under our control - it is verified that the data is NOT duplicated. It seems to me there are issues while the data is in flight UF -> HF -> Indexers. Not sure how the ACK works in this setup.
Hi, I have a table of time, machine, and total errors. I need to count, for each machine, how many times 3 errors (or more) happened within 5 minutes. If more than 3 errors happened in one bucket, I mark that row as True. Finally, I will return the frequency of 3 errors in 5 minutes (summarize all rows == True). I succeeded in doing that in Python, but not in Splunk. I wrote the following code:

| table TimeStamp,machine,totalErrors
| eval time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval threshold=3
| eval time_window="5m"
| bucket span=5m time
| sort 0 machine,time
| streamstats sum(totalErrors) as cumulative_errors by machine,time
| eval Occurrence = if(cumulative_errors >= 3, "True", "False")
| table machine,TimeStamp,Occurrence

It is almost correct, but row 5 is supposed to be True: if we calculate the delta time between rows 1 and 5, more than 5 minutes passed, but between rows 2 and 5 less than 5 minutes passed, and the number of errors is >= 3. How do I change it so it checks the delta time between each pair of rows (2 to 5, 3 to 5, ...) for each machine? I hope you understand. I need short and simple code, because I will also need to do this for 1m, 2m, ... windows and 3, 5, ... errors.

row  Machine   TimeStamp            Occurrence
1    machine1  12/14/2023 10:12:32  FALSE
2    machine1  12/14/2023 10:12:50  FALSE
3    machine1  12/14/2023 10:13:06  TRUE
4    machine1  12/14/2023 10:13:24  TRUE
5    machine1  12/14/2023 10:17:34  FALSE
6    machine1  12/16/2023 21:01:45  FALSE
7    machine2  12/18/2023 7:53:54   FALSE

Thanks, Maayan
Hi @aguilard, if you want to receive logs from UFs, you don't need different ports for different indexes: you can configure the inputs on the Forwarders to address the correct index, so you can use one input on the indexers, which is easier to manage. The inputs on the Forwarders can be managed by the Deployment Server; for more info about this, see https://docs.splunk.com/Documentation/Splunk/9.1.2/Updating/Aboutdeploymentserver Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
1969 dates are pre-epoch, that is, your time value is negative (when adjusted for timezone). Obviously, there is something else going on (which you are not showing us). For example, the value you gave is 2023-12-19 23:14:39.567 in my time zone, not 2023-12-15 18:29:41 - a timezone shift of some 4 days and 5 hours, apparently!
@dtburrows3    Thank you!! This worked perfectly. No memory issues either. Do you know if there is a way to apply these using props/transforms or are these strictly in-line search time transformations?