All Posts

Apparently the source file transfer to the folder is under our control - it is verified that the data is NOT duplicated. It seems to me there are issues while the data is in flight UF -> HF -> Indexers. Not sure how the ACK works in this setup.
Hi, I have a table of time, machine, and total errors. I need to count, for each machine, how many times 3 errors (or more) happened within 5 minutes. If more than 3 errors happened in one bucket, I mark the row as True. Finally I will return the frequency of 3 errors in 5 minutes (summarize all rows == True). I succeeded in doing that in Python, but not in Splunk. I wrote the following code:

| table TimeStamp, machine, totalErrors
| eval time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval threshold=3
| eval time_window="5m"
| bucket span=5m time
| sort 0 machine, time
| streamstats sum(totalErrors) as cumulative_errors by machine, time
| eval Occurrence = if(cumulative_errors >= 3, "True", "False")
| table machine, TimeStamp, Occurrence

It is almost correct. Row 5 is supposed to be True: if we calculate the delta time between rows 1 and 5, more than 5 minutes passed, but if we calculate the delta time between rows 2 and 5, less than 5 minutes passed and the number of errors is >= 3. How can I change it so it finds the delta time between each pair of rows (2 to 5, 3 to 5, ...) for each machine? Hope you understand. I need short and simple code because I will also need to do this for 1m, 2m, ... windows and 3, 5, ... errors.

row  Machine   TimeStamp            Occurrence
1    machine1  12/14/2023 10:12:32  FALSE
2    machine1  12/14/2023 10:12:50  FALSE
3    machine1  12/14/2023 10:13:06  TRUE
4    machine1  12/14/2023 10:13:24  TRUE
5    machine1  12/14/2023 10:17:34  FALSE
6    machine1  12/16/2023 21:01:45  FALSE
7    machine2  12/18/2023 7:53:54   False

thanks, Maayan
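A minimal sketch of a sliding-window alternative, assuming the streamstats time_window option is available in your Splunk version (it evaluates a true trailing time window instead of fixed 5-minute buckets; check the streamstats docs for the event ordering it requires and adjust the sort accordingly). Field names are taken from the post:

| eval _time = strptime(TimeStamp, "%Y-%m-%d %H:%M:%S.%3N")
| sort 0 - _time
| streamstats time_window=5m sum(totalErrors) as errors_in_window by machine
| eval Occurrence = if(errors_in_window >= 3, "True", "False")
| stats count(eval(Occurrence=="True")) as frequency by machine

Varying time_window (1m, 2m, ...) and the threshold (3, 5, ...) covers the other cases mentioned above.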
Hi @aguilard, if you want to receive logs from UFs, you don't need different ports to have different indexes: you can configure the inputs on the Forwarders addressing the correct index, so you can use one input on the indexers, which is easier to manage. The inputs on the Forwarders can be managed by the Deployment Server; for more info about this, see https://docs.splunk.com/Documentation/Splunk/9.1.2/Updating/Aboutdeploymentserver

Let me know if I can help you more, or, please, accept one answer for the other people of the Community.

Ciao and happy splunking.
Giuseppe

P.S.: Karma Points are appreciated
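For illustration, a minimal inputs.conf sketch for a Forwarder; the monitored path is a hypothetical placeholder, while the index and sourcetype names are taken from this thread:

# inputs.conf on the Universal Forwarder (e.g. deployed via Deployment Server)
[monitor:///var/log/myapp/app.log]
disabled = 0
index = iscore_test
sourcetype = iscore_test

With this on the UF, a single [splunktcp://:9997] receiving port on the indexers is enough: each event carries its index assignment with it.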
1969 dates are pre-epoch; that is, your time value is negative (when adjusted for timezone). Obviously, there is something else going on (which you are not showing us). For example, the value you gave corresponds to 2023-12-19 23:14:39.567 in my time zone, not 2023-12-15 18:29:41 - a "timezone shift" of some 4 days and 5 hours, apparently!
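For illustration, a minimal sketch of why zero or negative epoch values render as 1969 in time zones behind UTC (the field name t is hypothetical):

| makeresults
| eval t = 0
| eval readable = strftime(t, "%m/%d/%y %H:%M:%S")

Epoch 0 is 1970-01-01 00:00:00 UTC, so in, say, US Eastern it displays as 12/31/69 19:00:00; a field that is zero, negative, or misparsed before strftime is the usual cause of 1969 output.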
@dtburrows3    Thank you!! This worked perfectly. No memory issues either. Do you know if there is a way to apply these using props/transforms or are these strictly in-line search time transformations?
Hi @gcusello  I think I understand now... Yes, I want to receive logs from UFs. In that case I only need to set the inputs.conf file as you said, and in the UFs set the values for index and sourcetype, right? Thank you.
Hi @aguilard, as I said, what kind of logs are you speaking of? If syslogs, using the TCP protocol on ports 9998 and 9999, the inputs you used are correct, but you cannot see them in the dashboard you shared in the screenshot; you have to search for them in the TCP network inputs [Inputs > Network Inputs > TCP]. If instead you want to receive logs from another Splunk system (e.g. a Universal Forwarder), you can see them in the dashboard you shared in the screenshot, but you have to use the conf files I hinted at. Probably there is some confusion about the kinds of inputs: they are two different kinds of inputs that are displayed in different dashboards. Ciao. Giuseppe
OK, I must correct myself here. It was true some time ago, but since 7.2.0 we have this: https://docs.splunk.com/Documentation/Splunk/7.2.0/Indexer/Migratetomultisite#Convert_legacy_buckets_to_multisite So with modern Splunk installations you can convert to multisite. Yaay!
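A minimal sketch of what the linked conversion involves, assuming the setting goes in server.conf on the cluster master (verify the exact procedure against the docs above):

# server.conf on the cluster master
[clustering]
mode = master
constrain_singlesite_buckets = false

After a restart, legacy single-site buckets are no longer pinned to their original peers and can be replicated according to the multisite policies.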
Thanks for your response @gcusello  Maybe I do not understand some Splunk concepts very well. All I want is: if an event arrives at port 9998, it should be indexed in the index iscore_test, and if the event arrives at port 9999, it should be indexed in the index iscore_prod. Is the inputs.conf that I set for this app correct?
Hi, I am getting the below error when I'm trying to configure the Webhook alert to post in Microsoft Teams.

12-19-2023 11:57:56.700 +0000 ERROR sendmodalert [292254 AlertNotifierWorker-0] - action=webhook STDERR - Error sending webhook request: HTTP Error 400: Bad Request
12-19-2023 11:57:56.710 +0000 INFO sendmodalert [292254 AlertNotifierWorker-0] - action=webhook - Alert action script completed in duration=706 ms with exit code=2
12-19-2023 11:57:56.710 +0000 WARN sendmodalert [292254 AlertNotifierWorker-0] - action=webhook - Alert action script returned error code=2
Hi @aguilard, if you're speaking of forwarding and receiving between Splunk systems (as it seems from your screenshot), the inputs.conf stanzas you used are wrong; those are for TCP network inputs. As you can read at https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Inputsconf#inputs.conf.example , the correct ones for forwarding and receiving are:

[splunktcp://:9997]
disabled = 0

[splunktcp://:9998]
disabled = 0

[splunktcp://:9999]
disabled = 0

Ciao. Giuseppe
This is my end_time: 1703027679.5678809 After this query it showed output, but I am getting the 1969 format:

| eval time=strftime(time, "%m/%d/%y %H:%M:%S")

But when I tried with the literal value instead of the time field, it showed correctly:

| eval time=strftime(1703027679.5678809, "%m/%d/%y %H:%M:%S")
| table time
The indexes.conf is copied successfully and the indexer creates the indexes correctly; the problem is the inputs.conf, which is not working properly.
Okay, Thanks @VatsalJagani 
Hi All, I am trying to send an email using the sendemail command with a CSV as an attachment. The email is sent successfully, but the file is named "unknown-<date_time>". I want to rename this file. Please let me know how to do this.

| sendemail sendresults=true format=csv to=\"$email$\" graceful=false message="This is a test email" subject="Test Email Check"

Also, the message and subject are getting truncated: I am getting the message body as "This" and the subject as "Test". Please help me understand what is going wrong.

Help on:
1. Renaming the CSV file.
2. Avoiding the message body and subject getting truncated.

I really appreciate your help on this. Regards, PNV
@Muthu_Vinith - Let's say your CSV files are generated inside a folder called my_csv_files; then you can monitor that folder with Splunk to ingest all new CSV files in it. You need to install Splunk or a Splunk UF on that machine and enable this input, putting your full CSV folder path in the stanza:

[monitor:///var/log/my_csv_files]
disabled = 0
index = my_metrics_idx

Reference - https://docs.splunk.com/Documentation/Splunk/7.2.6/Data/Monitorfilesanddirectorieswithinputs.conf

I hope this helps!!! Kindly upvote if this helps!!!
Hello, I would like to separate my data streams by opening three receiving ports. I have a multisite indexer cluster and I have created an app with this default inputs.conf file:

[tcp://9998]
disabled = 0
index = iscore_test
sourcetype = iscore_test
connection_host = ip

[tcp://9999]
disabled = 0
index = iscore_prod
sourcetype = iscore_prod
connection_host = ip

But when I check the receiving ports on the indexer, it only shows 9997 (which I would like to use just for Splunk internal logs). I think there is a faster way to do this rather than setting the receiving ports manually on each indexer. I already checked, and the app that I created was successfully copied to the indexers.
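For reference, the usual way to push such an app to every peer in an indexer cluster is the configuration bundle on the cluster master - a sketch assuming Splunk 9.x directory names (older versions use master-apps) and a hypothetical app name:

# on the cluster master
cp -r my_inputs_app $SPLUNK_HOME/etc/manager-apps/
splunk apply cluster-bundle

The peers then receive the app under etc/slave-apps (etc/peer-apps on 9.x) and restart if needed, so the ports never have to be opened manually on each indexer.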
Thanks for your support, it worked for me.
Hi @Questioner, if you want the tooltips that are in that app, you can use the js and css from that app, copying them into your app and adding the header line; obviously remember to restart Splunk on the SH.

Let me know if I can help you more, or, please, accept one answer for the other people of the Community.

Ciao and happy splunking.
Giuseppe

P.S.: Karma Points are appreciated
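For illustration, the "header line" is the root element of the Simple XML dashboard, which references the copied files; the filenames here are hypothetical placeholders:

<dashboard script="tooltip.js" stylesheet="tooltip.css">

Both files go in your app's appserver/static directory; a restart of the SH (as noted above) makes them take effect.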
Hi! @gcusello  Maybe some requirements are missing... I think. I'll try to do that. Thank you for your help!