All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @gcusello @deepakc  I will check if Rsyslog is installed. As far as I am aware it shouldn't be, and the server is running Windows. We use 20+ GB of license per day. Is it recommended to have Rsyslog / Kiwi Syslog running on the server and then use a file monitor to ingest the events?
It is better to use rsyslog, but in this case I think the issue may be related to the fact that rsyslog is running and using TCP port 514 - that may be the cause of the current issue. Longer term it's better to use rsyslog or syslog-ng, or even better SC4S, but that's another topic.
Hi @yh , as @deepakc said, it's preferable to use rsyslog to receive syslog instead of Splunk; this way you're sure to save logs even if your Splunk is down or overloaded. Are you receiving many logs? Ciao. Giuseppe
Hi @yh , I asked because filtering should be applied on the first full Splunk instance that the data passes through. If you don't have an intermediate HF (or another full Splunk instance) and your data arrives directly at your standalone Splunk server, the conf files are correctly located. Ciao. Giuseppe
I'm wondering if you have rsyslog running on the AIO - see if you can turn it off if it's running. Check whether it's running, and if so stop it, as it defaults to TCP 514:

sudo systemctl status rsyslog
Hi, In my case this is not a heavy forwarder; it is an indexer and search head. The UDP and TCP flows go directly to the AIO indexer / search head. I understand from the documentation that a HF is more flexible, since we can filter by sourcetype, host, etc., while for the indexer it is still possible, but the example given is based on the source. We did try adding a backslash before the equals sign, and even tried discarding all events using "." as the regex, but somehow no filtering works for the TCP source. Is there something unique about filtering directly on indexers? It shows clearly that "source = tcp:514", so I don't suppose it has been renamed. Should I try renaming this port? I'm not sure if that would help. In inputs.conf I think there are some TCP settings that set the sourcetype based on the host address, but I don't suppose that would have any impact?
"How do I solve the problem with automatic report collection and sending?" Maybe you can use the search below to check. Using the metadata command, this example shows whether a host has stopped sending data to the _internal index; you can change this to another index where you expect regular data to arrive, and you can also change the period (-5m) to, say, 10 minutes. You can then save it as an alert or a dashboard table to inform you when there is no data, so you can investigate why.

| metadata type=hosts index=_internal
| table host, totalCount, firstTime, lastTime, recentTime
| rename totalCount as Count firstTime as "First_Event" lastTime as "Last_Event" recentTime as "Last_Update"
| fieldformat Count=tostring(Count, "commas")
| fieldformat "First_Event"=strftime('First_Event', "%c")
| fieldformat "Last_Event"=strftime('Last_Event', "%c")
| fieldformat "Last_Update"=strftime('Last_Update', "%c")
| where Last_Update <= relative_time(now(),"-5m")
| table host, Last_Update
Hi, I've seen in the server.conf specification, under the clustering stanza, different parameters (https://docs.splunk.com/Documentation/Splunk/9.0.7/Admin/Serverconf#High_availability_clustering_configuration):

register_search_address: "This is the address that advertises the peer to search heads. This is useful in the cases where a splunk host machine has multiple interfaces and only one of them can be reached by another splunkd instance."

register_forwarder_address: "This is the address on which a peer is available for accepting data from forwarder. This is useful in the cases where a splunk host machine has multiple interfaces and only one of them can be reached by another splunkd instance."

register_replication_address: "This is the address on which a peer is available for accepting replication data. This is useful in the cases where a peer host machine has multiple interfaces and only one of them can be reached by another splunkd instance."

So it seems to be possible to use multiple IPs, but I'm wondering how to convert a peer with a single IP to a peer with multiple IPs using these parameters. Frédéric
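For what it's worth, a minimal server.conf sketch for a multi-homed peer might look like the following. All addresses and the stanza layout beyond the three register_* parameters are hypothetical examples, not values from this thread:

```ini
# Hypothetical multi-homed indexer peer:
# eth0 (10.0.1.10) reachable by search heads and forwarders,
# eth1 (192.168.1.10) on a dedicated replication network.
[clustering]
mode = peer
manager_uri = https://10.0.0.5:8089
register_search_address = 10.0.1.10
register_forwarder_address = 10.0.1.10
register_replication_address = 192.168.1.10
```

Converting an existing single-IP peer would then be a matter of adding the register_* lines and restarting the peer, but that is an assumption worth verifying against the docs or Splunk Support.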
Hi @ArianeSantos , let me understand: your ingestion worked correctly until the 30th of April and stopped from the 1st of May, is that correct? In this case, check the date format of your data and check whether the events of the 1st of May were indexed with timestamp 2024-01-05. If you have a European date format (dd/mm/yyyy) and you didn't force the format (TIME_FORMAT = %d/%m/%Y), Splunk by default uses the American format (mm/dd/yyyy), so for the first 12 days of the month you get an error. You can solve the issue by forcing the TIME_FORMAT. Ciao. Giuseppe
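As a sketch, the fix described above could look like this in props.conf on the first full Splunk instance the data reaches. The sourcetype name and the TIME_PREFIX/lookahead values are placeholders to adapt to the actual data:

```ini
# props.conf - force European day-first parsing (hypothetical sourcetype name)
[my_european_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 20
```

After deploying this, only newly indexed events get the corrected timestamps; already-indexed events keep the ones assigned at ingestion.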
Hi @yh , the <189> string isn't relevant; only the regex you are using matters. Anyway, with these logs you should be able to filter both of them using "dstip\=8\.8\.8\.8". Try adding a backslash before "=", even if that shouldn't be the issue. Are you sure that the unfiltered events arrive directly at the HF and that there isn't another HF in the middle? Ciao. Giuseppe
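Outside Splunk, the regex itself can be sanity-checked with Python's re module. The raw event line below is a made-up sample resembling the TCP syslog payload; it shows that "=" needs no escaping in a regex, although the escaped form matches too, and that the leading <189> priority value doesn't prevent a match:

```python
import re

# Hypothetical raw event resembling the TCP syslog payload in this thread.
raw = '<189>logver=700140601 srcip=10.0.0.1 dstip=8.8.8.8 dstport=443 action="deny"'

# "=" has no special meaning in a regex, so both forms match the raw text.
print(bool(re.search(r"dstip=8\.8\.8\.8", raw)))    # unescaped "="
print(bool(re.search(r"dstip\=8\.8\.8\.8", raw)))   # escaped "=" - also fine
```

If both print True here but the Splunk filter still doesn't fire, the problem is more likely the routing (which instance parses the data, which stanza matches the source) than the regex itself.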
Hi @fde , I'm not sure that it's possible to use two IP addresses for each server: in Splunk, every hostname has one IP address. Ciao. Giuseppe
Hello Giuseppe, I'm blanking out some of the details with XXXX for anonymity. I remember we also tried REGEX = . for the TCP input once.

Sample event from TCP:

<189>logver=700140601 timestamp=1714717074 devname="XXXXX" devid="XXXXX" vd="root" date=2024-05-03 time=06:17:54 eventtime=1714688275070439553 tz="XXX" logid="0001000014" type="traffic" subtype="local" level="notice" srcip=XX.XX.XX.XX srcname="XXXXXX" srcport=31745 srcintf="port1" srcintfrole="undefined" dstip=XXX.XXX.XXX.XXX dstname="XXX" dstport=443 dstintf="root" dstintfrole="undefined" srccountry="XXXX" dstcountry="XXX" sessionid=68756048 proto=6 action="deny" policyid=1 policytype="local-in-policy" poluuid="7575f13c-5066-51ed-1e15-40b0e5867f81" service="HTTPS" trandisp="noop" app="HTTPS" duration=0 sentbyte=0 rcvdbyte=0 sentpkt=0 rcvdpkt=0 appcat="unscanned" crscore=5 craction=262144 crlevel="low"

Sample event from UDP:

May 3 14:21:57 10.XX.XX.XX logver=700140601 timestamp=1714746117 devname="XXX" devid="XXX" vd="XXX" date=2024-05-03 time=14:21:57 eventtime=1714717317787162683 tz="XXX" logid="0000000013" type="traffic" subtype="forward" level="notice" srcip=XX.XX.X.X srcport=38915 srcintf="port5" srcintfrole="lan" dstip=XX.XX.XX.XX dstport=443 dstintf="port9" dstintfrole="wan" srccountry="Reserved" dstcountry="XXXX" sessionid=759555888 proto=17 action="accept" policyid=1 policytype="policy" poluuid="7ade8e92-454b-51e9-5c91-4feddb630366" policyname="XXXXX" service="udp/443" trandisp="snat" transip=XX.XX.XX.XXX transport=38915 appid=40169 app="QUIC" appcat="Network.Service" apprisk="low" applist="XXX" duration=49 sentbyte=1228 rcvdbyte=0 sentpkt=1 rcvdpkt=0 utmaction="block" countapp=1

It seems a bit weird to me that the TCP input doesn't have the date and time in front but starts with <189>; I wonder if that is normal.
Hi @daniel333 , not all the data sources from Azure and Office 365 are free; some are subject to a fee. Check whether the data source you want is one of them. In addition, you could ask Splunk Support for help. Don't ask Microsoft Support, because they always answer "ask Splunk", since Splunk is considered a competitor by Microsoft. Ciao. Giuseppe
Hi @myte , as you like! But by scheduling the two searches:

|.... main search...
| bucket _time span=1h
| stats count BY _time
| stats avg(count) AS AverageCount max(count) AS MaxCount
| eval AverageCount=round(AverageCount,2), MaxCount=round(MaxCount,2), Type="Per Hour"
| collect index=my_summary

and

|.... main search...
| bucket _time span=1m
| stats count BY _time
| stats avg(count) AS AverageCount max(count) AS MaxCount
| eval AverageCount=round(AverageCount,2), MaxCount=round(MaxCount,2), Type="Per Minute"
| stats values(AverageCount) AS AverageCount values(MaxCount) AS MaxCount BY Type
| collect index=my_summary

and running this search when you need the results:

index=my_summary
| table Type AverageCount MaxCount

you have the same result in a single, quicker search.

Let us know if you need more help, and, for the other people of the Community, please accept one answer. Ciao. Giuseppe P.S.: Karma Points are appreciated by all the Contributors.
Hello Team, I am using a blueprint Lambda to send CloudWatch logs to Splunk. I have configured the HEC URL & HEC token in the Splunk web UI. Splunk is installed on an AWS Linux server, but while invoking the Lambda function I am getting the above error. HEC URL - http://54.67.83.247:8088/services/collector/raw The IP is whitelisted in the security group of the EC2 instance where Splunk is installed. Can anyone help me fix this issue?
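One way to narrow this down is to reproduce the HEC raw-endpoint request outside the Lambda. The sketch below only builds the URL and Authorization header (the token string is a placeholder); the actual send is left as a commented suggestion so nothing fires against a real server by accident:

```python
# Minimal sketch of the HEC "raw" request a Lambda (or curl) would send.
# Host, port, and token below are placeholders, not working values.

def build_hec_request(host, port, token, use_tls=False):
    """Return the URL and headers for a Splunk HEC raw-endpoint POST."""
    scheme = "https" if use_tls else "http"
    url = f"{scheme}://{host}:{port}/services/collector/raw"
    headers = {"Authorization": f"Splunk {token}"}
    return url, headers

url, headers = build_hec_request("54.67.83.247", 8088, "YOUR-HEC-TOKEN")
print(url)

# To actually test connectivity from the Lambda's network, send a sample event:
#   import urllib.request
#   req = urllib.request.Request(url, data=b"hello hec",
#                                headers=headers, method="POST")
#   urllib.request.urlopen(req)
```

If the plain-http URL times out while the token is valid, check that "Enable SSL" in the HEC global settings matches the URL scheme (an SSL-enabled HEC listener will not answer plain http), and that the Lambda's VPC/subnet can actually reach port 8088, not just that the source IP is whitelisted.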
I had to set 744 permissions for folders, that solved my issue
Hi @yh , the TRANSFORMS setting in transforms.conf applies a regex to the raw data; it doesn't work on fields. Are you sure that the TCP raw data contains the string you configured? Could you share some samples of the events that you want to filter (both UDP and TCP)? Ciao. Giuseppe
These are some basic examples once you have ingested the data; the same principles apply to Windows metrics. Analyse the data, work out which fields contain the values you need, and work on the SPL until it gives you the results.

This example shows how you can monitor Linux CPU metrics - change the threshold (| where cpu_load_percent >=1):

index=linux sourcetype=cpu
| fields _time, host, cpu_load_percent
| eval date_time=strftime(_time, "%d/%m/%Y %H:%M:%S")
| where cpu_load_percent >=1
| table date_time, host, cpu_load_percent
| dedup host

This example shows how you can monitor memory percentage from Linux metrics - change the threshold (| where PercentMemory >=0):

index=linux sourcetype=ps
| fields _time, host, PercentMemory
| eval date_time=strftime(_time, "%d/%m/%Y %H:%M:%S")
| where PercentMemory >=0
| table date_time, host, PercentMemory
| dedup host

Do similar for disk, processor, etc.
Thanks for the reply. We did do a netstat check and it's a TCP connection between the source host and Splunk. Something similar was tried, for example:

[source::udp:514]
TRANSFORMS-null1 = udp_setnull

[source::tcp:514]
TRANSFORMS-null2 = tcp_setnull

However, weirdly, the TCP part is not working. We even tried removing UDP and having just the TCP portion, but it still doesn't work. Very weird.

TRANSFORMS-null = setnull_tcp_traffic
Check that you are actually receiving traffic over TCP:

sudo tcpdump -i <my_interface> tcp port <splunk_port>
sudo tcpdump -i <my_interface> udp port <splunk_port>

Try the below and see if that corrects it. The [source::...] stanzas go in props.conf, and the [setnull_*] stanzas in transforms.conf:

[source::udp:514]
TRANSFORMS-null = setnull_udp_traffic

[source::tcp:514]
TRANSFORMS-null = setnull_tcp_traffic

[setnull_udp_traffic]
REGEX = dstip=8\.8\.8\.8
DEST_KEY = queue
FORMAT = nullQueue

[setnull_tcp_traffic]
REGEX = dstip=8\.8\.8\.8
DEST_KEY = queue
FORMAT = nullQueue