All Posts

To make this question answerable, you need to also illustrate the content of your lookup. Perhaps your lookup doesn't contain a year? (Sometimes it makes more sense not to have the year than to have it.) Also, if you only want to show events where holidayCheck is "nonHoliday", why the complicated post-calculations? Assuming your lookup is like

holidayDate   holiday
1/1           New Year's Day
7/10          Don't Step on a Bee Day

all you need is

index=my_index
| eval eventDate=strftime(_time, "%m/%d")
| lookup holidayLookup.csv holidayDate as eventDate OUTPUT holidayDate
| where isnull(holidayDate)
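A quick way to verify what the lookup actually contains (same file name as in the search above) is to dump it directly:

| inputlookup holidayLookup.csv

If holidayDate comes back as 07/10/2024 rather than 7/10, the month/day-only strftime format above would need adjusting to match.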
Hi! I'm working on adding a holiday table as a lookup to reference for alerts based on volume, and I want to alert on different thresholds if it's a holiday. The referenced search is showing data for 7/10 as nonHoliday, even though, for a test, I have it listed as a holiday in the lookup file. It's a .csv, so no initial formatting seems to be passing through the file; I need to format the holidayDate column as mm/dd/yyyy.

index=my_index
| eval eventDate=strftime(_time, "%m/%d/%Y")
| lookup holidayLookup.csv holidayDate as eventDate OUTPUT holidayDate
| eval dateLookup = strftime(holidayDate, "%m/%d/%Y")
| eval holidayCheck=if(eventDate == dateLookup, "holiday", "nonHoliday")
| fields eventDate holidayCheck
| where holidayCheck="nonHoliday"

The screenshot shows it has captured the event date as expected and is outputting a value for holidayCheck, but, based on the data file it's referencing, it should show as holiday.

Data structure:

holidayDate   holidayName
07/10/2024    Testing Day
07/04/2024    Independence Day
Please make sure you're accepting a resolution as an answer.
Thanks @Dallastek1. Did you use just this app in Cloud, or also another app (IA/TA add-ons) on a Heavy Forwarder?
Thanks for the advice. We are going to go with Ubuntu LTS and create a separate syslog host, then forward to the Splunk server.
As per the subject: migrating the UF from 8.x to 9.x creates a systemd unit file by itself! I do not want it created on the first "restart" action, and I do not want to have to manually run "disable boot-start" after the restart. On the first restart after deploying the new version (9.0.6):

[DFS] Performing migration.
[DFS] Finished migration.
[Peer-apps] Performing migration.
[Peer-apps] Finished migration.
Creating unit file...
Important: splunk will start under systemd as user: root
The unit file has been created.

Loaded: loaded (/usr/lib/systemd/system/SplunkForwarder.service; enabled; vendor preset: disabled)

Is there a way to prevent the creation of the systemd unit file during the "restart" action? I'm on CentOS Stream. Thanks.
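For context, the manual cleanup being avoided here is the boot-start CLI; a sketch, assuming the default install path /opt/splunkforwarder:

# remove the unit file the migration created (install path is an assumption)
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk disable boot-start
/opt/splunkforwarder/bin/splunk start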
Hi all, I am monitoring a CSV file that has multiple lines and uses a pipe as the delimiter. I want to break them into different events, but instead Splunk is treating it as one event with multiple lines. I do have props.conf set on the IDXs, but it didn't change anything.

# My props.conf
[my myfake-sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
INDEXED_EXTRACTIONS=PSV
KV_MODE=none
disabled=false
category=Structured
pulldown_type=true
FIELD_DELIMITER=|
FIELD_NAMES=eruid|description|

My inputs.conf:

[monitor:///my/fake/path/hhhh.csv*]
disabled = 0
sourcetype = hhhh:csv
index = main
crcSalt = <SOURCE>

Sample file contents (the same five-line block repeats through the rest of the file):

eruid|description|
batman|uses technology|
superman|flies through the air|
spiderman|uses a web|
ghostrider| rides a motorcycle
eruid|description|
batman|uses technology|
superman|flies through the air|
spiderman|uses a web|
ghostrider| rides a motorcycle

Regards
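One likely culprit worth checking (a hedged sketch, not a confirmed diagnosis): INDEXED_EXTRACTIONS is applied on the forwarder that monitors the file, not on the indexers, and the props stanza name must match the sourcetype set in inputs.conf (hhhh:csv here, while the props stanza above is named differently). A minimal version deployed to the forwarder might look like:

# props.conf on the forwarder monitoring the file
# (stanza name matches the sourcetype from inputs.conf)
[hhhh:csv]
INDEXED_EXTRACTIONS = psv
FIELD_DELIMITER = |
FIELD_NAMES = eruid,description
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)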
Check your inputs.conf and ensure the stanzas are properly configured to monitor only the files that you want. Specifically, you can adjust the block and allow lists:

[monitor://whatever]
whitelist = (REGEX)
blacklist = (REGEX)

That aside, I strongly encourage you to follow Giuseppe's advice and contact your Splunk admin to open a case on your behalf.
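As a purely illustrative example (hypothetical path and patterns), a stanza that ingests only .log files and skips compressed rollovers could look like:

[monitor:///var/log/myapp]
whitelist = \.log$
blacklist = \.(gz|zip|bak)$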
Hello all, and hopefully @rechteklebe. I currently have PCAP Analyzer working to the point that I can copy a *.pcap or *.pcapng file to the monitor folder, and it will run it through tshark and produce the output file that Splunk then ingests and powers the dashboards with.

I found that Suricata on pfSense outputs the pcap files as log.pcap.<I think date data here>, for example log.pcap.1720627634. These seem to not get converted and ingested. I went to add a whitelist in the inputs.conf, but running a btool check it comes back as an invalid key in the stanza. I remembered the webUI section for this also does not offer a whitelist/blacklist setting, so I guess you have that disabled somehow. I'm assuming one or some of your python scripts filter on whether the file is a *.pcap or *.pcapng.

I'm finding it not an option to change the file name format in the Suricata GUI in pfSense, or to have rsync change the file names on copy, or to have a bash script do it on the Linux host the files get moved to. Is there a set of python scripts I can change this whitelisting in? Or a way to enable whitelisting at the inputs.conf level? Or could a transforms fix this?
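One hedged observation: in core Splunk, whitelist/blacklist are settings for [monitor://...] stanzas, so if the app ships a different input type (for example a scripted input feeding its conversion scripts), btool would flag them as invalid keys there, which may also be why the webUI offers no whitelist/blacklist field. A plain monitor stanza that would accept the Suricata naming might look like this (hypothetical path; the app's actual input type may differ):

[monitor:///opt/pcap_monitor]
whitelist = \.pcap(ng)?(\.\d+)?$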
@gcusello, I don't have access to open a case with Splunk support. It would be much appreciated if someone could help with how to limit the monitored files and control the memory consumption.
If you can't see your packets in tcpdump output, it means that something is wrong earlier. From my limited knowledge of VMware products, I'd say you reconfigured your syslog outputs on ESXi but didn't adjust firewall rules to allow outgoing traffic to another port.

As for syslog, there are typically two approaches. One is to receive with a syslog daemon and write to files, from which you'd read the events with a monitor input on a UF (or even on your Splunk server if your setup is small enough, but I'd prefer to separate this functionality onto another small host). The other is to use a syslog receiver (properly configured rsyslog/syslog-ng or SC4S) to receive data over the network and send it to a HEC input. While in small setups receiving directly on the Splunk server might be "good enough", you lose network-level metadata and it's more frustrating to manage inputs for different types of sources (not to mention the low-port issue).

About the distro... well, that's a bit of a religious issue, but depending on your willingness to spend money and other personal preferences, I'd consider for production use (in no particular order): RHEL, Rocky/Alma, Debian, Ubuntu LTS, SLES, OpenSUSE. No rolling-release distros for prod.
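As a rough illustration of the first approach, a minimal rsyslog fragment that writes each sender's traffic to its own file for a UF to monitor might look like this (port, paths, and file layout are assumptions, not a vetted config):

# /etc/rsyslog.d/10-remote.conf (illustrative)
module(load="imudp")                      # accept syslog over UDP
input(type="imudp" port="514")
# one file per sending host, for a UF monitor input to pick up
template(name="PerHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
action(type="omfile" dynaFile="PerHostFile")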
At one point there was an add-on for sending Splunk logs to MS Sentinel, but it appears it was deprecated some time ago. I need to incorporate customer data that uses Splunk into my SOC Sentinel environment. Are there any built-in functions that can be utilized to forward to Sentinel? The forwarding option in Splunk appears to only work with other Splunk instances. All current add-ons appear to be focused on ingest from Sentinel to Splunk. I have been researching a variety of options, but none seem to fill the void that I can find at this time, outside of creating and maintaining my own Splunk add-on.
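One avenue worth verifying (hedged; this is not a packaged integration): a Splunk heavy forwarder can route raw events to third-party destinations via a [syslog:...] stanza in outputs.conf, and Sentinel can ingest syslog through its collector agents. Hostname and port below are placeholders:

# outputs.conf on a heavy forwarder (placeholders throughout)
[syslog:sentinel_out]
server = sentinel-collector.example.com:514
type = tcp

Whether Sentinel's collector parses the resulting format usefully would need testing on your side.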
This may be useful to you: https://docs.splunk.com/Documentation/Splunk/9.2.2/SearchReference/Lookup#2._IPv6_CIDR_match_in_Splunk_Web
Per the docs, here: Install the Splunk Add-on for Windows - Splunk Documentation, and for metrics here: https://docs.splunk.com/Documentation/AddOns/released/Windows/Configuration#Collect_perfmon_data_and_wmi:uptime_data_in_metric_index

You should ensure you have a metrics index defined, and install the add-on accordingly at every layer to ensure you're getting the data you need.
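For reference, a metrics index is just an index declared with datatype = metric; a minimal sketch (index name and paths are illustrative):

# indexes.conf
[windows_metrics]
datatype   = metric
homePath   = $SPLUNK_DB/windows_metrics/db
coldPath   = $SPLUNK_DB/windows_metrics/colddb
thawedPath = $SPLUNK_DB/windows_metrics/thaweddb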
How can I match the IPs from a csv file with the CIDR ranges in another csv? If no CIDR matches, I want to return "NoMatch", and if a proper IP and CIDR match, then return the CIDR. I tried the approach below, but I keep getting "No Match" for all entries, even though I have proper CIDR ranges:

| inputlookup IP_add.csv
| rename "IP Address" as ip
| appendcols [| inputlookup cidr.csv]
| foreach cidr [ eval match=if(cidrmatch('<<FIELD>>', ip), cidr, "No Match")]

Note: I can't use join as I don't have an IP field or IPs in the cidr csv. Any help would be greatly appreciated. Thank you.
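The doc linked in the reply above describes the usual pattern for this: define the CIDR file as a lookup definition with a CIDR match_type, then use lookup rather than appendcols (which merely pastes rows side by side). A hedged sketch, assuming the CIDR column in cidr.csv is named cidr and the definition is saved as cidr_lookup:

# transforms.conf
[cidr_lookup]
filename   = cidr.csv
match_type = CIDR(cidr)

| inputlookup IP_add.csv
| rename "IP Address" as ip
| lookup cidr_lookup cidr as ip OUTPUT cidr as matched_cidr
| eval matched_cidr=coalesce(matched_cidr, "NoMatch")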
Yes - it's only perfmon data we're not getting. Splunk internals and event log events are both OK. AFAIK (and intended) these are not being collected as metrics. I'd been through the article you referenced, and have now been back and checked my workings. We've not installed the Windows add-on to every layer yet - I've just used a bit of inputs.conf from it initially to get the data to look at, and will then go back to all the clever bits once the basics are working.
We had this issue with some of our devices for syslog data; the workaround is to use a syslog server. If you are comfortable with Linux, then stand up a server with rsyslog, do the appropriate configs, and then put a UF on the host and have it monitor the log folder/files, etc.
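To round that out, the UF side of this workaround is just a monitor input over whatever files rsyslog writes; a sketch with assumed paths, index, and sourcetype, matching the per-host layout sketched earlier in the thread:

# inputs.conf on the UF (path, index, sourcetype are assumptions)
[monitor:///var/log/remote/*/syslog.log]
sourcetype = syslog
index = network
host_segment = 4    # take the host name from the 4th path segment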
This may be a relevant source for additional troubleshooting: Solved: What's the best way to get Windows Perfmon data in... - Splunk Community
Tcpdump shows syslog coming from everything except our hosts. I have tried udp/514 and tcp/1514. Neither shows up. Everything else does show up. When we had this on a Windows server there was no issue; we didn't have to do anything special - it was coming over on udp/514.

What is the recommended method for ingesting syslog? We are a small shop and have never had issues with this method in the past.

Also, what distro would you recommend? This is a new install, so it wouldn't be a stretch to rebuild it.