All Posts


For multiple sourcetypes, linecount is 2 when it clearly should be 1. Has anybody encountered this?
Hi @Nicolas2203
So, if you want to redact the copy of your logs sent to one place but not the copy sent to the other, then I think you would have to use CLONE_SOURCETYPE and then apply the redaction and routing to this new sourcetype as required.
Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
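To make the CLONE_SOURCETYPE idea concrete, here is an untested sketch; the cloned sourcetype name (fw:firewall:redacted), the transform names, the SEDCMD pattern and the output group names are placeholders, not anything confirmed in this thread:
== transforms.conf ==
# clone every fw:firewall event into a new sourcetype
[clone_fw_for_redaction]
REGEX = .
CLONE_SOURCETYPE = fw:firewall:redacted
# route the cloned (redacted) copy only to the remote HF
[route_redacted_to_hf]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = distant_HF
== props.conf ==
[fw:firewall]
TRANSFORMS-clone = clone_fw_for_redaction
[fw:firewall:redacted]
# placeholder redaction: mask usernames in the cloned copy only
SEDCMD-redact = s/user=\S+/user=xxxxx/g
TRANSFORMS-route = route_redacted_to_hf
The original events keep whatever routing the input set, while the clone picks up its own props/transforms, which is what lets you redact one copy and not the other.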
Hi @gpalau
Please could you confirm the permissions that you have on the installation?
ls -ltr /Applications/SplunkForwarder
Are you intending to run Splunk as your own user?
According to the docs (https://docs.splunk.com/Documentation/Splunk/latest/Installation/Systemrequirements#:~:text=for%20this%20platform.-,Mac%20operating%20systems,-The%20table%20lists) macOS 15.4 (Sequoia) is not yet supported, *however* I am running this myself on an M1 Silicon Mac on 15.4 without issue, so it should work, but consider that it might not be officially supported.
For reference, on my installation the permissions are as follows:
ls -l /Applications/ | grep SplunkForwarder
drwxr-xr-x@ 17 MyUsername wheel   544 17 Apr 17:07 SplunkForwarder
ls -l /Applications/SplunkForwarder
drwxr-xr-x  27 MyUsername wheel   864 20 Feb 19:41 bin
-r--r--r--   1 MyUsername wheel    57 20 Feb 16:30 copyright.txt
drwxr-xr-x  32 MyUsername wheel  1024 17 Apr 17:07 etc
-rw-r--r--@  1 root       wheel     0 17 Apr 17:06 Icon?
drwxr-xr-x   3 MyUsername wheel    96 20 Feb 19:23 include
drwxr-xr-x  32 MyUsername wheel  1024 17 Apr 17:06 lib
-r--r--r--   1 MyUsername wheel 59708 20 Feb 16:30 license-eula.txt
drwxr-xr-x   5 MyUsername wheel   160 17 Apr 17:07 openssl
-r--r--r--   1 MyUsername wheel   522 20 Feb 18:01 README-splunk.txt
drwxr-xr-x   4 MyUsername wheel   128 20 Feb 19:23 share
-r--r--r--   1 MyUsername wheel 53332 20 Feb 19:41 splunkforwarder-9.4.1-e3bdab203ac8-darwin-universal2-manifest
drwxr-xr-x   3 MyUsername wheel    96 20 Feb 19:24 swidtag
-rw-r--r--   1 MyUsername wheel     0 20 Feb 19:23 uf
drwx--x---   7 MyUsername wheel   224 17 Apr 17:07 var
Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
@gpalau
You're running macOS 15.4 (Sequoia), which is not officially listed as supported yet. The permission errors you're encountering when running Splunk Universal Forwarder 9.4.1 on macOS 15.4 are likely due to incorrect ownership or permissions on the Splunk Forwarder directories, or to the process not being run with sufficient privileges.
If the permissions issue persists, you can try resetting the ownership and permissions for the entire Splunk Forwarder directory:
sudo chown -R $(whoami) /Applications/SplunkForwarder
sudo chmod -R 755 /Applications/SplunkForwarder
The supported macOS versions are listed in the system requirements table in the Splunk documentation.
I went ahead and re-installed the Splunk Forwarder manually, and the last step of the .pkg install reads:
Click the "Splunk" icon on the Desktop to start and connect to Splunk. To start Splunk manually, open a Terminal window and run the command:
$ /Applications/Splunk/bin/splunk start
Documentation: http://docs.splunk.com/Documentation/Splunk
However, the actual installation path is /Applications/SplunkForwarder/bin. Then you have to manually run a command line to approve the license?
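For what it's worth, a minimal sketch of that manual step, assuming the forwarder really lives under /Applications/SplunkForwarder and that you run it as the user who owns the installation; the --accept-license flag answers the license prompt non-interactively:
# start the Universal Forwarder from its actual install path and accept the EULA in one step
/Applications/SplunkForwarder/bin/splunk start --accept-license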
Hi @MrGlass,
Splunk isn't a database, so the join command should be used only when there is no other solution and only with small amounts of data; use stats instead, something like this:
(index=network "arp-inspection" OR "packets received") OR (index=cisco_ise sourcetype=cisco:ise:syslog User_Name="host/*")
| eval NetworkDeviceName=coalesce(NetworkDeviceName, Network_Device)
| rename mnemonic AS Port_Status
| rename src_interface AS src_int
| stats earliest(device_time) AS device_time values(User_Name) AS User_Name values(src_ip) AS src_ip values(src_mac) AS src_mac values(message_text) AS message_text values(Location) AS Location values(Port_Status) AS Port_Status BY NetworkDeviceName, src_int
| table device_time, NetworkDeviceName, User_Name, src_int, src_ip, src_mac, message_text, Location, Port_Status
Ciao.
Giuseppe
I installed Splunk Forwarder 9.4.1 on macOS 15.4 and on first run I get a bunch of permission errors:
Warning: cannot create "/Applications/SplunkForwarder/var/log/splunk"
Warning: cannot create "/Applications/SplunkForwarder/var/log/introspection"
Warning: cannot create "/Applications/SplunkForwarder/var/log/watchdog"
Warning: cannot create "/Applications/SplunkForwarder/var/log/client_events"
This appears to be your first time running this version of Splunk.
Could not open log file "/Applications/SplunkForwarder/var/log/splunk/first_install.log" for writing (2).
However, these folders have the right permissions. I'm a bit lost as to what to do here.
Try to avoid using join. I suspect "data gets jumbled up when searching over longer periods of time" (not very precise terminology) happens because subsearches (as used by join) are silently truncated at 50,000 events, so your join may not have all the events available that you are expecting when you search over extended periods of time. Try something along these lines:
(index=network "arp-inspection" OR "packets received") OR (index=cisco_ise sourcetype=cisco:ise:syslog User_Name="host/*")
| rename mnemonic as Port_Status
| rename Network_Device as NetworkDeviceName
| rename src_interface as src_int
| stats values(device_time) as device_time, values(User_Name) as User_Name, values(src_ip) as src_ip, values(src_mac) as src_mac, values(message_text) as message_text, values(Location) as Location, values(Port_Status) as Port_Status by NetworkDeviceName, src_int
or perhaps:
(index=network "arp-inspection" OR "packets received") OR (index=cisco_ise sourcetype=cisco:ise:syslog User_Name="host/*")
| eval Port_Status=coalesce(Port_Status, mnemonic)
| eval NetworkDeviceName=coalesce(NetworkDeviceName, Network_Device)
| eval src_int=coalesce(src_int, src_interface)
| stats values(device_time) as device_time, values(User_Name) as User_Name, values(src_ip) as src_ip, values(src_mac) as src_mac, values(message_text) as message_text, values(Location) as Location, values(Port_Status) as Port_Status by NetworkDeviceName, src_int
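If you do end up needing join anyway, the cap it silently runs into lives in limits.conf on the search head; a sketch with what I believe is the default value (check the limits.conf spec for your version before changing it, since raising it trades memory and runtime for completeness):
== limits.conf ==
[join]
# maximum rows a join subsearch may return before silent truncation (believed default)
subsearch_maxout = 50000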
I am trying to correlate some data between two indexes. The common fields are the src_interface and the network device name, but the data gets jumbled up when searching over longer periods of time. This is what I am using now:
index=network "arp-inspection" OR "packets received"
| rename mnemonic as Port_Status
| rename Network_Device as "NetworkDeviceName"
| rename src_interface as "src_int"
| join type=inner "NetworkDeviceName", "src_int" [ search index=cisco_ise sourcetype=cisco:ise:syslog User_Name="host/*"]
| table device_time, NetworkDeviceName, User_Name, src_int, src_ip, src_mac, message_text, Location, Port_Status
Hello @livehybrid
Thanks for your time. OK, I understand now; I see what I was missing. Strangely, what I had done was working, and I was perplexed about that. I will test with the configuration you provided; it makes more sense. But I have a quick question: if the logs need to be anonymized before they are sent to the distant_HF, will putting the two outputs in _TCP_ROUTING in the inputs.conf still work?
Many thanks for your clear answer!
Hi @tangtangtang12
I presume you are using ITSI/ITEW, which is where you have installed the content pack? The content pack only provides KPIs and relies on specific data being available; the content pack itself does not onboard the data. Please check out the docs on the content pack data requirements here: https://docs.splunk.com/Documentation/CPWindowsMon/latest/CP/DataReqs
That docs page gives an inputs.conf sample and other info about the data required for these KPIs to run, including the metrics you are looking for.
Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
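To give a feel for the shape of that data, here is a generic perfmon input as it would appear on a Windows UF running the Splunk Add-on for Windows; the object, counters, interval and index below are placeholders, and the exact stanzas the content pack expects are the ones in the DataReqs page above:
== inputs.conf (on the Windows UF) ==
[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 60
# placeholder - use the index the content pack searches
index = windows_perfmon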
Even if there is only one app, there are also at least one serverclass and those 40 clients. The serverclass(es) bind the clients to this app which contains outputs.conf. Usually you should have one app for defining the general output target (the indexers) and another which defines the deployment server (the deployment client configuration) instead of configuring this in the installation GUI. On top of those there are usually many more apps and serverclasses, as @gcusello already said. Here is one great .conf presentation about the DS: https://conf.splunk.com/files/2024/slides/PLA1310C.pdf
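A minimal sketch of that binding on the deployment server; the class name, whitelist pattern and app name are made-up examples:
== serverclass.conf (on the deployment server) ==
[serverClass:all_dc_forwarders]
# match the 40 DC clients by hostname pattern (placeholder)
whitelist.0 = DC*
[serverClass:all_dc_forwarders:app:org_all_forwarder_outputs]
# the app that carries outputs.conf
stateOnClient = enabled
restartSplunkd = true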
This splunk_server_group is, for example, your own additional group defined in the MC setup, like az_hec_test.
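For example, assuming a group named az_hec_test has been defined that way, you can scope a search to it with the splunk_server_group option of the search command:
index=_internal splunk_server_group="az_hec_test"
| stats count by splunk_server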
It's normal to see "splunkd -p 8089 restart" in the process list because that is the command that launched Splunk.  I'm not sure, however, why it appears so many times.  AIUI, there should be a single "splunkd -p 8089 restart" process and additional "splunkd" processes for running searches, but I could be mistaken.
Hi @Haleb
*Yes*, this is to be expected; it reflects how the Splunk instance was started. Essentially, if Splunk was started with "$SPLUNK_HOME/bin/splunk start" the process will be appended with "start", and if it was started with "$SPLUNK_HOME/bin/splunk restart" it will be appended with "restart". If you use systemd then it could be something like "splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd".
Check out the following Community post for a little more info if interested: https://community.splunk.com/t5/Monitoring-Splunk/Difference-between-splunkd-p-8089-restart-or-splunkd-p-8089/m-p/255553
Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
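A quick way to check on your own host which form the launch command took (just a process listing; the [s] in the pattern keeps grep from matching itself):
ps -ef | grep '[s]plunkd'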
Hi @Nicolas2203
You are referencing _SYSLOG_ROUTING, which is for syslog routing, whereas your input is using _TCP_ROUTING. Did you mean to use _TCP_ROUTING in your transforms?
Another thing is that this will not clone your data; it will only *change* the routing. When you specify multiple items in a TRANSFORMS setting they are processed in order, meaning that your network guest route is applied first, then the second one. In your scenario the second transform applies to ALL events because of the "." in its REGEX, so that is the routing which ends up being applied. I think what you are looking for is the following:
== props.conf ==
[fw:firewall]
TRANSFORMS-clone = fwfirewall-clone, fwfirewall-route-network-guest
== transforms.conf ==
[fwfirewall-clone]
DEST_KEY = _TCP_ROUTING
FORMAT = distant_HF,local_indexers
REGEX = .
[fwfirewall-route-network-guest]
REGEX = \bNETWORK-GUEST\b
DEST_KEY = _TCP_ROUTING
FORMAT = local_indexers
How this works is by specifying both outputs in _TCP_ROUTING when the REGEX matches "." (always), and then changing it to local_indexers only IF the event contains NETWORK-GUEST; note that the catch-all transform has to come first in the TRANSFORMS list so the NETWORK-GUEST override is applied last. This could actually be simplified by setting the duplicate output on the input, then just overriding to local_indexers if the event contains NETWORK-GUEST:
== inputs.conf ==
[tcp://22000]
sourcetype = fw:firewall
index = fw_index
_TCP_ROUTING = distant_HF,local_indexers
== props.conf ==
[fw:firewall]
TRANSFORMS-redirectLocal = fwfirewall-route-network-guest
== transforms.conf ==
[fwfirewall-route-network-guest]
REGEX = \bNETWORK-GUEST\b
DEST_KEY = _TCP_ROUTING
FORMAT = local_indexers
Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hello kiran, yes, this is a typo.
@Nicolas2203  Could you please check this? In your props.conf, you've referenced "fwfirewall-route-network-guest-", but in transforms.conf, the stanza is named "fwfirewall-route-network-guest". Is this a typo?
Hi @Hemant_h,
Let me understand your use case:
you have around 40 DCs with the Universal Forwarder installed;
on these UFs I suppose there are some add-ons, one of which contains the outputs.conf file, whose role is to tell the UF which systems should receive the data it forwards;
it seems that the data from the UFs is forwarded to one (or more) Heavy Forwarders that forward it to the Indexers;
then you have a Deployment Server that deploys the add-on containing outputs.conf.
Usually the Splunk_TA_Windows add-on is also installed on the UFs (deployed by the DS) to ingest logs:
is this add-on installed on the UFs?
is it deployed by the DS?
are you receiving logs in Splunk?
which logs are you receiving: internal, Windows, or both?
If you have only the outputs.conf add-on, you should receive only _* logs (internal Splunk logs). Anyway, the usual flow is from the UFs to the HFs, or directly to the IDXs.
Ciao.
Giuseppe
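For reference, a minimal sketch of what such an outputs.conf add-on usually contains; the group name, host names and ports are placeholders:
== outputs.conf (inside the deployed add-on) ==
[tcpout]
defaultGroup = primary_hf
[tcpout:primary_hf]
# the Heavy Forwarders (or indexers) that should receive the UF data
server = hf1.example.com:9997, hf2.example.com:9997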
Hi Splunk community, I have a question on log cloning/redirection.
Purpose: extract logs containing "network-guest" and do not redirect those logs to a distant HF, only to the local indexers.
LOG ENTRY CONFIG
In an app Splunk_TA_FIREWALL:
inputs.conf
[tcp://22000]
sourcetype = fw:firewall
index = fw_index
_TCP_ROUTING = local_indexers
These logs are working perfectly and are stored on my local indexers. Now these logs must be cloned and redirected to a distant HF, except the logs containing "network-guest". This is my props and transforms config:
props.conf
[fw:firewall]
TRANSFORMS-clone = fwfirewall-route-network-guest-, fwfirewall-clone
transforms.conf
[fwfirewall-route-network-guest]
REGEX = \bNETWORK-GUEST\b
DEST_KEY = _SYSLOG_ROUTING
FORMAT = local_indexers
[fwfirewalll-clone]
DEST_KEY = _SYSLOG_ROUTING
FORMAT = distant_HF
REGEX = .
When I check the logs on the distant Splunk, I don't see NETWORK-GUEST logs anymore, and I can still see those logs on the local Splunk. The question is, I'm not sure I'm doing this the right way, and not sure it works 100%. Does someone have good knowledge of this kind of configuration?
Thanks a lot for the help
Nico