All Posts


Hi @sigma  The first thing you could try is adding a $ to the end of the REGEX so that the match is forced to run to the end of the line. Secondly, are there any other extractions that could be overlapping with this? It's just good to rule out the effects of other props.conf settings on your work! Also, instead of DEST_KEY = _meta you could try WRITE_META = true as below, although I don't think this would affect your extraction here:

REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)$
FORMAT = name::$1 version::$2 message::$3
WRITE_META = true

Have you defined your fields.conf for the indexed fields? Add an entry to fields.conf for each new indexed field:

# fields.conf
[<your_custom_field_name>]
INDEXED = true

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
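To sanity-check that REGEX outside Splunk, here is a minimal Python sketch. The sample log line is an assumption shaped to fit the pattern (an HPE iLO-style syslog line); it is not taken from the original question, so substitute a real event from your data:

```python
import re

# Same pattern as the transforms.conf REGEX above
PATTERN = re.compile(
    r"^\w+\s+\d+\s+\d+:\d+:\d+\s+"   # timestamp, e.g. "Jun 12 10:15:30"
    r"\d{1,3}(?:\.\d{1,3}){3}\s+"    # source IP address
    r"\d+\s+\S+\s+"                  # a number and a hostname token
    r"(\S+)(?:\s+(iLO\d+))?"         # name::$1 and optional version::$2
    r"\s+-\s+-\s+-\s+(.*)$"          # "- - -" separator, then message::$3
)

# Hypothetical sample line, assumed for illustration only
sample = "Jun 12 10:15:30 192.168.1.10 1 host01 firmware iLO5 - - - Power restored"

m = PATTERN.match(sample)
if m:
    name, version, message = m.groups()
    print(name, version, message)  # firmware iLO5 Power restored
```

If the trailing $ causes the match to fail on real events, that usually means something (e.g. a carriage return) sits between the message and the end of line.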
Found the same file mysteriously auto-created, and after a bit of tinkering I found what caused its creation, at least in my case:

splunk backup kvstore -pointInTime true -archiveName my_archive

The file vanishes again once the process finishes. But if for some reason it crashes/gets killed/whatever, the file is left in the filesystem.
Hi @lokeshchanana  Case sensitivity of the token is certainly causing you an issue here; if the token you're setting is "searchCriteria" then you cannot use "searchcriteria". Also, you can use token filters to add quotes around the token if you prefer, so

| eval search_col = if($searchCriteria|s$ == "s_user", user, path)

should behave the same as:

| eval search_col = if("$searchCriteria$" == "s_user", user, path)
Hi @ewok  The Splunk Enterprise systemd unit (splunkd.service) is not shipped with the same AmbientCapabilities=CAP_DAC_READ_SEARCH line that the Universal Forwarder package adds to its own unit (SplunkForwarder.service). That line gives the UF process the Linux capability to bypass discretionary access controls (DAC) and read files such as /var/log/audit/audit.log even when the file is mode 0600 and owned by root. Enterprise installs simply omit that stanza, so splunkd runs with the default capability set and cannot open the audit log unless you relax the permissions or run Splunk as root (both STIG failures).

You can alter this behaviour for Splunk Enterprise (full install) by editing the splunkd.service unit:

1. Create the override (run as root): systemctl edit splunkd.service
2. Add the AmbientCapabilities line:
[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH
3. Reload systemd and restart Splunk:
systemctl daemon-reload
systemctl restart splunkd.service

After the restart the running splunkd should have the same capability that the UF has and can read /var/log/audit/audit.log without touching file permissions or adding ACLs. The override should not be overwritten by Splunk package upgrades, but always verify after an upgrade that it's still in place.
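For reference, the drop-in created by `systemctl edit` typically ends up as a file like the following (the path shown is the systemd default location for overrides; verify on your own system):

```ini
# /etc/systemd/system/splunkd.service.d/override.conf
# Grant splunkd the DAC-bypass read capability, matching the UF unit
[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH
```

After the restart you can confirm it took effect with `systemctl show splunkd.service -p AmbientCapabilities`.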
@cboillot  If you are using syslog-ng, it is preferable to use the host_segment option to extract the host value. This approach helps avoid potential future issues caused by changes in hostname naming conventions or logging patterns that might break regex-based extraction. You can configure the destination stanza in your syslog configuration file to include the device IP address dynamically in the log file path, and then use the host_segment setting to extract the host value for indexing in Splunk.

Example destination stanza for the syslog-ng .conf file (the macros will be different if you are using rsyslog or another syslog daemon):

destination d_device_logs { file("/var/log/syslog/$SOURCEIP/${YEAR}-${MONTH}-${DAY}.log"); };

And update inputs.conf with host_segment, e.g.:

[monitor:///var/log/syslog/...]
host_segment = 4

But if you want to stick with regex extraction then use:

props.conf
[cisco:ise:syslog]
TRANSFORMS-set_host = ise_host_override

transforms.conf
[ise_host_override]
REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\.\d+\s+(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
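As a quick offline check of the regex route, a short Python sketch follows. The sample line is a hypothetical ISE-style syslog line assumed for illustration; real ISE output may differ, so test against your own events:

```python
import re

# Same pattern as the ise_host_override REGEX above:
# timestamp with fractional seconds, then the host as the next token
HOST_RE = re.compile(r"^\w+\s+\d+\s+\d+:\d+:\d+\.\d+\s+(\S+)")

# Hypothetical sample line, assumed for illustration
sample = "Jun 12 10:15:30.123 ise-node01 CISE_Passed_Authentications 0000123 1 0"

m = HOST_RE.match(sample)
print(m.group(1) if m else "no match")  # ise-node01
```

Note the pattern requires fractional seconds (`\.\d+`); if your timestamps lack them, the match will fail and the host override will silently not apply.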
Almost! Yes, using the correct case for the token is vital, but putting the token in double quotes is also vital. Using case instead of if is not important. Token values are substituted as a text substitution into the code of the dashboard (whether Studio or SimpleXML) before the dashboard code is executed. For example, if the searchCriteria token from the dropdown had the value "s_user", the line

| eval search_col = if($searchCriteria$ == "s_user", user, path)

would become

| eval search_col = if(s_user == "s_user", user, path)

i.e. testing whether the s_user field has the string value "s_user". Using double quotes

| eval search_col = if("$searchCriteria$" == "s_user", user, path)

gives the line the intended meaning

| eval search_col = if("s_user" == "s_user", user, path)
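That text-substitution behaviour can be mimicked in a few lines of Python to see why the quotes matter. The `render` helper here is purely illustrative, not part of Splunk:

```python
def render(template: str, tokens: dict) -> str:
    # Mimic Simple XML token substitution: plain text replacement
    # performed before the search string is ever parsed.
    for name, value in tokens.items():
        template = template.replace(f"${name}$", value)
    return template

tokens = {"searchCriteria": "s_user"}

unquoted = '| eval search_col = if($searchCriteria$ == "s_user", user, path)'
quoted = '| eval search_col = if("$searchCriteria$" == "s_user", user, path)'

print(render(unquoted, tokens))
# | eval search_col = if(s_user == "s_user", user, path)   <- field comparison
print(render(quoted, tokens))
# | eval search_col = if("s_user" == "s_user", user, path) <- string comparison
```

The unquoted form compares a field named s_user against a string, which is why the condition is false whenever that field doesn't hold the value "s_user".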
@lokeshchanana  Simple XML tokens are case-sensitive. You must use $searchCriteria$, matching the capitalization in your input name:

| eval search_col = if("$searchCriteria$"=="s_user", user, path)

You can also use case() with token substitution:

| eval search_col = case("$searchCriteria$"=="s_user", user, 1==1, path)

Quoting forces Splunk to treat "$searchCriteria$" as a string literal and compare it properly.
@ewok  The UF explicitly supports adding Linux capabilities via the AmbientCapabilities=CAP_DAC_READ_SEARCH setting, but a full Splunk Enterprise instance does not ship with AmbientCapabilities, so it lacks these elevated per-file read permissions. So your options to consider are:

- Use the Universal Forwarder on Splunk server hosts for audit log collection and keep Splunk Enterprise running as non-root.
- Run splunkd as root (least preferred).

My recommendation: if strict STIG compliance is essential, the best approach is to install the Universal Forwarder (UF) on your Splunk servers and use it solely for audit log collection.

https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/universal-forwarder-manual/9.4/working-with-the-universal-forwarder/manage-a-linux-least-privileged-user
Running Splunk 9.3.5 on RHEL 8, STIG-hardened environment. The non-Splunk RHEL instances running a Universal Forwarder have no issue accessing the audit.log files, apparently by virtue of the statement AmbientCapabilities=CAP_DAC_READ_SEARCH located in the /etc/systemd/system/SplunkForwarder.service file. However, the same is not true on the Splunk instances: they require read access permissions via a file ACL or something similar, and these options all result in multiple STIG compliance findings, each of which requires a write-up as a vendor (Splunk) dependency.

Question - why? Why can't Splunk access the audit.log files the same way as the UF? Or is there some way to do the same sort of thing with AmbientCapabilities for splunkd.service? It is tempting to quit collecting these logs with Splunk itself and install the UF on the Splunk instances too.
I am making a dashboard with the dropdown input called $searchCriteria$. I am trying to set the value of a search_col based on the value of the $searchCriteria$ token. I have tried the following:

| eval search_col = if($searchcriteria$ == "s_user", user, path)
| eval search_col = if('$searchcriteria$' == "s_user", user, path)
| eval search_col = if($searchcriteria$ == 's_user', user, path)
| eval search_col = if('$searchcriteria$' == 's_user', user, path)

Even tried:

| eval search_col = if(s_user == s_user, user, path)

The value of search_col is the same as path. I have tested, and the value of the $searchcriteria$ is getting set properly. What am I doing wrong?
Did you ever get an answer for this? I'm having the same problem: my universal forwarder sends the data to a specific index on my indexer, but TA_cisco_ios doesn't seem to apply the transform to correct the hostname for me. I'm not clear on what specific change to make in the TA's props.conf or transforms.conf to cover the specific index.
Hi,

I am trying to form a custom link to the episode/event in the email alert triggered from Splunk ITSI. However, when I open the link to that event or episode directly, it always opens the alert and episode list, and you then have to search again for the events and check the details. Is there a way to get a link to the episode directly that a person can open without searching from the list of the events?

The link to a specific episode, e.g.
https://splunkcloud.com/en-US/app/itsi/itsi_event_management?tab=layout_1&emid=1sdfdff-3cd3-11f0-b7a7-44561c0a81024&earliest=%40d&latest=now&tabid=all-events
when opened in a separate window, does not open that specific episode. (The above URL is modified to not share the exact URL for the episode.)
Thanks for the response! @livehybrid @richgalloway

@richgalloway - yes, the documentation does mention that numerics will be sorted as numbers. I am confused because I thought that by putting quotes around the numbers, they would automatically get appended as strings to the fruit field and not as numbers. So is it right to conclude that even though I added double quotes for the numbers while appending them, by default Splunk does not treat them as strings, and explicit type conversion is required as an additional step?
Hi @sswigart  Please can you confirm whether you can see the _internal events for the hosts which are monitoring those files?
I have a requirement to monitor log files created by Trellix on my Windows 11 and 2019 hosts. The log files are located at C:\ProgramData\McAfee\Endpoint Security\Logs\ :

AccessProtection_Activity.log
ExploitPrevention_Activity.log
OnDemandScan_Activity.log
SelfProtection_Activity.log

My stanzas in inputs.conf are configured as:

[monitor://C:\ProgramData\McAfee\Endpoint Security\Logs\AccessProtection_Activity.log
disabled = 0
index = winlogs
sourcetype = WinEventLog:HIPS
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXel = false

Same format for each log. For some reason Splunk is not ingesting the log data.
Hi @cboillot  In that case, you could use props/transforms like this on the first HF/indexer that the data hits:

# props.conf
[your_sourcetype]
TRANSFORMS-host = ise_host_extraction

# transforms.conf
[ise_host_extraction]
# https://regex101.com/r/7VrxpN/1
REGEX = ^\S+\s+\S+\s+\S+\s+(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host
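This REGEX simply skips the first three whitespace-delimited tokens and captures the fourth. A minimal Python sketch to verify that behaviour; the sample event is hypothetical, so check it against your actual data:

```python
import re

# Same pattern as ise_host_extraction: skip three tokens, capture the fourth
HOST_RE = re.compile(r"^\S+\s+\S+\s+\S+\s+(\S+)")

# Hypothetical event shaped like "Mon day time host ..."; assumed for illustration
sample = "Jun 12 10:15:30.123 ise-psn-01 CISE_Failed_Attempts 0000456"

m = HOST_RE.match(sample)
print(m.group(1) if m else "no match")  # ise-psn-01
```

Because it counts tokens rather than matching the timestamp format, it tolerates fractional-second variations, but it will capture the wrong token if the event ever starts with a different number of leading fields.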
I am trying to invoke the threadPrint operation on the MBean java.lang:type=Runtime. I think the UI is telling me that it takes a String array as input, but I can't figure out how to specify an array. I have tried many combinations:

Blank
Empty double quotes
Empty single quotes
Double-quoted values
Single-quoted values
Curly braces
Square brackets
A single character
Multiple characters

A few give an immediate syntax error, like quotes aren't allowed. Most give something like this:

failed with error = Unsupported type = [Ljava.lang.String;, value = l

I think it's trying to tell me that my input is not a String array. How do I specify an array?

Thanks
Hello PickleRick,

I now know what you mean: the demo link using "| makeresults" fails. Lucky for me, the HTML I use seems OK, and I do get the display of the table and the data in the html code/file. Thanks for the tip.  @eholz1
Yes, it is possible exactly that way @livehybrid described. But you still have to manually log in to CM and validate the bundle and push it so it doesn't save you much work (and makes the process... less obvious)