All Posts


Hello, I want to filter Cisco ISE logging activities by authentication, authorization, and accounting. So far, I've been able to filter by Accounting, but not the other two. In Cisco ISE logging, TACACS Accounting has been linked to the Splunk server, and I am running the following filter:

<ISE Server> AND "CISE_TACACS_Accounting" AND Type=Accounting | stats count(_time) by _time

Any advice is much appreciated. Thanks.
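A sketch of where a combined filter could end up: Cisco ISE only forwards the logging categories that are mapped to the remote logging target, so the authentication/authorization categories would need to be pointed at the Splunk server as well. The CISE_Passed_Authentications and CISE_Failed_Attempts message class names below are assumptions to verify against your ISE logging category configuration.

<ISE Server> ("CISE_TACACS_Accounting" OR "CISE_Passed_Authentications" OR "CISE_Failed_Attempts")
| eval activity=case(match(_raw, "CISE_TACACS_Accounting"), "Accounting",
                     match(_raw, "CISE_Passed_Authentications|CISE_Failed_Attempts"), "Authentication/Authorization")
| timechart count by activity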
Hi livehybrid, Thank you for your help and the information! I did some tests and it is looking promising. I just have an issue with the poolsz field: the value I get for total_license_limit_gb, or "Daily License Limit", is 18446744073709551615, and my license limit is 300GB. Do I need to change something to get the 300GB limit? I guess the part to change is somewhere around ", sum(b) as total_usage_bytes by _time", but I'm not sure what. Thank you!
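For what it's worth, 18446744073709551615 is 2^64-1, which usually indicates the license pool is sized as effectively unlimited rather than at the 300GB stack quota. One possible workaround is to read the quota from the licenser REST endpoint instead of from poolsz; this is only a sketch, and the endpoint and field names (quota, status, type) should be verified on your Splunk version:

| rest /services/licenser/licenses splunk_server=local
| search status=VALID type=enterprise
| stats sum(quota) as license_quota_bytes
| eval total_license_limit_gb=round(license_quota_bytes/1024/1024/1024,1)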
@bowesmana, thanks for sharing one way of doing it. I made some changes and am trying to get the result, but it is returning the time in epoch form (that value is coming from the outer _time), because when I convert it to a readable format again it gets converted to request_time.

index=aws_XXX Method response body after transformations: sourcetype="aws:apigateway" business_unit=XX aws_account_alias="XXXXX" network_environment=qa source="API-Gateway-Execution-Logs*" (application="XXX" OR application="XXXX")
| rex field=_raw "Method response body after transformations: (?<json>[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| where status=200 and messageGUID="af2ee9ec-9b02-f163-718a-260e83a877f0"
| rename _time as request_time
| fieldformat request_time=strftime(request_time, "%F %T")
| table messageGUID, request_time
| join type=inner messageGUID
    [ search kubernetes_cluster="XXXX*" index="aws_xXXX" sourcetype="kubernetes_logs" source=*XXXX*
    | rex field=_raw "sendData: (?<json>[^$]+)"
    | spath input=json path="header.messageGUID" output=messageGUID
    | where messageGUID="af2ee9ec-9b02-f163-718a-260e83a877f0"
    | rename _time as pubsub_time
    | fieldformat pubsub_time=strftime(pubsub_time, "%F %T")
    | table messageGUID, pubsub_time ]
| table messageGUID, request_time, pubsub_time

This is the inner search run on its own:

kubernetes_cluster="eks-XXXX" index="aws_XXXX" sourcetype="kubernetes_logs" source=*da_XXXXX* "sendData"
| rex field=_raw "sendData: (?<json>[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID
| where messageGUID="af2ee9ec-9b02-f163-718a-260e83a877f0"
| rename _time as pubsub_time
| fieldformat pubsub_time=strftime(pubsub_time, "%F %T")
| table messageGUID, pubsub_time

When I run the inner search on its own I do get a value. Also, I would like to understand the option you have provided: can I run it for multiple datasets?
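One thing that may explain the epoch value: fieldformat only changes how a field is rendered, not its stored value, so the formatting can appear to be lost once the field flows through later commands such as join or table. A sketch of the alternative, replacing the display-only formatting with eval so the readable timestamp is baked into the field (field names taken from the search above):

| rename _time as request_time
| eval request_time=strftime(request_time, "%F %T")
| table messageGUID, request_time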
We are not Splunk Support - we're users like you. To properly troubleshoot an issue using regular expressions, we need to see some sample (sanitized) data.  Currently, I'm concerned that events with "22" in the timestamp will be sent to nullQueue. The preferred way to specify the index for data is to put the index name in inputs.conf.  If the index name is absent from inputs.conf, data will go to the default index.  
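For reference, a minimal sketch of that inputs.conf approach, using the /opt/log/indexsource path mentioned in the question; the index and sourcetype names are placeholders:

[monitor:///opt/log/indexsource]
index = your_index_name
sourcetype = your_sourcetype
disabled = false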
Hi @zafar, The event you're seeing indicates that the WMI input on your Universal Forwarder (UF) has performed a clean shutdown. This typically happens when the Splunk service is stopped/restarted, or it can happen if your WMI input runs on an interval, in which case it is simply notifying you that it has completed. Here are some steps you can take to investigate further:

Check your ingested data: Are you actually missing any WMI events from the UF?
Check the Splunk service: Ensure that the Splunk service is running on the Windows machine. You can check this in the Windows Services console or by running the following command in a command prompt: sc query SplunkForwarder
Review splunkd.log: Look for any errors or warnings related to splunk-wmi.exe in the splunkd.log file, located in the $SPLUNK_HOME\var\log\splunk directory.
Check for resource issues: Make sure the Windows machine has sufficient resources (CPU, memory, disk space) to run the Splunk UF. Insufficient resources can lead to unexpected shutdowns or halted processes.
Review scheduled restarts: If you have any scheduled tasks or scripts that restart the Splunk service or the Windows machine, ensure they are configured correctly and not causing unintended shutdowns.

If you continue to experience issues, please provide more details about your environment, including the Splunk version, operating system, and any relevant configuration settings. This will help with further troubleshooting.

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
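If the UF's internal logs reach your indexers, a sketch of a search for the splunkd.log review step above; the host value is a placeholder:

index=_internal host=<your_uf_host> source=*splunkd.log* "splunk-wmi" (log_level=ERROR OR log_level=WARN)
| stats count by log_level component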
Hi @zafar, WMI is a way to extract events from a remote Windows system. Are you speaking of the UF stopping events on the receiver or on the monitored system? Could you better describe how this UF is working? Ciao. Giuseppe
Hi, a Windows UF stopped sending events. I saw this event in the _internal index: 'message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" Clean shutdown completed.' The UF version is 9.2.1. What does it mean, and how can I avoid it happening again?
Hi @lux209, You're on the right track with your search. To dynamically retrieve the license limit within your search, you can leverage license_usage.log itself, which contains details about the license limits. You can modify your search to use the type=RolloverSummary data in license_usage.log; this record type includes both the daily license usage and the daily license limit. Here's how you can adjust your search (note that poolsz is in bytes, so it is converted to GB before the comparison):

index=_internal source=*license_usage.log type=RolloverSummary host=macdev
| stats latest(poolsz) as total_license_limit_bytes, sum(b) as total_usage_bytes by _time
| eval total_usage_gb=round(total_usage_bytes/1024/1024/1024,3)
| eval total_license_limit_gb=round(total_license_limit_bytes/1024/1024/1024,3)
| rename _time as Date, total_usage_gb as "Daily License Usage", total_license_limit_gb as "Daily License Limit"
| where 'Daily License Usage' > 'Daily License Limit'

Key points:
type=RolloverSummary: this record type provides a summary of the license usage and limits for each pool.
poolsz field: represents the total license limit for the day in bytes, which is converted to GB above.
Dynamic limits: by using this field, you avoid hardcoding the license limit in your alert.

Consider setting up alerts not only when the limit is exceeded but also when usage approaches a certain percentage of the limit (e.g. 80-90%) to give you more lead time to react; a sketch of that variant follows below. For further details and best practices, you can refer to the Splunk documentation on license usage and license monitoring.

Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
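The percentage-based alert mentioned above could look roughly like this; it reuses the same RolloverSummary fields, with an 80% threshold as an example:

index=_internal source=*license_usage.log type=RolloverSummary
| stats latest(poolsz) as total_license_limit_bytes, sum(b) as total_usage_bytes by _time
| eval pct_used=round(100*total_usage_bytes/total_license_limit_bytes,1)
| where pct_used >= 80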
Hi @Namchin_Bar  I am surprised you are getting any data at all, because drop_other_ports being second in the list means it runs AFTER allow_ports and would set nullQueue for everything. You should set it first in the list and 'allow_ports' second, as sketched below. As it is, you're getting all the data, which makes me think that neither transform is actually being applied. Is your source:: value in props.conf definitely correct? Can you confirm whether you are running these settings on a Universal or Heavy Forwarder? Is the data coming from another Splunk forwarder? Is that a UF or HF?

Did this answer help you? Please help by:
Adding kudos to show if it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
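A sketch of that reordering, using the source path from the original question; the word-boundary regex is an assumption added to reduce accidental matches (for example "22" inside a timestamp), so adjust it to your log format:

# props.conf
[source::/opt/log/indexsource/*]
TRANSFORMS-filter_ports = drop_other_ports, allow_ports

# transforms.conf
[drop_other_ports]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[allow_ports]
REGEX = \b(21|22|23|3389)\b
DEST_KEY = queue
FORMAT = indexQueue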
The -R parameter is so that you list contents recursively. If all directories and files are owned by Splunk:Splunk and have 700 (or 600 for files) permissions, that should be OK.
Dear Splunk Support,

I am encountering an issue while configuring Splunk to filter logs based on specific ports (21, 22, 23, 3389) using props.conf and transforms.conf. Despite following the proper configuration steps, the filtering is not working as expected. Below are the details:

System Details:
Splunk Version: 9.3.2
Deployment Type: Heavy Forwarder
Log Source: /opt/log/indexsource/*

Configuration Applied:

props.conf (located at $SPLUNK_HOME/etc/system/local/props.conf):
[source::/opt/log/indexsource/*]
TRANSFORMS-filter_ports = filter_specific_ports

transforms.conf (located at $SPLUNK_HOME/etc/system/local/transforms.conf):
[filter_specific_ports]
REGEX = .* (21|22|23|3389) .*
DEST_KEY = queue
FORMAT = indexQueue

I have also tried other variants, such as:

transforms.conf:
[filter_ports]
REGEX = (21|22|23|3389)
DEST_KEY = queue
FORMAT = indexQueue

[drop_other_ports]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

and:

props.conf:
[source::your_specific_source]
TRANSFORMS-filter_ports = allow_ports, drop_other_ports

transforms.conf:
[allow_ports]
REGEX = (21|22|23|3389)
DEST_KEY = _MetaData:Index
FORMAT = your_index_name

[drop_other_ports]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Issue Observed:
The expected behavior is that logs containing these ports should be routed to indexQueue, but they are not being filtered as expected. All logs are still being indexed in the default index. I checked for syntax errors and restarted Splunk, but the issue persists.

Troubleshooting Steps Taken:
Verified regex: confirmed that the regex .* (21|22|23|3389) .* correctly matches log lines using regex testing tools.
Checked Splunk logs: looked for errors in $SPLUNK_HOME/var/log/splunk/splunkd.log but found no related warnings.
Restarted Splunk: restarted the service after configuration changes using splunk restart.
Checked events in Splunk: ran searches to confirm that logs with these ports were still being indexed.

Request for Assistance:
Could you please advise on:
Whether there are any syntax issues in my configuration?
Whether additional debugging steps are needed?
Alternative methods to ensure only logs containing ports 21, 22, 23, and 3389 are routed correctly?

Your assistance in resolving this issue would be greatly appreciated.

Best regards,
Namchin Baranzad
Information Security Analyst
M Bank
Email: namchin.b@m-bank.mn
Hello, I'm building a search to get alerted when we go over our license. I have a search that works well to get the license usage and alert when it goes over the 300GB license:

index=_internal source=*license_usage.log type=Usage pool=*
| eval _time=strftime(_time,"%m-%d-%y")
| stats sum(b) as ub by _time
| eval ub=round(ub/1024/1024/1024,3)
| eval _time=strptime(_time,"%m-%d-%y")
| sort _time
| eval _time=strftime(_time,"%m-%d-%y")
| rename _time as Date ub as "Daily License Quota Used"
| where 'Daily License Quota Used' > 300

But as you can see, I set the 300GB limit manually in the search. Is there a way to get this info from Splunk directly? I see that the CMC uses `sim_licensing_limit` to get the info, but this doesn't work when searching outside the CMC. Thanks! Lucas
output on splunk  
Hi, I just tried this and the output is still not organized.
If _time is empty then it's probably not finding the data, or it may be hitting join limitations. If you run the inner search and look for ONE of the messageGUID values, does it come up with a result? If you run the search without looking for a particular messageGUID, how many results do you get? join has a limit of 50,000 results from the inner search. join has many limitations and you are often best off using stats to do the same thing. This is a stats version and may help.

(index=aws* Method response body after transformations: sourcetype="aws:apigateway" business_unit=XX aws_account_alias="xXXX" network_environment=test source="API-Gateway-Execution-Logs*") OR (kubernetes_cluster="eks-XXXXX*" index="awsXXXX" sourcetype="kubernetes_logs" source=*XXXX* "sendData")
``` Search both data sets above ```
``` Now get fields from data set 1 ```
| rex field=_raw "Method response body after transformations: (?<json>[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID1
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
``` Now get fields from data set 2 ```
| rex field=_raw "sendData: (?<json>[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID2
``` Find the common GUID ```
| eval messageGUID=coalesce(messageGUID1, messageGUID2)
``` Filter first data set to status=200 and all of second data set where we have a guid ```
| where status=200 OR isnotnull(messageGUID2)
``` Get request time ```
| eval request_time=if(status=200, _time, null())
| stats values(_time) as times values(request_time) as request_time by messageGUID
| fieldformat request_time=strftime(request_time, "%F %T")
| fieldformat times=strftime(times, "%F %T")
Actually, it's the sort command that is capping the results to 10k - that one always bites me; if you want to sort ALL results you must do sort 0 - ... Glad to hear it worked. As @yuanliu said, recommending map is not often done here, as the map command runs its searches sequentially; but if you have few errors, map will not have to make many iterations. Note that by default it will only run over 10 results unless you override the maxsearches parameter.
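Two small sketches of the defaults mentioned above; the index, sourcetype, and field names are placeholders:

``` sort 0 removes the 10,000-result cap ```
index=my_index | sort 0 - _time

``` map defaults to maxsearches=10; raise it explicitly if you expect more iterations ```
index=my_index sourcetype=my_errors
| dedup error_id
| map maxsearches=100 search="search index=my_index error_id=$error_id$ | stats count by error_id"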
Thank you PickleRick, this was probably the reason, unfortunately I couldn't edit the max_stream_window.
Hi all, I tried restarting the Splunk service on the heavy forwarder and logs are coming in again. Can I know why Splunk suddenly stops receiving logs so that we need to restart the service for it to work again? Thanks
When I wrote that command, the values I set were in the right place. However, when I go to IDX and save the data, the converted data does not seem to be saved.
I want to add one more flow based on host. I want to store one index and one sourcetype in IDX with one index and two sourcetypes.