All Posts

Hi, a Windows UF stopped sending events. I saw this event in the _internal index: 'message from "C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe" Clean shutdown completed.' The UF version is 9.2.1. What does it mean, and how can I avoid it happening again?
Hi @lux209,

You're on the right track with your search. To retrieve the license limit dynamically within your search, you can leverage license_usage.log itself, which contains details about the license limits. Modify your search to use the type=RolloverSummary data in license_usage.log; this record type includes both the daily license usage and the daily license limit. Here's how you can adjust your search:

index=_internal source=*license_usage.log type=RolloverSummary host=macdev
| stats latest(poolsz) as total_license_limit_bytes, sum(b) as total_usage_bytes by _time
| eval total_usage_gb=round(total_usage_bytes/1024/1024/1024,3)
| eval total_license_limit_gb=round(total_license_limit_bytes/1024/1024/1024,3)
| rename _time as Date, total_usage_gb as "Daily License Usage", total_license_limit_gb as "Daily License Limit"
| where 'Daily License Usage' > 'Daily License Limit'

Key points:
- type=RolloverSummary: this record type provides a summary of the license usage and limits for each pool.
- poolsz field: represents the total license limit for the day in bytes, which the search converts to GB.
- Dynamic limits: by using this field, you avoid hardcoding the license limit in your alert.

Consider setting up alerts not only when the limit is exceeded but also when usage approaches a certain percentage of the limit (e.g. 80-90%) to give yourself more lead time to react. For further details and best practices, refer to the Splunk documentation on license usage and license monitoring.

Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
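A minimal sketch of that early-warning variant, building on the RolloverSummary search above (the 80% threshold is illustrative, adjust to taste):

index=_internal source=*license_usage.log type=RolloverSummary host=macdev
| stats latest(poolsz) as limit_bytes, sum(b) as usage_bytes by _time
| eval usage_gb=round(usage_bytes/1024/1024/1024,3)
| eval limit_gb=round(limit_bytes/1024/1024/1024,3)
``` percentage of the daily limit already consumed ```
| eval pct_used=round(usage_gb/limit_gb*100,1)
``` alert once usage reaches 80% of the limit ```
| where pct_used >= 80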
Hi @Namchin_Bar

I am surprised you are getting any data at all, because drop_other_ports, being second in the list, runs AFTER allow_ports and would set nullQueue for everything. You should put it first in the list and 'allow_ports' second. As it is, you're getting all the data, which makes me think that neither transform is actually being applied. Is the source:: value in your props.conf definitely correct?

Can you confirm whether you are running these settings on a Universal or Heavy Forwarder? Is the data coming from another Splunk forwarder, and if so, is that a UF or an HF?

Did this answer help you? Please help by:
- Adding kudos to show if it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
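For reference, a minimal sketch of that ordering, reusing the source path from the question (the word-boundary regex is my own suggestion, to avoid matching the port numbers inside longer digit strings):

props.conf:
[source::/opt/log/indexsource/*]
TRANSFORMS-filter_ports = drop_other_ports, allow_ports

transforms.conf:
# First send everything to the null queue...
[drop_other_ports]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then route events containing the wanted ports back to the index queue
[allow_ports]
REGEX = \b(21|22|23|3389)\b
DEST_KEY = queue
FORMAT = indexQueue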
The -R parameter makes the listing recursive. If all directories and files are owned by splunk:splunk and have 700 (or 600 for files) permissions, that should be OK.
Dear Splunk Support,

I am encountering an issue while configuring Splunk to filter logs based on specific ports (21, 22, 23, 3389) using props.conf and transforms.conf. Despite following the proper configuration steps, the filtering is not working as expected. Below are the details:

System Details:
Splunk Version: 9.3.2
Deployment Type: Heavy Forwarder
Log Source: /opt/log/indexsource/*

Configuration Applied:

props.conf (located at $SPLUNK_HOME/etc/system/local/props.conf):
[source::/opt/log/indexsource/*]
TRANSFORMS-filter_ports = filter_specific_ports

transforms.conf (located at $SPLUNK_HOME/etc/system/local/transforms.conf):
[filter_specific_ports]
REGEX = .* (21|22|23|3389) .*
DEST_KEY = queue
FORMAT = indexQueue

I have also tried other variations, such as:

transforms.conf:
[filter_ports]
REGEX = (21|22|23|3389)
DEST_KEY = queue
FORMAT = indexQueue

[drop_other_ports]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

and:

props.conf:
[source::your_specific_source]
TRANSFORMS-filter_ports = allow_ports, drop_other_ports

transforms.conf:
[allow_ports]
REGEX = (21|22|23|3389)
DEST_KEY = _MetaData:Index
FORMAT = your_index_name

[drop_other_ports]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Issue Observed:
The expected behavior is that logs containing these ports should be routed to indexQueue, but they are not being filtered as expected. All logs are still being indexed in the default index. I checked for syntax errors and restarted Splunk, but the issue persists.

Troubleshooting Steps Taken:
1. Verified the regex: confirmed that the regex .* (21|22|23|3389) .* correctly matches log lines using regex testing tools.
2. Checked Splunk logs: looked for errors in $SPLUNK_HOME/var/log/splunk/splunkd.log but found no related warnings.
3. Restarted Splunk: restarted the service after configuration changes using splunk restart.
4. Checked events in Splunk: ran searches to confirm that logs with these ports were still being indexed.

Request for Assistance:
Could you please advise on:
1. Whether there are any syntax issues in my configuration?
2. Whether additional debugging steps are needed?
3. Alternative methods to ensure only logs containing ports 21, 22, 23, and 3389 are routed correctly?

Your assistance in resolving this issue would be greatly appreciated.

Best regards,
Namchin Baranzad
Information Security Analyst
M Bank
Email: namchin.b@m-bank.mn
Hello,

I'm building a search to get alerted when we go over the license. I have a search that works well to get the license usage and alert when it goes over the 300GB license:

index=_internal source=*license_usage.log type=Usage pool=*
| eval _time=strftime(_time,"%m-%d-%y")
| stats sum(b) as ub by _time
| eval ub=round(ub/1024/1024/1024,3)
| eval _time=strptime(_time,"%m-%d-%y")
| sort _time
| eval _time=strftime(_time,"%m-%d-%y")
| rename _time as Date ub as "Daily License Quota Used"
| where 'Daily License Quota Used' > 300

But as you can see, I set the 300GB limit manually in the search. Is there a way to get this info from Splunk directly? I see that the CMC uses `sim_licensing_limit` to get the info, but this doesn't work when searching outside the CMC.

Thanks!
Lucas
Output on Splunk
Hi, I just tried this and the output is still not organized.
If _time is empty then it's probably not finding the data, or it may be hitting join limitations.

If you run the inner search and look for ONE of the messageGUID values, does it come up with a result? If you run the search without looking for a particular messageGUID, how many results do you get? join has a limit of 50,000 results from the inner search.

join has many limitations and you are often best off using stats to do the same thing. This is a stats version and may help.

(index=aws* Method response body after transformations: sourcetype="aws:apigateway" business_unit=XX aws_account_alias="xXXX" network_environment=test source="API-Gateway-Execution-Logs*") OR (kubernetes_cluster="eks-XXXXX*" index="awsXXXX" sourcetype="kubernetes_logs" source=*XXXX* "sendData")
``` Search both data sets above ```
``` Now get fields from data set 1 ```
| rex field=_raw "Method response body after transformations: (?<json>[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID1
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
``` Now get fields from data set 2 ```
| rex field=_raw "sendData: (?<json>[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID2
``` Find the common GUID ```
| eval messageGUID=coalesce(messageGUID1, messageGUID2)
``` Filter first data set to status=200 and all of second data set where we have a guid ```
| where status=200 OR isnotnull(messageGUID2)
``` Get request time ```
| eval request_time=if(status=200, _time, null())
| stats values(_time) as times values(request_time) as request_time by messageGUID
| fieldformat request_time=strftime(request_time, "%F %T")
| fieldformat times=strftime(times, "%F %T")
Actually, it's the sort command that is capping the results to 10k - that one always bites me. If you want to sort ALL results you must do sort 0 - ... Glad to hear it worked. As @yuanliu said, recommending map is not often done here, as map runs its searches sequentially; if you have few errors, the map will not have to make many iterations, but by default it will only run over 10 results unless you override the parameters.
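To illustrate both defaults, here is a minimal sketch (the field name and the inner search are placeholders, not taken from this thread):

``` sort caps its output at 10,000 results by default; 0 removes the cap ```
| sort 0 - _time

``` map runs its search once per incoming row, 10 rows at most unless maxsearches is raised; $host$ is a placeholder field from the incoming results ```
| map maxsearches=100 search="search index=_internal log_level=ERROR host=$host$"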
Thank you PickleRick, this was probably the reason. Unfortunately, I couldn't edit max_stream_window.
Hi all, I tried restarting the Splunk service on the heavy forwarder and logs are coming in again. Can I know why Splunk suddenly stops receiving logs, so that we need to restart the service for it to work again? Thanks
When I wrote that command, the values I set were in the right place. However, when I check the IDX where the data is saved, the converted data does not seem to have been stored.
I want to add one more flow based on host: I want to store one index, one sourcetype on the IDX as one index with two sourcetypes.
Hi Victor, thank you so much for your response. I attached a file showing what we do in sequence, and the popup; maybe this helps to understand what we do, right or wrong. Thanks, Max
index=aws* Method response body after transformations: sourcetype="aws:apigateway" business_unit=XX aws_account_alias="xXXX" network_environment=test source="API-Gateway-Execution-Logs*"
| rex field=_raw "Method response body after transformations: (?<json>[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| where status=200
| rename _time as request_time
| fieldformat request_time=strftime(request_time, "%F %T")
| join type=inner messageGUID
    [ search kubernetes_cluster="eks-XXXXX*" index="awsXXXX" sourcetype="kubernetes_logs" source=*XXXX* "sendData"
    | rex field=_raw "sendData: (?<json>[^$]+)"
    | spath input=json path="header.messageGUID" output=messageGUID
    | table messageGUID, _time ]
| table messageGUID, request_time, _time

_time is coming out as null in the output.

Also, how can I rename this field?
max_stream_window = <integer>
* For the streamstats command, the maximum allowed window size.
* Default: 10000

This is probably the cause.
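If you do have access to configuration files on the search head, a sketch of raising the limit might look like this - note that the stanza shown is an assumption on my part, so check limits.conf.spec for your version before applying it:

$SPLUNK_HOME/etc/system/local/limits.conf:
# Assumption: max_stream_window sits under [default]; verify the correct
# stanza in limits.conf.spec for your Splunk version.
[default]
max_stream_window = 50000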
I tried the above dashboard code.

First screenshot: no dropdown is selected.

Second screenshot: the "test" env is selected and the query started running for "Unique User/Unique Client". The "np-" value is applied to index source="/aws/lambda/g-lambda-au-test". "test" is substituted into the query already; it auto-ran without selecting the Data Entity dropdown or the time.
Thank you for the help bowesmana. This solution works, but it seems to cap my results at 10k events. Is this an inherent Splunk thing, or am I missing a piece of the puzzle? I did search for only the INCLUDE=YES events.

``` Ensure time descending order and mark the events that have an error ```
| sort - _time
| streamstats window=1 values(eval(if(match(log_data,"error"), _time, null()))) as error_time
``` Save the error time and copy the error time down to all following records until the next error ```
| eval start_time=error_time
| filldown error_time
``` Now filter events within 60 seconds prior to the error ```
| eval INCLUDE=if(_time>=(error_time-60) AND _time<=error_time, "YES", "NO")
``` Now do the same in reverse, i.e. time ascending order ```
| sort _time
| filldown start_time
``` and filter events that are within 60 seconds AFTER the error ```
| eval INCLUDE=if(_time<=(start_time+60) AND _time>=start_time, "YES", INCLUDE)
| fields - start_time error_time
| search INCLUDE=YES
Hi @sphiwee

I think the issue is that your current SPL concatenates all your data into a single field (`report`) separated by line breaks, although it's not clear how that line break is interpreted by Teams.

I have previously had success with Microsoft Teams using Markdown or specific JSON structures (like Adaptive Cards) for rich formatting like tables, especially via webhooks. Simple text won't be interpreted as a table. Technically speaking, Teams webhook messages don't support Markdown, and HTML is encoded and treated as text. You can try modifying your SPL to generate a Markdown-formatted table directly within the search results. This *might* render correctly in Teams depending on how the alert action sends the payload.

Remove your last three lines (`eval row = ...`, `stats values(row) AS report`, `eval report = mvjoin(...)`) and add formatting logic after the `foreach` loops:

index="acoe_bot_events" unique_id = *
| lookup "LU_ACOE_RDA_Tracker" ID AS unique_id
| search Business_Area_Level_2="Client Solutions Insurance" , Category="*", Business_Unit = "*", Analyst_Responsible = "*", Process_Name = "*"
| eval STP=(passed/heartbeat)*100
| eval Hours=(passed*Standard_Working_Time)/60
| eval FTE=(Hours/127.5)
| eval Benefit=(passed*Standard_Working_Time*Benefit_Per_Minute)
| stats sum(heartbeat) as Volumes sum(passed) as Successful avg(STP) as Average_STP, sum(FTE) as FTE_Saved, sum(Hours) as Hours_Saved, sum(Benefit) as Rand_Benefit by Process_Name, Business_Unit, Analyst_Responsible
| foreach * [eval FTE_Saved=round('FTE_Saved',3)]
| foreach * [eval Hours_Saved=round('Hours_Saved',3)]
| foreach * [eval Rand_Benefit=round('Rand_Benefit',2)]
| foreach * [eval Average_STP=round('Average_STP',2)]
```--- Start Markdown Formatting ---```
| fillnull value="N/A" Process_Name Business_Unit Analyst_Responsible Volumes Successful Average_STP FTE_Saved Hours_Saved Rand_Benefit
``` Format each row as a Markdown table row ```
| eval markdown_row = "| " . Process_Name . " | " . Business_Unit . " | " . Analyst_Responsible . " | " . Volumes . " | " . Successful . " | " . Average_STP . "% | " . FTE_Saved . " | " . Hours_Saved . " | " . Rand_Benefit . " |"
``` Combine all rows into a single multivalue field ```
| stats values(markdown_row) as table_rows
``` Create the final Markdown table string ```
| eval markdown_table = "| Process Name | Business Unit | Analyst | Volumes | Successful | Avg STP | FTE Saved | Hours Saved | Rand Benefit |\n" . "|---|---|---|---|---|---|---|---|---|\n" . mvjoin(table_rows, "\n")
``` Select only the final field to be potentially used by the alert action ```
| fields markdown_table

In the alert action configuration, you'll need to reference the result field containing the Markdown. Often, you can use tokens like `$result.markdown_table$`.

Considerations for the Markdown approach:
- Character limits: Teams messages and webhook payloads have character limits. Very large tables might get truncated.
- Rendering: Teams Markdown rendering for tables can sometimes be basic and may not be supported.
- Alert action app: success depends heavily on *how* your Teams alert action sends the payload. Some might wrap it in JSON, others might send raw text. You might need to experiment.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will