All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Team, I have a total of only 192 Business Transactions in first-class BTs. I know the per-node limit is 50 and the per-application limit is 200, so why are my Business Transactions getting registered under "All Other Traffic" instead of as first-class BTs? I have attached two screenshots for your reference (last 1 hour & last 1 week BT list). Please let me know the appropriate answer/solution to my question. Thanks & Regards, Satishkumar
The issue was fixed after removing passwords.conf and restarting Splunk.
In your sample logs, it looks like you have a space after the first pipe which does not appear to be accounted for in your LINE_BREAKER pattern. Try something like this:

LINE_BREAKER=([\r\n]+)\d{4}\-\d{2}\-\d{2}\s\|\s\d{2}:\d{2}:\d{2}.\d{3}\s\|
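To sanity-check the suggestion, the pattern can be tried as a plain regular expression against the sample events from the question. This is a quick sketch in Python, whose regex syntax is close enough to Splunk's PCRE for this particular pattern:

```python
import re

# The suggested LINE_BREAKER, used here as an ordinary regex.
# Note the unescaped "." before the milliseconds: it also happens to
# match the ":" separator that appears in the sample timestamps.
LINE_BREAKER = r"([\r\n]+)\d{4}\-\d{2}\-\d{2}\s\|\s\d{2}:\d{2}:\d{2}.\d{3}\s\|"

raw = (
    "2022-02-22 | 04:00:34:909 | main stream logs | Staticapp-1 - Restart completed\n"
    "2022-02-22 | 05:00:34:909 | main stream applicationlogs | Staticapp-1 - done"
)

# Splunk breaks events at the capture group; here we only confirm the
# pattern locates the boundary between the two sample events.
boundary = re.search(LINE_BREAKER, raw)
print(boundary is not None)  # True: the space after the first pipe is matched
```

Splunk itself applies LINE_BREAKER at ingest time, so this only verifies the regex logic, not the full event-breaking behavior.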
### Updated Solution Using Internal Logs

Here's an alternative method to monitor your daily license usage without requiring the "dispatch_rest_to_indexers" capability:

1. Search Query for License Usage: Use the following search query to fetch the license usage information from the internal logs:

index=_internal source=*license_usage.log* type="RolloverSummary"
| eval GB_used=round(b/1024/1024/1024, 2)
| eval license_usage_percentage = (GB_used / 250) * 100
| stats latest(GB_used) as GB_used latest(license_usage_percentage) as license_usage_percentage
| where license_usage_percentage > 70
| table GB_used, license_usage_percentage

This query:
- Fetches the license usage data from the internal license usage logs.
- Converts the bytes used (`b`) to gigabytes (GB).
- Calculates the percentage of license usage based on your 250 GB daily limit.

2. Create Alerts Based on Usage Percentage: You need to create three separate alerts based on different license usage thresholds.
- Alert for 70% Usage:
  - Go to Search & Reporting in Splunk Cloud.
  - Enter the updated query in the search bar.
  - Click on Save As and select Alert.
  - Name the alert (e.g., "License Usage 70% Alert").
  - Set the trigger condition to: license_usage_percentage > 70
  - Choose the alert type as "Scheduled" and configure it to run once per day or at your preferred schedule.
  - Set the alert actions (e.g., email notification, webhook).
- Alert for 80% Usage:
  - Repeat the steps above to create another alert.
  - Set the trigger condition to: license_usage_percentage > 80
- Alert for 90% Usage:
  - Follow the same process to create the final alert.
  - Set the trigger condition to: license_usage_percentage > 90

3. Configure Alert Notifications:
- For each alert, set up the notification actions under Alert Actions (e.g., send an email, trigger a webhook).

4. Test Your Alerts:
- Manually run the search and ensure it produces the expected results.
- Adjust thresholds or conditions to test if the alerts trigger correctly.
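For reference, the 70% alert from step 2 could also be defined directly in savedsearches.conf. The sketch below is illustrative only: the stanza name, cron schedule, and email recipient are assumptions, not values from the post.

```ini
# savedsearches.conf -- a rough sketch of the 70% alert.
# Stanza name, schedule, and recipient are illustrative assumptions.
[License Usage 70% Alert]
search = index=_internal source=*license_usage.log* type="RolloverSummary" \
| eval GB_used=round(b/1024/1024/1024, 2) \
| eval license_usage_percentage=(GB_used / 250) * 100 \
| where license_usage_percentage > 70
enableSched = 1
cron_schedule = 0 8 * * *
dispatch.earliest_time = -1d@d
dispatch.latest_time = now
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
action.email = 1
action.email.to = ops@example.com
```

The 80% and 90% alerts would be separate stanzas differing only in the `where` threshold.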
The metadata command doesn't take filters other than index, so filter after the data is returned:

| metadata type=sources where index=gwcc
| search source!="*log.2024-*"
| eval source2=lower(mvindex(split(source,".2024"),-1))
| eval file=lower(mvindex(split(source,"/"),-1))
| table source, source2, file
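The split/mvindex logic is easy to sanity-check outside Splunk. Here is a minimal Python sketch (the sample path is hypothetical) that mimics what the two eval lines compute:

```python
# Mimic SPL's mvindex(split(x, delim), -1): split on the delimiter
# and take the last piece of the resulting multivalue field.
def last_piece(value: str, delim: str) -> str:
    return value.split(delim)[-1]

source = "/var/log/file1.2024-09-01.log"  # hypothetical source path

source2 = last_piece(source, ".2024").lower()  # "-09-01.log"
file = last_piece(source, "/").lower()         # "file1.2024-09-01.log"
print(source2, file)
```

For a source with no ".2024" in it, `split` returns the whole string, so `source2` would just be the full path lowercased, which matches SPL's behavior when the delimiter is absent.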
We are facing an issue after migrating a single-site indexer cluster to multisite. SF/RF are still not met after 5 days, and the fixup task count keeps increasing. Can anyone help resolve this SF/RF issue?
Yes, that's true, I got a connectivity issue from an indexer, and the problem appeared suddenly without any warning signs beforehand. Could you help?
I did the rolling restart but nothing happened; the issue still persists and I don't know why it's happening...
Hi @Samantha, as @PickleRick and @ITWhisperer also said, this seems to be a job for a scheduled report. If you want a dashboard, you could schedule a search (e.g. as an alert) that runs your search and saves aggregated results in a summary index; then you could run your dashboard's searches on this summary index. Ciao. Giuseppe
Hi @sumarri, Splunk isn't a database where you can manually write notes. The only way is to add to your dashboard a link to a lookup, opened using the Splunk App for Lookup Editor, or use a workaround that I described in a past answer: https://community.splunk.com/t5/Dashboards-Visualizations/Dynamically-Update-a-lookup-file-on-click-of-a-field-and-showing/m-p/674605 Ciao. Giuseppe
Hi @mahesh27, two questions:
1) Do you see logs in the wrong format, or do you not see logs at all? In the first case, props.conf is relevant; in the second case, there's a different issue.
2) If you see your logs in the wrong format, I suppose your logs are on one row (because you used SHOULD_LINEMERGE=false), so why are you using the LINE_BREAKER in that way? See how to index csv files using pipe as delimiter.
My hint is to save some logs in a text file and try to ingest it using the manual Add Data feature, which guides you through props.conf definition and testing. Ciao. Giuseppe
Hi, I am getting the below error when trying to save the data inputs [all 5] which come as part of the Nutanix add-on. Has anyone seen this before and can suggest something?

Error: Encountered the following error while trying to save: Argument validation for scheme=nutanix_alerts: script running failed (PID 24107 killed by signal 9: Killed).
The metadata command with a pipe at the front is a completely new command/structure for me, but it works, and works much faster. However, one more unexpected case has appeared due to this change: I cannot filter out rotated files which are in the directory and are not needed. It looks something like:

file1.log
file1.2024-09-01.log
file1.2024-08-02.log

etc., and of course I only need the main, current file (without any dates), so I tried:

| metadata type=sources where index=gwcc AND source !='*log.2024-*'
| eval source2=lower(mvindex(split(source,".2024"),-1))
| eval file=lower(mvindex(split(source,"/"),-1))
| table source, source2, file

but my "filter" does not work.
Hi Team, could you please let us know when the latest version of the Splunk OVA for VMware will be released?
Hi all, I'm having issues comparing the user field in Palo Alto traffic logs vs the last user reported by CrowdStrike/Windows events. Palo Alto traffic logs show a different user initiating the traffic during the time window compared to the last user login reported by CrowdStrike for the same endpoint. Has anyone faced a similar issue? Thanks
We are using the below props, but we don't see logs reporting to Splunk. We are assuming that | (the pipe symbol) works as a delimiter and that we cannot use it in props. We just want to know whether these props are correct:

[tools:logs]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\d{4}\-\d{2}\-\d{2}\s\|\d{2}:\d{2}:\d{2}.\d{3}\s\|
TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%d | %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=28

Sample logs:

2022-02-22 | 04:00:34:909 | main stream logs | Staticapp-1 - Restart completed
2022-02-22 | 05:00:34:909 | main stream applicationlogs | Staticapp-1 - application logs (total=0, active=0, waiting=0) completed
2022-02-22 | 05:00:34:909 | main stream applicationlogs | harikpool logs-1 - mainframe script (total=0, active=0, waiting=0) completed
Splunk support confirmed this is a bug in 9.1.0.2. Based on the SPL, it has been resolved in Beryllium 9.1.4 and Cobalt 9.2.1. As a workaround until we upgrade, I have appended a bogus OR condition ... See more...
Splunk support confirmed this is a bug in 9.1.0.2. Based on the SPL, it has been resolved in Beryllium 9.1.4 and Cobalt 9.2.1. As a workaround until we upgrade, I have appended a bogus OR condition with a wildcard, e.g.: OR noSuchField=noSuchValue* to the other OR conditions in our WHERE clauses, and this causes Splunk to return the correct result.
Hi @harsmarvania57 ,   Thanks, it's working with BMC Helix integration
Unless you call that transform from props.conf, it's completely ineffective. But that's just a side note. There are two separate issues here. One is making sure only selected events get forwarded to the syslog server. The way to go is probably to define two syslog groups: one with a real syslog server and one, the default, with a dummy syslog server. The default syslog output is just a sink to catch all events not redirected via transforms to a working syslog output. The other issue is making sure specific data is not forwarded over the splunktcp connection to downstream indexers. You can either use index filtering for this (but that works globally on tcp outputs) or do the same thing as with syslog but in reverse: create a dummy output and redirect there all events you don't want sent to the indexers.
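As a rough illustration of the first idea (the group names, sourcetype, host, and port below are assumptions, not taken from the post), the configuration pair might look like:

```ini
# outputs.conf -- two syslog groups; the default one is a sink
[syslog]
defaultGroup = null_syslog

[syslog:real_syslog]
server = syslog.example.com:514

[syslog:null_syslog]
# dummy target that simply discards whatever it receives
server = localhost:5140

# transforms.conf -- route matching events to the real syslog group
[route_to_real_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = real_syslog

# props.conf -- the transform must be invoked from here to take effect
[my_sourcetype]
TRANSFORMS-syslogroute = route_to_real_syslog
```

With this layout, only events of the sourcetype that invokes the transform reach the real syslog server; everything else lands in the dummy group and goes nowhere.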
I got the indexes from existing use cases and searched, but I think I am supposed to add those indexes. Thank you for the insight.