All Posts



I tried trim(field) but it did not help.
Hey @Karthikeya , Yeah, this is a pretty common issue and you're on the right track about concurrent searches and quotas. From what you're describing, it sounds like you're hitting some resource limits. Here's what's likely happening:

Check your role-based search quotas first:
- Go to Settings > Access controls > Roles
- Look at the roles your affected users have
- Check their "srchDiskQuota" and "srchJobsQuota" settings
- With 100 roles, you might have some with really restrictive limits

The "no jobs showing up" thing is typical - users often can't see queued/background jobs in their job manager, especially if they're system-generated or from other sessions.

Things to look at:
- Infrastructure capacity - are you actually running out of search head resources? Check your monitoring console for search concurrency and resource usage
- Role quotas vs actual demand - your users might need higher concurrent search limits than what their roles allow
- Dashboard design issues - heavy dashboards with tons of panels and no base searches can eat up concurrent search slots fast

Quick fixes to try:
- Temporarily increase disk quotas for affected roles (but watch your infrastructure)
- Have users refresh their dashboards less frequently
- Look for dashboards that could use base searches instead of running everything separately

To see what's actually queuing up, check your internal logs (see the sample search below) for that exact message you mentioned: "The maximum number of concurrent historical searches for this user based on their role quota has been reached."

Start with the role quotas - that's usually the quickest win. Refer: https://splunk.my.site.com/customer/s/article/Search-Concurrency-The-maximum-number-of-concurrent-has-been-reached

If this helps, please upvote.
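P.S. A quick way to see how often users are actually hitting that quota (a minimal sketch - it assumes the message is logged verbatim to the default _internal index under the splunkd sourcetype):

index=_internal sourcetype=splunkd "The maximum number of concurrent historical searches for this user based on their role quota has been reached"
| timechart span=1h count

If you want a per-user breakdown, you can extract the user name from the raw message (the exact log layout varies by version), which will point you at the roles worth tuning first.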
Hi, I have a field in this format and I am using eval to convert it, but sometimes there is an extra space in it after the colon:

Mon 2 Jun 2025 20:51:24 : 792 EDT - with an extra space after hh:mm:ss (space before 792)
Mon 2 Jun 2025 20:51:24 :792 EDT - this is the other scenario, where there is no space

I have to handle both scenarios in this eval - any help?

| eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%m/%d/%Y")
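One hedged way to handle both variants is to normalize the optional space before parsing - a sketch only, reusing the field name and format string from the question:

| eval clean_time=replace(ClintReqRcvdTime, " : ", " :")
| eval date_only=strftime(strptime(clean_time, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%m/%d/%Y")

The replace() collapses " : " down to " :", so the single strptime format then matches both scenarios.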
Ah that all makes sense.  Thanks so much for the help.   Can't wait to try it.   Yep, that did the trick.  Thank you so much!  And yeah "customer" was just a typo on my part.
Hi all - I had this very same issue (installer hanging in Windows 11 and then needing to be terminated via Task Manager) and spent ages looking at how to resolve. I noticed that I had a couple of pending Windows updates (including a hefty security patch) and once those had been installed and I rebooted, the Splunk install worked as expected. Not sure this will work for all instances of this particular issue, but it did for me and it is probably one of the easiest things to tick off before trying other options.   Hope this helps.
We are getting the error "Waiting for queued jobs to start" for most of our customers. When they click on manage jobs, there are no jobs there to delete. Why is this happening? Is there a concurrent-searches limit at play here, and if so, where do I modify it and by how much can I increase it? We have nearly 100 roles created as of now.
This is the official reply from the Splunk team: "A similar issue ("search settings" in Settings doesn't work - Page not found error) was reported to the Splunk team just a few months before you reported this one, although that report was against a different version and not the Europium instance. The bug was fixed for upcoming versions but not for the Europium release. The Splunk team was able to reproduce the issue, but only without the proxy. Due to some limitations, the team was not able to reproduce the issue with the proxy. Hence, the developer required additional details to find the cause of the bug, as the developer does not have access to your stack. The bug was reported and fixed recently, so the issue is not mentioned in any of the release notes. Currently, we do not have any ETA for this issue to be documented for the affected versions (9.2.x, 9.3.x and 9.4.x)."
Hi @dersonje2

Splunk doesn't impose a hard 350K-buckets-per-rebalance limit; it sounds like your first rebalance simply hit the default/configured rebalance threshold, so the manager declared "good enough" and stopped. By default the manager stops moving buckets once it is within 90% of the ideal distribution. With over 40 indexers already in place before adding the new ones, in theory that could mean only a fairly small number of buckets ends up on the new indexers, since the cluster may have started at roughly 80% of ideal distribution already.

If you want a more even spread, increase rebalance_threshold in the [clustering] stanza of server.conf to a number closer to 1, where 1 = 100% distribution (see the example below). This should improve the distribution you are getting.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
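For reference, a minimal sketch of that setting on the cluster manager (the 0.95 value is only an example - pick whatever suits your environment):

# server.conf on the cluster manager
[clustering]
rebalance_threshold = 0.95

After applying the change (a manager restart or reload may be needed), kick off another rebalance and it should keep moving buckets until the higher threshold is reached.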
@woodcock

#Splunk Remote Upgrader for Linux Universal Forwarders

It's not working - please see the serverclass.conf below. I even tried adding targetRepositoryLocationPolicy under the serverClass stanza [serverClass:push_force_del_pkg_uf], but on the UF it still copies the app under /opt/splunkforwarder/etc/apps, whereas I want it to go to the /tmp directory.

[global]
targetRepositoryLocationPolicy = rejectAlways

[serverClass:push_upg_pkg_app:app:splunk_app_uf_remote_upgrade_linux]

[serverClass:push_upg_pkg_app]
whitelist.0 = XX.XX.XX.XX

[serverClass:push_force_del_pkg_uf]
targetRepositoryLocation = /tmp/
whitelist.0 = XX.XX.XX.XX

[serverClass:push_force_del_pkg_uf:app:SPLUNK_UPDATER_MONITORED_DIR]
The macro is updated. Also index and sourcetype are correct.
Thank you for the detailed response. We have verified the following:
- The syslog and NetFlow data are being ingested under the correct sourcetypes and indexes.
- We confirmed that the root event constraints in the Cisco_SDWAN data model are aligned with the expected sourcetype and index.
- Running a search using the root event constraint returns no events, which supports our suspicion that the field extractions are not working as expected, and thus the data is not being mapped properly to the data model.

Regarding data model acceleration:
- Acceleration for the Cisco_SDWAN data model is enabled but is fully built.
- We also checked disk space on the indexers via the Monitoring Console, and there appears to be sufficient space on the volume holding the summaries.

Given these findings, we believe the issue may be tied to field extractions not populating the necessary fields required by the data model. We would appreciate further guidance on verifying or correcting these field extractions, particularly for the syslog data. Thank you again for your support.
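For reference, a quick, hedged check of which fields are actually being extracted from the syslog data (the index and sourcetype names below are placeholders):

index=<your_sdwan_index> sourcetype=<your_sdwan_syslog_sourcetype> earliest=-24h
| fieldsummary
| table field count distinct_count

If the fields referenced by the data model's root event constraints and calculated fields do not appear here, the extractions are the place to focus.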
Hello, I'm not finding info on the limits within Splunk's data rebalancing. Some context, I have ~40 indexers and stood up 8 new ones. The 40 old ones had an avg of ~150k buckets each. At some point the rebalance reported that it was completed (above the .9 threshold) even though there were only ~40k buckets on the new indexers. When I kicked off a second rebalance, it started from 20% again and continued rebalancing because the new indexers were NOT space limited on the smartstore caches yet. The timeout was set to 11 hours and the first one finished in ~4. The master did not restart during this balancing. Can anyone shed some more light on why the first rebalance died? Like, is there a 350k bucket limit per rebalance or something?
Hi @Priya70

This behaviour, where a search completes but the panel fails to render and the progress bar stays at 0%, often indicates a client-side rendering issue or a problem with the data format or volume being passed to the visualization component in the browser. Here are steps to troubleshoot:

- Check Browser Developer Console: open your browser's developer tools (usually F12), go to the "Console" tab, and look for any JavaScript errors when the problematic panel is attempting to load or after the search completes. This is the most common place to find clues about rendering failures.
- Simplify the Search/Visualization: temporarily simplify the search query for the affected panel and try changing the visualization type to a simple table. If the table renders correctly, the issue is likely with the specific visualization type or its configuration. If even a simple table fails, the issue might be with the data itself or a more fundamental client-side problem.
- Verify Data Format: ensure the fields and data types returned by your search match what the chosen visualization expects. For example, a timechart requires _time and numerical fields.
- Consider Data Volume: while the search completes, rendering a very large number of data points or complex structures can sometimes overwhelm the browser or the visualization library, leading to rendering failure. Try adding | head 100 to your search to limit results and see if it renders.
- Test in a Different Browser/Incognito Mode: rule out browser-specific issues, extensions, or cached data by testing the dashboard in a different browser or an incognito/private browsing window.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@Raj_Splunk_Ing

Modify your Python script in Alteryx: find the part of the script where you build the URL or the parameters for the HTTP request to Splunk, then add a tz parameter to that request, setting its value to the time zone string shown in your Splunk UI preferences. E.g.:

params = {
    'search': spl_query,
    'output_mode': 'csv',
    'tz': splunk_ui_timezone
}

URL with the tz parameter added:

https://server/services/search/jobs/export?search=search%20index%3Dcfs_apiconnect_102212%20%20%20%0Asourcetype%3D%22cfs_apigee_102212_st%22%20%20%0Aearliest%3D-1d%40d%20latest%3D%40d%20%0Aorganization%20IN%20(%22ccb-na%22%2C%22ccb-na-ext%22)%20%0AclientId%3D%22AMZ%22%20%0Astatus_code%3D200%0Aenvironment%3D%22XYZ-uat03%22%0A%7C%20table%20%20_time%2CclientId%2Corganization%2Cenvironment%2CproxyBasePath%2Capi_name&tz=America%2FChicago&output_mode=csv

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
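For completeness, a fuller sketch of what that request might look like with the requests library - the host, credentials, and query below are placeholders, and the tz parameter follows the suggestion above:

import requests

splunk_ui_timezone = "America/Chicago"   # match the time zone set in your Splunk UI preferences
spl_query = "search index=main | table _time, host"   # placeholder query

params = {
    "search": spl_query,
    "output_mode": "csv",
    "tz": splunk_ui_timezone,
}

response = requests.get(
    "https://splunk-server:8089/services/search/jobs/export",   # placeholder host/port
    params=params,
    auth=("username", "password"),   # placeholder credentials
    verify=False,                    # adjust certificate verification for your environment
)
print(response.text)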
@tangtangtang12

NEAPs are primarily triggered when an episode's severity changes (e.g., Normal -> Critical, or Critical -> High) or when a new notable event matches the policy. The most robust and common way to achieve this involves a saved search that identifies services currently in a critical or high state. E.g.:

| itsi_get_service_health
| search service_health_score > 0 AND service_health_score < 60
| rename title AS itsi_service_name
| fields itsi_service_name, service_health_score, severity_label
| eval alert_message = "ITSI Service " + itsi_service_name + " is still " + severity_label + " (Health: " + tostring(service_health_score) + ")."

Adjust the score thresholds to match your ITSI configuration - typically Critical < 40 and High < 60, or similar. Then save it as a Saved Search/Alert with your desired schedule.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
Thank you so much for the prompt answer!
@danielbb

_time is being set by the Tenable Add-on itself, using a timestamp field from the Tenable API response (e.g., last_seen, last_found). DATETIME_CONFIG = NONE in props.conf for Tenable is intentional: it prevents Splunk from trying to re-parse _time from the event's raw data and potentially overriding the TA's carefully chosen timestamp.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
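For reference, the relevant props.conf entry looks roughly like this - the stanza name is illustrative, so check the add-on's own props.conf for the actual Tenable sourcetype names:

[tenable:io:vuln]
DATETIME_CONFIG = NONE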
@Lien

You're right, connecting Splunk Cloud directly to an on-premise Active Directory via LDAP is generally not the recommended or straightforward approach, and SAML is highly preferred.

Why SAML is better for Splunk Cloud:
- Enhanced security: your AD is not directly exposed to the internet for Splunk Cloud. Authentication happens at your IdP, and Splunk Cloud trusts the assertion from your IdP. It is also easier to enforce Multi-Factor Authentication (MFA) via your IdP.
- Standardized integration: SAML is a web browser SSO standard. It's well-understood and robust.
- Centralized identity management: leverages your existing identity management infrastructure.
- No direct network dependency: Splunk Cloud doesn't need a persistent network connection to your AD for authentication transactions.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
@mchoudhary

Multiple appends: fundamentally, this is the main performance killer - you're running 6 distinct "heavy" searches. Within the EDR search, the biggest offender is the transaction: the transaction "event.DetectId" command is notoriously resource-intensive, especially over a 6-month period. It should be avoided if at all possible, or its scope drastically limited (a stats-based alternative is sketched below).

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
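A hedged sketch of a stats-based replacement for the transaction - field names other than "event.DetectId" are placeholders, so adjust them to whatever you actually need from each group of events:

... your EDR base search ...
| stats earliest(_time) AS first_seen latest(_time) AS last_seen count BY "event.DetectId"

Unlike transaction, which has to collect and correlate whole groups of events on the search head, stats can be largely computed on the indexers, which matters a lot over a 6-month window.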
I am trying to decide which is better for authentication with Splunk Cloud: LDAP (AD) or SAML. Unlike standalone Splunk, the cloud version looks a little tricky. I read somewhere in the documentation that connecting Splunk Cloud directly to AD via LDAP is not recommended, but I could not find where. I am trying to connect to LDAP from Splunk Cloud, but I always get an error and there is very little information showing in splunkd.log. Can someone let me know whether a direct connection to AD LDAP from Splunk Cloud is recommended or not? Also, is there any troubleshooting tool that would make it easier to get the connection working?