All Posts



Hi @sabbas

The Search Filter feature in Splunk roles is designed to restrict which events a user can search, not to transform or mask the data within those events. The "unbalanced parenthesis" syntax error occurs because | eval and | rex mode=sed are pipeline commands. When you place them in the Search Filter, Splunk attempts to insert them into the initial search command (such as litsearch or search) in a way that breaks the expected syntax: pipeline commands cannot directly follow search terms within the initial command's arguments.

Search Filters should contain search clauses that filter events, such as index=myindex, sourcetype=mysourcetype, or boolean combinations of these. They are applied before the user's search string is fully processed as an SPL pipeline.

Modifying the _raw field based on user roles at search time is a complex requirement that the standard role configuration's Search Filter does not directly support. The typical Splunk method for masking sensitive data works at index time using props.conf and transforms.conf. This is the most secure method, since the data is masked before being written to disk, but it applies globally and is not role-specific. Implementing role-based masking at search time usually requires more advanced techniques, potentially involving custom search commands or logic applied via views or macros, which is beyond the scope of a simple role filter.

In short:
- Search Filters are for restricting events (e.g., index=sales), not transforming data (| eval, | rex).
- Placing pipeline commands (|) in the Search Filter string will cause syntax errors.
- Role-based data masking at search time is not a standard feature of role configurations.

The search filter configuration page within Splunk states that the search filter can only include:
- source type
- source
- host
- index
- event type
- search fields
- the operators "*", "OR", "AND", "NOT"

See Anonymize data for more ideas too.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
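For reference, the substitutions the poster's | eval replace(...) filters attempt can be checked outside Splunk. This is a minimal Python sketch, not a Splunk feature: the regexes are adapted from the question (with the character class reordered to [A-Za-z0-9_-] to avoid an ambiguous range), and the function name is mine.

```python
import re

# Patterns adapted from the question: a JWT after "token=" and a bare email address.
JWT_RE = re.compile(r"token=[A-Za-z0-9-]+\.[A-Za-z0-9-]+\.[A-Za-z0-9_-]+")
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def mask_raw(raw: str) -> str:
    """Mask JWT tokens and email addresses, mirroring the eval/replace logic."""
    raw = JWT_RE.sub("token=xxx.xxx.xxx", raw)
    return EMAIL_RE.sub("xxx@xxx.xxx", raw)

print(mask_raw("user=bob@example.com token=abc.def.ghi-123"))
# -> user=xxx@xxx.xxx token=xxx.xxx.xxx
```

This only demonstrates that the regexes themselves are sound; as explained above, they still cannot be used in a role's Search Filter.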
Hello folks,

We use Splunk Cloud Platform for our logging system. I was trying to use the Search Filter under the Restrictions tab in Edit Role to add a filter which masks JWT tokens and emails in a search, but I keep running into an error with the litsearch command: unbalanced parenthesis.

Regex used in the search filter:

| eval _raw=replace(_raw, "token=([A-Za-z0-9-]+\.[A-Za-z0-9-]+\.[A-Za-z0-9-_]+)", "token=xxx.xxx.xxx")
| eval _raw=replace(_raw, "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}", "xxx@xxx.xxx")

When a user with the role that has the restriction above tries to search for anything, the job inspector shows that there are parentheses around the literal search and the search filter above, so it looks like this:

litsearch (<search terms> | eval _raw....)

I tried changing the search filter to

| rex mode=sed "s/token=([A-Za-z0-9-]+\.[A-Za-z0-9-]+\.[A-Za-z0-9-_]+)/token=xxx.xxx.xxx/g"

to only replace the tokens, but I still run into the same issue. When previewing the filter the results work fine, but when doing an actual query with the user it fails. Any suggestions to make the search filter simpler, or any other methods I could use for role-based search filtering?
Rich, this is working too. It will cover both scenarios, right? Including when there is no extra space in it?
Hi Rich, is rex much better than replace? In my case it looks like replace is working.
Try using sed to normalize the data before conversion.

| rex mode=sed field=ClintReqRcvdTime "s/: /:/"
| eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%m/%d/%Y")
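The normalize-then-parse idea above can be sanity-checked outside SPL. A small Python sketch (sample values are from the thread; the function name is mine, and the trailing timezone is stripped because Python's %Z does not reliably parse abbreviations like "EDT"):

```python
import re
from datetime import datetime

def to_date_only(ts: str) -> str:
    """Normalize the optional space after ':' and format as %m/%d/%Y."""
    # Collapse ": 792" to ":792" so both input variants parse the same way.
    ts = re.sub(r":\s+(\d)", r":\1", ts)
    # Drop the trailing timezone abbreviation; Python's %Z is unreliable for "EDT".
    ts = re.sub(r"\s+[A-Z]{2,5}$", "", ts)
    # %f consumes the fractional digits after the lone ':' separator.
    dt = datetime.strptime(ts, "%a %d %b %Y %H:%M:%S :%f")
    return dt.strftime("%m/%d/%Y")

print(to_date_only("Mon 2 Jun 2025 20:51:24 : 792 EDT"))  # -> 06/02/2025
print(to_date_only("Mon 2 Jun 2025 20:51:24 :792 EDT"))   # -> 06/02/2025
```

Both input variants from the question produce the same date once the space after the colon is normalized, which is the same effect the rex mode=sed step achieves in SPL.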
@Kowshik it looks like Splunk Observability Cloud could not connect with AWS. Review your authentication credentials and make sure you have proper access to all the regions on the list below:
- us-east-1
- us-east-2
- us-west-1
- us-west-2

https://help.splunk.com/en/splunk-observability-cloud/get-started/service-description/splunk-observability-cloud-service-description#available-regions-or-realms-0

Refer: https://help.splunk.com/en/splunk-observability-cloud/manage-data/connect-to-your-cloud-service-provider/connect-to-aws/troubleshoot-your-aws-integration#splunk-observability-cloud-doesnt-work-as-expected-0

If this helps, please upvote.
Hi Sai, sometimes I don't get the extra space, so I have to cover two scenarios. Will this only work when there is an extra space, or will it also handle the case where there is no extra space, given that we are specifying the extra space in the format and removing it?
@Raj_Splunk_Ing try

| eval date_only=strftime(strptime(replace(ClintReqRcvdTime, "\s+", " "), "%a %d %b %Y %H:%M:%S :%N EDT"), "%m/%d/%Y")
I tried trim(field), but it did not help.
Hey @Karthikeya,

Yeah, this is a pretty common issue, and you're on the right track about concurrent searches and quotas. From what you're describing, it sounds like you're hitting some resource limits. Here's what's likely happening.

Check your role-based search quotas first:
- Go to Settings > Access controls > Roles
- Look at the roles your affected users have
- Check their srchDiskQuota and srchJobsQuota settings
- With 100 roles, you might have some with really restrictive limits

The "no jobs showing up" thing is typical: users often can't see queued or background jobs in their job manager, especially if they're system-generated or from other sessions.

Things to look at:
- Infrastructure capacity: are you actually running out of search head resources? Check your Monitoring Console for search concurrency and resource usage.
- Role quotas vs. actual demand: your users might need higher concurrent search limits than their roles allow.
- Dashboard design issues: heavy dashboards with tons of panels and no base searches can eat up concurrent search slots fast.

Quick fixes to try:
- Temporarily increase quotas for affected roles (but watch your infrastructure)
- Have users refresh their dashboards less frequently
- Look for dashboards that could use base searches instead of running everything separately

To see what's actually queuing up, check your internal logs for the exact message you mentioned: "The maximum number of concurrent historical searches for this user based on their role quota has been reached."

Start with the role quotas; that's usually the quickest win.

Refer: https://splunk.my.site.com/customer/s/article/Search-Concurrency-The-maximum-number-of-concurrent-has-been-reached

If this helps, please upvote.
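For reference, the per-role quotas mentioned above live in authorize.conf. A hedged sketch only: the role name and values below are illustrative, not recommendations, and should be tuned against your search head capacity.

```ini
# authorize.conf (illustrative; role name and values are examples)
[role_power_dashboard_users]
# Maximum concurrent historical searches a member of this role may run
srchJobsQuota = 10
# Disk space (MB) that this role's search jobs may consume
srchDiskQuota = 500
```

After editing, the change takes effect per Splunk's normal configuration reload rules; on Splunk Cloud these settings are managed through the Roles UI rather than direct file edits.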
Hi, I have this field in this format and I am using eval to convert it, but sometimes there is an extra space in it after the ":":

Mon 2 Jun 2025 20:51:24 : 792 EDT  (extra space after hh:mm:ss, i.e. before 792)
Mon 2 Jun 2025 20:51:24 :792 EDT  (the other scenario, where there is no space)

I have to handle both scenarios in this eval. Any help?

| eval date_only=strftime(strptime(ClintReqRcvdTime, "%a %d %b %Y %H:%M:%S :%3N %Z"), "%m/%d/%Y")
Ah that all makes sense.  Thanks so much for the help.   Can't wait to try it.   Yep, that did the trick.  Thank you so much!  And yeah "customer" was just a typo on my part.
Hi all - I had this very same issue (installer hanging on Windows 11 and then needing to be terminated via Task Manager) and spent ages looking at how to resolve it. I noticed that I had a couple of pending Windows updates (including a hefty security patch), and once those had been installed and I rebooted, the Splunk install worked as expected. Not sure this will work for all instances of this particular issue, but it did for me, and it is probably one of the easiest things to tick off before trying other options. Hope this helps.
We are getting this particular error, "Waiting for queued jobs to start", for most of our customers. When they click on Manage Jobs, there are no jobs there to delete. Why is this happening? Is there a concurrent-searches concept at work here? Where do I modify it, and by how much can I increase it? We have nearly 100 roles created as of now.
This is the official reply from the Splunk team: "A similar issue ('Search settings' in Settings doesn't work; Page not found error) was reported to the Splunk team just a few months before you reported yours. However, that issue was with a different version and not with the Europium instance. The bug was fixed for upcoming versions but not for the Europium release. The Splunk team was able to reproduce the issue, but only without the proxy. Due to some limitations, the team was not able to reproduce the issue with the proxy, so the developer required additional details to find the cause of the bug, as the developer does not have access to your stack. The bug was reported and fixed recently; hence, the issue is not mentioned in any of the release notes. Currently, we do not have any ETA for this issue to be documented for the affected versions (9.2.x, 9.3.x and 9.4.x)."
Hi @dersonje2

Splunk doesn't impose a hard 350K-buckets-per-rebalance limit; it sounds like your first rebalance simply hit the default/configured rebalance threshold, and the master declared "good enough" and stopped. By default the master will stop moving buckets once within 90% of the ideal distribution. With over 40 indexers before adding the new ones, in theory that could mean a pretty small number of buckets end up on the new indexers, as the cluster will have started at roughly 80% distribution.

If you want to make the spread more even, increase rebalance_threshold within the [clustering] stanza in server.conf to a number closer to 1, where 1 = 100% distribution. This should improve the distribution you are getting.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
@woodcock

#Splunk Remote Upgrader for Linux Universal Forwarders

It's not working; please see the serverclass.conf below. I even tried adding targetRepositoryLocationPolicy under the serverClass stanza [serverClass:push_force_del_pkg_uf], but on the UF it is still copying the app under the /opt/splunkforwarder/etc/apps folder, and I want it to be in the /tmp directory.

[global]
targetRepositoryLocationPolicy = rejectAlways

[serverClass:push_upg_pkg_app:app:splunk_app_uf_remote_upgrade_linux]

[serverClass:push_upg_pkg_app]
whitelist.0 = XX.XX.XX.XX

[serverClass:push_force_del_pkg_uf]
targetRepositoryLocation = /tmp/
whitelist.0 = XX.XX.XX.XX

[serverClass:push_force_del_pkg_uf:app:SPLUNK_UPDATER_MONITORED_DIR]
The macro is updated. Also index and sourcetype are correct.
Thank you for the detailed response. We have verified the following:
- The syslog and NetFlow data are being ingested under the correct sourcetypes and indexes.
- We confirmed that the root event constraints in the Cisco_SDWAN data model are aligned with the expected sourcetype and index.
- Running a search using the root event constraint returns no events, which supports our suspicion that the field extractions are not working as expected and, thus, the data is not being mapped properly to the data model.

Regarding data model acceleration:
- Acceleration for the Cisco_SDWAN data model is enabled and fully built.
- We also checked disk space on the indexers via the Monitoring Console, and there appears to be sufficient space on the volume holding the summaries.

Given these findings, we believe the issue may be tied to field extractions not populating the fields required by the data model. We would appreciate further guidance on verifying or correcting these field extractions, particularly for the syslog data. Thank you again for your support.
Hello,

I'm not finding info on the limits within Splunk's data rebalancing. Some context: I have ~40 indexers and stood up 8 new ones. The 40 old ones had an average of ~150k buckets each. At some point the rebalance reported that it was completed (above the .9 threshold) even though there were only ~40k buckets on the new indexers. When I kicked off a second rebalance, it started from 20% again and continued rebalancing, because the new indexers were NOT yet space-limited on the SmartStore caches. The timeout was set to 11 hours and the first run finished in ~4. The master did not restart during this rebalancing.

Can anyone shed some more light on why the first rebalance died? Like, is there a 350k-bucket limit per rebalance or something?