All Posts


You can set it up this way and test it, and it will work, but be aware that this is not fully secure. Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod
srchFilter = (index=non_prod)

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index=prod) OR (index=opco_summary AND (service=juniper-prod OR service=juniper-cont))

I think this can help you.
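If it helps, the effective settings of a role can be checked from a search. This is only a sketch: it assumes you have permission to run rest searches and that role_xyz is the role name used above.

| rest /services/authorization/roles splunk_server=local
| search title="role_xyz"
| table title srchIndexesAllowed srchIndexesDefault srchFilter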
Many thanks for the replies guys. That was what I was missing.
So, I have been struggling with this for a few days. I have thrown it at generative AI and am not getting exactly what I want. We have a requirement to ensure a percentage of timely critical event completion investigation per month for Critical and High notable events in Splunk ES.

I have this query, which gives me the numerator and denominator for the events, but does not break it out by Urgency/Severity:

| inputlookup incident_review_workflow_audit
| where notable_time > relative_time(now(), "-1mon@mon") AND notable_time < relative_time(now(), "@mon")
| eval EventOpenedEpoch = notable_time, TriageStartedEpoch = triage_time, ResolutionEpoch = notable_time + new_to_resolution_duration, DaysInNewStatus = round(new_duration/86400,2), DaysToResolution = round(new_to_resolution_duration/86400,2)
| where new_to_resolution_duration>0
| eval "Event Opened" = strftime(EventOpenedEpoch, "%Y-%m-%d %H:%M:%S"), "Triage process started" = strftime(TriageStartedEpoch, "%Y-%m-%d %H:%M:%S"), "Event Resolved" = strftime(ResolutionEpoch, "%Y-%m-%d %H:%M:%S")
| rename rule_id AS "Event ID"
| table "Event ID", "Event Opened", "Triage process started", "Event Resolved", DaysInNewStatus, DaysToResolution
| sort - DaysToResolution

Event ID | Event Opened | Triage process started | Event Resolved | DaysInNewStatus | DaysToResolution
4160DC1A-7DF2-4F18-A229-2BA45F1ED9FA@@notable@@e90ff7db7d8ff92bbe8aa4566c1bab37 | 2025-07-05 02:02:13 | 2025-07-07 09:39:07 | 2025-07-21 13:26:26 | 2.32 | 16.48
7C412294-C46A-448A-8170-466CE301D56A@@notable@@0feff824336394dbe4dcbedcbf980238 | 2025-07-05 02:02:08 | 2025-07-07 09:39:07 | 2025-07-21 13:26:26 | 2.32 | 16.48

This query does give me the Urgency for events, but does not give me time to resolution:

`notable`
| search (urgency=critical)
| eval startTime=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table startTime, rule_id source comment urgency reviewer status_description owner_realname status_label

startTime | rule_id | source | comment | urgency | reviewer | status_description | owner_realname | status_label
2025-07-29 09:30:16 | 4160DC1A-7DF2-4F18-A229-2BA45F1ED9FA@@notable@@5ebbdf0e0821b477785b018e29d44973 | Endpoint - ADFS Smart Lockout Events - Rule | | critical | | Event has not been reviewed. | unassigned | New
2025-07-29 09:30:12 | AD72F249-8457-4D5E-9557-9621E2F5D3FF@@notable@@3043a1f3a2fbc3f92f67800a066ada66 | Endpoint - ADFS Smart Lockout Events - Rule | | critical | | Event has not been reviewed. | unassigned | New
2025-07-29 07:15:18 | 7C412294-C46A-448A-8170-466CE301D56A@@notable@@54a0ffabacbf083cb7f2e370937fc2bf | Endpoint - ADFS Smart Lockout Events - Rule | The event has been triaged | critical | abcde00 | Initial analysis of threat | John Doe | Triage

Trying to combine them to get time to resolution plus urgency (so I can filter on urgency) has been a complete mess. If I do manage to combine them by trimming around the Event ID / rule_id, it doesn't give me the expected numbers, or half the time the urgency is missing. Is there something I am missing, or is this even possible? Thanks in advance.
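One pattern that is often used for this kind of merge is to bring the urgency in from the notable events keyed on rule_id. This is only a sketch, under the assumption that rule_id in incident_review_workflow_audit matches the rule_id produced by the `notable` macro for the same notable; note also that join subsearches are subject to row limits, so a stats- or lookup-based merge may be needed at scale.

| inputlookup incident_review_workflow_audit
| where notable_time > relative_time(now(), "-1mon@mon") AND notable_time < relative_time(now(), "@mon")
| where new_to_resolution_duration > 0
| eval DaysToResolution = round(new_to_resolution_duration/86400,2)
| join type=left rule_id
    [ search `notable` | stats latest(urgency) AS urgency BY rule_id ]
| search urgency="critical" OR urgency="high"
| table rule_id urgency DaysToResolution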
Sir, when I do a query (index=_internal) looking for records from any of the logs, there are no results.
Hi, can anybody help me create a dot chart? X-axis: _time; y-axis: points of the values of the fields tmp, min_w, max_w. Here is the input table: [screenshot] Here is the desired chart: [screenshot]
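For what it's worth, a minimal sketch of the kind of search that usually feeds such a chart. The index and sourcetype below are placeholders, and it assumes tmp, min_w and max_w already exist as numeric fields on the events; rendered as a line chart with the charting.chart.showMarkers option enabled (or as a scatter chart), the three series appear as points over time.

index=your_index sourcetype=your_sourcetype
| timechart span=1h avg(tmp) AS tmp, avg(min_w) AS min_w, avg(max_w) AS max_w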
As @livehybrid noted, explicit type conversion does not make a difference here. If you need numbers sorted as strings, then you must use the str() function in the sort command.
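For example, a minimal self-contained sketch (the field name version and its values are made up for illustration):

| makeresults count=3
| streamstats count
| eval version=case(count=1, "10", count=2, "2", count=3, "1")
| sort str(version)

With str(), the values sort lexicographically ("1", "10", "2") rather than numerically ("1", "2", "10").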
ES 8.1.1 solved this for us!
But that stanza is not there in local/authorize.conf.
Hi @splunklearner

In your authorize.conf file you have a stanza named [role_system_admin]. Remove the next two attributes:

edit_roles_grantable = enabled
grantableRoles = system_admin

These lines were required in older versions of Splunk; now, however, they are causing the issues you are seeing. Check out https://community.splunk.com/t5/Security/Users-missing-from-Access-Control/m-p/487058#M11170 for more info on this fix.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
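For illustration only, the change amounts to something like this; any other settings already in the stanza are left untouched, and the comment lines are placeholders rather than real settings.

# before
[role_system_admin]
edit_roles_grantable = enabled
grantableRoles = system_admin
# ...any other existing settings...

# after
[role_system_admin]
# ...any other existing settings, unchanged...

A restart of Splunk is typically needed for manual authorize.conf edits to take effect.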
We have multiple roles created in Splunk, restricted by their index; users are added to these roles via AD groups, and we use LDAP for authentication.

Below is authentication.conf:

[authentication]
authType = LDAP
authSettings = uk_ldap_auth

[uk_ldap_auth]
SSLEnabled = 1
bindDN = CN=Infodir-HBEU-INFSLK,OU=Service Accounts,DC=InfoDir,DC=Prod,DC=FED
groupBaseDN = OU=Splunk Network Log Analysis UK,OU=Applications,OU=Groups,DC=Infodir,DC=Prod,DC=FED
groupMappingAttribute = dn
groupMemberAttribute = member
groupNameAttribute = cn
host = aa-lds-prod.uk.fed
port = 3269
userBaseDN = ou=HSBCPeople,dc=InfoDir,dc=Prod,dc=FED
userNameAttribute = employeeid
realNameAttribute = displayname
emailAttribute = mail

[roleMap_uk_ldap_auth]
<roles mapped with AD group created>

I checked this post - https://community.splunk.com/t5/Security/How-can-I-generate-a-list-of-users-and-assigned-roles/m-p/194811 - and tried the same command:

|rest /services/authentication/users splunk_server=local |fields title roles realname |rename title as userName|rename realname as Name

I ran this on the search head, but it returns only about 5 results, while we have nearly 100 roles created. Even with splunk_server=*, the result is the same. I have the admin role as well, so I should have the needed capabilities. Not sure what I am missing here. Any thoughts?
Hi @Narendra_Rao

If you're looking for something for Splunk Cloud then check out https://www.splunk.com/en_us/blog/artificial-intelligence/unlock-the-power-of-splunk-cloud-platform-with-the-mcp-server.html

Having looked at the .conf25 sessions, it sounds like an official Splunk Enterprise MCP server will be released/announced then; for now it is Cloud only.

In the meantime, back in April I built https://github.com/livehybrid/splunk-mcp, which I have been using with a couple of customers, and I am currently testing a Splunk native app version which should be updated in GitHub soon.

Ultimately, if you're not in a hurry, it is worth waiting to see what is announced at .conf, or using an existing open-source version in the meantime.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
I'm working on observability tooling and have built an MCP bridge that routes queries and admin activities for Splunk, along with several other tools. How do I find out whether there are existing MCP servers already built for Splunk, so I can get a head start? Happy to collab!
I am using the same command and running it as admin, but I am getting only a few users. We have nearly 50 but get only 4-5. We use LDAP auth in our environment. Am I missing something? We create roles on the deployer and push them to the SHs.
livehybrid, wow. Absolutely cool. Works fantastically. Thank you very much.
Hi @spisiakmi

You can add some HTML with CSS to a panel in your dashboard like this; note this only works for classic dashboards:

<html>
  <style>
    .splunk-dropdown button,
    button span,
    .splunk-dropdown span,
    .splunk-dropdown label {
      font-size: 1.1em !important;
    }
  </style>
</html>

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi, can anybody help with how to change the font size of drop-down items/selections? Here is my dropdown:

<input type="dropdown" token="auftrag_tkn" searchWhenChanged="true" id="dropdownAuswahlAuftrag">
  <label>Auftrag</label>
  <fieldForLabel>Auftrag</fieldForLabel>
  <fieldForValue>Auftrag</fieldForValue>
  <search>
    <query>xxxxx</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
</input>
Strictly speaking, "$searchCriteria$" is not the same as $searchCriteria|s$, as the |s filter will deal with things such as embedded quotes, whereas just putting the token in double quotes will not. Having said that, in this instance, they are probably equivalent.
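As a purely hypothetical illustration (this value is not from the original thread): if the text-box value is error="timeout", then:

"$searchCriteria$"  becomes  "error="timeout""      (the embedded quotes terminate the string early)
$searchCriteria|s$  becomes  "error=\"timeout\""    (|s wraps the value in quotes and escapes the inner ones)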
Hi @wjrbrady,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors
After the Splunk Master enters maintenance mode, one of the indexers goes offline and then comes back online, and maintenance mode is disabled. The fixup tasks have been stuck for about a week. The number of pending fixup tasks went from around 5xx to 102 after deleting the rb bucket. I assume it is an issue with bucket syncing in the indexer cluster, because the client's server is a bit laggy (network delay, low CPU).

There are 40 fixup tasks in progress and 102 fixup tasks pending on the indexer cluster master. The internal log shows that all 40 in-progress tasks are displaying the following errors:

Getting size on disk: Unable to get size on disk for bucket id=xxxxxxxxxxxxx path="/splunkdata/windows/db/rb_xxxxxx" (This is usually harmless as we may be racing with a rename in BucketMover or the S2SFileReceiver thread, or merge-buckets command which should be obvious in log file; the previous WARN message about this path can safely be ignored.) caller=serialize_SizeOnDisk

Delete dir exists, or failed to sync search files for bid=xxxxxxxxxxxxxxxxxxx; will build bucket locally. err= Failed to sync search files for bid=xxxxxxxxxxxxxxxxxxx from srcs=xxxxxxxxxxxxxxxxxxxxxxx

CMSlave [6205 CallbackRunnerThread] - searchState transition bid=xxxxxxxxxxxxxxxxxxxxx from=PendingSearchable to=Unsearchable reason='fsck failed: exitCode=24 (procId=1717942)'

Getting size on disk: Unable to get size on disk for bucket id=xxxxxxxxxxxxx path="/splunkdata/windows/db/rb_xxxxxx" (This is usually harmless as we may be racing with a rename in BucketMover or the S2SFileReceiver thread, or merge-buckets command which should be obvious in log file; the previous WARN message about this path can safely be ignored.) caller=serialize_SizeOnDisk

The internal log shows that all 102 pending tasks are displaying the following error:

ERROR TcpInputProc [6291 ReplicationDataReceiverThread] - event=replicationData status=failed err="Could not open file for bid=windows~xxxxxx err="bucket is already registered with this peer" (Success)"

Does anyone know what "fsck failed: exitCode=24" and "bucket is already registered with this peer" mean? How can these issues be resolved to reduce the number of fixup tasks? Thanks.
Hi @sigma

The first thing you could try is adding a $ to the end of the REGEX so that the match is forced to run to the end of the line.

Secondly, are there any other extractions that could be overlapping with this? It is just good to rule out the effects of other props.conf settings on your work!

Also, instead of DEST_KEY = _meta you could try WRITE_META = true like below, although I don't think this would affect your extraction here:

REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)$
FORMAT = name::$1 version::$2 message::$3
WRITE_META = true

Have you defined your fields.conf for the indexed fields? Add an entry to fields.conf for the new indexed field:

# fields.conf
[<your_custom_field_name>]
INDEXED = true

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
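Once the props/transforms and fields.conf changes are deployed and new data has been indexed, one way to confirm the field is genuinely index-time is a tstats search. This is only a sketch: your_index is a placeholder, and name is the field from the FORMAT line above.

| tstats count WHERE index=your_index BY name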