All Topics


Hi. I have a report showing who has added or removed a person to or from a group, like: index="win*" (EventCode=4728 OR EventCode=4729) | table _time, result, target_group, target_user, src_user. From time to time, however, this returns a user called "XXXXX01", "XXXXX02", etc., which is a shared account in our PAM solution. I can find the real user behind it by searching: index="PAM" duser="XXXXX*" "cn2=(Action: Connect)" | table _time, duser, suser, command, reason. How can I run the first search and then, whenever user = XXXXX01, find the latest time XXXXX01 was checked out relative to the time in the first search? At the moment we have to run the two searches side by side and manually match the correct time and checkout of the XXXXX account, since e.g. XXXXX01 can be checked out many times during a work day.
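One way to combine the two searches, sketched with the field names from your two queries (untested; `maxsearches` and the assumption that PAM events return newest first, making `head 1` the latest checkout, may need adjusting), is to drive the PAM search from each result row with `map`, bounding it by the group-change time:

```
index="win*" (EventCode=4728 OR EventCode=4729) src_user="XXXXX*"
| table _time, result, target_group, target_user, src_user
| map maxsearches=100 search="search index=PAM duser=$src_user$ \"cn2=(Action: Connect)\" latest=$_time$
    | head 1
    | eval change_time=$_time$, target_group=\"$target_group$\", target_user=\"$target_user$\"
    | table change_time, duser, suser, command, reason, target_group, target_user"
```

`latest=$_time$` restricts each PAM lookup to checkouts before the corresponding group change, and `head 1` keeps the most recent one.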
The Phantom default login credentials do not work for the AWS instance; neither "admin/password" nor "admin/<aws instance id>" works. I guess the Phantom login documentation is out of date. Can someone help, please?
Hi Splunkers, I have to configure, as an alert trigger action, an email whose body should contain some fields from the triggering events. I found this post here on the community: How to implement tokens in Email alert? It explains very clearly how to use the $result.<field_name>$ notation, but also that "the field you want to specify must be returned in the first result row of the search". So now a question arises. Suppose I have this sample search:   index=* sourcetype=cybereason:malware status=detected | stats count by machineName   It returns rows with 2 fields: the machine where the infection was detected and the event count. So I should be able to use $result.machineName$ and $result.count$ in my mail notification, but not $result.status$, because it is used only as a filter and is not returned in the search results. Am I wrong?
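You have read the limitation correctly, and one common workaround is to carry the filtering field through the stats so it lands in the result row (since status is constant here, this does not change the counts):

```
index=* sourcetype=cybereason:malware status=detected
| stats count by machineName, status
```

With status now a column of the first result row, $result.status$ becomes usable in the notification alongside $result.machineName$ and $result.count$.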
Hi Splunkers, we have to connect our on-prem SOAR solution (Palo Alto Cortex) to a Splunk Cloud instance. The dedicated SOAR integrations use the API and ask for: Username, Password, URL/Hostname/IP Address, Destination port. We have a problem with the destination port; we tried all the common Splunk ones (9997, 8000, 8089, 8443) but we always get a Connection Timeout error. I'm wondering whether, because we have a Splunk Cloud environment, we need to ask support some
Hi. I want to set up a report that runs on the third Monday of every month. Is there a way to do this? Any cron schedule I try doesn't let me commit the change, such as: 0 8 * * 1 [ "$(date +\%d -d 'today + 14 days')" -gt 14 ] && [ "$(date +\%u -d 'today + 14 days')" == 1 ] && echo "Run report"
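Splunk's scheduler accepts only the five plain cron fields, not shell command logic, which is why that expression will not commit. A common workaround (a sketch, not the only option) is to schedule the report every Monday:

```
0 8 * * 1
```

and add a guard inside the search itself, so results (and alert actions) are only produced when the date falls on the 15th through 21st, the only window in which a third Monday can occur:

```
| where tonumber(strftime(now(), "%d")) >= 15 AND tonumber(strftime(now(), "%d")) <= 21
```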
I have some users in my lookup that start with urn:forms:anonymous#. I was trying to discard them using urn:forms:anonymous#*. I guess I have to use regex to match all the users that start with urn:forms:anonymous#.
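Assuming the lookup field is called `user` and the lookup file name is a placeholder (adjust both to your actual lookup), a regex match is indeed one way to discard those rows:

```
| inputlookup my_lookup.csv
| where NOT match(user, "^urn:forms:anonymous#")
```

In an event search pipeline the equivalent is `| regex user!="^urn:forms:anonymous#"`; the `^` anchor ensures only values that start with the prefix are excluded.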
Hello Community, I am looking at deploying Splunk Enterprise on AWS on a HEFTY compute-optimized EC2 instance with attached EBS. I'd like to maximize the number of indexes on this EC2 instance, since search performance is of no concern. I see the default index size is 500 GB, but I also know I can configure indexes.conf however I want. For example, if I think I'll have ~97 TB of data, I could set maxVolumeDataSizeMB = 102603162 on a single BIG indexer. But of course, just because I can doesn't mean I should. I see no clear recommendation on how to design multiple indexes in relation to indexers and search heads, maybe because it always depends. In my case, since I don't care about performance, can I put everything on one BIG EC2 instance? Split it across 2 BIG machines? As in, install the indexer and SH on the same instance. Thanks in advance.
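For reference, a size cap like that is usually expressed once on a volume in indexes.conf and shared by the indexes, rather than per-index; a minimal sketch (paths, names, and the size are illustrative assumptions, not recommendations):

```
# indexes.conf -- illustrative sketch only
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 102603162

[my_big_index]
homePath = volume:primary/my_big_index/db
coldPath = volume:primary/my_big_index/colddb
thawedPath = $SPLUNK_DB/my_big_index/thaweddb
```

Note that thawedPath cannot reference a volume, hence the $SPLUNK_DB form.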
I can search my way into finding the result of a log-clearing event, but if I use a data model with tstats it doesn't show up. I think this might be because the action shows as action=deleted, but in reality I don't know. I am attaching a PNG of the issue and am just wondering what the best way is to fix this, or to change it so I get it in the way that fits.
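As a diagnostic (assuming the CIM Change data model; substitute whichever data model your tstats search uses), you can list the action values the accelerated data actually contains:

```
| tstats summariesonly=true count from datamodel=Change by All_Changes.action
```

If the value you expect is missing with summariesonly=true but appears with summariesonly=false, the acceleration or the field mapping (e.g. action=deleted vs. action=cleared) is the likely culprit.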
I have a saved search pushed to my Splunk app. The search only gives me partial results (9k events), whereas when I run the saved search in the "Search and Reporting" app I get the complete results (6000k events).

My savedsearches.conf inside my app directory "/opt/splunk/etc/apps/My_APP/local/savedsearches.conf":

[My_SavedSearch]
cron_schedule = 0 0 * * *
dispatch.earliest_time = -7y@y
dispatch.index_earliest = -7y@y
dispatch.index_latest = now
enableSched = 1
run_on_startup = 1
dispatch.max_count = 500000000
search = | pivot Authentication Authentication count(Authentication) AS totalcount SPLITROW sourcetype AS sourcetype SORT 100 sourcetype ROWSUMMARY 0 COLSUMMARY 0 SHOWOTHER 1 | eval modelname="Authentication"

(Screenshots attached: the saved search in my Splunk app, and the saved search in the Search app.)
Hi, I need help with the below: I have 2 field values for Status: RUNNING and SUCCESS. I want to generate a 1st alert when the status becomes RUNNING for the current day, and a 2nd alert when the status changes to SUCCESS. I don't want duplicate alerts while the status has not changed. Thanks
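One hedged approach (index and the job_id field are assumptions; adjust to your data) is to detect status transitions with streamstats, so each change produces exactly one row:

```
index=my_index earliest=@d
| sort 0 _time
| streamstats current=f last(Status) as prev_status by job_id
| where isnull(prev_status) OR Status!=prev_status
| table _time, job_id, Status
```

Scheduling this as the alert search, combined with alert throttling keyed on job_id and Status, should suppress duplicates until the status actually changes.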
Hello everyone,  I need some help with a spl request.  <row> <panel> <title>SUIVI DES FLUX - TRANSMISSION WS</title> <input type="dropdown" token="partenaire" searchWhenChanged="true"> <label>PARTENAIRE</label> <search> <query>index=rcd earliest=@mon latest=now |table partenaire |dedup partenaire</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <choice value="*">ALL</choice> <initialValue>*</initialValue> <default>*</default> <change> <condition value="*"> <set token="new_search">index=rcd earliest=@mon latest=now |search $partenaire$ |eval date_appel=strftime(_time,"%b %y") | eval nb_appel_OK=if(isnotnull(nb_appel) AND statut="OK", nb_appel, "0") | eval nb_appel_KO=if(isnotnull(nb_appel) AND statut="KO",nb_appel,"0") | eval temps_rep_min_OK=if(isnotnull(temps_rep_min) AND statut="OK", temps_rep_min, "0") | eval temps_rep_min_KO=if(isnotnull(temps_rep_min) AND statut="KO",temps_rep_min,"0") | eval temps_rep_max_OK=if(isnotnull(temps_rep_max) AND statut="OK", temps_rep_max, "0") | eval temps_rep_max_KO=if(isnotnull(temps_rep_max) AND statut="KO",temps_rep_max,"0")| eval temps_rep_moyen_OK=if(isnotnull(temps_rep_moyen) AND statut="OK", temps_rep_moyen, "0") | eval temps_rep_moyen_KO=if(isnotnull(temps_rep_moyen) AND statut="KO",temps_rep_moyen,"0") |stats sum(nb_appel_OK) as nb_appel_OK, sum(nb_appel_KO) as nb_appel_KO sum(temps_rep_min_OK) as temps_rep_min_OK, sum(temps_rep_min_KO) as temps_rep_min_KO sum(temps_rep_max_OK) as temps_rep_max_OK, sum(temps_rep_max_KO) as temps_rep_max_KO, sum(temps_rep_moyen_OK) AS temps_rep_moyen_OK, sum(temps_rep_moyen_KO) as temps_rep_moyen_KO values(nom_ws) as nom_ws, values(date_appel) as date_appel |table nom_ws partenaire date_appel nb_appel_OK nb_appel_KO temps_rep_min_OK temps_rep_min_KO temps_rep_max_OK temps_rep_max_KO temps_rep_moyen_OK temps_rep_moyen_KO |append [ search index=rcd earliest=-1d@d latest=@d partenaire=$partenaire$ |eval time=strftime(_time,"%Y-%m-%d") | eval 
nb_appel_OK=if(isnotnull(nb_appel) AND statut="OK", nb_appel, "0") | eval nb_appel_KO=if(isnotnull(nb_appel) AND statut="KO",nb_appel,"0") | eval temps_rep_min_OK=if(isnotnull(temps_rep_min) AND statut="OK", temps_rep_min, "0") | eval temps_rep_min_KO=if(isnotnull(temps_rep_min) AND statut="KO",temps_rep_min,"0") | eval temps_rep_max_OK=if(isnotnull(temps_rep_max) AND statut="OK", temps_rep_max, "0") | eval temps_rep_max_KO=if(isnotnull(temps_rep_max) AND statut="KO",temps_rep_max,"0")| eval temps_rep_moyen_OK=if(isnotnull(temps_rep_moyen) AND statut="OK", temps_rep_moyen, "0") | eval temps_rep_moyen_KO=if(isnotnull(temps_rep_moyen) AND statut="KO",temps_rep_moyen,"0") |stats sum(nb_appel_OK) as nb_appel_OK, sum(nb_appel_KO) as nb_appel_KO sum(temps_rep_min_OK) as temps_rep_min_OK, sum(temps_rep_min_KO) as temps_rep_min_KO sum(temps_rep_max_OK) as temps_rep_max_OK, sum(temps_rep_max_KO) as temps_rep_max_KO, sum(temps_rep_moyen_OK) AS temps_rep_moyen_OK, sum(temps_rep_moyen_KO) as temps_rep_moyen_KO values(nom_ws) as nom_ws values(partenaire) as partenaire , values(date_appel) as date_appel |table nom_ws partenaire date_appel nb_appel_OK nb_appel_KO temps_rep_min_OK temps_rep_min_KO temps_rep_max_OK temps_rep_max_KO temps_rep_moyen_OK temps_rep_moyen_KO] |eval partenaire="$partenaire$"</set> </condition> <condition match="NOT match('value', &quot;*&quot;)"> <set token="new_search">index=rcd earliest=@mon latest=now |search $partenaire$ |eval date_appel=strftime(_time,"%b %y") | eval nb_appel_OK=if(isnotnull(nb_appel) AND statut="OK", nb_appel, "0") | eval nb_appel_KO=if(isnotnull(nb_appel) AND statut="KO",nb_appel,"0") | eval temps_rep_min_OK=if(isnotnull(temps_rep_min) AND statut="OK", temps_rep_min, "0") | eval temps_rep_min_KO=if(isnotnull(temps_rep_min) AND statut="KO",temps_rep_min,"0") | eval temps_rep_max_OK=if(isnotnull(temps_rep_max) AND statut="OK", temps_rep_max, "0") | eval temps_rep_max_KO=if(isnotnull(temps_rep_max) AND statut="KO",temps_rep_max,"0")| 
eval temps_rep_moyen_OK=if(isnotnull(temps_rep_moyen) AND statut="OK", temps_rep_moyen, "0") | eval temps_rep_moyen_KO=if(isnotnull(temps_rep_moyen) AND statut="KO",temps_rep_moyen,"0") |stats sum(nb_appel_OK) as nb_appel_OK, sum(nb_appel_KO) as nb_appel_KO sum(temps_rep_min_OK) as temps_rep_min_OK, sum(temps_rep_min_KO) as temps_rep_min_KO sum(temps_rep_max_OK) as temps_rep_max_OK, sum(temps_rep_max_KO) as temps_rep_max_KO, sum(temps_rep_moyen_OK) AS temps_rep_moyen_OK, sum(temps_rep_moyen_KO) as temps_rep_moyen_KO values(nom_ws) as nom_ws, values(date_appel) as date_appel by partenaire |table nom_ws partenaire date_appel nb_appel_OK nb_appel_KO temps_rep_min_OK temps_rep_min_KO temps_rep_max_OK temps_rep_max_KO temps_rep_moyen_OK temps_rep_moyen_KO |append [ search index=rcd $partenaire$ earliest=-1d@d latest=@d |eval time=strftime(_time,"%Y-%m-%d") | eval nb_appel_OK=if(isnotnull(nb_appel) AND statut="OK", nb_appel, "0") | eval nb_appel_KO=if(isnotnull(nb_appel) AND statut="KO",nb_appel,"0") | eval temps_rep_min_OK=if(isnotnull(temps_rep_min) AND statut="OK", temps_rep_min, "0") | eval temps_rep_min_KO=if(isnotnull(temps_rep_min) AND statut="KO",temps_rep_min,"0") | eval temps_rep_max_OK=if(isnotnull(temps_rep_max) AND statut="OK", temps_rep_max, "0") | eval temps_rep_max_KO=if(isnotnull(temps_rep_max) AND statut="KO",temps_rep_max,"0")| eval temps_rep_moyen_OK=if(isnotnull(temps_rep_moyen) AND statut="OK", temps_rep_moyen, "0") | eval temps_rep_moyen_KO=if(isnotnull(temps_rep_moyen) AND statut="KO",temps_rep_moyen,"0") |stats sum(nb_appel_OK) as nb_appel_OK, sum(nb_appel_KO) as nb_appel_KO sum(temps_rep_min_OK) as temps_rep_min_OK, sum(temps_rep_min_KO) as temps_rep_min_KO sum(temps_rep_max_OK) as temps_rep_max_OK, sum(temps_rep_max_KO) as temps_rep_max_KO, sum(temps_rep_moyen_OK) AS temps_rep_moyen_OK, sum(temps_rep_moyen_KO) as temps_rep_moyen_KO values(nom_ws) as nom_ws values(partenaire) as partenaire , values(date_appel) as date_appel |mvexpand partenaire 
|table nom_ws partenaire date_appel nb_appel_OK nb_appel_KO temps_rep_min_OK temps_rep_min_KO temps_rep_max_OK temps_rep_max_KO temps_rep_moyen_OK temps_rep_moyen_KO]</set> </condition> </change> <fieldForLabel>partenaire</fieldForLabel> <fieldForValue>partenaire</fieldForValue> </input> <html> <div id="htmlPanelWithToken"> </div> </html> </panel> </row>   I use two searches with a value condition depending on the value of filter : partenaire. I need to use this search to make it work with my js script. I don't know how to add the value conditions to the query below. <search id="mySearch"> <done> <set token="tokHTML">$result.data$</set> </done> <query>index=rcd_statuts_count libelle=web_service_supervision_count | search partenaire IN ($partenaire$) |eval date_appel=strftime(_time,"%b %y")|table nom_ws partenaire date_appel nb_appel_OK nb_appel_KO temps_rep_min_OK temps_rep_min_KO temps_rep_max_OK temps_rep_max_KO temps_rep_moyen_OK temps_rep_moyen_KO | eventstats sum(nb_appel_OK) as sum_nb_appel_ok sum(nb_appel_KO) as sum_nb_appel_ko |append [ search index=rcd earliest=-1d@d latest=@d | eval nb_appel_OK=if(isnotnull(nb_appel) AND statut="OK", nb_appel, "0") | eval nb_appel_KO=if(isnotnull(nb_appel) AND statut="KO",nb_appel,"0") | eval temps_rep_min_OK=if(isnotnull(temps_rep_min) AND statut="OK", temps_rep_min, "0") | eval temps_rep_min_KO=if(isnotnull(temps_rep_min) AND statut="KO",temps_rep_min,"0") | eval temps_rep_max_OK=if(isnotnull(temps_rep_max) AND statut="OK", temps_rep_max, "0") | eval temps_rep_max_KO=if(isnotnull(temps_rep_max) AND statut="KO",temps_rep_max,"0")| eval temps_rep_moyen_OK=if(isnotnull(temps_rep_moyen) AND statut="OK", temps_rep_moyen, "0") | eval temps_rep_moyen_KO=if(isnotnull(temps_rep_moyen) AND statut="KO",temps_rep_moyen,"0") |stats sum(nb_appel_OK) as nb_appel_OK, sum(nb_appel_KO) as nb_appel_KO sum(temps_rep_min_OK) as temps_rep_min_OK, sum(temps_rep_min_KO) as temps_rep_min_KO sum(temps_rep_max_OK) as temps_rep_max_OK, 
sum(temps_rep_max_KO) as temps_rep_max_KO, sum(temps_rep_moyen_OK) AS temps_rep_moyen_OK, sum(temps_rep_moyen_KO) as temps_rep_moyen_KO values(nom_ws) as nom_ws values(partenaire) as partenaire , values(date_appel) as date_appel |table nom_ws partenaire date_appel nb_appel_OK nb_appel_KO temps_rep_min_OK temps_rep_min_KO temps_rep_max_OK temps_rep_max_KO temps_rep_moyen_OK temps_rep_moyen_KO | eventstats sum(nb_appel_OK) as sum_nb_appel_ok sum(nb_appel_KO) as sum_nb_appel_ko]</query> <done> <condition> <set token="nom_ws">$nom_ws$</set> <set token="partenaire">$partenaire$</set> <set token="date_appel">$date_appel$</set> <set token="sum_nb_appel_ok">$result.sum_nb_appel_ok$</set> <set token="sum_nb_appel_ko">$result.sum_nb_appel_ko$</set> </condition> </done> Thank you so much      
Hi Team, I am trying to write a search query that checks whether an existing filename is present in the logs or not. Here is what my static query looks like: index="xyz" fileName="this.is.my.file.received.on.202306.test.json"  Below is the query where I tried to pass the dynamic part into the filename, but it didn't work: index="xyz" fileName="this.is.my.file.received.on.{yourtime}.test.json" | eval yourtime = strftime(_time, "%Y-%m")   My questions: 1. How can I pass the dynamic part into the query? 2. Can I use the same search query logic for creating a dashboard too? Any help is appreciated. Thanks
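One hedged way to do it: SPL cannot substitute a runtime value into the base search string, but you can wildcard the base search and compare afterwards with eval/where:

```
index="xyz" fileName="this.is.my.file.received.on.*.test.json"
| eval expected="this.is.my.file.received.on." . strftime(now(), "%Y%m") . ".test.json"
| where fileName=expected
```

Note the sample filename uses 202306, i.e. %Y%m with no dash. The same logic works in a dashboard panel; there you could also build the month from a time-picker token instead of now().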
I am getting the log file imported into Splunk, but each line is an event with no field names. Can I break the line up into columns? If not, how do I parse the line to extract a number? The index is: index=test_7d sourcetype=kafka:producer:bigfix Events are: 2023-06-22 09:15:44,270 root - INFO - 114510 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka 2023-06-22 09:15:37,204 root - INFO - Executing getDatafromDB 2023-06-22 09:15:35,704 root - INFO - 35205 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka 2023-06-22 09:15:33,286 root - INFO - Executing getDatafromDB 2023-06-22 09:15:32,703 root - INFO - 167996 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka 2023-06-22 09:15:22,479 root - INFO - Executing getDatafromDB 2023-06-22 09:15:19,031 root - INFO - 181 events have been uploaded to topic DC2_Endpoint_Configuration_IBM_BigFix_Patch_Join on Kafka Each line/event starts with the date; the word wrap makes it look incorrect. I need to parse the bold number on each line after '- INFO -' and add a zero if there is no number. I can do this with an eval, but how do I parse when there is no field name to give to the 'regex' command? For example, here I'm using 'regex' to remove operating systems from a dataset on a field named 'operating_system', which is one column of a sourcetype: | regex operating_system!="(Linux|AIX|CENTOS|WINDOWS|Digital UNIX|FreeBSD|HP-UX|Hyper-V|Juniper|Mac|Windows|NetBSD|OpenBSD|OpenVMS|Server 2012|Server Core 2012|Server 2016|Server 2019|Ubuntu|Solaris|Unix|ESX|vCenter Server|rbash|[\*\*\*\*\*\*]|\A[\-\-\-\-\-\-\-\-\-\-]|[\=\=\=\=\=\=\=\=\=\=])" I found the erex command, which works: | erex ImportCount examples="0,35205,114510" But you have to enter a sample of the text you are looking for.
So it only works for one day and then has to be changed; the examples are values from the dataset, but every day the log file changes with new values. Can regex be used in place of the examples?
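Since rex runs against the raw event text (_raw) by default, no field name is needed; a sketch matching the sample events above:

```
index=test_7d sourcetype=kafka:producer:bigfix
| rex "-\s+INFO\s+-\s+(?<ImportCount>\d+)\s+events have been uploaded"
| fillnull value=0 ImportCount
| table _time, ImportCount
```

Events without a leading number (the "Executing getDatafromDB" lines) simply fail the match, and fillnull supplies the zero.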
The server whose log files we are monitoring is in the EDT timezone; the indexers are in UTC. The problem is that the logs print timestamps that are 5 hours ahead of the actual system time (example: the system time is 6 PM EDT, but the log file time shows 11 PM). Now, when these events get indexed, the event time shows 5 hours ahead of the index time, because Splunk takes the timestamp as system time (EDT) and converts it to UTC. What can be done to get the correct event time? Please help.
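Timestamp interpretation is set per sourcetype with the TZ attribute in props.conf, placed on the first Splunk component that parses the data (indexer or heavy forwarder; on a universal forwarder only for structured/INDEXED_EXTRACTIONS inputs). A sketch, with the sourcetype name and zone as assumptions; set TZ to whatever zone the timestamps are actually written in:

```
[my_sourcetype]
TZ = America/New_York
```

This only affects newly indexed events; already-indexed data keeps its original event time.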
Hi, we have been running indexer pods with SmartStore on S3 for a while without problems. After upgrading to AWS EKS v1.24+, the pods can't read EC2 instance metadata anymore, and thus S3 authentication fails. Is anyone running this successfully?  AWS EKS v1.24+ -- Splunk 8.2.* / 9.0.* -- Splunk Operator 1.1.0 / 2.2.1 The documentation at https://docs.splunk.com/Documentation/Splunk/9.0.5/admin/Indexesconf states: remote.s3.access_key = <string> * Specifies the access key to use when authenticating with the remote storage system supporting the S3 API. * If not specified, the indexer will look for these environment variables: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY (in that order). * If the environment variables are not set and the indexer is running on EC2, the indexer attempts to use the access key from the IAM role. * Optional. * No default.  Is anyone running this with an IAM role, without setting AWS access and secret keys in indexes.conf? Thankful for any hints, Arndt
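A frequently reported cause on EKS 1.24+ is that the nodes enforce IMDSv2 with a hop limit of 1, which blocks pods (one extra network hop behind the node) from the metadata service. One commonly cited mitigation, worth checking against your security policy, is raising the hop limit on the worker nodes (the instance ID below is a placeholder):

```
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required \
  --http-put-response-hop-limit 2
```

The alternative direction is IAM Roles for Service Accounts (IRSA), which avoids the instance metadata path entirely, though whether your Splunk version picks up web identity credentials for SmartStore needs verifying.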
We are trying to write a Python synthetic script to monitor our websites (https://docs.appdynamics.com/appd/4.5.x/en/end-user-monitoring/browser-monitoring/browser-synthetic-monitoring/synthetic-scripts). Some of the scripts needed to access our websites require downloading JS scripts from internal hosts with untrusted certificates (we use an internal CA). How can I initialize my own driver with my own capabilities? I.e.: caps = webdriver.DesiredCapabilities.CHROME.copy() caps['acceptInsecureCerts'] = True driver = webdriver.Chrome(desired_capabilities=caps) This gets overridden with: WARNING: A Driver object is created automatically and is available as a local variable called `driver`. Ignoring call to `Chrome` on line 22 by returning `driver` reference.
I want to send a customized email from a Splunk ES adaptive response action. How do I add a custom template for the email message? Second, can I make To and Subject dynamic for each notable, picked up from an event field? Thanks
We are developing a custom search command that requires access to an authenticated third-party service. As the docs state: `As a Splunk Cloud Platform user, you are restricted to interacting with the search tier only with the REST API. You cannot access other tiers by using the REST API. Splunk Support manages all tiers other than the search tier.` Does this apply, by extension, to add-ons executed by users?
We are hosting our Splunk instances on AWS EC2 and will begin using EBS encryption. I haven't found any clear answer as to whether we have to make config changes to Splunk. Can anyone provide clarification?
We are running Splunk 9.0.5. We want to add an index to the default indexes for a user role, but the index does not show up in the list of indexes in the "Edit User Role" window, "Indexes" tab, on the search head. There is data in the index, and we do see the index in the monitoring console under Indexing / Index Detail: Deployment. We also added the following to /opt/splunk/etc/system/local/server.conf on the search head: [introspection:distributed-indexes] disabled = false (and restarted the Splunk service on the search head afterwards). The index was created earlier (before 9.0.5) via the master node file /opt/splunk/etc/master-apps/_cluster/local/indexes.conf (now moved to manager-apps). A push of the bundle did not make any changes (the peers already had the correct version). What else could be the issue here?
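As a cross-check, the role-to-index mapping can also be set directly in authorize.conf on the search head, bypassing the UI list (role and index names below are placeholders):

```
[role_my_role]
srchIndexesAllowed = my_index
srchIndexesDefault = my_index
```

If the role can then search the index fine, the problem is only the UI's index enumeration rather than the index or the role mapping itself.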