All Topics



Dear Community members, it would be helpful if someone could assist me with relevant documentation on deploying Splunk Enterprise Security on AWS containers versus AWS EC2 instances, so I can compare which deployment model Splunk will support for any future issues and upgrades.
https://docs.splunk.com/Documentation/ES/7.3.2/Install/InstallEnterpriseSecurity I installed Splunk Enterprise Security to verify operation, following the manual above, and completed the initial configuration. When I open Incident Review from the app's menu bar, an error message appears: "An error occurred while loading some filters." When I open Investigations, another error appears: "Unknown Error: Failed to fetch KV Store." I cannot display Incident Review or Investigations. Is anyone else experiencing the same issue?
Hi Splunk Community, I’m new to integrating Citrix NetScaler with Splunk, but I have about 9 years of experience working with Splunk. I need your guidance on how to successfully set up this integration to ensure that: All data from NetScaler is ingested and extracted correctly. The dashboards in the Splunk App for Citrix NetScaler display the expected panels and trends. Currently, I have a 3-machine Splunk environment (forwarder, indexer, and search head). Here's what I’ve done so far: I installed the Splunk App for Citrix NetScaler on the search head. Data is being ingested from the NetScaler server via the heavy forwarder, but I have not installed the Splunk Add-on for Citrix NetScaler on the forwarder or indexer. Despite this, the dashboards in the app show no data. From your experience, is it necessary to install the Splunk Add-on for Citrix NetScaler on the heavy forwarder (or elsewhere) to extract and normalize the data properly? If so, would that resolve the issue of empty dashboards? Any insights or steps to troubleshoot and ensure proper integration would be greatly appreciated! Thanks in advance!    
From the Splunk documentation (https://docs.splunk.com/observability/en/rum/rum-rules.html#use-cases, "Write custom rules for URL grouping in Splunk RUM — Splunk Observability Cloud documentation"): "Write custom rules to group URLs based on criteria that matches your business specifications, and organize data to match your business needs. Group URLs by both path and domain. You also need custom URL grouping rules to generate page-level metrics (rum.node.*) in Splunk RUM." As per the Splunk documentation, we have configured custom URL grouping, but the rum.node.* metrics are not available. Please help with this. Path configured:
Hello there, I would like to pass multiple values in the label. With the current search I can only pass one value at a time:

<input type="multiselect" token="siteid" searchWhenChanged="true">
  <label>Site</label>
  <choice value="*">All</choice>
  <choice value="03">No Site Selected</choice>
  <fieldForLabel>displayname</fieldForLabel>
  <fieldForValue>prefix</fieldForValue>
  <search>
    <query>| inputlookup site_ids.csv | search displayname != "ABCN8" AND displayname != "ABER8" AND displayname != "AFRA7" AND displayname != "AMAN2"</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
  <delimiter>_fc7 OR index=</delimiter>
  <suffix>_fc7</suffix>
  <default>03</default>
  <initialValue>03</initialValue>
  <change>
    <eval token="form.siteid">case(mvcount('form.siteid') == 2 AND mvindex('form.siteid', 0) == "03", mvindex('form.siteid', 1), mvfind('form.siteid', "\\*") == mvcount('form.siteid') - 1, "03", true(), 'form.siteid')</eval>
  </change>
  <change>
    <set token="tokLabel">$label$</set>
  </change>
</input>

I need to pass this label value as well, which is a multiselect value. Thanks!
The choropleth map provides city-level resolution. Is there a way to get higher resolution, such as street or block level? Thanks!
Hi dear Splunkers, I have been working on creating a custom TA for counting Unicode characters in a non-English dataset (long-story discussion post linked in the PS) and am getting these lookup file errors:

1) Error in 'lookup' command: Could not construct lookup 'ucd_count_chars_lookup, _raw, output, count'. See search.log for more details.
2) The lookup table 'ucd_count_chars_lookup' does not exist or is not available. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

The custom TA creation steps I followed (on my personal laptop, with a bare-minimum fresh 9.3.2 Enterprise trial install):

1) Created the custom TA named "TA-ucd" on the app creation page (read for all, execute for owner, shared with all apps).

2) Created $SPLUNK_HOME/etc/apps/TA-ucd/bin/ucd_category_lookup.py (this file should be readable and executable by the Splunk user, i.e. have at least mode 0500):

#!/usr/bin/env python
import csv
import unicodedata
import sys

def main():
    if len(sys.argv) != 3:
        print("Usage: python category_lookup.py [char] [category]")
        sys.exit(1)
    charfield = sys.argv[1]
    categoryfield = sys.argv[2]
    infile = sys.stdin
    outfile = sys.stdout
    r = csv.DictReader(infile)
    header = r.fieldnames
    w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
    w.writeheader()
    for result in r:
        if result[charfield]:
            result[categoryfield] = unicodedata.category(result[charfield])
        w.writerow(result)

main()

$SPLUNK_HOME/etc/apps/TA-ucd/default/transforms.conf:

[ucd_category_lookup]
external_cmd = ucd_category_lookup.py char category
fields_list = char, category
python.version = python3

$SPLUNK_HOME/etc/apps/TA-ucd/metadata/default.meta:

[]
access = read : [ * ], write : [ admin, power ]
export = system

3) After creating the three files mentioned above, I restarted the Splunk service, and the laptop as well.
4) The search still fails with the lookup errors mentioned above.
5) source=*search.log* does not produce anything (surprisingly!)
Could you please upvote the idea: https://ideas.splunk.com/ideas/EID-I-2176 PS: the long story is available here: https://community.splunk.com/t5/Splunk-Search/non-english-words-length-function-not-working-as-expected/m-p/705650
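For reference, an external lookup is invoked in SPL by the stanza name defined in transforms.conf. A minimal sanity-check sketch (assuming the [ucd_category_lookup] stanza above, with an arbitrary test character):

```
| makeresults
| eval char="A"
| lookup ucd_category_lookup char OUTPUT category
```

Note that the errors quoted above reference a lookup named ucd_count_chars_lookup, which does not match the [ucd_category_lookup] stanza shown, so the search may simply be calling the lookup by a different name than the one defined.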
I have created a dashboard and am trying to assign different field colors. I navigated to "Source" and tried updating the XML code as "charting.fieldColors">{"Failed Logins":"#FF9900", "NonCompliant_Keys":"#FF0000", "Successful Logins":"#009900", "Provisioning Successful":"#FFFF00"</option>", but all columns are still showing as purple. Can someone help me with it?
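For comparison, a well-formed charting.fieldColors option in Simple XML needs the complete <option> element, including the closing brace of the JSON map. A sketch using the field names from the snippet above (a malformed JSON value is silently ignored, which leaves the default palette in place):

```
<option name="charting.fieldColors">{"Failed Logins": "#FF9900", "NonCompliant_Keys": "#FF0000", "Successful Logins": "#009900", "Provisioning Successful": "#FFFF00"}</option>
```

The field names in the map must match the series names in the chart exactly, including case and spaces.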
We are looking to configure the Splunk Add-on for Microsoft Cloud Services to use a Service Principal as opposed to a client key. The documentation for the add-on does not provide insight into how one would configure it to work with a Service Principal. Does the Splunk Add-on for Microsoft Cloud Services support service principals for authentication?
I have a heavy forwarder that sends the same event to two different indexer clusters. This event has a new field "X" that I only want to see in one of the indexer clusters. I know that in props.conf I can configure the sourcetype to remove the field, but that applies at the sourcetype level. Is there any way to remove it from one copy and not the other? Alternatively, I could make the props.conf change at the indexer level instead.
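One pattern sometimes used for this is to clone the event into a second sourcetype on the heavy forwarder, strip the field from the clone, and route each sourcetype to a different output group. A sketch, where every stanza and group name is an illustrative assumption (the output groups would be defined in outputs.conf, and the SEDCMD pattern depends on how field X actually appears in _raw):

```
# transforms.conf on the heavy forwarder
[clone_for_cluster_b]
REGEX = .
CLONE_SOURCETYPE = my:sourcetype:b

[route_to_a]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = cluster_a_group

[route_to_b]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = cluster_b_group

# props.conf on the heavy forwarder
[my:sourcetype]
TRANSFORMS-clone = clone_for_cluster_b
TRANSFORMS-route = route_to_a

[my:sourcetype:b]
TRANSFORMS-route = route_to_b
# strip field X from the cloned copy only (pattern is an assumption)
SEDCMD-remove_x = s/X=\S+\s?//g
```

This keeps the original copy untouched for one cluster while the modified clone goes to the other, at the cost of a second sourcetype name.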
We have a new SH node that we are trying to add to the search head cluster; we updated the shcluster config and other configs. After adding this node, we now have two nodes as part of the SH cluster. We can see both nodes up and running as part of the cluster when we check with "splunk show shcluster-status". But when we check the KV store status with "splunk show kvstore-status", the old node shows as captain, while the newly built node is not joining the KV store cluster and logs the error below.

Error in splunkd.log on the search head with the issue:

12-04-2024 16:36:45.402 +0000 ERROR KVStoreBulletinBoardManager [534432 KVStoreConfigurationThread] - Local KV Store has replication issues. See introspection data and mongod.log for details. Cluster has not been configured on this member. KVStore cluster has not been configured

We have configured all the cluster-related info in server.conf on the newly built search head and don't see any configs missing. We also see the error below on the SH UI Messages tab:

Failed to synchronize configuration with KVStore cluster. Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: search-head01:8191; the following nodes did not respond affirmatively: search-head01:8191 failed with Error connecting to search-head01:8191 (172.**.***.**:8191) :: caused by :: compression disabled.

Has anyone faced this error before? We need some support here.
I need to display the list of all failed status codes in a column, by consumer. Desired final result:

Consumers | Errors | Total_Requests | Error_Percentage | list_of_Status
Test      | 10     | 100            | 10               | 500 400 404

Is there a way to display the failed status codes in the list_of_Status column as well?

index=test | stats count(eval(status>399)) as Errors, count as Total_Requests by consumers | eval Error_Percentage=((Errors/Total_Requests)*100)
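One way to collect the failing codes in the same stats pass is values() over a conditional eval, which keeps only status values above 399. A sketch against the same search (index and field names as in the post):

```
index=test
| stats count(eval(status>399)) as Errors,
        count as Total_Requests,
        values(eval(if(status>399, status, null()))) as list_of_Status
        by consumers
| eval Error_Percentage=round((Errors/Total_Requests)*100, 2)
```

values() returns the distinct failing codes as a multivalue field; use list() instead if duplicates should be kept.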
Dear all, where could I find the Availability Trend metric for a job, to use it in a dashboard? I want to replicate it exactly as it appears in the image, but in a dashboard. ^Post edited by @Ryan.Paredez to translate the post.
Hello, I am having issues configuring the HTTP Event Collector on my organization's Splunk Cloud instance. I have set up a token and have been trying to test using the example curl commands. However, I am having trouble discerning which endpoint is the correct one. I have tested several endpoint formats:

- https://<org>.splunkcloud.com:8088/services/collector
- https://<org>.splunkcloud.com:8088/services/collector/event
- https://http-inputs-<org>.splunkcloud.com:8088/services/collector...
- several others that I have forgotten.

For context, I do receive a response when I GET https://<org>.splunkcloud.com/services/server/info. From what I understand, you cannot change the port from 8088 on a cloud instance, so I do not think it is a port error. Can anyone point me to any resources that would help me determine the correct endpoint? (Not this: Set up and use HTTP Event Collector in Splunk Web - Splunk Documentation. I've browsed for hours trying to find a more comprehensive resource.) Thank you!
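For what it's worth, on Splunk Cloud Platform (as opposed to Splunk Enterprise) the HEC endpoint generally uses the http-inputs- hostname prefix and HTTPS port 443 rather than 8088; treat both as assumptions to verify against your own stack type. A sketch, with <org> and <hec-token> as placeholders:

```
curl "https://http-inputs-<org>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hello world", "sourcetype": "manual"}'
```

A successful send returns {"text":"Success","code":0}; an HTTP 403/401 usually points at the token, while a connection timeout usually points at the hostname or port.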
We ran into an issue with the indexer: if the maximum daily volume is exceeded five times, the indexer is disabled. That is now the case with our installation:

Error in tstats command: your Splunk license expired or you have exceeded license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK

The license is now Free, and there is no way back to the Enterprise trial once the license has expired.
I deleted my custom dashboard from the dashboard list on my AppDynamics SaaS Controller, is there a way I can recover a deleted dashboard?
Hello, I've created a simple app, let's call it IT_Users_App, linked to a certain role called it_user. In the app, a user with the role above can see hundreds of OOTB dashboards by default. I would like to hide those OOTB dashboards from the app/role in a bulk action; doing so one by one will not be fun. Is there a way to accomplish that? Thanks in advance.
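One bulk approach worth testing, sketched under the assumption that the OOTB dashboards are exported from another app: a [views] stanza in that app's metadata/local.meta applies to every dashboard in the app at once, so restricting read access there hides them from roles like it_user without touching each view individually.

```
# metadata/local.meta in the app that ships the OOTB dashboards
# (role names below are illustrative assumptions)
[views]
access = read : [ admin ], write : [ admin ]
export = none
```

Per-view stanzas such as [views/some_dashboard] still override this default, and a restart or debug/refresh is needed for metadata changes to take effect.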
Hi All, I have a Bluecoat proxy log source for which I am using the official Splunk add-on. However, I noticed that the timestamp is not being parsed from the logs; the index time is being used instead. To remedy this, I added a custom props.conf in ../etc/apps/Splunk_TA_bluecoat-proxysg/local with the following stanza:

[bluecoat:proxysg:access:syslog]
TIME_FORMAT=%Y-%m-%d %H:%M:%S
TIME_PREFIX=^

The rest of the configuration is the same as in the base app (Splunk_TA_bluecoat-proxysg). During testing, when I upload logs through Add Data, the timestamp is parsed properly. However, when I start using SplunkTCP to ingest the data, the timestamp extraction stops working. Note that in both scenarios, the rest of the parsing configuration (field extraction and mapping) works just fine.

Troubleshooting:
1. I checked props with btool and can see the custom stanza I added.
2. I tried putting the props in ../etc/system/local.
3. I restarted Splunk multiple times.

Any ideas on what I can try to get this to work, or where I should look?

Sample log:
2024-12-03 07:30:06 9 172.24.126.56 - - - - "None" - policy_denied DENIED "Suspicious" - 200 TCP_ACCELERATED CONNECT - tcp beyondwords-h0e8gjgjaqe0egb7.a03.azurefd.net 443 / - - "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0" 172.29.184.14 39 294 - - - - - "none" "none" "none" 7 - - 631d69b45739e3b6-00000000df56e125-00000000674eb37e - -

Splunk Search (streaming data): Splunk Search (uploaded data):
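A general point that may explain the symptom above: timestamp extraction happens during parsing, and data arriving over splunktcp from a full Splunk instance (e.g. a heavy forwarder) is already parsed ("cooked") and is not re-parsed downstream, so props on the receiving indexer have no effect on _time. Under that assumption, the stanza belongs on the first full Splunk instance in the path. A sketch (MAX_TIMESTAMP_LOOKAHEAD added as a common companion setting, sized to the 19-character timestamp in the sample log):

```
# props.conf on the first heavy forwarder / full instance
# that receives the raw Bluecoat feed
[bluecoat:proxysg:access:syslog]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```

This would also be consistent with Add Data working: an upload through Splunk Web is parsed locally on that instance, where the custom stanza is visible.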
Hello there, I'm having issues with a multiselect input dropdown:

<input type="multiselect" token="siteid" searchWhenChanged="true">
  <label>Site</label>
  <choice value="*">All</choice>
  <choice value="03">No Site Selected</choice>
  <fieldForLabel>displayname</fieldForLabel>
  <fieldForValue>prefix</fieldForValue>
  <search>
    <query>| inputlookup site_ids.csv | search displayname != "ABN8" AND displayname != "ABR8" AND displayname != "ABRA7" AND displayname != "ABMAN2"</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
  <delimiter>_fc7 OR index=</delimiter>
  <suffix>_fc7</suffix>
  <default>03</default>
  <initialValue>03</initialValue>
  <change>
    <eval token="form.siteid">case(mvcount('form.siteid') == 2 AND mvindex('form.siteid', 0) == "03", mvindex('form.siteid', 1), mvfind('form.siteid', "\\*") == mvcount('form.siteid') - 1, "03", true(), 'form.siteid')</eval>
  </change>
</input>
<input type="multiselect" token="system_number" searchWhenChanged="true">
  <label>Node</label>
  <choice value="*">All</choice>
  <default>*</default>
  <initialValue>*</initialValue>
  <fieldForLabel>Node</fieldForLabel>
  <fieldForValue>sys_number</fieldForValue>
  <change>
    <eval token="form.system_number">case(mvcount('form.system_number') == 2 AND mvindex('form.system_number', 0) == "*", mvindex('form.system_number', 1), mvfind('form.system_number', "\\*") == mvcount('form.system,_number') - 1, "*", true(), 'form.system_number')</eval>
  </change>
  <search>
    <query>| inputlookup node.csv | fields site prefix Node sys_number | eval token_value = "$siteid$" | eval site_val = if(match(token_value, "OR\s*index="), split(replace(token_value, "\s*OR\s*index=\s*", ","), ","), token_value) | where prefix=site_val | dedup Node | table Node sys_number</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <prefix>"</prefix>
  <suffix>"</suffix>
  <valueSuffix>","</valueSuffix>
  <delimiter> </delimiter>
</input>

The problem here is that I need the field for label to be Node. But when I select a value in siteid, then select a value in Node, and after that select a second value in siteid, the Node input changes its displayed value to sys_number, when it should actually show Node, since we set fieldForLabel to Node. This only happens after selecting values in Node; if we only select values in siteid, the Node input behaves fine. Otherwise it's fine. Thanks!
Hello Community, I am trying to create a connection so that I can send metrics arriving on UDP port 8125 from Splunk Enterprise (running locally) to Splunk Cloud (prd-p-7mh2z.splunkcloud.com), but I am getting the error below. Since I need to receive UDP data on port 8125, I am using a heavy forwarder instead of a universal forwarder, and I have configured the heavy forwarder to point to "prd-p-7mh2z.splunkcloud.com:9997".

Error on the dashboard:

```
The TCP output processor has paused the data flow. Forwarding to host_dest=prd-p-7mh2z.splunkcloud.com inside output group default-autolb-group from host_src=rahusri2s-MacBook-Pro.local has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
```

# cat /Applications/splunk/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group
indexAndForward = 1

[tcpout:default-autolb-group]
server = prd-p-7mh2z.splunkcloud.com:9997

[tcpout-server://prd-p-7mh2z.splunkcloud.com:9997]

# cat /Applications/splunk/etc/apps/search/local/inputs.conf
[splunktcp://9997]
connection_host = ip

[udp://8125]
connection_host = dns
host = rahusri2s-MacBook-Pro.local
index = 4_dec_8125_udp
sourcetype = statsd

Thanks in advance. #splunk
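For what it's worth, forwarding to Splunk Cloud generally requires installing the Universal Forwarder credentials package (splunkclouduf.spl, downloaded from your Splunk Cloud instance), which carries the SSL certificates and outputs settings the cloud indexers expect; a bare server= line without those certificates is typically not accepted, which would match the "not accepting data" symptom. A sketch, with the .spl path as a placeholder:

```
# install the credentials package on the heavy forwarder, then restart
/Applications/splunk/bin/splunk install app /path/to/splunkclouduf.spl
/Applications/splunk/bin/splunk restart
```

The package works on heavy forwarders as well as universal forwarders, and it supplies its own [tcpout] group, so the hand-written outputs.conf above may conflict with it and need trimming.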