All Posts

Hi Jerry, in the case where the TA is installed on both the indexer and the SH, where should the data input and all other configuration be done — on the SH, right (for a Splunk Cloud deployment)? Is the flow below correct? Data sources --> HF (syslog server, TA not required) --> Cloud indexer (with TA) --> Cloud SH (with TA). I'd also suggest updating the add-on documentation to include clear details on this, please; that would help. I have Splunk Cloud with ITSI (not ES) and I want to test the Fortinet Add-on.
Dear Community members, it would be helpful if someone could point me to relevant documentation on deploying Splunk Enterprise Security on AWS containers versus AWS EC2 instances, so I can compare which model Splunk will support for any future issues and upgrades.
Thank you, gcusello, for your response! I would appreciate it if you could provide more details about the importance of installing this add-on. Additionally, could you clarify who owns the add-on and the application — were they developed by Splunk or NetScaler? Thank you in advance!
Hi @Amira, you have to install the Splunk Add-on for Citrix NetScaler on the Heavy Forwarder as well. Then you have to create the index on the indexer and make sure that the data is stored in the correct index. Ciao. Giuseppe
Hi @masakazu, did you follow all the instructions in the above link? I have never experienced the above issue. My hint is to repeat all the steps in the installation procedure, checking whether you have KV store issues in your Splunk installation (the `splunk show kvstore-status` CLI command is one way to check). If that doesn't solve it, open a case with Splunk Support. Ciao. Giuseppe
https://docs.splunk.com/Documentation/ES/7.3.2/Install/InstallEnterpriseSecurity Following the manual above, I installed Splunk Enterprise Security to verify operation and configured the initial settings. When I opened the incident view in the app's menu bar, an error message appeared saying "An error occurred while loading some filters." When I opened the investigation, an error message appeared saying "Unknown Error: Failed to fetch KV Store." I can't display Incident Review or investigations. Is anyone else experiencing the same issue?
Please do not repeat the same question. If needed, you can edit the post to correct it or add more information; alternatively, delete one of them. Your post does not demonstrate that only one value is passed at a time. How do you know? Without showing the data, the selections you make, the search you use, and the output, no one can read your mind. Here is a dashboard I constructed for another question and adapted to demonstrate that multiple values are being passed:

<form version="1.1" theme="light">
  <label>Multivalue input</label>
  <description>https://community.splunk.com/t5/Splunk-Search/Passing-a-mutiple-values-of-label-in-input-dropdown/m-p/705987</description>
  <fieldset submitButton="false">
    <input type="multiselect" token="multivalue_field_tok" searchWhenChanged="true">
      <label>select all field values</label>
      <choice value="*">All</choice>
      <default>WARN,WARNING</default>
      <delimiter> </delimiter>
      <fieldForLabel>log_level</fieldForLabel>
      <fieldForValue>log_level</fieldForValue>
      <search>
        <query>| makeresults format=csv data="log_level
INFO
WARN
WARNING
ERROR"</query>
      </search>
    </input>
    <input type="multiselect" token="multivalue_term_tok" searchWhenChanged="true">
      <label>select all terms</label>
      <choice value="Installed">Installed</choice>
      <choice value="binary">binary</choice>
      <choice value="INFO">INFO</choice>
      <choice value="WARNING">WARNING</choice>
      <choice value="ERROR">ERROR</choice>
      <choice value="*">All</choice>
      <default>binary,ERROR</default>
      <delimiter> OR </delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>$multivalue_field_tok$</title>
      <event>
        <search>
          <query>index = _internal log_level IN ($multivalue_field_tok$)</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
    <panel>
      <title>$multivalue_term_tok$</title>
      <event>
        <title>no field name</title>
        <search>
          <query>index = _internal ($multivalue_term_tok$)</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>

As you can see, you can select any combination of values. They are passed faithfully into the respective searches.
Hi Pickle, I wanted to update you that I had made a mistake with the configuration in authentication.conf. Instead of defining a specific stanza for RADIUS, I should have used the [Scripted] stanza. With this correction, the Python script is now working properly: it handles local authentication for dumped users and successfully authenticates a user via the script configured for RADIUS. I'm now working on customizing the script further to authenticate users directly from RADIUS. Thank you!
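For anyone landing here later, scripted authentication in authentication.conf is typically wired up along the lines below. This is a minimal sketch based on the general scripted-auth layout; the script path and stanza name `script` are illustrative placeholders, not taken from the post above:

```ini
# authentication.conf — minimal scripted-authentication sketch
[authentication]
authType = Scripted
# authSettings points at the stanza holding the script configuration
authSettings = script

[script]
# Splunk invokes this script for login/user-info requests;
# the path below is a placeholder for your own RADIUS script
scriptPath = "$SPLUNK_HOME/bin/python3" "$SPLUNK_HOME/etc/system/bin/radius_auth.py"
```

Check the authentication.conf spec file shipped with your Splunk version for the exact settings available in your release.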
Hi Splunk Community, I'm new to integrating Citrix NetScaler with Splunk, but I have about 9 years of experience working with Splunk. I need your guidance on how to set up this integration successfully to ensure that:
- All data from NetScaler is ingested and extracted correctly.
- The dashboards in the Splunk App for Citrix NetScaler display the expected panels and trends.
Currently, I have a 3-machine Splunk environment (forwarder, indexer, and search head). Here's what I've done so far:
- I installed the Splunk App for Citrix NetScaler on the search head.
- Data is being ingested from the NetScaler server via the heavy forwarder, but I have not installed the Splunk Add-on for Citrix NetScaler on the forwarder or indexer.
Despite this, the dashboards in the app show no data. From your experience, is it necessary to install the Splunk Add-on for Citrix NetScaler on the heavy forwarder (or elsewhere) to extract and normalize the data properly? If so, would that resolve the issue of empty dashboards? Any insights or steps to troubleshoot and ensure proper integration would be greatly appreciated! Thanks in advance!
Quoting the Splunk documentation ("Write custom rules for URL grouping in Splunk RUM", https://docs.splunk.com/observability/en/rum/rum-rules.html#use-cases): "Write custom rules to group URLs based on criteria that matches your business specifications, and organize data to match your business needs. Group URLs by both path and domain. You also need custom URL grouping rules to generate page-level metrics (rum.node.*) in Splunk RUM." As per the Splunk documentation, we have configured custom URL grouping, but the rum.node.* metrics are not available. Please help with this. The path rule is configured.
Hello there, I would like to pass multiple values in the label, whereas with the current search I can only pass one value at a time:

<input type="multiselect" token="siteid" searchWhenChanged="true">
  <label>Site</label>
  <choice value="*">All</choice>
  <choice value="03">No Site Selected</choice>
  <fieldForLabel>displayname</fieldForLabel>
  <fieldForValue>prefix</fieldForValue>
  <search>
    <query>| inputlookup site_ids.csv
| search displayname != "ABCN8" AND displayname != "ABER8" AND displayname != "AFRA7" AND displayname != "AMAN2"</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
  <delimiter>_fc7 OR index=</delimiter>
  <suffix>_fc7</suffix>
  <default>03</default>
  <initialValue>03</initialValue>
  <change>
    <eval token="form.siteid">case(mvcount('form.siteid') == 2 AND mvindex('form.siteid', 0) == "03", mvindex('form.siteid', 1), mvfind('form.siteid', "\\*") == mvcount('form.siteid') - 1, "03", true(), 'form.siteid')</eval>
  </change>
  <change>
    <set token="tokLabel">$label$</set>
  </change>
</input>

I need to pass this label value as well, which is a multiselect value. Thanks!
I have a field/column called ssh_status, and Noncompliant, successful logins, failed logins, etc. are its values. Under "Visualization", Noncompliant, successful logins, failed logins, etc. are all showing in the same color.
The choropleth map provides city-level resolution; is there a way to get higher resolution, such as street or block level? Thanks!
Hi Dear Splunkers, I have been working on creating a custom TA for counting Unicode characters in a non-English dataset (the long-story discussion post is linked in the PS), and I am getting these lookup file errors:

1) Error in 'lookup' command: Could not construct lookup 'ucd_count_chars_lookup, _raw, output, count'. See search.log for more details.
2) The lookup table 'ucd_count_chars_lookup' does not exist or is not available. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

The custom TA creation steps I followed (on my personal laptop, with a bare-minimum fresh 9.3.2 Enterprise trial installation):

1) Created the custom TA named "TA-ucd" on the app creation page (read for all, execute for owner, shared with all apps).

2) Created $SPLUNK_HOME/etc/apps/TA-ucd/bin/ucd_category_lookup.py (this file should be readable and executable by the Splunk user, i.e. have at least mode 0500), making sure of the permissions:

    #!/usr/bin/env python
    import csv
    import unicodedata
    import sys

    def main():
        if len(sys.argv) != 3:
            print("Usage: python category_lookup.py [char] [category]")
            sys.exit(1)
        charfield = sys.argv[1]
        categoryfield = sys.argv[2]
        infile = sys.stdin
        outfile = sys.stdout
        r = csv.DictReader(infile)
        header = r.fieldnames
        w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
        w.writeheader()
        for result in r:
            if result[charfield]:
                result[categoryfield] = unicodedata.category(result[charfield])
                w.writerow(result)

    main()

$SPLUNK_HOME/etc/apps/TA-ucd/default/transforms.conf:

    [ucd_category_lookup]
    external_cmd = ucd_category_lookup.py char category
    fields_list = char, category
    python.version = python3

$SPLUNK_HOME/etc/apps/TA-ucd/metadata/default.meta:

    []
    access = read : [ * ], write : [ admin, power ]
    export = system

3) After creating the three files above, I restarted the Splunk service, and the laptop as well.
4) The search still fails with the lookup errors mentioned above.
5) source=*search.log* does not produce anything (surprisingly!).
Could you please upvote the idea: https://ideas.splunk.com/ideas/EID-I-2176 PS: the long story is available here: https://community.splunk.com/t5/Splunk-Search/non-english-words-length-function-not-working-as-expected/m-p/705650
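For what it's worth, the core of an external lookup script like the one above can be sanity-checked outside Splunk, since Splunk just pipes CSV through the script on stdin/stdout. A minimal standalone sketch of that round trip (the sample characters are arbitrary; the field names `char`/`category` match the transforms.conf above):

```python
import csv
import io
import unicodedata

# Simulate what Splunk sends an external lookup on stdin: a CSV with the
# fields from fields_list, where the output field starts out empty.
infile = io.StringIO("char,category\nA,\n9,\n\u00e9,\n")
outfile = io.StringIO()

r = csv.DictReader(infile)
w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
w.writeheader()
for row in r:
    if row["char"]:
        # unicodedata.category returns the two-letter Unicode general
        # category, e.g. "Lu" (uppercase letter), "Nd" (decimal digit).
        row["category"] = unicodedata.category(row["char"])
        w.writerow(row)

print(outfile.getvalue())
```

Running this prints the enriched CSV (A → Lu, 9 → Nd, é → Ll), which is exactly the shape Splunk expects back on stdout. If this works standalone but the lookup still fails in Splunk, the problem is in the wiring (lookup name, permissions, search-time scope) rather than the script.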
Thank you. This works perfectly.
As @ITWhisperer points out, it depends whether you have a single "series" in your data, e.g. as in this example, which has 4 rows of the "type" field:

| makeresults
| eval type=split("ABCD","")
| mvexpand type
| chart count by type

or whether you have 4 fields and a single row, as in this example, which allows you to change the colours of the "series", i.e. the columns:

| makeresults
| eval type=split("ABCD","")
| mvexpand type
| eval xx="A"
| chart count over xx by type

If your results are like the first example, i.e. 4 rows with a type/count, then you have options to reshape them the other way, but a simple option is to add

| transpose 0 header_field=type

after your results, where "type" is your column name.
You need the eval like this:

values(eval(if(status>399, status, null()))) as list_of_Status

otherwise the eval just returns a boolean result, so you need to use if() and assign the value. You can also do it after the stats using mvfilter:

| eval list_of_Status=mvfilter(list_of_Status>399)
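If it helps to see the logic outside SPL, here is a plain-Python analogue (the event data and variable names are invented for illustration) of what values(eval(if(status>399, status, null()))) does: map non-matching rows to null, then collect the distinct non-null results:

```python
# Hypothetical events, standing in for search results with a "status" field.
events = [{"status": 200}, {"status": 404}, {"status": 500}, {"status": 404}]

# SPL's if(status>399, status, null()) drops non-matching rows, and
# values() collects the distinct surviving values (sorted, like SPL does).
list_of_status = sorted({e["status"] for e in events if e["status"] > 399})
print(list_of_status)  # [404, 500]
```

This also shows why values(eval(status>399)) alone doesn't work: that expression evaluates to true/false per row, not to the status value itself.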
I don't see how leap years could have anything to do with it. A leap year has 1 more day than a regular year, so that doesn't explain why they would use 1 less day than a regular year...
Thanks a lot. This works fine. Is there a way we can display only the statuses that are greater than 399, like (status>399)? I tried values(eval(status>399)) but it didn't work.