All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

How do I get a complete list of ALL the indexers in my Splunk Enterprise deployment? Please see my SPL below; it only lists the main ones. | tstats count where index=* by splunk_server | fields - count
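One hedged alternative to the tstats approach above: tstats only reports peers that returned events for searchable indexes, while the REST API lists every search peer regardless of data. A sketch to run from the search head (the server_roles filter is an assumption about your topology):

```
| rest splunk_server=* /services/server/info
| search server_roles=*indexer*
| table splunk_server version server_roles
```

If the goal is actually a list of all indexes rather than indexers, `| eventcount summarize=false index=* | dedup index | fields index` is another common sketch.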
I know SVG is used to create the charts in dashboards. Is there a way we can customize the SVG path and rect elements?
Hello, I have a search query that produces a value similar to the one below. What I am trying to accomplish is to extract the "Date", "Time", and "Notes" sections and output those values to a table, each in a separate column. What would be an efficient way to accomplish this? I have seen some regex syntax examples, but I am not that familiar with it. Any help is appreciated. Thanks!
----------------------------------------
Value: CompName: XXX XXX Type: XXX EmpName: XXX Date: XX-XX-XXXX Time 9:00AM Notes: XXXX
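A minimal rex sketch for the sample value above, assuming the text lives in a field called Value (adjust the field name to your data, and note the sample shows "Time" without a colon):

```
... your search ...
| rex field=Value "Date:\s+(?<Date>\S+)"
| rex field=Value "Time\s+(?<Time>\S+)"
| rex field=Value "Notes:\s+(?<Notes>.+)"
| table Date Time Notes
```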
Hi All, I configured the Qualys add-on to receive data from a Qualys Cloud Platform. It worked fine, but today we didn't receive any events from Qualys, and in the internal logs we found this message: TA-QualysCloudPlatform (ioc_events): 2021-04-08 14:04:08 PID=12050 [MainThread] ERROR: Unsuccessful while calling API [500 : Internal Server Error]. Retrying after 300 seconds. Can someone help me?
How do I create a search with the below table result?
Date Range | Time Range | Count of Users
Jan-4 | 0900-1700 | 900
Jan-5 | 0900-1700 | 500
Jan-6 | 0900-1700 | 300
Jan-7 | 0900-1700 | 200
Jan-8 | 0900-1700 | 800
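A sketch of one way to build that table, assuming the events carry a user field and the 0900-1700 window maps onto date_hour (the index and field names are assumptions):

```
index=your_index date_hour>=9 date_hour<17
| bin _time span=1d
| stats dc(user) as "Count of Users" by _time
| eval "Date Range"=strftime(_time, "%b-%-d"), "Time Range"="0900-1700"
| table "Date Range" "Time Range" "Count of Users"
```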
I ran out of space on my HSC node. What is the best way to solve this problem? Should I move some of my data to another node, or is there a better way? Or can space be freed up automatically?
Hello! I have a cluster with 3000 volumes, and the Splunk Add-on for NetApp Data ONTAP only collects performance data for 50 volumes per collection interval. If I set perf_chunk_size_cluster_mode=500, then it collects performance data for 500 volumes and then stops. I want to collect performance data for all 3000 of my volumes. After the collection finishes (usually within seconds) I get "[getJob] could not find a job to do, sleeping before retry" in the worker logs. What do I need to tweak in order to collect performance data from all 3000 volumes? Thanks, Sebastian
We are running a 2-step process: 1. Create a lookup (via a scheduled report). 2. Create a summary index using the lookup data. What would be the best way to make sure step 2 starts only if step 1 completed successfully?
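One common pattern is to collapse both steps into a single scheduled search, so the summary can only be written after the lookup write succeeds. A sketch (the lookup name, summary index, and field are assumptions):

```
... step 1 base search ...
| outputlookup step1_results.csv
| stats count by some_field
| collect index=my_summary source="step1_summary"
```

Alternatively, keep two schedules but start step 2 with `| inputlookup step1_results.csv` plus a sanity check (e.g. `| where count > 0`) so it aborts quietly when the lookup is empty or stale.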
Hello, I'm trying to configure Splunk SSO, but it fails because the Splunk Web host IP is not 127.0.0.1 and so doesn't match the splunkd trustedIP. Why is that, and how can I change the host IP to 127.0.0.1? I have attached a screenshot from the /debug/sso page. It says Splunk SSO is enabled, but when I try to access it, I get an "unauthorized" error. Any help would be much appreciated. Thanks in advance.
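For reference, these are the settings that usually have to agree for SSO, including the one that binds Splunk Web to 127.0.0.1. A sketch only; the values assume a proxy running on the same host, and a Splunk restart is needed after changing them:

```
# web.conf (search head)
[settings]
SSOMode = permissive
trustedIP = 127.0.0.1
server.socket_host = 127.0.0.1

# server.conf
[general]
trustedIP = 127.0.0.1
```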
I am using the modular_alert.py script as an alert action to send SNMP traps. The script is not giving any errors, but it is also not sending traps. Any suggestions on how to fix this?
I'm trying to get part of a file in S3 into Splunk. Can I use the blacklist option to filter out part of the file and ingest the rest of the data into Splunk?
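The S3 input's blacklist matches whole object keys, so it can skip files but not parts of a file. To drop part of a file's events at index time, a nullQueue transform is the usual sketch (the sourcetype and regex here are assumptions to adapt):

```
# props.conf (on the parsing tier)
[aws:s3]
TRANSFORMS-drop_unwanted = drop_unwanted_lines

# transforms.conf
[drop_unwanted_lines]
REGEX = pattern_to_discard
DEST_KEY = queue
FORMAT = nullQueue
```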
We all know that by manipulating _MetaData:Index we can redirect some events to another index. But the question is: can we do it using ingest-time evals? For example, using the lookup() function to split events across several separate indexes by host field value.
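On versions that support lookup() inside INGEST_EVAL (8.1+), a routing sketch might look like this (the lookup file, sourcetype, and field names are assumptions):

```
# transforms.conf
[route_index_by_host]
INGEST_EVAL = index=lookup("host_to_index.csv", json_object("host", host), json_array("index"))

# props.conf
[my_sourcetype]
TRANSFORMS-route = route_index_by_host
```

The lookup file must be present on the indexing tier, and the target indexes must already exist there.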
Hi, I am unable to hide the X-axis scale in a bar chart. See the screenshot below. I am plotting the chart using the query below, where date is a date field in my logs and name is a name for something.

index=* "some splunk query" | chart count(date) by name date

Below are my charting options. I have tried multiple options here, as you can see, but none of them actually helps me hide the X-axis scale.

<option name="charting.axisTitleX.visibility">collapsed</option>
<option name="charting.axisLabelsX.majorTickVisibility">hide</option>
<option name="charting.axisTitleY.visibility">collapsed</option>
<option name="charting.axisTitleY2.visibility">visible</option>
<option name="charting.axisLabelsY.majorTickVisibility">hide</option>
<option name="charting.axisLabelsY2.majorTickVisibility">hide</option>
<option name="charting.axisY.scale">linear</option>
<option name="charting.chart">bar</option>
<option name="charting.chart.showDataLabels">none</option>
<option name="charting.chart.stackMode">stacked100</option>
<option name="charting.drilldown">none</option>
<option name="charting.layout.splitSeries">1</option>
<option name="charting.layout.splitSeries.allowIndependentYRanges">0</option>
<option name="charting.legend.placement">top</option>
<option name="height">229</option>
<option name="refresh.display">progressbar</option>
</chart>

Can someone please help? Thanks in advance.
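One option worth trying that isn't in the list above: majorTickVisibility controls only the tick marks, while the label text is governed by majorLabelVisibility. A hedged sketch:

```
<option name="charting.axisLabelsX.majorLabelVisibility">hide</option>
```

Also note that with charting.chart set to bar, the category (X) axis is drawn vertically, so the scale along the bottom may actually be the value axis, in which case the Y-axis equivalent (charting.axisLabelsY.majorLabelVisibility) is the one to hide.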
Hi Team, I need to create an alert for whenever a Linux server shuts down and whenever a Linux server reboots. Can you please help me with the query condition? Which parameter do I need to search on to get this alert implemented? PS: We have RHEL, SUSE, and Oracle Linux servers in our environment.
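As a hedged starting point, assuming the servers forward syslog/journald messages to Splunk. The index, sourcetype, and exact message strings vary by distro and systemd version, so treat all of these as assumptions to verify against your own data:

```
index=os sourcetype=syslog ("shutting down" OR "Stopped target" OR "reboot" OR "Startup finished")
| stats latest(_time) as event_time latest(_raw) as message by host
| convert ctime(event_time)
```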
Hi Team, I am aware that we can pull license usage stats in Splunk by index, host, and sourcetype for a day. However, I want to pull the license usage stats in GB for the last 30 days. I have two separate requirements for pulling the exact license usage stats for the last 30 days in Splunk.
1.) The first is based on the app and severity: how many GB of logs were ingested over the last 30 days across both indexes. index=abc OR index=xyz app IN (splunk,"Splunk-Daemon") severity=informational
2.) The second is based on one index, two sourcetypes, and a particular keyword. index=abc sourcetype IN ("efg","ijk") "www.splunk.com*"
For each requirement, how can I pull the license usage in Splunk for the last 30 days? Kindly help with the same.
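For pure per-index volume, _internal license_usage.log is the usual source, but it only breaks usage down by index/host/source/sourcetype/pool, not by event fields like app or severity. A hedged approximation for requirement 1 sums raw event sizes instead (field names are taken from the question; len(_raw) only approximates licensed bytes):

```
index=abc OR index=xyz app IN (splunk,"Splunk-Daemon") severity=informational earliest=-30d
| eval bytes=len(_raw)
| stats sum(bytes) as bytes by index
| eval GB=round(bytes/1024/1024/1024, 2)
| table index GB
```

Requirement 2 follows the same pattern with the second base search.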
Hi, we have a requirement to send data (all indexes' data) to another tool using the REST API. How can I retrieve all index data using the REST API? I tested with this endpoint, but it only shows index information, not data: curl -k -u admin:pwd https://IP:8089/services/data/indexes/ How can I return the full index data through an endpoint?
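The /services/data/indexes endpoint only returns index configuration. To pull event data over REST, the export endpoint runs a search and streams the results; a sketch (the search string and time range are assumptions, and exporting index=* wholesale can be very large):

```
curl -k -u admin:pwd https://IP:8089/services/search/jobs/export \
  --data-urlencode search="search index=* earliest=-1h" \
  -d output_mode=json
```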
Hello, we use mstats to visualize _value. But for CPU perfmon values there is a number with 10 or more decimals after the point, like 45.736484739. For normal events we can use the eval command with round, but for metrics this does not seem to work. How can we round the metric values in the charts?

| mstats avg(_value) prestats=t where metric_name="Processor.%_User_Time" AND index=* span=1m AND host=WI219SQL-P1 by metric_name | timechart avg(_value) as "AVG" span=60m
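One hedged sketch: drop prestats=t so the average lands in a named field that eval can round. Since mstats with a span already buckets results by _time, the chart should still render over time without the separate timechart:

```
| mstats avg(_value) as AVG where metric_name="Processor.%_User_Time" AND index=* AND host=WI219SQL-P1 span=60m
| eval AVG=round(AVG, 2)
```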
Can someone help me with an ADDON for extracting fields out of the syslog data of McAfee DAM (Database Activity Monitor)
We are using the iplocation command, and I see that the GeoLite2-City.mmdb file dates from 2019:
[splunk@ilissplsh01 bin]$ ll /opt/splunk/share/GeoLite2-City.mmdb
-r--r--r-- 1 splunk splunk 60695934 Dec 18 2019 /opt/splunk/share/GeoLite2-City.mmdb
I have downloaded a newer file from https://www.maxmind.com/en/accounts/532070/geoip/downloads. I also see that there is a Geolocation Lookup for Splunk app (https://splunkbase.splunk.com/app/4102/#/overview) to enable iplocation. What is the recommended way to work with this command? Thanks.
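If the goal is simply to point iplocation at the newly downloaded database, limits.conf has a db_path setting for that; a sketch (the path shown is an example, not a requirement):

```
# limits.conf on the search head
[iplocation]
db_path = /opt/splunk/share/GeoLite2-City-new.mmdb
```

This leaves the stock file untouched, which tends to survive upgrades better than overwriting it in place.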
Hi, I've been trying for a long time now to get NetScaler IPFIX/NetFlow data properly ingested into a Nozo9110test Splunk instance. I have the Stream app, forwarder, etc. looking like they are working. All the data comes in as sourcetype stream:netflow; however, I need it to be sourcetype citrix:netscaler:ipfix, so I added props and transforms to achieve that based on the IP address the data is coming from. This has worked, and the sourcetype has changed, as you can see. However, I expected the data to then be processed and the fields extracted from the netscalerSyslogMessage field. In the NetScaler TA there are entries to transform the sourcetype on detection of netscalerSyslogMessage, but none of that seems to be happening. I'm sure I'm just missing something obvious, but I'd really appreciate some help nailing this down. Here's what I've done to change the sourcetype:

props.conf:
[source::stream:netflow]
TRANSFORMS-changesourcetype = set_netscaler

transforms.conf:
[set_netscaler]
REGEX = exporter_ip":"172.31.113.8
FORMAT = sourcetype::citrix:netscaler:ipfix
DEST_KEY = MetaData:Sourcetype