All Topics


What is the reason behind keeping the defaults of RF = 3 and SF = 2? Why does Splunk recommend them? What would happen if we set RF = 100?
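For reference, the replication and search factors are set in server.conf on the cluster manager; a minimal sketch with the default values:

    [clustering]
    mode = master          # newer releases also accept the "manager" spelling
    replication_factor = 3
    search_factor = 2

Worth noting for the RF = 100 thought experiment: the replication factor cannot exceed the number of peer nodes, and every additional copy multiplies bucket storage and replication traffic, which is why the defaults stay small.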
How is data replicated in clustering? What happens if the cluster master goes down?
Hi Splunkers, I have a dashboard with 2 panels and one input token, Gucid_token.

When Gucid_token is any string other than a 1- or 2-digit number, I need to use it as a search string in panel 1's query; in this case the token has nothing to do with panel 2's query:

    panel1 <query>sourcetype="omni:ors:voice" $Gucid_token$
    panel2 <query>sourcetype="omni:ors:voice" keyword1 keyword2 | search skilllength > 1

When Gucid_token is a 1- or 2-digit number, ignore it in panel 1's query, but use it in panel 2's query to build a filter like search skilllength > $Gucid_token$:

    panel1 <query>sourcetype="omni:ors:voice"
    panel2 <query>sourcetype="omni:ors:voice" keyword1 keyword2 | search skilllength > $Gucid_token$

Thanks in advance. Kevin
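A possible approach (an untested Simple XML sketch): drive two intermediate tokens from the input's <change> handler and reference those in the panels instead of $Gucid_token$ directly. The token names panel1_filter and skill_filter are made up for illustration:

    <input type="text" token="Gucid_token">
      <change>
        <!-- 1- or 2-digit number: drop it from panel 1, use it as the threshold in panel 2 -->
        <condition match="match(&quot;$value$&quot;, &quot;^\d{1,2}$&quot;)">
          <set token="panel1_filter"></set>
          <set token="skill_filter">| search skilllength &gt; $value$</set>
        </condition>
        <!-- anything else: use it as a search string in panel 1, default filter in panel 2 -->
        <condition>
          <set token="panel1_filter">$value$</set>
          <set token="skill_filter">| search skilllength &gt; 1</set>
        </condition>
      </change>
    </input>

The panel queries would then become sourcetype="omni:ors:voice" $panel1_filter$ and sourcetype="omni:ors:voice" keyword1 keyword2 $skill_filter$.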
We have a business database with many queries that pull business data into a view. Now we want to put this data into Splunk, so we can use the
I am looking for help with stats and eval.

Input events (each JSON object is one event):

    { "app_name": "app1","logEvent": "Received"}
    { "app_name": "app1","logEvent": "Received"}
    { "app_name": "app1","logEvent": "Missing"}
    { "app_name": "app1","logEvent": "Delivered"}
    { "app_name": "app2","logEvent": "Received"}
    { "app_name": "app2","logEvent": "Delivered"}

My current query is:

    index=np-dockerlogs sourcetype=sales
    | rename log_processed.* as *
    | eval logEvent=upper(logEvent)
    | search logEvent IN ("RECEIVED", "DELIVERED", "MISSING")
    | stats count by logEvent app_name

Current output:

    app1 RECEIVED 2
    app1 MISSING 1
    app1 DELIVERED 1
    app2 RECEIVED 1
    app2 DELIVERED 1

The output I want to generate removes MISSING and subtracts the count of MISSING from RECEIVED:
RECEIVED = total count of Received - total count of Missing
DELIVERED = total count of Delivered

    app1 RECEIVED 1
    app1 DELIVERED 1
    app2 RECEIVED 1
    app2 DELIVERED 1

Thank you
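A possible approach (untested sketch): use eval-conditioned counts inside stats so each event type lands in its own column per app, then compute the adjusted RECEIVED with eval:

    index=np-dockerlogs sourcetype=sales
    | rename log_processed.* as *
    | eval logEvent=upper(logEvent)
    | stats count(eval(logEvent="RECEIVED")) as received
            count(eval(logEvent="MISSING")) as missing
            count(eval(logEvent="DELIVERED")) as DELIVERED
            by app_name
    | eval RECEIVED=received-missing
    | fields app_name RECEIVED DELIVERED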
Please help with an SPL search, or a way to use the Monitoring Console, to see if and when a heavy forwarder stops sending data, or when there is a big drop in the amount of data it usually sends (e.g., down to 10% of normal). How do I tell if a HF is sick and not functioning? I appreciate your time in advance. Thx
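One rough way to approach it (a sketch, not a polished alert): since a heavy forwarder also forwards its own _internal logs, compare each forwarder's recent internal volume against its own hourly baseline. The 7-day window and 10% threshold below are arbitrary placeholders:

    | tstats count where index=_internal earliest=-7d by host, _time span=1h
    | stats avg(count) as hourly_baseline latest(count) as last_hour by host
    | eval pct_of_normal=round(100*last_hour/hourly_baseline, 1)
    | where pct_of_normal < 10

Caveat: a forwarder that has stopped entirely simply drops out of these results, so it is worth pairing this with a missing-host check (e.g., via | metadata type=hosts).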
Hello all, I'm looking for a way to modify the Splunk health check for small buckets. Specifically, I would like the health check to exclude certain indexes. For example, I like knowing if I am getting too many small buckets... but not if it is for my test index.

    Buckets
    Root Cause(s):
    The percentage of small buckets (100%) created over the last hour is high and exceeded the red threshold (50%) for index=test, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=6, small buckets=6
    The percentage of small buckets (100%) created over the last hour is high and exceeded the red threshold (50%) for index=test, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=5, small buckets=5
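As far as I know, the stock health indicator can't be scoped per index, but its thresholds (or the whole indicator) can be adjusted in health.conf on the indexer. A sketch with made-up threshold values; confirm the exact indicator names against $SPLUNK_HOME/etc/system/default/health.conf for your version before relying on this:

    [feature:buckets]
    indicator:percent_small_buckets_created_last_24h:yellow = 80
    indicator:percent_small_buckets_created_last_24h:red = 95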
I am unable to make the Threat Intelligence input for hailataxii work on on-prem Splunk Enterprise (Splunk Enterprise 8.2.4, Enterprise Security 7.0.0).

The Threat Intelligence Audit dashboard shows "TAXII feed polling starting". The intelligence audit events below show an error message:

    2022-01-10 20:11:51,120+0000 ERROR pid=3116 tid=MainThread file=threatlist.py:download_taxii:476 | <urlopen error [Errno 111] Connection refused>
    Traceback (most recent call last):
      File "/opt/splunk/lib/python3.7/urllib/request.py", line 1350, in do_open
        encode_chunked=req.has_header('Transfer-encoding'))
      File "/opt/splunk/lib/python3.7/http/client.py", line 1281, in request
        self._send_request(method, url, body, headers, encode_chunked)
      File "/opt/splunk/lib/python3.7/http/client.py", line 1327, in _send_request
        self.endheaders(body, encode_chunked=encode_chunked)
      File "/opt/splunk/lib/python3.7/http/client.py", line 1276, in endheaders
        self._send_output(message_body, encode_chunked=encode_chunked)
      File "/opt/splunk/lib/python3.7/http/client.py", line 1036, in _send_output
        self.send(msg)
      File "/opt/splunk/lib/python3.7/http/client.py", line 976, in send
        self.connect()
      File "/opt/splunk/etc/apps/SA-ThreatIntelligence/contrib/libtaxii/clients.py", line 478, in connect
        (self.host, self.port), self.timeout, self.source_address)
      File "/opt/splunk/lib/python3.7/socket.py", line 728, in create_connection
        raise err
      File "/opt/splunk/lib/python3.7/socket.py", line 716, in create_connection
        sock.connect(sa)
    ConnectionRefusedError: [Errno 111] Connection refused

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/SA-ThreatIntelligence/bin/threatlist.py", line 439, in download_taxii
        taxii_message = handler.run(args, handler_args)
      File "/opt/splunk/etc/apps/SA-ThreatIntelligence/bin/taxii_client/__init__.py", line 173, in run
        return self._poll_taxii_11(parsed_args)
      File "/opt/splunk/etc/apps/SA-ThreatIntelligence/bin/taxii_client/__init__.py", line 81, in _poll_taxii_11
        http_resp = client.call_taxii_service2(args.get('url'), args.get('service'), tm11.VID_TAXII_XML_11, poll_xml, port=args.get('port'), timeout=args['timeout'])
      File "/opt/splunk/etc/apps/SA-ThreatIntelligence/contrib/libtaxii/clients.py", line 344, in call_taxii_service2
        response = urllib.request.urlopen(req, timeout=timeout)
      File "/opt/splunk/lib/python3.7/urllib/request.py", line 222, in urlopen
        return opener.open(url, data, timeout)
      File "/opt/splunk/lib/python3.7/urllib/request.py", line 525, in open
        response = self._open(req, data)
      File "/opt/splunk/lib/python3.7/urllib/request.py", line 543, in _open
        '_open', req)
      File "/opt/splunk/lib/python3.7/urllib/request.py", line 503, in _call_chain
        result = func(*args)
      File "/opt/splunk/etc/apps/SA-ThreatIntelligence/contrib/libtaxii/clients.py", line 374, in https_open
        return self.do_open(self.get_connection, req)
      File "/opt/splunk/lib/python3.7/urllib/request.py", line 1352, in do_open
        raise URLError(err)
    urllib.error.URLError: <urlopen error [Errno 111] Connection refused>

Any ideas?
Hi, we're trying to import SheetJS into a custom SplunkJS script so we can export some results to xlsx. We tried to add it to the require section at the beginning of the script, but it is not working:

    require([
        "splunkjs/mvc",
        "<path to xlsx.full.min.js>",
        "splunkjs/mvc/searchmanager",
        "splunkjs/mvc/simplexml/ready!"
    ], function (mvc, XLSX, SearchManager) {
        console.log(XLSX.version);
    });

Here is the SheetJS documentation: https://github.com/SheetJS/sheetjs
Any help will be appreciated. Regards, Javier.
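A sketch of one way this is commonly wired up, assuming xlsx.full.min.js has been copied into the app's appserver/static folder (the app name your_app is a placeholder). Note that RequireJS module paths take no .js extension, which may be the issue here:

    require.config({
        paths: {
            // served from $SPLUNK_HOME/etc/apps/your_app/appserver/static/
            "xlsx": "/static/app/your_app/xlsx.full.min"
        }
    });

    require([
        "splunkjs/mvc",
        "xlsx",
        "splunkjs/mvc/searchmanager",
        "splunkjs/mvc/simplexml/ready!"
    ], function (mvc, XLSX, SearchManager) {
        console.log(XLSX.version);
    });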
We're moving to Splunk Cloud, but we have some legacy hosts for which I need a forwarder upgrade. Is there any compatible UF version 7 or newer that runs on 32-bit Windows Server 2008 SP2 (not R2!)? I've searched the available older versions and I'm coming up empty. I'm grasping at straws here. Thank you in advance...
I have a panel in a dashboard that shows the results of a query and picks its time range from a TimePicker.

Goal: if the user selects a range greater than 30 days in the TimePicker, the search for this specific panel should not search more than 30 days; it should cap the range at 30 days. For a selection of less than 30 days, the panel should display results for the selected range.

This is what the panel's current query looks like:

    eventtype=$app_name$ | timechart span=1h count
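A sketch of one way to do the clamping with an <eval> token on the time input (untested; the token names time_tok and panel_earliest are made up). Caveat: this only behaves when the picker returns epoch values (e.g., "Between" date ranges); relative presets such as -7d@d substitute as strings and would break the comparison, so they need extra handling:

    <input type="time" token="time_tok">
      <change>
        <eval token="panel_earliest">if($time_tok.earliest$ &lt; relative_time(now(), "-30d"), relative_time(now(), "-30d"), $time_tok.earliest$)</eval>
      </change>
    </input>

    <!-- the panel's search then uses the clamped token -->
    <search>
      <query>eventtype=$app_name$ | timechart span=1h count</query>
      <earliest>$panel_earliest$</earliest>
      <latest>$time_tok.latest$</latest>
    </search>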
Hello, I am working with the timechart command in the following query and I am running into some problems. I am trying to compute:

    timechart span=15m sum(ofAField) as sumOfField, avg(sumOfField) as avgOfField by task

My problem is that when I run it, I get the correct output for the first task, but the output for the rest of the tasks is wrong. I am assuming that for the rest of the tasks only the sum portion of the timechart is being calculated and not the avg. For background context, there are about 11 different tasks this timechart is being grouped by. TIA
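If the intent is the per-task average of the 15-minute sums, one workaround (a sketch) is to compute the sums with bin + stats and then layer the average on with eventstats, since an aggregation can't reference its own output field in the same pass:

    ...
    | bin _time span=15m
    | stats sum(ofAField) as sumOfField by _time, task
    | eventstats avg(sumOfField) as avgOfField by task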
Hi everyone, I am new to Splunk and need some help. I am attempting to create a dashboard that separates assets' vulnerabilities by department. Right now we get the asset with the vulnerability, and I was wondering if there is a way to group them by naming convention. For instance, sec-9564 would be the Security department. So I'd be saying: if the PC name starts with sec*, then group it into the Security Dept column. In the end I need to show a dashboard with each department's vulnerabilities. Any help with this would be appreciated!
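One possible starting point (a sketch; the host field name pc and the prefixes are assumptions, and a lookup table mapping prefixes to departments would scale better than a hardcoded case()):

    ...
    | eval department=case(
        match(pc, "^sec-"), "Security",
        match(pc, "^hr-"),  "HR",
        true(), "Unassigned")
    | stats count by department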
Can anyone assist me with the SPL to subtract the EBVS% and PFAVS% fields so that the successful plays field improves? I've attached a screenshot below.
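If both columns already exist on the table, the subtraction is a one-line eval; note that field names containing % must be wrapped in single quotes on the right-hand side (the result field name successful_plays is a guess at the target column):

    ...
    | eval successful_plays = 'EBVS%' - 'PFAVS%'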
Do we have a query for an alert if memory usage increases by 20% over the previous 20 minutes? So if the server was running with an average RAM usage of 50% and then suddenly increases to 75% RAM usage, we would trigger an alert.
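A sketch of the shape such an alert could take (the index, sourcetype, and mem_used_percent field names are placeholders for whatever your OS monitoring data provides). It buckets into 20-minute windows and compares each window to the previous one per host:

    index=os sourcetype=memory
    | bin _time span=20m
    | stats avg(mem_used_percent) as mem_pct by _time, host
    | streamstats window=2 global=false first(mem_pct) as prev_pct by host
    | eval jump=mem_pct - prev_pct
    | where jump >= 20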
After updating the Splunk logging library (splunk-library-javalogging) to version 1.11.4, _time does not show milliseconds. With multiple sources, the logs show up out of order. Screenshot attached below. We are using a log4j2.xml config with the JSON logger. How can I see milliseconds as part of _time?
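If the raw JSON still carries millisecond precision but Splunk is truncating at parse time, a props.conf override on the sourcetype is the usual fix. A sketch (the sourcetype name and TIME_PREFIX are placeholders; adjust TIME_FORMAT to whatever timestamp your log4j2 JSON layout actually emits):

    [your_java_sourcetype]
    TIME_PREFIX = \"timestamp\":\s*\"
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
    MAX_TIMESTAMP_LOOKAHEAD = 40

If the events go over HEC with an explicit "time" field instead, the precision depends on what the library writes there rather than on timestamp parsing.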
Hello! I'm trying to make the Splunk forwarder part of my gold image template for Windows servers. Right now, I have a script that installs the forwarder using VMware customization once the VM is cloned from its template. I would instead like to install the forwarder on the gold image itself and get rid of the script. I am using directions from a Splunk document; the third section, titled "Clone and restore the image", does not make sense to me. It seems to be saying to clone the same image 2 or 3 times:

1. Restart the machine and clone it with your favorite imaging utility.
2. After cloning the image, use the imaging utility to restore it into another physical or virtual machine.
3. Run the cloned image. Splunk services start automatically.
4. Use the CLI to restart Splunk Enterprise to remove the cloneprep information.

Step 1 - OK, so I have cloned my gold image into another machine (which I would think can be used for prod).
Step 2 - Why do I need to restore the cloned VM to another machine?
Step 3 - OK.
Step 4 - This is pretty inconvenient; it makes me either do something manually or script it, which I'm trying to avoid.

This may be terminology confusion on my part, but is there not a way to completely configure the forwarder on the gold image so that when I clone it using VMware, it just comes up and works?
https://docs.splunk.com/Documentation/Splunk/8.2.4/Admin/Integrateauniversalforwarderontoasystemimage
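For what it's worth, the piece that makes a baked-in forwarder clone-safe is clearing the instance-specific state (GUID, server name) before the image is sealed. A sketch of the gold-image step, assuming a UF recent enough to support the clone-prep command; the install path is the Windows default:

    "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" stop
    "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" clone-prep-clear-config
    REM shut down and template the VM; clones regenerate their GUID on first start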
I have a table with a few columns in it. One of the columns is 'url'. I would like to construct a relative search URL that opens in a new tab:

    {
        "type": "drilldown.customUrl",
        "options": {
            "url": "/en-US/app/search/search?q=search%20index%3D\"$index$\"%20url%3D\"$click.value$\"",
            "newTab": true
        }
    }

Token resolution works: $index$ is replaced as it should be (this is an existing token on my dashboard). But I can't seem to access anything in the table; $click.value$ is clearly wrong. I've tried a bunch of other styles from the docs too:

    $row.url.value$
    $row.url$
    $row.value.2$
    $url.value$

The problem, I think, is that I'm looking at documentation for older Splunk versions and/or for the classic XML dashboards. What's the correct way to do this? Also, is there up-to-date documentation available for this?
Hi, how can I reduce the storage size of an index? What are the different methods/options? Also, will removing logs using the delete command reduce the storage size of an index? Kind regards, Aftab
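On the second question: as far as I know, | delete only masks events from search results; it does not reclaim disk space. Shrinking an index is normally done through retention settings in indexes.conf, which roll the oldest buckets to frozen (deleted by default unless coldToFrozenDir is set). A sketch with hypothetical values:

    [your_index]
    # cap total size at ~100 GB; the oldest buckets freeze first when exceeded
    maxTotalDataSizeMB = 100000
    # freeze data older than 90 days
    frozenTimePeriodInSecs = 7776000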
Hi, I am seeking assistance to execute a Python script located under a custom app. The script works fine from the command prompt, but from the Splunk console it throws error code 1. I am invoking the script through the custom search command below, defined in commands.conf:

    | script <python script>

I have also installed the required modules, like pandas and its dependencies, but I am still seeing the errors below. Please assist me in fixing them.

    Traceback (most recent call last):
    stderr   import pandas as pd
    stderr   from 'C:\Program Files\Splunk\bin\Python3.exe \__init__.py", line 17, in <module>
    stderr   "Unable to import required dependencies:\n" + "\n".join(missing_dependencies)
    ImportError: Unable to import required dependencies:
    numpy
    Importing the numpy C-extensions failed. This error can happen for
    The Python version is: Python3.7 from "C:\Program Files\Splunk\bin\Python3.exe"
    The NumPy version is: "1.22.0"
    and make sure that they are the versions you expect.
    Original error was: No module named 'numpy.core._multiarray_umath'
    External search command returned error code 1.
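A possible direction (a sketch, not a guaranteed fix): Splunk's bundled Python does not see your system site-packages, so the script has to find pandas/numpy on a path the bundled interpreter can reach. The bigger catch is that numpy and pandas ship C extensions, which must be built for the exact interpreter Splunk bundles (3.7 here); an ABI mismatch produces precisely the numpy.core._multiarray_umath error above. The <app>/lib folder below is a made-up convention:

    # top of the custom command script: put the app's own lib dir first on sys.path
    import os
    import sys

    APP_LIB = os.path.join(os.path.dirname(os.path.abspath(__file__)), "..", "lib")
    sys.path.insert(0, APP_LIB)

    # resolved from <app>/lib -- the wheels there must match Splunk's Python 3.7 ABI
    import pandas as pd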