All Topics

I need to move a lot of dashboards, alerts, scheduled reports, lookup tables and data feeds to a new Splunk environment due to security issues. I would like to make that transition an incremental process rather than one big 'switch over'. My questions are:
> Which knowledge objects can be accessed via federation?
> What are examples of federated syntax for the 'inputlookup', 'loadjob', etc. commands?
I realize there will be a performance impact, but that will be temporary while the transition to the new Splunk environment is ongoing. Where is the current documentation for using federated access? Thanks in advance.
Hello, we have a Splunk Enterprise version 8.0.5 instance configured in PRD. We have enabled boot-start with --systemd-managed 1 and specified the user splunk as the owner of the service. It keeps failing during boot with this error:
start request repeated too quickly for splunk.service
If I run "splunk start | restart | stop", it uses systemd to manage the process (which is correct), and it works properly after boot. If I run "systemctl start splunk" after boot, the service starts OK. The problem only occurs during boot.
Server information:
NAME="Oracle Linux Server"
VERSION="7.9"
journalctl says:
Failed at step EXEC spawning /opt/splunk/bin/splunk: No such file or directory
-- Subject: Process /opt/splunk/bin/splunk could not be executed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- The process /opt/splunk/bin/splunk could not be executed and failed.
--
-- The error number returned by this process is 2.
How can we fix it?
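A common cause when manual starts work but the unit fails only at boot with EXEC / "No such file or directory" is that /opt/splunk sits on a filesystem that is not yet mounted when systemd first launches the unit. A minimal sketch of a drop-in override to test that theory (the drop-in path and unit name are assumptions based on the error above):

```ini
# /etc/systemd/system/splunk.service.d/override.conf  (path assumed)
[Unit]
# Wait until the filesystem holding the Splunk binary is mounted
# before attempting to exec /opt/splunk/bin/splunk
RequiresMountsFor=/opt/splunk
```

Then `systemctl daemon-reload` and reboot to verify.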
Dear Splunk community, I am having problems browsing for more apps in my Splunk installation, as I am receiving the following error message. The error seems to originate from this endpoint:
https://MYSERVERURL/en-US/splunkd/__raw/services/appsbrowser/v1/app/
{"status": 400, "errors": ["not a valid option(s) for version"]}
Some information about my setup:
- Local installation on an Ubuntu machine
- Splunk version 7.0.0
- Free license
- I recently activated a custom certificate to encrypt traffic from my forwarders. This is working fine; I am not sure whether it caused this error, but it is my best guess.
I am having trouble figuring out what to try next to fix this problem. I appreciate any hints that might help!
Hello, integrated into our app, we had developed a custom (streaming) search command based on Splunk Enterprise SDK for Python v1.6.2 and Python for Scientific Computing (for Linux 64-bit) v1.4. This command worked fine under Splunk v7.2.9.1. We are trying to migrate to Splunk v8.1 (the latest version available). To accomplish this, we are also migrating the Splunk Enterprise SDK for Python to v1.6.15 (latest) and Python for Scientific Computing (for Linux 64-bit) to v2.0.2 (latest). The problem is that our command no longer works. I looked at what the Splunk Platform Upgrade Readiness App indicates, and it only reports warnings (for the record: 1 on the script of our custom command, 1 on exec_anaconda.py of Python for Scientific Computing (for Linux 64-bit), and 13 on the Splunk Enterprise SDK for Python).
Here is the error that appears in search.log when executing our custom command:
02-18-2021 18:08:01.794 INFO  ChunkedExternProcessor - Running process: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/talios/bin/HUMS.py
02-18-2021 18:08:01.865 ERROR ChunkedExternProcessor - stderr: Traceback (most recent call last):
02-18-2021 18:08:01.865 ERROR ChunkedExternProcessor - stderr:   File "/opt/splunk/etc/apps/talios/bin/HUMS.py", line 7, in <module>
02-18-2021 18:08:01.865 ERROR ChunkedExternProcessor - stderr:     import exec_anaconda
02-18-2021 18:08:01.865 ERROR ChunkedExternProcessor - stderr:   File "/opt/splunk/etc/apps/talios/bin/exec_anaconda.py", line 17, in <module>
02-18-2021 18:08:01.865 ERROR ChunkedExternProcessor - stderr:     from util.base_util import get_apps_path
02-18-2021 18:08:01.865 ERROR ChunkedExternProcessor - stderr: ModuleNotFoundError: No module named 'util'
02-18-2021 18:08:01.870 ERROR ChunkedExternProcessor - EOF while attempting to read transport header read_size=0
02-18-2021 18:08:01.870 ERROR ChunkedExternProcessor - Error in 'hums' command: External search command exited unexpectedly with non-zero error code 1.
02-18-2021 18:08:01.872 ERROR SearchPhaseGenerator - Fallback to two phase search failed: Error in 'hums' command: External search command exited unexpectedly with non-zero error code 1.
02-18-2021 18:08:01.873 ERROR SearchOrchestrator - Error in 'hums' command: External search command exited unexpectedly with non-zero error code 1.
02-18-2021 18:08:01.873 ERROR SearchStatusEnforcer - sid:scheduler__admin__talios__RMD5b64ee28fa5a2b66b_at_1613671680_163 Error in 'hums' command: External search command exited unexpectedly with non-zero error code 1.
This is an error in exec_anaconda.py of Python for Scientific Computing (for Linux 64-bit). After following the solution from this post (https://community.splunk.com/t5/All-Apps-and-Add-ons/How-to-import-util-when-using-exec-anaconda/mp/488137), we then come across the following error:
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr: Traceback (most recent call last):
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:   File "/opt/splunk/etc/apps/talios/bin/HUMS.py", line 12, in <module>
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:     from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:   File "/opt/splunk/etc/apps/talios/bin/splunklib/searchcommands/__init__.py", line 145, in <module>
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:     from .environment import *
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:   File "/opt/splunk/etc/apps/talios/bin/splunklib/searchcommands/environment.py", line 120, in <module>
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:     splunklib_logger, logging_configuration = configure_logging('splunklib')
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:   File "/opt/splunk/etc/apps/talios/bin/splunklib/searchcommands/environment.py", line 103, in configure_logging
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:     fileConfig(filename, {'SPLUNK_HOME': splunk_home})
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:   File "/opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64/bin/linux_x86_64/lib/python3.7/logging/config.py", line 80, in fileConfig
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:     _install_loggers(cp, handlers, disable_existing_loggers)
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:   File "/opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64/bin/linux_x86_64/lib/python3.7/logging/config.py", line 196, in _install_loggers
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:     log.setLevel(level)
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:   File "/opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64/bin/linux_x86_64/lib/python3.7/logging/__init__.py", line 1353, in setLevel
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:     self.level = _checkLevel(level)
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:   File "/opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64/bin/linux_x86_64/lib/python3.7/logging/__init__.py", line 192, in _checkLevel
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr:     raise ValueError("Unknown level: %r" % level)
02-19-2021 12:53:48.296 ERROR ChunkedExternProcessor - stderr: ValueError: Unknown level: 'WARNING   ; Default: WARNING'
This is an error in the Splunk Enterprise SDK for Python. All of this comes from our development environment, which runs in Docker based on the official Splunk image (splunk/splunk:8.1). If you have any leads toward a solution, thank you in advance for your help.
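The final ValueError suggests that the literal string 'WARNING   ; Default: WARNING' reached logging's level check, i.e. an inline `;` comment in a logging configuration file was read as part of the value (Python 3's config parsing no longer strips inline comments the way Python 2 did). A hedged sketch of the kind of fix, assuming the offending file is the app's logging.conf loaded by splunklib's configure_logging (the section name here is an illustration, not taken from the post):

```ini
; Before: under Python 3 the inline comment becomes part of the value
[logger_splunklib]
level = WARNING   ; Default: WARNING

; After: keep the comment on its own line
; Default: WARNING
[logger_splunklib]
level = WARNING
```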
I have a table with 5 columns. Currently, I am passing 2 values: the first value is in the first column (clickvalue) and the second is in the second column (clickvalue2). So far it is working fine. When I add another input, it does not take the input: it takes the first column as clickvalue and wherever I click as clickvalue2, so I am not able to select the 3rd input by clicking. I have tried to change the 3rd input into $row.<fieldname>$ and clickvalue3, but none of this works.
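In Simple XML drilldowns, only the click itself populates $click.value$ / $click.value2$; there is no built-in third click token, so additional columns have to be read from the clicked row via $row.<fieldname>$ using the literal column name. A sketch under that assumption (token names taken from the post; "status" is a placeholder field name):

```xml
<drilldown>
  <!-- first two tokens come from the click itself -->
  <set token="clickvalue">$click.value$</set>
  <set token="clickvalue2">$click.value2$</set>
  <!-- the third value must name the actual column of the clicked row -->
  <set token="clickvalue3">$row.status$</set>
</drilldown>
```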
Hey! I am trying to install the Splunk App for Infrastructure on our distributed Splunk platform, and I was wondering whether we could reveal entities to specific users based on their roles, so that each user only sees the entities that relate to them / send data to their projects and apps. Is that possible at all? I am just a bit concerned about revealing entities to clients and customers that have no relation to them. Thanks!
What's the definition of the vmstats fields captured through the Splunk Add-on for Unix and Linux? This is what I've gathered so far:
memTotalMB - total memory
memFreeMB - the amount of idle memory
memUsedMB - total used memory
memFreePct - % of free memory
memUsedPct - % of memory used
pgPageOut -
swapUsedPct - the percentage of virtual memory used
pgSwapOut -
cSwitches - the number of context switches per second
interrupts - the number of interrupts per second, including the clock
forks - number of forks
processes - number of processes
threads - number of threads
Hello, I'm running Splunk Enterprise 8.1.2 on RedHat 8 and I'm trying to get the Splunk Secure Gateway app running. I see that the requests from my Splunk server to prod.spacebridge.spl.mobi do not go through my proxy, despite having set:
- in ./splunk/etc/system/local/server.conf:
[proxyConfig]
http_proxy = http://myproxy.mydomain:myport
https_proxy = http://myproxy.mydomain:myport
proxy_rules = *
no_proxy = localhost, 127.0.0.1, ::1
- in ./splunk/etc/apps/splunk_secure_gateway/local/securegateway.conf:
[proxyConfig]
http_proxy = http://myproxy.mydomain:myport
https_proxy = http://myproxy.mydomain:myport
Do you have any idea of the configuration error I may have made? Regards, Stephane
Hello, I am extracting a lot of values during search (using eval & split as recommended here), one of them being `username`. I also have a lookup table called "expected_usernames.csv" that contains a "service_expected_usernames" column with usernames in it. I am having a hard time writing a search query that returns only events where the extracted username field is not equal to any of the usernames in the lookup file. I thought this answer would help, but it gives me all the results, not caring whether the username matches or not.

index="mycustomindex"
| rex field=source "(.*)\_(?<logtype>(connectionlog|userlog|useractivitylog))\_(\d{4})\-(\d{2})-(\d{2})T(\d{2}):(\d{2})\.gz"
| search (logtype="connectionlog")
| eval temp=split(_raw,"|")
... some extraction omitted for brevity ...
| eval username=mvindex(temp,6)
| fields - temp
| search NOT [| inputlookup expected_usernames.csv | fields username | rename username AS service_expected_usernames | format ]

This still returns all the records, no filtering applied. What am I doing wrong?
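One likely culprit in the search above: the subsearch strips out every field. The CSV has no `username` column, so `| fields username` after the inputlookup leaves nothing, and the rename goes in the wrong direction; the generated NOT clause is therefore empty and excludes nothing. A sketch that renames the lookup column *to* match the event field instead (column name taken from the post):

```
index="mycustomindex"
...
| search NOT [| inputlookup expected_usernames.csv
    | rename service_expected_usernames AS username
    | fields username
    | format ]
```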
Currently using a UF to forward logs to Splunk. Each .log file in the directory is 1 event, but Splunk is separating each event into 4 separate events. I have added a custom app on the search head with a props.conf file as follows:
[RMAN]
SHOULD_LINEMERGE = false
LINE_BREAKER = ((*FAIL))
TRUNCATE = 99999999
Restarted Splunk, but events are still separated into 4 events. Anyone have any idea how to fix this? TIA
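Two things worth checking, stated as assumptions: LINE_BREAKER and SHOULD_LINEMERGE are parse-time settings, so they must live on the component that parses the data (the indexer or a heavy forwarder), not in a search head app; and to keep a whole file as one event, LINE_BREAKER just needs a capture group that can never match. A sketch, assuming the sourcetype really is RMAN:

```ini
# props.conf on the indexer or heavy forwarder (parse-time settings
# placed only on the search head have no effect on line breaking)
[RMAN]
SHOULD_LINEMERGE = false
# capture group that can never match, so the whole file stays one event
LINE_BREAKER = ((?!))
TRUNCATE = 0
```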
Hi, please give a solution for the question below. I want to count the average response times that are greater than 3 seconds. Example: the average response time is 5000.00, and 5000 > 3000 (3 sec). What Splunk query do I have to use?
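Assuming the response time is in milliseconds in a field called response_time, grouped by some transaction field (all three names are assumptions to adapt), one way to count the averages exceeding 3 seconds:

```
index="my_index" sourcetype="my_sourcetype"
| stats avg(response_time) AS avg_response BY transaction_name
| where avg_response > 3000
| stats count AS slow_transactions
```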
This has been asked before but the solutions I have seen are only for indexers. The best one I've seen is:   | rest /services/cluster/config | fields splunk_server guid   But like I said this is only for indexers. I want something for search heads. Also, why not also include the cluster master, license manager, deployment server, search head deployer, and all the data forwarders, that would be quite useful.
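The /services/server/info endpoint exposes a guid on every instance type, not only clustered indexers, so pointing | rest at all search peers plus the local instance should cover search heads as well; roughly (the server_roles field is available on recent versions):

```
| rest /services/server/info splunk_server=*
| fields splunk_server guid server_roles
```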
I have around 10 columns in a table and want to set the first 3 columns to 10% width. I used the method below, but it's not applying the column width correctly. Need help in fixing the error.

<row>
<panel>
<html depends="$alwaysHideCSSPanel$">
<style>
#tableColumWidth table thead tr th:nth-child(1),
#tableColumWidth table thead tr th:nth-child(2,
#tableColumWidth table thead tr th:nth-child(3) {
width: 10% !important;
overflow-wrap: anywhere !important;
}
</style>
</html>
<table id="tableColumWidth">
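The second selector in the pasted CSS is missing its closing parenthesis (`th:nth-child(2` instead of `th:nth-child(2)`), and a single invalid selector invalidates the whole comma-separated group, so none of the three rules apply. A corrected sketch; the td block is an assumption, since widths often need to be set on the body cells as well as the headers:

```css
#tableColumWidth table thead tr th:nth-child(1),
#tableColumWidth table thead tr th:nth-child(2),
#tableColumWidth table thead tr th:nth-child(3),
#tableColumWidth table tbody tr td:nth-child(1),
#tableColumWidth table tbody tr td:nth-child(2),
#tableColumWidth table tbody tr td:nth-child(3) {
  width: 10% !important;
  overflow-wrap: anywhere !important;
}
```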
I have multiple events in Splunk like below:
Exception:100 : *** Error 3006 Logons are disabled., Job=ABC
Exception:XYZ API has failed. Exception: RDBMS error 2801: Duplicate unique prime key error, Job=ABC
Exception:100 : RDBMS error 2640: Specified table either does not exist in DEX or is moved to another map., Job=ABC
I am looking for the text between "Exception:" and ", Job".
Desired output:
*** Error 3006 Logons are disabled.
RDBMS error 2801: Duplicate unique prime key error
RDBMS error 2640: Specified table either does not exist in DEX or is moved to another map.
I was trying split as below; however, in some events "Exception:" appears twice, so the second case above gives me "XYZ API has failed":
eval temp=split(_raw, "Exception:") | eval temp1 = mvindex(temp,1) | eval temp2=split(temp1,", Job") | eval EXCEPTION=mvindex(temp2,0)
Is there any way to split based on the second or last occurrence of "Exception:" in the event? Thank you for any suggestion/help.
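Rather than splitting, a single rex with a greedy prefix anchors to the *last* "Exception:" in the event, since `.*` consumes as much as possible before the literal. The optional numeric-code group is an assumption based on the "Exception:100 :" samples above:

```
| rex field=_raw "(?s).*Exception:(?:\s*\d+\s*:)?\s*(?<EXCEPTION>.*?),\s*Job="
```

Against the three sample events this would yield the three desired strings, with the "100 :" prefixes stripped.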
Hi, I am confused about the new "USERS AND AUTHENTICATION" section in the Settings tab. How can I add the "Roles" tab? Currently I see the section, but there is no more "Roles" link! Can you help me please? Thanks.
Hi all, I am facing an issue with what seems to be delayed logs, or logs not received, from the UFs. Regarding the current queue, we have gone from 1, to 2, to now 3 for the indexer's parallelIngestionPipelines setting. On the indexer, the index queue is always full and is affecting the downstream from the 2 HFs. There are about 16 intermediate forwarders sending to HF001, and HF002 is mainly doing API calls to pull data. The IOPS for the indexer is around 1600, CPU usage 50%, and memory 31%. Any recommendations on what we can do to improve this, e.g. an additional indexer? Thanks.
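Before adding hardware, it may help to confirm which queue saturates first using metrics.log (field names as found in recent Splunk versions; the host filter is a placeholder):

```
index=_internal source=*metrics.log* group=queue host=my_indexer
| eval pct_full = current_size_kb * 100 / max_size_kb
| timechart span=5m perc90(pct_full) BY name
```

If only indexqueue is pegged while the upstream queues are low, the bottleneck is typically disk I/O on the indexing tier rather than pipeline count.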
Hi, I would like to know why we get this ERROR. Even though it receives data after the error, it just continues showing the issue, and the owner is quite interested in what the error is and why it shows.
We would like to start working on Google integration with Splunk. Is there any app available with dashboards and reports we can install? I think we are going to use Splunk_TA_google-cloudplatform to integrate the data.
Hi Team, I need a suggestion please. I want to import XML data into Splunk using the Python script below. My concerns are:
1) Can I directly configure the script's output to index data in Splunk using inputs.conf, or do I first need to save the output into a .csv file? If so, can anyone please suggest an approach so that the data does not get changed after storing it in a new .csv file?
2) How can I configure the .py file to fetch data every 5 minutes?
CODE:
import requests
import xmltodict
import json

url = "https://www.w3schools.com/xml/plant_catalog.xml"
response = requests.get(url)
content = xmltodict.parse(response.text)
print(content)
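To both concerns: a scripted input can print events straight to stdout (no .csv intermediate, so nothing gets reshaped), and an `interval = 300` setting on the `[script://...]` stanza in inputs.conf reruns it every 5 minutes with no cron needed. A stdlib-only sketch of the conversion step; the real script would feed the HTTP response body into `xml_to_events` (the URL fetch is omitted here), and emitting one JSON object per line lets a JSON-aware sourcetype extract the fields:

```python
import json
import xml.etree.ElementTree as ET

def xml_to_events(xml_text):
    """Turn each child element of the XML root into a flat dict of tag -> text."""
    root = ET.fromstring(xml_text)
    return [
        {field.tag: (field.text or "").strip() for field in child}
        for child in root
    ]

if __name__ == "__main__":
    # In the real scripted input, this string would be the HTTP response
    # body fetched from the catalog URL in the post.
    sample = """<CATALOG>
      <PLANT><COMMON>Bloodroot</COMMON><PRICE>$2.44</PRICE></PLANT>
      <PLANT><COMMON>Columbine</COMMON><PRICE>$9.37</PRICE></PLANT>
    </CATALOG>"""
    # A Splunk scripted input simply writes events to stdout; one JSON
    # object per line indexes cleanly without any .csv step.
    for event in xml_to_events(sample):
        print(json.dumps(event))
```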
Good afternoon! Splunk Add-on for Microsoft Windows version 8.0.0 (Splunk TA Windows) generates a data source without a domain name, i.e. just a host name. How can I bulk-configure it to report the hostname with the domain, e.g. pc1.domain.com? Using hostnameOption = fullyqualifiedname in server.conf is not an option, because we would need to distribute that configuration to a large number of PCs via forwarder management. Can such a setup be done through props and transforms?
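Rewriting the host at parse time with props/transforms on the indexers or heavy forwarders would avoid touching the endpoints entirely. The sketch below is untested; the stanza match, regex, and domain are assumptions to adapt:

```ini
# props.conf (indexer / heavy forwarder)
[host::pc*]
TRANSFORMS-append_domain = append_domain

# transforms.conf
[append_domain]
SOURCE_KEY = MetaData:Host
REGEX = ^host::(\w+)$
DEST_KEY = MetaData:Host
FORMAT = host::$1.domain.com
```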