All Topics

I've installed and configured the Cisco AMP for Endpoints Events Input app 2.0.2. The API calls seem to work, but no data is coming in; instead, the following messages are logged repeatedly to $SPLUNK_HOME/var/log/splunk/amp4e_events_input.log:

2022-03-31 11:35:05,815 ERROR Amp4eEvents - Consumer Error that does not look like connection failure! See the traceback below.
2022-03-31 11:35:05,816 ERROR Amp4eEvents - Traceback (most recent call last):
  File "/opt/splunk/etc/apps/amp4e_events_input/bin/util/stream_consumer.py", line 34, in run
    self._connection = pika.BlockingConnection(pika.URLParameters(self._url))
  File "/opt/splunk/etc/apps/amp4e_events_input/bin/pika/adapters/blocking_connection.py", line 377, in __init__
    self._process_io_for_connection_setup()
  File "/opt/splunk/etc/apps/amp4e_events_input/bin/pika/adapters/blocking_connection.py", line 417, in _process_io_for_connection_setup
    self._open_error_result.is_ready)
  File "/opt/splunk/etc/apps/amp4e_events_input/bin/pika/adapters/blocking_connection.py", line 469, in _flush_output
    raise maybe_exception
pika.exceptions.ProbableAuthenticationError: (403, 'ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.')

I don't know what broker logfile it's suggesting I reference, or how to fix this error, since the authentication type is hard-coded in the app. All the errors I find when searching relate to RabbitMQ.
Hi, we have a severe performance issue with dbxlookup in DB Connect App 3.8 against a MySQL DB: dbxlookups in a Splunk query take several minutes to return results. The strange thing is that it only happens when using dbxlookup. Using dbxquery is blazing fast (1.4 seconds to return 100K results), and the lookup is also extremely fast when configured in the DB Connect app (where you run the actual SQL query that is to be used for the lookup). Example of using dbxlookup in a search:

| makeresults
| eval page_id=510376245
| dbxlookup connection="myDB" query="SELECT C.CONTENTID as content_id, C.TITLE as page_title, S.SPACEKEY as space_key, S.SPACENAME as space_name FROM CONTENT AS C LEFT JOIN SPACES AS S ON (C.SPACEID=S.SPACEID) ORDER BY C.CONTENTID DESC" "content_id" AS "page_id"

The search job inspector shows the long time the dbxlookup took. The search log does not yield any helpful information:

03-31-2022 16:27:11.988 INFO SearchParser [110743 StatusEnforcerThread] - PARSING: table _time, page_id
03-31-2022 16:27:11.988 INFO SearchParser [110743 StatusEnforcerThread] - PARSING: head 20
03-31-2022 16:27:11.988 INFO SearchParser [110743 StatusEnforcerThread] - PARSING: dbxlookup lookup="scwiki_db"
03-31-2022 16:27:11.988 INFO ChunkedExternProcessor [110743 StatusEnforcerThread] - Running process: /opt/splunk/jdk1.8.0_111/bin/java -Dlogback.configurationFile\=../config/command_logback.xml -DDBX_COMMAND_LOG_LEVEL\=DEBUG -cp ../jars/dbxquery.jar com.splunk.dbx.command.DbxLookupCommand
03-31-2022 16:27:12.585 INFO DispatchExecutor [110743 StatusEnforcerThread] - BEGIN OPEN: Processor=dbxlookup
03-31-2022 16:27:12.605 INFO DispatchExecutor [110743 StatusEnforcerThread] - END OPEN: Processor=dbxlookup
03-31-2022 16:27:12.605 INFO ChunkedExternProcessor [110743 StatusEnforcerThread] - Skipping custom search command since we are in preview mode: dbxlookup
03-31-2022 16:27:12.620 INFO PreviewExecutor [110743 StatusEnforcerThread] - Finished preview generation in 0.663433841 seconds.
03-31-2022 16:27:13.143 INFO DispatchExecutor [110818 phase_1] - END OPEN: Processor=table
03-31-2022 16:27:13.143 INFO DispatchExecutor [110818 phase_1] - BEGIN OPEN: Processor=dbxlookup
03-31-2022 16:27:13.364 INFO DispatchExecutor [110818 phase_1] - END OPEN: Processor=dbxlookup
03-31-2022 16:27:14.627 INFO ReducePhaseExecutor [110743 StatusEnforcerThread] - ReducePhaseExecutor=1 action=PREVIEW   <- here the delay happens
03-31-2022 16:29:44.577 INFO PreviewExecutor [110743 StatusEnforcerThread] - Stopping preview triggers since search almost finished
03-31-2022 16:29:44.580 INFO DownloadRemoteDataTransaction [110818 phase_1] - Downloading logs from all remote event providers
03-31-2022 16:29:44.849 INFO ReducePhaseExecutor [110818 phase_1] - Downloading all remote search.log files took 0.270 seconds
03-31-2022 16:29:44.850 INFO DownloadRemoteDataTransaction [110818 phase_1] - Downloading logs from all remote event providers

So it must have something to do with the way dbxlookup works. It could be a Java issue, a MySQL driver issue, a combination of both, or something completely different :-). We are using the latest DB Connect MySQL Add-On. I am grateful for any hints or tips on how to troubleshoot this further. Or an actual solution :-).
Hi, we are running a distributed Splunk environment and monitor the messages that appear when there are issues within the ecosystem. We have read the official Splunk docs about customizing messages and about messages.conf, but weren't able to find good answers there. Maybe one of you has more experience with this:
https://docs.splunk.com/Documentation/Splunk/8.2.5/Admin/Customizeuserexperience
https://docs.splunk.com/Documentation/Splunk/8.2.5/Admin/Messagesconf

Can someone help explain these parameters and their behavior?

target = [auto|ui|log|ui,log|none]
* Sets the message display target.
* "auto" means the message display target is automatically determined by context.
* "ui" messages are displayed in Splunk Web and can be passed on from search peers to search heads in a distributed search environment.
* "log" messages are displayed only in the log files for the instance under the BulletinBoard component, with log levels that respect their message severity. For example, messages with severity "info" are displayed as INFO log entries.
* "ui,log" combines the functions of the "ui" and "log" options.
* "none" completely hides the message. (Please consider using "log" and reducing severity instead. Using "none" might impact diagnosability.)
* Default: auto

I'm trying to find a way to control whether messages get distributed to another instance, like the Monitoring Console, or whether they should only appear on the system where the issue happened. Is that possible? And where do I find those events if I select "log" as the parameter — do they appear only in splunkd.log? Thanks
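A minimal sketch of overriding one message's behavior in a local messages.conf, using the target and severity settings quoted above. The stanza name here is a placeholder; it has to match the real message ID exactly as it appears in $SPLUNK_HOME/etc/system/default/messages.conf:

```
# local/messages.conf -- [SOME_MESSAGE_ID] is a placeholder; copy the real
# stanza name from $SPLUNK_HOME/etc/system/default/messages.conf
[SOME_MESSAGE_ID]
target = log
severity = info
```

With target = log, the message should no longer reach Splunk Web bulletin boards and would only be written to the instance's own logs under the BulletinBoard component, which is the behavior the docs excerpt describes.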
My company is using Splunk to store data for our apps, and we would like to use Tableau to build visualizations. I have installed the driver for Splunk, but I'm not clear on the required credentials: server, port, username, and password. I have access to our company's Splunk Enterprise, but it logs me in automatically once I connect to the company VPN, which means I don't know my username or password. I also have difficulty finding the server and port. I tried "splunk.xxx(my company's name).com" as the server, 8089 as the port, and my corporate credentials as username and password; however, it didn't work. Can someone help me with this problem? Thanks a lot.
Hi, I need to display an overall status in a dashboard (Single Value) based on the results returned from my Splunk queries. Example:

If all statuses are OK - Overall Status=OK
If one or more statuses are Failed and all others are OK (i.e. no job in Pending) - Overall Status=Failure
If one or more are Failed and one or more are Pending - Overall Status=Partial OK
If all are Pending - Overall Status=Pending

Job  Status
A    OK
B    OK
C    Failed
D    Pending

Any suggestions if the above is possible?
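The precedence of those rules can be sketched as a small function. This is Python purely to illustrate the logic (the function name is my own; in a dashboard this would typically become an eval/case expression over per-status counts):

```python
def overall_status(statuses):
    """Collapse a list of job statuses into one overall status.

    Rules in order: all OK -> OK; any Failed with no Pending -> Failure;
    Failed and Pending mixed -> Partial OK; all Pending -> Pending.
    """
    failed = statuses.count("Failed")
    pending = statuses.count("Pending")
    ok = statuses.count("OK")

    if ok == len(statuses):
        return "OK"
    if failed and not pending:
        return "Failure"
    if failed and pending:
        return "Partial OK"
    if pending == len(statuses):
        return "Pending"
    return "Unknown"  # mixtures not covered by the stated rules (e.g. OK + Pending only)

print(overall_status(["OK", "OK", "Failed", "Pending"]))  # -> Partial OK
```

Note the rules as stated don't cover every mixture (e.g. only OK and Pending), so the sketch returns "Unknown" for those; you'd want to decide what that case should display.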
Hi all, as in the previous posts I and II, I'd like to anonymize names of cities while keeping the length of the string. The nature of the logs is quite complex. I'm sharing the part in question:

2022-03-31 15:23:11,210 INFO ...  - ...
381 lines omitted ...
F_AUSWEISENDE=12.02.2022
F_AUSWEISNUMMER=A2A2A2AAA
F_BEHOERDE=Berlin
F_BV_FREITEXTANTRAG=
---------------

What I'd like to get is:

2022-03-31 15:23:11,210 INFO ...  - ...
381 lines omitted ...
F_AUSWEISENDE=12.02.2022
F_AUSWEISNUMMER=A2A2A2AAA
F_BEHOERDE=XXXXXX
F_BV_FREITEXTANTRAG=
---------------

Sometimes, unfortunately, the names are more complex and include processing errors:

F_BEHOERDE=Stadt Rastatt B\xFCrgerb\xFCro

Then I'd like to get:

F_BEHOERDE=XXXXX XXXXXXX XXXXXXXXXXXXXXXX

I've managed to create a regex that anonymizes the city names but doesn't keep their length. If the dynamic version is not possible, I will probably need to stick with this:

s/F_BEHOERDE=.*/F_BEHOERDE=XXXXX/g

I'll be grateful for any hints.
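A length-preserving mask needs a per-character replacement, which sed-style fixed replacement strings can't express. The idea sketched in Python (field name from the sample above; everything else illustrative — in SPL's eval, replace(F_BEHOERDE, "\S", "X") follows the same idea):

```python
import re

def mask_city(line):
    """Replace each non-space character of the F_BEHOERDE value with 'X',
    preserving the value's length and its word boundaries."""
    def _mask(match):
        return match.group(1) + re.sub(r"\S", "X", match.group(2))
    return re.sub(r"(F_BEHOERDE=)(.*)", _mask, line)

print(mask_city("F_BEHOERDE=Berlin"))  # -> F_BEHOERDE=XXXXXX
```

Lines without F_BEHOERDE pass through unchanged, so this can be applied to every line of the log.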
Hi, is it possible to use Python (or other languages) to get logs that originated from specific hosts? For example, search for a list of hosts and return the logs that were ingested during a specific date range. Thanks!
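One common approach is Splunk's REST API: POST a search to /services/search/jobs/export on the management port (8089 by default) and stream the results. A sketch under that assumption — host names, index, and credentials below are placeholders you'd replace with your own (the official splunk-sdk package wraps the same API):

```python
import base64
import json
import urllib.parse
import urllib.request

def build_query(hosts, index="main", earliest="-7d", latest="now"):
    """Build an SPL search string restricted to the given hosts and time range."""
    host_clause = " OR ".join(f'host="{h}"' for h in hosts)
    return f"search index={index} ({host_clause}) earliest={earliest} latest={latest}"

def export_events(base_url, username, password, query):
    """POST the search to Splunk's export endpoint and yield parsed result lines."""
    data = urllib.parse.urlencode({"search": query, "output_mode": "json"}).encode()
    req = urllib.request.Request(f"{base_url}/services/search/jobs/export", data=data)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    # Add an ssl.SSLContext with your CA bundle here if the cert is self-signed.
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            if line.strip():
                yield json.loads(line)

# Example usage (placeholder URL and credentials):
# query = build_query(["web01", "web02"], index="main", earliest="-30d")
# for event in export_events("https://splunk.example.com:8089", "user", "pass", query):
#     print(event)
```

The export endpoint streams results as they are produced, which avoids holding a huge result set in memory on either side.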
Our Splunk instance calls a remote client's API server, but the call fails with an SSL untrusted-certificate error. We want Splunk to check and use a CA certificate we have copied onto the system and trust it. We need the system path where we can place the CA certificate file so that it is automatically used by Splunk whenever it calls that remote API server.
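There isn't one path that every outbound Splunk call consults; it depends on how the add-on makes the call. If the add-on is Python-based and uses the requests library, requests honors a custom CA bundle via its verify parameter or the REQUESTS_CA_BUNDLE environment variable; for plain stdlib calls, an SSL context can be extended with your CA file. A sketch under those assumptions (the bundle path is a placeholder):

```python
import ssl

CA_BUNDLE = "/opt/splunk/etc/auth/custom_ca.pem"  # placeholder path

def trusting_context(ca_bundle=None):
    """Return an SSL context that trusts the system defaults, plus the given
    CA file when one is provided."""
    ctx = ssl.create_default_context()
    if ca_bundle:
        ctx.load_verify_locations(cafile=ca_bundle)
    return ctx

# With the requests library (used by many Python add-ons), the equivalent is:
#   requests.get(url, verify=CA_BUNDLE)
# or exporting REQUESTS_CA_BUNDLE=/opt/splunk/etc/auth/custom_ca.pem
# in the environment Splunk starts with.
```

Which of these applies depends on the specific add-on's code, so it's worth checking whether it exposes its own CA-bundle setting first.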
Background: In our company, Splunk is owned by DevOps. I don't have access to develop on Splunk (like Splunk Dev); I can only use it, and can't do or argue anything about Splunk settings! Many commands like 'eventstats' cannot be run due to a space limit. For all that, we want to mine some useful data from the log files (we cannot get the log files directly, only through Splunk, by the way). We want to find potential bugs before customers encounter them.

Problems: I tried to get the raw log events by running a command which is simple but returns all events; after it finished, I clicked the "download" button. But some files are too big to download (10GB mostly)! So I want to find a way to run a program that automatically retrieves the raw events, but I know this area of Splunk poorly. Have you tried this, or can you think of another automated or half-automated solution? Thanks!
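One way around a single 10GB download is to page through the finished job's results over the REST API instead of using the UI download button — /services/search/jobs/&lt;sid&gt;/results accepts offset and count parameters. The paging arithmetic, sketched (endpoint names from the REST API; the page size is an arbitrary choice):

```python
def result_pages(total, page_size=50000):
    """Yield (offset, count) windows for fetching a large result set in chunks,
    e.g. via GET /services/search/jobs/<sid>/results?offset=...&count=..."""
    offset = 0
    while offset < total:
        yield offset, min(page_size, total - offset)
        offset += page_size

# For 120,000 results with 50,000 per page:
# list(result_pages(120000)) == [(0, 50000), (50000, 50000), (100000, 20000)]
```

Each window becomes one modest HTTP request, so no single file ever has to hold the whole result set; splitting the search itself into smaller time ranges works the same way.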
We are seeing strange behavior after updating Splunk from 8.0.4.1 to 8.2.4. The major issue is with all queries that use the streamstats command. After observing this behavior, we updated the command to include the time difference as well, dividing by the time delta when computing the difference between two events. Occasionally, graphs display regular statistics for a short period before switching to an abnormal view (due to dashboard auto-refresh). If anyone has encountered such a problem, please let me know, since most of our dashboards are affected; we attempted to generalize and adapt this approach across all of these dashboards in the hope that it would cure the problem.
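For reference, the calculation described — the difference between consecutive events divided by the elapsed time between them — sketched in Python to pin down the intended semantics (field names illustrative; in SPL this is typically a streamstats over the previous value and time, followed by an eval):

```python
def rates(events):
    """Given (epoch_time, value) pairs sorted by time, return per-second
    rates between consecutive events: (v2 - v1) / (t2 - t1)."""
    out = []
    for (t1, v1), (t2, v2) in zip(events, events[1:]):
        if t2 != t1:  # guard against a zero time delta
            out.append((v2 - v1) / (t2 - t1))
    return out

print(rates([(0, 10), (30, 40), (60, 100)]))  # -> [1.0, 2.0]
```

The zero-delta guard matters: two events with the same timestamp would otherwise divide by zero, and dropping that guard is one way a dashboard can flip to an abnormal view only sometimes.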
Good morning. We recently upgraded our Palo Alto firewall from a 5060 to a 5260. We log using a syslog server. How do we point the Palo Alto Networks App for Splunk to the new firewall? Thanks.
When I uploaded the ZIP file for the app 'Windows Event Code Security Analysis' (https://github.com/stressboi/splunk_wineventcode_secanalysis) to my Splunk Cloud instance, the App Vetting process reported the following error:

undefined issues found. You must fix these issues before you can install your app. For details, see the report.

Contents of the report link:

This XML file does not appear to have any style information associated with it. The document tree is shown below.
<response>
<messages>
<msg type="ERROR">Not Found</msg>
</messages>
</response>

Does anybody have any ideas how to solve it? Thanks in advance for your help.
Hey, I am trying to use a subsearch with the loadjob command, but it is failing. Can you please help? Many thanks, Patrick
I am looking for an alert query for monitoring Windows processes. Below is the scenario:

1. A lookup has fields named "host" and "Process".
2. A Windows index query where the process is recorded in a field called "Name"; we also have the "host" field by default.
3. The query needs to pick the values of "host" and "Process" from the lookup and find matches in the Windows index query; matching events should appear in the Splunk results.

Kindly assist.
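The matching described in step 3 is essentially an inner join on the (host, process) pair. Sketched in Python to illustrate the logic (sample data is invented; in Splunk this is usually done with an inputlookup subsearch or a lookup followed by a filter):

```python
def matching_events(lookup_rows, events):
    """Keep only events whose (host, Name) pair appears as (host, Process)
    in the lookup -- an inner join on the two fields."""
    wanted = {(row["host"], row["Process"]) for row in lookup_rows}
    return [e for e in events if (e["host"], e["Name"]) in wanted]

lookup = [{"host": "win01", "Process": "svchost.exe"}]
events = [
    {"host": "win01", "Name": "svchost.exe"},
    {"host": "win02", "Name": "svchost.exe"},
]
print(matching_events(lookup, events))  # -> only the win01 event
```

The key point the sketch makes explicit: the match must be on the pair, not on host and process independently, otherwise win02 above would wrongly alert.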
Hello colleagues, we've implemented ingest-time lookups, but unfortunately the expected field from the configured CSV lookup does not show up in our searches. The following implementation steps were executed:

1. props.conf & transforms.conf prepared and stored under $SPLUNK_HOME/etc/system/local on all indexer nodes within the cluster.
2. index_lookup.csv prepared and stored under $SPLUNK_HOME/etc/system/lookups on all indexer nodes within the cluster.
3. Rolling restart of the nodes.
4. fields.conf prepared and deployed via SHD to our SHs.

props.conf:

[aws:cloudwatch]
TRANSFORMS-define_index = define_rds_index

transforms.conf:

[define_rds_index]
INGEST_EVAL = test_index=json_extract(lookup("index_lookup.csv", json_object("account_id", account_id), json_array(index_tag)),"index_tag")

index_lookup.csv:

account_id    index_tag
886089063862  index_platform-sandbox-dev

fields.conf:

[test_index]
INDEXED = True

Does anyone have an idea if we missed a step or something is misconfigured? Thank you very much!
How can I convert `_time` to a column and `host` to an index (one row per host) while using `mstats`?

| mstats avg(_value) prestats=true WHERE metric_name="cpu.*" AND index="*" AND (host="host01.example.com" OR host="host02.example.com" OR host="host03.example.com" OR host="host04.example.com" OR host="host05.example.com" OR host="host06.example.com") AND `sai_metrics_indexes` span=auto BY metric_name
| timechart avg(_value) as "Avg" span=30m by metric_name
| fillnull value=0
| foreach * [| eval "<<FIELD>>"=round('<<FIELD>>',2)]

The above yields a timechart by metric_name only, with no host column (screenshot omitted). What is desired:

host                _time                cpu.idle  cpu.interrupt  cpu.nice  cpu.softirq  cpu.steal  cpu.system  cpu.user  cpu.wait
host01.example.com  2022-03-31 07:30:00  57.56     0.00           22.98     0.08         0.00       18.75       0.59      0.04
host01.example.com  2022-03-31 08:00:00  59.08     0.00           22.02     0.11         0.00       18.06       0.70      0.04
host01.example.com  2022-03-31 08:00:00  61.79     0.00           20.53     0.08         0.00       16.96       0.62      0.04

Any help will be much appreciated.
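The reshaping being asked for — long (host, time, metric, value) rows pivoted so each metric becomes a column and each (host, time) pair becomes a row — sketched in Python for illustration (sample data follows the desired table above; in SPL the analogous approach is to group by _time, host, and metric_name and then pivot, e.g. with a dynamic-field eval):

```python
from collections import defaultdict

def pivot_metrics(rows):
    """Pivot (host, time, metric, value) rows into one row per (host, time)
    with a column per metric."""
    table = defaultdict(dict)
    for host, t, metric, value in rows:
        table[(host, t)][metric] = value
    return [
        {"host": host, "_time": t, **metrics}
        for (host, t), metrics in sorted(table.items())
    ]

rows = [
    ("host01.example.com", "2022-03-31 07:30:00", "cpu.idle", 57.56),
    ("host01.example.com", "2022-03-31 07:30:00", "cpu.user", 0.59),
]
print(pivot_metrics(rows))
# -> [{'host': 'host01.example.com', '_time': '2022-03-31 07:30:00',
#      'cpu.idle': 57.56, 'cpu.user': 0.59}]
```

The essential change from the original search is that host must survive the aggregation (be part of the group-by key) before any pivoting happens, which the timechart by metric_name alone does not do.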
Hi, I need to extract a string from a field in a lookup: the text between <query> and </query>. The field name is "eai:data". Any help would be appreciated.
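The extraction sketched in Python (tag and field names from the question; the sample XML is invented — in SPL a rex over the eai:data field with the same pattern is the usual route):

```python
import re

def extract_query(eai_data):
    """Return the text between <query> and </query>, or None if absent.

    [\s\S]*? matches across newlines, since saved-search XML is multi-line;
    the lazy quantifier stops at the first closing tag."""
    m = re.search(r"<query>([\s\S]*?)</query>", eai_data)
    return m.group(1) if m else None

print(extract_query("<view><query>index=main | stats count</query></view>"))
# -> index=main | stats count
```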
Hello guys, how and where should we set up the Dynatrace app / add-on on a cluster? App + add-on on the SHC? On production we have a HF available, but not in the test environment.
https://splunkbase.splunk.com/app/4040/
https://splunkbase.splunk.com/app/3969/
Thanks for your help!
Hi, I need to upgrade Universal Forwarders from version 6.5.1 to version 8.0. Is it possible to do it directly, or must I install some intermediate version before installing 8.0? The UFs run on Linux, Windows, and Solaris. Thanks to all.
After upgrading to 8.2.5 we suffer from another issue with visualization showing dramatically wrong data. The dashboard/panel was built this morning at 11:30; after some time (here ca. 30 min) it shows a completely different avg visual and, more concerning, completely wrong data for the 'count_pm' metric. Please see my attached XML code — what do I miss here?