All Topics

Hi! I am trying to set up an alert that triggers a Jenkins job when its condition is met. To trigger the Jenkins job I have to supply a username/password in the POST request. I am not sure whether this is supported in Splunk Enterprise version 8.0.5.1?
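The built-in webhook alert action has no field for credentials, so one workaround sometimes used with Jenkins (a sketch under that assumption; the host, job name, user, and token below are hypothetical placeholders, and whether your HTTP path honors credentials embedded in the URL needs testing) is Jenkins' remote build trigger, which accepts a user:API-token pair as basic auth in the URL:

    # savedsearches.conf sketch -- hypothetical job path, user, and API token
    [Trigger Jenkins Build]
    action.webhook = 1
    action.webhook.param.url = https://builduser:APITOKEN@jenkins.example.com/job/my-job/build?token=BUILD_TOKEN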
I just installed the add-on and got Java set up, and I actually have JMX data coming into the main index, but I am not able to see JMX under Settings >> Data inputs >> JMX. I would like to have the data going to another index, but cannot find out how to do this. Here is the output of my print-modinput-config:

    /opt/splunk/bin/splunk cmd splunkd print-modinput-config jmx

    <?xml version="1.0" encoding="UTF-8"?>
    <input>
      <server_host>SRVP01SPLUNK-01</server_host>
      <server_uri>https://127.0.0.1:8089</server_uri>
      <session_key>n0Zfn422VQQDkWH_MV^wkRCj3Zy_2yZVD^WYBSx84i69_3g2f^Ylatg_Mb^OOhhY0iodEKMOgZer23LjMRt5vlr5342o8g1uCDeQ73rYU6lRZw^Wfo</session_key>
      <checkpoint_dir>/opt/data/splunk/modinputs/jmx</checkpoint_dir>
      <configuration>
        <stanza name="jmx://_Splunk_TA_jmx_:mirth_poc" app="Splunk_TA_jmx">
          <param name="config_file">_Splunk_TA_jmx.Splunk_TA_jmx.mirth_poc.xml</param>
          <param name="config_file_dir">etc/apps/Splunk_TA_jmx/local/config</param>
          <param name="disabled">0</param>
          <param name="host">$decideOnStartup</param>
          <param name="index">jmx_mirth</param>
          <param name="interval">30</param>
          <param name="polling_frequency">60</param>
          <param name="python.version">python3</param>
          <param name="sourcetype">jmx</param>
          <param name="start_by_shell">false</param>
        </stanza>
      </configuration>
    </input>
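For reference, a modular input's destination index can be overridden in the add-on's local inputs.conf (a sketch using the stanza name from the output above; note the printed config already shows index set to jmx_mirth, so the index may simply need to exist and be searchable by your role):

    # etc/apps/Splunk_TA_jmx/local/inputs.conf
    [jmx://_Splunk_TA_jmx_:mirth_poc]
    index = jmx_mirth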
Hi all, I have one field that simply shows the latest timestamp of logs. i) I was wondering how I can find the difference between the latest log time and the current system time? ii) Then, with that value, I was hoping to run a condition and print the result to another field (e.g. timeliness). The condition I wanted to implement was: if the difference is greater than 3 hours, put "Out of Sync" in the timeliness field; otherwise, put "Synced". My latest log time is in the following format: 2021-07-23 02:54:09. Any help would be greatly appreciated!
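A minimal SPL sketch, assuming the field holding that timestamp is named latest_log_time (adjust to your actual field name); 10800 seconds is 3 hours:

    | eval latest_epoch = strptime(latest_log_time, "%Y-%m-%d %H:%M:%S")
    | eval diff_secs = now() - latest_epoch
    | eval timeliness = if(diff_secs > 10800, "Out of Sync", "Synced")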
I need to provide HA & better performance in MC for the Enterprise Console (ES) what health check items in MC or DMC do you recommend. Thank u in advance.
Hi all, I have a dropdown field that is used to filter the results of a pivot table. Is there a way I can show and hide a column in the pivot table? For instance, say the token of the dropdown is 'select_field_1' ('version' and 'daysRemaining' are columns). I'd imagine there is a conditional command where you can do something like: if $select_field_1|s$ = certificate, show daysRemaining and hide version. Any help would be greatly appreciated!
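One possible approach, sketched in Simple XML (this assumes a table element driven by your pivot search rather than the pivot editor itself; the token-to-column mapping is hypothetical): have the dropdown set a token naming the extra column, then reference that token in the table's field list.

    <input type="dropdown" token="select_field_1">
      <choice value="certificate">certificate</choice>
      <choice value="other">other</choice>
      <change>
        <!-- Hypothetical mapping: show daysRemaining for certificate, version otherwise -->
        <condition value="certificate">
          <set token="extra_col">daysRemaining</set>
        </condition>
        <condition>
          <set token="extra_col">version</set>
        </condition>
      </change>
    </input>

    <table>
      <search>...your pivot search...</search>
      <fields>CartridgeType, Cartridge, $extra_col$</fields>
    </table>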
FYI: the red-marked URLs from the attached image should be removed from the output of the Splunk query shared below. Can someone please help with this? Query used in the environment:

    index=claims_pd env=pd_cloud_e sourcetype=claims:cif:ibuapps "https://" NOT "*.gco.net" NOT "*.gcoddc.net" NOT "*gco.net"
    | rex field=_raw "(?<externalURL>https:\/\/.[^\s]+)"
    | stats values(externalURL) as externalURL, list(ResponseMessage) as ResponseMessage, count by ServiceName
    | sort 0 - count
    | dedup externalURL
    | append
        [ search sourcetype=claims:cif:ibuapps "javax.net.ssl.SSLException" OR "javax.net.ssl.SSLHandshakeException" OR "Unable to tunnel through proxy" OR "HTTP response '400: Bad Request'" OR "(504)Gateway Timeout" OR "Access is denied" AND (ServiceName OR (doFinally AND "method:handleErrorResponse"))
        | stats list(ResponseMessage) as ResponseMessage, count by ServiceName
        | sort - count
        | return ResponseMessage ]
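Since the attached image is not available here, the exact URLs to exclude are unknown; as an illustration with hypothetical values only, extracted URLs can be dropped right after the rex, e.g.:

    | rex field=_raw "(?<externalURL>https:\/\/.[^\s]+)"
    | search NOT externalURL IN ("https://unwanted-1.example.com/*", "https://unwanted-2.example.com/*")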
Scenario: 3-node SHC behind Okta auth.

Suppose you have a URL, splunk-foo.com, that points to an ALB which load balances user logins between SH1, SH2, and SH3. For example, you navigate to https://splunk-foo.com, you get directed to SH1, then SH1 redirects you to an IdP (like Okta for MFA); after you complete authentication you are logged in.

Let's say that when you initiated the Okta idpCert.pem creation you used the clientCert of SH1, server.pem. Now you will notice that when you log out from SH2 or SH3 you get an error like:

    IDP failed to handle logout request. Status="Status Code="urn:oasis:names:tc:SAML:2.0:AuthnFailed"

After re-reading Splunk docs, Okta docs, Community posts, etc. (and becoming thoroughly confused), we inferred that Okta needs a copy of the SH1 server.pem as the clientCert for all other SHC nodes (i.e. SH2 and SH3). So we copied/renamed the SH1 server.pem to idp-okta.pem, dropped it in the .../etc/auth/ dir, and then configured the path in .../etc/system/local/authentication.conf like this:

    [saml]
    #clientCert = /opt/splunk/etc/auth/server.pem
    clientCert = /opt/splunk/etc/auth/idp-okta.pem

Apparently this works. However, I am wondering if this is the correct way? As I said before, the docs are a bit cloudy regarding this Okta setup for SHCs. As a single search head deployment the steps would work. Please advise if there is a better way or if there is some unanticipated SSL concern with this method.

RE: https://docs.splunk.com/Documentation/Splunk/8.0.6/Security/SAMLSHC. This appears to have been updated recently with new directions... or maybe we just misunderstood. It seems that you should not submit a specific SH node's server.pem to Okta to create an idpCert, but rather create a new cert.pem and then install the new "saml" clientCert.pem and the resulting idpCert on all the SHC nodes.

As a side question: if you were to change all the SHC nodes to use the same server.pem (i.e. replace SH2's and SH3's server.pem with SH1's server.pem), would that cause SSL to break or mess up SHC performance? Thank you in advance.
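For reference, a minimal authentication.conf sketch of what the updated doc seems to describe (the cert names and paths are illustrative; the point is that every member carries the same dedicated SAML clientCert and the idpCert generated from it, rather than any one node's server.pem):

    # .../etc/system/local/authentication.conf -- identical on SH1, SH2, and SH3
    [saml]
    # Dedicated cert created for SAML, not an individual member's server.pem
    clientCert = /opt/splunk/etc/auth/saml-clientCert.pem
    idpCertPath = /opt/splunk/etc/auth/idpCerts/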
We have one ES search head in a distributed environment. 1. If the search head goes down, do alerts queue up and trigger actions once Splunk is back up? 2. If yes, for how long are alerts retained? Thank you.
How do I calculate Latency Over Last Minute, Total Requests/min, and LBs with Highest Unhealthy Host % in the load balancer dashboard? We are facing a production issue, and I am trying to figure out the root cause from Splunk but have not been able to build a correct report for latency. Can someone please help me with this? We are using a Splunk Cloud instance. We have the following fields in the ELB logs: timestamp, elb, client_ip, client_port, request_processing_time, response_processing_time, elb_status_code, received_bytes, ssl_cipher, ssl_protocol, request, backend_processing_time, backend_status_code. Any help on this would be appreciated.
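A starting sketch in SPL using the fields listed above (the sourcetype is an assumption; in classic ELB access logs the total request latency is the sum of the three processing-time fields):

    sourcetype=aws:elb:accesslogs
    | eval total_latency = request_processing_time + backend_processing_time + response_processing_time
    | timechart span=1m avg(total_latency) as avg_latency, count as requests_per_min

Note that healthy/unhealthy host counts are generally not in the access logs at all; they normally come from the CloudWatch HealthyHostCount/UnHealthyHostCount metrics, so that panel would need a CloudWatch input rather than the access-log sourcetype.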
I need to learn how Microsoft email data is ingested into Splunk Enterprise or ES for auditing purposes. I would appreciate any details.
Hi, I just tried configuring one Splunk Enterprise instance and one server installed with a universal forwarder plus the Splunk Add-on for Unix and Linux. When I check the events from the Splunk server, I get an unusual result (see the attached screenshot): it is as if my logs are not parsed, so there are no fields for disk usage, CPU usage, etc. Because of that, when I use the Splunk App for Unix and Linux for monitoring, I get the empty result shown in the second screenshot. How do I solve this problem? I just followed the documentation and don't know how to fix this issue.
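A common cause (an assumption here, since the screenshots are not visible) is that the add-on's scripted inputs are still disabled, which they are by default, or that the add-on is not also installed on the search head for its search-time field extractions. The inputs are enabled on the forwarder in the add-on's local inputs.conf, for example:

    # etc/apps/Splunk_TA_nix/local/inputs.conf on the universal forwarder
    [script://./bin/cpu.sh]
    interval = 30
    sourcetype = cpu
    source = cpu
    disabled = 0

    [script://./bin/df.sh]
    interval = 300
    sourcetype = df
    source = df
    disabled = 0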
Hi, is there a way to get the list of public static IPs of the Hosted Synthetic Agents deployed across the globe for Synthetic Monitoring, so we can whitelist them in our firewall? I have whitelisted the IPs below in my firewall for ports 80 and 443, as per https://docs.appdynamics.com/display/PAA/SaaS+Domains+and+IP+Ranges:

    18.134.158.244/32
    18.166.70.91/32
    3.8.177.132/32
    35.178.177.76/32
    18.195.41.33/32
    18.195.153.182/32
    18.195.58.148/32
    52.48.243.82/32
    52.59.59.81/32
    52.57.220.140/32
    52.28.41.3/32
    52.29.131.127/32
    52.28.115.60/32
    52.29.0.31/32
    52.28.52.91/32
    52.58.102.110/32
    54.93.152.243/32
    13.55.209.28/32
    13.54.206.49/32
    13.210.238.7/32
    13.228.123.222/32
    54.169.20.120/32
    13.229.165.25/32
    52.220.139.232/32
    13.250.145.93/32
    3.0.41.185/32
    54.169.146.24/32
    54.255.158.185/32
    54.251.124.11/32
    54.255.54.138/32
    54.255.181.23/32
    52.77.48.234/32
    3.7.137.141/32
    13.126.36.88/32
    15.207.171.186/32
    3.6.202.33/32
    52.66.74.73/32
    13.127.224.172/32
    3.7.29.86/32
    35.154.60.73/32
    3.6.225.200/32

It is not working, and the check reports: This site can't be reached - ERR_CONNECTION_TIMED_OUT - Connection to server refused or timed out.

Please suggest. Thanks in advance.

Best Regards,
Kaushal
We have two deployment servers: one has apps for all of our servers, the other has apps for all of our workstations. By mistake we rolled out some universal forwarders to the servers and pointed them at the wrong deployment server, so I went through all of them, pointed them at the correct deployment server by editing their C:\Program Files\SplunkUniversalForwarder\etc\system\local\deploymentclient.conf, and restarted Splunk. Now some of those servers have moved to the correct deployment server and are listed on its Forwarder Management page, while others still remain on the wrong deployment server (it has been over a week). This makes no sense to me. What could I be missing? What did I do wrong? I am new to Splunk (< 2 years experience).
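One quick check on an affected forwarder (the deployment server hostname below is a placeholder) is to ask the UF what it is actually polling, and repoint it from the CLI if needed:

    # From C:\Program Files\SplunkUniversalForwarder\bin on an affected host
    splunk show deploy-poll
    # If it still shows the wrong deployment server:
    splunk set deploy-poll correct-ds.example.com:8089
    splunk restart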
Hi all, I have a pivot that changes the number of columns based on a drop-down selection. The first two columns remain consistent; however, the remaining columns can change (the 1st example has 6 additional columns with auto-generated column names, whereas the 2nd has 4 additional columns).

E.g. 1:

    CartridgeType  Cartridge  E:::MCAS1  E:::MCAS2  S:::MCAS1  S:::MCAS2  S:::MCAS3  S:::MCAS4
    user           etf        4          4          4          4          4          4
    product        brd        4          4          5          5          5          5

E.g. 2:

    CartridgeType  Cartridge  E:::MCAS1  E:::MCAS2  D:::MCAS1  D:::MCAS2
    user           etf        4          4          4          4
    product        brd        4          4          5          5

Is it possible (purely through the HTML or CSS, i.e. through the 'Source' button when editing a dashboard) to highlight rows red if they have different values along a row in the dynamically generated columns? For instance, in e.g. 1, since the second row's MCAS columns don't all have the same value (4, 4, 5, 5, 5, 5), the whole 'product brd' row should be highlighted.

The confusion I'm having here is due to: i) non-static column names; ii) the number of columns changing based on the dropdown selection.

Any help would be hugely appreciated! I've tried looking at the sample dashboards, but I haven't been able to figure out a solution based on them, and I am unable to implement a JS option.
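Since a pure HTML/CSS rule cannot compare cell values, one workaround (a sketch; it assumes the row coloring can then key off the computed flag, for example via a color format or JS on a helper column) is to compute a mismatch flag in SPL over whatever MCAS columns happen to exist:

    | foreach *MCAS* [ eval mcas_vals = mvappend(mcas_vals, tostring('<<FIELD>>')) ]
    | eval row_mismatch = if(mvcount(mvdedup(mcas_vals)) > 1, "mismatch", "ok")
    | fields - mcas_vals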
I'm running what I believe to be a fairly standard input from the Splunk Linux TA. I just realized that for some hosts the time is in the future. All other times for logs from these hosts are correct; only the package.sh output is 15 minutes in the future. The time on the affected hosts is correct. How can it have the wrong time when it was a script Splunk executed itself? Input:

    [script:///opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/package.sh]
    sourcetype = package
    source = package
    interval = 86400
    disabled = false
    index = os
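One guess (an assumption, not a confirmed diagnosis): the package listing contains date-like strings, such as install dates or version numbers, that timestamp extraction latches onto. If the goal is simply to stamp the scripted output with the time it was indexed, a props.conf sketch would be:

    # props.conf for the package sourcetype, on the indexer / heavy forwarder tier
    [package]
    # Use the current (index) time instead of parsing timestamps out of the data
    DATETIME_CONFIG = CURRENT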
Hi, I'd like to create a visualization that shows trends between alerts that have been fired. The graph would show the frequency of a given range of alerts and how often they were triggered on the source file. Thanks, Rob
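One possible starting point (a sketch; it assumes the alerts are scheduled saved searches, whose firings land in Splunk's internal scheduler log, and the exact status value can vary by version):

    index=_internal sourcetype=scheduler alert_actions=* status=success
    | timechart span=1h count by savedsearch_name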
Hi, I'm trying to configure HEC in our indexer cluster, which doesn't have any HFs. Could anyone tell me about the process? I read some community answers and documents saying that we create the tokens on the CM and distribute them to the indexers, but I'm quite new to this process. Any detailed steps would be very much appreciated.
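A minimal sketch of that approach, assuming a CM-pushed app (the app and token names are placeholders; generate your own GUID for the token value):

    # On the cluster manager: $SPLUNK_HOME/etc/master-apps/hec_inputs/local/inputs.conf
    [http]
    disabled = 0
    port = 8088

    [http://my_hec_token]
    token = 11111111-2222-3333-4444-555555555555
    index = main
    disabled = 0

    # Then, still on the CM, push the bundle to the peers:
    # splunk apply cluster-bundle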
Hi, a user needs a link which has the Splunk query and results. He wants to attach the link to an already existing dashboard panel. The shared job link is only active for 7 days; is there a way to attach a permanent link to the dashboard?
Our development teams have started using Microsoft Power Apps to build applications more quickly. Some applications are having performance challenges, and the teams are looking to install AppDynamics to monitor the applications and assist them with resolving issues. Has anyone else tried to do this?
I'm currently working on migrating a single-node installation to indexer and search head clusters.

Specifically, on migrating the scheduled searches and alerts to the SHC: the recommended way of doing this appears to be to use the Deployer to deploy an app containing the configuration to the members. I've tested it, it works, and if I do the initial deployment with the savedsearches.conf in the 'local' directory, then the SHC members can make updates via the UI using the standard sync mechanism, and those changes won't get overwritten by future deployments because they are in the 'local' directory. So far so good.

But I am curious: is there anything wrong architecturally with stopping all of the SHC members, replacing SPLUNK_HOME/etc/apps/search/local/savedsearches.conf with the same, identical file copied from the single-node install on each member, and then bringing them up? I'm led to believe this would work, but I'm also not sure whether it would cause any consistency issues with future updates and sync across the cluster.

Why would I want to consider doing this? Mainly to avoid having to look for scheduled searches in two places ("search" and "migration" apps) when making updates via the UI, which will be the main way users add and edit scheduled searches.

Any other potential pitfalls with this approach? Or methods that would avoid having to maintain scheduled searches across two apps?

I'm only asking here about the scheduled searches. Everything else has been migrated to the correct locations via the indexer/search head cluster mechanisms, so for the purpose of this question you can assume that everything else regarding field extractions, indexes, etc. for those scheduled searches has already been migrated, and I just need to migrate the scheduled search definitions. Thanks in advance.
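For reference, a minimal sketch of the Deployer route described above (the app name and host are illustrative):

    # On the deployer: place the file at
    #   $SPLUNK_HOME/etc/shcluster/apps/migration/local/savedsearches.conf
    #   (copied from the single-node instance)
    # then push it to the members:
    splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme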