All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have an eventtype that I want to delete, but before that I want to make sure that the eventtype isn't used anywhere, such as in any data model, correlation search, saved search, dashboard, tag, etc. Is there a way I can figure out where in Splunk an eventtype is used?
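One way to start, as a rough sketch: query the REST endpoints for knowledge objects and look for the eventtype name in their definitions; my_eventtype below is a placeholder for the real eventtype name. Saved searches (which include correlation searches) and dashboard XML can be checked like this, and the same idea extends to other object endpoints.

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*eventtype=my_eventtype*"
| table title eai:acl.app search

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*my_eventtype*"
| table title eai:acl.app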
How do I add a trendline to the query below? index=os host=*gbcm* sourcetype=cpu VNextStatus=Live | timechart perc90(pctUser) span=10m by host_name
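A rough sketch using the trendline command, assuming a single series (the split by host_name is dropped here because trendline needs a concrete field name; with the split you would name one of the resulting host columns instead):

index=os host=*gbcm* sourcetype=cpu VNextStatus=Live
| timechart span=10m perc90(pctUser) as p90
| trendline sma6(p90) as p90_trend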
I have a dashboard with several multi-value fields containing IP details. I applied the following fieldformat command to truncate the result of such fields for the dashboard view. | fieldformat iplist=mvjoin(mvindex(iplist, 0, 9), ", ").if(mvcount(iplist)>10, " (".(mvcount(iplist)-10)." IPs truncated...)","") The goal is to create a field similar to the output below: 10.10.10.1, 10.10.10.2, 10.10.10.3, 10.10.10.4, 10.10.10.5, 10.10.10.6, 10.10.10.7, 10.10.10.8, 10.10.10.9, 10.10.10.10 (3 IPs truncated...) The fields are displayed in a dashboard table view according to the formatting, however when I try to drill down on these fields, the drilldown will carry over the formatted value, not the original multi-value content. I have included a test dashboard to demonstrate the behaviour. How can I modify the fieldformat command to truncate the field but also enable the dashboard to use the original field value in drilldowns? Thanks <form> <label>Fieldformat Test</label> <fieldset submitButton="false" autoRun="true"> <input type="text" token="tokIPList" searchWhenChanged="true"> <label>IP List</label> <default>10.10.10.1 10.10.10.2 10.10.10.3 10.10.10.4 10.10.10.5 10.10.10.6 10.10.10.7 10.10.10.8 10.10.10.9 10.10.10.10 10.10.10.11 10.10.10.12 10.10.10.13</default> </input> </fieldset> <row> <panel> <title>IP List input text displayed as multi value field</title> <table> <search> <query>| makeresults | fields - _time | eval iplist=$tokIPList|s$ | eval iplist=split(iplist, " ") | table iplist </query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="drilldown">row</option> <drilldown> <set token="tokDrilldown">$row.iplist$</set> </drilldown> </table> </panel> </row> <row> <panel> <title>IP List input text displayed with fieldformat applied</title> <table> <search> <query> <![CDATA[ | makeresults | fields - _time | eval iplist=$tokIPList|s$ | eval iplist=split(iplist, " ") | table iplist | fieldformat iplist=mvjoin(mvindex(iplist, 0, 9), ", ").if(mvcount(iplist)>10, " (".(mvcount(iplist)-10)." IPs truncated...)","") ]]> </query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="drilldown">row</option> <drilldown> <set token="tokDrilldown">$row.iplist$</set> </drilldown> </table> </panel> </row> <row> <panel> <title>Drilldown test</title> <table> <search> <query>| makeresults | fields - _time | eval formatted_iplist=$tokDrilldown|s$ | table formatted_iplist </query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="drilldown">none</option> </table> </panel> </row> </form>  
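One possible workaround, sketched below with a hard-coded list instead of the token: build the truncated text as a separate display field with eval and leave the original iplist untouched, so the drilldown token can still reference the real multivalue field. The field name iplist_display is made up, and whether a hidden original column stays reachable via $row.iplist$ should be verified on your version.

| makeresults
| fields - _time
| eval iplist=split("10.10.10.1 10.10.10.2 10.10.10.3 10.10.10.4 10.10.10.5 10.10.10.6 10.10.10.7 10.10.10.8 10.10.10.9 10.10.10.10 10.10.10.11 10.10.10.12 10.10.10.13", " ")
| eval iplist_display=mvjoin(mvindex(iplist, 0, 9), ", ").if(mvcount(iplist)>10, " (".(mvcount(iplist)-10)." IPs truncated...)", "")
| table iplist_display iplist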
Hello everyone, I have successfully installed Splunk Stream in a distributed environment. Stream data are indexed remotely and can be searched manually. I have a couple of questions to ask: 1) I am initiating a 15-minute Ephemeral Stream from the Splunk ES Incident Review console (using the available adaptive response action "Stream Capture"). I select "All" protocols. I can see the Ephemeral Stream under the "Configure Streams" UI. Even though it starts 9 streams, after 15 minutes the streams disappear. Does this mean that the streams were empty? Normally they would have a link that I can click to search them? Can I export them for later use or as an artifact in an investigation? 2) On which index do these Ephemeral Streams get captured/indexed? 3) Even though my streams are working and data are coming in, I see that my Configure Streams - Avg. Traffic and Recent Traffic per protocol (15m) are all zero. Why does this happen? This applies to both the enabled and estimated streams. Thank you in advance for your help. With kind regards, Chris
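Regarding question 2, a quick diagnostic sketch (assuming the captures arrive with the usual stream:* sourcetypes) to see which index the Stream data is actually landing in:

| tstats count where index=* sourcetype=stream:* by index, sourcetype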
Hi All, My organisation has installed a few custom actions on our instance of Splunk which we are now able to trigger from alerts (i.e. when editing an alert, they appear in the drop-down of the "Trigger Actions" section). What I would like to do is trigger these actions from a dashboard drilldown. Is there a way to do this? When I edit the drilldown, the only options for actions that I see are: Many thanks in advance!
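One possible route, sketched under assumptions: point the drilldown's link-to-search at a query that calls the custom action via the sendalert command. Here my_custom_action and param.target are placeholders for whatever the installed action is called and whatever parameters it accepts, and the $row.host$ token would have to be passed into the search by the drilldown itself.

| makeresults
| sendalert my_custom_action param.target="$row.host$"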
Hello everybody, I need to connect an instance of Oracle OAM to Splunk. Do you have any suggestions on how to achieve this? Thanks in advance.
Hello, I use a Splunk app with many different dashboards and I have 2 improvements to make. 1) I need to put an icon after the name of my app. Unless I am mistaken, I put the icon file in the static folder, but what do I have to do to display the icon after the app name? 2) I need to open a PDF file from my dashboard. The PDF is located in the "static" folder, in the "KM" directory. Example: etc/apps/workplace/static/KM/TEST.pdf <row> <panel> <html> <p> <a target="_blank" href="/static/app/workplace/static/KM/TEST.pdf"> <img width="48" height="38">TEST </a> </p> </html> </panel> </row> But I can't open it. What is the correct path to use, please? Rgds
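A hedged sketch for question 2: one common pattern is to serve dashboard-linked files from the app's appserver/static directory, which would mean moving the PDF to etc/apps/workplace/appserver/static/KM/TEST.pdf (an assumption about your setup) and linking it without the extra "static" path segment:

<a target="_blank" href="/static/app/workplace/KM/TEST.pdf">TEST</a>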
Hello everyone, How can I collect/tag the registry services from a Windows server and display them in a dashboard, showing which ones are faulty or in error? Please help me with this. Thanks and regards, Subhan
Hi team, I have to fetch real-time data from 4 Windows servers and create a dashboard for the real-time data. Hoping for support/a resolution from your end. Thanks and regards, Subhan
Is there any way to know which Splunk apps/add-ons I have access to, for example using the rest command or any other SPL?
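A minimal sketch using the REST endpoint for locally installed apps (what you can see still depends on the permissions of your role):

| rest /services/apps/local splunk_server=local
| table title label version disabled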
I tried to run adaptive response actions from the Incident Review page in Splunk ES to send a notable event to Splunk Phantom. The notable event is sent, but there is no artifact on the container, and I found the error log shown in the picture below. Today I tried to run this adaptive response with the same notable event again; there is no error and the container is sent to Splunk Phantom with all artifacts. Has anyone seen this error before? I want to know how to prevent it.
Hi Team, I have a query related to the drilldown searches of notables. I want to export/show the results of drilldown searches with variables substituted for each notable. For example, consider the following search: `notable` | search event_id="XXXXXX" | table drilldown_search,drilldown_earliest,drilldown_latest The above search gives me the drilldown search but with the variables not substituted. I want the variables to be substituted in the search results. Actual result of the above search - index=abc action=failure user="$user$" Desired output - index=abc action=failure user="johndoe@example.com" Let me know if any further info is needed. Thanks in advance. Regards, Shaquib
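If no built-in expansion is available, one rough sketch is to substitute known tokens manually with replace(). This assumes the notable itself carries a user field and only handles the $user$ token; each additional token would need its own replace.

`notable`
| search event_id="XXXXXX"
| eval drilldown_search_expanded=replace(drilldown_search, "\$user\$", user)
| table drilldown_search_expanded, drilldown_earliest, drilldown_latest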
I have a log file in the format below and props.conf written as shown. I am getting the first four lines as one event and the remaining lines as separate events, but I want it as a single event. Can anyone help me with this?

********************************************************************************
product = WebSphere Application Server 20.0.0.3 (wlp-1.0.38.cl200320200305-1433)
wlp.install.dir = /opt/IBM/wlp/
java.home = /opt/IBM/sdk/jre
java.version = 1.8.0_241
java.runtime = Java(TM) SE Runtime Environment (8.0.6.7 - pxa6480sr6fp7-20200312_01(SR6 FP7))
os = Linux (3.10.0-1160.11.1.el7.x86_64; amd64) (en_GB)
process = 29193@128.161.210.72
********************************************************************************
[17/09/21 16:40:27:860 BST] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager I CWWKE0002I: The kernel started after 3.119 seconds
[17/09/21 16:40:28:003 BST] 0000003b com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started.
[17/09/21 16:40:28:809 BST] 0000003b com.ibm.ws.config.xml.internal.ConfigEvaluator W CWWKG0033W: The value [localHostOnly] specified for the reference attribute [allowFromEndpointRef] was not found in the configuration.
[17/09/21 16:40:29:051 BST] 00000030 com.ibm.ws.security.ready.internal.SecurityReadyServiceImpl I CWWKS0007I: The security service is starting...
[17/09/21 16:40:29:524 BST] 00000032 com.ibm.ws.annocache.service I OSGi Work Path [ /opt/IBM/wlp/usr/servers/e2/workarea/org.eclipse.osgi/43/data ]
[17/09/21 16:40:31:924 BST] 00000031 com.ibm.ws.app.manager.internal.monitor.DropinMonitor A CWWKZ0058I: Monitoring dropins for applications.
[17/09/21 16:40:33:586 BST] 00000031 com.ibm.ws.cache.ServerCache I DYNA1001I: WebSphere Dynamic Cache instance named baseCache initialized successful

props.conf:
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
BREAK_ONLY_BEFORE = (.\d{7}.\d\d:\d\d:\d\d.\d\d)
MAX_TIMESTAMP_LOOKAHEAD = 18
DATETIME_CONFIG =
TIME_FORMAT = %d/%m/%y %H:%M:%S:%3N %z
TZ = BST
TIME_PREFIX = "^
TRUNCATE = 0
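A props.conf sketch, assuming the goal is to keep the whole banner block between the asterisk rows as one event and each timestamped line as its own event (the stanza name is a placeholder, and mapping BST to Europe/London is an assumption; test against a sample file before rolling out):

[websphere:liberty]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\[\d{2}/\d{2}/\d{2}\s\d{2}:\d{2}:\d{2})
TIME_PREFIX = \[
TIME_FORMAT = %d/%m/%y %H:%M:%S:%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
TZ = Europe/London
TRUNCATE = 100000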
I managed to set up my WMI event-polling setup and it mostly works. Mostly, because it doesn't pull events from non-standard event logs like - for example - the WMI log itself. I know that in order to be able to see the event log via WMI I have to add an entry to the registry (in my case it's Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Microsoft-Windows-WMI-Activity/Operational). I did that. And I can list the events with wbemtest as per https://docs.splunk.com/Documentation/Splunk/8.2.2/Troubleshooting/TroubleshootingWMI

I can also do:

splunk cmd splunk-wmi.exe -wql "SELECT Category, CategoryString, ComputerName, EventCode, EventIdentifier, EventType, Logfile, Message, RecordNumber, SourceName, TimeGenerated, TimeWritten, Type, User FROM Win32_NTLogEvent WHERE Logfile = \"Microsoft-Windows-WMI-Activity/Operational\"" -namespace \\ad.lab\root\cimv2

And it works (returns events). But if I set event_log_file = System, Security, Application, Microsoft-Windows-WMI-Activity/Operational in my wmi.conf file, only the "standard" log events are getting pulled (System, Security and Application). splunkd.log doesn't show anything regarding wmi apart from:

09-21-2021 09:29:34.002 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - Running: "C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe" on PipelineSet 0
09-21-2021 09:29:34.002 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - PipelineSet 0: Created new ExecedCommandPipe for ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"", uniqueId=1
09-21-2021 09:29:34.221 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Attempting to connect to WMI provider \\ad.lab\root\cimv2
09-21-2021 09:29:34.267 +0200 INFO ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Connected to WMI provider \\ad.lab\root\cimv2 (connecting took 46.84 milliseconds)
09-21-2021 09:29:34.267 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Attempting to connect to WMI provider \\ad.lab\root\cimv2
09-21-2021 09:29:34.267 +0200 INFO ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Connected to WMI provider \\ad.lab\root\cimv2 (connecting took 0 microseconds)
09-21-2021 09:29:34.267 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Attempting to connect to WMI provider \\ad.lab\root\cimv2
09-21-2021 09:29:34.283 +0200 INFO ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Connected to WMI provider \\ad.lab\root\cimv2 (connecting took 15.62 milliseconds)
09-21-2021 09:29:34.283 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Attempting to connect to WMI provider \\ad.lab\root\cimv2
09-21-2021 09:29:34.283 +0200 INFO ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Connected to WMI provider \\ad.lab\root\cimv2 (connecting took 0 microseconds)
09-21-2021 09:29:34.502 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Executing query wql="SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "System"" (ad.lab: System)
09-21-2021 09:29:34.502 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Executing query wql="SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "Security"" (ad.lab: Security)
09-21-2021 09:29:34.502 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Executing query wql="SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "Application"" (ad.lab: Application)
09-21-2021 09:29:34.502 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Executing query wql="SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "Microsoft-Windows-WMI-Activity/Operational"" (ad.lab: Microsoft-Windows-WMI-Activity/Operational)

What more can I debug?
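Since the raw WQL query works from splunk-wmi.exe, one thing worth trying (a sketch under assumptions, not a confirmed fix) is defining the input as an explicit WQL stanza in wmi.conf rather than relying on event_log_file, which may only understand the classic logs. The stanza name and interval below are made up.

[WMI:WMIActivityOperational]
server = ad.lab
interval = 10
namespace = root\cimv2
current_only = 0
wql = SELECT Category, CategoryString, ComputerName, EventCode, EventIdentifier, EventType, Logfile, Message, RecordNumber, SourceName, TimeGenerated, TimeWritten, Type, User FROM Win32_NTLogEvent WHERE Logfile = "Microsoft-Windows-WMI-Activity/Operational"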
Hello, We are using the Tenable Infrastructure Vulnerability scanner to regularly scan our complete infrastructure. Tenable reports the following findings for the Splunk server ports: https://www.tenable.com/plugins/nessus/31705 SSL Anonymous Cipher Suites Supported. Please find the plugin output below:

The following is a list of SSL anonymous ciphers supported by the remote TCP server:

High Strength Ciphers (>= 112-bit key)
Name                Code        KEX   Auth  Encryption     MAC
AECDH-AES128-SHA    0xC0, 0x18  ECDH  None  AES-CBC(128)   SHA1
AECDH-AES256-SHA    0xC0, 0x19  ECDH  None  AES-CBC(256)   SHA1

The fields above are: {Tenable ciphername} {Cipher ID code} Kex={key exchange} Auth={authentication} Encrypt={symmetric encryption method} MAC={message authentication code} {export flag}

Could you please advise how to adjust the Splunk SSL configuration to fix this issue? Can this be fixed by setting a certain value for cipherSuite in server.conf? The above issue is reported for ports (2)8191 and (2)8089. Our server.conf (local) looks as follows:

[kvstore]
port = 28191
[license]
master_uri = https://splunk-license.xxx.corp:443
# Workaround to overcome the connection issues to the license server
[sslConfig]
# To address Vulnerability Scan:
# https://serverfault.com/questions/1034107/how-to-configure-ssl-certificates-for-splunk-on-port-8089
sslVersions = tls1.2
sslVersionsForClient = *,-ssl2
enableSplunkdSSL = true
serverCert = /etc/apache2/splunk.pem
# Workaround to overcome the connection issues to the license server
cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH
# To address Vulnerability Scan:
# https://community.splunk.com/t5/Archive/Splunk-shows-vulnerable-to-CVE-2012-4929-in-my-Nessus/m-p/29091
allowSslCompression = false
useClientSSLCompression = false
useSplunkdClientSSLCompression = false
sslPassword = xxx
[general]
pass4SymmKey = xxx
trustedIP = 127.0.0.1

The cipherSuite in server.conf (default) looks as follows:

sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ecdhCurves = prime256v1, secp384r1, secp521r1

Could you please advise? Kind regards, Kamil
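One way to approach it, as a sketch rather than a verified fix: the reported AECDH suites use anonymous (unauthenticated) key exchange, and OpenSSL-style cipher strings can exclude those by appending !aNULL and !eNULL. For example, keeping a high-strength cipher list in [sslConfig] and adding the exclusions:

[sslConfig]
sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:!aNULL:!eNULL:@STRENGTH

After a restart, the result can be re-checked with the Tenable scan or an openssl s_client test against ports 8089 and 8191.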
Hi, does anyone here face the same issue? Below is my sample query for reference. | makeresults | eval statename= "Selangor" | eval mega="state" | lookup type.csv mega as megas OUTPUT WP_Kuala_Lumpur_list, WP_Putrajaya_list, Johor_list, Kedah_list, Kelantan_list, Melaka_list, Negeri_Sembilan_list, Pahang_list, Perak_list, Pulau_Pinang_list, Sabah_list, Sarawak_list, Selangor_list, Terengganu_list, Perlis_list | eval res= case(statename= "Kuala Lumpur", WP_Kuala_Lumpur_list, statename= "Putrajaya", WP_Putrajaya_list, statename= "Johor", Johor_list, statename= "Kedah", Kedah_list, statename= "Kelantan", Kelantan_list, statename= "Melaka", Melaka_list, statename= "Negeri Sembilan", Negeri_Sembilan_list, statename= "Pahang", Pahang_list, statename= "Perak", Perak_list, statename= "Pulau Pinang", Pulau_Pinang_list, statename= "Sabah", Sabah_list, statename= "Sarawak", Sarawak_list, statename= "Selangor", Selangor_list, statename= "Terengganu", Terengganu_list, statename= "Perlis", Perlis_list) | table res In the lookup, Selangor_list has more than 60 rows, but when I run the query it only shows me 33 rows. I then figured out that if I run the query with fewer OUTPUT fields, it is able to show the correct data. May I know if there are any limitations on this?
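A quick diagnostic sketch, reusing the lookup exactly as posted, to confirm whether the truncation tracks the number of OUTPUT fields: request only the one list and count the values returned.

| makeresults
| eval statename="Selangor"
| eval mega="state"
| lookup type.csv mega as megas OUTPUT Selangor_list
| eval returned=mvcount(Selangor_list)
| table returned, Selangor_list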
Hi, I have a scheduled report that is generated every morning, and I want to show its results quickly in a dashboard. Any ideas? Thanks.
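One common pattern, sketched with a made-up report name: have the dashboard panel reuse the scheduled result instead of re-running the search, for example with loadjob (or by referencing the report from the panel's search).

| loadjob savedsearch="nobody:search:My Morning Report"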
Hi team, we have installed the Website Monitoring app in our Splunk AWS environment, and some URLs automatically get reflected. For the rest of the URLs that we want to create monitoring alerts for, we are not able to add them using the create input option. Can anyone confirm whether this app works differently in the cloud, or do we need to do some extra configuration? Any quick answer will be appreciated. Thanks
Is there a way to exclude all logs from being indexed based on a certain field value? For example: sourcetype=azs container_name=moss-logger. I want my HF to filter out any data being ingested where a particular field (container_name) has the value "moss-logger".
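A typical heavy forwarder pattern for this is a props/transforms pair that routes matching events to nullQueue. The sketch below matches the string moss-logger anywhere in the raw event, so the REGEX should be tightened to the exact container_name format in your data.

# props.conf
[azs]
TRANSFORMS-drop_moss_logger = drop_moss_logger

# transforms.conf
[drop_moss_logger]
REGEX = moss-logger
DEST_KEY = queue
FORMAT = nullQueue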
How do I select "All" in a multiselect input, which basically selects all the values in that field and displays them? Is it possible?
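One common approach, sketched with made-up field and index names: add a static "All" choice whose value is *, so selecting it passes a wildcard that matches every value (rather than literally ticking each box).

<input type="multiselect" token="tokHost" searchWhenChanged="true">
  <label>Host</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>index=my_index | stats count by host</query>
  </search>
  <prefix>host IN (</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>, </delimiter>
</input>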