All Topics



Hi team, I have to fetch real-time data from four Windows servers and create a dashboard for that real-time data. Hoping for support/a resolution from your end. Thanks and regards, Subhan
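For the collection side, a minimal sketch of an inputs.conf on each Windows universal forwarder pulling a performance counter at a short interval (the index name and counter choice here are assumptions — adjust to whatever metrics you actually need):

```
# inputs.conf on each of the four Windows universal forwarders (sketch)
[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 10
index = windows
disabled = 0
```

A dashboard panel over this data with a real-time or short relative time range (e.g. last 15 minutes, auto-refreshing) then gives the "real-time" view.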
Is there any way to know which Splunk apps/add-ons I have access to, e.g. using the rest command or any other SPL?
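A sketch using the rest command against the local apps endpoint (your role needs the REST/admin capabilities to read it):

```
| rest /services/apps/local splunk_server=local
| table title label version disabled
```

The results are scoped to what your user can see, so this doubles as an access check.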
I tried to run adaptive response actions from the Incident Review page in Splunk ES to send a notable event to Splunk Phantom. The notable event was sent, but there was no artifact on the container, and then I found the error log shown in the picture below. Today I tried to run this adaptive response with the same notable event again; there was no error and the container was sent to Splunk Phantom with all artifacts. Has anyone seen this error before? I want to know how to prevent it.
Hi Team, I have a query related to the drilldown searches of notables. I want to export/show the results of drilldown searches with variables substituted for each notable. For example, consider the following search:

`notable` | search event_id="XXXXXX" | table drilldown_search, drilldown_earliest, drilldown_latest

The above search gives me the drilldown search, but with the variables not substituted. I want the variables to be substituted in the search results.

Actual result of the above search: index=abc action=failure user="$user$"
Desired output: index=abc action=failure user="johndoe@example.com"

Let me know if any further info is needed. Thanks in advance. Regards, Shaquib
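One possible approach, assuming the token fields (e.g. user) are present on the notable itself, is to substitute each token with replace(). This is a sketch, not the ES-native token expansion, and every token used in your drilldowns needs its own replace:

```
`notable` | search event_id="XXXXXX"
| eval drilldown_search=replace(drilldown_search, "\$user\$", user)
| table drilldown_search, drilldown_earliest, drilldown_latest
```

If a drilldown uses several tokens ($src$, $dest$, ...), chain additional replace() calls in the same eval.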
I have a log file in the below format, and my props.conf is written below. I am getting the first four lines as one event and the remaining lines as separate events, but I want the banner block as a single event. Can anyone help me with this?

********************************************************************************
product = WebSphere Application Server 20.0.0.3 (wlp-1.0.38.cl200320200305-1433)
wlp.install.dir = /opt/IBM/wlp/
java.home = /opt/IBM/sdk/jre
java.version = 1.8.0_241
java.runtime = Java(TM) SE Runtime Environment (8.0.6.7 - pxa6480sr6fp7-20200312_01(SR6 FP7))
os = Linux (3.10.0-1160.11.1.el7.x86_64; amd64) (en_GB)
process = 29193@128.161.210.72
********************************************************************************
[17/09/21 16:40:27:860 BST] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager I CWWKE0002I: The kernel started after 3.119 seconds
[17/09/21 16:40:28:003 BST] 0000003b com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started.
[17/09/21 16:40:28:809 BST] 0000003b com.ibm.ws.config.xml.internal.ConfigEvaluator W CWWKG0033W: The value [localHostOnly] specified for the reference attribute [allowFromEndpointRef] was not found in the configuration.
[17/09/21 16:40:29:051 BST] 00000030 com.ibm.ws.security.ready.internal.SecurityReadyServiceImpl I CWWKS0007I: The security service is starting...
[17/09/21 16:40:29:524 BST] 00000032 com.ibm.ws.annocache.service I OSGi Work Path [ /opt/IBM/wlp/usr/servers/e2/workarea/org.eclipse.osgi/43/data ]
[17/09/21 16:40:31:924 BST] 00000031 com.ibm.ws.app.manager.internal.monitor.DropinMonitor A CWWKZ0058I: Monitoring dropins for applications.
[17/09/21 16:40:33:586 BST] 00000031 com.ibm.ws.cache.ServerCache I DYNA1001I: WebSphere Dynamic Cache instance named baseCache initialized successful

props.conf:
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
BREAK_ONLY_BEFORE = (.\d{7}.\d\d:\d\d:\d\d.\d\d)
MAX_TIMESTAMP_LOOKAHEAD = 18
DATETIME_CONFIG =
TIME_FORMAT = %d/%m/%y %H:%M:%S:%3N %z
TZ = BST
TIME_PREFIX = "^
TRUNCATE = 0
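For comparison, here is one possible props.conf sketch that breaks only before the bracketed timestamps, so the whole banner block stays together as a single event while each timestamped line becomes its own event (the sourcetype name is hypothetical; TZ and TRUNCATE values are assumptions):

```
[websphere:messages]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\[\d{2}/\d{2}/\d{2}\s\d{2}:\d{2}:\d{2})
TIME_PREFIX = ^\[
TIME_FORMAT = %d/%m/%y %H:%M:%S:%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
# "BST" is not a zoneinfo name; Europe/London handles BST/GMT correctly
TZ = Europe/London
TRUNCATE = 10000
```

Using LINE_BREAKER with SHOULD_LINEMERGE = false avoids the second merge pass entirely, which is usually faster and more predictable than BREAK_ONLY_BEFORE.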
I managed to set up my WMI event-polling setup and it mostly works. Mostly, because it doesn't pull events from non-standard event logs like, for example, the WMI log itself. I know that in order to be able to see the event log via WMI I have to add an entry to the registry (in my case it's Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Microsoft-Windows-WMI-Activity/Operational). I did that, and I can list the events with wbemtest as per https://docs.splunk.com/Documentation/Splunk/8.2.2/Troubleshooting/TroubleshootingWMI

I can also do:

splunk cmd splunk-wmi.exe -wql "SELECT Category, CategoryString, ComputerName, EventCode, EventIdentifier, EventType, Logfile, Message, RecordNumber, SourceName, TimeGenerated, TimeWritten, Type, User FROM Win32_NTLogEvent WHERE Logfile = \"Microsoft-Windows-WMI-Activity/Operational\"" -namespace \\ad.lab\root\cimv2

and it works (returns events). But if I set event_log_file = System, Security, Application, Microsoft-Windows-WMI-Activity/Operational in my wmi.conf file, only the "standard" log events are getting pulled (System, Security and Application).
splunkd.log doesn't show anything regarding wmi apart from:

09-21-2021 09:29:34.002 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - Running: "C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe" on PipelineSet 0
09-21-2021 09:29:34.002 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - PipelineSet 0: Created new ExecedCommandPipe for ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"", uniqueId=1
09-21-2021 09:29:34.221 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Attempting to connect to WMI provider \\ad.lab\root\cimv2
09-21-2021 09:29:34.267 +0200 INFO ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Connected to WMI provider \\ad.lab\root\cimv2 (connecting took 46.84 milliseconds)
09-21-2021 09:29:34.267 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Attempting to connect to WMI provider \\ad.lab\root\cimv2
09-21-2021 09:29:34.267 +0200 INFO ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Connected to WMI provider \\ad.lab\root\cimv2 (connecting took 0 microseconds)
09-21-2021 09:29:34.267 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Attempting to connect to WMI provider \\ad.lab\root\cimv2
09-21-2021 09:29:34.283 +0200 INFO ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Connected to WMI provider \\ad.lab\root\cimv2 (connecting took 15.62 milliseconds)
09-21-2021 09:29:34.283 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Attempting to connect to WMI provider \\ad.lab\root\cimv2
09-21-2021 09:29:34.283 +0200 INFO ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Connected to WMI provider \\ad.lab\root\cimv2 (connecting took 0 microseconds)
09-21-2021 09:29:34.502 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Executing query wql="SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "System"" (ad.lab: System)
09-21-2021 09:29:34.502 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Executing query wql="SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "Security"" (ad.lab: Security)
09-21-2021 09:29:34.502 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Executing query wql="SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "Application"" (ad.lab: Application)
09-21-2021 09:29:34.502 +0200 DEBUG ExecProcessor [7720 ExecProcessor] - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-wmi.exe"" WMI - Executing query wql="SELECT * FROM __InstanceCreationEvent WHERE TargetInstance ISA "Win32_NTLogEvent" AND TargetInstance.Logfile = "Microsoft-Windows-WMI-Activity/Operational"" (ad.lab: Microsoft-Windows-WMI-Activity/Operational)

What more can I debug?
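One thing that may be worth trying (an assumption, not a confirmed fix): give the non-standard channel its own wmi.conf stanza instead of appending it to the shared event_log_file list, so its polling, failures, and checkpointing are isolated from the standard logs. The stanza name below is hypothetical:

```
[WMI:WMIActivityOperational]
interval = 5
event_log_file = Microsoft-Windows-WMI-Activity/Operational
disabled = 0
```

If events still don't arrive with a dedicated stanza, comparing DEBUG output for that stanza alone against the working ones may narrow down where the subscription is dropped.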
Hello, We are using the Tenable Infrastructure Vulnerability scanner to regularly scan our complete infrastructure. Tenable reports the following finding for the Splunk server ports: https://www.tenable.com/plugins/nessus/31705 (SSL Anonymous Cipher Suites Supported). Please find the plugin output below:

The following is a list of SSL anonymous ciphers supported by the remote TCP server:

High Strength Ciphers (>= 112-bit key)

Name               Code         KEX    Auth   Encryption     MAC
AECDH-AES128-SHA   0xC0, 0x18   ECDH   None   AES-CBC(128)   SHA1
AECDH-AES256-SHA   0xC0, 0x19   ECDH   None   AES-CBC(256)   SHA1

The fields above are: {Tenable ciphername} {Cipher ID code} Kex={key exchange} Auth={authentication} Encrypt={symmetric encryption method} MAC={message authentication code} {export flag}

Could you please advise how to adjust the Splunk SSL configuration to fix this issue? Can this be fixed by setting a certain value for cipherSuite in server.conf? The above issue is reported for ports (2)8191 and (2)8089.
Our server.conf (local) looks as follows:

[kvstore]
port = 28191

[license]
master_uri = https://splunk-license.xxx.corp:443 # Workaround to overcome the connection issues to the license server

[sslConfig]
# To address Vulnerability Scan:
# https://serverfault.com/questions/1034107/how-to-configure-ssl-certificates-for-splunk-on-port-8089
sslVersions = tls1.2
sslVersionsForClient = *,-ssl2
enableSplunkdSSL = true
serverCert = /etc/apache2/splunk.pem
# Workaround to overcome the connection issues to the license server
cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH
# To address Vulnerability Scan:
# https://community.splunk.com/t5/Archive/Splunk-shows-vulnerable-to-CVE-2012-4929-in-my-Nessus/m-p/29091
allowSslCompression = false
useClientSSLCompression = false
useSplunkdClientSSLCompression = false
sslPassword = xxx

[general]
pass4SymmKey = xxx
trustedIP = 127.0.0.1

The cipherSuite in server.conf (default) looks as follows:

sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ecdhCurves = prime256v1, secp384r1, secp521r1

Could you please advise? Kind regards, Kamil
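The AECDH-* suites Tenable flags do no authentication ("Auth None"), and in OpenSSL cipher-list syntax they fall under the aNULL class, which your local TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH string does not exclude. One possible adjustment is to keep the HIGH-strength preference but explicitly exclude anonymous and null-encryption suites (verify with a rescan; exact behavior depends on your Splunk/OpenSSL build):

```
[sslConfig]
sslVersions = tls1.2
# !aNULL / !eNULL exclude unauthenticated (anonymous) and null-encryption suites
cipherSuite = TLSv1.2+HIGH:!aNULL:!eNULL:@STRENGTH
```

Alternatively, removing the local cipherSuite override so the default explicit ECDHE list applies would also exclude the anonymous suites, since they are simply not in that list.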
Hi, does anyone here face the same issue? Below is my sample query for reference.

| makeresults
| eval statename="Selangor"
| eval mega="state"
| lookup type.csv mega as megas OUTPUT WP_Kuala_Lumpur_list, WP_Putrajaya_list, Johor_list, Kedah_list, Kelantan_list, Melaka_list, Negeri_Sembilan_list, Pahang_list, Perak_list, Pulau_Pinang_list, Sabah_list, Sarawak_list, Selangor_list, Terengganu_list, Perlis_list
| eval res=case(statename="Kuala Lumpur", WP_Kuala_Lumpur_list, statename="Putrajaya", WP_Putrajaya_list, statename="Johor", Johor_list, statename="Kedah", Kedah_list, statename="Kelantan", Kelantan_list, statename="Melaka", Melaka_list, statename="Negeri Sembilan", Negeri_Sembilan_list, statename="Pahang", Pahang_list, statename="Perak", Perak_list, statename="Pulau Pinang", Pulau_Pinang_list, statename="Sabah", Sabah_list, statename="Sarawak", Sarawak_list, statename="Selangor", Selangor_list, statename="Terengganu", Terengganu_list, statename="Perlis", Perlis_list)
| table res

In the lookup, Selangor_list has more than 60 rows, but when I ran the query it only showed me 33 rows. Then I figured out that if I run the query with fewer OUTPUT fields, it shows the correct data. May I know if there are any limitations on this?
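One way to sidestep the problem (and the giant case()) is to restructure the lookup into long format — one row per (statename, value) pair — so the lookup only ever needs a single OUTPUT field. A minimal sketch, assuming a hypothetical state_values.csv with columns statename and value:

```
| makeresults
| eval statename="Selangor"
| lookup state_values.csv statename OUTPUT value AS res
| table res
```

With one output column per match key, the per-match size stays small regardless of how many rows a state has, and adding a new state means adding rows to the CSV rather than editing the query.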
Hi, I have a scheduled report that generates every morning, and I want to show its results quickly in a dashboard. Any ideas? Thanks,
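One common approach is loadjob, which reuses the artifacts of the already-completed scheduled run instead of re-running the search, so the panel renders almost instantly. The owner/app/report names below are placeholders:

```
| loadjob savedsearch="admin:search:My Morning Report"
```

Alternatively, setting the panel's search to reference the report directly (ref= in Simple XML) lets the dashboard use the cached scheduled results as well.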
Hi team, we have installed the Website Monitoring app in our Splunk AWS environment, and some URLs automatically get reflected. For the rest, which we want to create alerts on for monitoring purposes, we are not able to add them using the create input option. Can anyone confirm whether this app works differently in the cloud, or do we need to do some extra configuration? Any quick answer will be appreciated. Thanks
Is there a way to exclude all logs from being indexed based on a certain field? For example: sourcetype=azs container_name=moss-logger. I want my HF to filter any data being ingested where a particular field (container_name) has the value "moss-logger".
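A sketch of the usual heavy-forwarder route-to-nullQueue pattern. Note this matches against the raw event text at parse time, so it only works if the string moss-logger actually appears in _raw; a search-time field like container_name does not exist yet at this stage:

```
# props.conf on the HF
[azs]
TRANSFORMS-drop_moss = drop_moss_logger

# transforms.conf on the HF
[drop_moss_logger]
REGEX = moss-logger
DEST_KEY = queue
FORMAT = nullQueue
```

If "moss-logger" could appear in unrelated events, tighten the REGEX to the exact raw-text context (e.g. the key=value or JSON form the events use).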
How do I add an "All" option to a multiselect input that selects all the values in that field and displays them? Is it possible?
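Yes — the usual Simple XML pattern is a static choice whose value is *, set as the default. A sketch (the field, index, and token names are placeholders):

```
<input type="multiselect" token="host_tok">
  <label>Host</label>
  <choice value="*">All</choice>
  <default>*</default>
  <valuePrefix>host="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>index=main | stats count by host</query>
  </search>
</input>
```

With value="*", the token expands to host="*", which matches every value of the field in the panel searches.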
I have a csv file with a field Account that has over 1000 values. In my logs the field is named yourAccount. How do I find all the account logs using that csv file? Also, can I rex the field and produce a table for it in the same query?
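A sketch using an inputlookup subsearch to turn the CSV into a search filter, renaming Account to match the field name in the logs. The index name, lookup file name, and rex pattern are placeholders for illustration:

```
index=your_index
    [| inputlookup accounts.csv | rename Account AS yourAccount | fields yourAccount]
| rex "yourAccount=(?<yourAccount>\S+)"
| table _time yourAccount
```

The subsearch expands to (yourAccount="a1" OR yourAccount="a2" OR ...), so the rex is only needed if yourAccount is not already extracted as a field; if it is, drop the rex line.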
Hi, I have the below source; the values shown in red keep changing: source="/Application/logs/b80be40606aa7860f7de0c7ffa6b9d740581ec6035bc450ff5dfa3/apply-service/example.google.local:9818/Application-services/applyy-service:build-000/apply-service-464-xmp/system-out-dev.stdout"

In props.conf I tried the definitions below, but it is unable to pick them up. How can I use wildcards to match the correct source?

1) [source::/Application*]
TRANSFORMS-anonymize = address-anonymizer

2) [source::/Application/logs/*/apply-service/*/Application-services/*/*/system-out-dev.stdout]
TRANSFORMS-anonymize = address-anonymizer
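In props.conf source:: stanzas, * does not match across path separators, while ... does. Since the variable portion of your path spans several directory levels, a sketch using ... should match regardless of how those segments nest:

```
[source::/Application/logs/.../system-out-dev.stdout]
TRANSFORMS-anonymize = address-anonymizer
```

This anchors only on the fixed prefix and the fixed filename, letting everything in between vary.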
Hi All, I'm new to Splunk and not very familiar with search queries and lookup files. I have a custom IOC file with IPs and URLs, and I want to search whether there was any traffic to those destinations. I went through a few blogs and the suggestion was to create a csv lookup file. Could you please let me know if that is the correct approach, or is there a better way to search for the IOCs?
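A csv lookup is a reasonable approach. A sketch, assuming a hypothetical ioc_list.csv with an ip column, matched against a dest_ip field in your traffic data (the index and field names are assumptions — adjust to your sourcetypes):

```
index=firewall OR index=proxy
    [| inputlookup ioc_list.csv | rename ip AS dest_ip | fields dest_ip]
| stats count BY dest_ip
```

URLs can be handled the same way with a second column/subsearch against your proxy URL field; for very large IOC sets, a lookup-based match (| lookup ... OUTPUT) scales better than a subsearch, which has result-count limits.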
I defined two eventtypes: "loginAttempt" and "loginSuccess". Now I am trying to create a chart where the counts of both of these events are displayed side by side, per hour, to create a visual representation of the gap between attempted vs. successful logins for each hour. A tabular representation would be something like: Date | Hour | Count of Attempts | Count of Successful. I got the individual counts working, but I am having a hard time figuring out how to combine the two while adding them up per hour. Any help is greatly appreciated.
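A sketch using timechart with conditional counts, assuming each event carries a single matching eventtype value (if an event can match several eventtypes, the comparison may need mvfind() instead):

```
eventtype=loginAttempt OR eventtype=loginSuccess
| timechart span=1h count(eval(eventtype=="loginAttempt")) AS Attempts
                    count(eval(eventtype=="loginSuccess")) AS Successes
```

This produces one row per hour with both columns side by side, which also renders naturally as a grouped column chart.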
If you have upgraded, or are planning to upgrade, your Splunk Enterprise to 8.2.2 and are planning to upgrade your ES as well in the process, please share your experience, do's and don'ts, and any KB links and documentation. Thank you in advance.
Hi, how can I show the max duration per servername?

index="my-index"
| rex "duration\[(?<duration>\d+.\d+)"
| rex "id\[(?<id>\d+)"
| rex "servername\[(?<servername>\w+)"
| stats max(duration) as MAXduration by servername
| table _time MAXduration id _raw

This SPL does not show (_time, id, _raw) in the table; it only shows MAXduration. I searched around and some people suggest using eventstats or streamstats, but then I have another problem: streamstats shows (_time, id, _raw) correctly but the same MAXduration for all servernames.

| streamstats max(duration) as MAXduration by servername

_time           MAXduration  id   _raw
00:12:00.000    1.2323       921  00:12:00.000 info duration[1.2323]id[921]servername[server1]
00:12:00.000    1.4434       956  00:12:00.000 info duration[1.4434]id[956]servername[server1]
00:12:00.000    1.9998       231  00:12:00.000 info duration[1.9998]id[231]servername[server2]
00:12:00.000    1.8873       543  00:12:00.000 info duration[0.8873]id[543]servername[server2]
...

The main goal is to show the maximum duration for each server. Expected output:

_time           MAXduration  id   _raw
00:12:00.000    1.2323       921  00:12:00.000 info duration[1.2323]id[921]servername[server1]
00:12:00.000    1.6454       920  00:12:00.000 info duration[1.6454]id[920]servername[server2]
00:12:00.000    1.2545       821  00:12:00.000 info duration[1.2545]id[821]servername[server3]
00:12:00.000    0.1123       321  00:12:00.000 info duration[0.1123]id[321]servername[server4]

Any ideas? Thanks
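One approach that keeps the winning event intact: eventstats annotates every event with the per-server max instead of collapsing rows (the stats behavior that loses _time, id, and _raw), so you can then filter down to the rows where duration equals that max. Converting duration to a number first avoids string comparison surprises:

```
index="my-index"
| rex "duration\[(?<duration>\d+.\d+)"
| rex "id\[(?<id>\d+)"
| rex "servername\[(?<servername>\w+)"
| eval duration=tonumber(duration)
| eventstats max(duration) AS MAXduration BY servername
| where duration==MAXduration
| table _time MAXduration id _raw
```

If two events tie for a server's max, both rows survive; add | dedup servername afterwards to keep exactly one per server.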
Greetings! Hello everyone, I have an issue after adding a trial license: I cannot search. When searching I get this error:

5 errors occurred while the search was executing, therefore search results might be incomplete.
* [splunkindexer1] Streamed search execute failed because: Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.
* [splunkindexer2] Streamed search execute failed because: Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.
* [splunkindexer3] Streamed search execute failed because: Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.
* [splunkindexer4] Streamed search execute failed because: Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.
* [splunkindexer5] Streamed search execute failed because: Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

After installing the trial license I restarted Splunk, and I also restarted the forwarder services. I get the above error while searching in the Search & Reporting app, and I also see a new error showing that many DMC alerts are disabled. Could this be the cause of the search failures on all indexers? Kindly help me with this. Thank you in advance.
Hello All, I need to alert when the perc75(totalfilter) value stays greater than 40000 for 10 minutes or more. I am sharing my original query below; I am looking to append the above condition to it so that the alert triggers.

index=clai_pd env=pd*cloud* perflog getprovider RASNewDispatch-Ext_RASDispatchDetailScreen-getProviderNext_act OR RASDispatchPage-RASDispatchPanelSet-RASDispatchCardPanel-getProvider_act
| timechart span=10m perc50(totalfilter), perc75(totalfilter) by count
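A sketch of one way to do it: since the timechart already buckets into 10-minute spans, any bucket whose 75th percentile exceeds the threshold represents at least 10 minutes over it, so keep only those buckets and set the alert to trigger when the number of results is greater than zero (the field alias is a placeholder; drop the "by count" here since a single series is filtered):

```
index=clai_pd env=pd*cloud* perflog getprovider RASNewDispatch-Ext_RASDispatchDetailScreen-getProviderNext_act OR RASDispatchPage-RASDispatchPanelSet-RASDispatchCardPanel-getProvider_act
| timechart span=10m perc75(totalfilter) AS p75
| where p75 > 40000
```

For "sustained across multiple consecutive buckets", a follow-up streamstats count over the filtered rows can require a minimum run length before alerting.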