All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I need help with a detailed Splunk accelerated data model Authentication query for successful-login alerts using | tstats summariesonly=true. The query should have a count threshold, and it should cover all product and vendor names used in the environment, not only successful Windows logins.
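A minimal sketch of what such a search could look like, assuming the CIM Authentication data model is accelerated and using a threshold of 5 (adjust the by-clause fields and the threshold to your environment):

| tstats summariesonly=true count from datamodel=Authentication.Authentication
    where Authentication.action="success"
    by Authentication.src, Authentication.user, Authentication.app, Authentication.vendor_product
| rename Authentication.* as *
| where count > 5

Because this runs against the normalized data model rather than raw events, it covers every vendor_product that feeds the Authentication data model, not just Windows.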
Hello, I have logs in Splunk in the following form:

timestamp "LOGGER= PAGE NAME1 Other text"
timestamp "LOGGER= PAGE NAME1 Other text"
timestamp "LOGGER= PAGE NAME2 Other text"
timestamp "LOGGER= PAGE NAME2 Other text"
timestamp "LOGGER= PAGE NAME3 Other text"
timestamp "LOGGER= PAGE NAME3 Other text"
timestamp "LOGGER= PAGE NAME1 Other text"

I wrote this search query:

index=index-name ns="namespace"
| rex field=_raw "LOGGER=\s*(?<PAGE_NAME1>PAGE NAME1*)"
| stats count by PAGE_NAME1
| append [search index=index-name ns="namespace" | rex field=_raw "LOGGER=\s*(?<PAGE_NAME2>PAGE NAME2*)" | stats count by PAGE_NAME2]
| append [search index=index-name ns="namespace" | rex field=_raw "LOGGER=\s*(?<PAGE_NAME3>PAGE NAME3*)" | stats count by PAGE_NAME3]

and got a result like this:

PAGE_NAME1    count    PAGE_NAME2    PAGE_NAME3
PAGE NAME1    3
              2        PAGE NAME2
              2                      PAGE NAME3

The result I am looking for is:

Page Name     Pages Visited
PAGE NAME1    3
PAGE NAME2    2
PAGE NAME3    2

Any idea how to reformat the search query?
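For what it's worth, a single-pass sketch that avoids the appended searches, assuming every page name matches the pattern "PAGE NAME" plus a digit (adjust the rex to the real delimiter between the page name and the trailing text):

index=index-name ns="namespace"
| rex field=_raw "LOGGER=\s*(?<page_name>PAGE NAME\d+)"
| stats count as "Pages Visited" by page_name
| rename page_name as "Page Name"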
I've upgraded from Splunk 8.0.3 to 8.2.2, and now I'm getting errors for my metrics query. This used to work:

| mstats rate(_value) prestats=true WHERE metric_name="traffic_in" AND index="em_metrics" AND description="EDGE" AND name_cache="EDGE" span=60s BY name_cache
| timechart rate(_value) span=120s useother=false BY name_cache
| fields - _span*
| rename "EDGE" as traffic_in
| eval Gb_in=(traffic_in*8/1000/1000/1000)
| append
    [| mstats rate(_value) prestats=true WHERE metric_name="traffic_out" AND index="em_metrics" AND name_cache="EDGE" span=60s BY name_cache
    | timechart rate(_value) span=120s useother=false BY name_cache
    | fields - _span*
    | rename "EDGE" as traffic_out
    | eval Gb_out=(traffic_out*8/1000/1000/1000)]
| selfjoin keepsingle=true _time
| fields _time Gb_in, Gb_out

Now I get an error that says: The following join field(s) do not exist in the data '_time'. Has anything changed from 8.0.3 to 8.2.2 that could explain this?
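Not an explanation of the behavior change, but a hedged restructuring that sidesteps append/selfjoin entirely by pulling both metrics in a single mstats. This assumes both metrics share the em_metrics index and the name_cache="EDGE" dimension; verify the series names timechart produces before relying on the eval:

| mstats rate(_value) prestats=true
    WHERE (metric_name="traffic_in" OR metric_name="traffic_out") AND index="em_metrics" AND name_cache="EDGE" span=60s
    BY metric_name
| timechart rate(_value) span=120s useother=false BY metric_name
| eval Gb_in='traffic_in'*8/1000/1000/1000, Gb_out='traffic_out'*8/1000/1000/1000
| fields _time, Gb_in, Gb_out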
I am using the Graph API TA to pull in logs, but I need a second instance of the same app on the same HF in order to pull in federal logs, because the original app points at .com addresses and the second needs to point at .us addresses. When I install the copied app, it will not load or display its configuration screen. Does anyone know what needs to be changed within the app to make this possible? I tried just renaming the folder so it would be a "Graph app_B", but the TA will not load after making this change. Is there another .conf file that requires adjustment? Please be as specific as possible. Thanks!
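A hedged pointer rather than a verified fix: many add-ons validate that the id in app.conf matches the app directory name, so renaming only the folder breaks loading. At minimum, app.conf in the copied folder would need to agree with the new name (the folder name and label below are illustrative):

# $SPLUNK_HOME/etc/apps/graph_api_ta_federal/default/app.conf  (hypothetical folder name)
[package]
id = graph_api_ta_federal

[ui]
label = Graph API TA (Federal)

Other places that commonly hard-code the original app name, such as restmap.conf, web.conf, and the Python package under bin/, may also need updating; which of these the Graph API TA actually uses would have to be checked in the app itself.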
Hi all, we are working on getting Appian data into Splunk. Appian has been configured to push its logs to a Splunk syslog endpoint. We have tried several things, but the data received in Splunk is still encrypted.

https://docs.appian.com/suite/help/21.2/Log_Streaming_for_Appian_Cloud.html#prerequisite-checklist

Below is the Splunk config. Please help if you have done this in the past or know how to fix it.

Splunk version: 8.1.4

etc\apps\search\local\inputs.conf

[tcp://514]
connection_host = ip
index = appian
sourcetype = syslog

[SSL]
requireClientCert = false
serverCert = $SPLUNK_HOME\etc\auth\splunkweb\myDataCertificate.pem
sslVersions = tls1.2
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:DH-DSS-AES256-GCM-SHA384:DHE-DSS-AES256-GCM-SHA384:DH-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA256:DH-RSA-AES256-SHA256:DH-DSS-AES256-SHA256:ADH-AES256-GCM-SHA384:ADH-AES256-SHA256:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-RSA-AES256-SHA384:ECDH-ECDSA-AES256-SHA384:AES256-GCM-SHA384:AES256-SHA256

Note: myDataCertificate.pem is a combination of server + interim CA + root CA.

Sample encrypted data:

10/10/21 8:04:18.000 PM \x00\x00\x00\x9E\x00\x9F\xC0|\xC0}\x003\x00g\x009\x00k\x00E\x00\xBE\x00\x88\x00\xC4\x00\x00\xA2\x00\xA3\xC0\x80\xC0\x81\x002\x00@\x008\x00j\x00D\x00\xBD\x00\x87\x00\xC3\x00\x00f\x00\x00D\x00\x00\x00\x00\x00\x00\xFF\x00\x00\x00#\x00\x00\x00
10/10/21 8:04:18.000 PM \xC0r\xC0\xC0\xC0/\xC00\xC0\x8A\xC0\x8B\xC0\xC0'\xC0\xC0v\xC0\xC0\x00\x9C\x00\x9D\xC0z\xC0{\x00/\x00<\x005\x00=\x00A\x00\xBA\x00\x84\x00\xC0\x00
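One hypothesis worth checking: [tcp://514] is a plain TCP input, so if Appian is sending TLS-encrypted syslog, the TLS handshake bytes are being indexed as raw data, which is exactly what the sample events look like. The SSL settings only take effect for an SSL input, so the stanza would need to be the tcp-ssl variant, along these lines:

[tcp-ssl://514]
connection_host = ip
index = appian
sourcetype = syslog

[SSL]
requireClientCert = false
serverCert = $SPLUNK_HOME\etc\auth\splunkweb\myDataCertificate.pem
sslVersions = tls1.2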
In my large environment we have Splunk Enterprise with ES in a clustered deployment. We are on-boarding new teams into our Splunk environment. To avoid housing all of the new teams' search content under the default Search & Reporting app, how does one create a separate Search & Reporting-style app, so that a new team doesn't have to deal with everything piling up in the default one? Thanks a million in advance for your help.
I've just set up a new account (james_e_thompson) on the new Splunk Portal that cut over last week on 11/11/2021. I have the Splunk On-Call app on my iPhone.

Question #1: do I need to update or switch to a new app in conjunction with setting up a new account for the Splunk Portal? The current app on my phone is Splunk On-Call version 7.63.688.

Question #2: how do I validate that all is well with my new Splunk Portal account and whichever app needs to be available on my phone?
Hello experts, I have recently started exploring the Splunk APIs and am trying to find a REST API to add data to a Splunk index. Can someone tell me whether this is possible? I have already referred to the Splunk API documentation and couldn't find an API reference that performs this task. Any input or guidance will be highly appreciated. Thanks, Nitheesh
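Two documented options, sketched here with placeholder hosts, credentials, and tokens: the receivers/simple endpoint on the management port, and the HTTP Event Collector (HEC), which is the usual choice for programmatic ingestion.

# Simple receiver on the management port (8089)
curl -k -u admin:changeme "https://splunk-host:8089/services/receivers/simple?index=main&sourcetype=my_data" -d "hello from the REST API"

# HTTP Event Collector (token created under Settings > Data inputs > HTTP Event Collector; default port 8088)
curl -k "https://splunk-host:8088/services/collector/event" -H "Authorization: Splunk <hec-token>" -d '{"event": "hello from HEC", "index": "main"}'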
Hello community, I am experiencing a weird situation and have not been able to find the cause. We are deploying the Splunk UF for Windows on approximately 800 Windows systems. On some Windows Server 2016 machines, the UF disappears a few days after installation through SCCM. Has anyone seen a similar situation, or any ideas on where to start looking?
I want to extract the Country and the Node. When I use the rex in regex101 it works fine, but when I put it in a Splunk search it does not extract the Country and the Node. Do you know where my mistake is? This is my search query:

index="maxis_csaroam_index" source="/home/csaops/csaroam/*_MOS.csv"
| dedup Description
| table Description
| rex field=Description "(?<Country>[\w]+)(?<Node>[\w\- ]*\n)"
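A guess at the cause: the pattern ends in \n, and a field value extracted from a CSV column typically contains no newline, so the Node group never matches inside Splunk even though the multi-line test text in regex101 does. A sketch that anchors on the ends of the field instead, assuming Description has the form "Country Node":

index="maxis_csaroam_index" source="/home/csaops/csaroam/*_MOS.csv"
| dedup Description
| rex field=Description "^(?<Country>\w+)\s+(?<Node>[\w\- ]+)$"
| table Description, Country, Node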
Hello, we have Django logs in the following format:

11/15/2021 08:34:38 [INFO - 171 ] - [tenant_move.py] - [STOP_PROCESS] : STOP_PROCESS(HANA Tenant Move Alerts) completed successfully - Rows affected : 1

and we would like to extract the following fields using regex; for the example above:

TYPE=INFO
LINE=171
SCRIPT=tenant_move.py
MODULE=STOP_PROCESS

Ideally with a single regex expression, not four separate ones. Could anyone help? Kind regards, Kamil
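A single rex that captures all four fields from the sample line, assuming the bracketed layout is consistent across events:

... | rex "\[(?<TYPE>\w+)\s*-\s*(?<LINE>\d+)\s*\]\s*-\s*\[(?<SCRIPT>[^\]]+)\]\s*-\s*\[(?<MODULE>[^\]]+)\]"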
Hi all, I have created a Python script locally that works fine for me, but I need to package it as a Splunk add-on to deploy to Splunk Cloud, so I need to pass the username, password, and access token to the code as variables instead of hardcoding them. Can someone please point me to sample scripts where variables are passed to the script? Since I am a beginner in Python, I am finding the Splunk Python helper functions difficult. Thanks, Devon
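A hedged sketch of the pattern the Splunk Add-on Builder generates for modular inputs, assuming the input is defined in the builder UI with username, password, and accesstoken as input arguments (the argument names here are illustrative; mark the sensitive ones as password-type fields so they are stored encrypted):

# Inside the collect_events function that Add-on Builder generates in bin/
def collect_events(helper, ew):
    # Read per-input parameters instead of hardcoding them
    username = helper.get_arg('username')          # hypothetical argument name
    password = helper.get_arg('password')          # encrypted if defined as a password field
    accesstoken = helper.get_arg('accesstoken')    # hypothetical argument name
    helper.log_info("Collecting events for user {}".format(username))
    # ... call the existing script logic with these values ...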
I have a query that produces a table. I want to group the data, with the count column being the sum per group, but instead I get the total of all values in every row, shown as the same sum against every value.

Example log:

2021-11-15 11:17:25.899 level=INFO com.a.b.MyClass - Average latency=0.0 someRandomCount=12800 mySearchValue=SearchValue1=167,SearchValue2=154,SearchValue3=163 // AppId=3ba33f54-4588-49f8-9702-bf957392a029

My query for this log is:

mySearchValue="*"
| rex "mySearchValue=(?<sValue>[^\"]+) //"
| eval field1=split(sValue, ",")
| rex field=field1 "(?<Field1>[^\,]+)\=(?<Field2>[^\,]*)"
| eval c=mvzip(Field1, Field2)
| table Field1, Field2
| mvexpand c
| rename Field1 as "My Values"
| rename Field2 as "Count"

Note: the string after "mySearchValue" in my logs is not fixed to three values; it can contain any number of values, but each follows the same someString=123 comma-separated format.

The sample result of the query above looks like this (each block is one row with multivalue cells; the first row comes from the first log line encountered, and so on):

My Values                                  Count
SearchValue1 SearchValue2 SearchValue3     167 154 163
SearchValue1 SearchValue2 SearchValue3     417 378 399

I want this data grouped by My Values, summing the respective Count values. If I add

| stats sum(Field2) as "groupCount" by Field1

then I do get distinct My Values rows, but the count for every row comes out the same: the total of all values (in this case 1678).
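A sketch of the grouping you describe: the trick is to mvexpand the name=count pairs before extracting, so each pair becomes its own row and stats can sum per name (this drops the mvzip and table steps entirely):

mySearchValue="*"
| rex "mySearchValue=(?<sValue>[^\"]+) //"
| eval pairs=split(sValue, ",")
| mvexpand pairs
| rex field=pairs "(?<name>[^=]+)=(?<cnt>\d+)"
| stats sum(cnt) as "Count" by name
| rename name as "My Values"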
Dear Splunk team, can you assign a resource to reach out to me? We recently deployed Splunk Enterprise in our environment and are running the 60-day trial. We plan to use the solution for log collection, analysis, and reporting. I need someone to walk me through the platform and its features, and I would also like to request a license quote.
Hi Splunk chaps, I'm facing a problem feeding an HF from a UF (the HF is sending data to the cloud, and that works fine). I can exclude network or firewall issues; both servers are reachable from the opposite side. Below is a chunk of error logs from the UF:

11-15-2021 11:12:57.024 +0000 INFO DC:DeploymentClient [6735 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
11-15-2021 11:13:09.024 +0000 INFO DC:DeploymentClient [6735 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
11-15-2021 11:13:10.140 +0000 WARN HttpPubSubConnection [6734 HttpClientPollingThread_97C72192-9F2D-4883-830A-776376593AC1] - Unable to parse message from PubSubSvr:
11-15-2021 11:13:10.140 +0000 INFO HttpPubSubConnection [6734 HttpClientPollingThread_97C72192-9F2D-4883-830A-776376593AC1] - Could not obtain connection, will retry after=70.985 seconds.
11-15-2021 11:13:17.695 +0000 WARN TcpOutputProc [3551 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=172.23.11.216 inside output group default-autolb-group from host_src=ldcrapnvvip10 has been blocked for blocked_seconds=446600. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

And here is the outputs.conf debug output from the UF:

/opt/splunkforwarder/etc/system/default/outputs.conf [syslog]
/opt/splunkforwarder/etc/system/default/outputs.conf maxEventSize = 1024
/opt/splunkforwarder/etc/system/default/outputs.conf priority = <13>
/opt/splunkforwarder/etc/system/default/outputs.conf type = udp
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/outputs.conf [tcpout]
/opt/splunkforwarder/etc/system/default/outputs.conf ackTimeoutOnShutdown = 30
/opt/splunkforwarder/etc/system/default/outputs.conf autoLBFrequency = 30
/opt/splunkforwarder/etc/system/default/outputs.conf autoLBVolume = 0
/opt/splunkforwarder/etc/system/default/outputs.conf blockOnCloning = true
/opt/splunkforwarder/etc/system/default/outputs.conf blockWarnThreshold = 100
/opt/splunkforwarder/etc/system/default/outputs.conf cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
/opt/splunkforwarder/etc/system/default/outputs.conf compressed = false
/opt/splunkforwarder/etc/system/default/outputs.conf connectionTTL = 0
/opt/splunkforwarder/etc/system/default/outputs.conf connectionTimeout = 20
/opt/splunkforwarder/etc/system/local/outputs.conf defaultGroup = default-autolb-group
/opt/splunkforwarder/etc/system/default/outputs.conf disabled = false
/opt/splunkforwarder/etc/system/default/outputs.conf dropClonedEventsOnQueueFull = 5
/opt/splunkforwarder/etc/system/default/outputs.conf dropEventsOnQueueFull = -1
/opt/splunkforwarder/etc/system/default/outputs.conf ecdhCurves = prime256v1, secp384r1, secp521r1
/opt/splunkforwarder/etc/system/default/outputs.conf forceTimebasedAutoLB = false
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/outputs.conf forwardedindex.0.whitelist = .*
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/outputs.conf forwardedindex.1.blacklist = _.*
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/outputs.conf forwardedindex.2.whitelist = (_audit|_introspection|_internal|_telemetry)
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/outputs.conf forwardedindex.filter.disable = false
/opt/splunkforwarder/etc/system/default/outputs.conf heartbeatFrequency = 30
/opt/splunkforwarder/etc/system/default/outputs.conf indexAndForward = false
/opt/splunkforwarder/etc/system/default/outputs.conf maxConnectionsPerIndexer = 2
/opt/splunkforwarder/etc/system/default/outputs.conf maxFailuresPerInterval = 2
/opt/splunkforwarder/etc/system/default/outputs.conf maxQueueSize = auto
/opt/splunkforwarder/etc/system/default/outputs.conf readTimeout = 300
/opt/splunkforwarder/etc/system/default/outputs.conf secsInFailureInterval = 1
/opt/splunkforwarder/etc/system/default/outputs.conf sendCookedData = true
/opt/splunkforwarder/etc/system/default/outputs.conf sslQuietShutdown = false
/opt/splunkforwarder/etc/system/default/outputs.conf sslVersions = tls1.2
/opt/splunkforwarder/etc/system/default/outputs.conf tcpSendBufSz = 0
/opt/splunkforwarder/etc/system/default/outputs.conf useACK = false
/opt/splunkforwarder/etc/system/default/outputs.conf useClientSSLCompression = true
/opt/splunkforwarder/etc/system/default/outputs.conf writeTimeout = 300
/opt/splunkforwarder/etc/system/local/outputs.conf [tcpout-server://172.23.11.216:9997]
/opt/splunkforwarder/etc/system/local/outputs.conf [tcpout:default-autolb-group]
/opt/splunkforwarder/etc/system/local/outputs.conf disabled = false
/opt/splunkforwarder/etc/system/local/outputs.conf server = 172.23.11.216:9997

Any ideas what blocks it? Thanks in advance, Sz
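Given that forwarding has been blocked for blocked_seconds=446600 and the warning says the receiver is probably not accepting data, a hedged first check is whether the HF is actually listening on 9997 at all:

# On the heavy forwarder: is anything listening on 9997?
netstat -an | grep 9997

# If not, enable receiving (equivalent to adding [splunktcp://9997] to inputs.conf) and restart:
$SPLUNK_HOME/bin/splunk enable listen 9997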
Hi, can we get a list of all dashboards used in a Splunk environment, along with each dashboard's panel names and the search queries that feed those panels? Any response would be appreciated. Thanks in advance.
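A REST-based sketch that lists dashboards with an approximate panel count; the dashboard XML lives in eai:data, so extracting panel titles and the exact queries depends on your dashboard versions and is only roughly handled here:

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search isDashboard=1
| eval panel_count = mvcount(split('eai:data', "<panel")) - 1
| table title, eai:acl.app, eai:acl.owner, panel_count

Adding eai:data to the table shows the raw XML, including each panel's <query> element.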
Hi, I recently upgraded to Splunk 8.2.2.1. When I try to collect a RapidDiag report in Settings > RapidDiag > Indexer Health, I get the following errors:

System Call Trace > Could not detect `strace` for the root user. on following instance(s) => Indexers
Network Packet > Could not detect `tcpdump` for the root user. on following instance(s) => Indexers
IOPS > Could not detect `iostat` for the splunk user. on following instance(s) => Indexers
Stack Trace > Could not detect `eu-stack` for the root user. on following instance(s) => Indexers

All indexers are running Splunk 8.2.2.1 on Debian 9.13. Any ideas?
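Those messages read like missing OS utilities rather than a Splunk problem: RapidDiag looks for strace, tcpdump, iostat, and eu-stack on the target instances, and on Debian those ship in standard packages. A hedged fix, run as root on each indexer:

apt-get install strace tcpdump sysstat elfutils
# strace -> strace, tcpdump -> tcpdump, iostat -> sysstat, eu-stack -> elfutils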
Hi, how can I calculate the duration from the log below?

2021-07-15 00:00:01,869 INFO CUS.AbCD-AppService1-1234567 [AppListener] Receive Packet[00*]: Kafka[AppService1.APP1]
2021-07-15 00:00:01,988 INFO CUS.AbCD-AppService1-1234567 [AppCheckManager] Send Packet [01*] to [APP1.APP2]
2021-07-15 00:00:11,714 INFO CUS.AbCD-AppService2-9876543 [AppListener] Receive Packet[02*]: Kafka[AppService2.APP1]
2021-07-15 00:00:11,747 INFO CUS.AbCD-AppService2-9876543_CUS.AbCD-AppService1-1234567 [AppCheckManager] Send Packet [03*] to [APP1.AppService1]
2021-07-15 00:00:11,869 INFO CUS.AbCD-AppService1-1111111 [AppListener] Receive Packet[00*]: Kafka[AppService1.APP1]
2021-07-15 00:00:11,988 INFO CUS.AbCD-AppService1-1111111 [AppCheckManager] Send Packet [01*] to [APP1.APP2]
2021-07-15 00:00:15,714 INFO CUS.AbCD-AppService2-2222222 [AppListener] Receive Packet[02*]: Kafka[AppService2.APP1]
2021-07-15 00:00:15,747 INFO CUS.AbCD-AppService2-2222222_CUS.AbCD-AppService1-1111111 [AppCheckManager] Send Packet [03*] to [APP1.AppService1]

Expected output:

id                                                           duration
CUS.AbCD-AppService2-9876543_CUS.AbCD-AppService1-1234567    9,878
CUS.AbCD-AppService2-2222222_CUS.AbCD-AppService1-1111111    3,878

FYI: 9,878 = (00:00:11,747) - (00:00:01,869) and 3,878 = (00:00:15,747) - (00:00:11,869).

Thanks,
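A sketch that keys both events on the AppService1 id, assuming the combined id in the "Send Packet [03*]" line always ends with the id used in the matching "Receive Packet[00*]" line:

("Receive Packet[00*]" OR "Send Packet [03*]")
| rex "INFO\s+(?<id>\S+)\s+\["
| eval key=mvindex(split(id, "_"), -1)
| stats earliest(_time) as start, latest(_time) as end, latest(id) as id by key
| eval duration=round(end - start, 3)
| table id, duration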
We are using the SSL Monitoring Extension on a Windows server with a standalone machine agent. The extension works for certain SSL certificates, returning the expiration date, which can be viewed in the SaaS Controller UI through the Metric Browser. I am hoping someone else is using this extension on Windows and can advise on the issue.

Issue: for a large number of the domains, the agent does not post any metrics about the SSL certs and logs the errors below.

Errors we see in the agent log files:

<Server_host_name_redacted>==> [Thread-1520850] 15 Nov 2021 08:41:47,693 ERROR ProcessExecutor$ErrorReader - Process Error - The process cannot access the file because it is being used by another process.
<Server_host_name_redacted>==> [Monitor-Task-Thread6] 15 Nov 2021 08:41:47,693 ERROR SslCertificateProcessor - Error fetching expiration date for <Server_host_name_redacted>

Additional details:
Agent: machineagent-bundle-64bit-windows-21.9.0.3184
OS: Windows Server 2019 (64-bit)
Extension link: https://developer.cisco.com/codeexchange/github/repo/Appdynamics/ssl-certificate-monitoring-extension

Stack Overflow has posts mentioning the shortcomings of OpenSSL on Windows; we are already using cygwin64 with the extension.

Dietrich
Hello all, does anyone know how I can search for an event that is supposed to occur within 24 hours but has not?

Example: an invite is sent; if the invite is not marked received within 24 hours, it is a failure. So, let's say an invite was sent on 11/14/21 and is received on 11/16/21: that is a failure. The start time would not be now() or the relative_time function, because the start time is the time the invite was sent. Any help is greatly appreciated.
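One common pattern, sketched with hypothetical field and value names (invite_id, and a status field distinguishing sent from received events): gather both event types per invite, then keep invites that were sent more than 24 hours ago and have no received event.

index=invites (status="sent" OR status="received")
| stats earliest(eval(if(status=="sent", _time, null()))) as sent_time, count(eval(status=="received")) as received_count by invite_id
| where received_count=0 AND sent_time < relative_time(now(), "-24h")
| convert ctime(sent_time)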