Hello community. I am experiencing a strange situation and have not been able to find the cause. We are deploying the Splunk Universal Forwarder for Windows on approximately 800 Windows systems. On some Windows Server 2016 machines, the UF disappears a few days after installation through SCCM. Has anyone seen a similar situation, or any ideas on where to start looking?
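One way to spot forwarders that have gone quiet is to check the internal connection metrics on the receiving tier. A sketch, assuming the UFs send directly to the indexers; the 24-hour threshold is arbitrary:

```
index=_internal source=*metrics.log group=tcpin_connections
| stats max(_time) AS last_seen BY hostname
| eval hours_silent=round((now() - last_seen) / 3600, 1)
| where hours_silent > 24
| sort - hours_silent
```

Correlating the last_seen times with SCCM deployment logs and antivirus/EDR removal events on the affected hosts may show what uninstalled the UF.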
I want to extract the Country and the Node. When I test the rex in regex101 it works fine, but when I run it in a Splunk search it does not extract the Country and the Node. Do you know where my mistake is? This is my search query:

index="maxis_csaroam_index" source="/home/csaops/csaroam/*_MOS.csv" | dedup Description | table Description | rex field=Description "(?<Country>[\w]+)(?<Node>[\w\- ]*\n)"
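A guess at the likely culprit: the trailing `\n` in the Node group. Test input pasted into regex101 often ends with a newline, but a Splunk field value usually does not, so the pattern never matches. A minimal sketch that anchors on end-of-value instead of a literal newline:

```
index="maxis_csaroam_index" source="/home/csaops/csaroam/*_MOS.csv"
| dedup Description
| rex field=Description "(?<Country>\w+)(?<Node>[\w\- ]*)$"
| table Description Country Node
```

This keeps the original character classes; validate against a few real Description values, since the exact Country/Node boundary depends on the data.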
Hello, we have Django logs in the following format:

11/15/2021 08:34:38 [INFO - 171 ] - [tenant_move.py] - [STOP_PROCESS] : STOP_PROCESS(HANA Tenant Move Alerts) completed successfully - Rows affected : 1

and we would like to extract the following fields using regex, based on the above example: TYPE=INFO, LINE=171, SCRIPT=tenant_move.py, MODULE=STOP_PROCESS, ideally using a single regex expression rather than four separate ones. Could anyone help? Kind regards, Kamil
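A single rex along these lines should cover all four fields; it is a sketch validated only against the one sample line above, so the whitespace assumptions should be checked against more of the real logs:

```
| rex "\[(?<TYPE>\w+)\s*-\s*(?<LINE>\d+)\s*\]\s*-\s*\[(?<SCRIPT>[^\]]+)\]\s*-\s*\[(?<MODULE>[^\]]+)\]"
```

The `\s*` around the separators absorbs the inconsistent spacing (e.g. `[INFO - 171 ]`), and `[^\]]+` keeps each bracketed value intact regardless of its content.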
Hi all, I have created a Python script locally that works fine for me, but I need to package it as a Splunk add-on to deploy to Splunk Cloud, so I need to pass the username, password, and access token to the code as variables instead of hardcoding them. Can someone please point me to sample scripts that pass variables into the script? Since I am a beginner in Python, I am finding the Splunk Python helper functions difficult. Thanks, Devon
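In Splunk Cloud add-ons, secrets are normally stored encrypted via the add-on's setup page (the storage/passwords mechanism) rather than in the script itself. Purely to illustrate parameterizing the script, here is a minimal sketch that reads credentials from environment variables; the MYAPP_* variable names are made up for this example, not a Splunk convention:

```python
import os

def get_credentials():
    """Read credentials from the environment instead of hardcoding them.

    The MYAPP_* names are illustrative placeholders; in a real add-on
    these values would come from the add-on's stored configuration.
    """
    username = os.environ.get("MYAPP_USERNAME", "")
    password = os.environ.get("MYAPP_PASSWORD", "")
    access_token = os.environ.get("MYAPP_ACCESS_TOKEN", "")
    return username, password, access_token

if __name__ == "__main__":
    user, _pw, _token = get_credentials()
    print("username configured:", bool(user))
```

The same pattern (one function that returns all secrets) makes it easy to later swap the environment lookup for a Splunk credential store lookup without touching the rest of the script.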
I have a query that produces a table, and I want to group the data so that the count column is the sum for each group. Instead, the query returns the total of all values in every row, showing the same sum against every value.

Example log:

2021-11-15 11:17:25.899 level=INFO com.a.b.MyClass - Average latency=0.0 someRandomCount=12800 mySearchValue=SearchValue1=167,SearchValue2=154,SearchValue3=163 // AppId=3ba33f54-4588-49f8-9702-bf957392a029

My query for this log is:

mySearchValue="*" | rex "mySearchValue=(?<sValue>[^\"]+) //" | eval field1=split( sValue,",") | rex field=field1 "(?<Field1>[^\,]+)\=(?<Field2>[^\,]*)" | eval c=mvzip(Field1,Field2) | table Field1,Field2 | mvexpand c | rename Field1 as "My Values" | rename Field2 as "Count"

Note that the string after "mySearchValue" in the log is not fixed to three values; it can have any number of values, but each follows the same format: someString=123, comma-separated.

The sample result of the above query looks like this (each line is one row, with multivalue cells):

My Values: SearchValue1, SearchValue2, SearchValue3 | Count: 167, 154, 163
My Values: SearchValue1, SearchValue2, SearchValue3 | Count: 417, 378, 399

The first row is extracted from the first log line encountered and split into rows and columns, and so on for the other log lines.

I want this data grouped by "My Values", summing the respective Count values. If I add:

stats sum(Field2) AS "groupCount" by Field1

then I do get distinct "My Values", but the count for every row comes out the same: the total of all values (in this case 1678).
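One likely fix, sketched under the assumption that the goal is the per-name sum across all events: mvexpand the split pairs first, extract name and value from each single pair, and only then aggregate. In the original, `table Field1,Field2` drops `c` before `mvexpand c` runs, and Field1/Field2 stay multivalued, which is why the sums collapse into one total. `your_index` is a placeholder:

```
index=your_index mySearchValue="*"
| rex "mySearchValue=(?<sValue>\S+)"
| eval pair=split(sValue, ",")
| mvexpand pair
| rex field=pair "(?<name>[^=]+)=(?<value>\d+)"
| stats sum(value) AS Count BY name
| rename name AS "My Values"
```

After `mvexpand pair`, each row carries exactly one `name=value` pair, so `stats sum(value) BY name` produces one row per name with the correct group total.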
Dear Splunk team, can you assign a resource to reach out to me? We recently deployed Splunk Enterprise in our environment and are running the 60-day trial. We plan to use the solution for log collection, analysis, and reporting. I need someone to walk me through the platform and its features, and I would also like to request a license quote.
Hi Splunk chaps, I'm facing a problem feeding a heavy forwarder (HF) from a universal forwarder (UF); the HF is sending data to the cloud and that works fine. I can exclude network or firewall issues, since both servers are reachable from the opposite side. Below is a chunk of log errors from the UF:

11-15-2021 11:12:57.024 +0000 INFO DC:DeploymentClient [6735 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
11-15-2021 11:13:09.024 +0000 INFO DC:DeploymentClient [6735 PhonehomeThread] - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
11-15-2021 11:13:10.140 +0000 WARN HttpPubSubConnection [6734 HttpClientPollingThread_97C72192-9F2D-4883-830A-776376593AC1] - Unable to parse message from PubSubSvr:
11-15-2021 11:13:10.140 +0000 INFO HttpPubSubConnection [6734 HttpClientPollingThread_97C72192-9F2D-4883-830A-776376593AC1] - Could not obtain connection, will retry after=70.985 seconds.
11-15-2021 11:13:17.695 +0000 WARN TcpOutputProc [3551 parsing] - The TCP output processor has paused the data flow. Forwarding to host_dest=172.23.11.216 inside output group default-autolb-group from host_src=ldcrapnvvip10 has been blocked for blocked_seconds=446600. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

Please see the outputs.conf debug from the UF below.
/opt/splunkforwarder/etc/system/default/outputs.conf [syslog]
/opt/splunkforwarder/etc/system/default/outputs.conf maxEventSize = 1024
/opt/splunkforwarder/etc/system/default/outputs.conf priority = <13>
/opt/splunkforwarder/etc/system/default/outputs.conf type = udp
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/outputs.conf [tcpout]
/opt/splunkforwarder/etc/system/default/outputs.conf ackTimeoutOnShutdown = 30
/opt/splunkforwarder/etc/system/default/outputs.conf autoLBFrequency = 30
/opt/splunkforwarder/etc/system/default/outputs.conf autoLBVolume = 0
/opt/splunkforwarder/etc/system/default/outputs.conf blockOnCloning = true
/opt/splunkforwarder/etc/system/default/outputs.conf blockWarnThreshold = 100
/opt/splunkforwarder/etc/system/default/outputs.conf cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
/opt/splunkforwarder/etc/system/default/outputs.conf compressed = false
/opt/splunkforwarder/etc/system/default/outputs.conf connectionTTL = 0
/opt/splunkforwarder/etc/system/default/outputs.conf connectionTimeout = 20
/opt/splunkforwarder/etc/system/local/outputs.conf defaultGroup = default-autolb-group
/opt/splunkforwarder/etc/system/default/outputs.conf disabled = false
/opt/splunkforwarder/etc/system/default/outputs.conf dropClonedEventsOnQueueFull = 5
/opt/splunkforwarder/etc/system/default/outputs.conf dropEventsOnQueueFull = -1
/opt/splunkforwarder/etc/system/default/outputs.conf ecdhCurves = prime256v1, secp384r1, secp521r1
/opt/splunkforwarder/etc/system/default/outputs.conf forceTimebasedAutoLB = false
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/outputs.conf forwardedindex.0.whitelist = .*
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/outputs.conf forwardedindex.1.blacklist = _.*
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/outputs.conf forwardedindex.2.whitelist = (_audit|_introspection|_internal|_telemetry)
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/outputs.conf forwardedindex.filter.disable = false
/opt/splunkforwarder/etc/system/default/outputs.conf heartbeatFrequency = 30
/opt/splunkforwarder/etc/system/default/outputs.conf indexAndForward = false
/opt/splunkforwarder/etc/system/default/outputs.conf maxConnectionsPerIndexer = 2
/opt/splunkforwarder/etc/system/default/outputs.conf maxFailuresPerInterval = 2
/opt/splunkforwarder/etc/system/default/outputs.conf maxQueueSize = auto
/opt/splunkforwarder/etc/system/default/outputs.conf readTimeout = 300
/opt/splunkforwarder/etc/system/default/outputs.conf secsInFailureInterval = 1
/opt/splunkforwarder/etc/system/default/outputs.conf sendCookedData = true
/opt/splunkforwarder/etc/system/default/outputs.conf sslQuietShutdown = false
/opt/splunkforwarder/etc/system/default/outputs.conf sslVersions = tls1.2
/opt/splunkforwarder/etc/system/default/outputs.conf tcpSendBufSz = 0
/opt/splunkforwarder/etc/system/default/outputs.conf useACK = false
/opt/splunkforwarder/etc/system/default/outputs.conf useClientSSLCompression = true
/opt/splunkforwarder/etc/system/default/outputs.conf writeTimeout = 300
/opt/splunkforwarder/etc/system/local/outputs.conf [tcpout-server://172.23.11.216:9997]
/opt/splunkforwarder/etc/system/local/outputs.conf [tcpout:default-autolb-group]
/opt/splunkforwarder/etc/system/local/outputs.conf disabled = false
/opt/splunkforwarder/etc/system/local/outputs.conf server = 172.23.11.216:9997

Any ideas what is blocking it? Thanks in advance, Sz
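The `TcpOutputProc ... blocked for blocked_seconds=446600` warning points at the receiving side rather than the UF's outputs.conf. A first check, assuming 172.23.11.216 is the HF, is whether the HF actually has a listening splunktcp input on port 9997; a sketch of the receiving stanza on the HF:

```
# inputs.conf on the heavy forwarder, e.g. in etc/system/local/
[splunktcp://9997]
disabled = 0
```

If the input already exists, the next places to look are whether anything is listening on 9997 at the OS level and whether the HF's own splunkd.log shows blocked queues, since a HF with full queues will also stop accepting data from downstream forwarders.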
Hi, can we get a list of all dashboards used in a Splunk environment, along with the number of panels, the panel names, and the search query used to feed each panel? Any response would be appreciated. Thanks in advance.
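A REST-based search can get close to this. A sketch, noting that the panel count is a rough heuristic that counts `<panel>` tags in Simple XML, and that the panel titles and search strings live inside the raw `eai:data` XML:

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search isDashboard=1
| eval panel_count=mvcount(split('eai:data', "<panel")) - 1
| table eai:acl.app title panel_count
```

Extracting the individual panel names and queries from `eai:data` takes further parsing (e.g. rex with max_match against the `<title>` and `<query>` tags), and dashboards built outside Simple XML won't match this heuristic.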
Hi, I recently upgraded to Splunk 8.2.2.1. When I try to collect a RapidDiag report in Settings > RapidDiag > Indexer Health, I get the following errors:

System Call Trace > Could not detect `strace` for the root user. on following instance(s) => Indexers
Network Packet > Could not detect `tcpdump` for the root user. on following instance(s) => Indexers
IOPS > Could not detect `iostat` for the splunk user. on following instance(s) => Indexers
Stack Trace > Could not detect `eu-stack` for the root user. on following instance(s) => Indexers

All indexers are running Splunk 8.2.2.1 on Debian 9.13. Any ideas?
Hi, how can I calculate durations from the log below?

2021-07-15 00:00:01,869 INFO CUS.AbCD-AppService1-1234567 [AppListener] Receive Packet[00*]: Kafka[AppService1.APP1]
2021-07-15 00:00:01,988 INFO CUS.AbCD-AppService1-1234567 [AppCheckManager] Send Packet [01*] to [APP1.APP2]
2021-07-15 00:00:11,714 INFO CUS.AbCD-AppService2-9876543 [AppListener] Receive Packet[02*]: Kafka[AppService2.APP1]
2021-07-15 00:00:11,747 INFO CUS.AbCD-AppService2-9876543_CUS.AbCD-AppService1-1234567 [AppCheckManager] Send Packet [03*] to [APP1.AppService1]
2021-07-15 00:00:11,869 INFO CUS.AbCD-AppService1-1111111 [AppListener] Receive Packet[00*]: Kafka[AppService1.APP1]
2021-07-15 00:00:11,988 INFO CUS.AbCD-AppService1-1111111 [AppCheckManager] Send Packet [01*] to [APP1.APP2]
2021-07-15 00:00:15,714 INFO CUS.AbCD-AppService2-2222222 [AppListener] Receive Packet[02*]: Kafka[AppService2.APP1]
2021-07-15 00:00:15,747 INFO CUS.AbCD-AppService2-2222222_CUS.AbCD-AppService1-1111111 [AppCheckManager] Send Packet [03*] to [APP1.AppService1]

Expected output:

id                                                          duration
CUS.AbCD-AppService2-9876543_CUS.AbCD-AppService1-1234567   9,878
CUS.AbCD-AppService2-2222222_CUS.AbCD-AppService1-1111111   3,878

FYI: 9,878 = (00:00:11,747) - (00:00:01,869) and 3,878 = (00:00:15,747) - (00:00:11,869).

Thanks,
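One sketch, assuming the composite id on the `[03*]` line always ends with the AppService1 id from the matching `[00*]` line: derive a link key from the last underscore-separated segment of each event's id, then take the time range per key. `your_index` is a placeholder:

```
index=your_index "Packet"
| rex "INFO\s+(?<txn_id>\S+)\s+\["
| eval link_id=mvindex(split(txn_id, "_"), -1)
| stats min(_time) AS start max(_time) AS end values(txn_id) AS ids BY link_id
| eval duration_sec=end - start
| where duration_sec > 0
```

For the sample data this groups the `[00*]`, `[01*]`, and `[03*]` events of each pair under the AppService1 id, so `end - start` spans Receive-to-Send; the composite id requested in the output is among `ids` and can be isolated with `mvfilter(match(ids, "_"))`.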
We are using the SSL Monitoring Extension on a Windows server with a standalone machine agent. The extension works for certain SSL certificates, returns the expiration date, and the metric can be viewed in the SaaS Controller UI through the Metric Browser. I am hoping that someone else is using this extension on Windows and can give advice on the issue.

Issue: for a large number of the domains, the agent does not post any metrics about the SSL certs and logs the errors below.

Errors in the agent log files:
<Server_host_name_redacted>==> [Thread-1520850] 15 Nov 2021 08:41:47,693 ERROR ProcessExecutor$ErrorReader - Process Error - The process cannot access the file because it is being used by another process.
<Server_host_name_redacted>==> [Monitor-Task-Thread6] 15 Nov 2021 08:41:47,693 ERROR SslCertificateProcessor - Error fetching expiration date for <Server_host_name_redacted>

Additional details:
Agent: machineagent-bundle-64bit-windows-21.9.0.3184
OS: Windows Server 2019 (64-bit)
Extension link: https://developer.cisco.com/codeexchange/github/repo/Appdynamics/ssl-certificate-monitoring-extension

Stack Overflow has posts mentioning the shortcomings of OpenSSL on Windows; we are already using cygwin64 with the extension. Dietrich
Hello all, does anyone know how I can search for an event that is supposed to occur within 24 hours but has not? Example: an invite is sent; if the invite is not marked received within 24 hours, it is a failure. So, let's say an invite was sent on 11/14/21 and received on 11/16/21; that is a failure. The start time would not come from now() or the relative_time function, because the start time is the time the invite was sent. Any help is greatly appreciated.
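One common pattern, sketched with made-up field names (invite_id, action) that would need to be mapped onto the real events: look back further than 24 hours, pair sends with receipts per invite, and keep invites sent more than 24 hours ago that still have no receipt:

```
index=your_index (action="sent" OR action="received") earliest=-7d
| stats min(eval(if(action="sent", _time, null()))) AS sent_time
        count(eval(action="received")) AS receipts BY invite_id
| where receipts=0 AND now() - sent_time > 86400
```

To also flag late receipts (received, but after the deadline), capture the receipt time with another eval-stats and compare it against sent_time + 86400 instead of requiring receipts=0.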
Hi, I have logs coming in XML format from Privileged Access Manager, and I need the fields extracted by default, like this:

Example fields: <level></level> gives level, each <k>=<v> pair gives a field such as commandInitiator=USER, and <success>false</success> gives success=false.

jan 9 09:04:56 1.30.124.24 1 2012-01-09T14:04:56+00:00 yahoota.com pam - metric DETAIL <Metric><type>getAccount</type><level>1</level><description><hashmap><k>commandInitiator</k><v>USER</v><k>commandName</k><v>getAccount</v><k>clientType</k><v>java</v><k>osarch</k><v>amd64</v><k>targetServerAlias</k><v>USR_LOCL_MARKETING_INTELLIGENCE_BATCH</v><k>nodeid</k><v>&lt;?xml version="1.0" encoding="utf-8" ?&gt;&lt;nodeid&gt;&lt;macaddr&gt;&lt;/macaddr&gt;&lt;macaddr&gt:E0&lt;/macaddr&gt;&lt;macaddr&gt;A0:C:E0&lt;/macaddr&gt;&lt;macaddr&gt;:E0&lt;/macaddr&gt;&lt;macaddr&gt;1:E1&lt;/macaddr&gt;&lt;macaddr&gt;1:E3&lt;/macaddr&gt;&lt;macaddr&gt;:E2&lt;/macaddr&gt;&lt;machineid&gt;1E3&lt;/machineid&gt;&lt;applicationtype&gt;cspm&lt;/applicationtype&gt;&lt;/nodeid&gt;</v><k>enablefips</k><v>true</v><k>executionUID</k><v>bibatusr</v><k>version</k><v>4.5.3</v><k>scriptStat</k><v>/opt/ibm/bigintegrate/tools</v><k>scriptName</k><v>/home/infra/bibatusr/run_ds_job.sh</v><k>osversion</k><v>3.10.0-1127.el7.x86_64</v><k>digestLoginDate</k><k>applicationtype</k><v>cspm</v><k>getXMLIndicator</k><v>false</v></hashmap></description><errorCode>405</errorCode><userID>client</userID><success>false</success><originatingIPAddress>10.111.211.50</originatingIPAddress><originatingHostName>yahoota.com</originatingHostName><extensionType></extensionType></Metric>

I need these fields extracted automatically whenever this type of log is ingested into Splunk, configured through props.conf and transforms.conf. What stanzas should I write for this?
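One sketch: because the `<k>`/`<v>` pairs are positional rather than named tags, XML auto-extraction alone won't pair them, but a search-time REPORT transform can. The sourcetype name `pam:metric` is a placeholder for whatever the data is ingested as:

```
# props.conf
[pam:metric]
REPORT-pam_kv = pam_kv_pairs
EXTRACT-level = <level>(?<level>[^<]*)</level>
EXTRACT-success = <success>(?<success>[^<]*)</success>

# transforms.conf
[pam_kv_pairs]
REGEX = <k>([^<]+)</k><v>([^<]*)</v>
FORMAT = $1::$2
MV_ADD = true
```

The transform captures each `<k>name</k><v>value</v>` pair and emits it as a field named after the key; MV_ADD keeps repeated keys (like applicationtype in the sample) as multivalue fields rather than discarding them.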
Good day, I am trying to receive alerts via a Teams channel. I followed the instructions in the Splunk docs on how to get a webhook and add it to an alert, but the alert still triggers via Telegram and not via Teams. What should I look at?
Hi, I want to send data from /var/log to index x if the host is non-prod and the host name matches abc-nprd*, and to index y if the host is prod and the host name matches abc-prd*. I don't want to create separate apps for prod and non-prod. Is there a way to achieve this by deploying the same app to both prod and non-prod? Any help appreciated. Thanks.
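Index-time routing keyed on the host name can do this with a single app, with the caveat that these props/transforms must run where parsing happens (a heavy forwarder or the indexers, not a universal forwarder). A sketch, with `x` and `y` standing in for the real index names:

```
# props.conf
[source::/var/log/...]
TRANSFORMS-route_by_host = route_nprd, route_prd

# transforms.conf
[route_nprd]
SOURCE_KEY = MetaData:Host
REGEX = ^host::abc-nprd
DEST_KEY = _MetaData:Index
FORMAT = x

[route_prd]
SOURCE_KEY = MetaData:Host
REGEX = ^host::abc-prd
DEST_KEY = _MetaData:Index
FORMAT = y
```

Because the routing keys off the host name rather than anything environment-specific, the same app can be deployed unchanged to prod and non-prod.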
I have filenames like this:

- 11112021_MOS.csv
- 12112021_MOS.csv
- 13112021_MOS.csv

I want to create a dropdown based on the date. How can I do that?
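For a dashboard dropdown, one approach is to drive the input from a search that pulls the date out of the source path. A sketch, assuming the files are already indexed and `your_index` is a placeholder:

```
| tstats count WHERE index=your_index source="*_MOS.csv" BY source
| rex field=source "(?<file_date>\d{8})_MOS\.csv"
| stats count BY file_date
| sort file_date
```

Use file_date as both the label and value of the dropdown input, then filter the dashboard's panels with something like source="*$file_date$_MOS.csv".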
Hello folks! This is my first post here, and I hope you can help me with my issue. I inadvertently selected 4000+ notable events and closed them all with the same note. Is there any script, or anything in the Splunk ES UI that I missed, that can undo my mistake? Your help is much appreciated! Thank you all.
cert.pem (/splunk/auth/splunkweb/cert.pem) is not generated when I renew the certificate and restart the Splunk service, although server.pem (/splunk/auth/server.pem) is generated on restart. Any help, please?
Hi, I have a log line like this:

Elapsed time: prediction timer 0.1953 seconds

and I created a rex like this:

rex "Elapsed\stime:\sprediction\stimer\s(?<predictionTime>\d+)\sseconds"

but I am unable to extract the value at all. What am I missing? Any help would be appreciated.
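The likely issue is that `\d+` matches digits only, so it cannot cross the decimal point in `0.1953` and the overall pattern never matches. Allowing an optional fractional part should fix it:

```
| rex "Elapsed\stime:\sprediction\stimer\s(?<predictionTime>\d+(?:\.\d+)?)\sseconds"
```

`\d+(?:\.\d+)?` matches both whole numbers and decimals, so the same rex keeps working if the timer ever logs an integer number of seconds.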