All Topics

Splunk Enterprise (on-prem) is reported to have a hotfix for this CVE in version 8.2.3.2, but I am unable to locate the hotfix. We do not run DFS, but management still wants the hotfix applied. Also, is there a way to test for the vulnerability before and after applying the hotfix?
What are some best practices for collecting DB logs from an MSSQL server, please? Are there apps for this, or is it better done manually? Please provide details if you can. We need the search/app to tell us size, source, server names, IPs, and so forth. Thank you in advance for your reply.
Our particular on-call agreement is not 24/7 but has some periods where nobody is on call. At the moment, when a caller calls the on-call number, one of three things will happen:

1. If an operator is on call, and they pick up the phone, then the caller speaks with the operator.
2. If an operator is on call, and they do not pick up the phone in time, then the caller leaves a message (which is transcribed and made into an incident, and the audio is available on Twilio).
3. If nobody is on call, the phone rings indefinitely until the caller hangs up in frustration.

We would prefer that, if nobody is on call, the caller leaves a message, as in the second case. Our setup is pretty simple, with just one rotation every day from 8:00 a.m. through midnight and a handoff every week. The escalation policy has one step: notify the on-duty user(s) in rotation. The only way I can think to fix this would be to add a bot user with some phone number that will never pick up, who assumes on-call from midnight to 8 a.m., but (a) that seems pretty hacky, and (b) it doesn't appear possible to add non-human users anyway. Is what I want possible? Thanks, Matt
index=* host=* rule=corp_deny_all_to_untrust NOT dest_port=4242 | table src_ip dest_ip transport dest_port application — I am able to get the source IP with this query. How can I get the AWS instance name? There is no such interesting field as "AWS instance name". Kindly help.
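The firewall events themselves won't carry an instance name, so one common approach is to enrich the search with AWS inventory data (for example, instance metadata collected by the Splunk Add-on for AWS into a lookup). A minimal sketch, where the lookup name `aws_instance_lookup` and its fields `private_ip` and `instance_name` are hypothetical and would need to match whatever inventory lookup actually exists in the environment:

```spl
index=* host=* rule=corp_deny_all_to_untrust NOT dest_port=4242
| lookup aws_instance_lookup private_ip AS src_ip OUTPUT instance_name
| table src_ip dest_ip transport dest_port application instance_name
```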
Does Splunk have a patch for CVE-2021-44228? Qualys has identified the Apache Log4j Remote Code Execution (RCE) vulnerability (Log4Shell) on the Splunk servers. Please update the impacted Splunk infrastructure with any updates they provide.
2021-12-13T05:22:49.578070-05:00 tp-docker6 b064ec36df18[1851]: cid:d4b7ce5a71da4dc8ab1d5ce535149ce7 code_version:release-2021-49 2021-12-13 10:22:49,577 - core.external - INFO - Response status: 409 Payload: b'{"status":"conflict","statusMessage":"POINTS - Transaction already computed.","transactionId":"5000-3816-8092-5283-8043","reversalTransactionId":null}' CID = d4b7ce5a71da4dc8ab1d5ce535149ce7 CodeVersion = release-2021-49 host = tp-docker6.points.com source = /logs/docker/application-platform-6b.log
I tried to set up logstash -> Splunk Cloud trial, but due to an SSL issue I cannot forward. Does anyone have any idea? You can open the link below and see: https://inputs.prd-p-ij0c3.splunkcloud.com:8088/ The logstash http output error is an SSL certificate name mismatch. I don't have the option to disable SSL.
I am using the Splunk connector for Kafka. https://github.com/splunk/kafka-connect-splunk/releases https://splunkbase.splunk.com/app/3862/#/details The version we are using is 1.1.0. Is this impacted by the latest Log4j RCE vulnerability? Please let us know.
Hi, I've set up a Splunk monitor to send some JSON files to Splunk; however, it doesn't send invalid JSON files. I can see the reasoning behind this, but I'd like to keep invalid JSON files so that I can see which ones are valid/invalid on my dashboard. Is there any way to make Splunk send over all JSON files even if they're invalid? Thanks. Edit: For instance, could I maybe make a sourcetype that sets the sourcetype field to "json" if valid and "invalid_json" if not valid?
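One possible direction, assuming the files are small enough to ingest as plain text: monitor them under a generic sourcetype (without structured JSON parsing, which may skip malformed files) and classify validity at search time. This sketch assumes a hypothetical sourcetype `raw_json` and a Splunk version that supports the `json_valid()` eval function:

```spl
index=mydata sourcetype=raw_json
| eval json_status=if(json_valid(_raw), "json", "invalid_json")
| stats count by json_status, source
```

The resulting `json_status` field could then drive the dashboard split between valid and invalid files.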
Hello, I am trying this for the first time. I installed SC4S on my HF server and connected SC4S to the HF using the HEC URL and token. As far as I can check, I am receiving data for SC4S's own events from the HF. However, when syslog is forwarded from the NetScaler over the configured ports, I am not receiving any data. Apart from installing SC4S and updating the HEC URL and token, I have opened UDP port 514 in iptables to accept data. I would really appreciate it if anyone could help me resolve this.
Hi, I installed Splunk Stream to receive IPFIX. When I generate IPFIX logs with a third-party app I can see the IPFIX data in Splunk, but I can't see any IPFIX traffic generated from NSX-T, even though I can see the NSX-T IPFIX traffic on the Splunk machine with Wireshark. It just doesn't show up in Splunk. Has anybody faced this problem?
Hi All, we have a couple of searches as shown below:

1. User Login From Suspicious Countries
2. Multiple AWS Console Failed Login Attempts from Different Source IPs
3. High CPU or Memory Usage on a server

Can someone please advise which MITRE ATT&CK techniques each of these can be mapped to? Thanks
Hello, I want a placeholder in a text input box of a Splunk dashboard, like the attached image. Could anyone help me out with this requirement? Thanks in advance.
We have a saved search in a search head cluster which writes its results to a KV Store lookup using append=true. Although the searches run successfully, the results were not stored in the KV Store for a few executions. Unfortunately, we were not able to locate the issue in mongod.log or splunkd.log. Any ideas are appreciated.
We save hash values of our IDs and I want to search for them. I would expect I could do it this way: index=blub id=sha1("11122233") But unfortunately it doesn't work. Other attempts failed too (for example, evaluating it into a new variable first). If I just run the sha1 by itself it returns the correct value, but somehow it doesn't work in the search. Can anybody help here or offer a suggestion?
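One likely cause: in the base search, `id=sha1("11122233")` is treated as a literal string match rather than a function call, since eval functions are only evaluated inside `eval`/`where`. A minimal sketch, assuming a Splunk version where `sha1()` is available as an eval function:

```spl
index=blub
| where id=sha1("11122233")
```

Or, precomputing into a field first:

```spl
index=blub
| eval target=sha1("11122233")
| where id=target
```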
I need to extract only the number, i.e. 23, from: DiskDrive: \\.\PHYSICALDRIVE23
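A search-time extraction with `rex` should work here; the field name `disk_number` is just an example:

```spl
... | rex field=_raw "PHYSICALDRIVE(?<disk_number>\d+)"
| table disk_number
```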
service:jmx:iiop://testsplunk/jndi/corbaname:iiop:testsplunk:9100/WsnAdminNameService#JMXConnector. I used this URL with my hostname and I'm getting an error. I also tried through the SOAP port and the PID, and I get the same error. Any help?
I have a table that has batch ID and the start and end time of each batch. How can I get the duration, i.e. the runtime, of each batch? When I run this query, the Duration field comes back blank: index="bodata" | where BEX_TSP_START!="NULL" AND BEX_TSP_END!="NULL" | where BEX_DTE_BSNS=="09-12-2021" | eval Duration=strptime(BEX_TSP_END,"%H:%M.%S")-strptime(BEX_TSP_START,"%H:%M.%S") | table BEX_NUM_JOB,Duration Please help.
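One thing that stands out is the strptime format string `"%H:%M.%S"`, which expects a dot between minutes and seconds; if the timestamps actually look like `14:35:20`, strptime returns null and Duration comes out blank. A sketch of the corrected query, assuming that time format (adjust the format string to match the actual data):

```spl
index="bodata"
| where BEX_TSP_START!="NULL" AND BEX_TSP_END!="NULL"
| where BEX_DTE_BSNS="09-12-2021"
| eval Duration=strptime(BEX_TSP_END,"%H:%M:%S")-strptime(BEX_TSP_START,"%H:%M:%S")
| table BEX_NUM_JOB, Duration
```

Adding `| eval Duration=tostring(Duration,"duration")` afterwards would render the result as HH:MM:SS rather than raw seconds.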
My deployment consists of 2 servers that collect syslog sources. On each server an rsyslog daemon receives messages over UDP and spools them into log files. These files are monitored by a universal forwarder, which sends the messages to the indexers. This deployment is a good practice for indexing syslog data. An F5 load balancer sits on the front end and routes the flows to both servers.

I wanted to set up a mechanism that would allow me to manually add or remove a universal forwarder from the member pool when it is under maintenance or being restarted, for example. For a search head cluster this is possible by configuring a custom endpoint, as in the suggested solution https://community.splunk.com/t5/Monitoring-Splunk/F5-Load-balancer-Pool-member-health-monitor/m-p/459497 But it is not possible with a universal forwarder by design (the Python library is not embedded, for security reasons).

So my question is: how can I manually disable a universal forwarder so that its server no longer receives data from the load balancer?
You all are 0/4 on handling my bug reports. If even this doesn't get fixed, this will be the last time I report a bug this way. The context menu to go to the dashboard goes away when you try to move your mouse over it. Here is a video of the issue: https://cdn.discordapp.com/attachments/919882621503275078/919884566871810099/splunk.m4v