I would like to start encrypting traffic between the universal forwarder on my Windows devices and my single Splunk 9.x indexer, which is on a Windows server. For the moment I am only concerned with getting SSL going on the indexer. I see you can also set up a certificate on the clients for authentication to the server, but I want to take it one step at a time. I have a GoDaddy cert I would like to use with the indexer, and I have looked over much of the documentation on Splunk's site on all the ways you can make this configuration work, but it left me confused. I can't find any mention of what to do about the private key. I see where the documentation references the server certificate and even the sslPassword in the inputs.conf file, but no reference to where to put the key location. Is it just assumed that you combine the server cert and the private key into a single pem file, and if so, is the order just server cert first, then private key? Example:

-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
...
-----END PRIVATE KEY-----
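For reference, the usual convention is indeed to concatenate the server certificate first, then the private key, then any intermediate/CA certificates into one PEM file, and point the [SSL] stanza of inputs.conf on the indexer at that file. A minimal sketch (the pem path is hypothetical, and the settings should be checked against the inputs.conf reference for your Splunk version):

```
# inputs.conf on the indexer -- a sketch, not a verified configuration
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = C:\Program Files\Splunk\etc\auth\mycerts\indexer.pem
sslPassword = <password protecting the private key>
```

Splunk encrypts sslPassword on restart, so the clear-text value only needs to be present the first time the file is deployed.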
Again - that's not how it works. First, the application itself has to be able to generate - as we say - an "event", which is either written to a file that Splunk's forwarder can read, or sent over the network (there are also other ways to receive or pull data into Splunk, but these are the most popular ones). Then you have to ingest that data into Splunk. Once you have this data in Splunk, yes, you can schedule a report which will - for example - check every 5 minutes whether/how many users logged into your system. But still, first and foremost, the application itself has to report this action somewhere so that Splunk can get such an event. Splunk is not a fortune teller, you know.
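As a sketch, a scheduled search of that kind might look like the following (the index, sourcetype, and field names are hypothetical and assume the application's login events are already being ingested):

```
index=app_logs sourcetype=myapp_audit (action=login OR action=logout)
| stats count by user, action
```

Saved as a report or alert with a 5-minute schedule over the last 5 minutes, this would count login/logout events per user in each window.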
@gcusello  do you have any idea on troubleshooting this issue?
Thanks for the information. For the application, I wanted to put an email alert on it for when someone logs in and out of the application. Is that possible?
Yes. The additional options are one of the reasons for using transforms-based extractions instead of inline EXTRACT. Notice, however, that REPEAT_MATCH is for index-time extractions. You might want to consider MV_ADD.
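A sketch of what the search-time transforms approach with MV_ADD could look like (the stanza and sourcetype names here are made up):

```
# props.conf
[java_app_logs]
REPORT-caused_by = caused_by_exception

# transforms.conf
[caused_by_exception]
REGEX = Caused by: ([^\r\n]+)
FORMAT = Exception::$1
MV_ADD = true
```

With MV_ADD = true, every match in the event appends another value to the multivalue Exception field instead of stopping at the first match.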
Splunk on its own is not a "monitoring tool", meaning that Splunk is not meant to do - for example - active checks against an application the way monitoring suites do (it could probably be forced to do that, but it would not be an optimal solution). Its forte is data analysis. So as long as you have data from external sources, you can put this data into Splunk, search it, and analyze it. Then - if you have events describing, for example, the results of such checks - you can schedule an alert if there are too many failed probes, or calculate whether the SLA levels were met or not.
I am very new to Splunk and I am having a hard time finding out how to monitor applications. Can someone help?
Hi, we are using the following regex to capture "Caused by" exceptions within a Java stack trace:

Caused by: (?P<Exception>[^\r\n]+)

When testing in regex101, it seems to be working well and captures both instances of "Caused by" in the sample trace: https://regex101.com/r/yL1ucO/1

But when used with EXTRACT within props.conf, Splunk only gets the first instance, i.e. "SomeException". The 2nd occurrence, "AnotherException", is not captured. Should I be using REPEAT_MATCH with a transforms stanza, or is there a way to fix this within props itself?
Apologies, I took out all the extra renames to try to simplify the search, since those aren't really critical to the data I'm trying to get. The fields are actually as they are named in the full search with the join. The first search thus should be:

index=api source=api_call
| rename id as sessionID
| fields apiName, message.payload, sessionID
I now get almost 2 million events, which is about all the events in the WAF log for yesterday, but no table of results. I know that yesterday there was 1 connection through the WAF which produced 6 API calls (one primary and then several downstream). So the number of lines in my table should be 6.
Hello, I'm trying to sum by groups (I have 2 groups) and then plot them individually and also the sum. I'm using the following search to plot group 1:

| fields inbound_rate outbound_rate HOST
| where HOST like "%location_a%"
| addtotals fieldname=a_TPS
| timechart span=5m sum(a_TPS) as a_TPS

This works and sums all the server TPS from location a. Now I have servers in another location (location_b). How can I plot TPS for location a, location b, and the sum of both? Thanks.
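One possible shape for this, sketched under the assumption that every HOST value contains either location_a or location_b:

```
| fields inbound_rate outbound_rate HOST
| eval location=case(like(HOST, "%location_a%"), "location_a",
                     like(HOST, "%location_b%"), "location_b")
| addtotals fieldname=TPS inbound_rate outbound_rate
| timechart span=5m sum(TPS) by location
| addtotals fieldname=total_TPS
```

The by location clause produces one series per location, and the final addtotals adds a row-wise total_TPS column, so all three lines can be plotted on one chart.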
Try specifying output_mode=json.  See https://docs.splunk.com/Documentation/Splunk/9.1.3/RESTUM/RESTusing#Encoding_schemes
addinfo adds the info_* fields to all the events in the event pipeline, i.e. whatever is returned by your index search. makeresults (by default) creates a single event. This can be changed with the count parameter, e.g. makeresults count=10.
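For example, running the following shows the info_* fields attached to each of the ten generated events:

```
| makeresults count=10
| addinfo
| table _time info_min_time info_max_time info_search_time
```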
If you actually have tabs separating your fields (instead of commas), the issue is that you have used + (at least 1 occurrence) rather than * (zero or more occurrences) ^(?P<ACD>\w+\.\d+)\t(?P<ATTEMPTS>[^\t]+)\t(?P<FAIL_REASON>[^\t]*)\t(?P<INTERVAL_FILE>[^\t]+)\t(?P<STATUS>\w+)\t(?P<START>[^\t]+)\t(?P<FINISH>[^\t]+)\t(?P<INGEST_TIME>.+)
Hi @Shashwat .Pandey, I looked around, and as mentioned above, the maximum number of exclusions is 500. This includes these types (and maybe more):

case BASE_PAGE:
case IFRAME:
case VIRTUAL_PAGE:
case AJAX_REQUEST:
case SYNTH_JOB_REF:
I'm surprised this regex worked at all, since the expression looks for tabs to separate the fields while the sample data uses commas. That aside, would you believe a single character makes the difference? See if you can find it below.

^(?P<ACD>\w+\.\d+),(?P<ATTEMPTS>[^,]+),(?P<FAIL_REASON>[^,]*),(?P<INTERVAL_FILE>[^,]+),(?P<STATUS>\w+),(?P<START>[^,]+),(?P<FINISH>[^,]+),(?P<INGEST_TIME>.+)

It's the asterisk in the FAIL_REASON group, which allows for zero characters in the field.
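A quick way to sanity-check the difference inside Splunk itself, using the second sample event from the question:

```
| makeresults
| eval _raw="acd.85,1,,C:\\totalview\\ftp\\switches\\customer1\\85\\020224.1100,PASS,2024-02-02 17:31:30.032 +00:00,2024-02-02 17:32:00.226 +00:00,30"
| rex "^(?<ACD>\w+\.\d+),(?<ATTEMPTS>[^,]+),(?<FAIL_REASON>[^,]*),(?<INTERVAL_FILE>[^,]+),(?<STATUS>\w+),(?<START>[^,]+),(?<FINISH>[^,]+),(?<INGEST_TIME>.+)"
| table ACD ATTEMPTS FAIL_REASON STATUS
```

With [^,]* the empty FAIL_REASON still matches and comes back blank; change it back to [^,]+ and the rex extracts nothing at all for this event.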
There is a calculation error in the number of bytes used by a given connection while logging. This log message is a false positive. Use the following workaround to suppress the log: in $SPLUNK_HOME/etc/log-local.cfg, set

category.AutoLoadBalancedConnectionStrategy=ERROR

The issue is fixed in Splunk 9.1.3/9.2.1, where the log reports the correct value.
Need help with a rex query. I am getting the two events below. I am able to rex event 1, which has a NULL field, but I also need to capture sample event 2, which does not have the NULL value - instead of NULL it just has ",," (an empty field between two commas). I need the rex command to capture the field in both cases: if the event has NULL, extract the NULL value, and if the field is empty, extract a blank value.

Sample event 1:
acd.55,1,NULL,C:\totalview\ftp\switches\customer1\55\020224.1100,PASS,2024-02-02 17:32:30.047 +00:00,2024-02-02 17:36:02.088 +00:00,212

Sample event 2:
acd.85,1,,C:\totalview\ftp\switches\customer1\85\020224.1100,PASS,2024-02-02 17:31:30.032 +00:00,2024-02-02 17:32:00.226 +00:00,30

I created the rex query below, which works for event 1 but sometimes does not recognize event 2:

^(?P<ACD>\w+\.\d+)\t(?P<ATTEMPTS>[^\t]+)\t(?P<FAIL_REASON>[^\t]+)\t(?P<INTERVAL_FILE>[^\t]+)\t(?P<STATUS>\w+)\t(?P<START>[^\t]+)\t(?P<FINISH>[^\t]+)\t(?P<INGEST_TIME>.+)
I think this will work without using join.

(index=api source=api_call) OR index=waf
| eval sessionID=coalesce(sessionID, 'message.id')
| fields apiName, message.payload, sessionID, src_ip, requestHost, requestPath, requestUserAgent
| stats values(*) as * by sessionID
| table apiName, message.payload, sessionID, src_ip, requestHost, requestPath, requestUserAgent

(Note the single quotes around 'message.id' in the eval - field names containing dots need them.)
splunkd.log is flooded with the following log message:

WARN AutoLoadBalancedConnectionStrategy [xxxx TcpOutEloop] - Current dest host connection nn.nn.nn.nnn:9997, oneTimeClient=0, _events.size()=41, _refCount=2, _waitingAckQ.size()=5, _supportsACK=1, _lastHBRecvTime=Thu Jun 20 12:07:44 2023 is using 18446603427033668018 bytes. Total tcpout queue size is 26214400. Warningcount=841