Hi Team, I would like to establish an SSL/TLS connection with third-party CA certificates between the UFs -> HFs -> indexers. The order I'm following in the certificate file to configure the TLS connection is below.

-----BEGIN CERTIFICATE-----
... (certificate for your server) ...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
... <Server Private Key – Passphrase protected> ...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... (the certificate authority certificate) ...
-----END CERTIFICATE-----

Now, the question here is: can we remove the RSA private key from the certificate file? Do we need the private key in order to establish the secure connection from the UF to the HF?
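For context, a minimal sketch of the receiving-side configuration this PEM layout usually pairs with; the port, path, and passphrase below are placeholders, not values from the post. The side that terminates TLS does need its private key in the PEM to prove ownership of its certificate, and sslPassword lets Splunk decrypt a passphrase-protected key.

# inputs.conf on the receiving HF (hypothetical path and passphrase)
[splunktcp-ssl:9997]

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
sslPassword = <your private key passphrase>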
We have a set of data which populates host and IP, e.g.:

Host      IP            count
ESDBAS    10.10.10.10   1
ASFDB     192.0.0.0     1

Query: index=a sourcetype=b | stats values(ip) as IP count by host

I need any hostname that contains "DB" to be flagged in another field, e.g.:

Host      IP            count   Environment
ESDBAS    10.10.10.10   1       DB
ASFDB     192.0.0.0     1       DB

Please assist me on this.
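A minimal sketch of one way to do this, assuming Environment should be "DB" whenever the hostname contains that substring (the "Other" fallback label is my own placeholder):

index=a sourcetype=b
| stats values(ip) as IP count by host
| eval Environment=if(match(host, "DB"), "DB", "Other")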
Hello, I have some issues with the TIME_FORMAT field in my props.conf file; I am getting error messages like "Failed to parse timestamp, defaulting to file modtime". My props.conf stanza and a couple of sample events are given below. Any help will be highly appreciated. Thank you!

00000000|REG|USER|LOGIN|rsd56qa|00000000||10.108.125.71|01||2023-05-09T11:00:59.000-04.00||||||success|
00000000|REG|USER|LOGIN|adb23rm|00000000||10.108.125.71|06||2023-05-10T06:05:43.000-04.00||||||success|

[sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX=([^\|]+\|){10}
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=30
TRUNCATE=2500
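One likely culprit, offered as a guess from the sample events: two of the pipe-delimited fields before the timestamp are empty, and [^\|]+ requires at least one character per field, so the TIME_PREFIX regex never matches and timestamp extraction falls back to file modtime. A sketch of a corrected stanza (same settings otherwise, with the prefix anchored and the fields made optional):

[sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX=^([^\|]*\|){10}
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=30
TRUNCATE=2500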
I have an event panel with 5 dropdown boxes as shown, to be able to filter the base results on 5 categories:

by app name - there are two apps, BPE and BPO
by sts, e.g. 400s or 500s response codes etc.
by mtd, e.g. API method POST, PATCH, GET etc.
by booking ref
by cal, e.g. calling API

This is the event search I created to return the base results:

app=BP* sts=* | table at,req.bookingReference,app,mtd,cid,sts,dur,rsc,cal,req.offerOptionCode | rename req.bookingReference as bookingReference, req.offerOptionCode as offerOptionCode | search app=* AND mtd=* AND sts=* AND bookingReference=* AND cal=* | sort by at asc

When I remove the "search app=* AND mtd=* AND sts=* AND bookingReference=* AND cal=*" from the query, I seem to get all the expected results, including the POST, PATCH and GET items; but with it included, I only get POST method results and not the GET and PATCH items. I suspect the AND statements are the culprit. I tried OR, but then the filters don't work and won't filter the base results. Appreciate any guidance. Thanks
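A hedged guess at the cause: in SPL, field=* only matches events where that field actually exists, so if the GET and PATCH events carry no bookingReference (or cal) value, the filter silently drops them. A sketch of one workaround, using fillnull so every event carries the filtered fields (the "-" placeholder value is my own choice):

app=BP* sts=*
| rename req.bookingReference as bookingReference, req.offerOptionCode as offerOptionCode
| fillnull value="-" app mtd sts bookingReference cal
| search app=* AND mtd=* AND sts=* AND bookingReference=* AND cal=*
| table at, bookingReference, app, mtd, cid, sts, dur, rsc, cal, offerOptionCode
| sort at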
When using a Heavy Forwarder as an intermediate forwarder, in a configuration like UF → HF → Splunk Cloud, what settings need to be made on the HF to forward the data coming from the UF? There is not much about this on the web, so I would appreciate an answer. *Translated by the Splunk community Team*
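A minimal sketch of the usual pattern, with placeholder port and hostname; in practice the Splunk Cloud outputs.conf is normally delivered by the Universal Forwarder credentials app downloaded from your cloud stack:

# inputs.conf on the HF: accept traffic from the UFs
[splunktcp://9997]

# outputs.conf on the HF: send everything on to Splunk Cloud
[tcpout:splunkcloud]
server = inputs.YOURSTACK.splunkcloud.com:9997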
I am hoping someone could provide some comments/replies to check whether we are able to limit the max memory usage for Splunk Universal Forwarders. If yes, is the config filename "limit.conf"? Just to add:
- Will there be any issues arising from limiting the memory usage?
- I understand we can also limit the memory usage at the OS level (have not tested yet); any advantages/disadvantages?
Where can I also get a formal statement from Splunk that mentions whether this configuration is possible?
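On the OS-level option mentioned above, a sketch of one common approach on systemd-based Linux, using a unit override; the unit name and limit value are assumptions, and the SplunkForwarder.service unit only exists if boot-start was enabled with systemd management:

# /etc/systemd/system/SplunkForwarder.service.d/override.conf
[Service]
# cgroup v2 directive; on cgroup v1 systems use MemoryLimit= instead
MemoryMax=512M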
Dear All,

I was going through a Splunk .conf21 talk where the presenter explained how to use index time instead of search time by using a macro. Out of curiosity, I went through the query and have the following questions:
1) In row 5 of the query, what are "default start lookback" and "longest lookback", and where do they get their values from?
2) In row 6 of the query, what are "realtime lag" and "longest query", and where do they get their values from?
3) What is the concept of row 8? How does that search work?
4) What does row 14 mean? What is 1=2?
Please find below the Splunk query.
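On question 4, a general SPL note rather than a reading of the (unshown) macro: a condition like 1=2 can never be true, so it is commonly used as an always-false guard that deliberately matches nothing, for example:

| where 1=2

Without seeing row 14 it is only a guess that the macro uses it this way, e.g. as a fallback branch that returns no events.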
Hi Team, I have the below raw logs:

ReadFileImpl - Total number of records details processed for file: TRIM.UNB.D082423.T065617 is: 20516558 with total number of invalid record count: 0 - Data persisted to cache : 13169530
ReadFileImpl - Total number of records details processed for file: TRIM.BLD.D082423.T062015 is: 4043423 with total number of invalid record count: 0 - Data persisted to cache : 3388398

I want to fetch the highlighted record counts along with the file name. My current query:

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Data persisted to cache "
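A sketch of one way to pull those counts out with rex; the field names are my own, and the regex assumes the message format shown above:

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Data persisted to cache"
| rex "file: (?<file_name>\S+) is: (?<processed_count>\d+).+?invalid record count: (?<invalid_count>\d+).+?Data persisted to cache : (?<cache_count>\d+)"
| table file_name, processed_count, invalid_count, cache_count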
Hi Team, I have the below events:

FileEventCreator - Completed Settlement file processing, TRIM.UNB.D082423.T065617 records processed: 13169530
FileEventCreator - Completed Settlement file processing, TRIM.BLD.D082423.T062015 records processed: 3388398

I want to fetch the record counts per file name. Can someone guide me here? My current query:

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "FileEventCreator - Completed Settlement file processing"

Thanks in advance.
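A sketch along the same lines, with hypothetical field names and a regex matched to the sample events:

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "FileEventCreator - Completed Settlement file processing"
| rex "file processing, (?<file_name>\S+) records processed: (?<records_processed>\d+)"
| stats latest(records_processed) as records_processed by file_name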
Hi Team, I have the below query:

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"

I want a "true" keyword and a green tick every time I receive this "ReadFileImpl - ebnc event balanced successfully" message. Something like this:

"ReadFileImpl - ebnc event balanced successfully"    true    tick mark

If "ReadFileImpl - ebnc event balanced successfully" appears 8 times in a day, I want each statement on its own row with a true keyword and a green tick. Can someone guide me here?
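A simple sketch: since every returned event already matched the message, a constant eval can add the flag, and a Unicode check mark renders in a table (making it green would need dashboard formatting on top of this; the field names are my own):

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval status="true", check="✔"
| table _time, _raw, status, check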
Here is what I am proposing as a manual workaround to pause some alerts, but not all alerts, during a release weekend/evening, and I am looking for some input. We have several alerts that are routinely triggered when there is a release, due to no logins, missing data, etc. I can't 'break' the Splunk emailer, as there are other apps that are not doing a release. If I identify the alerts that need to be disabled during a release, I would like to name them so that they are easily identified in the Alert search tab. What I am thinking of doing is adding a character at the end of the alert name that makes it obvious it is one of these 'special' alerts, so that other alerts not needing to be disabled are not disabled inadvertently. Does anyone know of any issues with using the 'registered trademark' character in the alert name? ALT 0174 = ® Then, when searching for alerts, I would search for all alerts whose names include *®*, disable them, and then do the same to re-enable them once the release is over.
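For what it's worth, a sketch of how such alerts could be listed via the REST endpoint for saved searches; the ® filter simply mirrors the naming convention proposed above:

| rest /services/saved/searches splunk_server=local
| search title="*®*" is_scheduled=1
| table title, disabled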
Hi, I have installed splunk_ta_windows via the deployment server, using the UF on Windows clients, and everything is fine. I created an index, pointed to it in inputs.conf, and all looks good. I can also search the data fine, but some sources and sourcetypes are missing when I run the query.
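A quick sketch for checking exactly which sources and sourcetypes are arriving in the index; the index name is a placeholder:

| tstats count where index=YOUR_WINDOWS_INDEX by sourcetype, source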
I'm trying to send logs from my Linux machine, installed on Hyper-V on Windows, into my Splunk instance, and the data doesn't seem to reach its destination. I have entered the port number in my Splunk instance (Receive data - Configure receiving). I edited my inputs.conf file, so why can't I see my logs in Splunk?
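A minimal sketch of the forwarder-side outputs.conf that also needs to exist for data to leave the Linux box; the IP, port, and group name are placeholders:

# outputs.conf on the Linux forwarder
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = <splunk-instance-ip>:9997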
Don't know another way to do it... I had created containers from the Splunk export app for SOAR (don't use that for Mission Control (MC), it got stuck in some kind of loop or something... so gross, but whatever).

# Bulk-delete SOAR containers that have no artifacts, via the REST API.
# Note: this loops forever; interrupt it (Ctrl-C) once the query returns no IDs.
export token='YOURAUTOMATIONTOKEN'
while true
do
    # Fetch up to 2200 container IDs with zero artifacts and build a JSON body like {"ids":[1,2,3]}
    curl -s -u ":${token}" 'https://YOURCOMPANY.soar.splunkcloud.com/rest/container?search_fields=id&_filter_artifact_count__lte=0&page_size=2200' \
        | python3 -m json.tool \
        | grep -E "\bid\b" \
        | sed 's/.*: //g' \
        | tr -d '\n' \
        | sed -re 's/^/{"ids":[/g' -re 's/,$/]}/g' > ids.txt
    # Delete that batch of containers in one request
    curl -s -X DELETE -u ":${token}" 'https://YOURCOMPANY.soar.splunkcloud.com/rest/container' -d "$(cat ids.txt)"
done
I have an index which has 15 hosts and around 15 sourcetypes mapped to all hosts. How can I get events from only a few selected hosts? (I cannot create a separate index for those hosts.)
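A one-line sketch, with placeholder index and host names; the IN operator keeps the host list readable:

index=your_index host IN (host1, host2, host3)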
Has anyone used SimData for threat and vulnerability data generation? Is there a template available somewhere? Thanks.
Hi all, I'm at a bit of an impasse. An executive would like to see colors that make sense to him in my Punchcard visualization of the number of WiFi devices in a particular space. My data looks like this:

date_hour   Location            Capacity   CapacityColor
0           Art Museum Staff    10         5
0           Lobby               3          5
1           Art Museum Staff    10         5
1           Lobby               5          5
10          Art Museum Staff    31         4
10          Lobby               90         2
11          Art Museum Staff    34         4

I have fiddled with all manner of "CapacityColor", charting options, even the field options I found at: https://docs.splunk.com/Documentation/Splunk/9.1.0/DashStudio/objOptRef. I've tried my search in both Dashboard Studio and Classic, though I'll be honest, I prefer Classic. The best I seem to be able to do is using Sequential and setting max/min to something like red and green. A very "autumn" palette of 5 colors comes out, but I can't change the legend. If I set the CapacityColor to match "Capacity" based on some thresholds like 90, 75, 50, 10, 0, it picks seemingly random numbers for the legend (which will confuse said executive). I wanted to be able to use fieldColors={"Over 90%": "red" ... } like I've seen with other charting options, but I haven't found an iteration of that which works in the punchcard visualization. Has anyone found a way to modify the colors?
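One hedged idea, separate from the punchcard's own color options: map the numeric capacity to named categories in the search itself, so whatever the legend shows is at least human-readable. The labels and thresholds below are examples only:

| eval CapacityLabel=case(Capacity>=90, "Over 90%", Capacity>=75, "75-90%", Capacity>=50, "50-75%", Capacity>=10, "10-50%", true(), "Under 10%")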
Is it possible to set up the VSCode extension to connect to multiple instances?
I actually have an on-prem instance of Splunk Enterprise installed locally, but for incident response we need to forward the logs of specific indexes to Splunk Cloud. I've been reviewing the "Distributed Search" option, but if I'm not mistaken it takes all the Splunk Enterprise data. Is there any way to perform this activity?
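A sketch of one possible approach, using index filtering on the forwarding output; the index names, group name, and hostname are placeholders (note that overriding the default whitelist also affects internal indexes, so list everything you want forwarded):

# outputs.conf on the on-prem instance: forward only selected indexes
[tcpout]
defaultGroup = splunkcloud
forwardedindex.filter.disable = false
forwardedindex.0.whitelist = (index_a|index_b)

[tcpout:splunkcloud]
server = inputs.YOURSTACK.splunkcloud.com:9997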
I know queue backlog troubleshooting questions are very common, but I'm stumped here. I have 2 Universal Forwarders forwarding locally monitored log files (populated by syslog-ng forwarding) over TCP to 4 load-balanced Heavy Forwarders, which then send them to a cluster of 8 indexers. These Universal Forwarders are processing a lot of data, approximately 500 MB per minute each, but this setup worked without lag or dropped logs until recently. The disk IO and network speeds should easily be able to handle this volume. Recently, however, the Universal Forwarders have stopped forwarding logs, and the only WARN/ERROR logs in splunkd.log are as follows:

08-25-2023 14:40:09.249 -0400 WARN TailReader [25677 tailreader0] - Could not send data to output queue (parsingQueue), retrying...

And then, generally some seconds later:

08-25-2023 14:41:04.250 -0400 INFO TailReader [25677 tailreader0] - ...continuing.

My question is this: assuming there's no bottleneck in the TCP output to the 4 HFs, what "parsing" exactly is being done on these logs that would cause the parsingQueue to fill up? I've looked at just about every UF parsingQueue-related question on Splunk Answers, and I've addressed some common gotchas:
- maxKBps in the [thruput] stanza in limits.conf is set to 0 (unlimited thruput)
- 2 parallel parsing queues, each with 1 GB of storage (higher than recommended, but I was desperate)
- no INDEXED_EXTRACTIONS for anything but Splunk's preconfigured internal logs
I've taken some other steps as well:
- set up logrotate on the monitored files to rotate any file that gets larger than 1 GB, so Splunk isn't monitoring exceptionally large files
- set "DATETIME_CONFIG = NONE" and "SHOULD_LINEMERGE = false" for all non-internal sourcetypes
I don't understand why the parsingQueue would fill up when it doesn't seem like there are any parsing actions configured on this machine! Is anyone able to advise me on what to look for or change to resolve this? Thanks much.
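For visibility into which queue is actually saturating, a sketch of an internal-metrics search (assuming the UF's _internal logs reach the indexers; swap the queue name to tcpout-related queues to compare):

index=_internal source=*metrics.log* group=queue name=parsingqueue
| timechart span=1m max(current_size) by host

As a general note, a UF does little real parsing (it does not typically perform event breaking or timestamping), but data still transits the parsingQueue on its way to the output queue, so a blocked parsingQueue on a UF usually points to backpressure from the downstream output rather than parsing work itself.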