All Topics


Hi Team, I am using the query below:

<row>
  <panel>
    <table>
      <search>
        <query>index="abc*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully" | eval keyword=if(searchmatch("ReadFileImpl - ebnc event balanced successfully"),"True","") | eval phrase="ReadFileImpl - ebnc event balanced successfully" | table phrase keyword</query>
        <earliest>-1d@d</earliest>
        <latest>@d</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">20</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">true</option>
      <option name="wrap">true</option>
      <format type="color" field="keyword">
        <colorPalette type="list">[#118832,#1182F3,#CBA700,#D94E17,#D41F1F]</colorPalette>
        <scale type="threshold">0,30,70,100</scale>
      </format>
    </table>
  </panel>
</row>

Along with the phrase and the "True" value, I would also like a checkmark to appear in another column. Can someone guide me? Current output:

phrase                                            keyword
ReadFileImpl - ebnc event balanced successfully   True
ReadFileImpl - ebnc event balanced successfully   True
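A minimal sketch of one approach, assuming the table renders Unicode characters (the "check" field name and the ✓ character are illustrative additions, not from the original dashboard):

```
index="abc*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval phrase="ReadFileImpl - ebnc event balanced successfully"
| eval keyword=if(searchmatch("ReadFileImpl - ebnc event balanced successfully"),"True","")
| eval check=if(keyword="True","✓","")
| table phrase keyword check
```

To color the checkmark you could add a second <format type="color" field="check"> block alongside the existing one for keyword.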
Hi, I would like to get a list of all users, with their roles and last login, via a Splunk query. I tried the following query with a time range of "All time", but it shows an incorrect date for some users:

index=_audit action="login attempt" | stats max(timestamp) by user

Thank you, kind regards,
Marta
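One common sketch combines the REST users endpoint with the audit log. Assumptions: you have permission to run "| rest", and successful logins in your _audit data carry info=succeeded. Note also that in _audit the event time lives in _time; max(timestamp) may be what is producing the incorrect dates:

```
| rest /services/authentication/users splunk_server=local
| fields title roles
| join type=left title
    [ search index=_audit action="login attempt" info=succeeded
      | stats max(_time) as last_login by user
      | rename user as title ]
| eval last_login=strftime(last_login, "%Y-%m-%d %H:%M:%S")
| rename title as user
```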
I have a HEC receiving logs from CloudWatch, and the default index is set to "aws". From the same HEC token I am also receiving firewall logs from CloudWatch, and these logs are also going to the index "aws". How can I get the firewall logs coming from the same HEC token but a different source assigned to the index "paloalto"? I tried the config below, but it doesn't work.

props.conf
[source::syslogng:dev/syslogng/*]
TRANSFORMS-hecpaloalto = hecpaloalto
disabled = false

transforms.conf
[hecpaloalto]
DEST_KEY = _MetaData:Index
REGEX = (.*)
FORMAT = palo_alto

I created the index palo_alto in the cluster master's indexes.conf and applied the cluster bundle to the indexers. I also applied the above config to the indexers using the deployment server. For some reason the logs are still going to the aws index.
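A hedged sketch of the usual shape for index-time routing. The [source::...] pattern must match the source exactly as it appears on the indexed firewall events (check the source field on one of them); the sketch below keys off sourcetype instead, where "pan:firewall" is a placeholder for whatever sourcetype the firewall events actually carry. Note too that events sent to HEC's /event endpoint arrive pre-parsed, so index-time props may not apply to them the same way they do to /raw traffic:

```
# props.conf (sketch; the sourcetype name is a placeholder)
[pan:firewall]
TRANSFORMS-hecpaloalto = hecpaloalto

# transforms.conf
[hecpaloalto]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = palo_alto
```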
Hello Splunkers, I am used to using the following command to decrypt $7 Splunk configuration passwords such as pass4SymmKey or sslConfig:

splunk show-decrypted --value '<encrypted_value>'

I have several questions regarding this command:
1/ Have you ever found any official documentation about it? I looked here with no result: https://docs.splunk.com/Documentation/Splunk/9.1.0/Admin/CLIadmincommands
2/ Is it possible to use this command for $6 encrypted (hashed?) strings, like the admin password stored in $SPLUNK_HOME/etc/passwd? I suppose it's not possible, since it's a password and it should not be "reversible" for security reasons.
3/ This question is related to the previous one. Is it right to say that a $7 value has been encrypted, since it's possible to revert it, while a $6 value has been hashed, because it's impossible to get the clear value back?

Thanks for your help!
GaetanVP
Morning all,
I've been asked to document everything we have on the Splunk platform (on-prem) before moving to the cloud. Has anyone been in a similar position, and where did you start? Any pointers would be appreciated.
Thank you
Hi Team, I would like to establish an SSL/TLS connection with third-party CA certificates between the UFs -> HFs -> indexers. The order in which I'm combining the certificates for the TLS connection is below:

-----BEGIN CERTIFICATE-----
... (certificate for your server) ...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
... <server private key, passphrase protected> ...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... (the certificate authority certificate) ...
-----END CERTIFICATE-----

Now, the question is: can we remove the RSA private key from the certificate file? Do we need the private key in order to establish the secure connection from the UF to the HF?
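For reference, a sketch of the forwarder-side settings that consume a combined PEM like the one above (paths and host names are illustrative; attribute names follow the standard outputs.conf SSL settings). The side presenting a certificate needs the matching private key in its PEM, which is why the key is normally kept in the file:

```
# outputs.conf on the UF (sketch)
[tcpout:to_hf]
server = hf.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
sslPassword = <key passphrase>
sslVerifyServerCert = true
```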
We have a set of data which populates host and IP, e.g.:

Host       IP            count
ESDBAS     10.10.10.10   1
ASFDB      192.0.0.0     1

Query: index=a sourcetype=b | stats values(ip) as IP count by host

I need a result where any hostname that contains "DB" is flagged in another field, e.g.:

Host       IP            count   Environment
ESDBAS     10.10.10.10   1       DB
ASFDB      192.0.0.0     1       DB

Please assist me with this.
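A minimal sketch (the non-DB label "Other" is illustrative):

```
index=a sourcetype=b
| stats values(ip) as IP count by host
| eval Environment=if(like(host, "%DB%"), "DB", "Other")
```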
Hello, I have some issues with the TIME_FORMAT setting in my props.conf file; I am getting error messages like "Failed to parse timestamp, defaulting to file modtime". My props.conf and a couple of sample events are given below. Any help will be highly appreciated. Thank you!

00000000|REG|USER|LOGIN|rsd56qa|00000000||10.108.125.71|01||2023-05-09T11:00:59.000-04.00||||||success|
00000000|REG|USER|LOGIN|adb23rm|00000000||10.108.125.71|06||2023-05-10T06:05:43.000-04.00||||||success|

[sourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX=([^\|]+\|){10}
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=30
TRUNCATE=2500
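One likely culprit, offered as a sketch: [^\|]+ requires at least one character in every pipe-delimited field, but the sample events contain empty fields (||), so the TIME_PREFIX regex never matches and Splunk falls back to the file modtime. Allowing empty fields with * may fix it (untested against your full data set):

```
[sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# [^\|]* instead of [^\|]+ so empty fields like "||" still match
TIME_PREFIX = ^([^\|]*\|){10}
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 2500
```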
I have an event panel with 5 dropdown boxes, used to filter the base results on 5 categories:

by app name - there are two apps, BPE and BPO
by sts - e.g. 400s or 500s response codes, etc.
by mtd - e.g. API method: POST, PATCH, GET, etc.
by booking ref
by cal - e.g. calling API

This is the event search I created to return the base results:

app=BP* sts=* | table at,req.bookingReference,app,mtd,cid,sts,dur,rsc,cal,req.offerOptionCode | rename req.bookingReference as bookingReference, req.offerOptionCode as offerOptionCode | search app=* AND mtd=* AND sts=* AND bookingReference=* AND cal=* | sort by at asc

When I remove "search app=* AND mtd=* AND sts=* AND bookingReference=* AND cal=*" from the query, I get all the expected results, including the POST, PATCH, and GET items; with it included, I only get POST results and not the GET and PATCH items. I suspect the AND statements are the culprit. I tried OR, but then the filters don't work and won't filter the base results. I'd appreciate any guidance. Thanks
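field=* only matches events where that field actually exists, so any event missing one of the five fields (e.g. a GET with no bookingReference) is silently dropped by the search clause. A hedged sketch: fill the gaps before filtering (the "-" placeholder is illustrative, and sort 0 at replaces "sort by at asc", which may not parse the way you expect):

```
app=BP* sts=*
| table at, req.bookingReference, app, mtd, cid, sts, dur, rsc, cal, req.offerOptionCode
| rename req.bookingReference as bookingReference, req.offerOptionCode as offerOptionCode
| fillnull value="-" app mtd sts bookingReference cal
| search app=* AND mtd=* AND sts=* AND bookingReference=* AND cal=*
| sort 0 at
```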
When using a Heavy Forwarder as an intermediate forwarder in a UF → HF → Splunk Cloud configuration, what settings need to be made on the HF in order to forward the data coming from the UFs? There is not much about this on the web, so I would appreciate an answer.
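A sketch under common assumptions (port numbers illustrative): the HF needs a listening input for the UFs and an output to Splunk Cloud. In practice the Splunk Cloud output and certificate settings come from the forwarder credentials package downloaded from your Cloud stack rather than being typed by hand:

```
# inputs.conf on the HF (sketch): receive from the UFs
[splunktcp://9997]
disabled = false

# outputs.conf on the HF: forward on to Splunk Cloud
# (normally supplied by the Splunk Cloud forwarder credentials app)
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.<your-stack>.splunkcloud.com:9997
```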
I am hoping someone could provide some comments/replies on whether we are able to limit the maximum memory usage of Splunk Universal Forwarders. If yes, is the config filename "limits.conf"? Also:
- Will there be any issues arising from limiting the memory usage?
- I understand we can also limit the memory usage at the OS level (not tested yet); any advantages/disadvantages?
Where can I also get a formal statement from Splunk that this configuration is possible?
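For the OS-level route, a hedged sketch using a systemd cgroup override (assumes the UF runs under systemd as SplunkForwarder.service and the host uses cgroup v2; the 512M value is illustrative). As far as I know, the UF's limits.conf knobs are mostly throughput-oriented (e.g. maxKBps) rather than a hard memory cap:

```
# /etc/systemd/system/SplunkForwarder.service.d/memory.conf (sketch)
[Service]
MemoryMax=512M

# then reload and restart:
#   systemctl daemon-reload
#   systemctl restart SplunkForwarder
```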
Dear all,

I was going through a Splunk .conf21 talk where the presenter explained using index time instead of search time via a macro. Out of curiosity, I went through the query and have the following doubts:
1) In row 5 of the query, what are "default start lookback" and "longest lookback", and where do they get their values?
2) In row 6 of the query, what are "realtime lag" and "longest query", and where do they get their values?
3) What is the concept of row 8? How does that search work?
4) What does row 14 mean? What is 1=2?
Please find below the Splunk query.
Hi Team, I have the raw logs below:

ReadFileImpl - Total number of records details processed for file: TRIM.UNB.D082423.T065617 is: 20516558 with total number of invalid record count: 0 - Data persisted to cache : 13169530
ReadFileImpl - Total number of records details processed for file: TRIM.BLD.D082423.T062015 is: 4043423 with total number of invalid record count: 0 - Data persisted to cache : 3388398

I want to fetch the highlighted record counts along with the file name. My current query:

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Data persisted to cache "
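A sketch using rex against the sample events above (the field names are illustrative):

```
index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Data persisted to cache"
| rex "processed for file: (?<file_name>\S+) is: (?<total_records>\d+).*Data persisted to cache : (?<persisted_count>\d+)"
| table file_name total_records persisted_count
```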
Hi Team, I have the events below:

FileEventCreator - Completed Settlement file processing, TRIM.UNB.D082423.T065617 records processed: 13169530
FileEventCreator - Completed Settlement file processing, TRIM.BLD.D082423.T062015 records processed: 3388398

I want to fetch the record counts for the different file names. Can someone guide me here? My current query:

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "FileEventCreator - Completed Settlement file processing"

Thanks in advance.
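A sketch along the same lines, extracting the count per file (field names illustrative):

```
index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "FileEventCreator - Completed Settlement file processing"
| rex "file processing, (?<file_name>\S+) records processed: (?<records_processed>\d+)"
| stats latest(records_processed) as records_processed by file_name
```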
Hi Team, I have the query below:

index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"

I want a "true" keyword and a green tick every time I receive "ReadFileImpl - ebnc event balanced successfully". Something like this:

"ReadFileImpl - ebnc event balanced successfully"       true       tick mark

If "ReadFileImpl - ebnc event balanced successfully" appears 8 times in a day, I want each occurrence as a separate row with a "true" keyword and a green tick. Can someone guide me here?
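A sketch producing one row per occurrence (the ✓ character is illustrative; making it green would be a color format on that column in the dashboard):

```
index="600000304_d_gridgain_idx*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval phrase="ReadFileImpl - ebnc event balanced successfully", keyword="true", check="✓"
| table _time phrase keyword check
```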
Here is what I am proposing as a manual workaround to pause some alerts, but not all alerts, during a release weekend/evening, and I'm looking for input.

We have several alerts that are routinely triggered during a release due to no logins, missing data, etc. I can't 'break' the Splunk emailer, as there are other apps that are not doing a release. If I identify the alerts that need to be disabled during a release, I would like to name them so that they are easily identified in the alert search tab. What I am thinking of doing is adding a character at the end of the alert name that makes it obvious it is one of these 'special' alerts, so that alerts not needing to be disabled are not disabled inadvertently.

Does anyone know of any issues with using the 'registered trademark' character in an alert name? ALT 0174 = ®

Then, when searching for alerts, I would search for all alerts whose name includes *®*, disable them, then do the same and re-enable them once the release is over.
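For the search half of the workaround, a sketch of listing the flagged alerts via REST (assumes permission to run "| rest"; the wildcarded namespace is illustrative):

```
| rest /servicesNS/-/-/saved/searches
| search title="*®*"
| table title eai:acl.app disabled
```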
Hi, I have installed splunk_ta_windows via the deployment server, using the UF on Windows clients, and everything is fine. I created an index and pointed to it in inputs.conf, and all looks good. I can also search the data fine, but some sources and sourcetypes are missing when I run the query.
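To see which sourcetypes the index has actually received, a quick sketch (replace the index name with yours):

```
| metadata type=sourcetypes index=your_windows_index
| table sourcetype totalCount lastTime
| eval lastTime=strftime(lastTime, "%F %T")
```

The same search with type=sources shows the sources side.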
I'm trying to send logs from my Linux machine, installed on Hyper-V on Windows, into my Splunk instance, and the data doesn't seem to reach its destination. I have entered the port number in my Splunk instance (Receive data - Configure receiving) and I edited my inputs.conf file, so why can't I see my logs in Splunk?
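A sketch of the forwarder-side pieces to double-check (IP, port, and paths are illustrative; also confirm the firewall allows the port and that "splunk list forward-server" shows the connection as active):

```
# outputs.conf on the Linux forwarder (sketch)
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 192.168.1.10:9997

# inputs.conf: monitor a log path
[monitor:///var/log/syslog]
index = main
sourcetype = syslog
```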
I don't know another way to do it... I had created containers from the Splunk export app for SOAR (don't use that for Mission Control (MC), it got stuck in some kind of loop or something... so gross, but whatever):

export token='YOURAUTOMATIONTOKEN'
while true
do
  curl -s -u ":${token}" 'https://YOURCOMPANY.soar.splunkcloud.com/rest/container?search_fields=id&_filter_artifact_count__lte=0&page_size=2200' \
    | python3 -m json.tool \
    | grep -E "(\bid\b)" \
    | sed 's/.*: //g' \
    | tr -d '\n' \
    | sed -re 's/^/{\"ids\":\[/g' -re 's/,$/]}/g' > ids.txt
  curl -s -X DELETE -u ":${token}" 'https://YOURCOMPANY.soar.splunkcloud.com/rest/container' -d "`cat ids.txt`"
done
I have an index which has 15 hosts and around 15 sourcetypes mapped to all hosts. How can I get the events of only a few selected hosts? (I cannot create a separate index for those hosts.)
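A minimal sketch (index and host names illustrative):

```
index=your_index host IN (host01, host02, host03)
```

The equivalent (host=host01 OR host=host02 OR host=host03) form works the same way.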