All Topics

Hi, Splunkers. I need some ideas for showing a "Time Snap result" and the "latest status change time by host". The source data looks like this; each host sends OK/NG every 5 or 15 minutes depending on its status (if NG, it sends every 5 minutes; if OK, every 15 minutes):

time, host, Status
2023/06/13 23:41:09, A, OK
2023/06/13 23:39:17, B, NG
2023/06/13 23:43:31, C, OK
2023/06/13 23:34:17, B, NG
2023/06/13 23:36:03, A, NG
2023/06/13 23:29:17, B, NG
2023/06/13 23:31:10, A, NG
2023/06/13 23:24:17, B, OK
2023/06/13 23:28:31, C, OK
2023/06/13 23:26:49, A, NG
2023/06/13 23:10:29, A, OK
2023/06/13 23:09:17, B, OK
2023/06/13 23:13:31, C, NG

I want two types of results, like:

<result1: Time Snap result>
Time, NumberOfOK, NumberOfNG, NG Host
2023/06/13 23:15, 3, 0,
2023/06/13 23:20, 3, 0,
2023/06/13 23:25, 3, 0,
2023/06/13 23:30, 2, 1, A B
2023/06/13 23:35, 2, 1, A B
2023/06/13 23:40, 2, 1, A B
2023/06/13 23:45, 2, 2, B

<result2: latest status change time by host>
host, NG_from, lastChangeTo_OK
A, 2023/06/13 23:26:49, 2023/06/13 23:41:09
B, 2023/06/13 23:40:17,
C,,

I'm trying to get what I need using "bin" to round the times, and so on, but I can't work it out so far. Any idea is very helpful. Thank you for your time.
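A rough sketch of one approach for result 1, snapping events into 5-minute bins and counting the latest status per host in each bin (the index name `your_index` is a placeholder; note that bins where a host did not report will be missing that host unless you carry its last known status forward, e.g. with `streamstats`/`filldown` per host):

```spl
index=your_index
| bin _time span=5m
| stats latest(Status) as Status by _time, host
| stats count(eval(Status="OK")) as NumberOfOK
        count(eval(Status="NG")) as NumberOfNG
        values(eval(if(Status="NG", host, null()))) as "NG Host"
  by _time
```

For result 2, a similar sketch would sort per host by time and use `streamstats current=f last(Status)` to detect where the status changes, then keep the latest change per host.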
Hello Splunkers, To remove the old decommissioned UFs and stop the annoying missing-forwarder alert "DMC Alert - Missing forwarders", we need to rebuild the forwarder assets. The issue is that even after doing so, the table still contains the old decommissioned UFs. How do we solve this?
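In case it helps with diagnosis: the forwarder table in the Monitoring Console is driven by a stored asset lookup, so it may be worth checking whether the stale entries survive in that lookup after the rebuild. A sketch; the lookup name `dmc_forwarder_assets` comes from the splunk_monitoring_console app and may differ by version, and the `hostname` field name is an assumption:

```spl
| inputlookup dmc_forwarder_assets
| search hostname="*decommissioned-uf-name*"
```

If the old UFs are still in the lookup, the rebuild is not actually regenerating it, which narrows down where to look next.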
Hello all,

So, we ran into (yet) another issue with Splunk... We have provisioned:
- 1 Cluster Manager / Deployer
- 2 indexer peers
- 2 search heads

There is another search head (standalone, not in our cluster) that wants to reach our indexers over the Internet. However, our indexers and cluster manager are behind a NAT. So, we exposed the Cluster Manager and set up the corresponding translation to reach it (the way it should be done). We also added the register_search_address option in server.conf under the [clustering] stanza, and put in the FQDN (so the NAT can translate to each of the indexers). But what does this genius Splunk Manager Node do instead: it sends the IP address, of course! The internal one, so as to make sure the search head will never be able to reach any peers. So we get these errors:

ERROR DistributedPeerManagerHeartbeat [1665 DistributedPeerMonitorThread] - Status 502 while sending public key to cluster search peer https://10.X.X.X:8089:
ERROR DistributedPeerManagerHeartbeat [1665 DistributedPeerMonitorThread] - Send failure while pushing public key to search peer = https://10.X.X.X:8089 , Connect Timeout
ERROR DistributedPeerManagerHeartbeat [1665 DistributedPeerMonitorThread] - Unable to establish a connection to peer 10.X.X.X. Send failure while pushing public key to search peer = 10.X.X.X.

For reference, the configuration done on each indexer peer:

[clustering]
manager_uri = https://fqdn-of-managernode:8089
mode = peer
pass4SymmKey = blablabla
register_search_address = https://fqdn-of-current-indexer-node

Shall we file a bug? Thank you in advance for your answer!
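One thing worth double-checking before filing a bug: the server.conf spec describes `register_search_address` as a plain address (an IP or fully qualified machine/domain name), not a URI, so a value with an `https://` scheme may be rejected or ignored. A sketch of the peer-side stanza under that assumption:

```ini
[clustering]
manager_uri = https://fqdn-of-managernode:8089
mode = peer
pass4SymmKey = blablabla
# Bare FQDN, no scheme or port: register_search_address expects an
# address, not a URL
register_search_address = fqdn-of-current-indexer-node
```

If the peers still advertise the internal IP after restarting with the bare FQDN, that would be a stronger case for a support ticket.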
I have an IP which is sending and receiving traffic, displayed in a timechart:

192.168.1.1 | timechart c by avg(bytes)

If this IP stops sending traffic, how do I set up an alert for that? I searched many topics but cannot find a solution.
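One common pattern for "absence of data" alerting is a scheduled search that counts events from that IP over the window and only produces a result when the count is zero; a sketch, assuming a hypothetical index `netfw` and field `src_ip` (swap in your own names):

```spl
index=netfw src_ip=192.168.1.1 earliest=-15m
| stats count
| where count=0
```

Scheduled every 15 minutes with a trigger condition of "number of results > 0", this fires only when no traffic was seen in the window (`stats count` returns a single row with count=0 even when the base search matches nothing).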
Suppose there are 10 events as "raw text" in Splunk in the last 7 days, as below:

Event 1: 7/11/23 5:28:33.265 PM "host":"111.123.23.34","level":1,"msg":"cricket score : 10","time":"2023-07-11T17:28:33.265Z"
Event 2: 7/11/23 6:28:33.265 PM "host":"111.123.23.34","level":2,"msg":"cricket score : 20","time":"2023-07-11T18:28:33.265Z"
Event 3: 7/12/23 5:28:33.265 PM "host":"111.123.23.34","level":3,"msg":"cricket score : 30","time":"2023-07-12T17:28:33.265Z"
Event 4: 7/12/23 6:28:33.265 PM "host":"111.123.23.34","level":4,"msg":"cricket score : 40","time":"2023-07-12T18:28:33.265Z"
Event 5: 7/13/23 5:28:33.265 PM "host":"111.123.23.34","level":5,"msg":"cricket score : 50","time":"2023-07-13T17:28:33.265Z"
Event 6: 7/13/23 6:28:33.265 PM "host":"111.123.23.34","level":1,"msg":"cricket score : 10","time":"2023-07-13T18:28:33.265Z"
Event 7: 7/14/23 5:28:33.265 PM "host":"111.123.23.34","level":2,"msg":"cricket score : 20","time":"2023-07-14T17:28:33.265Z"
Event 8: 7/14/23 6:28:33.265 PM "host":"111.123.23.34","level":3,"msg":"cricket score : 30","time":"2023-07-14T18:28:33.265Z"
Event 9: 7/15/23 5:28:33.265 PM "host":"111.123.23.34","level":4,"msg":"cricket score : 40","time":"2023-07-15T17:28:33.265Z"
Event 10: 7/15/23 6:28:33.265 PM "host":"111.123.23.34","level":5,"msg":"cricket score : 50","time":"2023-07-15T16:28:33.265Z"

So I need to create a Splunk query to get output as below in table format: for each date, the sum of the cricket score on that particular date.

Date - Total Cricket Score
2023-07-11 - 30
2023-07-12 - 70
2023-07-13 - 60
2023-07-14 - 50
2023-07-15 - 90

I'd appreciate your help with this.
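A sketch of one way to do this: extract the score from the raw text with `rex`, derive a calendar date from `_time`, and sum per date (the index name is a placeholder):

```spl
index=your_index "cricket score"
| rex field=_raw "cricket score : (?<score>\d+)"
| eval Date=strftime(_time, "%Y-%m-%d")
| stats sum(score) as "Total Cricket Score" by Date
```

Using `_time` assumes the events were indexed with the timestamp shown at the front of each event; if not, the `time` field in the JSON could be parsed with `strptime` instead.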
How can I perform a lookup from an index search with dbxquery?

index=vulnerability_index | table ip_address, vulnerability, score

ip_address, vulnerability, score
192.168.1.1, SQL Injection, 9
192.168.1.1, OpenSSL, 7
192.168.1.2, Cross Site-Scripting, 8
192.168.1.2, DNS, 5

| dbxquery query="select * from tableCompany"

ip_address, company, location
192.168.1.1, Comp-A, Loc-A
192.168.1.2, Comp-B, Loc-B
192.168.1.5, Comp-E, Loc-E

After looking up the IP in dbxquery:

ip_address, company, location, vulnerability, score
192.168.1.1, Comp-A, Loc-A, SQL Injection, 9
192.168.1.1, Comp-A, Loc-A, OpenSSL, 7
192.168.1.2, Comp-B, Loc-B, Cross Site-Scripting, 8
192.168.1.2, Comp-B, Loc-B, DNS, 5

Thank you so much
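One pattern that often works is to run `dbxquery` as a subsearch and join on `ip_address`; a sketch, under the assumption that the result set from `tableCompany` is small enough for `join` (field names copied from the tables above):

```spl
index=vulnerability_index
| table ip_address, vulnerability, score
| join type=left ip_address
    [| dbxquery query="select * from tableCompany"
     | fields ip_address, company, location]
| table ip_address, company, location, vulnerability, score
```

For larger database tables, an alternative is to export the dbxquery results to a lookup on a schedule and use the `lookup` command instead of `join`.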
How can I use the Splunk SDK / REST API to get a list of alerts and reports? For example, the page below shows a total of 269 alerts. I would like to access these alerts with all their metadata (such as the underlying query). Thanks!
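Alerts and reports are both saved searches, so the `saved/searches` REST endpoint returns them with their metadata, including the search string. A sketch using the `rest` SPL command (the filter expressions for separating alerts from reports are illustrative and may need adjusting):

```spl
| rest /servicesNS/-/-/saved/searches
| search alert_type!=always OR alert.track=1
| table title, search, cron_schedule, alert_type, actions
```

The same endpoint is exposed through the SDKs (e.g. `service.saved_searches` in the Splunk Python SDK), which is the route to take from an external script.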
Hi: We have some applications that authenticate users with Microsoft Azure B2C (OpenID). With business transaction correlation enabled, users authenticating with these applications receive the authentication error "IDX21323: RequireNonce was Null". For some reason the OpenID Connect nonce cookie is not being delivered to the user's web browser. Checking the cookies, we see that only some ADRUM_BT cookies are set. The only workaround for the time being has been to disable business transaction correlation completely. Any ideas on how to disable or fix this issue without requiring a complete disabling of the business transaction correlation feature? Thanks, Roberto
I have been trying to figure this out but keep getting stumped. I have seen other similar questions, but they are all slightly different. I have read in other posts to try not to use outer joins. I have two queries that execute the same search over two different time frames, and I want to find only those LoginUserIDs that exist in Query 1 but not in Query 2, which would tell me the user is currently active but was not active before...

Query 1:
index=anIndex sourcetype=aSourcetype earliest=-17m@m latest=-2m@m
| rex field=_raw "^(?:[^,\n]*,){2}(?P<LoginUserID>\w+\.\w+)"
| dedup host LoginUserID
| sort host LoginUserID
| table host LoginUserID

Query 2:
index=anIndex sourcetype=aSourcetype earliest=-32m@m latest=-17m@m
| rex field=_raw "^(?:[^,\n]*,){2}(?P<LoginUserID>\w+\.\w+)"
| dedup host LoginUserID
| sort host LoginUserID
| table host LoginUserID

I have been trying to figure out a way to use the count of occurrences of LoginUserID: if it equals 1, that tells me it only exists in one query, but I'm stumped there too. Am I close, or am I thinking about this the wrong way?
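Instead of a join, one option is a single search over the full 30-minute window that tags each event's period and keeps users who only appear in the recent period; a sketch (regex and placeholders copied from the queries above):

```spl
index=anIndex sourcetype=aSourcetype earliest=-32m@m latest=-2m@m
| rex field=_raw "^(?:[^,\n]*,){2}(?P<LoginUserID>\w+\.\w+)"
| eval period=if(_time >= relative_time(now(), "-17m@m"), "current", "previous")
| stats values(period) as periods by host, LoginUserID
| where mvcount(periods)=1 AND periods="current"
| table host, LoginUserID
```

This keeps the "occurrence count" intuition from the question: a user seen only in the `current` period is newly active.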
I'm trying to specify a single stanza in props.conf, with FIELDALIAS and EVAL expressions, for two different sourcetypes, "Snare:Security" and "XmlWinEventLog". However, when I use an OR pipe to specify both sourcetypes in the [<spec>], like so:

[Snare:Security|XmlWinEventLog]

neither sourcetype has the rules applied to it. Inspecting "source types" on my search head shows that the rules have been applied to the sourcetype "Snare:Security|XmlWinEventLog" instead of to both of the individual sourcetypes. Am I not using the pipe correctly? Per the Splunk documentation:

[<spec>] stanza patterns: When setting a [<spec>] stanza, you can use the following regex-type syntax:
... recurses through directories until the match is met, or equivalently, matches any number of characters.
* matches anything but the path separator 0 or more times. The path separator is '/' on Unix, or '\' on Windows. Intended to match a partial or complete directory or filename.
| is equivalent to 'or'
( ) are used to limit the scope of |.
\\ matches a literal backslash '\'.

It seems like it should work. I've tried placing parentheses around the whole expression and around each individual sourcetype.
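For what it's worth, my reading of the props.conf spec is that the regex-type syntax quoted above applies to `host::` and `source::` stanza patterns; plain sourcetype stanzas are matched literally, which would explain the behavior you're seeing. The usual workaround is to duplicate the stanza per sourcetype; a sketch with placeholder setting names:

```ini
[Snare:Security]
FIELDALIAS-myalias = original_field AS new_field
EVAL-myfield = coalesce(fieldA, fieldB)

[XmlWinEventLog]
FIELDALIAS-myalias = original_field AS new_field
EVAL-myfield = coalesce(fieldA, fieldB)
```

Duplication is clunky but deterministic; some deployments generate such stanzas from a template to keep the two copies in sync.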
I have a list of the below hosts in a CSV:

uasws12
usaws120
usaws11
usaws13
susaws13
usaws130
usaws14
usaws15
usaws16
usaws17
usaws173
usaws18
tusaws18

The output should be the following: if a host appears elsewhere in the list with some preceding or succeeding characters (including digits and letters), then it should be displayed in another column of the output. Please help me with the query.

Expected output:
uasws12
usaws13
usaws17
usaws18
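A sketch of one way to compare every host against every other host in the list, assuming the CSV has a single `host` column, that Splunk 8.0+ is available for `mvmap`, and that host values contain no regex metacharacters:

```spl
| inputlookup hosts.csv
| eventstats values(host) as all_hosts
| eval variant=mvmap(all_hosts,
    if(all_hosts!=host
       AND (match(all_hosts, ".+" . host . "$") OR match(all_hosts, "^" . host . ".+")),
       all_hosts, null()))
| where isnotnull(variant)
| table host, variant
```

Each base host that has a longer prefixed or suffixed variant elsewhere in the list is kept, with the matching variant(s) shown in the second column.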
|tstats count where index=app-data (TERM(Errors) TERM(Started) TERM(in) TERM(*s) TERM(*ms)) OR (TERM(system) TERM(restart))

When I run the above query, I get overall (combined) results, but I want to see the results separately for each string I mentioned in the query. How can I do that?
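Since `tstats` produces a single count for the whole `where` clause, one way to get per-string counts is to run one `tstats` per term group and `append` them with a label; a sketch (the group labels and term choices are illustrative):

```spl
| tstats count where index=app-data TERM(Errors) TERM(Started)
| eval group="Errors/Started"
| append
    [| tstats count where index=app-data TERM(system) TERM(restart)
     | eval group="system restart"]
| table group, count
```

Each subsearch carries its own label, so the final table shows one row per string group rather than one combined total.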
I have ingested configuration information from WebSphere Application Server; specifically, app server configuration data. The events look like the following:

{
  Attributes: {
    -DFileNet.Content.DownloadServerAffinityEnabled: TRUE
    -DFileNet.Content.GetBlockSizeKB: 4096
    -DFileNet.Content.PutBlockSizeKB: 4096
    -DFileNet.Content.UploadServerAffinityEnabled: TRUE
    -DFileNet.WSI.AutoDetectLTPAToken: is not set
    -Dappdynamics.agent.applicationName: is not set
    -Dappdynamics.agent.nodeName: is not set
    -Dcom.filenet.authentication.token.userid: is not set
    -Dcom.filenet.authentication.wsi.AutoDetectAuthToken: is not set
    -Dcom.filenet.repositoryconfig.allowWSIOnWAS: is not set
    -Dcom.ibm.mq.cfg.jmqi.UnmappableCharacterAction: is not set
    -Dcom.ibm.mq.cfg.jmqi.UnmappableCharacterReplacement: is not set
    -Dfilenet.pchconfig: is not set
    -Djava.awt.headless: TRUE
    -Djava.security.auth.login.config: ${USER_DIR}/DocumentRepository/jaas.conf.WebSphere
    -Djaxws.payload.highFidelity: is not set
    -Xdump: system:none
    -Xgcpolicy: is not set
    -Xmn2048M: is not set
    -Xmn512M: is not set
    -Xquickstart: is set
    -Xverbosegclog: is not set
    -javaagent: is not set
    -server: is set
    genericJvmArguments: -Xgcpolicy:gencon -Djaxws.payload.highFidelity=true -Dfilenet.pchconfig=${USER_DIR}\HJIPDash\PchConfig.properties -Xmn512M -Dcom.filenet.authentication.token.userid=sso:ltpa -DFileNet.WSI.AutoDetectLTPAToken=true -Dcom.filenet.authentication.wsi.AutoDetectAuthToken=true -Dcom.filenet.repositoryconfig.allowWSIOnWAS=true -DFileNet.Content.PutBlockSizeKB=10240 -DFileNet.Content.GetBlockSizeKB=10240
  }
  Env: UAT
  Object: HJn8server3 (HJn8, UAT-230612150440)
  SectionName: JVM Arguments
}

The event's fields are made up of the following:
Object: the app server name
SectionName: the various sections of an application server, for example "JVM Configuration"
Attributes: all of the configuration settings for a given SectionName

I have been unable to traverse the "Attributes" field of these events.
I have tried making the Attributes field into a JSON array and/or object, but have had no luck. My search code has gotten so convoluted that I don't know where to start. The code I use to create a table listing the attributes of two app servers is:

index=websphere_cct SectionName
    [ search index=websphere_cct
      | dedup SectionName
      | streamstats count as "RowCount"
      | eval newfield="panel".RowCount
      | head 1
      | tail
      | head 1
      | table SectionName]
Object=" HJn6server1 (HJn6, PROD-230612151857)" OR Object=" HJn4server1 (HJn4, PROD-230612151857)"
| table Object, SectionName, Attributes.*
| transpose 0 header_field=Object
| fields - Attributes.*

This produces the table:
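One way to unpack the dynamic `Attributes.*` fields into name/value rows is `foreach` with `mvappend`, then `mvexpand`; a sketch (`<<MATCHSTR>>` and `<<FIELD>>` are the standard `foreach` templates, the rest is illustrative for the events above):

```spl
index=websphere_cct SectionName="JVM Arguments"
| foreach Attributes.*
    [ eval attrs=mvappend(attrs, "<<MATCHSTR>>=" . '<<FIELD>>') ]
| mvexpand attrs
| rex field=attrs "^(?<attr_name>[^=]+)=(?<attr_value>.*)$"
| table Object, SectionName, attr_name, attr_value
```

Once each attribute is its own row, comparing two app servers becomes a `stats values(...) by attr_name` or `chart` over `Object` rather than a `transpose` over wide columns.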
I have installed Splunk add on for AWS and created the inputs, which have a listed source type. However, when I try to search that source type, nothing comes up for the source. How can I fix this?
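A quick way to check whether the data is arriving at all, and under which index and sourcetype it actually landed, is a metadata search across the indexes you can see; a sketch:

```spl
| tstats count where index=* earliest=-24h by index, sourcetype
| sort - count
```

If the events show up under a different index or sourcetype than the one configured on the input, the search can be adjusted accordingly; if nothing shows up, the problem is on the collection side rather than the search side.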
Hi all, We have a scripted input, and when its execution starts we keep getting "INFO prior run of stanza 'ExchangeClientExperience' is still in progress. Skip this one". If anyone has experienced this and found a resolution, please help us too.
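That message usually means the script's runtime exceeds its scheduled `interval`, so the next run is skipped while the previous one is still executing. If that's the case here, raising the interval in inputs.conf is one option; a sketch (the stanza path and script name are guesses at your setup):

```ini
[script://./bin/ExchangeClientExperience.ps1]
# interval in seconds; should comfortably exceed the script's
# worst-case runtime so runs never overlap
interval = 900
```

The other direction is to speed the script up or have it time out, so a hung run doesn't block all subsequent ones.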
Hi, I am working on a task: calculating the percentage of employees working in the food industry for each country. I tried to develop the code below, but it's not working. Please point out where the mistake is.

| stats sum("Number of employees") as Total_Emp by Country
| where Industry="Food*"
| stats sum("Number of employees") as Food_Emp by Country
| eval Percent = round((Food_Emp/Total_Emp)*100,2)."%"
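The `where Industry="Food*"` clause can't work after the first `stats`, because `stats` keeps only the fields it outputs (`Total_Emp` and `Country`), so `Industry` and `Number of employees` are gone by then. One fix is to compute both sums in a single `stats` pass; a sketch (base search omitted as in the original, wildcard matching moved into an `eval`):

```spl
| eval food_emp=if(like(Industry, "Food%"), 'Number of employees', 0)
| stats sum('Number of employees') as Total_Emp, sum(food_emp) as Food_Emp by Country
| eval Percent=round((Food_Emp/Total_Emp)*100, 2) . "%"
```

Note the single quotes around 'Number of employees' inside `eval`: that is how a field name containing spaces is referenced in an expression, whereas double quotes would produce a string literal.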
Hello All, I had installed a UF on a Windows server in our environment, reporting to our deployment server. Recently the hostname on it was changed, and it stopped reporting to the DS. I referred to this link and tried to run the below commands on the CLI: https://community.splunk.com/t5/Getting-Data-In/How-can-I-change-the-default-hostname-in-Splunk/m-p/117398

./splunk set servername foo.domain.com
./splunk set default-hostname foo.domain.com

However, the Windows CLI does not recognize these commands. Error: "'./splunk' is not recognized as an internal or external command, operable program or batch file." Can you all kindly help me understand what the issue could be? I ran these commands after going to the path where the Splunk UF is installed, in the /etc/system/local/ directory. But I am still getting this error; I am not sure if I am missing a step, or whether it is a path error, and why these commands are not working. We do not want to touch the configuration files; we would like to make the changes through the CLI itself. Can someone please help me understand the steps that need to be followed? Thanks a ton in advance.
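The `./splunk` form is Unix shell syntax; on Windows the binary is `splunk.exe`, and it lives in the forwarder's `bin` directory, not under `etc/system/local`. A sketch, assuming the default install path (adjust if yours differs):

```shell
cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk.exe set servername foo.domain.com
splunk.exe set default-hostname foo.domain.com
splunk.exe restart
```

These CLI commands write the corresponding settings into the forwarder's configuration for you, so no manual file editing is needed.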
How do I merge the below 2 complex queries? Let me know if it's possible in Splunk.

Search 1:

index=ABC (eventtype=X OR eventtype=Y) log_subtype=DEF field_A="*SQL*"
| stats values(A) as A values(B) as B values(C) as C BY X, Y
| where B > 2
| search NOT [|inputlookup test_1.csv | fields X ]
| search NOT [|inputlookup test_2.csv | fields X ]
| eval name="search_1"

Search 2:

index=ABC (log_subtype="GHI" OR log_subtype="JKL") (severity="medium" OR severity="high" OR severity="critical") action=* NOT (field_B="Unknown(5000007)" AND action="blocked") NOT dest_ip="11.22.33.44"
| stats values(D) as D values(E) as E values(A) as A BY X, Y
| eval name="search_2"

I succeeded in merging the 2 searches to some extent (up to the stats command):

index=ABC (log_subtype="DEF" OR log_subtype="GHI" OR log_subtype="JKL") (((eventtype=X OR eventtype=Y) field_A="*SQL*") OR ((severity="medium" OR severity="high" OR severity="critical") action=* NOT (field_B="Unknown(5000007)" AND action="blocked") NOT dest_ip="11.22.33.44" ))
| stats values(A) as A values(B) as B values(C) as C values(D) as D BY X, Y

I am not sure how to apply the where condition and the exclusion lookups from search 1 while combining, as they are specific to search 1 and I do not want to apply them to search 2.
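One way is to tag each event with its originating branch before the shared `stats`, group by that tag as well, and then apply the search-1-only filters conditionally; a sketch (the subsearch exclusions use the `search` command, as in the originals, since `where` does not support subsearches):

```spl
index=ABC (log_subtype="DEF" OR log_subtype="GHI" OR log_subtype="JKL")
    (((eventtype=X OR eventtype=Y) field_A="*SQL*")
     OR ((severity="medium" OR severity="high" OR severity="critical") action=*
         NOT (field_B="Unknown(5000007)" AND action="blocked")
         NOT dest_ip="11.22.33.44"))
| eval name=if(log_subtype="DEF", "search_1", "search_2")
| stats values(A) as A values(B) as B values(C) as C values(D) as D values(E) as E
  BY X, Y, name
| search name="search_2"
    OR (name="search_1" B>2
        NOT [| inputlookup test_1.csv | fields X ]
        NOT [| inputlookup test_2.csv | fields X ])
```

Grouping by `name` keeps the two branches as separate rows, which is what lets the `where B > 2` condition and the lookup exclusions apply to the search_1 rows only.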
Hello, I'm sending JSON data to the HTTP Event Collector. When I execute searches, all the non-metadata fields have duplicated values, which causes tons of issues when doing sums, counts... On my Splunk Cloud instance I set up my source type, playing with the KV_MODE, INDEXED_EXTRACTIONS, and AUTO_KV_JSON settings, but with no success... Let me know what could be wrong? Thanks for your help.
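Duplicated field values typically appear when the JSON is extracted twice: once at index time (via `INDEXED_EXTRACTIONS = json`, or the fields of the HEC `/event` endpoint) and once at search time (via `KV_MODE = json` or `AUTO_KV_JSON`). Keeping only one of the two usually fixes it; a sketch of a sourcetype relying on index-time extraction only (the stanza name is a placeholder):

```ini
[my_hec_json]
INDEXED_EXTRACTIONS = json
# disable search-time JSON extraction so fields are not extracted twice
KV_MODE = none
AUTO_KV_JSON = false
```

The inverse (dropping `INDEXED_EXTRACTIONS` and keeping `KV_MODE = json`) also works; the point is to have exactly one extraction path.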
Hello All, We have a custom alert action (built with the Splunk Add-on Builder) that sends search results to a HEC input. We have heartbeat searches that trigger the alert action periodically to ensure different hosts are functioning appropriately. We have seen behavior where the SID is not passed to the alert action. The logs we are seeing are as follows: To me, this seems to indicate that the alert action is not even being triggered, due to the lack of an SID (hence the blank alert_actions field, which is normally populated by the name of our alert action). Has anyone else run into this? Any tips for how to resolve it? Thank you.