All Topics



Hello, I'm trying to experiment with sending data indexed in Splunk to ActiveMQ. I'll probably need to use the JMS Messaging Modular Input (yet to be tested, because I'm trying to output). Has anyone done that already? Could you share some feedback? Thanks! Ema
Hello, we have multiple people working on the content in Splunk Enterprise Security, and I need to be able to find when correlation searches were created. What is the way to find that?
Hi peeps, I need some information about migrating data from an instance in a clustered environment to a new cluster environment. I was unable to find documentation about this process, so I would like to get some advice or pros/cons details from the experts. Please help. Thank you.
I'd like to create a base search in a report that will allow me to do a stats count against the different types of values searched for, i.e. Disk_Error_Count, Power_issue_Count, Severity_Major_Count, etc.:

index=testindex | search ("*$CHKR*" "*disk*") OR "*Power_Issue*" OR "*Severity: Major*" OR "*Processor Down*" OR "*TEST Msg" OR "Report Delivery Failure"

I want to output the values to a lookup.csv. I'm trying to prevent the report from having to hit the index for the individual counts. I have a dashboard that will then output the counts for visualization.
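One common pattern for getting all the counts in a single pass, rather than one search per pattern, is to classify each event with eval and searchmatch, then count by the resulting category. This is only a sketch against the patterns above; the category labels and lookup name are illustrative:

```spl
index=testindex ("*$CHKR*" "*disk*") OR "*Power_Issue*" OR "*Severity: Major*" OR "*Processor Down*" OR "*TEST Msg" OR "Report Delivery Failure"
| eval category=case(
    searchmatch("disk"), "Disk_Error_Count",
    searchmatch("Power_Issue"), "Power_issue_Count",
    searchmatch("Severity: Major"), "Severity_Major_Count",
    true(), "Other_Count")
| stats count by category
| outputlookup counts_lookup.csv
```

The dashboard can then read the precomputed counts with `| inputlookup counts_lookup.csv` without touching the index.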
There are many app in Splunkbase some from well known companies and developers, so I assume those are safe. What about other apps? Are they reviewed by Splunk before being published?
I created this data table with the "mvappend" command. It doesn't have a "_time" column and has only 3 months of records.

MONTH    itemA  itemB  itemC
2022-05  1      2      3
2022-06  4      5      6
2022-07  7      8      9

I want to create a column chart from this data table: x-axis: MONTH, y-axis: value. But I can't do it using the "chart" command. Please let me know how to create it. Sorry if there are any mistakes in this sentence.
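Since mvappend produces multivalue fields on a single row, chart has nothing to split on. One approach is to expand a row index and rebuild one row per month before charting. A sketch, assuming the fields are named as above and each holds three values in the same order:

```spl
| eval row=mvrange(0, mvcount(MONTH))
| mvexpand row
| eval MONTH=mvindex(MONTH, row), itemA=mvindex(itemA, row), itemB=mvindex(itemB, row), itemC=mvindex(itemC, row)
| table MONTH itemA itemB itemC
```

With one row per MONTH, the column-chart visualization can use MONTH as the x-axis and itemA/itemB/itemC as the series directly.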
Hi, I tried to filter events on version 2.30.0 based on my v1.110.0 configuration, but it fails to drop events in version 2. I have also read the document, but somehow it is still not working; maybe there is something I missed. Kindly advise.

SC4S v1.110.0

$ cat vendor_product_by_source.csv
f_null_queue,sc4s_vendor_product,"null_queue"

$ cat vendor_product_by_source.conf
filter f_null_queue {
    host(10.14.1.98) or
    host(10.14.1.99) or
    host("uk-test-intfw*" type(glob))
};

Result: Events from the above hosts are dropped and do not show up in Splunk.

SC4S v2.30.0

$ cat vendor_product_by_source.csv
f_null_queue,sc4s_vendor_product,"null_queue"

$ cat vendor_product_by_source.conf
filter f_null_queue {
    host(10.14.1.98) or
    host(10.14.1.99) or
    host("uk-test-intfw*" type(glob))
};

Result: With the same statement as v1, events still flow into Splunk unfiltered.

I have followed the document and made the changes below:

$ cat vendor_product_by_source.csv
f_cisco_asa,sc4s_vendor_product,cisco_asa
f_fortinet_fortios,sc4s_vendor_product,fortinet_fortios

$ cat vendor_product_by_source.conf
filter f_cisco_asa {
    host(10.14.1.98) or
    host(10.14.1.99)
};
filter f_fortinet_fortios {
    host("uk-test-intfw*" type(glob))
};
Hi team, I have a query where the value returned for the "dateOfBirth" field is "yyyymmdd", like "19911021". Can I format the value returned for "dateOfBirth" as "yyyy/mm/dd", for example "1991/10/21"? Below is my query:

index=hsl_app
| search source="http:dote-hsl-master-hslcheck"
| rex "vaccineFlag\":{\"key\":(?<vaxFlagKey>[0-9]),\"value\":\"(?<vaxFlagValue>[^\"]+)\"}}"
| rex max_match=0 "passengerHashedID\":\"(?<passengerHashedID>[^\"]+)"
| rex max_match=0 "isCertRequired\":\"(?<isCertRequired>[^\"]+)"
| rex max_match=0 "nationalityCode\":\"(?<nationality>[^\"]+)"
| rex max_match=0 "birthDate\":\"(?<dateOfBirth>[^\"]+)"
| rex "odEligibilityStatus\":\"(?<odEligibilityStatus>[^\"]+)"
| rex max_match=0 "\"code\":\"(?<paxErrorCode>[^\"]+)\",\"message\":\"(?<paxErrorMessage>[^\"]+)"
| eval paxCert=mvzip(passengerHashedID, isCertRequired, ",")
| eval od=mvzip(boardPoint, offPoint, "-")
| stats earliest(_time) as _time, values(nationality) as nationality, values(dateOfBirth) as dateOfBirth, values(airlineCode) as airlineCode, values(channelID) as channelID, values(boardPoint) as boardPoint, values(offPoint) as offPoint, values(od) as od, values(odEligibilityStatus) as odEligibilityStatus, values(vaxFlagValue) as vaxFlagValue, list(paxCert) as paxCert, values(paxErrorMessage) as paxErrorMessage, values(APIResStatus) as APIResStatus by requestId
| where airlineCode="SQ"
| where isnotnull(paxCert)
| mvexpand paxCert
| dedup paxCert
| eval paxID=mvindex(split(paxCert,","),0), isCertRequired=mvindex(split(paxCert,","),1)
| stats latest(_time) as _time, values(vaxFlagValue) as vaxFlagValue, values(nationality) as nationality, values(dateOfBirth) as dateOfBirth, sum(eval(if(isCertRequired="Y", 1, 0))) as eligible, sum(eval(if(isCertRequired="N", 1, 0))) as notEligible by od
| where NOT (vaxFlagValue="NONE" OR vaxFlagValue="NO SUPPORT") AND eligible=0
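Reformatting a yyyymmdd string is usually a matter of parsing it with strptime and re-emitting it with strftime. A minimal sketch that could be appended to a query like the one above, assuming dateOfBirth is single-valued at that point:

```spl
| eval dateOfBirth=strftime(strptime(dateOfBirth, "%Y%m%d"), "%Y/%m/%d")
```

If dateOfBirth is multivalue after `stats values(...)`, each value would need to be converted individually, e.g. with `| eval dateOfBirth=mvmap(dateOfBirth, strftime(strptime(dateOfBirth, "%Y%m%d"), "%Y/%m/%d"))` on Splunk 8.0 or later.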
I am using the query below to gather all the request IDs when an error occurs while calling an API. It provides a list of request IDs (over 1000 per hour) in a table format.

index=prod_diamond sourcetype=CloudWatch_logs source=*downloadInvoice* AND *error* NOT ("lambda-warmer")
| fields error.requestId
| rename error.requestId as requestId
| stats values by requestId

I then want to pass all the values from that query into a new query to find which device each requestId is coming from. The new query would look something like this:

*requestId_1* OR *requestId_2* OR ... *requestId_1000* *ChannelName* *lambda*

This query will then be used to find the frequency of each device this error occurs on. Is there a way to pass all the values retrieved from the first query into the second query in that format? Please help.
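One way to feed the first query's results into the second is a subsearch with the format command, which expands into an OR'd list of `requestId="..."` terms. This is a sketch, not a drop-in answer: it assumes requestId is an extracted field in the second dataset (format emits field=value pairs rather than the bare `*requestId*` wildcards shown above), the `stats count by ChannelName` tail is illustrative, and subsearches are capped (typically 10,000 results and a runtime limit), so a very large ID list may be silently truncated:

```spl
index=prod_diamond sourcetype=CloudWatch_logs *ChannelName* *lambda*
    [ search index=prod_diamond sourcetype=CloudWatch_logs source=*downloadInvoice* AND *error* NOT ("lambda-warmer")
      | rename error.requestId as requestId
      | dedup requestId
      | fields requestId
      | format ]
| stats count by ChannelName
```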
I was wondering if anyone has experience installing the AB on a virtual machine? Is this possible? What challenges are there, if any? There is nothing in the docs about this. Thanks in advance.
Hi team, I need your expertise in regex. Below is the raw log. I need to extract the date and time; the only unique markers are the words "START" and "END". The goal is to find the response time between START and END, in a table format. Note: there are no spaces in the log.

START</enteringExiting><logLevel>INFO</logLevel><messageType>LOG</messageType><applicationName>GstarSOA</applicationName<programName>GstarRecipientService_MF</programName><functionName>GetRecipient</functionName><host>PerfNode0</host><messageDetails>2022-06-17 04:10:53/utility/logging"><enteringExiting>END</enteringExiting><logLevel>INFO</logLevel><messageType>LOG</messageType><applicationName>GstarSOA</applicationName><programName>GstarRecipientService_MF</programName<functionName>GetRecipient</functionName><host>PerfNode0</host><messageDetails>2022-06-17 04:10:53
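Assuming both the START and END fragments (each followed by a messageDetails timestamp) land in the same event, as in the sample above, a hedged sketch that extracts both timestamps and computes the difference:

```spl
| rex max_match=0 "(?<phase>START|END)</enteringExiting>.*?<messageDetails>(?<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
| eval start_time=strptime(mvindex(ts, 0), "%Y-%m-%d %H:%M:%S"), end_time=strptime(mvindex(ts, 1), "%Y-%m-%d %H:%M:%S")
| eval response_time=end_time-start_time
| table start_time end_time response_time
```

If START and END arrive as separate events instead, the same rex could feed something like `stats range(_time) as response_time by functionName host` to pair them up.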
We are about to open a Splunk ticket for this issue, but figured we'd check with the community first.

Problem: The tstats command is not seeing all of our indexed data, and queries would suggest that our forwarders are not sending data, which isn't true. We've run multiple queries against the index confirming that the expected data exists and that the fields are indexed. In addition, the hosts show up in the data summary for the index. We are searching within a time range in which events do exist in the index, so it's not as if we are searching for data that never existed. We even performed a restart of the Splunk service and noted that, according to the timestamp of the latest event per host, a significant number of hosts' data in the index stopped being processed by tstats/tsidx. It coincides with the Splunk restart, but the data never becomes visible to tstats again, even after several hours. Other hosts' data is processed as expected, so we have some hosts with current "lastSeen" times:

| tstats count max(_time) as lastSeen where index=windows_sec earliest=-20d@d latest=@m by host
| convert ctime(lastSeen)

Command that results in missing hosts:

| tstats values(host) by index

Similar command that also results in the same "missing" hosts (Fast Mode):

index=* | stats values(host) by index

Changing the above command from Fast to Verbose mode results in all hosts being displayed as expected.

Additional info:
- Splunk v8.2.6; no correlation with different forwarder versions either.
- splunkd.log has been analyzed line by line pre/post Splunk service restart. No leads there.
- Tsidx reduction is (and always has been) disabled for all of our indexes.
- We have seen very similar behavior in other queries where Fast Mode results in missing data, but simply changing the mode to Verbose instantly populates all expected data in the results.
- We have even verified that all fields are identified in the initial "generating" query; no difference in Fast Mode.

This seems like a super basic issue but has completely baffled us for some time, and it is causing serious heartburn and a lack of trust in the data being presented to users. It's almost like a caching issue of some sort, but we are grasping at straws now. Any thoughts/ideas would be welcome. Thanks.
Hi all, I have a multivalue field with a bunch of different values. I want to learn how to pull specific values based on string criteria. For example, the multivalue field may contain "App: A;  sn_ubs;  Owner_Bob; Criticality_3;". How would I create an eval to pull just "sn_ubs" into a new field named SN? I am unsure which manipulation does this. I have tried mvfilter, and that works, but it doesn't break out the value.
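If the field is truly multivalue, mvfilter with match can select just the entries starting with "sn_". A sketch, assuming the field is named my_field (illustrative):

```spl
| eval SN=mvfilter(match(my_field, "^sn_"))
```

If the values are actually one semicolon-delimited string rather than a multivalue field, a regex extraction such as `| rex field=my_field "(?<SN>sn_[^;]+)"` would pull the value out directly instead.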
Splunkers, I just updated my db_connect app, and now all my connections are broken. I think it is forcing SSL now and that has broken them. This is the error it produces:

The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target".

I tried setting the key-value pair to encrypt=false. I then get this error, and my server team says it's no longer using Kerberos:

Login failed for user 'SVCSplunkDBRead'. ClientConnectionId:5fb7a943-44bb-46ce-bf52-63a9c90643df

Any advice on how to fix the issue would be super awesome! I don't think the server team is going to turn on SSL right now.

These are my local confs:

inputs.conf
[http://db-connect-http-input]
disabled = 0

db_connection.conf
[EEHProd]
connection_type = generic_mssql_kerberos
database = EnterpriseExceptionSystem
disabled = 0
host = SQLSERVER
identity = SplunkDBRead
jdbcUseSSL = true
localTimezoneConversionEnabled = false
port = 1433
readonly = true
timezone = America/Denver
customizedJdbcUrl = jdbc:sqlserver://SQLSERVER:1433;databaseName=EnterpriseExceptionSystem;selectMethod=cursor;encrypt=true;MultiSubNetFailover=True

identities.conf
[SplunkDBRead]
disabled = 0
domain_name = ipce
password = somepassword
use_win_auth = true
username = SVCSplunkDBRead
identity_type = normal
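One workaround worth testing, assuming the PKIX error comes from SQL Server presenting a certificate the JVM does not trust: keep encryption on but skip certificate validation with trustServerCertificate, a standard Microsoft JDBC driver connection property. This is a sketch of the single changed line, not a vetted fix, and it trades away certificate checking:

```ini
customizedJdbcUrl = jdbc:sqlserver://SQLSERVER:1433;databaseName=EnterpriseExceptionSystem;selectMethod=cursor;encrypt=true;trustServerCertificate=true;MultiSubNetFailover=True
```

The cleaner long-term fix would be importing the SQL Server certificate (or its CA) into the JVM truststore used by DB Connect.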
When I add this case statement to my search, all results for Severity are "Other". What did I miss?

| eval Severity=case(score>=0.1 AND score<=3.9, "Low", score>=4.0 AND score<=6.9, "Medium", score>=7.0 AND score<=8.9, "High", score>=9.0 AND score<=10.0, "Critical", true(), "Other")
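Two things commonly cause the always-"Other" result: score being extracted as a string rather than a number (string comparisons never satisfy the numeric conditions), and scores falling into the gaps between the ranges (e.g. 3.95 matches nothing). A hedged sketch that forces a numeric comparison and closes the gaps; since case evaluates its conditions in order, the later branches need no lower bound:

```spl
| eval score=tonumber(score)
| eval Severity=case(score>=0.1 AND score<4.0, "Low",
                     score<7.0, "Medium",
                     score<9.0, "High",
                     score<=10.0, "Critical",
                     true(), "Other")
```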
Hello gurus, I'm trying to return a percentage from the results of subsearches. The values User_count and Device_count are numerical, but the eval returns nothing. If I convert either of the values to a number and leave the other named, the eval works. Could you offer a suggestion to make this search work, please? Thank you!

index="test" earliest=-2d@d latest=-1d@d
| dedup User
| stats count(User) as User_count
| append [search index="test" | stats dc(SerialNumber) as Device_count]
| eval perc=round(User_count/Device_count*100, 2)."%"
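append puts Device_count on a different row from User_count, so the eval only ever sees one of the two fields at a time. Collapsing the two rows with a final stats before the eval is one way to make the division work. A sketch based on the search above (dc(User) is used as an equivalent of the dedup-then-count, and the subsearch time range is repeated on the assumption both counts should cover the same window):

```spl
index="test" earliest=-2d@d latest=-1d@d
| stats dc(User) as User_count
| append [search index="test" earliest=-2d@d latest=-1d@d | stats dc(SerialNumber) as Device_count]
| stats values(User_count) as User_count, values(Device_count) as Device_count
| eval perc=round(User_count/Device_count*100, 2)."%"
```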
The newly released Splunk 9 introduced an error about invalid stanzas in `federated.conf`. Does anybody know how to fix this?

Invalid key in stanza [provider:splunk] in /opt/splunk/etc/system/default/federated.conf, line 20: mode (value: standard).
Invalid key in stanza [general] in /opt/splunk/etc/system/default/federated.conf, line 23: needs_consent (value: true).
Hello, I am a bit confused here. I have a search that runs and creates a multivalue field called "tags{}.name", pulled from JSON data. However, when I then use the output of that search in a different search, the field is no longer multivalue and breaks if I try to split it. I need to either make this field delimited or ensure it remains a multivalue field. Any help?

Search 1: the field is multivalue.
Search 2: the field is no longer multivalue after using the lookup.
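Since multivalue fields tend to get flattened on the way through a lookup file, one workaround is to store the tags as a single delimited string before writing the lookup and rebuild the multivalue field with makemv after reading it back. A sketch with illustrative lookup, key, and field names; the "|" delimiter assumes it never appears inside a tag:

```spl
| rename "tags{}.name" as tag_names
| eval tag_names=mvjoin(tag_names, "|")
| outputlookup my_tags.csv
```

Then, in the consuming search:

```spl
| lookup my_tags.csv key OUTPUT tag_names
| makemv delim="|" tag_names
```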
Hi friends, do you know what roles or capabilities I need to set action.email = true for an alert via script in Splunk Cloud? I appreciate it, thanks.
Hello, I have a new installation of Splunk 9.x. The instance is hosted on Ubuntu, on Azure Cloud. There is no public IP associated with the instance; it can only be accessed via the associated private IP address (peering is established between Azure and my internal company network). I tried to telnet to the instance on port 8000, and it accepts connections. In parallel, when I launch tcpdump and refresh the browser, I can see packets in tcpdump. In spite of this, I am unable to access the instance via the console; I get CONNECTION RESET in the browser. Please advise. Thanks, Siddarth