All Topics

Hello All,

I want to build a Splunk query using stats to get the count of messages for the last 5 minutes, last 10 minutes, and last 15 minutes, something like the table below. Kindly let me know how this can be achieved.

Transaction | Last 5min Vol | Last 10min Vol | Last 15min Vol
A           |               |                |
B           |               |                |
C           |               |                |
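One possible approach is the minimal sketch below, which assumes an index named your_index and an existing Transaction field; count(eval(...)) counts only the events that satisfy the condition:

index=your_index earliest=-15m
| eval age = now() - _time
| stats count(eval(age<=300)) as "Last 5min Vol"
        count(eval(age<=600)) as "Last 10min Vol"
        count(eval(age<=900)) as "Last 15min Vol"
        by Transaction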
Hello Splunkers!! I want to run Splunk over HTTPS. I am using Windows Server. Is there an easy set of steps for generating a certificate and truststore in Splunk? I followed the document below, but it did not give good results.

https://docs.splunk.com/Documentation/Splunk/9.2.1/Security/Howtoself-signcertificates
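For Splunk Web specifically, the usual pattern is to point web.conf at a certificate and private key. A minimal sketch, assuming the certificate files were already generated and placed under $SPLUNK_HOME\etc\auth\mycerts (the file names and location are assumptions):

# $SPLUNK_HOME\etc\system\local\web.conf
[settings]
enableSplunkWebSSL = true
privKeyPath = $SPLUNK_HOME\etc\auth\mycerts\mySplunkWebPrivateKey.key
serverCert = $SPLUNK_HOME\etc\auth\mycerts\mySplunkWebCertificate.pem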
index=app-index source=application.logs
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| rex field=_raw "(?<Message>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
| chart count over RampdataSet by Message

OUTPUT:

RampdataSet | Initial message received with below details | Letter published correctley to ATM subject | Letter published correctley to DMM subject | Letter rejected due to: DOUBLE_KEY | Letter rejected due to: UNVALID_LOG | Letter rejected due to: UNVALID_DATA_APP
WAC | 10 | 0  | 0  | 10 | 0  | 10
WAX | 30 | 15 | 15 | 60 | 15 | 60
WAM | 22 | 20 | 20 | 62 | 20 | 62
STC | 33 | 12 | 12 | 57 | 12 | 57
STX | 66 | 30 | 0  | 96 | 0  | 96
OTP | 20 | 10 | 0  | 30 | 0  | 30
TTC | 0  | 5  | 0  | 5  | 0  | 5
TAN | 0  | 7  | 0  | 7  | 0  | 7

But we want the output as shown below, where

Total = "Letter published correctley to ATM subject" + "Letter published correctley to DMM subject" + "Letter rejected due to: DOUBLE_KEY" + "Letter rejected due to: UNVALID_LOG" + "Letter rejected due to: UNVALID_DATA_APP"

| table "Initial message received with below details" Total

RampdataSet | Initial message received with below details | Total
WAC | 10 | 20
WAX | 30 | 165
WAM | 22 | 184
STC | 33 | 150
STX | 66 | 222
OTP | 20 | 70
TTC | 0  | 15
TAN | 0  | 21
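One way to get that shape is addtotals with an explicit field list, appended to the existing chart; a sketch built on the query above:

| chart count over RampdataSet by Message
| addtotals fieldname=Total "Letter published correctley to ATM subject" "Letter published correctley to DMM subject" "Letter rejected due to: DOUBLE_KEY" "Letter rejected due to: UNVALID_LOG" "Letter rejected due to: UNVALID_DATA_APP"
| table RampdataSet "Initial message received with below details" Total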
Hi,

I'm connecting to a Vertica database. The latest JDBC driver has been installed and connects to an older Vertica instance. I set up an Identity with the username and password, but when I tried to create a Connection, it failed with an authentication warning.

My workaround for now was to edit the JDBC URL manually via the interface and add the user and password parameters, as shown below:

jdbc:vertica://my.host.name:5433/databasename?user=myusername&password=mypassword

The connection now works, which proves that the JDBC driver and credentials are working. This isn't a proper solution, though, as anyone with administration privileges in DB Connect can see the username and password by editing that connection.

Any ideas on how to make a Vertica JDBC connection use the Identity that was set up? The jdbcUrlFormat in the configuration is:

jdbc:vertica://<host>:<port>/<database>

I was wondering if one solution is a way to reference the Identity here, e.g.:

jdbc:vertica://my.host.name:5433/databasename?user=<IdentityUserName>&password=<IdentityPassword>

I have tried similar things and that doesn't work either.
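For reference, a custom connection-type stanza typically looks roughly like the sketch below (a sketch only; the stanza location and attribute support should be checked against your DB Connect version, and com.vertica.jdbc.Driver is the standard Vertica driver class). DB Connect normally hands the Identity's username and password to the driver as connection properties rather than URL parameters, so if that hand-off fails it is worth testing against this stock form before embedding credentials in the URL:

# db_connection_types.conf -- a sketch, not verified against every DB Connect version
[vertica]
displayName = Vertica
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = com.vertica.jdbc.Driver
jdbcUrlFormat = jdbc:vertica://<host>:<port>/<database>
port = 5433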
Hi, I am trying to execute a multiline Splunk search, as below, using the REST endpoint services/search/v2/jobs/export:

https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTsearch#search.2Fv2.2Fjobs.2Fexport

Search command:

| inputlookup some_inputlokupfile.csv
| rename user as CUSTOMER, zone as REGION, "product" as PRODUCT_ID
| fields CUSTOMER*, PRODUCT_ID
| outputlookup some_example_generated_file.csv.gz override_if_empty=false

When I execute the curl it returns success (200), but the file is not created. Is it possible to invoke a multiline search command using pipes with this or any other search API? The search is dynamic, so I can't create a saved search and execute it.
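Multiline SPL generally works if the whole search string is URL-encoded as the search parameter. A sketch, assuming admin credentials and a local management port (adjust host, port, and auth):

curl -k -u admin:changeme https://localhost:8089/services/search/v2/jobs/export \
  -d output_mode=json \
  --data-urlencode search='| inputlookup some_inputlokupfile.csv
| rename user as CUSTOMER, zone as REGION, "product" as PRODUCT_ID
| fields CUSTOMER*, PRODUCT_ID
| outputlookup some_example_generated_file.csv.gz override_if_empty=false'

Because export streams results, a 200 with no output rows can simply mean the search matched nothing; running the identical string in the Search UI helps confirm whether the outputlookup actually executed.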
What is the error in the below query, which I am using to populate a drop-down list?

index=aaa (source="/var/log/testd.log")
| stats count by host
| eval env=case(match(host, "*10qe*"), "Test", match(host, "*10qe*"), "QA", match(host, "*10qe*"), "Prod")
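Two likely problems stand out: match() takes a regular expression, not a wildcard pattern (a leading * is not even valid regex), and all three case() branches test the same pattern, so only the first can ever fire. A sketch of a corrected version, where the substrings 10qe, 10qa, and 10pr are hypothetical placeholders for whatever actually distinguishes the environments in your hostnames:

index=aaa source="/var/log/testd.log"
| stats count by host
| eval env=case(match(host, "10qe"), "Test", match(host, "10qa"), "QA", match(host, "10pr"), "Prod")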
Hi everybody. I have three Splunk instances in three Docker containers on the same subnet. I have mapped port 8089 to port 8099 on each container. There are no firewalls between them. I checked the route from/to all containers (via port 8099) and there are no blocks and no issues. But when I try to add one of the containers' Splunk as a search peer in a distributed search deployment, I always receive the error "Error while sending public key to search peer". Any suggestions about this? Thanks to everybody in advance.
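For comparison, the CLI equivalent of adding the peer is sketched below (hostname and credentials are assumptions). Its error output is usually more specific than the UI message, and it also checks that the search head is really targeting the mapped port 8099 rather than the default 8089:

splunk add search-server https://container2:8099 \
  -auth admin:yourpassword \
  -remoteUsername admin -remotePassword peerpassword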
I am using the below query (server names replaced) to find when there is a greater than 50% difference in volume between two call routers (servers). For some reason I'm getting no timechart results, even when setting the difference to 1%, which should always return results.

index=OMITTED source=OMITTED host="SERVER1" OR host="SERVER2"
| stats max(Value) as Value by host
| eventstats max(if(host='SERVER1', Value, null)) as server1_value max(if(host='SERVER2', Value, null)) as server2_value
| eval value_difference = abs(server1_value - server2_value)
| eval value_percentage_difference = if(coalesce(server1_value, server2_value) != 0, (value_difference / coalesce(server1_value, server2_value) * 100), 0)
| where value_percentage_difference > 1
| timechart avg(value_percentage_difference)
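Two things likely break this: stats removes _time, so the final timechart has nothing to bucket, and conditional aggregation needs eval rather than a bare if() inside max(). A sketch that keeps _time by binning first (the 5-minute span is an assumption):

index=OMITTED source=OMITTED host="SERVER1" OR host="SERVER2"
| bin _time span=5m
| stats max(Value) as Value by _time host
| eval server1_value=if(host=="SERVER1", Value, null()), server2_value=if(host=="SERVER2", Value, null())
| stats max(server1_value) as server1_value max(server2_value) as server2_value by _time
| eval value_difference = abs(server1_value - server2_value)
| eval value_percentage_difference = if(coalesce(server1_value, server2_value) != 0, value_difference / coalesce(server1_value, server2_value) * 100, 0)
| where value_percentage_difference > 1
| timechart span=5m avg(value_percentage_difference)

Note also that 'SERVER1' in single quotes refers to a field named SERVER1, not the string "SERVER1".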
Do we have a Splunk attribute to fetch the index? We are passing the index in the Splunk query; given only a log file, do we have any Splunk attribute to fetch the index?

index=aaa
index=bbb

Like we have for host:

index=aaa (source="/var/log/tes1.log") | stats count by host
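index is a default field on every event, so it can be grouped on just like host; a minimal sketch:

index=aaa OR index=bbb source="/var/log/tes1.log"
| stats count by index, host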
The index keeps rolling off data due to size, even after the size has been increased. Is there another way to resolve this issue?
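Besides the index-level cap, per-path and volume limits can also force buckets to roll. A sketch of the relevant indexes.conf settings, assuming an index named myindex (the values are examples only):

# indexes.conf
[myindex]
maxTotalDataSizeMB = 500000
frozenTimePeriodInSecs = 15552000
# Also check homePath.maxDataSizeMB, coldPath.maxDataSizeMB, and any
# [volume:...] maxVolumeDataSizeMB stanzas -- a volume cap can roll data
# even when maxTotalDataSizeMB has been raised.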
I have a cloud-based server sending events to the Indexer over my WAN link via HTTP Event Collector (HEC). We have limited bandwidth on the WAN link. I want to limit (blacklist) a number of event codes and reduce the transfer of log data over the WAN.

Q: Does a blacklist on inputs.conf for the HEC filter the events at the indexer, or does it stop those events from being transferred at the source?

Q: If I install a Universal Forwarder, am I able to stop the blacklisted events from being sent across the WAN?
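On the second question: a Universal Forwarder can drop Windows event codes at the source, before they cross the WAN. A sketch, assuming Windows Security events and example event codes:

# inputs.conf on the Universal Forwarder
[WinEventLog://Security]
blacklist = 4662,5156

HEC, by contrast, has no inputs.conf blacklist; events arriving over HEC have already crossed the link, so filtering at the indexer (e.g. props/transforms routing to nullQueue) saves license and index volume but not WAN bandwidth.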
Can you please let me know the TIME_PREFIX & TIME_FORMAT for the below log type?

00:0009:00000:00000:2024/04/12 12:14:02.34 kernel extended error information
00:0009:00000:00000:2024/04/12 12:14:02.34 kernel  read returning -1 state 1
00:0009:00000:00000:2024/04/12 12:14:02.34 kernel nrpacket: recv, Connection timed out, spid: 501, suid: 84
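A sketch of props.conf settings that should match this layout: the prefix skips the nn:nnnn:nnnnn:nnnnn: counters, and %2N covers the two-digit subseconds (the sourcetype name is a placeholder):

[your:sourcetype]
TIME_PREFIX = ^\d{2}:\d{4}:\d{5}:\d{5}:
TIME_FORMAT = %Y/%m/%d %H:%M:%S.%2N
MAX_TIMESTAMP_LOOKAHEAD = 25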
Hi, I have the following fields in logs on my proxy for backend services:

_time -> timestamp
status_code -> HTTP status code
backend_service_url -> app it is proxying

What I want to do is aggregate status codes by the minute per URL for each status code. So sample output would look like:

time  | backend-service | Status code 200 | Status code 201 | Status code 202
10:00 | app1.com        | 10              |                 | 2
10:01 | app1.com        |                 | 10              |
10:01 | app2.com        | 10              |                 |

Columns would be dynamic based on the available status codes in the timeframe I am searching. I found a lot of questions on aggregating all 200s into 2xx, or on total counts by URL, but not this. I'd appreciate any suggestions on how to do this. Thanks!
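One common pattern is to combine the URL and the status code into a single split-by series, since timechart accepts only one split field; a sketch (the index name is an assumption):

index=proxy_logs
| eval series = backend_service_url . ":" . status_code
| timechart span=1m count by series

If the row-per-URL shape in the sample table is preferred over one wide column per URL:status combination, the same idea works with bin plus chart over a combined _time-and-URL key, splitting the key back out afterwards.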
I have a dashboard where I want to report whether each value of the results of a query matches a value in a fixed list. I have a base search that produces the fixed list:

<search id="expectedResults">
  <query>
    | makeresults
    | eval expectedResults="My Item 1,My Item 2,My Item 3"
    | makemv delim="," expectedResults
    | mvexpand expectedResults
    | table expectedResults
  </query>
  <done>
    <set token="expectedResults">$result.expectedResults$</set>
  </done>
</search>

Then I have multiple panels that will get results from different sources, pseudo-coded here:

index="my_index_1" query
| table actualResults
| stats values(actualResults) as actualResults

Assume that the query returns "My Item 1" and "My Item 2". I am not sure how to compare the values returned from my query against the base list, to produce something that reports whether each value matches:

My Item 1  True
My Item 2  True
My Item 3  False
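One way, sketched below, is to start each panel from the fixed list and left-join the panel's actual results onto it (index="my_index_1" and query are the placeholders from the post):

| makeresults
| eval expectedResults="My Item 1,My Item 2,My Item 3"
| makemv delim="," expectedResults
| mvexpand expectedResults
| join type=left expectedResults
    [ search index="my_index_1" query
      | stats count by actualResults
      | rename actualResults as expectedResults
      | eval matched="True" ]
| eval matched=coalesce(matched, "False")
| table expectedResults matched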
Hi Splunkers, I am facing a weird issue with the addcoltotals command. It works perfectly fine if I open a new search tab, but once I add the same query to a dashboard it breaks. I am trying to run the command in Splunk DB Connect. Below is the query for reference:

index=db_connect_dev_data
| rename PROCESS_DT as Date
| table OFFICE, Date, MOP, Total_Volume, Total_Value
| search OFFICE=GB1
| eval _time=strptime(Date, "%Y-%m-%d")
| addinfo
| eval info_min_time=info_min_time-3600, info_max_time=info_max_time-3600
| where _time>=info_min_time AND _time<=info_max_time
| table Date, MOP, OFFICE, Total_Volume, Total_Value
| addcoltotals "Total_Volume" "Total_Value" label=Total_GB1 labelfield=MOP
| filldown
| eval Total_Value_USD=Total_Value/1000000
| eval Total_Value_USD=round(Total_Value_USD, 5)
| stats sum(Total_Volume) as Total_Volume, sum(Total_Value_USD) as "Total_Value(mn)" by MOP
| search MOP=*
| table MOP, Total_Volume, "Total_Value(mn)"

Let me know if anyone knows why this is happening.
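One thing to check: in an ad-hoc search over All time, addinfo returns an effectively unbounded window, while a dashboard time picker supplies concrete info_min_time/info_max_time values, so the shifted where clause can silently filter out every row, including the addcoltotals total. A sketch that computes the totals after the final stats instead, so the total row cannot be consumed by the aggregation (same placeholder fields as above):

index=db_connect_dev_data
| rename PROCESS_DT as Date
| search OFFICE=GB1
| eval _time=strptime(Date, "%Y-%m-%d")
| addinfo
| where _time>=info_min_time-3600 AND _time<=info_max_time-3600
| stats sum(Total_Volume) as Total_Volume sum(eval(round(Total_Value/1000000, 5))) as "Total_Value(mn)" by MOP
| addcoltotals Total_Volume "Total_Value(mn)" label=Total_GB1 labelfield=MOP
| table MOP, Total_Volume, "Total_Value(mn)"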
Hello guys, I'm currently trying to set up Splunk Enterprise in a cluster architecture (3 search heads and 3 indexers) on Kubernetes using the official Splunk Operator and Splunk Enterprise Helm chart. In my case, what is the most recommended way to set the initial admin credentials? Do I have to access every instance, define a "user-seed.conf" file under $SPLUNK_HOME/etc/system/local, and then restart the instance, or is there an automated way to set the password across all instances by leveraging the Helm chart?
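A sketch of the secret-based route, assuming the operator's convention of a namespace-scoped global secret (the splunk-<namespace>-secret name pattern and the password key should be verified against your operator version's docs). Creating it before the instances come up lets all of them seed the same admin password without touching user-seed.conf by hand:

kubectl create secret generic splunk-<namespace>-secret \
  --namespace <namespace> \
  --from-literal='password=YourStrongAdminPassword'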
Hello Team, I have a parent dashboard with 5 panels. These are linked to one child dashboard; based on the token passed, the data filter changes. However, I notice that for one panel there is no field called Output, due to which I get "no results found". Is there a way to remove this token from the code?

| search $form.app_tkn$ Category="A event" Type=$form.eventType$ Output=$form.output$
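One common workaround, sketched here with hypothetical token and value names, is to pass the whole field=value clause in the token rather than just the value, so the panel whose data has no Output field can receive a match-all instead:

<input type="dropdown" token="output_clause">
  <label>Output</label>
  <choice value="*">All</choice>
  <choice value="Output=success">Success</choice>
  <default>*</default>
</input>

| search $form.app_tkn$ Category="A event" Type=$form.eventType$ $output_clause$

When the parent dashboard passes "*" (or the clause) for that one panel, the search degrades to a match-all term instead of filtering on a nonexistent field.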
index=mainframe sourcetype=BMC:DEFENDER:RACF:bryslog host=s0900d OR host=s0700d
| timechart limit=50 count(event) BY host
| addcoltotals

I am looking to add the AVG from each 1-week total for each day.
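If the goal is a per-host average row computed over the daily counts for the week, one sketch is to bucket by day and append an average row with appendpipe (this reads the intent as "average per day across the week"; adjust if that's not it):

index=mainframe sourcetype=BMC:DEFENDER:RACF:bryslog host=s0900d OR host=s0700d
| timechart span=1d limit=50 count BY host
| appendpipe [ stats avg(*) as * | eval _time=now() ]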
I know there is the Splunk Add-on for AWS, but I heard there is a simpler and easier way to read the buckets directly without using that add-on. Is that true?
Hello,

I am trying to create a custom view (also via XPath) in Event Viewer and later ingest it into Splunk via a "WinEventLog" stanza, leveraging the Windows Add-on. Can it be done using "WinEventLog", or some other way in inputs.conf, as it is for Application/Security/System?

[WinEventLog://MyCustomLog]

As suggested here, I tried this configuration, but no logs were onboarded, and it returned no error in the _internal logs either.

Has anyone found a custom solution for ingesting these newly created custom views from Event Viewer into Splunk? Thanks
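One caveat worth checking: an Event Viewer custom view is a saved XPath filter, not a real event log channel, and WinEventLog stanzas can only subscribe to actual channels, which would explain the silent failure. The usual alternative is to subscribe to the underlying channel and filter in the input; a sketch, where the channel and event codes are examples only:

[WinEventLog://Microsoft-Windows-PowerShell/Operational]
whitelist = 4103,4104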