All Topics

Hi, I need help with a Splunk search query where an incident needs to be generated for a log backup failure after 3 consecutive failures.
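A hedged sketch of one way to detect 3 consecutive failures, assuming each backup attempt logs an event with a status field (the index, sourcetype, and field names here are hypothetical and would need to match the actual data):

```
index=backup sourcetype=backup_logs
| eval is_failure=if(status="failed", 1, 0)
| streamstats window=3 sum(is_failure) AS failures_in_last_3 BY host
| where failures_in_last_3 = 3
```

Because the window looks at the last 3 events per host, a sum of 3 means 3 failures in a row; saving this as an alert would then generate the incident.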
I am using Splunk to search historical data in a virtual index, but I have noticed that the default date_year is being incorrectly added. My data is from 2020, and when I search I specify a source pointing to a particular directory based on the date at which it was ingested. Unfortunately, the logs in question have a timestamp in the format %b %e %H:%M:%S, i.e. no year. When I run my search looking in the folder for 15/08/2020, some of the default dates are 2020 but some are 2021.

index=vix_web source="/data/xx/xxx/xxx/xxx/2020/08/15"

Having done some research on how the default times are extracted, it seems datetime.xml is used, but I still don't know where the year is extracted from. Can anyone help?
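When a timestamp has no year, Splunk infers one at index time (falling back to datetime.xml logic and the current date), which is how events near a boundary can roll into the wrong year. A hedged sketch, assuming the data can be re-ingested, is to pin down timestamp recognition in props.conf for the relevant sourcetype (the stanza name here is hypothetical):

```
[my_2020_logs]
TIME_FORMAT = %b %e %H:%M:%S
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 1
```

MAX_DAYS_AGO/MAX_DAYS_HENCE constrain how far from the current date a parsed timestamp may land, which can stop the year from being guessed forward into 2021; already-indexed events keep whatever year was assigned at ingest.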
Hello, I would like to center the dates of my timechart (column chart). I'm using the timechart command to get a table that is then transformed into a column chart. How can I do this? Thank you. Best regards,
Hi, I am getting the following error on my search head whenever I run a query in a newly created app:

Search results might be incomplete: the search process on the peer indexer1 ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log, as well as the search.log for the particular search. [Indexer 1] Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info.

In search.log this is the error:

12-15-2021 05:42:06.881 ERROR dispatchRunner - RunDispatch::runDispatchThread threw error: Application does not exist:

The above error is present on all four of our indexers. What is the cause of this? How do we fix this error? Thank you!
Hi, when I'm deploying new changes to my services I want to compare the last day's error logs to the last week's, to see if there has been an increase for a specific message. I'm having trouble figuring out how to display the counts for the different time ranges by message. This kind of gives the correct result, but the same message for last week and this week will not be grouped correctly:

sourcetype="my pod" level="error"
| eval marker = if(_time < relative_time(now(), "-1d@d"), "lastweek", "thisweek")
| multireport
  [ where marker="thisweek" | stats count AS "this week" BY message ]
  [ where marker="lastweek" | stats count AS "last week" BY message ]

Grateful for any help.
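A hedged alternative to multireport, reusing the marker eval from the question, is a single chart command so that both periods land on the same row per message:

```
sourcetype="my pod" level="error"
| eval marker=if(_time < relative_time(now(), "-1d@d"), "lastweek", "thisweek")
| chart count BY message marker
```

chart with two split fields produces one row per message with a lastweek and a thisweek column, so the counts for the same message are grouped together and can be compared directly.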
I have an employees list. For each employee there are two statuses, logged in and logged out. I need to find each user's last status, and if a user's last status is logged out, I need to count how many employees are logged out.
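A hedged sketch of that logic, assuming each event carries an employee identifier and a status field (index and field names are hypothetical):

```
index=employee_logs
| stats latest(status) AS last_status BY employee
| where last_status="logged_out"
| stats count AS logged_out_employees
```

The first stats collapses each employee to their most recent status; the where keeps only those whose last status is logged out; the final stats counts them.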
Hi, I have created an app using the Add-on Builder, by:

collectionName = "myKVStore"
service = connect(scheme=scheme, host=splunkd_host, port=splunkd_port, token=helper.session_key, owner="nobody")
if collectionName not in service.kvstore:
    service.kvstore.create(collectionName)

I would like to see the data in my KV store in Splunk. I have tried to query the API with no luck, and also tried to define it in transforms.conf:

[myKVstore]
external_type = kvstore
case_sensitive_match = false
collection = myKVstore
fields_list = _key, ....

If I then query it with |inputlookup ... I get "collection does not exist", but it works fine within my code. Ideas?
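One thing worth checking: KV store collection names are case-sensitive, and the transforms.conf in the question refers to myKVstore while the code creates myKVStore. A hedged sketch of a matching pair of stanzas, assuming the collection should also be declared in collections.conf in the same app so inputlookup can see it (field names after _key are hypothetical):

```
# collections.conf
[myKVStore]

# transforms.conf
[myKVStore]
external_type = kvstore
collection = myKVStore
fields_list = _key, field1, field2
```

The collection value must match the collections.conf stanza exactly, and both files need to live in an app visible to the user running |inputlookup myKVStore.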
I am stuck with a query where I am trying to pass a field value from a subsearch to the parent search. Query:

index=f5 sourcetype="*f5*" earliest=-1d@d latest=@d
  [| inputlookup user where country="US" | fields UserName | rename UserName AS user_name ]

Explanation: the field name that is going to match from the subsearch is user_name. In the parent search there are two fields for the user, user_name and Account_name, and I need both of them in the end result (user_name contains internal users, Account_name contains external users). I tried using coalesce to merge both fields in the parent search, but eval pops an error. Can anyone please help me solve this problem?
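A hedged sketch of one way to match the lookup against either user field: coalesce the two fields first, then let the subsearch filter on the merged field (the merged field name user is hypothetical; lookup and column names are taken from the question):

```
index=f5 sourcetype="*f5*" earliest=-1d@d latest=@d
| eval user=coalesce(user_name, Account_name)
| search [| inputlookup user where country="US" | fields UserName | rename UserName AS user]
```

Renaming the subsearch field to the same name as the merged field makes the implicit subsearch filter apply to both internal and external users while keeping user_name and Account_name available in the results.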
I want to see the results where src_ip and dst_ip are the same and the check is "OK", and the number of these results. What should I do? The code I made doesn't work well:

index="my_index"
| eval check=if(html_code==200,"error","OK")
| stats list(src_ip) AS src_ip list(dst_ip) AS dst_ip BY check
| table src_ip, dst_ip, check, count
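A hedged sketch of a more direct approach: filter on the src_ip/dst_ip equality before aggregating, then count. Note that the if() in the question maps html_code 200 to "error"; it is swapped here on the assumption that 200 means success:

```
index="my_index"
| eval check=if(html_code==200, "OK", "error")
| where src_ip=dst_ip AND check="OK"
| stats count BY src_ip dst_ip
```

The where clause keeps only events whose source and destination IPs match and whose check is OK, and stats then shows each matching pair with its count.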
Hello, I am using the autoencoder in DLTK and want to add partial-fit functionality to it. I was able to add a partial_fit function to the autoencoder model in /srv/app/model/, and also made the required changes in /srv/app/index.py. However, those changes are not reflected when using an SPL query in DLTK. I cannot figure out when the index.py file is loaded if I make changes to it. Can someone please help me understand when the index.py file in /srv/app is loaded?
Suppose I have user1, user2, user3; I need to find the last log message of each user.
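A hedged sketch, assuming each event has a user field and a message field (index and field names are hypothetical):

```
index=app_logs
| stats latest(message) AS last_message BY user
```

latest() picks the message from the most recent event per user, giving one row per user with their last log message.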
Hi, is it possible for a checkbox with search-on-change enabled to refresh only one panel in a dashboard with multiple panels? Only that panel uses the input from the checkbox, so there's no need for all the other panels to be refreshed when it changes.
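In Simple XML, a panel only reruns its search when a token that search references changes, so as long as only one panel's query contains the checkbox token, the other panels should be left alone. A hedged sketch (the token, values, and query are hypothetical):

```
<input type="checkbox" token="errors_only" searchWhenChanged="true">
  <choice value="level=error">Errors only</choice>
  <default></default>
</input>
...
<panel>
  <table>
    <search>
      <query>index=web $errors_only$ | stats count BY status</query>
    </search>
  </table>
</panel>
```

Panels whose queries never mention $errors_only$ keep their results when the checkbox is toggled.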
Hello Splunkers, we created a custom command to create Jira tickets, taking the input from a Splunk table. The script works correctly when triggered from the backend with hard-coded values. When we replace the hard-coded values with tokens, i.e. dynamic values, and try to create a ticket from the UI, it creates 2 tickets with the same ticket number. Below is our Python code, and the error we are getting while triggering:

TIA,
Hi, the app agent is not reporting to the controller after upgrading the machine agent. On the dashboard, application calls are not showing.

^ Post edited by @Ryan.Paredez for formatting
I have a current single-instance deployment of Splunk 8.2.3 on Linux Fedora 35, and it keeps encouraging me to update my mmapv1 storageEngine to wiredTiger. However, when I follow the instructions for a current single-instance deployment at https://docs.splunk.com/Documentation/Splunk/8.2.3/Admin/MigrateKVstore#Migrate_the_KV_store_after_an_upgrade_to_Splunk_Enterprise_8.1_or_higher_in_a_single-instance_deployment , it always fails after running the migration command. The entire output is:

Starting KV Store storage engine upgrade:
Phase 1 (dump) of 2: .....ERROR: Failed to migrate to storage engine wiredTiger, reason=

where "reason" is blank. I haven't found anyone posting about getting this error without a reason. How should I complete the migration, or at least do further troubleshooting?
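When the migration fails with a blank reason, the KV store's own log often holds the underlying error. A hedged troubleshooting sketch, assuming a default install layout:

```
# re-run the documented migration command
$SPLUNK_HOME/bin/splunk migrate kvstore-storage-engine --target-engine wiredTiger

# then check these logs for the real failure message
tail -100 $SPLUNK_HOME/var/log/splunk/mongod.log
tail -100 $SPLUNK_HOME/var/log/splunk/splunkd.log
```

The dump phase is performed by the embedded MongoDB process, so permission, disk-space, or certificate problems tend to surface in mongod.log rather than in the console output.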
Hi Team, we have successfully integrated the File Watcher Extension (AppDynamics/file-monitoring-extension), which provides metrics from configured files and directories. But now we have moved from the Machine Agent to the Cluster Agent. I want to know how we can deploy the existing extensions in the Cluster Agent. Cheers, Vinay Kumar
(Select all that apply)
1. Virtual SplunkLive!
2. Splunk Workshops
3. Gaming
4. Splunk Events

I am practicing SE I questions and cannot find this online. Any ideas? Thanks in advance.
Is it valid to use a where clause to compare a string value to a multivalue field, in order to know if that value is one of the values in the multivalue field? For example, my query returns this result, where firstName is a multivalue field:

lastName | firstName
---------|----------------------
Smith    | Amy, Barbara, Carol
Wilson   | Carol, Deanna, Emily

I add the following to the end of my query to find all rows containing "Carol" in the multivalue field:

where firstName="Carol"

The where clause seems to work fine and returns all the rows containing "Carol" in the multivalue field. I'm wondering if it's a supported syntax, because I didn't find an example that looks like this, and the various "mv" functions seem to be for more complicated operations. In this example, I'm looking to get all last names and any associated first names, and then use a where clause to return anyone with a particular first name.
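For comparison, a hedged sketch of the explicit multivalue form of the same filter, which makes the intent unambiguous when reading the query later:

```
... | where mvfind(firstName, "^Carol$") >= 0
```

mvfind returns the index of the first value matching the regex, or NULL when there is no match, so the where clause keeps exactly the rows whose multivalue firstName contains "Carol".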
We got HTTP Event Collector working, but now we are getting: "The data is not formatted correctly. To see how to properly format data for Raw or Event HEC endpoints, see Splunk Event Data." I am thinking of using the Splunk App for Stream. What do you think? What would be a good solution?
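That error usually means the payload sent to the HEC Event endpoint is not the JSON envelope it expects (the raw endpoint, /services/collector/raw, accepts unwrapped data instead). A hedged sketch of a correctly formatted Event request; the host, port, token, sourcetype, and index here are placeholders:

```
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": {"message": "hello"}, "sourcetype": "my_sourcetype", "index": "main"}'
```

If the sender cannot be changed to produce this envelope, pointing it at the raw endpoint may be a simpler fix than introducing the Splunk App for Stream.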
Hi, I have a UNIX Solaris 8 server that acts as a Splunk proxy server for 2 other UNIX Solaris 8 servers. In other words, the 2 Solaris servers send their syslog files to the Solaris proxy server. I am trying to create a query that shows the events coming from the 2 Solaris 8 servers. I run the query below, for example:

index=nix* serverproxy* | eval Status=if(like(source, "%FirstUNIXSolaris8%"), 1, 0)

I am not getting any events that show the FirstUNIXSolaris8 name/hostname. Any suggestions on how to create this query? Thanks, regards, Roberto
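A hedged first step is to see what source and host values actually arrive from the proxy before branching on them, since syslog relayed through an intermediate server often carries the proxy's path rather than the origin hostname (the index pattern is taken from the question; host=serverproxy* is an assumption):

```
index=nix* host=serverproxy*
| stats count BY host source
```

Once the real source values are known, the like() comparison can be written against the path or field that genuinely distinguishes the two origin servers.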