All Topics

I want to upgrade Splunk from 8.2.2 to the latest version. Is there a way to output the data stored in Splunk to another storage location? Please point me to the relevant Splunk documentation. I appreciate your time.
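For a quick one-off export before the upgrade, one option (a sketch only; the index name, time range, and output path are placeholders to adapt) is the search CLI with CSV output:

```
# Export all events from one index to a CSV file before upgrading.
# -maxout 0 removes the default result cap.
/opt/splunk/bin/splunk search 'index=main earliest=0' -output csv -maxout 0 > /backup/main_export.csv
```

The REST export endpoint (/services/search/jobs/export) is another route for large result sets; Splunk's upgrade documentation covers the supported upgrade paths from 8.2.x.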
Let's say I have a subsearch or multisearch. I want the subsearch/multisearch date range to start 30 days before the start date of the main search. Right now I have it hardcoded all the way from the start date of my data, but in reality I am only interested in the 30 days before the main search. The main search will be something like "before 03/01/2022", so the subsearch earliest date should be from "03/01/2022" minus 30 days until "03/01/2022".

| multisearch
    [ search index="abc" ]
    [ search index="xyz" earliest="11/01/2021:20:00:00" ]

Thanks.
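One possible approach, sketched under the assumption that the cutoff date is known at search time: a subsearch that returns earliest and latest fields sets the time bounds of the inner search, and relative_time computes the 30-day offset instead of hardcoding it.

```
| multisearch
    [ search index="abc" ]
    [ search index="xyz"
        [ | makeresults
          | eval latest=strptime("03/01/2022", "%m/%d/%Y")
          | eval earliest=relative_time(latest, "-30d@d")
          | return earliest latest ] ]
```

If the cutoff were a dashboard token rather than a literal, the strptime() input would take the token instead of the hardcoded date string.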
Can I get data from the Splunk Cloud Platform, and if so, how (REST API, a Python library, ...)? Any help is appreciated.
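For context, Splunk Cloud exposes search over REST (the splunk-sdk Python package wraps the same endpoints). As a minimal sketch, the helper below only builds the URL for the streaming search export endpoint; the host name and search string are placeholders, and the actual request still needs an authenticated HTTP client:

```python
from urllib.parse import urlencode


def export_search_url(host: str, search: str,
                      output_mode: str = "json", port: int = 8089) -> str:
    """Build the REST URL for Splunk's streaming search export endpoint.

    The caller still has to send the request with a token or session key.
    """
    query = urlencode({"search": search, "output_mode": output_mode})
    return f"https://{host}:{port}/services/search/jobs/export?{query}"


print(export_search_url("example.splunkcloud.com",
                        "search index=main | head 5"))
```

Note that on Splunk Cloud the management port may need to be enabled for your stack before the endpoint is reachable.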
I have a macro named X that uses a lookup in its search and produces results as follows:

indexes
index IN ("ABC","DEF")

where indexes is the column name.

Now I want to use the result of macro X (index IN ("ABC","DEF")) in a separate search, like this:

my_search | where `X`

which should execute as:

my_search | where index IN ("ABC","DEF")

How can I achieve that?
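For comparison, a static (non-lookup-driven) macro with that expansion would be defined like this — a sketch only; the stanza name and value are placeholders, and making the definition lookup-driven is the part that needs extra machinery (for example a subsearch that returns the clause at search time):

```
# macros.conf (sketch)
[X]
definition = index IN ("ABC","DEF")
```

With a definition like this, `my_search \`X\`` expands inline to the desired filter.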
Hello,

This is my first time seeking help in a forum; I apologize if my ask is confusing.

I'm looking to pull metrics for each analyst based on the mean time to triage each type of notable in the Incident Review dashboard. I need a table that shows the time it took each analyst to set the status to "Ready for Review" after setting it to "In Progress", plus the analyst name and the notable name.

This is similar to the search I have right now:

| `incident_review`
| rename status_label as status
| where status == "Ready for Review"
| sort - _time
| table status, rule_id, rule_name, owner_realname
| rename rule_id as "Notable ID"
| rename rule_name as Notable
| rename owner_realname as Analyst
| join type=left rule_id
    [ search notable
      | rename _time as notable_creation_time
      | convert ctime(notable_creation_time)
      | stats min(notable_creation_time) as notable_creation_time by rule_id ]
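One way to sketch the triage-time calculation directly from the status-change events, assuming `incident_review` returns one event per status change with status_label, owner_realname, and rule_name populated:

```
| `incident_review`
| search status_label IN ("In Progress", "Ready for Review")
| stats earliest(eval(if(status_label=="In Progress", _time, null()))) as started
        earliest(eval(if(status_label=="Ready for Review", _time, null()))) as reviewed
        latest(owner_realname) as Analyst
        latest(rule_name) as Notable
        by rule_id
| eval triage_secs=reviewed-started
| where isnotnull(triage_secs)
| eval "Triage time"=tostring(triage_secs, "duration")
```

From there, a `stats avg(triage_secs) by Analyst, Notable` would give the mean time to triage per analyst and notable type.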
Hi, I have a trellis single-value view that shows up/down statuses. When a status is down, I would like to get a sound alert. Is this possible in Splunk? Please let me know if there is a way of adding an audible alert based on the query condition. Thank you!
I have a single-value trellis view that shows the status of items as up (green) or down (red). When a status is down (red), I would like the trellis view to flash or blink. I added the HTML below to my trellis view; however, all statuses, green and red, are flashing now. I would like only the red items to flash. Please let me know if there is a way to achieve this in a Splunk dashboard. Thank you!

<panel depends="$alwaysHideCSSPanel$">
  <html>
    <style>
      @keyframes blink {
        100%, 0% { opacity: 0.6; }
        60% { opacity: 0.9; }
      }
      #singlevalue rect {
        animation: blink 0.8s infinite;
      }
    </style>
  </html>
</panel>
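One idea, a sketch rather than a confirmed fix: scope the CSS animation with an attribute selector so it only matches rects painted in the "down" color. The hex value below is an assumption — inspect the rendered SVG in the browser to confirm the exact red your range colors use:

```
<panel depends="$alwaysHideCSSPanel$">
  <html>
    <style>
      @keyframes blink {
        100%, 0% { opacity: 0.6; }
        60% { opacity: 0.9; }
      }
      /* Only animate rects whose fill is the down/red color
         (hex code is a placeholder to verify in your dashboard). */
      #singlevalue rect[fill="#DC4E41"] {
        animation: blink 0.8s infinite;
      }
    </style>
  </html>
</panel>
```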
I'm trying to write a Splunk query to find files smaller than 10 bytes from a log file. I have the index and log location but am unable to find the exact query. Please help me write the query and create an alert from it.
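As a sketch, assuming the log lines carry the size in a pattern like size=<bytes> — the index, source, and regex below are placeholders to adapt to the actual log format:

```
index=my_index source="/path/to/logfile*"
| rex "size=(?<file_size>\d+)"
| where tonumber(file_size) < 10
```

Saving this search and choosing Save As > Alert with a trigger condition of "number of results is greater than 0" turns it into an alert.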
I currently need to change our single-site cluster to a two-indexer peered configuration, for ease of maintenance for the next person who replaces me at this site. I currently have 2 indexers, 1 deployment server, 1 cluster/license master, and 1 search head. Here is what I need to do.

First - Move all Splunk forwarders to use the second indexer as the deployment server without having to reinstall the forwarders. I thought this was as simple as changing deploymentclient.conf, but it does not seem to be working. Maybe the cluster master has something to do with it?

Second - Remove the indexers from the cluster, delete the cluster from the cluster master, and break the distributed search.

Third - Change the license server to the first indexer.

At that point I can shut down all servers except the indexer and everything will work. If anyone can help me, I would greatly appreciate it. I want to do this as quickly and smoothly as possible. Thank you in advance.

Robert
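For the first step, repointing forwarders is done on the forwarder side in deploymentclient.conf — a sketch, with the host name and port as placeholders for the new deployment server:

```
# deploymentclient.conf on each forwarder (sketch)
[deployment-client]

[target-broker:deploymentServer]
targetUri = new-deployment-server.example.com:8089
```

A Splunk restart (or `splunk reload deploy-clients` equivalent on the server side) is needed on each forwarder for the change to take effect; the cluster master plays no role in deployment-client routing.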
Hi all, I have the line of code below to categorize transactions based on the response time (Duration) in seconds:

| eval ranges=case(Duration<=1,"less", Duration>1 AND Duration<=3,"between", Duration>3,"greater")

Say I trigger a load test with 100 transactions that all take between 1 and 3 seconds, but surprisingly a few of them, say 1 to 4 out of 100, are not categorized in the table, even though their Duration column shows a value between 1 and 3 seconds. Can someone please tell me what is going wrong?
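One common cause, offered as a guess: if Duration is occasionally extracted as a string (stray whitespace, a unit suffix) or is null for some events, none of the case() branches match and ranges comes out null. A defensive sketch that coerces the field first and adds a catch-all branch:

```
| eval d=tonumber(trim(Duration))
| eval ranges=case(isnull(d),"unparsed", d<=1,"less", d<=3,"between", true(),"greater")
```

Events landing in the "unparsed" bucket would confirm that the raw Duration values are the problem rather than the case() logic.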
Hi friends, I am trying to piece together some Splunk searches across application logs to establish what "normal" traffic patterns look like versus DDoS-attacking IP addresses. The end goal is to answer the question: "For each IP that connects to our application, what is the average connection count within a 5-minute span, across a 2-hour period? Which 5-minute-span connection counts are outliers (greater than average)?"

I have the following timechart, which has been useful, but I'm sure there is a better way to do this:

index=myapplicationindex sourcetype=_json cluster=cluster23
| timechart span=5m count by x_forwarded_for where count > 75
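One way to sketch the per-IP average and outlier detection without a fixed threshold, assuming the same index/sourcetype filters: bin into 5-minute buckets, count connections per IP per bucket, then compare each bucket against that IP's own average.

```
index=myapplicationindex sourcetype=_json cluster=cluster23 earliest=-2h
| bin _time span=5m
| stats count as conns by _time, x_forwarded_for
| eventstats avg(conns) as avg_conns by x_forwarded_for
| where conns > avg_conns
```

Replacing the final condition with something like `conns > avg_conns * 2` (or an eventstats stdev-based bound) tightens what counts as an outlier.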
We're reviewing several thousand alerts. About half of them have this syntax at the end of the initial search terms, where "MyAlertName" is literally the alert name:

NOT tag::host=MyAlertName

What does it mean? It doesn't seem to make any difference whether it's there or not, but the searches do work with it present, so apparently it is syntactically correct. The docs I've found on the double-colon syntax don't describe anything like this, and "host" in our environment is always a server name.
Hi Splunkers, we are streaming Google app logs to Splunk in a distributed environment. We have the G Suite for Splunk app on the search head and the input add-on on a heavy forwarder. I am seeing a log drop on a particular day for about 2 hours, after which logging returned to normal. I am unable to identify the reason. The G Suite application health dashboard also shows the error below. @alacercogitatus, could you please help me identify the cause of the log drop and how to fix these errors?
Hi, I have the string below and am trying to get StartTime, EndTime, and Count displayed in the dashboard.

"Non-Match - Window Event not matches with events Count with StartTime=2020-02-03T11:00:00.000Z EndTime=2020-02-03T11:00:00.000Z Count=100\"

I tried multiple rex formats but couldn't succeed. Can I get some help with this, please?
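One rex that should match the string as shown (a sketch; it assumes the three key=value pairs are always space-separated in this order, with no spaces inside the values):

```
| rex "StartTime=(?<StartTime>\S+)\s+EndTime=(?<EndTime>\S+)\s+Count=(?<Count>\d+)"
| table StartTime, EndTime, Count
```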
Hello experts, how do I round values either to a whole number or to at most two decimal places? Below is my search query:

| mstats avg(_value) prestats=true WHERE metric_name="memory.used" AND "index"="*" AND ("host"="fsx2098" OR "host"="fsx2099" OR "host"="fsx0102" OR "host"="fsx0319" OR "host"="fsxtp072") AND `sai_metrics_indexes` span=auto BY host
| timechart avg(_value) useother=false BY host WHERE max in top20
| fields - _span*

[Screenshot of the raw results omitted.] Desired values:

time                 host1  host2  host3  host4
2022-03-29 13:20:00  26     33     34     32
2022-03-29 13:21:00  27     34     34     34

or

time                 host1   host2  host3  host4
2022-03-29 13:20:00  26.80   33.96  34.25  32.93

Any help will be much appreciated.
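One sketch: after the timechart, round every series at once with foreach, where `<<FIELD>>` stands in for each per-host column (fields beginning with an underscore, such as _time, are not matched by the * wildcard):

```
| timechart avg(_value) useother=false BY host WHERE max in top20
| foreach * [ eval <<FIELD>>=round('<<FIELD>>', 2) ]
```

Changing the second round() argument to 0 (or omitting it) gives whole numbers instead of two decimal places.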
I am creating a dashboard containing a query that returns application health events of this type:

Server      Application  Type          Status
servername  appname      App Health    UP
servername  appname      Disk Health   UP
servername  appname      LDAP Health   UP
servername  appname      Redis Health  DOWN

What I want instead is for the table to look like:

Server      Application  App Health  Disk Health  LDAP Health  Redis Health
servername  appname      UP          UP           UP           DOWN

What would be the best way to accomplish this? Thank you for any suggestions.
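One way to sketch the pivot, assuming the field names match the table above: xyseries pivots one row field into columns, so the two row keys (Server, Application) are combined into a temporary key first and split back out afterwards.

```
| eval key=Server."|".Application
| xyseries key Type Status
| eval Server=mvindex(split(key,"|"),0), Application=mvindex(split(key,"|"),1)
| fields - key
| table Server Application "App Health" "Disk Health" "LDAP Health" "Redis Health"
```

`| chart latest(Status) over Server by Type` is a simpler alternative when a single row key is enough.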
I currently have a UF that is sending data to two different Splunk environments.

[monitor:///data/folder1/]
index = main
sourcetype = applog1
_TCP_ROUTING = SplunkTEST
crcSalt = <SOURCE>

[monitor:///data/folder2/]
index = main
sourcetype = applog2
_TCP_ROUTING = SplunkPROD
crcSalt = <SOURCE>

When I run the following oneshot command, the data goes to SplunkPROD. How do I ensure it goes to SplunkTEST? Is there a setting for _TCP_ROUTING?

/opt/splunkforwarder/bin/splunk add oneshot /data/data/folder1/app1.log -index main -sourcetype "applog1"
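One workaround, offered as a sketch rather than a confirmed answer: since _TCP_ROUTING is an inputs.conf setting, drop the file into a one-time batch input that carries the TEST routing instead of using add oneshot. The directory path below is a placeholder:

```
# inputs.conf (sketch) — the batch input indexes each file dropped
# into the directory once and then deletes it (sinkhole policy),
# applying the TEST routing as it goes.
[batch:///data/oneshot_test/]
move_policy = sinkhole
index = main
sourcetype = applog1
_TCP_ROUTING = SplunkTEST
```

Copying app1.log into /data/oneshot_test/ would then send it only to the SplunkTEST output group.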