All Topics


Hi, I have made an (HTML) dashboard with a table and a search. Now I would like to add some filtering, but I want to filter after the search is done, because the search takes 2.5 minutes to load. Is this possible, and how? Thanks in advance.
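One common pattern (a Simple XML sketch; the index, field names, and token names here are placeholders) is to run the slow search once as a base search and filter its results with a cheap post-process search driven by an input, so changing the filter does not re-run the 2.5-minute search:

```xml
<dashboard>
  <!-- the slow search runs only once -->
  <search id="base">
    <query>index=main sourcetype=my_data | stats count by host status</query>
  </search>
  <fieldset>
    <input type="text" token="status_tok">
      <label>Status filter</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <!-- post-process search: filters the cached base results -->
        <search base="base">
          <query>search status=$status_tok$</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```

In a converted HTML dashboard the same idea applies by attaching a PostProcessManager to the base SearchManager.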
I have recently been playing around with the Splunk Cloud free trial. I followed the tutorial on the website, but I could not receive any data in the Splunk interface. I've set /var/log/splunk as a folder to monitor and configured the receiving server, but I don't see any data coming in. Any help? Thanks in advance.
I need a single cron expression for alerts that trigger from 12 AM to 1:30 AM, and again from 2:30 AM for the rest of the day. Kindly help.
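As far as I know, a single standard cron expression cannot express this window, because the minute and hour fields are independent: 2:30 must fire while 2:00 must not. Assuming the alert should fire every 30 minutes, one workable sketch is two scheduled copies of the same alert:

```
# copy 1: 00:00-01:30 and 03:00-23:30, every 30 minutes
0,30 0-1,3-23 * * *

# copy 2: the single 02:30 run that copy 1 cannot cover
30 2 * * *
```

If the alert fires at a different interval, the same split applies: one schedule covering the hours where every run is allowed, plus one for the partial hour.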
Hi, I have hundreds of sourcetypes, and the intervals at which they send data are not real-time: some send weekly, some send every few hours, and some have other odd patterns. I want to predict when each sourcetype will next send data, so I tried the predict command and the ML Toolkit, but unfortunately the query is extremely slow because of the map command.

Why is it so slow? The base query, which takes the list of sourcetypes from a CSV, is extremely fast (under 1 second), and the query after map takes 4 seconds when I run it by itself for a specific sourcetype, so running it once per sourcetype should not be this slow. Yet when I combine them with map, it barely runs at all. Is there a way to make this faster? I tried without map, but joins were also problematic: the subquery that does the prediction does not work with more than one sourcetype, so my other attempts to join the data failed.

| inputlookup sourcetypes.csv
| dedup sourcetype
| table sourcetype
| map [ search index="_internal" source="*metrics.log" group="per_sourcetype_thruput" series=$sourcetype$
    | head 10
    | stats count as counts_data_indexed by _time series
    | predict counts_data_indexed as predicted_counts_data_indexed algorithm=LLP5 holdback=0 future_timespan=1 upper0=upper0 lower0=lower0
    | table sourcetype series _time counts_data_indexed predicted_counts_data_indexed
    | stats max(_time) as next_predicted_time last(series) as sourcetype
    | convert timeformat="%Y-%m-%d %H:%M:%S.%3N" ctime(next_predicted_time) ]

Even if I change the first query so it yields only one sourcetype, the map command still takes forever.
-- I am running this for 30 days

| makeresults
| eval sourcetype="splunkd"
| dedup sourcetype
| map [ search index="_internal" source="*metrics.log" group="per_sourcetype_thruput" series=$sourcetype$
    | head 20
    | stats count as counts_data_indexed by _time series
    | predict counts_data_indexed as predicted_counts_data_indexed algorithm=LLP5 holdback=0 future_timespan=1 upper0=upper0 lower0=lower0
    | table sourcetype series _time counts_data_indexed predicted_counts_data_indexed
    | stats max(_time) as next_predicted_time last(series) as sourcetype
    | convert timeformat="%Y-%m-%d %H:%M:%S.%3N" ctime(next_predicted_time) ]

PS: If I manage to get this query working, I will be able to build many useful queries and alerts based on predicted volume: checking whether a source is sending more or less than usual, determining whether a sourcetype has stopped sending (by comparing against the prediction), and so on.
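One possible way to avoid map entirely (a sketch, assuming the per-sourcetype history fits in a single search) is to pull the metrics for all sourcetypes at once, compute the gap between events per series with streamstats, and project the next expected time from the average gap instead of predict:

```
index=_internal source=*metrics.log group=per_sourcetype_thruput
    [| inputlookup sourcetypes.csv | dedup sourcetype | rename sourcetype as series | table series ]
| bin _time span=1h
| stats count as counts_data_indexed by _time series
| streamstats current=f last(_time) as prev_time by series
| eval gap=_time-prev_time
| stats max(_time) as last_seen avg(gap) as avg_gap by series
| eval next_predicted_time=last_seen+avg_gap
| convert timeformat="%Y-%m-%d %H:%M:%S" ctime(next_predicted_time)
```

This runs as one search instead of one search per sourcetype, which is usually the main cost of map. Note that it replaces the LLP5 prediction with a simple average-interval estimate, so it is a coarser model than predict.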
I installed the Search Activity app 3.0.1 on a Splunk 7.3.4 instance. I do not have LDAP in this environment, and I cannot step through the setup process: I can't get past the Collecting Data page, and the Next button is greyed out. One thing I am curious about is that the "Existing Configuration Retrieved" check says "In Progress", even though I have never installed Search Activity before. I am not sure whether this check never leaving "In Progress" is the reason I cannot pass this page. I want to be able to configure the app with the "No LDAP" option.
Hi all, I need your help in getting the count right for the table below.

Table:

Time        sitecode  count
2020-08-21  FAW       1
2020-08-21  FAW       1
2020-08-21  FAW       1
2020-08-21  FAW       1
2020-08-21  FAW       1

Query:

index=moogsoft_e2e
| eval Time = _time
| fieldformat Time=strftime(Time,"%Y-%m-%d")
| sort - Time
| stats count by Time, sitecode

Expected output:

Time        sitecode  count
2020-08-21  FAW       5
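The likely cause is that fieldformat only changes how Time is displayed; each row still carries its own distinct epoch value, so stats sees five different Time groups. Converting to a day string with eval (or binning _time) before stats collapses them into one group per day. A sketch against the same index:

```
index=moogsoft_e2e
| eval Time=strftime(_time, "%Y-%m-%d")
| stats count by Time, sitecode
| sort - Time
```

Sorting after stats also avoids wasting work ordering events that stats is about to aggregate away.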
Hi, I want to extract dashboard graphs from Splunk using the API. Could you please help with this? I am new to this tool. Raj
Hi, I have a search that returns a field named create_time, with results like this: 2020-08-11T17:10:00+0000. What I want to do is this search:

index="automox" sourcetype="automox:software" severity=critical installed=true os_name="Server*" earliest=-1d
| dedup server_name name

but using the time in create_time as the basis for the earliest=-1d window. Is this sort of thing possible? Cheers.
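Since earliest always works against _time, one way (a sketch; the format string assumes create_time always looks like 2020-08-11T17:10:00+0000) is to parse create_time into an epoch with strptime and filter on it yourself:

```
index="automox" sourcetype="automox:software" severity=critical installed=true os_name="Server*"
| eval create_epoch=strptime(create_time, "%Y-%m-%dT%H:%M:%S%z")
| where create_epoch >= relative_time(now(), "-1d")
| dedup server_name name
```

You still want a reasonable _time window on the outer search so Splunk does not scan the whole index just to apply the where clause.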
Hello, I created an alert that notifies me when a service is down, but when a service has already been down since the last run, I do not want another alert for it; I only want an alert for newly down services. How can I do this? Any help, please!

| inputlookup services_oracle.csv
| search NOT [ search index=* sourcetype=srvscript
    | eventstats max(_time) as TimeEvent
    | where _time = TimeEvent
    | fields CMD ]
| eval statut = "DOWN"
| table CMD statut
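One common pattern (a sketch; the lookup name previously_down.csv is a placeholder) is to keep a state lookup of services you have already alerted on, and alert only on the difference:

```
| inputlookup services_oracle.csv
| search NOT [ search index=* sourcetype=srvscript
    | eventstats max(_time) as TimeEvent
    | where _time = TimeEvent
    | fields CMD ]
| eval statut = "DOWN"
| search NOT [| inputlookup previously_down.csv | fields CMD ]
| table CMD statut
```

A second scheduled search then has to refresh previously_down.csv with the full current down list (via outputlookup), so that a service that recovers and goes down again can alert a second time.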
After filling in the connection fields and clicking Save, Splunk returns the following error: "There was an error processing your request. It has been logged (ID 6f52b801d2e1f92d)." Could someone help me?
Hi guys, I was hoping you could help me. I am using Splunk to analyze some logs that I got from a company, but I don't know how to interpret them. The files I am trying to analyze are in XML, JMX, and .log formats, and they contain real-time information about the company's servers. For example, how can I find errors in these logs? Another thing I can't explain is why some logs have one event while others have more. Thank you in advance!
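For a first pass at finding errors, a generic keyword search across whatever index the logs were loaded into is usually enough (a sketch; the index name is a placeholder, and the useful keywords depend on how the application writes errors):

```
index=company_logs ("ERROR" OR "FATAL" OR "Exception")
| stats count by sourcetype, source
```

As for the event counts: Splunk breaks files into events using per-sourcetype line-breaking and timestamp rules, so a multi-line XML file may be indexed as one large event while a line-oriented .log file becomes one event per line. Adjusting LINE_BREAKER or SHOULD_LINEMERGE in props.conf changes that behavior.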
Hello, I have a table in a dashboard like the one below. When I hover my mouse over any of the results, a pop-up should appear with some text information; for example, when I hover over "TOKEN_VALIDATION", it should show some information as a pop-up. Please suggest a solution.
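In a Simple XML dashboard this usually needs a small JavaScript extension: a custom cell renderer that sets the HTML title attribute on each cell, which the browser shows as a hover tooltip. A sketch (the table id myTable, the field name, and the tooltip text are placeholders):

```
require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/simplexml/ready!'
], function ($, mvc, TableView) {
    var TooltipRenderer = TableView.BaseCellRenderer.extend({
        canRender: function (cell) {
            // only decorate the column we care about
            return cell.field === 'EVENT';
        },
        render: function ($td, cell) {
            $td.text(cell.value)
               .attr('title', 'Details for ' + cell.value); // shown on hover
        }
    });
    mvc.Components.get('myTable').getVisualization(function (tableView) {
        tableView.addCellRenderer(new TooltipRenderer());
    });
});
```

The script is referenced from the dashboard's root tag (script="tooltip.js", with the file in the app's appserver/static directory), and the table element needs a matching id="myTable".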
Hi all, we have a requirement to remove a panel from a dashboard. Can anyone guide me on how this can be achieved? We want to remove "WK: Oracle Sessions By Program" from the dashboard. Please help. Regards, Rahul
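Removing a panel is an edit to the dashboard's Simple XML source (Edit > Edit Source in the dashboard editor): delete the whole panel element whose title matches, including its nested search. A sketch of what to delete (the surrounding XML is illustrative, not your actual dashboard):

```xml
<row>
  <panel>
    <title>WK: Oracle Sessions By Program</title>
    <chart>
      <search>
        <query>...</query>
      </search>
    </chart>
  </panel>  <!-- delete from the opening <panel> through this closing tag -->
</row>
```

If it was the only panel in its row, delete the now-empty row element as well.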
I have a dashboard like the screenshot below. When I click on 1.0.9-SNAPSHOT (highlighted in blue in the screenshot), another panel named "Features Available - 1.0.9-SNAPSHOT" (shown in the red rectangular box) appears. My requirement is that when I click 1.0.9-SNAPSHOT again, the "Features Available - 1.0.9-SNAPSHOT" panel should disappear/hide. Could anyone please suggest the change?
Hi, how do I parse the timestamp below?

2020.08.20 07:38:42 902 +1000
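Assuming the third field (902) is milliseconds and the timestamp sits at the start of the event, a props.conf sketch for the sourcetype would be (the stanza name is a placeholder):

```
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y.%m.%d %H:%M:%S %3N %z
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Here %3N consumes the millisecond field and %z the +1000 timezone offset. If 902 is something other than subseconds, you would skip over it with TIME_PREFIX or a literal in the format instead.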
Hi, I'm having a bit of an issue with the Geographically Improbable Access panel in the Access Anomalies dashboard of the InfoSec app. Basically, if I add a "search user=username" into the search powering it, I get a hit, but without it I don't. So, for a given time period, I get two results for the specified user if I search explicitly for them, which look genuine, but I don't see them in the general search. This is the search (I've marked my additional username filter):

| tstats summariesonly=true allow_old_summaries=true values(Authentication.app) as app from datamodel=Authentication.Authentication where Authentication.action=success by Authentication.user, Authentication.src _time span=1s
| rename "Authentication.*" as "*"
| eventstats dc(src) as src_count by user
| search user=username    <-- my addition
| search src_count>1
| sort 0 + _time
| iplocation src
| where isnotnull(lat) AND isnotnull(lon)
| streamstats window=2 earliest(lat) as prev_lat, earliest(lon) as prev_lon, earliest(_time) as prev_time, earliest(src) as prev_src, earliest(City) as prev_city, earliest(Country) as prev_country, earliest(app) as prev_app by user
| where (src != prev_src)
| eval lat1_r=((lat * 3.14159265358) / 180), lat2_r=((prev_lat * 3.14159265358) / 180), delta=(((prev_lon - lon) * 3.14159265358) / 180), distance=(3959 * acos(((sin(lat1_r) * sin(lat2_r)) + ((cos(lat1_r) * cos(lat2_r)) * cos(delta))))), distance=round(distance,2)
| fields - lat1_r, lat2_r, long1_r, long2_r, delta
| eval time_diff=if((('_time' - prev_time) == 0),1,('_time' - prev_time)), speed=round(((distance * 3600) / time_diff),2)
| where (speed > 500)
| eval prev_time=strftime(prev_time,"%Y-%m-%d %H:%M:%S")
| table user, src, _time, City, Country, app, prev_src, prev_time, prev_city, prev_country, prev_app, distance, speed

Does anyone have any ideas what's going on? @igifrin_splunk Thanks
Hi, I have existing monitoring on my Windows servers. I wasn't the one who implemented it, but I would like to add more Windows servers to monitor. I have already installed the Splunk universal forwarder, but after starting it, splunk-perfmon does not appear in the process list (Task Manager). This is just based on what I found on some servers that are already being monitored. Please help me figure out what to check.
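splunk-perfmon only runs when at least one perfmon input is enabled, so a freshly installed forwarder with no perfmon stanzas will not show that process. A sketch of what an enabled stanza looks like in inputs.conf (the object, counters, and interval here are examples; compare against a working server's configuration, typically under $SPLUNK_HOME\etc\apps\...\local\inputs.conf):

```
[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 10
disabled = 0
```

After copying the relevant stanzas (or the whole Windows TA app) from a monitored server, restart the forwarder and check $SPLUNK_HOME\var\log\splunk\splunkd.log for perfmon-related errors.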
It's cool to use the Splunk Cloud UI to monitor the data. But if I just want to monitor the data on my own site (I have installed the forwarder on my Linux machine), is there any API I can call to get the data that meets some specific condition?
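Splunk exposes a REST search API on the management port, so you can run any search with your condition and pull the results programmatically. A sketch with curl (the host, credentials, index, and condition are placeholders; on Splunk Cloud, access to port 8089 may need to be enabled for your stack first):

```
curl -k -u admin:yourpassword \
     https://your-stack.splunkcloud.com:8089/services/search/jobs/export \
     -d search="search index=main status=500 earliest=-1h" \
     -d output_mode=json
```

The export endpoint streams results as they are found; for large or long-running searches you can instead POST to /services/search/jobs and poll the job. There are also official SDKs (Python, Java, JavaScript) that wrap the same endpoints.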
Hi there, I have a dashboard that splits the results by day of the week, to see, for example, the number of events per day (Monday, Tuesday, ...). My search is like this:

myrequest
| convert timeformat="%A" ctime(_time) AS Day
| chart count by Day
| rename count as "SENT"
| eval wd=lower(Day)
| eval sort_field=case(wd=="monday",1, wd=="tuesday",2, wd=="wednesday",3, wd=="thursday",4, wd=="friday",5, wd=="saturday",6, wd=="sunday",7)
| sort sort_field
| fields - sort_field, wd

The only problem with this search is that sometimes a day or two is missing from the histogram (0 entries), and I want all 7 days always displayed, even with 0 results (like a LEFT JOIN in SQL). Any way to do this? Any help appreciated!
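One way to guarantee all seven days appear (a sketch built on the same search) is to append a zero-count row for every weekday and then take the maximum per day, so real counts win over the padding:

```
myrequest
| convert timeformat="%A" ctime(_time) AS Day
| chart count as SENT by Day
| append
    [| makeresults count=7
     | streamstats count as n
     | eval Day=case(n=1,"Monday", n=2,"Tuesday", n=3,"Wednesday", n=4,"Thursday", n=5,"Friday", n=6,"Saturday", n=7,"Sunday"), SENT=0
     | fields Day SENT ]
| stats max(SENT) as SENT by Day
| eval sort_field=case(Day=="Monday",1, Day=="Tuesday",2, Day=="Wednesday",3, Day=="Thursday",4, Day=="Friday",5, Day=="Saturday",6, Day=="Sunday",7)
| sort sort_field
| fields - sort_field
```

Taking max(SENT) works here because the padded value is always 0, so any real count replaces it.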