@Splunkerninja, this is clearly not due to the row separation but to how the URL for fetching the search results is formed. So if the current dashboard is working for you with the search result URL, first close the </row> after the table panel and open another <row> element before the html element. This will ensure the existing dashboard keeps working as expected. As a second step, change the visibility of the row where the table is listed by setting a "depends" clause with a non-existent token.
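A minimal sketch of the two steps above (panel contents and the token name are placeholders, not from your dashboard):

```xml
<!-- Step 1: the table panel gets its own row, closed before the html element -->
<!-- Step 2: hide that row with a "depends" on a token that is never set -->
<row depends="$never_set_token$">
  <panel>
    <table>
      <search>...</search>
    </table>
  </panel>
</row>
<row>
  <panel>
    <html>...</html>
  </panel>
</row>
```

Because $never_set_token$ is never assigned a value, the first row stays hidden while its search still runs and drives the rest of the dashboard.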
Hi @apomona, this search is an alert that triggers when the EventCode is missing, telling you that the missed host didn't send any event in the last minute, but it gives you no information about when you received the last event. If you want a report about the periods when the EventCode is missing, you should use a different search:

index="ad_windows" EventCode=4776 earliest=-60m@m latest=@m
| eval period=if(_time>now()-60,"Last","Previous")
| stats count(eval(period="Last")) AS count latest(_time) AS _time BY host
| append [ | inputlookup DomainController.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0
| table host _time

In this way, you check whether all the hosts in the lookup sent events with the above EventCode and, when one is missing, you also see the last event received in the last hour. Ciao. Giuseppe
Hi,

I have set up a scheduled PDF report of a complex dashboard with several graphs to be emailed, but unfortunately I keep getting a timeout error. The dashboard takes about 2 minutes to display. A simple dashboard with a single graph works perfectly, so I'm sure the rest of the config is okay. What I did notice, however, is that I receive the email with the following error about a minute after the report was scheduled:

Scheduled view delivery. An error occurred while generating the PDF. Please see python.log for details.

Please help.
Please can you share some anonymised representative events demonstrating your issue?
Thanks @richgalloway, please find the attached snaps as I am restricted to the GUI.
Try like this:

<panel id="pqr">
  <input type="time" token="time">
    <label>DateTime</label>
    <default>
      <earliest>@d</earliest>
      <latest>now</latest>
    </default>
  </input>
</panel>
<panel id="abc">
  <title>Latest time token $latest_Time$</title>
  <input type="dropdown" token="timedrop">
    <label>Time Dropdown</label>
    <choice value="now">Now</choice>
    <choice value="+3d">3d</choice>
    <choice value="+4d">4d</choice>
    <choice value="+5d">5d</choice>
    <default>now</default>
    <change>
      <eval token="latest_Time">if(isnull('timedrop') or 'timedrop'="now",now(),relative_time(if($time.latest$="now",now(),$time.latest$), $timedrop$))</eval>
    </change>
  </input>
</panel>

There doesn't seem to be a way to set an initial value on a time input - perhaps this is a bug?
Yes, I know that the search head is not for storing indexes and data, but I've seen that there is also a best practice of forwarding the search head's own data to the indexer layer. Since I don't have an indexer cluster, I need to choose only one indexer of the two. That's why I'm looking for the most suitable of the two methods I've proposed.
Editing to make it better:

Let's say I have login events with 2 important fields: past_deviceid and new_deviceid. I want to check whether the new_deviceid was assigned to a different user in the past. For that I need to compare the value of the field to the past_deviceid field of past events, and I'm kinda stuck here.

In login events where the user uses their usual device, there'll be only 1 field called past_deviceid; we get the new_deviceid field only when there's a login with a new device.

In the end I want a table that shows each new_deviceid with all the users that hold/held it, where there's more than 1 user.

Example events with only 1 device:

User: Josh
old_Device: iPhone12348
---------------------------
User: John
old_Device: samsung165
----------------------------

Case where there's a new device:

User: Jane
old_Device: iPhone17778
new_Device: samsung165

I want to have the following table; I guess the stats command fits here:

DeviceID: samsung165
Users: Jane, John
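For the table described above, a stats-based sketch might look like this (the index name is a placeholder, and it assumes the field names old_Device/new_Device/User from the example events):

```
index=login_events
| eval DeviceID=coalesce(new_Device, old_Device)
| stats values(User) AS User dc(User) AS user_count BY DeviceID
| where user_count > 1
| table DeviceID User
```

coalesce() treats a login as belonging to the new device when one is present, otherwise to the usual device; dc(User) then keeps only the devices seen under more than one user.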
Hello,

Thanks for your message. I have indeed 4 devices (Domain Controllers) I want to check. I created the CSV file DomainController.csv with 1 column called host. I want to specify that I always receive this Event, and I want the alert to trigger when I stop receiving it for 1 minute. Here is the Splunk query I use. Tell me if this is right:

index="ad_windows" EventCode=4776
| stats count BY host
| append [ | inputlookup DomainController.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

The result is a table with each DC in the first column and 0 in the column called total.

-------------------------------------------------------

However, I know for a fact that one of my DCs shut down for 5 minutes this morning, which means I stopped receiving Event 4776 for 5 minutes. But when I use your query with the time range set to the shutdown period, I still get 0.
You should not be installing indexes on the SH - this is just for search purposes - data is stored on the indexers and your indexes should be defined there.

From your comment "so the collected logs are stored in the search head's index"... this is not the way. The App/Add-on contains knowledge objects which are used for things like dashboards and parsing search-time data; the TA should only be installed on the indexer, or on a heavy forwarder if that is where the data is sent first.

As to your comment "I set up a new instance for the heavy forwarder on which I install the add-on, and I configure it to forward the indexes to the Indexer." This is the way forward.
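For reference, forwarding from a heavy forwarder to the indexers is configured in outputs.conf - a minimal sketch, assuming placeholder hostnames and the default receiving port 9997:

```
# outputs.conf on the heavy forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# both indexers listed; the forwarder load-balances between them
server = indexer1.example.com:9997, indexer2.example.com:9997
```

The indexers must have a receiving port enabled (Settings > Forwarding and receiving, or inputs.conf) for this to work.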
Yeah, that's correct - basically these are 4 events. I am putting the config taken from the GUI below.
I have an architecture with a single SH and two indexers. I've installed the Splunk for Microsoft 365 add-on on the search head, so the collected logs are stored in the search head's index, but I want them to be stored on the indexers.

Here are two other solutions:
- Either I continue with the initial setup and select only one indexer among the two to be the storage location for both the search head's data and the add-on's data.
- Or I set up a new heavy forwarder instance on which I install the add-on, and I configure it to forward the data to the indexer.

Which solution is best in my case?
Hello,

My events are indexed pretty quickly (really close to real time, maybe 1 or 2 seconds delay tops).

I mean, when I run the search

index="*" | where ComputerName="ComputerName" | search EventCode=4776

I get all events, even the real-time ones. But whenever I add earliest=-10m latest=-2m, for example, I don't have any results left.
Hi @apomona,

the question is: on how many servers do you want to check whether the above EventCode isn't present? Or, in other words, do you have a monitoring perimeter?

If yes, put it in a lookup (called e.g. perimeter.csv) containing at least one column (called host), and then run a search like this:

index=wineventlog sourcetype=xmlwineventlog EventCode=4776
| stats count BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If you haven't a perimeter and you're sure that at least once in the last hour you received an event with this EventCode, you could try:

index=wineventlog sourcetype=xmlwineventlog EventCode=4776 earliest=-60m@m latest=@m
| eval period=if(_time>now()-3600,"Last","Previous")
| stats dc(period) AS period_count values(period) AS period latest(_time) AS _time BY host
| where period_count=1 AND period="Previous"

I prefer the first solution because it gives you more control: with the second one, you only check hosts seen in the last hour.

About the second question: avoid Real-Time alerts, because a real-time search takes a CPU and never releases it! It's always better to run a scheduled search (e.g. every 5 minutes); choose the frequency best suited to your requirements. About the trigger condition, using my solution you can trigger the alert when you have results (results > 0).

Ciao. Giuseppe
How quickly are your events getting indexed? For example, if you look at the _indextime field and compare it to the _time field, you will notice a lag. If this is over a minute, it could be that the events between -2m and -1m are not indexed before -1m, and would therefore not show up in your alert search.
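One way to measure that lag is to compute the difference between the two timestamps directly - a sketch, substituting your own index and event filter:

```
index="ad_windows" EventCode=4776
| eval lag=_indextime-_time
| stats avg(lag) AS avg_lag_seconds max(lag) AS max_lag_seconds
```

If max_lag_seconds regularly exceeds 60, widening the alert's latest= boundary (or scheduling the search with a delay) would avoid missing late-arriving events.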
To get you started, here's a number of links for you to read and work through. In short, you need the *nix TA, a UF, and to configure inputs and outputs based on your requirements.

# This shows you the TA required (Nix TA)
https://splunkbase.splunk.com/app/833

# This shows you the OSes supported - macOS is listed
https://docs.splunk.com/Documentation/AddOns/latest/UnixLinux/About

# Read the release notes
https://docs.splunk.com/Documentation/AddOns/released/UnixLinux/Releasenotes

And you will need to install a Universal Forwarder for macOS + configure outputs and TA inputs:
https://www.splunk.com/en_us/download/universal-forwarder.html
I am not sure why you are getting different results in Fast and Verbose mode, but there appear to be some oddities with your search:

| eval newtime=strftime(_time,"%m/%d/%y %H:%M:%S")

This creates a string - what is the maximum of a string?

| stats ... max(newtime) as "event time"

The field times does not exist (after the previous stats command):

| eval times=mvindex(times, 0, 2)

I am not sure whether these make a difference, though.
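If the intent is the latest event time, one possible rework (a sketch, keeping your original format string) is to take the maximum of the numeric _time first and only format it afterwards:

```
| stats max(_time) AS latest_time BY host
| eval "event time"=strftime(latest_time,"%m/%d/%y %H:%M:%S")
```

max() on the epoch value compares numerically, whereas max() on the strftime string compares lexicographically, which gives wrong answers across month/year boundaries with that "%m/%d/%y" format.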
Hello all,

I am using Splunk Cloud. I have been looking on the forum since yesterday in order to create an alert when an event is not detected. My idea is to send a mail when Event 4776 is not detected.

The closest I have is this:

index="*" | where ComputerName="ComputerName" | search EventCode=4776

This gives me every event 4776 on the device ComputerName. I wanted to add earliest=-2m@m latest=-1m@m like I saw in different places, but then the result goes to 0, while I know this event is sent multiple times per second (multiple like 100 times).

Second question: when I save it as an alert, I specify Real Time, Trigger when Specified: search count=0. Is this right? I saw people saying results=0 but I get this error: Cannot parse alert condition. Unknown search command 'results'.

Thanks for the help