All Posts


Obviously you must not apply strftime before the delta calculation.  With that point out of the way, try

index=myindex
| stats values(_time) as Date values(recs) as recs count by Token
| where count > 1
| fields Token Date
| eval idx = mvrange(0, mvcount(Date))
| eval delta = mvmap(idx, if(idx > 0, tonumber(mvindex(Date, idx)) - tonumber(mvindex(Date, idx - 1)), null()))
| fieldformat Date = strftime(Date, "%F %T.%2N")
| table Token Date delta

The output from your sample data is

Date                     Token    delta
2024-06-25 17:20:08.26   363311   222.860000
2024-06-25 17:23:51.12

2024-06-25 18:10:58.86   231321   29.260000
2024-06-25 18:11:28.12            51.260000
2024-06-25 18:12:19.38            62.520000
2024-06-25 18:13:21.90

2024-06-25 15:17:18.06   827341   1229.870000
2024-06-25 15:37:47.93            195.280000
2024-06-25 15:41:03.21

Here is a data emulation for you to play with and compare with real data.

| makeresults format=csv data="Token, Date
363311, 2024-06-25 17:20:08.26 :: 2024-06-25 17:23:51.12
231321, 2024-06-25 18:10:58.86 :: 2024-06-25 18:11:28.12 :: 2024-06-25 18:12:19.38 :: 2024-06-25 18:13:21.90
827341, 2024-06-25 15:17:18.06 :: 2024-06-25 15:37:47.93 :: 2024-06-25 15:41:03.21"
| eval Date = split(Date, " :: ")
| eval Date = strptime(Date, "%F %T.%2N")
``` the above emulates
index=myindex
| stats values(_time) as Date values(recs) as recs count by Token
| where count > 1
| fields Token Date ```
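Since the question mentions trying streamstats, here is a rough sketch of that route as well; the mvexpand step and the prev/time_gaps field names are illustrative, not from the answer above, so treat it as an untested alternative.

index=myindex
| stats values(_time) as Date by Token
| mvexpand Date
| streamstats current=f last(Date) as prev by Token
``` current=f makes last(Date) return the previous row's value within each Token ```
| eval time_gaps = Date - prev
| fieldformat Date = strftime(Date, "%F %T.%2N")
| table Token Date time_gaps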
I'm seeing the same behavior after upgrading to 9.2.1 and including the new indexes. Any update on this?
To which add-on do you refer?  The installation instructions for it should say on which instance(s) it should be installed, but if you give us the name then we should be able to provide an answer.

Indexes, custom or otherwise, are always created on the indexers.  For a better user experience, they should also be defined on the search heads.  Do that by deploying the same indexes.conf file to both instances, but without volume references on the SH.
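For illustration only, a minimal indexes.conf sketch; the index name and volume names are made up, so adjust the paths to your environment.

# indexes.conf on the indexers
[my_custom_index]
homePath   = volume:hot_volume/my_custom_index/db
coldPath   = volume:cold_volume/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb

# indexes.conf on the search heads -- same stanza, no volume references
[my_custom_index]
homePath   = $SPLUNK_DB/my_custom_index/db
coldPath   = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb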
Like @richgalloway said, Splunk is not great at searching for missing things.  Meanwhile, if you already have the inventory, there is something you can do.

Assuming lookup myinventory is in the form of

hostname  IPaddress
abc       0.0.0.0
abc       2.2.2.2
xyz       4.5.6.7
zab       7.8.9.10
zab       6.7.8.9

and the requirement is to capture the lookup entries where hostname in this lookup has no matching hostname in the index search, and where IPaddress in this lookup matches neither IPaddress nor hostname in the index search.

To make our task simpler, further assume that if an index search event matches anything in the lookup, that hostname and/or IPaddress is/are no longer a candidate.  This is what you can try:

index=asset_inventory
| stats values(hostname) as hostname values(IPaddress) as IPaddress
| appendcols
    [inputlookup myinventory
    | stats values(hostname) as lookupname values(IPaddress) as lookupaddress]
| eval missingname = mvmap(lookupname, if(lookupname != hostname, lookupname, null()))
| eval missingaddress = mvmap(lookupaddress, if(lookupaddress != IPaddress AND lookupaddress != hostname, lookupaddress, null()))
| lookup myinventory IPaddress as missingaddress output hostname as addressmissingname
| eval missingname = mvappend(missingname, mvmap(addressmissingname, if(addressmissingname != hostname, addressmissingname, null())))
| table missingname

Note: the search takes advantage of Splunk's equality evaluation with multivalue fields.

This search becomes complicated because your index search may return an IP address in hostname, and apparently you care about those entries.  If we ignore those entries and only compare hostnames with the inventory, the search can be as simple as

index=asset_inventory
| stats values(hostname) as hostname
| appendcols
    [inputlookup myinventory
    | stats values(hostname) as lookupname]
| eval missingname = mvmap(lookupname, if(lookupname != hostname, lookupname, null()))
| fields - hostname
Hi,

I need help extracting the time gaps in a multi-value field represented as Date. My data output looks like this:

index=myindex
| stats values(_time) as _time values(recs) as recs count by Token
| eval Date = strftime(_time, "%F %H:%M:%S.%2q")
| where count > 1
| table Token Date

Token     Date
363311    2024-06-25 17:20:08.26
          2024-06-25 17:23:51.12
231321    2024-06-25 18:10:58.86
          2024-06-25 18:11:28.12
          2024-06-25 18:12:19.38
          2024-06-25 18:13:21.90
827341    2024-06-25 15:17:18.06
          2024-06-25 15:37:47.93
          2024-06-25 15:41:03.21

I would like to display the difference in time stamps in a new column called "time_gaps", which would list the time in seconds between the latest time and the previous time. Some Tokens have only 2 time stamps, so there should be only 1 value in the time_gaps field; however, others that have 4 should have values representing the time difference between the 1st and 2nd, 2nd and 3rd, and 3rd and 4th. I tried streamstats but it seems I may be doing something wrong. Any clean and effective SPL would be appreciated. Thanks
Working with support for our cloud instance: they removed the passwords.conf file because the old API key was still in there and was not being removed when updated with a newly generated API key. I then regenerated a new API key, verified the correct permissions in S1, and that resolved the issue.

Also note that SentinelOne changed the API key lifetime for a regular user account to only 30 days. This key was used by a previous admin, so I created a new service account just for Splunk logs; there you can specify a longer key life (30d, 60d, 90d, etc.).
I haven't messed with that at all, I simply have the box checked for Link to Results. 
Hi, thanks for this. I have the following on 3 indexers.

In the DB folder, the hot buckets have the same names on some indexes, so I don't think I can copy those. Perhaps I should not copy them over and go for the other ones. I also see the data in the datamodel_summary section, but I have no data models on this data, so perhaps I don't need to copy those either?

Cheers
Rob
It is not really the search; it is how you set up the link in the alert.
Hi sir, now I got it, and your command is working perfectly in all scenarios. Thanks much.
Hi sir, thanks for your prompt reply.  I tried this command and it worked. But I forgot to mention that I also have IP addresses under the Host field. Please guide me on this scenario. Thanks
Made a couple minor adjustments but this is what I needed to solve my issue. Thank you.
There may be many ways to do that.  Here's one.

...
| rex field=Host "(?<part1>[^\.]+)"
``` If the field just extracted is a number then the Host field probably is an IP address ```
| eval Host = if(isnum(part1), Host, part1)
...
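To try it out in isolation, here is a small emulation of the approach above; the sample hostname and IP address are made up.

| makeresults format=csv data="Host
abc.example.com
10.1.2.3"
| rex field=Host "(?<part1>[^\.]+)"
| eval Host = if(isnum(part1), Host, part1)
``` abc.example.com becomes abc; 10.1.2.3 is left unchanged ```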
I came across this post about Splunk Enterprise upgrades: https://community.splunk.com/t5/Installation/What-do-I-validate-after-I-upgrade-Splunk-Enterprise-to-confirm/m-p/479261

I need details about what to validate after an ES upgrade. I already have this from the Splunk docs, but I am looking for something as detailed as the above post, for ES: https://docs.splunk.com/Documentation/ES/7.3.1/Install/Upgradetonewerversion#Step_5._Validate_the_upgrade
The search is very basic. The system is isolated so I can't copy/paste, but it's just searching for one event code, action, and signature and tabling the results. There is nothing unique or unusual compared to other alert searches. I don't understand how anything in the search string could cause a 404 error.
I wonder if an outer join might have worked, but join is rarely the best answer because it performs poorly.

One other approach is to use a subsearch to find the interesting transaction IDs and then search for those IDs.

index="data"
    [search index="data"
    | stats values(eventtype) as eventtype by transaction_id
    | search eventtype="TYPE1" AND eventtype="TYPE2"
    | fields transaction_id
    | format ]
| table *
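A subsearch-free sketch using eventstats is another option, with the same assumed index and field names as above; eventstats annotates every event with the per-transaction value set, so test it against your data volume.

index="data"
| eventstats values(eventtype) as types by transaction_id
``` keep only transactions that contain both event types ```
| search types="TYPE1" AND types="TYPE2"
| table *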
Start by creating a search which retrieves the information you are trying to find. How far have you got with that?
I am also experiencing this issue and have yet to find a solution. I am hopeful that the community will provide an answer to this problem.
It isn't in the XML code you posted
The following macro formats the time to a standard UTC timezone:

[utc]
definition = eval time_offset=strftime(_time,"%:::z") | convert num(time_offset) | eval time_offset=if(time_offset<=0, "+" . -time_offset, tostring(-time_offset)), time_utc=relative_time(_time,time_offset . "h") | convert timeformat="%F %T UTC" ctime(time_utc) | convert `timeformat` ctime(_time) AS time_local

The following macro sets the time to the timezone of your choice:

[tz(1)]
definition = eval utc_offset=strftime(_time,"%:::z") | convert num(utc_offset) | eval tz_offset = $tz$ - utc_offset, tz_offset = if(tz_offset>=0,"+".tz_offset,tz_offset), utc_offset = if(utc_offset<=0,"+".-utc_offset,tostring(-utc_offset)) | eval time_tz=relative_time(_time, tz_offset . "h"), utc_time=relative_time(_time,utc_offset . "h") | convert timeformat="%F %T UTC" ctime(utc_time) | convert timeformat="%F %T UTC$tz$" ctime(time_tz) | convert `timeformat` ctime(_time) AS my_time | fields - tz_offset utc_offset* | rename time_tz AS "time:$tz$"
args = tz

[timeformat]
definition = timeformat="%F %T UTC%:::z %Z"
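Assuming the macros above are saved in macros.conf and shared, invoking them in a search might look like this; the index and the -5 offset are purely illustrative.

index=_internal | head 5 | `utc`

index=_internal | head 5 | `tz(-5)`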