All Posts


Had this same error after uninstalling the Splunk forwarder and then installing Splunk Enterprise on RHEL 9. Rebooted the system, then ran the install again and had no reported memory errors. The Python script fix above will work, but a reboot could work as well.
Thank you @yuanliu, your solution worked. I had to make the minor modifications below, but thank you very much indeed. Modified the section after "... where count > 1" to:

| where count > 1
| table Date Token _time
| eval idx = mvrange(0, mvcount(_time))
| eval TimeGaps_Secs = mvmap(idx, if(idx > 0, tonumber(mvindex(_time, idx)) - tonumber(mvindex(_time, idx - 1)), null()))
| fieldformat Date = strftime(_time, "%F %T.%2N")
| table Token Date TimeGaps_Secs

Thank you again.
Please find the screenshot below; let me know where I am missing it.
Updated the post, thank you for the tip!
Hi, We have been continuously in violation for the past 3 or 4 months, as we are ingesting 600 to 800 GB on top of the daily limit. We have received multiple hard warnings. My question is: what will happen if we continue to be in violation or exceed the daily indexing volume limit? I appreciate your answer in advance. Thanks.
Hi @chorn3567, please share your search in text mode (using the Insert/Edit code sample button), otherwise it's really difficult to help you. Ciao. Giuseppe
Hi All! First post, super new user to Splunk. I have a search that I modified from one a team member previously created. I'm trying to take the output of ClientVersion and compare the 6wkAvg count to the Today count for the same timespan and see what the percentage -/+ is, ultimately building towards alerting when below a certain threshold. One idea for that last step is sketched after the search.

| fields _time ClientVersion
| eval DoW=strftime(_time, "%A")
| eval TodayDoW=strftime(now(), "%A")
| where DoW=TodayDoW
| search ClientVersion=FAPI*
| eval ClientVersion=if((like("ClientVersion=FAPI*","%OR%") OR false()) AND false(), "Combined", ClientVersion)
| bin _time span=5m
| eval tempTime=strftime(_time,"%m/%d")
| where (tempTime!="null")
| eval tempTime=if(true() AND _time < relative_time(now(), "@d"), "6wkAvg", "Today")
| stats count by ClientVersion _time tempTime
| eval _time=round(strptime(strftime(now(),"%Y-%m-%d").strftime(_time,"%H:%M:%S"),"%Y-%m-%d%H:%M:%S"),0)
| stats avg(count) as count by ClientVersion _time tempTime
| eval ClientVersion=ClientVersion."-".tempTime
| eval count=round(count,0)
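For the percentage step I'm heading towards, one rough, untested sketch: instead of concatenating tempTime into ClientVersion at the end, pivot the two series side by side and compute the change (the -20 threshold is just a placeholder):

| stats avg(count) as count by ClientVersion _time tempTime
``` pivot 6wkAvg and Today into columns per ClientVersion ```
| chart avg(count) over ClientVersion by tempTime
``` percentage change of Today vs. the six-week average ```
| eval pct_change = round((Today - '6wkAvg') / '6wkAvg' * 100, 1)
``` keep only versions that dropped below the alert threshold ```
| where pct_change < -20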
@AAlhabba, thank you for the solution. Worked like a charm.
I think you have the right idea, but streamstats doesn't work with multi-value fields. Try this untested search:

index=myindex Token=*
| streamstats window=2 range(_time) as time_gap by Token
| stats list(_time) as _time list(time_gap) as time_gaps count by Token
| eval Date=strftime(_time,"%F %H:%M:%S.%2q")
| where count > 1
| table Token Date time_gaps
Obviously you must not use strftime before the delta calculation. With this point out of the way, try

index=myindex
| stats values(_time) as Date values(recs) as recs count by Token
| where count > 1
| fields Token Date
| eval idx = mvrange(0, mvcount(Date))
| eval delta = mvmap(idx, if(idx > 0, tonumber(mvindex(Date, idx)) - tonumber(mvindex(Date, idx - 1)), null()))
| fieldformat Date = strftime(Date, "%F %T.%2N")
| table Token Date delta

The output from your sample data is

Date                      Token    delta
2024-06-25 17:20:08.26    363311   222.860000
2024-06-25 17:23:51.12

2024-06-25 18:10:58.86    231321   29.260000
2024-06-25 18:11:28.12             51.260000
2024-06-25 18:12:19.38             62.520000
2024-06-25 18:13:21.90

2024-06-25 15:17:18.06    827341   1229.870000
2024-06-25 15:37:47.93             195.280000
2024-06-25 15:41:03.21

Here is a data emulation for you to play with and compare with real data.

| makeresults format=csv data="Token, Date
363311, 2024-06-25 17:20:08.26 :: 2024-06-25 17:23:51.12
231321, 2024-06-25 18:10:58.86 :: 2024-06-25 18:11:28.12 :: 2024-06-25 18:12:19.38 :: 2024-06-25 18:13:21.90
827341, 2024-06-25 15:17:18.06 :: 2024-06-25 15:37:47.93 :: 2024-06-25 15:41:03.21"
| eval Date = split(Date, " :: ")
| eval Date = strptime(Date, "%F %T.%2N")
``` the above emulates
index=myindex
| stats values(_time) as Date values(recs) as recs count by Token
| where count > 1
| fields Token Date ```
I'm seeing the same behavior after upgrading to 9.2.1 and including the new indexes. Any update on this?
To which add-on do you refer?  The installation instructions for it should say on which instance(s) it should be installed, but if you give us the name then we should be able to provide an answer. Indexes, custom or otherwise, are always created on the indexers.  For a better user experience, they should also be defined on search heads.  Do that by deploying the same indexes.conf file to both instances, but without volume references on the SH.
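For illustration, a minimal sketch of what that could look like (the index name, volume name, and paths here are placeholders, not from any specific add-on):

# indexes.conf on the indexers -- volume-based paths
[volume:hotwarm]
path = /opt/splunk/var/lib/splunk

[my_custom_index]
homePath   = volume:hotwarm/my_custom_index/db
coldPath   = volume:hotwarm/my_custom_index/colddb
# thawedPath cannot reference a volume
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb

# indexes.conf on the search heads -- same stanza, no volume references
[my_custom_index]
homePath   = $SPLUNK_DB/my_custom_index/db
coldPath   = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb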
Like @richgalloway said, Splunk is not great at searching for missing things. Meanwhile, if you already have the inventory, there is something you can do. Assuming lookup myinventory is in the form of

hostname  IPaddress
abc       0.0.0.0
abc       2.2.2.2
xyz       4.5.6.7
zab       7.8.9.10
zab       6.7.8.9

and the requirement is to capture the entries from the lookup where hostname in this lookup has no matching entry with hostname in the index search, and where IPaddress in this lookup has no matching entry with IPaddress or hostname in the index search. To make our task simpler, further assume that if an index search event matches anything in the lookup, that hostname and/or IPaddress is/are no longer a candidate. This is what you can try:

index=asset_inventory
| stats values(hostname) as hostname values(IPaddress) as IPaddress
| appendcols
    [inputlookup myinventory
    | stats values(hostname) as lookupname values(IPaddress) as lookupaddress]
| eval missingname = mvmap(lookupname, if(lookupname != hostname, lookupname, null()))
| eval missingaddress = mvmap(lookupaddress, if(lookupaddress != IPaddress AND lookupaddress != hostname, lookupaddress, null()))
| lookup myinventory IPaddress as missingaddress output hostname as addressmissingname
| eval missingname = mvappend(missingname, mvmap(addressmissingname, if(addressmissingname != hostname, addressmissingname, null())))
| table missingname

Note: the search takes advantage of Splunk's equality evaluation with multivalue fields. This search becomes complicated because your index search may return an IP address in hostname, and apparently you care about those entries. If we ignore those entries and only compare hostnames with the inventory, the search can be as simple as

index=asset_inventory
| stats values(hostname) as hostname
| appendcols
    [inputlookup myinventory
    | stats values(hostname) as lookupname]
| eval missingname = mvmap(lookupname, if(lookupname != hostname, lookupname, null()))
| fields - hostname
Hi, I need help in extracting the time gaps in a multi-value field represented as Date. My data output looks like this:

index=myindex
| stats values(_time) as _time values(recs) as recs count by Token
| eval Date=strftime(_time,"%F %H:%M:%S.%2q")
| where count > 1
| table Token Date

Token     Date
363311    2024-06-25 17:20:08.26
          2024-06-25 17:23:51.12
231321    2024-06-25 18:10:58.86
          2024-06-25 18:11:28.12
          2024-06-25 18:12:19.38
          2024-06-25 18:13:21.90
827341    2024-06-25 15:17:18.06
          2024-06-25 15:37:47.93
          2024-06-25 15:41:03.21

I would like to display the difference in timestamps in a new column called "time_gaps", which would list the time in seconds between the latest time and the previous time. Some Tokens have only 2 timestamps, so there should be only 1 value in the time_gaps field; however, others that have 4 should have values representing the time difference between the 1st and 2nd, 2nd and 3rd, and 3rd and 4th. I tried streamstats but it seems I may be doing something wrong. Any clean and effective SPL would be appreciated. Thanks
Working with support for our cloud instance: they removed the passwords.conf file because the old API key was still in there and was not being removed when updated with a newly generated API key. I then regenerated a new API key, verified correct permissions in S1, and that resolved the issue. Also note that SentinelOne changed the length of time a regular user account can hold an API key to only 30 days. This one was used by a previous admin, so I created a new service account just for Splunk logs; there you can specify a longer key life (30d, 60d, 90d, etc.).
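If you want to check for lingering stored credentials yourself before involving support, a quick untested sketch using the credential-storage REST endpoint (requires admin capabilities; not specific to the SentinelOne app):

``` list stored credentials on this search head and where they are scoped ```
| rest /services/storage/passwords splunk_server=local
| table title eai:acl.app username realm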
I haven't messed with that at all; I simply have the box checked for Link to Results.
Hi, thanks for this. I have the following on 3 indexers. In the DB folder, the hot buckets have the same name on some indexes, so I don't think I can copy these. Perhaps I should not copy them over and go for the other ones. I also see data in the datamodel_summary section, but I have no data models on this data. Perhaps I don't need to copy these as well? Cheers, Rob
It is not really the search; it is how you set up the link in the alert.
Hi sir, now I got it, and your command is working perfectly fine in all scenarios. Thanks very much.
Hi sir, thanks for your quick reply. I tried this command and it worked. But I forgot to mention that I also have IP addresses under the host field. Please guide me on this scenario. Thanks