All Posts

When I set the timeframe to 7 days and run my Splunk query in Grafana, it returns values. But if I increase the timeframe to 14 days or more, it returns NoData in Grafana. However, a dashboard I created in Splunk with the same query returns values. Can anyone give some suggestions?
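One way to narrow this down is to run the same query over the same 14-day window directly against the Splunk REST API; if that returns data, the problem is likely on the Grafana side (for example the datasource's query timeout) rather than in Splunk. A minimal sketch, with hypothetical host, credentials, and search:

curl -k -u admin:changeme https://splunk-host:8089/services/search/jobs/export \
    -d search="search index=my_index sourcetype=my_sourcetype | stats count" \
    -d earliest_time="-14d@d" \
    -d latest_time="now" \
    -d output_mode=json
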
Hey all, I have downloaded the SSL Certificate lookup app. I am using this search to see information about the certificate, but it gives me no information.

| makeresults
| eval dest="example.com"
| mvexpand dest
| lookup sslcert_lookup dest OUTPUT ssl_subject_common_name ssl_subject_alt_name ssl_end_time ssl_validity_window
| eval ssl_subject_alt_name = split(ssl_subject_alt_name,"|")
| eval days_left = round(ssl_validity_window/86400)

The domain is using port 8441. When I add, for example, splunk.com it works, but not the one I want to see. What is wrong in the search, or what should I add? Thanks in advance.

Try using different token names, e.g. earliest_time and latest_time.
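That would mean setting the interaction tokens as earliest_time and latest_time and referencing them in panel 2's search. A minimal sketch, assuming the siem/triage search from the question below:

index=siem sourcetype=triage status=$status$ earliest=$earliest_time$ latest=$latest_time$
``` ...rest of the panel 2 search... ```

If the dashboard's shared time input also drives a token named latest, renaming the drilldown token sidesteps the collision.
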
I'm trying to pass 3 tokens from panel 1 into panel 2: earliest time, latest time, and a basic field value. I can get the earliest time and field value to work, but latest time always defaults to "now" no matter what I try.

Panel 1 is a stacked timechart over a three-week period; each stack is one week. The values in the stack are different closure statuses from my SIEM. I want to be able to click on a closure status in a single week and see the details of just the statuses from that week in panel 2 (e.g. Mon Jun 17 - Sun Jun 23).

Panel 1 looks like:

index=siem sourcetype=triage
| eval _time=relative_time(_time,"@w1") ```so my stacks start on monday```
| timechart span=1w@w1 count by status WHERE max in top10 useother=false
| eval last=_time+604800 ```manually creating a latest time to use as token```

Note: panel 1 is using a time input shared across most panels in the dashboard (defaulting to 3 Mondays ago).

In Configuration > Interaction, I'm setting 3 tokens: status=name, earliest=row._time.value, and latest=row.last.value.

Panel 2 looks like:

index=siem sourcetype=triage earliest=$earliest$ latest=$latest$
| rest of search

When I click a status in week 1 (2 weeks ago) I get statuses for weeks 1, 2, and 3 (earliest and status tokens are working).
When I click a status in week 2 (1 week ago) I get statuses for weeks 2 and 3 (earliest and status tokens are working).
When I click a status in week 3 (current week) I get the current week (earliest and status tokens are working).
Latest always defaults to now.

I've done something similar in the old dashboard, where I eval'd the time modifiers while setting the token, but I am much less familiar with JSON and not sure if this is a possibility. What I had previously done:

<eval token="earliest">$click.value$-3600</eval>

I had this same error after uninstalling the Splunk forwarder and then installing Splunk Enterprise on RHEL 9 Linux. I rebooted the system, ran the install again, and had no reported memory errors. The Python script fix above will work, but a reboot could work as well.

Thank you @yuanliu, your solution worked. I had to make the minor modifications below, but thank you very much indeed.

I modified the section after "... where count > 1" to the following:

| where count > 1
| table Date Token _time
| eval idx = mvrange(0, mvcount(_time))
| eval TimeGaps_Secs = mvmap(idx, if(idx > 0, tonumber(mvindex(_time, idx)) - tonumber(mvindex(_time, idx - 1)), null()))
| fieldformat Date = strftime(_time, "%F %T.%2N")
| table Token Date TimeGaps_Secs

Thank you again.

Please find the screenshot below and let me know what I am missing.
Updated the post, thank you for the tip!
Hi, we have been continuously in violation for the past 3 or 4 months, as we are ingesting 600 to 800 GB on top of the daily limit. We have received multiple hard warnings. My question is: what will happen if we continue to be in violation or exceed the daily indexing volume limit? I appreciate your answer in advance. Thanks.
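To quantify the overage per day, the license usage log on the license manager can be charted; a minimal sketch, assuming the default _internal logging is intact (RolloverSummary events hold the finalized daily totals):

index=_internal source=*license_usage.log* type=RolloverSummary earliest=-30d@d
| eval GB = round(b/1024/1024/1024, 2)
| timechart span=1d sum(GB) as daily_ingest_GB

As for consequences: historically, 5 warnings in a rolling 30-day window put an Enterprise license in violation and blocked search until it cleared, while newer no-enforcement licenses (8.1+) keep search running; either way, a sustained overage is a contractual matter to settle with your Splunk sales team.
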
Hi @chorn3567, please share your search in text mode (using the Insert/Edit code sample button), otherwise it's really difficult to help you. Ciao. Giuseppe
Hi all! First post, super new user to Splunk. I have a search that I modified from one a team member previously created. I'm trying to take the output of ClientVersion, compare the 6wkAvg count to the Today count for the same timespan, and see what the percentage -/+ is. Ultimately I'm building towards alerting when below a certain threshold.

| fields _time ClientVersion
| eval DoW=strftime(_time, "%A")
| eval TodayDoW=strftime(now(), "%A")
| where DoW=TodayDoW
| search ClientVersion=FAPI*
| eval ClientVersion=if((like("ClientVersion=FAPI*","%OR%") OR false()) AND false(), "Combined", ClientVersion)
| bin _time span=5m
| eval tempTime=strftime(_time,"%m/%d")
| where (tempTime!="null")
| eval tempTime=if(true() AND _time < relative_time(now(), "@d"), "6wkAvg", "Today")
| stats count by ClientVersion _time tempTime
| eval _time=round(strptime(strftime(now(),"%Y-%m-%d").strftime(_time,"%H:%M:%S"),"%Y-%m-%d%H:%M:%S"),0)
| stats avg(count) as count by ClientVersion _time tempTime
| eval ClientVersion=ClientVersion."-".tempTime
| eval count=round(count,0)

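For the percentage comparison itself, one option is to drop the final two evals and pivot the two series into columns instead of concatenating tempTime into ClientVersion. An untested sketch, assuming the field names produced by the search above (the -20 threshold is a placeholder):

| chart sum(count) as count over ClientVersion by tempTime
| eval pct_diff = round(('Today' - '6wkAvg') / '6wkAvg' * 100, 1)
| where pct_diff < -20 ``` alert when Today is more than 20% below the 6-week average ```

The single quotes around 6wkAvg are needed in eval because the field name starts with a digit.
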
@AAlhabba, thank you for the solution. Worked like a charm.
I think you have the right idea, but streamstats doesn't work with multi-value fields, so compute the gaps before stats rolls the times up. Try this untested search:

index=myindex Token=*
| streamstats window=2 range(_time) as time_gap by Token
| stats list(_time) as _time list(time_gap) as time_gaps count by Token
| eval Date = strftime(_time,"%F %H:%M:%S.%2q")
| where count > 1
| table Token Date time_gaps

Obviously you must not apply strftime before the delta calculation. With this point out of the way, try

index=myindex
| stats values(_time) as Date values(recs) as recs count by Token
| where count > 1
| fields Token Date
| eval idx = mvrange(0, mvcount(Date))
| eval delta = mvmap(idx, if(idx > 0, tonumber(mvindex(Date, idx)) - tonumber(mvindex(Date, idx - 1)), null()))
| fieldformat Date = strftime(Date, "%F %T.%2N")
| table Token Date delta

The output from your sample data is

Date                      Token     delta
2024-06-25 17:20:08.26    363311    222.860000
2024-06-25 17:23:51.12
2024-06-25 18:10:58.86    231321    29.260000
2024-06-25 18:11:28.12              51.260000
2024-06-25 18:12:19.38              62.520000
2024-06-25 18:13:21.90
2024-06-25 15:17:18.06    827341    1229.870000
2024-06-25 15:37:47.93              195.280000
2024-06-25 15:41:03.21

Here is a data emulation for you to play with and compare with real data.

| makeresults format=csv data="Token, Date
363311, 2024-06-25 17:20:08.26 :: 2024-06-25 17:23:51.12
231321, 2024-06-25 18:10:58.86 :: 2024-06-25 18:11:28.12 :: 2024-06-25 18:12:19.38 :: 2024-06-25 18:13:21.90
827341, 2024-06-25 15:17:18.06 :: 2024-06-25 15:37:47.93 :: 2024-06-25 15:41:03.21"
| eval Date = split(Date, " :: ")
| eval Date = strptime(Date, "%F %T.%2N")
``` the above emulates
index=myindex
| stats values(_time) as Date values(recs) as recs count by Token
| where count > 1
| fields Token Date ```

I'm seeing the same behavior after upgrading to 9.2.1 and including the new indexes. Any update on this?
To which add-on do you refer? The installation instructions for it should say on which instance(s) it should be installed, but if you give us the name we should be able to provide an answer.

Indexes, custom or otherwise, are always created on the indexers. For a better user experience, they should also be defined on the search heads. Do that by deploying the same indexes.conf file to both instances, but without volume references on the SH, as in the sketch below.
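A minimal sketch of what that might look like, assuming a hypothetical index named my_custom_index and a volume named primary on the indexers:

# indexes.conf on the indexers (volume references OK)
[my_custom_index]
homePath   = volume:primary/my_custom_index/db
coldPath   = volume:primary/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb

# indexes.conf on the search head (same stanza, no volume references)
[my_custom_index]
homePath   = $SPLUNK_DB/my_custom_index/db
coldPath   = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb
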
Like @richgalloway said, Splunk is not great at searching for missing things. Meanwhile, if you already have the inventory, there is something you can do.

Assume lookup myinventory is in the form

hostname    IPaddress
abc         0.0.0.0
abc         2.2.2.2
xyz         4.5.6.7
zab         7.8.9.10
zab         6.7.8.9

and the requirement is to capture the entries from the lookup where hostname in the lookup has no matching entry with hostname in the index search, and IPaddress in the lookup has no matching entry with IPaddress or hostname in the index search.

To make our task simpler, further assume that if an index search event matches anything in the lookup, that hostname and/or IPaddress is/are no longer a candidate. This is what you can try:

index=asset_inventory
| stats values(hostname) as hostname values(IPaddress) as IPaddress
| appendcols
    [inputlookup myinventory
    | stats values(hostname) as lookupname values(IPaddress) as lookupaddress]
| eval missingname = mvmap(lookupname, if(lookupname != hostname, lookupname, null()))
| eval missingaddress = mvmap(lookupaddress, if(lookupaddress != IPaddress AND lookupaddress != hostname, lookupaddress, null()))
| lookup myinventory IPaddress as missingaddress output hostname as addressmissingname
| eval missingname = mvappend(missingname, mvmap(addressmissingname, if(addressmissingname != hostname, addressmissingname, null())))
| table missingname

Note: the search takes advantage of Splunk's equality evaluation with multivalue fields.

This search becomes complicated because your index search may return an IP address in hostname, and apparently you care about those entries. If we ignore those entries and only compare hostnames with the inventory, the search can be as simple as

index=asset_inventory
| stats values(hostname) as hostname
| appendcols
    [inputlookup myinventory
    | stats values(hostname) as lookupname]
| eval missingname = mvmap(lookupname, if(lookupname != hostname, lookupname, null()))
| fields - hostname

Hi, I need help in extracting the time gaps in a multi-value field represented as Date. My data output looks like this:

index=myindex
| stats values(_time) as _time values(recs) as recs count by Token
| eval Date = strftime(_time,"%F %H:%M:%S.%2q")
| where count > 1
| table Token Date

Token     Date
363311    2024-06-25 17:20:08.26
          2024-06-25 17:23:51.12
231321    2024-06-25 18:10:58.86
          2024-06-25 18:11:28.12
          2024-06-25 18:12:19.38
          2024-06-25 18:13:21.90
827341    2024-06-25 15:17:18.06
          2024-06-25 15:37:47.93
          2024-06-25 15:41:03.21

I would like to display the difference in timestamps in a new column called "time_gaps", which would list the time in seconds between each timestamp and the previous one. Some Tokens have only 2 timestamps, so there should be only 1 value in the time_gaps field; others that have 4 should have values representing the time difference between the 1st and 2nd, 2nd and 3rd, and 3rd and 4th. I tried streamstats but it seems I may be doing something wrong. Any clean and effective SPL would be appreciated. Thanks

Working with support for our cloud instance, they removed the passwords.conf file because the old API key was still in there and not being removed when you update it with a newly generated API key. I then regenerated a new API key, verified correct permissions in S1, and that resolved the issue.

Also note that SentinelOne changed the length of time a regular user account can hold an API key to only 30 days. The old key was created by a previous admin, so I created a new service account just for Splunk logs; there you can specify a longer key life (30d, 60d, 90d, etc.).

I haven't messed with that at all, I simply have the box checked for Link to Results.