All Posts

Is there a limit to the number of servers on which the Forwarder or Enterprise can be installed under the same license? For example, is there a limit such as "up to 15 servers" for this license? If a limit exists, I would appreciate it if you could tell me how it is structured and classified.
Hi @trha_, Would it be possible to somehow see a copy of this batch file? Cheers, - Jo.
You could do something like this:

index=auth0 (data.type IN ("fu", "fp", "s"))
| bucket span=5m _time
| stats dc(eval(if('data.type'="s", null(), 'data.user_name'))) AS unique_failed_accounts
        dc(eval(if('data.type'="s", 'data.user_name', null()))) AS unique_successful_accounts
        values(eval(if('data.type'="s", null(), 'data.user_name'))) as tried_accounts
        values(eval(if('data.type'="s", 'data.user_name', null()))) as successful_accounts
        values(data.client_name) as clientName
        values(eval(if('data.type'="s", null(), 'data.type'))) as failure_reason
        latest(eval(if('data.type'="s", 'data.user_name', null()))) as last_successful_account
        max(eval(if('data.type'="s", _time, null()))) as last_successful_time
        max(eval(if('data.type'="s", null(), _time))) as last_failed_time
        by data.ip
| where unique_failed_accounts > 10

Then you can see the latest failed time, successful time, and failed and successful accounts, and make any decisions needed. What you're essentially after is the eval() test inside the stats to test what to collect. Make sure you wrap the field names in single quotes in that test, as it's an eval statement and the field names contain . characters.
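If you want to experiment with the eval()-inside-stats idiom in isolation, here is a minimal self-contained sketch (all data.* values are fabricated for illustration) that runs in any search bar:

| makeresults count=6
| streamstats count as n
``` Every third event is a success; the rest are failures ```
| eval 'data.type'=if(n % 3 == 0, "s", "fu"), 'data.user_name'="user".n, 'data.ip'="10.0.0.1"
| stats dc(eval(if('data.type'="s", null(), 'data.user_name'))) as unique_failed_accounts
        dc(eval(if('data.type'="s", 'data.user_name', null()))) as unique_successful_accounts
        by data.ip

The dc() only counts user names for events where the if() returned a non-null value, which is exactly the conditional-aggregation trick in the search above.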
There are some small improvements you could make in case there are 0 results in any given bin - if there is a missing range, the appendcols may fail to align the data correctly, so this will ensure there is the correct number of events before the transpose:

| makecontinuous aString1Count start=0 end=8
| fillnull count

Do that for each search. Also, the initial age calculation is wrong in that it uses now() - _time, when it should actually use the search's latest time, so I show a fix for that below. In addition, you want to strip out the -1 to -2 minute section, which is NOT in one of the ranges (your first range is +1 to +241).

You can make it faster by using a single search rather than appendcols, which is not efficient. There are two ways, which simply depend on how you do the chart, but I include them as a learning exercise.

index=anIndex sourcetype=aSourceType (aString1 OR aString2) earliest=-481m@m latest=-1m@m
``` Calculate the age of the event - this is latest time - event time ```
| addinfo
| eval age=info_max_time - _time
``` Calculate the range bands we want ```
| eval age_ranges=split("1,6,11,31,61,91,121,241",",")
``` Not strictly necessary, but ensures clean data ```
| eval range=null()
``` This is where you set the type to "A" or "B" depending on whether the event is a result from aString1 or aString2 ```
| eval type=if(event_is_aString1, "A", "B")
``` Band calculation ```
| foreach 0 1 2 3 4 5 6 7
    [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
           zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()),
           range=mvappend(range, zone) ]
``` This removes the events in the pre-1 minute band ```
| where isnotnull(range)
``` Now this chart gives you 8 rows and 3 columns: the first column is range, the 2nd is counts for aString1, and the 3rd for aString2 ```
| chart count over range by type
``` This ensures you have values for each range ```
| makecontinuous range start=0 end=8
| fillnull A B
``` And now create your fields on a single row ```
| eval range=range+1
| eval string1Window{range}=A, string2Window{range}=B
| stats values(string*) as string*
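As a standalone illustration of the makecontinuous + fillnull pattern (with fabricated data), you can watch the missing bins get filled in:

| makeresults format=csv data="range,count
0,5
2,3
5,7"
``` makeresults format=csv produces strings, so convert before makecontinuous ```
| eval range=tonumber(range), count=tonumber(count)
| makecontinuous range start=0 end=8
| fillnull count

This yields nine rows (range 0 through 8) with count=0 wherever there were no events, which is what keeps a later transpose or appendcols aligned.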
Thank you for updating to text as @gcusello suggested.  It would be better if you can illustrate mock data in text tables as well. It is hard to see how ClientVersion in 6wkAvg could be useful, but I'll just ignore this point.  Because the only numeric field is Count, I assume that you want percentage change on this field.  Splunk provides a convenient command, xyseries, to swap fields into row values.  You can do something like this:

| xyseries _time tempTime ClientVersion Count
| eval percentChange = round(('Count: Today' - 'Count: 6wkAvg') / 'Count: 6wkAvg' * 100, 2)

Your mock data will give:

_time                ClientVersion: 6wkAvg  ClientVersion: Today  Count: 6wkAvg  Count: Today  percentChange
2024-06-26 00:00:00  FAPI-6wkAvg            FAPI-today            1582           2123          34.20
2024-06-26 00:05:00  FAPI-6wkAvg            FAPI-today            1491           1925          29.11
2024-06-26 00:10:00  FAPI-6wkAvg            FAPI-today            1888           2867          51.85
2024-06-26 00:15:00  FAPI-6wkAvg            FAPI-today            1983           2593          30.76
2024-06-26 00:20:00  FAPI-6wkAvg            FAPI-today            2882           3291          14.19

Is this something you are looking for?  Here is an emulation you can play with and compare with real data:

| makeresults format=csv data="ClientVersion, _time, tempTime, Count
FAPI-6wkAvg, 2024-06-26 00:00:00, 6wkAvg, 1582
FAPI-today, 2024-06-26 00:00:00, Today, 2123
FAPI-6wkAvg, 2024-06-26 00:05:00, 6wkAvg, 1491
FAPI-today, 2024-06-26 00:05:00, Today, 1925
FAPI-6wkAvg, 2024-06-26 00:10:00, 6wkAvg, 1888
FAPI-today, 2024-06-26 00:10:00, Today, 2867
FAPI-6wkAvg, 2024-06-26 00:15:00, 6wkAvg, 1983
FAPI-today, 2024-06-26 00:15:00, Today, 2593
FAPI-6wkAvg, 2024-06-26 00:20:00, 6wkAvg, 2485
FAPI-today, 2024-06-26 00:20:00, Today, 2939
FAPI-6wkAvg, 2024-06-26 00:20:00, 6wkAvg, 2882
FAPI-today, 2024-06-26 00:20:00, Today, 3291"
``` the above emulates
| stats avg(count) as count by ClientVersion _time tempTime
| eval ClientVersion=ClientVersion."-".tempTime
| eval count=round(count,0) ```
Update: the memory error occurred again when trying to restart Splunk, so I updated the .py script. My system has 12 GB of free RAM. I've now seen this error on 2 different Splunk 9.2.1 installs.
When I set the timeframe to 7 days and run my Splunk query in Grafana, it returns values. But if I increase the timeframe to 14 days or more, it returns NoData in Grafana. However, when I create a dashboard in Splunk with the same query, it returns values. Can anyone give some suggestions?
Hey All

I have downloaded the SSL Certificate lookup app. I am using this search to see information about the certificate, but it gives me no information.

| makeresults
| eval dest="example.com"
| mvexpand dest
| lookup sslcert_lookup dest OUTPUT ssl_subject_common_name ssl_subject_alt_name ssl_end_time ssl_validity_window
| eval ssl_subject_alt_name = split(ssl_subject_alt_name,"|")
| eval days_left = round(ssl_validity_window/86400)

The domain is using port 8441. When I add, for example, splunk.com, it works, but not the one I want to see. What is wrong in the search, or what should I add?

Thanks in advance
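A note on the port, offered as an assumption rather than a confirmed fix: lookups like this commonly probe the default HTTPS port (443), so a certificate served on 8441 may never be reached. If the app accepts a host:port form for dest, a sketch would be:

| makeresults
| eval dest="example.com:8441" ``` hypothetical host:port syntax - verify against the app's docs ```
| lookup sslcert_lookup dest OUTPUT ssl_subject_common_name ssl_end_time

Check the SSL Certificate lookup app's documentation to confirm whether it supports non-default ports at all.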
Try using different token names, e.g. earliest_time and latest_time, in case the bare names clash with the dashboard's shared time range tokens.
I'm trying to pass 3 tokens from panel 1 into panel 2: earliest time, latest time, and a basic field value. I can get the earliest time and field value to work, but latest time always defaults to "now" no matter what I try.

Panel 1 is a stacked timechart over a three-week period; each stack is one week. The values in the stack are different closure statuses from my SIEM. I want to be able to click on a closure status in a single week and see the details of just the statuses from that week in panel 2 (e.g. Mon Jun 17 - Sun Jun 23).

Panel 1 looks like:

index=siem sourcetype=triage
| eval _time=relative_time(_time,"@w1") ```so my stacks start on monday```
| timechart span=1w@w1 count by status WHERE max in top10 useother=false
| eval last=_time+604800 ```manually creating a latest time to use as token```

Note: panel 1 is using a time input shared across most panels in the dashboard (defaulting to 3 Mondays ago).

In Configuration > Interaction, I'm setting 3 tokens: status=name, earliest=row._time.value, and latest=row.last.value.

Panel 2 looks like:

index=siem sourcetype=triage earliest=$earliest$ latest=$latest$
| rest of search

When I click a status in week 1 (2 weeks ago), I get statuses for weeks 1, 2, and 3 (the earliest and status tokens are working).
When I click a status in week 2 (1 week ago), I get statuses for weeks 2 and 3 (the earliest and status tokens are working).
When I click a status in week 3 (current week), I get the current week (the earliest and status tokens are working).
Latest always defaults to now.

I've done something similar in the old dashboard framework, where I eval'd the time modifiers while setting the token, but I'm much less familiar with JSON, so I'm not sure if this is a possibility. What I had previously done:

<eval token="earliest">$click.value$-3600</eval>
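For comparison, here is what the drilldown token configuration generally looks like in Dashboard Studio's JSON source. The structure below is written from memory and should be treated as an assumption to verify against the Dashboard Studio documentation; it mirrors the three tokens described above:

"eventHandlers": [
    {
        "type": "drilldown.setToken",
        "options": {
            "tokens": [
                {"token": "status", "key": "name"},
                {"token": "earliest", "key": "row._time.value"},
                {"token": "latest", "key": "row.last.value"}
            ]
        }
    }
]

As far as I know, Dashboard Studio has no direct equivalent of SimpleXML's <eval token>, which is why computing the adjusted time inside the search (as with eval last=_time+604800 above) is the usual workaround.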
Had this same error after uninstalling the Splunk forwarder and then installing Splunk Enterprise on RHEL 9 Linux. Rebooted the system, then ran the install again and had no reported memory errors. The python script fix above will work, but a reboot could work as well.
Thank you @yuanliu, your solution worked. I had to make the minor modifications below, but thank you very much indeed.

I modified the section after "... where count > 1" to:

| where count > 1
| table Date Token _time
| eval idx = mvrange(0, mvcount(_time))
| eval TimeGaps_Secs = mvmap(idx, if(idx > 0, tonumber(mvindex(_time, idx)) - tonumber(mvindex(_time, idx - 1)), null()))
| fieldformat Date = strftime(_time, "%F %T.%2N")
| table Token Date TimeGaps_Secs

Thank you again.
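To sanity-check the gap arithmetic in isolation, here is a small emulation with fabricated epoch timestamps (the Token value is hypothetical) showing how mvmap produces the per-event gaps:

| makeresults format=csv data="Token,t
A,1719400000
A,1719400005
A,1719400012"
| stats list(t) as times count by Token
| eval idx = mvrange(0, mvcount(times))
| eval TimeGaps_Secs = mvmap(idx, if(idx > 0, tonumber(mvindex(times, idx)) - tonumber(mvindex(times, idx - 1)), null()))

With those values, TimeGaps_Secs comes out as 5 and 7: the differences between consecutive timestamps, with the first event contributing nothing because mvmap returns null() for idx 0.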
Please find the screenshot below and let me know where I am going wrong.
Updated the post, thank you for the tip!
Hi,

We have been continuously in violation for the past 3 or 4 months, as we are ingesting 600 to 800 GB extra on top of our daily limit. We have received multiple hard warnings. My question is: what will happen if we continue to be in violation of, or exceed, the daily indexing volume limit?

I appreciate your answer in advance. Thanks.
Hi @chorn3567, please share your search in text mode (using the Insert/Edit code sample button), otherwise it's really difficult to help you. Ciao. Giuseppe
Hi All! First post, super new user to Splunk.

I have a search that I modified from one a team member previously created. I'm trying to take the output of ClientVersion, compare the 6wkAvg count to the Today count for the same timespan, and see what the percentage -/+ is. Ultimately I'm building towards alerting when below a certain threshold.

| fields _time ClientVersion
| eval DoW=strftime(_time, "%A")
| eval TodayDoW=strftime(now(), "%A")
| where DoW=TodayDoW
| search ClientVersion=FAPI*
| eval ClientVersion=if((like("ClientVersion=FAPI*","%OR%") OR false()) AND false(), "Combined", ClientVersion)
| bin _time span=5m
| eval tempTime=strftime(_time,"%m/%d")
| where (tempTime!="null")
| eval tempTime=if(true() AND _time < relative_time(now(), "@d"), "6wkAvg", "Today")
| stats count by ClientVersion _time tempTime
| eval _time=round(strptime(strftime(now(),"%Y-%m-%d").strftime(_time,"%H:%M:%S"),"%Y-%m-%d%H:%M:%S"),0)
| stats avg(count) as count by ClientVersion _time tempTime
| eval ClientVersion=ClientVersion."-".tempTime
| eval count=round(count,0)
@AAlhabba, thank you for the solution. Worked like a charm.
I think you have the right idea, but streamstats doesn't work with multi-value fields. Try this untested search (note that count is added to the stats so the where clause has something to test):

index=myindex Token=*
| streamstats window=2 range(_time) as time_gap by Token
| stats list(_time) as _time list(time_gap) as time_gaps count by Token
| eval Date=strftime(_time,"%F %H:%M:%S.%2q")
| where count > 1
| table Token Date time_gaps
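If you want to verify the windowed streamstats logic before pointing it at real data, a quick emulation (the Token and epoch values are fabricated) looks like this:

| makeresults format=csv data="Token,t
abc,100
abc,160
xyz,200
abc,250"
| eval _time=tonumber(t)
| streamstats window=2 range(_time) as time_gap by Token
| stats list(_time) as _time list(time_gap) as time_gaps count by Token
| where count > 1

For Token=abc this yields time_gaps of 0, 60, and 90 (range over the current and previous event in each window), and xyz drops out because it only has one event.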