All Posts



Hi @scout29, just completing the perfect solution from @PickleRick: index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" NOT date_wday IN ("saturday", "monday") | stats avg(b) AS bytes BY date_month | eval TB=bytes/1024/1024/1024/1024 If you also want to exclude holidays from your average, you have to create a lookup containing the holidays to exclude. Ciao. Giuseppe
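The exclude-then-average logic above can be sanity-checked outside Splunk. A minimal Python sketch, assuming daily license totals keyed by date (the function name and sample data are illustrative; the excluded-day set mirrors the NOT date_wday IN (...) clause in the search):

```python
from datetime import date

# Days dropped from the average, as in NOT date_wday IN ("saturday", "monday")
EXCLUDED_DAYS = {"saturday", "monday"}

def avg_daily_usage_tb(daily_bytes):
    """daily_bytes maps a date to that day's license usage in bytes.
    Average over the non-excluded days, then convert to TB, mirroring
    stats avg(b) AS bytes | eval TB=bytes/1024/1024/1024/1024."""
    kept = [b for d, b in daily_bytes.items()
            if d.strftime("%A").lower() not in EXCLUDED_DAYS]
    return (sum(kept) / len(kept)) / 1024 ** 4
```

A holiday lookup would then just be a second exclusion set checked the same way.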
Hi @Sid, when I use Splunk Cloud, I usually have one or two Heavy Forwarders on premise that I use to concentrate all the logs from my on-premise infrastructure, so I can apply the configurations to these HFs. If you don't have an on-premise HF and are sending logs directly to Splunk Cloud, you have to upload these two conf files in an add-on. Ciao. Giuseppe
Hi @BattleOP, good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe P.S.: Karma Points are appreciated by all the contributors
In the eval I just want [index=index_example  sourcetype=sourcetype_example |stats count] to be a single whole number First, this is not what a subsearch is for.  A subsearch populates a group of field-value pairs; it does not output a single value.  If you use the format command to view its actual output, index=index_example sourcetype=sourcetype_example |stats count |format  you will see that it is equivalent to "count=86" or whatever the count is. Second, the subsearch retrieves the same set of events as the main search.  This is very wasteful.  It would be much more productive to present the problem you are trying to solve than to focus on a particular search command. Let me take a wild guess: you want to calculate the percentage of count by status in the total count.  Is this correct?  There are many ways to achieve this, and none of them requires a subsearch over the same events.  This is an example. index=index_example sourcetype=sourcetype_example |chart count by Status | eventstats sum(count) as total | eval output = count / total * 100 | fields - total  
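The eventstats pattern in that example is easy to sanity-check outside Splunk. A minimal Python sketch of the chart-count, eventstats-total, percentage pipeline (function name and sample statuses are illustrative, not from the thread):

```python
from collections import Counter

def percent_by_status(statuses):
    """Mimic `chart count by Status | eventstats sum(count) as total
    | eval output = count / total * 100` for a list of status values."""
    counts = Counter(statuses)        # | chart count by Status
    total = sum(counts.values())      # | eventstats sum(count) as total
    return {s: c / total * 100 for s, c in counts.items()}
```

Each status maps to its share of the total count, with no second pass over the events.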
Update: I found a solution to this. The : before the = appears to have been problematic. Any timestamps that come in greater than the current time are now set to the system time, effectively preventing Splunk from indexing future timestamps. I also added an additional index-time eval statement to set a flag field called timestamp_status to "CRITICAL" if a future timestamp is found and reset to the current time. The idea is to let me search for "CRITICAL" values of the timestamp_status field and see where I need to adjust timestamp parsing for data sources that may start to send future timestamps. For anyone interested, the configs are below. props.conf: [default] TRANSFORMS-check_for_future_timestamp = check_for_future_timestamp fields.conf: [check_for_future_timestamp] INDEXED = True transforms.conf: [check_for_future_timestamp] INGEST_EVAL = timestamp_status=if(_time > time(), "CRITICAL", "OK"), _time=if(_time > time(), time(), _time)
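A minimal Python sketch of that clamp-and-flag logic, with `now` standing in for Splunk's time() at ingest (names are illustrative). Note the order matters: the flag is computed before the timestamp is reset, exactly as the two INGEST_EVAL expressions run left to right:

```python
import time

def clamp_future_timestamp(event_time, now=None):
    """Mirror the two INGEST_EVAL expressions in order: first flag a
    future timestamp as CRITICAL, then reset it to the current time."""
    now = time.time() if now is None else now
    status = "CRITICAL" if event_time > now else "OK"   # timestamp_status=if(...)
    clamped = now if event_time > now else event_time   # _time=if(...)
    return clamped, status
```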
Please check this SPL: source="rex-empty-string.txt" host="laptop" sourcetype="rexEmpty" | rex "WifiCountryDetails\"\:\"(?<WifiCountryDetails>(\w+|\S))\"" | eval WifiCountryDetails = if(isnull(WifiCountryDetails) OR len(WifiCountryDetails)==0, "No Country Name", WifiCountryDetails) | stats count by WifiCountryDetails  
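The extract-with-fallback idea can be sketched in Python with a simplified pattern (the sample JSON-ish strings and function name are assumptions, not from the thread):

```python
import re

def extract_country(raw):
    """Simplified analogue of the rex + eval above: capture the
    WifiCountryDetails value, falling back to "No Country Name" when
    the capture is missing or empty."""
    m = re.search(r'WifiCountryDetails":"(\w*)"', raw)
    return m.group(1) if m and m.group(1) else "No Country Name"
```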
Hello, I'm trying to use the data from one search in another search.   This is what I'm trying to do: index=index_example  sourcetype=sourcetype_example |chart count by Status |eval output=(count/ [index=index_example  sourcetype=sourcetype_example |stats count] *100) In the eval I just want [index=index_example  sourcetype=sourcetype_example |stats count] to be a single whole number
Hi Splunk newbies: for absolute beginners, Splunk learning videos: https://www.youtube.com/@SiemNewbies101/playlists   I have added 25 small videos on rex, completely for Splunk newbies and beginners. Hope this helps somebody. Thanks.
Creating a new Splunkbase account and then verifying the email worked out. I was able to install apps using this workaround. Thank you!
I'm not sure what you want to achieve but remember that subsearch will be executed first, then its output will be formatted and substituted into the main search before running it. So with the output of your subsearch the resulting table command will look like table _time , sourceuser, targetuser, role, Operation ((role="AuthenticationAdministrator") OR (role="Authentication Policy Administrator") OR [...]) As you can see, it doesn't make sense.
1. Yes, you can split your count by as many fields as you want. 2. "This doesn't work" is not a very constructive comment. Remember that it's you who is asking for help. 3. index=wineventlog EventCode IN (4728,4729) This will find all the events where a user was either added to or removed from a security-enabled group. That's a start. But you want to find situations where a user was removed from another group and added to one of those you seek. So you want something like this: index=wineventlog ((EventCode=4728 Group_Name IN (RDSUSers_GRSQCP01, RDSUSers_GROQCP01, RDSUSers_BRSQCP01, RDSUSers_BROQCP01, RDSUSers_VRSQCP01, RDSUSers_VROQCP01)) OR (EventCode=4729 NOT Group_Name IN (RDSUSers_GRSQCP01, RDSUSers_GROQCP01, RDSUSers_BRSQCP01, RDSUSers_BROQCP01, RDSUSers_VRSQCP01, RDSUSers_VROQCP01))) Looks a bit uglier, doesn't it? But it will give you all potentially interesting events. Now you have to find out whether they fit your criteria. Since we will need to reverse the order of the events (by default Splunk returns events in reverse chronological order, which is a bit inconvenient for us here), you might want to limit the further-processed data to the relevant fields (if you have just a bunch of events this is an optional step): | fields EventCode Group_Name Subject_Account_Name | fields - _raw (Notice that I didn't explicitly include _time because it is added by default.) Now we need to sort by _time so the earliest events are processed first: | sort _time Since the default order is reverse chronological, just doing | reverse instead should work as well. 
Now we create a field containing the timestamp of removal from a group and the name of the group the user was removed from: | eval remstamp=if(EventCode=4729,_time,null()) | eval remgroup=if(EventCode=4729,Group_Name,null()) Now we can use streamstats to see when and from which group each user was last removed: | streamstats last(remstamp) as last_removed last(remgroup) as removed_from by Subject_Account_Name This propagates the information about a user's removal from a group to the subsequent events for the same user. Now we need to find matching events. For a 3-hour window that is: | where _time-last_removed<10800 Be aware that this only finds the last removal for a given user. By tweaking the streamstats you can probably aggregate all removals, but it's too late for me to think about that now  
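To sanity-check the pipeline's logic (sort by time, propagate the last removal per user, match adds within the window), here is a minimal Python sketch; the event dicts, field names, and sample groups are illustrative:

```python
WINDOW = 3 * 3600  # 3-hour matching window, as in the where clause

def removals_followed_by_add(events):
    """events: dicts with "time", "code", "group", "user" keys, in any
    order.  Mirrors sort _time + eval/streamstats last() + where:
    remember the last 4729 (removal) per user, and report 4728 (add)
    events that follow it within WINDOW seconds."""
    last_removal = {}  # user -> (removal time, group removed from)
    matches = []
    for e in sorted(events, key=lambda e: e["time"]):  # | sort _time
        if e["code"] == 4729:
            last_removal[e["user"]] = (e["time"], e["group"])
        elif e["code"] == 4728 and e["user"] in last_removal:
            rem_time, rem_group = last_removal[e["user"]]
            if e["time"] - rem_time < WINDOW:  # | where _time-last_removed<10800
                matches.append((e["user"], rem_group, e["group"]))
    return matches
```

As in the SPL, only the most recent removal per user is remembered, which is exactly the caveat noted above.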
The problem has nothing to do with lookup or with spath. (You are not even using the lookup command.)  You cannot use a subsearch in the table command.  It's that simple. Note: always use a code box or text to illustrate a search.  A screenshot is the worst way to share data like this. I get that you want to filter ModifiedProperties{1}.NewValue to only those roles in your subsearch.  To do this, you need the search command, not the table command.  Something like the following index=o365 Operation="Add member to role." | spath path=ModifiedProperties{1}.NewValue output=role | search [| inputlookup privileged_azure_ad_roles.csv where isprivilegedadrole=true | fields role] | spath path=Actor{0}.ID output=sourceuser | rename ObjectId as targetuser | table _time sourceuser targetuser role Operation Hope this helps.
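The subsearch here just turns the lookup into a role filter. A minimal Python analogue of that set-membership filtering (event shape and names are illustrative):

```python
def filter_privileged(events, privileged_roles):
    """Keep only events whose role appears in the lookup-derived set,
    mimicking | search [| inputlookup ... | fields role]."""
    allowed = set(privileged_roles)
    return [e for e in events if e["role"] in allowed]
```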
Hi @cotyp  Were you able to find a solution to your problem? Please share the solution if you found one. I want to do the same: disable autorefresh on all the existing dashboards in our environment.
The illustrated search is rather confusing. (Also, data is better illustrated with text, not screenshots.) First, the original data contains the fields tranche_heure and partenaire, but not tranche_heure_partenaire or tranche_heure_bis.  So I will ignore the related manipulations.  Second, I am not sure there is a point in preserving all raw values of temps_rep_moyen, but because your search does so, I will preserve them as temps_rep_moyen_orig. Most importantly, handling the difference between statut values "OK" and "KO" before performing stats functions just makes the job so much harder.  You should use statut as a group-by field instead. index=rcd partenaire=000000000P | eval date_appel=strftime(_time,"%b %y") | dedup nom_ws date_appel partenaire temps_rep_max temps_rep_min temps_rep_moyen nb_appel statut tranche_heure heure_appel_max | stats sum(eval(temps_rep_moyen * nb_appel)) as temps_rep_moyen sum(nb_appel) as som_nb_appel max(temps_rep_max) as temps_rep_max min(temps_rep_min) as temps_rep_min values(temps_rep_moyen) as temps_rep_moyen_orig values(nom_ws) as nom_ws values(date_appel) as date_appel by statut | eval temps_rep_moyen = temps_rep_moyen / som_nb_appel | fields - som_nb_appel The above search gives you two rows of weighted temps_rep_moyen as you defined it, plus temps_rep_max, temps_rep_min, and aggregated values of the other fields, including the original values of temps_rep_moyen.  The first row corresponds to statut "KO", the second to statut "OK".  Hope this helps. If you really need the fields to be named after statut, do so after this stats. (I'll leave that as your exercise.)
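The quantity being computed per statut is a weighted mean, sum over i of (temps_rep_moyen_i * nb_appel_i) divided by sum of nb_appel_i. A minimal Python check of that arithmetic (function name is illustrative):

```python
def weighted_mean(pairs):
    """pairs: (temps_rep_moyen, nb_appel) tuples.  Returns
    sum(mean_i * n_i) / sum(n_i), the per-statut weighted average
    computed by the stats + eval above."""
    total_calls = sum(n for _, n in pairs)
    return sum(m * n for m, n in pairs) / total_calls
```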
You need to clarify the problem.    The need to pass the multiselect values to macros2 Forget multiselect values.  What is this macros2?  If it is yet another macro, SPL syntax forbids you from invoking a macro as the parameter of another macro.  Or do you mean that macros2 represents the value from this multiselect token?  In that case, you cannot use single quotes. (The outer quotes in your pseudocode are also incorrect: a macro is invoked with a pair of backticks (`), not single quotes (').) It really doesn't matter whether the token comes from a multiselect or any other input type.  The dashboard simply treats the resultant value as a string.  How the string is formatted depends on the input settings: what the delimiter is, whether prefix and suffix are used, whether value prefix and value suffix are used, and so on.  In this regard, I don't see any question that can be answered based on your description.
That didn't work. Also, I don't think you can count by multiple fields.
Well, the TA seems not to have been updated in the last 5 years. It might be outdated.
Thanks maybe we just need to chuck the TA and just do it your way. Thanks man
No. Don't try to squeeze everything into one command. | stats count, values(*) as * by domain This will give you results grouped by domain. So now you have to filter the results with another command. | where count<=5 And you're home.
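The two-step shape (aggregate first, then filter the aggregate) can be sketched in Python; the record shape is illustrative and the threshold of 5 comes from the where clause above:

```python
from collections import defaultdict

def rare_domains(records, threshold=5):
    """stats count by domain, then where count<=threshold: count
    records per domain and keep only the rarely-seen domains."""
    counts = defaultdict(int)
    for r in records:
        counts[r["domain"]] += 1
    return {d: c for d, c in counts.items() if c <= threshold}
```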
Consult your environment's documentation. But seriously: an alert is just a scheduled search. You can't automatically determine, before the search runs, which indexes it will use. Yes, you can search for some common ways of specifying the index (most importantly the literal "index=something" string), but as you think of more ways of specifying the index to search, the task becomes less and less tractable. Apart from the simple "index=something" form you can: 1) use index IN (some set) 2) use an alias which expands to a set of parameters (including indexes) 3) place a condition on an eventtype which resolves to a condition on indexes 4) use a subsearch which dynamically creates an index=something parameter (theoretically you can even choose the index randomly this way). So you can see that, in the general case, there is no way to reliably determine before running the search which indexes will be searched.