All Posts

This is something you can find relatively easily after indexing by comparing the values of _time and _indextime, so creating an indexed field just to check whether the timestamp is correct seems like overkill. And manipulating _time (apart from possibly some format conversions which can't be resolved with simple props.conf parameters) is - as a rule of thumb - a very bad idea. Also look at the MAX_DAYS_HENCE parameter in props.conf.
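For example, a quick after-the-fact check could be a search along these lines (the index here is just a placeholder; substitute your own):
index=<your_index>
| eval lag=_time-_indextime
| where lag > 0
| stats count by index sourcetype host
Anything returned has a parsed timestamp later than the time it was indexed, which is a good hint that the timestamp extraction (or the source clock) needs attention.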
1. index=* is very rarely a good idea. Be as specific in your search as you can to use resources effectively.
2. It makes no sense to stats by time and only afterwards split into time-based buckets. For such a case you should either bin first and then stats by _time, or simply use timechart with a proper span.
3. As was already pointed out, head is not the way to go. The alternative to using dedup could be using stats first or last.
So your final search could look like this
index=<be_specific_here>
| bin span=1h _time
| stats count by Request_URL _time
| sort _time count
| stats last(*) as * by _time
As an exercise you could try to solve the same problem using another approach - combining stats with eventstats and filtering with where (a sketch follows below).
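A minimal sketch of that eventstats-based approach (same placeholder index; ties for the hourly maximum are kept):
index=<be_specific_here>
| bin span=1h _time
| stats count by Request_URL _time
| eventstats max(count) as max_count by _time
| where count=max_count
| fields - max_count
Here eventstats attaches the per-hour maximum to every row, and where keeps only the rows that reach it.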
There are two types of licensing for the Cloud. One is ingest pricing, where you pay for the amount of data which gets indexed. In this model, if you drop events before indexing they don't get written to the index, so they don't count against your license. The other model is workload licensing, where you pay for the "computing power" used to process your data and for allocated storage. In this case dropping events will not affect license usage directly.
Additionally, the format looks very much like part of JSON. If part or all of your event is JSON, you should treat it as structured data, not as isolated strings. Can you share the raw event? (Anonymize as needed.)
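For example, assuming the event really is JSON in _raw, a bare spath call extracts every key as a field, which is usually more robust than a hand-written rex:
<your base search>
| spath
(spath defaults to reading _raw; you can also give it input=, path= and output= arguments if you only need a single value.)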
Aside from the mistaken use of head, as @richgalloway points out, what is the reason to perform stats on _time before bucketing if your goal is to find the maximum per hour?
index=*
| bucket _time span=1h
| stats count by _time Request_URL
| sort - count
| dedup _time
| sort _time
Hi @parulguha ... if you could copy-paste the HTML code of the dashboard (after removing IP addresses, hostnames, etc.), we can help you better. Thanks.
Hi @scout29 , just completing the perfect solution from @PickleRick :
index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" NOT date_wday IN ("saturday", "monday")
| stats avg(b) AS bytes BY date_month
| eval TB=bytes/1024/1024/1024/1024
If you also want to exclude holidays from your average, you have to create a lookup containing the holidays and use it for the exclusion (a sketch follows below). Ciao. Giuseppe
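A minimal sketch of that exclusion, assuming a hypothetical lookup named holidays.csv with a single date column in %Y-%m-%d format:
index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" NOT date_wday IN ("saturday", "monday")
| eval date=strftime(_time,"%Y-%m-%d")
| search NOT [| inputlookup holidays.csv | fields date]
| stats avg(b) AS bytes BY date_month
| eval TB=bytes/1024/1024/1024/1024
The subsearch expands to an OR of the listed dates, and the NOT drops the rollover events that fall on them.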
Hi @Sid, when I use Splunk Cloud, I usually have one or two Heavy Forwarders on premises that I use to concentrate all the logs from my on-premises infrastructure, so I can apply the configurations to these HFs. If you don't have an on-premises HF and are sending logs directly to Splunk Cloud, you have to upload these two conf files in an add-on. Ciao. Giuseppe
Hi @BattleOP , good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the benefit of the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
In the eval I just want [index=index_example sourcetype=sourcetype_example | stats count] to be a single whole number
First, this is not what a subsearch is for. A subsearch returns a group of field-value pairs, not a single bare value. If you use the format command to view its actual output,
index=index_example sourcetype=sourcetype_example
| stats count
| format
you will see that it is equivalent to "count=86" or whatever the count is.
Second, the subsearch retrieves the same set of events as the main search. This is very wasteful. It would be much more productive to present the problem you are trying to solve than to focus on a particular search command.
Let me take a wild guess: you want to calculate the percentage of count by Status out of the total count. Is this correct? There are many ways to achieve this. None of them requires a subsearch over the same events. This is an example.
index=index_example sourcetype=sourcetype_example
| chart count by Status
| eventstats sum(count) as total
| eval output = count / total * 100
| fields - total
Update: I found a solution to this. The : before the = appears to have been problematic. Any timestamps that come in greater than the current time are now set to the system time, effectively preventing Splunk from indexing future timestamps. I also added an additional index-time eval to set a flag field called timestamp_status to "CRITICAL" when a future timestamp is found and reset to the current time. The idea is to let me search for any "CRITICAL" values of the timestamp_status field and see where I need to adjust timestamp parsing for data sources that may start sending future timestamps. For anyone interested, I will paste the configs below.
props.conf:
[default]
TRANSFORMS-check_for_future_timestamp = check_for_future_timestamp
fields.conf:
[timestamp_status]
INDEXED = true
transforms.conf:
[check_for_future_timestamp]
INGEST_EVAL = timestamp_status=if(_time > time(), "CRITICAL", "OK"), _time=if(_time > time(), time(), _time)
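With timestamp_status in place as an indexed field, a quick check for affected sources could be a search along these lines (scope the index list as tightly as you can):
index=* timestamp_status::CRITICAL
| stats count by index sourcetype host
The :: syntax queries the indexed field directly, so the flagged events can be found without relying on search-time extractions.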
Please check this SPL:
source="rex-empty-string.txt" host="laptop" sourcetype="rexEmpty"
| rex "WifiCountryDetails\"\:\"(?<WifiCountryDetails>(\w+|\S))\""
| eval WifiCountryDetails = if(isnull(WifiCountryDetails) OR len(WifiCountryDetails)==0, "No Country Name", WifiCountryDetails)
| stats count by WifiCountryDetails
Hello, I'm trying to use the data from one search in another search. This is what I'm trying to do:
index=index_example sourcetype=sourcetype_example
| chart count by Status
| eval output=(count / [index=index_example sourcetype=sourcetype_example | stats count] * 100)
In the eval I just want [index=index_example sourcetype=sourcetype_example | stats count] to be a single whole number
Hi new Splunk users: for absolute beginners, here are some Splunk newbie learning videos: https://www.youtube.com/@SiemNewbies101/playlists I have added 25 small videos on rex, completely for Splunk newbies and beginners. Hope this helps somebody, thanks.
Creating a new Splunkbase app and then verifying the email worked out. I was able to install apps using this workaround. Thank you!
I'm not sure what you want to achieve, but remember that the subsearch will be executed first, then its output will be formatted and substituted into the main search before it is run. So with the output of your subsearch the resulting table command will look like
table _time , sourceuser, targetuser, role, Operation ((role="AuthenticationAdministrator") OR (role="Authentication Policy Administrator") OR [...])
As you can see, it doesn't make sense.
1. Yes, you can split your count by as many fields as you want.
2. "This doesn't work" is not a very constructive comment. Remember that it's you who is asking for help.
3.
index=wineventlog EventCode IN (4728,4729)
This will find all the events where a user was either added to or removed from a security-enabled group. That's a start. But you want to find situations where a user was removed from another group and added to one of those you seek. So you want something like this
index=wineventlog ((EventCode=4728 Group_Name IN (RDSUSers_GRSQCP01, RDSUSers_GROQCP01, RDSUSers_BRSQCP01, RDSUSers_BROQCP01, RDSUSers_VRSQCP01, RDSUSers_VROQCP01)) OR (EventCode=4729 NOT Group_Name IN (RDSUSers_GRSQCP01, RDSUSers_GROQCP01, RDSUSers_BRSQCP01, RDSUSers_BROQCP01, RDSUSers_VRSQCP01, RDSUSers_VROQCP01)))
Looks a bit uglier, doesn't it? But it will give you all potentially interesting events. Now you have to find whether they fit your criteria.
Since we will need to reverse the order of the events (by default Splunk returns events in reverse chronological order, which is a bit inconvenient for us here), you might want to limit further processing to just the relevant fields (but if you have only a bunch of events this is an optional step)
| fields EventCode Group_Name Subject_Account_Name
| fields - _raw
(Notice that I didn't explicitly include _time because it is added by default.)
Now we need to sort by _time so the earliest events are processed first
| sort _time
As the default order is reverse chronological, doing just
| reverse
instead should work as well.
Now we create a field containing the timestamp of removal from a group and the name of the group the user was removed from
| eval remstamp=if(EventCode=4729,_time,null())
| eval remgroup=if(EventCode=4729,Group_Name,null())
So now we can do streamstats to see when and from which groups the users were removed
| streamstats last(remstamp) as last_removed last(remgroup) as removed_from by Subject_Account_Name
This will propagate the information about a user's removal from a group to the next event concerning the same user. Now we need to find matching events. For a 3-hour window it will be
| where _time-last_removed<10800
Be aware that it will only find the last removal for a given user. By tweaking the streamstats you can probably aggregate all removals, but it's too late for me to think about it now. (The pieces are assembled in the sketch below.)
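Putting those pieces together, a sketch of the assembled search, with the same caveat that only the most recent removal per user is considered; I have also added a filter on EventCode=4728 at the end so that only the add events themselves are returned:
index=wineventlog ((EventCode=4728 Group_Name IN (RDSUSers_GRSQCP01, RDSUSers_GROQCP01, RDSUSers_BRSQCP01, RDSUSers_BROQCP01, RDSUSers_VRSQCP01, RDSUSers_VROQCP01)) OR (EventCode=4729 NOT Group_Name IN (RDSUSers_GRSQCP01, RDSUSers_GROQCP01, RDSUSers_BRSQCP01, RDSUSers_BROQCP01, RDSUSers_VRSQCP01, RDSUSers_VROQCP01)))
| fields EventCode Group_Name Subject_Account_Name
| fields - _raw
| sort _time
| eval remstamp=if(EventCode=4729,_time,null())
| eval remgroup=if(EventCode=4729,Group_Name,null())
| streamstats last(remstamp) as last_removed last(remgroup) as removed_from by Subject_Account_Name
| search EventCode=4728
| where _time-last_removed<10800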
The problem has nothing to do with lookup or with spath. (You are not even using the lookup command.) You cannot use a subsearch in the table command. It's that simple.
Note: Always use a code box or text to illustrate a search. A screenshot is the worst way to share data like this.
I get that you want to filter ModifiedProperties{1}.NewValue to only those roles in your subsearch. To do this, you need the search command, not the table command. Something like the following
index=o365 Operation="Add member to role."
| spath path=ModifiedProperties{1}.NewValue output=role
| search [| inputlookup privileged_azure_ad_roles.csv where isprivilegedadrole=true | fields role]
| spath path=Actor{0}.ID output=sourceuser
| rename ObjectId as targetuser
| table _time sourceuser targetuser role Operation
Hope this helps.
Hi @cotyp  Were you able to find the solution to your problem? Please share the solution if you found one. I want to do the same: disable autorefresh on all the existing dashboards in our environment.
The illustrated search is rather confusing. (Also, data illustration is better done with text, not screenshots.) First, the original data contains the fields tranche_heure and partenaire, but not tranche_heure_partenaire or tranche_heure_bis. So, I will ignore the related manipulations. Second, I am not sure there is a point in preserving all raw values of temps_rep_moyen. But because your search does so, I will preserve them as temps_rep_moyen_orig.
Most importantly, handling the difference between statut values "OK" and "KO" before performing stats functions just makes the job so much harder. You should use groupby instead.
index=rcd partenaire=000000000P
| eval date_appel=strftime(_time,"%b %y")
| dedup nom_ws date_appel partenaire temps_rep_max temps_rep_min temps_rep_moyen nb_appel statut tranche_heure heure_appel_max
| stats sum(eval(temps_rep_moyen * nb_appel)) as temps_rep_moyen sum(nb_appel) as som_nb_appel max(temps_rep_max) as temps_rep_max min(temps_rep_min) as temps_rep_min values(temps_rep_moyen) as temps_rep_moyen_orig values(nom_ws) as nom_ws values(date_appel) as date_appel by statut
| eval temps_rep_moyen = temps_rep_moyen / som_nb_appel
| fields - som_nb_appel
The above search will give you two rows with the weighted temps_rep_moyen as you defined it, temps_rep_max, temps_rep_min, and aggregated values of the other fields, including the original values of temps_rep_moyen. The first row corresponds to statut "KO", the second row to statut "OK". Hope this helps.
If you really need the fields to be named after statut, do so after this stats. (I'll leave that as your exercise; one rough sketch follows below.)
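One rough sketch of that renaming step, assuming all you need is the weighted average split into one column per statut (the resulting field names are illustrative; append this to the search above):
| eval temps_rep_moyen_{statut}=temps_rep_moyen
| fields - temps_rep_moyen
| stats values(*) as *
The eval builds temps_rep_moyen_OK and temps_rep_moyen_KO dynamically from the statut value, and the final stats collapses the two rows into one (the remaining fields become multivalue).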