Hello @gcusello , If you mean with multiple values from the lookup, I haven't tried that yet. I would like to compare the time from the lookup against the index event timestamps, allowing a deviation of +0.5s or -0.5s from the time in the index, and show the result. Please let me know if there is any other way to do it. Thanks!
Hi @smanojkumar , without the static value, does it run? Ciao. Giuseppe
Hi @thanh_on , you can find this by viewing the license consumption for each day; that's the total daily indexing volume across all the indexers in the cluster. Ciao. Giuseppe
We had the same issue after an update, but the solution resolved it. Thank you!
Hello @gcusello , I was just testing with a single value; in practice it will be dynamic. Thanks again!
Hi @thanh_on  The "Daily Data Volume" in this case is the amount of daily ingest.  You can get this by going to https://yourSplunkInstance/en-US/manager/system/licensing Or by running the followin... See more...
Hi @thanh_on  The "Daily Data Volume" in this case is the amount of daily ingest.  You can get this by going to https://yourSplunkInstance/en-US/manager/system/licensing Or by running the following search: index=_internal [ rest splunk_server=local /services/server/info | return host] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d | eval _time=_time - 43200 | bin _time span=1d | stats latest(b) AS b by slave, pool, _time | timechart span=1d sum(b) AS "volume" fixedrange=false | fields - _timediff | foreach "*" [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Okay @mchoudhary - this might look a little bizarre, but stay with me. You could use the following table output; it uses a subsearch to determine the months returned, based on the earliest/latest set by the time picker, and lists them out as per the screenshot below. Would this work for you?

| table Source [| makeresults count=12
    | streamstats count as month_offset
    | addinfo
    | eval start_epoch=info_min_time, end_epoch=info_max_time
    | eval start_month=strftime(start_epoch, "%Y-%m-01")
    | eval month_epoch = relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
    | where month_epoch <= end_epoch
    | eval month=strftime(month_epoch, "%b")
    | stats list(month) as search ]

The full search:

| tstats count where index=main by _time span=1d
| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| eval Source="Email"
| eval Blocked=count
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| xyseries Source MonthName Blocked
| addinfo
| table Source [| makeresults count=60
    | streamstats count as month_offset
    | addinfo
    | eval start_epoch=info_min_time, end_epoch=info_max_time
    | eval start_month=strftime(start_epoch, "%Y-%m-01")
    | eval month_epoch = relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
    | where month_epoch <= end_epoch
    | eval month=strftime(month_epoch, "%b")
    | stats list(month) as search ]

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I found that only a couple of lookups were referenced in local files. I also reviewed all allowlist and denylist settings in distsearch.conf, and everything appears to be in order. Additionally, I compared the server.conf files in both environments using btool. There were no significant differences aside from some expected variations in names and IDs. The only notable difference, though it doesn't seem to be relevant, is that captain_is_adhoc_searchhead is set to true in production.
Dear everyone, I have a Splunk indexer cluster (2 indexers) with: Replication Factor = 2, Search Factor = 2. I need to size an index A in indexes.conf. I found this useful website: https://splunk-sizing.soclib.net/ My concern with this website is how to calculate the "Daily Data Volume" (average uncompressed raw data). How can I calculate this? Can I use an SPL command on the Search Head to calculate it? Thanks & best regards.
Hi @smanojkumar , let me understand: why do you set _time to a fixed value? Ciao. Giuseppe
@PrewinThomas , I have reviewed the app configuration and logs but could not find any errors related to the issue. The application includes a passwords.conf file, which is replicated immediately across the SHC nodes. I wasn't able to find any exclusions regarding the custom file. 
Hello @gcusello & @PickleRick , Thanks for your time! I have converted both index time and Lookup time to epoch. It seems perfect with makeresults but while I'm using index data the time stamps... See more...
Hello @gcusello & @PickleRick , Thanks for your time! I have converted both index time and Lookup time to epoch. It seems perfect with makeresults but while I'm using index data the time stamps were changed to an another new value in the search, I herewith attched the snap of makeresults where its workign fine, and the other snaps as well,   Please let me know if i missed anything. Thanks!  
Thanks, I'll follow up with them and see if they can help. Should this app be de-listed from Splunkbase, or can we get someone to check with the devs and see if they might publish an updated version at a later stage? Not sure if someone from Splunk regularly checks the apps to see if they're still available...
Hi, thanks very much for this great answer. This worked very well. Cheers
@mchoudhary  You can try the below:

| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand where (Message_Log.filter.routeDirection="inbound") AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*") earliest=-6mon@mon latest=now by _time
| eval Source="Email"
| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| eventstats sum(Blocked) as Total by Source
| appendpipe [ stats values(Total) as Blocked by Source | eval MonthNum="9999-99", MonthName="Total" ]
| sort MonthNum
| eval Month=MonthName
| table Source Month Blocked

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
The main question here is whether you want to send new data or move existing data.
Be aware though that not all aggregation functions are further aggregatable. For example - sum or max/min can be aggregated from smaller spans into a correct overall value but avg cannot.
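A minimal sketch of the difference (the index=main and bytes field here are illustrative assumptions): sum and count can be rolled up from hourly buckets into a correct daily average, whereas averaging the hourly averages would weight every hour equally regardless of how many events it contains.

index=main earliest=-7d@d
| bin _time span=1h
| stats sum(bytes) as total count as events by _time
| bin _time span=1d
| stats sum(total) as total sum(events) as events by _time
| eval daily_avg = round(total / events, 2)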
Ok. Not knowing your use case, here are some general tips:
1) Don't overwrite _time unless you're absolutely sure of what you're doing. If you must overwrite it with another value extracted or calculated from your event, there is a high chance that your source onboarding wasn't done correctly.
2) As @gcusello already pointed out, the best way of handling timestamps is typically to use the Unix epoch-based value, not a strftime'd string representation.
These are general rules, and sometimes there are border cases where you need to do otherwise. But here comes another painful truth:
3) Be wary of timezones.
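On point 3, a minimal sketch of how timezones bite (relying on the behavior that, when the format string has no %z, strptime interprets the string in the searching user's configured timezone, so the two epochs below can differ by the user's UTC offset):

| makeresults
| eval naive = strptime("2024-05-01 12:00:00", "%Y-%m-%d %H:%M:%S")
| eval explicit = strptime("2024-05-01 12:00:00 +0000", "%Y-%m-%d %H:%M:%S %z")
| eval skew_hours = (naive - explicit) / 3600
| table naive explicit skew_hours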
Instead of computing the month before xyseries, it's better to carry _time into xyseries and use transpose to get your final layout. Unlike xyseries, transpose preserves row order into column order. But then, given that you only have one prescribed "source", I wonder if xyseries and streamstats are a waste. How about:

| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand where (Message_Log.filter.routeDirection="inbound") AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*") earliest=-6mon@mon latest=now by _time span=1mon@mon
| eval Month=strftime(_time, "%b")
| fields - _time
| transpose header_field=Month column_name=Source
| eval Source = "Email"
| fillnull value=0
| addtotals

Here, I removed the first stats sum because by using span=1mon@mon in tstats, that calculation is already done. I also removed eventstats and streamstats because a row total is more easily performed with addtotals. Note that _time is dropped before transpose so only the Blocked row is transposed, and header_field must match the Month field's exact case.
Hi @smanojkumar , in general, to compare timestamps it's always better to transform both of them into epoch time format (using the strptime function of the eval command). Ciao. Giuseppe
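Once both sides are in epoch, the ±0.5s tolerance described earlier in this thread becomes simple arithmetic. A minimal sketch (my_lookup.csv, the id join field, the lookup_time field, and its format string are all hypothetical placeholders):

index=main sourcetype=my_data
| lookup my_lookup.csv id OUTPUT lookup_time
| eval lookup_epoch = strptime(lookup_time, "%Y-%m-%d %H:%M:%S.%3N")
| where abs(_time - lookup_epoch) <= 0.5
| table _time lookup_time lookup_epoch

Since _time is already epoch under the hood, only the lookup side needs strptime.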