All Posts

Okay @mchoudhary - this might look a little bizarre but stay with me... you could use the following table output. This uses a subsearch to determine the months returned based on the earliest/latest set by the time picker and lists them out as per the screenshot below. Would this work for you?

| table Source [| makeresults count=12
  | streamstats count as month_offset
  | addinfo
  | eval start_epoch=info_min_time, end_epoch=info_max_time
  | eval start_month=strftime(start_epoch, "%Y-%m-01")
  | eval month_epoch = relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
  | where month_epoch <= end_epoch
  | eval month=strftime(month_epoch, "%b")
  | stats list(month) as search ]

| tstats count where index=main by _time span=1d
| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| eval Source="Email"
| eval Blocked=count
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| xyseries Source MonthName Blocked
| addinfo
| table Source [| makeresults count=60
  | streamstats count as month_offset
  | addinfo
  | eval start_epoch=info_min_time, end_epoch=info_max_time
  | eval start_month=strftime(start_epoch, "%Y-%m-01")
  | eval month_epoch = relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon")
  | where month_epoch <= end_epoch
  | eval month=strftime(month_epoch, "%b")
  | stats list(month) as search ]

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I found that only a couple of lookups were referenced in local files. I also reviewed all allowlist and denylist settings in distsearch.conf, and everything appears to be in order. Additionally, I compared the server.conf files in both environments using btool. There were no significant differences aside from some expected variations in names and IDs. The only notable difference, though it doesn't seem to be relevant, is that captain_is_adhoc_searchhead is set to true in production.
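For anyone following along, a comparison like that can be done by dumping the effective configuration with btool on each environment and diffing the results. A sketch, with the output paths as placeholders:

$SPLUNK_HOME/bin/splunk btool server list --debug > /tmp/server_effective_prod.txt
# run the same command on the other environment, then compare:
diff /tmp/server_effective_prod.txt /tmp/server_effective_nonprod.txt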
Dear everyone, I have a Splunk cluster (2 indexers) with:
- Replication Factor = 2
- Search Factor = 2
I am supposed to size an index A in indexes.conf. Then I found this useful website: https://splunk-sizing.soclib.net/ My concern with this website is how to calculate "Daily Data Volume" (average uncompressed raw data). So, how can I calculate this? Can I use an SPL command on the Search Head to calculate it? Thanks & best regards.
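One common approach is to read the license usage log, since license usage is metered against uncompressed raw data. A minimal sketch, assuming the _internal license usage logs are searchable from your Search Head and that the index is literally named A:

index=_internal source=*license_usage.log type=Usage idx=A earliest=-30d@d latest=@d
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) as daily_GB
| stats avg(daily_GB) as avg_daily_GB

Here b is the raw bytes metered per usage event; averaging the daily sums over 30 days gives the "Daily Data Volume" figure the sizing calculator asks for.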
Hi @smanojkumar , let me understand: why do you set _time to a fixed value? Ciao. Giuseppe
@PrewinThomas , I have reviewed the app configuration and logs but could not find any errors related to the issue. The application includes a passwords.conf file, which is replicated immediately across the SHC nodes. I wasn't able to find any exclusions regarding the custom file. 
Hello @gcusello & @PickleRick , Thanks for your time! I have converted both the index time and the lookup time to epoch. It works perfectly with makeresults, but when I use the indexed data, the timestamps change to another value in the search. I have attached the snap of makeresults where it's working fine, along with the other snaps. Please let me know if I missed anything. Thanks!
Thanks, I'll follow up with them and see if they can help. Should this app be de-listed from Splunkbase, or can we get someone to check with the devs and see if they might publish an updated version at a later stage? Not sure if someone from Splunk regularly checks the apps to see if they're still available...
Hi, thanks very much for this great answer. This worked very well. Cheers
@mchoudhary  You can try the below:

| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand
  where (Message_Log.filter.routeDirection="inbound")
  AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*")
  earliest=-6mon@mon latest=now by _time
| eval Source="Email"
| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| eventstats sum(Blocked) as Total by Source
| appendpipe [ stats values(Total) as Blocked by Source | eval MonthNum="9999-99", MonthName="Total" ]
| sort MonthNum
| eval Month=MonthName
| table Source Month Blocked

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
The main question here is whether you want to send new data or move existing data.
Be aware though that not all aggregation functions are further aggregatable. For example - sum or max/min can be aggregated from smaller spans into a correct overall value but avg cannot.
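To illustrate with a sketch: a daily average recomputed from hourly buckets must be rebuilt from the underlying sum and count, not by averaging the hourly averages. The index name and the bytes field here are assumptions:

index=main earliest=-7d@d latest=@d
| bin _time span=1h
| stats sum(bytes) as total_bytes, count as n by _time
| bin _time span=1d
| stats sum(total_bytes) as total_bytes, sum(n) as n by _time
| eval daily_avg_bytes = round(total_bytes / n, 2)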
Ok. Not knowing your use case, there are some general tips:
1) Don't overwrite _time unless you're absolutely sure of what you're doing. If you must overwrite it with another value extracted or calculated from your event, there is a high chance that your source onboarding wasn't done correctly (see the props.conf sketch after this post).
2) As @gcusello already pointed out - typically the best way of handling timestamps is using the unix epoch-based value, not a strftime'd string representation.
These are general rules and sometimes there are border cases when you need to do otherwise. But here comes another painful truth:
3) Be wary of timezones.
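For point 1, timestamp extraction usually belongs at onboarding time rather than at search time. A minimal props.conf sketch; the sourcetype name, format string, and timezone are assumptions for illustration:

[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TZ = UTC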
Instead of computing the month before xyseries, it's better to carry _time into xyseries and use transpose to get your final layout. Unlike xyseries, transpose preserves row order into column order. But then, given that you only have one prescribed "source", I wonder if xyseries and streamstats are a waste. How about

| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand
  where (Message_Log.filter.routeDirection="inbound")
  AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*")
  earliest=-6mon@mon latest=now by _time span=1mon
| eval Month=strftime(_time, "%b")
| transpose header_field=Month column_name=Source
| eval Source = "Email"
| fillnull value=0
| addtotals

Here, I removed the first stats sum because by using span=1mon in tstats, that calculation is already done. I also removed eventstats and streamstats because a total per row is more easily performed with addtotals.
Hi @smanojkumar , in general, to compare timestamps it's always better to transform both of them into epoch time format (using the strptime function of the eval command). Ciao. Giuseppe
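A minimal sketch of that conversion; the lookup_time field name and its format string are assumptions to adapt:

| eval lookup_epoch = strptime(lookup_time, "%Y-%m-%d %H:%M:%S")
| eval diff_seconds = abs(_time - lookup_epoch)
| where diff_seconds < 300

Since _time is already in epoch format, once lookup_time is converted the two values can be compared directly (here, keeping events within 5 minutes of each other).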
@SN1  If you're moving the entire instance (including historical data, configs, and users) from one machine to another: Migrate a Splunk Enterprise instance from one physical machine to another | Splunk Docs

Restore Archived Indexed Data (Bucket-Level Transfer): If you want to move specific historical data (e.g., cold/frozen buckets) to another instance: Restore archived indexed data | Splunk Docs. You can copy bucket files into the thaweddb directory of the target index on the new instance. This is ideal for selective historical data recovery.
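A sketch of the thaweddb route under stated assumptions - the index name (indexA) and the bucket directory name are illustrative, and paths will differ on your system:

# copy the archived bucket into the target index's thaweddb on the new instance
cp -r /tmp/db_1549496556_1549196556_5 $SPLUNK_HOME/var/lib/splunk/indexA/thaweddb/
# rebuild the bucket, then restart Splunk
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/indexA/thaweddb/db_1549496556_1549196556_5
$SPLUNK_HOME/bin/splunk restart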
Hi @SN1 , let me understand: you have two stand-alone Splunk servers and you want to send the data of an index from the second to the first, is that correct? If this is your requirement, the first question should be: why? But anyway, I need two more pieces of information for your solution:
- is there another Heavy Forwarder forwarding these logs?
- do you want to forward all the data or only the data of one index?
If the logs pass through another full Splunk instance (a Heavy Forwarder), you have to work on it; otherwise, work on ServerB. You have to create a fork following the instructions at https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/forwarding-and-receiving-data/9.4/perform-advanced-configuration/forward-data-to-third-party-systems
If you want to forward all logs, you can configure forwarding and receiving [Settings > Forwarding and Receiving > Forwarding] with the option "Index and forwarding"; in this way you forward all logs while maintaining a local copy of them. For more information see https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/forwarding-and-receiving-data/9.4/perform-advanced-configuration/forward-data-to-third-party-systems#forward-all-data-0
If instead you want to forward only a subset of data, you have to use the configurations at https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/forwarding-and-receiving-data/9.4/perform-advanced-configuration/forward-data-to-third-party-systems#forward-a-subset-of-data-0
Ciao. Giuseppe
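For reference, a minimal outputs.conf sketch for the "Index and forwarding" option described above, placed on the sending instance. The receiver host and port are placeholders, and the receiver needs a matching splunktcp input (e.g. [splunktcp://9997] in inputs.conf):

[tcpout]
defaultGroup = serverA
indexAndForward = true

[tcpout:serverA]
server = serverA.example.com:9997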
Hi @Paul_Szoke  It looks like the developers of the app no longer list it on their site at all - I've searched high and low! It could be that they no longer maintain it but may be able to provide an earlier version. I think the best action here would be to contact them (https://www.rojointegrations.com/contact) for more info and to see if they can provide the app.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @SN1  Can you confirm - is this historic data that has already been indexed, or new data which is being received currently?

If you are currently receiving data into A and want to send it to B then check out https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/forwarding-and-receiving-data/9.4/perform-advanced-configuration/forward-data-to-third-party-systems#forward-all-data-0

If you are looking to move old indexes from A to B then the easiest way is to copy the buckets from one system to the other. If they are both standalone instances with unique GUIDs then this should be fine - just make sure you define the indexes in indexes.conf (a sketch follows below).

Check out https://help.splunk.com/en/splunk-enterprise/get-started/install-and-upgrade/9.4/upgrade-or-migrate-splunk-enterprise/migrate-a-splunk-enterprise-instance-from-one-physical-machine-to-another for more information on how to migrate from one to the other.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
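For the bucket-copy route, a minimal indexes.conf sketch defining the target index on B before copying buckets across; the index name and the default paths are assumptions:

[indexA]
homePath = $SPLUNK_DB/indexA/db
coldPath = $SPLUNK_DB/indexA/colddb
thawedPath = $SPLUNK_DB/indexA/thaweddb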
Hello, I have an index (A) on an indexer and another index (B) on a search head (we are making it standalone). I want to send data from index A to B. How do I proceed? I have admin rights.
Apigee API Management Monitoring App for Splunk | Splunkbase The "Visit Site" link gives this message: "404 Sorry, this page is not available. The link you followed may be broken, or the page may have been removed." Are there any alternative ways to download?