All Posts

@kiran_panchavat  It is not working for me. Are you using different event-breaking and timestamp settings? I used the props.conf below:

[<SOURCETYPE NAME>]
CHARSET = AUTO
SHOULD_LINEMERGE = true
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TIME_PREFIX = "ds":\s*"
@Praz_123  Check this  
@kiran_panchavat  The data looks like this:

{
  "version": "200",
  "predictions": [
    { "ds": "2023-01-01T01:00:00", "y": 25727, "yhat_lower": 23595.643771045987, "yhat_upper": 26531.786203915904, "marginal_upper": 26838.980030149163, "marginal_lower": 23183.715141246714, "anomaly": false },
    { "ds": "2023-01-01T02:00:00", "y": 24710, "yhat_lower": 21984.478022195697, "yhat_upper": 24966.416390280523, "marginal_upper": 25457.020250925423, "marginal_lower": 21744.743048120385, "anomaly": false },
    { "ds": "2023-01-01T03:00:00", "y": 23908, "yhat_lower": 21181.498740796877, "yhat_upper": 24172.09825724038, "marginal_upper": 24449.705257711226, "marginal_lower": 20726.645610860345, "anomaly": false },
@Praz_123  Let me try in my lab and get back to you shortly. 
@kiran_panchavat  I am getting an error. I need the same date and time extraction, but while using the time format and time prefix I get the error below. Here is my props.conf:

[_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\S\s\n]+"predictions":\s\[\s*)|}(\s*\,\s*){|([\s\n\r]*\][\s\n\r]*}[\s\n\r]*)
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TIME_PREFIX = \[|ds\"\:\s\".
@Praz_123  \b\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.000)?\b  
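For reference, a minimal props.conf sketch that puts these pieces together (a sketch only, not the poster's final config: the sourcetype name json_predictions is a placeholder, and the first and last events would still carry the surrounding {"version": ... and ]} wrappers unless the LINE_BREAKER is extended or a SEDCMD strips them):

[json_predictions]
SHOULD_LINEMERGE = false
# break between prediction objects; the captured comma/whitespace is discarded as the delimiter
LINE_BREAKER = \}(\s*,\s*)\{
TIME_PREFIX = "ds":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30

With this, each element of the predictions array becomes its own event and the timestamp is read from the ds value.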
You can technically achieve this through post-processing of the timechart data. All you do is create your timechart with the smaller span, then add up the 6 x 10-minute blocks outside your time range and remove the unnecessary ones. Here's an example using streamstats/eventstats; there are probably other ways, but this works:

index=_audit
| timechart span=10m count
| eval t=strftime(_time, "%H")
| streamstats window=6 sum(eval(if(t>=7 AND t<19, null(), count))) as hourly by t
| eventstats max(hourly) as hourly_max min(hourly) as hourly_min by t
| where hourly=hourly_min OR isnull(hourly)
| eval hourly=hourly_max
| fields - hourly* t

You could make it simpler depending on your total search time range. You will see that the X axis will not change, but you will only have hourly data points in the 19-07 hours.
I need to write a regex for the timestamp and the event, the same as shown in the image below.
@wjrbrady  The Splunk timechart command's span argument must be a fixed value per search execution; you cannot dynamically change the span within a single timechart based on the hour of the day. However, you can achieve similar logic using a combination of eval, bin, and append. For example, using append:

search ... earliest=@d latest=now
| eval hour=strftime(_time,"%H")
| where hour > 7 AND hour < 19
| timechart span=10m sum(count) as count
| append
    [ search ... earliest=@d latest=now
    | eval hour=strftime(_time,"%H")
    | where hour <= 7 OR hour >= 19
    | timechart span=1h sum(count) as count ]
| sort _time

Also, if you want a single timeline with custom buckets, you can create your own time buckets using eval and bin; a sketch follows this post.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
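As a rough illustration of the eval/bin idea mentioned above (a sketch under assumptions carried over from the thread: the base search, the 07-19 window, and a simple count metric), variable-width buckets can be emulated by flooring _time to a per-event bucket size:

search ... earliest=@d latest=now
| eval hour=tonumber(strftime(_time, "%H"))
| eval bucket=if(hour>=7 AND hour<19, 600, 3600)
| eval _time=floor(_time/bucket)*bucket
| stats count by _time

This gives 10-minute buckets (600 seconds) between 07:00 and 19:00 and 1-hour buckets (3600 seconds) otherwise, without needing append. The trade-off is that the result is a stats table rather than a timechart, so empty buckets are not padded on the X axis.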
Hello @PickleRick, I have tried implementing that, but the timestamp in the lookup is not passed correctly in this search, and I am a bit confused. Whatever timestamp value the lookup has, it gets passed as a different value in the search, which feels strange. Please let me know if I have missed anything. Thanks!
@wipark  Check for replication quarantine or bundle issues: large or problematic files (e.g. big CSV lookups) can cause replication to fail or be quarantined. Review metrics.log and splunkd.log on all SHC members for replication errors or warnings.

Test a manual change: make a simple change to a standard file (e.g. props.conf) via the UI or REST API and see if it replicates. If standard files replicate but your custom file does not, it's likely a file location or inclusion issue.

If the cluster is out of sync, force a resync if required, e.g.:

splunk resync shcluster-replicated-config

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
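A couple of commands that can help while working through the steps above (a sketch; run the CLI command on an SHC member, and the component wildcard in the log search is an assumption based on default splunkd logging):

splunk show shcluster-status

index=_internal sourcetype=splunkd component=ConfReplication* (log_level=WARN OR log_level=ERROR)

The first shows the members' view of the cluster; the second surfaces configuration replication warnings and errors from splunkd.log.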
@SCK
1. Snowflake calls the Splunk API directly: possible with Snowflake's GetSplunk processor. Reference: https://docs.snowflake.com/en/user-guide/data-integration/openflow/processors/getsplunk
2. Splunk exports reports to a cloud repo: schedule Splunk searches/reports and export the results. Configure Splunk to send the scheduled report output to a supported cloud storage using scripts (Python, Bash), Splunk alert actions, etc., and ingest it into Snowflake using external stages (see the sketch after this post). Reference: https://estuary.dev/blog/snowflake-data-ingestion/#:~:text=The%20first%20step%20is%20to,stage%20(e.g.%2C%20CSV).

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
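For the second option, one way to pull a search's results out of Splunk for staging is the REST export endpoint. A rough sketch (hostname, credentials, index and field names are placeholders; in practice you would use a token or an alert action rather than basic auth in a cron job):

curl -k -u admin:changeme "https://splunk.example.com:8089/services/search/jobs/export" \
  -d search="search index=my_index earliest=-1d@d latest=@d | table _time host payload" \
  -d output_mode=csv \
  -o daily_export.csv

The resulting CSV can then be uploaded to the cloud bucket backing a Snowflake external stage and loaded with COPY INTO.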
Context: We have Splunk ES set up on-prem. We want to extract the required payloads through queries, generate scheduled reports (e.g., daily), and export these to a cloud location for ingestion by Snowflake.

Requirement:
1. Is there any way we can have an API connection with Snowflake where it can call the API to extract specific logs from a specific index in Splunk?
2. If #1 is not possible, can we at least run queries and send that report to a cloud repository for Snowflake to extract from?

TIA
Thank you for the tip about transform names, adding that to my Splunk notes. Hoping this filtering is only a temporary solution. I do want to stop the juniper equipment from sending "RT_FLOW_SESSION_CLOSE" logs once our team has more time.
Thank you so much!
Hi @livehybrid

"Within your app, have you set a conf_replication_include key/value pair to tell the system to replicate it? conf_replication_include.<conf_file_name> = <boolean>"
Yes, I have set that.

"Is your production environment a different architecture (e.g. SHC vs single instance) than your local environment?"
No, both are SHCs.
Regarding the DS specifically, have a good read of https://docs.splunk.com/Documentation/Splunk/latest/Updating/Upgradepre-9.2deploymentservers but essentially you need to make sure that your indexers have the relevant DS indexes created, as the phone-home and other deployment data is now held there:

== indexes ==
[_dsphonehome]
[_dsclient]
[_dsappevent]

and also configure outputs.conf to ensure that the data is saved locally on the DS too (so it can display the client info!):

== outputs.conf ==
[indexAndForward]
index = true
selectiveIndexing = true

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @wipark

Within your app, have you set a conf_replication_include key/value pair to tell the system to replicate it?

conf_replication_include.<conf_file_name> = <boolean>

e.g.

conf_replication_include.yourCustomConfFile = true

Is your production environment a different architecture (e.g. SHC vs single instance) than your local environment?

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @heres1

Confirmed by the docs, there is no need to upgrade to an intermediate version; you can upgrade directly from 9.1.x to 9.4.x. There are quite a few differences between 9.1.1 and 9.4.2, so rather than listing them all here, I'd recommend having a read through https://docs.splunk.com/Documentation/Splunk/9.4.2/Installation/AboutupgradingREADTHISFIRST as there may be other changes/feature deprecations that you rely on. Most notable are probably the KVStore upgrade and SSL changes, but there are also some big Deployment Server changes, so it's also worth reading https://docs.splunk.com/Documentation/Splunk/latest/Updating/Upgradepre-9.2deploymentservers which details some of the changes and possible configuration changes you may have to make around your log forwarding on your DS in order to retain visibility of the Forwarder Management / Agent Manager section.

Are you running Linux or Windows? I'm not sure of specific changes for either, but happy to review this.

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
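Before starting, a quick pre-upgrade check related to the KVStore point above can save surprises (a sketch; the path assumes a default /opt/splunk installation):

# record the current Splunk version and KVStore status/engine before upgrading
/opt/splunk/bin/splunk version
/opt/splunk/bin/splunk show kvstore-status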
One more thing - try to be more creative with your transform names. A "common" name like "setnull" can easily cause a collision with an identically named transform defined elsewhere. BTW, why not just _not_ send those events from the JunOS? You'd get both lower CPU load on the box and less work on the receiving end.
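To illustrate the naming point, a filtering sketch with a more distinctive transform name (placeholders only: the sourcetype juniper:junos:firewall and the transform name are assumptions, not the poster's actual configuration):

== props.conf ==
[juniper:junos:firewall]
TRANSFORMS-drop_rtflow_close = juniper_drop_rt_flow_session_close

== transforms.conf ==
[juniper_drop_rt_flow_session_close]
# route matching events to the null queue so they are never indexed
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue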