All Posts

@wipark
Check for replication quarantine or bundle issues: large or problematic files (e.g., big CSV lookups) can cause replication to fail or be quarantined. Review metrics.log and splunkd.log on all SHC members for replication errors or warnings.
Test a manual change: make a simple change to a standard file (e.g., props.conf) via the UI or REST API and see if it replicates. If standard files replicate but your custom file does not, it's likely a file location or inclusion issue.
If the cluster is out of sync, force a resync if required, e.g.: splunk resync shcluster-replicated-config
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
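For reference, a minimal sketch of the CLI side of these checks, run on a search head cluster member (no environment-specific values assumed): the first command reports member and replication health, the second forces the resync mentioned above.
== CLI (run on an SHC member) ==
splunk show shcluster-status
splunk resync shcluster-replicated-config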
@SCK
Snowflake calls the Splunk API directly: possible with Snowflake's GetSplunk processor. Reference: https://docs.snowflake.com/en/user-guide/data-integration/openflow/processors/getsplunk
Splunk exports reports to a cloud repo: schedule Splunk searches/reports and export the results. Configure Splunk to send the scheduled report output to a supported cloud storage location using scripts (Python, Bash) or Splunk alert actions, then ingest into Snowflake using external stages. Ref: https://estuary.dev/blog/snowflake-data-ingestion/#:~:text=The%20first%20step%20is%20to,stage%20(e.g.%2C%20CSV).
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
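As a rough sketch of the "export the results" path, this pulls a search result set as CSV via Splunk's REST export endpoint; the host, credentials, index, time range, and output file are placeholders, not values from this thread.
== CLI (hedged example) ==
curl -k -u admin:yourpassword https://your-splunk-host:8089/services/search/jobs/export \
  --data-urlencode search="search index=your_index earliest=-1d@d latest=@d | table _time host source" \
  -d output_mode=csv > daily_report.csv
The resulting daily_report.csv can then be uploaded to your cloud bucket (S3/Azure/GCS) with your usual tooling and read by a Snowflake external stage.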
Context: We have Splunk ES set up on-prem. We want to extract the required payloads through queries, generate scheduled reports (e.g., daily), and export these to a cloud location for ingestion by Snowflake.
Requirement:
1. Is there any way we can have an API connection with Snowflake where it can call the API to extract specific logs from a specific index in Splunk?
2. If #1 is not possible, can we at least run queries and send that report to a cloud repository for Snowflake to extract from?
TIA
Thank you for the tip about transform names; I'm adding that to my Splunk notes. Hopefully this filtering is only a temporary solution. I do want to stop the Juniper equipment from sending "RT_FLOW_SESSION_CLOSE" logs once our team has more time.
Thank you so much!
Hi @livehybrid
"Within your app, have you set a conf_replication_include key/value pair to tell the system to replicate it? conf_replication_include.<conf_file_name> = <boolean>" - Yes, I have set that.
"Is your production environment a different architecture (e.g. SHC vs single instance) than your local environment?" - No, both are SHCs.
Regarding the DS specifically, have a good read of https://docs.splunk.com/Documentation/Splunk/latest/Updating/Upgradepre-9.2deploymentservers but essentially you need to make sure that your indexers have the relevant DS indexes created, as the phone-home and other deployment data is now held there:
== indexes ==
[_dsphonehome]
[_dsclient]
[_dsappevent]
and also configure outputs.conf to ensure that the data is saved locally on the DS too (so it can display the client info!):
== outputs.conf ==
[indexAndForward]
index = true
selectiveIndexing = true
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @wipark
Within your app, have you set a conf_replication_include key/value pair to tell the system to replicate it?
conf_replication_include.<conf_file_name> = <boolean>
e.g. conf_replication_include.yourCustomConfFile = true
Is your production environment a different architecture (e.g. SHC vs single instance) than your local environment?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
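For context, this setting lives in server.conf under the [shclustering] stanza, and the key name is the custom conf file's name without the .conf extension. A minimal sketch, assuming the file is yourCustomConfFile.conf as in the example above:
== server.conf (on each SHC member) ==
[shclustering]
conf_replication_include.yourCustomConfFile = true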
Hi @heres1
As confirmed by the docs, there is no need to upgrade to an intermediate version - you can upgrade directly from 9.1.x to 9.4.x. There are quite a few differences between 9.1.1 and 9.4.2, so rather than listing them all here, I'd recommend having a read through https://docs.splunk.com/Documentation/Splunk/9.4.2/Installation/AboutupgradingREADTHISFIRST as there may be other changes or feature deprecations that affect things you rely on. Most notable are probably the KVStore upgrade and SSL changes, but there are also some big Deployment Server changes, so it's also worth reading https://docs.splunk.com/Documentation/Splunk/latest/Updating/Upgradepre-9.2deploymentservers which details some of the changes and possible configuration changes you may have to make around your log forwarding on your DS in order to retain visibility of the Forwarder Management / Agent Manager section.
Are you running Linux or Windows? I'm not sure of specific changes for either, but happy to review this.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
One more thing - try to be more creative with your transform names. A "common" name like "setnull" can easily cause a collision with an identically named transform defined elsewhere. BTW, why not just _not_ send those events from the JunOS box? You'd get both lower CPU load on the box and less work on the receiving end.
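To illustrate the naming point, a minimal sketch of index-time filtering with a uniquely named transform; the sourcetype below is a placeholder, not from this thread - use whatever your Juniper data actually arrives as.
== props.conf ==
[juniper:junos:firewall]
TRANSFORMS-drop_session_close = junos_drop_rt_flow_session_close
== transforms.conf ==
[junos_drop_rt_flow_session_close]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue
These go on the first full Splunk instance that parses the data (indexers or a heavy forwarder), not on a universal forwarder.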
Unlike the other SAP Cloud systems, SAP EC Payroll Cloud does not allow PowerConnect to be installed as an add-on, so customers are forced to use SolMan or SAP Cloud ALM to monitor jobs.
Thanks for your answer, I will take it into consideration; however, I have rolled back my upgrade.
Hi everyone, I'm developing an app that uses a custom configuration file. I'm updating the file using the Splunk JavaScript SDK and REST API calls. In my lab setup (Splunk 9.4.1), everything works as expected—the custom config file is replicated correctly. However, when I deploy the app in our production environment (also running Splunk 9.4.1), changes to the configuration file do not replicate to the other search heads. I used btool to verify that the file is not excluded from replication. Has anyone encountered a similar issue? What steps can I take to investigate and debug this? What specific logs or configurations should I check?
How would the results of such a search look? Do you want to change the span for the whole search, or have multiple spans within one search? (What sense would that make?)
Considerations for upgrading from Enterprise 9.1.1 to 9.4.2, while it's also a deployment server.
Is there a question here? What is it supposed to do?
Ok. Several things.
1. Unless you're working with a syslog-aware solution, load-balancing syslog usually doesn't end well. Having said that, I'm not aware whether modern haproxy "can syslog" or not; I've just never tried to use it for this purpose.
2. Whatever you do, with such a setup you're always introducing another explicit hop in your network path between the syslog source and your syslog receiver(s). Some solutions (rsyslog for sure, not sure about syslog-ng) can spoof the source address, but that works only for UDP syslog and can lead to network-level problems, especially when the return route doesn't match the supposed datagram origin. With TCP you simply cannot spoof the source address because return packets would go to the original source, not to the LB.
3. There is no single "syslog format", so each of your sources can send data in a different form. There are even some solutions which send differently formatted events depending on which subsystem those events come from.
4. There is no concept of "headers" in syslog. Proxies can add their own headers, but that usually applies to HTTP.
5. While the idea of having an LB component for HA seems sound, there is one inherent flaw in this reasoning - the LB becomes your SPOF. And in your case it adds a host of new problems without really solving the old ones.
If you really want a highly available syslog receiving solution, you'd need something that:
- can understand syslog, can process each event independently, can buffer events in case of network/receiver problems, and so on
- can be installed in a highly available 1+1 setup
Additionally, you might have problems with things like health checks for downstream receivers if you try to send plain TCP data. A general-purpose network-level load balancer doesn't meet the first requirement, and a typical open-source syslog server on its own doesn't meet the second (with a lot of fiddling with third-party clustering tools you can get a pair of syslog servers running with a floating IP, but then you're introducing a new layer of maintenance headaches).
So typically with syslog receiving you want a small syslog receiver as close to the sources as possible and as robust as possible. You don't want to send straight to HFs or indexers. Receiving syslog directly on Splunk has performance limitations and is difficult to maintain.
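As a hedged sketch of the "small syslog receiver close to the sources" pattern: have the local syslog daemon write per-host files and let a forwarder monitor them. The path, sourcetype, and index below are placeholders, not values from this thread.
== inputs.conf (on a forwarder colocated with the syslog receiver) ==
[monitor:///var/log/remote-syslog/*/*.log]
sourcetype = syslog
host_segment = 4
index = network
Here host_segment = 4 takes the directory name under /var/log/remote-syslog/ as the host value, so events keep their original sending host rather than the receiver's.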
@yuanliu Good morning. I've updated the search query; let me know if anything needs to be adjusted. So far the alert is not firing. My index search is looking for something that doesn't exist, so it should always alert unless I update the lookup table to today's date (5/27/2025) to mute it.
index=*sap sourcetype=FSC*
| fields _time index Eventts ID FIELD_02 FIELD_01 CODE ID FIELD* source
| rex field=index "^(?<prefix>\d+_\d+)"
| lookup lookup_site_ids.csv prefix as prefix output name as Site
| eval name2=substr(Site,8,4)
| rex field=Eventts "(?<Date>\d{4}-\d{2}-\d{2})T(?<Time>\d{2}:\d{2}:\d{2}\.\d{3})"
| fields - Eventts
| eval timestamp = Date . " " . Time
| eval _time = strptime(timestamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N"), Condition="test"
| eval Stamp = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| lookup Stoppage.csv name as Site OUTPUT Condition Time as Stamp
| search Condition="Stoppage"
| where Stamp = Time
| eval index_time = strptime(Time, "%Y-%m-%d %H:%M:%S.%3N")
| eval lookup_time = strftime(Stamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval CODE=if(isnull(CODE),"N/A",CODE), FIELD_01=if(isnull(FIELD_01),"N/A",FIELD_01), FIELD_02=if(isnull(FIELD_02),"N/A",FIELD_02)
| lookup code_translator.csv FIELD_01 as FIELD_01 output nonzero_bits as nonzero_bits
| eval nonzero_bits=if(FIELD_02="ST" AND FIELD_01="DA",nonzero_bits,"N/A")
| mvexpand nonzero_bits
| lookup Decomposition_File.csv Site as name2 Alarm_bit_index as nonzero_bits "Componenty_type_and_CODE" as CODE "Component_number" as ID output "Symbolic_name" as Symbolic_name Alarm_type as Alarm_type Brief_alarm_description as Brief_alarm_description Alarm_solution
| eval Symbolic_name=if(FIELD_01="DA",Symbolic_name,"N/A"), Brief_alarm_description=if(FIELD_01="DA",Brief_alarm_description,"N/A"), Alarm_type=if(FIELD_01="DA",Alarm_type,"N/A"), Alarm_solution=if(FIELD_01="DA",Alarm_solution,"N/A")
| fillnull value="N/A" Symbolic_name Brief_alarm_description Alarm_type
| table Site Symbolic_name Brief_alarm_description Alarm_type Alarm_solution Condition Value index_time Time _time Stamp lookup_time
Thanks for the update.