All Posts


One more thing: try to be more creative with your transform names. A "common" name like "setnull" can easily cause a collision with an identically named transform defined elsewhere. BTW, why not just _not_ send those events from the JunOS device in the first place? You'd get both lower CPU load on the box and less work on the receiving end.
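For example, a more collision-resistant version of the same filter might look like this (a sketch; the transform name is arbitrary and hypothetical, just make it app-specific):

== transforms.conf ==
# App-specific name avoids colliding with a generic [setnull] defined in another app
[myapp_juniper_drop_teardown]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue

== props.conf ==
[juniper]
TRANSFORMS-aSetnull = myapp_juniper_drop_teardown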
SAP EC Payroll Cloud does not allow for PowerConnect to be installed as an add-on unlike the other SAP Cloud systems, so customers are forced to use SolMan or SAP Cloud ALM to monitor jobs.  
Thanks for your answer, I will take it into consideration; however, I have rolled back my upgrade.
Hi everyone, I'm developing an app that uses a custom configuration file. I'm updating the file using the Splunk JavaScript SDK and REST API calls. In my lab setup (Splunk 9.4.1), everything works as expected—the custom config file is replicated correctly. However, when I deploy the app in our production environment (also running Splunk 9.4.1), changes to the configuration file do not replicate to the other search heads. I used btool to verify that the file is not excluded from replication. Has anyone encountered a similar issue? What steps can I take to investigate and debug this? What specific logs or configurations should I check?
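A starting point for debugging questions like this (a sketch, assuming default internal logging; ConfReplicationThread is the component that search head cluster configuration replication normally logs under):

index=_internal sourcetype=splunkd component=ConfReplicationThread log_level IN (WARN, ERROR)
| stats count latest(_raw) as latest_event by host

Errors there usually point at the stanza or file that failed to replicate.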
How would the results of such a search look? Do you want to change the span for the whole search, or have multiple spans within one search? (And what sense would that make?)
Considerations for upgrading from Enterprise 9.1.1 to 9.4.2 when the instance is also a deployment server.
Is there a question here? What is it supposed to do?
Ok. Several things.

1. Unless you're working with a syslog-aware solution, load-balancing syslog usually doesn't end well. Having said that, I'm not aware whether modern haproxy "can syslog" or not; I've just never tried to use it for this purpose.

2. Whatever you do, with such a setup you're always introducing another explicit hop in your network path between the syslog source and your syslog receiver(s). Some solutions (rsyslog for sure; not sure about syslog-ng) can spoof the source address, but that works only for UDP syslog and can lead to network-level problems, especially when the return route doesn't match the supposed datagram origin. With TCP you simply cannot spoof the source address, because return packets would go to the original source, not to the LB.

3. There is no single "syslog format", so each of your sources can send data in a different form. There are even some solutions which send differently formatted events depending on which subsystem those events come from.

4. There is no concept of "headers" in syslog. Proxies can add their own headers, but that usually applies to HTTP.

5. While the idea of having an LB component for HA seems sound, there is one inherent flaw in this reasoning: the LB becomes your SPOF. And in your case it adds a host of new problems without really solving the old ones.

If you really want a highly available syslog receiving solution, you need something that:
- can understand syslog and process each event independently, can buffer events in case of network/receiver problems, and so on
- can be installed in a highly available 1+1 setup

Additionally, you might have problems with things like health checks for downstream receivers if you try to send plain TCP data. A general-purpose network-level load balancer doesn't meet the first requirement, and a typical open-source syslog server on its own doesn't meet the second one (with a lot of fiddling with third-party clustering tools you can get a pair of syslog servers running with a floating IP, but then you're introducing a new layer of maintenance headaches).

So typically with syslog receiving you want a small syslog receiver as close to the sources as possible and as robust as possible. You don't want to send straight to HFs or indexers; receiving syslog directly on Splunk has performance limitations and is difficult to maintain.
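For illustration, a common pattern for the "small syslog receiver close to the sources" advice is a dedicated syslog daemon writing per-host files, with a Universal Forwarder on the same box picking them up (a sketch with hypothetical paths and index name):

== inputs.conf on the syslog collector ==
[monitor:///var/log/remote/*/syslog.log]
sourcetype = syslog
# 4th path segment (/var/log/remote/<host>/...) becomes the event's host
host_segment = 4
index = network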
@yuanliu Good morning. I've updated the search query; let me know if anything needs to be adjusted. So far the alert is not firing. My index search is looking for something that doesn't exist, so it should always alert unless I update the lookup table to today's date (5/27/2025) to mute it.
index=*sap sourcetype=FSC*
| fields _time index Eventts ID FIELD_02 FIELD_01 CODE ID FIELD* source
| rex field=index "^(?<prefix>\d+_\d+)"
| lookup lookup_site_ids.csv prefix as prefix output name as Site
| eval name2=substr(Site,8,4)
| rex field=Eventts "(?<Date>\d{4}-\d{2}-\d{2})T(?<Time>\d{2}:\d{2}:\d{2}\.\d{3})"
| fields - Eventts
| eval timestamp = Date . " " . Time
| eval _time = strptime(timestamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N"), Condition="test"
| eval Stamp = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
| lookup Stoppage.csv name as Site OUTPUT Condition Time as Stamp
| search Condition="Stoppage"
| where Stamp = Time
| eval index_time = strptime(Time, "%Y-%m-%d %H:%M:%S.%3N")
| eval lookup_time = strftime(Stamp, "%Y-%m-%d %H:%M:%S.%3N")
| eval CODE=if(isnull(CODE),"N/A",CODE), FIELD_01=if(isnull(FIELD_01),"N/A",FIELD_01), FIELD_02=if(isnull(FIELD_02),"N/A",FIELD_02)
| lookup code_translator.csv FIELD_01 as FIELD_01 output nonzero_bits as nonzero_bits
| eval nonzero_bits=if(FIELD_02="ST" AND FIELD_01="DA",nonzero_bits,"N/A")
| mvexpand nonzero_bits
| lookup Decomposition_File.csv Site as name2 Alarm_bit_index as nonzero_bits "Componenty_type_and_CODE" as CODE "Component_number" as ID output "Symbolic_name" as Symbolic_name Alarm_type as Alarm_type Brief_alarm_description as Brief_alarm_description Alarm_solution
| eval Symbolic_name=if(FIELD_01="DA",Symbolic_name,"N/A"), Brief_alarm_description=if(FIELD_01="DA",Brief_alarm_description,"N/A"), Alarm_type=if(FIELD_01="DA",Alarm_type,"N/A"), Alarm_solution=if(FIELD_01="DA",Alarm_solution,"N/A")
| fillnull value="N/A" Symbolic_name Brief_alarm_description Alarm_type
| table Site Symbolic_name Brief_alarm_description Alarm_type Alarm_solution Condition Value index_time Time _time Stamp lookup_time
Thanks for the update.
Hi @braxton839

If they are HFs then the config should work. You'll need to restart the HFs after deploying.

== props.conf ==
[juniper]
TRANSFORMS-aSetnull = setnull

== transforms.conf ==
# Filter juniper teardown logs to nullqueue
[setnull]
REGEX = RT_FLOW_SESSION_CLOSE
DEST_KEY = queue
FORMAT = nullQueue

If it's coming in with the juniper sourcetype, then I'm not sure why this wouldn't work. It's worth double-checking for typos etc. I assume there are no other props/transforms that you have customised which alter the queue value?

I've updated the TRANSFORMS- suffix on the above from the original to see if ordering makes any difference here; this should change the precedence and cause it to be applied before other things like sourcetype renaming (see the short illustration after this post).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
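For reference on the ordering point above: when several TRANSFORMS classes exist in one props.conf stanza, they are applied in ASCII order of the class name, so an "a" prefix runs earlier (illustration; the second class is hypothetical):

== props.conf ==
[juniper]
# "aSetnull" sorts before "zRename", so the null-queue filter runs first
TRANSFORMS-aSetnull = setnull
TRANSFORMS-zRename = some_other_transform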
Hi @heres1

After a Splunk Enterprise upgrade, if Forwarder Management is not showing any "phoning home" (i.e., connected) Universal Forwarders, you probably want to check a few things:

- Check that the deployment server (Forwarder Management) settings, SSL certificates, and the deploymentclient configuration on your Universal Forwarders are intact and were not overwritten by the upgrade. You mentioned restoring the /etc folder; I assume this includes the splunk.secret in etc/auth?
- Ensure the deployment server port (default 8089) is up and listening, and that network connectivity from forwarders to this port is working. It's worth using curl from one of the UFs to verify this, where possible.
- Check $SPLUNK_HOME/var/log/splunk/splunkd.log on both the server and forwarders for phone-home errors.
- Upgrades may overwrite configuration files or change SSL settings. If /etc was restored, verify deployment-specific files like deploymentclient.conf (on forwarders) and serverclass.conf (on the deployment server) are correct and that certificates/keys are valid.

Did you just upgrade the deployment server, or the UFs too? As @kiran_panchavat mentioned, there were changes in 9.2 which affect the indexes used for DS data, although you were already on 9.3.1, right? Were the clients definitely showing in Forwarder Management / Agent Manager prior to the upgrade?

Note: The index configuration changes (https://docs.splunk.com/Documentation/Splunk/latest/Updating/Upgradepre-9.2deploymentservers) do not affect the operation of the DS, i.e. it will still deploy apps to the UFs; they just do not show up in the UI. So it's worth confirming that the UFs are still able to access the DS! A quick cross-check independent of the UI is sketched below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
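For the cross-check mentioned above, something like this run on the deployment server lists clients regardless of what the UI shows (a sketch; exact field names can vary by version):

| rest /services/deployment/server/clients splunk_server=local
| table hostname, ip, lastPhoneHomeTime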
@heres1

Check this: https://docs.splunk.com/Documentation/Splunk/9.4.2/Updating/Upgradepre-9.2deploymentservers

This problem can occur in Splunk Enterprise 9.2 or higher if your deployment server forwards its internal logs to a standalone indexer or to the peer nodes of an indexer cluster. This issue can occur after an upgrade or in a new installation of 9.2 or higher. To rectify, add these settings to outputs.conf on the deployment server:

[indexAndForward]
index = true
selectiveIndexing = true

If you add these settings post-upgrade or post-installation, you might need to restart the deployment server.

Indexers require new internal deployment server indexes

The deployment server uses several internal indexes new in version 9.2. These indexes are included in all indexers at the 9.2 level and higher, but if you try to forward data from those indexes to a pre-9.2 indexer, problems can result. If you forward data to your indexer tier, create these new internal deployment server indexes in indexes.conf on any pre-9.2 indexers in your environment:

[_dsphonehome]
[_dsclient]
[_dsappevent]

If the indexers are at version 9.2 or higher, they are already configured with those indexes.

Data does not appear when forwarded through an intermediate forwarder

This problem can occur if your deployment server forwards its internal index data through an intermediate forwarder to a standalone indexer or to the peer nodes of an indexer cluster. To rectify, add this setting to outputs.conf on the intermediate forwarder:

[tcpout]
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)

If you specify the configuration within a deployment app and use the deployment server to deploy the app to the affected intermediate forwarders, you can later uninstall the app when the intermediate forwarders are upgraded to a future release that incorporates the update.

Deployment Server's Forwarder Management UI exhibits unexpected behaviours after upgrading to version 9.2.x. | Splunk
https://community.splunk.com/t5/Splunk-Enterprise/After-upgrading-my-DS-to-Enterprise-9-2-2-clients-can-t-connect/m-p/695607
I have upgraded Splunk Enterprise from 9.3.1 to 9.4.2 and already restored /etc, but now Forwarder Management does not show any Universal Forwarders phoning home.
Yes, this is a Heavy Forwarder (to be specific, two Heavy Forwarders). Juniper device event logs are sent directly to these Heavy Forwarders. According to our inputs.conf file, the sourcetype for these events is: juniper
@livehybrid I'm not sure how it's working for you, as I am still unable to get the results.
Hi @wjrbrady, I'm sorry, but it isn't possible to dynamically change the span value in a timechart command. You have to define a literal value; a minimal sketch follows below. Ciao. Giuseppe
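For instance (a minimal sketch against an internal index, with a hard-coded span):

index=_internal | timechart span=1h count by sourcetype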
It works! Thank you for the solution :)! 
Please try my updated query.