All Posts



Example 1: this is in the log file.

03/02/2025 15:22:41 info: created keep-alive: { "identifier": "gdghsjjsjjl", "info": { "category": "other", }, }

Example 1: this is the event in Splunk (cut off after the first line).

03/02/2025 15:22:41 info: created keep-alive: { "identifier": "gdghsjjsjjl",

Example 2: this is in the log file; both of these events will be collected as one.

info: created keep-alive: { "identifier": "gdghsjjsjjl", "info": { "category": "other", }, }
03/02/2025 15:22:41 this is a test log
Since that app is supported by Splunk, consider opening a support case asking Splunk to resolve the deprecated libraries.
It's one of those settings which you can put into the default stanza but can override on a per-index basis. In other words, you can set all indexes as replicated by default and then override that behaviour for particular indexes, or vice versa: make all indexes non-replicated except for a few chosen ones (not a very bright idea, but hey, who is Splunk to tell you not to shoot yourself in the foot, eh?)
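A minimal indexes.conf sketch of that pattern (the index names here are made up for illustration):

```
[default]
# Replicate every index in the cluster by default.
repFactor = auto

[debug_scratch]
# ...but keep this one index local to each peer (not replicated).
repFactor = 0
```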
OK. "Did not succeed" doesn't tell us much.

1. What do the logs say (on both ends)? They should tell you if the connection has been attempted, if it failed, how it failed and so on.

2. Try connecting directly from the UF machine to the indexer using openssl s_client:

splunk cmd openssl s_client -connect <your_indexer>:<port> -showcerts

And see if it works.
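If openssl isn't handy on the machine, the same handshake check can be sketched in a few lines of Python. This is just an illustration of the idea (the host and port below are placeholders, not Splunk defaults you must use):

```python
import socket
import ssl

def check_tls(host: str, port: int, timeout: float = 5.0) -> str:
    """Try a TLS handshake and report roughly what `openssl s_client` would:
    success plus the negotiated protocol version, or the failure reason."""
    ctx = ssl.create_default_context()
    # For a quick connectivity test we skip verification, similar to
    # eyeballing `-showcerts` output; do NOT do this in production code.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return f"ok: {tls.version()}"
    except OSError as exc:  # covers refused/timed-out connections and SSL errors
        return f"failed: {exc}"

if __name__ == "__main__":
    # Substitute your indexer's address and receiving port here.
    print(check_tls("127.0.0.1", 9997, timeout=2.0))
```

A "failed" result with a certificate or protocol error points at the TLS config; a plain connection refusal or timeout points at networking (firewall, Docker port mapping, wrong port).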
The InfoSec App (InfoSec App for Splunk | Splunkbase) has not been updated in quite some time. I am getting this email from the Upgrade Readiness App:

"The Upgrade Readiness App detected 2 apps with deprecated jQuery on the https://xxx.splunkcloud.com:443 instance. InfoSec_App_for_Splunk. The Upgrade Readiness App detects apps with outdated Python or jQuery to help Splunk admins and app developers prepare for new releases of Splunk in which lower versions of Python and jQuery are removed. For more details about your outdated apps, see the Upgrade Readiness App on your Splunk instance listed above."

Are we expecting to see an update?
Can't do anything without knowing your actual data (possibly anonymized if it contains sensitive information somewhere in the middle).

As long as you don't have valid data which looks like a timestamp in the middle of your multiline event, you will probably be good with something like this (it might need adjusting to your date format):

LINE_BREAKER=([\r\n]+)\d{2}/\d{2}/\d{4}

And don't touch SHOULD_LINEMERGE: it should be set to false and never ever changed to true (honestly, there are almost no valid use cases for it to be set to true).
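To see why that LINE_BREAKER keeps the multiline object inside one event, here is a small Python simulation of the breaking behaviour. The sample log lines are invented; Splunk itself breaks on the captured group in LINE_BREAKER, which the regex lookahead mimics here:

```python
import re

# Invented sample data in the poster's DD/MM/YYYY HH:MM:SS format:
# one multiline event containing a JSON object, then a one-line event.
raw = (
    "15/05/2024 16:35:45 info: created keep-alive: {\n"
    '  "identifier": "gdghsjjsjjl",\n'
    '  "info": { "category": "other" }\n'
    "}\n"
    "15/05/2024 16:35:46 this is a test log"
)

# LINE_BREAKER=([\r\n]+)\d{2}/\d{2}/\d{4} tells Splunk to break an event at
# the captured newline(s) only when a date follows.  The lookahead below
# mimics that: the newline is consumed, the date stays with the next event,
# and newlines inside the JSON object do not trigger a break.
events = re.split(r"[\r\n]+(?=\d{2}/\d{2}/\d{4} )", raw)

for event in events:
    print("EVENT:", event.splitlines()[0])
```

Running this yields exactly two events: the whole JSON block stays attached to the 16:35:45 timestamp, and the test log becomes its own event.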
Cisco provides hardware recommendations that should be followed, but there is no recommended hardware or architecture setup that will allow the older TA to scale beyond 8K events per second of throughput. Only an upgrade to the new TA will allow higher throughput.
I'm not saying it's great news, since the support is limited and so on, but it's good to know that at least _someone_ has at least _some_ knowledge of what's going on. Thanks for sharing.
Indeed, Cisco Security (the supporting team) provided additional information that explained in detail what was happening with the TAs involved.  Please see my comments above.
There is indeed just one big source, because the Cisco FMC concentrates all Cisco logs and then provides the transfer interface via eStreamer.
An update for those who may have similar issues:

Support for these Technical Apps and the eStreamer protocol interface to Splunk is not provided by either Splunk Support or Cisco Support. Opening tickets with either party did not successfully resolve these questions. The only support currently available for this interface is the email splunk_cisco_security_cloud@cisco.com. If staff are available to respond, they will answer some limited questions.

Cisco Security confirms that Splunkbase #3662 (TA-eStreamer) was desupported in 2024, because the eNcore client it is based upon has been desupported. In essence, the eNcore client is being replaced. As a result, Cisco Security recommends that systems running Splunkbase #3662 replace it with the new (supported) TA, which is Splunkbase #7404 (Cisco Security Cloud).

Cisco Security indicates that the older TA-eStreamer had a maximum expected throughput of under 10,000 events per second, and was practically limited to under 8K of continuous throughput. So applications like ours, which routinely exceed 8K per second, would never have successfully used TA-eStreamer at that performance level. Performance levels for the newer Cisco Security Cloud app (#7404) are expected to reach a maximum of 15-20K events per second, because it uses a new eStreamer SDK that replaces the decommissioned eNcore client software. Since it is possible our application may rise above that level, I asked if the app potentially supported using things like load balancers to scale beyond 15-20K, but no answer was provided for this question.

The Cisco Security team responding to my questions indicated that support for this TA is somewhat limited, with only best-effort support during Eastern US office hours. The Cisco Security team of course also recommends using hardware that follows their documented guidelines and provides sufficient memory, disk, and CPU to run the software at its maximum performance levels.
But once those maximum performance levels are reached (15-20K per second), there is no recommended way to scale beyond that. Our team expects to experiment with various setups and architectures to see how far we can push the new TA.
Hi @Amith55555, could you share some samples (anonymized if necessary) of your logs of both types? Please share them in text format (not a screenshot!) using the "Insert/Edit code sample" button.

Ciao.
Giuseppe
Hey, I have a problem with event breaking. My app outputs logs that start with a date and time in the format 15/05/2024 16:35:45. Some events contain an object and can span multiple lines, but every event starts with a date and time. For some reason Splunk sometimes combines two events, and sometimes cuts off an event that has an object in it. I tried multiple settings in props.conf, such as LINE_BREAKER, SHOULD_LINEMERGE, and more. I'm new to Splunk and would be grateful if you can help me.
Hi everyone! Is there a way to troubleshoot and fix this issue? We have other instances, and they work fine. Internet is 24.7 Mbps download / 65.2 Mbps upload, so that's OK. SSH and ping to the host work fine; only the web page does not work for me. Colleagues do not have this problem.

http://ip:8080/en-US/account/login?return_to=%2Fen-US%2F

This page isn't working. <ip> didn't send any data. ERR_EMPTY_RESPONSE
@tt-nexteng Examine the splunkd.log file on both the Universal Forwarder and the Indexer for any TLS-related error messages; this can provide clues about what might be going wrong. Also review your Docker configuration to ensure that all necessary ports are exposed and that the container has the correct network settings.

https://docs.splunk.com/Documentation/Splunk/9.4.0/Security/Validateyourconfiguration
It does seem an oversight for the dashboards to have been updated with a specific index name rather than a macro. This is an app built by Splunk Works, so whilst it isn't an officially Splunk-supported app, you might be able to log a support case with Splunk about it, and hopefully they might revert this back to including the macros. Due to the way the dashboards are distributed and compiled, it looks like it would be hard to manually edit the dashboards yourself, especially as you are running in Splunk Cloud. I think your best option at this point is to raise a support case and take it from there; hopefully they will be able to push an updated version! Good luck, and sorry I couldn't help further.
@SplunkExplorer You can keep it in the default stanza:

[default]
# Configure all indexes to use the SmartStore remote volume called
# "remote_store".
# Note: If you want only some of your indexes to use SmartStore,
# place this setting under the individual stanzas for each of the
# SmartStore indexes, rather than here.
remotePath = volume:remote_store/$_index_name
repFactor = auto
I have tried many times following this document, but I keep failing. If I remove the TLS settings and revert these four configuration files to their original state before modification, everything works fine. Additionally, I am running the indexer in Docker. Could this be related to the issue?
I don't see why you cannot set repFactor in [default].  homePath also is listed as a per-index option, but I put it in the default stanza all the time (successfully). Try it and let us know how it works.
@tt-nexteng  Check this video for reference https://www.youtube.com/watch?v=vI7466EwG7I