All Posts


Hi @Amith55555  Does the following work for you?

SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(\d{2}\/\d{2}\/\d{4}\s\d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S

This assumes your date format is DD/MM/YYYY, not MM/DD/YYYY, but feel free to tweak if required. Let me know how you get on!
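As a quick sanity check outside Splunk, the proposed break pattern and time format can be exercised in Python (a sketch with made-up sample lines; Splunk consumes only capture group 1 of LINE_BREAKER, which the lookahead below emulates):

```python
import re
from datetime import datetime

# Break before any line that starts with DD/MM/YYYY HH:MM:SS,
# emulating LINE_BREAKER = ([\r\n]+)(\d{2}/\d{2}/\d{4}\s\d{2}:\d{2}:\d{2})
breaker = re.compile(r"[\r\n]+(?=\d{2}/\d{2}/\d{4}\s\d{2}:\d{2}:\d{2})")

raw = (
    "03/02/2025 15:22:41 info: first event\n"
    "continuation line of the first event\n"
    "04/02/2025 09:10:11 info: second event"
)

events = breaker.split(raw)
# Each event's leading timestamp parses with the proposed TIME_FORMAT
first_ts = datetime.strptime(events[0][:19], "%d/%m/%Y %H:%M:%S")
print(len(events), first_ts.isoformat())  # 2 2025-02-03T15:22:41
```

The continuation line stays attached to the first event because it does not start with a timestamp, which is exactly the behaviour the props.conf settings aim for.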
It depends on what you mean by "network traffic". If you can define the events, you could look at the eventgen tool to create your own data sets. Alternatively, if it is web traffic (which comes from a network), you could look at the tutorial dataset.
Hi, I wanted to ask how I can get the total data transferred from on-prem heavy forwarders and intermediate forwarders to a cloud indexer cluster. Is there a search that can look into splunkd.log or metrics.log on a heavy forwarder for the data transferred over 24 hours?
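One common approach is to search the forwarder's own metrics.log, where the thruput group reports a kb field per reporting interval (something along the lines of index=_internal source=*metrics.log* group=thruput host=<your_hf> earliest=-24h | stats sum(kb)). To illustrate what that aggregation does, here is a Python sketch over hypothetical metrics.log lines (the exact field layout can vary by Splunk version):

```python
import re

# Hypothetical metrics.log excerpts; real lines carry more fields
lines = [
    "02-03-2025 15:22:41.000 +0000 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=33.1, kb=1024.000, ev=512",
    "02-03-2025 15:23:12.000 +0000 INFO Metrics - group=queue, name=parsingqueue, current_size=0",
    "02-03-2025 15:23:41.000 +0000 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=66.2, kb=2048.000, ev=900",
]

# Pull the kb value only from thruput lines and sum the intervals
kb_pattern = re.compile(r"group=thruput\b.*?\bkb=([\d.]+)")
total_kb = sum(float(m.group(1)) for line in lines if (m := kb_pattern.search(line)))
print(total_kb)  # 3072.0 KB forwarded across the sampled intervals
```

Summing kb over 24 hours of thruput samples gives the total volume the forwarder pushed out in that window.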
Hi Splunkers, does anyone know if there are datasets free to download? More precisely, I would need a network traffic dataset including good and bad domains for some Splunk Machine Learning testing. I would appreciate every idea you have. Thanks in advance! BR
I've had a working Splunk instance for a month, but after patching it refuses to start the web UI: splunkd starts with no issues, but the UI won't come up. I've tried: checking web.conf, checking ports, checking firewall-cmd, and checking permissions. When restarting the web server via ./splunk restart splunkweb, splunkd.log shows the module restarting and then instantly stopping - what could be causing that?
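When the UI dies silently like this, one low-level check worth scripting is whether anything is actually listening on the web port (8000 by default; adjust for your web.conf). A minimal Python sketch:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("127.0.0.1", 8000) right after ./splunk restart splunkweb
```

If the port stays closed while splunkd is up, the problem is in the web module itself (web_service.log alongside splunkd.log is worth reading); if it is open but the browser still fails, look at firewalls or proxies in between.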
Hello all, I am wondering if anyone has run into an issue where they receive a "500 error" on some large reports (small reports work fine)? The only feedback I got from the cSAM admin was to add a timeout value in Microsoft Power Query; it doesn't quite seem to relate to cURL though.

personal_access_token = "MyRealToken",
request_timeout_in_minutes = 10, // Specify your timeout value here
data = Table.FromRecords(
    Json.Document(
        Web.Contents(
            csam_api_endpoint_url,
            [
                Headers = [
                    #"Authorization" = "Bearer " & personal_access_token,
                    #"Content-Type" = "application/json"
                ],
                Timeout = #duration(0, 0, request_timeout_in_minutes, 0)
            ]
        )
    )
)
in
    data
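For comparison, here is the same idea (explicit request timeout plus bearer auth) sketched in Python with the standard library; the URL and token are placeholders, not a real cSAM endpoint:

```python
import json
import urllib.request

def build_report_request(url: str, token: str) -> urllib.request.Request:
    """Build an authenticated JSON request, mirroring the Power Query headers."""
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

def fetch_report(req: urllib.request.Request, timeout_minutes: int = 10):
    """Fetch the report, failing fast if the server stalls past the timeout."""
    with urllib.request.urlopen(req, timeout=timeout_minutes * 60) as resp:
        return json.load(resp)
```

Note that a 500 on large reports usually indicates a server-side limit rather than a client problem, so a longer client timeout may only change where the failure surfaces.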
Thank you for the responses. None of the above worked. I think this feature is not available in Splunk Dashboard Studio
Hello, I need some help with a query. I have to do this: at the moment I haven't managed to get exactly what I've asked for - I can't place the dates of the last few days in the column; I've tried several things but to no avail.

All I've managed to do is this:

index=aws_app_corp-it_datastage
| spath input=_raw
| eval Country=INVOCATIONID
| eval StartTime=strptime(RUNSTARTTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q")
| eval EndTime=strptime(RUNENDTIMESTAMP, "%Y-%m-%d %H:%M:%S.%Q")
| eval Duration=round(abs(EndTime - StartTime)/60, 2)
| eval Status = case(
    RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWW", "Completed with Warnings",
    RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FOK", "Successful Launch",
    RUNMAJORSTATUS="FIN" AND RUNMINORSTATUS="FWF", "Failure",
    RUNMAJORSTATUS="STA" AND RUNMINORSTATUS="RUN", "In Progress",
    1=1, "Unknown")
| eval StartTimeFormatted=strftime(StartTime, "%H:%M")
| eval EndTimeFormatted=strftime(EndTime, "%H:%M")
| eval StartTimeDisplay=if(isnotnull(StartTimeFormatted), "Start time: ".StartTimeFormatted, "Start time: N/A")
| eval EndTimeDisplay=if(isnotnull(EndTimeFormatted), "End time: ".EndTimeFormatted, "End time: N/A")
| table JOBNAME PROJECTNAME Country _time StartTimeDisplay EndTimeDisplay Status
| rename JOBNAME as Job, PROJECTNAME as App
| sort -_time
| search Country="*" App="*" Status="*"
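As a side note, the case() mapping and the duration arithmetic in that search can be checked outside Splunk. A Python sketch of the same rules (Python's %f plays the role of SPL's %Q for the millisecond fraction):

```python
from datetime import datetime

# Same (major, minor) -> status mapping as the SPL case() expression
STATUS = {
    ("FIN", "FWW"): "Completed with Warnings",
    ("FIN", "FOK"): "Successful Launch",
    ("FIN", "FWF"): "Failure",
    ("STA", "RUN"): "In Progress",
}

def run_status(major: str, minor: str) -> str:
    """Look up the status, with 'Unknown' as the 1=1 fallback."""
    return STATUS.get((major, minor), "Unknown")

def duration_minutes(start: str, end: str) -> float:
    """Duration in minutes, matching round(abs(EndTime - StartTime)/60, 2)."""
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return round(abs(delta.total_seconds()) / 60, 2)
```

For example, run_status("FIN", "FOK") gives "Successful Launch", and a run from 10:00:00.000 to 10:30:30.000 comes out as 30.5 minutes.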
Example 1 - this is in the log file:

03/02/2025 15:22:41 info: created keep-alive: { "identifier": "gdghsjjsjjl", "info": { "category": "other", }, }

Example 1 - this is the event in Splunk:

03/02/2025 15:22:41 info: created keep-alive: { "identifier": "gdghsjjsjjl",

Example 2 - this is in the log file; both of these events are collected as one:

info: created keep-alive: { "identifier": "gdghsjjsjjl", "info": { "category": "other", }, }
03/02/2025 15:22:41 this is a test log
Since that app is supported by Splunk, consider opening a support case asking Splunk to resolve the deprecated libraries.
It's one of those settings which you can put into the default stanza but override on a per-index basis. In other words, you can set all indexes as replicated by default and then override this behaviour for particular indexes, or vice versa - make all indexes not replicated except for a few chosen ones (not a very bright idea, but hey, who is Splunk to tell you not to shoot yourself in the foot, eh?)
OK. "Did not succeed" doesn't tell us much.

1. What do the logs say (on both ends)? They should tell you if the connection has been attempted, if it failed, how it failed, and so on.

2. Try connecting directly from the UF machine to the indexer using openssl s_client:

splunk cmd openssl s_client -connect <your_indexer>:<port> -showcerts

and see if it works.
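The interesting part of the s_client output is the handshake summary near the end. If you want to script the check, a small Python helper can grade that output (this parses the "Verify return code" line that openssl prints; a convenience sketch, not a replacement for reading the full output):

```python
def tls_verify_ok(s_client_output: str) -> bool:
    """Return True if openssl s_client reported a clean certificate chain."""
    for line in s_client_output.splitlines():
        line = line.strip()
        if line.startswith("Verify return code:"):
            return line == "Verify return code: 0 (ok)"
    return False  # no handshake summary usually means no TLS connection at all
```

Anything other than "0 (ok)" - a self-signed chain, an expired cert, or no summary line because the connection was refused - points at where the forwarder-to-indexer TLS setup is breaking.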
The InfoSec App (InfoSec App for Splunk | Splunkbase) has not been updated in quite some time. I am getting email from the Upgrade Readiness App:

The Upgrade Readiness App detected 2 apps with deprecated jQuery on the https://xxx.splunkcloud.com:443 instance. InfoSec_App_for_Splunk. The Upgrade Readiness App detects apps with outdated Python or jQuery to help Splunk admins and app developers prepare for new releases of Splunk in which lower versions of Python and jQuery are removed. For more details about your outdated apps, see the Upgrade Readiness App on your Splunk instance listed above.

Are we expecting to see an update?
Can't do anything without knowing your actual data (possibly anonymized if it contains sensitive information somewhere in the middle). As long as you don't have valid data which looks like a timestamp in the middle of your multiline event, you will probably be good with something like (might need adjusting to your date format):

LINE_BREAKER = ([\r\n]+)\d{2}/\d{2}/\d{4}

And don't touch SHOULD_LINEMERGE - it should be set to false and never ever changed to true (honestly, there are almost no valid use cases for it to be set to true).
Cisco provides hardware recommendations that should be followed, but there is no recommended hardware or architecture setup that will allow the older TA to scale beyond 8K events per second of throughput. Only an upgrade to the new TA will allow higher throughput.
I'm not saying that it's great news, since the support is limited and so on, but it's good to know that at least _someone_ has at least _some_ knowledge of what's going on. Thanks for sharing.
Indeed, Cisco Security (the supporting team) provided additional information that explained in detail what was happening with the TAs involved.  Please see my comments above.
There is indeed just one big source, because the Cisco FMC concentrates all Cisco logs and then provides the transfer interface via eStreamer.
An update for those who may have similar issues: Support for these Technical Apps and the eStreamer protocol interface to Splunk is not provided by either Splunk Support or Cisco Support. Opening tickets with either party did not successfully resolve these questions. The only support currently available for this interface is via the email splunk_cisco_security_cloud@cisco.com. If staff are available to respond, they will answer some limited questions.

Cisco Security confirms that Splunkbase #3662 (TA-eStreamer) was desupported in 2024, because the eNcore client it is based upon has been desupported. In essence, the eNcore client is being replaced. As a result, it is recommended (by Cisco Security) that systems with Splunkbase #3662 replace it with the new (supported) TA, which is Splunkbase #7404 (Cisco Security Cloud).

Cisco Security indicates that the older TA-eStreamer had a maximum expected throughput of under 10,000 events per second, and was practically limited to under 8K of continuous throughput. So, applications like ours that routinely exceed 8K per second would never have successfully used TA-eStreamer at that performance level. Performance levels for the newer Cisco Security Cloud app (#7404) are expected to be in the maximum range of 15-20K events per second, because it uses a new eStreamer SDK that replaces the decommissioned eNcore client software. Since it is possible our application may rise above that level, I asked whether the app potentially supported using things like load balancers to scale beyond 15-20K, but no answer was provided for this question.

The Cisco Security team responding to my questions indicated that support for this TA is somewhat limited, with only best-effort support during Eastern US office hours. The Cisco Security team of course also recommends using hardware that follows their documented guidelines and provides sufficient memory, disk, and CPU to run the software at its maximum performance levels.
But once those maximum performance levels are reached (15-20K per second), there is no recommended scaling beyond that. Our team expects to experiment with various setups and architectures to see how far we can push the new TA.
Hi @Amith55555, could you share some samples (anonymized if necessary) of your logs of both types? Please in text format (not a screenshot!) using the "Insert/Edit code sample" button. Ciao. Giuseppe