All Posts

Hi folks, the scenario is like below:
- We have Enterprise Security (ESS) in Splunk Cloud, with ESCU (content updates) as part of it.
- If we enable an ESCU detection, it works all good.
- We need to modify the ESCU detection slightly with a standard field, and also change the name of the search to fit existing organisation policy. The uuid remains the same.
What will happen when the next ESCU update comes? Will it overwrite the custom changes? What is the ESCU update actually looking for - the 'search name' or the 'search id (uuid)'?
@livehybrid Also, I could not see any manually created apps in the manager-apps folder. Is this the correct path to search to get the app list on the CM?
@ITWhisperer I received an error saying "Error in 'SearchParser': Missing a search command before '('.Error at position '90' of search query 'search (index=xxxx) CASE(SourceA) source..." Also, any reason why the outer search uses 15d whereas the subsearch is set to 2d? Is it for optimisation?
Hi @rselv21
The defaults for Replication Factor (RF) and Search Factor (SF) are 3 and 2 respectively; this essentially allows for one node to be out of service without a loss of functionality. If a second node also becomes unavailable, then some buckets may not have any searchable copies available to search. Although non-searchable copies of the buckets can be made searchable, doing so takes time.
As others have said, it's basically a trade-off between risk of data loss (RF), risk of lowered availability (SF) and storage availability (cost).
The rule of thumb for sizing is that a searchable replica bucket uses approx 35% of the original raw ingest size, and a non-searchable bucket uses approx 15%, so an RF:SF of 3:2 uses approx 85% of the raw ingested volume per day.
Personally, I think the defaults are pretty good for the majority; it's only when you're limited by resource, OR have a large number of searches running across a much larger indexer cluster, that I would adjust these, as distributing more copies across a larger cluster can improve performance by ensuring there are no hotspots of data.
Check out the following docs for some good background on this: https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/Thesearchfactor
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
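To make the sizing rule of thumb concrete, here is a hedged sketch of where RF/SF are set and how the estimate works out (the [clustering] stanza follows the indexer clustering docs; the 100 GB/day figure is a made-up example):

== server.conf (on the cluster manager) ==
[clustering]
mode = manager
replication_factor = 3
search_factor = 2

# Rough storage estimate for RF:SF = 3:2 at 100 GB/day raw ingest:
#   2 searchable copies   x ~35 GB/day = ~70 GB/day
#   1 non-searchable copy x ~15 GB/day = ~15 GB/day
#   total across the cluster           = ~85 GB/day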
For your reference, below is the syslog event received in SC4S that does not seem to trigger the parser /etc/syslog-ng/conf.d/conflib/almost-syslog/app-almost-syslog-citrix_netscaler.conf

tcpdump -n -vvv -i eth0 host 10.143.6.21 -s 0
dropped privs to tcpdump
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:06:40.745046 IP (tos 0x0, ttl 255, id 21289, offset 0, flags [none], proto UDP (17), length 289)
    10.X.Y.Z.23350 > 192.X.Y.Z.syslog: [udp sum ok] SYSLOG, length: 261
    Facility local0 (16), Severity info (6)
    Msg: 12/05/2025:11:06:40 host_nameXXXXX 0-PPE-1 : default TCP CONN_TERMINATE 465400 0 : Source 192.X.Y.Z:636 - Destination 10.X.Y.z:53621 - Start Time 12/05/2025:11:06:40 - End Time 12/05/2025:11:06:40 - Total_bytes_send 0 - Total_bytes_recv 1 \0x0a

Any idea how to see why it does not match? Should I create a specific parser to force this trigger? Thanks.
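A hedged troubleshooting sketch: events that match no SC4S parser should land in the fallback sourcetype, and SC4S can keep the unparsed message alongside them for comparison against the parser's filters (the environment variable is from the SC4S docs; verify it for your version):

== /opt/sc4s/env_file ==
# Keep the original unparsed message (RAWMSG) on fallback events
SC4S_SOURCE_STORE_RAWMSG=yes

Then, on the Splunk side, inspect what SC4S actually received from the NetScaler:

index=* sourcetype=sc4s:fallback host=10.143.6.21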
@livehybrid Thank you so much for your reply. There is no app in the CM manager-apps folder. Yes, we created the index a week ago and it showed as successful. But I am not sure what the cause of this error is. Please guide. Thanks.
The simplest way to do this (although perhaps not the most optimal) would be something like this

(index=xxxx) orgName=xxx sourcetype=CASE(SourceB) earliest=-15d
    [search (index=xxxx) orgName=xxx sourcetype=CASE(SourceA) earliest=-2d uniqueIdentifier="Class.ClassName.MethodName*"
    | dedup SourceASqlId
    | rename SourceASqlId as SourceBSqlId
    | table SourceBSqlId]
| table SourceBSqlText

Bear in mind that subsearches are limited in the number of results they can return (10,000 by default).
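If the subsearch limit becomes a problem, a hedged alternative is to pull both sourcetypes in one search and correlate with stats - no subsearch, so no result cap (index and field names are the placeholders from this thread):

(index=xxxx) orgName=xxx earliest=-15d ((sourcetype=CASE(SourceA) uniqueIdentifier="Class.ClassName.MethodName*") OR sourcetype=CASE(SourceB))
| eval SqlId=coalesce(SourceASqlId, SourceBSqlId)
| stats dc(sourcetype) as st_count values(SourceBSqlText) as SourceBSqlText by SqlId
| where st_count=2
| table SqlId SourceBSqlText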
Hi Team, I have 2 Splunk searches as below

(index=xxxx) orgName=xxx sourcetype=CASE(SourceA) earliest=-15d uniqueIdentifier="Class.ClassName.MethodName*"
| dedup SourceASqlId
| table SourceASqlId

(index=xxxx) orgName=xxx sourcetype=CASE(SourceB) earliest=-15d SourceBSqlId=xxxx
| table SourceBSqlText

I want to form a single search that gets ALL the distinct "SourceASqlId" values [search #1], feeds them as input to "SourceBSqlId" [search #2], and generates the FINAL output as "SourceBSqlText". How can we achieve it? I am even OK with reducing the date range to, say, 2d to optimise the search, as I feel my requirement is very compute-intensive. Thanks.
BTW, why would you want to override the sourcetype for linux_audit, which is a relatively well-known, well-implemented, and well-supported sourcetype?
Hi @br0wall
Is this a new installation of Splunk on your device? Splunk provides a 60-day trial until you need to change it to the free version. Is the instance on your own device connected to the license manager within your company, or is this a standalone instance? If this is a development instance then you could look at applying for a developer license at https://dev.splunk.com/enterprise/dev_license/
Alternatively, you could completely remove and re-install Splunk; however, depending on your use case this might not be an option - note, you would lose all existing data! Nevertheless, this won't help you log in. To reset the admin credentials for your Splunk instance, follow these instructions:
1. Find the passwd file for your instance ($SPLUNK_HOME/etc/passwd) and rename it to passwd.bk
2. Create a file named user-seed.conf in your $SPLUNK_HOME/etc/system/local/ directory with the following text, inserting the password you would like to use in place of "NEW_PASSWORD":

[user_info]
USERNAME = admin
PASSWORD = NEW_PASSWORD

3. Start Splunk Enterprise and use the new password to log into your instance from Splunk Web.
Alternatively run the following:
$SPLUNK_HOME/bin/splunk cmd splunkd rest --noauth POST /services/admin/users/admin "password=<your password>"
For more info see https://docs.splunk.com/Documentation/Splunk/9.4.2/Security/Secureyouradminaccount#:~:text=ASCII%20character(s).-,Reset%20credentials,-If%20you%20lose
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
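For reference, a hedged end-to-end example of that reset flow on a Linux host (assumes $SPLUNK_HOME is set; NEW_PASSWORD is a placeholder):

$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bk
cat > $SPLUNK_HOME/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = NEW_PASSWORD
EOF
$SPLUNK_HOME/bin/splunk start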
RF and SF are site-specific parameters that depend on the organization's risk tolerance and cost constraints. I've seen sites with RF=SF=1 (yes, you get no data resilience, but management of some aspects of your environment is easier) as well as RF=SF=(number of indexers - 1) (you shouldn't set your RF equal to your number of indexers!). BTW, 9 indexers and 7 SHs? Are those standalone SHs? Or do you have a 7-member SH cluster? (Which seems overkill unless you have a very specific use case.)
Hi @sudha_krish , it's normal: metadata are forwarded only to other Splunk instances! Using syslog, you forward only raw events, and you must extract the metadata from the raw events in the third-party system (e.g. host is usually at the beginning of the event, after the timestamp). Ciao. Giuseppe
If I understand you correctly, you're rewriting sourcetype to mysp and then expecting Splunk to apply the transforms defined for that sourcetype to the events further down the ingestion pipeline. It doesn't work that way (but it's a common expectation; I myself thought it did a few years ago). Splunk decides at the beginning of the pipeline which settings apply to the sourcetype/source/host triple, and subsequent rewrites to those fields do not change it - the event goes through the ingestion pipeline using the originally decided transforms. The only way to "switch" to another sourcetype is to use CLONE_SOURCETYPE (but then you have to handle the original copy of the event as well).
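For reference, a minimal sketch of CLONE_SOURCETYPE (the "default:log" and "mysp" names are from this thread; the transform name is made up):

== props.conf ==
[default:log]
TRANSFORMS-clone = clone_to_mysp

== transforms.conf ==
[clone_to_mysp]
# REGEX = . matches every event, so every event is cloned
REGEX = .
CLONE_SOURCETYPE = mysp

The clone is then processed with the props/transforms defined for the mysp sourcetype, while the original event continues through the pipeline unchanged.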
Hi @Pujarani
Please could you give some further info on your deployment architecture?
- Are you using Federated Search at all?
- Please can you confirm if your Cluster Manager has had a recent bundle push - was this successful?
- Does the app in the log exist in the Cluster Manager $SPLUNK_HOME/etc/manager-apps folder?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @JohnSmith123
I don't think you get two bites of the props.conf cherry when changing a sourcetype name; instead you need to apply your host_transform to the "default:log" sourcetype rather than the new sourcetype name. Try the following:

== props.conf ==
[default:log]
TRANSFORMS-force_sourcetype = sourcetype_transform
TRANSFORMS-force_host = host_transform

== transforms.conf ==
[sourcetype_transform]
SOURCE_KEY = _raw
REGEX = <my_regex>
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::mysp

[host_transform]
REGEX = <my_regex>
FORMAT = host::$1
DEST_KEY = MetaData:Host

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
See my response in your other thread.
Hi @sudha_krish
Sending out over HTTP does not use an open HTTP standard/API - it uses the Splunk2Splunk protocol wrapped in HTTP, so it is only supported for sending to other Splunk systems.
If you want to send data to a non-Splunk system you can look at syslog forwarding; however, this sends the raw events before they are parsed. For more information on sending to external systems please check out https://docs.splunk.com/Documentation/SplunkCloud/latest/Forwarding/Forwarddatatothird-partysystemsd
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
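As a rough sketch of what syslog forwarding looks like in config (the group name, destination address and sourcetype here are placeholders, not from the thread):

== outputs.conf ==
[syslog:third_party]
server = 203.0.113.10:514
type = tcp

== transforms.conf ==
[route_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = third_party

== props.conf ==
[your:sourcetype]
TRANSFORMS-syslog = route_to_syslog

Note this ships only the raw event text; the host/sourcetype/source/index metadata stays internal to Splunk, as discussed in the rest of this thread.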
@sudha_krish Splunk forwarders (Universal or Heavy) send only raw event data to non-Splunk systems over TCP or syslog by default, as outlined in the Splunk documentation. Metadata such as host, sourcetype, source, and index is internal to Splunk and not included in the raw event payload.
To forward logs with metadata over HTTP reliably, tools like Cribl Stream are commonly used. These tools can intercept Splunk data, enrich it with metadata, and send it to third-party systems via HTTP.
Cribl (https://cribl.io/) allows you to route events to multiple systems while maintaining full metadata. In addition, you can be very selective about what goes where, and you can reshape and enrich events as they move.
That is correct. When Splunk sends data over a "plain TCP" connection it just sends the raw event (there is some degree of configurability, but AFAIR it's limited to sending a syslog priority header and maybe a timestamp). If you wanted to send the event metadata (sourcetype/source/host) along with the event, you'd have to rewrite the event's raw contents. But if you want to retain the event and index it locally along with sending it out, you most probably want to index it in an unchanged form. And that's where it gets complicated. You'd have to use the CLONE_SOURCETYPE functionality to duplicate your event and split its processing path, then send the original one to your indexer(s); the cloned one you can modify and route to your TCP output.
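A hedged sketch of that split path, building on the CLONE_SOURCETYPE idea above (all stanza names, the group name and the destination address are placeholders; it also assumes your indexer output group is the defaultGroup so the original copy still reaches the indexers):

== transforms.conf ==
[clone_for_export]
REGEX = .
CLONE_SOURCETYPE = myevents:export

[route_clone_to_tcp]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = third_party_tcp

== props.conf ==
[myevents]
TRANSFORMS-clone = clone_for_export

[myevents:export]
TRANSFORMS-route = route_clone_to_tcp

== outputs.conf ==
[tcpout:third_party_tcp]
server = 203.0.113.10:9997
sendCookedData = false

With sendCookedData = false the group sends uncooked (raw) data, so the third-party system receives plain event text while the original copy is indexed locally as usual.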
@livehybrid  @PickleRick  @gcusello  Thanks for your responses. I found in the Splunk documentation that forwarding logs to third-party systems is typically done over TCP. I tried using TCP, but I did not receive Splunk metadata like host, sourcetype, source, and index on the third-party system.