All Posts



9.2.2 is the current version. So the queue fill-up and memory consumption on HF2 may be due to outgoing traffic? It wouldn't be caused by the large volume of incoming data routed from HF1? Yes, we plan to add one more HF to the HF2 layer as a load balancer, but that will take some time, and we need to fix the current ongoing issue.
@Raghavsri  What version of Splunk are you running? Also, to start with, check a few options:
Review logs: look for errors, warnings, or abnormal behavior in splunkd.log.
Check destination health: ensure that syslog-ng and the second indexer cluster are healthy and accepting data efficiently. If HF2 cannot forward data fast enough (due to network, destination, or performance issues), the queue fills up and consumes memory.
Memory upgrade: increasing memory on HF2 may help if the issue is due to legitimately high data volume and not a leak or misconfiguration. However, if the problem is a memory leak or bandwidth issue, increasing memory will only delay the inevitable crash.
Load balancing: consider load balancing across multiple HFs if possible, to distribute the data load.
Monitor memory usage: set up alerts for high memory usage to detect issues early.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving kudos/karma. Thanks!
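To confirm which of HF2's queues are actually filling, here is a minimal sketch against the internal metrics (assuming HF2's _internal data is searchable from your search head; the host value is a placeholder to replace with your HF2 hostname):

index=_internal source=*metrics.log group=queue host=<your-HF2-host>
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart max(fill_pct) by name

Queues sitting near 100% on the output side (for example the tcpout queues) point at forwarding/destination problems, while full parsing queues point at local processing.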
Hi Everyone, I encountered an issue while creating a new component for SplunkUI. I have followed the documentation tutorial https://splunkui.splunk.com/Packages/create/CreatingSplunkApps as well as the write-up at https://blog.scrt.ch/2023/01/03/getting-started-with-splunkui/, but I am facing an error as shown in the image below.
My setup:
node -v: v18.19.1
npm -v: 9.2.0
npx yarn -v: 1.22.22
Our data flow: a syslog server sends a high volume of data to one heavy forwarder (HF1), which routes it to an indexer cluster as well as to a second heavy forwarder (HF2). HF2 in turn routes the data to syslog-ng and to another indexer cluster located in a different environment.
Due to the high volume of data on our syslog server, we faced backpressure on the syslog server and the HFs, so the vendor recommended increasing the pipeline queue size to 2500MB under server.conf on both HFs and the syslog server.
The issue now is that HF2 has been consuming its full memory (92GB) since the last server reboot. After memory consumption reaches 100%, HF2 hangs. If we decrease the parallel pipelines from 2 to 1 on HF2, it creates backpressure on the syslog server and HF1, and the pipelines burst. Before the HF2 reboot, memory consumption was under 10GB with the same 2500MB pipeline size, and the splunkd process was normal.
Note: so far HF1, which sits between the syslog server and HF2, is not facing any memory (92GB) issue.
In this situation, would increasing the memory on HF2 be helpful, or what would be the best solution to avoid this scenario in the future?
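For reference, a sketch of how the settings described above typically look in server.conf (illustrative only; the stanza and attribute names follow the server.conf spec, and the values are the ones from this post):

# server.conf on the HFs and syslog-side forwarder (sketch)
[general]
# pipeline sets; the post describes reducing this from 2 to 1 on HF2
parallelIngestionPipelines = 2

[queue]
# default maximum queue size, the vendor-recommended value
maxSize = 2500MB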
Hi @livehybrid  I have changed the interval to 600 seconds, but the data is still not available. Do you know of any other solution?
So, this is a common pattern, and the solution is to work your logic like this:
```search indexes to get list of servers```
...
| stats max(_time) as latest count by System_Name
| rename System_Name as System
``` At this point all Systems found will have a count > 0 ```
``` So now add in your control group to the end of the list ```
| inputlookup append=t system_info.csv
``` Now this will "join" the two possible sets of 'System' together ```
| stats values(*) as * max(count) as count by System
``` Any of those with count = 0 came from your lookup control, which gives you the ones not found in your data ```
| where count=0
The final where clause will cause only the missing items to show, but you can of course do what you need there with any time calculations based on the latest value from the top search. You can, if you want, use the lookup as a subsearch constraint on the outer search so that it finds ONLY those in the lookup, as opposed to all systems, e.g.
index=servers sourcetype=logs [ | inputlookup system_info.csv | fields System | rename System as System_Name ] ...
Note that this assumes your data contains a field called System_Name, but your field in the lookup is System.
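Putting those pieces together, a consolidated sketch you can paste and adapt (index=servers, sourcetype=logs, and the System_Name/System field names are assumptions from this thread; sum(count) is used instead of values(*) to keep the sketch minimal, and the fillnull guards against lookup-only rows, which otherwise carry no count field at all and would slip past a bare where count=0):

index=servers sourcetype=logs
| stats max(_time) as latest count by System_Name
| rename System_Name as System
| inputlookup append=t system_info.csv
| stats values(latest) as latest sum(count) as count by System
| fillnull value=0 count
| where count=0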
@vy  Have you found any solution for this?
Hi @kunalsingh  Do you get more detail in your $SPLUNK_HOME/var/log/splunk/splunkd.log? This might pinpoint a specific Python error which is causing the page not to render.
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
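If searching is easier than tailing the file, the same log is indexed in _internal; a generic sketch to surface recent problems (narrow the time range as needed):

index=_internal sourcetype=splunkd log_level IN (WARN, ERROR)
| stats count by component
| sort - count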
Hi @Amire22  I think you should be able to configure additional domains exactly the same way you did the first one; asset & identity data must ultimately reside in lookups (CSV or KV store) on the ES search head, and those files are not forwarded automatically by indexers/HFs.
Option A – query the directories directly from ES
1. Install SA-ldapsearch (or the Splunk Add-on for Microsoft AD) on the ES search head.
2. Create one stanza per domain with its own server, bindDN and credentials.
3. Schedule one ldapsearch per domain that writes to a single lookup (e.g. identities.csv, assets.csv); see the sketch after this post.
4. ES will ingest those lookups when the "Identity – Lookup Gen" and "Asset – Lookup Gen" searches run.
Option B – collect on each HF and ship as events
1. Install SA-ldapsearch on an HF in each domain.
2. Schedule a search or scripted input that outputs CSV-formatted events, and forward them to a dedicated index, e.g. index=identity.
3. Use a search to pull the data into a lookup.
Essential SPL example:
index=identity sourcetype=ldap_identities
| eval category="normal"
| lookup update=true identities.csv identity OUTPUTNEW *
| outputlookup identities.csv
ES does not care where the data comes from as long as the final lookups exist on the search head. Multiple domains, lack of trust, or separate networks are irrelevant; you only need LDAP connectivity from whichever Splunk instance is executing the LDAP query. You should probably add a domain/prefix field to your A&I lookups to show which domain each entity originates from. If you end up with a large CSV lookup, consider switching to a KV-store lookup.
More info on apps/add-ons for bringing in asset/identity data can be found at https://help.splunk.com/en/splunk-enterprise-security-8/administer/8.0/asset-and-identity-management/extract-asset-and-identity-data-in-splunk-enterprise-security#id_23cc30fd_1876_4f43_97f4_3f37d7b6d98c__Extract_asset_and_identity_data_in_Splunk_Enterprise_Security
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
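Regarding Option A step 3 above, a minimal sketch of a scheduled identity-gen search using SA-ldapsearch (the domain name corp_a, the LDAP filter, the attribute names, and the lookup filename are all illustrative assumptions; map them to your directory schema and to the ES identity-lookup format):

| ldapsearch domain=corp_a search="(&(objectCategory=person)(objectClass=user))" attrs="sAMAccountName,displayName,mail"
| rename sAMAccountName as identity, displayName as nick, mail as email
| eval category="normal", domain="corp_a"
| outputlookup identities_corp_a.csv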
The best terms to use here are either "server" or "deployment". I would avoid the term "stack", as that does not translate well to the typical web developer definition of that term. The below should help to distinguish further:
Splunk Server: generic term to indicate any host (computer) that provides a Splunk service.
Splunk Deployment: collective term referring to one or more Splunk servers that satisfy the various roles needed to provide a Splunk service.
See also: cluster, tier, search tier, search peer, index tier, role
Ref:
[1]: https://help.splunk.com/en/splunk-cloud-platform/administer/admin-manual/9.3.2411/get-started-managing-splunk-cloud-platform/splunk-cloud-platform-deployment-types
[2]: https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/welcome-to-splunk-enterprise-administration/splunk-platform-administration-the-big-picture
I have recently created an add-on via the UCC Framework (version 5.56.0). However, I am facing an issue while editing or cloning inputs and accounts on the configuration page. #UCC framework
As always - there are two questions. 1. Will it run? Probably. I've worked with 9.0 Splunk servers supplied with UFs going as far back as 6.6.x. 2. Is it a good idea? Depends on the circumstances. As the others already said - if you have no other choice, you're running what you have. But it's usually better to upgrade (unless there are some critical bugs affecting your particular use case). If not for any other reason - 9.0 introduced configuration tracking so you can see what changed and when.
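On the configuration-tracking point: since 9.0 those changes land in the _configtracker index. A quick sketch to review them (treat the exact field names as assumptions to verify against your own events):

index=_configtracker sourcetype=splunk_configuration_change
| table _time data.path data.action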
Hi @livehybrid  Unfortunately not, it's a button built with CSS + JavaScript.
<button type="button" class="web-ui-component__button jjui-11138fz" style="display: inline-block;"> <div data-analytics-name="resource-tile" data-testid="sampleApp_Dev" tabindex="-1" class="jjui-vkvk0d ell0llb0"> .............
I can successfully simulate the click of the button using the "click" CSS selector
div[data-testid="sampleApp_Dev"]
but I do not have direct access to the JS. I've traced it in Chrome, but it has so many nested calls that it's challenging to find anything useful.
Thank you! That gave me the proper direction to go! My final, validated version is...
index=main sourcetype=syslog
    [ | makeresults
      | eval input_mac="48a4.93b9.xxxx"
      | eval mac_clean=lower(replace(input_mac, "[^0-9A-Fa-f]", ""))
      | where len(mac_clean)=12
      | eval mac_colon=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1:\2:\3:\4:\5:\6")
      | eval mac_hyphen=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1-\2-\3-\4-\5-\6")
      | eval mac_dot=replace(mac_clean, "(....)(....)(....)", "\1.\2.\3")
      | eval query=mvappend(mac_clean, mac_colon, mac_hyphen, mac_dot)
      | mvexpand query
      | where isnotnull(query)
      | fields query
      | format ]
| table _raw
Have you tried my examples? If you can send email and you have access to those internal logs, then there is at least one log line. If you cannot see those lines, then you don't have access to the logs that would show it.
2025-06-11 18:39:08,616 +0300 INFO sendemail:275 - Sending email. sid=1749656347.70143, subject="testing", encoded_subject="testing", results_link="None", recipients="['your.email@your.domain']", server="localhost"
How are you sure that the issue is with Splunk? Do you have some logs which show that e.g. an alert fired and tried to send email via sendemail? For that reason, I suggest first checking that sending email works, and after that starting to look at why your alerts are not sending. Quite often the reason is simply that the alert hasn't fired.
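A simple sketch to find those lines from the search bar (the sendemail command typically logs to python.log under _internal, but searching the whole index avoids guessing the source):

index=_internal sendemail "Sending email."
| table _time host source _raw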
As already said, technically you can use quite an old UF with a new Splunk IHF/server version. BUT you must understand that there are several improvements and also many security issues fixed in newer UF versions. Of course, if you have some ancient OS versions then you cannot upgrade the UF on those, but then you should also consider updating those OSes too.
Are you sure that some of your source nodes are not HFs instead of UFs? This applies to the collecting node as well, not only to nodes between the UF and the indexers! Basically your configuration seems to be OK. Of course, you could make those REGEX settings a little more efficient than they currently are, but that's another story. As others have already said, the most obvious reason is that you have an HF somewhere before your indexer (Splunk Enterprise node). You can check this on the source node and all other nodes (find them from outputs.conf):
$SPLUNK_HOME/bin/splunk version
On a UF this should show
Splunk Universal Forwarder 9.4.0 (build 6b4ebe426ca6)
and on an HF/indexer
Splunk 9.4.1 (build e3bdab203ac8)
Just check which path it is running under and replace SPLUNK_HOME with it.
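To see where each node actually forwards its data (the outputs.conf check mentioned above), a btool sketch to run on each node, with the path adjusted per install:

$SPLUNK_HOME/bin/splunk btool outputs list --debug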
I'm getting the logs, and all the logs are at level INFO. I know for sure that Splunk had an issue with sending emails at a specific time, but I cannot see any related logs in _internal.
I'm looking for a solution that will let me monitor whether emails have stopped being sent, not troubleshoot a specific issue.
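One sketch for that kind of monitoring: a scheduled search over _internal that counts successful sendemail lines (the log text follows the example earlier in this thread; treat the 4-hour window as an assumption to tune), returning a row only when nothing was sent:

index=_internal sendemail "Sending email." earliest=-4h
| stats count
| where count=0

Schedule it (e.g. hourly) and trigger the alert when the number of results is greater than zero.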
Can you show the current inputs.conf and props.conf stanzas for this CSV file? And an example (modified) of the first two lines (header + a real masked event) from that file?