All Posts


Hi @zksvc  I'm on 1.22.19. If you're on 2+ there might be a conflict with the PnP nodeLinker (which I think is the default), so if you're on 2.x you could try creating a .yarnrc.yml file in the project root with:

nodeLinker: node-modules

Out of interest, does this work?

yarn lerna run build

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Sorry, that was a typo on my part, but it still doesn't work.
Hi @livehybrid  Thanks for your reply. When I try to run "yarn run setup" I get an error like this. Which yarn version are you using, by the way?
Can anyone reply, please?
Hi @TestUser , Jokingly, I would say: with a magic wand! In reality, there is at the moment no tool that allows this, even if, with the help of some Artificial Intelligence tools, we are getting closer. In any case, to my knowledge, there are currently no tools of this type. This is also because the new data must be identified and parsed; then you have to identify the filtering requirements and what you want to get as output, so I would say that at the moment it is not possible. Some help could come from the Splunk Security Essentials app (https://splunkbase.splunk.com/app/3435), which provides a tool for identifying data flows and presents them in some dashboards, but in any case there is always a manual component of identifying and implementing the requirements. Ciao. Giuseppe
Hi @zksvc  I believe you were meant to run "yarn run setup" (missing the "run") at this point. You might need to run "yarn install" first.

This page is a great first-run tutorial on using @splunk/create too: https://splunkui.splunk.com/Toolkits/SUIT/ComponentTutorial

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Shakeer_Spl , as I said, there isn't a version of Splunk Full Stack; there are two versions of Splunk on premise:
- Splunk Enterprise
- Splunk Universal Forwarder

The full stack is only Splunk Enterprise. For both products there are many versions (the latest released is 9.4.3), which you can find at https://www.splunk.com/en_us/download/splunk-enterprise.html .

Let us know if we can help you more, or, please, accept one answer for the other people of the Community.

Ciao and happy splunking
Giuseppe

P.S.: Karma Points are appreciated by all the contributors
Hi @Raghavsri , I had a similar issue in a past project. Check the parsing rules: there may be some unoptimized regexes that require too much memory, especially regexes that start with ".*". Ciao. Giuseppe
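A hypothetical illustration of the point (not from the original post; the sample event and field names are made up): a regex that starts with ".*" forces the engine to retry the match from many positions, while an anchored pattern lets non-matching events fail fast.

| makeresults
| eval _raw="2024-05-01 12:00:00 ERROR connection to indexer lost"
``` slow: the leading .* causes heavy backtracking on events that do not match ```
| rex field=_raw ".*ERROR\s+(?<msg_slow>.+)"
``` faster: anchoring to the start of the event rejects non-matching events quickly ```
| rex field=_raw "^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} ERROR (?<msg_fast>.+)"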
Hello, I know for sure that it's on Splunk's end because Splunk told us that they had an issue with sending emails. I'm getting the logs after running your example.
9.2.2 is the current version. So the queue fill-up and memory consumption in HF2 may be due to outgoing traffic, and not due to the large volume of incoming data routed from HF1? Yes, we plan to add one more HF to the HF2 layer for load balancing, but that will take some time, and we need to fix the current ongoing issue first.
@Raghavsri  What's the version of Splunk you are running? Also, to start with, check a few options:
- Review logs: Look for errors, warnings, or abnormal behavior in splunkd.log
- Check destination health: Ensure that SyslogNG and the second indexer cluster are healthy and accepting data efficiently. If HF2 is not able to forward data fast enough (due to network, destination, or performance issues), the queue fills up, consuming memory
- Memory upgrade: Increasing memory on HF2 may help if the issue is due to legitimate high data volume and not a leak or misconfiguration. However, if the problem is a memory leak or bandwidth issue, increasing memory will only delay the inevitable crash
- Load balancing: Consider load balancing across multiple HFs if possible, to distribute the data load
- Monitor memory usage: Set up alerts for high memory usage to detect issues early

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos/Karma. Thanks!
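To see which queues on HF2 are actually filling, here is a minimal sketch against the internal metrics data (the host filter is a placeholder to replace with the actual HF2 hostname):

index=_internal source=*metrics.log* group=queue host=<hf2_hostname>
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) by name

Consistently high fill on the output (tcpout) queues would point at the forwarding side, while high parsing/typing queues would point at ingestion.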
Hi Everyone, I encountered an issue while creating a new component for SplunkUI. I followed the documentation tutorial https://splunkui.splunk.com/Packages/create/CreatingSplunkApps as well as the write-ups from others here https://blog.scrt.ch/2023/01/03/getting-started-with-splunkui/ but I am facing an error as shown in the image below.

My setup:
- node -v: v18.19.1
- npm -v: 9.2.0
- npx yarn -v: 1.22.22
Our data flow: a syslog server sends a large volume of data to one HF (HF1), which routes it to an indexer cluster as well as to another HF (HF2). HF2 then routes the data to SyslogNG and to another indexer cluster located in a different environment.

Due to the high volume of data on our syslog server, we increased the pipeline queue size to 2500MB. We faced backpressure on the syslog server and the HFs, so the vendor recommended increasing the pipeline queue size to 2500MB under server.conf on both HFs and the syslog server.

Now the issue is that HF2 has recently been consuming its full memory (92GB) after a server reboot. After reaching 100% memory, HF2 hangs. If we decrease the parallel pipelines from 2 to 1 on HF2, it creates backpressure on the syslog server and HF1, and the pipelines become saturated. Before the HF2 reboot, memory consumption was less than 10GB with the same 2500MB pipeline queue size, and the splunkd process was normal.

Note: so far HF1, located between the syslog server and HF2, is not facing any memory (92GB) issue.

In this situation, would increasing the memory on HF2 help? Or what would be the best solution to avoid this scenario in the future?
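For anyone troubleshooting this kind of memory growth, a hedged sketch using the resource-usage introspection data Splunk collects by default (the host value is a placeholder; data.mem_used is reported in MB):

index=_introspection host=<hf2_hostname> sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd
| timechart span=5m max(data.mem_used) as splunkd_mem_mb

A steady climb that never plateaus suggests a leak, while saw-tooth growth that tracks queue fill suggests legitimate buffering pressure.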
Hi @livehybrid  I have changed the interval to 600 seconds, but the data is still not available. Do you know of any other solution?
So, this is a common pattern, and the solution is to work your logic like this:

```search your indexes to get the list of servers```
...
| stats max(_time) as latest count by System_Name
| rename System_Name as System
``` At this point all Systems found will have a count > 0 ```
``` So now add in your control group to the end of the list ```
| inputlookup append=t system_info.csv
``` Now this will "join" the two possible sets of 'System' together ```
| stats values(*) as * max(count) as count by System
``` Rows that only came from the lookup have no count at all, so give them an explicit 0 ```
| fillnull value=0 count
``` Any of those with count = 0 came from your lookup control group, i.e. the ones not found in your data ```
| where count=0

The final where clause will cause only the missing items to show, but you can of course do what you need there with any time calculations based on the latest value from the top search.

You can, if you want, use the lookup as a subsearch constraint on the outer search so that it finds ONLY those in the lookup, as opposed to all systems, e.g.

index=servers sourcetype=logs [ | inputlookup system_info.csv | fields System | rename System as System_Name ] ...

Note that this assumes your data contains a field called System_Name, but your field in the lookup is System.
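As a hedged extension of the time-calculation idea above (the 24-hour threshold is an arbitrary assumption), the final where clause could be swapped for a classification that also flags systems that have gone quiet:

| eval status=case(count=0, "never seen", latest < relative_time(now(), "-24h"), "silent > 24h", true(), "ok")
| where status!="ok"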
@vy  Have you found any solution for this?
Hi @kunalsingh  Do you get more detail in your $SPLUNK_HOME/var/log/splunk/splunkd.log? This might pinpoint a specific Python error which is causing the page not to render.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
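If search is still usable, the same splunkd.log data is also available in the _internal index; a minimal sketch (the time window and log level filter are assumptions):

index=_internal sourcetype=splunkd log_level=ERROR earliest=-1h
| stats count by component
| sort - count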
Hi @Amire22  I think you should be able to configure additional domains exactly the same way you did the first one; asset & identity data must ultimately reside in lookups (CSV or KV-store) on the ES search head, and those files are not forwarded automatically by indexers/HFs.

Option A – query the directories directly from ES
- Install SA-ldapsearch (or the Splunk Add-on for Microsoft AD) on the ES search head.
- Create one stanza per domain with its own server, bindDN and credentials.
- Schedule one ldapsearch per domain that writes to a single lookup (e.g. identities.csv, assets.csv).
- ES will ingest those lookups when the "Identity – Lookup Gen" and "Asset – Lookup Gen" searches run.

Option B – collect on each HF and ship as events
- Install SA-ldapsearch on a HF in each domain.
- Schedule a search or scripted input that outputs CSV-formatted events and forward them to a dedicated index, e.g. index=identity.
- Use a search to pull the data into a lookup. Essential SPL example:

index=identity sourcetype=ldap_identities
| eval category="normal"
| lookup update=true identities.csv identity OUTPUTNEW *
| outputlookup identities.csv

ES does not care where the data comes from as long as the final lookups exist on the search head. Multiple domains, lack of trust, or separate networks are irrelevant; you only need LDAP connectivity from whichever Splunk instance is executing the LDAP query. You should probably add a domain/prefix field to your A&I lookups to show which domain each entity originates from. If you end up with a large CSV lookup, consider switching to a KV-store lookup.

More info on apps/add-ons for bringing in asset/identity data can be found at https://help.splunk.com/en/splunk-enterprise-security-8/administer/8.0/asset-and-identity-management/extract-asset-and-identity-data-in-splunk-enterprise-security#id_23cc30fd_1876_4f43_97f4_3f37d7b6d98c__Extract_asset_and_identity_data_in_Splunk_Enterprise_Security

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
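For Option A, a hedged sketch of what one scheduled per-domain pull could look like (the domain name, LDAP filter, and attribute list are placeholders; SA-ldapsearch provides the ldapsearch command):

| ldapsearch domain=corp_a search="(&(objectCategory=person)(objectClass=user))" attrs="sAMAccountName,mail,displayName"
| eval identity=sAMAccountName, domain="corp_a"
| table identity, mail, displayName, domain
| outputlookup append=true identities.csv

Scheduling one such search per domain, each writing into the same lookup, keeps all identities in the single file ES expects.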
The best terms to use here are either "server" or "deployment". I would avoid the term "stack", as that does not translate well to the typical web developer definition of that term. The below should help to distinguish further:

Splunk Server: Generic term to indicate any host (computer) that provides a Splunk service.
Splunk Deployment: Collective term referring to one or more Splunk servers that satisfy the various roles needed to provide a Splunk service.

See also: cluster, tier, search tier, search peer, index tier, role

Ref:
[1]: https://help.splunk.com/en/splunk-cloud-platform/administer/admin-manual/9.3.2411/get-started-managing-splunk-cloud-platform/splunk-cloud-platform-deployment-types
[2]: https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/welcome-to-splunk-enterprise-administration/splunk-platform-administration-the-big-picture
I have recently created an add-on via the UCC Framework (version 5.56.0). However, I am facing an issue while editing or cloning inputs and accounts on the configuration page. #UCC framework