All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


@Bisho-Fouad - The short answer is to create the same index on the HF. (It will not be used for storing data; its only purpose is to make the index name visible while you configure the inputs.)   I hope this helps! Kindly upvote if it does!
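As a sketch of the suggestion above, a minimal indexes.conf stanza on the HF could look like this (the app name and index name `my_index` are placeholders, not from the original post):

```ini
# $SPLUNK_HOME/etc/apps/my_indexes/local/indexes.conf  (placeholder app name)
[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# The HF only forwards events onward, so no data lands here; the stanza
# simply makes the index name selectable in the inputs UI.
```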
Hi @jbv, yes: identify the technologies to ingest, choose the correct Add-On and (reading the documentation) assign the correct sourcetype to the inputs. Ciao. Giuseppe
With those old datasets you must use "earliest=1" in all searches, or select the "All time" time range option.
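For example, against an old dataset such as the botsv1 data discussed below (index name assumed), the search would be:

```
index=botsv1 earliest=1
```

Setting `earliest=1` starts the time range at Unix epoch second 1, which covers events with very old timestamps that the default time ranges would exclude.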
Hi, when you create a new index on the Cluster Master, it is created only on the cluster peers, not anywhere else. If/when you want that index definition also on the HF, you must add it there manually. Depending on your environment, there are a couple of things to remember and modify when you add indexes on the HF and/or your SH(s): if you are using volumes, you must update the definition to point to the correct volume and/or add a "dummy" volume definition on the other hosts; and on non-clustered nodes you should change repFactor from auto to 0. My proposal is to keep separate definitions for volumes and repFactor as default values for the cluster and the other nodes, and then a separate file/TA/SA for the real index definitions. That way you can use the same index definition files on all nodes instead of updating them every time. Just store them in git and deploy them from there. And of course, you must deploy the app/files containing the node-specific definitions once before the real index definitions can be deployed. If you forget this, your nodes will not start, as they will not have valid index definitions. r. Ismo
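The split proposed above could be sketched as two apps (all app, volume, and index names here are hypothetical): one defaults app that differs per node type, and one shared app with the real index definitions that is identical everywhere:

```ini
# app "site_indexes_defaults" -- contents differ per node type
# default/indexes.conf
[default]
# on cluster peers:       repFactor = auto
# on HF/SH (non-cluster): repFactor = 0
repFactor = 0

[volume:primary]
# "dummy" volume on HF/SH so the shared definitions below resolve;
# on indexers this points at real storage with a maxVolumeDataSizeMB limit
path = $SPLUNK_DB

# app "site_indexes" -- identical on every node
# default/indexes.conf
[my_index]
homePath   = volume:primary/my_index/db
coldPath   = volume:primary/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

Note that `thawedPath` cannot use volume syntax, and the shared app deliberately omits repFactor so the per-node default applies.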
Hi, I have the botsv1 dataset uploaded in a Splunk simulated environment, but when I search "index=botsv1" it returns 0 events, although I can see the dataset in the apps folder and the index is also listed under the Indexes page in Settings. Nothing can be searched using the keyword botsv1. I have tried various search options, but all failed. Please help me. Thanks in advance.
Ensure that you have set Sharing to Global for that app + other needed KOs. Then it should work like any other visualisation.
I want it as a dashboard panel. It works as-is in the search visualisation tab.
Hello @richgalloway, thank you for your support. However, I would like to demonstrate that my NDR solution utilizes a centralized server called "Brain" to gather logs from network sensors. To achieve this, the optimal approach would be to establish a channel connecting the heavy forwarder to this NDR Brain. What are your recommendations on this matter?
Hello, I just created a new index on the cluster master for a newly integrated log source, but I cannot find this new index on the heavy forwarders when configuring it as a new data input. Any recommendations for such a situation?
Hi, here https://github.com/splunk/splunk-ansible is one way to use Ansible to install a Splunk environment. With Ansible you can use both ways to add nodes to the cluster (edit conf files + restart nodes, and/or use CLI commands). Which way is better is a different story, which depends on your needs and the software you use. As you need to add several nodes now (and probably more later), scripting is definitely the correct way to manage this. Personally, I install even individual nodes with Ansible. And of course you should use e.g. git to store your configuration and version it. r. Ismo
Thank you for your reply. I am using both UF and HF.
Hi, it's just as @gcusello said: you cannot create an HA environment without an indexer cluster. That needs a minimum of three nodes: a manager node and at least two peers. Here is the https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf document, which you should read and use as base instructions for how the different Splunk installations work and what purposes they are targeted at. r. Ismo
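A minimal server.conf sketch of such a three-node cluster (hostnames, secret, factors, and port are illustrative assumptions, not from the post) could look like this:

```ini
# manager node, server.conf
[clustering]
mode = manager
replication_factor = 2
search_factor = 2
pass4SymmKey = <shared secret>

# each peer node, server.conf
[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = <shared secret>

[replication_port://9887]
```

With two peers, replication_factor = 2 is the highest value the cluster can satisfy; more peers allow higher factors.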
Hi, beyond those examples from @richgalloway, I would suggest setting up your own development server (you can ask for a developer and/or dev/test license from Splunk) and learning with that. That way you could do e.g. data onboarding much more easily without disturbing your production. Also create your apps etc. in this dev environment and then install the apps into Splunk Cloud. r. Ismo
If/when you have strictly followed those requirements and steps, it should work as expected without any other steps. How to test it is described here: Try out the visualization. And remember that when you change it, you usually must restart Splunk to get the new version into use!
Hi, I hope that you have created a separate app for those. If not, then it's time to create it! After that you can just copy and install this app onto the new host. If/when you have CLI access to this host (in development you always have), the easiest way to pack that app is on the CLI: splunk package app <your app> Splunk then tells you where it put that package. You just need to copy that package and install it on your other nodes. r. Ismo
I built a custom visualisation. I want to move it to a dashboard as a panel. How do I do that?
My visualisation is built following this framework https://docs.splunk.com/Documentation/Splunk/9.1.2/AdvancedDev/CustomVizTutorial
Hi, on RHEL 8.9 you should use this version: Enable boot-start on machines that run systemd. And before that, you must change the ownership of all files which Splunk is using to the user splunk (or whatever user you are using to run Splunk). r. Ismo
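A sketch of those two steps, assuming Splunk is installed in /opt/splunk and runs as the user splunk (both are assumptions; adjust to your environment):

```shell
# give the splunk user ownership of everything Splunk touches
chown -R splunk:splunk /opt/splunk

# enable boot-start under systemd, running as the splunk user
/opt/splunk/bin/splunk enable boot-start -user splunk -systemd-managed 1
```

Both commands must be run as root, with Splunk stopped before enabling boot-start.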
Hi, when an indexer in a cluster goes down, the CM tries to spread its buckets out across your remaining indexers. This means that every indexer takes a share of the buckets which the failed indexer had. In a normal situation you shouldn't change SF & RF at that point; instead you should fix the failed indexer and bring it back into the cluster. After that, the cluster will rebalance buckets across all available indexers. Basically, the indexer cluster works as it should. But for that reason you should have enough spare space on those peers to handle the situation where one indexer (or even more, depending on your RF & SF) can be down for some time. The other option is that your peers freeze some buckets to have enough space for normal operations; of course, this requires that you are using volumes with a suitable max limit. r. Ismo
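Once the failed peer is back in the cluster, the rebalance described above can be triggered from the manager node's CLI; a minimal sketch:

```shell
# start data rebalance across all peers (run on the manager node)
splunk rebalance cluster-data -action start

# check rebalance progress
splunk rebalance cluster-data -action status
```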
Hi Team, I am trying the below query. It shows that all servers are up; I tested by stopping one server, but it still does not show a Down status. Please find the query below: index="_internal" | eval host=lower(host) | stats count BY host | append [ | eval host=lower(host) ] | eval status=if(total=0,"Down","up") | table host status   Please let me know the exact query for that.
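One issue with the query above is that `total` is never defined and the empty `append` subsearch contributes nothing, so no host can ever evaluate to "Down". A common sketch for this kind of up/down check uses the `metadata` command, which also returns hosts that have stopped sending data (the 5-minute threshold is an assumption; tune it to your environment):

```
| metadata type=hosts index=_internal
| eval status=if(recentTime > relative_time(now(), "-5m"), "up", "Down")
| table host status
```

`recentTime` is the indexing time of the most recent event from each host, so a host that stops forwarding falls behind the threshold and flips to "Down".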