All Posts



Hi @BoscoBaracus , just to be clear: Indexers are managed by a management node called the Cluster Manager, and Heavy Forwarders are managed by a management console called the Deployment Server; these two roles must be located on two different Splunk servers. About syslog ingestion, you could use Splunk HF network inputs, but it isn't a best practice. The best approach is to configure, on your HFs, one or more rsyslog inputs that receive syslogs and write them to different text files. Then you can read these text files using one or more file monitoring inputs and ingest them into Splunk. You can configure the destination index in these Splunk input files. To configure rsyslog inputs, see https://www.rsyslog.com/doc/index.html ; to configure Splunk file monitoring inputs, see https://docs.splunk.com/Documentation/Splunk/9.4.1/Data/MonitorfilesanddirectorieswithSplunkWeb Ciao. Giuseppe
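A minimal sketch of the rsyslog-then-monitor approach described above (the port number, file path, and index name are hypothetical examples, not values from this thread):

```
# /etc/rsyslog.d/10-splunk.conf -- receive syslog on UDP 1514, write one file per sending host
module(load="imudp")
input(type="imudp" port="1514")
template(name="PerHostFile" type="string" string="/var/log/syslog_in/%HOSTNAME%.log")
action(type="omfile" dynaFile="PerHostFile")
```

```
# inputs.conf on the HF -- monitor the files rsyslog writes and set the destination index
[monitor:///var/log/syslog_in/*.log]
index = network_syslog
sourcetype = syslog
```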
Hi, I know that inclusive filtering is the best case, but I am talking about when you have to start off with a broad scope. Is there any other syntax I should follow? For example, is there any difference in performance between writing process_name!=a.exe process_name!=b.exe and NOT process_name IN (a.exe, b.exe)?
Hi,   I would like to request further assistance regarding the following. If I intend to change the domain of my existing All-in-One Splunk Enterprise server, what are the key areas I should be aware of, and which configuration files need to be updated?
Good morning gcusello, Many thanks for your prompt response. I'm not sure if I was unclear, but I do not fully follow your suggestion. We do not have any combined roles. Our indexer cluster (three indexers) is managed by a dedicated, separate Splunk management node. The Heavy Forwarder is a separate, standalone Splunk HF managed by the management console. We also have separate search heads, all according to good practices as far as I'm concerned. I already have a few applications installed on the HF which correctly forward data to the indexer group, to specific indexes. I know how to configure inputs.conf for a particular application to forward to the indexer group and a specific index. My question is: how can I configure a receiving port under Data Inputs (TCP or UDP) to forward to the indexer group into a specific index? I may have several different sources (syslog etc.) which I want to forward to the indexer group into separate, dedicated indexes. I don't want to mix data from different data sources in the same index. I hope that clarifies things a bit. Kind Regards, Mike.
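For what it's worth, a network input on a forwarder can name the destination index directly in inputs.conf even when that index exists only on the remote cluster: the HF just tags the events, and the indexers store them. A minimal sketch (the port and index name are the hypothetical ones from this thread, and note that the thread elsewhere recommends rsyslog over Splunk network inputs for syslog):

```
# inputs.conf on the HF -- syslog on a custom port, routed to an index that
# exists on the indexer cluster (no local index is needed on the HF)
[udp://1514]
index = custom_syslog
sourcetype = syslog
connection_host = ip
```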
Hi @Na_Kang_Lim , don't use the search command after the main search: for best performance, put the search terms as far to the left as possible: index=windows source=XmlWinEventLog:Security EventCode=4688 process_name=ipconfig.exe NOT process_command_line="ipconfig /all" NOT process_parent_path=*benign.exe host=BENIGN_HOSTS Then, if possible, replace your exclusive filters with inclusive filters; in other words, use process_command_line IN (value1, value2, value3) instead of NOT ... Ciao. Giuseppe
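Laid out on multiple lines, the suggestion above amounts to a single base search with no piped `search` commands (the field values are the hypothetical ones from this thread):

```
index=windows source=XmlWinEventLog:Security EventCode=4688 process_name=ipconfig.exe
    NOT process_command_line="ipconfig /all"
    NOT process_parent_path=*benign.exe
    host=BENIGN_HOSTS
```

and, where an inclusive allow-list is feasible, something like `process_command_line IN ("value1", "value2", "value3")` instead of a chain of NOTs.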
Hi @Na_Kang_Lim , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @krishna4murali , yes, sorry, I was wrong: Sunday and Thursday! Anyway, in the Triggered Alerts dashboard you can see whether the wrong-day executions are related to your alert or to another one. Ciao. Giuseppe
Hi @BoscoBaracus , first of all, clustered Indexers are managed by the Cluster Manager and Heavy Forwarders by the Deployment Server, and it isn't a best practice to use the same server for both roles, especially if the DS must manage more than 50 clients. Anyway, the situation is the same: on the HF you have to configure all log forwarding to the Indexers, and on the HF you have to create some inputs indicating the indexes in which to store data; in this way all your logs are forwarded to the correct indexes. Just some additional hints: for syslogs, don't use Splunk network inputs but rsyslog, which writes syslogs to a file that you can read on your HF; to address clustered Indexers, use the Indexer Discovery feature (https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/indexerdiscovery). As anticipated, don't use the Cluster Manager as Deployment Server; use a different server, possibly dedicated. If you have few clients to manage (fewer than 50), you can use another server of your Splunk infrastructure, but not the Cluster Manager, Search Heads, or Indexers. Ciao. Giuseppe
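A sketch of the Indexer Discovery setup on the HF (the host name and key are placeholders; attribute names vary slightly by Splunk version, e.g. `master_uri` vs. `manager_uri`, so check the spec for your release):

```
# outputs.conf on the Heavy Forwarder
[indexer_discovery:cluster1]
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <same key as configured on the Cluster Manager>

[tcpout:cluster_group]
indexerDiscovery = cluster1
useACK = true

[tcpout]
defaultGroup = cluster_group
```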
Hi everyone, The OP is here. My problem has been solved. The cause was that one of our admins mistakenly created another EXTRACT- clause for EventID in another app. So here is my advice if you ever get into a similar situation:
1. Find out whether the affected field is created with index-time extraction or search-time extraction. You can check this either using the `::` operator, or in the props.conf and transforms.conf files (REPORT- and EXTRACT- are search-time, which happens on the search head; TRANSFORMS- is index-time, which happens on the Indexers). Look into how it is extracted!
2. Then grep the field name in the apps directory (or slave-apps/master-apps, depending on your scope and setup) and look into all the stanzas that affect the field in props.conf and transforms.conf.
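Step 2 above can be sketched as follows. The demo directory and the `EventID` extraction are made up for illustration; on a real search head you would point `APPS_DIR` at `$SPLUNK_HOME/etc/apps` (or slave-apps on cluster peers):

```shell
# Locate every props.conf / transforms.conf line that touches a given field.
APPS_DIR=$(mktemp -d)
mkdir -p "$APPS_DIR/my_app/local"
# Fake app config standing in for a real deployment, so the grep has something to find.
cat > "$APPS_DIR/my_app/local/props.conf" <<'EOF'
[XmlWinEventLog:Security]
EXTRACT-EventID = EventID=(?<EventID>\d+)
EOF

# Search-time extractions (EXTRACT-/REPORT-) live in props.conf;
# index-time ones (TRANSFORMS-) reference stanzas in transforms.conf.
grep -rn "EventID" "$APPS_DIR" --include="props.conf" --include="transforms.conf"
```

Running this prints each matching file, line number, and line, which is usually enough to spot a duplicate EXTRACT- clause shadowing your field.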
I am looking for the best way, in terms of performance, to add filtering of certain events for security rules. Normally a security rule starts off with quite a large scope, for example: index=windows source=XmlWinEventLog:Security process_name=ipconfig.exe Then, in your environment, you often have to filter out benign processes and behaviors. Currently, this is how I am writing filters: index=windows source=XmlWinEventLog:Security EventCode=4688 process_name=ipconfig.exe | search NOT process_command_line="ipconfig /all" | search NOT process_parent_path=*benign.exe host=BENIGN_HOSTS This gives the best readability, but I am looking for the best performance. What is the best way to write filters?
Good morning All, I have been trying to figure out how I can create a data input on a heavy forwarder to forward data to a specific index located on an indexer cluster. I have three indexers organised in a cluster. The indexers and the heavy forwarder are managed by a management node. I have used the Windows Universal Forwarder to forward events to a particular index on the indexer group (cluster), but I'm struggling to find a way of configuring a similar thing on a Linux-based HF. Basically, what I'm trying to achieve is to configure a syslog port (a custom port, let's say 1514) to receive syslog data from a particular syslog host and forward it to a custom index created on the indexer group (cluster). When adding a port in Data Inputs, I can specify a local index, but not a remote, clustered index. On the HF, in the Data Forwarding section, I can see all data is forwarded to the indexer cluster. Would anyone know how I can achieve this? Any help would be much appreciated. Kind Regards, Mike.
It's running on unexpected days at the same time.
Nope, it's not a clone.
Thanks for your response. I have just verified on the application server and I can see that the Splunk Universal Forwarder service is running on all our servers, but I cannot see a Splunk Heavy Forwarder (HF). Do you have any suggestions regarding the Splunk Universal Forwarder service so that my requirement for creating the dashboard can be met?
I’m trying to forward logs and events from Trellix EPO SaaS to Splunk Cloud for monitoring purposes. To do this, I’ve installed the Trellix EPO SaaS Connector add-on in Splunk. During the setup, the connector requires API credentials to establish communication between Splunk and Trellix. However, even after completing the configuration, I’m not seeing any logs being ingested into Splunk. Additionally, I’m not entirely sure what each field in the configuration tab represents, which makes troubleshooting difficult. So I just configured: + IAM URL = Token Endpoint URL in Client Credentials Management + API Gateway URL = https://api.manage.trellix.com I am using a Trellix MVISION trial and a Splunk Cloud trial for testing purposes.
Unfortunately the number of results is not fixed; it varies between 20 and 30. Going back to Classic is something I do not want to do, so I am going to skip this panel. Thanks everybody, Regards, Harry
Dashboard Studio is still immature when it comes to some sophisticated features that are easy to do in Classic SimpleXML dashboards. If you are already hitting the limitations of DS, you should consider reimplementing in Classic (although you will lose some of the layout options which DS provides). There is a lot of support from the community and elsewhere for using some of the sophisticated techniques available to Classic dashboards. Unfortunately, there is not an easy migration path back from DS to Classic (and to be fair, there is only a limited migration from Classic to DS), so, beyond simple dashboards, you have to make a choice between dashboard types based on which features are most important to you and which you can live without. (Probably not the answer you wanted!)
How very strange! This might sound like a funny question, but is the alert you see being triggered definitely the alert you're looking at, and not a clone or a similar alert?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @MisterB  Would the following solve your requirement?

A = data('Latency', rollup='average').percentile(pct=99).mean().publish()

You could change the rollup to 'max' if you'd prefer.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing