All Posts


Good morning gcusello, Many thanks for your prompt response. Perhaps I was unclear, but I also don't understand your suggestion. We do not have any combined roles. Our indexer cluster (three indexers) is managed by a dedicated, separate Splunk management node. The heavy forwarder is a separate, standalone Splunk HF managed by the management console. We also have separate search heads. All according to good practices as far as I'm concerned. I already have a few applications installed on the HF which correctly forward data to specific indexes on the indexer group. I know how to configure inputs.conf for a particular application to forward to the indexer group and a specific index. My question is: how can I configure a receiving port under Data Inputs (TCP or UDP) to forward to a specific index on the indexer group? I may have several different sources (syslog etc.) which I want to forward to the indexer group into separate, dedicated indexes. I don't want to mix data from different data sources in the same index. I hope that clarifies things a bit. Kind Regards, Mike.
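For reference, a minimal inputs.conf sketch of this kind of routing, assuming a hypothetical custom index named custom_syslog that already exists on the cluster:

    # inputs.conf on the HF (port and index name are illustrative)
    [udp://1514]
    index = custom_syslog
    sourcetype = syslog
    connection_host = ip

The index setting on the input only labels the data; the HF's outputs.conf still controls which indexers receive it.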
Hi @Na_Kang_Lim , don't use the search command after the main search: for best performance, put the search terms as far left as possible:

    index=windows source=XmlWinEventLog:Security EventCode=4688 process_name=ipconfig.exe NOT process_command_line="ipconfig /all" NOT process_parent_path=*benign.exe host=BENIGN_HOSTS

Then, if possible, can you replace your exclusive filters with inclusive filters? In other words, use process_command_line IN (value1, value2, value3) instead of NOT ... Ciao. Giuseppe
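A sketch of what the inclusive version could look like; the IN values are placeholders for the command lines you actually expect to see:

    index=windows source=XmlWinEventLog:Security EventCode=4688 process_name=ipconfig.exe
        process_command_line IN ("ipconfig /displaydns", "ipconfig /release")
        NOT process_parent_path=*benign.exe

The idea is that an allow-list of known values is usually cheaper to evaluate, and easier to maintain, than a growing chain of negations.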
Hi @Na_Kang_Lim , good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma points are appreciated by all the contributors.
Hi @krishna4murali , yes, sorry, I was wrong: Monday and Thursday! Anyway, in the triggered alerts dashboard, you can see whether the wrong-day executions are related to your alert or to another one. Ciao. Giuseppe
Hi @BoscoBaracus , at first, clustered indexers are managed by the Cluster Manager and heavy forwarders by the Deployment Server, and it isn't a best practice to use the same server for both roles, especially if the DS must manage more than 50 clients. Anyway, the situation is the same: on the HF you have to configure all log forwarding to the indexers, and on the HF you have to create some inputs indicating the indexes in which to store the data; in this way all your logs are forwarded to the correct indexes. Just some additional hints:
- for syslog, don't use Splunk network inputs but rsyslog, which writes syslog to a file that you can then read on your HF;
- to address clustered indexers, use the Indexer Discovery feature (https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/indexerdiscovery).
As mentioned, don't use the Cluster Manager as Deployment Server; use a different server, possibly dedicated. If you have few clients to manage (fewer than 50), you can use another server of your Splunk infrastructure, but not the Cluster Manager, Search Heads, or Indexers. Ciao. Giuseppe
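A minimal outputs.conf sketch of indexer discovery on the HF; the Cluster Manager URI and the key are placeholders for your own values:

    # outputs.conf on the HF (values are illustrative)
    [indexer_discovery:cluster1]
    pass4SymmKey = <your_pass4SymmKey>
    master_uri = https://cluster-manager.example.com:8089

    [tcpout:cluster1_group]
    indexerDiscovery = cluster1

    [tcpout]
    defaultGroup = cluster1_group

With this in place, the HF learns the list of peer indexers from the Cluster Manager, so you never hard-code indexer addresses on the forwarder.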
Hi everyone, the OP here. My problem has been solved. The cause was that one of our admins mistakenly created another EXTRACT- clause for EventID in another app. So here is my advice if you ever get into a similar situation:
1. Find out whether the affected field is created with index-time extraction or search-time extraction. You can check this either using the `::` operator, or in the props.conf and transforms.conf files (REPORT- and EXTRACT- are search-time, which happens on the search head; TRANSFORMS- is index-time, which happens on the indexers). Look into how it is extracted!
2. Then grep the field name in the apps directory (or slave-apps/master-apps, depending on your scope and setup) and look into all the stanzas that affect the field in props.conf and transforms.conf, as sketched below.
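A sketch of those two checks from the command line, assuming a standard $SPLUNK_HOME on the search head:

    # find every props.conf definition that touches the field
    grep -rn "EventID" $SPLUNK_HOME/etc/apps/*/default/props.conf $SPLUNK_HOME/etc/apps/*/local/props.conf

    # show the merged configuration Splunk actually applies, with the file each line comes from
    $SPLUNK_HOME/bin/splunk cmd btool props list --debug | grep -i "EventID"

btool prints every app's contribution to the merged configuration, which makes a duplicate EXTRACT- coming from another app easy to spot.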
I am looking for the best way, in terms of performance, to add filtering of certain events for security rules. Normally a security rule starts off with quite a large scope, for example:

    index=windows source=XmlWinEventLog:Security process_name=ipconfig.exe

Then, in your environment, you often have to filter out benign processes and behaviors. Currently, this is how I am writing filters:

    index=windows source=XmlWinEventLog:Security EventCode=4688 process_name=ipconfig.exe
    | search NOT process_command_line="ipconfig /all"
    | search NOT process_parent_path=*benign.exe host=BENIGN_HOSTS

This gives the best readability, but I am looking for the best performance. What is the best way to write filters?
Good morning All, I have been trying to figure out how I can create a data input on a heavy forwarder to forward data to a specific index located on an indexer cluster. I have three indexers organised in a cluster. The indexers and heavy forwarder are managed by a management node. I have used the Windows Universal Forwarder to forward events to a particular index on the indexer group (cluster), but I'm struggling to find a way of configuring a similar thing on a Linux-based HF. Basically, what I'm trying to achieve is to configure a syslog port (a custom port, let's say 1514) to receive syslog data from a particular syslog host and forward it to a custom index created on the indexer group (cluster). When adding a port in Data Inputs, I can specify a local index, but not a remote, clustered index. On the HF in the Data Forwarding section, I can see all data is forwarded to the indexer cluster. Would anyone know how I can achieve this? Any help would be much appreciated. Kind Regards, Mike.
It's running on unexpected days at the same time.
Nope, it's not a clone.
Thanks for your response. Just now I verified from the application server, and I can see that the Splunk Universal Forwarder service is running on all our servers, but I cannot see a Splunk Heavy Forwarder (HF). Do you have any suggestions for the Splunk Universal Forwarder service so that my requirement for creating the dashboard can be met?
I'm trying to forward logs and events from Trellix EPO SaaS to Splunk Cloud for monitoring purposes. To do this, I've installed the Trellix EPO SaaS Connector add-on in Splunk. During the setup, the connector requires API credentials to establish communication between Splunk and Trellix. However, even after completing the configuration, I'm not seeing any logs being ingested into Splunk. Additionally, I'm not entirely sure what each field in the configuration tab represents, which makes troubleshooting difficult. So I just configured:
+ IAM URL = Token Endpoint URL in Client Credentials Management
+ API Gateway URL = https://api.manage.trellix.com
I am using a Trellix MVISION trial and a Splunk Cloud trial for testing purposes.
Unfortunately, the number of results is not fixed; it varies between 20 and 30. Going back to Classic is something I do not want to do. I am going to skip this panel. Thanks everybody. Regards, Harry
Dashboard Studio is still immature when it comes to some sophisticated features that are easy to do in Classic SimpleXML dashboards. If you are already hitting the limitations of DS, you should consider reimplementing in Classic (although you will lose some of the layout options which DS provides). There is a lot of support from the community and elsewhere for using some of the sophisticated techniques available to Classic dashboards. Unfortunately, there is not an easy migration path back from DS to Classic (and to be fair, there is only a limited migration from Classic to DS), so, beyond simple dashboards, you have to make a choice between dashboard types based on which features are most important to you and which you can live without. (Probably not the answer you wanted!)
How very strange! This might sound like a funny question, but the alert you are seeing being triggered is definitely the alert you're looking at, and not a clone or a similar alert? Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @MisterB , would the following solve your requirement?

    A = data('Latency', rollup='average').percentile(pct=99).mean().publish()

You could change the rollup to 'max' if you'd prefer. Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @palyogit , looking at this I think there are two issues. I'm not entirely sure they are related, as others have suggested, because you wouldn't usually expect an event to be dropped when it hits the TRUNCATE limit; you would just be left with the first 10,000 characters. The first thing to do is increase that 10000 limit. Are you expecting the events to be this large?

    # props.conf
    [httpevent]
    # Increase to a number bigger than the events which are being truncated.
    TRUNCATE = 50000

The other log line which caught my eye is: RegexExtractor: Interpolated to processor::nullqueue, especially because you are missing the events entirely. Do you have any props which are setting the nullQueue? Please can you run btool and share the output?

    $SPLUNK_HOME/bin/splunk cmd btool props list --debug httpevent

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@krishna4murali That behavior isn't expected: using 0 11 * * 1,4 in your cron schedule should result in the job triggering only at 11:00 AM on Mondays and Thursdays. Could you confirm: is the job actually running at 11:00 AM on Tuesdays and Wednesdays, or at other unexpected times? Please also review the time zone setting on the host where this cron is configured. For testing, try configuring the schedule below just to make sure it executes as expected; ideally it should run at 12 PM every day, on the hour:

    0 12 * * *

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
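For reference, a quick crontab sketch of both schedules; the job path is a placeholder:

    # min hour dom mon dow  command
    0    11   *   *   1,4  /path/to/job.sh   # 11:00 on Monday (1) and Thursday (4) only
    0    12   *   *   *    /path/to/job.sh   # test entry: 12:00 every day

If the job still fires on other days with these entries, the trigger is coming from somewhere else (another crontab, another alert, or a host in a different time zone).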
Thanks for the reply @livehybrid . There are no invalid characters in the cron expression, and there is no space after the comma or after the last digit.