Hi, I have a correlation search created in Enterprise Security, scheduled as below.

Mode: guided
Time range: Earliest: -24h, Latest: now
Cron: 0 03 * * *
Scheduling: real-time
Schedule window: auto
Priority: auto
Trigger alert when: greater than 0
Throttling: window duration: 0
Response action: To: mymailid; Priority: normal; Include: link to alert, link to results, trigger condition, attach CSV, trigger time

In this case, the mail is not being delivered regularly. If I try executing the same SPL query in Search, it shows more than 300 rows of results.
Hi, did you find any solution for this issue? I am facing the same now.
It sounds like some sort of setting related to a connection issue. A few things to check:

Is there a firewall between your DB Connect server and the HEC server? Ensure the port(s) are available.

Ensure that on the Splunk HEC server you have global settings enabled: click Settings > Data Inputs, click HTTP Event Collector, click Global Settings, and in the All Tokens toggle button, select Enabled.

Some other aspects to check and troubleshoot:

# Check if the HEC collector is healthy
curl -k -X GET -u admin:mypassword https://MY_Splunk_HEC_SERVER:8088/services/collector/health/1.0

# Check if HEC stanzas with config are configured
/opt/splunk/bin/splunk http-event-collector list -uri https://MY_Splunk_HEC_SERVER:8089

# Check the settings using btool
/opt/splunk/bin/splunk cmd btool inputs list --debug http
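If those checks pass but events still don't arrive, the receiving side usually logs the reason in _internal. A small sketch (HttpInputDataHandler is the standard splunkd.log component for HEC; run this where the HEC server's _internal data is searchable):

```
index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR
| stats count BY host
```

Reviewing the raw events behind these counts typically surfaces messages such as invalid or disabled tokens.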
Hi, put simply, I am trying to wrap my head around how I can configure an alert to trigger if a metric is X% higher or lower than the same metric, say, 1 day ago. For example, if I search:

index=my_index eventStatus=fault | stats count by eventStatus

searching "Last 15 minutes" and getting, say, 100 results, can I trigger an alert if the same search over the same 15-minute timeframe 1 day ago is, for example, 10% higher or lower? Thanks
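One way to sketch this, assuming the index and field names from the post, is to run the current window and the same window one day earlier in a single search and compare the counts:

```
index=my_index eventStatus=fault earliest=-15m latest=now
| stats count AS recent
| appendcols
    [ search index=my_index eventStatus=fault earliest=-1d@m-15m latest=-1d@m
      | stats count AS previous ]
| eval pct_change = round(abs(recent - previous) / previous * 100, 1)
| where pct_change > 10
```

Scheduled every 15 minutes with the alert condition "number of results > 0", this fires only when the deviation exceeds the threshold; the 10% figure and time offsets are illustrative.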
Hello everyone, I'm working to set up many Universal Forwarders to monitor MFT logs. MFT stores all its logs in the directory /data/mft/efs/logs/. In this directory, there are files and subdirectories that we do not want to monitor. The log files that we do want to monitor are in subdirectories, and these subdirectories rotate every day. When MFT launches a flow for today, for example, it creates a subdirectory: /data/mft/efs/logs/2024-07-02/mft_flow.log

I created an inputs.conf file:

[default]
_meta=env::int-test

[monitor:///data/mft/efs/logs/*]
disabled=false
sourcetype=log4j
host=test-aws-lambda-splunk-code
followTail=0
whitelist=\d{4}-\d{2}-\d{2}\/.*\.log
index=test_filtre

But I don't get anything in my Splunk Enterprise. Can anyone help me?
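For context on how monitor inputs treat subdirectories: a [monitor://...] path without wildcards already descends recursively, and whitelist is a regex matched against the full path of each file. A variant of the stanza above without the trailing wildcard (illustrative only, not tested against this environment) would look like:

```
[monitor:///data/mft/efs/logs]
disabled = false
sourcetype = log4j
index = test_filtre
whitelist = \d{4}-\d{2}-\d{2}/[^/]+\.log$
```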
I use this search:

| dbinspect index=*
| stats sum(rawSize) as total_size by index
| eval total_size_mb = total_size / (1024 * 1024)
| table index total_size_mb

and get this result. Is this right?
Hi @LuísMSB, in the Community you can find thousands of answers to this question! Anyway, you have two choices:

- create a lookup containing the perimeter to monitor, or
- check whether a host sent logs in the last 30 days but didn't send in the last hour.

In the first case, you have to create a lookup called perimeter.csv containing at least one column (host); then you can run a search like the following:

| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If instead you don't want to manage a lookup, you can use this search:

| tstats latest(_time) AS _time count WHERE index=* earliest=-30d@d latest=now BY host
| eval period=if(_time<now()-3600,"previous","latest")
| stats dc(period) AS period_count values(period) AS period BY host
| where period_count=1 AND period="previous"

I prefer the first solution because it gives you more control. Ciao. Giuseppe
Hi @jacknguyen, this isn't the dashboard I indicated, because you need the historic license consumption, not the daily one. Anyway, you have a configuration issue on your Monitoring Console; I suggest opening a case with Splunk Support for this, otherwise you cannot solve your request. Ciao. Giuseppe
I cannot see anything. Do you know a search that can check this?
Hi @jacknguyen, in the Monitoring Console, at [Indexing > License Usage > Historic License Usage] you can display the license usage split by index, sourcetype, etc. If this doesn't exactly answer your question, you can start from this search to customize your own. Ciao. Giuseppe
Hello, I have the Unix/Linux Add-on installed in my Splunk Cloud. This add-on gives me a list of inactive hosts. How do I create a 1-to-1 episode that alerts me every time a new host goes inactive?
Hello, I want to set up MTBF and MTTR for databases and their servers in AppDynamics. Kindly direct me to a knowledge-base document on how to achieve this.
I cannot access the License Master. I also checked the Monitoring Console under Index volume and instance; no results found.
Hi @jacknguyen, if you use the Monitoring Console or the license consumption dashboard, you can get this information. Ciao. Giuseppe
Never mind, I figured it out. I just moved the | eval statement to the end, after the table command, and it worked.
Hi guys, my boss checked on the Splunk Master and wants to know the index, source, sourcetype, and log volume per day for each sourcetype. How can I see that? I used this search before, but I feel it's not 100% correct:

| dbinspect index=*
| stats sum(rawSize) as total_size by index
| eval total_size_mb = total_size / (1024 * 1024)
| table index total_size_mb

How can I check this on my indexer? I can SSH to the indexer too. Thank you for your time.
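For daily volume by sourcetype, a commonly used alternative reads the license usage logs (the fields b, idx, and st come from the standard license_usage.log schema; adjust the span and split field to taste):

```
index=_internal source=*license_usage.log* type=Usage
| eval MB = b / 1024 / 1024
| timechart span=1d sum(MB) AS MB_per_day BY st
```

Note that dbinspect reports bucket sizes on disk, which is not the same as the licensed (ingested) daily volume.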
Hi @Roger_FB, as @PickleRick and I said, you cannot configure different replication and search factors for each index, only one for the entire cluster. You can only define that certain indexes are not replicated. Ciao. Giuseppe
No. You cannot define (site) replication/search factors on a per-index level. You can set an index to be non-replicated, but you cannot go beyond that. You set the (site) SF/RF at the server level, and at the index level you only define whether the index is replicated. So to have a setup with some indexes replicated between sites and some not, you'd need separate clusters (I'm not sure, however, how you would handle site affinity for search heads when connecting to multiple clusters; I've never tried that).
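As a sketch of where each setting lives (fragments only, values illustrative): the factors go in server.conf on the cluster manager, while per-index replication is just on/off in indexes.conf:

```
# server.conf on the cluster manager
[clustering]
mode = manager
multisite = true
site_replication_factor = origin:2, total:3
site_search_factor = origin:1, total:2

# indexes.conf deployed to the peers
[replicated_index]
repFactor = auto    # replicated according to the cluster's factors

[local_only_index]
repFactor = 0       # kept only on the peer that ingested it
```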
There are at least two separate issues here. One is monitoring for data that used to be ingested but no longer is, regardless of the reason (maybe there is a configuration problem on the receiving end, maybe the source simply stopped sending data, maybe something else). There are several apps for that on Splunkbase, for example TrackMe: https://splunkbase.splunk.com/app/4621

Another is finding errors coming from your inputs (expired certs, broken connections, non-responding API endpoints, and so on). This is indeed something you'd normally look for in the _internal index; you'll find those primarily in splunkd.log, but specific add-ons can also create their own log files. So it's a bit more complicated than a single search that finds everything that's wrong.
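As a starting point for the second issue, a generic sweep of splunkd.log errors might look like this (a sketch; the relevant component names vary by input type):

```
index=_internal sourcetype=splunkd log_level=ERROR
| stats count BY component
| sort - count
```

From there you can drill into specific components, such as TcpOutputProc or ExecProcessor, for the inputs you care about.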
Quick follow-up question: is there a way to include another field (as in a column) as part of the final output? For example, if I have something like below, where there is another field "Priority" calculated using eval, how do I include it in the final output? As of now, using the query below, Priority doesn't show any data, and that's expected because Priority is not part of our chart command. I tried all different combos to add Priority to the chart command but couldn't figure out how.

| eval Priority = case(Alert like "001","P1",Alert like "002","P2")
| chart count by Alert status
| addtotals col=t fieldname=Count label=Total labelfield=Alert
| table rule_name Count status Priority
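Since chart keeps only its over/by fields plus the aggregates, one workaround (a sketch using the fields from the post; the like() patterns are guesses at the intended matching) is to re-derive Priority after the chart:

```
| chart count over Alert by status
| addtotals col=t fieldname=Count label=Total labelfield=Alert
| eval Priority = case(like(Alert, "%001%"), "P1", like(Alert, "%002%"), "P2")
| table Alert Priority Count
```

Alternatively, fold Priority into the split field before the chart (e.g. a combined Alert-Priority value) and split it back out afterwards.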