
| streamstats count as row current=f last(Value) as previous | eval row=row%2 | eval diff=if(row=1,Value-previous*row,null()) | fields - previous row
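The query above computes Value minus the previous row's Value on every other row. A minimal self-contained way to try it, using makeresults to generate sample data (the field name Value is assumed from the original query):

```spl
| makeresults count=6
| streamstats count as Value
| streamstats count as row current=f last(Value) as previous
| eval row=row%2
| eval diff=if(row=1, Value-previous*row, null())
| fields - previous row
```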
Hi @chakavak, the default folder has lower priority than local and you should not modify it. [default] host = mydashboard must be inserted in inputs.conf, not in server.conf. Open a case with Splunk Support for behavior not aligned with the documentation, sending them a diag from that UF. Ciao. Giuseppe
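For reference, a minimal sketch of what that stanza could look like on the UF (the host value is from the thread; the local path is the conventional place for overrides, adjust to your install):

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf on the universal forwarder
[default]
host = mydashboard
```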
You can run the query below in your CMC.

| rest splunk_server_group=* /services/licenser/pools
| eval total_quota_gb = round(your_quota_field / (1024 * 1024 * 1024), 2)
| eval used_gb = round(your_used_field / (1024 * 1024 * 1024), 2)
| eval usage_percentage = round((used_gb / total_quota_gb) * 100, 2)
| table splunk_server, total_quota_gb, used_gb, usage_percentage
| eval alert_level = case(usage_percentage > 90, "Critical", usage_percentage >= 80, "High", usage_percentage >= 70, "Medium", true(), "Normal")
| eval alert_message = case(usage_percentage > 90, "License usage has crossed the critical threshold at " . usage_percentage . "%. Immediate attention required!", usage_percentage >= 80, "License usage has reached " . usage_percentage . "%. Please take immediate action.", usage_percentage >= 70, "License usage has reached " . usage_percentage . "%. Please take action.", true(), "License usage is within normal range.")
| where usage_percentage > 70
| table splunk_server, total_quota_gb, used_gb, usage_percentage, alert_level, alert_message

Make sure to replace your_quota_field and your_used_field with the correct field names representing the license quota and usage in your Splunk Cloud environment.
Let's say I have a query which gives no results at the present date but may give results in the future. In this query I have calculated timeval = strftime(_time,"%y-%m-%d"). Since no data is coming in, "_time" is empty, so timeval gives no result. But I still have to show timeval based on the present time; how can I do that? I also added appendpipe[stats count| where count==0  eval timeval=strftime(now(),%d/%m/%Y) | where count==0] at the end of the query, but still no result.
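For what it's worth, the appendpipe attempt quoted above is missing a pipe before eval and quotes around the strftime format string, which is likely why it returns nothing. A corrected sketch (assuming it is appended to the original search):

```spl
| appendpipe
    [ stats count
    | where count==0
    | eval timeval=strftime(now(), "%d/%m/%Y")
    | fields - count ]
```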
21 = 1+2+3+4+5+6 i.e. it comes from your addcoltotals - try this | addcoltotals labelfield=Name Score*
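A self-contained sketch reproducing that behavior (makeresults generates six rows with Score 1 through 6, so the totals row added by addcoltotals shows 21; the field names Name and Score follow the thread):

```spl
| makeresults count=6
| streamstats count as Score
| eval Name="row" . Score
| addcoltotals labelfield=Name Score*
```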
Installed the universal forwarder credential package and the UF agent on a Windows machine. Still not receiving data. The Splunk forwarder has been restarted. Both packages were installed with the same user, i.e. root. Unable to receive any type of data from the Windows OS. Need assistance.
No problem. You can "unmark" a post as the solution, but no worries. Switching to /raw is also one of the possible solutions.
I found serverName = $COMPUTERNAME in the path below: \Program Files\SplunkUniversalForwarder\etc\system\default\server.conf. I changed this parameter and also added [default] host = mydashboard in the config file, but it didn't work.
You can't mix different distribution methods. If you're using ansible, use it to deploy to the deployer - that's the way to manage the SHC. What the deployer pushes depends on the push mode.
Hi Splunkers, today I have a "curiosity" about an architectural design I examined last week. The idea is the following: different regions (the 5 continents, in a nutshell), each one with its own set of log sources and Splunk components. All Splunk "items" are on-prem: forwarders, indexers, SHs and so on. Moreover, every region has 2 SHs: one with Enterprise Security and another one without it. Until now, "nothing new under the sun", as we say in Italy. The new element, I mean new for me and my experience, is the following one: there is a "centralized" cluster of SHs, each one with Enterprise Security installed on it, that should collect the notable events from every regional ES. So, the flow between those components should be:

Europe ES Notables -> "Centralized" ES Cluster
America ES Notables -> "Centralized" ES Cluster

And so on. So, my question is: is there any doc about forwarding notable events from one ES platform to another one? I searched but I didn't find anything about that (probably I searched badly, I know).
Hi @PickleRick, works like a charm! Thank you! It's way better than reverting to the /raw endpoint. Unfortunately I can't mark your answer as a solution anymore. I will edit my solution adding what you suggested. Thank you!
You have to be more specific.

1. There are many index names and sourcetypes which are not used in your environment. For example, I don't think you're using index names that I use in my private lab environment at home. You have to be more specific about what you need (while with the indexes you can mean checking just all defined indexes, with sourcetypes it's not clear).

2. You can't find something that isn't there. So you must have a list against which you'll be comparing your search results.

See https://www.duanewaddle.com/proving-a-negative/
It is probably due to the add-on calling an obsolete method from the splunklib. You can't do anything about it yourself except for either updating the add-on (if possible) or asking the developer to fix it.
Hi @anandhalagaras1, in this case see the Cloud Monitoring Console App at https://<your_instance>.splunkcloud.com/en-US/app/splunk_instance_monitoring/alerts, where you can find the alert named "CMC Alert - Ingest Volume Exceeds 80%". You can open this alert in search and enable it; the search is:

`sim_licensing_summary_base`
| `sim_licensing_summary_no_split`
| append [| search `sim_licensing_limit`]
| stats latest(GB) as usage latest("license limit") as limit
| eval ratio = usage/limit
| where ratio > .8

The macros may not run outside this app, but you can run the search within the app. If you want to use it outside the app, you should replace the macros. Ciao. Giuseppe
The /event endpoint gives you more flexibility than /raw, so I'd advise using /event anyway. But in order for the HEC input _not_ to skip timestamp recognition (which it does by default - it either gets the timestamp from the field pushed with (not in!) an event or assigns the current timestamp), you must add the ?auto_extract_timestamp=true parameter to the URL. Like https://your_indexer:8088/services/collector/event?auto_extract_timestamp=true
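A sketch of what such a request could look like (the token, sourcetype, and payload are placeholders, not from the original post):

```http
POST https://your_indexer:8088/services/collector/event?auto_extract_timestamp=true
Authorization: Splunk <your-hec-token>
Content-Type: application/json

{"event": "2024-05-01T12:00:00Z ERROR payment service timed out", "sourcetype": "my:app"}
```

With auto_extract_timestamp=true, Splunk runs timestamp recognition on the event text; without it, the event would get the current time (or whatever is pushed in a separate "time" field).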
Hi. I also have large files on some servers, about 10 GB per day in 3 files per server, and during the day those files are very delayed in being ingested, with ACK set to true. While those files take 1 to 4 hours to be indexed, other files on the same servers are ingested fine in real time. So, even with UF 8.2.12, I think it's a throughput limit of the network infrastructure, or maybe too much data from those inputs 🤷 I also have:

[thruput]
maxKBps = 0

[general]
parallelIngestionPipelines = 2

[queue]
maxSize = 100MB

[queue=parsingQueue]
maxSize = 10MB

I don't think there are other methods, since it's a physiological problem 🤷 The only way, maybe, is to add more indexers to the Splunk infrastructure or ask the application teams to split those files across more servers 🤷
To be honest, I look into any app not built and supported by Splunk. Not only due to security reasons but also many third-party provided apps are simply badly written and you can't get them to work without fixing them yourself.
1. I don't understand what you mean by "I have files sent to search head". If you're trying to use your SH also as a forwarder... well, that's not a good practice. But it shouldn't be the cause of the problem here. 2. Since you're sending SYNs, the indexer is listening on the port and apparently even gets those SYNs on the wire, there are two possible explanations - either your local firewall (iptables? firewalld? that new fancy nftables?) is filtering the packets or you have badly configured routing and packets are dropped by rp_filter.
This is simply bad data (at least from Splunk's point of view). Even if you managed to break it into events (but I gotta honestly say that I see no way to reliably make sure you break in proper places and only in those places; manipulating structured data with just regexes is simply not reliable because regexes are not structure-aware), you'll still have those headers and footers (attached to an end of another event). Also resulting events would have inconsistent contents - one event would have "event1" field, another would be "event2". The best solution here would be to process your data and split before pushing it to Splunk.
1. This is not recursion 2. This is an old thread with possibly low visibility. Please create a new thread, describe your problem, what data you have, what results you need to raise your chances of getting a meaningful response.