All Posts
As far as I remember, you only get a warning about possibly incomplete results if some of your indexers are down. It has nothing to do with source servers (and that's how I interpret your question - you want to know when one of your source servers isn't sending data). In the case of downed indexers, Splunk warns you that it might not have all the data it should have. That makes sense, because the missing indexers could have held buckets which have not yet been replicated, or which have been replicated but are not yet searchable. But this deals only with the state of the Splunk infrastructure, not the sources. Splunk has no way of knowing what "partial" data means when sources are missing. There are some apps meant for detecting downed sources, but they don't affect searches running on the data from those sources (although you could add a safeguard based on a similar technique - for example, require combined with a lookup).
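To illustrate that safeguard idea, here is a minimal sketch that aborts a search when an expected host has sent no data. It assumes a hypothetical expected_hosts.csv lookup with a host column; the index and time range are placeholders:

| inputlookup expected_hosts.csv
| join type=left host
    [| tstats count where index=main earliest=-15m by host]
| eval count=coalesce(count, 0)
| stats min(count) as min_events
| where min_events > 0
| require

If any expected host reported zero events, the where clause leaves no results and require fails the search instead of silently returning incomplete data.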
I don't know why, but the field "Value" doesn't display anything when I execute your search, even though the field exists.
Not sure why you are using prestats=true - try something like this

| tstats count as Count from datamodel=Cisco_Security.Secure_Malware_Analytics_Dataset
    where index IN (add_on_builder_index, ba_test, cim_modactions, cisco_duo, cisco_etd, cisco_multicloud_defense, cisco_secure_fw, cisco_sfw_ftd_syslog, cisco_sma, cisco_sna, cisco_xdr, duo, encore, fw_syslog, history, ioc, main, mcd, mcd_syslog, notable, notable_summary, resource_usage_test_index, risk, secure_malware_analytics, sequenced_events, summary, threat_activity, ubaroute, ueba, whois)
    sourcetype="cisco:sma:submissions" Secure_Malware_Analytics_Dataset.status IN ("*")
    by Secure_Malware_Analytics_Dataset.analysis_behaviors_title
| eventstats sum(Count) as Total
| eval Percent=100*Count/Total
| sort - Count
| head 20
If there is no such input to choose from, it might indeed be the case that there is no direct way to capture pod status. That wouldn't be too surprising, since Splunk typically deals with logs, and logs usually contain transitions between states, not the states themselves. You could probably write your own scripted input that periodically calls the proper API endpoint to capture those states and ingest them into Splunk, but that requires some development on your side.
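A minimal sketch of what the wiring could look like, assuming a hypothetical pod_status.py script that queries the Kubernetes API (e.g. the /api/v1/pods endpoint) and prints one event per pod - the script name, index, and sourcetype here are all placeholders (inputs.conf):

[script://$SPLUNK_HOME/etc/apps/my_app/bin/pod_status.py]
interval = 60
index = k8s
sourcetype = kube:pod:status
disabled = 0

On each run the script would write the current status of each pod to stdout, giving you point-in-time state snapshots alongside your logs.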
Hello, Could you please provide guidance on how to retrieve the daily quantity of logs per host? Specifically, I am looking for a method or query to get the amount of logs generated each day, broken down by host. Best regards,
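For reference, a minimal sketch of one common approach to this (the index filter is a placeholder; tstats counts indexed events per host per day):

| tstats count where index=* by host _time span=1d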
You should understand what your data is, not blindly copy other searches and expect them to work on different data! Your data probably already has the _time field populated with valid values (although I am guessing here, as (yet again) you haven't shared your events, as has been suggested many times before!) - try this

index="main" sourcetype="Perfmon:disk"
| timechart eval(round(avg(Value),0)) by host

If it doesn't work, may I suggest you provide more information, such as the events you have in your index?
Thank you so much @ITWhisperer, this worked for me
Convert your lookup so it has a pattern and a name for the pattern, e.g.

logline: Deprecated configuration detected in path Please update your settings to use the latest configuration options.
pattern: *Deprecated configuration detected in path* Please update your settings to use the latest configuration options.*

logline: Query execution time exceeded the threshold: seconds. Query: SELECT * FROM users WHERE last_login
pattern: *Query execution time exceeded the threshold:*seconds. Query: SELECT * FROM users WHERE last_login*

logline: Query execution time exceeded the threshold: seconds. Query: SELECT * FROM contacts WHERE contact_id
pattern: *Query execution time exceeded the threshold:*seconds. Query: SELECT * FROM contacts WHERE contact_id*

Then add a lookup definition and use the advanced option to set WILDCARD(pattern). Now you can use lookup on your events to find out which type of loglines you have

| lookup patterns.csv pattern as _raw
| stats count by logline
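If you manage the lookup definition in configuration rather than through the UI, the equivalent transforms.conf would look something like this sketch (the stanza name is an assumption, matching the patterns.csv file used above):

[patterns]
filename = patterns.csv
match_type = WILDCARD(pattern)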
Thank you for sharing it! I wrote a little script with simple logic using this workaround. I have rolled it out across my environment and will monitor how it works. But I think this approach is quite a safe way to "remediate" the problem until the Splunk team fixes it.

function Stop-SplZombieProcess {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true)]
        [string]$HostName,

        [Parameter(Mandatory = $false)]
        [int]$Threshold = 5000,

        [Parameter(Mandatory = $false)]
        [switch]$MultiZombie
    )
    begin { }
    process {
        Write-Host "Trying to find zombies on the host '$HostName'."
        # Query the remote host for all splunkd processes
        $Procs = Invoke-Command -ComputerName $HostName -ScriptBlock {
            Get-Process | Where-Object { $_.ProcessName -eq 'splunkd' }
        }
        if ($Procs.Count -eq 1) {
            Write-Host "Only one splunkd process with '$($Procs.Handles)' handles was found. Most likely it is not a zombie."
        }
        else {
            # A "zombie" is any splunkd process whose handle count exceeds the threshold
            [array]$Zombies = $Procs | Where-Object { $_.Handles -ge $Threshold }
            if ($Zombies) {
                if ($Zombies.Count -eq 1) {
                    $ProcId = $Zombies.Id
                    Write-Host "Zombie was found. The number of handles is '$($Zombies.Handles)'. Trying to kill."
                    Invoke-Command -ComputerName $HostName -ScriptBlock { Stop-Process -Id $using:ProcId -Force }
                    Write-Host "The zombie process with ProcId '$ProcId' has been killed on the host '$HostName'."
                }
                elseif ($MultiZombie) {
                    Write-Host 'Performing zombie multikill.'
                    foreach ($Item in $Zombies) {
                        $ProcId = $Item.Id
                        Write-Host "Zombie was found. The number of handles is '$($Item.Handles)'. Trying to kill."
                        Invoke-Command -ComputerName $HostName -ScriptBlock { Stop-Process -Id $using:ProcId -Force }
                        Write-Host "The zombie process with ProcId '$ProcId' has been killed on the host '$HostName'."
                    }
                }
                else {
                    Write-Warning "Found more than one process with more than '$Threshold' handles. Raise the threshold value or use the 'MultiZombie' switch to kill more than one zombie."
                }
            }
            else {
                Write-Host "Zombies not found on the host '$HostName'."
            }
        }
    }
    end {
        [System.GC]::Collect()
    }
}
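For example, to check a single host, or to sweep multiple zombies in one pass ('idx01' is a placeholder host name):

Stop-SplZombieProcess -HostName 'idx01' -Threshold 5000
Stop-SplZombieProcess -HostName 'idx01' -MultiZombie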
The summary index not showing in the dropdown happened to us due to WLM and an all-time search restriction.
Hi, I have instrumented a Node.js agent with auto-instrumentation in the Cluster Agent. My application is reporting, but no call graphs have been captured for BTs. I have checked the agent properties and discovered that by default this property is disabled: AppDynamics options: excludeAgentFromCallGraph,true. Can anyone suggest how I can enable this property for the auto-instrumentation method?
Hi All, I need to download and install the app below via the command line: https://splunkbase.splunk.com/app/263 Please help me with the exact commands. I have tried multiple commands; login is successful and I get a token, but the app download fails with a 404 bad request error.
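For reference, a rough sketch of the usual two-step approach - note the release URL pattern and version are assumptions (check the app's Splunkbase page for the exact download link, and the download still requires authentication):

curl -L -o app_263.tgz "https://splunkbase.splunk.com/app/263/release/<version>/download/"
$SPLUNK_HOME/bin/splunk install app ./app_263.tgz -auth admin:changeme

splunk install app is the documented CLI command for installing an app from a local package once you have it downloaded.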
It's what I am doing, but it returns no heat map

index="main" sourcetype="Perfmon:disk"
| eval _time=strptime(time, "%m/%d/%Y %H:%M")
| timechart eval(round(avg(Value),0)) by host

instead of this, which does return a heat map

| inputlookup sample-data.csv
| eval _time=strptime(time, "%m/%d/%Y %H:%M")
| timechart eval(round(avg(value),0)) by name
How can I use the top command after migrating to tstats? I need the same result, but it looks like it can only be produced using top, so I need it

index IN (add_on_builder_index, ba_test, cim_modactions, cisco_duo, cisco_etd, cisco_multicloud_defense, cisco_secure_fw, cisco_sfw_ftd_syslog, cisco_sma, cisco_sna, cisco_xdr, duo, encore, fw_syslog, history, ioc, main, mcd, mcd_syslog, notable, notable_summary, resource_usage_test_index, risk, secure_malware_analytics, sequenced_events, summary, threat_activity, ubaroute, ueba, whois) sourcetype="cisco:sma:submissions" status IN ("*")
| rename analysis.threat_score AS ats
| where isnum(ats)
| eval ats_num=tonumber(ats)
| eval selected_ranges="*"
| eval token_score="*"
| eval within_selected_range=0
| rex field=selected_ranges "(?<start>\d+)-(?<end>\d+)"
| eval start=tonumber(start), end=tonumber(end)
| eval within_selected_range=if((ats_num >= start AND ats_num <= end) OR token_score="*", 1, within_selected_range)
| where within_selected_range=1
| rename "analysis.behaviors{}.title" as "Behavioral indicator"
| top limit=10 "Behavioral indicator"

I tried this, but it doesn't return the percent

| tstats prestats=true count as Count from datamodel=Cisco_Security.Secure_Malware_Analytics_Dataset
    where index IN (add_on_builder_index, ba_test, cim_modactions, cisco_duo, cisco_etd, cisco_multicloud_defense, cisco_secure_fw, cisco_sfw_ftd_syslog, cisco_sma, cisco_sna, cisco_xdr, duo, encore, fw_syslog, history, ioc, main, mcd, mcd_syslog, notable, notable_summary, resource_usage_test_index, risk, secure_malware_analytics, sequenced_events, summary, threat_activity, ubaroute, ueba, whois)
    sourcetype="cisco:sma:submissions" Secure_Malware_Analytics_Dataset.status IN ("*")
    by Secure_Malware_Analytics_Dataset.analysis_behaviors_title
| chart count by Secure_Malware_Analytics_Dataset.analysis_behaviors_title
| sort - count
| head 20
So, instead of using gentimes to generate events, use an index search (as you would normally do)
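A minimal sketch of the swap, with placeholder index and time range:

Instead of:
| gentimes start=-7
use:
index=your_index earliest=-7d@d latest=now

so the timeline is driven by the events themselves rather than generated rows.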
Try breaking the large rex up into smaller chunks

| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*.*#HLE#"
| rex "#HLS#\s*IID:\s*[^,]+,\s*STEP:\s*(?P<STEP>[^,]+),\s*.*#HLE#"
| rex "#HLS#\s*IID:\s*[^,]+,\s*STEP:\s*[^,]+,\s*PKEY:\s*(?P<PKEY>.*?),\s*.*#HLE#"

and so on
Good news: versions 9.1.6, 9.2.3, and 9.3.1 are available now. Testing with 9.2.3 shows no more zombie processes, and the splunkd handle count remains low, so the memory leak seems to be fixed.
Absolutely correct. That's my intention, and I'm a bit worried that I would hit a performance impact if I keep updating the macro and it exceeds a limit at some point. Is there any better approach I can use for this use case? Happy to adapt to any better approaches.
Let's say below are a few rex patterns available in my lookup

| rex field=LogLine mode=sed "s|(Deprecated configuration detected in path).*( Please update your settings to use the latest configuration options.)|\1 \2|g"
| rex field=LogLine mode=sed "s|(Query execution time exceeded the threshold:).*(seconds. Query: SELECT * FROM users WHERE last_login).*|\1 \2|g"
| rex field=LogLine mode=sed "s|(Query execution time exceeded the threshold:).*(seconds. Query: SELECT * FROM contacts WHERE contact_id).*|\1 \2|g"

Below are the search results I want to apply the above rex patterns to:

WARN ConfigurationLoader - Deprecated configuration detected in path /xx/yy/zz. Please update your settings to use the latest configuration options.
WARN ConfigurationLoader - Deprecated configuration detected in path /aa/dd/jkl. Please update your settings to use the latest configuration options.
WARN QueryExecutor - Query execution time exceeded the threshold: 12.3 seconds. Query: SELECT * FROM users WHERE last_login > '2024-01-01'.
WARN QueryExecutor - Query execution time exceeded the threshold: 21.9 seconds. Query: SELECT * FROM contacts WHERE contact_id > '252'.

So I'll get something like below if I do stats

LogLine: Deprecated configuration detected in path . Please update your settings to use the latest configuration options. | Count: 2
LogLine: Query execution time exceeded the threshold: seconds. Query: SELECT * FROM users WHERE last_login | Count: 1
LogLine: Query execution time exceeded the threshold: seconds. Query: SELECT * FROM contacts WHERE contact_id | Count: 1
So let me see if I have understood: You have 1000s of patterns in a lookup which you use against a set of events and if any of the events match against a pattern in the lookup you copy that pattern into a macro? And this is the process you want to automate?