All Posts

Hi everyone, I have recently started working with Splunk UBA and have some questions. Anomalies: How long does it usually take to identify anomalies after the logs are received? Can I define anomaly rules? Is there anywhere that explains what the existing anomaly categories are based on, or what they look for in the traffic? Threats: How long does it take to trigger threats after anomalies are identified? Is there any source I can rely on for creating threat rules? I am creating and testing rules, but getting no results.
Hi @andy11 , if your search has a run time of more than 24 hours, there's probably an issue with it, even if 10 million events aren't that many! Your system probably doesn't have the required resources (CPUs and, especially, storage IOPS: at least 800), so your searches are too slow. Anyway, you should apply the acceleration methods that Splunk offers, so please read my answer to a similar question: https://community.splunk.com/t5/Splunk-Search/How-can-I-optimize-my-Splunk-queries-for-better-performance/m-p/702770#M238261 In other words, you should use an accelerated data model or a summary index and run your alert search on it. Ciao. Giuseppe
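As a sketch of the summary-index approach (the summary index name below is a placeholder, not from the original post): a scheduled search periodically writes a pre-aggregated count into a summary index with `collect`, and the alert then runs against the small summary instead of the raw events.

```
Scheduled hourly search (writes one count per hour into the summary):
index="index_name" source="source_name" earliest=-1h@h latest=@h
| stats count AS event_count
| collect index=my_summary

Alert search (fast, reads only the summary):
index=my_summary earliest=-1d@d latest=@d
| stats sum(event_count) AS total
| where total < 10000000
```

The alert search now scans a handful of summary events instead of millions of raw ones, so it completes in seconds.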
Hi, I think there is some confusion here. The app that you linked to on SplunkBase is one that was created by a community user and not Splunk. It happens to have the word "synthetics" in the name, but it is not related to Splunk Synthetics, which is the synthetic monitoring solution provided by Splunk Observability Cloud. For help with the app you found on SplunkBase, you'll need to contact the developer directly.
I'm using a query which returns an entire day of data:

index="index_name" source="source_name"

This search returns over 10 million large events. My requirement is: if the data volume drops below 10 million events, I should receive an alert. But the alert keeps triggering before the search completes, because the search takes such a long time. Is there any way to trigger this alert only after the search has fully completed?
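If the goal is only to compare the event count against a threshold, a sketch using `tstats` may help: it reads indexed metadata instead of retrieving all 10M raw events, so it usually completes in seconds (the index and source names are the placeholders from the question).

```
| tstats count WHERE index="index_name" source="source_name" earliest=-1d@d latest=@d
| where count < 10000000
```

Set the alert to trigger when the number of results is greater than zero.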
Maybe you can report this to Splunk support?
Is there any chance this will be fixed?
| rex max_match=0 field=Tags "(?<namevalue>[^:, ]+:[^,]+)" | mvexpand namevalue | rex field=namevalue "(?<name>[^:]+):(?<value>.*)" | eval {name}=value
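The key:value splitting approach above can be sanity-checked outside Splunk; here is a minimal Python sketch of the same two-step logic (the sample tag string is illustrative, and the value pattern deliberately allows spaces so multi-word values like "support services" survive):

```python
import re

# Illustrative tag string in the same "name:value, name:value" shape as the question
tags = ("avd:vm, dept:support services, "
        "cm-resource-parent:/subscriptions/abc/resourcegroups/rg1, "
        "manager:JohnDoe@email.com")

# Step 1: find each "name:value" chunk (mirrors rex max_match=0 + mvexpand)
chunks = re.findall(r"[^:, ]+:[^,]+", tags)

# Step 2: split each chunk at the first colon (mirrors the second rex + eval {name}=value)
fields = {}
for chunk in chunks:
    name, _, value = chunk.partition(":")
    fields[name] = value

# fields now maps each tag name to its full value
```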
| eventstats values(hdr_mid) AS hdr_mid by s qid
It's not the only factor in captain election. So just because you have raft enabled doesn't mean that your election will work properly.
I never had problems with captain election, with both [raft_statemachine] disabled = true and disabled = false 🤷‍
@sainag_splunk wrote: The disabled setting in SHC only impacts captain election and member roster management. Ok, so it's minimal and has no real impact on cluster operations. Thanks
https://en.m.wikipedia.org/wiki/Raft_(algorithm) Without the Raft algorithm, your captain election will not work properly. You might get away with a static captain, but that is not fault tolerant: if you lose your static captain, your SHC will more or less fall apart.
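For reference, the setting under discussion lives in server.conf on each search head cluster member; a minimal sketch (the stanza and attribute are the ones being discussed in this thread, and the comment reflects its conclusion):

```
# server.conf on each SHC member
[raft_statemachine]
# Keep Raft enabled (the default) so dynamic captain election stays fault tolerant
disabled = false
```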
Ok. I recognize filtered logs. What is your business case here?
One important thing: you can't add or remove individual entries in a CSV lookup. You can only overwrite it as a whole.
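A common pattern that follows from this (sketched here with placeholder lookup and field names) is read-modify-write: load the whole lookup, add or filter rows in the search pipeline, then write the entire result back:

```
| inputlookup my_lookup.csv
| append
    [| makeresults
     | eval host="newhost", status="active"
     | fields host status]
| outputlookup my_lookup.csv
```

To remove rows instead, replace the `append` subsearch with a `where` or `search` filter before the final `outputlookup`.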
It depends whether we're talking about configuring extractions in transforms or trying to do it with search commands. With configured extractions you just need to capture two groups, one for the field name and another for the value, and either use $1::$2 for FORMAT if using unnamed groups, or name them _KEY_1 and _VAL_1 respectively if using named groups. If you want to do that in SPL you need to use the {} notation, like | eval {fieldname}=fieldvalue where fieldname is a field containing your target field name. Most probably you'll want to split your input into key:value chunks as a multivalued field, then use foreach to iterate over those chunks, split them into final key-value pairs, and use the {key} notation to define the output field.
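A sketch of the configured-extraction variant (the stanza name, sourcetype, and regex here are illustrative; REGEX, FORMAT, MV_ADD, and REPORT- are the actual transforms.conf/props.conf settings):

```
# transforms.conf -- unnamed groups: $1 becomes the field name, $2 its value
[extract_tag_pairs]
REGEX = ([^:, ]+):([^,]+)
FORMAT = $1::$2
MV_ADD = true

# props.conf -- attach the transform to the sourcetype
[my_sourcetype]
REPORT-tag_pairs = extract_tag_pairs
```

MV_ADD keeps every matched pair instead of only the first, which matters when one event carries several tags.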
Try this one:

<your_search>
| rex field=Tags "avd:(?<avd>[^,]+),\s*dept:(?<dept>[^,]+),\s*cm-resource-parent:(?<cm_resource_parent>[^,]+),\s*manager:(?<manager>[^$]+)"

------ If you find this solution helpful, please consider accepting it and awarding karma points!
Two things. 1. A Heavy Forwarder is a Splunk Enterprise instance; it just does forwarding. 2. If you can receive your UDP traffic at the forwarder, why send it to another Splunk instance with syslog instead of the native Splunk-to-Splunk (S2S) protocol?
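As a minimal sketch of that setup (the port, server name, and sourcetype are illustrative), the heavy forwarder can listen for UDP syslog directly and forward over S2S:

```
# inputs.conf on the heavy forwarder -- receive UDP syslog directly
[udp://514]
sourcetype = syslog
connection_host = ip

# outputs.conf -- forward to the indexer over the native S2S protocol
[tcpout:primary_indexers]
server = indexer1.example.com:9997
```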
Perhaps this answer will help: https://community.splunk.com/t5/Splunk-Enterprise/Having-Syslog-logs-into-SPLUNK/m-p/693546/highlight/true#M19778
I've imported a csv file and one of the fields, called "Tags", looks like this:

Tags="avd:vm, dept:support services, cm-resource-parent:/subscriptions/e9674c3a-f9f8-85cc-b457-94cf0fbd9715/resourcegroups/avd-standard-pool-rg/providers/microsoft.desktopvirtualization/hostpools/avd_standard_pool_1, manager:JohnDoe@email.com"

I'd like to split each of these tags up into their own field/value, AND extract the first part of the tag as the field name. The resulting new fields/values would look like this:

avd="vm"
dept="support services"
cm-resource-parent="/subscriptions/e9674c3a-f9f8-85cc-b457-94cf0fbd9715/resourcegroups/avd-standard-pool-rg/providers/microsoft.desktopvirtualization/hostpools/avd_standard_pool_1"
manager="JohnDoe@email.com"

I've looked at a lot of examples with rex, MV commands, etc., but nothing that pulls the new field name out of the original field. The format of the Tags field is always the same as listed above, for all events. Thank you!
Apply the following workaround in default-mode.conf. You can also push this change via a deployment server (DS) push across thousands of universal forwarders. Add index_thruput to the list of disabled processors by adding the following lines, as is, to default-mode.conf:

# Turn off a processor
[pipeline:indexerPipe]
disabled_processors = index_thruput, indexer, indexandforward, latencytracker, diskusage, signing, tcp-output-generic-processor, syslog-output-generic-processor, http-output-generic-processor, stream-output-processor, s2soverhttpoutput, destination-key-processor

NOTE: PLEASE DON'T APPLY THIS ON HF/SH/IDX/CM/DS. Use a separate app (not the SplunkUniversalForwarder app) to push the change.