All Posts


Hi @SalahKhattab, if you want to avoid indexing part of your data, the job is more complicated, because the only way is to anonymize the data (https://docs.splunk.com/Documentation/Splunk/9.3.1/Data/Anonymizedata). In other words, you have to delete some parts of your logs before indexing. Why do you want to do this: to save some license costs, or to keep some data from being visible? If you don't have one of those requirements, I suggest indexing all the data, because the removed data could be useful for you later. Ciao. Giuseppe
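A minimal sketch of what that anonymization looks like in practice, assuming a SEDCMD rule in props.conf on the indexer or heavy forwarder (the sourcetype name and the masking pattern below are hypothetical, not from this thread):

    # props.conf -- mask long digit runs before the event is written to the index
    [my:xml:sourcetype]
    SEDCMD-mask_numbers = s/\d{12,16}/############/g

Because SEDCMD runs at parse time, the original characters never reach the index.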
Hello Giuseppe, In my case, the goal is to ensure that the data is cleaned before indexing. For instance, if the data is: <test>dasdada</test><test2>asdasda</test2> I only need the data for the <test> field, and I don't want the <test2> field to appear. Additionally, there are many fields that I don't require, so creating a regex for each unwanted field to remove it with SEDCMD or a blacklist would be challenging. Is there a way to delete fields that aren't extracted from the log before indexing?
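One hedged sketch of a generic approach to this (the sourcetype and the keep-list below are assumptions for illustration): a single SEDCMD with a negative lookahead that deletes any flat <tag>...</tag> element whose tag is not on the keep-list, instead of one regex per unwanted field.

    # props.conf -- drop every flat XML element except the listed ones (fragile sketch)
    [my:xml:sourcetype]
    SEDCMD-drop_unwanted = s/<(?!\/?(test|field2|field3)\b)[^>]+>[^<]*<\/[^>]+>//g

Regex surgery on XML is fragile (nested elements, attributes, and multiline values all break it), so test it against real events before trusting it.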
Hi @SalahKhattab, unless you extract your fields at index time, fields are extracted at search time, so all the fields that you configured will be extracted. I suppose that you extracted the fields using automatic XML extraction at search time (KV_MODE=xml); in this case all the fields are extracted only when you search, and this doesn't consume storage or memory in the index. It's different if you use regex extractions instead: in that case, only the configured fields are extracted. Why is it so important for you that the other fields aren't extracted? Ciao. Giuseppe
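For concreteness, a minimal sketch of the search-time configuration described above (the sourcetype name is a placeholder):

    # props.conf -- automatic XML field extraction at search time
    [my:xml:sourcetype]
    KV_MODE = xml

Since these fields only materialize at search time, there is nothing to 'delete' from the index; only the raw event text consumes storage.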
I have XML input logs in Splunk. I have already extracted the required fields, totaling 10 fields. I need to ensure any other fields that are extracted are ignored and not indexed in Splunk. Can I set it so that if a field is not in the extracted list, it is automatically ignored? Is this possible? 
Hi everyone, I have started working with Splunk UBA recently, and I have some questions:

Anomalies:
- How long does it usually take to identify anomalies after the logs are received?
- Can I define anomaly rules?
- Is there any documentation explaining what the existing anomaly categories are based on, or what they look for in the traffic?

Threats:
- How long does it take to trigger threats after anomalies are identified?
- Is there any source I can rely on for creating threat rules? I am creating and testing rules, but with no results.
Hi @andy11, if your search has a run time of more than 24 hours, there's probably an issue with it; 10 million events aren't that many! Probably your system doesn't have the required resources (CPUs and, especially, storage IOPS, at least 800), so your searches are too slow. Anyway, you should apply the acceleration methods that Splunk offers, so please read my answer to a similar question: https://community.splunk.com/t5/Splunk-Search/How-can-I-optimize-my-Splunk-queries-for-better-performance/m-p/702770#M238261 In other words, you should use an accelerated data model or a summary index and run your alert search on it. Ciao. Giuseppe
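A minimal sketch of the summary-index variant (the index, source, and summary-index names are placeholders, not from this thread). A scheduled search, saved with summary indexing enabled, pre-aggregates the counts:

    index="index_name" source="source_name" | sistats count

The alert then searches the summary index instead of the raw events:

    index=my_summary_index | stats count

Because the alert only scans the small pre-aggregated summary, it finishes in seconds even when the raw search would run for hours.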
Hi, I think there is some confusion here. The app that you linked to on SplunkBase is one that was created by a community user and not Splunk. It happens to have the word "synthetics" in the name, but that is not related to Splunk Synthetics--which is the synthetic monitoring solution provided by Splunk Observability Cloud. For help with the app you found on SplunkBase, you'll need to contact the developer directly.
I'm using a query which returns the entire day's data:

    index="index_name" source="source_name"

This search returns more than 10 million large events. My requirement is that if the volume drops below 10 million events, I should receive an alert. But the alert fires before this search finishes, because the full search takes a very long time to complete. Is there any way to trigger the alert only after the search has completed?
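One hedged alternative worth noting here (a technique not mentioned in the thread): when the alert only needs an event count, tstats can count from index metadata without scanning the raw events, which is far faster than the raw search:

    | tstats count where index="index_name" source="source_name" earliest=-1d@d latest=@d
    | where count < 10000000

With a trigger condition of "number of results > 0", the alert fires only when the day's volume has dropped below 10 million.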
Maybe you can report it to Splunk support?
Is there any chance this will be fixed?
| rex max_match=0 field=tags "(?<namevalue>[^:, ]+:[^, ]+)"
| mvexpand namevalue
| rex field=namevalue "(?<name>[^:]+):(?<value>.*)"
| eval {name}=value
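A self-contained way to try the snippet above (the sample tags value is made up):

    | makeresults
    | eval tags="env:prod, team:web, region:us-east"
    | rex max_match=0 field=tags "(?<namevalue>[^:, ]+:[^, ]+)"
    | mvexpand namevalue
    | rex field=namevalue "(?<name>[^:]+):(?<value>.*)"
    | eval {name}=value

After the mvexpand, each of the three resulting rows carries one dynamically named field: env=prod, team=web, or region=us-east.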
| eventstats values(hdr_mid) AS hdr_mid by s qid
It's not the only factor in captain election. So just because you have raft enabled doesn't mean that your election will work properly.
I never had problems with captain election, with [raft_statemachine] set either to disabled = true or to disabled = false 🤷‍
@sainag_splunk wrote: The disabled setting in SHC only impacts captain election and member roster management. Ok, so it's minimal and has no real impact on cluster operation. Thanks
https://en.m.wikipedia.org/wiki/Raft_(algorithm) Without the Raft algorithm, your captain election will not work properly. You might get away with a static captain, but that is not fault tolerant, so if you lose your static captain your SHC will more or less fall apart.
Ok. I recognize filtered logs. What is your business case here?
One important thing: you can't add or remove individual entries to/from a CSV lookup. You can only overwrite it as a whole.
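A hedged sketch of the usual pattern that follows from this (the lookup file name and fields are hypothetical): read the whole lookup, append the new rows, and write the whole file back.

    | inputlookup my_users.csv
    | append [| makeresults | eval user="alice", role="admin" | fields user role]
    | outputlookup my_users.csv

outputlookup replaces the file, so the "add" is really a full rewrite of the lookup's contents.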
It depends whether we're talking about configuring extractions in transforms or trying to do it with search commands. With configured extractions you just need to capture two groups, one for the field name and another for the value, and either use $1::$2 as the FORMAT if you're using unnamed groups, or name them _KEY_1 and _VAL_1 respectively if you're using named groups. If you want to do that in SPL, you need to use the {} notation, like | eval {fieldname}=fieldvalue where fieldname is a field containing your target field name. Most probably you'll want to split your input into key:value chunks as a multivalued field, then use foreach to iterate over those chunks, split them into final key-value pairs, and use the {key} notation to define the output field.
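A minimal transforms.conf sketch of the configured-extraction variant described above (the stanza name, sourcetype, and the pair-matching regex are assumptions for illustration):

    # transforms.conf
    [extract_kv_pairs]
    REGEX = ([^:,\s]+):([^,\s]+)
    FORMAT = $1::$2
    MV_ADD = true

    # props.conf
    [my:sourcetype]
    REPORT-kv = extract_kv_pairs

FORMAT = $1::$2 tells Splunk to take the first capture group as the field name and the second as its value; with named groups you would name them _KEY_1 and _VAL_1 instead and drop the FORMAT line.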