Hi @SalahKhattab , read the above link about anonymizing data; you'll find the use of SEDCMD in props.conf to remove part of your logs:

SEDCMD-reduce_fields = s/<Interceptor>(.*)<ActionDate>2-04-24<\/ActionDate>(.*)<RecordNotes>test<\/RecordNotes>(.*)<\/Interceptor>/<Interceptor><ActionDate>2-04-24<\/ActionDate><RecordNotes>test<\/RecordNotes><\/Interceptor>/g

You can test the regex at https://regex101.com/r/fIpO23/1

Ciao.
Giuseppe
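As a minimal sketch of where that line lives, assuming the events arrive under a sourcetype named interceptor_xml (a hypothetical name; use your own), the props.conf stanza would look like this:

```ini
# props.conf -- sketch, assuming a sourcetype named "interceptor_xml"
[interceptor_xml]
# Rewrite each <Interceptor> event at index time so that only
# ActionDate and RecordNotes survive; everything else between the
# tags is dropped before the event is written to the index.
SEDCMD-reduce_fields = s/<Interceptor>(.*)<ActionDate>2-04-24<\/ActionDate>(.*)<RecordNotes>test<\/RecordNotes>(.*)<\/Interceptor>/<Interceptor><ActionDate>2-04-24<\/ActionDate><RecordNotes>test<\/RecordNotes><\/Interceptor>/g
```

Note that SEDCMD runs on the parsing pipeline, so it must be deployed to the indexers or heavy forwarders, not to universal forwarders.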
Sorry, I didn’t quite get your point. Let me clarify. For example, if this is my data:

<Interceptor>
  <AttackCoords>-423423445345345.10742916222947</AttackCoords>
  <Outcome>2</Outcome>
  <Infiltrators>20</Infiltrators>
  <Enforcer>2</Enforcer>
  <ActionDate>2-04-24</ActionDate>
  <ActionTime>00:2:00</ActionTime>
  <RecordNotes>test</RecordNotes>
  <NumEscaped>0</NumEscaped>
  <LaunchCoords>-222222</LaunchCoords>
  <AttackVessel>111</AttackVessel>
</Interceptor>

I want to extract only ActionDate and RecordNotes and ignore all the other fields during ingestion, so the data is cleared of unnecessary fields. In transforms.conf, I aim to create a regex pattern for ActionDate and RecordNotes that filters out the other fields, making the resulting data look like this:

<Interceptor>
  <ActionDate>2-04-24</ActionDate>
  <RecordNotes>test</RecordNotes>
</Interceptor>

How can I achieve this?
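To sanity-check the sed-style substitution discussed in this thread outside of Splunk, here is a small Python sketch. Python's re module stands in for Splunk's SEDCMD engine here, and the capture groups make the date and notes values generic instead of hardcoding "2-04-24" and "test" as the earlier regex does; treat it as an illustration, not the exact Splunk behavior.

```python
import re

# Sample event from the post, spread over multiple lines.
sample = """<Interceptor>
<AttackCoords>-423423445345345.10742916222947</AttackCoords>
<Outcome>2</Outcome>
<Infiltrators>20</Infiltrators>
<Enforcer>2</Enforcer>
<ActionDate>2-04-24</ActionDate>
<ActionTime>00:2:00</ActionTime>
<RecordNotes>test</RecordNotes>
<NumEscaped>0</NumEscaped>
<LaunchCoords>-222222</LaunchCoords>
<AttackVessel>111</AttackVessel>
</Interceptor>"""

# Capture the ActionDate and RecordNotes values, discard everything
# else between the <Interceptor> tags. re.DOTALL lets .* span the
# newlines inside the multi-line event.
pattern = re.compile(
    r"<Interceptor>.*<ActionDate>(.*?)</ActionDate>"
    r".*<RecordNotes>(.*?)</RecordNotes>.*</Interceptor>",
    re.DOTALL,
)

reduced = pattern.sub(
    r"<Interceptor><ActionDate>\1</ActionDate>"
    r"<RecordNotes>\2</RecordNotes></Interceptor>",
    sample,
)
print(reduced)
```

The output keeps only the two wanted fields, which matches the "after" shape asked for in the question.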
Hi @SalahKhattab , no, it's the opposite: you have to define only the regex extractions for the fields you want, and the others will not be extracted (as long as you didn't define INDEXED_EXTRACTIONS=XML). Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @tomnguyen1 , usually the easiest way is to create a scheduled search (usually the same search) with a shorter time period that saves its results in a summary index, and then run the alert on the summary index. You should also try to optimize your search. Let me know if we can help more; if so, please describe your search in more detail. Ciao. Giuseppe
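As a sketch of this pattern (the index, sourcetype, field names, and threshold below are all assumptions, not from the post): a scheduled search writes a compact result set into a summary index with the collect command, and the alert then runs against that much smaller index.

```
# Scheduled search (e.g. every hour, over the last hour),
# writing pre-aggregated results into a summary index:
index=main sourcetype=my_data
| stats count BY host
| collect index=my_summary

# Alert search, running cheaply on the pre-computed summary:
index=my_summary
| stats sum(count) AS total BY host
| where total > 1000
```

Because the alert only reads the small summary index, it completes quickly regardless of how large the raw data is.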
Hi @SalahKhattab , if you want to avoid indexing a part of your data, the job is more complicated, because the only way is the data-anonymizing approach (https://docs.splunk.com/Documentation/Splunk/9.3.1/Data/Anonymizedata). In other words, you have to delete some parts of your logs before indexing. Why do you want to do this: to save some license costs, or to keep some data from being visible? If you don't have one of those requirements, I suggest indexing all the data, because the removed data could turn out to be useful to you. Ciao. Giuseppe
Hello Giuseppe, In my case, the goal is to ensure that the data is cleaned before indexing. For instance, if the data is:

<test>dasdada</test><test2>asdasda</test2>

I only need the data for the <test> field, and I don’t want the <test2> field to appear. Additionally, there are many fields that I don’t require, so creating a regex for each unwanted field to remove it with SEDCMD or a blacklist would be challenging. Is there a way to delete fields that aren’t extracted from the log before indexing?
Hi @SalahKhattab , unless you extract your fields at index time, fields are extracted at search time, so only the fields that you configured will be extracted. I suppose that you extracted the fields using INDEXED_EXTRACTIONS=XML; in this case every field in the XML is extracted automatically. It's different if you use regex extractions and not INDEXED_EXTRACTIONS=XML: in that case, only the configured fields are extracted, and search-time extraction doesn't consume extra storage or memory. Why is it so important for you that the other fields aren't extracted? Ciao. Giuseppe
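For example, search-time extractions for just the two wanted fields might look like this in props.conf (the sourcetype name and extraction class names are assumptions for illustration); no other element of the XML ever becomes a field, and nothing extra is stored in the index:

```ini
# props.conf -- sketch, assuming a sourcetype named "interceptor_xml"
[interceptor_xml]
# Search-time extractions: only these two fields are ever created.
# They are computed when you search, so they cost no index storage.
EXTRACT-action_date  = <ActionDate>(?<ActionDate>[^<]+)<\/ActionDate>
EXTRACT-record_notes = <RecordNotes>(?<RecordNotes>[^<]+)<\/RecordNotes>
```

With this in place, the raw XML is still indexed in full, but searches only see the ActionDate and RecordNotes fields.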
I have XML input logs in Splunk. I have already extracted the required fields, totaling 10 fields. I need to ensure any other fields that are extracted are ignored and not indexed in Splunk. Can I set it so that if a field is not in the extracted list, it is automatically ignored? Is this possible?
Hi everyone, I have started working with Splunk UBA recently, and have some questions:

Anomalies:
- How long does it usually take to identify anomalies after the logs are received?
- Can I define anomaly rules?
- Is there any documentation explaining what the existing anomaly categories are based on, or what they look for in the traffic?

Threats:
- How long does it take to trigger threats after anomalies are identified?
- Is there any source I can rely on for creating threat rules? I am creating and testing rules, but with no results.
Hi @andy11 , if your search has a run time of more than 24 hours, there's probably an issue with it, even though 10 million events aren't that many! Probably your system doesn't have the required resources (CPUs and, especially, storage IOPS, which should be at least 800), so your searches are too slow. Anyway, you should apply the acceleration methods that Splunk offers, so please read my answer to a similar question: https://community.splunk.com/t5/Splunk-Search/How-can-I-optimize-my-Splunk-queries-for-better-performance/m-p/702770#M238261 In other words, you should use an accelerated data model or a summary index and run your alert search on it. Ciao. Giuseppe
Hi, I think there is some confusion here. The app that you linked to on SplunkBase is one that was created by a community user, not by Splunk. It happens to have the word "synthetics" in the name, but it is not related to Splunk Synthetics, which is the synthetic monitoring solution provided by Splunk Observability Cloud. For help with the app you found on SplunkBase, you'll need to contact the developer directly.
I'm using a query which returns an entire day's data:

index="index_name" source="source_name"

This search returns more than 10 million events. My requirement is that if the data drops below 10 million events, I should receive an alert. But when this alert triggers, the search has not yet completed, because it takes a long time to run, so the alert fires every time before the search finishes. Is there any way to trigger this alert only after the search has fully completed?
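As an illustration of a faster approach (index and source names taken from the post; the time range and threshold are assumptions): counting raw events with tstats reads the index metadata instead of retrieving all 10 million events, so the alert search finishes in seconds and only fires once the count is actually known.

```
| tstats count WHERE index="index_name" source="source_name"
         earliest=-1d@d latest=@d
| where count < 10000000
```

Since tstats counts events without fetching them, the "search not completing before the alert triggers" problem goes away for a pure event-count alert like this one.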