All Posts

Applying this on a non-UF (e.g. an HF) will break thruput metrics. I've added a warning to the post. Thanks for asking a great question.
You nailed it. You may want to check https://community.splunk.com/t5/Knowledge-Management/Splunk-Persistent-Queue/m-p/688223/highlight/true#M10063
Thanks for the information. I assume the target is to fix this in a future UF 9.3.x release? Furthermore, would you happen to know what would happen if the setting was accidentally applied on an HF? Clients of our deployment server will sometimes run a Splunk Enterprise version instead of a UF, so I suspect we will need to be careful...
It may be worth adding that the acknowledgement option cannot protect against data loss in a scenario where a forwarder is restarted while the remote endpoint is not available.

To expand on this point, let's assume we have universal forwarder A sending data to heavy forwarder B (and only HF B), and that B connects to the indexers. If A is reading from a file and sending to B, and we shut down B, then restart A while B is still unable to process data, any "in memory" data is lost at that point, because the memory buffer is flushed on shutdown. The file monitor will then re-read the file from the point *after* the lost portion of the data.

This experiment is quite easy to set up in a development environment. The only point I'm adding is that the acknowledgement option protects (as advertised) against intermediate data loss; it does not protect against data loss when the remote endpoint is down and the source is restarted.
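For anyone reproducing this, a minimal sketch of the relevant setting on forwarder A (the output group name and host/port below are placeholders, not taken from the thread):

outputs.conf on universal forwarder A:

[tcpout:hf_group]
server = hf-b.example.com:9997
# ask the receiving instance (HF B) to acknowledge data before the
# forwarder discards it from its in-memory output queue
useACK = true

Even with useACK enabled, the behaviour described above still applies: unacknowledged data held only in memory can be lost if A is restarted while B is down.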
Hi, our company does not yet have Splunk Enterprise Security, but we are considering getting it. Currently, our security posture includes a stream of EDR data from Carbon Black containing the EDR events and watchlist hits. We want to correlate the watchlist hits to create incidents. Is this something Splunk Enterprise Security can do right out of the box, given access to the EDR data? If so, how do we do this in the Splunk Enterprise Security dashboard?
https://docs.splunk.com/Documentation/Splunk/latest/admin/Wheretofindtheconfigurationfiles This is how Splunk merges settings from all the configuration files to create an effective configuration which will be applied.
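If it helps, you can inspect the merged result on a given instance with btool; a minimal sketch (run on the instance in question, with 'props' as just one example of a configuration file prefix):

$SPLUNK_HOME/bin/splunk btool props list --debug
# prints every effective setting together with the file it was read from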
Manipulating structured data with regexes is not a very good idea. It would be better to use an external tool to clean up your data before ingesting it.
In Truck Simulator Ultimate, connecting platforms like Apigee Edge to Splunk is similar to integrating tracking tools for your fleet. This connection allows you to monitor and analyze API traffic data in real time, just as tracking fuel and route efficiency improves logistics. It's a powerful way to optimize operations smoothly.
Ok, but what is the goal of your alert? If you just want to know whether you have fewer than 10M events, you chose the worst possible way to do so. Why fetch all events if you only want their count?

index=whatever source=something | stats count

is much, much better. And since you use only indexed fields (the index name technically isn't an indexed field, but we can assume it is for the sake of this argument), you can even do it lightning-fast as

| tstats count WHERE index=whatever source=something
Hi @gcusello , Thanks for your reply. Actually my search is not taking that much time; it hardly takes 4-6 minutes to complete. The problem is that the alert triggers before the search completes, i.e. 2-3 minutes after the cron scheduled time. So only 30-40% of the search has completed by the time the alert triggers, and I'm getting alerts every day. I need the alert to trigger only after the search completes. Can you please help me with what to do in this case? Thanks in advance.
Hi @SalahKhattab , read the above link about anonymizing; you'll find the use of SEDCMD in props.conf to remove part of your logs:

SEDCMD-reduce_fields = s/<Interceptor>(.*)\<ActionDate\>2-04-24\<\/ActionDate\>(.*)\<RecordNotes\>test\<\/RecordNotes\>(.*)\<\/Interceptor\>/<Interceptor\>\<ActionDate\>2-04-24\<\/ActionDate\>\<RecordNotes\>test\<\/RecordNotes\>\<\/Interceptor\>/g

that you can test at https://regex101.com/r/fIpO23/1

Ciao. Giuseppe
Sorry, I didn’t quite get your point. Let me clarify. For example, if this is my data:

<Interceptor>
<AttackCoords>-423423445345345.10742916222947</AttackCoords>
<Outcome>2</Outcome>
<Infiltrators>20</Infiltrators>
<Enforcer>2</Enforcer>
<ActionDate>2-04-24</ActionDate>
<ActionTime>00:2:00</ActionTime>
<RecordNotes>test</RecordNotes>
<NumEscaped>0</NumEscaped>
<LaunchCoords>-222222</LaunchCoords>
<AttackVessel>111</AttackVessel>
</Interceptor>

I want to extract only ActionDate and RecordNotes and ignore all other fields during ingestion. This way, the data will be cleared of unnecessary fields. In transforms.conf, I aim to create a regex pattern for ActionDate and RecordNotes to filter out other fields, making the resulting data look like this:

<Interceptor>
<ActionDate>2-04-24</ActionDate>
<RecordNotes>test</RecordNotes>
</Interceptor>

How can I achieve this?
Hi @SalahKhattab , no, it's the opposite: you have to define only the regex extractions for the fields you want, and the others will not be extracted (as long as you didn't define INDEXED_EXTRACTIONS=XML). Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
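As a rough sketch of that approach (the stanza name below is a placeholder for your actual sourcetype), search-time regex extractions limited to the two fields from your example would look something like this in props.conf:

[my_xml_sourcetype]
EXTRACT-action_date = <ActionDate>(?<ActionDate>[^<]+)</ActionDate>
EXTRACT-record_notes = <RecordNotes>(?<RecordNotes>[^<]+)</RecordNotes>

With only these EXTRACT statements (and no INDEXED_EXTRACTIONS = XML), only ActionDate and RecordNotes are extracted at search time; the other tags remain in the raw event but get no fields of their own.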
Okay, got it. One last thing: is there any regex that would make any field not in the extracted list be ignored at indexing?
Hi @tomnguyen1 , usually the easiest way is to create a scheduled search (usually the same search) with a shorter time period that saves its results in a summary index, and then run the alert on the summary index. You should also try to optimize your search. Let us know if we can help more; please describe your search in more detail. Ciao. Giuseppe
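A minimal sketch of that pattern, with placeholder index and source names (not taken from your environment): the scheduled search writes its count into a summary index, and the alert reads only the summary index, which is fast.

Scheduled search (e.g. run hourly over the last hour):
index=whatever source=something
| stats count
| collect index=my_summary

Alert search over the alerting window:
index=my_summary
| stats sum(count) as total
| where total < 10000000

The 10M threshold here is only an example, taken from the figure mentioned earlier in this thread; the summary index (my_summary) has to exist before collect can write to it.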
Hi @SalahKhattab , if you want to avoid indexing a part of the data, the job is more complicated, because the only way is the anonymizing approach (https://docs.splunk.com/Documentation/Splunk/9.3.1/Data/Anonymizedata). In other words, you have to delete some parts of your logs before indexing. Why do you want to do this: to save some license costs, or to avoid that some data is visible? If you don't have one of the above requirements, I'd suggest indexing all the data, because the removed data could be useful to you. Ciao. Giuseppe
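As a very rough sketch of what that looks like (the sourcetype and tag names below are purely illustrative), a SEDCMD in props.conf on the parsing tier can strip an unwanted element before it is written to the index:

[my_xml_sourcetype]
SEDCMD-drop_unwanted = s/<test2>[^<]*<\/test2>//g

Each element you want removed needs its own expression like this, which is why the approach gets tedious when there are many unwanted fields.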
Hello Giuseppe, In my case, the goal is to ensure that the data is cleaned before indexing. For instance, if the data is:

<test>dasdada</test><test2>asdasda</test2>

I only need the data for the <test> field, and I don’t want the <test2> field to appear. Additionally, there are many fields that I don’t require, so creating a regex for each unwanted field to remove it with SEDCMD or a blacklist would be challenging. Is there a way to delete fields that aren’t extracted from the log before indexing?
Hi @SalahKhattab , unless you extracted your fields at index time, fields are extracted at search time, so all the fields that you configured will be extracted. I suppose that you extracted the fields using INDEXED_EXTRACTIONS=XML; in this case, all the fields you have are extracted at search time, and this doesn't consume storage or memory. It's different if you use regex extractions and not INDEXED_EXTRACTIONS=XML; in that case, only the configured fields are extracted. Why is it so mandatory for you that the other fields aren't extracted? Ciao. Giuseppe
I have XML input logs in Splunk. I have already extracted the required fields, totaling 10 fields. I need to ensure that any other fields are ignored and not indexed in Splunk. Can I set it so that a field not in the extracted list is automatically ignored? Is this possible?