All Posts


Hi, if I add init I get the error below. Also, even without clicking the "Submit" button, the dashboard still runs the query against the environment in the background and fetches the result.
Persistent queue support for monitor inputs will be very useful once it's available.
Real-time searches see events before they are indexed.
I am a grad student and I recently took a quiz on Splunk. There was a true/false question:

Q: Splunk Alerts can be created to monitor machine data in real-time, alerting of an event as soon as it is logged by the host.

I marked it as false, because it should be "as soon as the event gets indexed by Splunk", not "as soon as the event gets logged by the host". I contested the grading because I was not awarded marks for this question, but the counter-argument was "Per-result triggering helps to achieve this". Isn't it basic that Splunk can only read the indexed data? Can anyone please verify whether I'm correct? Thanks in advance.
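For reference, a per-result real-time alert is configured along these lines in savedsearches.conf (a minimal sketch; the search, stanza name, and recipient are made up). Note that even a real-time search only sees an event after the host has forwarded it and Splunk has received it, which is exactly the distinction the question hinges on:

# savedsearches.conf - hypothetical per-result real-time alert
[rt_error_per_result]
search = index=main sourcetype=syslog log_level=ERROR
enableSched = 1
# real-time window: the search runs continuously over the last minute
dispatch.earliest_time = rt-1m
dispatch.latest_time = rt
# trigger when any result appears...
counttype = number of events
relation = greater than
quantity = 0
# ...and fire the action once per result rather than once per batch
alert.digest_mode = 0
action.email = 1
action.email.to = soc@example.com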
https://community.splunk.com/t5/Getting-Data-In/Missing-per-thruput-metrics-on-9-3-x-Universal-forwarders/m-p/702914/highlight/true#M116255
Applying this on a non-UF instance (e.g. an HF) will break thruput metrics. I've added a warning to the post. Thanks for asking a great question.
You nailed it. You may want to check https://community.splunk.com/t5/Knowledge-Management/Splunk-Persistent-Queue/m-p/688223/highlight/true#M10063
Thanks for the information. I assume the target is to fix this in a future UF 9.3.x release? Furthermore, would you happen to know what would happen if the setting were accidentally applied on an HF? Clients of our deployment server will sometimes run a Splunk Enterprise version instead of a UF, so I suspect we will need to be careful...
It may be worth adding that the acknowledgement option cannot protect against data loss in a scenario where a forwarder is restarted while the remote endpoint is unavailable.

To expand on this point, let's assume we have universal forwarder A sending data to heavy forwarder B (and only HF B), and that B connects to the indexers. If A is reading from a file and sending to B, and we shut down B and then restart A while B is still unable to process data, any "in memory" data is lost at that point, because the memory buffer is flushed on shutdown. The file monitor will then resume reading the file *after* the lost portion of the data.

This experiment is quite easy to set up in a development environment. The only point I'm adding is that acknowledgement protects (as advertised) against intermediate data loss; it does not protect against data loss when the remote endpoint is down and the source is restarted.
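For reference, a minimal sketch of the two settings under discussion (host name, port, and sizes are made up). useACK in outputs.conf enables indexer acknowledgement; persistentQueueSize in inputs.conf adds a disk-backed queue, which currently applies to network and scripted inputs, not to monitor inputs, so file-based data remains exposed to the restart-during-outage scenario above:

# outputs.conf on forwarder A
[tcpout:hfB]
server = hfB.example.com:9997
useACK = true

# inputs.conf - persistent queue for a network input
[tcp://5514]
queueSize = 1MB
persistentQueueSize = 100MB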
Hi, our company does not yet have Splunk Enterprise Security, but we are considering getting it. Currently, our security posture includes a stream of EDR data from Carbon Black containing the EDR events and watchlist hits. We want to correlate the watchlist hits to create incidents. Is this something Splunk Enterprise Security can do right out of the box, given access to the EDR data? If so, how do we do this in the Splunk Enterprise Security dashboard?
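As a rough illustration only (the index, sourcetype, and field names below are assumptions, not the actual Carbon Black add-on schema), an ES correlation search over watchlist hits might look something like this, saved with the notable event alert action so that each match shows up as an incident in Incident Review:

index=edr sourcetype=carbonblack:json watchlist_name=*
| stats count AS hit_count values(watchlist_name) AS watchlists BY dest
| where hit_count > 0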
https://docs.splunk.com/Documentation/Splunk/latest/admin/Wheretofindtheconfigurationfiles This explains how Splunk merges settings from all the configuration files into the effective configuration that is actually applied.
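You can inspect the merged result on any instance with btool, which with --debug prints each effective setting together with the file it came from (the sourcetype name here is just a placeholder):

$SPLUNK_HOME/bin/splunk btool props list my_sourcetype --debug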
Manipulating structured data with regexes is not a very good idea. It would be better to use an external tool to clean up your data before ingesting.
In Truck Simulator Ultimate, connecting platforms like Apigee Edge to Splunk is similar to integrating tracking tools for your fleet. This connection allows you to monitor and analyze API traffic data in real-time, just as tracking fuel and route efficiency improves logistics. It’s a powerful way to optimize operations smoothly.
Ok, but what is the goal of your alert? If you just want to know whether you have fewer than 10M events, you chose the worst possible way to do so. Why fetch all events if you only want their count?

index=whatever source=something
| stats count

is much, much better. And since you use only indexed fields (the index name technically isn't an indexed field, but we can assume it is for the sake of this argument), which you do, you can even do it lightning-fast as

| tstats count WHERE index=whatever source=something
Hi @gcusello , thanks for your reply. Actually, my search is not taking that much time; it takes at most 4-6 minutes to complete. The problem is that the alert triggers before the search completes, i.e. 2-3 minutes after the scheduled cron time. Only 30-40% of the search has completed by the time the alert triggers, and I'm getting alerts every day. I need the alert to trigger only after the search completes. Can you please help me with what to do in this case? Thanks in advance.
Hi @SalahKhattab , read the above link for anonymizing; you'll find the use of SEDCMD in props.conf to remove part of your logs:

SEDCMD-reduce_fields = s/<Interceptor>(.*)\<ActionDate\>2-04-24\<\/ActionDate\>(.*)\<RecordNotes\>test\<\/RecordNotes\>(.*)\<\/Interceptor\>/<Interceptor\>\<ActionDate\>2-04-24\<\/ActionDate\>\<RecordNotes\>test\<\/RecordNotes\>\<\/Interceptor\>/g

which you can test at https://regex101.com/r/fIpO23/1

Ciao. Giuseppe
Sorry, I didn't quite get your point. Let me clarify. For example, if this is my data:

<Interceptor>
<AttackCoords>-423423445345345.10742916222947</AttackCoords>
<Outcome>2</Outcome>
<Infiltrators>20</Infiltrators>
<Enforcer>2</Enforcer>
<ActionDate>2-04-24</ActionDate>
<ActionTime>00:2:00</ActionTime>
<RecordNotes>test</RecordNotes>
<NumEscaped>0</NumEscaped>
<LaunchCoords>-222222</LaunchCoords>
<AttackVessel>111</AttackVessel>
</Interceptor>

I want to extract only ActionDate and RecordNotes and ignore all other fields during ingestion, so the data is cleared of unnecessary fields. In transforms.conf, I aim to create a regex pattern for ActionDate and RecordNotes that filters out the other fields, making the resulting data look like this:

<Interceptor>
<ActionDate>2-04-24</ActionDate>
<RecordNotes>test</RecordNotes>
</Interceptor>

How can I achieve this?
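Building on the SEDCMD suggestion above, one possible approach is a single index-time substitution that keeps only the two wanted elements. This is a sketch only: the sourcetype name is a placeholder, it assumes ActionDate always appears before RecordNotes, and the (?s) flag lets . match across the newlines in the event:

# props.conf
[my_xml_sourcetype]
SEDCMD-keep_only_fields = s/(?s)<Interceptor>.*?(<ActionDate>[^<]*<\/ActionDate>).*?(<RecordNotes>[^<]*<\/RecordNotes>).*?<\/Interceptor>/<Interceptor>\1\2<\/Interceptor>/g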
Hi @SalahKhattab , no, it's the opposite: you have to define only the regex extractions for the fields you want, and the others will not be extracted (provided you didn't define INDEXED_EXTRACTIONS=XML). Let me know if I can help you more, or please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
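In practice that means props.conf entries along these lines (the sourcetype name and extraction class names are placeholders): search-time extractions are defined only for the two wanted fields, and the rest are simply never referenced:

# props.conf - search-time extractions for the two wanted fields only
[my_xml_sourcetype]
EXTRACT-action_date = <ActionDate>(?<ActionDate>[^<]+)</ActionDate>
EXTRACT-record_notes = <RecordNotes>(?<RecordNotes>[^<]+)</RecordNotes>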
Okay, got it. One last thing: is there any regex-based way to make sure that any field not in the extracted list is excluded from indexing?
Hi @tomnguyen1 , usually the easiest way is to create a scheduled search (usually the same search) with a shorter time period that saves its results to a summary index, and then run the alert on the summary index. After that, you should try to optimize your search. Let me know if we can help more; please describe your search in more detail. Ciao. Giuseppe
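A minimal sketch of that pattern (index names and schedule are made up): the scheduled search writes a pre-computed count to a summary index with collect, and the alert then reads only the small summary data:

# scheduled search, e.g. every 15 minutes over the previous 15 minutes
index=whatever source=something earliest=-15m@m latest=@m
| stats count
| collect index=my_summary

# alert search over the summary index - fast, since it only sums pre-computed counts
index=my_summary earliest=-24h@h
| stats sum(count) AS total
| where total < 10000000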