Try the limit option to the chart command. index = "xyz" |rex field=group "<Instance>(?<instance>[^<]+)</Instance>" |rex field=group "<SESSIONS>(?<sessions>\d+)</SESSIONS>" | chart limit=20 values(s... See more...
Try the limit option to the chart command. index = "xyz" |rex field=group "<Instance>(?<instance>[^<]+)</Instance>" |rex field=group "<SESSIONS>(?<sessions>\d+)</SESSIONS>" | chart limit=20 values(sessions) BY _time, instance  or index = "xyz" |rex field=group "<Instance>(?<instance>[^<]+)</Instance>" |rex field=group "<SESSIONS>(?<sessions>\d+)</SESSIONS>" | chart limit=0 values(sessions) BY _time, instance
Hi, I am kind of stuck and need help. I am creating a chart in a Splunk dashboard, and for the y-axis I have nearly 20 values which are to be shown as legends. After a certain number of values they are grouped as "other", which I don't want; I need them displayed separately. I am also willing to turn off the legend. The query used is

index = "xyz" |rex field=group "<Instance>(?<instance>[^<]+)</Instance>" |rex field=group "<SESSIONS>(?<sessions>\d+)</SESSIONS>" | chart values(sessions) BY _time, instance

Which option of the chart command will keep the y-axis values from being collapsed?
Thank you for your reply. I extracted data from Palo Alto using the Splunk Add-on for Palo Alto Networks. Here is an example.

Oct 28 13:46:12 192.168.248.2 1 2024-10-28T13:46:12+09:00 PA-VM - - - - 1,2024/10/28 13:46:09,007254000360102,TRAFFIC,start,2818,2024/10/28 13:46:09,192.168.252.100,13.107.5.93,192.168.252.2,13.107.5.93,dmz-to-internet,,,web-browsing,vsys1,DMZ,INTERNET,ethernet1/2,ethernet1/1,SecurityCheck,2024/10/28 13:46:12,497655,1,54084,443,35405,443,0x1400000,tcp,allow,5636,1220,4416,11,2024/10/28 13:46:10,0,computer-and-internet-info,,7423264892787200760,0x0,192.168.0.0-192.168.255.255,United States,,6,5,n/a,0,0,0,0,,PA-VM,from-policy,,,0,,0,,N/A,0,0,0,0,c2a50b1f-ea25-41ce-9c7c-709bde6deec4,0,0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,2024-10-28T13:46:12.041+09:00,,,internet-utility,general-internet,browser-based,4,"used-by-malware,able-to-transfer-file,has-known-vulnerability,tunnel-other-application,pervasive-use",,web-browsing,no,no,0,NonProxyTraffic,,0,0,0

About the second comment: the risk value is shown in the log. In the above example, the risk value is 4 (the value can be 1-5). It seems to be determined by Palo Alto (the Palo Alto Add-on). However, I wonder whether truly high-risk communication can be extracted from the logs, and which action is the cause of the risky communication (via a correlation search). For now, I want to build a correlation search from the Palo Alto log and the Windows event log.
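The PAN-OS TRAFFIC log body is comma-separated, but some fields (like the application characteristics list containing "used-by-malware,...") are quoted and contain commas themselves, so a CSV parser is needed rather than a plain split. A minimal sketch on a shortened, hypothetical slice of the line above (real PAN-OS logs have many more fields, and positions vary by version):

```python
import csv
import io

# Hypothetical, heavily shortened slice of the TRAFFIC log; field positions here
# are illustrative only, not the real PAN-OS field order.
line = 'TRAFFIC,start,web-browsing,4,"used-by-malware,able-to-transfer-file"'

fields = next(csv.reader(io.StringIO(line)))
log_type, action, app, risk, characteristics = fields

# The quoted characteristics list stays one field instead of splitting on commas.
print(risk)            # → 4
print(characteristics) # → used-by-malware,able-to-transfer-file
```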
Hi, if I add init I get the error below. Also, even without clicking the "submit" button, it runs the query against the env in the background and fetches the result.
Persistent queue support for monitor inputs will be very useful once it's available.
Real-time searches see events before they are indexed.
I am a grad student and I recently took a quiz on Splunk. There was a true/false question. Q: "Splunk Alerts can be created to monitor machine data in real-time, alerting of an event as soon as it logged by the host." I marked it as false because it should be "as soon as the event gets indexed by Splunk" instead of "as soon as the event gets logged by the host". I raised a dispute because I was not awarded marks for this question, but the counter-argument was "Per-result triggering helps to achieve this". But isn't it basic that Splunk can only read indexed data? Can anyone please verify whether I'm correct? Thanks in advance.
https://community.splunk.com/t5/Getting-Data-In/Missing-per-thruput-metrics-on-9-3-x-Universal-forwarders/m-p/702914/highlight/true#M116255
Applying it on a non-UF (e.g. an HF) will break thruput metrics. I added a warning to the post. Thanks for asking a great question.
You nailed it. You may want to check https://community.splunk.com/t5/Knowledge-Management/Splunk-Persistent-Queue/m-p/688223/highlight/true#M10063
Thanks for the information. I assume the target is to fix this in a future UF 9.3.x release? Furthermore, would you happen to know what would happen if the setting was accidentally applied on an HF? Clients of our deployment server will sometimes run a Splunk Enterprise version instead of a UF, so I suspect we will need to be careful...
It may be worth adding that the acknowledgement option cannot protect against data loss in a scenario where a forwarder is restarted while the remote endpoint is unavailable.

To expand on this point, let's assume we have universal forwarder A sending data to heavy forwarder B (and only HF B), and that B connects to the indexers. If A is reading from a file and sending to B, and we shut down B, then restart A during B's downtime, any "in memory" data is lost at that point because the memory buffer is flushed on shutdown. The file monitor will resume reading the portion of the file *after* the lost portion of the data.

This experiment is quite easy to set up in a development environment. The only point I'm adding is that (as advertised) the acknowledgement protects against intermediate data loss; it does not protect against data loss when the remote endpoint is down and the source is restarted.
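The failure mode described above can be sketched as a toy model (this is not Splunk code, just an illustration of the assumed mechanics): the forwarder checkpoints how far it has *read* the file, not how far the remote end has *acknowledged*, and the in-memory buffer is lost on restart.

```python
# Toy model: a forwarder reads lines from a file into an in-memory buffer,
# checkpoints the read position, and loses the buffer on restart.
file_lines = ["e1", "e2", "e3", "e4", "e5"]

class ToyForwarder:
    def __init__(self):
        self.read_offset = 0   # checkpoint: next line to read from the file
        self.buffer = []       # in-memory, not-yet-acknowledged events
        self.delivered = []    # events the remote endpoint acknowledged

    def read(self, n):
        self.buffer.extend(file_lines[self.read_offset:self.read_offset + n])
        self.read_offset += n  # checkpoint advances on read, not on ack

    def restart(self, remote_up):
        if remote_up:
            self.delivered.extend(self.buffer)  # buffer drains to the remote end
        self.buffer.clear()                     # buffer is gone either way

fw = ToyForwarder()
fw.read(2)
fw.restart(remote_up=True)    # e1, e2 delivered before shutdown
fw.read(2)                    # e3, e4 sit in memory; remote HF B goes down
fw.restart(remote_up=False)   # A restarts while B is down: e3, e4 are lost
fw.read(1)                    # file monitor resumes *after* the lost portion
fw.restart(remote_up=True)

print(fw.delivered)  # → ['e1', 'e2', 'e5']  (e3/e4 lost)
```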
Hi, our company does not yet have Splunk Enterprise Security, but we are considering getting it. Currently, our security posture includes a stream of EDR data from Carbon Black containing the EDR events and watchlist hits. We want to correlate the watchlist hits to create incidents. Is this something Splunk Enterprise Security can do right out of the box, given access to the EDR data? If so, how do we do this in the Splunk Enterprise Security dashboard?
https://docs.splunk.com/Documentation/Splunk/latest/admin/Wheretofindtheconfigurationfiles This is how Splunk merges settings from all the configuration files to create the effective configuration that gets applied.
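The merge described in that doc is a per-setting layering. A rough sketch, assuming the simplified precedence for global context (system/default < app/default < app/local < system/local; the real rules also depend on user/app context and app name ordering):

```python
# Each dict is one config layer, listed lowest precedence first; the setting
# names below (maxKBps, useACK) are just illustrative stanza keys.
layers = [
    {"maxKBps": "0",  "useACK": "false"},  # system/default
    {"maxKBps": "256"},                    # app/default
    {"useACK": "true"},                    # app/local
    {},                                    # system/local (nothing set here)
]

# Later (higher-precedence) layers overwrite earlier ones, setting by setting.
effective = {}
for layer in layers:
    effective.update(layer)

print(effective)  # → {'maxKBps': '256', 'useACK': 'true'}
```

On a real instance, `splunk btool <conf> list --debug` shows this effective configuration along with which file each setting came from.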
Manipulating structured data with regexes is not a very good idea. It would be better to use an external tool to clean up your data before ingesting.
In Truck Simulator Ultimate, connecting platforms like Apigee Edge to Splunk is similar to integrating tracking tools for your fleet. This connection allows you to monitor and analyze API traffic data in real time, just as tracking fuel and route efficiency improves logistics. It's a powerful way to optimize operations smoothly.
OK, but what is the goal of your alert? If you just want to know whether you have fewer than 10M events, you chose the worst possible way to do so. Why fetch all events if you only want their count?

index=whatever source=something | stats count

is much, much better. And since you use only indexed fields (the index name technically isn't an indexed field, but we can treat it as one for the sake of this argument), you can even do it lightning-fast as

| tstats count WHERE index=whatever source=something
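The reason tstats is so much faster can be sketched with a toy model (an illustration of the idea, not Splunk internals): a stats-style count touches every raw event, while a tstats-style count sums pre-aggregated per-bucket metadata.

```python
# Toy "index": each bucket holds raw events plus a precomputed event count,
# standing in for the indexed-field summaries tstats reads.
buckets = [
    {"count": 3, "events": ["a", "b", "c"]},
    {"count": 2, "events": ["d", "e"]},
]

# "stats count" style: scan every raw event.
slow = sum(1 for b in buckets for _ in b["events"])

# "tstats count" style: sum the precomputed per-bucket counts, never
# touching the raw events.
fast = sum(b["count"] for b in buckets)

print(slow, fast)  # → 5 5
```

Both give the same answer; the second does work proportional to the number of buckets rather than the number of events.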
Hi @gcusello, thanks for your reply. Actually my search is not taking that much time; it takes hardly 4-6 minutes to complete. The problem is that the alert triggers before the search completes, i.e. 2-3 minutes after the cron-scheduled time. Only 30-40% of the search has completed by the time the alert triggers, and I'm getting alerts every day. I need the alert to trigger only after the search completes. Can you please help me with what to do in this case? Thanks in advance.