I understand. You can make a little progress using the strategy of pulling alerts that are triggered. My search is as follows:

index=_audit action="alert_fired" ss_app=search ss_name="alert 1" OR ss_name="alert 2"
| rename ss_name AS title
| stats count by title, ss_app, _time
| sort -_time

With this search I can bring up the two alerts that I want to combine. Is it possible to get certain fields from these two alerts? In this case, I want to get the user. I can only generate the alert if the user is the same; the problem is that there are two different log providers, so the field that holds the user value has a different name in each.
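One hedged way to handle the differing field names is to correlate against the underlying events rather than the _audit index, and normalize the user field with coalesce. In this sketch the index names (provider_a, provider_b), the search terms and the field names (src_user, userName) are placeholders, not values from the post:

(index=provider_a search_terms_for_alert_1) OR (index=provider_b search_terms_for_alert_2)
| eval user=coalesce(src_user, userName)
| stats dc(index) AS sources values(index) AS indexes BY user
| where sources=2

The final where sources=2 keeps only users that appear in both data sources, which is the condition under which the combined alert should fire.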
hi @Elupt01, You can update the navigation menu to include links to all of your dashboards. If you go to: Settings > User Interface > Navigation Menus > default you will see a text box where you can put in XML to define your navigation. There are a few ways to show items. Note: DASHBOARD_NAME refers to the name of the dashboard as seen in the URL, not the title.

To link a single dashboard on the main navigation bar, use this format:

<view name="DASHBOARD_NAME" />

To create a dropdown with a bunch of dashboards, use this format:

<collection label="Team Dashboards">
  <view name="DASHBOARD_NAME_1" />
  <view name="DASHBOARD_NAME_2" />
  <view name="DASHBOARD_NAME_3" />
</collection>

If you want the dashboards to be automatically added to the menu when you create them, use this format:

<collection label="Team Dashboards">
  <view source="unclassified" />
</collection>

The "unclassified" here means it will list all dashboards not explicitly mentioned in the navigation menu. There are a few other tricks you can do, like using URLs as menu links:

<a href="https://company.intranet.com" target="_blank">Team Intranet Page</a>

Have a look at the dev docs for more detailed info: https://dev.splunk.com/enterprise/reference/dashboardnav/
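Putting those pieces together, a complete navigation definition might look roughly like the sketch below. The search_view attribute, the default view name and the dashboard name are illustrative, not taken from the thread:

<nav search_view="search">
  <view name="search" default="true" />
  <collection label="Team Dashboards">
    <view name="DASHBOARD_NAME_1" />
    <view source="unclassified" />
  </collection>
  <a href="https://company.intranet.com" target="_blank">Team Intranet Page</a>
</nav>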
Hello everyone, How can I correlate two alerts into a third one? For instance: I have alert 1 and alert 2 both with medium severity. I need the following validation in alert 3: If, after 6 hours since alert 1 was triggered, alert 2 is triggered as well, generate alert 3 with high severity.
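One possible sketch, reading the requirement as "alert 2 fires within 6 hours of alert 1" and building on the _audit approach discussed elsewhere in this thread, is a scheduled search over the alert_fired audit events that only returns a row when both alerts have fired and their last firings are at most 6 hours apart. The 24-hour search window and the alert names "alert 1" / "alert 2" are assumptions for illustration:

index=_audit action="alert_fired" (ss_name="alert 1" OR ss_name="alert 2") earliest=-24h
| stats max(_time) AS last_fired BY ss_name
| stats dc(ss_name) AS alerts_fired, range(last_fired) AS gap_seconds
| where alerts_fired=2 AND gap_seconds<=21600

Saved as alert 3 with a trigger condition of "number of results > 0" and high severity, this fires whenever both underlying alerts have gone off within the 6-hour (21600-second) window; note it does not distinguish which of the two fired first.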
Thanks for the response, it does show info, but it seems that it looks for all errors and not just 10001 and 69, and it does not seem to respect that results should only be shown when the percentage is greater than 10. Regards
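Without seeing the original search it is hard to be precise, but the two missing constraints would usually look something like the sketch below: filter the error codes in the base search and add a where clause on the percentage at the end. The index and the field names error_code and percent are assumptions, not taken from this thread, and the percentage may be computed differently in the real search:

index=your_index (error_code=10001 OR error_code=69)
| stats count BY error_code
| eventstats sum(count) AS total
| eval percent=round(100*count/total, 2)
| where percent > 10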
Ok. So I'd approach this from a different way. Let's do some initial search:

index=data

Then for each user we find their first ever occurrence:

| stats min(_time) as _time by user

After this we have a list of first logins spread across time. So now all we need is to count those logins across each day:

| timechart span=1d count

And that's it. If you also wanted to have a list of those users for each day, instead of doing the timechart you should rather group the users by day manually:

| bin _time span=1d

So now you can aggregate the values over time:

| stats count as "Overall number of logins" values(user) as Users by _time
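Putting the pieces together, the two variants end up as the following searches (index=data is kept from the description above and stands in for the real login data):

index=data
| stats min(_time) as _time by user
| timechart span=1d count as "New users"

index=data
| stats min(_time) as _time by user
| bin _time span=1d
| stats count as "Overall number of logins" values(user) as Users by _time

The first gives a count of first-time logins per day; the second adds the list of which users those were.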
@danroberts Contrary to the popular saying, here a snippet of (properly formatted) text is often worth a thousand pictures. A data sample in text form is definitely easier to deal with than a screenshot. @deepakc Your general idea is relatively ok, but it's best to avoid line-merging whenever possible (it's relatively "heavy" performance-wise). So instead of enabling line merging it would be better to find some static part which can always be matched as the event boundary. Also the TRUNCATE setting might be too low. So the question to @danroberts is where exactly the event starts/ends and how "flexible" the format is, especially regarding the timestamp position. Also remember that any additional "cleaning" (by removing the lines of dashes, which might or might not be desirable - in some cases we want to preserve the event in its original form for compliance reasons, regardless of extra license usage) comes after line breaking and timestamp recognition. Edit: oh, and KV_MODE also should rather not be set to auto (even if the data were kv-parseable, it should be set statically to something instead of auto; as a rule of thumb you should not make Splunk guess).
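As an illustration only, the props.conf direction suggested above looks roughly like the snippet below. The sourcetype name, the line-breaking regex and the timestamp format are placeholders, since the actual event boundary and timestamp position from @danroberts are not known yet:

[my_custom_sourcetype]
# Avoid line merging; break events on a static boundary instead (placeholder regex)
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} )
# Raise TRUNCATE if events are longer than the default limit
TRUNCATE = 100000
# Tell Splunk explicitly where and how to read the timestamp
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
# Set KV_MODE explicitly rather than leaving it on auto
KV_MODE = none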
Hi @nsiva Please try this:

| makeresults
| eval _raw = "123 IP Address is 1.2.3.4"
| rex field=_raw "is\s(?P<ip>.*)"
| table _raw ip

Once the rex is working fine, you can then do "|stats count by ip". Let us know what happens, thanks.
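Against the real data, and to cover the "ignore duplicates" part of the question, a hedged sketch could look like the following; index=your_index and sourcetype=your_sourcetype are placeholders, and the regex is tightened to match only an IPv4 address:

index=your_index sourcetype=your_sourcetype "IP Address is"
| rex field=_raw "IP Address is\s(?P<ip>\d{1,3}(?:\.\d{1,3}){3})"
| stats count by ip

Since stats count by ip already returns one row per distinct IP with its count, duplicates are collapsed automatically.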
Depending on your method of collection, please see here: https://docs.splunk.com/Documentation/AddOns/released/AWS/ConfigureInputs Note this portion in case this scenario applies to you: "Note: It is a best practice to collect VPC flow logs and CloudWatch logs through Kinesis streams. However, the AWS Kinesis input has the following limitations: Multiple inputs collecting data from a single stream cause duplicate events in the Splunk platform."
My output in Splunk is as below:

<error code #> IP Address is x.y.z.a

I want to extract only the x.y.z.a and its count, and duplicates should be ignored. Can someone please assist?
Hello, I have created a dashboard; it is public within my group. I want the end users to be able to open the main Splunk link and see all the team's dashboards. We have most of the dashboards linked to the app, but I don't know how to add the one I just created. I've added a picture.
Yeah, I was afraid of that. I was hoping someone would have a magic work-around I hadn't thought of, as I do tend to find some winners around here. No worries, thanks for replying.