All Posts

If I wanted to add another error, for example Codigo_error="11", what would I have to do?
Yes, you can, because the second search using the base is simply there to create the single-row result, which you can then turn into a token, e.g.

<search id="base">
  <query> bla </query>
</search>
<table depends="$hidden$">
  <search base="base">
    <query>
| stats values(device_ip_address) as device_ip_address
| eval device_ip_address=mvjoin(device_ip_address, ",")
    </query>
    <done>
      <set token="mytoken">$result.device_ip_address$</set>
    </done>
  </search>
</table>

and then your other search can use $mytoken$ as needed. Use the eval in the second search to shape the device_ip_address values into whatever format the other search needs.
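The stats values() + mvjoin(",") step above collapses the distinct device IPs into one comma-separated string suitable for a single token. As an illustration only (not SPL, just the same logic sketched in Python):

```python
def build_token(events):
    """Mimics: | stats values(device_ip_address) ... | eval mvjoin(..., ",").
    values() deduplicates and returns the values in lexicographic order."""
    distinct = sorted({e["device_ip_address"] for e in events})
    return ",".join(distinct)

events = [
    {"device_ip_address": "10.0.0.1"},
    {"device_ip_address": "10.0.0.2"},
    {"device_ip_address": "10.0.0.1"},  # duplicate, dropped by values()
]
print(build_token(events))  # 10.0.0.1,10.0.0.2
```

The resulting string is what lands in $mytoken$, so any reformatting (different separator, quoting each IP) belongs in that eval.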
If I wanted to add another error, for example Codigo_error="10001", what would I have to do?
Hello! I'm looking to set the index parameter of the collect command with the value of a field from each event. Here's an example:

| makeresults count=2
| streamstats count
| eval index = case(count=1, "myindex1", count=2, "myindex2")
| collect index=index testmode=true

This search creates two events. Both events have the index field, one with "myindex1" as the value and the other with "myindex2". I would like to use these values to set the index in the collect command.
Sorry, I misunderstood, it works correctly.
If the event has both ERROR and EXCEPTION then the status should be ERROR; the rest of them should be SUCCESS. But using the query below to get the status, I still get both SUCCESS and ERROR for some of the transaction IDs:

| eval Status=case(priority="ERROR" AND tracePoint="EXCEPTION" OR message="*Error while processing*","ERROR", priority="WARN","WARN", priority!="ERROR" AND tracePoint!="EXCEPTION" OR message!="*(ERROR):*","SUCCESS")
| stats values(Status) as Status by transactionId
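One likely cause (an assumption, not confirmed against the poster's data) is operator precedence: AND binds tighter than OR, so the first case() condition groups as (priority AND tracePoint) OR message, not priority AND (tracePoint OR message). Python has the same precedence, so the pitfall can be demonstrated outside Splunk:

```python
# AND binds tighter than OR, in Python as in SPL's eval/case().
priority, tracepoint, has_error_msg = "WARN", "FLOW", True

# Groups as: (priority=="ERROR" and tracepoint=="EXCEPTION") or has_error_msg
unparenthesized = priority == "ERROR" and tracepoint == "EXCEPTION" or has_error_msg

# Explicit parentheses change which events match the first branch
parenthesized = priority == "ERROR" and (tracepoint == "EXCEPTION" or has_error_msg)

print(unparenthesized, parenthesized)  # True False
```

The unparenthesized form matches a WARN event just because the message condition is true, which would explain transactions ending up with both statuses.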
@alfredoh14, here's some SPL that gives you a table with the app name, short name, and SQL:

| makeresults count=3
| streamstats count as id
| eval sql=case(id=1,"'' as \"FIELD\",''Missing Value'' AS \"ERROR\" from scbt_owner.SCBT_LOAD_CLOB_DATA_WORK", id=2,"'' as \"something \",''Missing Value'' AS \"ERROR\" from ART_owner.ART_LOAD_CLOB_DATA_WORK", id=3, "from Building_Mailer_owner.Building_Mailer_")
| fields sql
``` The above was just to create the source data ```
| rex field="sql" "from\s+(?<lk_wlc_app_short>.+?)_owner"
| lookup lookup_weblogic_app lk_wlc_app_short
| table lk_wlc_app_short, lk_wlc_app_name, sql

The regular expression pulls out the table name in the SQL, e.g. "from XXXX_owner", and uses the short code to match the app name from the lookup. To make the lookup work, you will need to ensure that the matches are NOT case sensitive, or make sure your lookup fields match what is in the SQL.
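The rex pattern above can be sanity-checked outside Splunk. Here is the same expression exercised with Python's re module against two of the sample SQL strings from the makeresults block (purely an illustration of the capture behaviour):

```python
import re

# Same pattern as the rex in the search: grab the short code that
# precedes "_owner" in the FROM clause. The lazy .+? stops at the
# first point where "_owner" can match.
pattern = re.compile(r"from\s+(?P<lk_wlc_app_short>.+?)_owner")

samples = [
    "'' as \"FIELD\",''Missing Value'' AS \"ERROR\" from scbt_owner.SCBT_LOAD_CLOB_DATA_WORK",
    "from Building_Mailer_owner.Building_Mailer_",
]
results = [pattern.search(sql).group("lk_wlc_app_short") for sql in samples]
print(results)  # ['scbt', 'Building_Mailer']
```

Note the lazy quantifier still captures multi-underscore codes like Building_Mailer correctly, because the expansion only stops once a literal "_owner" follows.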
I understand. I managed to make a little progress using the strategy of pulling in triggered alerts. My search is as follows: ******
I understand. You can make a little progress using the strategy of pulling the alerts that are triggered. My search is as follows:

index=_audit action="alert_fired" ss_app=search ss_name="alert 1" OR ss_name="alert 2"
| rename ss_name AS title
| stats count by title, ss_app, _time
| sort -_time

With this search I can bring up the two alerts that I want to combine. Is it possible to get certain fields from these two alerts? In this case, I want to get the user. I can only generate the alert if the user is the same; the problem is that there are two different log providers, so the field that holds the user value has different names.
Hi @Elupt01, you can update the navigation menu to include links to all of your dashboards. If you go to Settings > User Interface > Navigation Menus > default, you will see a text box where you can put in XML to define your navigation. There are a few ways to show items. Note: DASHBOARD_NAME refers to the name of the dashboard as seen in the URL, not the title.

To link a single dashboard on the main navigation bar, use this format:

<view name="DASHBOARD_NAME" />

To create a dropdown with a bunch of dashboards, use this format:

<collection label="Team Dashboards">
  <view name="DASHBOARD_NAME_1" />
  <view name="DASHBOARD_NAME_2" />
  <view name="DASHBOARD_NAME_3" />
</collection>

If you want the dashboards to be automatically added to the menu when you create them, use this format:

<collection label="Team Dashboards">
  <view source="unclassified" />
</collection>

The "unclassified" here means it will list all dashboards not explicitly mentioned in the navigation menu.

There are a few other tricks you can do, like using URLs as menu links:

<a href="https://company.intranet.com" target="_blank">Team Intranet Page</a>

Have a look at the dev docs for more detailed info: https://dev.splunk.com/enterprise/reference/dashboardnav/
Alerts are based on results of a search - for an alert to be triggered based on two conditions, your search needs to find both conditions.
I am glad it works - what do you mean by your question about earliest and latest?
So if a transaction has both ERROR and not ERROR, what do you want it to show?
I'm seeing this same error on a new build. Did you ever find an answer?
Hello everyone, how can I correlate two alerts into a third one? For instance: I have alert 1 and alert 2, both with medium severity. I need the following validation in alert 3: if alert 2 is also triggered within the 6 hours after alert 1 was triggered, generate alert 3 with high severity.
Thanks for the response. It does show info, but it seems to look for all errors, not just 10001 and 69, and it doesn't seem to respect showing results only when the percentage is greater than 10. Regards
Running 9.2 and getting the same error.
Ok, so I'd approach this a different way. Let's do some initial search:

index=data

Then for each user we find their first ever occurrence:

| stats min(_time) as _time by user

After this we have a list of first logins spread across time, so now all we need is to count those logins across each day:

| timechart span=1d count

And that's it. If you also wanted a list of those users for each day, instead of doing the timechart you should group the users by day manually:

| bin _time span=1d

So now you can aggregate the values over time:

| stats count as "Overall number of logins" values(user) as Users
@danroberts Contrary to the popular saying, here a snippet of (properly formatted) text is often worth a thousand pictures. A data sample in text form is definitely easier to deal with than a screenshot.

@deepakc Your general idea is relatively ok, but it's best to avoid line-merging whenever possible (it's relatively "heavy" performance-wise). So instead of enabling line merging, it would be better to find some static part which can always be matched as the event boundary. Also, the TRUNCATE setting might be too low.

So the question to @danroberts is where exactly the event starts/ends and how "flexible" the format is, especially regarding the timestamp position. Also remember that any additional "clearing" (removing the lines of dashes, which might or might not be desirable - in some cases we want to preserve the event in its original form for compliance reasons, regardless of extra license usage) comes after line breaking and timestamp recognition.

Edit: oh, and KV_MODE also should rather not be set to auto (even if the data is kv-parseable, it should be set statically to something instead of auto; as a rule of thumb you should not make Splunk guess).
Hi @nsiva, please try this:

| makeresults
| eval _raw = "123 IP Address is 1.2.3.4"
| rex field=_raw "is\s(?P<ip>.*)"
| table _raw ip

Once the rex is working fine, you can then do "| stats count by ip".

Let us know what happens, thanks.
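The capture behaviour of that rex can be checked outside Splunk with Python's re module, using the same sample event (an illustration of the pattern, not SPL itself):

```python
import re

raw = "123 IP Address is 1.2.3.4"
# Same capture as: | rex field=_raw "is\s(?P<ip>.*)"
# Matches the first "is" followed by whitespace, then grabs the rest of the line.
m = re.search(r"is\s(?P<ip>.*)", raw)
print(m.group("ip"))  # 1.2.3.4
```

Note that .* greedily takes everything after "is ", so if the event had trailing text after the IP, a tighter pattern like (?P<ip>\d+\.\d+\.\d+\.\d+) would be safer.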