I just changed the cron schedule.  Before that, I was only running the search from the UI.  Once the scheduler ran it, I started getting alerts.  I need to do some more cleanup, but the problem is solved.
Just a follow-up: are you readjusting the cron schedule to fire soon after making the adjustment, to test it? I'm not sure an alert action will trigger from just an "Open in Search" or "Run" action in the UI. I think the scheduler may have to kick off the search for the alert actions to be applied (unless you use the "| sendalert" command).
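For completeness, the sendalert approach mentioned above lets you fire an alert action directly from an ad-hoc search without waiting on the scheduler — a minimal sketch, where my_custom_action is a stand-in for whatever alert action you actually have installed:

```
| inputlookup path_principals_lookup
| sendalert my_custom_action
```

This runs the named custom alert action against the search results immediately, which is handy for testing the action itself separately from the trigger conditions.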
I've tried that and I didn't see anything.  I tried it again and I still don't see the alert firing.
Okay, so since your trigger condition is "search count>0", I suspect it is not firing because there is no field named 'count' for that condition to evaluate as true. Can you try this setting instead? (It should be the same logic as intended.) As long as the KV Store has results in it, your alert action should trigger every time the scheduler kicks off the search.
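For reference, one way to make a count-based custom trigger condition work is to have the search itself produce a count field — a minimal sketch, using the lookup name from this thread:

```
| inputlookup path_principals_lookup
| stats count
```

With this, a custom trigger condition of "search count > 0" has an actual count field to evaluate; alternatively, the built-in "Number of Results is greater than 0" trigger type avoids the issue entirely.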
@_JP : Yes, proper display of information. It is probably a fairly custom requirement. After some research, I am now able to do that with the sendemail command:

< my initial search here> | table hostname owner version | outputcsv test.csv | stats values(owner) as email | mvexpand email | eval subject="Test Alert", email_body="This is test email body" | map search="|inputcsv test.csv | where owner=\"$email$\" | sendemail sendcsv=true to=\"$email$\" subject=\"$subject$\" message=\"$email_body$\""

Example: if I have users test1 and test2, hosts that belong to test1 are sent to test1@gmail.com and hosts that belong to test2 are sent to test2@gmail.com. The CSV file is getting sent. But now the problem is that the subject and email body are not displayed as I set them; the email just shows "Splunk Result".
@Anonymous : I am seeing some inconsistency. Once, the SPL worked and the subject and email body were added as I specified, but sometimes it does not work. The email still gets sent, but the internal logs show the subject and email body as empty.
@isoutamo : This is how my SPL looks. The alert is scheduled to run on a weekly basis:

< my initial search here> | table hostname owner version | outputcsv test.csv | stats values(owner) as email | mvexpand email | eval subject="Test Alert", email_body="This is test email body" | map search="|inputcsv test.csv | where owner=\"$email$\" | sendemail sendcsv=true to=\"$email$\" subject=\"$subject$\" message=\"$email_body$\""

I created subject and email_body using eval and used them in sendemail.
Cool.  Not quite as fast as the original method, but the difference is minuscule.  I do like the fact that I don't have to repeat the same command.  This is nice to know.
The outputlookup should have been inputlookup.  My brain slipped a gear when I was entering the Subject.  I have corrected it.  Here is what I have in the alert.  I should give the foreach a try.
When you say you have set up an alert, what are your configured Trigger Conditions and the Alert Actions that follow? These can be found in the Edit Alert menu.

Where does outputlookup come into play here? I don't see it in the SPL you shared, but it is in the title. From the title alone, it sounds like you would like to gather results and, instead of storing them in a lookup, send them to a summary index via an alert action or the collect command. But from the body of the question, it sounds like you are just having trouble getting a scheduled search to trigger an alert.

If you run the search ad hoc and are seeing results, then I would check the Trigger Conditions and the Alert Actions configured to run when the trigger conditions are met. If those look good, then I would check the ownership of the alert itself and whether the owner has access to the KV Store.

You should be able to check the internal logs for the status of previous runs with something like this, where <alert_name> is the name of your alert:

index=_internal savedsearch_name="<alert_name>" | table _time, savedsearch_name, user, app, status, dispatch_time, run_time, result_count, alert_actions, action_time_ms

I also noticed that your search has a lot of eval commands doing much the same thing; a foreach loop might be useful here if you want to try it out:

| inputlookup path_principals_lookup | foreach domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user [ | eval <<FIELD>>=if(isnull('<<FIELD>>'), "NULL_<<FIELD>>", '<<FIELD>>') ] | dedup domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user
This is really an Excel question rather than a Splunk question. In Splunk, date-times are stored internally as the number of seconds since 1/1/1970, whereas in Excel, date-times are stored internally as the number of days since 1/1/1900 (I think). Just format the cell as a date in Excel.
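If you'd rather fix this on the Splunk side before exporting, you can render the epoch value as text so Excel needs no conversion at all — a sketch, with illustrative field names:

```
| eval time_text=strftime(_time, "%Y-%m-%d %H:%M:%S")
| table time_text, host, source
```

Alternatively, epoch seconds can be converted to an Excel serial date with | eval excel_date=_time/86400 + 25569 (86,400 seconds per day; 25569 is 1/1/1970 in Excel's 1900 date system), after which you format that cell as a date in Excel.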
Thank you for your comments. I had the feeling this might be a problem upstream but I wanted to make sure.
Making this the solution as I have provided step-by-step instructions, but all credit goes to @PickleRick for the suggestion.

Based on your response, I found a relevant, helpful post at https://community.splunk.com/t5/Dashboards-Visualizations/How-do-I-update-panel-color-in-Splunk-using-CSS/td-p/364590

I had to use the browser inspector to identify the specific elements. I inspected the data label itself (not the line/bar, whatever you're looking at), which in my case revealed the class 'highcharts-label highcharts-data-label highcharts-data-label-color-undefined'. However, there was no way to uniquely select this element by itself, so I had to refer to its parent element, which had a class of 'highcharts-data-labels highcharts-series-0 highcharts-line-series', where the numeral 0 is the series identifier (0, 1, 2, 3, and so on). Perfect! The series number I wanted to hide is 0.

This page also helped me build the selector: https://www.w3schools.com/cssref/css_selectors.php

Keep in mind the spaces in the class name are actually separators; there are three separate classes on that element. I am selecting only the first two classes and chaining them:

.highcharts-data-labels.highcharts-series-0

The following block is inserted into the panel and works like a charm, hiding only the specified data labels while preserving the others, resulting in a cleaner look:

<row>
  <panel>
    <title>Blah Blah Blah</title>
    <html depends="$alwaysHideCSSStyle$">
      <style>
        .highcharts-data-labels.highcharts-series-0 { display:none; }
      </style>
    </html>
    <chart>
...

A caveat is that display:none treats the element as if it were not there, so the chart might auto-adjust to fill the space, which may impact your desired visual layout. An alternative is visibility:hidden, which lets the element keep its space on the chart while remaining hidden. Thank you!
Hi All,

The Bloodhound TA creates a KV Store lookup. I've been asked to take the entries in the KV Store and turn them into events. I've set up an alert, but I'm not seeing the alert fire. The SPL looks like this:

| inputlookup path_principals_lookup
| eval domain_id=if(isnull(domain_id), "NULL_domain_id", domain_id)
| eval domain_name=if(isnull(domain_name), "NULL_domain_name", domain_name)
| eval group=if(isnull(group), "NULL_Group", group)
| eval non_tier_zero_principal=if(isnull(non_tier_zero_principal), "NULL_non_tier_zero_principal", non_tier_zero_principal)
| eval path_id=if(isnull(path_id), "NULL_path_id", path_id)
| eval path_title=if(isnull(path_title), "NULL_path_title", path_title)
| eval principal=if(isnull(principal), "NULL_principal", principal)
| eval tier_zero_principal=if(isnull(tier_zero_principal), "NULL_tier_zero_principal", tier_zero_principal)
| eval user=if(isnull(user), "NULL_user", user)
| dedup domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user

I see statistics, but that doesn't fire the alert. Is there something I'm missing to turn the values in the KV Store into events to be alerted on?

TIA,
Joe
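One point worth noting: inputlookup produces search results, not indexed events. A hedged sketch of turning the lookup rows into actual events — assuming a writable summary index named summary exists, and using an illustrative sourcetype name — is to append a collect:

```
| inputlookup path_principals_lookup
| eval _time=now()
| collect index=summary sourcetype=bloodhound:paths
```

Each scheduled run then writes the lookup rows into the summary index as events; the alert's trigger condition still needs results (or a field it references, such as one produced by stats count) to evaluate as true.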
OK. In your example data, only a small subset of events has the RCV.FROM string which you use as the anchor for the TestMQ field. That means most of the events don't have the field. So if you do stats by that field, you won't get results for events that have no value in it.
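A common workaround, sketched here with the field names from this thread, is to give missing fields a placeholder value before the stats so every event is counted:

```
| fillnull value="N/A" TestMQ priority
| stats count by TestMQ, priority
```

fillnull replaces null values of TestMQ and priority with "N/A", so events that never matched the RCV.FROM anchor still appear as a row in the stats output instead of being silently dropped.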
Queues become blocked when the corresponding pipeline is too slow to keep up with incoming data.  In this case, the index pipeline is unable to send data out as fast as it's coming in.  Verify the HF's destinations are all up, listening, and reachable.
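To see which queues are filling up and by how much, a search along these lines against the HF's internal metrics can help — a sketch, using the queue metric fields visible in the log message in this thread; replace <your_hf> with the forwarder's hostname:

```
index=_internal host=<your_hf> source=*metrics.log* group=queue
| eval pct_full=round(current_size_kb/max_size_kb*100, 1)
| timechart span=5m max(pct_full) by name
```

If an output queue (typically named like tcpout_<groupname>) is also pegged near 100%, the bottleneck is downstream of the HF, which points back to checking that the destinations are up and reachable.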
TestMQ doesn't appear in the same events as priority which is why the stats are coming out as zero
Hi

When the indexqueue is blocked on a HF (or another instance), you should start looking at the next host that is receiving those events. Quite often the real issue (if there is any issue) is found there. Just use the Monitoring Console to see how the queues and pipelines are doing on that host. Usually it's not an issue if a queue is blocked from time to time.

r. Ismo
As always, when you know the old URL you can try the Wayback Machine: https://web.archive.org/web/20181020030244/http://docs.splunk.com:80/Documentation/Splunkbase/splunkbase/Splunkbase/Monetizeyourcontent
I know there are similar posts about this, but I am not sure what to do or tweak here. The messages I am getting are similar to this:

01-05-2024 09:35:07.049 -0800 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=815, largest_size=1764, smallest_size=0

I have already set parallelIngestionPipelines = 2. Also, there is no indication of resource exhaustion on these heavy forwarders: CPU is constantly below 25% and RAM usage is low as well. What else can I check/do/configure to avoid this? Also, what happens to the data when this happens? Thank you!
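For reference, a sketch of where the relevant settings live on the HF — values and the output group name are illustrative, not tuning recommendations:

```
# server.conf on the heavy forwarder
[general]
parallelIngestionPipelines = 2

# outputs.conf on the heavy forwarder (primary_indexers is a stand-in
# for your own tcpout group name)
[tcpout:primary_indexers]
maxQueueSize = 7MB
```

maxQueueSize controls the in-memory output queue feeding the destinations. As for what happens to the data: broadly, a blocked queue applies backpressure upstream, so TCP-based inputs and forwarders wait rather than drop, while datagram inputs such as UDP syslog can lose data since there is nothing to hold it back.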