All Posts

@Anonymous: I am seeing some inconsistency. Sometimes the SPL works and the subject and email body are added as I specified, but sometimes it does not. The email still gets sent, but the internal logs show the subject and email body as empty.
@isoutamo This is what my SPL looks like. The alert is created to run on a weekly basis:

<my initial search here>
| table hostname owner version
| outputcsv test.csv
| stats values(owner) as email
| mvexpand email
| eval subject="Test Alert", email_body="This is test email body"
| map search="| inputcsv test.csv | where owner=\"$email$\" | sendemail sendcsv=true to=\"$email$\" subject=\"$subject$\" message=\"$email_body$\""

I created subject and email_body using eval and use them in sendemail.
Cool.  Not quite as fast as the original method, but the difference is minuscule.  I do like the fact that I don't have to repeat the same command.  This is nice to know.
The outputlookup should have been inputlookup.  My brain slipped a gear when I was entering the Subject.  I have corrected it.  Here is what I have in the alert.  I should give the foreach a try.
When you say you have set up an alert, what are your configured Trigger Conditions and the Alert Actions that follow? These can be found in the Edit Alert menu.

Where does the outputlookup come into play here? I don't see it in the SPL you shared, but it is in the title. From the title alone, it sounds like you would like to gather results and, instead of storing them in a lookup, send them to a summary index via an alert action or the collect command. But from the body of the question, it sounds like you are having trouble getting the results of a scheduled search to trigger an alert.

If you run the search ad hoc and are seeing results, I would check the Trigger Conditions and the configured alert actions. If those look good, check the ownership of the alert itself and whether the owner has access to the KV Store. You can also check the internal logs for the status of previous runs with something like this, where <alert_name> is the name of your alert:

index=_internal savedsearch_name="<alert_name>"
| table _time, savedsearch_name, user, app, status, dispatch_time, run_time, result_count, alert_actions, action_time_ms

I also noticed that your search has a lot of eval commands doing essentially the same thing; a foreach loop might be useful here if you want to try it out:

| inputlookup path_principals_lookup
| foreach domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user
    [ | eval <<FIELD>>=if(isnull('<<FIELD>>'), "NULL_<<FIELD>>", '<<FIELD>>') ]
| dedup domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user
This is really an Excel question rather than a Splunk question. In Splunk, date-times are stored internally as the number of seconds since 1/1/1970, whereas in Excel, date-times are stored internally as the number of days since 1/1/1900 (I think). Just format the cell as a date in Excel.
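If you'd rather do the conversion on the Splunk side before exporting, here is a small sketch. It assumes the Excel 1900 date system, in which Unix epoch 0 (1/1/1970) corresponds to Excel serial number 25569:

```spl
| eval excel_serial = _time/86400 + 25569
| table _time excel_serial
```

After exporting, formatting the excel_serial column as a date in Excel should show the same timestamp. Alternatively, strftime(_time, "%Y-%m-%d %H:%M:%S") exports a human-readable string directly, with no Excel formatting needed.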
Thank you for your comments. I had the feeling this might be a problem upstream but I wanted to make sure.
Making this the solution as I have provided step-by-step instructions, but all credit goes to @PickleRick for the suggestion.

Based on your response, I found a relevant, helpful post at https://community.splunk.com/t5/Dashboards-Visualizations/How-do-I-update-panel-color-in-Splunk-using-CSS/td-p/364590

I had to use the browser inspector to identify the specific elements. I inspected the data label itself (not the line/bar or whatever you're looking at), which in my case revealed the class 'highcharts-label highcharts-data-label highcharts-data-label-color-undefined'. However, there was no way to uniquely select this element by itself, so I had to refer to its parent element, which had a class of 'highcharts-data-labels highcharts-series-0 highcharts-line-series', where the numeral 0 is the series identifier (0, 1, 2, 3, and so on). Perfect! The series number I wanted to hide is 0. This also helped me specify the element: https://www.w3schools.com/cssref/css_selectors.php

Keep in mind the spaces in the class name are actually separators; there are three separate classes in that element. I am selecting only the first two classes and chaining them:

.highcharts-data-labels.highcharts-series-0

The following block is inserted into the panel and works like a charm, hiding only the specified data labels while preserving the other data labels, resulting in a cleaner look:

<row>
  <panel>
    <title>Blah Blah Blah</title>
    <html depends="$alwaysHideCSSStyle$">
      <style>
        .highcharts-data-labels.highcharts-series-0 {
          display: none;
        }
      </style>
    </html>
    <chart>
    ...

A caveat is that display:none treats this element as air: the chart might auto-adjust itself to fill the space, which may impact your desired visual layout. An alternative is visibility:hidden, which lets the element take up space on the chart while remaining hidden. Thank you!
Hi All,

The Bloodhound TA creates a KV Store lookup. I've been asked to take the entries in the KV Store and turn them into events. I've set up an alert, but I'm not seeing the alert fire. The SPL looks like this:

| inputlookup path_principals_lookup
| eval domain_id=if(isnull(domain_id), "NULL_domain_id", domain_id)
| eval domain_name=if(isnull(domain_name), "NULL_domain_name", domain_name)
| eval group=if(isnull(group), "NULL_Group", group)
| eval non_tier_zero_principal=if(isnull(non_tier_zero_principal), "NULL_non_tier_zero_principal", non_tier_zero_principal)
| eval path_id=if(isnull(path_id), "NULL_path_id", path_id)
| eval path_title=if(isnull(path_title), "NULL_path_title", path_title)
| eval principal=if(isnull(principal), "NULL_principal", principal)
| eval tier_zero_principal=if(isnull(tier_zero_principal), "NULL_tier_zero_principal", tier_zero_principal)
| eval user=if(isnull(user), "NULL_user", user)
| dedup domain_id, domain_name, group, non_tier_zero_principal, path_id, path_title, principal, tier_zero_principal, user

I see statistics, but that doesn't fire the alert. Is there something I'm missing to turn the values in the KV Store into events to be alerted on?

TIA, Joe
OK. In your example data, only a small subset of events has the RCV.FROM string which you use as the anchor for the TestMQ field. That means most of the events don't have that field at all. So if you do stats by that field, you won't get results for events where the field has no value.
Queues become blocked when the corresponding pipeline is too slow to keep up with incoming data.  In this case, the index pipeline is unable to send data out as fast as it's coming in.  Verify the HF's destinations are all up, listening, and reachable.
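To see which queues are blocking and how often, the internal metrics log can be charted directly. This is a generic sketch using the standard group=queue fields from metrics.log (the same name and blocked fields visible in the message you posted):

```spl
index=_internal source=*metrics.log* group=queue
| eval is_blocked=if(blocked=="true", 1, 0)
| timechart span=10m sum(is_blocked) AS blocked_count BY name
```

A queue that blocks only occasionally is usually harmless; a queue that stays blocked continuously points at the slow stage immediately downstream of it.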
TestMQ doesn't appear in the same events as Priority, which is why the stats are coming out as zero.
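If the two event types can be tied together by a shared identifier, you can stitch them before counting. The sketch below is built on an assumption that needs verifying against your data: that the trailing AER... token after the '-' in both event types is such a shared id (it appears to match in your samples, while the long numeric prefix does not). The base search is also a placeholder:

```spl
index=test_index "testget1"
| rex "\-(?<trn_id>AER\w+)"
| rex "\[Priority=(?<Priority>\w+),"
| rex "RCV\.FROM\.(?<TestMQ>[^@]+)@"
| stats values(Priority) AS Priority, values(TestMQ) AS TestMQ BY trn_id
| stats count AS TotalCount, count(eval(Priority="Low")) AS Low, count(eval(Priority="Medium")) AS Medium, count(eval(Priority="High")) AS High BY TestMQ
```

The first stats merges the Priority event and the RCV.FROM event for each transaction into one row; the second then counts per queue as in your original search.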
Hi, when the indexqueue is blocked on a HF (or other instance), you should start looking at the next host that receives those events. Quite often the real issue (if there is any issue) is found there. Just use the Monitoring Console to see how the queues and pipelines are working on it. Usually it's not a problem if a queue is blocked from time to time. r. Ismo
As always, when you know the old URL you can try the Wayback Machine: https://web.archive.org/web/20181020030244/http://docs.splunk.com:80/Documentation/Splunkbase/splunkbase/Splunkbase/Monetizeyourcontent
I know there are similar posts about this, but I am not sure what to do or tweak here. The messages I am getting are similar to this:

01-05-2024 09:35:07.049 -0800 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, blocked=true, max_size_kb=500, current_size_kb=499, current_size=815, largest_size=1764, smallest_size=0

I have already set parallelIngestionPipelines = 2. Also, there is no indication of resource exhaustion on these Heavy Forwarders: CPU is constantly below 25% and RAM usage is low as well.

What else can I check/do/configure to avoid this? Also, what happens to the data when this happens? Thank you!
Hi @gcusello

Please find the requested sample query and event details below. Kindly suggest.

index=test_index=*instance*/*testget*
| rex "\: (?<testgettrn>.*) \- S from"
| rex "RCV\.FROM\.(?<TestMQ>.*)\@"
| eval Priority_Level=case(Priority="Low", "Low", Priority="Medium", "Medium", Priority="High", "High")
| stats count as TotalCount, count(eval(Priority_Level="Low")) as Low, count(eval(Priority_Level="Medium")) as Medium, count(eval(Priority_Level="High")) as High by TestMQ
| fillnull value=0

Sample events:

240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400509150632034-AERG00001A [Priority=Low,ScanPriority=0, Rule: Default Rule]. host = testserver2.com source = /test/test.log sourcetype = testscan
240105 18:06:03 19287 testget1: ===> TRN@instance.RQ1: 0000002400540101635213-AERG00000A [Priority=Low,ScanPriority=0, Rule: Default Rule]. host = testserver2.com source = /test/test.log sourcetype = testscan
240105 18:06:03 19287 testget1: <--- TRN: 0000002481540150632034-AERG00001A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123]. host = testserver2.com source = /test/test.log sourcetype = testscan
240105 18:06:03 19287 testget1: <--- TRN: 0000002400547150635213-AERG00000A - S from [RCV.FROM.TEST.SEP.Q1@QM.ABC123]. host = testserver2.com source = /test/test.log sourcetype = testscan
240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540902427245-AERC000f8A [Priority=Medium,ScanPriority=2, Rule: Default Rule]. host = testserver1.com source = /test/test.log sourcetype = testscan
240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000001800540152427236-AERC000f7A [Priority=Medium,ScanPriority=2, Rule: Default Rule]. host = testserver1.com source = /test/test.log sourcetype = testscan
240105 18:02:29 72965 testget1: ===> TRN@instance.RQ1: 0000002400540109427216-AERC000f6A [Priority=High,ScanPriority=1, Rule: Default Rule]. host = testserver1.com source = /test/test.log sourcetype = testscan
It's just that the link leads to an old part of the docs site which has apparently been retired.
OK. If you mean the password policy within the Splunk itself, you should be able to find it in the _configtracker index (I'm not sure if it's available for Cloud but I assume it is) - look for changes to authorize.conf file.
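A starting point for that search might be something like this. It's a sketch: the data.* field names come from the _configtracker JSON events and may differ slightly across versions, so inspect a raw event first before relying on them:

```spl
index=_configtracker data.path="*authorize.conf"
| table _time data.path data.action data.changes{}.stanza
```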
Hi @AC1,
try something like this:

index="xx" label="xx" id=*
| stats dc(id) AS id_count

Ciao.
Giuseppe
One last question: Is my request a kind of "monetize your content", as in the link below?

https://docs.splunk.com/Documentation/Splunkbase/splunkbase/Splunkbase/Monetizeyourcontent

...which now leads to "Hi! This page does not exist, or has been removed from the documentation."

Am I looking at something that was previously supported, but now is not?

More, quoted: "you can add a license for a third party solution to your Splunk instance and have Splunk enforce it". I am not necessarily looking to have Splunk enforce it; if I could do it just within my app, that would be fine. Can this be done?

Best regards,
Altin