All Posts



If I "Schedule PDF", will the dashboard automatically take the date range or do I have to set it in some way?
Where can I download Splunk Universal Forwarder 9.0.7?
The cited manual describes how to use _time in a search, but the problem you describe happens at index time and so is not covered by that manual. Please tell us how the log gets from your application to Splunk.  Also, how much lag are you seeing?  Is there a sourcetype defined in props.conf for the data?  If so, what are the settings?  Can you share a (sanitized) sample event or two?
If you use on-prem Splunk Enterprise then you can set the From email address to any value you wish.  If you use Splunk Cloud then the From address cannot be changed. Either way, I get the impression "+untrusted" is being added to the From field after the message leaves Splunk - probably by your email service.  You should talk to your email admin about that.
If it's truly a report then, yes, it can be sent in the body of an email.  However, dashboards cannot be sent that way and must be sent as a PDF.
The problem is that there is a lag in the log shipping from our application to Splunk. After some investigation we realized that we can override the event time by providing a _time property in the logs (ref: https://docs.splunk.com/Documentation/SCS/current/Search/Timestampsandtimeranges), and that it should be UNIX epoch time (seconds). We did that, but it had no effect on the event time and the time difference persists. We have been testing many possibilities for a while now, yet none of them has done the trick.
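If the logs reach Splunk through the HTTP Event Collector, one common pitfall is that the timestamp must be supplied in the top-level "time" key of the HEC envelope (epoch seconds), not as a "_time" field inside the event body, which is generally ignored at index time. A minimal sketch, assuming HEC is in use; the URL and token below are placeholders, not real values:

```python
import json
import urllib.request

# Hypothetical HEC endpoint and token -- placeholders, replace with your own.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_payload(event: dict, epoch_seconds: float) -> str:
    """Build a HEC envelope that pins the event's index-time timestamp.

    The timestamp goes in the top-level "time" key of the envelope
    (epoch seconds, fractions allowed); a "_time" field inside the
    event body itself is generally not honored at index time.
    """
    return json.dumps({
        "time": epoch_seconds,
        "event": event,
        "sourcetype": "_json",
    })

def send_event(event: dict, epoch_seconds: float) -> bytes:
    """POST one event to HEC with an explicit timestamp."""
    req = urllib.request.Request(
        HEC_URL,
        data=build_hec_payload(event, epoch_seconds).encode("utf-8"),
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

If the logs arrive by a different path (forwarder, file monitor), the fix belongs in props.conf (TIME_PREFIX / TIME_FORMAT) instead.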
Is there a way to print the entire report in the email instead of a PDF attachment?
Hello All,  Currently we have set up a use case to send emails whenever a condition is satisfied and an alert fires. My concern is that when the email is received, the address in the FROM field is "abc.xyz+untrusted@jkl.com", and we think that some mailboxes are not receiving these emails because of the untrusted email address; please correct me if I have misunderstood. Also, is there a way to add "abc.xyz@jkl.com" to a trusted email group or something like that? Or is there a different way to get the actual email address instead of the +untrusted address whenever an email is sent out from Splunk? Hope this makes sense.  Thanks,
In these kinds of situations in Splunk I generally do something like this to replace empty strings with actual null values:

| foreach err_field*
    [ | eval <<FIELD>>=if( '<<FIELD>>'=="" OR match('<<FIELD>>', "^\s*$"), null(), '<<FIELD>>' ) ]
| eval err_final=coalesce(err_field1, err_field2, err_field3, err_field4)

You can see the coalesce works as expected after nullifying the empty strings. Note: this also replaces any values in the err_field* fields that are only whitespace, in addition to empty strings.
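The reason the nullifying step matters is that coalesce returns the first non-null argument, and an empty string is not null, so it "wins" over later fields that hold real values. A rough Python analogue of the SPL logic above, for illustration only:

```python
import re

def nullify_blank(value):
    """Mimic the SPL eval above: turn "" or whitespace-only strings into None."""
    if value is not None and re.fullmatch(r"\s*", value):
        return None
    return value

def coalesce(*values):
    """Like SPL coalesce(): return the first value that is not null (None here)."""
    for v in values:
        if v is not None:
            return v
    return None

# err_field1 is an empty string, err_field2 is whitespace-only.
fields = {"err_field1": "", "err_field2": "  ",
          "err_field3": "timeout", "err_field4": "msg"}
cleaned = {k: nullify_blank(v) for k, v in fields.items()}
err_final = coalesce(cleaned["err_field1"], cleaned["err_field2"],
                     cleaned["err_field3"], cleaned["err_field4"])
# err_final is "timeout": the blank fields are skipped once they become None
```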
Dashboard Studio is still under development, so some features which are available in SimpleXML / Classic dashboards are either not available or not fully working; it will depend on which version of Splunk you are using.
When using BREAK_ONLY_BEFORE, set SHOULD_LINEMERGE = true.

[snow:all:devices]
KV_MODE = xml
BREAK_ONLY_BEFORE = \<item>
SHOULD_LINEMERGE = true
DATETIME_CONFIG = NONE
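Roughly, SHOULD_LINEMERGE = true with BREAK_ONLY_BEFORE means lines are appended to the current event until a line matches the pattern, which starts a new event. A simplified Python model of that grouping (an illustration of the semantics, not Splunk's actual pipeline):

```python
import re

def break_only_before(lines, pattern):
    """Group lines into events, starting a new event whenever a line
    matches the pattern -- a simplified model of SHOULD_LINEMERGE = true
    with BREAK_ONLY_BEFORE."""
    regex = re.compile(pattern)
    events, current = [], []
    for line in lines:
        # match() anchors at the start of the line, like "\<item>" intends
        if regex.match(line) and current:
            events.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        events.append("\n".join(current))
    return events

lines = ["<item>", "  <name>host1</name>", "</item>",
         "<item>", "  <name>host2</name>", "</item>"]
events = break_only_before(lines, r"<item>")
# two events, one per <item>...</item> block
```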
From the dashboard, select "Schedule PDF Delivery" from the Export dropdown.  Check the "Schedule PDF" box and fill in the form.  In the Schedule field, select "Run on Cron Schedule" then put "13 0-23/3 * * 0,6".
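Reading that cron expression field by field: minute 13, every 3rd hour from 0 through 23, any day of month, any month, and days-of-week 0 and 6 (Sunday and Saturday). A minimal sketch of how the two non-trivial fields expand; this tiny parser handles only the forms used here, not full cron syntax:

```python
def expand_field(field: str, lo: int, hi: int) -> list:
    """Expand a cron field like "0-23/3" or "0,6" into concrete values.
    Handles only lists, ranges, steps, and "*" -- not full cron syntax."""
    values = []
    for part in field.split(","):
        if part == "*":
            values.extend(range(lo, hi + 1))
            continue
        spec, _, step = part.partition("/")
        step = int(step) if step else 1
        if "-" in spec:
            start, end = map(int, spec.split("-"))
        else:
            start = end = int(spec)
        values.extend(range(start, end + 1, step))
    return values

hours = expand_field("0-23/3", 0, 23)  # 0, 3, 6, 9, 12, 15, 18, 21
days = expand_field("0,6", 0, 6)       # 0 (Sunday) and 6 (Saturday)
```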
Splunk Cloud Version: 9.0.2209.4
It looks like err_field1 contains an empty string.  If it were null, then err_final would be set to err_field2 or err_field3.
I'd like to set up an email notification for the following dashboard, specifically on Saturdays and Sundays at three-hour intervals. Since I receive files only on these days, this schedule aligns with our data delivery. Could someone guide me on configuring this setup?
Your LINE_BREAKER (and EVENT_BREAKER - they work very similarly but at different levels) makes no sense. This parameter is not used to find a whole event. It is supposed to find and match the text which is _between_ events (the part captured within the capture group is discarded, as it belongs to neither the preceding nor the following event). That's why by default it matches ([\r\n]+) - it finds every sequence of consecutive end-of-line characters, splits the stream where those sequences occur, and removes those sequences from the ingestion pipeline. In your case the situation is more complicated, since you're trying to do a Bad Thing (tm), which is to approach structured data with simple regex manipulation. You could try to define your LINE_BREAKER as ^}(,[\r\n]+){ which would mean that Splunk is to break events where a line contains only "}," and immediately after that another "{" starts (possibly with several empty lines in between). But you're running the risk of: 1) Incorrectly splitting your events in case you have a more complicated JSON structure. 2) Leaving the opening and dangling square brackets as parts of the events (this one could be mitigated by editing the regex further, but at the expense of increasing risk number 1).
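The capture-group semantics described above can be modeled in a few lines of Python: split the stream at each match, discard only the captured text, and keep the rest of the match attached to the adjacent events. This is an illustration of the semantics, not Splunk's actual pipeline, run against a hypothetical JSON-array stream:

```python
import re

SENTINEL = "\x00"

def line_break(stream: str, pattern: str) -> list:
    """Simplified model of LINE_BREAKER: split the stream at each match of
    the pattern, discarding only the first capture group's text; the rest
    of the match stays with the adjacent events."""
    regex = re.compile(pattern, re.MULTILINE)

    def cut(m):
        # Keep the match text outside group 1, mark a break where group 1 was.
        rel_start = m.start(1) - m.start(0)
        rel_end = m.end(1) - m.start(0)
        return m.group(0)[:rel_start] + SENTINEL + m.group(0)[rel_end:]

    return regex.sub(cut, stream).split(SENTINEL)

stream = '[\n{\n  "a": 1\n},\n{\n  "b": 2\n}\n]'
events = line_break(stream, r"^}(,[\r\n]+){")
# Two events -- but note the first keeps the leading "[" and the last keeps
# the trailing "]", exactly the dangling-bracket risk mentioned above.
```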
index="********" message_type=ERROR correlation_id="*"
| eval err_field1 = spath(_raw,"response_details.body")
| eval err_field2 = spath(_raw,"response_details")
| eval err_field3 = spath(_raw,"error")
| eval err_field4 = spath(_raw,"message")
| eval err_final=coalesce(err_field1,err_field2,err_field3,err_field4)
| table err_field1 err_field2 err_field3 err_field4 err_final

I have the fields populating for err_field3 and err_field4, but err_final is not populating. Attached a screenshot for reference.
If you're on Cloud, you can't send your syslog directly to the cloud and need a local forwarder (or SC4S instance) anyway. So it doesn't matter much whether it's TCP or UDP (at least in terms of on-site vs. Cloud).
Hi @gcusello  Yes, it looks the same, but the issue is that we cannot change the height of the graph for better visibility.
Hi @vinod743374, it seems to be the closest to your requirement. Ciao. Giuseppe