All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

You're trying to send via Gmail using SMTP AUTH, and apparently Gmail doesn't support that (as far as I remember, Google stopped supporting "basic" authentication such as plain login over IMAP or SMTP AUTH quite a few years ago).
The problem is not well defined. If you simply want the ratio of good vs. bad, stats count by state (good/bad) should be enough. If you have events indicating the start/stop of periods of good/bad state, it's going to be harder and you'll have to use streamstats.
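For the start/stop case, a streamstats sketch along these lines might work (the index name and the state field are assumptions, not from the original question):

```
index=mydata state=good OR state=bad
| sort 0 _time
| streamstats current=f last(_time) as prev_time last(state) as prev_state
| eval duration = _time - prev_time
| stats sum(duration) as total_seconds by prev_state
```

Each event's timestamp is differenced against the previous event's, and that gap is attributed to whichever state was active during it.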
Hello there, did you find out how to do it? If so, could you share it?
Unfortunately that won't work, as there can be an unlimited number of consecutive strings of events between the two logged events, and it needs to calculate the duration of each, which I haven't seen any solution in the community successfully solve.
While not the most efficient command in the book, perhaps the transaction command could be helpful, because you can define the start/end events and it will calculate things like the duration of the overall transaction for you. Also, this discussion seemed similar to yours: How to calculate uptime percentage based on my dat... - Splunk Community
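As a hedged sketch of the transaction approach (the startswith/endswith markers and index name are assumptions about the data):

```
index=mydata
| transaction startswith="state=bad" endswith="state=good"
| stats sum(duration) as total_downtime_seconds, count as outages
```

transaction adds a duration field to each assembled event, so summing it gives the total time spent in the bad state.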
Are Splunk Enterprise and the Universal Forwarder (UF) running on the same server?  If so, it's unnecessary and can lead to problems.  Remove the UF, since Splunk Enterprise is fully capable of monitoring files without it. If they are on separate servers, confirm there is no firewall blocking connections between them. Check the logs on the UF (/opt/splunkforwarder/var/log/splunk/splunkd.log) for any messages that might explain why data is not being sent to Splunk. Finally, neither Splunk Enterprise nor the Universal Forwarder should run as root, because that creates a security risk.
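If the UF is forwarding its own _internal logs, you can also check for problems from the search side; a sketch (the host name is a placeholder):

```
index=_internal host=my_uf_host source=*splunkd.log* log_level=WARN OR log_level=ERROR
| stats count by component, log_level
```

No results here for the UF's host can itself be a clue that nothing is arriving from it at all.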
Below is the sample JSON log content. The main fields are default extractions, but the nested ones aren't. Please help extract the nested space-separated data as fields. The one I want to extract as separate fields is the tag line:

service=z2-qa1-local-z2-api-endpoint APPID=1234 cluster=z2-qa1-local application=z2 full-imagename=0123456789.dkr.10cal/10.20/xyz container-id=asdfgh503 full-container-id=1234567890

Whole log event:

{
  "line": {
    "@timestamp": "2023-10-31T20:36:57.092Z",
    "class": "x.x.x.x.x.Logging",
    "exception": "",
    "line": 54,
    "marker": "",
    "message": "GET https://00.00.000.000:123456/management/health forwarded from [] by [] for unknown returned 200 in 1ms",
    "pid": 7,
    "severity": "INFO",
    "span": "b60d05680b3cbfa7",
    "thread": "boundedElastic-9",
    "trace": "b60d05680b3cbfa7"
  },
  "source": "stdout",
  "tag": "service=z2-qa1-local-z2-api-endpoint APPID=1234 cluster=z2-qa1-local application=z2 full-imagename=0123456789.dkr.10cal/10.20/xyz container-id=asdfgh503 full-container-id=1234567890"
}
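One possible approach (an untested sketch, assuming the JSON above is in _raw): pull the tag field out with spath, then rex each key from it. The negative lookbehind on the last rex keeps container-id from matching inside full-container-id.

```
| spath path=tag
| rex field=tag "service=(?<service>\S+)"
| rex field=tag "APPID=(?<APPID>\S+)"
| rex field=tag "cluster=(?<cluster>\S+)"
| rex field=tag "application=(?<application>\S+)"
| rex field=tag "full-imagename=(?<full_imagename>\S+)"
| rex field=tag "full-container-id=(?<full_container_id>\S+)"
| rex field=tag "(?<!full-)container-id=(?<container_id>\S+)"
```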
Hello Team, I need your help. I was in the process of creating a Splunk email alert but got an issue, as shown in the picture below. Please help me. Thank you in advance.
It's not entirely clear what the expected output should be, but perhaps this helps.  It counts the number of sourcetypes for each endpoint and filters out events where the endpoint is in both sourcetypes.

sourcetype=A
| rex field=_raw "John\s+(?<endpoint>\w+)"
| append [| search sourcetype=B "Live" | rex field=_raw "Mike\s+(?<endpoint>\w+)"]
| stats count by endpoint
| where count = 1
Example field extractions in props.conf look like this:

EXTRACT-action = Action: \[(?<action>[^\]]+)\]
EXTRACT-user = User: (?<user>\S+)

What follows the = is a regular expression, very much like what is used with the rex command.  With these examples and a little experimentation on regex101.com you should be able to extract the remaining fields. If you have trouble, please post the field you're trying to extract and the command you tried.
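In context, those settings live under the sourcetype's stanza in props.conf (the stanza name here is a placeholder for your actual sourcetype):

```
[my_sourcetype]
EXTRACT-action = Action: \[(?<action>[^\]]+)\]
EXTRACT-user = User: (?<user>\S+)
```

These are search-time extractions, so no re-indexing is needed after adding them.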
Looking to build one correlation search to do the following: bring an extracted field name from one ST and search that field across another ST. If it hits in both STs, do not alert. If it hits only in the first ST, alert. Ideally, this would function similarly to how $host$ can be used in a drilldown to pull the host name, though I'm not sure this is possible for a correlation search. Is there a command to do a comparison like this? So far I have the following returning results:

sourcetype=A
| rex field=_raw "John\s+(?<endpoint>\w+)"
| append [| search sourcetype=B "Live" | rex field=_raw "Mike\s+(?<endpoint>\w+)"]

This does give me results from both indexes, but it is not correlating results from A to B (obviously). I have tried several commands (join, transaction, coalesce, etc.) and removed these failed attempts from the above for simplicity. I may have been using these commands incorrectly as well.  TYIA
Are you collecting events from a lot of different sourcetypes?  If you just have one sourcetype per alert, can you pass it into the macro like you would other parameters? I configured a macro (its definition was shown in a screenshot) and then was able to use it like this:

index=_internal earliest=-15m
| stats count by component
| `collect_macro(the_index=summary,the_sourcetype=special_sourcetype)`

And then in my summary index I saw events appear with the special_sourcetype instead of stash.
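Since the macro definition itself was in a screenshot that didn't survive, here is a guess at what a macros.conf version could look like, with the argument names taken from the usage shown:

```
[collect_macro(2)]
args = the_index, the_sourcetype
definition = collect index=$the_index$ sourcetype=$the_sourcetype$
```

The (2) in the stanza name is the argument count, and $the_index$/$the_sourcetype$ are substituted at search time.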
The problem I am having is adding up all the consecutive durations of each "period" good and bad, when they occur in random lengths repeatedly throughout the time searched.
@_JP thanks for your response. I'm not trying to define a new sourcetype here. Instead, I want the same sourcetype to be kept when collected from the search. I cannot define a custom sourcetype after the collect command because the sourcetype varies based on the type of alert/alert name. I want to reference this macro in most of the alerts!
This solved question seems to be what you're looking for:   Solved: How to get the Audit for Lookup files modification... - Splunk Community If you don't want any changes at all, and you're on a *nix system, can you deploy your lookup with read-only permissions on the file within the app?
The collect command does allow you to define a sourcetype.  Note that the stash sourcetype is special in that it doesn't count against your license volume.  When you use collect with a different sourcetype, Splunk considers it "new" data, since you may not be generating summary statistics on data that was already indexed. Also, since this is tied to an alert, would using the Log Event alert action be sufficient?
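For example, a sketch of collect with an explicit sourcetype (the index, search, and sourcetype names here are placeholders):

```
index=main error
| stats count by host
| collect index=summary sourcetype=my_custom_sourcetype
```

As far as I know, the sourcetype argument has to be a literal string in the search; collect does not resolve a per-event field value there.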
@gcusello I found a similar solution of yours, but I am unable to adapt it to my use case. Can you please help me tweak the search for mine? https://community.splunk.com/t5/Alerting/How-do-you-detect-when-a-host-stops-sending-logs-to-Splunk/m-p/369071
It sounds like you will have to build an SPL query using the eventstats command, or possibly the streamstats command.  Since I can't see your data I'm not sure what would be the best approach, but there is a slight difference between these two commands.

eventstats is like the stats command in that it looks at all of the events matched by your query, but it does not transform the stream; it just adds additional fields to every event. For example, you could count your up and bad events using eventstats by host.  Then, each event for that host would carry the total counts.  So if there were six up events and seven bad events for a host, each of those 13 events would have an up value of six and a bad value of seven.

Alternatively, streamstats only looks at events in the stream up to and including the point where you are in the stream - it doesn't know about "future" events in the result set.  This is good for things like a running average, but it has other uses too.  So in your case, the first up event would have a count of one, the second up event a count of two, the first bad event a count of one, and so on; the last up event would have a count of six and the last bad event a count of seven.

I know you mentioned duration - you can also add up the time differences using these commands by doing math on _time.
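A rough sketch of the two behaviors described above (the index name and the state field values are assumptions):

```
index=mydata
| eventstats count(eval(state="up")) as up_total, count(eval(state="bad")) as bad_total by host
| streamstats count as running_count by host, state
```

eventstats stamps every event with its host's totals, while streamstats gives each event its running position within its host/state combination.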
Based on your description, it sounds like you are looking to use the drilldown actions of a visualization to change something on the existing page. While not exactly what you're doing, here are some related posts:

Solved: How to create a drill down from one panel to anoth... - Splunk Community
Solved: Single value drilldown click to display and click ... - Splunk Community

Also, a couple of external resources discussing how the tokens work:

The Beginner’s Guide to Splunk Drilldowns With Conditions – Kinney Group
Define Your Drilldown in Splunk: $click.value$ vs $click.value2$ – Kinney Group
Hi Splunkers, I'm trying to send alert data from one index to another using a macro. For example, the macro has 4 arguments like below, and I would like to send data to a new index called "newidx" using the collect command. Here is the macro, called `newmacro`:

eval apple=xyz, banana=abc, mango=www, grape=123
| collect index=newidx

The idea is that wherever I reference this macro in an alert, that exact alert's raw data needs to be copied to newidx, but the sourcetype always changes to stash instead of the original, and I don't see all the original fields in the summary index. Is there any way to define a sourcetype, something like:

| collect index=newidx sourcetype=$sourcetype$