@R15 wrote: There is no other portion; running the same search as in your screenshot, I get the error. Could you please take a screenshot and attach it for our reference? Thanks.
No, it's batch analysis, a term coined by Bamm Visscher: you only look at results every few days or weekly. I don't want it to hit once for every result. The whole ask of the post is to find out how I can get a report not to send if there aren't any results. That's really what I want to do.

In alerts, I can only select "once per result," which doesn't work for me because I want them in a batch (many alerts in one alert, so to speak); I don't want it to fire every time there is a hit. This is used for high-false-positive alerts that we only want to look at every few days. I'm not sure how to make what I'm looking for much clearer than I have in my responses. I want either an alert that sends one consolidated alert for 3 days' worth of alerts, or a report with 3 days' worth of consolidated search results in place of an alert, but I don't want the report to send if there are 0 results.
@inventsekar apparently avg is not an eval function in my version of Splunk. It's available with other commands like chart and stats, but not with eval. Something like this works, however: ... | chart eval(avg(bytes) ...
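In case it helps anyone else, a minimal sketch of the usual alternatives, assuming a numeric bytes field and an illustrative host split field:

... | stats avg(bytes) as avg_bytes
... | chart eval(avg(bytes)/1024) as avg_kb by host

avg() is an aggregation function for stats/chart/timechart, so it has to live inside one of those commands (optionally wrapped in chart's eval()) rather than in a standalone eval.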
Thanks for the quick response. So endpoint would be a rex'd field, but I want to search on the specific endpoint name from the first rex command. I'm also looking to correlate that the specific endpoint name is present in both sourcetypes. endpoint is not an extracted field in either ST, so it needs to be rex'd out of both. The above may work if I were able to run it as "| stats count by $endpoint$". For example, I could build the alert as:
Correlation search:
sourcetype=A | rex field=_raw "John\s+(?<endpoint>\w+)"
| stats count by endpoint
Drill Down:
sourcetype=B "Live" | search $endpoint$
(In this case, the drill-down would become a keyword search on the endpoint name rather than a rex'd field.) This would work and create an alert that would just need to be manually closed if validated by proving the same endpoint is present in both STs. I would like to reduce this noise if possible.
The easiest (although I still have no idea if that's what you need) approach will probably be something like this:
<your_search>
| stats list(module) as modules by transactionID
| eval modules=mvjoin(modules," ")
| stats count by modules
You're trying to send via Gmail using SMTP AUTH, and apparently Gmail doesn't support that (as far as I remember, Google stopped supporting "normal" authentication like plain login for IMAP or SMTP AUTH quite a few years ago).
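If Gmail is a hard requirement, one workaround that is often suggested is a Google app password (which requires 2-Step Verification on the account) in place of the normal account password. A hedged sketch of what the [email] stanza in alert_actions.conf might look like under that assumption; the host, account, and credentials below are placeholders:

[email]
mailserver = smtp.gmail.com:587
use_tls = 1
auth_username = alerts@example.com
auth_password = <16-character app password>
from = alerts@example.com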
The problem is not well defined. If you simply want the ratio of good vs. bad, stats count by state (good/bad) should be enough. If you have events indicating the start/stop of periods of good/bad state, it's gonna be harder and you'll have to use streamstats.
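For the simple case, a minimal sketch, assuming each event carries a state field whose value is good or bad (the field name is an assumption):

<your_search>
| stats count by state
| eventstats sum(count) as total
| eval percent=round(100*count/total,2)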
Unfortunately that won't work, as there can be an unlimited number of consecutive strings of events between the 2 logged events, and it needs to calculate the duration of each, which I haven't seen any solutions in the community successfully solve.
While not the most efficient command in the book, perhaps the transaction command could be helpful, because you can define the start/end events and it will calculate things like the duration of the overall transaction for you. Also, this discussion seemed to be similar to yours: How to calculate uptime percentage based on my dat... - Splunk Community
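A rough sketch of how that might look, assuming a bad period starts with a state=bad event and ends with the next state=good event (the field name and values are assumptions):

<your_search>
| transaction startswith="state=bad" endswith="state=good"
| stats sum(duration) as total_bad_seconds count as bad_periods

transaction fills in a duration field (seconds from the first to the last event of each transaction), so summing it totals the time spent in the bad state even when the periods repeat at random lengths.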
Are Splunk Enterprise and the Universal Forwarder (UF) running on the same server? If so, that's unnecessary and can lead to problems; remove the UF, since Splunk Enterprise is fully capable of monitoring files without it.

If they are on separate servers, confirm there is no firewall blocking connections between them. Check the logs on the UF (/opt/splunkforwarder/var/log/splunk/splunkd.log) to see if there are any messages that might explain why data is not being sent to Splunk.

Finally, neither Splunk Enterprise nor the Universal Forwarder should run as root, because that creates a security risk.
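Two quick checks you might run on the UF host, assuming a default install path:

/opt/splunkforwarder/bin/splunk list forward-server
grep -i error /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20

list forward-server shows whether the configured indexer appears as an active or an inactive forward, which usually narrows things down to connectivity vs. configuration.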
Below is the sample JSON log content. The main fields are default extractions, but the nested ones aren't. Please help to extract the nested space-separated data as fields. The one I want to extract as separate fields is the tag:

service=z2-qa1-local-z2-api-endpoint APPID=1234 cluster=z2-qa1-local application=z2 full-imagename=0123456789.dkr.10cal/10.20/xyz container-id=asdfgh503 full-container-id=1234567890

Whole log event:

{
  "line": {
    "@timestamp": "2023-10-31T20:36:57.092Z",
    "class": "x.x.x.x.x.Logging",
    "exception": "",
    "line": 54,
    "marker": "",
    "message": "GET https://00.00.000.000:123456/management/health forwarded from [] by [] for unknown returned 200 in 1ms",
    "pid": 7,
    "severity": "INFO",
    "span": "b60d05680b3cbfa7",
    "thread": "boundedElastic-9",
    "trace": "b60d05680b3cbfa7"
  },
  "source": "stdout",
  "tag": "service=z2-qa1-local-z2-api-endpoint APPID=1234 cluster=z2-qa1-local application=z2 full-imagename=0123456789.dkr.10cal/10.20/xyz container-id=asdfgh503 full-container-id=1234567890"
}
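One possible approach, a minimal sketch assuming the event is valid JSON (so the tag field is reachable with spath) and the key=value pairs in tag stay in this order; note that regex named groups can't contain hyphens, so the hyphenated keys land in underscore-named fields:

<your_search>
| spath path=tag
| rex field=tag "service=(?<service>\S+)\s+APPID=(?<APPID>\S+)\s+cluster=(?<cluster>\S+)\s+application=(?<application>\S+)\s+full-imagename=(?<full_imagename>\S+)\s+container-id=(?<container_id>\S+)\s+full-container-id=(?<full_container_id>\S+)"

If the order of the pairs isn't guaranteed, an alternative is one rex per key.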
Hello Team, I need your help. I was in the process of creating a Splunk email alert but got an issue, as shown in the picture below. Please help me. Thank you in advance.
It's not entirely clear what the expected output should be, but perhaps this helps. It counts the number of sourcetypes for each endpoint and filters out endpoints that appear in both sourcetypes.
sourcetype=A
| rex field=_raw "John\s+(?<endpoint>\w+)"
| append [| search sourcetype=B "Live"
| rex field=_raw "Mike\s+(?<endpoint>\w+)"]
| stats dc(sourcetype) as sourcetypes by endpoint
| where sourcetypes = 1
Example field extractions in props.conf look like this:
EXTRACT-action = Action: \[(?<action>[^\]]+)\]
EXTRACT-user = User: (?<user>\S+)
What follows the = is a regular expression very much like what is used with the rex command. With these examples and a little experimentation in regex101.com, you should be able to extract the remaining fields. If you have trouble, please post the field you're trying to extract and the command you tried.
Looking to build 1 correlation search to do the following: bring an extracted field name from 1 ST and search that field name across another ST. If it hits in both STs, do not alert. If it only hits in the first ST, do alert. Ideally, this would function similar to how $host$ can be used in a drill-down to pull the host name, though I'm not sure this is possible for a correlation search. Is there a command to do a comparison like this? So far I have the following returning results:

sourcetype=A
| rex field=_raw "John\s+(?<endpoint>\w+)"
| append [| search sourcetype=B "Live"
| rex field=_raw "Mike\s+(?<endpoint>\w+)"]

This does give me results from both indexes, but it is not correlating results from A to B (obviously). I have tried several commands (join, transaction, coalesce, etc.) and removed these failed attempts from the above for simplicity. I may have been using these commands incorrectly as well. TYIA
Are you collecting events from a lot of different sourcetypes? If you just have one sourcetype per alert, can you pass it into the macro like you would other parameters? I configured a macro like the following (sketched below), and then was able to use it like this:

index=_internal earliest=-15m
| stats count by component
| `collect_macro(the_index=summary,the_sourcetype=special_sourcetype)`

And then in my summary index I saw events appear with the special_sourcetype instead of stash:
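The macro definition itself was in a screenshot; a plausible reconstruction of it in macros.conf, assuming it simply forwards its two arguments to the collect command (argument names taken from the usage above):

[collect_macro(2)]
args = the_index, the_sourcetype
definition = collect index=$the_index$ sourcetype=$the_sourcetype$

One caveat worth checking: summary events written with a sourcetype other than stash may count against your license, per the collect documentation.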
The problem I am having is adding up all the consecutive durations of each "period", good and bad, when they occur repeatedly in random lengths throughout the time searched.
@_JP thanks for your response. I'm not trying to define a new sourcetype here. Instead, I want the events' original sourcetype to be kept when they are collected from the search. I cannot hard-code a custom sourcetype in the collect command because the sourcetype varies based on the type of alert/alert name. I want to reference this macro in most of the alerts!