Still not working. I replaced the semicolon with an "=" sign. Please check the screenshot.
=============================================================================
Sample raw data
You can use the appendpipe command for this - https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Appendpipe Either create temporary fields and count them (which is the more straightforward solution):
| eval is_small=if(your_field<threshold,1,0)
| eval is_big=if(your_field>another_threshold,1,0)
| appendpipe [stats sum(is_small) as "Small Values" sum(is_big) as "Big Values"]
Alternatively to creating temporary fields, you can use eval-based stats like
sum(eval(if(your_field>another_threshold,1,0))) as "Big Values"
But this is more advanced functionality and the syntax can be a bit confusing.
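For illustration, here is a minimal end-to-end sketch of the eval-based variant, assuming a hypothetical numeric field response_time and thresholds of 100 and 1000 (the index, sourcetype, field name, and thresholds are all placeholders - adjust to your data):

index=your_index sourcetype=your_sourcetype
| stats sum(eval(if(response_time<100,1,0))) as "Small Values"
        sum(eval(if(response_time>1000,1,0))) as "Big Values"

This computes both counts in a single stats pass, without the intermediate eval fields.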
Good morning, I have some alerts set up that are not triggering. They are Defender events. If I run the query as a normal search, I get the results that the alerts are missing. However, for some reason the alerts are not triggered: the email is not sent, and they do not appear in the Triggered alerts section. This is my alert, and this is one of the events for which it should have triggered but did not: I also tried disabling throttling in case that was causing the problem. I also checked whether the search had been skipped, but it was not. Any idea?
As you are running a Universal Forwarder, it does not process transforms by default. You could try enabling the force_local_processing option for a sourcetype, but it is not very well documented and generally not advisable, since it increases load on the UF (which is supposed to be as lightweight as possible).
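If you do want to experiment with it anyway, a minimal sketch of the props.conf change on the Universal Forwarder might look like this (your_sourcetype is a placeholder; check the props.conf spec for your Splunk version before relying on it):

props.conf on the UF -
[your_sourcetype]
force_local_processing = true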
Your question is a bit vague, but I'll assume you mean that you don't see your forwarders in the Forwarder Management section of the UI (either on your Deployment Server or on an all-in-one instance). See this document: https://docs.splunk.com/Documentation/Splunk/9.2.1/Updating/Upgradepre-9.2deploymentservers
Hello @billy , Can you please use the configuration provided below, where I've added the sourcetype in inputs.conf:
1 - inputs.conf -
[WinEventLog://Security]
disabled = 0
current_only = 0
renderXml = 1
whitelist = 4624,4634
sourcetype = XmlWinEventLog:Security
2 - You can also configure the routing using the source instead of the sourcetype:
inputs.conf -
[WinEventLog://Security]
disabled = 0
current_only = 0
renderXml = 1
whitelist = 4624,4634
props.conf -
[source::XmlWinEventLog:Security]
TRANSFORMS-Xml = send_to_3rd_party
transforms.conf -
[send_to_3rd_party]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = foobar
If this reply helps you, Karma would be appreciated. Thanks, Surbhi
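For reference, the foobar value in FORMAT must match a tcpout group defined in outputs.conf on the forwarder. A minimal sketch, assuming a hypothetical third-party destination at 10.0.0.5:9997 (the group name and address are placeholders):

outputs.conf -
[tcpout:foobar]
server = 10.0.0.5:9997
# Third-party systems usually expect raw data rather than Splunk's cooked format
sendCookedData = false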
Good morning, I updated Splunk Enterprise from version 9.1.1 to 9.2.1, but the web interface no longer shows the connected machines. What can I verify? Thanks. OS: CentOS 7
Check out "Splunk Cheat Sheet: Query, SPL, RegEx, & Commands" on the Splunk blog. At the end of the blog post you will find the splunk-quick-reference-guide in PDF format.
Hi Amit, I'm not sure about the issue, but I suggest reinstalling with the latest agent version. Also, check the connectivity and the SSL certificates if required. Make sure that the hostname is correctly set in the server section to ensure all metrics are reported.
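As a quick connectivity check from the agent host, something along these lines can confirm reachability and surface SSL handshake errors (the hostname and port are placeholders; verify that the serverstatus endpoint applies to your controller version):

# Verbose output shows DNS resolution, the TLS handshake, and the HTTP response
curl -v https://controller.example.com:8181/controller/rest/serverstatus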
Hi Rohit, Could you please perform the following checks:
1) Verify the connection between the database agent and the controller, as well as between the database agent and the database server.
2) Ensure that the user has the correct permissions as specified in the AppDynamics documentation: https://docs.appdynamics.com/appd/onprem/23.x/23.2/ja/database-visibility/add-database-collectors/configure-mysql-collectors
3) Check that you have created a user for the db collector setting. Try logging in with the same user on the MySQL server to confirm that the user ID and password are correct (see the sketch after this list).
Thank you.
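A minimal sketch of the credential check from item 3, assuming a hypothetical collector user appd_collector (the exact grants required are listed in the linked document):

# Log in from the database agent host using the collector's credentials
mysql -h your-db-host -u appd_collector -p
-- Inside the MySQL shell, confirm the privileges actually granted to this user
SHOW GRANTS FOR CURRENT_USER();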
Hello @vishenps , First, upgrade the Add-on Builder version. Then, move all configurations from the local directory to the default directory within the custom app, and remove the now-empty local directory. Finally, proceed with the vetting process. Let me know how it goes. Please accept the solution and hit Karma, if this helps!
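A minimal sketch of the local-to-default move, assuming a hypothetical app name your_custom_app (if a file with the same name already exists in default, merge the stanzas manually instead of overwriting):

# Move every config file from local/ into default/ inside the custom app
cd $SPLUNK_HOME/etc/apps/your_custom_app
mv local/*.conf default/
# Remove the now-empty local directory so it does not trip up vetting
rmdir local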
Yes, I realized that it's not "timestamp" and it has changed to "eventTimestamp" in the raw data. However, the modified query is still not working.
======================================================================
index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "\"eventTimestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})" --> Please suggest here
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time
======================================================================
Attaching a sample raw screenshot for your reference.
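One possible direction, offered only as a sketch since the raw format is visible only in the screenshot: if the message payload is JSON, spath may be more reliable than a hand-written rex for pulling out eventTimestamp (the field names here are assumptions):

| spath input=message path=eventTimestamp output=Time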
Hi Rich, I am sorry for the poorly worded question. "You have an alert, which the cron schedule says to fire at 1 PM (13:00) in CDT. That's 11:30 PM (23:30) IST." The issue is that instead of receiving the mail at 11:30 PM (23:30) IST, I receive it at 11:30 AM IST. If you check the mail screenshot, you can see the inline query result returned Wed Apr 3 13:00, but the trigger time is April 4, 01:19 AM CST, and the mail reached my inbox on April 4 at 11:49 AM IST. Shouldn't it actually be April 3, 13:19 CST and 23:49 IST?