All Posts

|rex field=_raw "\"@timestamp\":\"\d{4}-\d{2}-\d{2}T(?<Time>\d{2}:\d{2})"
Where can I see the index time?
Why are you resetting _time? This is masking which timestamp was used when the event was indexed. You should also look at _indextime to see whether there is any significant delay between when the event was created (i.e. the time in the data) and the time it was indexed: it could be that the event was indexed in the last 5 minutes but the timestamp is prior to that, so it wouldn't get picked up by the search.
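
A minimal sketch of that check (the index and sourcetype names are placeholders, not from the thread): materialize _indextime with eval and compare it against _time to see the ingestion lag per event.

index=your_index sourcetype=your_sourcetype
| eval lag_seconds = _indextime - _time
| eval index_time = strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| table _time index_time lag_seconds

A consistently large lag_seconds would explain why a real-time or "last 5 minutes" alert window misses events whose data timestamp is older.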
My apologies, I was using "eventTimestamp" instead of "@timestamp" in my rex command, I just realized, and it's working now. However, I do not need the date in the last column, only the time. Please help me with how to do that. Please find the details below.

Query:

index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "\"@timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"  --> Please help here
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time

Screenshot

Raw data:

{"@timestamp":"2024-04-04T02:25:59.366Z","level":"INFO","message":"Snapshot event published: SnapshotEvent(version=SnapshotVersion(sourceSystem=dbI-LDN, entityType=ACCOUNT, subType=, date=2024-04-03, version=1, snapshotSize=326718, uuid=8739e273-cedc-482b-b696-48357efc8704, eventTimestamp=2024-04-04T02:24:52.762129638), status=CREATED)","thread":"snapshot-checker-3","loggerName":"com.db.sdda.dc.kafka.snapshot.writer.InternalEventSender"}

Need only the time, 02:25:59 AM/PM, in the last column.
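
One way to get just the time in 12-hour AM/PM form (a sketch, not from the thread): capture the full timestamp with rex, then reformat it via strptime/strftime, where "%I:%M:%S %p" renders e.g. 02:25:59 AM.

| rex field=_raw "\"@timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
| eval Time=strftime(strptime(Time, "%Y-%m-%dT%H:%M:%S"), "%I:%M:%S %p")

Note that | sort Time desc should run before the reformat, since "02:25:59 AM" would otherwise be compared as a plain string and sort out of chronological order.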
Hi Steven, I am trying to push the Splunk UF to Windows and Mac laptops. Can you please share the steps for how you did it through Intune? It would be a lot of help.
Hello @Dattasri, You can use the search query below, in which I have used the random function to generate values between 1 and 100 and then applied the stats count command.

| makeresults count=10
| eval rand=(random() % 100) + 1
| stats count(eval(rand > 60)) as count_greater_than_60, count(eval(rand < 60)) as count_less_than_60

If this reply helps you, Karma would be appreciated. Thanks, Surbhi
Yes, no error.
|rex field=_raw "eventTimestamp=(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"
Have you checked the Splunk internal log for ERROR?
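
A quick way to run that check (a sketch; widen the time range as needed):

index=_internal log_level=ERROR
| stats count by component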
Yes:

index=conf detectionSource=MCAS NOT title IN("Potential ransomware activity*", "Multiple delete VM activities*", "Mass delete*", "Data exfiltration to an app that is not sanctioned*", "Cloud Discovery anomaly detection*", "Investigation priority score increase*", "Risky hosting apps*", "DXC*") status=new NOT ((title="Impossible travel activity" AND description="*Mexico*" AND description="*United States*"))
| dedup incidentId
| rename entities{}.* AS * devices{}.* AS * evidence{}.* AS *
| stats values(title) as AlertName, values(deviceDnsName) as Host, values(user) as "Account", values(description) as "Description", values(fileName) as file, values(ipAddress) as "Source IP", values(category) as "Mitre" by incidentId
| rename incidentId AS ID_Defender
| tojson auto(AlertName), auto(Host), auto("Account"), auto("Description"), auto(file), auto("Source IP"), auto("Mitre") output_field=events
| eval events=replace(events, "\\[\"", "\""), events=replace(events, "\"\\]", "\"")
| rex field=events mode=sed "s/:\\[([0-9])\\]/:\\1/g"
| eval native_alert_id = "SPL" . strftime(now(), "%Y%m%d%H%M%S") . "" . tostring(random())
| tojson auto(native_alert_id) output_field=security
| eval security=replace(security, "\\[\"", "\""), security=replace(security, "\"\\]", "\"")
| rename security AS "security-alert"
| tojson json(security-alert), auto(events) output_field=security-alert
| eval _time=now()
Watch your raw event carefully. Compare it with the regex. The difference is kinda obvious.
Can you share the macro expansion of the search in a code block </>?
Try something like this

index=events event.Properties.errMessage!="Invalid LoginID" event.Properties.errMessage!="Account Temporarily Locked Out" event.Properties.errMessage!="Permission denied" event.Properties.errMessage!="Unauthorized user" event.Properties.errMessage!="Account Pending Verification" event.Properties.errMessage!="Invalid parameter value"
| stats count by event.Properties.errMessage
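
The same filter can be written more compactly with NOT ... IN (a sketch over the same field name):

index=events NOT event.Properties.errMessage IN ("Invalid LoginID", "Account Temporarily Locked Out", "Permission denied", "Unauthorized user", "Account Pending Verification", "Invalid parameter value")
| stats count by event.Properties.errMessage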
Still not working. I replaced the semicolon with an "=" sign. Please check the screenshot.

Sample raw data: [screenshot]
You can use the appendpipe command for this - https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Appendpipe

Either create temporary fields and count them (which is the more straightforward solution):

| eval is_small=if(your_field<threshold,1,0)
| eval is_big=if(your_field>another_threshold,1,0)
| appendpipe [ stats sum(is_small) as "Small Values" sum(is_big) as "Big Values" ]

(Note that appendpipe takes a subpipeline in square brackets.) Alternatively to creating temporary fields, you can use an eval-based stats expression like sum(eval(if(your_field>another_threshold,1,0))) as "Big Values", but this is more advanced functionality and the syntax can be a bit confusing.
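
Put together, the eval-based variant might look like this (a sketch; your_field and the two thresholds are placeholders):

| appendpipe [ stats sum(eval(if(your_field<threshold,1,0))) as "Small Values" sum(eval(if(your_field>another_threshold,1,0))) as "Big Values" ]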
Good morning, I have some alerts set up that are not triggering. They are Defender events. If I run the query in a normal search, I do get the results the alerts are missing. However, for some reason the alerts are not triggered: neither is the email sent, nor do they appear in the Triggered alerts section.

This is my alert: [screenshot]

And this is one of the events for which it should have triggered and has not: [screenshot]

I also tried disabling the throttle in case there was a problem with events being suppressed by it. I also checked whether the search had been skipped, but it was not.

Any ideas?
As you are running a Universal Forwarder, it does not process transforms by default. You could try enabling the force_local_processing option for the sourcetype, but it's not very well documented and generally not advisable, since it increases load on the UF (which is supposed to be as lightweight as possible).
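
If you still want to experiment with it, the setting goes in props.conf on the forwarder; a minimal sketch, assuming a sourcetype named your_sourcetype:

[your_sourcetype]
# force the UF to run this sourcetype through local parsing (use with care)
force_local_processing = true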
Your question is a bit vague, but I'll assume you mean that you don't see your forwarders in the Forwarder Management section of the UI (either on your Deployment Server or an all-in-one instance). See this document: https://docs.splunk.com/Documentation/Splunk/9.2.1/Updating/Upgradepre-9.2deploymentservers