All Posts


Hi everyone, I was wondering how I would go about a stats count for two fields where one field depends on the other. For example, we have a field called service version and another called service operation. Operations aren't necessarily exclusive to a version, but I was wondering if it is possible to do a stats count that would display something like: Service 1 - Operation 1: *how many* - Operation 2: *how many*, and so on. Is something like this possible?
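A minimal sketch of one way to do this in SPL, assuming the fields are extracted as service_version and service_operation (hypothetical names; adjust to your actual field names):

| stats count by service_version, service_operation
| sort service_version, service_operation

If you'd rather see one row per version with a column per operation, | chart count over service_version by service_operation gives the same numbers in a matrix layout.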
Hello everyone, I've encountered an issue where certain customers appear to have duplicate ELB access logs. During a routine check, I noticed instances of identical events being logged with the exact same timestamp, which shouldn't normally occur. I'm using the Splunk_TA_aws app for ingesting logs, specifying each S3 bucket and the corresponding ELB log prefix as inputs. My search pattern is index=customer-index-{customer_name} sourcetype="aws:elb:accesslogs", aimed at isolating the data per customer. Upon reviewing the original logs directly within the S3 buckets, I confirmed that the duplicates are not present at the source; they only appear once ingested into Splunk. This leads me to wonder if there might be a configuration or processing step within Splunk or the AWS Add-on that could be causing these duplicates. Has anyone experienced a similar issue, or could anyone offer insights into potential causes or solutions? Any advice or troubleshooting tips would be greatly appreciated. Here we can see the same timestamp for the logs. If I add | dedup _raw, the number of events goes down from 12,710 to 6,535. Thank you in advance for your assistance.
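A quick, hedged way to confirm and quantify the duplication at search time, assuming the duplicates share identical raw text:

index=customer-index-{customer_name} sourcetype="aws:elb:accesslogs"
| stats count by _raw
| where count > 1

If the duplicated events differ in source or _indextime, that often points to two inputs reading the same S3 prefix rather than duplication at the source.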
Is there a way to add AM or PM according to the time?
Yeah, my sincerest apologies; I can have difficulty at times accurately describing what I'm looking for. I'll definitely check out the query below. Essentially, I'm looking for a date value and a request value that don't change from day to day unless the request value is higher on a different date. Hopefully that's a more accurate description.
This is a little vague so I will make some assumptions. Assuming you want a daily count of events, and just want to keep the highest one, you could do this:

| bin _time span=1d
| stats count by _time
| eventstats max(count) as max
| where count==max
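If you also want the peak day shown as a readable date in a dashboard panel rather than an epoch value, a small extension (a sketch; the Day column name is illustrative) would be:

| eval Day=strftime(_time, "%Y-%m-%d")
| table Day count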
It is a system field called _indextime - you could rename it without the leading _ so it becomes visible. If you want to use it, you may need to include it in the stats command, since that command keeps only fields which are explicitly named.
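For example, a minimal sketch that makes it visible and carries it through stats (the indextime name and the sourcetype grouping are just illustrative):

| eval indextime=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| stats latest(indextime) as indextime count by sourcetype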
Hi all, I was wondering if there's a way to create a search that I can add to a dashboard that presents the peak day and its volume over a 30-day period. Essentially, when the dashboard loads, I was hoping it could keep whatever day the peak occurred on and not replace it until a larger peak occurs, assuming that's even possible. I may have worded this poorly, so feel free to ask any questions about what I'm trying to achieve.
|rex field=_raw "\"@timestamp\":\"\d{4}-\d{2}-\d{2}T(?<Time>\d{2}:\d{2})"
Where can I see the index time?
Why are you resetting _time? This masks which timestamp was used when the event was indexed. You should also look at _indextime to see whether there is any significant delay between when the event was created (i.e. the time in the data) and when it was indexed: it could be that the event was indexed in the last 5 minutes but carries an earlier timestamp, so it wouldn't get picked up by the search.
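A minimal sketch for measuring that indexing lag, assuming the default _time and _indextime fields:

| eval lag_seconds=_indextime - _time
| stats avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag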
My apologies, I was using "eventTimestamp" instead of "@timestamp" in my rex command; I just realized that and it's working now. However, I do not need the date in the last column, only the time. Please help with how to do that. Details below.

Query:

index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
| fields message
| rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
| rex field=_raw "sourceSystem=(?<Source>[^,]*)"
| rex field=_raw "entityType=(?<Entity>\w+)"
| rex field=_raw "\"@timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"  --> please help here
| sort Time desc
| dedup Entity
| table Source, BusDate, Entity, Time

Raw data:

{"@timestamp":"2024-04-04T02:25:59.366Z","level":"INFO","message":"Snapshot event published: SnapshotEvent(version=SnapshotVersion(sourceSystem=dbI-LDN, entityType=ACCOUNT, subType=, date=2024-04-03, version=1, snapshotSize=326718, uuid=8739e273-cedc-482b-b696-48357efc8704, eventTimestamp=2024-04-04T02:24:52.762129638), status=CREATED)","thread":"snapshot-checker-3","loggerName":"com.db.sdda.dc.kafka.snapshot.writer.InternalEventSender"}

I need only the time, e.g. 02:25:59 AM/PM, in the last column.
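One hedged way to get just the time with an AM/PM marker: capture the seconds as well, then reformat with strptime/strftime after the sort and dedup so chronological ordering isn't broken (the regex tweak below is an assumption based on your raw event):

| rex field=_raw "\"@timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
| sort Time desc
| dedup Entity
| eval Time=strftime(strptime(Time, "%Y-%m-%dT%H:%M:%S"), "%I:%M:%S %p")
| table Source, BusDate, Entity, Time

That would turn 2024-04-04T02:25:59 into 02:25:59 AM.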
Hi Steven, I am trying to push the Splunk UF to Windows and Mac laptops. Can you please share the steps for how you did this through Intune? It would be a lot of help.
Hello @Dattasri,

You can use the search query below, in which I have used the random function to generate values between 1 and 100 and then applied the `stats count` command.

| makeresults count=10
| eval rand=(random() % 100) + 1
| stats count(eval(rand > 60)) as count_greater_than_60, count(eval(rand < 60)) as count_less_than_60

If this reply helps you, Karma would be appreciated.

Thanks,
Surbhi
Yes, no error.
|rex field=_raw "eventTimestamp=(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"
Have you checked the Splunk internal log for ERROR?
Yes:

index=conf detectionSource=MCAS NOT title IN("Potential ransomware activity*", "Multiple delete VM activities*", "Mass delete*", "Data exfiltration to an app that is not sanctioned*", "Cloud Discovery anomaly detection*", "Investigation priority score increase*", "Risky hosting apps*", "DXC*") status=new NOT ((title="Impossible travel activity" AND description="*Mexico*" AND description="*United States*"))
| dedup incidentId
| rename entities{}.* AS * devices{}.* AS * evidence{}.* AS *
| stats values(title) as AlertName, values(deviceDnsName) as Host, values(user) as "Account", values(description) as "Description", values(fileName) as file, values(ipAddress) as "Source IP", values(category) as "Mitre" by incidentId
| rename incidentId AS ID_Defender
| tojson auto(AlertName), auto(Host), auto("Account"), auto("Description"), auto(file), auto("Source IP"), auto("Mitre") output_field=events
| eval events=replace(events, "\\[\"", "\""), events=replace(events, "\"\\]", "\"")
| rex field=events mode=sed "s/:\\[([0-9])\\]/:\\1/g"
| eval native_alert_id = "SPL" . strftime(now(), "%Y%m%d%H%M%S") . "" . tostring(random())
| tojson auto(native_alert_id) output_field=security
| eval security=replace(security, "\\[\"", "\""), security=replace(security, "\"\\]", "\"")
| rename security AS "security-alert"
| tojson json(security-alert), auto(events) output_field=security-alert
| eval _time=now()
Watch your raw event carefully. Compare it with the regex. The difference is kinda obvious.