All Posts



@cbiraris there are a number of ways of doing this, but it depends on what you want to end up with. I am assuming that the event _time field denotes your time - if not, then parsing your time field using strptime() is needed first. A couple of examples below show stats and streamstats usage.

Using stats, you can collect your events together like this, assuming you have some kind of correlation ID that can group the events together:

| makeresults count=4
| streamstats c
| eval _time=now() - (c * 60) - (random() % 30)
| eval EventID="ID:".round(c / 2)
| fields - c
``` Calculate the gap ```
| stats range(_time) as r by EventID

If you have a number of events, a simple example of streamstats will just calculate the difference between two events like this, which generates 4 randomly timed events and calculates the difference between each pair. Note that the Event field must be derived before c is dropped:

| makeresults count=4
| streamstats c
| eval _time=now() - (c * 60) - (random() % 30)
| eval Event=mvindex(split("Start,End",","),(c - 1) % 2)
| fields - c
``` Calculate the gap ```
| streamstats reset_after="Event=\"End\"" range(_time) as gap
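For the strptime() case mentioned above, a minimal sketch - the field name mytime and the format string are assumptions, so adjust both to match your actual data:

```
| eval _time=strptime(mytime, "%Y-%m-%dT%H:%M:%S%z")
```

Once _time is populated this way, both examples above work unchanged.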
You can't do it via the legend, as that does not support drilldown - you'd probably have to play around with JS to get that to work. I assume you want to be able to unhide it again, so you can't do it directly on that chart, but you could do it by having another set of buttons in another panel that provide a filter to show/hide those series. I've often done this either through a multiselect input above the chart or a link input where the inputs are tabbed horizontally. You can see how that is done in the Itsy Bitsy app for Splunk - https://splunkbase.splunk.com/app/5256
If that is your _raw event, just do | spath correlation_id and it will give you the correlation_id field
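If you also need the bare UUID rather than the bracketed string stored in the JSON (the sample value looks like ['321e2253-...']), a follow-up rex can strip the wrapper - a minimal sketch, assuming the value always has that ['...'] shape:

```
| spath correlation_id
| rex field=correlation_id "\['(?<correlation_id>[^']+)'\]"
```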
I should also mention that changing the sourcetype to anything other than aws:s3 or aws:s3:csv results in no data being indexed at all.
Here is the props.conf stanza from the TA's default directory that applies to the source type that is specified in the documentation:

###########################
### CSV ###
###########################
[aws:s3:csv]
DATETIME_CONFIG = CURRENT
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%Z
SHOULD_LINEMERGE = false
LINE_BREAKER = [\r\n]+
TRUNCATE = 8388608
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = [\r\n]+
KV_MODE = json

I tried adding a props.conf into the local directory for the TA, but it seems to be ignored, because the data ends up indexed exactly the same after adding the new file and restarting Splunk. This is the contents of the local props.conf that I tried:

[aws:s3:csv]
TIME_FORMAT = %s
HEADER_FIELD_LINE_NUMBER = 1
INDEXED_EXTRACTIONS = TSV
TIMESTAMP_FIELDS = timestamp
Hi @All, I want to extract the correlation_id from the payload below - can anyone help me write the rex command?

{"message_type": "INFO", "processing_stage": "Deleted message from queue", "message": "Deleted message from queue", "correlation_id": "['321e2253-443a-41f1-8af3-81dbdb8bcc77']", "error": "", "invoker_agent": "arn:aws:sqs:eu-central-1:981503094308:prd-ccm-incontact-ingestor-queue-v1", "invoked_component": "prd-ccm-incontact-ingestor-v1", "request_payload": "", "response_details": "{'ResponseMetadata': {'RequestId': 'a04c3e82-fe3a-5986-b61c-6323fd295e18', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'a04c3e82-fe3a-5986-b61c-6323fd295e18', 'x-amzn-trace-id': 'Root=1-652700cc-f7ed3cf574ce28da63f6625d;Parent=865f4dad6eddf3c1;Sampled=1', 'date': 'Wed, 11 Oct 2023 20:08:51 GMT', 'content-type': 'text/xml', 'content-length': '215', 'connection': 'keep-alive'}, 'RetryAttempts': 0}}", "invocation_timestamp": "2023-10-11T20:08:51Z", "response_timestamp": "2023-10-11T20:08:51Z", "original_source_app": "YMKT", "target_idp_application": "", "retry_attempt": "1", "custom_attributes": {"entity-internal-id": "", "root-entity-id": "", "campaign-id": "", "campaign-name": "", "marketing-area": "", "lead-id": "", "record_count": "1", "country": ["India"]}}
Hello, How to put comment on the Splunk Dashboard Studio source? The classic Splunk Dashboard I can put comment  on the source using <!--  comment  --> In the new Splunk Dashboard Studio, I tried ... See more...
Hello, how do I put comments in the Splunk Dashboard Studio source? In a classic Splunk dashboard I can add a comment to the source using <!--  comment  -->. In the new Splunk Dashboard Studio, I tried to add a comment using /* comment */, but I got the error "Comments are not permitted in JSON." Comments only work in the data configuration query editor. Thank you so much
1. Usually starting a new thread instead of digging up an old one (possibly posting a link to the old one for reference) yields a bigger chance of getting reasonable results.

2. As you've already read, Splunk only measures overall license usage, split by index or sourcetype - not much more. So you have to either count it yourself by measuring the aggregate data size (which can be very costly) or estimate it by sampling, as shown in this thread.

3. License measurement might or might not make sense in the context of datasets, since datasets can be defined in various ways. In general, datasets as such don't consume license - only the events a dataset is based on have already consumed it. And this is in no way an exclusive count: the same events can feed, for example, both the Network Traffic and Network Sessions data models. So it's not really clear what you need.
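For the per-index/per-sourcetype split mentioned in point 2, a typical starting point is something like the following - this assumes you can search the _internal index on the license manager, and the field names (b, idx, st) are the standard ones in license_usage.log:

```
index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by idx st
| eval GB=round(bytes / 1024 / 1024 / 1024, 2)
```

Anything finer-grained than index/sourcetype (such as "a dataset") is not in that log and has to be estimated.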
On a column chart, is it possible to hide/unhide legend values by clicking on them? For example, if I click on www3 in the legend, this action will hide www3 and I'll see only www1 and www2 on the chart.
This regex works with one of the two sample events.
This regex works with one of the two sample events. <Data Name='NewProcessName'>(C:\\Program Files\\Windows Defender Advanced Threat Protection\\MsSense\.exe)|(C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumCX\.exe)<\/Data>
@richgalloway  Can you pls paste here the valid regex  for the above Event if possible. Thanks..
I am looking to find out the license usage for a particular dataset in events. Please let me know if you have any clues.  index=aws sourcetype=aws account=123456
Ideally, the catchall directory would be empty because the syslog server was configured to have a separate directory for each type of log data coming in.  The catchall directory is there for when someone stands up a new service that sends syslog data.  That unexpected kind of log would land in the catchall directory and, hopefully, alert the syslog admin to the need for additional configuration.
I'm curious about why you thought eval would not work after stats. There's nothing particularly magical about stats.  It's a transforming command so only the fields used in the command are available to later commands.  They are still fields, however, and can be processed as such.  Note that some stats functions produce multi-value fields, which don't work well in all commands so they may require additional processing.
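A small sketch of the pattern - the field names Class and Score1..Score3 here are assumptions, stand-ins for whatever your events actually contain:

```
| stats sum(Score1) as Score1 sum(Score2) as Score2 sum(Score3) as Score3 by Class
| eval TotalScore=Score1 + Score2 + Score3
```

The sum has to happen in eval after stats (or inside stats as sum(eval(Score1+Score2+Score3)) if you want the per-event sum aggregated) - stats itself won't accept a bare arithmetic expression like sum(Score1+Score2+Score3) on plain field names.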
@yuanliu Thanks.  I would never have figured out the mvjoin(mvindex(...)) combination.  That is something I don't use.  You gave me enough help that I was able to work out something I can give to another team.  Karma point awarded.
Yes, there probably are people here who can help you.  We do best, however, with specific questions rather than vague help requests. Tell us what inputs you have and what results you'd like.  Show the failed attempts and say how they don't live up to expectations.  Describe the challenges you've encountered.
I tested your suggestion and it worked, even on real data with multiple "Classes" (Class A, B, C). I thought eval would not work after passing through the "stats" pipe, so I tried to sum (Score1+Score2+Score3) within stats, but it would not let me.   I accepted this as a solution. Could you explain why it worked after the "stats" function? Thank you so much
I have been tasked with cleaning up the catchall directory in the syslog directory of our Heavy Forwarders. The path is /var/syslog/catchall/. I plan on grouping servers/directories based on the kind of logs being received. I just wanted to ask what kind of logs are usually expected to end up in this directory?
I am creating a continuous error alert in Splunk. I have been working on constructing a search query to group different error types. I have made several attempts and explored multiple approaches; however, I have encountered challenges in effectively grouping the error types within the query. Can anybody help me with this?
Perhaps the generic S3 input is *too* generic.  Can you share the props.conf stanza for the appropriate sourcetype?