
All Posts

Hi @All, I want to extract the correlation_id from the payload below. Can anyone help me write a rex command?

{"message_type": "INFO", "processing_stage": "Deleted message from queue", "message": "Deleted message from queue", "correlation_id": "['321e2253-443a-41f1-8af3-81dbdb8bcc77']", "error": "", "invoker_agent": "arn:aws:sqs:eu-central-1:981503094308:prd-ccm-incontact-ingestor-queue-v1", "invoked_component": "prd-ccm-incontact-ingestor-v1", "request_payload": "", "response_details": "{'ResponseMetadata': {'RequestId': 'a04c3e82-fe3a-5986-b61c-6323fd295e18', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'a04c3e82-fe3a-5986-b61c-6323fd295e18', 'x-amzn-trace-id': 'Root=1-652700cc-f7ed3cf574ce28da63f6625d;Parent=865f4dad6eddf3c1;Sampled=1', 'date': 'Wed, 11 Oct 2023 20:08:51 GMT', 'content-type': 'text/xml', 'content-length': '215', 'connection': 'keep-alive'}, 'RetryAttempts': 0}}", "invocation_timestamp": "2023-10-11T20:08:51Z", "response_timestamp": "2023-10-11T20:08:51Z", "original_source_app": "YMKT", "target_idp_application": "", "retry_attempt": "1", "custom_attributes": {"entity-internal-id": "", "root-entity-id": "", "campaign-id": "", "campaign-name": "", "marketing-area": "", "lead-id": "", "record_count": "1", "country": ["India"]}}
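A minimal sketch of one possible extraction, assuming the raw event is the JSON shown above and that correlation_id always holds a single quoted UUID inside a bracketed list (index and sourcetype names here are placeholders):

index=your_index sourcetype=your_sourcetype
| rex field=_raw "\"correlation_id\": \"\['(?<correlation_id>[^']+)'\]\""

If the bracketed list can hold more than one ID, the pattern would need to capture the whole list and split it instead.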
Hello, how do I put a comment in the Splunk Dashboard Studio source? In a classic Splunk dashboard I can put a comment in the source using <!-- comment -->. In the new Splunk Dashboard Studio, I tried to put a comment using /* comment */, but I got the error "Comments are not permitted in JSON." Comments only work in the data configuration query editor. Thank you so much.
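That error is expected, since JSON itself has no comment syntax. One commonly suggested workaround - a sketch, not an official commenting feature - is to park notes in string fields the dashboard definition already allows, such as the top-level description:

{
  "title": "My dashboard",
  "description": "NOTE: panel 2 depends on the nightly summary index."
}

Anything beyond such existing string fields is likely to be rejected by the source validator, just as /* */ was.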
1. Starting a new thread instead of digging up an old one (possibly posting a link to the old one for reference) usually yields a better chance of getting useful answers.
2. As you've already read, Splunk only measures overall license usage, plus a split by index or sourcetype, but not much more. So you either have to count it yourself by measuring the aggregate data size (which can be very costly) or estimate it by sampling, as shown in this thread.
3. License measurement may or may not make sense in the context of datasets, since datasets can be defined in various ways. In general, datasets as such don't consume license; only the events that a dataset is based on have already consumed it. And this is in no way an "exclusive count" - the same events can feed, for example, both the Network Traffic and Network Sessions data models. So it's not really clear what you need.
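For the per-index split, a common starting point (assuming you can search the _internal index on the license master) is the license usage log:

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by idx
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)

The fields b (bytes) and idx (index) are what license_usage.log actually emits; anything finer-grained than index, sourcetype, host, or source has to be estimated as described above.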
On a column chart, is it possible to hide/unhide legend values by clicking on them? For example, if I click on www3 in the legend, this action would hide www3 and I'd see only www1 and www2 on the chart.
This regex works with one of the two sample events. <Data Name='NewProcessName'>(C:\\Program Files\\Windows Defender Advanced Threat Protection\\MsSense\.exe)|(C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumCX\.exe)<\/Data>
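The likely reason it matches only one of the events is that alternation has low precedence, so the pattern is effectively "<Data...> followed by the first path" OR "the second path followed by </Data>". A sketch of a fix is to wrap the alternation in a group so both paths sit between the tags:

<Data Name='NewProcessName'>((C:\\Program Files\\Windows Defender Advanced Threat Protection\\MsSense\.exe)|(C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumCX\.exe))<\/Data>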
@richgalloway Can you please paste here the valid regex for the above event, if possible? Thanks.
I am looking to find out the license usage for a particular dataset in events. Please let me know if you have any clues.  index=aws sourcetype=aws account=123456
Ideally, the catchall directory would be empty because the syslog server was configured to have a separate directory for each type of log data coming in.  The catchall directory is there for when someone stands up a new service that sends syslog data.  That unexpected kind of log lands in the catchall directory and, hopefully, alerts the syslog admin to the need for additional configuration.
I'm curious about why you thought eval would not work after stats. There's nothing particularly magical about stats.  It's a transforming command, so only the fields used in the command are available to later commands.  They are still fields, however, and can be processed as such.  Note that some stats functions produce multi-value fields, which don't work well in all commands, so they may require additional processing.
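A minimal illustration (the index and field names are made up):

index=web
| stats sum(bytes) as total_bytes by host
| eval total_mb = round(total_bytes / 1024 / 1024, 2)

After stats, only host and total_bytes survive, but eval can still derive new fields from them. A values() result, by contrast, may be multi-value and need mvjoin or mvindex first.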
@yuanliu Thanks.  I would never have figured out the mvjoin(mvindex(...)) combination.  That is something I don't use.  You gave me enough help that I was able to work out something I can give to another team.  Karma point awarded.
Yes, there probably are people here who can help you.  We do best, however, with specific questions rather than vague help requests. Tell us what inputs you have and what results you'd like.  Show the failed attempts and say how they don't live up to expectations.  Describe the challenges you've encountered.
I tested your suggestion and it worked, even on real data with multiple classes (Class A, B, C). I thought eval would not work after the "stats" pipe, so I tried to sum (Score1+Score2+Score3) within stats, but it would not let me.  I accepted this as a solution. Could you explain why it worked after the "stats" function? Thank you so much.
I have been tasked with cleaning up the catchall directory in the syslog directory of our Heavy Forwarders. The path is /var/syslog/catchall/. I plan on grouping servers/directories based on the kind of logs being received. I just wanted to ask what kind of logs are usually expected to end up in this directory?
I am creating a continuous error alert in Splunk and have been working on constructing a search query to group different error types. I have made several attempts and explored multiple approaches; however, I have had trouble effectively grouping the error types within the query. Can anybody help me with this?
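As a starting sketch only - the index, field, and error strings below are invented, since the post doesn't show sample data:

index=app_logs log_level=ERROR
| rex field=_raw "(?<error_type>Timeout|Connection refused|NullPointerException)"
| stats count by error_type

Sharing actual sample events and the attempted searches would make it much easier to suggest a concrete grouping.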
Perhaps the generic S3 input is *too* generic.  Can you share the props.conf stanza for the appropriate sourcetype?
Something like that can be done using eval.

index=scoreindex
| stats values(Name) as Name, values(Subject) as Subject, sum(TotalScore) as TotalScore, max(Score1) as Score1, max(Score2) as Score2, max(Score3) as Score3 by Class
| eval "Max TotalScore" = Score1 + Score2 + Score3
| table Class, Name, Subject, TotalScore, Score1, Score2, Score3, "Max TotalScore"
Yes, lookups can support wildcards.  Go to Settings->Lookups->Lookup definitions and edit the lookup.  Tick the "Advanced options" box and enter WILDCARD(error) in the "Match type" box.  Then it's up to the lookup file to have wildcards in the appropriate places.
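For instance, a sketch with an assumed lookup file errors.csv defining fields error and action (all names here are illustrative):

error,action
*timeout*,retry
*access denied*,alert

With WILDCARD(error) set on the lookup definition, a search like

index=app_logs | lookup errors_lookup error OUTPUT action

would match any error value containing "timeout" or "access denied".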
Using a subsearch results in a large number of OR operators.  It's probably more economical just to do stats:

| inputlookup servers.csv
| eval CSV = "servers"
| inputlookup append=true HR.csv
| fillnull value="HR" CSV
| stats values(CSV) as CSV by Name ID
| where mvcount(CSV) == 1 AND CSV == "servers"

(Again, thanks @richgalloway for demonstrating append mode!)
I have a standalone Splunk Enterprise (not Splunk Cloud) set up to work with some log data that is stored in an AWS S3 bucket. The log data is in TSV format, each file has a header row at the top with the field names, and each file is gzipped. I have the AWS TA installed (https://splunkbase.splunk.com/app/1876).

Having followed the instructions in the documentation (Introduction to the Splunk Add-on for Amazon Web Services - Splunk Documentation) for setting up a Generic S3 input, no fields are being extracted and the timestamps are not being recognized. The data does ingest, but it is all just raw rows from the TSVs, and the header row is being indexed as an event as well. The timestamps in Splunk are just _indextime, even though there is a column called "timestamp" in the data. Does anyone have any suggestions on how I can get this to recognize the timestamps and actually show the field names that appear in the header row?
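One avenue to explore - a sketch, assuming a custom sourcetype assigned to the input and that the header's time column is literally named timestamp - is structured-data extraction in props.conf:

[my_tsv_sourcetype]
INDEXED_EXTRACTIONS = tsv
FIELD_DELIMITER = tab
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = timestamp

INDEXED_EXTRACTIONS = tsv tells Splunk to read the header row for field names instead of indexing it as an event, and TIMESTAMP_FIELDS points event time at the named column. Whether this applies as-is depends on where the Generic S3 input parses the data.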
"list of the URLs the contractors have access to which is the csv file.  The firewall team wants to remove any URLs that aren't used in a period of time.  Thus, I have to compare the firewall URLs to the csv"

So, the firewall team wants to update that CSV file so it will not contain entries that haven't had matching events for a given time period.  Is this correct?  This seems to be the opposite of what the Splunk search is doing.

Some more points you need to clarify:

1. What are the field name(s) the index search and the lookup file use to indicate URLs?  Based on your code snippet, I assume that they both use url.
2. Does the CSV file contain additional fields?  Based on your code snippet, I will assume none.
3. Is there some significance to the trailing slash (/)?  Do all url values end with one trailing slash?  This may not be relevant, but some SPL manipulations may ruin your convention, so I'd like to be cautious.
4. A more important question is the use of the asterisk (*).  Are the last two domain levels (root and second level) the only parts of interest?  Given all the illustrations, I have to assume yes.  In other words, no differentiation is needed between *.microsoft.com/ and microsoft.com/.  Additionally, I will assume that every url in the CSV needs to be paired with a wildcard entry.

Using the above assumptions, the following can show you second-level domains that have not been used:

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3)
| eval url = mvjoin(mvindex(split(url, "."), -2, -1), ".")
| dedup url
| inputlookup append=true my_list_of_urls.csv
| fillnull value=CSV sourcetype
| stats values(sourcetype) as sourcetype by url
| where mvcount(sourcetype) == 1 AND sourcetype == "CSV"
| eval url = mvappend(url, "*." . url)
| mvexpand url

The output contains a list of second-level domains affixed with a trailing slash, plus these same strings prefixed with "*.".  These would be the ones to be removed.

If you have lots of events with URLs that have no match in the CSV, you can also use the subsearch as a filter to improve efficiency, like:

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3)
  [ | inputlookup my_list_of_urls.csv ]
| eval url = mvjoin(mvindex(split(url, "."), -2, -1), ".")
| dedup url
| inputlookup append=true my_list_of_urls.csv
| fillnull value=CSV sourcetype
| stats values(sourcetype) as sourcetype by url
| where mvcount(sourcetype) == 1 AND sourcetype == "CSV"
| eval url = mvappend(url, "*." . url)
| mvexpand url

Hope this helps.