All Topics

Hi @All, I want to extract the correlation_id from the payload below. Can anyone help me write a rex command? {"message_type": "INFO", "processing_stage": "Deleted message from queue", "message": "Deleted message from queue", "correlation_id": "['321e2253-443a-41f1-8af3-81dbdb8bcc77']", "error": "", "invoker_agent": "arn:aws:sqs:eu-central-1:981503094308:prd-ccm-incontact-ingestor-queue-v1", "invoked_component": "prd-ccm-incontact-ingestor-v1", "request_payload": "", "response_details": "{'ResponseMetadata': {'RequestId': 'a04c3e82-fe3a-5986-b61c-6323fd295e18', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'a04c3e82-fe3a-5986-b61c-6323fd295e18', 'x-amzn-trace-id': 'Root=1-652700cc-f7ed3cf574ce28da63f6625d;Parent=865f4dad6eddf3c1;Sampled=1', 'date': 'Wed, 11 Oct 2023 20:08:51 GMT', 'content-type': 'text/xml', 'content-length': '215', 'connection': 'keep-alive'}, 'RetryAttempts': 0}}", "invocation_timestamp": "2023-10-11T20:08:51Z", "response_timestamp": "2023-10-11T20:08:51Z", "original_source_app": "YMKT", "target_idp_application": "", "retry_attempt": "1", "custom_attributes": {"entity-internal-id": "", "root-entity-id": "", "campaign-id": "", "campaign-name": "", "marketing-area": "", "lead-id": "", "record_count": "1", "country": ["India"]}}
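One pattern that should pull the id out (a sketch; it assumes the bracket-and-quote wrapper `['…']` is consistent across events) — in SPL this would look like `| rex "\"correlation_id\": \"\['(?<correlation_id>[^']+)'\]\""`. The same regex, checked with Python's `re` module:

```python
import re

# Abridged sample of the payload from the question
raw = '{"message_type": "INFO", "correlation_id": "[\'321e2253-443a-41f1-8af3-81dbdb8bcc77\']", "error": ""}'

# Capture whatever sits between the [' and '] wrapper after "correlation_id":
pattern = r'"correlation_id":\s*"\[\'(?P<correlation_id>[^\']+)\'\]"'

correlation_id = re.search(pattern, raw).group("correlation_id")
print(correlation_id)  # 321e2253-443a-41f1-8af3-81dbdb8bcc77
```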
Hello, how can I put a comment in the Splunk Dashboard Studio source? In a classic Splunk dashboard I can add a comment to the source using <!--  comment  -->. In the new Splunk Dashboard Studio I tried to add a comment using /* comment */, but I got the error "Comments are not permitted in JSON." Comments only work in the data configuration query editor. Thank you so much.
On a column chart, is it possible to hide/unhide legend values by clicking on them? For example, if I click www3 in the legend, this action would hide www3 and I would see only www1 and www2 on the chart.
I have been tasked with cleaning up the catchall directory in the syslog directory of our Heavy Forwarders. The path is /var/syslog/catchall/. I plan on grouping servers/directories based on the kind of logs being received. I just wanted to ask what kind of logs are usually expected to end up in this directory?
I am creating a continuous error alert in Splunk. I have been working on constructing a search query to group different error types. I have made several attempts and explored multiple approaches; however, I have encountered challenges in effectively grouping the error types within the query. Can anybody help me with this?
I have a standalone Splunk Enterprise (not Splunk Cloud) set up to work with some log data that is stored in an AWS S3 bucket. The log data is in TSV format, each file has a header row at the top with the field names, and each file is gzipped. I have the AWS TA installed (https://splunkbase.splunk.com/app/1876). Having followed the instructions in the documentation (Introduction to the Splunk Add-on for Amazon Web Services - Splunk Documentation) for setting up a Generic S3 input, no fields are being extracted and the time stamps are not being recognized. The data does ingest but it is all just raw rows from the TSVs. The header row is being indexed as an event as well. The timestamps in Splunk are just _indextime even though there is a column called "timestamp" in the data. Does anyone have any suggestions on how I can get this to recognize the timestamps and actually show the field names that appear in the header row?
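One thing worth checking (an assumption about the setup, not a confirmed diagnosis): header-row field names and timestamp columns in structured data are driven by `INDEXED_EXTRACTIONS` in props.conf on the instance that first parses the file. A sketch of a stanza for a hypothetical sourcetype name, matching the "timestamp" column described above:

```
# props.conf sketch (hypothetical sourcetype name; apply where the data is parsed)
[my_s3_tsv]
INDEXED_EXTRACTIONS = tsv
FIELD_DELIMITER = tab
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = timestamp
```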
How do I get the exception from the tables below? The exception is john, who is not in the HR table.

User list from the servers:

Name   ID
Bill   23
Peter  24
john   25

HR Table:

Name   ID
Bill   23
Peter  24
Anita  27
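In SPL this kind of exception report is usually built with a lookup (or `append` plus `stats`) and a filter on the side where the name is missing. The underlying set logic, sketched in Python with the question's rows:

```python
# In-memory copies of the two tables from the question
server_users = {"Bill": 23, "Peter": 24, "john": 25}
hr_table = {"Bill": 23, "Peter": 24, "Anita": 27}

# Names present on the servers but absent from the HR table
exceptions = sorted(set(server_users) - set(hr_table))
print(exceptions)  # ['john']
```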
Hello all, I installed a Splunk add-on on my heavy forwarder just to test it first, and it worked fine. After that I copied the entire directory to the deployment server and pushed it to the heavy forwarder, because I want to manage everything from the deployment server (trying to be organized). The issue is that from the heavy forwarder GUI, when I click on the app icon it doesn't load: it gives me "500 Internal Server Error" (with the picture of the confused horse) and I have these error messages in the internal logs: "ERROR ExecProcessor [2341192 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/myapp_hf/bin/app.py" HTTP 404 Not Found -- Action forbidden." I forgot to mention that I changed the name of the original app in app.conf. I can't figure out why it is not working. Thanks for your help, Kaboom1
Hi, I'm trying to use the REST API to get and post saved searches that are alerts, but for some reason it only returns data for reports. Has anyone else had this problem?

GET https://<host>:<mPort>/services/saved/searches
GET https://<host>:<mPort>/services/saved/searches/{name}
Hi Team, is there any way we can calculate the time duration between two different events, like a start and an end? For example: we have a start event at 10/10/23 23:50:00.031 and an end event at 11/10/23 00:50:00.031. How can we calculate this? Please help. Thank you.
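In SPL this is often done with `stats range(_time) as duration by <id>` over the paired events, or with `transaction`. The arithmetic itself, sketched in Python with the question's timestamps (assuming DD/MM/YY date format):

```python
from datetime import datetime

fmt = "%d/%m/%y %H:%M:%S.%f"
start = datetime.strptime("10/10/23 23:50:00.031", fmt)
end = datetime.strptime("11/10/23 00:50:00.031", fmt)

# Difference between the two events, in seconds
duration = (end - start).total_seconds()
print(duration)  # 3600.0
```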
How do I calculate a total when aggregating using stats max(field)? Thank you for your help.

Max TotalScore is the sum of the maximum of each Score field when aggregating all rows using stats: max(Score1), max(Score2), max(Score3). TotalScore is the total of the Score fields for each row (without aggregation).

This is the output I need:

Class   Name     Subject  TotalScore  Score1  Score2  Score3  Max TotalScore
ClassA  grouped  grouped  240         85      95      80      260

My Splunk search:

index=scoreindex
| stats values(Name) as Name, values(Subject) as Subject, max(TotalScore) as TotalScore, max(Score1) as Score1, max(Score2) as Score2, max(Score3) as Score3 by Class
| table Class, Name, Subject, TotalScore, Score1, Score2, Score3

I think my search is going to display the following:

Class   Name               Subject       TotalScore  Score1  Score2  Score3
ClassA  Name1 Name2 Name3  Math English  240         85      95      80

This is the whole data in table format from scoreindex:

Class   Name   Subject  TotalScore  Score1  Score2  Score3
ClassA  Name1  Math     170         60      40      70
ClassA  Name1  English  195         85      60      50
ClassA  Name2  Math     175         50      60      65
ClassA  Name2  English  240         80      90      70
ClassA  Name3  Math     170         40      60      70
ClassA  Name3  English  230         55      95      80
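One way to get the 260 (a sketch, not the only approach): keep the stats search and append an eval that sums the per-column maxima, e.g. `| eval MaxTotalScore = Score1 + Score2 + Score3`. The arithmetic with the question's data, checked in Python:

```python
# Rows from the question's scoreindex: (Name, Subject, TotalScore, Score1, Score2, Score3)
rows = [
    ("Name1", "Math", 170, 60, 40, 70),
    ("Name1", "English", 195, 85, 60, 50),
    ("Name2", "Math", 175, 50, 60, 65),
    ("Name2", "English", 240, 80, 90, 70),
    ("Name3", "Math", 170, 40, 60, 70),
    ("Name3", "English", 230, 55, 95, 80),
]

# Per-column maxima, as stats max(...) would return them
max_total = max(r[2] for r in rows)                        # max(TotalScore) -> 240
max_scores = [max(r[i] for r in rows) for i in (3, 4, 5)]  # [85, 95, 80]

# "Max TotalScore" = sum of the per-column maxima, computed after aggregation
max_total_score = sum(max_scores)
print(max_total_score)  # 260
```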
Hi all, I have to parse logs extracted from Logstash. I'm receiving Logstash logs in JSON format, and almost all the fields I need are already parsed and available in the JSON. My issue is that the event raw data is in a field called "message", and those fields aren't automatically extracted as I would like. I'd like to avoid re-parsing all data sources and creating custom add-ons for every data source. Has anybody encountered this kind of integration and knows a way to use standard add-ons to parse only the message field? Thank you for your help. Ciao. Giuseppe
We use the ansible-role-for-splunk framework found on GitHub: https://github.com/schneewe/ansible-role-for-splunk It supports app deployments through the following task: https://github.com/schneewe/ansible-role-for-splunk/blob/master/roles/splunk/tasks/configure_apps.yml But this seems to require a full Search Head Cluster, and we only have a single search head node. Isn't the single-search-head setup supported by this framework, or am I just missing something?
Hi, I am using the same sourcetype on the same file. One copy comes in via a forwarder and the other is uploaded via the GUI. However, the forwarder is not extracting the fields. This means I have to use spath to access the fields, which is a pain. Below is a file from a forwarder; we can see the fields are not extracted. Below that is the same file uploaded, in which case the fields are extracted. This is the sourcetype:

[import_json_2]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIMESTAMP_FIELDS = start_time
TZ = Asia/Beirut
category = Structured
description = JavaScript Object Notation format. For more information, visit http://json.org/
disabled = false
pulldown_type = 1

Any ideas? Thanks in advance. Rob
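A common cause worth ruling out (an assumption about this setup, not a confirmed diagnosis): INDEXED_EXTRACTIONS runs on the forwarder for monitored files, so the stanza must also be deployed in props.conf on the forwarder itself, not only on the indexer or search head. A minimal sketch of the forwarder-side stanza:

```
# props.conf on the forwarder that monitors the file (sketch)
[import_json_2]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = start_time
TZ = Asia/Beirut
```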
Hi all, in my current dashboard I have several text inputs that colleagues can use to find various information. Sometimes it takes a while for their information to appear. Is there a way to add a loading notification/alert to advise colleagues that Splunk is retrieving the information but may take some time? The delay usually occurs only on their first search; thereafter the searches are pretty much instant. Many thanks, Paula
Name                  sku    kit
NAC-D-CDSK-DLS-05.90  NAC-D
                             HJA-JEOE-DNDN-94.4.0

This is my data. I want to replace NAC-D with ANT-P for multiple values. This is my search query:

| eval sku = if(name=="", substr(kit,0,5), substr(name,0,5))
| eval sku = case(sku=="NAC-D","ANT-P ", sku=="DHV-K","ABD-U", true(), sku)
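Note that SPL's substr() is 1-indexed, so substr(name,1,5) is the usual way to take the first five characters. The mapping logic itself can be sketched in Python (derive_sku and the sample values are illustrative, not part of the original search):

```python
# Prefix-to-replacement mapping taken from the question's case() expression
replacements = {"NAC-D": "ANT-P", "DHV-K": "ABD-U"}

def derive_sku(name, kit):
    # First five characters of name, falling back to kit when name is empty
    prefix = (kit if name == "" else name)[:5]
    return replacements.get(prefix, prefix)

print(derive_sku("NAC-D-CDSK-DLS-05.90", ""))  # ANT-P
print(derive_sku("", "HJA-JEOE-DNDN-94.4.0"))  # HJA-J
```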
Hi, I have created a custom app to implement ACME on search head cluster members with a script on bin folder that update files/certificates on 3 folders ./acme ./certs ./backup the content of these folders are required to be different on each server (deployer and 3 members). How to correctly deploy/implement this configuration? Thanking you in advance, Graça
If I have a lookup table that contains the following:

error,priority
Unable to find any company of ID,P2
500 Internal Server Error,P1

And a result query with fields:

500 Internal Server Error: {xxx}
Unable to find any company of ID: xxx

Using the query below only brings back direct matches:

<search query> | lookup _error_message_prority error AS ErrorMessage OUTPUTNEW Priority AS Priority

Is there a way to use wildcards, 'like', or 'contains' when using lookup tables in Splunk Cloud?
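Splunk lookup definitions do support a wildcard match type (match_type = WILDCARD(error) in the lookup definition or transforms.conf, with * appended to the lookup values); whether that is the right fit here is worth confirming in the docs. The intended matching behaviour, sketched in Python with fnmatch:

```python
from fnmatch import fnmatch

# Lookup rows rewritten with trailing wildcards (hypothetical)
lookup = [
    ("Unable to find any company of ID*", "P2"),
    ("500 Internal Server Error*", "P1"),
]

def priority_for(error_message):
    # Return the priority of the first pattern the message matches
    for pattern, priority in lookup:
        if fnmatch(error_message, pattern):
            return priority
    return None

print(priority_for("500 Internal Server Error: {xxx}"))       # P1
print(priority_for("Unable to find any company of ID: xxx"))  # P2
```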
Hello! According to ITSI documentation (https://docs.splunk.com/Documentation/ITSI/4.17.1/Configure/KVPerms) there is a KV store called "maintenance_calendar" that contains maintenance window details. I need to run some searches on the schedules, but I cannot access the data in the KV store due to the error:  Is it possible to achieve what I am looking to do?  Thank you and best regards, Andrew
When can customers with existing SOAR instances expect to get migrated from the trial MC instance?