All Posts

@ITWhisperer please find the raw event. { [-]    adf: true    all_request_headers: {[-]      Accept: */*      Content-Length: 0      Content-Type: text/plain      Cookie: Cookie1=Salvin      Host: wasphictst-wdc.hc.cloud.uk.sony     User-Agent: insomnia/2021.5.3    }    all_response_headers: { [-]      Connection: keep-alive      Content-Length: 196      Content-Type: text/html; charset=iso-8859-1      Date: Fri, 14 Feb 2025 15:51:13 GMT      Server: Apache/2.4.37 (Red Hat Enterprise Linux)      Strict-Transport-Security: max-age=31536000; includeSubDomains    }    avg_ingress_latency_be: 0    avg_ingress_latency_fe: 0    cacheable: true    client_dest_port: 443    client_insights:    client_ip: 128.168.178.113    client_rtt: 1    client_src_port: 24487    compression: NO_COMPRESSION_CAN_BE_COMPRESSED    compression_percentage: 0    conn_est_time_be: 0    conn_est_time_fe: 0    headers_received_from_server: {[+]    }    headers_sent_to_server: { [+]  }    host: xyz    http_version: 1.1    jwt_log: { [+]    }    log_id: 108692    max_ingress_latency_be: 0    max_ingress_latency_fe: 0    method: GET    persistent_session_id: 3472328296900352087    pool: pool-cac2726e-acd1-4225-8ac8-72ebd82a57a6    pool_name: xxxxx    report_timestamp: 2025-02-14T15:51:13.176715Z    request_content_type: text/plain    request_headers: 833    request_id: 9mY-Spaj-RgC9    request_length: 193    request_state: AVI_HTTP_REQUEST_STATE_SEND_RESPONSE_BODY_TO_CLIENT    response_code: 404    response_content_type: text/html; charset=iso-8859-1    response_headers: 13    response_length: 6148    response_time_first_byte: 81    response_time_last_byte: 81    rewritten_uri_query: test=%26%26%20whoami    server_conn_src_ip: 128.160.77.235    server_dest_port: 80    server_ip: 128.160.88.68    server_name: 128.160.88.68    server_response_code: 404    server_response_length: 373    server_response_time_first_byte: 78    server_response_time_last_byte: 81    server_rtt: 3    server_src_port: 25921   
 servers_tried: 1    service_engine: GB-DRN-AB-Tier2-se-bmqhk    significant: 0    significant_log: [ [+]    ]    sni_hostname: xyx    source_ip: xxxxxx    ssl_cipher: TLS_AES_256_GCM_SHA384    ssl_session_id: bc586cf2272c7130a6e90551566bf12c    ssl_version: TLSv1.3    tenant_name: admin    udf: false    uri_path: /cmd    uri_query: test=&& whoami    user_agent: insomnia/2021.5.3    vcpu_id: 0    virtualservice: virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7    vs_ip: 128.160.71.101    vs_name: xxx-443    waf_log: { [-]      allowlist_configured: false      allowlist_processed: false      application_rules_configured: false      application_rules_processed: false      latency_request_body_phase: 1544      latency_request_header_phase: 351      latency_response_body_phase: 15      latency_response_header_phase: 50      memory_allocated: 71496      omitted_app_rule_stats: {[+]      }      omitted_signature_stats: {[+]      }      psm_configured: false      psm_processed: false      rules_configured: true      rules_processed: true      status: PASSED    } }   They want all_request_headers, all_response_headers, waf_log details to be viewed in a dashboard manner and any other important panels which makes sense.
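If the JSON fields are auto-extracted (or can be extracted with spath), one panel per section keeps the dashboard readable: a table panel for request headers, one for response headers, and a timechart for WAF verdicts. A minimal sketch of the table panel, assuming the events live in an index called `avi_waf` (substitute your own index and sourcetype):

```
index=avi_waf
| spath
| table _time method uri_path response_code "all_request_headers.Host" "all_request_headers.User-Agent" "all_response_headers.Server" "waf_log.status"
```

For a WAF-status panel, something like `index=avi_waf | timechart count by waf_log.status` would show PASSED vs blocked requests over time.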
@tscroggins , is there an alternative to using xyseries? The results are limited, so the displayed results are truncated.
F5 WAF logs
Hi @michael_vi , as @richgalloway and @kiran_panchavat said, you can use regex101 to find the correct regex to cut a part of your JSON. Only one point of attention: the JSON format has a well-defined structure, so beware when cutting a part of the event, because if you break the JSON structure, INDEXED_EXTRACTIONS=JSON and the spath command will not work correctly, and you will have to parse all the fields manually! Ciao. Giuseppe
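To pull out just one branch of the JSON without touching the raw event at all, spath's path argument is usually safer than a regex. A small sketch along those lines (index, sourcetype, and field names are assumptions based on the events discussed in this thread; adjust to your data):

```
index=your_index sourcetype=your_sourcetype
| spath path=waf_log.status output=waf_status
| spath path=all_request_headers.Host output=req_host
| stats count by waf_status req_host
```

This way the original JSON stays intact and only the extracted fields are worked with.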
Please share some raw anonymised sample events in a code block using the </> button so we can see what you are dealing with.
I didn't quite follow, but this is the query I have built so far. It captures the time difference as HH:MM:SS in the stats view, but I want to display the same duration in the chart as well:

index=music Job=*
| stats values(host) as Host values(Job) as Job, earliest(_time) as start_time latest(_time) as end values(x) as "File Name" by oid
| eval Duration=(end-start_time)
| eval end = strftime(end,"%m-%d-%Y %H:%M:%S")
| eval start_time = strftime(start_time,"%m-%d-%Y %H:%M:%S")
| rename opid as OPID, start_time as "Start Time", end as "End Time"
| chart list(Duration) as Duration by "Start Time"
| fieldformat Duration=tostring(round(Duration, 0), "Duration")
Hi @dinesh001kumar , the only way is to put them in an app or an add-on and submit it for upload. If there isn't any issue, you can upload it. Otherwise you could ask Splunk Cloud Support, but I'm not sure they will do it. Ciao. Giuseppe
Start with your requirements. This is a very imprecise requirement. What does "neat" mean? What information are you being asked to show? Next, start creating searches to calculate that information from your logs. Then choose a way to display that information informatively. Finally, if you want more help here, please post your raw events, i.e. unformatted, preferably in a code block using the </> button above. That way we can simulate your situation and suggest some searches.
Hi @Karthikeya  It sounds like you might need to work with the dashboard users to understand exactly what they want out of the dashboard - what is their main goal when they look at it? We do not want to overwhelm the users with charts they cannot read or make sense of. I'd start by understanding the main purpose of the dashboard, and then the top 3-4 statistics or details they want to be able to see. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @livehybrid , using only Splunk, the only way is to index all the logs and use dedup in your searches, but that way you pay the license twice, because it isn't possible in Splunk to create a filter that avoids duplicates before indexing. The alternative is to collect the logs with rsyslog, write them to files, and pre-parse them with a script, but that is very heavy on the system. Ciao. Giuseppe
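For completeness, the search-time workaround Giuseppe mentions (it does not save license, since both copies are still indexed) is to deduplicate across the two indexes at search time. A sketch, using the sony_a/sony_b index names from this thread:

```
index=sony_a OR index=sony_b
| dedup _raw
| stats count by index sourcetype
```

`dedup _raw` keeps only the first copy of each identical event; replace the final stats with whatever reporting you actually need.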
Ah, is there anything unique in a pair of events to split them by? Or anything on the event to show whether it is the start or the end? Please could you share some anonymised sample events for us to look at in order to help further? Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi, I think ultimately this might depend on the source of the data. What are you sending to the syslog server?
There are multiple jobs, each with a unique start and end time.
Hi @sureshkumaar , it isn't a good idea to attach a new question to a closed question, even if it is on the same topic: it's always better to open a new one, so you'll surely get a faster and probably better answer. Anyway, if the regex you're using matches all the events to filter, it's correct and you can use it. Ciao. Giuseppe
I have a requirement to create a dashboard with the following JSON data:

all_request_headers: {
    Accept: */*
    Content-Length: 0
    Content-Type: text/plain
    Cookie: Cookie1=Salvin
    Host: wasphictst-wdc.hc.cloud.uk.sony
    User-Agent: insomnia/2021.5.3
}
all_response_headers: {
    Connection: keep-alive
    Content-Length: 196
    Content-Type: text/html; charset=iso-8859-1
    Date: Fri, 14 Feb 2025 15:51:13 GMT
    Server: Apache/2.4.37 (Red Hat Enterprise Linux)
    Strict-Transport-Security: max-age=31536000; includeSubDomains
}
waf_log: {
    allowlist_configured: false
    allowlist_processed: false
    application_rules_configured: false
    application_rules_processed: false
    latency_request_body_phase: 1544
    latency_request_header_phase: 351
    latency_response_body_phase: 15
    latency_response_header_phase: 50
    memory_allocated: 71496
    omitted_app_rule_stats: { }
    omitted_signature_stats: { }
    psm_configured: false
    psm_processed: false
    rules_configured: true
    rules_processed: true
    status: PASSED
}

Fields are getting auto-extracted, like waf_log.allowlist_configured etc. They want a neat dashboard for request headers, response headers, WAF log details and so on. How do I create this dashboard? I am confused: if we create panels based on individual fields, there will be very many panels, right?
Is the data being sent from the origin to both syslog servers at the same time? -- Yes, both syslog servers pick up the same log and ingest it at the same time. Is it possible to control this behaviour so it sends only to the primary, or to the standby if it fails? --- How can we achieve this?
Hi @splunklearner  It sounds like your duplication is happening before the data hits Splunk - it's not easy to deduplicate it on the way through, so instead you might want to look at how the data is sent to syslog. Is the data being sent from the origin to both syslog servers at the same time? Is it possible to control this behaviour so it sends only to the primary, or to the standby if it fails? Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi @dinesh001kumar  You can add files into an app and then package it to be uploaded to Splunk Cloud - it isn't possible to upload CSS/HTML via the UI in Splunk Cloud. Create an app and add the required files to <APP NAME>/appserver/static/ so that they are then accessible within the app. Have a look at https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/UseCSS#Customize_styling_and_behavior_for_one_dashboard for more info on using CSS too. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
We have two standalone syslog servers, both active (one named primary, the other contingency), each with a UF installed that forwards data to Splunk. We have different indexes configured for these two servers. The issue is that the same log is getting indexed on both servers, which results in duplicate logs in Splunk.

Syslog 1 --- index = sony_a == Same log
Syslog 2 --- index = sony_b == Same log

When we search with index=sony* we get the same logs from the two indexes, which is duplication. How do we avoid the two syslog servers indexing the same log twice?
Is Job unique for each start/end? If so, I would suggest something like this:

index=music Job=*
| stats earliest(_time) as start_time, latest(_time) as end_time by Job
| eval Duration=(end_time-start_time)
``` The rest of your SPL here, such as ```
| chart values(Duration) as Duration by start_time

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
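To keep the chart working while still displaying the duration as HH:MM:SS, a `fieldformat` with `tostring(X, "duration")` can be appended after the chart. A sketch building on the suggestion above (same assumed index and fields):

```
index=music Job=*
| stats earliest(_time) as start_time, latest(_time) as end_time by Job
| eval Duration=end_time-start_time
| eval start_time=strftime(start_time, "%m-%d-%Y %H:%M:%S")
| chart values(Duration) as Duration by start_time
| fieldformat Duration=tostring(Duration, "duration")
```

Because fieldformat only changes the rendering, the underlying value stays numeric, so the chart still sorts and plots correctly.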