Maybe you can share your current search that does this grouping. What's your logic for deciding that server 2 and server 3 need to be in a different server group to server 4, is it simply on the presence of a connection between those 2 services? @danspav 
Feb 3 11:10:15 server-server-server-server systemd[1]: Removed slice User Slice of UID 0.
Feb 3 04:14:23 server-server-server-server rsyslogd[679024]: imjournal: 16021 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 3 11:01:01 server-server-server-server CROND[3905399]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 3 11:10:55 server-server-server-server esfdaemon[3938104]: 0
Feb 3 10:24:36 server-server-server-server auditd[2689]: Audit daemon rotating log files

Is there a way to capture the whole line where the systemd, rsyslogd and auditd keywords match, using props.conf and transforms.conf? The configuration below only matches up to the specific keyword; how do I keep the remainder of the line after the keyword?

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = ^\w{3}\s\s\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}\s+(?:[+\-A-Z0-9]*\s+)?(systemd|rsyslogd|auditd)
DEST_KEY = queue
FORMAT = indexQueue
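One point worth noting: the REGEX in a transforms.conf routing rule is a partial match, so it only has to match somewhere in the event for the whole event to be routed; matching up to the keyword should already keep the complete line. A minimal Python sketch of that routing decision (an illustration, not Splunk's implementation; the hostname is matched here with \S+ and the timestamp spacing relaxed to \s+, because the lowercase sample hostnames would not match the posted [+\-A-Z0-9]* class):

```python
import re

# Sketch of the setparsing routing decision, on the assumption that a
# transforms.conf REGEX is an unanchored partial match: once it matches,
# the entire event is routed, not just the matched prefix.
KEYWORD_RE = re.compile(
    r"^\w{3}\s+\d{1,2}\s\d{1,2}:\d{1,2}:\d{1,2}\s+\S+\s+(systemd|rsyslogd|auditd)"
)

def route(event: str) -> str:
    # Mirrors the setnull/setparsing pair: keyword events go to indexQueue,
    # everything else falls through to nullQueue.
    return "indexQueue" if KEYWORD_RE.search(event) else "nullQueue"
```

Against the sample events above, the systemd, rsyslogd and auditd lines would be routed to indexQueue in full, while the CROND and esfdaemon lines would fall through to nullQueue.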
As I said, assuming your events have already been ingested as JSON. It looks like they aren't, or at least the fields you need aren't. Try this

| spath all_request_headers
| fields _time all_request_headers
| spath input=all_request_headers
| fields - _raw all_request_headers
| fields _time all_request_headers
| spath input=all_request_headers
| fields - _raw all_request_headers

Running this search after my index and sourcetype filter shows nothing in events. Can you please help @ITWhisperer?
Thanks for providing the raw example. It looks like some of the header fields have quite high entropy, meaning they could create a lot of distinct values for a dashboard/table. Are they wanting to see rare or most frequent values for these headers, perhaps? Presumably there are some headers, such as "Date", which aren't going to add much value? As previously mentioned, I think it's important to understand the purpose of the dashboard, otherwise the panels created might be meaningless and a waste of search time. Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
You could start with a dashboard with a couple of statistics tables using the following searches (assuming your events have already been ingested as JSON)

| fields _time all_request_headers
| spath input=all_request_headers
| fields - _raw all_request_headers

| fields _time all_response_headers
| spath input=all_response_headers
| fields - _raw all_response_headers

| fields _time waf_log
| spath input=waf_log
| fields - _raw waf_log

Or you could combine them into a single table

| fields _time all_request_headers all_response_headers waf_log
| spath input=all_request_headers
| spath input=all_response_headers
| spath input=waf_log
| fields - _raw all_request_headers all_response_headers waf_log

To be honest, these won't be very useful, but it is what you/they asked for and might help you clarify what exactly they do want to see in the dashboard.
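For intuition, a rough Python analogue of what `spath input=<field>` does with these events (a hedged sketch, not Splunk's implementation): it parses the JSON stored in the named field and surfaces each of its keys as a field of its own. The sample values are taken from the event posted in this thread.

```python
import json

def spath_like(raw_event: str, input_field: str) -> dict:
    """Parse the JSON stored in one field of an event and return its keys
    as top-level fields, roughly like `| spath input=<field>`."""
    event = json.loads(raw_event)
    return dict(event.get(input_field, {}))

raw = ('{"all_request_headers": {"Host": "wasphictst-wdc.hc.cloud.uk.sony", '
       '"User-Agent": "insomnia/2021.5.3", "Content-Length": 0}}')
headers = spath_like(raw, "all_request_headers")
```

Each extracted header then becomes a column that the statistics tables above can count or list.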
I have a table with hundreds of thousands of rows which I am seeking to visualise in Splunk. The data is far too big for the Network Viz diagram to show without latency issues, so I am seeking to group it down and will chunk it further with filters. Users are mainly interested in the services we have and how they connect with other services through our assets. Details like server names are less important, so those can be grouped. I am having issues with the default network viz diagram and its grouping behaviour.

Example Data:

Parent    Child      Parent Class  Child Class
Service1  Service2   Service       Service
Server1   Server2    Server        Server
Server2   Server3    Server        Server
Service1  Server3    Service       Server
Service2  Server2    Service       Server
Service3  Server4    Service       Server
Service1  Database1  Service       Database
Service3  Database1  Service       Database
Service3  Database2  Service       Database
Server3   Network1   Server        Network
Server4   Network1   Server        Network

Desired look below. Notice how there are multiple server groups rather than just one. We can clearly identify that Service 1 and 2 are linked through servers; Service 3 is separate and is connected to a different group of servers.

And here is my attempt at using the group functionality for the network viz diagram, where I used the asset class to make colour groupings. Notice that the groups are just generic and cannot be named; all servers have been grouped together, making it look like all 3 services are linked through servers. Expanding this diagram clearly shows they are not linked through servers: Service 3 is connected to a different server.

Is there a way to reach my desired grouping method with the default Splunk tools? Is there another add-on I could utilise?
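The grouping described here amounts to computing connected components over the Server-class nodes only. A hypothetical pre-processing sketch (the edge list and server set are copied straight from the example table; each component could then be collapsed into a single named node before feeding the edges to the viz):

```python
from collections import defaultdict

# Edge list copied from the example table; separate "server groups" are the
# connected components formed by server-to-server edges only.
edges = [
    ("Service1", "Service2"), ("Server1", "Server2"), ("Server2", "Server3"),
    ("Service1", "Server3"), ("Service2", "Server2"), ("Service3", "Server4"),
    ("Service1", "Database1"), ("Service3", "Database1"),
    ("Service3", "Database2"), ("Server3", "Network1"), ("Server4", "Network1"),
]
servers = {"Server1", "Server2", "Server3", "Server4"}

parent = {s: s for s in servers}          # union-find forest over servers

def find(x: str) -> str:
    while parent[x] != x:
        parent[x] = parent[parent[x]]     # path halving
        x = parent[x]
    return x

for a, b in edges:
    if a in servers and b in servers:     # union only server-server links
        parent[find(a)] = find(b)

groups = defaultdict(set)
for s in servers:
    groups[find(s)].add(s)
# Server1/2/3 form one group and Server4 its own, matching the desired
# "multiple server groups" picture.
```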
@niketnilay is it possible without using the additional extension, Timeline Custom Visualization?
{"adf":true,"significant":0,"udf":false,"virtualservice":"virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7","report_timestamp":"2025-02-14T15:51:13.176715Z","service_engine":"GB-DRN-AB-Tier2-se-bmqhk","vcpu_id":0,"log_id":108692,"client_ip":"128.168.178.113","client_src_port":24487,"client_dest_port":443,"client_rtt":1,"ssl_session_id":"bc586cf2272c7130a6e90551566bf12c","ssl_version":"TLSv1.3","ssl_cipher":"TLS_AES_256_GCM_SHA384","sni_hostname":"wasphictst-wdc.hc.cloud.uk.sony","http_version":"1.1","method":"GET","uri_path":"/cmd","uri_query":"test=&& whoami","rewritten_uri_query":"test=%26%26%20whoami","user_agent":"insomnia/2021.5.3","host":"wasphictst-wdc.hc.cloud.uk.sony","persistent_session_id":3472328296900352087,"request_content_type":"text/plain","response_content_type":"text/html; charset=iso-8859-1","request_length":193,"cacheable":true,"pool":"pool-cac2726e-acd1-4225-8ac8-72ebd82a57a6","pool_name":"p-wasphictst-wdc.hc.cloud.uk.sony-wdc-443","server_ip":"128.160.88.68","server_name":"128.160.88.68","server_conn_src_ip":"128.160.77.235","server_dest_port":80,"server_src_port":25921,"server_rtt":3,"server_response_length":373,"server_response_code":404,"server_response_time_first_byte":78,"server_response_time_last_byte":81,"response_length":6148,"response_code":404,"response_time_first_byte":81,"response_time_last_byte":81,"compression_percentage":0,"compression":"NO_COMPRESSION_CAN_BE_COMPRESSED","client_insights":"","request_headers":833,"response_headers":13,"request_state":"AVI_HTTP_REQUEST_STATE_SEND_RESPONSE_BODY_TO_CLIENT","all_request_headers":{"Host":"wasphictst-wdc.hc.cloud.uk.sony","User-Agent":"insomnia/2021.5.3","Cookie":"Cookie1=Jijin","Content-Type":"text/plain","Accept":"*/*","Content-Length":0},"all_response_headers":{"Content-Type":"text/html; charset=iso-8859-1","Content-Length":196,"Connection":"keep-alive","Date":"Fri, 14 Feb 2025 15:51:13 GMT","Server":"Apache/2.4.37 (Red Hat Enterprise 
Linux)","Strict-Transport-Security":"max-age=31536000; includeSubDomains"},"significant_log":["ADF_HTTP_CONTENT_LENGTH_HDR_WITH_UNSUPPORTED_METHOD","ADF_RESPONSE_CODE_4XX"],"headers_sent_to_server":{"X-Forwarded-For":"128.168.178.113","Host":"wasphictst-wdc.hc.cloud.uk.sony","Content-Length":0,"User-Agent":"insomnia/2021.5.3","Cookie":"Cookie1=Jijin","Content-Type":"text/plain","Accept":"*/*","X-Forwarded-Proto":"https"},"headers_received_from_server":{"Date":"Fri, 14 Feb 2025 15:51:13 GMT","Server":"Apache/2.4.37 (Red Hat Enterprise Linux)","Content-Length":196,"Content-Type":"text/html; charset=iso-8859-1"},"vs_ip":"128.160.71.101","waf_log":{"status":"PASSED","latency_request_header_phase":351,"latency_request_body_phase":1544,"latency_response_header_phase":50,"latency_response_body_phase":15,"rules_configured":true,"psm_configured":false,"application_rules_configured":false,"allowlist_configured":false,"allowlist_processed":false,"rules_processed":true,"psm_processed":false,"application_rules_processed":false,"memory_allocated":71496,"omitted_signature_stats":{"rules":0,"match_elements":0},"omitted_app_rule_stats":{"rules":0,"match_elements":0}},"request_id":"9mY-Spaj-RgC9","servers_tried":1,"jwt_log":{"is_jwt_verified":false},"max_ingress_latency_fe":0,"avg_ingress_latency_fe":0,"conn_est_time_fe":0,"max_ingress_latency_be":0,"avg_ingress_latency_be":0,"conn_est_time_be":0,"source_ip":"128.168.178.113","vs_name":"v-wasphictst-wdc.hc.cloud.uk.sony-443","tenant_name":"admin"}
Didn't get you, but this is the query which I have built up so far, which captures the time difference in HH:MM:SS in the stats view; I want to display the same duration in a chart as well.

index=music Job=*
| stats values(host) as Host values(Job) as Job, earliest(_time) as start_time latest(_time) as end values(x) as "File Name" by oid
| eval Duration=(end-start_time)
| eval end = strftime(end,"%m-%d-%Y %H:%M:%S")
| eval start_time = strftime(start_time,"%m-%d-%Y %H:%M:%S")
| rename opid as OPID, start_time as "Start Time", end as "End Time"
| chart list(Duration) as Duration by "Start Time"
| fieldformat Duration=tostring(round(Duration, 0), "Duration")

Current stats output: (screenshot)

I want to display it like this: (screenshot)
The simple answer is that you can't, or at least not easily, with standard charts. The x-axis on a bar chart or y-axis on a column chart is numeric and doesn't show strings (which is what you seem to be trying to show your duration as).
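A common workaround is to chart the numeric seconds, which the axis can plot, and keep the HH:MM:SS rendering for tables and tooltips. For reference, this is roughly what `tostring(<seconds>, "duration")` renders (a hedged sketch; Splunk's own formatter also handles day counts and fractional seconds):

```python
def to_hms(seconds: float) -> str:
    # Render a duration in seconds as HH:MM:SS, similar to what
    # tostring(x, "duration") shows for sub-24-hour values.
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"
```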
What do you mean? I don't see any truncation.

|makeresults format=csv data="field1,field2,raw_data
1,1,\"Very long string we don't want truncated because we don't think it's necessary or even desired in our particular use case. So we're trying to insert some rubbish text here to make it longer. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enimaad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum\"
1,2,\"Very long string we don't want truncated because we don't think it's necessary or even desired in our particular use case. So we're trying to insert some rubbish text here to make it longer. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enimaad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum\"
2,1,\"Very long string we don't want truncated because we don't think it's necessary or even desired in our particular use case. So we're trying to insert some rubbish text here to make it longer. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enimaad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum\"
2,2,\"Very long string we don't want truncated because we don't think it's necessary or even desired in our particular use case. So we're trying to insert some rubbish text here to make it longer. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enimaad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum\"
"
| xyseries field1 field2 raw_data
This is not the raw unformatted event - it has been prettied for display. It should look something like

{
    "adf": true,
    "all_request_headers": {
        "Accept": "*/*",
        "Content-Length": 0,

Please repost your event in this style.
@ITWhisperer please find the raw event. { [-]    adf: true    all_request_headers: {[-]      Accept: */*      Content-Length: 0      Content-Type: text/plain      Cookie: Cookie1=Salvin      Host: wasphictst-wdc.hc.cloud.uk.sony     User-Agent: insomnia/2021.5.3    }    all_response_headers: { [-]      Connection: keep-alive      Content-Length: 196      Content-Type: text/html; charset=iso-8859-1      Date: Fri, 14 Feb 2025 15:51:13 GMT      Server: Apache/2.4.37 (Red Hat Enterprise Linux)      Strict-Transport-Security: max-age=31536000; includeSubDomains    }    avg_ingress_latency_be: 0    avg_ingress_latency_fe: 0    cacheable: true    client_dest_port: 443    client_insights:    client_ip: 128.168.178.113    client_rtt: 1    client_src_port: 24487    compression: NO_COMPRESSION_CAN_BE_COMPRESSED    compression_percentage: 0    conn_est_time_be: 0    conn_est_time_fe: 0    headers_received_from_server: {[+]    }    headers_sent_to_server: { [+]  }    host: xyz    http_version: 1.1    jwt_log: { [+]    }    log_id: 108692    max_ingress_latency_be: 0    max_ingress_latency_fe: 0    method: GET    persistent_session_id: 3472328296900352087    pool: pool-cac2726e-acd1-4225-8ac8-72ebd82a57a6    pool_name: xxxxx    report_timestamp: 2025-02-14T15:51:13.176715Z    request_content_type: text/plain    request_headers: 833    request_id: 9mY-Spaj-RgC9    request_length: 193    request_state: AVI_HTTP_REQUEST_STATE_SEND_RESPONSE_BODY_TO_CLIENT    response_code: 404    response_content_type: text/html; charset=iso-8859-1    response_headers: 13    response_length: 6148    response_time_first_byte: 81    response_time_last_byte: 81    rewritten_uri_query: test=%26%26%20whoami    server_conn_src_ip: 128.160.77.235    server_dest_port: 80    server_ip: 128.160.88.68    server_name: 128.160.88.68    server_response_code: 404    server_response_length: 373    server_response_time_first_byte: 78    server_response_time_last_byte: 81    server_rtt: 3    server_src_port: 25921   
 servers_tried: 1    service_engine: GB-DRN-AB-Tier2-se-bmqhk    significant: 0    significant_log: [ [+]    ]    sni_hostname: xyx    source_ip: xxxxxx    ssl_cipher: TLS_AES_256_GCM_SHA384    ssl_session_id: bc586cf2272c7130a6e90551566bf12c    ssl_version: TLSv1.3    tenant_name: admin    udf: false    uri_path: /cmd    uri_query: test=&& whoami    user_agent: insomnia/2021.5.3    vcpu_id: 0    virtualservice: virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7    vs_ip: 128.160.71.101    vs_name: xxx-443    waf_log: { [-]      allowlist_configured: false      allowlist_processed: false      application_rules_configured: false      application_rules_processed: false      latency_request_body_phase: 1544      latency_request_header_phase: 351      latency_response_body_phase: 15      latency_response_header_phase: 50      memory_allocated: 71496      omitted_app_rule_stats: {[+]      }      omitted_signature_stats: {[+]      }      psm_configured: false      psm_processed: false      rules_configured: true      rules_processed: true      status: PASSED    } }   They want all_request_headers, all_response_headers, waf_log details to be viewed in a dashboard manner and any other important panels which makes sense.
@tscroggins, is there an alternative to using xyseries, because results are limited and the displayed results are therefore truncated?
F5 WAF logs
Hi @michael_vi, as @richgalloway and @kiran_panchavat said, you can use regex101 to find the correct regex to cut a part of your JSON. Only one point of attention: the JSON format has a well defined structure, so beware when cutting out part of the event, because if you break the JSON structure, INDEXED_EXTRACTIONS=JSON and the spath command will not work correctly, and you will have to manually parse all the fields! Ciao. Giuseppe
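Giuseppe's warning can be seen with a toy event (hypothetical field names, for illustration only): removing a chunk of a JSON event with a regex can leave the remainder syntactically invalid, at which point automatic extraction fails.

```python
import json
import re

event = '{"keep": 1, "drop": {"a": 2}}'

# Naively cutting the "drop" object out with a regex leaves a dangling
# comma, so the result is no longer valid JSON.
broken = re.sub(r'"drop": \{"a": 2\}', "", event)

def parses_as_json(text: str) -> bool:
    """Return True if the text is still structurally valid JSON."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False
```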
Please share some raw anonymised sample events in a code block using the </> button so we can see what you are dealing with.
Hi @dinesh001kumar, the only way is to put them in an app or an add-on and submit it for upload. If there isn't any issue, you can upload it. Otherwise you could ask Splunk Cloud Support, but I'm not sure that they will do it. Ciao. Giuseppe