All Posts


The COALESCE did the trick. You are awesome. Thanks for all of the help. I can finally get a good night's rest. Thanks, Tom
@livehybrid For your information: I changed the KV store port from 8191 to 8192 and it has been working properly since then.
Hello Splunkers! I have a Splunk dashboard where I am using a drilldown to dynamically load images from a media server. However, the images do not load initially. The strange part is that when I go to Edit > Source and then simply save the dashboard again (without making any changes), the images start loading correctly. Why is this happening, and how can I permanently fix this issue without needing to manually edit and save the dashboard every time? Any insights or solutions would be greatly appreciated! I always get the error below; after performing the Edit > Source action, the images load perfectly.
Accepting this post as a solution since my "question" contains the solution and was really for information sharing purposes.
Please share your dashboard configuration source
Can you provide a sample?
We have a requirement to exclude or remove a few fields from the events we receive in Splunk. We have already extracted the JSON data via a condition in props.conf. Below is a sample event:

    adf: true
    all_request_headers: { ... }
    all_response_headers: { ... }
    avg_ingress_latency_be: 0
    avg_ingress_latency_fe: 0
    cacheable: true
    client_dest_port: 443
    client_insights:
    client_ip: XXXXXXXX
    client_rtt: 1
    client_src_port: 13353
    compression: NO_COMPRESSION_CAN_BE_COMPRESSED
    compression_percentage: 0
    conn_est_time_be: 6
    conn_est_time_fe: 0
    headers_received_from_server: { ... }
    headers_sent_to_server: { ... }
    host: wasphictst-wdc.hc.cloud.uk.sony
    http_version: 1.1
    jwt_log: { ... }
    log_id: 121721
    max_ingress_latency_be: 0
    max_ingress_latency_fe: 0
    method: GET
    persistent_session_id: 3472328296699025517
    pool: pool-cac2726e-acd1-4225-8ac8-72ebd82a57a6
    pool_name: p-wasphictst-wdc.hc.cloud.uk.sony-wdc-443
    report_timestamp: 2025-02-18T11:33:23.069736Z
    request_headers: 577
    request_id: euh-xfiN-7Ikq
    request_length: 148
    request_state: AVI_HTTP_REQUEST_STATE_SEND_RESPONSE_BODY_TO_CLIENT
    response_code: 404
    response_content_type: text/html; charset=iso-8859-1
    response_headers: 13
    response_length: 6148
    response_time_first_byte: 61
    response_time_last_byte: 61
    rewritten_uri_query: test=%26%26%20whoami
    server_conn_src_ip: 128.160.77.237
    server_dest_port: 80
    server_ip: 128.160.73.123
    server_name: 128.160.73.123
    server_response_code: 404
    server_response_length: 373
    server_response_time_first_byte: 52
    server_response_time_last_byte: 61
    server_rtt: 9
    server_src_port: 56233
    servers_tried: 1
    service_engine: GB-DRN-AB-Tier2-se-vxeuz
    significant: 0
    significant_log: [ ... ]
    sni_hostname: wasphictst-wdc.hc.cloud.uk.sony
    source_ip: 128.164.6.186
    ssl_cipher: TLS_AES_256_GCM_SHA384
    ssl_session_id: 935810081909dc8c018416502ece5d00
    ssl_version: TLSv1.3
    tenant_name: admin
    udf: false
    uri_path: /cmd
    uri_query: test=&& whoami
    user_agent: insomnia/2021.5.3
    vcpu_id: 0
    virtualservice: virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7
    vs_ip: 128.160.71.101
    vs_name: v-wasphictst-wdc.hc.cloud.uk.sony-443
    waf_log: { ... }

We need to remove a few fields from new and existing events, such as "avg_ingress_latency_be", "avg_ingress_latency_fe", "request_state", "server_response_code", and many other fields, while onboarding. Where can I write the logic to exclude these fields? The app owners don't want these fields when viewing the data, and the source cannot be edited. We need to do this before onboarding.
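One place this kind of logic can live is an ingest-time transform on the indexing tier. A hedged sketch, assuming Splunk 9.0+ (for the json_delete eval function) and placeholder stanza/sourcetype names — adapt to your own configuration and note that this only affects newly indexed events, not data already on disk:

```
# props.conf (on the indexer or heavy forwarder)
[your_sourcetype]
TRANSFORMS-dropfields = strip_unwanted_fields

# transforms.conf
[strip_unwanted_fields]
INGEST_EVAL = _raw=json_delete(_raw, "avg_ingress_latency_be", "avg_ingress_latency_fe", "request_state", "server_response_code")
```

Because INGEST_EVAL rewrites _raw before indexing, the removed fields never reach the index; events that were indexed before the change would need to be reindexed (or masked at search time) to match.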
Hello, I have installed Splunk Enterprise on a Windows server in order to receive NetFlow (port 2055) and Syslog (port 514) data. In "Data Inputs" I added UDP 2055 with sourcetype="Netflow" and UDP 514 with sourcetype="Syslog". In "Forwarding and receiving" and then "Forwarding defaults" I checked yes. But I can't see anything in Splunk. So I installed Wireshark, which does see the Syslog and NetFlow traffic. I checked with PowerShell that the port is open and that splunkd is listening (netstat -ano | findstr :2055) and (tasklist | findstr XXXX). I have also installed several add-ons, but with no conclusive result. Has anyone had this problem before, or any clues as to how to solve it? Thanks in advance.
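For reference, a UDP input configured through the UI ends up as an inputs.conf stanza like the following; a hedged sketch with typical attributes (values are examples, not your actual config):

```
# inputs.conf
[udp://2055]
sourcetype = netflow
connection_host = ip
no_appending_timestamp = true

[udp://514]
sourcetype = syslog
connection_host = ip
```

One caveat worth checking: a plain UDP input writes the raw datagrams as-is, and NetFlow is a binary protocol, so even when ingestion works the events are unreadable without a decoder (for example Splunk Stream or a NetFlow collector that exports text). Also verify that the Windows firewall allows splunkd (not just the port) and that no "Forwarding defaults" setting is sending the data onward to an output that doesn't exist, which can leave the local indexes empty.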
Hi @livehybrid  It worked perfectly!  Thank you so much
Hi, I have a KV store lookup which is populated automatically and contains arrays. How can I make it searchable like a normal lookup, or turn it into a proper file? Current CSV: I want the above KV store as a searchable lookup with proper segregation between rows.
Does anyone happen to know the following message? When I trigger a customized application, I get the following message: ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)
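Since this traceback comes from Python, a minimal sketch of what is going on may help: Python raises SSLCertVerificationError when the server's certificate chain cannot be traced back to a CA the process trusts — common when Splunk or the app talks to a server signed by an internal CA. The file path below is hypothetical; the fix is to point the context at whichever CA bundle actually signed the server certificate, rather than disabling verification.

```python
import ssl

# Default context: verification is on and uses the system trust store.
ctx = ssl.create_default_context()

# If the server uses an internal/corporate CA, load that CA's bundle
# (hypothetical path - substitute your own):
# ctx.load_verify_locations(cafile="/path/to/internal_ca.pem")

# Verification remains enabled either way:
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # prints True
```

If the error originates inside a Splunk app's alert action or modular input, the equivalent fix is usually adding the internal CA to the certificate bundle the app's HTTP library consults.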
I understood that, but once it is added to a dashboard with drilldown enabled for the respective field values, clicking any value should run a search. However, when I run the above search, the events are empty, showing only the time. I removed _raw from the fields - command. But I want to understand: why do we use the spath command here?
Try using the Save As button and save it to a dashboard. There is a tutorial on how to do this: https://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/WelcometotheSearchTutorial
After running this search, how do I create a dashboard with a single panel including all request headers?
Hi @sureshkumaar The value within the match command is actually a regular expression (I used a pipe-delimited list), so you could update this with a regex to match the filter you are looking for (e.g. hostname, space, keyword). You will only need the single INGEST_EVAL because it uses an IF statement and sets the queue to nullQueue if the match is not met. Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards, Will
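The approach described above might look like this in transforms.conf; a hedged sketch where the stanza name and keyword list are placeholders for your own values:

```
# transforms.conf
[route_or_drop]
INGEST_EVAL = queue=if(match(_raw, "systemd|rsyslogd|auditd"), queue, "nullQueue")

# props.conf
[your_sourcetype]
TRANSFORMS-routeordrop = route_or_drop
```

Events whose raw text matches the pipe-delimited regex keep their current queue and are indexed; everything else is routed to nullQueue and discarded before indexing.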
Hi @Jtru88 Congrats on passing your cert! I don't think you will get much from here in terms of job availability, but I would recommend checking LinkedIn, if you are on there, and other local job resources, as there is no job functionality within the forum. Be sure to also update Credly with your achievement so you can share a verified link on LinkedIn etc. and with potential customers. Best of luck! Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards, Will
"The only solution is to take the logs using rsyslog and write them to files, then pre-parse the logs using a script, but it's very heavy for the system." --> Can you please describe more about this and the script I need to use?
Hi @livehybrid, Thanks for the reply. I have 2 questions:

1. The IF condition which was given will pick the events wherever the keyword matches, right? Whether the keyword ("systemd", "rsyslogd", or "auditd") is at the start, middle, or end of the event. In my case I am looking for events to be picked into a sourcetype only when those keywords appear after the server name, e.g.:

    server-server-server-server systemd
    server-server-server-server rsyslogd

2. Do we also need the below in props.conf to ignore the other events being forwarded to the sourcetype?

    [sourcetype]
    TRANSFORMS-set = setnull
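If the keywords must appear immediately after the server name rather than anywhere in the event, the regex inside match can be anchored; a hedged sketch, assuming the hostname is the first whitespace-delimited token of the raw event:

```
INGEST_EVAL = queue=if(match(_raw, "^\S+\s+(systemd|rsyslogd|auditd)\b"), queue, "nullQueue")
```

Here ^\S+ consumes the server name, \s+ the separator, and the alternation only matches when one of the three keywords is the second token, so keywords elsewhere in the event no longer count as a match.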
Just passed my first cert. I'm located in the DC suburbs; is there any market for a cleared individual in the area?
A couple of things:

1. Can you confirm there's no event suppression rule?
2. Can you confirm the time ranges are exactly the same and not being rounded off (for example, if it's 24 hours, it's the same in both and not rounded)?
3. Can you confirm the result count difference between index=notable vs `notable` (the notable macro), and what is the count difference?
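The count comparison in the last point can be run as two quick searches; a hedged sketch, assuming Splunk Enterprise Security (where the `notable` macro typically wraps index=notable with additional suppression and status filtering):

```
index=notable | stats count

`notable` | stats count
```

If the second count is lower than the first, the gap is coming from filtering inside the macro (such as suppressed notables), which would explain results differing between the raw index and the macro-based view.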