All Posts


Hi @Karthikeya

This should be really easy to achieve by adding some simple props/transforms to your Indexers or HFs:

== props.conf ==
[yourSourceType]
TRANSFORMS-removeJsonKeys = removeJsonKeys1

== transforms.conf ==
[removeJsonKeys1]
INGEST_EVAL = _raw=json_delete(_raw, "key1", "nestedkey.subkey2")

You can also see how this would work in the UI, although obviously this isn't persistent. Here is a working example:

| makeresults
| eval _raw = "[{\"integrationName\":\"Opsgenie Edge Connector - Splunk\",\"alert\":{\"message\":\"[ThousandEyes] Alert for TMS Core Healthcheck\",\"id\":\"abc123xyz\"},\"action\":\"Create\"},{\"integrationName\":\"Opsgenie Edge Connector - Splunk\",\"alert\":{\"message\":\"[ThousandEyes] Alert for TMS Core Healthcheck\",\"id\":\"abc123xyz\"},\"action\":\"Close\"},{\"integrationName\":\"Opsgenie Edge Connector - Splunk\",\"alert\":{\"message\":\"[ThousandEyes] Alert for TMS Core Healthcheck\",\"id\":\"def456uvw\"},\"action\":\"Create\"},{\"integrationName\":\"Opsgenie Edge Connector - Splunk\",\"alert\":{\"message\":\"[ThousandEyes] Alert for TMS Core Healthcheck\",\"id\":\"def456uvw\"},\"action\":\"Close\"},{\"integrationName\":\"Opsgenie Edge Connector - Splunk\",\"alert\":{\"message\":\"[ThousandEyes] Alert for TMS Core Healthcheck\",\"id\":\"ghi789rst\"},\"action\":\"Create\"}]"
| eval events=json_array_to_mv(_raw)
| mvexpand events
| eval _raw=events
| fields _raw
| eval _raw=json_delete(_raw, "integrationName", "alert.id")

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
Hi @uagraw01

Have you configured the "Dashboards Trusted Domains List" to allow the domain/URL of the image you are trying to load? Check out https://docs.splunk.com/Documentation/Splunk/9.4.0/Admin/ConfigureDashboardsTrustedDomainsList for details on how to set this up.

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
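For reference, the trusted-domains list can also be managed directly in web.conf on the search head. A minimal sketch, assuming your media server is reachable at media.example.com (both the setting name suffix and the domain below are placeholders — substitute your own):

```
# web.conf on the search head (sketch; "mediaServer" and the domain are illustrative)
[settings]
dashboards_trusted_domain.mediaServer = media.example.com
```

A restart of splunkweb is typically needed for web.conf changes to take effect.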
The if function works like the ternary ?: operator in C. The proper syntax for setting a field conditionally is:

| eval field=if(something="something","value_when_true","value_when_false")
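When there are more than two outcomes, eval's case() function avoids nesting if() calls. A quick sketch with made-up field names (status_code and status are illustrative, not from the original question):

```
| makeresults
| eval status_code=404
| eval status=case(status_code<300, "ok",
                   status_code<500, "client_error",
                   true(), "server_error")
```

The true() clause at the end acts as the default branch, like the final else in a chain of ternaries.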
Hi @Sultan77,
sorry, what do you mean by "correlation with it"?
Ciao.
Giuseppe
You may have encountered a case where you have to update the operating system where Splunk resides, in this case from Red Hat 7.x to 9.x. Is there any consideration that should be taken into account? There are two instances that fulfill the indexer role, plus another cluster instance that manages both; the latter will not be updated. I was thinking of cloning each server, updating the clone in an isolated network, and then swapping them into the production environment one by one. Do you know if that would work, or should I apply another strategy?
I have Splunk 9.4 installed and this is the file in the root $SPLUNK_HOME folder. splunk-9.4.0-6b4ebe426ca6-windows-x64-manifest That file name changes based on the version you have installed.  Inside that file is a list of ~30K files with expected owner and permissions.  The health check verifies against all of the contents but only on Splunk restart.
No response for this issue from Splunk. I am probably going to write a bug report this week and see if that gets any traction.
The COALESCE did the trick. You are awesome. Thanks for all of the help. I can finally get a good night's rest. Thanks, Tom
@livehybrid For your information: I have changed the KV store port from 8191 to 8192 and it has been working properly since then.
Hello Splunkers!! I have a Splunk dashboard where I am using a drilldown to dynamically load images from a media server. However, the images do not load initially. The strange part is that when I go to Edit > Source and then simply Save the dashboard again (without making any changes), the images start loading correctly. Why is this happening, and how can I permanently fix this issue without needing to manually edit and save the dashboard every time? Any insights or solutions would be greatly appreciated! I always get the error below. After performing the Edit > Source action, the images load perfectly.
Accepting this post as a solution since my "question" contains the solution and was really for information sharing purposes.
Please share your dashboard configuration source
Can you provide a sample?
We have a requirement to exclude or remove a few fields from the events we receive in Splunk. We have already extracted the JSON data via a condition in props.conf. Below is a sample event (collapsed sub-objects shown as { ... }):

{
   adf: true
   all_request_headers: { ... }
   all_response_headers: { ... }
   avg_ingress_latency_be: 0
   avg_ingress_latency_fe: 0
   cacheable: true
   client_dest_port: 443
   client_insights:
   client_ip: XXXXXXXX
   client_rtt: 1
   client_src_port: 13353
   compression: NO_COMPRESSION_CAN_BE_COMPRESSED
   compression_percentage: 0
   conn_est_time_be: 6
   conn_est_time_fe: 0
   headers_received_from_server: { ... }
   headers_sent_to_server: { ... }
   host: wasphictst-wdc.hc.cloud.uk.sony
   http_version: 1.1
   jwt_log: { ... }
   log_id: 121721
   max_ingress_latency_be: 0
   max_ingress_latency_fe: 0
   method: GET
   persistent_session_id: 3472328296699025517
   pool: pool-cac2726e-acd1-4225-8ac8-72ebd82a57a6
   pool_name: p-wasphictst-wdc.hc.cloud.uk.sony-wdc-443
   report_timestamp: 2025-02-18T11:33:23.069736Z
   request_headers: 577
   request_id: euh-xfiN-7Ikq
   request_length: 148
   request_state: AVI_HTTP_REQUEST_STATE_SEND_RESPONSE_BODY_TO_CLIENT
   response_code: 404
   response_content_type: text/html; charset=iso-8859-1
   response_headers: 13
   response_length: 6148
   response_time_first_byte: 61
   response_time_last_byte: 61
   rewritten_uri_query: test=%26%26%20whoami
   server_conn_src_ip: 128.160.77.237
   server_dest_port: 80
   server_ip: 128.160.73.123
   server_name: 128.160.73.123
   server_response_code: 404
   server_response_length: 373
   server_response_time_first_byte: 52
   server_response_time_last_byte: 61
   server_rtt: 9
   server_src_port: 56233
   servers_tried: 1
   service_engine: GB-DRN-AB-Tier2-se-vxeuz
   significant: 0
   significant_log: [ ... ]
   sni_hostname: wasphictst-wdc.hc.cloud.uk.sony
   source_ip: 128.164.6.186
   ssl_cipher: TLS_AES_256_GCM_SHA384
   ssl_session_id: 935810081909dc8c018416502ece5d00
   ssl_version: TLSv1.3
   tenant_name: admin
   udf: false
   uri_path: /cmd
   uri_query: test=&& whoami
   user_agent: insomnia/2021.5.3
   vcpu_id: 0
   virtualservice: virtualservice-e52d1117-b508-4a6d-9fb5-f03ca6319af7
   vs_ip: 128.160.71.101
   vs_name: v-wasphictst-wdc.hc.cloud.uk.sony-443
   waf_log: { ... }
}

We need to remove a few fields from new and existing events, such as "avg_ingress_latency_be", "avg_ingress_latency_fe", "request_state", "server_response_code" and many other fields, while onboarding. Where can I write the logic to exclude these fields? The app owners don't want these fields when viewing the data, and the source cannot be edited. We need to do this before onboarding.
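One way to drop these keys at ingest time is an INGEST_EVAL transform with json_delete, applied on the indexers or heavy forwarders. A sketch, assuming the data lands under a sourcetype stanza named yourSourceType (a placeholder — adapt the stanza name and the key list):

```
# props.conf (sketch; "yourSourceType" is a placeholder)
[yourSourceType]
TRANSFORMS-dropJsonKeys = dropJsonKeys

# transforms.conf
[dropJsonKeys]
INGEST_EVAL = _raw=json_delete(_raw, "avg_ingress_latency_be", "avg_ingress_latency_fe", "request_state", "server_response_code")
```

Note that this only affects events indexed after the change; already-indexed events cannot be modified in place.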
Hello, I have installed Splunk Enterprise on a Windows server in order to retrieve Netflow (port 2055) and Syslog (port 514) data. In "Data Inputs" I added UDP 2055 with sourcetype="Netflow" and UDP 514 with sourcetype="Syslog". In "Forwarding and receiving", under "Forwarding defaults", I checked yes. But I can't see anything in Splunk. So I installed Wireshark, which does see the Syslog and Netflow data. I checked with PowerShell that the port is open and that splunkd is listening (netstat -ano | findstr :2055) & (tasklist | findstr XXXX). I've also installed several add-ons, but with no conclusive result. Has anyone had this problem before, or have any clues as to how to solve it? Thanks in advance
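For comparison, the UDP inputs created through the UI correspond roughly to stanzas like the following in inputs.conf (a sketch; whether you set an explicit index is up to you). It can also help to search index=* over All time, since a time-zone or index mismatch often makes freshly ingested UDP data look "missing":

```
# inputs.conf (sketch of what the UI-created inputs should look like)
[udp://2055]
sourcetype = Netflow
connection_host = ip

[udp://514]
sourcetype = Syslog
connection_host = ip
```

Also note that raw NetFlow is a binary protocol; a plain UDP input will index it as unreadable bytes unless something (e.g. a NetFlow-aware add-on or Splunk Stream) decodes it first.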
Hi @livehybrid  It worked perfectly!  Thank you so much
Hi,

I have a KV store lookup which is populated automatically and contains arrays. How can I make it searchable like a normal lookup, or turn it into a proper file?

Current CSV:

I want the above KV store as a searchable lookup with proper segregation between rows.
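One common approach is to expand the array (multivalue) fields into separate rows and write the result out as a flat CSV lookup. A sketch, assuming the collection is exposed through a lookup definition named my_kvstore_lookup with a multivalue field called values (both names are placeholders):

```
| inputlookup my_kvstore_lookup
| mvexpand values
| outputlookup my_flat_lookup.csv
```

After this, my_flat_lookup.csv behaves like a normal file-based lookup with one value per row.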
Does anyone happen to know what the following message means? When I trigger a custom application, I get the following message:

ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)
I understood that, but once it is added to a dashboard with drilldown enabled for the respective field values, clicking on any value should run a search. However, when I run the above search it returns empty events, just showing the time. I removed _raw from the fields. But I also want to understand: why is the spath command used here?
Try using the Save As button and save it to a dashboard. There is a tutorial on how to do this: https://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/WelcometotheSearchTutorial