All Posts


@shivanshu1593 @jitbahan @Gowhar @dsainz please help with this similar issue - https://community.splunk.com/t5/Getting-Data-In/Akamai-add-on-logs-are-not-populating/m-p/743241#M118086
@cpetterborg @anzianojackson @adri2915 @iamarkaprabha please help with this similar issue - https://community.splunk.com/t5/Getting-Data-In/Akamai-add-on-logs-are-not-populating/m-p/743241#M118086
@James_ACN @javo_dlg @deepdiver @tofa please help with this similar issue - https://community.splunk.com/t5/Getting-Data-In/Akamai-add-on-logs-are-not-populating/m-p/743241#M118086
Is there an option to add a field to an existing KV store without editing conf files? I don't own the server, so it's difficult to get access to it every time.
@deepdiver @nkoppert_s please help with this similar issue - https://community.splunk.com/t5/Getting-Data-In/Akamai-add-on-logs-are-not-populating/m-p/743241#M118086
@mikefg @RCavazana2023 @cybersecnutant @LeandroNTT please help with this similar issue - https://community.splunk.com/t5/Getting-Data-In/Akamai-add-on-logs-are-not-populating/m-p/743241#M118086
@James_ACN @javo_dlg @deepdiver @tofa can anyone please help with this similar issue - https://community.splunk.com/t5/Getting-Data-In/Akamai-add-on-logs-are-not-populating/m-p/743241#M118086
To resurrect an old thread: I was having this very same issue today and have a suspicion about why the option wasn't showing. I have two clustered environments, one lab and one production. The production SHs show the option to install from file; the lab does not. In the lab I was experimenting with installing apps in a one-off scenario, and now the option is gone, while in production, where I've only pushed via the deployer, the option is still there. tl;dr: because I tried to install a one-off app directly on a SH in the cluster, it seems to have removed the option to install further apps per SH? Has anyone seen anything similar?
Hi Team, is it possible to switch dashboards at a regular interval within the same app? I have around 15 dashboards in the same app and I want to switch to the next dashboard every 2 minutes. In the attached screenshot, I have around 15 dashboards, and the home screen is always the "ESES Hotline SUMMARY" dashboard. Is it possible to move automatically to the next dashboard, "ESES Hotline", after 2 minutes, then automatically to the next dashboard, "EVIS Application", after another 2 minutes, and so on?
Hi Team, can you please let me know how to fetch events with a time greater than the time of the first event shown in the dashboard?
Example: I have 3 jobs executed every day at around the following times:
Job1: around 10 PM (day D)
Job2: around 3 AM (day D + 1)
Job3: around 6 AM (day D + 1)
I am fetching the latest run of Job1/Job2/Job3 to show in the dashboard and want the result in the format below.
Between 5 PM and 10 PM: Job1: PLANNED, Job2: PLANNED, Job3: PLANNED
At 11 PM: Job1: Executed at 10:00, Job2: PLANNED, Job3: PLANNED
At 4 AM: Job1: Executed at 10:00, Job2: Executed at 03:00, Job3: PLANNED
At 7 AM: Job1: Executed at 10:00, Job2: Executed at 03:00, Job3: Executed at 06:00
At 4 PM: Job1: PLANNED, Job2: PLANNED, Job3: PLANNED
At 5 PM: Job1: PLANNED, Job2: PLANNED, Job3: PLANNED
We want to consider the day as starting at 5 PM and ending at 5 PM the next day, instead of using "last 24 hours" / "today".
Hi @zafar, good for you, see you next time! Let me know if I can help you more, or please accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated
Hi @Treize, because a summary index used in a main search has no limit on the number of results. Ciao. Giuseppe
Hi, I have the table below and have been trying to represent it in a heat map. I can see the percentage value in the blocks, but how can I get the severity values (very high, high, medium) represented in the blocks as well?
In this situation, it could mean one of two things. The first is that you're trying to use a cert chain and there is already a single cert in idpCert.pem; some IdPs, like Ping, require you to remove that idpCert.pem. However, the more likely case here is that you have multiple single certs attached to your IdP metadata.xml file. Some IdPs, such as ADFS and Azure (Entra), allow for primary and secondary IdP certs, which allows for a seamless transition from expiring to new certs. However, Splunk does NOT accept two single certs in one metadata.xml file. Hence, the solution here is as follows:
1. On the IdP, replace the expiring cert with the new cert
2. Disable the secondary cert option
3. Download the new metadata.xml file
4. Upload the IdP metadata.xml file in the Splunk UI > Save
Footnote: Splunk DOES accept cert chains, but the chain has to be uploaded manually and in the correct order, as per the KB below: https://community.splunk.com/t5/Deployment-Architecture/Problem-with-SAML-cert-quot-ERROR-UiSAML-Verification-of-SAML/m-p/322376#M12073
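For illustration only, a manually assembled idpCert.pem chain usually looks like the sketch below; the leaf-first ordering shown here is an assumption to verify against the linked thread, not a definitive format:

```
-----BEGIN CERTIFICATE-----
... IdP signing (leaf) certificate ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... intermediate CA certificate ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... root CA certificate ...
-----END CERTIFICATE-----
```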
Hi @msmadhu, Splunk does not offer native clustering for Deployment Servers (DS) in the same way it does for indexers or search heads. High availability (HA) for the DS is typically achieved using external components (typically a load balancer). The common approach involves:
Setting up two (or more) independent Deployment Server instances.
Using a network load balancer (LB) in front of these instances.
Configuring deployment clients (Universal Forwarders) to point to the virtual IP (VIP) or hostname of the load balancer (see the sketch below).
Keeping the configuration ($SPLUNK_HOME/etc/deployment-apps/) synchronized between the DS instances. This can be done manually, via scripting, or using shared storage (e.g., NFS), though shared storage requires careful implementation.
This architecture relies on external systems (load balancer, and potentially shared storage or sync scripts) and requires careful management to ensure configurations stay consistent across DS instances. You can find guidance on HA strategies in the Splunk Enterprise Admin Manual: Protect against loss of the deployment server
Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
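For illustration, a minimal sketch of the two pieces described above, assuming a hypothetical VIP hostname ds-vip.example.com and a second DS named ds2.example.com; adapt hosts, paths, users, and scheduling to your environment:

```
# deploymentclient.conf on each Universal Forwarder -- point clients at the
# load balancer VIP rather than at an individual DS (hostname is hypothetical)
[target-broker:deploymentServer]
targetUri = ds-vip.example.com:8089
```

```
# One possible way to keep deployment-apps in sync, run periodically (e.g. via
# cron) from the "primary" DS; a sketch only, not a hardened solution
rsync -av --delete $SPLUNK_HOME/etc/deployment-apps/ splunk@ds2.example.com:$SPLUNK_HOME/etc/deployment-apps/
```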
Hi @SplunkExplorer, when changes to apps are made on your SHC, the changes are applied to the ./local folder within the app on the SHC, whereas the content pushed from your deployer generally lands in the ./default directory. This means that if users have modified any of the knowledge objects since the app was pushed from the deployer, those changes won't be overwritten when a subsequent deployment is done. Check out the docs for more info: "Because of how configuration file precedence works, changes that users make to apps at runtime get maintained in the apps through subsequent upgrades."
Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
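As a purely illustrative example of that precedence (the app name and stanza below are made up), a runtime edit in local/ keeps winning over the deployer-pushed copy in default/ after the next bundle push:

```
# $SPLUNK_HOME/etc/apps/my_app/default/savedsearches.conf  (pushed by the deployer)
[My Alert]
cron_schedule = 0 * * * *

# $SPLUNK_HOME/etc/apps/my_app/local/savedsearches.conf  (runtime edit on the SHC)
# local/ takes precedence over default/, so this value is the one in effect
[My Alert]
cron_schedule = */15 * * * *
```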
Hi, please advise on how to build Splunk deployment server clustering with minimum requirements.
@isoutamo or someone from the Splunk team, can you please help provide a solution for this type of result?
Hi Splunkers, today I have the following issue: on our SHC, there is a small subset of apps that is managed, and therefore modified, by their users directly on the SHs. What does this mean for us? Of course, that we need to copy the updated versions from the SHs back to the Deployer before performing a new app bundle push; otherwise, the older version on the Deployer will override the updated one on the SHs. My question is: on Splunk version 9.2.1, is there any way to exclude such apps from being updated when the Deployer is used? The final purpose, just to give an example, is: if the Deployer has 100 apps in $SPLUNK_HOME/etc/shcluster/apps, we want a bundle push to update 95 of them (whenever the SH version is not equal to the Deployer one), while the remaining 5 should not be updated by the Deployer.
Hi @Hemanth35, based on the logs, your Python script (licensecode.py) successfully initiates a Splunk search but then gets stuck in a waiting loop ("Waiting next 10 seconds for all queries to complete") until it hits its internal 200-second timeout. This usually means the script is either not correctly checking whether the Splunk search job has finished, or the search itself is taking longer than 200 seconds to complete.
Check search performance: run the search search eventtype="cba-env-prod" NNO.API.LOG.PM.UPDATE latest=now earliest=-7d directly in the Splunk Search UI and note how long it takes to complete. If it takes longer than 200 seconds, you'll need to either optimize the search or increase the timeout in your Python script.
Review the Python script logic: examine licensecode.py, particularly the loop that waits for the search to complete (around line 265) and the logic that triggers the search and should check its status (around lines 193-201). Ensure the script correctly polls the Splunk search job's status using its SID (1743587848.2389422_84B919DD-8E60-47EE-AF06-F6EE20B95178). It should check whether the job's dispatchState is DONE, FAILED, or still running (see https://docs.splunk.com/Documentation/Splunk/9.4.1/RESTTUT/RESTsearches#:~:text=dispatchState-,dispatchState,-is%20one%20of). Verify that once the job is DONE, the script proceeds to retrieve the results using the appropriate Splunk SDK method or REST API endpoint. Add more detailed logging within the loop to print the actual status returned by Splunk for the job SID during each check; this will help diagnose whether the status-check logic is flawed.
Increase the script timeout: if the search legitimately takes longer than 200 seconds, modify the timeout value within your script.
Did this answer help you? If so, please consider:
Adding kudos to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
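If it helps, here is a minimal polling sketch using the Splunk SDK for Python (splunklib, recent versions); the host, credentials, and timeout are illustrative assumptions, and you would adapt it to however licensecode.py already connects and authenticates:

```python
import time

import splunklib.client as client
import splunklib.results as results

# Illustrative connection details -- replace with whatever licensecode.py already uses
service = client.connect(host="splunk.example.com", port=8089,
                         username="admin", password="changeme")

# Kick off the search as a normal (asynchronous) job
job = service.jobs.create(
    'search eventtype="cba-env-prod" NNO.API.LOG.PM.UPDATE earliest=-7d latest=now'
)

# Poll the job until Splunk reports it is finished, instead of waiting a fixed time
timeout = 600  # seconds; raise this if the search legitimately runs longer
start = time.time()
while not job.is_done():                     # is_done() refreshes the job state
    state = job.content.get("dispatchState", "UNKNOWN")
    if time.time() - start > timeout:
        job.cancel()
        raise TimeoutError(f"Search {job.sid} still {state} after {timeout}s")
    print(f"sid={job.sid} dispatchState={state} "
          f"progress={job.content.get('doneProgress', '0')}")
    time.sleep(10)

# The job is DONE: stream the results back
for row in results.JSONResultsReader(job.results(output_mode="json")):
    if isinstance(row, dict):                # skip diagnostic Message objects
        print(row)
```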