All Posts

Unfortunately, your requirement might not only be hard or impossible to implement in Dashboard Studio (as far as I recall, SimpleXML dashboards allow custom JS, so you have much more freedom there), but it is also imprecisely defined. What about a 36h search range: average daily count or hourly? Or 8 days: daily or weekly? Anyway, off the top of my head, if you have a predefined set of time ranges you could use a list instead of the time picker and fill in placeholders that way. An alternative approach (but I'm not 100% sure it will work) could be to use a subsearch to generate the span part of the search, and within that subsearch use addinfo to get the time range.
How can I get the following visualization? I've tried the following search:

(index="my_index" sourcetype="sourcetype1") OR (index="my_index" sourcetype="sourcetype2") | fields field1 field2 | stats count(field1) as Field1, count(field2) as Field2

and I'm getting the following graph
Does the Splunk Add-on for NetApp Data ONTAP work with ONTAP above 9.7? https://splunkbase.splunk.com/app/3418 says "The add-on supports the data collection from NetApp® Data ONTAP® Cluster Mode version 9.6 and 9.7", with Latest Version 3.1.0 released September 13, 2024. Since ONTAP 9.16 is available now, and in September 2024 ONTAP 9.15 (and possibly 9.16) was fully GA, I'm wondering why only 9.6 and 9.7 are listed, especially since 9.6 and 9.7 may be out of support from NetApp. The release notes for Latest Version 3.1.0 don't even mention ONTAP releases.
I am trying to configure the Proofpoint - ET Splunk TA on Splunk Cloud, and during the setup, it asks for an API key and an authorization code. While I have the API key, I noticed that the authorization code appears as "None", so I provided the Oink code instead. However, when I try to save the configuration, it does not get applied. Is there a specific way to configure this on Splunk Cloud? Any guidance on setting up ET Intelligence correctly would be greatly appreciated. Thank you     
Hi @llopreiato  Unfortunately I think you might need to speak to the vendor of the LB to see if they can work out how to make it work based on the HTTP status code - this is fairly common behaviour for LB health checks, so I'd be surprised if it's not possible with your LB. Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @Leonardo1998  I think you might need to check the metrics log; the search below might help you get started. Ideally we want to see that metrics are being written for your specific source, as at the moment it looks like they're not appearing anywhere!

index=_internal group=per_source_thruput series=*azure*

Please can you also confirm that the index for that metric input is definitely a metrics index and not an event index?
Hi @livehybrid, thanks for your answer; unfortunately the configuration you suggest is not available on the LB. The LB configuration requires:
1. a Health Check URI
2. a Health Check Receive String
Luca
Hi @llopreiato, Configure the load balancer health check to rely on the HTTP status code returned by the ha_active_status endpoint, rather than parsing the response body. The active Cluster Manager (CM) will return an HTTP 200 status code, while the standby CM(s) will return an HTTP 503 status code. This difference in status codes is the standard method for load balancers to determine the active node in this scenario. The endpoint path is /services/cluster/manager/ha_active_status on the management port (default 8089).
Active CM: https://<cm-address>:8089/services/cluster/manager/ha_active_status -> HTTP 200
Standby CM: https://<cm-address>:8089/services/cluster/manager/ha_active_status -> HTTP 503
Most load balancers can be configured to check for a specific HTTP status code as the health check condition. See the documentation on CM redundancy and health checks: https://docs.splunk.com/Documentation/Splunk/latest/Indexer/CMredundancy#:~:text=The%20active%20manager%20responds%20with%20200%3B%20the%20standby%20managers%20respond%20with%20503
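As a rough sketch (not a definitive implementation), the status-code behaviour described above can be checked from the command line before touching the LB config. The hostname here is a placeholder you would replace with your own CM address; `-k` skips TLS verification and is for lab testing only:

```shell
#!/bin/sh
# Probe the CM HA status endpoint and classify the node by HTTP status code.
# CM_HOST is a placeholder - substitute your Cluster Manager's address.
CM_HOST="${CM_HOST:-cm.example.com}"

cm_state_from_code() {
  # 200 -> active, 503 -> standby, anything else (e.g. curl's "000"
  # when the host is unreachable) -> unknown
  case "$1" in
    200) echo "active" ;;
    503) echo "standby" ;;
    *)   echo "unknown" ;;
  esac
}

if command -v curl >/dev/null 2>&1; then
  # -o /dev/null discards the response body: only the status code matters,
  # which is exactly why the timestamp in the body is irrelevant to the check.
  code=$(curl -sk --connect-timeout 2 --max-time 4 -o /dev/null -w '%{http_code}' \
    "https://${CM_HOST}:8089/services/cluster/manager/ha_active_status")
  echo "CM ${CM_HOST} is $(cm_state_from_code "$code")"
fi
```

If the LB supports only a "receive string" match rather than a status-code match, this probe wrapped in a small script is one possible workaround, since its output string is static per state.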
Hello, we're implementing a distributed clustered infrastructure with CM redundancy; between the indexers/search heads and the CMs there is a load balancer. The mentioned LB can send requests to the ha_active_status endpoint, but it needs a static response set to "understand" which CM is active. The team working on the LB says the ha_active_status response is not static because it contains a timestamp, so the status check doesn't work. Any workarounds? Thanks in advance Luca
Hi @dannyuk  Yes, the steps you followed using rsync to overlay the new version's files onto the existing installation directory after stopping the service are a valid method for upgrading a Splunk Universal Forwarder installed from a .tar file on AIX. The key steps are stopping the forwarder, replacing the binaries and default files with the new version's files, and then restarting it. When Splunk starts after the file replacement, it detects the version change and runs any necessary migration scripts. While running splunk status might have initiated the migration check, the standard command to ensure the migration proceeds and licenses are accepted is:

/opt/splunk/splunkforwarder/bin/splunk start --accept-license --answer-yes

so you may wish to pass these flags to prevent being prompted to scroll through the agreement and accept it. It's always a good idea to check the splunkd.log file after an upgrade for any errors or warnings that might have occurred during the migration process: $SPLUNK_HOME/var/log/splunk/splunkd.log
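As a quick post-upgrade sanity check, the log inspection above can be scripted. This is a minimal sketch assuming the default forwarder layout from the steps in this thread; the ERROR/WARN pattern is a heuristic, not an official log format guarantee:

```shell
#!/bin/sh
# Scan splunkd.log for ERROR/WARN lines after an upgrade.
# Assumes the UF install path used elsewhere in this thread.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk/splunkforwarder}"
LOG="$SPLUNK_HOME/var/log/splunk/splunkd.log"

count_log_problems() {
  # Count lines containing " ERROR " or " WARN " in the given file;
  # print 0 if the file does not exist.
  [ -f "$1" ] || { echo 0; return; }
  grep -cE ' (ERROR|WARN) ' "$1"
}

echo "Problems found in $LOG: $(count_log_problems "$LOG")"
```

Running this immediately after the post-upgrade restart gives a quick signal on whether the migration logged anything worth reading in full.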
Hi @Nico99  If you're using SPL to get the incident then you could use the following instead:

| mcincidentbyid id=ES-00001
Hi @SOClife  The only documented APIs for ES8 specifically are at https://docs.splunk.com/Documentation/ES/8.0.2/API/AboutSplunkESAPI and, as you say, the investigation API isn't listed in there. However, I believe some of the investigation endpoints you are looking for are actually now under the Mission Control app (see the MC APIs at https://docs.splunk.com/Documentation/MC/Current/SplunkPlaybookAPI). If you view an investigation in the UI with the Network tab of the browser developer tools open, you will see API calls to <yourEnv>/en-US/splunkd/__raw/servicesNS/nobody/missioncontrol/v2/investigations/<GUID>/findings (for example!) - some of these map to the documented MC APIs, however I couldn't find all of them in there. It's worth capturing the payloads and responses to determine what you need from them. As another example, loading Incident Review in the UI makes some MC V1 API calls, such as the notes endpoint. In addition to the API calls, if you're extracting information about incidents/investigations then you may be able to perform standard SPL searches using the REST API:

| mcincidents  <- returns a list of incidents within the timeframe searched
| mcincidentbyid id=ES-00001  <- returns a single incident's details; pass display_id or id (guid)
Entire CMC dashboards cannot be cloned, but you can copy individual panels.  Click on the Open In Search icon (magnifying glass) to open the query in a new tab.  Modify the query as desired and then save it in a custom dashboard. It's not clear which dashboard you wish to put in the service desk's app.  If it's the custom dashboard created above, then just save the dashboard in that app.  If it's the CMC dashboard, edit the navigation menu to add the CMC dashboard to the app's menu bar.
@marnall  I would like to do the same for the "MaxEcutionTime" field - can you help me please?
Hi, I'm fairly new to AIX and I have been tasked with upgrading our customer's version of Splunk from 9.0.1 to 9.4.1. The steps below are what I did, which seemed to work, and I now have version 9.4.1 on the system:

Implementation Plan:
# Create a copy/backup of the splunkforwarder folder
cp -r /opt/splunk/splunkforwarder /opt/splunk/splunkforwarder_backup_$(date +%F)
mkdir /tmp/splunk_temp
tar -xvf /nim/media/SOFTWARE/splunk/Splunk-9.4.1/splunkforwarder-9.4.1-2f7817798b5d-aix-powerpc.tar -C /tmp/splunk_temp
/opt/splunk/splunkforwarder/bin/splunk stop
/opt/splunk/splunkforwarder/bin/splunk status
rsync -av /tmp/splunk_temp/splunkforwarder/ /opt/splunk/splunkforwarder/
rm -rf /tmp/splunk_temp
chown -R 510:510 /opt/splunk/splunkforwarder
chown -R root:system /opt/splunk/splunkforwarder/var
/opt/splunk/splunkforwarder/bin/splunk status      <<<< This command will kick in the migration and upgrade to 9.4.1
/opt/splunk/splunkforwarder/bin/splunk start
/opt/splunk/splunkforwarder/bin/splunk status      <<<<< Shows Splunk has been upgraded to 9.4.1

I've also read the "Install the universal forwarder on AIX" instructions, but I just wanted to check whether the way I've upgraded Splunk is actually going to work, even though it says it has been upgraded?

Thanks
Thanks @ITWhisperer , it's good to know that it's doable using Classic SimpleXML Dashboard. Hopefully it will also be possible using Dashboard Studio later on. For now I'll stick with it.
Hello, I would like to clone the Cloud Monitoring Console dashboard (Forwarders tab) for only a specific set of HF/UF (and not all). I would like to make this dashboard available in one of the apps used by the service desk for monitoring. How can I achieve this? Thank you.
thank you so much, it works
Hello, sorry for the delay. To answer your questions, this is what I would expect:
B) The number of times in that 24 hours, divided by the number of hours (in this case, 24?)
Additionally: if B, what would the count be divided by for a time range of 5 hours? 7 days? Or 14 days? 60 days? I would say that if it's 5 hours it would be divided by 5 hours, or 7 days, or whatever is selected. I was hoping that the selector would make it easier, but I didn't think about the snap-to time being a problem. Hope that helps. Thanks again