All Posts

Hi @Leonardo1998

I think you might need to check the metrics log; the search below might help you get started. Ideally we want to see that metrics are being written for your specific source, as at the moment it looks like they're not appearing anywhere!

index=_internal group=per_source_thruput series=*azure*

Please can you also confirm that the index for that metric input is definitely a metrics index and not an events index?
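As a quick way to confirm the index type from SPL, something like the following should work (a rough sketch; azure_metrics is a placeholder for your actual index name):

| rest /services/data/indexes
| search title=azure_metrics
| table title datatype

The datatype column should read "metric" for a metrics index and "event" for an ordinary events index.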
Hi @livehybrid, thanks for your answer. Unfortunately the configuration you suggest is not available on the LB. The LB configuration requires:
1. a Health Check URI
2. a Health Check Receive String

Luca
Hi @llopreiato,

Configure the load balancer health check to rely on the HTTP status code returned by the ha_active_status endpoint, rather than parsing the response body. The active Cluster Manager (CM) will return an HTTP 200 status code, while the standby CM(s) will return an HTTP 503 status code. This difference in status codes is the standard method for load balancers to determine the active node in this scenario.

The endpoint path is /services/cluster/manager/ha_active_status on the management port (default 8089).

Active CM: https://<cm-host>:8089/services/cluster/manager/ha_active_status -> HTTP 200
Standby CM: https://<cm-host>:8089/services/cluster/manager/ha_active_status -> HTTP 503

Most load balancers can be configured to check for a specific HTTP status code as the health check condition. See the documentation on CM redundancy and health checks: https://docs.splunk.com/Documentation/Splunk/latest/Indexer/CMredundancy#:~:text=The%20active%20manager%20responds%20with%20200%3B%20the%20standby%20managers%20respond%20with%20503
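If the LB supports an external or script-based probe, a minimal curl check along these lines illustrates the idea (a sketch; the hostname is a placeholder, -k skips certificate verification, and you may need to add credentials with -u if your environment requires authentication on this endpoint):

# prints only the HTTP status code: 200 on the active CM, 503 on a standby
curl -sk -o /dev/null -w "%{http_code}" https://cm1.example.com:8089/services/cluster/manager/ha_active_status

Because the check matches on the status code rather than the body, the timestamp in the response no longer matters.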
Hello, we're implementing a distributed clustered infrastructure with CM redundancy; between the indexers/search heads and the CMs there is a load balancer. The LB can send requests to the ha_active_status endpoint, but it needs a static response string to "understand" which CM is active. The team working on the LB says the ha_active_status response is not static because it contains a timestamp, so the status check doesn't work. Any workarounds? Thanks in advance, Luca
Hi @dannyuk

Yes, the steps you followed using rsync to overlay the new version's files onto the existing installation directory after stopping the service is a valid method for upgrading a Splunk Universal Forwarder installed from a .tar file on AIX. The key steps are stopping the forwarder, replacing the binaries and default files with the new version's files, and then restarting it. When Splunk starts after the file replacement, it detects the version change and runs any necessary migration scripts.

While running splunk status might have initiated the migration check, the standard command to ensure the migration proceeds and the license is accepted is:

/opt/splunk/splunkforwarder/bin/splunk start --accept-license --answer-yes

so you may wish to pass these flags to avoid being prompted to scroll through and accept the agreement.

It's always a good idea to check splunkd.log after an upgrade for any errors or warnings that might have occurred during the migration process:

$SPLUNK_HOME/var/log/splunk/splunkd.log
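A quick post-upgrade check might look like this (a sketch, using the non-default install path from your plan):

# start the upgraded forwarder without interactive license prompts
/opt/splunk/splunkforwarder/bin/splunk start --accept-license --answer-yes

# confirm the reported version and scan the log for migration problems
/opt/splunk/splunkforwarder/bin/splunk version
grep -iE "error|warn" /opt/splunk/splunkforwarder/var/log/splunk/splunkd.log | tail -n 50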
Hi @Nico99

If you're using SPL to get the incident then you could use the following instead:

| mcincidentbyid id=ES-00001
Hi @SOClife

The only documented APIs for ES8 specifically are at https://docs.splunk.com/Documentation/ES/8.0.2/API/AboutSplunkESAPI and, as you say, the investigation API isn't listed there. However, I believe some of the investigation endpoints you are looking for are actually now under the Mission Control app (see the MC APIs at https://docs.splunk.com/Documentation/MC/Current/SplunkPlaybookAPI).

If you view an investigation in the UI with the Network tab of the browser developer tools open, you will see API calls to <yourEnv>/en-US/splunkd/__raw/servicesNS/nobody/missioncontrol/v2/investigations/<GUID>/findings (for example!) - some of these map to the documented MC APIs, however I couldn't find all of them in there. It's worth capturing the payloads and responses to determine what you need from them. As another example, loading Incident Review in the UI triggers some MC v1 API calls such as the notes endpoint.

In addition to the API calls, if you're extracting information about incidents/investigations then you may be able to perform standard SPL searches using the REST API:

| mcincidents  <- returns a list of incidents within the timeframe searched
| mcincidentbyid id=ES-00001  <- returns a single incident's details; pass display_id or id (guid)
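For example, a hedged sketch of calling one of those undocumented endpoints directly against the management port (this assumes the splunkweb path above maps onto the REST port by dropping the /en-US/splunkd/__raw prefix - worth verifying in your environment - and the host and credentials are placeholders):

# fetch the findings for one investigation as JSON
curl -sk -u admin:changeme \
  "https://es.example.com:8089/servicesNS/nobody/missioncontrol/v2/investigations/<GUID>/findings?output_mode=json"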
Entire CMC dashboards cannot be cloned, but you can copy individual panels.  Click on the Open In Search icon (magnifying glass) to open the query in a new tab.  Modify the query as desired and then save it in a custom dashboard. It's not clear which dashboard you wish to put in the service desk's app.  If it's the custom dashboard created above, then just save the dashboard in that app.  If it's the CMC dashboard, edit the navigation menu to add the CMC dashboard to the app's menu bar.
@marnall I would need to do the same for the "MaxEcutionTime" field - can you help me please?
Hi, I'm fairly new to AIX and I have been tasked with upgrading our customer's version of Splunk from 9.0.1 to 9.4.1. The steps below are what I did; they seemed to work and the system now shows version 9.4.1:

Implementation plan:

Create a copy/backup of the splunkforwarder folder:
cp -r /opt/splunk/splunkforwarder /opt/splunk/splunkforwarder_backup_$(date +%F)
mkdir /tmp/splunk_temp
tar -xvf /nim/media/SOFTWARE/splunk/Splunk-9.4.1/splunkforwarder-9.4.1-2f7817798b5d-aix-powerpc.tar -C /tmp/splunk_temp
/opt/splunk/splunkforwarder/bin/splunk stop
/opt/splunk/splunkforwarder/bin/splunk status
rsync -av /tmp/splunk_temp/splunkforwarder/ /opt/splunk/splunkforwarder/
rm -rf /tmp/splunk_temp
chown -R 510:510 /opt/splunk/splunkforwarder
chown -R root:system /opt/splunk/splunkforwarder/var
/opt/splunk/splunkforwarder/bin/splunk status      <<<< This command kicks in the migration and upgrade to 9.4.1
/opt/splunk/splunkforwarder/bin/splunk start
/opt/splunk/splunkforwarder/bin/splunk status      <<<<< Shows Splunk has been upgraded to 9.4.1

I've also read the "Install the universal forwarder on AIX" instructions, but I just wanted to check whether the way I've upgraded Splunk is actually going to work, even though it reports that it has been upgraded?

Thanks
Thanks @ITWhisperer , it's good to know that it's doable using Classic SimpleXML Dashboard. Hopefully it will also be possible using Dashboard Studio later on. For now I'll stick with it.
Hello, I would like to clone the Cloud Monitoring Console dashboard (Forwarders tab) for only a specific set of HFs/UFs (and not all of them). I would like to make this dashboard available in one of the apps used by the service desk for monitoring. How can I achieve this? Thank you.
thank you so much, it works
Hello, sorry for the delay. To answer your questions, this is what I would expect:

B) The number of times in that 24 hours, divided by the number of hours (in this case, 24).

Additionally, on "If B, what would the count be divided by for a time range of 5 hours? 7 days? Or 14 days? 60 days?" - I would say that if it's 5 hours it would be divided by 5 hours, or 7 days, or whatever is selected. I was hoping that the selector would make it easier, but I didn't think about the snap-to time being a problem.

Hope that helps. Thanks again
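For what it's worth, a rough sketch of deriving that divisor from whatever range the time picker supplies, rather than hard-coding it (the base search and field names are placeholders, not taken from your actual dashboard):

<your base search>
| stats count AS event_count
| addinfo
| eval range_hours = (info_max_time - info_min_time) / 3600
| eval events_per_hour = round(event_count / range_hours, 2)

addinfo exposes the search's effective earliest/latest times, so the division follows the picker (including its snap-to) automatically.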
Hello, interesting - I had to adapt the limit query a bit, as the source type did not exist on my side:

index IN (_cmc_summary, summary) ingest_license=* | table _time ingest_license

I tried the search but it is not working: I either have a value in the total_license_limit_gb column or in the Daily License Usage column, but not both at the same time, so the last where cannot do the test. Here is the result sorted by the daily usage: [screenshot] And sorted by the limit: [screenshot] I tried to do the search differently and also used poolsz, but without luck. Any idea? Thank you again for your great help
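If it helps, one pattern for getting both values onto the same row before the comparison is to aggregate them first - a rough sketch, assuming the fields really are called total_license_limit_gb and daily_usage_gb and that 0.8 is your threshold (adjust all of these to whatever your search actually produces):

<your combined search>
| bin _time span=1d
| stats max(total_license_limit_gb) AS limit_gb, sum(daily_usage_gb) AS usage_gb BY _time
| filldown limit_gb
| where usage_gb > 0.8 * limit_gb

filldown carries the last seen limit forward onto rows that only have a usage value, so the where clause can compare the two.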
@SOClife If your operations heavily depend on the Investigation API's simplicity and no workaround (e.g., REST searches) is feasible within your timeline, sticking with v7.x until v8.x matures may be the safer option.

There are no investigation APIs documented there either: https://docs.splunk.com/Special:SpecialLatestDoc?t=Documentation/ES/8.0.3/API/InvestigationAPIreference
Running the search returned exactly the same namespaces as my initial query. There’s no row related to the Load Balancer metrics.
Hi! No, this is not normal - which OS, Splunk version and AME version are you on? Please create a ticket at https://datapunctum.atlassian.net/servicedesk/customer/portal/1 Simon
Hi @Leonardo1998

When you ran that search, did you get a row for your Azure Load Balancer Metrics? What dimensions did it list?
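If it's easier, a rough sketch of listing what actually landed in the metrics index, assuming it is called azure_metrics (substitute your own index name):

| mcatalog values(_dims) AS dimensions WHERE index=azure_metrics

This should return the dimensions present for any metrics that were indexed; an empty result would suggest nothing is reaching the index at all.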