All Posts


Hi. When you create an app in Splunk with the GUI, there are two separate templates to choose from. Based on your choice, it will create different files and directories under that app. When you start creating your own apps and TAs, I strongly suggest you use e.g. git to store them and keep track of your changes. Then you can use an editor like Visual Studio Code to write them, together with the Simple XML editor and/or Dashboard Studio. There are some old .conf presentations on how to do this. Also read the instructions on dev.splunk.com about this. r. Ismo
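As a rough sketch of what such an app looks like on disk (the app name `my_ta`, its label, and the exact conf contents here are hypothetical examples, assuming the usual `default/` and `metadata/` layout):

```python
from pathlib import Path
import tempfile

# Minimal Splunk app skeleton (a sketch; "my_ta" and its label are made up).
root = Path(tempfile.mkdtemp()) / "my_ta"
for sub in ("default", "metadata", "bin"):
    (root / sub).mkdir(parents=True)

# app.conf identifies and labels the app
(root / "default" / "app.conf").write_text(
    "[ui]\n"
    "is_visible = true\n"
    "label = My TA\n"
    "\n"
    "[launcher]\n"
    "version = 1.0.0\n"
)

# default.meta sets default permissions for the app's knowledge objects
(root / "metadata" / "default.meta").write_text(
    "[]\n"
    "access = read : [ * ], write : [ admin ]\n"
    "export = system\n"
)

print(sorted(str(p.relative_to(root)) for p in root.rglob("*") if p.is_file()))
```

A layout like this is also a natural unit to put under git, one repo (or subdirectory) per app/TA.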
Hi. You could use the apps @richgalloway presented. I think there was a presentation about it at the last or previous .conf? Another option is just to use REST requests to get the information you want to show. On Splunk Cloud you don't have REST access to the indexers, and otherwise it has a restricted set of endpoints in use, so you cannot get all of that information this way. IMHO: you should keep all this kind of configuration in a version control system like git. Create the needed apps and TAs to store it. Maybe separate TAs based on your needs between HF/UF, indexers, and SHs. Then just use any suitable method/process to install them into the correct environment. Try to avoid configuring this kind of information via both the GUI and conf files. In the long run you will avoid a lot of issues by using git + apps/TAs with conf files! r. Ismo
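As a sketch of the REST approach, a management-port call is usually built like this (the hostname is a placeholder, and on Splunk Cloud only a restricted set of endpoints is reachable):

```python
from urllib.parse import urlencode

# Sketch: assembling a Splunk REST call against the management port (8089).
# "sh1.example.com" is a placeholder host, not a real instance.
base = "https://sh1.example.com:8089"
endpoint = "/services/server/info"
query = urlencode({"output_mode": "json"})
url = f"{base}{endpoint}?{query}"
print(url)  # -> https://sh1.example.com:8089/services/server/info?output_mode=json
```

The actual request would then typically be made with `curl -k -u user:pass "<url>"` or an HTTP client, authenticated as a user whose role can read that endpoint.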
Hi. You should use the UF package downloaded from your SCP stack. Just install it on all your UFs and HFs that are directly connected to your cloud stack, and use its defaults to send into SCP. Don't mess with it! r. Ismo
Hi. Maybe this is what you're after: grantableRoles. This limits what you can see and set. On Cloud you cannot see all users, as some of them are restricted for Splunk's own use only. r. Ismo
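A sketch of how that might look in authorize.conf (the role name and the granted roles here are hypothetical; verify the exact syntax against the authorize.conf spec for your version):

```ini
# authorize.conf (sketch; role names are made up)
[role_delegated_admin]
edit_user = enabled
# limit which roles this role can see and grant to others
grantableRoles = user;power
```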
I've tried doing that but after filling out the form and submitting it, I get an error message stating that I don't have any entitlement.
Hi. We have a Splunk installation with SmartStore enabled. We have plenty of cache on disk, so we are nowhere near the space-padding setting. I have seen bucket downloads from S3, which I did not expect. So my question is: does Splunk pre-emptively evict buckets even if there is enough space? I see no documentation stating that it does anything other than LRU. Regards, André
Dear Cansel, The query you shared runs properly on one collector, but what if there are multiple collectors? It shows me the wait state with its numeric IDs and gives a count for it as well. Another thing: can I show the name of the query along with its ID? Please check whether the query is right or wrong, because it is still not showing. One more thing I want to let you know: my setup is on-prem. Please find the attachment below. Thanks & Regards, Hardik
@SOARt_of_Lost the only way I can think of initially is to have a scheduled playbook that checks for containers from notables without artifacts and then runs the relevant playbook against them. The Timer app would be used to create the container that kicks the utility playbook off as regularly as you want.
I want to deploy a single Splunk collector in my AWS ECS cluster which will: 1. Collect all resource metrics for the other running tasks within the same cluster. 2. Receive, process, and forward all custom OTel metrics sent to it by the applications themselves. Is this possible? Thanks
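One possible shape for such a collector config, purely as a sketch: the `awsecscontainermetrics` receiver, the `us0` realm, and the token variable are assumptions to verify against your environment, and note that this receiver reads the ECS task metadata endpoint, so which containers it covers depends on how the collector task is deployed.

```yaml
receivers:
  otlp:                      # apps push their own OTLP metrics here
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  awsecscontainermetrics: {} # ECS resource metrics via task metadata endpoint
processors:
  batch: {}
exporters:
  signalfx:
    access_token: ${SPLUNK_ACCESS_TOKEN}
    realm: us0               # placeholder realm
service:
  pipelines:
    metrics:
      receivers: [otlp, awsecscontainermetrics]
      processors: [batch]
      exporters: [signalfx]
```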
Hi Tony, Based on your first screenshot this is normal: yes, the tier was created, but the agent is not working anymore. Can you please answer the questions below so I can understand the situation? 1. Is this a monolith Java app? 2. Do you have more than one JVM instance on the same host? Thanks, Cansel
Hi Umesh, Are you still looking for a solution for the Fiori integration? Thanks, Cansel
Hi @jessieb_83 , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @keneyfofe, your error is really strange. Do you have the Settings menu? If yes, go to [Settings > Licensing > Add new License]. Ciao. Giuseppe
Hi @dhruvisha2345, if you created the new app via the GUI, you only have to upload a file and Splunk automatically adds the appserver/static folder. If you created the app by SH, you have to create it manually. Ciao. Giuseppe
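Creating it manually is just making the nested folder inside the app; a small sketch (the app name `my_app` is hypothetical, and the real path would be under `$SPLUNK_HOME/etc/apps/`):

```python
from pathlib import Path
import tempfile

# Sketch: creating the static-assets folder by hand.
# "my_app" is a made-up app name; in practice this lives under
# $SPLUNK_HOME/etc/apps/<app_name>/.
app_root = Path(tempfile.mkdtemp()) / "my_app"
static = app_root / "appserver" / "static"
static.mkdir(parents=True)
print(static.is_dir())  # -> True
```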
I created a role with the capabilities 'edit_license' and 'edit_user', but I didn't receive all the users from the GET request to the URL: /services/authentication/users?output_mode=json. It only returned part of the users. Without the role 'edit_license', I received the following error: "messages": [ { "type": "ERROR", "text": "Unauthorized" } ] What are the minimum permissions required to retrieve all users, and does anyone know if this is the same for Splunk Cloud?  
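For reference, with `output_mode=json` that endpoint returns the users in an `entry` array; a sketch of pulling the names out of an abridged, hypothetical response body:

```python
import json

# Abridged, hypothetical response body from
# /services/authentication/users?output_mode=json
body = """
{"entry": [
  {"name": "admin",  "content": {"roles": ["admin"]}},
  {"name": "viewer", "content": {"roles": ["user"]}}
]}
"""
data = json.loads(body)
users = [e["name"] for e in data["entry"]]
print(users)  # -> ['admin', 'viewer']
```

Comparing the returned names against what you expect makes it easy to see which users the role's capabilities are hiding.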
Hi William, Think this issue hitting to OS - java version problem. To localize this issue can you try the machine agent bundle java .zip version instead of "rpm" package with same agent version? A... See more...
Hi William, I think this issue is an OS / Java version problem. To localize it, can you try the bundled-Java .zip version of the machine agent instead of the "rpm" package, with the same agent version? Another thing: can you please try installing the machine agent the "rpm" way with an older version, like 23.x.x? If you can share an update based on your experience, we can localize your problem. Thanks, Cansel
Hi @shakti , clear the cache and, if that doesn't work, open a case with Splunk Support. Ciao. Giuseppe
Hello all, I am trying to ingest metrics via OpenTelemetry in an enterprise environment. I have installed the Splunk Add-On for OpenTelemetry Collector, which according to the documentation is compatible. I have some doubts about configuring it: where can I find the following connection points for my enterprise environment? - SPLUNK_API_URL: The Splunk API URL, e.g. https://api.us0.signalfx.com  - SPLUNK_INGEST_URL: The Splunk ingest URL, e.g. https://ingest.us0.signalfx.com  - SPLUNK_LISTEN_INTERFACE: The network interface the agent receivers listen on? - SPLUNK_TRACE_URL: The Splunk trace endpoint URL, e.g. https://ingest.us0.signalfx.com/v2/trace  Is there a configuration file where I can view these? Do I have to do some step beforehand to get those services up? Thanks in advance. BR, JAR
Hello, The UI of my search head is not loading. I am seeing only a white screen with no error message as such. Splunkd is also running. Kindly suggest?
Hi Hardik, Actually, this is not a syntax error, after "FROM" you specify the data source and there is no data source like "DB5". You have to use "dbmon_wait_time" this comes from event service shar... See more...
Hi Hardik, Actually, this is not a syntax error: after "FROM" you specify the data source, and there is no data source named "DB5". You have to use "dbmon_wait_time"; this comes from the event service shards. Another thing (sorry, this is my fault): I accidentally removed "count" before "(`wait-state-id`)", bolded below. By the way, this query is based on a controller that has only one DB collector; if you have more than one collector, you need to filter on the 'server-id' column with a "WHERE" clause. SELECT `wait-state-id`, count(`wait-state-id`) FROM dbmon_wait_time Thanks, Cansel
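With multiple collectors, the same query might be filtered like this, as a sketch (the `server-id` value 42 is a placeholder for your collector's actual ID):

```sql
SELECT `wait-state-id`, count(`wait-state-id`)
FROM dbmon_wait_time
WHERE `server-id` = 42
```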