All Posts


Hi @dhruvisha2345, if you created the new app via the GUI, you only have to upload a file and Splunk automatically adds the appserver/static folder. If you created the app by SH, you have to create it manually. Ciao. Giuseppe
I created a role with the capabilities 'edit_license' and 'edit_user', but I didn't receive all the users from the GET request to the URL /services/authentication/users?output_mode=json; it only returned some of the users. Without the 'edit_license' capability, I received the following error:
"messages": [ { "type": "ERROR", "text": "Unauthorized" } ]
What are the minimum permissions required to retrieve all users, and does anyone know if this is the same for Splunk Cloud?
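One way to compare what a given role is allowed to return, assuming you can log in as a test user holding that role, is to query the same endpoint with the rest search command; it runs with the capabilities of the user executing the search, so the row count should match what the REST call returns. A minimal sketch (the field names are the ones this endpoint typically exposes):
| rest /services/authentication/users splunk_server=local
| table title realname roles
If that user sees fewer rows than an admin running the same search, the difference comes from the role's capabilities rather than from the request itself.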
Hi William, I think this issue is related to the OS / Java version. To narrow it down, can you try the machine agent Java-bundled .zip package instead of the "rpm" package, with the same agent version? Another thing: can you please try installing the machine agent via "rpm" with an older version, like 23.x.x? If you can share an update based on what you find, we can narrow down your problem. Thanks, Cansel
Hi @shakti, clear the cache, and if that doesn't work, open a case with Splunk Support. Ciao. Giuseppe
Hello all, I am trying to ingest metrics via OpenTelemetry in an enterprise environment. I have installed the Splunk Add-On for OpenTelemetry Collector, which according to the documentation is compatible. I have some doubts about configuring it: where can I find the following connection points for my enterprise environment?
- SPLUNK_API_URL: The Splunk API URL, e.g. https://api.us0.signalfx.com
- SPLUNK_INGEST_URL: The Splunk ingest URL, e.g. https://ingest.us0.signalfx.com
- SPLUNK_LISTEN_INTERFACE: The network interface the agent receivers listen on?
- SPLUNK_TRACE_URL: The Splunk trace endpoint URL, e.g. https://ingest.us0.signalfx.com/v2/trace
Is there a configuration file where I can view these? Do I have to complete some step beforehand to get those services up? Thanks in advance. BR JAR
Hello, the UI of my search head is not loading. I am seeing only a white screen with no error message as such. Splunkd is also running. Kindly suggest?
Hi Hardik, actually, this is not a syntax error: after "FROM" you specify the data source, and there is no data source called "DB5". You have to use "dbmon_wait_time", which comes from the Event Service shards. Another thing (sorry, this is my fault): I accidentally removed "count" before "(`wait-state-id`)"; it is included in the corrected query below. By the way, this query is based on a controller that has only 1 DB collector; if you have more than 1 collector, you need to specify the 'server-id' column with a "WHERE" clause.
SELECT `wait-state-id`, count(`wait-state-id`) FROM dbmon_wait_time
Thanks, Cansel
I am a beginner in Splunk and I have created a new app in Splunk Enterprise. I am not able to see the appserver folder in the newly created app. How can I add that directory?
Hi Sikka, with a SaaS platform serving as a multi-tenant controller, it is really hard to manage this kind of operation if you don't have a real technical issue. So you can kindly ask the support team or your account manager via a support ticket. Based on my older experience, it is not impossible, but it can incur an additional cost for you because it is a professional service. Thanks, Cansel
Hi,
1- All Analytics data, including Log Analytics, is stored in your SaaS Event Service (depending on your controller type, you can also store it on-prem).
2- Storage management defaults for SaaS depend on your license type. If you have:
* a PoC license, the default analytics retention period is 8 days
* a Prod (paid) license, the default analytics retention is 30 days
* You can also increase this retention up to 90 days if you pay additionally per license.
These values are fixed on SaaS; if you are using an on-prem Event Service, the default retention value is the same, but you can reduce the retention days based on your storage size.
3- There is no way to increase your default retention other than by license type, and yes, you can only "reduce" your retention period on an on-prem Event Service.
Thanks, Cansel
Hello Cansel, I did the same, but it is showing me a syntax error. Please find the attachment below.
It was perfect. I ended up doing it like this because of how the logs are stored in our environment.
index=c account=1 env=lower source="logfiles" ("destination" OR "received")
| eval logtype = if(like(_raw, "destination%"),"logb","loga")
| rex field=_raw filename in loga
| rex field=_raw filename in logb
| stats count min(_time) as Starttime max(_time) as Endtime values(logtype) as logtype by filename
| where count=2 AND logtype="loga" AND logtype="logb"
| eval diff = Endtime - Starttime
| stats avg(diff)
Error message: Unable to load app list. Refresh the page to try again. Can anyone help with this?
Good day all, I am new to Splunk, and I am currently having a problem at startup. How do I switch from the Enterprise Trial license to the Free license?
Have you seen the Admin's Little Helper app (https://splunkbase.splunk.com/app/6368)? It includes a btool command that lets you see your configurations on both the SH and indexers using SPL. While many configurables can be loaded safely on either or both the SH and indexer, others cannot. Inputs and outputs are good examples; clustering settings are another.
What is your question?
I have created two queries: the one below is for the correct outage window, and the second one uses a random date to see if the alert is triggered when one of the servers goes down. Both have the same trigger condition set:
| where is_maintenance_window=0 AND is_server_down=1
When you're testing, just keep in mind that this is the time from the log event:
| eval current_time=_time
while this is the current time now, when the alert is running:
| eval current_time=now()
So, depending on your lookback period (earliest= latest=), you might be picking up log events outside (before or after) your outage window start/end time. But if you don't want any alerts during the outage window, now() should be the correct time to use in your triggering conditions.
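Purely as an illustration of that last point, not the original poster's exact search, a trigger check built on now() could look something like the sketch below; window_start and window_end (assumed to be epoch timestamps) and is_server_down are placeholder field names taken from this thread, not guaranteed to match the real alert.
| eval current_time=now()
| eval is_maintenance_window=if(current_time>=window_start AND current_time<=window_end, 1, 0)
| where is_maintenance_window=0 AND is_server_down=1
With this version, the maintenance-window test no longer depends on which log events happen to fall inside the search's earliest/latest range.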
REPORT-url_domain
It's the name of the field you want to assign the result to.
If you use loadjob, it always loads an existing, previously run job. If you run | savedsearch ... then it will run a new search. If that new search returns the wrong results, then it would seem likely that the search has not changed
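To make the difference concrete, here is a minimal sketch; "My Report" and the admin:search prefix are placeholders for the owner, app, and saved search name, not values from this thread.
| loadjob savedsearch="admin:search:My Report"
reuses the results of an already-completed run of that saved search, while
| savedsearch "My Report"
dispatches the saved search again and returns fresh results. If the fresh run still shows the old results, the saved search definition itself is the place to look.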