All Posts

Thanks, but I'm not sure I understand your answer. For your information, the dashboard tab has always been displayed correctly. It's only since last week that this error has appeared, and only with my administrator account. I don't know why.

I've got a data set which collects data every day, but for my graph I'd like to compare the time selected to the same duration 24 hours before. I can get the query to do the comparison, but I want to be able to show only the timeframe selected in the timepicker, i.e. the last 30 mins rather than the full -48 hours. Below is the base query I've used:

index=naming version=2.2.* metric="playing" earliest=-36h latest=now
| dedup _time, _raw
| timechart span=1h sum(value) as value
| timewrap 1d
| rename value_latest_day as "Current 24 Hours", value_1day_before as "Previous 24 Hours"
| foreach * [eval <<FIELD>>=round(<<FIELD>>, 0)]

For a different version I have done a join, however that takes a bit too long. Ideally I want to be able to filter the above data (as it's quite quick to load), but only for the time picked in the time picker. Thanks,

Yes. Restricting access is one of the valid points for creating separate indexes. Your data, though, seems a bit strange - I didn't notice that before. You have a JSON array with separate structures within that array which you want as separate events. That makes it a more complicated task. I'd probably try to use an external tool to read/receive the source "events", then parse the JSON, split the array into separate entities and push each of them separately to its proper index (either by writing to separate files for pickup by a UF or by pushing to a HEC endpoint).

I have a similar issue with AppDynamics cluster agent 24.6.0 and an application running Java 8:
- The Java agent inside the pod is running and sending signals
- The pod logs show successful instrumentation
- The pod is up and running
- Agent status is 100% in the AppDynamics dashboard
- The AppD dashboard looks OK, but APPD_POD_INSTRUMENTATION_STATE is failed in the pod YAML
Is there a known bug with this controller version? Thank you

I'm not sure I get the question right. Are you asking how to externalize the config from the code in a React application? I'm not a React developer, but there are several easily googleable links on that topic. For example: https://stackoverflow.com/questions/30568796/how-to-store-configuration-file-and-read-it-using-react

Hey @PickleRick Apologies, I don't think I have fully understood what you are trying to imply here. My objective is to calculate the duration between two sets of events, but one of those two events can happen multiple times. It is like sending a request to an API and then validating the response: if the response is not what was expected, the same request is sent again and keeps being sent until the expected response is received. So my objective is to calculate the time between when the first request was sent and when the last expected response was received.

2024-08-16 13:43:34,232|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:50,232|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:44:14,232|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:44,232|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:57,510|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Response Received in 114 milliseconds "200 OK" response for GET request to https://myapi.com/test: "status":"MatchCompleted"

Please find the set of events again here.

Your "working" role might have less capabilities but can have access to some objects (especially the dashboard itself) that the "non-working" role does not. Check the _audit log for denied access to... See more...
Your "working" role might have less capabilities but can have access to some objects (especially the dashboard itself) that the "non-working" role does not. Check the _audit log for denied access to objects for the non-working user.
Since you're aggregating a relatively long-spanned set of events into a single data point, you have to make a conscious decision about which point in time to assume as the timestamp for the result. You can easily assign a value to the _time field just by doing

| eval _time=something

But you have to decide which timestamp to use. Is it the start time of your transaction? Is it the end time? Maybe it's the middle of the transaction... It's up to you to make that decision. Anyway, when dealing with _time in stats, there's not much point in using latest() and earliest(); min() and max() suffice.
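As a rough sketch of how that could look for the duration-percentile case discussed in this thread (assuming uniqueId ties the events of one transaction together, and arbitrarily picking the transaction start as its timestamp):

| stats min(_time) as starttime max(_time) as endtime by uniqueId
| eval duration=endtime-starttime
| eval _time=starttime
| timechart span=15m p95(duration) as p95Responsetime

Because _time is recreated before the timechart, each transaction gets bucketed into the 15-minute slot in which it started.
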
Hi all, I'm trying to use this app by Baboon - Monitoring of Java Virtual Machines with JMX. I get an error when I click on data inputs:

Oops. Page not found! Click here to return to Splunk homepage.

Would I need to activate the app first?

You can't timechart by more than 2 dimensions and _time is one of those, try combining Env and Tenant:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval total_cpu_usage=('data.cpu_system_pct' + 'data.cpu_user_pct')
| eval EnvTenant=Env.":".Tenant
| timechart Perc90(total_cpu_usage) AS cpu_usage span=12h useother=f by EnvTenant

@renjith_nair Thanks for the response but I don't think your solution is fully working. I tried it like below but then _time will not be available for me to plot the graph. I need to plot that duration on a graph. Is there a way to do that?

| stats earliest(_time) as starttime, latest(_time) as endtime by uniqueId
| eval duration=endtime-starttime
| timechart span=15m p95(duration) as p95Responsetime

Hi, For a few days now, my Splunk Dashboard shortcut has been displaying an error when I connect with the administrator account. But when I use another account with fewer privileges via LDAP authentication, I don't get this error and the page displays fine. Do you have any idea what the problem is? Thanks for your help.

Thank you so much PickleRick. It works well for me. I was able to isolate 3 consecutive results. I appreciate it.

For OAuth 2.0 authentication in Splunk_TA_snow, under the ServiceNow account configuration you type in the Client Id and Client Secret. Once you click on the Save/Update button, if the connection is successful, a pop-up window opens where you have to log in with a proper user and password. Mind that the browser doesn't take your personal credentials; you log in with a user that was predefined in ServiceNow.

Yes. That's so... and that was a really bad idea for the App order UI in the WebGUI 🤦 🤦 🤦 Previously the drag option with jQuery was perfect... I really don't know why they changed this section so drastically. Editing "user-prefs.conf" needs a daemon restart. Annoying.

Hi Splunker, I've been developing a React app for Splunk that manages users via the REST API (create/update/delete). Initially, I hardcoded the REST API URL, username, and password for development purposes. Now that the development is nearly complete, I need to make the URL dynamic. It should retrieve the REST API server URL and the currently logged-in user's information and use them in the Splunk React app. How can I achieve this? Here is the current hardcoded code:

import axios from 'axios';

const fetchAllUsers = async () => {
  try {
    const response = await axios.get('https://mymachine:8089/services/authentication/users', {
      auth: {
        username: 'admin',
        password: 'admin123'
      },
      headers: {
        'Content-Type': 'application/xml'
      }
    });
  } catch (error) {
    console.error('Error fetching users:', error);
  }
};

#restapi #createuser #react #reactapp
Thanks in advance

Perfect, worked for me thanks!!
I use the linked list input type to control sets of panels, something like this:

<input id="inventory_type" type="link" token="tok_category" searchWhenChanged="true">
  <choice value="host">Host</choice>
  <choice value="user">User</choice>
  <initialValue>host</initialValue>
  <change>
    <condition value="host">
      <set token="by_host"></set>
      <unset token="by_user"></unset>
    </condition>
    <condition value="user">
      <set token="by_user"></set>
      <unset token="by_host"></unset>
    </condition>
  </change>
</input>

You can then have <row depends="$by_host$"> and <row depends="$by_user$"> to control which rows are shown. If you want to have inline CSS to then tweak the buttons, you can do it in the dashboard. See this app, which has an example of how to customise the XML and tabs: https://splunkbase.splunk.com/app/5256 You can then get this type of display.

OK, so you've got two tokens going on here. The default 'All' (*) is selected. When you select one from the list, the intention is that the All (*) should disappear, otherwise the selected options are *,1 (or whatever 1 is in your case). So, my condition resets the form.app_fm_entity_id token so that it removes * from the options. What token are you actually using in the search? Are you using app_fm_entity_id or app_net_fm_entity_id? If you need a second token which also has the word "_all" when * is selected, then your problem is that you are using <eval> to set that token, when you just need to use <set>.

I use an HTML panel sometimes to debug tokens - multiselect behaviour is a little unintuitive, and technically the documentation says that <change> is not supported for multiselect, but it does work, it's just odd...

<panel>
  <input id="app_nodes_multiselect" type="multiselect" token="app_fm_entity_id" searchWhenChanged="true">
    <label>Nodes</label>
    <delimiter> </delimiter>
    <fieldForLabel>entity_name</fieldForLabel>
    <fieldForValue>internal_entity_id</fieldForValue>
    <search>
      <query>
        | makeresults count=5
        | streamstats c
        | eval entity_name="name:".c, internal_entity_id=c
        | table entity_name, internal_entity_id
        | sort entity_name
      </query>
    </search>
    <choice value="*">All</choice>
    <default>*</default>
    <change>
      <condition match="$form.app_fm_entity_id$=&quot;*&quot;">
        <set token="app_net_fm_entity_id">_all</set>
        <set token="condition">1</set>
      </condition>
      <condition>
        <set token="condition">2</set>
        <eval token="form.app_fm_entity_id">case(mvcount($form.app_fm_entity_id$)="2" AND mvindex($form.app_fm_entity_id$,0)="*", mvindex($form.app_fm_entity_id$,1), mvfind($form.app_fm_entity_id$,"^\\*$$")=mvcount($form.app_fm_entity_id$)-1, "_all", true(), $form.app_fm_entity_id$)</eval>
        <set token="app_net_fm_entity_id">$app_fm_entity_id$</set>
      </condition>
    </change>
  </input>
  <html>
    app_fm_entity_id::$app_fm_entity_id$<p/>
    form.app_fm_entity_id::$form.app_fm_entity_id$<p/>
    app_net_fm_entity_id::$app_net_fm_entity_id$<p/>
    condition::$condition$
  </html>
</panel>