All Posts

Thanks for your response. Is there any way we can have JSON pagination for a dashboard panel, since we do have the panel in a Studio dashboard?
Hi @BRFZ, which logs are missing? Are they always missing, or only occasionally? How did you find out that there are missing logs? Ciao. Giuseppe
Hello Guys, We are using Splunk Cloud and have created multiple HECs for different products. We noticed that events coming in through HEC always have "xxx.splunkcloud.com" as the value of the host field. Is there a way to assign different hostnames to different products? Thanks & Regards, Iris
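One possible approach, sketched under the assumption that you can touch each product's sender configuration: the HEC JSON event format accepts a per-event host field, so each product can set its own hostname. The endpoint URL, token, and names below are all placeholders:

curl -k https://<your-hec-endpoint>:8088/services/collector/event \
    -H "Authorization: Splunk <hec-token>" \
    -d '{"host": "product-a-host01", "sourcetype": "product_a:events", "event": {"message": "hello"}}'

Alternatively, each HEC token's default host can typically be overridden in that token's input settings, so senders that cannot modify their payload still get a distinct host value.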
Try starting with something like this:

index=naming version=2.2.* metric="playing"
    [| makeresults
    | fields - _time
    | addinfo
    | eval day=mvrange(0,2)
    | mvexpand day
    | eval earliest=relative_time(info_min_time,"-".day."d")
    | eval latest=relative_time(info_max_time,"-".day."d")
    | fields earliest latest]
Hello everyone, I installed and configured the Splunk Forwarder on a machine. While the logs are being forwarded to Splunk, I’ve noticed that some data is missing from the logs that are coming through. Could this issue be related to specific configurations that need to be adjusted on the forwarder, or is it possible that the problem is coming from the machines themselves? If anyone has experienced something similar or has insights on how to address this, I would greatly appreciate your advice. Thank you in advance for your help! Best regards,
Below is the full error message:

The percentage of non high priority searches lagged (48%) over the last 24 hours is very high and exceeded the yellow thresholds (40%) on this Splunk instance. Total Searches that were part of this percentage=38563. Total lagged Searches=18709
Hi all, how can this be fixed? Thanks for your help on this.
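As a starting point for triage (not a guaranteed fix), the scheduler logs in _internal show which scheduled searches are being skipped or deferred, which usually points either at specific heavy searches or at overall scheduler saturation. The search below uses the standard scheduler sourcetype and field names, but verify them on your instance:

index=_internal sourcetype=scheduler status!=success
| stats count by status savedsearch_name app
| sort - count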
You have two options:
1. Check the developer console and see if you can spot any errors.
2. Check the _audit index (and maybe _internal) to see if there is anything significant around that time (a sketch of such a search follows below).
I don't think this is something that happens commonly, so it's about simply trying to find and isolate the cause.
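For option 2, a minimal sketch of an _audit search (the user name is a placeholder; the info field shows granted/denied outcomes):

index=_audit user=the_affected_user
| table _time user action info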
I'm not sure what you want here. You must have the server's address to connect to its API endpoint, so it's not clear to me who would return that address to you. And you can't get a user's password; it won't work that way. Also, what do you mean by "currently logged in user"? Which user? Are you trying to piggyback on someone's already existing session (if so, I'd expect Splunk to have defenses against session hijacking, and if it is possible at all, it should probably be explicitly configured)? Or do you want to authenticate a user in Splunk so that you can use that session for your purposes? If so, use the user's credentials to log in to Splunk, obtain a session token, and use that token. But it has all the risks of a man-in-the-middle solution.
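If that last interpretation is what you're after, a minimal sketch of the login flow against the management port (host, port, and credentials are placeholders):

curl -k https://splunk.example.com:8089/services/auth/login -d username=someuser -d password='changeme'

The XML response contains a <sessionKey> element; pass it in an Authorization header on subsequent calls, for example to check whose session it is:

curl -k -H "Authorization: Splunk <sessionKey>" https://splunk.example.com:8089/services/authentication/current-context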
OK. From the top. You have a set of events. Each event has the _time field describing when the event happened. You're using the stats command to find the earliest and latest (or min and max, which in this case boils down to the same thing) values of this field for each uniqueId. As an output you have three fields - starttime, endtime and uniqueId. You no longer have the _time field.

Timechart must have the _time field since it, well, charts over time. So you have to assign some value to the _time field manually. You can do it either by using eval as I showed previously or simply by adding another aggregation to your stats. For example:

| stats earliest(_time) as starttime, latest(_time) as endtime, avg(_time) as _time by uniqueId

That's just one of the possible ways of doing that (of course you can use avg(_time), min(_time), max(_time) or any other aggregation function which makes sense in this context).
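Putting it together, a minimal sketch of the full pipeline using the field names from this thread (the span and the final aggregation are just examples, adjust to whatever you actually want to chart):

| stats min(_time) as starttime max(_time) as endtime avg(_time) as _time by uniqueId
| eval duration=endtime-starttime
| timechart span=1h avg(duration) as avg_duration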
Hi @PickleRick, thanks for your time. Yes, that's correct! The goal is to externalize the configuration (like the REST API URL, username, and password) from the code so it's not hardcoded. Specifically, I want to dynamically retrieve the REST API server URL and the currently logged-in user's information and use them in the React app within Splunk. Do you have any suggestions on how to achieve this? Is there a predefined token that gives the server URL, username, and password (or something similar to a session key for the currently logged-in user) that I can use in my React code? Thanks,
Hello guys, We have Palo Alto firewalls with different timezone settings. For the ones that are not in the same timezone as Splunk, their logs are treated as logs from the future and hence cannot be searched in Splunk in a timely manner. I cannot fix it by specifying a timezone on the source types provided by the Palo Alto TA, since that cannot accommodate multiple time zones at the same time. I wonder if you have experienced a similar problem; if yes, would you please share your experience handling this kind of issue? Thanks much for your help in advance! Regards, Iris
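One common approach (a sketch, assuming the firewalls can be told apart by hostname; the stanza patterns and timezones below are placeholders) is to set TZ per host pattern in props.conf on the ingest tier, since host-based stanzas take precedence over the sourcetype-level TZ from the TA:

[host::fw-apac-*]
TZ = Asia/Singapore

[host::fw-emea-*]
TZ = Europe/Berlin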
Thanks, but I'm not sure I understand your answer. For information, the dashboard tab has always been displayed correctly. It's only since last week that this error has appeared, and only with my administrator account. I don't know why.
I've got a data set which collects data every day, but for my graph I'd like to compare the time selected to the same duration 24 hours before. I can get the query to do the comparison, but I want to be able to show only the timeframe selected in the timepicker, i.e. the last 30 mins rather than the full -48 hours etc. Below is the query I've used:

index=naming version=2.2.* metric="playing" earliest=-36h latest=now
| dedup _time, _raw
| timechart span=1h sum(value) as value
| timewrap 1d
| rename value_latest_day as "Current 24 Hours", value_1day_before as "Previous 24 Hours"
| foreach * [eval <<FIELD>>=round(<<FIELD>>, 0)]

This is the base query I've used. For a different version I have done a join, however that takes a bit too long. Ideally I want to be able to filter the above data (as it's quite quick to load), but only for the time picked in the time picker. Thanks,
Yes. Restricting access is one of the valid points for creating separate indexes. Your data though seems a bit strange - I didn't notice that before. You have a json array with separate structures within that array which you want as separate events. That makes it a more complicated task. I'd probably try to use an external tool to read/receive the source "events", then parse the json, split the array into separate entities and push each of them separately to its proper index (either by writing to separate files for pickup by UF or pushing to HEC endpoint).
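A minimal sketch of such an external tool in Python (the URL, token, file name, and the "team" routing key are all placeholders; adjust to your actual payload):

import json
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder

def push_events(raw_json, index_for):
    # Parse the JSON array, split it into separate entities, and push
    # each one to HEC individually, routed to its proper index.
    for item in json.loads(raw_json):
        resp = requests.post(
            HEC_URL,
            headers={"Authorization": "Splunk " + HEC_TOKEN},
            json={"event": item, "index": index_for(item)},
        )
        resp.raise_for_status()

# Example: route on a hypothetical "team" key inside each structure.
with open("source.json") as f:
    push_events(f.read(), lambda item: "idx_" + item.get("team", "default"))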
I have a similar issue with AppDynamics Cluster Agent 24.6.0 and a Java 8 application:
- Java agent inside the pod is running and sending signals
- Pod logs show successful instrumentation
- Pod is up & running
- Agent status is 100% in the AppD dashboard
- The AppD dashboard looks OK, but APPD_POD_INSTRUMENTATION_STATE is failed in the pod YAML.
Is there a known bug with this controller version? Thank you
I'm not sure I get the question right. Are you asking how to externalize config from the code in a React application? I'm not a React developer, but there are several easily googleable links on that topic. For example https://stackoverflow.com/questions/30568796/how-to-store-configuration-file-and-read-it-using-react
Hey @PickleRick Apologies, I don't think I have fully understood what you are trying to imply here. My objective is to calculate the duration between two sets of events, but one of those two events can happen multiple times. It is like sending a request to an API and then validating the response. If the response is not what was expected, then the same request is sent again, and keeps being sent until you get the expected response. So my objective is to calculate the time from when the 1st request was sent to when the last expected response was received.

2024-08-16 13:43:34,232|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:50,232|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:44:14,232|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:44,232|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:57,510|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Response Received in 114 milliseconds "200 OK" response for GET request to https://myapi.com/test: "status":"MatchCompleted"

Please find the set of events again here.
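For reference, a minimal sketch of one way to compute that duration in SPL, assuming the thread name (catalina-exec-192) is a usable correlation key; the index and sourcetype are placeholders:

index=app_logs sourcetype=tomcat_app ("Sending GET request" OR "MatchCompleted")
| rex field=_raw "^\S+ \S+\|(?<thread>[^|]+)\|"
| stats min(_time) as first_request max(_time) as last_response by thread
| eval duration=last_response-first_request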
Your "working" role might have less capabilities but can have access to some objects (especially the dashboard itself) that the "non-working" role does not. Check the _audit log for denied access to... See more...
Your "working" role might have less capabilities but can have access to some objects (especially the dashboard itself) that the "non-working" role does not. Check the _audit log for denied access to objects for the non-working user.
Since you're aggregating a relatively long-spanned set of events into a single data point, you have to make a conscious decision about which point in time to use as the timestamp for the result. You can easily assign a value to the _time field just by doing:

| eval _time=something

But you have to decide which timestamp to use. Is it the start time of your transaction? Is it the end time? Maybe it's the middle of the transaction... It's up to you to make that decision. Anyway, when dealing with _time in stats, there's not much point in using latest() and earliest(); min() and max() suffice.
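For example, a minimal sketch that pins each result to the midpoint of its transaction (field names as used earlier in this thread):

| stats min(_time) as starttime max(_time) as endtime by uniqueId
| eval _time=(starttime+endtime)/2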