All Posts

Here is my sample log:

2024-07-08T04:43:32.468537+00:00 dxx1-dbxxxs.xxx.net MSSQLSERVER[0] {"EventTime":"2024-07-08 04:43:32","Hostname":"dx1-dbxxxs.xxx.net","Keywords":45035996273704960,"EventType":"AUDIT_SUCCESS","SeverityValue":2,"Severity":"INFO","EventID":44444,"SourceName":"MSSQLSERVER","Task":5,"RecordNumber":1234343410,"ProcessID":0,"ThreadID":0,"Channel":"Application","Message":"Audit event:lkjfd:sdfkjhf:Askjhdfsdf","Category":"None","EventReceivedTime":"2024-07-08 04:43:32","SourceModuleName":"default-inputs","SourceModuleType":"im_msvistalog"}#015

Here is my config:

props.conf

[dbtest:test]
#mysourcetype
TRANSFORMS-extract_kv_pairs = extract_json_data

transforms.conf

[extract_json_data]
REGEX = "(\w+)":"?([^",}]+)"?
FORMAT = $1::$2
WRITE_META = true

The same regex works in Regex101; here is the test link: https://regex101.com/r/rt3bly/1. I am not sure why it's not working in my log extraction. Any help is highly appreciated. Thanks
There are several possible scenarios why you can't see the data you think should be getting into Splunk.
1. The data is actually not being properly read or otherwise received by the UF - check your inputs and their state, and check splunkd.log for any sign of the UF having problems with inputs. Also check whether files are not being found by your input definitions, are being skipped due to - for example - CRC duplication caused by a common header, or simply cannot be read due to insufficient permissions (a sample search to start with is sketched after this list).
2. The data might be configured to be sent to non-existent indexes. If you don't have a last-chance index defined, such events would get discarded.
3. There might be a configuration in place which does some filtering or redirection to other index(es).
4. The data might be getting indexed properly but you might be having problems with time recognition (especially with wrongly set timezones), resulting in events indexed at the wrong point in time - that would mean that you're simply not seeing your events because your search range doesn't cover the events being indexed, since they are "late".
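For point 1, a minimal starting point, assuming the forwarder is sending its internal logs to the indexers (it does by default); the host value is a placeholder for your forwarder's hostname:

index=_internal sourcetype=splunkd host=<your_forwarder_host> (log_level=ERROR OR log_level=WARN)
| stats count by component
| sort - count

If nothing useful shows up there, look at splunkd.log directly on the forwarder under $SPLUNK_HOME/var/log/splunk/.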
What the data looks like when ingested with HEC depends on the endpoint you use.
If you're using the /raw endpoint you can include the host value as a parameter.
If you're using the /event endpoint you can include the host field as an additional field alongside your event.
See https://docs.splunk.com/Documentation/Splunk/9.3.0/RESTREF/RESTinput#services.2Fcollector.2Fraw and https://docs.splunk.com/Documentation/Splunk/9.3.0/RESTREF/RESTinput#services.2Fcollector.2Fevent (yes, these are Splunk Enterprise docs, but HEC should work the same way in Cloud - even the docs on HEC in Cloud say these two endpoints are available).
Sure, posted on Slack, thanks
Thanks for your response. Is there any way we can have JSON pagination for a dashboard panel, since our panel is in a Studio dashboard?
Hi @BRFZ , which logs are missing? Are they always missing or only at certain moments? How did you find out that logs are missing? Ciao. Giuseppe
Hello Guys, We are using Splunk Cloud and have created multiple HECs for different products. We noticed that events coming in through HEC always have "xxx.splunkcloud.com" as the value of the host field. Is there a way to assign different hostnames to different products? Thanks & Regards, Iris
Try starting with something like this

index=naming version=2.2.* metric="playing"
    [| makeresults
    | fields - _time
    | addinfo
    | eval day=mvrange(0,2)
    | mvexpand day
    | eval earliest=relative_time(info_min_time,"-".day."d")
    | eval latest=relative_time(info_max_time,"-".day."d")
    | fields earliest latest]
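If I read the intent right, the subsearch uses addinfo to pick up the time range selected in the time picker, then generates two rows (day=0 and day=1) whose earliest/latest values are that range shifted back by 0 and 1 days; the outer search therefore only retrieves events falling into the selected window and the same window 24 hours earlier.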
Hello everyone, I installed and configured the Splunk Forwarder on a machine. While the logs are being forwarded to Splunk, I’ve noticed that some data is missing from the logs that are coming through. Could this issue be related to specific configurations that need to be adjusted on the forwarder, or is it possible that the problem is coming from the machines themselves? If anyone has experienced something similar or has insights on how to address this, I would greatly appreciate your advice. Thank you in advance for your help! Best regards,
Below is the full error message: The percentage of non high priority searches lagged (48%) over the last 24 hours is very high and exceeded the yellow thresholds (40%) on this Splunk instance. Total Searches that were part of this percentage=38563. Total lagged Searches=18709
Hi all, How can this be fixed? Thanks for your help on this,
You have two options:
1. Check the developer console and see if you can spot any errors.
2. Check the _audit index (and maybe _internal) to see if there is anything significant around that time (a sample search is sketched below).
I don't think this is something that happens commonly, so it's about simply trying to find and isolate the cause.
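For option 2, a rough starting point - the user name and the time range are placeholders you'd adjust to around when the error first appeared:

index=_audit user=<your_admin_account> earliest=-14d
| table _time user action info
| sort _time

and, for internal errors around the same time:

index=_internal sourcetype=splunkd log_level=ERROR earliest=-14d
| stats count by component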
I'm not sure what you want here. You must have the server's address to connect to its API endpoint, so it's not clear to me who would return that address to you. And you can't get the user's password. It won't work that way. Also - what do you mean by "currently logged in user"? Which user? Are you trying to piggyback on someone's already existing session (if so, I'd expect Splunk to have defenses against session hijacking and, if possible, that should probably be explicitly configured)? Or do you want to authenticate a user in Splunk so that you can use that session for your purposes? If so, use the user's credentials to log in to Splunk, obtain a session token, and use that token. But it has all the risks of a man-in-the-middle solution.
OK. From the top. You have a set of events. Each event has the _time field describing when the event happened. You're using the stats command to find earliest and latest (or min and max, which in this case boils down to the same thing) values of this field for each uniqueId. As an output you have three fields - starttime, endtime and uniqueId. You no longer have the _time field. Timechart must have the _time field since it, well, charts over time. So you have to assign some value to the _time field manually. You can do it either by using eval as I showed previously or simply by adding another aggregation to your stats. For example

| stats earliest(_time) as starttime, latest(_time) as endtime, avg(_time) as _time by uniqueId

That's just one of the possible ways of doing it (of course you can use avg(_time), min(_time), max(_time) or any other aggregation function which makes sense in this context).
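To make that concrete, a minimal sketch of the whole pipeline, assuming the goal is to chart the average duration per uniqueId over time (the base search, the duration calculation and the one-hour span are assumptions, not taken from your original search):

index=<your_index> <your_filters>
| stats earliest(_time) as starttime, latest(_time) as endtime, avg(_time) as _time by uniqueId
| eval duration=endtime-starttime
| timechart span=1h avg(duration) as avg_duration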
hi @PickleRick , thanks for your time. Yes, that's correct! The goal is to externalize the configuration (like the REST API URL, username, and password) from the code so it's not hardcoded. Specifically, I want to dynamically retrieve the REST API server URL and the currently logged-in user's information and use them in the React app within Splunk. Do you have any suggestions on how to achieve this? Is there a predefined token that gives the server URL, username, and password (or something similar to a session key for the currently logged-in user) that I can use in my React code? Thanks,
Hello Guys, We have paloalto firewalls with different timezone settings. For the ones which are not in the same timezone as Splunk, their logs are treated as logs from the future and hence cannot be searched in Splunk in a timely manner. I cannot fix it by specifying a timezone for the source types provided by the paloalto TA, since a single setting cannot cover multiple time zones at the same time. I wonder if you have experienced a similar problem; if yes, would you please share your experience on handling this kind of issue? Thanks much for your help in advance! Regards, Iris
Thanks, but I'm not sure I understand your answer. For information, the dashboard tab has always been displayed correctly. It's only since last week that this error has appeared, and only with my administrator account. I don't know why.
I've got a data set which collects data every day, but for my graph I'd like to compare the time selected to the same duration 24 hours before.

I can get the query to do the comparison, but I want to be able to show only the timeframe selected in the timepicker, i.e. the last 30 mins rather than the full -48 hours etc.

Below is the query I've used:

index=naming version=2.2.* metric="playing" earliest=-36h latest=now
| dedup _time, _raw
| timechart span=1h sum(value) as value
| timewrap 1d
| rename value_latest_day as "Current 24 Hours", value_1day_before as "Previous 24 Hours"
| foreach * [eval <<FIELD>>=round(<<FIELD>>, 0)]

This is the base query I've used. For a different version I have done a join, however that takes a bit too long. Ideally I want to be able to filter the above data (as it's quite quick to load) but only for the time picked in the time picker.

Thanks,
Yes. Restricting access is one of the valid reasons for creating separate indexes. Your data, though, seems a bit strange - I didn't notice that before. You have a JSON array with separate structures within that array which you want as separate events. That makes it a more complicated task. I'd probably try to use an external tool to read/receive the source "events", then parse the JSON, split the array into separate entities and push each of them separately to its proper index (either by writing to separate files for pickup by a UF or by pushing to a HEC endpoint).
I have a similar issue with the AppDynamics Cluster Agent 24.6.0 and an application running Java 8:
- The Java agent inside the pod is running and sending signals
- Pod logs show successful instrumentation
- The pod is up and running
- Agent status is 100% in the AppD dashboard
- The AppD dashboard looks OK, but APPD_POD_INSTRUMENTATION_STATE is failed in the pod YAML.
Is there a known bug with this controller version? Thank you