All Posts


@ITWhisperer what would I need to do if I wanted to look at a bigger window? My max would be to pick 7 days in my time picker - how would I edit the above to look at that? Thank you in advance
Hey PickleRick, Yeah, I was thinking this. The data is coming in through a modular input, so if I adjust the script I'd be able to parse the events into their respective indexes. But if I'm doing that, I may as well create separate applications altogether for each one, which is what I'm trying to avoid with this exercise.

Regarding the data: yes, this is a much simpler example of the more complicated data I'm working with. Essentially each event is JSON data with values that are either strings or [arrays]. archetype is an [array] and can be both superhero and villain, so such an event should appear in both indexes (but I've simplified it for this example).

So is there no possible way to utilise or bypass the summary indexing rules to meet my desired use case? I'm still trying to summarise my data by separating superheroes and villains to speed up searches. It seems like a lot of work simply to create separate indexes based on a search. Thanks,
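For reference, the summary-indexing route being discussed would look roughly like this - a minimal sketch, assuming hypothetical index names (superhero_summary, villain_summary) that already exist, with one scheduled search per archetype; collect writes the matching events into the target index:

    index=main sourcetype=my:json:data archetype=superhero
    | collect index=superhero_summary

    index=main sourcetype=my:json:data archetype=villain
    | collect index=villain_summary

Because archetype is multivalued, an event carrying both values matches both searches and lands in both summary indexes, which is the behaviour described above.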
Hi @BRFZ, could you share a sample of your logs: both complete and incomplete ones? Ciao. Giuseppe
Hi All, I wanted to know whether AppDynamics can monitor Salesforce in any way at all. I saw some posts mentioning a manual injector for End User Monitoring on the Salesforce frontend, but is there anything more we can capture from Salesforce? Please share your experience if you have tried any custom monitoring. I am looking for ways to get close to APM-style metrics.
Hello @gcusello, The missing data includes certain event IDs that don’t appear at all, and there are also instances where information is incomplete. For example, several fields are filled with dashes ("-"), indicating a lack of information.
I am using HEC to receive various logs from Firehose. The HEC token is allowed to use the index names aws and palo_alto, and the default index is set to aws. All the logs coming in through HEC are assigned to the default index aws and the default sourcetype aws:firehose. I am using the config below to change the sourcetype and index name of the logs.

props.conf

[source::syslog:dev/syslogng/*]
TRANSFORMS-hecpaloalto = hecpaloalto, hecpaloalto_in
disabled = false

transforms.conf

[hecpaloalto]
REGEX = (.*)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pan:log

[hecpaloalto_in]
REGEX = (.*)
DEST_KEY = _MetaData:Index
FORMAT = palo_alto

The sourcetype has changed to pan:log as intended, but the index is still displaying as aws instead of changing to palo_alto. The HEC config has the default index aws and the selected indexes aws and palo_alto. Is there anything wrong in my config?
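One thing worth verifying - hedged, since the root cause isn't established in this thread - is that both transforms are actually picked up on the tier doing the parsing, and whether the rewrite ever fires. btool shows the effective config (the stanza names below are the ones from the post), and a search against the target index confirms the result:

    $SPLUNK_HOME/bin/splunk btool props list "source::syslog:dev/syslogng/*" --debug
    $SPLUNK_HOME/bin/splunk btool transforms list hecpaloalto_in --debug

    index=palo_alto sourcetype=pan:log

If both stanzas show up from the expected app but new events still land in aws, it's worth confirming the config lives on the instance that first parses the HEC traffic, since index-time transforms only take effect at the first parsing tier.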
The forwarder asset table is generated from tcpin_connections metrics in _internal.  FTR, this is done by the Monitoring Console (MC), not the Cluster Manager (CM).  The CM and MC can be co-located in limited conditions - see https://docs.splunk.com/Documentation/Splunk/9.3.0/Indexer/Systemrequirements#Additional_roles_for_the_manager_node. Seeing the same forwarder many times often happens when a host is cloned without first preparing the forwarder for cloning.  See the CLONEPREP installer option at https://docs.splunk.com/Documentation/Forwarder/9.3.0/Forwarder/InstallaWindowsuniversalforwarderfromaninstaller. The fix is to delete the GUID on each server (using the splunk clone-prep-clear-config command) and then restart Splunk so it generates a new GUID.  Then have the Monitoring Console generate a new forwarder assets table.
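Per host, the sequence looks roughly like this - a sketch, assuming a default $SPLUNK_HOME and that the forwarder is stopped before the GUID is cleared:

    cd $SPLUNK_HOME/bin
    ./splunk stop
    ./splunk clone-prep-clear-config
    ./splunk start

clone-prep-clear-config removes the instance GUID (among other instance-specific settings), and the restart generates a fresh one.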
I don't see any fields extracted in the search head. This config is placed on the heavy forwarder, in the same app where the input is defined. Even in the search head's Extract Fields tester, the regex just gives a check mark for all the events saying it's a valid regex, but doesn't display any events. I'm assuming $1::$2 will be used to assign the field name and field value.
Thank you @ITWhisperer, that's perfect and hasn't slowed down my query!
We are trying to see why my application is showing such a high end-to-end latency time in the business transaction snapshots. We are unable to drill down into the latency time when viewing the details. Is there a way to see the full drill-down view of all the calls involved? We can only see the execution time, not the latency time, in AppDynamics. Screenshots attached.
In what way is it "not working"? Are you getting some of the fields or none of the fields? Is it only working for some of the events, or failing only for some sort of data? Do you need to escape the double quotes in the regex?
We are experiencing the same behaviour. CPU load, disk space, RAM etc. all look fine... Have you been able to solve the issue?
Here is my sample log:

2024-07-08T04:43:32.468537+00:00 dxx1-dbxxxs.xxx.net MSSQLSERVER[0] {"EventTime":"2024-07-08 04:43:32","Hostname":"dx1-dbxxxs.xxx.net","Keywords":45035996273704960,"EventType":"AUDIT_SUCCESS","SeverityValue":2,"Severity":"INFO","EventID":44444,"SourceName":"MSSQLSERVER","Task":5,"RecordNumber":1234343410,"ProcessID":0,"ThreadID":0,"Channel":"Application","Message":"Audit event:lkjfd:sdfkjhf:Askjhdfsdf","Category":"None","EventReceivedTime":"2024-07-08 04:43:32","SourceModuleName":"default-inputs","SourceModuleType":"im_msvistalog"}#015

Here is my config:

props.conf

# my sourcetype
[dbtest:test]
TRANSFORMS-extract_kv_pairs = extract_json_data

transforms.conf

[extract_json_data]
REGEX = "(\w+)":"?([^",}]+)"?
FORMAT = $1::$2
WRITE_META = true

The same regex is working in Regex101 - here is the test link: https://regex101.com/r/rt3bly/1 . I am not sure why it's not working in my log extraction. Any help is highly appreciated. Thanks
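One thing that can make index-time extractions like this look absent at search time: fields created via WRITE_META are indexed fields, and the search head generally needs matching fields.conf entries for field=value searches on them to behave as expected. A minimal sketch, assuming two of the field names from the sample JSON:

fields.conf (on the search head)

    [EventType]
    INDEXED = true

    [Severity]
    INDEXED = true

Until then, the fields can still be checked with the indexed-field search syntax, e.g. sourcetype=dbtest:test EventType::AUDIT_SUCCESS.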
There are several possible scenarios why you can't see the data you think should be getting into Splunk.

1. The data is actually not being properly read or otherwise received by the UF - check your inputs and their state, and check splunkd.log for any sign of the UF having problems with inputs. Also check whether files are not being found by your input definitions, are being skipped due to - for example - CRC duplication caused by a common header, or simply cannot be read due to insufficient permissions.

2. The data might be configured to be sent to non-existent indexes. If you don't have a last-chance index defined, such events get discarded.

3. There might be a configuration in place which does some filtering or redirection to other index(es).

4. The data might be getting indexed properly, but you might have problems with time recognition (especially with wrongly set timezones), resulting in events indexed at the wrong point in time - meaning you simply don't see your events because your search range doesn't cover them; they are "late". A quick check for this case is the search sketch below.
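A hedged way to test scenario 4 - search by index time instead of event time and compare the two (the index name is a placeholder):

    index=your_index _index_earliest=-24h
    | eval lag_seconds = _indextime - _time
    | stats count min(lag_seconds) max(lag_seconds) by sourcetype

Large positive or negative lags point at timestamp or timezone parsing issues.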
What the data looks like when ingested with HEC depends on the endpoint you use.

If you're using the /raw endpoint, you can include the host value as a query parameter.

If you're using the /event endpoint, you can include a host field as an additional field alongside your event.

See https://docs.splunk.com/Documentation/Splunk/9.3.0/RESTREF/RESTinput#services.2Fcollector.2Fraw and https://docs.splunk.com/Documentation/Splunk/9.3.0/RESTREF/RESTinput#services.2Fcollector.2Fevent (yes, these are Splunk Enterprise docs, but HEC should work the same way in Cloud - even the docs on HEC in Cloud say these two endpoints are available).
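For illustration, a sketch of both calls - the base URL, host name, token and channel GUID are placeholders (Splunk Cloud HEC typically lives at an http-inputs-<stack>.splunkcloud.com address, and the /raw endpoint requires a channel identifier):

    # /raw endpoint: host (and channel) as query parameters
    curl -k "https://<hec-host>:8088/services/collector/raw?channel=<some-guid>&host=app-server-01" \
      -H "Authorization: Splunk <your-hec-token>" \
      -d 'my raw event text'

    # /event endpoint: host as a top-level field in the JSON payload
    curl -k "https://<hec-host>:8088/services/collector/event" \
      -H "Authorization: Splunk <your-hec-token>" \
      -d '{"host": "app-server-01", "event": "my event text"}'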
Sure, posted on Slack, thanks
Thanks for your response. Is there any way we can have JSON pagination for a dashboard panel, since we do have the panel in a Studio dashboard?
Hi @BRFZ, which logs are missing? Are they always missing, or only at certain moments? How did you find out that there are missing logs? Ciao. Giuseppe
Hello Guys, We are using Splunk Cloud and have created multiple HECs for different products. We noticed that events coming in through HEC always have "xxx.splunkcloud.com" as the value of the host field. Is there a way to assign different hostnames to different products? Thanks & Regards, Iris
Try starting with something like this

    index=naming version=2.2.* metric="playing"
        [| makeresults
         | fields - _time
         | addinfo
         | eval day=mvrange(0,2)
         | mvexpand day
         | eval earliest=relative_time(info_min_time,"-".day."d")
         | eval latest=relative_time(info_max_time,"-".day."d")
         | fields earliest latest]
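On the follow-up question above about a bigger window: assuming the same structure, widening the comparison should just be a matter of raising the upper bound of mvrange, which controls how many day-shifted copies of the picked time range are searched - mvrange(0,7) yields offsets 0 to 6, i.e. the picked window plus the same window shifted back one to six days:

    index=naming version=2.2.* metric="playing"
        [| makeresults
         | fields - _time
         | addinfo
         | eval day=mvrange(0,7)
         | mvexpand day
         | eval earliest=relative_time(info_min_time,"-".day."d")
         | eval latest=relative_time(info_max_time,"-".day."d")
         | fields earliest latest]

Bear in mind that each extra offset adds another copy of the picked range to the search, so the scanned volume grows roughly linearly with the upper bound.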