All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

App started successfully (id: 1712665900147) on asset:
Loaded action execution configuration
executing action: test_asset_connectivity
Connecting to 192.168.208.144...
Connectivity test failed
1 action failed
Failed to connect to PHANTOM server. No route to host.
Connectivity test failed

I am facing this issue and have tried every possible way to resolve it.
Hi all, I created a volume and changed the homePath of all indexes to use this volume. Now I can't search events that existed before this volume was created, and the search heads only show events that are on this volume. How can I move the old, existing events to this volume so I can search them? Thank you.
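For reference, a minimal indexes.conf sketch of the kind of volume setup described above; the volume name, path, size, and index name are placeholders rather than the actual configuration:

# indexes.conf -- minimal sketch, placeholder names and paths
[volume:primary]
path = /data/splunk/primary
maxVolumeDataSizeMB = 500000

[my_index]
homePath = volume:primary/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb

One hedged observation: changing homePath does not move anything by itself; buckets written before the change stay under the old homePath directories, which would explain why the pre-existing events are no longer searchable.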
Hello guys, I'm currently trying to set up Splunk Enterprise in a cluster architecture (3 search heads and 3 indexers) on Kubernetes using the official Splunk Operator and the Splunk Enterprise Helm chart. While trying to change the initial admin credentials on all the instances, I run into the following issue: all instances come up and become ready as Kubernetes pods except the indexers, which will not start and remain in an error phase without any logs indicating the reason. The following is a snippet of my values.yaml file, which is provided to the Splunk Enterprise chart:

sva:
  c3:
    enabled: true
    indexerClusters:
      - name: idx
    searchHeadClusters:
      - name: shc

indexerCluster:
  enabled: true
  name: "idx"
  replicaCount: 3

defaults:
  splunk:
    hec_disabled: 0
    hec_enableSSL: 0
    hec_token: "test"
    password: "admintest"
    pass4SymmKey: "test"
    idxc:
      secret: "test"
    shc:
      secret: "test"

extraEnv:
  - name: SPLUNK_DEFAULTS_URL
    value: "/mnt/splunk-defaults/default.yml"

Initially I was not passing SPLUNK_DEFAULTS_URL, but after some debugging I found that the "defaults" field only writes to /mnt/splunk-defaults/default.yml, while by default all instances read from /mnt/splunk-secrets/default.yml, so I had to change it. After that, the admin password was indeed changed to "admintest" on all Splunk instances, but the indexer pods still would not start. Note: I also tried changing the password by providing the SPLUNK_PASSWORD environment variable to all instances, with the same behavior.
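Not an answer, but a hedged sketch of how the failing indexer pods can be inspected from the Kubernetes side; the pod name follows the operator's usual splunk-<clusterName>-indexer-N pattern and the namespace "splunk" is an assumption, so adjust both to your deployment:

kubectl get pods -n splunk
kubectl describe pod splunk-idx-indexer-0 -n splunk
kubectl logs splunk-idx-indexer-0 -n splunk --previous

The Events section at the bottom of kubectl describe, and the logs of the previous (crashed) container if it restarted, usually reveal why a pod never leaves the error phase even when the current logs look empty.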
Hi all, Since the redesign of the new Incident Review page, we appear to have lost the ability to search for Notables using a ShortID. With the old dashboard this was achieved by selecting Associations from the filters and entering the ShortID you were looking for, but the new Incident Review dashboard appears to have taken this functionality away. Is there any way to achieve this?
Hi All, One of our teams has implemented an incoming webhook from Splunk into MS Teams to post a message when an alert is triggered. We encountered what seems to be a bug where one specific message could not be replied to or reacted to. Strangely enough, viewing the message on a mobile device would allow you to reply and react to it. Every other alert message before and after this one we have been able to reply to.
I am trying to find the duration for a time span. The "in" and "out" numbers are included in the data as type: number, for example:

in = 20240401183030
out = 20240401193030

I attempted:

| convert mktime(in) AS IN
| convert mktime(out) AS OUT
| eval Duration = OUT - IN

but this does not perform the correct time math. I have not been able to find a function that directly converts such a number to a time, or some other way to get the right duration between the two values.
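A hedged sketch of one way to do this, assuming the numbers really encode timestamps as %Y%m%d%H%M%S (field names are taken from the question; the single quotes tell eval to treat in and out as field names):

| eval IN = strptime('in', "%Y%m%d%H%M%S")
| eval OUT = strptime('out', "%Y%m%d%H%M%S")
| eval Duration = OUT - IN
| eval DurationReadable = tostring(Duration, "duration")

strptime turns each value into a Unix timestamp, so the subtraction yields seconds, and the final tostring(..., "duration") renders it in HH:MM:SS form for readability.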
Hi all, thanks in advance for your time! I have a problem writing a properly working query for this case: I need to take data from index=email1 and find matching data in index=email2. I tried to do it this way: from index=email1 I take the fields src_user and recipient and use a subsearch to look for them in the email2 index. Query examples that I used:

index=email1 sourcetype=my_sourcetype source_user=* [ search index=email2 sourcetype=my_sourcetype source_user=* | fields source_user ]

or

index=email1 sourcetype=my_sourcetype | join src_user, recipient [search index=email2 *filters*]

Everything looked OK in the control sample (I found events in a 10-minute window, e.g. 06:00-06:10) that at first glance matched, but when I extended the search time to e.g. 24h it did not show me any events, not even those that matched in the short time window (even though they were within those 24 hours). Thank you for any ideas or solutions for this case.
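The 24-hour behaviour described above is consistent with subsearch and join limits (by default a subsearch is silently truncated once it hits its result or runtime limits, so longer time ranges return fewer usable matches). As a hedged, subsearch-free sketch of the same correlation, with the sourcetype and field names taken from the question (note the question mixes source_user and src_user, so check which one actually exists in your data):

(index=email1 OR index=email2) sourcetype=my_sourcetype
| stats values(index) AS idx dc(index) AS idx_count BY src_user recipient
| where idx_count > 1

Each row of the result is a src_user/recipient pair that appears in both indexes within the chosen time range, with no subsearch involved.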
We have several summary searches that collect data into metric indexes. They run nightly, and some of them create quite a large number of events (~100k). As a result we sometimes see warnings that the metric indexes cannot be optimised fast enough. A typical query looks like:

index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| bin _time AS day span=24h aligntime=@d+3h
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId

The main contributor to the large number of events is the cardinality of deviceId (~100k), which is effectively a "MAC" address with a common prefix and defined length. I could create 4/8/16 reports, each selecting a subset of deviceIds, and schedule them at different times, but it would be quite a burden to maintain those basically identical copies. So... I wonder if there is a mechanism to shard the search results and feed them into many separate mcollects that are spaced apart by some delay. Something like:

index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| shard by deviceId bins=10 sleep=60s
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId

Maybe my pseudo code above is not so clear. What I would like to achieve is that, instead of one huge mcollect, I get 10 mcollects (each for approximately 1/10th of the events), scheduled approximately 60s apart from each other...
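As far as I know there is no built-in shard/sleep command, but a hedged sketch of one way to keep the scheduled copies almost identical is to hash deviceId into a fixed number of buckets and let each copy keep only one bucket; the where line is then the only difference between the 10 scheduled searches (the base search and field names are copied from the question, the bucket count 10 and the bucket value 3 are just examples):

index=uhdbox sourcetype="tvclients:log:analytics" name="app*" name="*Play*" OR name="*Open*" earliest=-1d@d+3h latest=-0d@d+3h
| eval bucket = tonumber(substr(md5(deviceId), 1, 4), 16) % 10
| where bucket == 3
| bin _time AS day span=24h aligntime=@d+3h
| stats count as eventCount earliest(_time) as _time by day, eventName, releaseTrack, partnerId, deviceId
| fields - day
| mcollect index=uhdbox_summary_metrics split=true marker="name=UHD_AppsDetails, version=1.1.0" eventName, releaseTrack, partnerId, deviceId

Scheduling the 10 copies a minute apart then spaces the mcollect load out in roughly the way the hypothetical shard command would.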
Hi all, after trying to troubleshoot my issue alone, I will try my luck here.

Purpose: clone one sourcetype in order to store the logs in a local indexer and in a distant one. I use one heavy forwarder to receive the logs and store them in an indexer, and the same heavy forwarder should clone the sourcetype and forward the cloned copy to a distant heavy forwarder that I don't manage.

Here is my config:

inputs.conf:
[udp://22210]
index = my_logs_indexer
sourcetype = log_sourcetype
disabled = false

This works pretty well, and all logs are stored in my indexer. Now comes the cloning part:

props.conf:
[log_sourcetype]
TRANSFORMS-log_sourcetype-clone = log_sourcetype-clone

transforms.conf:
[log_sourcetype-clone]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = distant_HF_formylogs

outputs.conf (for the cloned logs):
[tcpout:distant_HF_formylogs]
server = ip_of_distant_HF:port
sendCookedData = false

This configuration is already used for another use case, where I sometimes have to anonymize some logs. However, for this particular use case, when I activate the cloning part, it stops the complete log flow, even towards the local indexers. I don't quite understand why, because I don't see the difference from my other use case, apart from the fact that the logs arrive over UDP rather than TCP. Am I missing something? Thanks a lot for your help.
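Not a verified diagnosis, but one difference worth checking: a transform that sets _TCP_ROUTING re-routes events rather than cloning them, and once a tcpout group is defined a heavy forwarder may stop writing to its local indexes unless it is explicitly told to keep indexing. A hedged outputs.conf sketch of the "index locally and forward the routed copy" variant, reusing the group name from the question:

# outputs.conf on the heavy forwarder -- sketch under assumptions, not a confirmed fix
# keep indexing locally even though a tcpout group exists
[indexAndForward]
index = true

# group referenced by the _TCP_ROUTING transform
[tcpout:distant_HF_formylogs]
server = ip_of_distant_HF:port
sendCookedData = false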
This may be a very simple question, but I'm having trouble identifying the answer. I've been trying to find a way to use RUM data to identify and list the slowest pages on a website using the Observability dashboard; unfortunately, I don't seem to be able to drill down to any specific page from the dashboard. From the research I've done it seems I may have to manually add thousands of RUM URL groupings to drill down further, but I have a feeling that shouldn't be correct?
Hi, I want to ingest the backup logs that are in CloudWatch into Splunk using the AWS add-on, but I do not see any metric present in the add-on to fetch these details. Under which metric will these backup logs be present? How can I get these logs into Splunk using the add-on? Thank you!
Updated the Splunk Palo Alto app on a search head and I'm getting these error messages in the _internal index. Any clues?

Splunk_TA_paloalto 8.1.1
Splunk core 9.0.3

04-08-2024 12:49:40.061 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=aperture: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:40.061 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=aperture: RequestsDependencyWarning)
04-08-2024 12:49:40.969 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=autofocus_export: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:40.969 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=autofocus_export: RequestsDependencyWarning)
04-08-2024 12:49:59.031 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=cortex_xdr: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:49:59.031 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=cortex_xdr: RequestsDependencyWarning)
04-08-2024 12:50:00.762 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=iot_security: /opt/splunk/etc/apps/Splunk_TA_paloalto/bin/splunk_ta_paloalto/aob_py3/solnlib/packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (3.0.4) doesn't match a supported version!
04-08-2024 12:50:00.762 +0000 ERROR ModularInputs [2488837 MainThread] - <stderr> Introspecting scheme=iot_security: RequestsDependencyWarning)

The OS is Ubuntu, fully patched.
Hello everyone! I need some help creating a multivalue field. Events can contain one or more fields of the following form. Let me explain with an example.

Event 1:
FICHERO_LOG1 = /any/log1/id/idca-admin/idca-admin.log
FICHERO_LOG2 = /any/log1/id/log1/any1.log
FICHERO_LOG3 = /any/log1/httpd/*

Event 2:
FICHERO_LOG1 = /any/log2/id/id.log
FICHERO_LOG2 = /any/log2/logging.log
FICHERO_LOG3 = /any/log2/tree/httpd/ds/log2/*
FICHERO_LOG4 = /any/log2/id/id-batch/id-batch2.log

Event N:
FICHERO_LOG1 = /any/logN/data1/activemq.log
FICHERO_LOG2 = /any/logN/id/hss2/*.system.log
...
FICHERO_LOGN = /any/path1/id/.../*...log

The result I expect is a single multivalue field LOG per event. For Event 1:
LOG = /any/log1/id/idca-admin/idca-admin.log
      /any/log1/id/log1/any1.log
      /any/log1/httpd/*

For Event 2:
LOG = /any/log2/id/id.log
      /any/log2/logging.log
      /any/log2/tree/httpd/ds/log2/*
      /any/log2/id/idca-batch/idca-batch2.log

For Event N:
LOG = /any/logN/data1/activemq.log
      /any/logN/id/hss2/*.system.log
      ...
      /any/path1/id/.../*...log

I have tried with:

transforms.conf:
[my-log]
REGEX = ^.*FICHERO_LOG.*\=\s*(?<log>.*?)\s*\n
MV-AD = true

props.conf:
[extractingFields]
TRANSFORM = other_transforms_stanza, my-log

But it's not working. Any ideas or help? What steps should I follow? Thanks, JAR
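For comparison, a hedged sketch of a search-time version of the same extraction; the transforms stanza name is kept from the question, but the sourcetype stanza in props.conf and the setting name MV_ADD (with an underscore) are assumptions about what the configuration should look like, and MV_ADD applies to search-time (REPORT) extractions:

transforms.conf:
[my-log]
REGEX = FICHERO_LOG\d+\s*=\s*(?<LOG>\S+)
MV_ADD = true

props.conf:
[your_sourcetype]
REPORT-my-log = my-log

To test the regex quickly before touching the .conf files, the same idea can be run ad hoc with: | rex field=_raw max_match=0 "FICHERO_LOG\d+\s*=\s*(?<LOG>\S+)"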
Hello All, we plan to use the Splunk OVA for VMware Metrics (app 5096) in combination with Splunk on Windows. I can't find any information on how this OVA will be supported, e.g. operating system and Splunk updates. Does anyone know? Regards, Bernhard
I am using the below to load colours into a dropdown list. The data loads properly, but it always shows: "Could not create search - No search query provided".

<input type="dropdown" token="color" depends="$color_dropdown_token$" searchWhenChanged="false">
  <label>Color</label>
  <choice value="*">All</choice>
  <choice value="Green">Green</choice>
  <choice value="Orange">Orange</choice>
  <choice value="Red">Red</choice>
  <initialValue>*</initialValue>
  <search>
    <query/>
    <earliest>$Time.earliest$</earliest>
    <latest>$Time.latest$</latest>
  </search>
</input>
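A hedged sketch of the likely cause and fix: the empty <query/> inside the <search> element is what Simple XML complains about with "No search query provided". Since the dropdown only uses static <choice> entries, the <search> block can simply be removed (or given a real query if dynamic choices are wanted); the rest of the input is unchanged from the question:

<input type="dropdown" token="color" depends="$color_dropdown_token$" searchWhenChanged="false">
  <label>Color</label>
  <choice value="*">All</choice>
  <choice value="Green">Green</choice>
  <choice value="Orange">Orange</choice>
  <choice value="Red">Red</choice>
  <initialValue>*</initialValue>
</input>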
Have a nice day! I have several Splunk instances and often see the message below:

WorkloadsHandler [111560 TcpChannelThread] - Workload mgmt is not supported on this system.

I know that the workload management feature is not supported on Windows, and it is obviously disabled. How can I get rid of this annoying message in splunkd.log?
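A hedged sketch of one way to silence it, assuming the channel shown in the message (WorkloadsHandler) is the logging category and the message is logged below ERROR severity: raise that category's level in $SPLUNK_HOME/etc/log-local.cfg and restart Splunk. Note this suppresses all non-ERROR messages from that category:

# $SPLUNK_HOME/etc/log-local.cfg -- sketch, category name taken from the message itself
[splunkd]
category.WorkloadsHandler = ERROR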
Below are the CIM macros I am using; different indexes are mapped in the individual macros. I want to get the list of all indexes mapped in all the CIM macros, so I set up a scheduled search that runs and checks all the macros. But it is using a lot of memory and the searches are even failing. Please help me with a better way to get the list of all indexes mapped in the CIM macros.

cim_Authentication_indexes
cim_Alerts_indexes
cim_Change_indexes
cim_Endpoint_indexes
cim_Intrusion_Detection_indexes
cim_Malware_indexes
cim_Network_Resolution_indexes
cim_Network_Sessions_indexes
cim_Network_Traffic_indexes
cim_Vulnerabilities_indexes
cim_Web_indexes
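A hedged sketch of a much lighter alternative: instead of searching the data, read the macro definitions themselves over REST, since the index constraints are stored in the macro text (the admin/macros endpoint and the definition field are my assumption of the standard macros REST endpoint):

| rest /servicesNS/-/-/admin/macros splunk_server=local
| search title="cim_*_indexes"
| table title definition

The definition column then contains the index= constraints configured for each CIM macro, with no search against the data indexes involved.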
How can I resolve the repetitive alert RSA_Probe_Alert_RSA_SECUREID_null? Splunk checks every minute for events containing the keyword "svc_radius_probe_ctx", and when no events with that keyword are found in that minute, the alert is triggered. All the VMs and the server are working fine, yet we get this alert at least once every week.