All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi folks,

Below is the architecture: a multisite indexer cluster with 8 peers and 1 cluster manager (CM), plus a search head cluster.

Site2 peers do report to the search heads when I run index=_internal | stats count by splunk_server, but if I search a particular index such as index=cisco, windows or linux, then only site1 peers return results, and this happens on all the search heads.

Note:
1. The indexer cluster is stable; SF and RF are met.
2. Connectivity to all the peers and the CM is established.
3. All peers are in a healthy state in distributed search.
4. Search affinity is disabled.
5. There are no connectivity-related errors in splunkd.log on the peers.

Need help to rectify this issue. Thanks

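A quick way to narrow this down is to check whether the site2 peers actually hold buckets for the affected indexes, since _internal exists on every indexer while the app indexes may only be receiving data on site1. A diagnostic sketch, assuming index=cisco is one of the affected indexes:

| tstats count where index=cisco by splunk_server

If the site2 indexers are missing from this output, the data is only present on site1, so the forwarder outputs and the multisite replication settings for those indexes are the next places to look.
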
Hello,

I would like to develop a Splunk alert for one of the sources where we ingest data through a REST API, using a scripted input configured on our Heavy Forwarder. I want to send an email alert whenever there is an interruption in data ingestion from that source. I am using the search below but am not seeing any results:

| tstats latest(_time) as latest where index=XYZ by source
| eval recent = if(latest > relative_time(now(),"-10m"),1,0), realLatest = strftime(latest,"%c")
| where recent=0

Can someone please help me with the search?

Thanks

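One thing worth checking (a sketch of an alternative, not a confirmed fix): the search only returns rows for sources that have at least one event inside the selected time range, so if the alert runs over a short window, a source that has gone quiet disappears from the results entirely instead of showing recent=0. Searching a longer window and flagging sources whose newest event is older than 10 minutes avoids that; the 24-hour lookback below is an assumption:

| tstats latest(_time) as latest where index=XYZ earliest=-24h by source
| where latest < relative_time(now(), "-10m")
| eval last_seen = strftime(latest, "%F %T")

Scheduling this every few minutes with the alert condition "number of results > 0" would then list the sources that have stopped sending.
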
I have a playbook using the Splunk "run query" action block with the "attach_result" option, which adds the query results to the vault. Is there any way to download these results locally using the same playbook, as opposed to manually navigating to each container and downloading the results? I have a scenario where I would like to download these files from the container as they are produced and place them on a shared drive (moving the file from the Phantom box to the shared drive would work just as well).

It seems like it should be simple, but I cannot figure out how to interact with this file from a playbook. Any help would be appreciated!

Hi,

As soon as an event group ends, I want to trigger an alert and send an email with the Shipment ID that ended.

Example log:

EVENT GROUP A = started and ended.
2022-12-20 10:43:04.468 +01:00 [ShipmentTransferWorker] **** Execution of Shipment Transfer Worker started. ****
2022-12-20 10:43:04.471 +01:00 [ShipmentTransferWorker] **** [Shipment Number: 000061015] ****
2022-12-20 11:06:19.097 +01:00 [ShipmentTransferWorker] **** Execution of Shipment Transfer Worker ended ****

EVENT GROUP B = started but not ended yet.
2022-12-20 13:43:04.468 +01:00 [ShipmentTransferWorker] **** Execution of Shipment Transfer Worker started. ****
2022-12-20 13:43:04.471 +01:00 [ShipmentTransferWorker] **** [Shipment Number: 000061016] ****

My SPL:

index=app sourcetype=MySource host=MyHost "ShipmentTransferWorker"
| eval Shipment_Status = if(like(_raw, "%Execution of Shipment Transfer Worker started%"), "Started", if(like(_raw, "%Execution of Shipment Transfer Worker ended%"), "Ended", NULL))
| transaction host startswith="Execution of Shipment Transfer Worker started" endswith="Execution of Shipment Transfer Worker ended" keepevicted=true
| rex "Shipment Number: (?<ShipmentNumber>\d*)"
| eval Shipment_Status_Started = if(like(_raw, "%Execution of Shipment Transfer Worker started%"), "Started", NULL)
| eval Shipment_Status_Ended = if(like(_raw, "%Execution of Shipment Transfer Worker ended%"), "Ended", NULL)
| table ShipmentNumber Shipment_Status_Started Shipment_Status_Ended

Suppose EVENT GROUP B ends with the following event after 6 hours; at that point I want to trigger an alert and send an email with shipment number 000061016:

2022-12-20 19:43:19.097 +01:00 [ShipmentTransferWorker] **** Execution of Shipment Transfer Worker ended ****

How can I create a trigger and email once the event ends?

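A possible approach (a sketch, assuming the alert is scheduled, for example every 5 minutes over a window at least as long as the longest worker run): keep the transaction, but alert only on transactions that actually closed, since transaction with keepevicted=true marks still-open ones with closed_txn=0.

index=app sourcetype=MySource host=MyHost "ShipmentTransferWorker"
| transaction host startswith="Execution of Shipment Transfer Worker started" endswith="Execution of Shipment Transfer Worker ended" keepevicted=true
| where closed_txn=1
| rex "Shipment Number: (?<ShipmentNumber>\d+)"
| table _time ShipmentNumber

With the alert set to "Trigger for each result", the token $result.ShipmentNumber$ can then be used in the email subject or body, so each completed shipment produces its own mail.
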
Hi Splunk Experts,

I'm looking for help with splitting a table that is grouped into a single row into multiple rows. I would like to identify the filesystems that are above 40% utilization and collect stats and visuals for them. The statistics table is displayed as a single row only. I tried mvexpand, but it only accepts one field, not two, and if I apply it per field it generates too many rows. I'm missing something here; can you please help me with a workaround?

Splunk query:

index=lab_env host=labhmc earliest=-4h latest=now
| spath path=hmc_info{} output=LIST
| rename LIST as _raw
| kv
| rex field="hmc_info{}.fs_utilization" mode=sed "s/\%//g"
| table hmc_name hmc_info{}.Filesystem hmc_info{}.fs_utilization

Splunk events:

{"category": "hmc", "hmc_name": "labhmc", "hmc_uptime": "73", "hmc_data_ip": "127.0.0.1", "hmc_priv_ip": "127.0.0.1", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/dev", "fs_utilization": "0%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/dev/shm", "fs_utilization": "1%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/run", "fs_utilization": "3%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/sys/fs/cgroup", "fs_utilization": "0%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/", "fs_utilization": "46%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/data", "fs_utilization": "2%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/home", "fs_utilization": "4%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/extra", "fs_utilization": "17%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/dump", "fs_utilization": "1%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/var", "fs_utilization": "14%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/var/hsc/log", "fs_utilization": "25%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/run/user/601", "fs_utilization": "0%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/run/user/604", "fs_utilization": "1%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}
{"category": "hmc_filesystem", "hmc_name": "labhmc", "Filesystem": "/run/user/600", "fs_utilization": "0%", "hmc_version": "V8.65.22", "hmcmodel": "7164-r15", "hmcserial": "673456B", "datacenter": "LAB", "country": "DE"}

I am trying to create an after-hours query with specific time frames: Mon 0000-0700 and 1900-2400, Tue 0000-0700 and 1900-2400, Wed 0000-0700 and 1900-2400, Thu 0000-0700 and 1900-2400, Fri 0000-0700 and 1900-2400, Sat 0000-2400, and Sun 0000-2400. I have my cron expression set to 43 10 * * *.

| sort - _time
| eval user=lower(user)
| eval Day=strftime(_time,"%A")
| eval Hour=strftime(_time,"%H")
| eval Date=strftime(_time,"%Y-%m-%d")
| search Hour IN (19,20,21,22,23,24,0,1,2,3,4,5,6,7)
| table Date, Day, Hour, "User Account"

I like the way this is displayed, but I cannot figure out how to combine this query with a weekend query (Fri 1900 to Mon 0700), or whether I will have to keep two different queries. Once completed, this will make a good dashboard.

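One way to cover both the weekday after-hours windows and the Fri 1900 to Mon 0700 weekend block in a single query (a sketch; user and "User Account" come from the post, the rest is an assumption): Fri 1900-2400, all of Saturday and Sunday, and Mon 0000-0700 are already matched by the rule "weekend day, or hour before 07, or hour 19 and later", so one where clause is enough.

| eval user=lower(user)
| eval Day=strftime(_time, "%A")
| eval Hour=tonumber(strftime(_time, "%H"))
| eval Date=strftime(_time, "%Y-%m-%d")
| where Day="Saturday" OR Day="Sunday" OR Hour < 7 OR Hour >= 19
| sort - _time
| table Date, Day, Hour, "User Account"
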
I am ingesting Azure Activity events via the Splunk Add-on for Microsoft Cloud Services and was wondering whether there are any recommendations or best practices for the Max Time Wait and Max Batch Size settings. Thanks.

Hey there!

I'm trying to monitor (batch) a folder containing XML files. The XML files don't necessarily have the same structure; they also have multiple hierarchy levels, and the nesting depth can vary. Where and how do I configure a sourcetype that knows how to handle this kind of case, so I won't have to parse the data with rex at search time?

Example of a file that may exist:

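A possible starting point (a sketch, not a definitive configuration; the sourcetype name my_xml and the assumption that each file starts with an XML declaration are mine) is to let Splunk do generic XML key-value extraction at search time with KV_MODE = xml, and to handle event breaking in props.conf on the parsing tier:

# props.conf; KV_MODE applies on the search head, the other settings where parsing happens
[my_xml]
SHOULD_LINEMERGE = false
# break only where a new XML document begins, so each file becomes one event
LINE_BREAKER = ([\r\n]+)(?=<\?xml)
TRUNCATE = 0
KV_MODE = xml

KV_MODE = xml extracts element and attribute names as fields regardless of the exact structure, which avoids per-format rex extractions at search time.
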
Hey! I have a self-hosted Splunk Enterprise environment with a cluster master (the deployment server is a separate instance) and 3 indexers. I am trying to push apps to my indexers, but when I change the master-apps folder, add the applications, and then run 'splunk cluster-apply', I get the following output:

No new bundle will be pushed. The cluster manager and peers already have this bundle with bundleId=9FE2BF9FC21C0681C01644653BD69C6C.

Before this I pushed a single app and it worked fine, then I removed it. Now I am trying to push multiple apps and I get the output above.

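That message usually means the checksum of the current master-apps contents matches the bundle the peers already have, which suggests the new apps may not actually be under $SPLUNK_HOME/etc/master-apps on the cluster manager (for example, if they were copied to the deployment server instead). A minimal check-and-push sketch on the cluster manager, under that assumption:

# on the cluster manager
ls $SPLUNK_HOME/etc/master-apps/            # confirm the new apps are really here
$SPLUNK_HOME/bin/splunk validate cluster-bundle
$SPLUNK_HOME/bin/splunk apply cluster-bundle
$SPLUNK_HOME/bin/splunk show cluster-bundle-status
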
Good morning,

I'm having trouble converting a whole number to a decimal.

Example:

| eval Amount = round(tonumber(balance_amount), 2)

Result: 814118225.00

But I need the number to look like: 8141182.25

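If balance_amount is stored as an implied-decimal integer (that is, in hundredths), which the example suggests but is an assumption, dividing by 100 before rounding gives the desired form:

| eval Amount = round(tonumber(balance_amount) / 100, 2)
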
Hi all,

has anyone used the "Dismiss Azure Alert" workflow action in the Splunk Add-on for Microsoft Azure, run from Enterprise Security? I have to configure it, but I have never done it before. Reading the documentation, it seems that everything is already configured and it should run without any problem. Does it require some special configuration, or anything that needs particular attention?

Thank you for your time.

Ciao.
Giuseppe

Hi all,

I need to create 2 dropdown fields that depend on each other, populated from several sources:

| makeresults
| eval name = "a"
| eval value = mvappend("1","2","3","4","5")
| union [| makeresults | eval name = "b" | eval value = mvappend("a","b","c","d","e") ]
| union [| makeresults | eval name = "c" | eval value = mvappend("qq","ss","ff","gg","rr") ]
| table name value
| stats values(*) as * by name value

When I choose a, I should get only a's values; for b, only b's values, and so on. The queries come from several sources, so I can't simply append or union them; I need to create a token for each value. Please assist.

Name    Value
a       a values

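A common pattern for cascading dropdowns (a sketch; the token name $name_tok$ and the dashboard wiring are assumptions, not taken from the post) is to let the first dropdown set a token with the chosen name and to filter the second dropdown's populating search by that token:

| makeresults
| eval name = "a", value = mvappend("1","2","3","4","5")
| union [| makeresults | eval name = "b", value = mvappend("a","b","c","d","e") ]
| union [| makeresults | eval name = "c", value = mvappend("qq","ss","ff","gg","rr") ]
| mvexpand value
| search name="$name_tok$"
| table value

The second dropdown then only offers the values belonging to whatever name was selected in the first one.
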
Dear Community,

Let's say I was running a search over an hour period, from 10:00 until 11:00, and we had a particular transaction that consisted of 2 or more events, the first occurring at 09:59 and the last at 10:01. Using the default transaction command, any events which occurred before 10:00 would not be included, and we would therefore not be viewing the whole transaction. Likewise, if a transaction started at 10:59 and didn't end until 11:01, any events which occurred after 11:00 would be dropped.

Is there any way to include all events related to transactions which started or ended during the specified search time range? Conversely, if this is not possible, it would be helpful to drop any transactions which did not both start and end within the time range; is there any way to achieve this?

Kind regards,
Ben

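A sketch of the usual workaround (the correlation field session_id, the 15-minute maximum span, and the nominal one-hour window are illustrative assumptions): search a window padded by the maximum transaction span, build the transactions, then keep only those that overlap the window you actually care about.

index=my_index earliest=-75m@m latest=now
| transaction session_id maxspan=15m
| eval txn_start = _time, txn_end = _time + duration
| where txn_end >= relative_time(now(), "-60m@m")

For the converse requirement (only transactions that both start and end inside the hour), the last line becomes | where txn_start >= relative_time(now(), "-60m@m") AND txn_end <= now() instead.
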
Hi,

I just installed UBA on RHEL 8.4 and started it for the first time, but it failed to start. I then tried stop-all and start-all to find which service is failing. It looks like the HDFS data node is the one that fails, but I don't know how to fix it individually. Can anyone give me a hand?

Tue Dec 20 18:57:36 CST 2022: Running: /opt/caspida/bin/Caspida start-service hive-metastore
Hive tables are accessible
Looking for live HDFS datanodes
report: Incomplete HDFS URI, no host: hdfs://demo_uba:8020
(the line above is repeated 20 times)
No HDFS datanodes found, check if the required ports are open
Refer to installation instructions for the list of ports that are required to be open between the caspida cluster nodes

Hello,

I have a few images in my dashboard. After a search, I would like to attach a value (e.g. an order number) to each image and then use this value as input to a drilldown, so that when I click on an image, another search starts with the order-number token.

The images are in SVG. Is this possible?

Thank you

Hi All,

I want to create multiple tables/panels inside a dashboard which will show static messages like DASHBOARD A, DASHBOARD B, DASHBOARD C, etc. These messages will drill down to the respective dashboards A, B and C.

Currently I am using this query:

index=* | head 1 | eval DashboardName="Dashboard A" | table DashboardName

Is there a way to produce the static message without having to search a set of events using index, source or sourcetype? I don't want to use that unnecessarily.

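One option (a sketch) is makeresults, which generates a result row on the search head without reading any index at all:

| makeresults
| eval DashboardName = "Dashboard A"
| table DashboardName
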
Hi,

I need to connect to a Splunk Enterprise instance that is hosted in a VM from my local machine using Python. I tried port 8000; the connection seemed to be established, but I cannot do anything further, such as running queries.

Thanks & Regards

Hello team,

I am using syslog for log ingestion from Solaris servers. I can see traffic with tcpdump host solarisServer, but the logs are not visible on the search head.

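A quick check (a sketch; solarisServer is the placeholder host name from the post) is to search all indexes for that host over the last few hours, in case the events landed in an unexpected index or sourcetype:

index=* OR index=_* host="solarisServer" earliest=-4h
| stats count by index sourcetype host

If nothing comes back, the next things to verify are the syslog input (UDP/TCP port) on the receiving instance and whether that instance forwards to the indexers the search head actually searches.
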
Hi all,

I use the following simple props.conf for some JSON-type events:

[my:sourcetype]
category = Structured
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
CHARSET = UTF-8
INDEXED_EXTRACTIONS = json
TIME_FORMAT = %s
disabled = false
pulldown_type = true
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = timestamp

The event looks like the following:

{"access_device": {"browser": "Edge Chromium", "browser_version": "108.0.1462.54", "epkey": null, "flash_version": "uninstalled", "hostname": null, "ip": "192.168.182.230", "is_encryption_enabled": "unknown", "is_firewall_enabled": "unknown", "is_password_set": "unknown", "java_version": "uninstalled", "location": {"city": "Bestine", "country": "Tatooine", "state": "Central and Western District"}, "os": "Windows", "os_version": "10"}, "adaptive_trust_assessments": {}, "alias": "unknown", "application": {"key": "ABCDEFG1234567", "name": "[UAT] Hello World App"}, "auth_device": {"ip": null, "key": null, "location": {"city": null, "country": null, "state": null}, "name": null}, "email": null, "event_type": "authentication", "factor": "not_available", "isotimestamp": "2022-12-20T09:14:08.755759+00:00", "ood_software": null, "reason": "allow_unenrolled_user", "result": "success", "timestamp": 1671527648, "txid": "c571233d-b357-3f07-e126-ca2623b8e0d9", "user": {"groups": [], "key": null, "name": "luke"}, "eventtype": "authentication", "host": "jedi1.mydomain.com"}

It works when I test it by uploading a log file and setting the sourcetype to my:sourcetype: fields and the timestamp are extracted correctly. However, when events are fed from a UF, the timestamp is not extracted and the file modification time is used as the timestamp instead. I tried adding 'TIME_PREFIX=timestamp": ' but it didn't help. Would anyone please help?

Thanks and regards

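A common cause worth checking (an assumption, not a confirmed diagnosis): with INDEXED_EXTRACTIONS, structured parsing, including TIMESTAMP_FIELDS, happens on the forwarder itself, so this stanza has to be deployed to the universal forwarder, not only to the indexers or search heads. A minimal sketch of what would live on the UF, reusing the stanza from the post:

# $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf on the universal forwarder
[my:sourcetype]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %s
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false

After deploying it, restarting the UF and letting it pick up a new file (already-indexed data keeps its old timestamps) would show whether the timestamp is now taken from the event.
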
Hi,

I have the following events in Splunk:

{
  "field1": "something",
  "execution_times": {
    "service1": 100,
    "service2": 400,
    (...)
    "service_N": 600
  },
  "field2": "something"
}

How can I create a multiline chart that shows the p90 and p99 of each "service" in the JSON map "execution_times", based on the values (here 100, 400, (...), 600)? The query should produce a chart with N*2 different time series (lines), one for p90 and one for p99 per service, across all "services" that appear in the events. Each event can contain a different set of "services" in its execution_times map.

Thanks

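Assuming the JSON is auto-extracted into fields named execution_times.service1, execution_times.service2, and so on (an assumption; the explicit spath below forces that extraction if it is not automatic), a sketch using wildcards in stats produces one p90 and one p99 column per service, which a line chart then renders as N*2 series:

index=my_index sourcetype=my_sourcetype
| spath
| bin _time span=5m
| stats perc90(execution_times.*) as p90_* perc99(execution_times.*) as p99_* by _time

Services missing from a given time bucket simply produce empty cells, so events with different sets of services still chart cleanly.
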