All Topics

Hi, while trying to install the Splunk UF on Windows Server 2022 to send logs to a Splunk Cloud instance, we encounter a "Universal forwarder setup wizard ended prematurely" error. The MSI logs show "Back from server. Return value: 1603". What could be the cause of this error?
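Error 1603 is a generic Windows Installer failure, so the real cause (insufficient permissions, a leftover earlier installation, a service account that cannot log on, etc.) is usually visible in a verbose MSI log. A sketch of re-running the installer with full logging from an elevated prompt (the MSI filename below is a placeholder for your actual installer):

msiexec /i splunkforwarder-9.x.x-x64-release.msi /l*v uf_install.log

In the resulting uf_install.log, search for "Return value 3" to find the first action that failed.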
Hi All, I ran into a tricky one and can't wrap my head around it (or whether it is even possible). The use case is as follows: there are 3 sourcetypes that share a common field "Detection_id". This field has a unique value that appears in all 3 sourcetypes. The first sourcetype is "DetectionName" and the 2nd sourcetype is "ProcessInfo". I did a join based on the "Detection_id" field so that I can see my detection names displayed in a table right next to the process information from the 2nd sourcetype that is responsible for the detection. That search works fine. This is where it gets a bit tricky. The 3rd sourcetype is called "FalsePositive", meaning the detection was already investigated and considered a false positive. We do not want any events from the first two searches displayed in our table IF the same unique "Detection_id" value also appears in the "FalsePositive" sourcetype. That way, time isn't wasted scrolling through detections that were already investigated. Any thoughts on the best way to handle this and/or whether it is even possible? I appreciate any answers or feedback.
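One way to sketch this, keeping the existing join intact, is to exclude any Detection_id that also appears in the FalsePositive sourcetype with a NOT [subsearch] filter in the base search (the index name below is a placeholder; sourcetype and field names are taken from the description above):

index=your_index (sourcetype=DetectionName OR sourcetype=ProcessInfo)
    NOT [ search index=your_index sourcetype=FalsePositive | fields Detection_id ]

The existing join on Detection_id and the final table command would then follow this filtered base search unchanged. Keep in mind that subsearch result limits apply, so for very large FalsePositive volumes a stats- or lookup-based exclusion may be more robust.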
Hello All, for a separate reason we have had to disable SSL for HEC tokens on our HF. SC4S now will not connect, as it just throws SSL errors. Is there a way to disable HEC SSL for SC4S? Thanks in advance, Leon
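A sketch of what this typically looks like in the SC4S environment file; the variable names below are assumptions and vary between SC4S releases, so verify them against the SC4S documentation for your version:

# /opt/sc4s/env_file (variable names are assumptions - check your SC4S version's docs)
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://your-hf.example.com:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=<your HEC token>
# If the original problem was only certificate validation rather than SSL itself,
# there is also a TLS-verify toggle, e.g.:
# SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no

After editing the env_file, restart the SC4S container/service for the change to take effect.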
Hi all, how could we ingest all logs into a single sourcetype in Splunk Cloud ES?
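If the goal is simply to force every input to one sourcetype, a minimal sketch is to set it explicitly on each input; the monitor path and sourcetype name below are made-up examples. Note that collapsing everything into a single sourcetype will break most CIM field extractions and ES data models, which key off sourcetype, so it is worth confirming that this is really the intent.

# inputs.conf on the forwarder (example values only)
[monitor:///var/log/myapp/*.log]
index = main
sourcetype = my_single_sourcetype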
I have three types of data entries:

{ <Irrelevant field omitted> "parameters": [ { "LicenseNumber": "123456" } ], "eventTimestamp": "2023-05-09T15:23:57+0300", }
{ <Irrelevant field omitted> "parameters": [ { "Holder_Id": "654321" } ], "eventTimestamp": "2023-05-09T15:23:57+0300", }
{ <Irrelevant field omitted> "parameters": [ { "Name": "John Doe" } ], "eventTimestamp": "2023-05-09T15:23:57+0300", }

I want stats on how many events there are per parameter field type, e.g. Name: 69, Holder_Id: 42, LicenseNumber: 76. I thought I'd use eval to create a field based on which parameter exists, but this doesn't work:

<base_query>
| eval group_name = case(isnotnull('parameters{}.Name'), Name, isnotnull('parameters{}.HolderId'), HolderId, isnotnull('parameters{}.LicenseNumber'), LicenseNumber)
| stats count by group_name
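A sketch of one approach, assuming the JSON is auto-extracted so the fields appear as parameters{}.Name, parameters{}.Holder_Id, and parameters{}.LicenseNumber (matching the sample events above): have case() return a literal label for the group rather than the value of a field.

<base_query>
| eval group_name = case(isnotnull('parameters{}.Name'), "Name",
                         isnotnull('parameters{}.Holder_Id'), "Holder_Id",
                         isnotnull('parameters{}.LicenseNumber'), "LicenseNumber")
| stats count by group_name

The original attempt returns the value of a bare field (e.g. Name, which usually doesn't exist alongside 'parameters{}.Name'), and also references HolderId where the events contain Holder_Id, which is why the grouping comes out empty.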
Hi, how can we determine which logs are being used by specific dashboards, use cases, or other metrics in Splunk Cloud ES?
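One possible starting point (a sketch, assuming you can run the rest command and substituting a hypothetical index name) is to pull the dashboard definitions over REST and search their XML for references to a given index or sourcetype:

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| fields title eai:acl.app eai:data
| search eai:data="*index=my_index*"
| table title eai:acl.app

The same idea works for saved searches and correlation searches via | rest /servicesNS/-/-/saved/searches, filtering on the search field instead of eai:data.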
index="abcd" | eval _time = strptime(TS_Changed_At,"%d/%m/%Y %H:%M") | sort 0 ID _time | dedup ID _time | eventstats last(Status) as current_status by ID | where current_status="AAA" OR current_... See more...
index="abcd" | eval _time = strptime(TS_Changed_At,"%d/%m/%Y %H:%M") | sort 0 ID _time | dedup ID _time | eventstats last(Status) as current_status by ID | where current_status="AAA" OR current_status="BBB" OR current_status="CCC" | streamstats current=f window=1 values(Status) as prev_status by ID | where NOT Status=prev_status | eval Cal= if(Status="CCC" AND (NOT prev_status="AAA " AND NOT prev_status="BBB"),substr(TS_Last_Status_Change,1,16),if(Status="BBB" AND NOT prev_status="AAA",substr(TS_Last_Status_Change,1,16),if(Status="AAA",substr(TS_Last_Status_Change,1,16),""))) | where NOT Cal="" | eventstats max(eval(strptime(Cal,"%d/%m/%Y %H:%M"))) as max_ by ID | where max_ = strptime(Cal,"%d/%m/%Y %H:%M") | table ID Cal
AKHQ shows a topic/connector as red/yellow when there is an issue. Can Splunk capture those statuses, and can we configure an alert based on them?
If I have query results with lists/arrays containing events:
line.Data = [eventOne, eventThree];
line.Data = [eventOne, eventTwo];
How can I create a table that shows the count of the different events?
eventOne: 2
eventTwo: 1
eventThree: 1
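A sketch of one approach, assuming the raw events are JSON so line.Data{} can be extracted with spath (if line.Data already arrives as a multivalue field, skip the spath line):

<your base search>
| spath path=line.Data{} output=event_name
| mvexpand event_name
| stats count by event_name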
If I have query results with dictionaries containing events as the key and frequencies as the value:
line.Data = {"eventOne": 4, "eventThree" : 2};
line.Data = {"eventOne": 2, "eventTwo" : 3}
How can I create a table that shows the sum of the different events?
eventOne: 6
eventTwo: 3
eventThree: 2
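One possible sketch, using a regular expression to pull every "key": value pair out of the line.Data dictionary (it assumes the pairs look like "eventOne": 4 and that no other quoted-key/number pairs appear elsewhere in the raw event):

<your base search>
| rex field=_raw max_match=0 "\"(?<event_name>[^\"]+)\"\s*:\s*(?<event_count>\d+)"
| eval pair=mvzip(event_name, event_count, "=")
| mvexpand pair
| eval event_name=mvindex(split(pair,"="),0), event_count=tonumber(mvindex(split(pair,"="),1))
| stats sum(event_count) as total by event_name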
Hi, looking for help on how to detect systems where a monitored value has decreased compared to yesterday's average value, and if so, by how much. We have a fleet of several thousand monitored systems, and we'd like to locate any systems whose value has dropped since yesterday.

Note: Due to some lag in delivery of the values over long-haul comms, we also include the origin timestamp when sending the event to Splunk and use it instead of index time to get the correct order of events (for now...).

The base search looks like this to return events containing the relevant values across the fleet:

index=foo sourcetype=bar system_name="*_volume"
| eval tstamp=strptime(local_time, "%Y-%m-%dT%H:%M:%S.%Q")
| eval _time = tstamp

A typical event returned by the search above from the last hour looks like this (with some unused fields removed):

{
   local_time: 2023-05-10T15:05:07.617Z
   system_name: location/sublocation/widget 01/widget_volume
   value: 201.58281
}

A table of the above across one sublocation would look like this:

_time                     system_name                                           value
2023-05-10 15:05:07.617   location 01/sublocation 01/widget 01/widget_volume    16.18125
2023-05-10 13:48:02.010   location 01/sublocation 01/widget 02/widget_volume    53.16573
2023-05-10 13:41:01.497   location 01/sublocation 01/widget 03/widget_volume   108.99990
2023-05-10 11:48:53.687   location 01/sublocation 01/widget 04/widget_volume   200.73786

So, my challenge is that we have thousands of system_names feeding into Splunk and we'd really like to spot the handful that are decreasing compared to yesterday. As a starting reference point in time to compare with, we'd like a daily table report listing any system_names that have decreased since yesterday, with columns system_name, todays_avg_value, yesterdays_avg_value, value_delta (which should be a negative number, not a percentage). Eventually we'd like to run this as an hourly comparison, since catching a decrease in a timely manner matters from an alerting perspective. Thank you!
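A sketch of one possible daily comparison, building on the base search above (note that earliest/latest still filter on index time, so if delivery lag is significant the window may need to be widened):

index=foo sourcetype=bar system_name="*_volume" earliest=-1d@d latest=now
| eval tstamp=strptime(local_time, "%Y-%m-%dT%H:%M:%S.%Q")
| eval _time = tstamp
| eval day=if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| stats avg(eval(if(day="today", value, null()))) as todays_avg_value
        avg(eval(if(day="yesterday", value, null()))) as yesterdays_avg_value
        by system_name
| eval value_delta = round(todays_avg_value - yesterdays_avg_value, 5)
| where value_delta < 0
| table system_name todays_avg_value yesterdays_avg_value value_delta

For the eventual hourly version, the same pattern applies with the day eval replaced by an hour-based bucket, or with streamstats/timechart over smaller spans.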
Hello, I have some issues with changing the configuration of Splunk Enterprise Security. My system consists of 5 search heads, and all apps and add-ons are pushed from the Deployer in the default push mode (merge_to_default), including Splunk ES. Previously I configured the alert email in ES Content Update on a Search Head via the Web GUI, and this configuration was then replicated to the members of the cluster. Now I want to add another email to this section, but changing each rule manually is too time-consuming, so I directly edited the savedsearches.conf file, but it did not replicate to the remaining members. After reading Splunk's documentation, my idea is to change the push mode to local_only for the Splunk ES app, so that the savedsearches.conf I configured in $SPLUNK_HOME/etc/apps/DA-ESS-ContentUpdate/local/savedsearches.conf gets pushed. I would then push the bundle down to the captain, and the configuration would be replicated to the remaining members of the cluster. Is this plan feasible, and are there any potential risks when following this approach?
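For reference, a minimal sketch of what the local_only route looks like on the deployer; the captain URI and credentials are placeholders, and the exact behaviour of deployer_push_mode is version-dependent, so double-check the SHC deployer documentation for your release before relying on it:

# app.conf of the app copy staged on the deployer
[shclustering]
deployer_push_mode = local_only

# then push the bundle from the deployer
splunk apply shcluster-bundle -target https://<captain>:8089 -auth admin:<password>

The usual caution is that switching push modes changes where existing default/local settings end up on the members after the next push, so test on a non-production search head cluster first if possible.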
We have a requirement to archive all on-prem Splunk data to a Splunk Cloud instance. The client has purchased a Dynamic Data Active Archive license. Can someone please provide information on the prerequisites for this work, and whether there are any other ways we could perform this activity?
I tried to migrate my Python scripts for Splunk that send out emails with attachments from Python 2 to Python 3. In Python 3, when the attachment contains Chinese, Japanese, or other special characters, the email cannot be sent. This issue didn't happen with the Python 2 script. I think it's because Python 3 is strict about not mixing different types of strings (bytes vs. str), but I don't know how to modify the code. Any ideas or tips for this issue?
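A minimal sketch of one way this is commonly handled in Python 3; the addresses, SMTP host, and filename are placeholders, and it assumes the failure comes from a non-ASCII attachment filename or from mixing str and bytes when building the message:

# Python 3 sketch: read the attachment as bytes and RFC 2231-encode a non-ASCII filename
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication
import smtplib

msg = MIMEMultipart()
msg["Subject"] = "Splunk report"
msg["From"] = "splunk@example.com"       # placeholder
msg["To"] = "user@example.com"           # placeholder

filename = "レポート.csv"                 # placeholder non-ASCII filename
with open(filename, "rb") as f:          # read as bytes, never as a decoded str
    part = MIMEApplication(f.read())
# Passing a (charset, language, value) tuple makes the email package
# RFC 2231-encode the filename instead of failing on non-ASCII characters.
part.add_header("Content-Disposition", "attachment",
                filename=("utf-8", "", filename))
msg.attach(part)

with smtplib.SMTP("localhost") as s:     # placeholder SMTP host
    s.send_message(msg)                  # send_message handles the serialization

If the body text itself contains non-ASCII characters, attach it as MIMEText(body, "plain", "utf-8") rather than relying on the default us-ascii charset.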
I want to set up DB Connect in Splunk Managed Cloud to run Snowflake queries from Splunk. How can I change the configuration in Splunk Cloud to set up DB Connect correctly? I am using the blog below from Snowflake, but I am stuck at step 3 because we are on Splunk Cloud. https://community.snowflake.com/s/article/Integrating-Snowflake-and-Splunk-with-DBConnect   Thanks, Bhavesh
Hi All, can anyone help me create a regex to extract the bolded parts from the following _raw log, please?
some text - [action:"Accept"; some text ; origin:"10.111.10.111"; some text]"; dst:"192.168.11.01"; some text684"; layer_name:"Some text"; layer_nsome text"; src:"192.168.81.62"]
Thank you in advance!
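A sketch, assuming the values you are after are the ones in action, origin, dst, and src (adjust the list if the bolded fields were different):

<your base search>
| rex field=_raw "action:\"(?<action>[^\"]+)\""
| rex field=_raw "origin:\"(?<origin>[^\"]+)\""
| rex field=_raw "dst:\"(?<dst>[^\"]+)\""
| rex field=_raw "src:\"(?<src>[^\"]+)\""
| table action origin dst src

Each rex anchors on the key name and captures everything up to the next double quote, which keeps it tolerant of the variable text between the fields.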
When running splunk show deploy-poll or splunk set deploy-poll on the command line of a UF (Linux), I'm prompted to provide a user and password but receive "Login failed" 100% of the time. I successfully log in with those same credentials in the UI of the SH and MC. The credentials are my user credentials (admin role) and not a default user/pass. Where can I begin investigating this issue?
- Is there a log Splunk writes to locally (this UF is not a deployment client yet) that I can look at to find out why I can't authenticate?
- Is there a conf file I need to look into to diagnose and fix this issue?
I know I can use the -auth flag in the command and provide a user/pass, but I don't want my password in the command history of this server.
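On the first question: the forwarder does log locally, and note that the CLI authenticates against the UF's own local accounts (the admin password set when the UF was installed), not against your search head credentials, which is a common reason those SH credentials fail here. A quick sketch of where to look, assuming the default install path /opt/splunkforwarder:

grep -i "login" /opt/splunkforwarder/var/log/splunk/splunkd.log /opt/splunkforwarder/var/log/splunk/audit.log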
Hello, in the classic dashboards you have the option to see the Job Inspector for the panels. I miss this option in Dashboard Studio. Does anybody know if this exists? I cannot find it. Regards, Harry
Description: I can't seem to find the sequence of commands to upgrade the KV store storage engine to wiredTiger on this Ubuntu VM. When attempting to upgrade, I receive the following error:
Starting KV Store storage engine upgrade: Phase 1 (dump) of 2: ..Failed to migrate to storage engine wiredTiger, reason= [App Key Value Store migration] Starting migrate-kvstore. [App Key Value Store migration] Storage Engine hasn't been migrated to wireTiger. Cannot upgrade to service(42)
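For reference, the documented upgrade is normally a single CLI command run while splunkd is up (verify against the KV store documentation for your Splunk version), and the underlying reason the dump phase fails is usually recorded in mongod.log and splunkd.log:

splunk migrate kvstore-storage-engine --target-engine wiredTiger

If the command keeps failing, $SPLUNK_HOME/var/log/splunk/mongod.log from the time of the attempt is the first place to look for the real error.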
Hi, I need to detect basic authentication logons on our on-prem Exchange system. We have deployed the Exchange TA (add-on), but it does not monitor the log files where I found the information I needed. The log files are located under E:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\Mapi. I thought to add a stanza to monitor the log files there, but I don't know which sourcetype I should use for it. I wonder if someone has already created one that could be shared.

[monitor://E:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\Mapi]
whitelist=\.log$|\.LOG$
time_before_close = 0
sourcetype= ???????????????
queue=parsingQueue
index=msexchange
disabled=false

Many thanks.
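If no shared sourcetype turns up, a minimal sketch of a custom one; the sourcetype name is made up, and the settings assume the HttpProxy\Mapi files are comma-separated with a "#Fields:" header line and a DateTime column in ISO format, so check a sample file and adjust before deploying:

# props.conf (deploy alongside inputs.conf on the forwarder reading the files,
# since INDEXED_EXTRACTIONS is applied where the file is read)
[msexchange:httpproxy:mapi]
INDEXED_EXTRACTIONS = csv
FIELD_HEADER_REGEX = ^#Fields:\s*(.*)
TIMESTAMP_FIELDS = DateTime
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N

The monitor stanza above would then reference it with sourcetype = msexchange:httpproxy:mapi.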