All Topics

Hi, can anybody help with how to change the font size of drop-down items/selections? Here is my dropdown:

<input type="dropdown" token="auftrag_tkn" searchWhenChanged="true" id="dropdownAuswahlAuftrag">
  <label>Auftrag</label>
  <fieldForLabel>Auftrag</fieldForLabel>
  <fieldForValue>Auftrag</fieldForValue>
  <search>
    <query>xxxxx</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
</input>
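One common approach is to add an inline style block to the dashboard and target the input by its id. This is a sketch only: the inner class names (Select2 widget classes) vary between Splunk versions, so the selectors below are assumptions to verify in your browser's inspector:

```xml
<row depends="$hidden$">
  <panel>
    <html>
      <style>
        /* font size of the selected value in the dropdown box (selector is an assumption) */
        #dropdownAuswahlAuftrag .splunk-dropdown .select2-choice {
          font-size: 16px !important;
        }
        /* font size of the items in the opened list (selector is an assumption) */
        .select2-results .select2-result-label {
          font-size: 16px !important;
        }
      </style>
    </html>
  </panel>
</row>
```

The `depends="$hidden$"` trick keeps the style-only row from rendering as a visible panel.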
After the Splunk Master enters maintenance mode, one of the indexers goes offline and then back online, and maintenance mode is disabled. The fixup tasks have been stuck for about a week. The number of pending fixup tasks went from around 5xx to 102 after deleting the rb bucket. I assume it is an issue with bucket syncing in the indexer cluster, because the client's server is a bit laggy (network delay, low CPU). There are 40 fixup tasks in progress and 102 fixup tasks pending on the indexer cluster master.

The internal log shows that all 40 in-progress tasks report the following errors:

Getting size on disk: Unable to get size on disk for bucket id=xxxxxxxxxxxxx path="/splunkdata/windows/db/rb_xxxxxx" (This is usually harmless as we may be racing with a rename in BucketMover or the S2SFileReceiver thread, or merge-buckets command which should be obvious in log file; the previous WARN message about this path can safely be ignored.) caller=serialize_SizeOnDisk
Delete dir exists, or failed to sync search files for bid=xxxxxxxxxxxxxxxxxxx; will build bucket locally. err=
Failed to sync search files for bid=xxxxxxxxxxxxxxxxxxx from srcs=xxxxxxxxxxxxxxxxxxxxxxx
CMSlave [6205 CallbackRunnerThread] - searchState transition bid=xxxxxxxxxxxxxxxxxxxxx from=PendingSearchable to=Unsearchable reason='fsck failed: exitCode=24 (procId=1717942)'

The internal log shows that all 102 pending tasks report the following error:

ERROR TcpInputProc [6291 ReplicationDataReceiverThread] - event=replicationData status=failed err="Could not open file for bid=windows~xxxxxx err="bucket is already registered with this peer" (Success)"

Does anyone know what "fsck failed: exitCode=24" and "bucket is already registered with this peer" mean? How can these issues be resolved to reduce the number of fixup tasks? Thanks.
Running Splunk 9.3.5 on RHEL 8, in a STIG-hardened environment. The non-Splunk RHEL instances running a Universal Forwarder have no issue accessing the audit.log files, apparently by virtue of the statement AmbientCapabilities=CAP_DAC_READ_SEARCH located in the /etc/systemd/system/SplunkForwarder.service file. However, the same is not true on the Splunk instances: they require read access permissions via a file ACL or something similar, and these options all result in multiple STIG compliance findings, each requiring a write-up as a vendor (Splunk) dependency.

Question: why? Why can't Splunk access the audit.log files the same way the UF does? Or is there some way to do the same sort of thing with AmbientCapabilities for Splunkd.service? It is tempting to quit collecting these logs with Splunk itself and install a UF on the Splunk instances too.
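For comparison, the same capability grant that the SplunkForwarder unit file uses can in principle be applied to the Splunkd unit via a systemd drop-in, so the change survives upgrades. This is a sketch only; whether Splunk Enterprise behaves correctly with this capability set in a hardened environment is an assumption to verify:

```
# Create a drop-in with: systemctl edit Splunkd.service
# which writes /etc/systemd/system/Splunkd.service.d/override.conf containing:

[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH

# Then reload and restart:
#   systemctl daemon-reload
#   systemctl restart Splunkd.service
```

CAP_DAC_READ_SEARCH grants read/search over file-permission checks without granting full root, which is why the UF unit file uses it for audit.log access.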
I am making a dashboard with a dropdown input called $searchCriteria$. I am trying to set the value of search_col based on the value of the $searchCriteria$ token. I have tried the following:

| eval search_col = if($searchcriteria$ == "s_user", user, path)
| eval search_col = if('$searchcriteria$' == "s_user", user, path)
| eval search_col = if($searchcriteria$ == 's_user', user, path)
| eval search_col = if('$searchcriteria$' == 's_user', user, path)

I even tried:

| eval search_col = if(s_user == s_user, user, path)

The value of search_col always comes out the same as path. I have tested, and the value of the $searchcriteria$ token is getting set properly. What am I doing wrong?
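Two things are worth checking here (stated as likely causes, not a confirmed diagnosis). First, token names are case-sensitive, and the input is declared as $searchCriteria$ while the evals use $searchcriteria$. Second, tokens are substituted as literal text before the search runs, so if the token value is s_user, `if($searchCriteria$ == "s_user", ...)` becomes `if(s_user == "s_user", ...)`, which compares a field to a string. Wrapping the token in double quotes makes it a string literal:

```
| eval search_col = if("$searchCriteria$" == "s_user", user, path)
```

After substitution this evaluates as `if("s_user" == "s_user", user, path)`, which is the intended string comparison.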
Hi,

I am trying to form a custom link to the episode/event in the email alert triggered from Splunk ITSI. However, when I open the link to that event or episode directly, it always opens the general alerts and episodes view, and you then have to search again for the events and check the details. Is there a way to get a link to the episode directly, so that a person can open it without searching through the list of events?

The link to a specific episode, e.g.
https://splunkcloud.com/en-US/app/itsi/itsi_event_management?tab=layout_1&emid=1sdfdff-3cd3-11f0-b7a7-44561c0a81024&earliest=%40d&latest=now&tabid=all-events
when opened in a separate window does not open that specific episode. (The above URL has been modified so as not to share the exact URL for the episode.)
I have a requirement to monitor log files created by Trellix on my Windows 11 and 2019 hosts. The log files are located at:

C:\ProgramData\McAfee\Endpoint Security\Logs\AccessProtection_Activity.log
C:\ProgramData\McAfee\Endpoint Security\Logs\ExploitPrevention_Activity.log
C:\ProgramData\McAfee\Endpoint Security\Logs\OnDemandScan_Activity.log
C:\ProgramData\McAfee\Endpoint Security\Logs\SelfProtection_Activity.log

My stanzas in inputs.conf are configured as:

[monitor://C:\ProgramData\McAfee\Endpoint Security\Logs\AccessProtection_Activity.log
disabled = 0
index = winlogs
sourcetype = WinEventLog:HIPS
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXel = false

Same format for each log. For some reason Splunk is not ingesting the log data.
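Two things stand out in the stanza as pasted (assumptions to verify against your actual file): the monitor stanza header is missing its closing bracket, and start_from, current_only, checkpointInterval, and renderXml-style settings apply to Windows Event Log inputs, not to file monitors, so a file monitor stanza would normally look closer to this sketch:

```
[monitor://C:\ProgramData\McAfee\Endpoint Security\Logs\AccessProtection_Activity.log]
disabled = 0
index = winlogs
sourcetype = WinEventLog:HIPS
```

A single wildcard stanza such as `[monitor://C:\ProgramData\McAfee\Endpoint Security\Logs\*_Activity.log]` could also cover all four files, assuming no other .log files in that directory should be excluded.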
I am trying to invoke the threadPrint operation on the MBean java.lang:type=Runtime. I think the UI is telling me that it takes a String array as input, but I can't figure out how to specify an array. I have tried many combinations:

Blank
Empty double quotes
Empty single quotes
Double-quoted values
Single-quoted values
Curly braces
Square braces
A single character
Multiple characters

A few give an immediate syntax error, like quotes aren't allowed. Most give something like this:

failed with error = Unsupported type = [Ljava.lang.String;, value = l

I think it's trying to tell me that my input is not a String array. How do I specify an array?

Thanks
Hello experts,

I have never done this and wonder if there is a best way to achieve the following. I want to use the DS to push initial configurations to the CM, and then use the CM as a proxy for the IDX cluster. I tried the following:

1) Added the CM as a client, and for the serverclass I added 'stateOnClient = noop' to each of the app entries, just to make sure those applications do not run locally on the CM.
2) After the above step, the configs land on the CM under /opt/splunk/etc/apps; however, I want them to land in /opt/splunk/etc/master-apps.

The question is: can the DS put the directories in a different location than the default /opt/splunk/etc/apps?
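One approach worth testing (a sketch based on standard deploymentclient.conf options; verify against the spec for your version) is to override the landing directory on the CM side rather than on the DS. The deployment client can refuse the server-dictated repository location and substitute its own:

```
# deploymentclient.conf on the cluster manager
[deployment-client]
# Ignore the repositoryLocation the DS sends and use the local one instead
serverRepositoryLocationPolicy = rejectAlways
repositoryLocation = $SPLUNK_HOME/etc/master-apps
```

With this in place, apps pushed from the DS land in etc/master-apps on the CM, from where a cluster bundle push distributes them to the indexer peers.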
| loadjob savedsearch="userid:search:hostslists"
| lookup lookupname Hostname as host OUTPUTNEW Hostname,IP
| eval Host=upper(host)
| append
    [| loadjob savedsearch="userid:search:hostslists"
     | lookup lookupname IP as host OUTPUTNEW IP,Hostname
     | eval Host=upper(host)]
| append
    [| loadjob savedsearch="userid:search:hostslists"
     | lookup lookupname AltName as host OUTPUTNEW AltName,IP,Hostname
     | where AltName != Hostname
     | eval Host=upper(host)]
| eval starttime=relative_time(now(),"-10d@d"),endtime=relative_time(now(),"-1d@d")
| convert ctime(latest),ctime(starttime),ctime(endtime)
| where latest<=endtime AND latest>=starttime
| rename latest as "Last event date", Host as "Host referred in Splunk"
| eval Hostname=if('Host referred in Splunk'!='IP','Host referred in Splunk',Hostname)
| stats count by Hostname,IP,"Host referred in Splunk","Last event date"
| fields - count
| dedup IP,Hostname

In my query I am using the saved search "hostslists" (it contains the list of hosts reporting to Splunk along with the latest event datetime) and the lookup "lookupname" (fields: Hostname, AltName, IP).

Aim: get the list of devices present in the lookup which have not been reporting for more than 10 days.
Logic: some devices report with "Hostname", some devices report with "AltName", and a few devices report with "IP", so I am checking all three fields and capturing "Last event date".

Now I am facing a challenge:

Hostname    IP         "Last event date"
Host1       ipaddr1    25th July    (by referring to IP)
Host1       ipaddr1    10th June    (by referring to Hostname)

I have two different "Last event date" values for the same "Hostname" and "IP". My report is not showing the latest date, but here I have to consider the latest date. I am stuck on how to express such logic. Can anyone please help? Thanks for your response.
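One way to keep only the most recent date per host pair (a sketch; it assumes "Last event date" has been formatted by `convert ctime`, whose default output parses with the strptime format below — verify against your actual values) is to sort descending on the parsed time and dedup:

```
| eval _lt = strptime('Last event date', "%m/%d/%Y %H:%M:%S")
| sort 0 - _lt
| dedup Hostname IP
| fields - _lt
```

An alternative that avoids re-parsing is to run `stats max(latest) as latest by Hostname, IP` before the `convert ctime(latest)` step, so only one (maximum) epoch value per host pair ever reaches the formatting stage.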
Hello,
I tried to import the app dashboard for Cyberwatch, but the dashboard displays empty data. My understanding is that for the data input I should select the following:
sourcetype = "cyberwatch:syslog"
app context = "Cyberwatch (SA-cyberwatch)"
index name = "cyberwatch"
But if I check the content of the dashboard, there are other source types:
cbw:group
cbw:node
cbw:vuln
...
Can you clarify how to make the dashboard work from Cyberwatch syslog events?
Regards,
Olivier
Currently we have our Cisco ISE devices being sent to a syslog server, and then a forwarder is bringing that into Splunk. We are running into an issue where ise_servername is showing not the device name but the syslog server name. What am I missing? How would I go about fixing this?
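A common pattern when events arrive via a syslog relay (a sketch; the sourcetype name and the header regex are assumptions to adapt to your actual events) is to rewrite the host metadata at index time from the hostname embedded in the syslog header, so the original device name replaces the relay's name:

```
# props.conf (on the indexers or heavy forwarder doing the parsing)
[cisco:ise]
TRANSFORMS-set_host = ise_syslog_host

# transforms.conf
[ise_syslog_host]
# Capture the hostname field after the syslog timestamp (pattern is an assumption)
REGEX = ^\w{3}\s+\d+\s+\d+:\d+:\d+\s+(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1
```

If ise_servername is a search-time extraction from the Cisco ISE add-on rather than the host metadata, the fix would instead be in that extraction's regex; checking which one is actually wrong is the first step.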
Hello Splunkers! I am using HEC to send an HTML file to Splunk. The received event contains the HTML lines of code. The HTML is a table with some data. Is there a way, or how can I create a dashboard from the HTML text that shows the table? Maybe another way to say this is: how can I extract the HTML code from the HEC event and display it on a dashboard?

Thank you so much,
E Holz
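One way to sketch this (assuming the table is simple, non-nested `<tr><td>…</td></tr>` markup, and with placeholder index/sourcetype names) is to pull the cell values out of the raw event with rex at search time and let a dashboard table panel render the result:

```
index=my_index sourcetype=my_hec_html
| rex max_match=0 "<td>(?<cell>[^<]+)</td>"
| table cell
```

Regex-based HTML parsing only holds up for flat, predictable markup; if the table structure varies, sending the data to HEC as JSON instead of HTML would make the dashboard side much simpler.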
We are looking into the feasibility of integrating Mule CloudHub with Splunk Cloud directly for log ingestion. Please suggest.
I have devices using a specific v4 address range and a specific v6 address range. I'd like to get the percent of devices using the v6 range so we can track the progress of the conversion. I'm new to Splunk so I'm not sure how to proceed. 
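As a starting sketch (the index, the `ip` field, and the `device` field are placeholder assumptions; substitute your own), each device's address can be classified by family and the v6 share computed from distinct device counts:

```
index=network_devices
| eval family = if(match(ip, ":"), "v6", "v4")
| stats dc(device) as devices by family
| eventstats sum(devices) as total
| eval percent = round(100 * devices / total, 1)
```

If you need to match your specific address ranges rather than any v4/v6 address, `cidrmatch("10.0.0.0/8", ip)` style tests inside the `if()` would replace the colon check.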
Hi all,
I'm looking to remove missing forwarders, where the servers have been permanently removed, as reported by the CMC. I cannot see any way of doing this. Is this something that I have to raise a support case for?
Many thanks,
Mark
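In some versions, the CMC/Monitoring Console forwarder view is driven by a lookup of known forwarders that can be trimmed by hand. This is an assumption to verify on your version (and worth backing up the lookup before writing to it), but a sketch of the approach is:

```
| inputlookup dmc_forwarder_assets
| search NOT hostname="decommissioned-host*"
| outputlookup dmc_forwarder_assets
```

Rebuilding the forwarder asset table from the Monitoring Console's forwarder setup page achieves a similar reset without hand-editing, if that option exists in your release.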
Hi guys, I'm trying to run a playbook and send an email using the SMTP service, but I am not able to do it. When I tested sending email from the SOAR CLI it worked, but from the console it's not happening. Can anyone tell me how to send emails from SOAR using the "passwordless" method? I am unable to find the instructions or an SOP from Splunk.

I've tested the connectivity over port 25 towards the SMTP server, and it's working.
Hi all, I want to extract fields from a custom log format. Here's my transforms.conf:

REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)
FORMAT = name::$1 version::$2 message::$3
DEST_KEY = _meta

This regex is supposed to extract the following from a log like:

Jul 27 14:10:05 1.2.3.4 1 2025-07-27T14:09:05Z QQQ123-G12-W4-AB iLO6 - - - iLO time update failed. Unable to contact NTP server.

Expected extracted fields:
name = QQQ123-G12-W4-AB
version = iLO6
message = iLO time update failed. Unable to contact NTP server.

The regex works correctly when tested independently, and all three groups are matched. However, in Splunk, only the first two fields (name and version) are extracted correctly. The message field only includes the first word: iLO. It seems Splunk is stopping at the first space for the message field, despite the regex using (.*) at the end. Any idea what could be causing this behavior? Is there a setting or context where Splunk treats fields as single-token values by default? Any advice would be appreciated!
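One possible cause (stated as an assumption worth testing, not a confirmed fix): _meta holds space-delimited key::value pairs, so an unquoted $3 containing spaces is cut at the first space. Quoting the capture in FORMAT is the usual workaround; a sketch:

```
# transforms.conf
[extract_ilo_fields]
REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)
# Quotes around $3 keep the multi-word message as one value in _meta
FORMAT = name::$1 version::$2 message::"$3"
WRITE_META = true
```

WRITE_META = true appends to _meta rather than overwriting it (DEST_KEY = _meta replaces the whole key), which is usually what you want when adding indexed fields.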
Hi all,
I've been tasked with setting up logging for Windows Certification Services and getting it into Splunk. I have enabled the logging for Certification Services and can see the events in the Windows Security log. In Splunk I can see the Windows Security logs for the CA server; however, the Certification Services events are missing. I've confirmed in inputs.conf that the event IDs I'm looking for are whitelisted. Does anyone have any other suggestions on what can be checked?
I'm working on a transforms.conf to extract fields from a custom log format. Here's my regex:

REGEX = ^\w+\s+\d+\s+\d+:\d+:\d+\s+\d{1,3}(?:\.\d{1,3}){3}\s+\d+\s+\S+\s+(\S+)(?:\s+(iLO\d+))?\s+-\s+-\s+-\s+(.*)
FORMAT = srv::$1 ver::$2 msg::$3
DEST_KEY = _meta

This regex is supposed to extract the following from a log like:

Jul 27 14:10:05 x.y.z.k 1 2025-07-27T14:09:05Z QQQ123-G12-W4-AB iLO6 - - - iLO time update failed. Unable to contact NTP server.

Expected extracted fields:
srv = QQQ123-G12-W4-AB
ver = iLO6
msg = iLO time update failed. Unable to contact NTP server.

The regex works correctly when tested independently, and all three groups are matched. However, in Splunk, only the first two fields (srv and ver) are extracted correctly. The msg field only includes the first word: iLO. It seems Splunk is stopping at the first space for the msg field, despite the regex using (.*) at the end. Any idea what could be causing this behavior? Is there a setting or context where Splunk treats fields as single-token values by default? Any advice would be appreciated!
Hi, in my company we are using Splunk Enterprise in a cluster structure. I recently updated my servers (not Splunk). After that, and after restarting the Splunk deployment server, all forwarders are trying to phone home. When listening on the deployment server, it is receiving the calls, but when I check the clients in the Forwarder Management section it is empty. What can I do?
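A first thing to check (a sketch; the hostname and port below are placeholders) is that each forwarder's effective deploymentclient.conf still points at the DS after the server update, and that the configured client name survived:

```
# deploymentclient.conf on a forwarder
[target-broker:deploymentServer]
targetUri = deployment-server.example.com:8089

# Verify the effective settings on the forwarder with:
#   splunk btool deploymentclient list --debug
# and watch phone-home activity on the DS in splunkd.log / _internal.
```

If the phone-homes arrive but clients stay unlisted, comparing the DS's serverclass.conf whitelists against the hostnames/IPs the forwarders now present (which may have changed with the server update) is the next step.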