I want to onboard application logs into Splunk Cloud. Hyland.Logging can be configured to send information to Splunk as well as the Diagnostics Console by modifying the .config file of the server. To configure Hyland.Logging to send information to Splunk:

<Route name="Logging_Local_Splunk" >
  <add key="Splunk" value="http://localhost:SplunkPort"/>
  <add key="SplunkToken" value="SplunkTokenNumber"/>
  <add key="DisableIPAddressMasking" value="false" />
</Route>

(From "Configuring Hyland.Logging for Splunk", Application Server > Reader > Product Documentation.)

I do not understand what this config should point to on the Splunk side. Any guidance would be much appreciated.
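In case it helps: nothing gets configured inside a .config file on the Splunk side; the Route above just needs to point at an HTTP Event Collector (HEC) token created in Splunk Cloud under Settings > Data Inputs > HTTP Event Collector. A hedged sketch of what the route might then look like (YOURSTACK and the token value are placeholders, not real values):

```
<Route name="Logging_Local_Splunk" >
  <!-- Splunk Cloud HEC endpoints typically follow the
       https://http-inputs-YOURSTACK.splunkcloud.com:443 pattern -->
  <add key="Splunk" value="https://http-inputs-YOURSTACK.splunkcloud.com:443"/>
  <!-- Token generated in Splunk Cloud: Settings > Data Inputs > HTTP Event Collector -->
  <add key="SplunkToken" value="00000000-0000-0000-0000-000000000000"/>
  <add key="DisableIPAddressMasking" value="false" />
</Route>
```

Whether Hyland.Logging expects the bare host or the full /services/collector path appended is worth verifying against Hyland's documentation.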
Hi, I would like to resize the panels that I have in a Splunk row. I have 3 panels, and I referred to some previous posts on resizing panel width using CSS. I remember this used to work, but I can't seem to get it working on my current Splunk dashboard. Due to some script dependencies I am not able to use Dashboard Studio, so I am still stuck with the classic XML dashboard. I followed a previous question on this exactly as described, but the panels still appear equally spaced at 33.33% each.

<form version="1">
  <label>Adjust Width of Panels in Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="tokTime" searchWhenChanged="true">
      <label>Select Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel depends="$alwaysHideCSS$" id="CSSPanel">
      <html>
        <p/>
        <style>
          #CSSPanel{ width:0% !important; }
          #errorSinglePanel{ width:25% !important; }
          #errorStatsPanel{ width:30% !important; }
          #errorLineChartPanel{ width:45% !important; }
        </style>
      </html>
    </panel>
    <panel id="errorSinglePanel">
      <title>Splunkd Errors (Single Value)</title>
      <single>
        <search>
          <query>index=_internal sourcetype=splunkd log_level!=INFO | timechart count</query>
          <earliest>$tokTime.earliest$</earliest>
          <latest>$tokTime.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="colorBy">trend</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">inverse</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
    <panel id="errorStatsPanel">
      <title>Top 5 Error (Stats)</title>
      <table>
        <search>
          <query>index=_internal sourcetype=splunkd log_level!=INFO | top 5 component showperc=false</query>
          <earliest>$tokTime.earliest$</earliest>
          <latest>$tokTime.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel id="errorLineChartPanel">
      <title>Splunkd Errors (Timechart)</title>
      <chart>
        <search>
          <query>index=_internal sourcetype=splunkd log_level!=INFO | timechart count</query>
          <earliest>$tokTime.earliest$</earliest>
          <latest>$tokTime.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.chart">line</option>
        <option name="charting.drilldown">none</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </chart>
    </panel>
  </row>
</form>
I am trying to set up the Tenable App for Splunk and the documentation is a bit vague about whether it requires a Heavy Forwarder to operate.  I found an old post from 2017 that mentioned it did, but it was referencing older versions of Nessus than what is used in my environment.  Does anyone know if a heavy forwarder is still required for the  Tenable App for Splunk?
Hi all, I’ve recently encountered several challenges since migrating to Splunk Mission Control (MC) and would appreciate any guidance or insights.

Summary of issues:
- We had a dashboard set up to pull all the data needed for our monthly report. Since switching to MC, all those dashboards are broken with errors like "Could not find object id=*". I recreated the dashboard with new searches, which initially worked fine and allowed report creation. However, when revisiting the new dashboard, most searches now fail or return no results within the expected time frame, despite previously working and being used in the latest report.
- Several items, such as the "top hosts (consolidated)" and "top hosts" charts that were available under Security Domain > Network > Exec View, are now missing post-migration.

Search aborts and resource issues:
One major problem is searches being aborted with SVC errors. After contacting the customer, workload restrictions on my account were lifted, but searches still fail due to resource usage. Even limiting searches to a single day results in failures, and this has become quite frustrating.

Example problem with macros and searches:
The macro sim_licensing_summary_base appears to be missing since moving to MC, and even the customer cannot locate it. The following search, intended to replicate the macro's function, returns incomplete results after 2025-04-10 without any errors in the job manager:

(host=*.*splunk*.* NOT host=sh*.*splunk*.* index=_telemetry source=*license_usage_summary.log* type="RolloverSummary")
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "volume" fixedrange=true
| eval GB=round((volume / 1073741824),3)
| fields - b, volume
| stats avg/max(GB)

Additional notes:
- We’ve also noticed missing dashboards and objects that were previously part of Enterprise Security views.
- Searches aborting due to resource limits remain an issue despite workload adjustments.
Has anyone else experienced similar problems after switching to Mission Control? Any advice on troubleshooting these dashboard errors, missing macros, or search aborts would be greatly appreciated.  
For some reason, I needed to share some data from an index with a different set of permissions. After a bit of research, I found that the CLONE_SOURCETYPE option could help me with this. I created the required settings in props.conf and transforms.conf, and then pushed them to the IDXC layer. At first glance, everything seemed fine, but then I discovered that CLONE_SOURCETYPE clones all events from the original sourcetype and redirects only a few to the new one. Is that the intended behavior, or did I make serious mistakes in the configuration? I expected to see only the events matching the REGEX in the original index.

props.conf

[vsi_file_esxi-syslog]
LINE_BREAKER = (\n)
MAX_TIMESTAMP_LOOKAHEAD = 24
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = false
BREAK_ONLY_BEFORE = \d{1,3}
TIME_PREFIX = ^<\d{1,3}>
TRANSFORMS-remove_trash = vsi_file_esxi-syslog_rt0, vsi_file_esxi-syslog_ke0
TRANSFORMS-route_events = general_file_esxi-syslog_re0

transforms.conf

[general_file_esxi-syslog_re0]
CLONE_SOURCETYPE = general_re_esxi-syslog
REGEX = FIREWALL-PKTLOG:
DEST_KEY = _MetaData:Index
FORMAT = general
WRITE_META = true
Hi Team, greetings! This is Srinivasa. Could you please provide documentation on how to configure and install Splunk with Unified Applications (CUCM) on-prem?
Hi, We are experiencing a critical issue where several scheduled alerts/reports are not being received by their intended recipients. This issue affects both individual mailboxes and distribution lists. Initially, only a few users reported missing alerts. However, it has now escalated, with all members of the distribution lists no longer receiving several key reports. Only a few support team members continue to receive alerts in their personal mailboxes, suggesting inconsistent delivery. Also, just checking: is there any suppression list that could be blocking delivery?
Hi All, I am very new to Splunk and faced an issue while extracting an alphanumeric value with no predefined length. Example events:

2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - zywstrf
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - abc123f
2025-05-15T04:32:12.397Z INFO 1 --- [nio-8080-exec-4] x.y.z.y.LDAPAccountServiceImpl : [Request END] Failed : Cannot fetch secret for Vault Engine - XYXR_VPN_Engine, AIT - 9876 Service ID - 1234-abcehu09_svc06-app_texsas_14455

I am trying to get the Service ID value, which comes at the end of the line. Thanks a lot in advance. Regards, AKM
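A hedged sketch of one way to pull that trailing value with rex (the field name service_id is my own choice, and this assumes the ID is always the final whitespace-delimited token after "Service ID - "):

```
index=your_index "Service ID -"
| rex "Service ID - (?<service_id>\S+)\s*$"
| table _time, service_id
```

`\S+` matches any run of non-whitespace characters, so variable lengths and values containing hyphens or underscores such as 1234-abcehu09_svc06-app_texsas_14455 are all covered.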
I’m forwarding logs from an EC2 instance using rsyslog with the omhttp module to a Splunk HEC endpoint running on another EC2 instance (IP: 172.31.25.126) over port 8088. My rsyslog.conf includes:

module(load="omhttp")
action(type="omhttp"
  server="172.31.25.126"
  port="8088"
  uri="/services/collector/event"
  headers=["Authorization: Splunk <token>"]
  template="RSYSLOG_SyslogProtocol23Format"
  queue.filename="fwdRule1"
  queue.maxdiskspace="1g"
  queue.saveonshutdown="on"
  queue.type="LinkedList"
  action.resumeRetryCount="-1"
)

Problem: Even though I’ve explicitly configured port 8088, I get this error:

omhttp: suspending ourselves due to server failure 7: Failed to connect to 172.31.25.126 port 443: No route to host

It seems like omhttp is still trying to use HTTPS (port 443) instead of plain HTTP on port 8088.

Questions:
1. How do I force the omhttp module to use HTTP instead of HTTPS?
2. Is there a configuration parameter to explicitly set the protocol scheme (http vs https)?
3. Is this behavior expected if I just set the port to 8088 without configuring the protocol?

Any insights or examples are appreciated. Thanks!
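For what it's worth, a hedged sketch based on my reading of the omhttp parameter list in recent rsyslog documentation (verify against the docs for your installed version): the module names its parameters serverport, restpath, and httpheaders, and defaults usehttps to "on", which would explain the attempts on port 443:

```
module(load="omhttp")
action(type="omhttp"
    server="172.31.25.126"
    serverport="8088"
    restpath="services/collector/event"
    httpheaders=["Authorization: Splunk <token>"]
    usehttps="off"
    template="RSYSLOG_SyslogProtocol23Format"
)
```

If parameters such as port/uri are silently ignored by your omhttp build, the action could well be falling back to its HTTPS defaults.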
Hi, I have installed Splunk 9.4.2 on-prem and downloaded and installed the 'splunkuf' app from the Splunk Cloud universal forwarder package. Upon restarting the Splunk instance, it throws the following errors. I just want to ensure the internal logs reach the cloud before I configure the server with custom apps/add-ons.

05-14-2025 13:05:23.918 +0000 ERROR TcpOutputFd [2377196 TcpOutEloop] - Connection to host=18.xx:9997 failed. sock_error = 104. SSL Error = No error

I have checked connectivity from the on-prem instance to inputs1.*.splunkcloud.com:9997 using curl/telnet and openssl, and the firewall team confirmed the ports are open. Any thoughts on what I could be missing, or suggestions to troubleshoot? Thanks, laks
Hi, We just upgraded Splunk to version 9.4.2, and in the dashboards we noticed that all the text is now wrapped; before, the string was cut off and "..." appeared at the end. Do you know how to revert this auto-wrapping? Thank you
Hello! I maintain Splunk reports. Some of the Pivot reports are based on a dataset generated from a simple search. Duplicate values were not taken into account in the generation. Due to an error, there were two data sources for a few weeks, which resulted in identical duplicate rows in the dataset. Going forward, duplicate rows can be removed from the dataset with a simple dedup. However, are there any best practices for fixing the existing duplicates?
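A hedged sketch of the forward-looking fix: bake the dedup into the dataset's base search so every Pivot built on it inherits the de-duplication (the field list here is a placeholder; pick fields that uniquely identify an event):

```
index=your_index sourcetype=your_sourcetype
| dedup _time, host, source, _raw
```

For rows already indexed twice, the options I know of are re-ingesting the affected window into a clean index, or the delete command (which requires the can_delete role, and note it only masks events from search results rather than reclaiming disk space).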
Hi, I am trying to gather data from a specific organizational unit in Active Directory and ignore everything else. I tried a transforms.conf to allow only that OU, but it didn't seem to work. I could sort of get it to work by writing a block rule for everything else, but that's a bit of a pain as the environment is shared. Has anyone had experience doing this sort of thing?
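A hedged sketch of what I believe the admon input supports (inputs.conf on the monitoring forwarder; the stanza name and DN values are placeholders, and targetDn/monitorSubtree should be verified against the Splunk Add-on for Windows documentation):

```
[admon://ScopedOU]
targetDn = OU=Servers,DC=example,DC=com
monitorSubtree = 1
disabled = 0
```

Scoping targetDn to the one OU at collection time would avoid having to maintain block rules for everything else.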
Hi All, Has anyone worked with OpenText NetIQ logs before? We are receiving the NetIQ logs via syslog, but the sourcetype is set to the default 'syslog' and field extractions are not performed properly. Since I cannot find any Splunk TA for NetIQ logs, I would appreciate suggestions for the sourcetype assignment for NetIQ logs. Thank you
We have a .NET application and are continuously getting events every 20 seconds. Can you please guide me on how to stop the noise? I have disabled it, but it still persists. Thanks, Dinesh
Hi Team, Is there a direct way to retrieve a list of usernames or accounts configured in Splunk Add-ons (such as those used in modular inputs, scripted inputs, or API connections) using Splunk SPL? Regards, VK
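One hedged sketch using the REST endpoint where most add-ons keep their stored credentials (this needs the list_storage_passwords capability and should be run on the instance hosting the add-ons):

```
| rest /servicesNS/-/-/storage/passwords splunk_server=local
| table title, username, realm
```

In my experience, accounts configured through an add-on's setup page usually land in this credential store, while anything hard-coded directly in local .conf files would not appear here.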
Hello Everyone, I want to check if a field called "from_header_displayname" contains any Unicode. Below is the event source; this example event contains the Unicode escape "\u0445":

"from_header_displayname": "'support@\u0445.comx.com'"

And the following is what I see from the web console; the Unicode has been rendered as "х" (note: it's not the real letter x, but a look-alike letter from another alphabet):

from_header_displayname: 'support@х.comx.com'

I used the following search but no luck:

index=email | regex from_header_displayname="[\u0000-\uffff]"

Error in 'SearchOperator:regex': The regex '[\u0000-\uffff]' is invalid. Regex: PCRE2 does not support \F, \L, \l, \N{name}, \U, or \u.

Please advise what I should use in this case. Thanks in advance. Regards, Iris
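A hedged sketch of the PCRE2 spelling: code points are written \x{hhhh} rather than \uhhhh. To flag any character outside plain ASCII:

```
index=email
| regex from_header_displayname="[^\x00-\x7F]"
```

or, narrowing to the Cyrillic block that \u0445 (х) belongs to:

```
index=email
| regex from_header_displayname="[\x{0400}-\x{04FF}]"
```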
I need to find whether the string ["foobar"] exists in a log message. I have a search query like

some stuff
| eval hasFoobar = case(_raw LIKE "%\"foobar%", "Y")
| eval hasFoobar = if(hasFoobar = "Y", "YES", "NO")
| table message, hasFoobar

which gives YESes as expected. If I add a square bracket, whether escaped or not, I only get NOes. E.g.,

some stuff
| eval hasFoobar = case(_raw LIKE "%[\"foobar%", "Y")
| eval hasFoobar = if(hasFoobar = "Y", "YES", "NO")
| table message, hasFoobar

some stuff
| eval hasFoobar = case(_raw LIKE "%\[\"foobar%", "Y")
| eval hasFoobar = if(hasFoobar = "Y", "YES", "NO")
| table message, hasFoobar

Any advice?
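A hedged alternative that sidesteps LIKE entirely: match() takes a PCRE regex, where the literal bracket is escaped with a backslash and the embedded quote with \" (both escapes sit inside the SPL string literal):

```
some stuff
| eval hasFoobar = if(match(_raw, "\[\"foobar\"\]"), "YES", "NO")
| table message, hasFoobar
```

This collapses the two eval steps into one; the exact escaping may need a tweak depending on how the raw events store the quotes.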
Hello Experts, I am trying to send Windows Security logs to a Logstash (HTTP) receiver. Below is what I have, based on my understanding of this Splunk document: https://docs.splunk.com/Documentation/Forwarder/latest/Forwarder/Configureforwardingwithoutputs.conf

On the UF I have inputs.conf:

[WinEventLog://Security]
disabled = 0

outputs.conf:

[httpout]
httpEventCollectorToken = <token>
uri = http://127.0.0.1:8002
compressed = false
sendCookedData = false
compression = none

My logstash.conf (I want to write the data into a file):

input {
  http {
    port => 8002
    codec => plain
  }
}
output {
  file {
    path => "C:\logstash_output\uf_debug_raw.txt"
  }
}

The file is being created, but it holds encoded data: it looks like encrypted data and symbols. Can someone suggest whether this is even possible? Data in the file:

{"url":{"domain":"127.0.0.1","port":8002,"path":"/services/collector/s2s"},"@version":"1","event":{"original":"�x��V�n\u001CE\u0010�`@���@\u001C�����%
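The /services/collector/s2s path visible in the captured JSON suggests the UF is shipping Splunk's binary S2S framing even over httpout, which Logstash cannot decode. A hedged alternative I have seen used instead is plain tcpout with cooked data disabled, paired with a Logstash tcp input (the port 5140 and file paths are placeholders):

```
# outputs.conf on the UF
[tcpout]
defaultGroup = logstash

[tcpout:logstash]
server = 127.0.0.1:5140
sendCookedData = false
```

```
# logstash.conf
input {
  tcp {
    port => 5140
    codec => line
  }
}
output {
  file {
    path => "C:/logstash_output/uf_raw.txt"
  }
}
```

Caveat (assumption worth verifying): with sendCookedData=false the UF ships raw event text only, so some Windows event metadata carried by the cooked protocol may be lost.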
Hello, How do I change the dataSource of a table dynamically based on a token in Splunk Dashboard Studio? I tried to assign a token to the "primary" field, so it can change dynamically to "Data 1" or "Data 2" based on a selection. However, this does not seem to work. I've seen a suggestion to use a saved search, but I don't want to use that solution. Please suggest. Thanks

"viz_dynamictable": {
    "type": "splunk.table",
    "dataSources": {
        "primary": "$datasource_token$"
    },
    "title": "$title_token$"
},
"dataSources": {
    "ds_index1": {
        "type": "ds.search",
        "options": {
            "query": "index=index1"
        },
        "name": "Data 1"
    },
    "ds_index2": {
        "type": "ds.search",
        "options": {
            "query": "index=index2"
        },
        "name": "Data 2"
    }
}
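A hedged workaround sketch, given that tokens do not appear to resolve inside the dataSources binding itself: keep a single data source and move the token into the query, driven by a dropdown input (all names, indexes, and the defaultValue below are placeholders):

```
"dataSources": {
    "ds_dynamic": {
        "type": "ds.search",
        "options": {
            "query": "index=$index_token$"
        },
        "name": "Dynamic data"
    }
},
"inputs": {
    "input_index": {
        "type": "input.dropdown",
        "options": {
            "items": [
                {"label": "Data 1", "value": "index1"},
                {"label": "Data 2", "value": "index2"}
            ],
            "token": "index_token",
            "defaultValue": "index1"
        },
        "title": "Source"
    }
}
```

The table then keeps "primary": "ds_dynamic" permanently, and only the index inside the query changes with the selection.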