Hi, I recently changed the OS hostname, followed by the Splunk hostname, on a single-node deployment. I am still seeing the old hostname in the Splunk License Manager reports, which are also showing blanks for both today's and historical license consumption. I followed several articles on things to check, i.e. https://splunk.my.site.com/customer/s/article/In-the and https://community.splunk.com/t5/Splunk-Dev/How-to-fix-incorrect-Instance-name-after-change-hostname/m-p/613316, with the same outcome: the License Manager still shows the incorrect hostname in the dropdown, and license usage stats are not reflected in the UI. Could someone please suggest additional things I can check?
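For what it's worth, the hostname is recorded in more than one place, and License Manager reports key off Splunk's `serverName`/`host` values rather than the OS hostname alone. A minimal checklist sketch (paths assume a default install; `new-hostname` is a placeholder):

```
# $SPLUNK_HOME/etc/system/local/server.conf
[general]
serverName = new-hostname

# $SPLUNK_HOME/etc/system/local/inputs.conf
[default]
host = new-hostname
```

After editing, restart Splunk and verify with `splunk btool server list general --debug | grep serverName`. Historical license_usage.log entries will still carry the old name, so the old hostname in historical reports is expected; blank current usage is worth checking directly in `index=_internal source=*license_usage.log*`.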
REGISTER HERE
Tuesday, April 8, 2025 | 9AM–9:30AM PT
Pizza Hut's Story of a Successful Migration for Greater Reliability & Resilience

Many organizations are struggling with observability solutions that promise a lot but deliver little, leading to high costs, incomplete insights, and slower innovation. Instead of driving digital transformation, organizations find themselves bogged down by unexpected overages, fragmented tools, and siloed teams. But a better approach to observability is here. Splunk Observability is designed to meet customer needs for a unified, flexible observability solution and to evolve with them as they scale. In this webinar, you'll hear firsthand from the Pizza Hut team, who made the switch to Splunk Observability, transforming their digital strategy to achieve greater reliability, resilience, and business impact.

Join our webinar to learn:
- The digital transformation goals that drove an organization to seek a better observability solution.
- Key differentiators that make Splunk the right choice for modern businesses.
- Real-world examples showcasing improved performance, reliability, and efficiency after customers migrate to Splunk.
- How Splunk can help simplify your observability journey, improve collaboration across teams, and drive better outcomes for your business.

This is a can't-miss webinar! Register now and join us.
Hi All, I'm looking for an SPL query that returns, for each index, the total size occupied since the date of onboarding and the remaining space available. I'd also like the query to return results from the onboarding date to now regardless of whether I search over one hour, one month, or one year. Thanks, Srinivasulu S
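One starting point is the indexes REST endpoint, which reports the current size and the configured maximum per index. A sketch (assumes you have permission to run `| rest` on the search head; field names can vary by version):

```
| rest /services/data/indexes
| stats sum(currentDBSizeMB) as usedMB max(maxTotalDataSizeMB) as maxMB by title
| eval remainingMB = maxMB - usedMB
| rename title as index
| table index usedMB maxMB remainingMB
```

Because `| rest` is a generating command, its output is independent of the time picker, which also covers the second requirement: the same numbers come back whether the search window is one hour or one year.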
I’ve encountered an issue while working on a configuration for a Splunk deployment. I was creating a stanza in the inputs.conf file within an app that would be pushed to multiple clients via the deployment server. The goal was to retrieve specific data across multiple clients. However, I noticed that the data retrieval wasn't working as expected. While troubleshooting the issue, I made several changes to the stanza, including tweaking key values. In the process, I tried to change the source type in the stanza. Unfortunately, after making this change, all the events that had already been indexed and retrieved vanished. I'm looking for guidance on how to recover the missing events or if there’s any way to prevent this in the future when modifying the source type in inputs.conf. Any insights or suggestions on how to address this would be greatly appreciated! Thank you in advance for your help!
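For what it's worth, changing the sourcetype in inputs.conf does not delete already-indexed events; they remain stored under the old sourcetype name, so searches pinned to the new name simply miss them. A quick sketch to confirm (the index and sourcetype names here are placeholders):

```
index=your_index (sourcetype=old_sourcetype OR sourcetype=new_sourcetype)
| stats count earliest(_time) as first_event latest(_time) as last_event by sourcetype
```

If the older events show up under the old name, no recovery is needed; going forward, either keep the sourcetype stable or search both names during the transition.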
Hi. I recently noticed that the Splunk heavy forwarder has stopped receiving logs from network devices. We are using syslog over TLS, and the certificate has not expired yet. The rsyslog.conf file should be fine, since it received logs previously. What could be causing this?
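A place to start is the forwarder's own logs, since splunkd records TLS handshake failures even when the certificate itself is still valid (for example a CA-chain, cipher, or clock-skew problem). A sketch (host and component names may vary in your environment):

```
index=_internal sourcetype=splunkd host=your_heavy_forwarder log_level=ERROR (SSL OR TLS OR TcpInputProc)
| stats count by component
```

Testing the handshake from outside with `openssl s_client -connect <hf>:<port>` can also show whether the listener is up and which certificate it presents. (`your_heavy_forwarder` is a placeholder.)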
Hello, I am running the search `index=_introspection | dedup host | table host` and in the results I cannot see one indexer and one search head, while the other indexers and search heads are visible.
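One thing to note: `_introspection` only contains data from instances that actually generate and forward their introspection logs, so a missing host may simply not be shipping that index. A sketch to compare against what the indexers have seen overall (adjust the time range as needed):

```
| tstats count where index=_internal by host
```

If the missing indexer and search head appear in `_internal` but not in `_introspection`, check that their `introspection_generator_addon` inputs are enabled and that `_introspection` is included in the indexes they forward.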
Hello folks, I have a series of event results in the format shown below (the collapsed arrays are omitted here):

appDisplayName: foo
appId: foo0
appliedConditionalAccessPolicies: [
  { displayName: All Users Require MFA All Apps, enforcedGrantControls: [...], enforcedSessionControls: [...], id: foo1, result: success }
  { displayName: macOS Conditional Access Policy, enforcedGrantControls: [...], enforcedSessionControls: [...], id: foo2, result: success }
  { displayName: Global-Restrict, enforcedGrantControls: [...], enforcedSessionControls: [...], id: foo3, result: notApplied }
  { displayName: All_user_risk_policy, enforcedGrantControls: [...], enforcedSessionControls: [...], id: foo4, result: notApplied }
]

Is there a way to cycle through a specific event to extract the values while maintaining the field:value correlation, and then repeat that for one or more event blocks? Effectively it would look like this:

displayName: All Users Require MFA All Apps - id: foo1 - result: success
displayName: macOS Conditional Access Policy - id: foo2 - result: success
displayName: Global-Restrict - id: foo3 - result: notApplied
displayName: All_user_risk_policy - id: foo4 - result: notApplied

Thank you to all.
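If the events are valid JSON, one approach is to expand the policy array and re-extract per policy. A sketch (the index name is a placeholder):

```
index=your_index
| spath path=appliedConditionalAccessPolicies{} output=policy
| mvexpand policy
| spath input=policy
| table displayName id result
```

`mvexpand` produces one row per policy object, and the second `spath` keeps displayName, id, and result correlated within each row.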
I’ve inherited a fleet of about 150 Windows Servers, all configured identically — same Deployment Server, TAs, inputs.conf/outputs.conf, etc. Out of the 150, around 10-12 systems are sending most Windows logs as expected, except for Security logs (WinEventLog:Security). I’ve already tried the basics like rebooting and reinstalling the forwarder, with no luck. I’m leaning toward a possible permissions issue but am not sure where to start troubleshooting.
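One common cause is the forwarder service running as a low-privilege account without rights to the Security log; local SYSTEM can read it, but a domain or virtual service account needs membership in the local "Event Log Readers" group (or an equivalent channel ACL). A sketch of what the affected forwarders should have in inputs.conf, for comparison against a working host:

```
[WinEventLog://Security]
disabled = 0
```

On a broken host, splunkd.log entries mentioning the Security channel (e.g. access-denied errors from the Windows event log input) would support the permissions theory; comparing the forwarder service account between a working and a failing host is a quick first check.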
Where can I find the icons that I can use for a splunk architecture diagram?
Hi All, I have a Splunk alert with this search query:

index="dcn_b2b_use_case_analytics" sourcetype=lime_process_monitoring
| where BCD_AU_UP_01=0 OR BDC_BA_01=0
| dedup host
| eval failed_processes=mvappend( if(BCD_AU_UP_01=0, "BCD_AU_UP_01", NULL), if(BDC_BA_01=0, "BDC_BA_01", NULL) )
| eval failed_process_list=mvjoin(failed_processes, ", ")
| eval metricLabel="Labware - Services has been stopped in Server--Test Incident--Please Ignore"
| eval metricValue_part1="Hello Application Support team, The below service has been stopped in the server, Service name: "
| eval metricValue_part2=failed_process_list
| eval metricValue_part3=" Server name: "
| eval metricValue_part4=host
| eval metricValue_part5=" Please take the required action to resume the service. Thank you. Regards, Background Service Check Automation Bot"
| eval metricValue=metricValue_part1 + metricValue_part2 + metricValue_part3 + metricValue_part4 + metricValue_part5
| eval querypattern="default"
| eval assignmentgroup="SmartTech Team"
| eval business_service="SmartTech Business Service"
| eval serviceoffering="SmartTech service offering"
| eval Interface="CAB"
| eval urgency=3
| eval impact=3

(Please note: a process status of 0 means a failed process and 1 a successful process.)

ALERT CONFIG:
- Alert type: Scheduled
- Cron Expression: */7 * * * *
- Expires: 24 hours
- Trigger: Once
- Throttle: checked
- Suppress triggering for: 30 minutes
- When triggered - Alert Action: PTIX SNOWALERT (trigger incident in SNOW)

This should trigger only one incident containing the service names and the server name, but the alert is triggering three different tickets. Please help me correct the alert so it triggers a single ticket whenever it fires.
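A likely explanation: after `dedup host` the search returns one row per host, and if the alert action fires per result, three matching hosts produce three tickets. A sketch that collapses everything into a single row before the action runs (same matching logic, then aggregated):

```
index="dcn_b2b_use_case_analytics" sourcetype=lime_process_monitoring
| where BCD_AU_UP_01=0 OR BDC_BA_01=0
| dedup host
| eval failed_processes=mvappend(if(BCD_AU_UP_01=0, "BCD_AU_UP_01", null()), if(BDC_BA_01=0, "BDC_BA_01", null()))
| stats values(failed_processes) as failed_processes values(host) as hosts
| eval failed_process_list=mvjoin(failed_processes, ", "), host_list=mvjoin(hosts, ", ")
```

With a single result row, "Trigger Once" plus a per-result action can only create one ticket; the message evals would then reference `host_list` instead of `host`.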
Hello, I’ve been reviewing the documentation for configuring SSL/TLS on a Splunk forwarder, but I couldn’t find the specific steps for setting it up on a Windows machine. Would anyone be able to provide the procedure or a link to the relevant documentation? Best regards,
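In case it helps while you look for the official doc: on Windows the configuration files are the same as on Linux, only the paths differ. A sketch of the forwarder-side outputs.conf (server names, paths, and certificate files below are examples, not defaults):

```
# C:\Program Files\SplunkUniversalForwarder\etc\system\local\outputs.conf
[tcpout:primary_indexers]
server = idx1.example.com:9997
clientCert = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\client.pem
sslRootCAPath = C:\Program Files\SplunkUniversalForwarder\etc\auth\mycerts\ca.pem
sslVerifyServerCert = true
```

The receiving indexer needs a matching `[splunktcp-ssl]` input. The relevant manual section is the "Configure Splunk forwarding to use your own SSL certificates" chapter of Securing Splunk Enterprise (titles vary slightly by version).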
Hi, here is the data, produced with `| delta _time as dlt | eval dlt=abs(dlt) | table _time, state, dlt`:

"_time",state,dlt
"2025-03-21T13:25:33.000+0100","Störung",
"2025-03-21T13:21:46.000+0100",Verteilzeit,"227.000"
"2025-03-21T13:05:01.000+0100","Personal fehlt","1005.000"
"2025-03-21T11:23:35.000+0100","Produktion ON","6086.000"
"2025-03-21T11:23:19.000+0100",Wartung,"16.000"
"2025-03-21T11:21:41.000+0100","Störung","98.000"
"2025-03-21T11:20:04.000+0100","Produktion OFF","97.000"
"2025-03-21T11:19:57.000+0100","Produktion ON","7.000"
"2025-03-21T10:47:01.000+0100","Produktion OFF","1976.000"
"2025-03-21T10:46:55.000+0100","Produktion ON","6.000"
"2025-03-21T10:46:28.000+0100",Verteilzeit,"27.000"
"2025-03-21T10:46:21.000+0100",Verteilzeit,"7.000"

There are 7 different signals. Each state comes from the system as an impulse at a specific timestamp and represents the state of a workplace. The interval between these signals is the delta (dlt), i.e. the duration of the previous state. There is guaranteed to be no overlapping. I would like to visualise this duration as a bar chart on a timeline, e.g. the last 24h; see the example (duration.jpg). Each start of a color is in fact the timestamp of the state. Any ideas would help me a lot.
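One approximation that stays in plain SPL is to bucket the durations onto the timeline and stack them by state; it does not draw exact start/end bars, but with a small span it shows which state dominated each interval. A sketch (the index name is a placeholder, and it assumes the default descending sort shown in your table, where `dlt` is the duration of each state):

```
index=your_index
| delta _time as dlt
| eval dlt=abs(dlt)
| timechart span=15m sum(dlt) as seconds by state
```

Rendered as a stacked bar chart, each bin then shows the seconds attributed to each state; note that `delta` assigns each gap to one end of the interval, so the alignment at bin boundaries is approximate.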
Hi, from what I have read so far, Splunk forms can be used to fetch/filter data based on a user's requirements, where the data is already present in Splunk. However, I wish to insert data into a specific index in Splunk. Can this also be done using Splunk forms?
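Forms themselves only drive searches, so inserting data usually means handing the form's input to something that writes it, e.g. the HTTP Event Collector (HEC) via a custom command or script, or `| collect` into a summary index. A sketch of building the HEC request body; the index name, sourcetype, and URL are examples, not values from a real deployment:

```python
import json

def build_hec_event(index, sourcetype, event):
    """Return the JSON body HEC expects for a single event."""
    return json.dumps({"index": index, "sourcetype": sourcetype, "event": event})

body = build_hec_event("user_input_idx", "form:submission", {"user": "alice", "value": 42})
# POST this body to https://<splunk-host>:8088/services/collector/event
# with the header "Authorization: Splunk <hec-token>".
```

The hypothetical `build_hec_event` helper just shows the payload shape; the transport (requests, curl, a dashboard JS extension) is up to the deployment.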
Traffic events are not getting routed to nw_fortigate, and non-traffic events are not getting routed to os_linux. Can someone help?

props.conf:

[source::.../TUC-*/OOB/TUC-*(50M)*.log]
TRANSFORMS-routing = route_fortigate_traffic, route_nix_messages

transforms.conf:

[route_fortigate_traffic]
REGEX = (?i)traffic|session|firewall|deny|accept
DEST_KEY = _MetaData:Index
FORMAT = nw_fortigate

[route_nix_messages]
REGEX = .*
DEST_KEY = _MetaData:Index
FORMAT = os_linux
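One thing worth checking: transforms in a `TRANSFORMS-` list run in order, and each one that matches overwrites `_MetaData:Index`. Since `route_nix_messages` has `REGEX = .*`, it matches every event and runs last, so it overwrites the index for traffic events too. A sketch of the fix is simply to reverse the order (catch-all first, specific rule last):

```
# props.conf (sketch)
[source::.../TUC-*/OOB/TUC-*(50M)*.log]
TRANSFORMS-routing = route_nix_messages, route_fortigate_traffic
```

If neither index is receiving these events at all, the stanza itself may not be matching: parentheses are literal in `[source::...]` patterns, so `(50M)` must appear in the source path exactly.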
Hi Experts, I have the following data:

{
  "TIMESTAMP": 1742677200,
  "SYSINFO": "{\"number_of_notconnect_interfaces\":0,\"hostname\":\"test\",\"number_of_transceivers\":{\"10G-LR\":10,\"100G-CWDM4\":20},\"number_of_bfd_peers\":10,\"number_of_bgp_peers\":10,\"number_of_disabled_interfaces\":10,\"number_of_subinterfaces\":{\"Ethernet1\":10,\"Ethernet2\":20},\"number_of_up_interfaces\":1}"
}

I would like to create the table below, but Ethernet1 and Ethernet2 are dynamic keys (they could just as well be Ethernet3 or Ethernet4):

Ethernet1  10
Ethernet2  20

Could someone tell me how to write a query to achieve this?
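Since SYSINFO is a JSON string inside the JSON event, one approach is two `spath` passes and then `transpose` to turn the dynamic keys into rows. A sketch (the index name is a placeholder):

```
index=your_index
| spath path=SYSINFO output=sysinfo
| spath input=sysinfo
| fields number_of_subinterfaces.*
| transpose
| rename column as interface, "row 1" as count
| eval interface=replace(interface, "number_of_subinterfaces\.", "")
```

Because `transpose` works across the whole result set, this sketch assumes a single event; with multiple events you would aggregate or expand first.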
Hi, I have a dashboard with a Data Entity dropdown. I want to add an "ALL" choice: if I select ALL and hit the Submit button, it should show results for all data APIs, i.e. "/aws/lambda/api-data-$stageToken$-*", in queries like:

<query>index=$indexToken$ source IN ("/aws/lambda/api-data-$stageToken$-$entityTokenFirst$") msg="data:invoke"

<form version="1.1" theme="dark" submitButton="true">
  <label>Stats</label>
  <fieldset>
    <input type="dropdown" token="indexToken1" searchWhenChanged="false">
      <label>Environment</label>
      <choice value="prod,prod">PROD</choice>
      <choice value="np,test">TEST</choice>
      <change>
        <eval token="stageToken">mvindex(split($value$,","),1)</eval>
        <eval token="indexToken">mvindex(split($value$,","),0)</eval>
      </change>
      <default>np,test</default>
    </input>
    <input type="dropdown" token="entityToken" searchWhenChanged="false">
      <label>Data Entity</label>
      <choice value="name,0">name</choice>
      <choice value="targetProduct,*-test-target">Target</choice>
      <choice value="product,*-test-product">Product</choice>
      <choice value="address,0">address</choice>
      <change>
        <!-- Split the value and set tokens for both parts -->
        <set token="entityLabel">$label$</set>
        <eval token="searchName">mvindex(split($value$, ","),1)</eval>
        <eval token="entityTokenFirst">mvindex(split($value$, ","),0)</eval>
      </change>
    </input>
    <input type="time" token="timeToken" searchWhenChanged="false">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Distinct Consumer Count</title>
      <single>
        <search>
          <query>index="np" source="**" | spath path=$stageToken$.nsp3s{} output=nsp3s | mvexpand nsp3s | spath input=nsp3s path=Name output=Name | spath input=nsp3s path=DistinctAdminUserCount output=DistinctAdminUserCount | search Name=$searchName$ | sort -_time | head 1 | appendpipe [ stats count | eval Name=if(count==0 OR isnull("$searchName$") OR "$searchName$"=="", "No NSP", "$searchName$") | fields DistinctAdminUserCount ]</query>
          <earliest>$timeToken.earliest$</earliest>
          <latest>$timeToken.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
    <panel>
      <title>Event Processed</title>
      <single>
        <search>
          <query>index="$indexToken$" source="publish-$entityTokenFirst$-$stageToken$-nsp" * Published to NSP3 objectType* | stats count</query>
          <earliest>$timeToken.earliest$</earliest>
          <latest>$timeToken.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>Total Request :</title>
      <single>
        <search>
          <query>index=$indexToken$ source IN ("/aws/lambda/api-data-$stageToken$-$entityTokenFirst$") msg="data:invoke" | stats count</query>
          <earliest>$timeToken.earliest$</earliest>
          <latest>$timeToken.latest$</latest>
          <refresh>60m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="colorMode">none</option>
        <option name="drilldown">none</option>
        <option name="height">317</option>
        <option name="rangeColors">["0xcba700","0xdc4e41"]</option>
        <option name="rangeValues">[200]</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.size">large</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
</form>
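One way to sketch the ALL option is an extra choice whose segments are wildcards, so `$entityTokenFirst$` expands to `*` and the lambda source filter becomes `/aws/lambda/api-data-<stage>-*`:

```
<input type="dropdown" token="entityToken" searchWhenChanged="false">
  <label>Data Entity</label>
  <choice value="*,*">ALL</choice>
  <choice value="name,0">name</choice>
  <choice value="targetProduct,*-test-target">Target</choice>
  <choice value="product,*-test-product">Product</choice>
  <choice value="address,0">address</choice>
  <change>
    <set token="entityLabel">$label$</set>
    <eval token="searchName">mvindex(split($value$, ","),1)</eval>
    <eval token="entityTokenFirst">mvindex(split($value$, ","),0)</eval>
  </change>
</input>
```

Note that with ALL selected, `$searchName$` also becomes `*`, so panels that filter on `Name=$searchName$` need to tolerate a wildcard there.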
Hi, the Submit button is not working.

1. The first time I load the dashboard, I select a Data Entity from the dropdown and hit the Submit button. It works and fetches the result for the selected Data Entity.
2. The second time, I select another entity from the dropdown without hitting the Submit button, yet the search starts running for the new selection and returns results. Help needed to fix this.
3. In the choice values "*-test-target" and "*-test-product", I want "test" or "prod" to be auto-populated based on the environment ($stageToken$):

      <label>Data Entity</label>
      <choice value="name,0">name</choice>
      <choice value="targetProduct,*-test-target">Target</choice>
      <choice value="product,*-test-product">Product</choice>

<form version="1.1" theme="dark" submitButton="true">
  <label>Stats</label>
  <fieldset>
    <input type="dropdown" token="indexToken1" searchWhenChanged="false">
      <label>Environment</label>
      <choice value="prod,prod">PROD</choice>
      <choice value="np,test">TEST</choice>
      <change>
        <eval token="stageToken">mvindex(split($value$,","),1)</eval>
        <eval token="indexToken">mvindex(split($value$,","),0)</eval>
      </change>
      <default>np,test</default>
    </input>
    <input type="dropdown" token="entityToken" searchWhenChanged="false">
      <label>Data Entity</label>
      <choice value="name,0">name</choice>
      <choice value="targetProduct,*-test-target">Target</choice>
      <choice value="product,*-test-product">Product</choice>
      <choice value="address,0">address</choice>
      <change>
        <!-- Split the value and set tokens for both parts -->
        <set token="entityLabel">$label$</set>
        <eval token="searchName">mvindex(split($value$, ","),1)</eval>
        <eval token="entityTokenFirst">mvindex(split($value$, ","),0)</eval>
      </change>
    </input>
    <input type="time" token="timeToken" searchWhenChanged="false">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Distinct Consumer Count</title>
      <single>
        <search>
          <query>index="np" source="**" | spath
path=$stageToken$.nsp3s{} output=nsp3s | mvexpand nsp3s | spath input=nsp3s path=Name output=Name | spath input=nsp3s path=DistinctAdminUserCount output=DistinctAdminUserCount | search Name=$searchName$ | sort -_time | head 1 | appendpipe [ stats count | eval Name=if(count==0 OR isnull("$searchName$") OR "$searchName$"=="", "No NSP", "$searchName$") | fields DistinctAdminUserCount ]</query>
          <earliest>$timeToken.earliest$</earliest>
          <latest>$timeToken.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
    <panel>
      <title>Event Processed</title>
      <single>
        <search>
          <query>index="$indexToken$" source="publish-$entityTokenFirst$-$stageToken$-nsp" * Published to NSP3 objectType* | stats count</query>
          <earliest>$timeToken.earliest$</earliest>
          <latest>$timeToken.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>Total Request :</title>
      <single>
        <search>
          <query>index=$indexToken$ source IN ("/aws/lambda/api-data-$stageToken$-$entityTokenFirst$") msg="data:invoke" | stats count</query>
          <earliest>$timeToken.earliest$</earliest>
          <latest>$timeToken.latest$</latest>
          <refresh>60m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="colorMode">none</option>
        <option name="drilldown">none</option>
        <option name="height">317</option>
        <option name="rangeColors">["0xcba700","0xdc4e41"]</option>
        <option name="rangeValues">[200]</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.size">large</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
</form>
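On the Submit problem: tokens set inside `<change>` handlers (`stageToken`, `indexToken`, `searchName`, `entityTokenFirst`) are applied the moment the dropdown changes and are not gated by the Submit button, which is why panels re-run on the second selection. One workaround sketch is to reference only the dropdown's own token in the query and derive the parts in SPL (shown for the Total Request panel; `np` stands in for your index filter, and this is an illustration rather than a drop-in fix):

```
index=np msg="data:invoke"
| eval stage=mvindex(split("$indexToken1$", ","), 1),
       entity=mvindex(split("$entityToken$", ","), 0)
| where like(source, "/aws/lambda/api-data-" . stage . "-" . entity . "%")
| stats count
```

For point 3, the same trick applies: since the stage is derivable from `$indexToken1$`, computing `*-<stage>-target` in SPL avoids hardcoding "test" in the choice values.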
Hi everyone, I am trying to configure Kaspersky Security Center to forward logs to Splunk using syslog over TLS. However, I need some guidance on the following points:

1. How can I configure Kaspersky Security Center to send logs via syslog over TLS?
2. What are the steps to generate the necessary certificates for this setup?
3. Which certificate formats or file extensions does Kaspersky Security Center accept for TLS encryption?
4. Are there any specific configurations required on the Splunk side to properly receive and parse these logs over TLS?

I would appreciate any insights, best practices, or documentation references that could help. Thank you in advance!
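On the Splunk side (point 4), receiving syslog over TLS directly is a `tcp-ssl` input plus an `[SSL]` stanza pointing at the server certificate. A sketch (the port, path, and sourcetype are examples; Splunk reads PEM-format certificates):

```
# inputs.conf on the receiving Splunk instance
[tcp-ssl://6514]
sourcetype = kaspersky:syslog

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
requireClientCert = false
```

Many deployments instead terminate TLS in rsyslog/syslog-ng and forward to Splunk, which keeps syslog handling (timestamps, line breaking) off the indexing tier. Either way, Kaspersky will need the CA certificate that signed the receiving server's certificate; for the formats Kaspersky accepts, its own documentation is the authority.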
Hello, please send me a link to documentation listing the internet endpoints (URLs, ports) that Splunk SIEM needs access to in order to function properly, download updates and apps, and anything else required for its normal operation.