Hello, I have a list of events structured with the following fields: guid (unique ID), property (name of a property), and value (the value linked to that property name). There are 4 specific properties that I receive separately on different events, and the guid is the key to consolidate the property/value information by guid.

I run a search:

search xxx | table guid, property, value

and I am able to get all the events in a table this way:

guid  property  value
1     start     1
1     end       2
1     duration  1
1     status    OK
2     start     1
2     end       3
2     duration  2
2     status    KO

I tried to transpose the result like this:

search xxx | table guid, property, value | transpose 0 header_field="property"

to have a result like this:

guid  start  end  duration  status
1     1      2    1         OK
2     1      3    2         KO

but the result is not good. Is there a way to easily search and display this kind of structured event in a readable table? A second need: how do I simply display the status and duration by guid? Thanks for your help. Regards, Laurent
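A common way to pivot property/value pairs into one row per guid is chart (or stats plus xyseries) rather than transpose, since transpose pivots the entire result set instead of grouping by guid. A minimal sketch, assuming the fields are named exactly guid, property, and value:

search xxx
| chart values(value) over guid by property

This should yield one row per guid with start, end, duration, and status as columns. For the second need, keeping only some of those columns afterwards, e.g. | table guid, duration, status, displays just the status and duration by guid.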
We currently have a report that is emailed on a nightly basis. It sends an email with an attachment that includes an XLS and a PDF that contains the XLS. The PDF exports as expected, but when Splunk emails the PDF, it says "No Matching Events found". When we send the XLS as part of the communication, it contains the contents of the report as expected. It was working fine up until a few weeks back; then the PDF stopped producing results while the XLS continues to function as expected. I have searched the logs and have found no errors that would prevent the report from being generated, and I am not sure where to look at this point to determine why the PDF is not producing results. Splunk Cloud Version: 9.1.2308.203 build d153a0fad666
Splunk is pleased to announce the latest enhancements to Splunk Edge Processor.

HEC Receiver authorization of HTTP requests: Edge Processor administrators can now configure HTTP Event Collector (HEC) tokens to authenticate HTTP requests for data coming from a HEC source. This enhances the overall security of the HEC data path, as it prevents unwanted data from coming into Edge Processor pipelines.

Point-and-click UI for lookups: Edge Processor lookups allow configuring pipelines to enrich event data using CSV and KV store lookups defined on the search head linked to Edge Processor. Through the UI, users can now seamlessly build the lookup command without having to manually write an SPL2 statement, supporting a wide array of use cases such as detecting indicators of compromise, resolving host IPs, and more.

Point-and-click UI for cryptographic functions: Edge Processor now supports a seamless GUI-based building experience for hashing functions (SHA1, SHA256, SHA512, and MD5), so manual authoring of SPL2 hashing statements is no longer required. These functions support use cases such as masking sensitive information and monitoring file/data integrity by hashing it. Through the interactive point-and-click interface, users can now rebuild the _raw event and send through _raw with the hashed data, without manually typing the commands in the pipeline definition.

To learn more about Edge Processor's HEC, lookup, and cryptographic capabilities (and more!), check out Splunk Docs.
Hi All, I am using a dependent dropdown in my Splunk dashboard, but the second dropdown is not working. Could you please tell me what the exact error is? A screenshot is attached, and my inputlookup contains the values below.

<input type="dropdown" token="BankApp" searchWhenChanged="true" depends="$BankDropDown$">
  <label>ApplicationName</label>
  <choice value="*">All</choice>
  <search>
    <query>| inputlookup BankIntegration.csv | dedup APPLICATION_NAME | sort APPLICATION_NAME | table APPLICATION_NAME</query>
  </search>
  <fieldForLabel>ApplicationName</fieldForLabel>
  <fieldForValue>APPLICATION_NAME</fieldForValue>
  <default>*</default>
  <prefix>applicationName="</prefix>
  <suffix>"</suffix>
</input>

<input type="dropdown" token="interface" searchWhenChanged="true" depends="$BankDropDown$">
  <label>InterfaceName</label>
  <choice value="*">All</choice>
  <search>
    <query>| inputlookup BankIntegration.csv | search $BankApp$ | sort INTERFACE_NAME | table INTERFACE_NAME</query>
  </search>
  <fieldForLabel>InterfaceName</fieldForLabel>
  <fieldForValue>INTERFACE_NAME</fieldForValue>
  <default>*</default>
  <prefix>InterfaceName="</prefix>
  <suffix>"</suffix>
</input>

APPLICATION_NAME          INTERFACE_NAME
p-oracle-fin-processor-2  HSBC_NA_AP_ACH
p-oracle-fin-processor    USBANK_AP_ACH
p-oracle-fin-processor-2  AMEX_AP_GL1025_PCARD_CCTRANS
p-oracle-api              APEX_VENDORPORTAL_HR_APO_EMPLOYEE_OUT
p-oracle-fin-processor-2  AVALARA_TAX_VAT_REPORTING
p-oracle-fin-processor-2  BOA_KING_KYRIBA_CE_BANKSTMTS_BFA_GLOBAL
p-oracle-fin-processor-2  HSBC_APAC_CE_BANKSTMTS
p-oracle-fin-processor-2  HSBC_NA_CE_BANKSTMTS
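Two things stand out as possible causes, offered as a sketch rather than a confirmed diagnosis: both inputs depend on $BankDropDown$, a token that nothing shown here sets, so the inputs may never activate; and the second dropdown filters the lookup with $BankApp$, which expands to applicationName="..." even though the lookup's field is APPLICATION_NAME. A minimal corrected version of the first input, assuming the token should carry just the application name (apply the applicationName= prefix wherever the token is consumed instead):

<input type="dropdown" token="BankApp" searchWhenChanged="true">
  <label>ApplicationName</label>
  <choice value="*">All</choice>
  <search>
    <query>| inputlookup BankIntegration.csv | dedup APPLICATION_NAME | sort APPLICATION_NAME | table APPLICATION_NAME</query>
  </search>
  <fieldForLabel>APPLICATION_NAME</fieldForLabel>
  <fieldForValue>APPLICATION_NAME</fieldForValue>
  <default>*</default>
</input>

The second dropdown's query can then filter on the lookup's real field name with | search APPLICATION_NAME=$BankApp|s$ (the |s filter quotes the value safely). Note also that fieldForLabel should name a field the query actually returns (APPLICATION_NAME rather than ApplicationName) for the labels to render.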
My Splunk query is able to get the required results using the query below. After running the query, I get NULL values in one of the columns. As per the business requirement, I need to replace the NULL values with blank or some other value in the column named acd2.

index=application1 "ProcessWriteBackServiceImpl" "userList" sourcetype="intradiem:iex:ewfm" source="E:\app1\\appsec\\appsec1\\test.log"
| rex field=_raw "^(?:[^\[\n]*\[){2}(?P<actiontype>\w+)[^=\n]*=\[(?P<empid>\d+)"
| eval empid = substr("000000", 0, max(9-len(empid), 0)) . empid
| search actiontype="*" empid="*"
| stats count by actiontype, empid, _time
| table actiontype, empid, _time
| join type=inner empid
    [search index="*" earliest=-24hr latest=now source="D:\\app2\\app_data.csv"
    | rex field=_raw "^(?P<empid>[^,]+),(?P<msid>\w+),(?P<muid>[^,]+),(?P<muname>[^,]+),(?P<acd>\d+),(?P<acd2>\w+),(?P<lastname>[^,]+),(?P<firstname>\w+)"
    | search empid="*" msid="*" muid="*" muname="*" acd="*" acd2="*" lastname="*" firstname="*"]
| eval Time = strftime(_time, "%Y-%d-%m %H:%M:%S")
| fields - _time
| table Time, actiontype, empid, muid, muname, acd, acd2, lastname, firstname

Output results:

   Time                 actiontype  empid         muid  muname  acd  acd2  lastname     firstname
1  2024-19-04 08:10:18  Break       0000000       3302  test    55   NULL  sample name  sample name
2  2024-19-04 08:14:41  Break       0000000       6140  test    55   NULL  sample name  sample name
3  2024-19-04 08:35:07  Break       00000000000   1317  test    55   NULL  sample name  sample name
4  2024-19-04 08:25:41  Break       000000000     1106  test    55   NULL  sample name  sample name
5  2024-19-04 07:25:19  0           000000000000  6535  test    55   96    sample name  sample name
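Since an inner join drops non-matching rows rather than producing a "NULL" string, the NULL here is presumably a literal string value parsed out of the CSV. If so, a conditional eval appended to the end of the search handles it, with fillnull as a safety net for events where acd2 is genuinely missing. A minimal sketch:

| eval acd2 = if(acd2="NULL", "", acd2)
| fillnull value="" acd2

Replace the empty string with whatever placeholder value the business requirement calls for.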
Is the Splunk ODBC "deployment" compatible with Splunk Cloud? For example, following this guide, would it be possible to set up a cloud instance instead of a local/Enterprise URL?
Hi, I'm working in a Splunk team.

Environment:
3 SH
10 IDX (1 of 10 IDX overused)
Replication factor 3
Search factor 3

Could it happen that searches are continuously run on only certain indexers? I've been constantly monitoring them with top and ps -ef, and I'm seeing a lot of search operations on one particular indexer. Its CPU usage is roughly double the others', and it's been going on for months. Can this be considered normal?
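Sustained load on one indexer often traces back to uneven data distribution, since searches run where the buckets live. One quick check, sketched here with tstats (narrow the index filter as needed):

| tstats count where index=* by splunk_server

If one indexer holds a disproportionate share of events, it may be worth reviewing forwarder load balancing (for example autoLBFrequency or forceTimebasedAutoLB in outputs.conf) or running a data rebalance from the cluster manager.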
I would like to add a column called Management to my table. The Management value is not part of the event data; it is something I would like to assign based on the value of Applications. Any help would be appreciated.

Management  Applications
In          IIT
In          ALP
In          MAL
In          HST
Out         OCC
In          ALY
In          GSS
In          HHS
In          ISD
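One way to assign the value is an eval with case, sketched here on the assumption that the existing field is named Applications and that the mapping is exactly the one listed above:

| eval Management = case(Applications=="OCC", "Out",
    in(Applications, "IIT", "ALP", "MAL", "HST", "ALY", "GSS", "HHS", "ISD"), "In",
    true(), "Unknown")

If the mapping grows or changes often, a lookup file with Applications and Management columns plus | lookup your_lookup Applications OUTPUT Management (lookup name hypothetical) scales better than a hard-coded case.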
Hey there, kindly need support on how to determine the received log SIZE for a specific host, preferably through the GUI. Hint: we are working in a distributed environment with our own license master instance. Thanks in advance.
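Through the GUI, the license master's License Usage Report View (Settings > Licensing) and the Monitoring Console both break usage down, but if neither slices it by host the way you need, a search over the license usage log on the license master is a common fallback. A minimal sketch, assuming you can search _internal there:

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by h
| eval GB = round(bytes/1024/1024/1024, 3)
| sort - GB

In license_usage.log, h is the host and b is the measured size in bytes; add h=<your_host> to the first line to focus on a single host.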
Hello, Splunk Community! We are beyond thrilled to announce our newest group of SplunkTrust members!  The goal of the SplunkTrust™ program is not to recognize users in the room who know the most about Splunk. Rather, the SplunkTrust program was created to acknowledge and recognize those who go above and beyond to help their fellow Splunk practitioners and who have truly helped shape this wonderful, vibrant community.  This is an honor that represents a recognition by peers that the person embodies the hallmarks of the program — being helpful, technical, knowledgeable, and readily available. And it comes with perks, too! SplunkTrust members receive the coveted Splunk Fez with membership pin, Splunk EDU credits, and a free pass to .conf24. Splunk inducts the cohort of SplunkTrust members annually at .conf, which will be happening June 11 - 14 in Las Vegas this year. We’ll be welcoming four new SplunkTrust members, and welcoming back 48 returning members from our previous SplunkTrust selection. And so, without further ado, we are so excited to announce our .conf24 SplunkTrust cohort: New 2024 SplunkTrust Members Chris Barrett Bryan Beaulieu Sekar Sundaram Trayton White Returning SplunkTrust Members Brett Adams Ryan Adler Gareth Anderson Michael Bentley Steven Bochniewicz Mika Borner Antony Bowesman Becky Burwell Suat Celikok Siddhartha Chakraborty Mary Cordova Aleem Cummins Giuseppe Cusello Johannes Effland Mhike Funderburk Sanjay Reddy Gaddam Björn Hansen Vatsal Jagani Chris Kaye Steve Koelpin Tom Kopchak Sebastian Kramp Mariusz Kruk Charles Kuykendall Yuan Liu Rich Mahlerwein Mark McCullough Nick Mealy Madison Moss Martin Müller Cary Petterborg James Sevener David Shpritz Diogo Silva Kyle Smith Ismo Soutamo Daniel Spavin Keara Spoor Balaji Thambisetty Matt Uebel Kamlesh Vaghela Dominique Vocat Duane Waddle Colby Williams Tom Wise Yutaka Yamada Srikanth Yarlagadda Chris Younger And a huge congrats (and thank you!) to our incredible honorary SplunkTrust members as well! The Honorary Trust is composed of Splunk employees who embody the same wonderful, pioneering spirit that we look for in all SplunkTrustees. New 2024 Honorary SplunkTrust Members James Hodgkinson Returning Honorary SplunkTrust Members Karandeep “Deep” Bains Camille Balli John Billings Ari Donio Dustin Eastman Rich Galloway Tedd Hellmann David Hourani Charlie Huggard Kate Lawrence-Gupta Harshil Marvania Caroline McGee Clara Merriman Nadine Miller Matthew Modestino Brian Osburn David Paper Chris Perkins Nate Plamondon Michael Simko Lou Stella Jesse Trucks David Twersky Russell Uman John "Okie" Welch Tom West Congratulations again to everyone, and we can’t wait to see you at .conf in June! The Splunk Community Team
I'm looking to turn off the INFO messages in the server.log file for my on-prem controller. Any pointers to the file that will allow me to set the different logging levels would be very much appreciated.
We are excited to announce the 2024 cohort of the Splunk MVP program. Splunk MVPs are passionate members of the Splunk community that have shown consistent participation in at least one area of our community programs. The Splunk MVP program, a complement to our longstanding SplunkTrust program, is one way the community team is able to recognize and celebrate our star contributors. The Splunk community is built upon the efforts of its members and the contributions they make to help their peers find success. We are incredibly thankful for your contributions! How do I become a Splunk MVP? Do you love answering questions about Splunk SOAR in Splunk Answers? That’s one way! Driving fabulous engagement as a leader of your Splunk User Group? That’s another way! Rockin’ out the Slack contributions? Chances are, you’re already well on your way! Splunk MVP selection focuses on community contributions, prioritizing engagement and support, in whatever areas you might be most passionate about.  Introducing our 2024 cohort of Splunk MVPs: First and foremost, let us say, THANK YOU! Our programs are built around your knowledge of Splunk, domain expertise, and passion to help your peers find success. And without further ado, our 2024 MVPs are… Uday Agrawal Marc Andersen Pedro Borges Steve Baker Johnny Blizzard Mark Cooke Marius Cuanda Eric Favreau Andrew Gerber Alain Giammarinaro Suman Gajavelly Lakshmanan Ganesan Adam Gold Taruchit Goyal Martin Hettervik Jason Hotchkiss Dal Jeanis Aaron Johnsen Rohit Joshi Nancy Kafer Chandra Sekhar Kolla Mike Langhorst Magnus Lord Alex Lu Rakesh Luhar Jo Øverland Guilhem Marchard Andrew McManus Manjunath Meti Phil Meyerson Renjith Nair Deyan Petrov Michael Pitcher Chris Risley Jean-Philippe Rosse David Rutstein Ryan Plas Gregory Rivas Rafael Santos Nadine Shillingford Matt Snyder Paul Schultze Trevor Scroggins Raja Sha Meet Shah Ayush Sharma Young So Somesh Soni Adam Denham Smith George Starcher Brandon Sternfield Davin Studer Erling Teigen Edoardo Vicendone Michael Uschmann Congratulations once again to the 2024 cohort of Splunk MVPs, and we hope to see some of you at .conf in June! The Splunk Community Team
Hi, I'm currently ingesting CSV files into Splunk. One of the fields records the actual event timestamp in the format YYYYmmddHHMMSS (e.g. 20240418142025). I need to format this field's value in a way that Splunk will understand the data (e.g. date, hour, minutes, seconds, etc.). Once this formatting is complete, I need to group these timestamps/events per second (e.g. bucket span=1s Event_Time). Note that Event_Time here is the formatted data from the original event timestamp field. So far, I've tried this:

index=test1 sourcetype=test2
| eval Event_Time=strftime(strptime(SUBMIT_TIME,"%Y%m%d%H%M%S"), "%m/%d/%y %H:%M:%S")
| table Event_Time

The command above gives me decent output such as 04/18/24 14:20:25. But when I try to group values of Event_Time using "bucket span=1s Event_Time", it does not do anything. Note that "bucket span=1s _time" works, since that is Splunk's default time field. I'd appreciate any help to make this formatting work for post-processing Event_Time. Thank you in advance.
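bucket span=1s operates on numeric epoch values, not on display strings, which would explain why it does nothing once Event_Time has been through strftime. A minimal sketch that keeps the field numeric until after bucketing, assuming the raw field is SUBMIT_TIME:

index=test1 sourcetype=test2
| eval Event_Time = strptime(SUBMIT_TIME, "%Y%m%d%H%M%S")
| bucket span=1s Event_Time
| stats count by Event_Time
| eval Event_Time = strftime(Event_Time, "%m/%d/%y %H:%M:%S")

strptime produces the epoch number that bucket needs; the final strftime is applied only for display after the grouping is done.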
I am struggling to find a post with my answer because the naming for Splunk Enterprise and Enterprise Security is so similar, and I am only seeing results for ES. I want to find a way to add threat intelligence feeds into my Splunk Enterprise environment so my organization can eventually move off of the other SIEM we have been using in tandem with Splunk. Is this possible with Splunk Enterprise? I know ES has the capability, but we are strictly on-prem at the moment and I do not see us moving to it anytime soon. Any suggestions? Has anyone set these up on-prem?
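Without ES, a common do-it-yourself pattern is to land each feed in a lookup (via a scripted or modular input, or one of the threat-intel add-ons on Splunkbase) and match it against events in a scheduled search. A minimal sketch, where threat_intel_iocs.csv and its ioc field are hypothetical placeholders for whatever your feed produces:

index=firewall
| lookup threat_intel_iocs.csv ioc AS dest_ip OUTPUT ioc AS matched_ioc
| where isnotnull(matched_ioc)
| stats count by src_ip, dest_ip, matched_ioc

The feed ingestion and refresh piece is the part you have to build or install separately; the matching itself is plain Splunk Enterprise.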
Installing and Configuring the Cisco AppDynamics Smart Agent and Machine Agent on Ubuntu Linux

Download the Smart Agent:
Go to download.appdynamics.com, click on the 'Type' dropdown, and find "AppDynamics Smart Agent for Linux ZIP". You can also curl the download to your Linux box.

The AppDynamics Smart Agent requires pip3 to be present, and the appdsmartagent folder is where we will install the Smart Agent, so if you don't have them, please run the below:

mkdir /opt/appdynamics/appdsmartagent
sudo apt update
sudo apt install -y python3-pip
cd /opt/appdynamics/appdsmartagent

Once you have this set up, curl the ZIP artifact:

curl -L -O -H "Authorization: Bearer xxxxx.xxxx.xxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxx;" "https://download.appdynamics.com/download/prox/download-file/appdsmartagent/24.2.0.1472/appdsmartagent_64_linux_24.2.0.1472.zip"

Unzip the content, in this case:

unzip appdsmartagent_64_linux_24.2.0.1472.zip

Run the install script placed in the appdsmartagent folder:

./install-script.sh

Add the configuration inside /opt/appdynamics/appdsmartagent/config.ini:

vi /opt/appdynamics/appdsmartagent/config.ini

You are required to configure Smart Agents to register with the Controller. Edit the /opt/appdynamics/appdsmartagent/config.ini file for the required Smart Agent configuration. Ensure that you update the following parameters:

ControllerURL: The URL of the Controller on which you want to establish the connection with the Smart Agent.
ControllerPort: The Controller port.
FMServicePort: The port on which the Smart Agent connects to the FM service (Agent Management). It is 8090 for an on-premises Controller and 443 for a SaaS Controller.
AccountAccessKey: The account access key on the Controller.
AccountName: The account name on the Controller to which the Smart Agent will report.

An example of the above:

ControllerURL = https://support-controller.saas.appdynamics.com
ControllerPort = 443
FMServicePort = 443
AccountAccessKey = abcd-ahsasasj-asbasas
AccountName = controllerces

Once done, start the Smart Agent with the below:

systemctl start smartagent.service

You can check the status.

The first part is done. Now let's install a Machine Agent. Once your Smart Agent is started, please go to controller.com/controller/#/apm/agent-management-install-agents. On the Install Agents page, where it says "Select Agent Attributes", select 'Machine'. Under 'Select where to Deploy Agents', add the host where you installed the Smart Agent and move it to the left. Click 'Apply' and then click 'Done'.

Once this is done, you can install it with the DEFAULT config, or you can set custom agent attributes. If you wish to pass more information, you can get key: value examples from the Cisco AppDynamics Docs: "Ansible Configuration for Machine Agent". For example:

controller_account_access_key: "123key"

NOTE: Your controller-host, port, access-key, and accountName are already passed. You can select SIM (Server Visibility) and SSL enabled based on your needs. I am marking JRE Support enabled as well.

Ansible Configuration for Machine Agent: In Linux, the Ansible role starts and runs the Machine Agent as a service during installation, upgrade, and…

Once done, the agent will take 10-15 minutes to show up on your Controller. It will be automatically installed and be in a running state. Logs can be located in the /opt/appdynamics/machine-agent folder.
Hi, I have installed the Cisco Networks app and add-on. I have a lab data file with many events loaded into Splunk. All the data can be seen from the search interface, but the app shows no results. Is it possible to use the lab data with Cisco Networks? Should I add some configuration in order for it to work?
Thanks in advance. I am trying to fetch the application name and interface details from an input lookup and match them in my Splunk query, but I am getting the error below.

Error in 'search' command: Unable to parse the search: Comparator '=' has an invalid term on the left hand side: applicationName=applicationName.

<input type="dropdown" token="BankApp" searchWhenChanged="true" depends="$BankDropDown$">
  <label>ApplicationName</label>
  <choice value="*">All</choice>
  <search>
    <query>| inputlookup BankIntegration.csv | dedup APPLICATION_NAME | sort APPLICATION_NAME | table APPLICATION_NAME</query>
  </search>
  <fieldForLabel>ApplicationName</fieldForLabel>
  <fieldForValue>APPLICATION_NAME</fieldForValue>
  <default>*</default>
  <prefix>applicationName="</prefix>
  <suffix>"</suffix>
</input>

<input type="dropdown" token="interface" searchWhenChanged="true" depends="$BankDropDown$">
  <label>InterfaceName</label>
  <choice value="*">All</choice>
  <search>
    <query>| inputlookup BankIntegration.csv | search $BankApp$ | sort INTERFACE_NAME | table INTERFACE_NAME</query>
  </search>
  <fieldForLabel>InterfaceName</fieldForLabel>
  <fieldForValue>INTERFACE_NAME</fieldForValue>
  <default>*</default>
  <prefix>InterfaceName="</prefix>
  <suffix>"</suffix>
</input>
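The error reads like a token-expansion problem: with the prefix and suffix baked into the BankApp input, $BankApp$ expands to applicationName="...", and dropping that into | search in the second dropdown's query produces a malformed term once the quoting collides. One possible fix, sketched under the assumption that the token only needs to carry the raw value: remove the prefix/suffix from the BankApp input and quote the token at the point of use:

| inputlookup BankIntegration.csv
| search APPLICATION_NAME=$BankApp|s$
| dedup INTERFACE_NAME
| sort INTERFACE_NAME
| table INTERFACE_NAME

The |s token filter wraps the value in quotes safely, and filtering on APPLICATION_NAME matches the field that actually exists in the lookup. Panels that need the applicationName="..." form can apply the prefix themselves, e.g. applicationName=$BankApp|s$.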
Hiya, I'm trying to use the Splunk REST API to update macros that I've recently had to move to live under a different app that isn't the default `search` app. Previously, when the macro lived in the `search` app, I was able to make a POST request to

/servicesNS/<account>/search/admin/macros/<macroName>

And this worked:

elif search_or_macro == 'macros':
    url = '<ROOT>/servicesNS/<ACCOUNT>/search/admin/macros/{}'.format(macro_name)
    res = requests.post(url, headers=headers, data={'definition': r'{}'.format(macro_definition)})

However, once I moved the macros to live under a new app, let's call it `my_new_app`, POST requests no longer work to update the macro. This is what I have currently:

elif search_or_macro == 'macros':
    url = '<ROOT>/servicesNS/nobody/my_new_app/admin/macros/{}'.format(macro_name)
    res = requests.post(url, headers=headers, data={'definition': r'{}'.format(macro_definition)})

I have tried replacing `nobody` with:

- admin
- the account that owns the macro

However, neither of these works. I used the following Splunk command to verify that the endpoint does seem to exist:

| rest /servicesNS/<ACCOUNT>/my_new_app/admin/macros/<MACRO NAME> | search author=<ACCOUNT>

And when I run that I get the following `id`:

https://127.0.0.1:8089/servicesNS/nobody/my_new_app/admin/macros/<MACRO NAME>

I have also read through the REST API documentation here:

https://docs.splunk.com/Documentation/Splunk/9.1.3/RESTTUT/RESTbasicexamples
https://docs.splunk.com/Documentation/Splunk/9.1.3/RESTUM/RESTusing#Namespace
https://docs.splunk.com/Documentation/Splunk/9.1.3/RESTUM/RESTusing

However, none of these explicitly describes how to update macros, and all I can seem to find when googling are old posts from 2015-2019 that weren't applicable to what I am trying to achieve. Any help here would greatly be appreciated; I feel like I'm missing something simple but can't find further documentation that applies to macros.
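One avenue worth trying, offered as an assumption rather than a confirmed fix: macros live in macros.conf, so the generic configuration endpoint /servicesNS/nobody/<app>/configs/conf-macros/<stanza> can update them as plain conf stanzas, provided the authenticated user has write access to the object in that app. A minimal sketch with hypothetical placeholder values:

import requests

ROOT = 'https://127.0.0.1:8089'
headers = {'Authorization': 'Bearer <TOKEN>'}  # or session-key auth, as in your existing code
macro_name = '<MACRO NAME>'        # for macros with arguments, e.g. mymacro(2), URL-encode the stanza name
macro_definition = '<NEW DEFINITION>'

# configs/conf-macros edits macros.conf in the app's namespace directly.
url = '{}/servicesNS/nobody/my_new_app/configs/conf-macros/{}'.format(ROOT, macro_name)
res = requests.post(url, headers=headers, data={'definition': macro_definition})
print(res.status_code, res.text)

Since your | rest output shows the object owned by nobody, nobody is probably the right namespace either way. It is also worth printing res.status_code and res.text from the failing admin/macros call: a 403 there points to permissions (the object must be shared to the app and writable by your user), while a 404 points to the path.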
The Splunk Custom Visualizations apps for SimpleXML will reach end of support on Dec 21, 2024, after which no new versions will be released and the apps will be archived from Splunkbase. Check out this Splunk Lantern article to learn more.
I was following the documentation for Splunk Connect for Syslog (SC4S) so that I could ingest syslog into my Splunk Cloud setup. I cannot turn off the SSL option in my HEC global settings, so I did not uncomment the line below. I created the file /opt/sc4s/env_file with these contents:

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://your.splunk.instance:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
#Uncomment the following line if using untrusted SSL certificates
#SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no

I started my sc4s.service (the systemd service created by following the doc) and began to get an exception. I followed this for Splunk Cloud. When the sc4s service is started, I get the error below:

curl: (60) SSL certificate problem: self-signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above.
SC4S_ENV_CHECK_HEC: Invalid Splunk HEC URL, invalid token, or other HEC connectivity issue index=main. sourcetype=sc4s:fallback
Startup will continue to prevent data loss if this is a transient failure.

If I uncomment the line, I don't see the exception anymore, but I fail to get any message when I search index=* sourcetype=sc4s:events "starting up" as suggested in the documentation, and there is no sample data when I run echo "Hello SC4S" > /dev/udp/<SC4S_ip>/514. Please let me know what I am missing in the setup so that I can proceed.
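One thing worth double-checking, offered as an assumption since the exact endpoint depends on your stack: Splunk Cloud HEC traffic typically goes to https://http-inputs-<your-stack>.splunkcloud.com:443 rather than the web URL on port 8088, and the self-signed-certificate error suggests SC4S is reaching something other than the Cloud HEC front end. A sketch of the env_file under that assumption:

SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://http-inputs-<your-stack>.splunkcloud.com:443
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

With a URL that presents Splunk Cloud's real certificate, TLS verification should pass without setting SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no. It is also worth confirming the HEC token is enabled and that the indexes SC4S writes to exist in the Cloud stack, since the same startup check fails on token and index problems too.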