All Topics

I have several apps set up to segregate our various products, and I've added icons to the apps. My issue is that the icon is being placed over the app name instead of next to it. For example, the Search and Reporting app has the white arrow on a green background to the left of the app name. How do I get my icon to appear to the left of the app name?
If you want to gain full control over your growing data volumes, check out Splunk's Data Management pipeline builders – Edge Processor and Ingest Processor. These pipeline builders are available to Splunk Cloud Platform customers and are included with your subscription.

What are Splunk's Data Management Pipeline Builders?

Splunk's Data Management Pipeline Builders are the latest innovation in data processing. They offer more efficient, flexible data transformation – helping you reduce noise, optimize costs, and gain visibility and control over your data in motion. Splunk Data Management pipeline builders are offered with a choice of deployment model:

- Edge Processor is a customer-hosted offering for greater control over data before it leaves your network boundaries. You can use it to filter, mask, and transform your data close to its source before routing the processed data to the environment of your choice.
- Ingest Processor is a Splunk-hosted SaaS offering ideal for customers who are all-in on cloud and prefer that Splunk manage the infrastructure for them. In addition to filtering, masking, and transforming data, it enables a new capability: converting logs to metrics.

How to Get Started with Pipeline Builders

If you'd like to request access to either Edge Processor or Ingest Processor, fill out this form to request activation. If you already have access, you can navigate to the Data Management console in the following ways:

- Log in to Splunk Cloud Platform, navigate to the Splunk Web UI homepage, and click Settings > Add data > Data Management Experience to start using the pipeline builders today.
- You can also navigate directly to Data Management using the following link: https://px.scs.splunk.com/<your Splunk cloud tenant name>

Review these Lantern articles before building your first pipeline:

- How to get started with pipeline builders: learn how Edge Processor and Ingest Processor work, the differences between the two, and when to use which based on your desired outcomes.
- How to configure and deploy Edge Processor and Ingest Processor: follow the steps to access, configure, and deploy Edge Processor or Ingest Processor, create a basic pipeline to process data, and learn about the detailed cluster view.

Popular Use Cases to Get Started

If you're ready to filter, mask, and transform your data before routing it to Splunk or Amazon S3, then it's time to build a pipeline! Pipelines are SPL2 statements that specify what data to process, how to process it, and where to send it (a minimal sketch appears at the end of this post). Author pipelines using SPL2, use quick-start templates, and even preview your data before applying it.

Once you've configured and deployed Edge Processor or Ingest Processor, you can build a pipeline to accomplish a number of use cases to help you control costs, gain additional insights, and optimize your overall data strategy. Check out the following key use cases to get started:

Security use cases
- Reduce syslog firewall logs (PAN and Cisco) and route to Amazon S3 for low-cost storage (article)
- Mask sensitive PII data from events for compliance (video)
- Enrich data via real-time threat detection with KV Store lookups (article)

Observability use cases
- Filter Kubernetes data over HTTP Event Collector (HEC) - Edge Processor only (video)
- Reduce verbose Java app debug logs for faster incident detection (blog)
- NEW with Ingest Processor: Convert logs to metrics to optimize monitoring, then route to Splunk Observability Cloud (article)

Explore more use cases in this comprehensive Lantern article.
Here you'll find additional use cases to filter and route data, as well as use cases to transform, mask, and route data.

Dive in and Unlock New Capabilities

Dive in with the resources below and unlock new capabilities with Federated Search for Amazon S3. Register for our upcoming events to learn more and get live help from the Data Management team, then review the additional resources to support your ongoing journey.

Upcoming events you don't want to miss
- Ask the Experts: Community Office Hours | Sep 25, 2024 at 1pm PT: Ask questions and get help from technical experts on the Data Management team.
- Tech Talk | Oct 24, 2024 at 11am PT: Dive deep into the capabilities of Splunk's Pipeline Builders and see them in action.
- Bi-weekly Webinar | every other Thursday at 9am PT (starting Oct 10, 2024): Topics will vary week to week and will cover everything you need to know about the pipeline builders, from how to get started to executing advanced use cases.

Additional resources
- Check out the Data Management Resource Hub to support your ongoing journey (updated regularly).
- Join the Slack channel for important announcements, ongoing support, and a direct line of access to the Splunk Data Management team (request access here).
- If you'd like to request a feature or provide any other feedback, submit to Splunk Ideas and/or send an email to pipelineprocessing@splunk.com.

Streamline Your Data Management Even More with Federated Search for Amazon S3

After routing data to Amazon S3, you can leverage Federated Search for Amazon S3 for a unified experience to search data across Splunk Platform and Amazon S3. This solution is now generally available in Splunk Cloud Platform and can help you further optimize costs while managing compliance.

We recommend using Federated Search for Amazon S3 for low-frequency, ad-hoc searches of non-mission-critical data that's often stored in Amazon S3. Common use cases include running security investigations over historical data, performing statistical analysis over historical data, enriching existing data in Splunk with additional context from Amazon S3, and more.

You've seen the benefits, you have the use cases, now it's time to experience the magic of Splunk Data Management for yourself! Log in to your Splunk Cloud Platform and navigate to Data Management Experience to start using the pipeline builders today! Request activation here.

Happy Splunking!
The Splunk Data Management Team
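For reference, here is a minimal sketch of what a pipeline can look like. This is illustrative only: the sourcetype and the eval expression are hypothetical placeholders, and exact SPL2 syntax can vary by product version, so treat the Lantern articles linked above as authoritative.

$pipeline = | from $source
| where sourcetype == "pan:traffic"
| eval action = lower(action)
| into $destination;

The general shape is always the same: read from $source, apply filtering, masking, or transformation commands, and write into a configured destination.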
As a test, I first created some credit card numbers using a Python script. I placed the script, along with inputs.conf and props.conf, on the search head, and placed only props.conf on the indexers. The following SEDCMD masks the 1st and 3rd sets of 4 digits; the two captured groups (the 2nd and 4th sets of 4 digits) are not masked.

props.conf:

[cc_generator]
SEDCMD-maskcc = s/\d{4}-(\d{4})-\d{4}-(\d{4})/xxxx-\1-xxxx-\2/g

inputs.conf:

[script://./bin/my_cc_generator.py]
interval = */30 * * * *
sourcetype = cc_generator
disabled = 0
index = mypython

Output: xxxx-9874-xxxx-9484
I am trying to remove the year from the time labels on an area chart without messing up the chart's format. I've tried fieldformat, but that would break the chart when the new year rolls over. Any help would be great.
I am trying to create a new field called "description" that contains values from two other existing fields. If the field "app" is equal to linux, then I want to combine the existing fields "host" and "alert_type". If "app" is equal to windows, then I want to combine the existing fields "host" and "severity". If "app" equals anything else, I want the value to be false. Below is the eval I have, but it's not working:

| eval description=if('app'=="linux", host. "-" .alert_type', 'app'==windows, host. "-" .severity, "false")
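For multi-way branching like this, case() is usually a better fit than if(), which only accepts a single condition. A minimal sketch of the intended logic in standard SPL (field names taken from the question; true() acts as the catch-all default):

| eval description=case(app=="linux", host."-".alert_type, app=="windows", host."-".severity, true(), "false")

Note that string literals need double quotes (windows is unquoted in the original eval) and that concatenation uses a period with no stray quotes around field names.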
Hi, can anybody help with this problem, please?

Source 1: lookup table (lookup.csv)

att1  att2  att3
F1    1100  12.09.2024
F2    1100  23.04.2024
F3    1100  15.06.2024
F4    1100  16.03.2024

att1 also exists in index=myindex. For every att1 in lookup.csv, I want a table with the count of all events from index=myindex where att1=$att1$ AND earliest=strptime($att3$, "%d.%m.%Y").

Desired output:

att1  count (from myindex)  att2  att3
F1    count                 1100  12.09.2024
F2    count                 1100  23.04.2024
F3    count                 1100  15.06.2024
F4    count                 1100  16.03.2024
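One possible approach, sketched under the assumption that the CSV is uploaded as a lookup table file named lookup.csv and that filtering each row against its own att3 date after the search is acceptable:

index=myindex [ | inputlookup lookup.csv | fields att1 ]
| lookup lookup.csv att1 OUTPUT att2 att3
| where _time >= strptime(att3, "%d.%m.%Y")
| stats count by att1 att2 att3

The subsearch restricts the base search to the att1 values in the lookup; if att1 values with zero matching events must still appear with a count of 0, the lookup rows would need to be appended and the missing counts filled in afterwards.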
Is there a way to get Service Endpoint values (response time, load, errors) into Analytics so they can be queried? I have multiple custom service endpoints that track the performance of API calls from a specific customer. They are calls like createCart and placeOrder, etc. Is there a way for me to get values like load, response time, and error counts for these service endpoints into Analytics? I know I can get those metrics for business transactions, but these service endpoints are subsets within the BTs. I don't want to have to create a custom BT for each of these custom service endpoints if I can avoid it. Thanks, Greg
Hi, is the dnslookup command available in Splunk Cloud as it is in Splunk Enterprise?
Hi Team,

Can you please help me with a solution to use a CSV file containing external vs. internal user ID data in Splunk? Below is the current query and output that extracts the internal user ID; I need another column that adds the corresponding external user ID.

CSV file: ABC.csv

usr_id,eml_add_ds
internal user id 1, external user id 1
internal user id 2, external user id 2
internal user id 3, external user id 3
internal user id 4, external user id 4

Query:

(index=ABC) ("Start" OR "Finish") Properties.AspNetCoreEnvironment="*"
| rex field=Message "Start:\s*(?<start_info>[^\s]+)"
| rex field=Message "user\s(?<Userid>[^\took|.]+)"
| search start_info=*
| table Userid
| sort time

Output: (screenshot not included)
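Assuming the CSV is uploaded as a lookup table file named ABC.csv and the extracted Userid values match the usr_id column, a minimal sketch would append a lookup after the existing table command:

| lookup ABC.csv usr_id AS Userid OUTPUT eml_add_ds AS external_user_id
| table Userid external_user_id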
Hello - I am trying to construct a search whereby I can do a lookup of a single table, then rename the fields and change how they're displayed; however, the lookup and eval commands don't seem to be working as I would like. The main search I am performing is basic, using some source subnets and then trying to have the lookup reference what area of the business they belong to. Below is the lookup portion of my search:

index="logs" sourceip="x.x.x.x" OR destip="x.x.x.x"
| lookup file.csv cidr AS sourceip OUTPUT field_a AS sourceprovider, field_b AS sourcearea, field_c AS sourcezone, field_d AS sourceregion, cidr AS src_cidr
| lookup file.csv cidr AS destip OUTPUT field_a AS destprovider, field_b AS destarea, field_c AS destzone, field_d AS destregion, cidr AS dest_cidr
| fillnull value="none"
| eval src_details_combined=sourceprovider."-".sourcearea."-".sourcezone."-".sourceregion
| eval dest_details_combined=destprovider."-".destarea."-".destzone."-".destregion
| eval src_details_combined=IF(src_details_combined=="none-none-none-none","notfound",src_details_combined)
| eval dest_details_combined=IF(dest_details_combined=="none-none-none-none","notfound",dest_details_combined)
| stats count values(sourceip) as sourceip values(destip) as destip by src_details_combined, dest_details_combined, rule, dest_port, app
| table src_details_combined, dest_details_combined, app, count

When I run the search I do get some results, but the src_details_combined and dest_details_combined fields always return "notfound", even though I know the IPs should match in the lookup CSV. Can anyone see where I have gone wrong in my search?
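One thing worth checking: matching an IP address against a CIDR column only works when the lookup is defined with a CIDR match type; a plain file-based lookup does exact string matching, which would explain the "notfound" results. A minimal sketch of a transforms.conf lookup definition, assuming the file is file.csv and the subnet column is cidr (the stanza name is hypothetical):

[my_cidr_lookup]
filename = file.csv
match_type = CIDR(cidr)

The search would then reference the definition name (my_cidr_lookup) instead of file.csv.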
Hello Splunkers!

I hope all is well. There are some sourcetypes in Splunk that hold a large amount of data, but we are not using those sourcetypes in any dashboards or saved searches. I want to delete the data for those sourcetypes, and I have some questions about the deletion:

1. What is the best approach to delete a sourcetype's data in Splunk (using the delete command or from the backend)?
2. Does deleting historical data from those sourcetypes impact the other, useful sourcetypes?
3. Can it cause bucket corruption?
4. The unused sourcetypes hold millions of events, so what is the fastest approach to delete large chunks of historical data?

Thanks in advance. Advice and suggestions are really appreciated!
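For reference, the search-time delete command only marks events as non-searchable and does not reclaim disk space, and it requires a role with the can_delete capability. A minimal sketch, with the index and sourcetype names as placeholders:

index=my_index sourcetype=unused_sourcetype | delete

Reclaiming disk space generally means working at the index level from the backend (for example, splunk clean eventdata on an index dedicated to that data) or letting retention settings age the buckets out.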
Hello, I am responsible for providing self-signed SSL certificates for our Splunk servers. Could you guide me, considering that I am working in a distributed architecture consisting of an SH, 2 indexers, a server handling the DS role, and forwarders? How many certificates will I need to generate, and do the forwarders also require SSL certificates? If possible, I would greatly appreciate any relevant documentation to assist me in this process. Best regards,
Can the Splunkbase app "Splunk AI Assistant for SPL" be installed on an on-prem deployment of Splunk Enterprise, or is the app only for public cloud deployments of Splunk Enterprise? If it is not available for on-prem, are there plans to build an app for on-prem, and what are the current timelines?
We are using the v9 log format in Splunk. It is working fine and we are able to see logs in Splunk as expected. We added 4 more fields in transforms.conf and tested the add-on in Splunk. The additional fields then take the values of s3_filename, bucket name, and prefix, which are appended at the end, and that is not the correct behavior.

We are looking for a solution so that the correct value is parsed into the correct field, and the additional fields have null values if there are no values for them in the logs.

transforms.conf:

[proxylogs_fields]
DELIMS = ","
FIELDS = Timestamp,policy_identities,src,src_translated_ip,dest,content_type,action,url,http_referrer,http_user_agent,status,requestSize,responseSize,responseBodySize,sha256,category,av_detection,pua,amp_disposition,amp_malwarename,amp_score,policy_identity_type,blocked_category,identities,identity_type,request_method,dlp_status,certificate_errors,filename,rulesetID,ruleID,destinationListID,isolateAction,fileAction,warnStatus,forwarding_method,Producer,test_feild1,test_field2,test_field3,test_field4,s3_filename,aws_bucket_name,aws_prefix

props.conf:

[cisco:cloud_security:proxy]
REPORT-proxylogs-fields = proxylogs_fields,extract_url_domain
LINE_BREAKER = ([\r\n]+)
# EVENT_BREAKER = ([\r\n]+)
# EVENT_BREAKER_ENABLE = true
SHOULD_LINEMERGE = false
CHARSET = AUTO
disabled = false
TRUNCATE = 1000000
MAX_EVENTS = 1000000
EVAL-product = "Cisco Secure Access and Umbrella"
EVAL-vendor = "Cisco"
EVAL-vendor_product = "Cisco Secure Access/Umbrella"
MAX_TIMESTAMP_LOOKAHEAD = 22
NO_BINARY_CHECK = true
TIME_PREFIX = ^
TIME_FORMAT = "%Y-%m-%d %H:%M:%S"
TZ = UTC
FIELDALIAS-bytes_in = requestSize as bytes_in
FIELDALIAS-bytes_out = responseSize as bytes_out
EVAL-action = lower(action)
EVAL-app = "Cisco Cloud Security"
FIELDALIAS-http_content_type = content_type as http_content_type
EVAL-http_user_agent_length = len(http_user_agent)
EVAL-url_length = len(url)
EVAL-dest = if(isnotnull(dest),dest,url_domain)
EVAL-bytes = requestSize + responseSize
Can`t seem to get my head round this one - I`ve got a table and would like the users to be able to click on a row and to add a Summary comment, but there`s a bug in the code. The comments get submitted BEFORE I click on the Submit button, which doesn`t seems to work anyway.     <form version="1.1" theme="light" script="TA-images_and-_files:tokenlinks.js"> <label>Report</label> <search> <query>| makeresults|eval Date=strftime(_time,"%d/%m/%Y")|fields - _time</query> <done> <set token="defaut_time">$result.Date$</set> </done> </search> <fieldset submitButton="false"> <input type="dropdown" token="date_tok" searchWhenChanged="true"> <label>Date:</label> <fieldForLabel>Date</fieldForLabel> <fieldForValue>Date</fieldForValue> <search> <query>| makeresults | timechart span=1d count | sort - _time | eval Date=strftime(_time, "%d/%m/%Y"), earliest=relative_time(_time, "@d") | table Date, earliest | head 7 | sort - earliest</query> <earliest>-7d@h</earliest> <latest>now</latest> </search> <default>$defaut_time$</default> </input> <input type="dropdown" token="shift_tok" searchWhenChanged="true"> <label>Shift:</label> <choice value="Day">Day</choice> <choice value="Night">Night</choice> <default>Day</default> <initialValue>Day</initialValue> </input> </fieldset> <row> <panel id="input_panel" depends="$show_input$"> <input type="text" token="Summary"> <label>Summary</label> </input> <input type="text" token="Date"> <label>Date</label> </input> <input type="text" token="Time"> <label>Time</label> </input> <input type="text" token="Shift"> <label>Shift</label> </input> <html> <div> <button type="button" id="buttonId" class="btn btn-primary">Submit</button> <button style="margin-left:10px;" class="btn" data-token-json="{&quot;show_input&quot;: null}">Cancel</button> </div> </html> </panel> </row> <row depends="$hideMe$"> <panel> <table> <search> <done> <unset token="form.Summary"></unset> <unset token="form.Date"></unset> <unset token="form.Time"></unset> <unset token="form.Shift"></unset> <unset token="show_input"></unset> </done> <query>| inputlookup handover_timeline_comments.csv | append [ | makeresults | eval "Summary" = "$form.Summary$", Shift="$form.Shift$", Date="$form.Date$", Time="$form.Time$" ] | outputlookup handover_timeline_comments.csv</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> <refresh>30</refresh> <refreshType>delay</refreshType> </search> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <table> <search> <query>| makeresults count=24 | eval Date= "$date_tok$", Shift="$shift_tok$" | streamstats count as Time | eval Time=if(Time&lt;10, "0".Time.":00", Time.":00") | eval Time=case( Shift == "Night" AND Time &gt;= "19:00", Time, Shift == "Day" AND Time &gt;= "07:00" AND Time &lt;= "18:00", Time, 1==1, null ) | where isnotnull(Time) | append [ | makeresults count=24 | streamstats count as Time | eval Time=if(Time&lt;10, "0".Time.":00", Time.":00") | table Time | eval Date= "$date_tok$", Shift="$shift_tok$" | eval Time=case( Shift == "Night" AND Time &lt;= "06:00", Time, 1==1, null ) | where isnotnull(Time) ] | eval Summary="" | fields - _time | lookup handover_timeline_comments.csv Date Shift Time OUTPUT Summary | eventstats last(Summary) as Summary by Date Shift Time</query> <earliest>-24h@h</earliest> <latest>now</latest> <refresh>10s</refresh> </search> <option name="count">12</option> <option name="drilldown">cell</option> <option name="refresh.display">progressbar</option> <drilldown> 
<set token="form.Date">$row.Date$</set> <set token="form.Shift">$row.Shift$</set> <set token="form.Time">$row.Time$</set> <set token="show_input">true</set> </drilldown> </table> </panel> </row> </form>     .js:   requirejs([ '../app/simple_xml_examples/libs/jquery-3.6.0-umd-min', '../app/simple_xml_examples/libs/underscore-1.6.0-umd-min', 'util/console', 'splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!' ], function($, _, console, mvc) { function setToken(name, value) { console.log('Setting Token %o=%o', name, value); var defaultTokenModel = mvc.Components.get('default'); if (defaultTokenModel) { defaultTokenModel.set(name, value); } var submittedTokenModel = mvc.Components.get('submitted'); if (submittedTokenModel) { submittedTokenModel.set(name, value); } } $('.dashboard-body').on('click', '[data-set-token],[data-unset-token],[data-token-json]', function(e) { e.preventDefault(); var target = $(e.currentTarget); var setTokenName = target.data('set-token'); if (setTokenName) { setToken(setTokenName, target.data('value')); } var unsetTokenName = target.data('unset-token'); if (unsetTokenName) { setToken(unsetTokenName, undefined); } var tokenJson = target.data('token-json'); if (tokenJson) { try { if (_.isObject(tokenJson)) { _(tokenJson).each(function(value, key) { if (value === null) { // Unset the token setToken(key, undefined); } else { setToken(key, value); } }); } } catch (e) { console.warn('Cannot parse token JSON: ', e); } } }); });    
Splunk support recently upgraded Datapunctum Alert Manager Enterprise to version 3.1.1, but it is now broken.  When I open the app it directs me to complete the update tasks.  When I try to execute the first task, I get an error saying Internal Server Error and can't progress any further.  The documentation doesn't help with troubleshooting this issue.
Tech Talk: Out of the Box to Up and Running - Streamlined Observability for Your Cloud Environment (Watch On-Demand)

Splunk Observability continues to make improvements to bridge that gap, offering ready-to-use dashboards, charts, detectors, and alerts for hundreds of popular OSS, cloud infrastructure, and services. Join us to explore how these built-in features and capabilities can help users get to value faster. Watch now.

Key Takeaways
- Understand the ease of setup for your cloud environment delivered by the Splunk Distribution of the OpenTelemetry Collector.
- Explore the vast coverage and deep insights delivered by the Splunk navigators, including the Kubernetes navigator.
- Learn how to leverage the built-in detectors and alerts to streamline your troubleshooting experience.
Different crashes during tcpout reload.

Received fatal signal 6 (Aborted) on PID . Cause: Signal sent by PID running under UID . Crashing thread: indexerPipe_1

Backtrace (PIC build):
[0x000014BC540AFB8F] gsignal + 271 (libc.so.6 + 0x4EB8F)
[0x000014BC54082EA5] abort + 295 (libc.so.6 + 0x21EA5)
[0x000055BCEBEFC1A7] __assert_fail + 135 (splunkd + 0x51601A7)
[0x000055BCEBEC4BD9] ? (splunkd + 0x5128BD9)
[0x000055BCE9013E72] _ZN34AutoLoadBalancedConnectionStrategyD0Ev + 18 (splunkd + 0x2277E72)
[0x000055BCE905DC99] _ZN14TcpOutputGroupD1Ev + 217 (splunkd + 0x22C1C99)
[0x000055BCE905E002] _ZN14TcpOutputGroupD0Ev + 18 (splunkd + 0x22C2002)
[0x000055BCE905FC6F] _ZN15TcpOutputGroups14checkSendStateEv + 623 (splunkd + 0x22C3C6F)
[0x000055BCE9060F08] _ZN15TcpOutputGroups4sendER15CowPipelineData + 88 (splunkd + 0x22C4F08)
[0x000055BCE90002FA] _ZN18TcpOutputProcessor7executeER15CowPipelineData + 362 (splunkd + 0x22642FA)
[0x000055BCE9829628] _ZN9Processor12executeMultiER18PipelineDataVectorPS0_ + 72 (splunkd + 0x2A8D628)
[0x000055BCE8D29D25] _ZN8Pipeline4mainEv + 1157 (splunkd + 0x1F8DD25)
[0x000055BCEBF715EE] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x51D55EE)
[0x000055BCEBF716FB] _ZN6Thread8callMainEPv + 139 (splunkd + 0x51D56FB)
[0x000014BC552AC1DA] ? (libpthread.so.0 + 0x81DA)

Another reload crash:

Backtrace (PIC build):
[0x00007F456828700B] gsignal + 203 (libc.so.6 + 0x2100B)
[0x00007F4568266859] abort + 299 (libc.so.6 + 0x859)
[0x0000560602B5B4B7] __assert_fail + 135 (splunkd + 0x5AAA4B7)
[0x00005605FF66297A] _ZN15TcpOutputClientD1Ev + 3130 (splunkd + 0x25B197A)
[0x00005605FF6629F2] _ZN15TcpOutputClientD0Ev + 18 (splunkd + 0x25B19F2)
[0x0000560602AD7807] _ZN9EventLoop3runEv + 839 (splunkd + 0x5A26807)
[0x00005605FF3555AD] _ZN11Distributed11EloopRunner4mainEv + 205 (splunkd + 0x22A45AD)
[0x0000560602BD03FE] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x5B1F3FE)
[0x0000560602BD050B] _ZN6Thread8callMainEPv + 139 (splunkd + 0x5B1F50B)
[0x00007F4568CAD609] ? (libpthread.so.0 + 0x2609)
[0x00007F4568363353] clone + 67 (libc.so.6 + 0xFD353)
Linux / myhost / 5.15.0-1055-aws / #60~20.04.1-Ubuntu SMP Thu
assertion_failure="!_hasDataInTransit" assertion_function="virtual TcpOutputClient::~TcpOutputClient()"

Starting with Splunk 9.2, outputs.conf is reloadable. Whenever a deployment client pulls a bundle from the deployment server, the relevant conf files are reloaded, depending on the changes; one of those conf files is outputs.conf. Prior to 9.2, outputs.conf was not reloadable, which means hitting the following endpoints would do nothing:

/data/outputs/tcp/server
https://<host>:<port>/servicesNS/-/-/admin/tcpout-group/_reload

The behavior changed in 9.2, and outputs.conf is now reloadable. However, reloading outputs.conf is a very complex process, as it involves shutting down tcpout groups safely, and there are still cases where Splunk crashes. We are working on fixing the reported crashes.

NOTE (Splunk Cloud and others): the following workaround is NOT for a crash caused by a /debug/refresh-induced forced reload. There is no workaround available for a crash caused by /debug/refresh, except not to use /debug/refresh.

Workaround

As mentioned, before 9.2 outputs.conf was never reloadable (_reload was a no-op), so there were no crashes or complications. Set the following in local/app.conf as a workaround:

[triggers]
reload.outputs = simple

With the setting above, Splunk will take no action on a tcpout (outputs.conf) reload - the behavior that existed before 9.2. If outputs.conf is changed via the deployment server, restart Splunk.
Is there any Splunk documentation to guide a load balancer administrator on configuring a load balancer in front of intermediate forwarders to receive syslog traffic from security devices on port 514?
Hi Team,

I am using the query below to run a DNS lookup. Everything is fine except that the time field is not populated for the rows that come from my inputlookup. If I remove the inputlookup and use an individual domain name, it works fine; however, I would like to have the time as well, along with my inputlookup data.

| makeresults
| inputlookup append=t dns.csv
| dnsquery domainfield=domain qtype="A" answerfield="dns_response" nss="10.102.204.52"
| eval Status = case(isnotnull(dns_error), "UnReachable",1=1 , "Reachable")
| eval DateTime=strftime(_time,"%a %B %d %Y %H:%M:%S")
| table DateTime domain dns_response dns_error Status

The result shows as:

DateTime                         domain                     dns_response  dns_error                                                                                                                                            Status
Wed September 18 2024 11:57:19                                                                                                                                                                                                  Reachable
                                 ns1.vodacombusiness.co.za  41.0.1.10                                                                                                                                                           Reachable
                                 ns2.vodacombusiness.co.za  41.0.193.10                                                                                                                                                         Reachable
                                 ns3.vodacombusiness.co.za  -             Could not execute DNS query: A -> ns3.vodacombusiness.co.za. Error: None of DNS query names exist: ns3.vodacombusiness.co.za., ns3.vodacombusiness.co.za.  UnReachable
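The rows appended by inputlookup carry no _time, which is why DateTime is empty for them; only the single row created by makeresults has a timestamp. A minimal sketch of one way around this, assuming it is acceptable to stamp the current time on every row before the lookup runs:

| inputlookup dns.csv
| eval _time=now()
| dnsquery domainfield=domain qtype="A" answerfield="dns_response" nss="10.102.204.52"
| eval Status = case(isnotnull(dns_error), "UnReachable", true(), "Reachable")
| eval DateTime=strftime(_time,"%a %B %d %Y %H:%M:%S")
| table DateTime domain dns_response dns_error Status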