Different crashes during tcpout reload.

Received fatal signal 6 (Aborted) on PID . Cause: Signal sent by PID running under UID . Crashing thread: indexerPipe_1

Backtrace (PIC build):
[0x000014BC540AFB8F] gsignal + 271 (libc.so.6 + 0x4EB8F)
[0x000014BC54082EA5] abort + 295 (libc.so.6 + 0x21EA5)
[0x000055BCEBEFC1A7] __assert_fail + 135 (splunkd + 0x51601A7)
[0x000055BCEBEC4BD9] ? (splunkd + 0x5128BD9)
[0x000055BCE9013E72] _ZN34AutoLoadBalancedConnectionStrategyD0Ev + 18 (splunkd + 0x2277E72)
[0x000055BCE905DC99] _ZN14TcpOutputGroupD1Ev + 217 (splunkd + 0x22C1C99)
[0x000055BCE905E002] _ZN14TcpOutputGroupD0Ev + 18 (splunkd + 0x22C2002)
[0x000055BCE905FC6F] _ZN15TcpOutputGroups14checkSendStateEv + 623 (splunkd + 0x22C3C6F)
[0x000055BCE9060F08] _ZN15TcpOutputGroups4sendER15CowPipelineData + 88 (splunkd + 0x22C4F08)
[0x000055BCE90002FA] _ZN18TcpOutputProcessor7executeER15CowPipelineData + 362 (splunkd + 0x22642FA)
[0x000055BCE9829628] _ZN9Processor12executeMultiER18PipelineDataVectorPS0_ + 72 (splunkd + 0x2A8D628)
[0x000055BCE8D29D25] _ZN8Pipeline4mainEv + 1157 (splunkd + 0x1F8DD25)
[0x000055BCEBF715EE] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x51D55EE)
[0x000055BCEBF716FB] _ZN6Thread8callMainEPv + 139 (splunkd + 0x51D56FB)
[0x000014BC552AC1DA] ? (libpthread.so.0 + 0x81DA)

Another reload crash:

Backtrace (PIC build):
[0x00007F456828700B] gsignal + 203 (libc.so.6 + 0x2100B)
[0x00007F4568266859] abort + 299 (libc.so.6 + 0x859)
[0x0000560602B5B4B7] __assert_fail + 135 (splunkd + 0x5AAA4B7)
[0x00005605FF66297A] _ZN15TcpOutputClientD1Ev + 3130 (splunkd + 0x25B197A)
[0x00005605FF6629F2] _ZN15TcpOutputClientD0Ev + 18 (splunkd + 0x25B19F2)
[0x0000560602AD7807] _ZN9EventLoop3runEv + 839 (splunkd + 0x5A26807)
[0x00005605FF3555AD] _ZN11Distributed11EloopRunner4mainEv + 205 (splunkd + 0x22A45AD)
[0x0000560602BD03FE] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x5B1F3FE)
[0x0000560602BD050B] _ZN6Thread8callMainEPv + 139 (splunkd + 0x5B1F50B)
[0x00007F4568CAD609] ? (libpthread.so.0 + 0x2609)
[0x00007F4568363353] clone + 67 (libc.so.6 + 0xFD353)

Linux / myhost / 5.15.0-1055-aws / #60~20.04.1-Ubuntu SMP Thu
assertion_failure="!_hasDataInTransit" assertion_function="virtual TcpOutputClient::~TcpOutputClient()"

Starting with Splunk 9.2, outputs.conf is reloadable. Whenever a deployment client pulls a bundle from the deployment server, the changed conf files are reloaded, and outputs.conf is one of them. Prior to 9.2, outputs.conf was not reloadable, meaning that hitting the following endpoints would do nothing:

/data/outputs/tcp/server
https://<host>:<port>/servicesNS/-/-/admin/tcpout-group/_reload

This behavior changed in 9.2, and outputs.conf is now reloadable. However, reloading outputs.conf is a very complex process, as it involves safely shutting down the tcpout groups, and there are still cases where Splunk crashes. We are working on fixing the reported crashes.

NOTE (Splunk Cloud and others): the following workaround is NOT for a crash caused by a /debug/refresh-induced forced reload. There is no workaround for a crash caused by /debug/refresh, other than not using /debug/refresh.

Workaround

As mentioned, before 9.2 outputs.conf was never reloadable (_reload was a no-op), so there were no crashes or complications. As a workaround, set the following in the app's local/app.conf:

[triggers]
reload.outputs = simple

With this setting, Splunk takes no action on a tcpout (outputs.conf) reload, which is the pre-9.2 behavior. If outputs.conf is changed via the DS, restart Splunk.
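To confirm the override is in effect after deployment, btool can print the merged [triggers] stanza. A minimal sketch, where "myapp" is a placeholder for whichever app carries the override:

# Show the effective reload triggers and which file each setting comes from
$SPLUNK_HOME/bin/splunk btool --app=myapp app list triggers --debug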
Is there any guidance in Splunk's documentation for a load balancer administrator on configuring a load balancer in front of intermediate forwarders to receive syslog traffic from security devices on port 514?
Hi Team, I am using the query below for DNS lookups. Everything works, but the time field is not populated for the rows that come from my inputlookup. If I remove the inputlookup and use an individual domain name, it works fine; however, I would like the time to appear alongside my inputlookup data as well.

| makeresults
| inputlookup append=t dns.csv
| dnsquery domainfield=domain qtype="A" answerfield="dns_response" nss="10.102.204.52"
| eval Status = case(isnotnull(dns_error), "UnReachable", 1=1, "Reachable")
| eval DateTime=strftime(_time,"%a %B %d %Y %H:%M:%S")
| table DateTime domain dns_response dns_error Status

The result shows as:

DateTime                        domain                      dns_response  dns_error                                                                                                                                              Status
Wed September 18 2024 11:57:19                                                                                                                                                                                                   Reachable
                                ns1.vodacombusiness.co.za   41.0.1.10                                                                                                                                                            Reachable
                                ns2.vodacombusiness.co.za   41.0.193.10                                                                                                                                                          Reachable
                                ns3.vodacombusiness.co.za   -             Could not execute DNS query: A -> ns3.vodacombusiness.co.za. Error: None of DNS query names exist: ns3.vodacombusiness.co.za., ns3.vodacombusiness.co.za.   UnReachable
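Rows appended by inputlookup carry no _time, so only the single row from makeresults gets a DateTime. A minimal sketch of one way around this, assuming the search execution time is acceptable for every row, is to backfill _time before formatting it:

| makeresults
| inputlookup append=t dns.csv
| eval _time=coalesce(_time, now()) ``` lookup rows have no _time, so fill it in ```
| dnsquery domainfield=domain qtype="A" answerfield="dns_response" nss="10.102.204.52"
| eval Status = case(isnotnull(dns_error), "UnReachable", 1=1, "Reachable")
| eval DateTime=strftime(_time,"%a %B %d %Y %H:%M:%S")
| table DateTime domain dns_response dns_error Status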
Hello, Some of the logs coming from the Windows Universal Forwarder to Splunk show the following error in the message field for certain events: "Splunk could not get the description for this event." I have reviewed https://community.splunk.com/t5/Getting-Data-In/Why-quot-FormatMessage-error-quot-appears-in-indexed-message-for/td-p/139980 , but it doesn't solve the issue, as this problem only occurs for a few specific events at specific times. I am using Splunk version 9.2. What could be the issue?
Please help me with an SPL search for the following: whenever the keyword "ERROR OCCURED DURING MESSAGE EXCHANGE" occurs and "REQ=INI" does not occur within a few minutes, raise an alert.
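A minimal sketch of one way to express this as a scheduled alert, assuming the events live in an index called app_logs, that the quoted strings match your events literally, and a five-minute correlation window; run it every few minutes and alert when it returns a result:

index=app_logs ("ERROR OCCURED DURING MESSAGE EXCHANGE" OR "REQ=INI") earliest=-5m
| eval kind=if(searchmatch("REQ=INI"), "init", "error")
| stats count(eval(kind="error")) as errors, count(eval(kind="init")) as inits
| where errors > 0 AND inits = 0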
I am unable to search my custom fields in Splunk after migrating an index from standard to federated. Do I have to change something in my field extractions, or could something have gone wrong in the migration?
We have two separate Splunk instances with ES (standalone, not clustered); consider it an HO/DR pair. When I move to the DR instance of Splunk and copy /etc/apps, after restarting the DR instance all the notables are in "New" status. Notables that were closed on the HO instance also show as "New". What could be the reason? I do know that this is managed in a KV store. If we have to migrate the KV store data involved, what are the best practices in this case?
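Notable statuses live in the incident_review KV store collection (owned by the SA-ThreatIntelligence app in ES), which copying /etc/apps does not carry over. A minimal sketch using the built-in KV store backup/restore CLI (available since Splunk 7.1); the archive name is a placeholder, and the collection and app names should be verified against your ES version:

# On the HO search head: back up the collection (archive lands in $SPLUNK_DB/kvstorebackup)
$SPLUNK_HOME/bin/splunk backup kvstore -archiveName es_notables -collectionName incident_review -appName SA-ThreatIntelligence

# Copy the archive to the DR host, then restore it there
$SPLUNK_HOME/bin/splunk restore kvstore -archiveName es_notables -collectionName incident_review -appName SA-ThreatIntelligence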
I have an indexer cluster set up with a load balancer in front of it, and I want syslog ingested into the indexers. My plan is to install the Universal Forwarder on the Linux servers and send the syslog to the indexer cluster. Now the problem is: how can I configure the Universal Forwarder to go through the load balancer to ingest the data?
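The forwarder itself only needs the load balancer's virtual IP as its tcpout destination. A minimal sketch of outputs.conf on the UF, where lb.example.com:9997 is a placeholder for your VIP and listening port; note that the UF can also load-balance natively if you list the indexers directly in server=, which is what Splunk generally recommends over an external LB for forwarder traffic:

# outputs.conf on each Universal Forwarder
[tcpout]
defaultGroup = indexer_cluster

[tcpout:indexer_cluster]
server = lb.example.com:9997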
Hi everyone, I’m trying to visualize the network traffic of an interface in Splunk using an area chart. However, the Y-axis scale is currently displaying as K, M, B (for thousand, million, billion), but I would like it to show K, M, G, T (for kilobytes, megabytes, gigabytes, terabytes). Is there a way to adjust this? Thanks!
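As far as I know there is no built-in option to switch the axis suffixes to byte units; a common workaround is to pre-scale the values in SPL so the axis stays within one unit and the unit lives in the series name. A sketch, where the index, sourcetype, and bytes_in field are placeholders for your data:

index=network sourcetype=interface_metrics
| eval mb_in = round(bytes_in/1024/1024, 2) ``` pre-scale to MB so the axis reads in a single unit ```
| timechart span=5m avg(mb_in) as "Traffic (MB)"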
When I display a Choropleth Map on a dashboard, the painted areas are collapsed. When I display the visualization in the Search app, there is no problem. Has anyone experienced the same issue, or does anyone have ideas on how to solve it? On Splunk Enterprise 9.3.1 it renders fine in Dashboard Classic, but the issue occurs in Dashboard Studio.

[Screenshots: Choropleth Map in Search app; Choropleth Map in Dashboard Studio; Choropleth Map in Dashboard Classic]

Thanks,
I am looking for a visualization mechanism to colorize slices of a pie by their status: OK (green), Warning (yellow), Major (orange), Critical (red). All of the pie chart viz examples I have seen are ranked by count of some category, and I want to rank by status. In the example below, I have 4 groups of services, each with a number of service instances providing service up to a maximum number defined for the group. I would like to see each group's N-of-M colored by status and not ranked by count. Any ideas on where to go? The pie chart viz is ruled out per the above (I think). I looked for other visualizations such as the sunburst, but it didn't present the way I wanted.

Example SPL:

| makeresults
| eval dbs = "[{\"entity\":\"I0\",\"instanceCount\":\"0\",\"instanceMax\":\"3\"},{\"entity\":\"I1\",\"instanceCount\":\"1\",\"instanceMax\":\"3\"},{\"entity\":\"I2\",\"instanceCount\":\"2\",\"instanceMax\":\"3\"},{\"entity\":\"I3\",\"instanceCount\":\"3\",\"instanceMax\":\"3\"}]"
| spath input=dbs path={} output=dblist
| mvexpand dblist
| spath input=dblist
| eval pct_avail=round(100*instanceCount/instanceMax,1)
| eval status=case(pct_avail=100, "OK", pct_avail>=50, "Warning", pct_avail>1, "Major", true(), "Critical")
| eval color=case(status="Critical", "#FF0000", status="Major", "#D94E17", status="Warning", "#CBA700", status="OK", "#118832", true(), "#1182F3")
| stats count by entity
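One direction worth noting: Simple XML charts accept a charting.fieldColors option that pins slice or series names to fixed colors, so a pie of counts by status (rather than by entity) can at least use the status palette consistently. A sketch of a hypothetical panel:

<chart>
  <search>
    <query>... | stats count by status</query>
  </search>
  <option name="charting.chart">pie</option>
  <option name="charting.fieldColors">{"OK": 0x118832, "Warning": 0xCBA700, "Major": 0xD94E17, "Critical": 0xFF0000}</option>
</chart>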
Imagine, if you will, a table view lookup that has been set up to pull the host name, the environment (Dev/Test/Prod), the server type (Database, Web App, SSO, etc...), and the application the server supports. I have 4 input field LOVs set up.

1. Enclave... lets me choose Dev / Test / Prod, those are the only 3 options, and the token name is "enclave"
2. Type... shows Database, Web App, SSO, Other... again those are the only options, token name is "type"
3. Application... say HR, Payroll, Order Entry and HealthCare... again, 4 options, token name is "app"
4. This should be a DYNAMIC LOV that shows only the servers in the table view lookup that meet the condition set by the first 3 LOVs... for example, Enclave set to Dev, Type set to Web App, Application set to HR. My table view clearly shows there are 2 web app server names, so the 4th LOV should show Server003, Server007, All. The token would then be set based on the choice (003 or 007), and if "All" were picked the token would be Server003, Server007. This would drive the panel searches.

Is this possible? I can get the 4th LOV to run but it doesn't give me a list.
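This is doable with a dynamic dropdown whose populating search filters the lookup by the first three tokens. A sketch in Simple XML, assuming the lookup file is servers.csv with fields host, enclave, type, and app (adjust to your lookup's actual names); the static "All" choice uses a wildcard so you don't have to build a comma-separated server list:

<input type="dropdown" token="server">
  <label>Server</label>
  <choice value="*">All</choice>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>| inputlookup servers.csv | search enclave="$enclave$" type="$type$" app="$app$" | stats count by host</query>
  </search>
</input>

The panel searches can then filter with host=$server$, and the wildcard covers the "All" case.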
I wanted to index the span tag "error" to be able to filter spans by this tag and to create alerts based on it. I tried to add a custom MetricSet. Unfortunately, after I start the analysis, I don't see the check mark action to activate my new MetricSet. I have followed the instructions on this page: https://docs.splunk.com/observability/en/apm/span-tags/index-span-tags.html#index-a-new-span-tag-or-process
Hello everyone, new here and trying to learn. I've searched for hours trying to get a dashboard to display the computers within my domain and whether they are online or not, with a time associated. The time associated with being up or down isn't important, just a nicety.
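A minimal sketch of one common approach, assuming your domain computers already send events into an index named wineventlog and treating any host silent for 10 minutes as offline; the index name and threshold are placeholders to adjust:

| metadata type=hosts index=wineventlog
| eval status=if(now()-recentTime < 600, "online", "offline") ``` 600s = 10-minute silence threshold ```
| eval last_seen=strftime(recentTime, "%F %T")
| table host status last_seen
| sort status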
I have about 100 servers. These are a mix of different Oracle servers: databases, web app servers, data warehouse servers, SSO servers, and OBIEE servers. Across these there are also the standard Dev/Test/Prod environments, and all of it supports 5 different development/sustainment projects.

A request was made to our Splunk admin in the form of the server names and all of the log files our engineer could think of at the time. It appears the Splunk admin just crammed everything into a single index: literally hundreds of log files, as each server appeared to have 10-15 log files identified. Given the servers do different things, the request didn't necessarily have the same log files identified for every server. I would have "expected" the request to be vetted with "What do you really need?" rather than "HERE YOU GO!" Maybe I've done software development too long; it could be me.

Anyway, was this the right way to go? Would it have made more sense to have one index for the database servers, one index for the web app servers, one index for the data warehouse, etc.? Or perhaps one index for the production assets, one for Test, and one for Dev? There doesn't appear to be a "best practice" that I can find... and what I have is ONE FREAKING HUGE index.

If you read this far, thanks. If you have a cogent answer that makes sense to me, even better!
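For what it's worth, indexes are usually split along retention and access-control boundaries rather than per server. A sketch of what a function-plus-environment split might look like in indexes.conf; the names and retention values are purely illustrative, not a recommendation for your exact layout:

# indexes.conf sketch
[oracle_db_prod]
homePath   = $SPLUNK_DB/oracle_db_prod/db
coldPath   = $SPLUNK_DB/oracle_db_prod/colddb
thawedPath = $SPLUNK_DB/oracle_db_prod/thaweddb
frozenTimePeriodInSecs = 31536000   # e.g. keep prod for 1 year

[oracle_db_dev]
homePath   = $SPLUNK_DB/oracle_db_dev/db
coldPath   = $SPLUNK_DB/oracle_db_dev/colddb
thawedPath = $SPLUNK_DB/oracle_db_dev/thaweddb
frozenTimePeriodInSecs = 7776000    # e.g. keep dev for 90 days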
I have a sample log. How do I create line breaking in props.conf on the indexers so that Splunk recognizes the header row (###) as the first line of the event?

Sample log:

########################################################################
Thu 05/02/2024 - 8:06:13.34
########################################################################
Parm-1 is XYZ
Parm-2 is w4567
Parm-3 is 3421
Parm-4 is mclfmkf
Properties file is jakjfdakohj
Parm-6 is %Source_File%
Parm-7 is binary
Parm-8 is
Parm-9 is
SOURCE_DIR is mfkljfdalkj
SOURCE_FILE is klnsaclkncalkn
FINAL_DIR is /mail/lslk/jdslkjd/
FINAL_FILE is lkjdflkj_*.txt
MFRAME is N
Version (C) Copyright *************************************************
Successfully connected

I want Splunk to include the ### row as the first line of the event, but the line breaker I have only matches from the second line (Thu 05/02/2024 - 8:06:13.34). Please let me know.
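A sketch of props.conf that breaks only before a run of # characters that is followed by a day-of-week/date line, so the second ### row inside the header does not start a new event; the sourcetype name is a placeholder, and the regex assumes the header shape shown above:

# props.conf on the indexers (or on a heavy forwarder if one parses this data)
[batch_job_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=#{20,}[\r\n]+\w{3} \d{2}/\d{2}/\d{4})
TIME_PREFIX = ^#+[\r\n]+
TIME_FORMAT = %a %m/%d/%Y - %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30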
I need a query to remove duplicates from a stats count.

Sample input:

event   email
abc     xyz@email.com
abc     xyz@email.com
abc     test@email.com
abc     test@email.com
xyz     xyz@email.com

Expected output:

event   count
abc     2
xyz     1

What I am getting:

event   count
abc     4
xyz     1
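Since the expected count is the number of distinct emails per event, a distinct count gets there directly; deduplicating the event/email pairs first works too. A sketch, assuming the fields are literally named event and email:

| stats dc(email) as count by event

or, equivalently:

| dedup event email
| stats count by event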
Hi everyone, I updated the version of my database agent, and by default AppDynamics set the name to "default database agent", but I need to customize the name for each one, and I couldn't find out where to set this up. Can anyone help me figure out where to change the database agent name? Thanks!
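As far as I know, the Database Agent takes its display name from the dbagent.name Java system property at startup, so each agent can be given its own name there; this is from memory, so verify the property against the documentation for your agent version:

# Database Agent start command -- "Prod-DB-Agent-01" is a placeholder name
java -Ddbagent.name="Prod-DB-Agent-01" -jar db-agent.jar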
I get the error shown in the title when trying to upload a CSV as a lookup. I tried the solution mentioned here: https://community.splunk.com/t5/Splunk-Search/What-does-the-error-quot-File-has-no-line-endings-quot-for-a/m-p/322387 but that doesn't work. Any suggestions?
I get weekly email updates with results from weekly URA scans. After noticing that we had outdated apps, we rolled out updates for three public apps: Sankey Diagram, Scalable Vector Graphics, and Splunk Dashboard Examples. In our testing environment URA is now content, and all apps pass the jQuery scans without issues. However, in our production environment the URA scan still fails all three apps. It does not specify which files, or whether there is a problem on one or all instances, so I don't know what is causing the results.

I have double- and triple-checked the apps, comparing hash values for every file both on the deployment server and on all individual test and production search heads. Everything except for the "install hash" in "meta.local" is identical between the test and production environments. The apps are all identical between cluster members in the test and production environments respectively. There are no additional files present on any search head in the production environment.

Why is URA still failing these apps only in the production environment? How can I identify the reason for the scan failures, as they should all pass in both environments, being identical and all? Any and all suggestions are most welcome.

All the best