All Posts


Try something like this:

<query>| inputlookup servers | where Enclave="$ENCLAVE$" AND Type="$Type$" AND Application="$APPLICATION$" | stats values(host) as host</query>

Otherwise the field name is "values(host)", which doesn't match fieldForLabel and fieldForValue.
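For reference, the full multiselect input with that rename applied might look like this (token and lookup names taken from this thread; adjust to your dashboard):

```xml
<input type="multiselect" token="SERVERS">
  <label>SERVERS</label>
  <search>
    <query>| inputlookup servers
| where Enclave="$ENCLAVE$" AND Type="$Type$" AND Application="$APPLICATION$"
| stats values(host) as host</query>
  </search>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <delimiter> </delimiter>
</input>
```

The key point is that `stats values(host) as host` renames the output column so it matches the `host` referenced by fieldForLabel and fieldForValue.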
Hi @hrawat, please open a case with Splunk Support soon. Ciao. Giuseppe
Let me disagree with you here on one thing. Adding a load balancer in front of syslog receivers does not usually solve any problems (especially because LBs typically "don't speak" syslog, and even more so since "syslog" can mean many different things, from RFC 5424-compliant messages to "just throw anything at UDP/514"), and it introduces an additional layer of complexity and a potential SPOF.
At the moment the query reads as:

| inputlookup my_servers | stats values(host)

Obviously there is no token at the moment to restrict the list, so I should get a rather long list of servers to select from. The source for the XML frame or block is:

<input type="multiselect" token="SERVERS">
  <label>SERVERS</label>
  <search>
    <query>| inputlookup servers
| stats values(host)</query>
  </search>
  <fieldForLabel>host</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <delimiter> </delimiter>
</input>
Different crashes during tcpout reload.

Received fatal signal 6 (Aborted) on PID . Cause: Signal sent by PID running under UID . Crashing thread: indexerPipe_1
Backtrace (PIC build):
[0x000014BC540AFB8F] gsignal + 271 (libc.so.6 + 0x4EB8F)
[0x000014BC54082EA5] abort + 295 (libc.so.6 + 0x21EA5)
[0x000055BCEBEFC1A7] __assert_fail + 135 (splunkd + 0x51601A7)
[0x000055BCEBEC4BD9] ? (splunkd + 0x5128BD9)
[0x000055BCE9013E72] _ZN34AutoLoadBalancedConnectionStrategyD0Ev + 18 (splunkd + 0x2277E72)
[0x000055BCE905DC99] _ZN14TcpOutputGroupD1Ev + 217 (splunkd + 0x22C1C99)
[0x000055BCE905E002] _ZN14TcpOutputGroupD0Ev + 18 (splunkd + 0x22C2002)
[0x000055BCE905FC6F] _ZN15TcpOutputGroups14checkSendStateEv + 623 (splunkd + 0x22C3C6F)
[0x000055BCE9060F08] _ZN15TcpOutputGroups4sendER15CowPipelineData + 88 (splunkd + 0x22C4F08)
[0x000055BCE90002FA] _ZN18TcpOutputProcessor7executeER15CowPipelineData + 362 (splunkd + 0x22642FA)
[0x000055BCE9829628] _ZN9Processor12executeMultiER18PipelineDataVectorPS0_ + 72 (splunkd + 0x2A8D628)
[0x000055BCE8D29D25] _ZN8Pipeline4mainEv + 1157 (splunkd + 0x1F8DD25)
[0x000055BCEBF715EE] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x51D55EE)
[0x000055BCEBF716FB] _ZN6Thread8callMainEPv + 139 (splunkd + 0x51D56FB)
[0x000014BC552AC1DA] ? (libpthread.so.0 + 0x81DA)

Another reload crash:
Backtrace (PIC build):
[0x00007F456828700B] gsignal + 203 (libc.so.6 + 0x2100B)
[0x00007F4568266859] abort + 299 (libc.so.6 + 0x859)
[0x0000560602B5B4B7] __assert_fail + 135 (splunkd + 0x5AAA4B7)
[0x00005605FF66297A] _ZN15TcpOutputClientD1Ev + 3130 (splunkd + 0x25B197A)
[0x00005605FF6629F2] _ZN15TcpOutputClientD0Ev + 18 (splunkd + 0x25B19F2)
[0x0000560602AD7807] _ZN9EventLoop3runEv + 839 (splunkd + 0x5A26807)
[0x00005605FF3555AD] _ZN11Distributed11EloopRunner4mainEv + 205 (splunkd + 0x22A45AD)
[0x0000560602BD03FE] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 46 (splunkd + 0x5B1F3FE)
[0x0000560602BD050B] _ZN6Thread8callMainEPv + 139 (splunkd + 0x5B1F50B)
[0x00007F4568CAD609] ? (libpthread.so.0 + 0x2609)
[0x00007F4568363353] clone + 67 (libc.so.6 + 0xFD353)
Linux / myhost / 5.15.0-1055-aws / #60~20.04.1-Ubuntu SMP Thu
assertion_failure="!_hasDataInTransit" assertion_function="virtual TcpOutputClient::~TcpOutputClient()"

Starting with Splunk 9.2, outputs.conf is reloadable. Whenever a DC pulls a bundle from the DS, conf files are reloaded, depending on the changes. One of those conf files is outputs.conf. Prior to 9.2, outputs.conf was not reloadable, meaning that hitting the following endpoints would do nothing:

/data/outputs/tcp/server
or
https://<host>:<port>/servicesNS/-/-/admin/tcpout-group/_reload

The behavior changed in 9.2, and outputs.conf is now reloadable. However, reloading outputs.conf is a very complex process, as it involves safely shutting down tcpout groups. There are still cases where Splunk crashes; we are working on fixing the reported crashes.

NOTE (Splunk Cloud and others): the following workaround is NOT for a crash caused by a /debug/refresh-induced forced reload. There is no workaround available for a crash caused by /debug/refresh, except not to use /debug/refresh.
Workaround

As mentioned, before 9.2 outputs.conf was never reloadable (_reload was a no-op), so there were no crashes or complications. As a workaround, set the following in the app's local/app.conf:

[triggers]
reload.outputs = simple

With the setting above, Splunk will take no action on a tcpout (outputs.conf) reload (the behavior before 9.2). If outputs.conf is changed via the DS, restart Splunk.
Hi @hazem, adding a bit to @PickleRick's information: you can configure an rsyslog (or syslog-ng) server on your UFs. You don't need to install it because it's already installed; you only have to configure it to define where to write logs. For more info, see https://www.rsyslog.com/guides/ or https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-basic_configuration_of_rsyslog
On the LB, you only need to configure the receiving port and the destination port and addresses. Some LBs also need a way to check whether the destinations are alive, but that configuration depends on your LB and is independent of Splunk or the rsyslog receiver. Ciao. Giuseppe
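As a rough illustration only (the file path, port, and per-host layout are assumptions, not from this thread), an rsyslog configuration that receives on 514 and writes each sender's messages to its own file for the UF to monitor might look like:

```
# /etc/rsyslog.d/50-splunk-receive.conf (sketch; adjust ports and paths)
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

# Write each sending host's messages to a dedicated file
template(name="PerHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
action(type="omfile" dynaFile="PerHostFile")
```

The UF would then have a monitor input on /var/log/remote/... to forward those files to the indexers.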
Splunk's recommendation is NOT to send syslog data directly (or via an LB) to a Splunk instance. Syslog should be sent to a dedicated syslog server (running syslog-ng or rsyslog) and then forwarded to Splunk. The syslog servers should be positioned as close to the data source as possible to avoid data loss. Use of a load balancer in front of the syslog servers is recommended for resiliency. For more information, see:
https://docs.splunk.com/Documentation/Splunk/latest/Data/HowSplunkEnterprisehandlessyslogdata#Caveats_to_using_Splunk_Enterprise_as_a_syslog_server_or_message_sender
https://www.splunk.com/en_us/blog/tips-and-tricks/high-performance-syslogging-for-splunk-using-syslog-ng-part-1.html
There is no such document because generally it's not recommended to LB "syslog" traffic. You should keep your syslog receiver as simple as possible and as close to the source as possible.
Splunk doesn't look "backwards", so you have to think backwards.
1. Since Splunk by default returns events in reverse chronological order, you have to | reverse them to get them in straight chronological order.
2. Assuming you already have the REQ field extracted, keep track of its values over a 7-minute window: | streamstats time_window=7m values(REQ) AS reqvals
3. Now you can find the events matching your search string that don't have the value of REQ copied over from earlier events: | search "Error occurred during message exchange" AND NOT reqvals="INI"
Two caveats:
1. The search might be slow. Depending on your actual data, you might make it faster by searching only for "Error occurred during message exchange" OR REQ
2. Remember that a!=b is not the same as NOT a=b, especially when dealing with multivalued fields.
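The windowing in step 2 can be sketched in plain Python (an illustration of the idea, not Splunk internals): for each event, collect the REQ values seen in the trailing 7 minutes, then flag error events whose window lacks "INI".

```python
from datetime import datetime, timedelta

def seven_min_req_window(events):
    """events: list of (timestamp, message, req) in ascending time order.
    Mimics | streamstats time_window=7m values(REQ) AS reqvals followed by
    | search "Error occurred during message exchange" AND NOT reqvals="INI".
    Returns the error events that had no INI in their trailing window."""
    window = timedelta(minutes=7)
    flagged = []
    for i, (ts, msg, _req) in enumerate(events):
        # REQ values from events no older than 7 minutes before this one
        reqvals = {r for t, _m, r in events[: i + 1]
                   if r is not None and ts - t <= window}
        if "Error occurred during message exchange" in msg and "INI" not in reqvals:
            flagged.append((ts, msg))
    return flagged

t0 = datetime(2024, 1, 1, 12, 0)
evts = [
    (t0, "REQ received", "INI"),
    (t0 + timedelta(minutes=3), "Error occurred during message exchange", None),   # INI in window
    (t0 + timedelta(minutes=20), "Error occurred during message exchange", None),  # no INI in window
]
print(len(seven_min_req_window(evts)))  # → 1
```

Only the second error is flagged, because the INI event is outside its 7-minute window.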
Is there anything in Splunk's documentation to guide a load balancer administrator on configuring a load balancer in front of intermediate forwarders to receive syslog traffic from security devices on port 514?
Ok, you're mixing several things here. One thing is syslog: UFs don't send syslog. They can receive syslog (which is not advisable to do directly on a UF anyway, but that's another story). They send data over the so-called S2S protocol, which might be embedded in HTTP requests. UFs have their own load-balancing mechanism, and you should be using that; network-level load balancing won't work properly. If you expect your indexer set to change frequently, you might use indexer discovery. This is all something you should discuss with your Splunk environment architect.
Hi @Somesh, to send logs from Universal Forwarders to Indexers, you don't need a Load Balancer, because Splunk has its own method to auto load balance data from UFs to IDXs. You only have to indicate the auto-load-balance group and the destination Indexers in outputs.conf (on the UFs). It's a different matter for syslog: there you need a Load Balancer to distribute load between receivers and manage failover. One more thing: if possible, don't use Indexers to receive syslog; instead use two (or more) UFs with an rsyslog (or syslog-ng) receiver. In this way you separate the input phase from the parsing, merging, typing, and indexing phases. Ciao. Giuseppe
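The auto-load-balancing setup described above boils down to a small outputs.conf on each UF; a minimal sketch (hostnames and ports are placeholders):

```ini
# $SPLUNK_HOME/etc/system/local/outputs.conf (sketch)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# autoLB is the default; the UF rotates across the listed indexers on its own
autoLB = true
```

With this in place, no external load balancer sits between the UFs and the indexers.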
The transaction command groups events based on event content and generates some extra fields like closed_txn, eventcount, etc. In this case we selected the starting event with the content "Error occurred during message exchange" and the ending event with the content "REQ\=INI". If both events are present, the generated field closed_txn is set to 1; otherwise closed_txn is set to 0. In the screenshot above you can see that the second event is actually a group of 2 events (closed_txn=1), while the first event is standing alone (closed_txn=0). Adding the line below to the search will keep only the events for which no REQ=INI has been received in the last 7 minutes (please note: 'latest=-7m' was added as an early filter):
| search closed_txn=0
The result will look like the output below, and from that you can create an alert as you wish. I hope this is what you are looking for.
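As a plain-Python illustration of the closed_txn idea (not Splunk internals): a transaction is "closed" (closed_txn=1) only when a start event is paired with an end event; an unpaired start stays open (closed_txn=0) and is what the alert fires on.

```python
def close_transactions(events, start_marker, end_marker):
    """Pair each start event with the next end event, mimicking the
    closed_txn field generated by | transaction startswith=... endswith=...
    closed_txn=1 when both markers were found, 0 for an unpaired start."""
    txns, open_start = [], None
    for msg in events:
        if start_marker in msg:
            if open_start is not None:  # previous start was never closed
                txns.append({"events": [open_start], "closed_txn": 0})
            open_start = msg
        elif end_marker in msg and open_start is not None:
            txns.append({"events": [open_start, msg], "closed_txn": 1})
            open_start = None
    if open_start is not None:          # trailing start with no end event
        txns.append({"events": [open_start], "closed_txn": 0})
    return txns

evts = ["Error occurred during message exchange", "REQ=INI",
        "Error occurred during message exchange"]
open_alerts = [t for t in close_transactions(evts, "Error occurred", "REQ=INI")
               if t["closed_txn"] == 0]
print(len(open_alerts))  # → 1
```

Filtering on closed_txn == 0 keeps only the unpaired error, which is exactly what `| search closed_txn=0` does in the SPL above.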
Hi @Prashant, good for you, see you next time! Let me know if I can help you more, or please accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @gcusello - Ah got it. Thank you so much.  
Hi @Prashant, from the inputlookup you don't have a _time timestamp. If you want the now() timestamp, you can try it this way (note that the eval must create DateTime, spelled exactly as in the table command, or the column will be empty):

| inputlookup dns.csv
| dnsquery domainfield=domain qtype="A" answerfield="dns_response" nss="10.102.204.52"
| eval DateTime=strftime(now(),"%a %B %d %Y %H:%M:%S")
| eval Status = case(isnotnull(dns_error), "UnReachable", 1=1, "Reachable")
| table DateTime domain dns_response dns_error Status

Ciao. Giuseppe
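The strftime format tokens used here behave the same way in Python, which makes the format easy to sanity-check (the timestamp below is taken from the result shown in the question):

```python
from datetime import datetime, timezone

# Same tokens as Splunk's strftime(now(), "%a %B %d %Y %H:%M:%S"):
# %a abbreviated weekday, %B full month name, then day/year/time
ts = datetime(2024, 9, 18, 11, 57, 19, tzinfo=timezone.utc)
print(ts.strftime("%a %B %d %Y %H:%M:%S"))  # → Wed September 18 2024 11:57:19
```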
Hi Team, I am using the query below to run the DNS lookup. Everything is fine, but I am not getting the time field aligned with my inputlookup data. If I remove the inputlookup and use an individual domain name, it works fine; however, I would like to have the time as well along with my inputlookup data.

| makeresults
| inputlookup append=t dns.csv
| dnsquery domainfield=domain qtype="A" answerfield="dns_response" nss="10.102.204.52"
| eval Status = case(isnotnull(dns_error), "UnReachable",1=1 , "Reachable")
| eval DateTime=strftime(_time,"%a %B %d %Y %H:%M:%S")
| table DateTime domain dns_response dns_error Status

The result is showing as (DateTime | domain | dns_response | dns_error | Status, with blanks where a field is empty):

Wed September 18 2024 11:57:19 | | | | Reachable
| ns1.vodacombusiness.co.za | 41.0.1.10 | | Reachable
| ns2.vodacombusiness.co.za | 41.0.193.10 | | Reachable
| ns3.vodacombusiness.co.za | - | Could not execute DNS query: A -> ns3.vodacombusiness.co.za. Error: None of DNS query names exist: ns3.vodacombusiness.co.za., ns3.vodacombusiness.co.za. | UnReachable
Hi @dhiraj, let me understand: if you have both events (the message and REQ=INI), running the first two items of my search you should have two types of events (check this in the interesting fields). The following stats command should then give you type_count=2 (if both are present) and type_count=1 if there's only one. If you have both search strings ("Error occurred during message exchange" and "REQ=INI") in two different events (as in your screenshots), you should have both types; if not, check the search strings and the eval condition. Ciao. Giuseppe
Hi Lawrence, can you please share your solution? A customer just asked us to collect audit logs from Marketing Cloud and we're trying to figure out how to do it. Thanks a lot! Marco
Hello everybody, I want to confirm that the fix to enable cgroup v2 on RHEL 8 has solved the issue for us as well. Regards, Harry