All Topics

Hello All, logs are not being indexed into Splunk. My configuration is below.

inputs.conf:
[monitor:///usr/logs/Client*.log*]
index = admin
crcSalt = <SOURCE>
disabled = false
recursive = false

props.conf:
[source::(...(usr/logs/Client*.log*))]
sourcetype = auth_log

My log file names follow this pattern:
Client_11.186.145.54:1_q1234567.log
Client_11.186.145.54:1_q1234567.log.~~
Client_12.187.146.53:2_s1234567.log
Client_12.187.146.53:2_s1234567.log.~~
Client_1.1.1.1:2_p1244567.log
Client_1.1.1.1:2_p1244567.log.~~

Some of the log files start with the line below, followed by the log events:
===== JLSLog: Maximum log file size is 5000000

For those files I tried the following, one at a time, but nothing worked:
1. Adding crcSalt = <SOURCE> to the monitor stanza.
2. Adding a SEDCMD in props.conf:
SEDCMD-removeheadersfooters = s/\=\=\=\=\=\sJLSLog:\s((Maximum\slog\sfile\ssize\sis\s\d+)|Initial\slog\slevel\sis\sLow)//g
3. Routing the header line to the nullQueue via transforms.conf:
transforms.conf:
[ignore_lines_starting_with_equals]
REGEX = ^===(.*)
DEST_KEY = queue
FORMAT = nullQueue
props.conf:
[auth_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)===
TRANSFORMS-null = ignore_lines_starting_with_equals

When I check splunkd.log there are no errors captured, and "splunk list inputstatus" shows:
percent = 100.00
type = finished reading / open file

The odd part is that sometimes only the first line of the log file gets indexed:
===== JLSLog: Maximum log file size is 5000000

Host/server details:
OS: Solaris 10
Splunk Universal Forwarder version: 7.3.9
Splunk Enterprise version: 9.1.1

The restriction is that the host OS cannot be upgraded right now, so I have to stay on the 7.3.9 forwarder version. Please help me with this issue if anyone has faced and fixed it before.
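For what it's worth, a minimal sketch of where such settings usually need to live, assuming a standard UF-to-indexer setup: SEDCMD and TRANSFORMS are applied at parse time, which happens on the indexers or a heavy forwarder rather than on a universal forwarder, so the monitor stanza stays on the forwarder while the stanzas below would be deployed to the parsing tier. The stanza and class names are taken from the post; this is a sketch, not a verified fix.

# props.conf on the indexer / heavy forwarder tier (not on the UF)
[auth_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRANSFORMS-null = ignore_lines_starting_with_equals

# transforms.conf on the same tier
[ignore_lines_starting_with_equals]
REGEX = ^=====\sJLSLog:
DEST_KEY = queue
FORMAT = nullQueue
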
How can I ingest Zeek logs into Splunk and analyze the data?
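A minimal sketch of one common approach, assuming Zeek writes JSON-formatted logs to /opt/zeek/logs/current on a host running a universal forwarder; the path, index name, and sourcetype here are placeholders, and an add-on such as the Splunk Add-on for Zeek can supply proper sourcetypes and field extractions.

# inputs.conf on the forwarder running next to Zeek
[monitor:///opt/zeek/logs/current/*.log]
index = zeek
sourcetype = zeek:json
disabled = false

Once data is flowing, a starting-point search (field names assume Zeek's JSON output):

index=zeek sourcetype=zeek:json | stats count by id.orig_h, id.resp_h
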
Hello, I have a problem installing a Python module on Splunk. I get a "pip not found" error whenever I try to use pip. I am not sure what is wrong here. Could someone please help me figure this out?
Hi All, I am looking into using some proxy logs to determine download volume for particular streaming sites, and I was looking for a way to merge hostnames into one "service". Consider the SPL:

index=proxy url=*.streaming-site.com
| eval megabytes=round(((bytes_in/1024)/1024),2)
| stats sum(megabytes) as Download_MB by url
| sort -Download_MB

This will likely return multiple rows like:
cdn1.streaming-site.com  180.3
cdn2.streaming-site.com  164.8
www.streaming-site.com   12.3

I want to merge those all into one row:
streaming-site.com  357.4

I have played around with the coalesce function, but this would be unsustainable for sites like Netflix, which have dozens of URLs associated with them. If anyone has any suggestions on how I might combine results with, say, a wildcard (*), I'd love to hear from you!
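A minimal sketch of one way to do this: derive a "service" field by keeping only the last two labels of the hostname, then sum by that. The regex assumes a two-label registered domain is enough; multi-part TLDs (for example .co.uk) would need extra handling.

index=proxy url=*.streaming-site.com
| eval megabytes=round(((bytes_in/1024)/1024),2)
| eval service=replace(url, "^.*?([^\.]+\.[^\.]+)$", "\1")
| stats sum(megabytes) as Download_MB by service
| sort -Download_MB
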
We've been collecting data with the inputs add-on (Input Add On for SentinelOne App For Splunk) for several years now. The Applications channel has always been a bit problematic, with the collection process running for several days, but now we haven't seen any data since Monday, February 19th around 5:00 PM. It's February 22nd, and we generally see Applications data every day.

We started seeing errors on February 16th:

error_message="cannot unpack non-iterable NoneType object" error_type="<class 'TypeError'>" error_arguments="cannot unpack non-iterable NoneType object" error_filename="s1_client.py" error_line_number="500" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"
error_message="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_type="<class 'management.mgmtsdk_v2.exceptions.InternalServerErrorException'>" error_arguments="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_filename="s1_client.py" error_line_number="223" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

And we have seen a few more errors since then:

error_message="cannot unpack non-iterable NoneType object" error_type="<class 'TypeError'>" error_arguments="cannot unpack non-iterable NoneType object" error_filename="s1_client.py" error_line_number="500" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"
error_message="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_type="<class 'management.mgmtsdk_v2.exceptions.InternalServerErrorException'>" error_arguments="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_filename="s1_client.py" error_line_number="188" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

After noting the following in the release notes:

Improvements
...
-- Applications input uses a new S1 API endpoint to reduce load on ingest.

we upgraded the add-on from version 5.19 to version 5.20. Now we're seeing the following messages in sentinelone-modularinput.log:

2024-02-22 13:40:02,171 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="630" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=saving_checkpoint msg='not saving checkpoint in case there was a communication error' start=1708026001000 items_found=0 channel=applications
2024-02-22 13:40:01,526 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="599" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=calling_applications_channel status=start start=1708026001000 start_length=13 start_type=<class 'str'> end=1708630801000 end_length=13 end_type=<class 'str'> checkpoint=1708026001.525169 channel=applications
2024-02-22 13:40:01,526 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="580" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=got_checkpoint checkpoint={'last_execution': 1708026001.525169} channel=applications last_execution=1708026001.525169
2024-02-22 13:40:01,525 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="565" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=got_checkpoint checkpoint={'last_execution': 1708026001.525169} channel=applications type=<class 'dict'>

It appears that the input is running, but we're not seeing any events. We also noted the following in the documentation for version 5.2.0:

sourcetype: sentinelone:channel:applications | SentinelOne API: web/api/v2.1/installed-applications | Description: Deprecated

Does this mean that the input has been deprecated? If so, what does the statement "Applications input uses a new S1 API endpoint to reduce load on ingest." in the release notes mean? And why is the Applications channel still an option when creating inputs through the Splunk UI? Any information you can provide on the Applications channel would be greatly appreciated.
I have a log like this:

2024-02-22 12:49:38:344 EST| INFO |InterfaceName=USCUSTOMERINV INVCanonicalProcess Sender_ID=ThermoFisher Scientific Receiver_ID =Marshall University-Unimarket TxnType=Invoice TotalAmount=-1916.83 Status=Success
2024-02-22 11:51:12:992 EST| INFO |InterfaceName=USCUSTOMERINV INVCanonicalProcess Sender_ID=ThermoFisher Scientific Receiver_ID =Mammoth Bio via Coupa TxnType=Invoice TotalAmount=4190.67 Status=Success

The query below gives the monthly total:

index="webmethods_qa" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" Status=success OR STATUS=Success OR Status=Failure USCUSTOMERINV
| timechart sum(TotalAmount) span=1mon

But I need the invoice total for each Receiver_ID over a one-month span. How can I do that?
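A minimal sketch of one way to split the monthly total by Receiver_ID, reusing the search from the post and assuming Receiver_ID and TotalAmount are already extracted as fields:

index="webmethods_qa" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" USCUSTOMERINV (Status=Success OR Status=Failure)
| timechart span=1mon sum(TotalAmount) by Receiver_ID

If a flat table is preferred over a chart, an equivalent form is:

| bin _time span=1mon | stats sum(TotalAmount) as InvoiceTotal by _time, Receiver_ID
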
Heavy forwarder or indexer crashes with a FATAL error on the typing thread.

Note: the issue is now fixed for the next 9.2.2/9.1.5/9.0.10 patches.

Crashing thread: typing_0
Backtrace (PIC build):
[0x00007F192F4C2ACF] gsignal + 271 (libc.so.6 + 0x4EACF)
[0x00007F192F495EA5] abort + 295 (libc.so.6 + 0x21EA5)
[0x000055E24388D6C0] ? (splunkd + 0x1A366C0)
[0x000055E24388D770] ? (splunkd + 0x1A36770)
[0x000055E2445D6D24] PipelineInputChannelReference::PipelineInputChannelReference(Str const**, PipelineInputChannelSet*, bool) + 388 (splunkd + 0x277FD24)
[0x000055E2445BACC3] PipelineData::set_channel(Str const*, Str const*, Str const*) + 243 (splunkd + 0x2763CC3)
[0x000055E2445BAF9E] PipelineData::recomputeConfKey(PipelineSet*, bool) + 286 (splunkd + 0x2763F9E)
[0x000055E243E3689E] RegexExtractionProcessor::each(CowPipelineData&, PipelineDataVector*, bool) + 718 (splunkd + 0x1FDF89E)
[0x000055E243E36BF3] RegexExtractionProcessor::executeMulti(PipelineDataVector&, PipelineDataVector*) + 67 (splunkd + 0x1FDFBF3)
[0x000055E243BCD5F2] Pipeline::main() + 1074 (splunkd + 0x1D765F2)
[0x000055E244C336FD] Thread::_callMainAndDiscardTerminateException() + 13 (splunkd + 0x2DDC6FD)
[0x000055E244C345F2] Thread::callMain(void*) + 178 (splunkd + 0x2DDD5F2)
[0x00007F192FF1F1CA] ? (libpthread.so.0 + 0x81CA)
[0x00007F192F4ADE73] clone + 67 (libc.so.6 + 0x39E73)

Last few lines of stderr (may contain info on assertion failure, but also could be old):
Fatal thread error: pthread_mutex_lock: Invalid argument; 117 threads active, in typing_0

This crash happens if a persistent queue is enabled. It has been reported for several years; I see one report from back in 2015 as well: https://community.splunk.com/t5/Monitoring-Splunk/What-would-cause-a-Fatal-thread-error-in-thread-typing-found-in/m-p/261407

The bug has always existed, but the interesting part is that since 9.x the frequency of crashes has gone up. More customers are reporting crashes now because the probability of hitting the race condition has increased. We are fixing the issue (internal ticket SPL-251434) for the next patch; in the meantime, here are a few workarounds to consider, depending on what is feasible for your requirements.
The reason for the higher crash frequency in 9.x on instances with a persistent queue enabled is that the forwarders (UF/HF/IUF/IHF) send data at a faster rate due to the 9.x autoBatch behavior. The small in-memory part of the persistent queue (default 500KB) therefore makes it nearly impossible to avoid bringing the on-disk part of the persistent queue into play: a 9.x receiver with a persistent queue is writing to disk nearly all the time, even when downstream pipeline queues are not saturated. So the best way to bring the crash frequency back to the level of 8.x or older is to increase the in-memory part of the persistent queue, so that nothing is written to disk unless downstream queues are full. The fundamental bug still remains, however, and will be fixed in a patch. The workarounds below reduce the likelihood of disk writes for the persistent queue; have a look at the options and see which one works for you.

1. Turn off the persistent queue on the splunktcp input port (surely not feasible for everyone). This eliminates the crash.

2. Disable the splunk_internal_metrics app, as it does sourcetype cloning for metrics.log. Most of us are probably not aware that metrics.log is cloned and additionally indexed into the _metrics index. If you are not using the _metrics index, disable the app. For the crash to happen you need two conditions: a) a persistent queue, and b) sourcetype cloning.

3. Apply the following settings to reduce the chances of a crash.

limits.conf:
[input_channels]
max_inactive=300001
lowater_inactive=300000
inactive_eligibility_age_seconds=120

inputs.conf, increasing the in-memory queue size of the persistent queue (depending on SSL or non-SSL port):
[splunktcp-ssl:<port>]
queueSize=100MB
[splunktcp:<port>]
queueSize=100MB

Also enable async forwarding on the HF/IUF/IHF (the crashing instance).

4. Slow down the forwarders by setting autoBatch=false on all universal forwarders/heavy forwarders.
My input module relies on API data, and I have decided to move the connection timeout configuration options over to the global configuration, e.g.:

helper.get_global_setting("read_timeout")

rather than requiring them to be set individually per input module. However, it appears there is no validate_input functionality for global configuration options similar to what exists for the input module. There is no documentation for this, but you would think that being able to validate global config options inherited by every input module would be important. I now have to figure out how to do this in each input module, which delays telling users they have made a bad config at the point where they enter it. I cannot rely on something like log output, because Splunk Cloud does not give much access to logs, so I'm more or less stuck with resetting the value or keeping the settings in the input modules. Is there any way this can be achieved?
Hello, we have been running Website Monitoring for a while, and recently it started to continuously report connection timeout errors on and off for the URLs we track. We checked the network and found no issues. Is it possible that Splunk or the Website Monitoring add-on is corrupt? Any suggestions?
I can see the business transactions (0 of 10), but I can't see the transactions or the application flow map. What could be the reason for this?
I am trying to blacklist an event that is generating a lot of logs. I previously asked this question here: Solved: Re: Splunk Blacklist DesktopExtension.exe addition... - Splunk Community, but the solution is not working. Any other thoughts on how to blacklist DesktopExtension.exe for Windows Security events? This is what I have so far:

blacklist = EventCode=4673 message="DesktopExtension\.exe"
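For comparison, a minimal sketch of how the advanced blacklist format is usually written for the Windows event log input; the key/regex pairs below follow the documented inputs.conf syntax, and whether the process name appears in the Message text of 4673 in your environment is an assumption:

# inputs.conf on the forwarder collecting Security events
[WinEventLog://Security]
blacklist1 = EventCode="4673" Message="(?i)DesktopExtension\.exe"
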
Hello, how do I provide access to a dashboard for a limited set of users (identified by email address) through the backend?
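Splunk dashboard permissions are role-based rather than email-based, so one common pattern is to map those users to a dedicated role and restrict the dashboard's read access to that role in the app's metadata. A minimal sketch, where the dashboard name and role name are placeholders:

# $SPLUNK_HOME/etc/apps/<app_name>/metadata/local.meta
[views/my_dashboard]
access = read : [ dashboard_viewers ], write : [ admin ]
export = none
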
I am trying to configure the distributed Monitoring Console without the UI (for automation purposes). It seems that I have gotten most things right - all instances show up the way I want them to - however they are all marked as "unreachable". It seems that I must do the step where I provide credentials for the MC host to log in to the monitored host, but I cannot figure out what this step actually does, and I cannot find anything that hints at the credentials being stored anywhere. So what does this login process actually do, and how can I mimic that behaviour for the MC from the command line / via config files? Any insight into how the setup process works behind the scenes would be appreciated.
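For what it's worth, a sketch of the equivalent CLI step, under the assumption that the UI is essentially adding each monitored instance as a distributed search peer of the MC node (the credentials are used once for the trust exchange rather than being stored); host names and passwords below are placeholders:

./splunk add search-server https://monitored-host:8089 -auth admin:localpass -remoteUsername admin -remotePassword remotepass

which, on the MC, ends up as an entry like this in distsearch.conf:

[distributedSearch]
servers = https://monitored-host:8089
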
I have an application that I am trying to monitor. There is a specific event code for when the tool is opened for modification (EventCode=250) and an event code for when it is closed (EventCode=100). These two codes include a user name, but the events between them do not. How can I write a search that looks for these two events and then displays the changes between them along with the username who completed the change?

| from datamodel:P3
| search EventCode=250 OR 100 OR 70 OR 80
| eval user = coalesce(User, Active_User)
| eval Event_Time=strftime(_time,"%m/%d/%y %I:%M:%S %P")
| table Event_Time, host, user, Device_Added, Device_SN, Device_ID, EventCode, EventDescription

Event_Time           | host     | user  | Device_Added | Device_SN | Device_ID | EventCode
02/22/24 08:49:44 am | Test-Com | xxxxx |              |           |           | 100
02/21/24 03:59:12 pm | Test-Com | xxxxx |              |           |           | 250
02/21/24 03:56:08 pm | Test-Com | xxxxx |              |           |           | 100
02/21/24 03:56:00 pm | Test-Com |       | USB 1        | 12345     | PID_1     | 70
02/21/24 03:56:00 pm | Test-Com |       | USB 2        | 6789      | PID_2     | 70
02/21/24 03:51:10 pm | Test-Com |       | USB 1        | 12345     | PID_1     | 80
02/21/24 03:50:44 pm | Test-Com | xxxxx |              |           |           | 250
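A minimal sketch of one approach using the transaction command to group everything between an open (250) and a close (100) per host, so the events in between inherit the boundary events' user via the grouped result. Field and data model names are taken from the post; the startswith/endswith strings are assumptions about how the events search.

| from datamodel:P3
| search EventCode=250 OR EventCode=100 OR EventCode=70 OR EventCode=80
| eval user=coalesce(User, Active_User)
| transaction host startswith="EventCode=250" endswith="EventCode=100"
| eval Event_Time=strftime(_time,"%m/%d/%y %I:%M:%S %P")
| table Event_Time, host, user, Device_Added, Device_SN, Device_ID, EventCode
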
Hi, I am looking to grab all Windows events for successful NTLM logins that don't use Kerberos. Here is my query so far:

"eventcode=4776" "Error Code: 0x0" ntlm

I think this is working as of now; however, it brings back results that include the value Kerberos. I tried using NOT "Kerberos", but it completely broke my search results. I am looking to grab only the values of "Account Name:" and "Source Network Address:" and then export them to a CSV file every week. Is this something I can do with Splunk? If so, any help would be appreciated. Thanks.
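A minimal sketch of the general shape this usually takes, assuming the account and source address appear in the raw event text with the labels quoted in the post (the rex patterns, index name, and output file name are assumptions); saving this as a weekly scheduled report takes care of the recurring export.

index=* EventCode=4776 "Error Code: 0x0" NTLM
| rex "Account Name:\s+(?<Account_Name>\S+)"
| rex "Source Network Address:\s+(?<Source_Network_Address>\S+)"
| table Account_Name, Source_Network_Address
| outputcsv weekly_ntlm_logins
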
Hi, I have a stats table with the following:
Hello Splunk members! I have a CSV lookup file with 2 columns:

ClientName | HWDetSystem
BD-K-027EY | VMware

I have an index with ASA firewall logs which I want to search to find events for all the ClientName values in the CSV:

234654252.234 %ASA-3-2352552: Certificate was successfully validated. serial number: 1123423SSDDG23442234234DSGSGSGGSSG8, subject name: CN=BD-K-027EY.bl.emea.something.com.

The common element between the CSV lookup file and the event is the ClientName, which appears as a portion of the subject name. If I search for "successfully" and provide a single client name I get the event I want, but I am struggling to look it up for all the clients and make the result unique. In the end I just want a list of ClientName values for which the event was logged. Thanks.
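A minimal sketch of one approach: extract the host portion of the subject CN with rex, then match it against the lookup and keep only the rows that matched. The lookup file name is a placeholder (it assumes the CSV has been uploaded as a lookup table file), and the rex assumes the ClientName is always the first label of the CN.

index=asa "Certificate was successfully validated"
| rex "subject name:\s+CN=(?<ClientName>[^.,]+)"
| lookup client_hw_lookup.csv ClientName OUTPUT HWDetSystem
| where isnotnull(HWDetSystem)
| stats count by ClientName
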
Hi, we have two indexes which are stuck in a fixup task. Our environment consists of several indexer peers attached to SmartStore. This morning there is a warning that the search factor and replication factor are not met, and two indexes are in this degraded state. Checking the bucket status, there are two buckets from two different indexes which don't get fixed. Those buckets are listed under the search factor fixup, replication factor fixup, and generation tasks; the last one has the notice "No possible primaries". Searching on the indexer mentioned in the bucket info, it says:

DatabaseDirectoryManager [838121 TcpChannelThread] - unable to check if cache_id="bid|aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319|" is stable with CacheManager as it is not present in CacheManager

and

ERROR ClusterSlaveBucketHandler [838121 TcpChannelThread] - Failed to trigger replication (err='Cannot replicate remote storage enabled warm bucket, bid=aaaaaa~183~839799B0-6EAF-436C-B12A-2CDC010C1319 until it's uploaded'

What can be wrong, and what can we do about it? Thanks in advance. Splunk Enterprise v9.0.5, on-premises SmartStore.
I have a lookup file like the one below. The query should send an email to each person containing that person's row information. The mail should go to the firstmail address; if firstmail is empty, it should go to secondarymail; and if both firstmail and secondarymail are empty, it should go to thirdmail.

Emp | occupation | location | firstmail    | secondarymail | thirdmail
abc | aaa        | hhh      | aa@mail.com  | gg@mail.com   |
def | ghjk       | gggg     | bb@mail.com  | ff@mail.com   |
ghi | lmo        | iiii     |              | hh@mail.com   |
jkl | pre        | jjj      |              |               | dd@mail.com
mno | swq        | kkk      | aa@mail.com  | ii@mail.com   |

For example, aa@mail.com should receive a mail like the one below, in tabular format:

Emp | occupation | location | firstmail    | secondarymail | thirdmail
abc | aaa        | hhh      | aa@mail.com  | gg@mail.com   |
mno | swq        | kkk      | aa@mail.com  | ii@mail.com   |

So the query should read the complete table and send mails to each person individually, containing that specific row information in tabular format. Please help me with the query, and let me know in case you need any clarification on the requirement.
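A minimal sketch of one common pattern: resolve the recipient with coalesce (treating blank cells as nulls via nullif), then use map to run one sendemail per distinct recipient with only that recipient's rows. The lookup file name and subject line are placeholders, and map's maxsearches limit is worth keeping in mind for larger lists.

| inputlookup employee_contacts.csv
| eval recipient=coalesce(nullif(firstmail,""), nullif(secondarymail,""), nullif(thirdmail,""))
| stats count by recipient
| map maxsearches=100 search="| inputlookup employee_contacts.csv | eval recipient=coalesce(nullif(firstmail,\"\"), nullif(secondarymail,\"\"), nullif(thirdmail,\"\")) | search recipient=\"$recipient$\" | table Emp occupation location firstmail secondarymail thirdmail | sendemail to=\"$recipient$\" subject=\"Your records\" sendresults=true inline=true format=table"

Here sendresults=true with inline=true puts the matching rows into the email body as a table, so each person only sees their own rows.
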
Hello fellow Splunkthusiasts! TL;DR: Is there any way to connect one indexer cluster to two distinct license servers?

Our company has two different licenses: one acquired directly by the company (we possess the license file), and one acquired by a corporate group to which our company belongs, provided to us through the group's license server (it is actually a larger license split into several pools, one of which is available to us). The obvious solution is to have one indexer cluster for each license, with search heads searching both clusters. However, the two licenses together come to approximately 100GB/day, so building two independent indexer clusters feels like a waste of resources. What is the best way to approach this?