All Posts


Thank you for it, but I need only one mail to be sent even though a recipient has multiple rows of data.
Thanks for this! More tweaking required on my part as some of the subdomains being evaluated have more than 3 levels, but this is a big help in getting me on the right track!
The coalesce function selects a field within a single result. To combine (aggregate) multiple results, use the stats command again after modifying the url field.

index=proxy url=*.streaming-site.com
| eval megabytes=round(((bytes_in/1024)/1024),2)
| stats sum(megabytes) as Download_MB by url
| eval url=replace(url, ".*?\.(.*)","\1")
| stats sum(Download_MB) as Download_MB by url
| sort - Download_MB
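As a side note, the replace() regex above can be sanity-checked outside Splunk. This small Python sketch (hypothetical hostnames, same pattern) shows that it strips everything up to and including the first dot:

```python
import re

# Same pattern as the SPL replace(): lazily match up to the first dot,
# keep the remainder (the parent domain) in group 1.
PATTERN = re.compile(r".*?\.(.*)")

def strip_first_label(url: str) -> str:
    """Drop the leftmost hostname label, e.g. cdn1.example.com -> example.com."""
    m = PATTERN.fullmatch(url)
    return m.group(1) if m else url

for host in ["cdn1.streaming-site.com", "www.streaming-site.com"]:
    print(strip_first_label(host))
```

Note that a hostname with several subdomain levels only loses its leftmost label per pass, which matches the earlier comment about subdomains with more than three levels needing extra tweaking.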
Thanks for the input. Escaping the escape characters seems a bit silly, but alright. I couldn't get it working today so I'll try a few more variations next week as I have time. Appreciate the help!
Did not realize that. Thank you for the correction. Removing quotes didn't exclude the Teams events though so I must have something else set wrong. As far as what I have posted, does it seem right? I'm not super familiar with troubleshooting props.conf and transforms.conf settings yet.
Hi All,

I am looking into using some proxy logs to determine download volume for particular streaming sites, and was looking for a way to merge hostnames into one "service". Consider the SPL:

index=proxy url=*.streaming-site.com
| eval megabytes=round(((bytes_in/1024)/1024),2)
| stats sum(megabytes) as Download_MB by url
| sort -Download_MB

This will likely return multiple rows like:

cdn1.streaming-site.com    180.3
cdn2.streaming-site.com    164.8
www.streaming-site.com      12.3

I want to merge those all into one row:

streaming-site.com    357.4

I have played around with the coalesce function, but this would be unsustainable for sites like Netflix, which have dozens of URLs associated with them. If anyone has any suggestions on how I might combine results with, say, a wildcard (*), I'd love to hear from you!
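For what it's worth, the merge being asked for here is just a group-by on the parent domain. A minimal Python sketch (using made-up rows matching the example above) illustrates the intended aggregation:

```python
from collections import defaultdict

# Example rows from the post: (url, Download_MB)
rows = [
    ("cdn1.streaming-site.com", 180.3),
    ("cdn2.streaming-site.com", 164.8),
    ("www.streaming-site.com", 12.3),
]

totals = defaultdict(float)
for url, mb in rows:
    # Collapse each hostname to its parent domain by dropping the first label.
    parent = url.split(".", 1)[1]
    totals[parent] += mb

print(dict(totals))  # one merged row per "service"
```

In SPL terms, the "drop the first label" step is the eval/replace, and the dictionary accumulation is the second stats sum() by url.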
SOARt_of_Lost,

Thanks for the reply. The whole VPE is kinda clunky, but I guess that's part of what SOAR is for: providing a visual programming interface. I ended up writing a Python module and installing it via the backend procedure with pip.
Looks closer     
We've been collecting data with the inputs add-on (Input Add On for SentinelOne App For Splunk) for several years now. The applications channel has always been a bit problematic, with the collection process running for several days, but now we haven't seen any data since Monday, February 19th around 5:00 PM. It's February 22nd and we generally see applications data every day.

We started seeing errors on February 16th:

error_message="cannot unpack non-iterable NoneType object" error_type="<class 'TypeError'>" error_arguments="cannot unpack non-iterable NoneType object" error_filename="s1_client.py" error_line_number="500" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

error_message="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_type="<class 'management.mgmtsdk_v2.exceptions.InternalServerErrorException'>" error_arguments="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_filename="s1_client.py" error_line_number="223" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

And we have seen a few errors since then:

error_message="cannot unpack non-iterable NoneType object" error_type="<class 'TypeError'>" error_arguments="cannot unpack non-iterable NoneType object" error_filename="s1_client.py" error_line_number="500" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

error_message="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_type="<class 'management.mgmtsdk_v2.exceptions.InternalServerErrorException'>" error_arguments="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_filename="s1_client.py" error_line_number="188" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

After noting the following in the release notes:

Improvements ... -- Applications input uses a new S1 API endpoint to reduce load on ingest.

we upgraded the add-on from version 5.19 to version 5.20. Now we're seeing the following messages in sentinelone-modularinput.log:

2024-02-22 13:40:02,171 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="630" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=saving_checkpoint msg='not saving checkpoint in case there was a communication error' start=1708026001000 items_found=0 channel=applications
2024-02-22 13:40:01,526 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="599" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=calling_applications_channel status=start start=1708026001000 start_length=13 start_type=<class 'str'> end=1708630801000 end_length=13 end_type=<class 'str'> checkpoint=1708026001.525169 channel=applications
2024-02-22 13:40:01,526 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="580" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=got_checkpoint checkpoint={'last_execution': 1708026001.525169} channel=applications last_execution=1708026001.525169
2024-02-22 13:40:01,525 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="565" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=got_checkpoint checkpoint={'last_execution': 1708026001.525169} channel=applications type=<class 'dict'>

It appears that the input is running, but we're not seeing any events. We also noted the following in the documentation for version 5.2.0:

sourcetype: sentinelone:channel:applications
SentinelOne API: web/api/v2.1/installed-applications
Description: Deprecated

Does this mean that the input has been deprecated? If so, what does the statement "Applications input uses a new S1 API endpoint to reduce load on ingest." in the release notes mean? And why is the Applications channel still an option when creating inputs through the Splunk UI? Any information you can provide on the applications channel would be greatly appreciated.
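Incidentally, the "not saving checkpoint in case there was a communication error" warning above matches a common modular-input pattern: skip the checkpoint write when a poll returns nothing, so a transient API failure doesn't silently advance the time window. Here is a minimal hypothetical sketch of that pattern (not the add-on's actual code; the file name and structure are mine):

```python
import json
import os

CHECKPOINT_FILE = "applications.checkpoint"  # hypothetical path

def load_checkpoint(default_start: float) -> float:
    """Return the last successful execution time, or a default on first run."""
    if os.path.exists(CHECKPOINT_FILE):
        with open(CHECKPOINT_FILE) as f:
            return json.load(f)["last_execution"]
    return default_start

def save_checkpoint(ts: float) -> None:
    with open(CHECKPOINT_FILE, "w") as f:
        json.dump({"last_execution": ts}, f)

def poll_channel(fetch, now: float) -> list:
    """Fetch items since the checkpoint; advance the checkpoint only on success."""
    start = load_checkpoint(default_start=now - 3600)
    items = fetch(start, now)
    if items:
        save_checkpoint(now)  # advance the window only when data came back
    # else: leave the checkpoint alone, in case there was a communication error
    return items
```

The upshot for the symptoms above: if the API keeps returning nothing (or erroring), items_found stays 0 and the checkpoint never moves, which is consistent with a running input that produces no events.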
This is very much a question of efficiency. If you have a relatively small number of event 70 in a short period of time, but event 250 was some long time ago, using a subsearch would be more efficient than retrieving both types of events for a long period of time.

You also need to tell us which EventCodes give you User, and which give you Active_User. Assuming that EventCode 250 gives you Active_User but 70 gives you User, you can do something like:

| from datamodel:P3
| search EventCode=250 earliest=-1mon ``` earliest value for demonstration purposes only ```
    [from datamodel:P3
    | search EventCode=70 earliest=-1h ``` earliest value for demonstration purposes only ```
    | stats values(User) as Active_User ``` assuming User is present in EventCode 70 to match Active_User in EventCode 250 ```]
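The subsearch above effectively builds a filter from the distinct User values in the recent EventCode 70 events and applies it to the EventCode 250 search. In plain terms, a rough Python analogy (made-up events; this only illustrates the filtering idea, not full SPL semantics):

```python
# Made-up events: (EventCode, user) pairs.
events_250 = [(250, "alice"), (250, "bob"), (250, "carol")]
events_70 = [(70, "alice"), (70, "carol")]

# Inner "subsearch": collect the distinct User values from EventCode 70,
# analogous to `stats values(User) as Active_User`.
active_users = {user for _, user in events_70}

# Outer search: keep only EventCode 250 events whose Active_User matches.
matched = [e for e in events_250 if e[1] in active_users]
print(matched)
```

The efficiency point is that the inner set is built over a short time range, while only the matching outer events are retrieved over the long one.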
Do you mean

index="webmethods_qa" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" Status=success OR STATUS=Success OR Status=Failure USCUSTOMERINV
| timechart sum(TotalAmount) span=1mon by Sender_ID
I have a log like this:

2024-02-22 12:49:38:344 EST| INFO |InterfaceName=USCUSTOMERINV INVCanonicalProcess Sender_ID=ThermoFisher Scientific Receiver_ID =Marshall University-Unimarket TxnType=Invoice TotalAmount=-1916.83 Status=Success
2024-02-22 11:51:12:992 EST| INFO |InterfaceName=USCUSTOMERINV INVCanonicalProcess Sender_ID=ThermoFisher Scientific Receiver_ID =Mammoth Bio via Coupa TxnType=Invoice TotalAmount=4190.67 Status=Success

The query below gives a monthly total:

index="webmethods_qa" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" Status=success OR STATUS=Success OR Status=Failure USCUSTOMERINV
| timechart sum(TotalAmount) span=1mon

But I need, for each Receiver_ID, the invoice total over a one-month span. How do I do that?
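In case it helps to see the intended result concretely, here is a small Python sketch that extracts Receiver_ID and TotalAmount from the two sample lines above and sums per receiver; in SPL this corresponds to adding a split-by field (e.g. `by Receiver_ID`) to the timechart:

```python
import re
from collections import defaultdict

log = """\
2024-02-22 12:49:38:344 EST| INFO |InterfaceName=USCUSTOMERINV INVCanonicalProcess Sender_ID=ThermoFisher Scientific Receiver_ID =Marshall University-Unimarket TxnType=Invoice TotalAmount=-1916.83 Status=Success
2024-02-22 11:51:12:992 EST| INFO |InterfaceName=USCUSTOMERINV INVCanonicalProcess Sender_ID=ThermoFisher Scientific Receiver_ID =Mammoth Bio via Coupa TxnType=Invoice TotalAmount=4190.67 Status=Success"""

# Note the stray space in "Receiver_ID =" in the sample lines, and that
# receiver names contain spaces, so we anchor on the following TxnType field.
pattern = re.compile(
    r"Receiver_ID =(?P<receiver>.+?) TxnType=\w+ TotalAmount=(?P<amount>-?[\d.]+)"
)

totals = defaultdict(float)
for line in log.splitlines():
    m = pattern.search(line)
    if m:
        totals[m.group("receiver")] += float(m.group("amount"))

print(dict(totals))  # one total per Receiver_ID
```

The space inside "Receiver_ID =" also matters on the Splunk side: automatic field extraction may not pick that field up cleanly, so a rex or props-based extraction for Receiver_ID may be needed before splitting by it.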
We upgraded Splunk, then the app to 1.4.6, but kept getting the same errors. The solution was rather silly: it couldn't run python3.exe because the Python installer had named it python312.exe... we renamed it and the app started working.
@yuanliu Is there a way to say: if EventCode=70, look upstream for EventCode=250 and join User? I am only trying to capture who created the event.
Getting the same error.  Unable to create FIFO: path="/opt/splunk/var/run/splunk/dispatch/ Please assist
What I mean by backend is through the CLI.
See if this link helps.
Heavy forwarder or indexer crashes with a FATAL error on the typing thread.

Note: the issue is now fixed for the upcoming 9.2.2/9.1.5/9.0.10 patches.

Crashing thread: typing_0
Backtrace (PIC build):
[0x00007F192F4C2ACF] gsignal + 271 (libc.so.6 + 0x4EACF)
[0x00007F192F495EA5] abort + 295 (libc.so.6 + 0x21EA5)
[0x000055E24388D6C0] ? (splunkd + 0x1A366C0)
[0x000055E24388D770] ? (splunkd + 0x1A36770)
[0x000055E2445D6D24] PipelineInputChannelReference::PipelineInputChannelReference(Str const**, PipelineInputChannelSet*, bool) + 388 (splunkd + 0x277FD24)
[0x000055E2445BACC3] PipelineData::set_channel(Str const*, Str const*, Str const*) + 243 (splunkd + 0x2763CC3)
[0x000055E2445BAF9E] PipelineData::recomputeConfKey(PipelineSet*, bool) + 286 (splunkd + 0x2763F9E)
[0x000055E243E3689E] RegexExtractionProcessor::each(CowPipelineData&, PipelineDataVector*, bool) + 718 (splunkd + 0x1FDF89E)
[0x000055E243E36BF3] RegexExtractionProcessor::executeMulti(PipelineDataVector&, PipelineDataVector*) + 67 (splunkd + 0x1FDFBF3)
[0x000055E243BCD5F2] Pipeline::main() + 1074 (splunkd + 0x1D765F2)
[0x000055E244C336FD] Thread::_callMainAndDiscardTerminateException() + 13 (splunkd + 0x2DDC6FD)
[0x000055E244C345F2] Thread::callMain(void*) + 178 (splunkd + 0x2DDD5F2)
[0x00007F192FF1F1CA] ? (libpthread.so.0 + 0x81CA)
[0x00007F192F4ADE73] clone + 67 (libc.so.6 + 0x39E73)

Last few lines of stderr (may contain info on assertion failure, but also could be old):
Fatal thread error: pthread_mutex_lock: Invalid argument; 117 threads active, in typing_0

This crash happens if persistent queue is enabled. It has been reported for several years; I see one reported back in 2015 as well: https://community.splunk.com/t5/Monitoring-Splunk/What-would-cause-a-Fatal-thread-error-in-thread-typing-found-in/m-p/261407

The bug has always existed, but the interesting part is that since 9.x the frequency of crashes has gone up; more customers are reporting crashes now because the probability of hitting the race condition has increased. We are fixing the issue (internal ticket SPL-251434) for the next patch; in the meantime, here are a few workarounds to consider depending on what is feasible for your requirement.
The reason for the high frequency of 9.x crashes on instances with persistent queue enabled is that the forwarders (UF/HF/IUF/IHF) send data at a faster rate due to 9.x autoBatch, so the small in-memory part of the persistent queue (default 500KB) makes it nearly impossible to avoid bringing the on-disk part of the persistent queue into play. This means a 9.x receiver with persistent queue is writing to disk nearly all the time, even when downstream pipeline queues are not saturated. So the best way to bring the crash frequency down to the level of 8.x or older is to increase the in-memory part of the persistent queue (so that, when no downstream queues are full, there are no disk writes to the persistent queue). The fundamental bug still remains, however, and will be fixed in a patch.

The workarounds all reduce the possibility of disk writes for the persistent queue. Have a look at the four possible workarounds and see which one works for you.

1. Turn off persistent queue on the splunktcpin port (I'm sure this is not feasible for everyone). This will eliminate the crash.

2. Disable the `splunk_internal_metrics` app, as it does sourcetype cloning for metrics.log. Most of us are probably not aware that metrics.log is cloned and additionally indexed into the `_metrics` index. If you are not using the `_metrics` index, disable the app. For the crash to happen, you need two conditions: a) persistent queue, and b) sourcetype cloning.

3. Apply the following configs to reduce the chances of crashes.

limits.conf:

[input_channels]
max_inactive=300001
lowater_inactive=300000
inactive_eligibility_age_seconds=120

inputs.conf, increasing the in-memory queue size of the PQ (depending on SSL or non-SSL port):

[splunktcp-ssl:<port>]
queueSize=100MB

[splunktcp:<port>]
queueSize=100MB

Also enable async forwarding on the HF/IUF/IHF (the crashing instance).

4. Slow down forwarders by setting `autoBatch=false` on all universal forwarders/heavy forwarders.
My input module relies on API data, and I have decided to move the connection timeout configuration options over to global configuration, e.g.:

helper.get_global_setting("read_timeout")

rather than requiring them to be set individually per input module. However, it appears there is no `validate_input` functionality for global configuration options similar to the one for input modules. There is no documentation for this, but you would think that being able to validate global config options inherited by every input module would be important. I now have to figure out how to do this in each input module, which delays telling the user they have entered a bad config until well after they enter it. I cannot rely on something like log info, because Splunk Cloud gives little access to logs, so I'm more or less reliant on resetting the value or keeping the options in the input modules. Is there any way this can be achieved?
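Lacking a validate-on-save hook for global settings, one workaround is a small shared validator called at the top of each input's collection routine, so every input fails fast and identically on a bad global value. A hypothetical sketch (the `helper` object and the `read_timeout` setting name come from the post above; the validator itself is mine):

```python
def validate_read_timeout(raw_value):
    """Validate a read_timeout global setting; return it as a positive int.

    Raises ValueError with a user-facing message, so the input can surface
    the bad config early instead of failing deep inside an API call.
    """
    try:
        timeout = int(raw_value)
    except (TypeError, ValueError):
        raise ValueError("read_timeout must be an integer number of seconds")
    if timeout <= 0:
        raise ValueError("read_timeout must be greater than zero")
    return timeout

# In each input module's collect routine, something like:
#     timeout = validate_read_timeout(helper.get_global_setting("read_timeout"))
# The raised message then appears in the input's error output rather than
# as an obscure downstream connection failure.
```

This doesn't validate at the moment the user saves the global config, which is the real ask, but it at least centralizes the check so it isn't re-implemented inconsistently in each input.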