All Posts

@yuanliu  Is there a way to say: if EventCode=70, look upstream for EventCode=250 and join User? I am only trying to capture who created the event.
Getting the same error: Unable to create FIFO: path="/opt/splunk/var/run/splunk/dispatch/ Please assist.
What I mean by backend is through the CLI.
See if this link helps.
Heavy forwarder or indexer crashes with a FATAL error on the typing thread.

Note: the issue is now fixed for the upcoming 9.2.2/9.1.5/9.0.10 patches.

Crashing thread: typing_0
Backtrace (PIC build):
[0x00007F192F4C2ACF] gsignal + 271 (libc.so.6 + 0x4EACF)
[0x00007F192F495EA5] abort + 295 (libc.so.6 + 0x21EA5)
[0x000055E24388D6C0] ? (splunkd + 0x1A366C0)
[0x000055E24388D770] ? (splunkd + 0x1A36770)
[0x000055E2445D6D24] PipelineInputChannelReference::PipelineInputChannelReference(Str const**, PipelineInputChannelSet*, bool) + 388 (splunkd + 0x277FD24)
[0x000055E2445BACC3] PipelineData::set_channel(Str const*, Str const*, Str const*) + 243 (splunkd + 0x2763CC3)
[0x000055E2445BAF9E] PipelineData::recomputeConfKey(PipelineSet*, bool) + 286 (splunkd + 0x2763F9E)
[0x000055E243E3689E] RegexExtractionProcessor::each(CowPipelineData&, PipelineDataVector*, bool) + 718 (splunkd + 0x1FDF89E)
[0x000055E243E36BF3] RegexExtractionProcessor::executeMulti(PipelineDataVector&, PipelineDataVector*) + 67 (splunkd + 0x1FDFBF3)
[0x000055E243BCD5F2] Pipeline::main() + 1074 (splunkd + 0x1D765F2)
[0x000055E244C336FD] Thread::_callMainAndDiscardTerminateException() + 13 (splunkd + 0x2DDC6FD)
[0x000055E244C345F2] Thread::callMain(void*) + 178 (splunkd + 0x2DDD5F2)
[0x00007F192FF1F1CA] ? (libpthread.so.0 + 0x81CA)
[0x00007F192F4ADE73] clone + 67 (libc.so.6 + 0x39E73)

The same backtrace as it appears raw in the crash log:

Crashing thread: typing_0
Backtrace (PIC build):
[0x00007F192F4C2ACF] gsignal + 271 (libc.so.6 + 0x4EACF)
[0x00007F192F495EA5] abort + 295 (libc.so.6 + 0x21EA5)
[0x000055E24388D6C0] ? (splunkd + 0x1A366C0)
[0x000055E24388D770] ? (splunkd + 0x1A36770)
[0x000055E2445D6D24] _ZN29PipelineInputChannelReferenceC2EPPK3StrP23PipelineInputChannelSetb + 388 (splunkd + 0x277FD24)
[0x000055E2445BACC3] _ZN12PipelineData11set_channelEPK3StrS2_S2_ + 243 (splunkd + 0x2763CC3)
[0x000055E2445BAF9E] _ZN12PipelineData16recomputeConfKeyEP11PipelineSetb + 286 (splunkd + 0x2763F9E)
[0x000055E243E3689E] _ZN24RegexExtractionProcessor4eachER15CowPipelineDataP18PipelineDataVectorb + 718 (splunkd + 0x1FDF89E)
[0x000055E243E36BF3] _ZN24RegexExtractionProcessor12executeMultiER18PipelineDataVectorPS0_ + 67 (splunkd + 0x1FDFBF3)
[0x000055E243BCD5F2] _ZN8Pipeline4mainEv + 1074 (splunkd + 0x1D765F2)
[0x000055E244C336FD] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 13 (splunkd + 0x2DDC6FD)
[0x000055E244C345F2] _ZN6Thread8callMainEPv + 178 (splunkd + 0x2DDD5F2)
[0x00007F192FF1F1CA] ? (libpthread.so.0 + 0x81CA)
[0x00007F192F4ADE73] clone + 67 (libc.so.6 + 0x39E73)

Last few lines of stderr (may contain info on assertion failure, but also could be old):
Fatal thread error: pthread_mutex_lock: Invalid argument; 117 threads active, in typing_0
Fatal thread error: pthread_mutex_lock: Invalid argument;

This crash happens if a persistent queue is enabled. It has been reported for several years; I see one report from 2015 as well: https://community.splunk.com/t5/Monitoring-Splunk/What-would-cause-a-Fatal-thread-error-in-thread-typing-found-in/m-p/261407

The bug has always existed, but the interesting part is that since 9.x the frequency of crashes has gone up and more customers are reporting them; the probability of hitting the race condition is now higher. We are fixing the issue (internal ticket SPL-251434) for the next patch. In the meantime, here are a few workarounds to consider, depending on what is feasible for your requirement.
The reason crashes are more frequent in 9.x on instances with a persistent queue enabled is that the forwarders (UF/HF/IUF/IHF) send data at a faster rate due to 9.x autoBatch, so the small in-memory part of the persistent queue (default 500KB) makes it nearly impossible to avoid bringing the on-disk part of the persistent queue into play. In other words, a 9.x receiver with a persistent queue is writing to disk nearly all the time, even when downstream pipeline queues are not saturated. The best way to bring the crash frequency back down to 8.x-or-older levels is therefore to increase the in-memory part of the persistent queue (so that, if no downstream queues are full, there are no disk writes to the persistent queue). The fundamental bug still remains, however, and will be fixed in a patch. The workarounds below reduce the chance of disk writes for the persistent queue; have a look at the three possible workarounds and see which one works for you.

1. Turn off the persistent queue on the splunktcp input port (I am sure this is not feasible for everyone). This eliminates the crash.

2. Disable the `splunk_internal_metrics` app, as it does sourcetype cloning for metrics.log. Most of us are probably not aware that metrics.log is cloned and additionally indexed into the `_metrics` index. If you are not using the `_metrics` index, disable the app. For the crash to happen you need two conditions: a) a persistent queue, and b) sourcetype cloning.

3. Apply the following configs to reduce the chance of crashes.

limits.conf:

[input_channels]
max_inactive = 300001
lowater_inactive = 300000
inactive_eligibility_age_seconds = 120

inputs.conf (increase the in-memory queue size of the PQ, on the SSL or non-SSL port as appropriate):

[splunktcp-ssl:<port>]
queueSize = 100MB

[splunktcp:<port>]
queueSize = 100MB

Also enable async forwarding on the HF/IUF/IHF (the crashing instance).

4. Slow down the forwarders by setting `autoBatch=false` on all universal forwarders/heavy forwarders.
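For workaround 4, `autoBatch` lives in outputs.conf on the forwarders. A minimal sketch, assuming a tcpout group whose name and server list are placeholders (check outputs.conf.spec for your version before relying on this):

# outputs.conf on each UF/HF (illustrative group name and servers)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# revert to pre-9.x sending behaviour to slow the forwarder down
autoBatch = false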
My input module relies on API data, and I have decided to move the connection-timeout configuration options over to the global configuration, e.g.:   helper.get_global_setting("read_timeout")   rather than requiring them to be set individually per input module. However, there appears to be no `validate_input` functionality for global configuration options similar to what exists for input modules. There is no documentation for this, but you would think that being able to validate global config options inherited by every input module would be important. I now have to figure out how to do this in each input module, which delays telling the user they have entered bad config until after they enter it. I cannot rely on something like logging, because Splunk Cloud does not give much access to logs, so I'm more or less stuck with re-validating the value in each input or keeping the settings in the input modules. Is there any way this can be achieved?
When using stats, rather than using values, use list for each field instead: | stats list(FirstName), list(LastName) by Loc
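If it helps, here is a self-contained run you can paste into a search bar to see the difference; the sample names are made up, and list() keeps every value in row order (including duplicates) while values() deduplicates and sorts:

| makeresults count=3
| streamstats count as n
| eval Loc="HQ", FirstName=case(n==1,"Ann", n==2,"Bob", n==3,"Ann"), LastName=case(n==1,"Smith", n==2,"Jones", n==3,"Lee")
| stats list(FirstName), list(LastName) by Loc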
Hello, we have been running Website Monitoring for a while, and just recently it started continuously reporting connection timeout errors on and off for the URLs we track. We checked the network, and no issues could be found. Is it possible Splunk or the Website Monitoring add-on is corrupt? Any suggestions?
I think I may not be explaining a key part of this well enough (or if I am misunderstanding your explanation, I'm sorry!). I need ALL ResourceIds from index=main. The only values I need to filter out are instance IDs (i.e. i-1234567abcdef) that are NOT found in index=other.

So let's say index=main ResourceId=* returns:
i-1234567abcdef
i-abcdef1234567
sg-12345abcde
etc. (any other value that is not an instance ID)

and the index=other search returns InstanceId:
i-abcdef1234567

I need the results to be (i-1234567abcdef filtered out because it was not returned by index=other):
i-abcdef1234567
sg-12345abcde

So I guess a way to think about this is that I am trying to remove any value of ResourceId that matches the string "i-*" IF it was NOT found in index=other, and THEN coalesce ResourceId and InstanceId into a single field.
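A sketch of one way to express that, assuming the fields really are ResourceId in index=main and InstanceId in index=other (the base searches are placeholders and this is untested against your data):

(index=main ResourceId=*) OR (index=other InstanceId=*)
| eval id=coalesce(ResourceId, InstanceId)
| eventstats max(eval(if(index=="other", 1, 0))) as seen_in_other by id
| search index=main
| where NOT like(id, "i-%") OR seen_in_other==1
| stats count by id

The idea: combine both searches, coalesce into one field, flag each id that also appears in index=other, then keep main's values unless they look like an instance ID that was never flagged.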
Hi @Nour.Alghamdi, did you ever get a solution from Support? I did hear back from Docs, and this is what they told me: "Events service will take care of bringing up the elastic search"
Splunk does not limit access by email address - it uses role-based access controls (RBAC).  You would need to create a role and make that role the only one that can access the dashboard in question.  Then create a Splunk account with the subject email address and assign that account to the new role. Another option is to make the dashboard private to the user with the subject email address. All of that is easiest to do using the GUI.  How to do it "through the backend" depends on your environment (Splunk Cloud, standalone, SHC, etc.).  It also depends on what you mean by "backend" - REST API, config files, or CLI commands.
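If "backend" means config files, the same thing can be sketched with a role in authorize.conf plus an object ACL in the app's metadata/local.meta. The role name, app, and dashboard name below are placeholders, so treat this as an outline rather than a drop-in config:

# authorize.conf
[role_dashboard_viewer]
importRoles = user

# metadata/local.meta in the app that owns the dashboard
[views/my_dashboard]
access = read : [ dashboard_viewer ], write : [ admin ]
export = none

Then create the Splunk account for that email address and assign it the new role, as above.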
Putting inputs.conf on a HF without a matching props.conf means the events may not be indexed properly.  That's why I advise installing a TA.  Use the same TAs you use on the UFs.  If you don't have one, try Splunk Add-on for Microsoft Windows (https://splunkbase.splunk.com/app/742) and Splunk Add-on for Unix and Linux (https://splunkbase.splunk.com/app/833).
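As an illustration of the pairing (the monitor path and sourcetype below are made-up examples, not taken from those TAs), an input on the HF and the props that tell Splunk how to break and timestamp it would look roughly like:

# inputs.conf
[monitor:///var/log/myapp/app.log]
sourcetype = myapp:log
index = main

# props.conf
[myapp:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

Without the props stanza, line breaking and timestamping fall back to guesses, which is the "may not be indexed properly" above.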
As per your suggestions, we have changed the SQL query. After the changes, the results still show the "Winows_Support - Operations" group. Can you please help me here?
Which TAs are you referencing (for a Windows HF)? I have a Windows inputs.conf file that I'm sure came from an app, though I'm not sure which one, and it is being modified for certain needs now. If you had a specific TA in mind, that may help determine which inputs are not suitable for a HF.
I am trying to provide limited access to a dashboard, and I am trying to do that through the backend.
So it is possible to control this with a deployment server? I thought I saw somewhere that it was not. Which TAs are you referencing for the HFs? I am currently modifying a single Windows inputs.conf file and pushing it to different machines via the deployment server. Which is the root of the question: which inputs should definitely be turned off to avoid problems?
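For reference, this is the sort of deployment-server scoping I had in mind, where serverclass.conf sends a separate inputs app only to the HFs; the class name, hostname pattern, and app name are placeholders:

# serverclass.conf on the deployment server
[serverClass:heavy_forwarders]
whitelist.0 = hf-*

[serverClass:heavy_forwarders:app:my_windows_inputs_hf]
restartSplunkd = true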
I can see the business transactions (0 of 10) happening, but I can't see the transactions or the application flow map. What could be the reason for this?
What problem are you trying to solve?
If the illustrated fields are all you have, the only link between 250 -> 100 (with user) and the rest of the events (without) is host. I highly doubt this is sufficient to determine what a user has done between 250 and 100, unless this tool is strictly single-user and nothing else can generate any of these events.

If the tool is single-user only, you can use transaction to group these events together, like

| transaction host startswith="EventCode=250" endswith="EventCode=100"

Once transactions are established, you can then glean completed transactions for event codes that are not 250 and 100. For example,

| transaction host startswith="EventCode=250" endswith="EventCode=100"
| stats values(EventCode) as EventCode values(user) as user by host
| eval EventCode = mvfilter(NOT EventCode IN ("250", "100"))

Hope this helps.
Trying to blacklist an event that is generating a lot of logs. I previously asked this question here: Solved: Re: Splunk Blacklist DesktopExtension.exe addition... - Splunk Community, but the solution is not working. Any other thoughts on how to blacklist DesktopExtension.exe for Windows Security events?

blacklist = EventCode=4673 message="DesktopExtension\.exe
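For context, the WinEventLog blacklist format in inputs.conf expects key="regex" pairs; this is roughly what I understand the intended config to look like (the Security stanza and the exact regex are assumptions on my part, not a verified fix):

[WinEventLog://Security]
blacklist1 = EventCode="4673" Message="(?i)DesktopExtension\.exe"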