Hi, can you show how your alert is configured? You could also check the internal logs to see whether the alert ran and whether it triggered. Have you tested that email works from Splunk? The easiest way to check is to add .... | sendemail .... after your query.

One comment on your SPL: it is more efficient to select rows first and then sort:

... | where Requests > 50 | sort 0 - Requests

Also, if there could be a huge number of results, you need the 0 with sort so it sorts all events, not only the first XXX events.

r. Ismo
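To illustrate the email test mentioned above, here is a minimal sketch: it takes one recent internal event and mails it, which confirms the mail settings work end to end. The recipient address is a placeholder you would replace with your own.

```
index=_internal earliest=-5m
| head 1
| sendemail to="you@example.com" subject="Splunk email test"
```

If no mail arrives, check the python.log and the mail server settings under Server settings > Email settings before debugging the alert itself.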
Hi, it's just like @VatsalJagani said. What is the issue you are trying to solve with this fixed-header part? Maybe there is another solution you could use to achieve your objective? r. Ismo
Hi, have you checked that your Windows version is supported by Splunk 9.1.1? As you can see from https://docs.splunk.com/Documentation/Splunk/9.1.1/ReleaseNotes/Deprecatedfeatures#Removed_features_in_version_9.1 Windows Server 2016 has been removed from the supported OS versions. r. Ismo
You should look at what needs to go into outputs.conf on the UF and HF versus what is needed on the indexer. Those are different things, as indexers normally just write events to disk. In outputs.conf, this is the configuration for indexing locally and cloning events to another destination. As you can see, there is no default group definition:

# Clone events to groups indexer1 and indexer2. Also, index all this data
# locally as well.
[tcpout]
indexAndForward = true

[tcpout:indexer1]
server = Y.Y.Y.Y:9997

[tcpout:indexer2]
server = X.X.X.X:6666

I suppose that when you set a default group here, it changes this behaviour somehow, and then it cannot store events inside this cluster. There seems to be some kind of timeout that happened before the crash. Have you seen any events on the target system? Based on the port, I assume the target is also Splunk? If so, you should remove "sendCookedData = false" to send S2S data to the remote side. My guess is that this should work:

[tcpout]
indexAndForward = true

[tcpout:external_system]
disabled = false
forwardedindex.0.blacklist = (_internal|_audit|_telemetry|_introspection)
server = <external_system>:9997
Hello @gcusello, thank you, that worked. Hello @ITWhisperer, yeah, now we are getting the results as expected. Thank you for your help.
Thank you @richgalloway and @bowesmana - I'd accept both as the solution if I could, as I learned about the return and format commands from you both. I accepted return as the solution since I wanted to use the IN search, and couldn't get the format command to remove the column names from the generated string. Not sure this is right, but I ended up having to use an eval command to append quotes and commas to my values prior to the return statement. In the end, it was something like:

index=syslog src_ip IN (
    [ | tstats count from datamodel=Random by ips
      | stats values(ips) as IP
      | eval IP = "\"".IP."\","
      | return $IP ]
)

Thanks again!
We followed the current documentation:

[tcpout]
defaultGroup = <comma-separated list>
* A comma-separated list of one or more target group names, specified later
  in [tcpout:<target_group>] stanzas.
* The forwarder sends all data to the specified groups.
* If you don't want to forward data automatically, don't configure this setting.
* Can be overridden by the '_TCP_ROUTING' setting in the inputs.conf file,
  which in turn can be overridden by a props.conf or transforms.conf modifier.
* Starting with version 4.2, this setting is no longer required.

Data forwarding is working, but the state of the cluster is invalid. We also noted these crash logs, and we think they are related to this problem:

Received fatal signal 6 (Aborted) on PID 2521742.
Cause: Signal sent by PID 2521742 running under UID 1001.
Crashing thread: TcpOutEloop
...........
Backtrace (PIC build):
[0x00007F02A9F91A7C] pthread_kill + 300 (libc.so.6 + 0x6EA7C)
[0x00007F02A9F3D476] raise + 22 (libc.so.6 + 0x1A476)
[0x00007F02A9F237F3] abort + 211 (libc.so.6 + 0x7F3)
[0x0000556B5A5B0FA9] ? (splunkd + 0x1A52FA9)
[0x0000556B5BA12B6E] _ZN11TimeoutHeap18runExpiredTimeoutsER13MonotonicTime + 670 (splunkd + 0x2EB4B6E)
[0x0000556B5B939260] _ZN9EventLoop18runExpiredTimeoutsER13MonotonicTime + 32 (splunkd + 0x2DDB260)
[0x0000556B5B93A690] _ZN9EventLoop3runEv + 208 (splunkd + 0x2DDC690)
[0x0000556B5A97185E] _ZN11Distributed11EloopRunner4mainEv + 206 (splunkd + 0x1E1385E)
[0x0000556B5BA0957D] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 13 (splunkd + 0x2EAB57D)
[0x0000556B5BA0A413] _ZN6Thread8callMainEPv + 147 (splunkd + 0x2EAC413)
[0x00007F02A9F8FB43] ? (libc.so.6 + 0x6CB43)
[0x00007F02AA021A00] ? (libc.so.6 + 0xFEA00)
Linux / splunk-indexer01 / 5.15.0-76-generic / #83-Ubuntu SMP Thu Jun 15 19:16:32 UTC 2023 / x86_64
assertion_failure="!_current_timeout_was_readded"
assertion_function="void TimeoutHeap::assert_didnt_get_readded() const"
assertion_file="/builds/splcore/main/src/util/TimeoutHeap.h:527"
/etc/debian_version: bookworm/sid
Last errno: 0
Threads running: 85
Runtime: 61.996351s
argv: [splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd]
Regex JIT enabled
.......
You can set queue sizes on the remote side only. I think your real issue is the default group definition under the [tcpout] stanza? I think that shouldn't be there.
Hi, here are a couple of similar cases:

https://community.splunk.com/t5/Dashboards-Visualizations/Why-am-I-Receiving-quot-A-custom-JavaScript-error-caused-an/m-p/619270
https://community.splunk.com/t5/Dashboards-Visualizations/Receiving-quot-A-custom-JavaScript-error-caused-an-issue-loading/m-p/643743
https://community.splunk.com/t5/Dashboards-Visualizations/How-do-I-fix-this-custom-javascript-Error-Notification-pop-up-on/m-p/648232

Maybe this has already been fixed in the latest versions? r. Ismo
Have you looked at markdown panels?  
As @bowesmana says, it looks like you can't use tokens in references. This is likely because the structure of the dashboard needs to be known when the dashboard is loaded; while the token appears to be static in your example, its value could change while the dashboard is running, so to prevent this, the reference has to be static. So, the question now is: why are you trying to do this? Perhaps there is another way to solve your use case!
Hi, thanks for your reply. We tried this approach, but we hit the problem described in the previous answer. Maybe it is related to the remote queue size, as you said. Is there a way to control the remote queue size or length in tcpout mode?
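For what it's worth, the in-memory output queue on the forwarding side can be tuned with the maxQueueSize setting in outputs.conf. A sketch, with an illustrative value rather than a recommendation:

```
[tcpout]
indexAndForward = true
# Output queue size; accepts an integer (event count) or a size
# with units such as 512KB, 10MB, or 1GB.
maxQueueSize = 10MB
```

Note that a bigger queue only buys time while the remote side is unreachable; if the connection stays down, the queue still fills eventually.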
Have you tried creating a dashboard and adding a panel of the chart type you want, then using your search as the data source?
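As a sketch of that suggestion, here is a minimal Simple XML dashboard with a single line-chart panel driven by an inline search; the query and time range are placeholders to replace with your own:

```
<dashboard>
  <label>Count over time</label>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=main | timechart count</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">line</option>
      </chart>
    </panel>
  </row>
</dashboard>
```

You can also build the same thing from the UI (Save As > Dashboard Panel from a search) and then adjust the generated XML.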
Thank you! That's what I'm looking for.
Hi @rphillips_splk, while I know this was some time ago, I still find it very interesting! You used two different routing types here, so I need to ask whether this could also be applied to two different TCP connections, so that the cloned copy could also be sent over TCP rather than syslog? Moreover, I'm in a situation where there will be an additional HF where you show the "syslog receiver" above, and then the actual indexer, so basically a route like this:

UF -> HF (clone) -> IDX
         |-> HF -> IDX

Can this be done as smoothly as above? If so, how?
Hi, you could also forward to Splunk as S2S traffic. This should be enough in outputs.conf on your indexers:

[tcpout]
indexAndForward = true

[tcpout:<your_target_group_name>]
server = <target server ip>:<port, e.g. 9997 for S2S>
# other parameters you want to use, such as a blacklist

Then you should remember that if that connection doesn't work, indexing on the local node will stop once the remote output queue is full!

r. Ismo
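To keep an eye on that risk, one sketch is to watch queue fill levels in the internal metrics; group=queue events in metrics.log carry current_size and max_size fields, though the exact queue names vary by deployment, so treat this as a starting point:

```
index=_internal source=*metrics.log* group=queue
| eval fill_pct = round(current_size / max_size * 100, 1)
| timechart max(fill_pct) by name
```

A queue that sits near 100% for sustained periods is the signal that the downstream destination is not keeping up.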
Hi, in some of the dashboards of my Splunk Monitoring Console, I get the error "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details". The developer console shows a 404 Not Found error on many scripts. The same error appears for other JS files such as PopTart.js or Base.js. Searching for all these files in the Splunk folder, I notice these scripts are all stored in a folder called quarantined_files, an odd folder placed directly under the /opt/splunk/ path. Any ideas on how to debug this error?
Hi @Devi13, the Count values are probably strings, so have you tried converting them to numbers using eval with tonumber (https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/ConversionFunctions#tonumber.28.26lt.3Bstr.26gt.3B.2C.26lt.3Bbase.26gt.3B.29)?

base search
| eval Count=tonumber(Count)
| table Count

Ciao. Giuseppe
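To get the individual values onto a line chart rather than an event count, one sketch is to table the timestamp alongside the converted field and switch the visualization to Line Chart ("base search" is a placeholder for the original query):

```
base search
| eval Count=tonumber(Count)
| table _time Count
| sort 0 _time
```

Each event then contributes one point on the line, with _time on the x-axis and Count on the y-axis.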
Hello team, I have logs like this:

File Records count is 2
File Records count is 5
File Records count is 45
File Records count is 23

and I have extracted the values 2, 5, 45, 23 as a separate field called Count. When I use "base search | table Count" I get the expected values in a statistics table, but I want 2, 5, 45, 23 to be plotted on a line graph. I tried stats commands, but they only show the number of events, not the values of Count. Could you please advise on how I can plot the values of Count on a graph?
Yes, that's right. OK, I'll give that a go. Thanks.