All Posts


Hi @NullZero, as you can see at https://docs.splunk.com/Documentation/ITSI/4.17.0/Configure/props.conf#props.conf.example you should try adding PREAMBLE_REGEX to your props.conf:

[nextdns:dns]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER = ,
FIELD_NAMES = timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
TIMESTAMP_FIELDS = timestamp
PREAMBLE_REGEX = ^timestamp,domain,query_type,

Ciao.
Giuseppe
This is also an issue for me (not using aggregations). None of the $trellis...$ tokens work when passed to a custom search. My workaround was to copy the URI generated for my search and insert the $trellis...$ token in the proper place (I used |u for URL encoding, but I'm not sure it's necessary). When using the "Link to Custom URL" drilldown, the tokens work just fine. The downside is that the user now gets the "Redirecting Away From Splunk" message prior to being redirected.
It worked. Thank you very much. Could you please explain what each part of the query does, so that next time I can create similar queries myself?
I'm ingesting logs from DNS (NextDNS via API) and struggling to exclude the header. I have seen @woodcock resolve some other examples, and I can't quite see where I'm going wrong. The common mistake is not doing this on the UF.

Sample data (comes in via a curl command and is written out to a file):

timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
2023-09-01T09:09:21.561936+00:00,beam.scs.splunk.com,AAAA,false,DNS-over-HTTPS,213.31.58.70,,,,splunk.com,8D512,"NUC10i5",,,,nextdns-cli
2023-09-01T09:09:09.154592+00:00,time.cloudflare.com,A,true,DNS-over-HTTPS,213.31.58.70,,,,cloudflare.com,14D3C,"NUC10i5",,,,nextdns-cli

UF (on the syslog server), v8.1.0

props.conf
[nextdns:dns]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER = ,
FIELD_NAMES = timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
TIMESTAMP_FIELDS = timestamp

inputs.conf
[monitor:///opt/remote-logs/nextdns/nextdns.log]
index = nextdns
sourcetype = nextdns:dns
initCrcLength = 375

Indexer (SVA S1), v9.1.0
I disabled the options there; I will apply Great8 once I have this fixed. All the work needs to happen on the UF.

[nextdns:dns]
#INDEXED_EXTRACTIONS = CSV
#HEADER_FIELD_LINE_NUMBER = 1
#HEADER_FIELD_DELIMITER = ,
#FIELD_NAMES = timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
#TIMESTAMP_FIELDS = timestamp

Challenge: I'm still getting the header row ingested. I have deleted the indexed data, regenerated the updated log, and reingested, and the issue persists. Obviously I have restarted Splunk on each instance after the respective changes.
Thank you. And what about Windows 10 22H2? Looking at the download page, this OS release is still supported:
Hello all,

I have the following issue: I receive logs from an older machine whose logging settings I cannot adjust. When extracting data in Splunk, I encounter the following field and some values:

id = EF_jblo_fdsfew42_sla
id = EF_space_332312_sla
id = EF_97324_pewpew_sla

With a field extraction I then get my location from the id. For example:

id = EF_jblo_fdsfew42_sla  => location = jblo
id = EF_space_332312_sla   => location = space
id = EF_97324_pewpew_sla   => location = 97324  <- not a valid location here

Now I aim to replace the location using an automatic lookup keyed on the ID "EF_97324_pewpew_sla". Unfortunately, I either retrieve only the location from the table, omitting the rest, or I only receive the values from the field extraction. I've reviewed the search-time operation sequence per the documentation, ensuring that field extraction precedes the lookup. However, I'm perplexed as to why it consistently erases all the values rather than just overwriting a single one. Is there an automated solution running in the background, similar to an automatic lookup, that could resolve this?

The lookup:

ID                     Solution
EF_97324_pewpew_sla    TSINOC

My original concept was as follows:
1. Data is ingested into Splunk.
2. A field extraction extracts the location from the ID.
3. For the IDs that I know do not contain any location information, I replace the extracted value with the lookup data.

I wanted to run the whole thing in the background so that users do not have to include it in their search string. I also tried to use calculated fields to build one field from two, but since the calculation takes place before the lookup, this was unfortunately not possible.

Hope someone can help me.
Kind regards,
Flenwy
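One explicit search-time workaround (a sketch only, assuming a lookup named thought_lookup with columns ID and Solution; adjust the names to your environment) is to output the lookup value into its own field and fall back to the extracted location only when the lookup returned nothing:

```
... | lookup thought_lookup ID OUTPUT Solution
    | eval location=coalesce(Solution, location)
```

This avoids the overwrite problem because the lookup never writes into location directly; coalesce picks Solution when it exists and keeps the extracted location otherwise. To run it "in the background", this pipeline could in principle live in a macro or a saved search rather than in every user's search string.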
Hi, can you show how your alert is configured? You could also check the internal logs to see whether that alert ran and whether it triggered. Have you tested that email works from Splunk? The easiest way to check is to add ... | sendemail ... after your query.

One comment on your SPL: it's more efficient to select rows first and then sort:

... | where Requests > 50 | sort 0 - Requests

Also, if there could be a huge number of results, you need the 0 with sort so that it sorts all events, not only XXX events.

r. Ismo
Hi, it's just like @VatsalJagani said. What is the issue you are trying to solve with this fixed header part? Maybe there is another solution that would achieve your objectives. r. Ismo
Hi, have you checked that your Windows version is supported by Splunk 9.1.1? As you can see from https://docs.splunk.com/Documentation/Splunk/9.1.1/ReleaseNotes/Deprecatedfeatures#Removed_features_in_version_9.1 Windows 2016 has been removed from the supported OS versions. r. Ismo
You should look at what you need to put on the UF and HF versus what is needed on the indexer in outputs.conf. Those are different things, as normally indexers just write events to disk. In outputs.conf there is this example for indexing locally and cloning events to another destination. As you can see, there is no default group definition:

# Clone events to groups indexer1 and indexer2. Also, index all this data
# locally as well.
[tcpout]
indexAndForward=true

[tcpout:indexer1]
server=Y.Y.Y.Y:9997

[tcpout:indexer2]
server=X.X.X.X:6666

I suppose that when you set a default group here, it changes this behaviour somehow, and then the cluster cannot store events internally. There seems to be some kind of timeout that happened before that crash. Have you seen any events on the target system? Based on the port, I assume that the target is also Splunk? If so, you should remove "sendCookedData = false" to send S2S data to the remote system. My guess is that this should work:

[tcpout]
indexAndForward=true

[tcpout:external_system]
disabled=false
forwardedindex.0.blacklist = (_internal|_audit|_telemetry|_introspection)
server=<external_system>:9997
Hello @gcusello, thank you, that worked. Hello @ITWhisperer, yes, now we are getting the results as expected. Thank you for your help.
Thank you @richgalloway and @bowesmana - I'd accept both as the solution if I could, as I learned about the return and format commands from you both. I accepted return as the solution since I wanted to use the IN search and couldn't get the format command to remove the column names from the generated string. Not sure this is right, but I ended up having to use an eval command to append quotes and commas to my values prior to the return statement. In the end, it was something like:

index=syslog src_ip IN (
    [ | tstats count from datamodel=Random by ips
      | stats values(ips) as IP
      | eval IP = "\"".IP."\","
      | return $IP ]
)

Thanks again!
We followed the current documentation:

[tcpout]
defaultGroup = <comma-separated list>
* A comma-separated list of one or more target group names, specified later in [tcpout:<target_group>] stanzas.
* The forwarder sends all data to the specified groups.
* If you don't want to forward data automatically, don't configure this setting.
* Can be overridden by the '_TCP_ROUTING' setting in the inputs.conf file, which in turn can be overridden by a props.conf or transforms.conf modifier.
* Starting with version 4.2, this setting is no longer required.

Data forwarding is working, but the state of the cluster is invalid. We also noted these crash logs, which we think are related to this problem:

Received fatal signal 6 (Aborted) on PID 2521742.
Cause: Signal sent by PID 2521742 running under UID 1001.
Crashing thread: TcpOutEloop
...........
Backtrace (PIC build):
[0x00007F02A9F91A7C] pthread_kill + 300 (libc.so.6 + 0x6EA7C)
[0x00007F02A9F3D476] raise + 22 (libc.so.6 + 0x1A476)
[0x00007F02A9F237F3] abort + 211 (libc.so.6 + 0x7F3)
[0x0000556B5A5B0FA9] ? (splunkd + 0x1A52FA9)
[0x0000556B5BA12B6E] _ZN11TimeoutHeap18runExpiredTimeoutsER13MonotonicTime + 670 (splunkd + 0x2EB4B6E)
[0x0000556B5B939260] _ZN9EventLoop18runExpiredTimeoutsER13MonotonicTime + 32 (splunkd + 0x2DDB260)
[0x0000556B5B93A690] _ZN9EventLoop3runEv + 208 (splunkd + 0x2DDC690)
[0x0000556B5A97185E] _ZN11Distributed11EloopRunner4mainEv + 206 (splunkd + 0x1E1385E)
[0x0000556B5BA0957D] _ZN6Thread37_callMainAndDiscardTerminateExceptionEv + 13 (splunkd + 0x2EAB57D)
[0x0000556B5BA0A413] _ZN6Thread8callMainEPv + 147 (splunkd + 0x2EAC413)
[0x00007F02A9F8FB43] ? (libc.so.6 + 0x6CB43)
[0x00007F02AA021A00] ? (libc.so.6 + 0xFEA00)
Linux / splunk-indexer01 / 5.15.0-76-generic / #83-Ubuntu SMP Thu Jun 15 19:16:32 UTC 2023 / x86_64
assertion_failure="!_current_timeout_was_readded"
assertion_function="void TimeoutHeap::assert_didnt_get_readded() const"
assertion_file="/builds/splcore/main/src/util/TimeoutHeap.h:527"
/etc/debian_version: bookworm/sid
Last errno: 0
Threads running: 85
Runtime: 61.996351s
argv: [splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd]
Regex JIT enabled
.......
You could set queue sizes on the remote side only. I think your real issue is the default group definition under the [tcpout] section? I think that shouldn't be there.
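For what it's worth, if by "remote queue" you mean the forwarder's in-memory output queue toward the remote system, outputs.conf does expose a size setting on the sending side. A minimal sketch (the 512KB value is purely illustrative, not a recommendation; check the outputs.conf spec for defaults and valid units):

```
[tcpout]
# Size of the output queue on the forwarding side; accepts an
# integer (event count) or a size with units such as KB/MB/GB.
maxQueueSize = 512KB
```

Whether tuning this helps depends on where the backpressure actually originates, so it is worth confirming the bottleneck in the metrics logs first.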
Hi, here are a couple of similar cases:

https://community.splunk.com/t5/Dashboards-Visualizations/Why-am-I-Receiving-quot-A-custom-JavaScript-error-caused-an/m-p/619270
https://community.splunk.com/t5/Dashboards-Visualizations/Receiving-quot-A-custom-JavaScript-error-caused-an-issue-loading/m-p/643743
https://community.splunk.com/t5/Dashboards-Visualizations/How-do-I-fix-this-custom-javascript-Error-Notification-pop-up-on/m-p/648232

Maybe this was already fixed in the latest versions?

r. Ismo
Have you looked at markdown panels?  
As @bowesmana says, it looks like you can't use tokens in references. This is likely because the structure of the dashboard needs to be known when the dashboard is loaded; while the token appears to be static in your example, its value could change while the dashboard is running, so to prevent this, the reference has to be static. So, the question now is: why are you trying to do this? Perhaps there is another way to solve your use case!
Hi, thanks for your reply. We tried this approach, but we hit the problem described in the previous answer. Maybe it is related to the remote queue size, as you said. Is there a way to control the remote queue size or length in tcpout mode?
Have you tried creating a dashboard and adding a panel of the chart type you want, then using your search as the data source?
Thank you! That's what I was looking for.