All Posts


Hi, please help me get a new field URI by using rex: /area/label/health/readiness||||||||||METRICS|--
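Without the full event layout, the extraction being asked for can only be sketched. The snippet below pulls everything before the first pipe delimiter into a URI field, using the one sample value from the post; the field name and pattern are assumptions.

```python
import re

# Sample value from the post: a path followed by a run of pipe delimiters.
sample = "/area/label/health/readiness||||||||||METRICS|--"

# Capture everything up to the first "|" as the URI.
# The SPL equivalent would be roughly: | rex "(?<URI>[^|]+)"
match = re.match(r"(?P<URI>[^|]+)", sample)
uri = match.group("URI") if match else None
print(uri)
```

With this sample, `uri` comes out as `/area/label/health/readiness`; a real event may need a tighter pattern.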
@drippler Your search worked for me as well. It's pretty straightforward and simple. Thanks. Note: it deletes only 10k events per run, so you have to run the search multiple times.

index="Indexname" sourcetype="sourcetype" | eval eid=_cd | search [search index="Indexname" sourcetype="sourcetype" | streamstats count by _raw | search count>1 | eval eid=_cd | fields eid] | delete
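The subsearch's logic (a running count per raw event, keeping every occurrence after the first) can be mimicked in plain Python to see which events the `| delete` would target. The event tuples below are invented for illustration, standing in for `_cd` and `_raw`.

```python
from collections import defaultdict

# Hypothetical (event_id, raw_text) pairs standing in for _cd and _raw.
events = [
    (1, "login ok"),
    (2, "disk full"),
    (3, "login ok"),   # duplicate of event 1
    (4, "login ok"),   # another duplicate
]

# streamstats count by _raw: running count of each raw string;
# search count>1 keeps every occurrence after the first.
seen = defaultdict(int)
duplicate_ids = []
for eid, raw in events:
    seen[raw] += 1
    if seen[raw] > 1:
        duplicate_ids.append(eid)

print(duplicate_ids)  # the events the |delete clause would target
```

Note that, like the SPL, this keeps the first copy of each raw event and flags only the repeats.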
Thanks @gcusello. I saw the other options but didn't think them necessary. Appreciate the assistance, and good to have it solved.
Sometimes, after an app has a change made to it and is deployed to our Universal Forwarders on Windows computers, the conf files are malformed. Specifically, the inputs.conf file has all of its spacing and formatting removed, and the forwarder no longer uses the file. The only fix I have found is to delete the app from the forwarder and wait for the deployment server to re-deploy it.
We are not able to send data to the application controller dashboard. When we enable the AppDynamics agent for PHP, the Apache process terminates. Please find the error logs below.

[Fri Sep 01 17:18:11.766615 2023] [mpm_prefork:notice] [pid 12853] AH00163: Apache/2.4.57 (codeit) OpenSSL/3.0.10+quic configured -- resuming normal operations
[Fri Sep 01 17:18:11.766641 2023] [core:notice] [pid 12853] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
[Fri Sep 01 17:18:40.794670 2023] [core:notice] [pid 12853] AH00052: child pid 12862 exit signal Aborted (6)
[Fri Sep 01 17:18:40.794714 2023] [core:notice] [pid 12853] AH00052: child pid 12883 exit signal Aborted (6)
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what(): getrandom
Can anybody help me with the commands for opening firewall ports for an on-premise installation? Also, which ports need to be opened, and how do I open them?
I have both the web.conf and Chrome configuration updates in place, but the issue remains. As soon as I open something on top of my browser or change tabs, the search auto-cancels. We've been living with this issue for a few years now, hoping for a fix in each new release, but still no changes.
OK, this seems to be a totally different case from what you were asking earlier. Basically you have only one Tag which has several values. Unfortunately, your examples didn't show enough information to answer your question. Can you share the whole events (scrambled if needed)? We need something to establish the relationships between those events.
Try something like this | inputlookup appJobLogs where [ | search appJobLogs | where match(MessageText, "(?i)general error") | fields RunID ]
https://docs.splunk.com/Documentation/Splunk/9.1.1/Installation/Systemrequirements#Windows_operating_systems lists the supported OS versions.
If you want events grouped by one or more fields, then you want the stats command.

| inputlookup appJobLogs
| where match(MessageText, "(?i)general error")
| rex mode=sed field=MessageText "s/, /\n/g"
| stats values(*) as * by RunID
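The effect of `stats values(*) as * by RunID` (collect the distinct values of every field per group) can be sketched in Python for one field. The rows and field names below follow the thread (`RunID`, `MessageText`) but the data is invented.

```python
from collections import defaultdict

# Hypothetical lookup rows; field names follow the thread.
rows = [
    {"RunID": "r1", "MessageText": "general error: disk"},
    {"RunID": "r1", "MessageText": "step 2 started"},
    {"RunID": "r2", "MessageText": "all good"},
]

# stats values(MessageText) by RunID: the set of distinct values per group.
grouped = defaultdict(set)
for row in rows:
    grouped[row["RunID"]].add(row["MessageText"])

print({run_id: sorted(values) for run_id, values in grouped.items()})
```

`values(*)` in the SPL does the same thing across every field at once, producing one multivalue row per RunID.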
We tried to remove the default group as you suggested, but it gave us the same error. We don't have to send data to another Splunk; on the other side there will be Fluentd capturing the data. At the moment we are trying to send data to a socket opened with netcat on another device in the same subnet. We see the data arriving on netcat, but Splunk crashes on the indexers. This is the btool output for outputs.conf:

/opt/splunk/etc/system/local/outputs.conf   [tcpout]
/opt/splunk/etc/system/default/outputs.conf ackTimeoutOnShutdown = 30
/opt/splunk/etc/system/default/outputs.conf autoLBFrequency = 30
/opt/splunk/etc/system/default/outputs.conf autoLBVolume = 0
/opt/splunk/etc/system/default/outputs.conf blockOnCloning = true
/opt/splunk/etc/system/default/outputs.conf blockWarnThreshold = 100
/opt/splunk/etc/system/default/outputs.conf cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
/opt/splunk/etc/system/default/outputs.conf compressed = false
/opt/splunk/etc/system/default/outputs.conf connectionTTL = 0
/opt/splunk/etc/system/default/outputs.conf connectionTimeout = 20
/opt/splunk/etc/system/default/outputs.conf disabled = false
/opt/splunk/etc/system/default/outputs.conf dropClonedEventsOnQueueFull = 5
/opt/splunk/etc/system/default/outputs.conf dropEventsOnQueueFull = -1
/opt/splunk/etc/system/default/outputs.conf ecdhCurves = prime256v1, secp384r1, secp521r1
/opt/splunk/etc/system/default/outputs.conf enableOldS2SProtocol = false
/opt/splunk/etc/system/default/outputs.conf forceTimebasedAutoLB = false
/opt/splunk/etc/system/default/outputs.conf forwardedindex.0.whitelist = .*
/opt/splunk/etc/system/default/outputs.conf forwardedindex.1.blacklist = _.*
/opt/splunk/etc/system/default/outputs.conf forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker)
/opt/splunk/etc/system/local/outputs.conf   forwardedindex.3.blacklist = (_internal|_audit|_telemetry|_introspection)
/opt/splunk/etc/system/default/outputs.conf forwardedindex.filter.disable = false
/opt/splunk/etc/system/default/outputs.conf heartbeatFrequency = 30
/opt/splunk/etc/system/local/outputs.conf   indexAndForward = true
/opt/splunk/etc/system/default/outputs.conf maxConnectionsPerIndexer = 2
/opt/splunk/etc/system/default/outputs.conf maxFailuresPerInterval = 2
/opt/splunk/etc/system/default/outputs.conf maxQueueSize = auto
/opt/splunk/etc/system/default/outputs.conf readTimeout = 300
/opt/splunk/etc/system/default/outputs.conf secsInFailureInterval = 1
/opt/splunk/etc/system/default/outputs.conf sendCookedData = true
/opt/splunk/etc/system/default/outputs.conf sslQuietShutdown = false
/opt/splunk/etc/system/default/outputs.conf sslVersions = tls1.2
/opt/splunk/etc/system/default/outputs.conf tcpSendBufSz = 0
/opt/splunk/etc/system/default/outputs.conf useACK = false
/opt/splunk/etc/system/default/outputs.conf useClientSSLCompression = true
/opt/splunk/etc/system/default/outputs.conf writeTimeout = 300
/opt/splunk/etc/system/local/outputs.conf   [tcpout:external_system]
/opt/splunk/etc/system/local/outputs.conf   disabled = false
/opt/splunk/etc/system/local/outputs.conf   sendCookedData = false
/opt/splunk/etc/system/local/outputs.conf   server = <external_server>:<external_port>
@richgalloway @isoutamo I should bring in some examples. My current query is:

index=plc source="middleware" sourcetype="plc:___" Tag="Channel1*"
| dedup _time
| table _time Tag Value

This brings back a table with the two different tags we are currently monitoring. One is an incident, and the other is a tag that specifies whether the time is within working hours or not. I want to be able to take the last scheduled event value and apply it to every incident row, rather than having the scheduled time populate within the incident column.
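The "carry the last scheduled value forward onto each incident row" idea (which in SPL would be something along the lines of filldown or streamstats over time-ordered events) can be sketched in Python. The tag names and values below are invented; only the carry-forward pattern is the point.

```python
# Hypothetical time-ordered rows with the two kinds of tags from the post.
rows = [
    {"Tag": "Channel1.Scheduled", "Value": 1},
    {"Tag": "Channel1.Incident",  "Value": 7},
    {"Tag": "Channel1.Scheduled", "Value": 0},
    {"Tag": "Channel1.Incident",  "Value": 9},
]

# Walk the rows in time order, remembering the most recent Scheduled value
# and stamping it onto every Incident row that follows it.
last_scheduled = None
for row in rows:
    if row["Tag"].endswith("Scheduled"):
        last_scheduled = row["Value"]
    else:
        row["WorkingHours"] = last_scheduled

print(rows)
```

After the loop, the first incident row carries WorkingHours=1 and the second carries WorkingHours=0, mirroring the most recent scheduled value at each point in time.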
Apologies, I am quite new to Splunk, so I'm not sure if this is possible. I have the following simple query:

| inputlookup appJobLogs
| where match(MessageText, "(?i)general error")
| rex mode=sed field=MessageText "s/, /\n/g"
| sort RunStartTimeStamp asc, LogTimeStamp asc, LogID asc

This works and gets the data I need for the error I am after, but I want all associated values for the error, grouped by RunID. The headers are: Host, InvocationID, Name, LogID, LogTS, LogName, MessageID, MessageText, RunID, RunTS, RunName. I would like to do something like:

| inputlookup appJobLogs
| where RunID in [ | search appJobLogs | where match(MessageText, "(?i)general error") | fields RunID ]

I have tried various forms; the closest I got was a join, which gave me the missing fields (that should be fixable) but is limited to 10,000 results, so that seems like the wrong solution.
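The intent here (keep every row whose RunID also appears in at least one error row) is a two-pass filter, which can be sketched in Python. Field names come from the post; the data is invented.

```python
# Hypothetical lookup rows.
rows = [
    {"RunID": "r1", "MessageText": "General Error: timeout"},
    {"RunID": "r1", "MessageText": "job started"},
    {"RunID": "r2", "MessageText": "completed"},
]

# Pass 1 (the subsearch): collect RunIDs that contain a general error,
# case-insensitively, like the (?i) flag in the match() call.
error_runs = {r["RunID"] for r in rows
              if "general error" in r["MessageText"].lower()}

# Pass 2 (the outer search): keep every row belonging to those runs.
matching = [r for r in rows if r["RunID"] in error_runs]
print(matching)
```

Both rows for r1 survive, including the non-error "job started" row, which is exactly the behavior the poster wants from the subsearch approach.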
Thanks for your reply. No, I want the exact opposite: I want to extract the entire text value. I received a truncated version, and I can use regex to extract the complete value. However, why did the value get truncated when retrieved from a summary index but not from the normal index?
The subsearch retrieves the DNS names from the lookup and renames the field so that it matches the field name used in the events. The format essentially expands to something like this:

index=foo ((dns_query="value1") OR (dns_query="value2"))
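The expansion can be reproduced mechanically: given the list of names the subsearch would return, each becomes a field=value term and the terms are OR-ed together. A Python sketch of that string-building step (names invented):

```python
# Hypothetical list of DNS names the lookup subsearch would return.
dns_names = ["value1", "value2"]

# The format step turns each name into a (field="value") term
# and joins the terms with OR, wrapped in one outer group.
clause = " OR ".join(f'(dns_query="{name}")' for name in dns_names)
expanded = f"index=foo ({clause})"
print(expanded)
```

This prints exactly the expanded search shown above, which is a handy way to reason about why large lookups can blow up subsearch limits.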
Hi @NullZero, as you can see at https://docs.splunk.com/Documentation/ITSI/4.17.0/Configure/props.conf#props.conf.example, you should try adding PREAMBLE_REGEX to your props.conf:

[nextdns:dns]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER = ,
FIELD_NAMES = timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
TIMESTAMP_FIELDS = timestamp
PREAMBLE_REGEX = ^timestamp,domain,query_type,

Ciao.
Giuseppe
This is also an issue for me (not using aggregations). All the $trellis...$ tokens fail when passed to a custom search. My workaround was to copy the URI generated for my search and insert the $trellis...$ token in the proper place (I used a |u for URL encoding, but I'm not sure it's necessary). When using the "Link to Custom URL" drilldown, the tokens work just fine. The downside is that the user now gets the "Redirecting Away From Splunk" message prior to being redirected.
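What the |u token filter does when splicing a value into a URL can be illustrated with Python's standard percent-encoding; the token value and dashboard path below are made up.

```python
from urllib.parse import quote

# A hypothetical token value being spliced into a drilldown URL.
token_value = "host=web 01&role=proxy"

# Percent-encode everything, roughly what |u does in Simple XML,
# so the "=", " ", and "&" don't break the query string.
encoded = quote(token_value, safe="")
print(f"/app/search/my_dashboard?form.host={encoded}")
```

Without the encoding, the embedded "&" and "=" would be parsed as query-string structure rather than as part of the value, which is one reason the pasted-URI workaround can silently misbehave.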
It worked, thank you very much. Could you please explain what each part of the query does, so that next time I can create similar queries myself?
I'm ingesting logs from DNS (NextDNS via API) and struggling to exclude the header. I have seen @woodcock resolve some similar cases, and I can't quite see where I'm going wrong. The common mistake is not doing this on the UF.

Sample data (comes in via a curl command and is written out to a file):

timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
2023-09-01T09:09:21.561936+00:00,beam.scs.splunk.com,AAAA,false,DNS-over-HTTPS,213.31.58.70,,,,splunk.com,8D512,"NUC10i5",,,,nextdns-cli
2023-09-01T09:09:09.154592+00:00,time.cloudflare.com,A,true,DNS-over-HTTPS,213.31.58.70,,,,cloudflare.com,14D3C,"NUC10i5",,,,nextdns-cli

UF (on the syslog server), v8.1.0:

props.conf
[nextdns:dns]
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1
HEADER_FIELD_DELIMITER = ,
FIELD_NAMES = timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
TIMESTAMP_FIELDS = timestamp

inputs.conf
[monitor:///opt/remote-logs/nextdns/nextdns.log]
index = nextdns
sourcetype = nextdns:dns
initCrcLength = 375

Indexer (SVA S1), v9.1.0. I disabled these options; I will apply Great8 once I have this fixed. All the work needs to happen on the UF.

[nextdns:dns]
#INDEXED_EXTRACTIONS = CSV
#HEADER_FIELD_LINE_NUMBER = 1
#HEADER_FIELD_DELIMITER = ,
#FIELD_NAMES = timestamp,domain,query_type,dnssec,protocol,client_ip,status,reasons,destination_country,root_domain,device_id,device_name,device_model,device_local_ip,matched_name,client_name
#TIMESTAMP_FIELDS = timestamp

Challenge: I'm still getting the header line ingested. I have deleted the indexed data, regenerated an updated log, and re-ingested, but the issue remains. Obviously I have restarted Splunk on each instance after the respective changes.
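One quick sanity check for a header-skipping regex is to run the candidate pattern against both the header line and a data line from the sample above. The pattern below is an assumption modeled on the CSV header; it should match the header and nothing else.

```python
import re

# Candidate preamble pattern: anchored to the start of the header line.
preamble = re.compile(r"^timestamp,domain,query_type,")

header = ("timestamp,domain,query_type,dnssec,protocol,client_ip,status,"
          "reasons,destination_country,root_domain,device_id,device_name,"
          "device_model,device_local_ip,matched_name,client_name")
data = ('2023-09-01T09:09:21.561936+00:00,beam.scs.splunk.com,AAAA,false,'
        'DNS-over-HTTPS,213.31.58.70,,,,splunk.com,8D512,"NUC10i5",,,,nextdns-cli')

print(bool(preamble.match(header)))  # True: header would be skipped
print(bool(preamble.match(data)))    # False: data lines are kept
```

If the candidate pattern passes this check but the header still gets ingested, the problem is more likely where the settings are applied (UF vs. indexer) than the regex itself.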