All Posts

Hi, is it possible to put all locations into this automatic lookup and use only it, without any additional field extractions etc.? r. Ismo
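For illustration only, an automatic lookup that returns several location fields at once would be defined in props.conf roughly like this (the sourcetype, lookup, and field names below are made up; locations_lookup would be a lookup definition in transforms.conf):

[your_sourcetype]
LOOKUP-locations = locations_lookup host_field OUTPUT site building country

With something like that in place, site, building, and country would appear on matching events without any extra search-time field extractions.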
Hi, this should work: ... | rex "(?<URI>^[^\|]+)"  I assume that your event is in _raw. If it's already in some field, then just add "field=<your field>" after rex. https://regex101.com/r/IsMwQy/1 r. Ismo
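For a quick, self-contained test of that rex against the sample event from the question (the makeresults event below is just a stand-in for real data):

| makeresults
| eval _raw="/area/label/health/readiness||||||||||METRICS|--"
| rex "(?<URI>^[^\|]+)"
| table URI

This should return URI=/area/label/health/readiness.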
Your question is rather vague, but assuming you want the beginning of the _raw event field up to but not including the first |, you could try this: | rex "^(?<url>[^\|]+)"
As your receiver is fluentd, I assume that you have a syslog source listener on it? You probably have something similar to this:

<source>
  @type syslog
  port 8080
  bind 0.0.0.0
  tag cf.app
  message_length_limit 99990
  frame_type octet_count
  <transport tcp>
  </transport>
  <parse>
    message_format rfc5424
  </parse>
</source>

On the Splunk side you must format the events you send as valid syslog messages (RFC 5424). Otherwise fluentd won't accept them, and quite soon Splunk's queues will fill up and so on... Unfortunately I don't currently have any syslog server to test this, but I suppose it goes something like this (see https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/Outputsconf#Syslog_output):

[syslog]
defaultGroup = syslog:syslog_out

[syslog:syslog_out]
server = <Your fluentd server>:<receiving port>
type = tcp
timestampformat = %b %e %H:%M:%S
maxEventSize = <XXXX if greater than 1024>

You probably also need a props.conf & transforms.conf to route events to this syslog output instead of the plain tcpout (or maybe you don't need the tcpout stanza at all?). I hope the instructions in the docs are clear enough. There are also some old posts, but unfortunately those seem to cover HF configuration rather than indexers. Please let us know which configuration actually works once you have it running.
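A minimal routing sketch for that last point, assuming the syslog group name above and a placeholder sourcetype (both would need adjusting to your environment):

props.conf
[your_sourcetype]
TRANSFORMS-route_to_fluentd = send_to_syslog_out

transforms.conf
[send_to_syslog_out]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslog_out

FORMAT here must match the group name after "syslog:" in outputs.conf, i.e. the [syslog:syslog_out] stanza.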
Hi, please help me get a new field, URI, by using rex. Sample event: /area/label/health/readiness||||||||||METRICS|--
@drippler Your search worked for me as well. It's a pretty straightforward and simple one. Thanks. Note: it deletes only 10k events at a time, so you have to run the search multiple times.

index="Indexname" sourcetype="sourcetype"
| eval eid=_cd
| search [search index="Indexname" sourcetype="sourcetype" | streamstats count by _raw | search count>1 | eval eid=_cd | fields eid]
| delete
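Before running the delete, a quick sanity check (using the same placeholder index and sourcetype as above) can show how many duplicate events the next pass would target:

index="Indexname" sourcetype="sourcetype"
| streamstats count by _raw
| search count>1
| stats count AS duplicate_events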
Thanks @gcusello. I saw the other options but didn't think they were necessary. I appreciate the assistance, and it's good to have it solved.
Sometimes, after an app has a change made to it and is deployed to our Universal Forwarders on Windows computers, the conf files are malformed. Specifically, the inputs.conf file has all of its spaces and formatting removed, and the forwarder no longer uses the file. The only fix I have found is to delete the app from the forwarder and wait for the deployment server to re-deploy it.
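One way to check whether the forwarder still parses the file at all is btool on the affected host; a sketch, assuming the default Windows UF install path:

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list --debug

If the malformed app's stanzas are missing from the output, the forwarder is likely ignoring the file.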
We are not able to send data to the application controller dashboard. When we enable the AppDynamics agent for PHP, the Apache process terminates. Please find the error logs below.

[Fri Sep 01 17:18:11.766615 2023] [mpm_prefork:notice] [pid 12853] AH00163: Apache/2.4.57 (codeit) OpenSSL/3.0.10+quic configured -- resuming normal operations
[Fri Sep 01 17:18:11.766641 2023] [core:notice] [pid 12853] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what():  getrandom
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what():  getrandom
[Fri Sep 01 17:18:40.794670 2023] [core:notice] [pid 12853] AH00052: child pid 12862 exit signal Aborted (6)
[Fri Sep 01 17:18:40.794714 2023] [core:notice] [pid 12853] AH00052: child pid 12883 exit signal Aborted (6)
terminate called after throwing an instance of 'boost::wrapexcept<boost::uuids::entropy_error>'
  what():  getrandom
Can anybody help me with the commands for opening firewall ports for an on-premises installation? Also, which ports need to be opened, and how do I open them?
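Not an authoritative answer, but as a rough sketch on a firewalld-based Linux host, the commonly used default Splunk ports (8000 web UI, 8089 management, 9997 forwarder data, 8088 HEC) could be opened like this; adjust to whichever ports your deployment actually uses:

firewall-cmd --permanent --add-port=8000/tcp
firewall-cmd --permanent --add-port=8089/tcp
firewall-cmd --permanent --add-port=9997/tcp
firewall-cmd --permanent --add-port=8088/tcp
firewall-cmd --reload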
I have both the web.conf and Chrome configuration updates in place, but the issue remains. As soon as I open something on top of my browser or change tabs, the search auto-cancels. We've been living with this issue for a few years now, hoping for a fix in each new release, but still no changes.
OK, this seems to be a totally different case from what you were asking earlier. Basically you have only one Tag which has several values. Unfortunately your examples didn't show enough information to answer you. Can you give the whole events (scrambled if needed)? We need something with which to relate those events to each other.
Try something like this:

| inputlookup appJobLogs where
    [ | inputlookup appJobLogs
      | where match(MessageText, "(?i)general error")
      | fields RunID ]
https://docs.splunk.com/Documentation/Splunk/9.1.1/Installation/Systemrequirements#Windows_operating_systems lists the supported OS versions.
If you want events grouped by one or more fields then you want the stats command. | inputlookup appJobLogs | where match(MessageText, "(?i)general error") | rex mode=sed field=MessageText "s/, /\n/g" | stats values(*) as * by RunID  
We tried to remove the default group as you suggested, but it gave us the same error. We don't have to send data to another Splunk instance; on the other side there will be Fluentd capturing the data. At the moment we are trying to send data to a socket opened with netcat on another device in the same subnet. We see data arriving on netcat, but Splunk crashes on the indexers. This is the btool output related to outputs.conf:

/opt/splunk/etc/system/local/outputs.conf   [tcpout]
/opt/splunk/etc/system/default/outputs.conf ackTimeoutOnShutdown = 30
/opt/splunk/etc/system/default/outputs.conf autoLBFrequency = 30
/opt/splunk/etc/system/default/outputs.conf autoLBVolume = 0
/opt/splunk/etc/system/default/outputs.conf blockOnCloning = true
/opt/splunk/etc/system/default/outputs.conf blockWarnThreshold = 100
/opt/splunk/etc/system/default/outputs.conf cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
/opt/splunk/etc/system/default/outputs.conf compressed = false
/opt/splunk/etc/system/default/outputs.conf connectionTTL = 0
/opt/splunk/etc/system/default/outputs.conf connectionTimeout = 20
/opt/splunk/etc/system/default/outputs.conf disabled = false
/opt/splunk/etc/system/default/outputs.conf dropClonedEventsOnQueueFull = 5
/opt/splunk/etc/system/default/outputs.conf dropEventsOnQueueFull = -1
/opt/splunk/etc/system/default/outputs.conf ecdhCurves = prime256v1, secp384r1, secp521r1
/opt/splunk/etc/system/default/outputs.conf enableOldS2SProtocol = false
/opt/splunk/etc/system/default/outputs.conf forceTimebasedAutoLB = false
/opt/splunk/etc/system/default/outputs.conf forwardedindex.0.whitelist = .*
/opt/splunk/etc/system/default/outputs.conf forwardedindex.1.blacklist = _.*
/opt/splunk/etc/system/default/outputs.conf forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker)
/opt/splunk/etc/system/local/outputs.conf   forwardedindex.3.blacklist = (_internal|_audit|_telemetry|_introspection)
/opt/splunk/etc/system/default/outputs.conf forwardedindex.filter.disable = false
/opt/splunk/etc/system/default/outputs.conf heartbeatFrequency = 30
/opt/splunk/etc/system/local/outputs.conf   indexAndForward = true
/opt/splunk/etc/system/default/outputs.conf maxConnectionsPerIndexer = 2
/opt/splunk/etc/system/default/outputs.conf maxFailuresPerInterval = 2
/opt/splunk/etc/system/default/outputs.conf maxQueueSize = auto
/opt/splunk/etc/system/default/outputs.conf readTimeout = 300
/opt/splunk/etc/system/default/outputs.conf secsInFailureInterval = 1
/opt/splunk/etc/system/default/outputs.conf sendCookedData = true
/opt/splunk/etc/system/default/outputs.conf sslQuietShutdown = false
/opt/splunk/etc/system/default/outputs.conf sslVersions = tls1.2
/opt/splunk/etc/system/default/outputs.conf tcpSendBufSz = 0
/opt/splunk/etc/system/default/outputs.conf useACK = false
/opt/splunk/etc/system/default/outputs.conf useClientSSLCompression = true
/opt/splunk/etc/system/default/outputs.conf writeTimeout = 300
/opt/splunk/etc/system/local/outputs.conf   [tcpout:external_system]
/opt/splunk/etc/system/local/outputs.conf   disabled = false
/opt/splunk/etc/system/local/outputs.conf   sendCookedData = false
/opt/splunk/etc/system/local/outputs.conf   server = <external_server>:<external_port>
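For comparison, a stripped-down local outputs.conf for sending raw (uncooked) data to the external listener might look like the sketch below; the group name, indexAndForward flag, and sendCookedData = false mirror the btool output above, and the server value is a placeholder. Whether you keep a defaultGroup depends on whether every event should go to the external system or only events selected via _TCP_ROUTING in props/transforms:

[tcpout]
defaultGroup = external_system
indexAndForward = true

[tcpout:external_system]
server = <external_server>:<external_port>
sendCookedData = false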
@richgalloway @isoutamo I should bring in some examples. My current query is:

index=plc source="middleware" sourcetype="plc:___" Tag = "Channel1*"
| dedup _time
| table _time Tag Value

This brings in a table with two different tags that we are currently monitoring. One is an incident and the other is a tag that specifies whether the time is within working hours or not. I want to be able to take the last scheduled event's value and apply it to every incident, rather than having the scheduled time populate within the incident column.
Apologies, I am quite new to Splunk so I'm not sure if this is possible. I have the following simple query:

| inputlookup appJobLogs
| where match(MessageText, "(?i)general error")
| rex mode=sed field=MessageText "s/, /\n/g"
| sort RunStartTimeStamp asc, LogTimeStamp asc, LogID ASC

This works and gets the data I need for the error I am after, but I want all associated values for the error by RunID. The headers are: Host, InvocationID, Name, LogID, LogTS, LogName, MessageID, MessageText, RunID, RunTS, RunName. I would like to do something like:

| inputlookup appJobLogs
| where RunID in [ | search appJobLogs | where match(MessageText, "(?i)general error") | fields RunID ]

I have tried various forms, and the closest I got was a join, which gave me the not-found fields (should be fixable) but is limited to 10,000 results, so that seems like the wrong solution.
Thanks for your reply. No, I want the exact opposite: I want to extract the entire text value. I received a truncated version, and I can use regex to extract the complete value. However, why did the value get truncated when retrieved from a summary index but not from the normal index?
The subsearch retrieves the DNS names from the lookup and renames the field so that it matches the field name used in the events. The format essentially expands to something like this: index=foo ((dns_query="value1") OR (dns_query="value2"))
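As a concrete, hypothetical sketch of that pattern (the lookup name dns_lookup and its dns_name column are made up), the outer search plus subsearch could look like:

index=foo
    [ | inputlookup dns_lookup
      | rename dns_name AS dns_query
      | fields dns_query
      | format ]

The | format at the end of the subsearch is what produces the ((dns_query="value1") OR (dns_query="value2")) expression that gets substituted into the outer search.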