All Posts



Try something like this:
| timechart span=1d sum(SuccessCount) as SuccessCount sum(FailedCount) as FailedCount
Hello, I have a Radial Gauge visualisation and want to specify a field as the range maximum value, using the following option: <option name="charting.chart.rangeValues">[0,"1000",2000,5000]</option>  Is this possible? Thanks for helping, hitchmontana
You could do this:

index="myindex" "mymodule*:" AND "P[" AND "F[" source="/tmp/*/log.srv23.*.bz2" OR source="/tmp/*/log.srv24.*.bz2"
| rex field=source "\/.*\/log\.(?<servername>\w+)."
| rex "P(?<PF>\[\d+\]\[\d+\])"
``` count by colour and server name ```
| stats count as _PF by PF servername
``` now collect by colour ```
| stats list(servername) as servername list(_PF) as count by PF

which would give you something like this:

PF      servername  count
red     host1       50
        host2       90
green   host1       40
        host2       90
purple  host1       50
yellow  host1       90

Does this work?
14 years later, I'm coming here to say THANK YOU! One of my backburner projects has been trying to figure out why we've been getting "cron-2" and "error.log-too_small" type sourcetypes for over a year now. Simply defining the sourcetype for each file, as you suggested, has fixed the issue.  You, @hulahoop, are a lifesaver.
@richgalloway How can this be done at index time? What changes need to be made in the line-breaking settings?
Hi Team, can we install a data collector node on a heavy forwarder? What precautions need to be taken with this? What other services can be installed on the heavy forwarder?
Hi, I have a query that needs to compare the count of the PF field across two log files. In Splunk I have two queries that create this table; the issue is that rows where "PF" is equal in query1 and query2 need to appear in the same row.

current result:
hostname1 PF1    count1 hostname2 PF2    count2
host1     red    50     host2     yellow 90
host1     green  40     host2     green  90
host1     purple 50     host2     red    90

expected result:
hostname1 PF1    count1 hostname2 PF2    count2
host1     red    50     host2     red    90
host1     green  40     host2     green  90
host1     purple 50     host2     -      -
host1     -      -      host2     yellow 90

here is the query:
index="myindex" "mymodule*:" AND "P[" AND "F[" source="/tmp/*/log.srv23.*.bz2"
| rex field=source "\/.*\/log\.(?<servername>\w+)."
| rex "P(?<PF>\[\d+\]\[\d+\])"
| stats count as _PF by PF,servername
| stats list(_PF) as count list(PF) as PF by servername
| appendcols [search index="myindex" "mymodule*:" AND "P[" AND "F[" source="/tmp/*/log.srv24.*.bz2"
  | rex field=source "\/.*\/log\.(?<servername>\w+)."
  | rex "P(?<PF2>\[\d+\]\[\d+\])"
  | stats count as _PF2 by PF2,servername
  | stats list(_PF2) as count list(PF2) as PF2 by servername ]

Any idea? Thanks
I created an index in "manager-app\_cluster\local\indexes.conf" on the cluster master in a cluster environment. The index is now displayed in "peer_apps\_cluster\local\indexes.conf" of the cluster indexer, but I cannot select that index from other servers (deploy or sh's WebUI). Please tell me how to create an index in a cluster environment.
Hi,

The JMX receiver may be worth a look. It would involve configuring Tomcat to expose a JMX endpoint, then configuring the OTel JMX receiver to connect to that endpoint to gather metrics. https://docs.splunk.com/observability/en/gdi/opentelemetry/components/jmx-receiver.html
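For reference, a minimal sketch of what that receiver configuration might look like. The jar path, endpoint, and interval below are placeholders (assumptions, not values from this thread); check the linked docs for the options your Collector version supports.

```yaml
receivers:
  jmx:
    # Path to the OpenTelemetry JMX metric gatherer jar (placeholder path).
    jar_path: /opt/opentelemetry-jmx-metrics.jar
    # JMX endpoint exposed by Tomcat (hypothetical host:port).
    endpoint: localhost:9010
    # Built-in metric set for Tomcat.
    target_system: tomcat
    collection_interval: 10s
```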
Hi, I'm on a Splunk operations team.
Version: Enterprise 7.3.4, CentOS 7.
Environment: 3 SH, 10-indexer cluster. Search Factor: 3, Replication Factor: 3.

Why do tsidx file sizes differ within the same index? My situation: the tsidx files on one specific indexer are 2-3x larger than on the rest of the indexers, so that indexer's storage sometimes goes above 97% and indexing stops. After a Splunk instance restart, storage stays at 86-87% for a few days (the other indexers always remain at 86%). I checked rawdata file sizes, but those are only affected by event count. Not all indexes are affected; 10 of 100 indexes are. I can't find the cause. Help me. If you need additional information, please let me know. Thank you.
The + quantifier is greedy, meaning it will match as many characters as possible, so you'll get everything from NameofTeam until the end of the data. To avoid that, use the non-greedy quantifier +?, or, even better, change the pattern to match up to the next quotation mark.

index=test | rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Team>.+?)\"},"

index=test | rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Team>[^\"]+)\"},"
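The difference is easy to see outside Splunk. Here is a small Python sketch (Python's re module rather than Splunk's rex, but the quantifier behaviour is the same) run against the sample event from the question:

```python
import re

raw = ('{"Key": "OtherKey", "ValueString": "OtherValue"}, '
       '{"Key": "Owner", "ValueString": "NameofTeam"}, '
       '{"Key": "OtherKey", "ValueString": "OtherValue"},')

# Greedy: .+ runs to the end of the event, then backtracks to the LAST "},
greedy = re.search(r'Key": "Owner", "ValueString": "(?P<Team>.+)"},', raw)

# Non-greedy: .+? stops at the FIRST "}, after the team name.
lazy = re.search(r'Key": "Owner", "ValueString": "(?P<Team>.+?)"},', raw)

# Negated class: match anything that is not a quotation mark.
negated = re.search(r'Key": "Owner", "ValueString": "(?P<Team>[^"]+)"},', raw)

print(greedy.group('Team'))   # spills into the following key/value pairs
print(lazy.group('Team'))     # NameofTeam
print(negated.group('Team'))  # NameofTeam
```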
@vishwa re: mvstats - did you know that Splunk natively supports min/max/avg/sum on mv fields?
Hello, it looks like Microsoft Graph Security adds all tags to every event type, so events are not correctly CIM-mapped. Has anyone filtered events based on alert data and mapped them to the correct data models? For example, I need to sort by event type and map to the data models below: Malware - IDS - Endpoint - Alert. Thx
Yes, I opened a support case with Splunk and, as support advised, I upgraded Splunk and it worked. Thanks!
Also, I can see the below logs in metrics.log:

/opt/splunkforwarder/var/log/splunk# grep -Ri blocked metrics.log*
metrics.log:02-20-2024 02:18:21.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=105, smallest_size=35
metrics.log:02-20-2024 02:27:30.654 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=107, smallest_size=36
metrics.log:02-20-2024 02:28:31.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=107, smallest_size=40
metrics.log:02-20-2024 03:01:03.654 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=120, largest_size=125, smallest_size=41
metrics.log:02-20-2024 03:13:15.656 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=105, smallest_size=32
metrics.log:02-20-2024 03:21:23.654 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=107, smallest_size=36
metrics.log:02-20-2024 03:27:29.653 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=121, largest_size=123, smallest_size=38
metrics.log:02-20-2024 03:31:33.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=105, smallest_size=35
metrics.log:02-20-2024 03:57:59.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=105, smallest_size=35
metrics.log.1:02-19-2024 21:45:53.652 +1100 INFO Metrics - group=knowledgebundle_replication, name=blocked_search_metrics, app=none, user=none, elapsed_ms=18446744073709551615
metrics.log.1:02-19-2024 22:07:14.652 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=106, smallest_size=33
metrics.log.1:02-19-2024 22:27:34.653 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=100, largest_size=112, smallest_size=35
metrics.log.1:02-19-2024 22:56:02.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=111, smallest_size=32
metrics.log.1:02-19-2024 22:57:03.653 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=120, largest_size=125, smallest_size=42
metrics.log.1:02-19-2024 23:18:24.654 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=101, largest_size=106, smallest_size=33
metrics.log.1:02-20-2024 00:08:13.652 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=111, smallest_size=37
metrics.log.1:02-20-2024 00:21:26.652 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=111, smallest_size=37
metrics.log.1:02-20-2024 00:44:49.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=105, smallest_size=39
metrics.log.1:02-20-2024 00:49:54.655 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=119, largest_size=129, smallest_size=40
metrics.log.1:02-20-2024 01:25:29.654 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=121, largest_size=131, smallest_size=37
metrics.log.1:02-20-2024 01:27:31.654 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=111, smallest_size=29
metrics.log.1:02-20-2024 01:33:37.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=101, largest_size=111, smallest_size=33
Hi guys, I am very new to Splunk Cloud and how things work here. Our current setup is:

1. UF (Linux) -> Heavy Forwarder (on-prem) -> Indexer/Search Head (Splunk Cloud)
2. Created a new index quifapp on Splunk Cloud.
3. The UF is already connected to the HF (just a dummy connection; verified that it's sending _internal logs to Splunk Cloud), as can be seen from the logs:
02-20-2024 11:22:11.394 +1100 INFO AutoLoadBalancedConnectionStrategy [566068 TcpOutEloop] - Found currently active indexer. Connected to idx=192.168.23.32:9997:0, reuse=1
4. The new request is to forward logs from /app/quif/quif.log to Splunk Cloud.
5. I have put the required config under /opt/splunkforwarder/etc/apps/quif/local, which has two files:

# cat inputs.conf
[monitor:///app/quif/quif.log*]
sourcetype=quif_requests
disabled=0
index=quifapp

# cat props.conf
[quif_requests]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
disabled = false
pulldown_type = true
EXTRACT-AgentId = ^\w+:\s+(?P<AgentId>\w+)

6. I restarted the SplunkForwarder but can't see any logs coming into the Cloud.

Is there any additional config required at any level? How can I troubleshoot?
Hi @bowesmana, thank you!!!! This query worked:

| tstats count where index=xxx host=host1 sourcetype=newsourcetype by PREFIX(type:) _time
| rename type: as Types
| timechart span=1d values(count) by Types
| addtotals
@ITWhisperer is right, but I've used a time as a fraction to show on the Y axis, something like:

| rex field=End "\d+/\d+/\d+ (?<h>\d+):(?<m>\d+)\s+"
| eval t=round(h+(m/100),2)
| chart max(t) over Date by System

It's a bit of a kludge, as it represents 10:50 as 10.50, so the point sits half way between 10 and 11. You could represent it as a true fraction, i.e. divide by 60 instead of 100, but then the numbers are not so useful, i.e. 10:50 shows as 10.83.
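To make the trade-off concrete, here is a tiny Python sketch (not SPL, but the arithmetic mirrors the eval above) comparing the /100 kludge with the true /60 fraction for 10:50:

```python
# Convert HH:MM (10:50) to a decimal "hour" two ways.
h, m = 10, 50

# Kludge: minutes as hundredths -- reads naturally (10.50)
# but is not linear on the chart axis.
kludge = round(h + m / 100, 2)

# True fraction: minutes over 60 -- plots in the right place
# on the axis but reads oddly (10.83).
true_fraction = round(h + m / 60, 2)

print(kludge)         # 10.5
print(true_fraction)  # 10.83
```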
I'm not sure why rex is properly matching the beginning of the value I am looking for (NameofTeam), but it also matches and includes everything after it. As I understand it, my search should stop matching when it reaches "}, after matching the team name. What am I doing wrong?

index=test | rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Team>.+)\"},"

Sample data:
{"Key": "OtherKey", "ValueString": "OtherValue"}, {"Key": "Owner", "ValueString": "NameofTeam"}, {"Key": "OtherKey", "ValueString": "OtherValue"},

Expected output: NameofTeam
Actual output: NameofTeam"}, {"Key": "OtherKey", "ValueString": "OtherValue"},
Yes, I was looking for that where statement.  My mistake, I tried using |search which didn't work.  Thanks!