All Posts


Hi all, we are currently facing an issue with our Splunk SOAR installation. Every time we open the playbook editor, it shows the errors in the screenshot below, and all the dropdown and search fields stop working (e.g. we're unable to choose apps or datatypes for the input). We have also tried reinstalling it (both v6.1.1 and v6.2.0). The service is running on a VM with Red Hat Enterprise Linux release 8.9. Do you have any suggestions on how we can solve this problem? Thanks for your help. Best regards
Hello, I would like to write a query that shows how long my equipment has been inactive and when it was inactive, preferably in a timechart. I would like to define inactive in two ways. The first is when x, y, and z each keep the same value (+/-50) for 10 seconds or more; for the sake of inactivity, 1000 counts as equal to anything from 950 to 1050. The second is when there has been no new event from a piece of equipment for more than 10 seconds. Any help would be very much appreciated. Below are some sample events and how long the equipment is active/inactive:

12:00:10 x=1000 y=500 z=300 equipmentID=1
12:00:15 x=1000 y=500 z=300 equipmentID=1
12:00:20 x=1025 y=525 z=275 equipmentID=1
12:00:25 x=1000 y=500 z=300 equipmentID=1 (20 seconds of inactivity)
12:00:30 x=1600 y=850 z=60 equipmentID=1
12:00:35 x=1600 y=850 z=60 equipmentID=1 (15 seconds of activity)
12:03:00 x=1650 y=950 z=300 equipmentID=1 (135 seconds of inactivity)
12:03:05 x=1850 y=500 z=650 equipmentID=1
12:03:10 x=2500 y=950 z=800 equipmentID=1
12:03:15 x=2500 y=950 z=400 equipmentID=1
12:03:20 x=2500 y=950 z=150 equipmentID=1 (15 seconds of activity)
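One possible starting point, as a rough sketch rather than a full answer (the index name is a placeholder, and this assumes one event per reading): use streamstats to pull the previous event's values per equipment, then flag a row as inactive when the readings barely moved or the gap is too long.

index=my_equipment_index
| sort 0 _time
| streamstats current=f window=1 last(x) as prev_x last(y) as prev_y last(z) as prev_z last(_time) as prev_time by equipmentID
| eval gap=_time-prev_time
| eval inactive=if((abs(x-prev_x)<=50 AND abs(y-prev_y)<=50 AND abs(z-prev_z)<=50) OR gap>10, 1, 0)
| timechart span=10s max(inactive) as inactive by equipmentID

streamstats with current=f window=1 hands each event the values of the one before it, which covers both definitions: the +/-50 comparison and the more-than-10-seconds gap.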
Hi all, I'm trying to extract part of a field. The field is named Computer and looks like MySrv.MyDomain.MySubDom1.com; MySubDom1 may or may not exist. I would like to extract everything after MySrv. I tried:

index=MyIndex host=MySrv | rex field=_raw "(?<domaine_test>(\.\w+))"

The result creates a new field Domain_test, but it stores only the first part, "MyDomain", and not the rest of the field. How can I do this? For example:

Computer = "MySrv.MyDomain.MySubDom1.com"
Result: Domain_test = "MyDomain.MySubDom1.com"
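A sketch of one way to do this, assuming the value is in the Computer field and the part to drop is always the first dot-separated segment:

index=MyIndex host=MySrv
| rex field=Computer "^[^.]+\.(?<Domain_test>.+)$"

[^.]+ consumes everything up to the first dot and the capture group takes the remainder, so Domain_test becomes "MyDomain.MySubDom1.com" whether or not the MySubDom1 segment is present.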
"I will have a table composed of Hostname, Dev (hostname of the development machine associated with the machine in the Hostname field), vulnerability (vulnerability associated with the machine in Hos... See more...
"I will have a table composed of Hostname, Dev (hostname of the development machine associated with the machine in the Hostname field), vulnerability (vulnerability associated with the machine in Hostname). The Dev field is only used to see if the machine in Hostname has a machine in development associated with it. I should verify that in my table there is not that machine (in this case in the hostname field) associated with the same vulnerability." HOSTNAME DEV VULNERABILITà PAPERINO pippo APACHE In this case, my machine "paperino" has a vulnerability "apache", and it also has a development machine associated with it. Therefore, I should verify that for the machine "Pippo" there isn't the same vulnerability HOSTNAME DEV VULNERABILITà PIPPO - APACHE If this row were present in my search, then in the row of the table above, I should write "YES" in my new field that I will create. because pippo have same vulnerability (apache )
Hello all, I am confused about which machines my inputs.conf files should be configured on.
1. I am currently operating under the assumption that inputs.conf files are primarily for the indexer. Is this correct?
2. If I update an inputs.conf file, do I need to push the updated file through my deployment server so that the inputs.conf files tied to the applications on the universal forwarders reflect the same changes made on the manager?
a. I have raw XML data populating and I wish to fix this so that it is easier to read. Currently there is no sourcetype in my inputs.conf. I believe applying an appropriate sourcetype in inputs.conf is the first step to fixing this problem.
b. There are multiple stanzas in inputs.conf. Do I need to apply a sourcetype to each of the stanzas that send XML logs, or is there a way to apply this change on a global scale?
3. Will someone please explain the difference between source and sourcetype? I have read the documentation on the matter and am still uncertain in my understanding.
Thanks for the help in advance!
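For reference, a minimal sketch of a monitor stanza with an explicit sourcetype (the path, sourcetype name, and index are placeholders):

[monitor:///var/log/myapp/events.xml]
sourcetype = my_app:xml
index = main
disabled = 0

Each [monitor://...] stanza takes its own sourcetype setting; if several stanzas should share settings, a [default] stanza at the top of the file can supply values that apply to all stanzas in that file.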
Try something like this:

| timechart span=1d sum(SuccessCount) as SuccessCount sum(FailedCount) as FailedCount
Hello, I have a Radial Gauge visualisation and want to specify a field as the range maximum value, with the following command:

<option name="charting.chart.rangeValues">[0,"1000",2000,5000]</option>

Is this possible? Thanks for helping, hitchmontana
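I don't believe a field name can be placed directly inside rangeValues, but a token usually can. A rough sketch under that assumption (the search, field, and token names here are made up for illustration):

<search>
  <query>index=my_index | stats max(my_limit_field) as maxval</query>
  <done>
    <set token="max_range">$result.maxval$</set>
  </done>
</search>
...
<option name="charting.chart.rangeValues">[0,1000,2000,$max_range$]</option>

Simple XML option values accept $token$ substitution, so setting the token from a search's <done> handler is typically how the upper bound is made dynamic.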
You could do this:

index="myindex" "mymodule*:" AND "P[" AND "F[" source="/tmp/*/log.srv23.*.bz2" OR source="/tmp/*/log.srv24.*.bz2"
| rex field=source "\/.*\/log\.(?<servername>\w+)."
| rex "P(?<PF>\[\d+\]\[\d+\])"
``` count by colour and server name ```
| stats count as _PF by PF servername
``` now collect by colour ```
| stats list(servername) as servername list(_PF) as count by PF

which would give you something like this; does this work?

PF      servername  count
red     host1       50
        host2       90
green   host1       40
        host2       90
purple  host1       50
yellow  host1       90
14 years later, I'm coming here to say THANK YOU! One of my backburner projects has been trying to figure out why we've been getting "cron-2" and "error.log-too_small" type sourcetypes for over a year now. Simply defining the sourcetype for each file, as you suggested, has fixed the issue.  You, @hulahoop, are a lifesaver.
@richgalloway How would I do it at index time? What changes need to be made to the line-breaking settings?
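If this is about event line breaking, a sketch of the usual props.conf settings (the sourcetype name and timestamp pattern are placeholders; this needs to live on whichever instance parses the data, i.e. the indexers or a heavy forwarder, and only affects newly indexed events):

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}

The first capture group in LINE_BREAKER marks the boundary between events, so this example assumes each new event starts with a yyyy-mm-dd timestamp.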
Hi Team, can we install a data collector node on a heavy forwarder? What care needs to be taken with this? What other services can be installed on the heavy forwarder?
Hi, I have a query that needs to compare the count of the PF field for two log files. I have two queries that create this table; the issue is that PF values that are equal in query1 and query2 need to show in the same row.

Current result:

hostname1  PF1     count1  hostname2  PF2     count2
host1      red     50      host2      yellow  90
host1      green   40      host2      green   90
host1      purple  50      host2      red     90

Expected result:

hostname1  PF1     count1  hostname2  PF2     count2
host1      red     50      host2      red     90
host1      green   40      host2      green   90
host1      purple  50      host2      -       -
host1      -       -       host2      yellow  90

Here is the query:

index="myindex" "mymodule*:" AND "P[" AND "F[" source="/tmp/*/log.srv23.*.bz2"
| rex field=source "\/.*\/log\.(?<servername>\w+)."
| rex "P(?<PF>\[\d+\]\[\d+\])"
| stats count as _PF by PF,servername
| stats list(_PF) as count list(PF) as PF by servername
| appendcols [search index="myindex" "mymodule*:" AND "P[" AND "F[" source="/tmp/*/log.srv24.*.bz2"
| rex field=source "\/.*\/log\.(?<servername>\w+)."
| rex "P(?<PF2>\[\d+\]\[\d+\])"
| stats count as _PF2 by PF2,servername
| stats list(_PF2) as count list(PF2) as PF2 by servername ]

Any idea? Thanks
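For what it's worth, a sketch of an alternative that avoids appendcols entirely: search both sources at once and group by PF, so equal PF values naturally land on the same row (placeholders as in your query).

index="myindex" "mymodule*:" AND "P[" AND "F[" (source="/tmp/*/log.srv23.*.bz2" OR source="/tmp/*/log.srv24.*.bz2")
| rex field=source "\/.*\/log\.(?<servername>\w+)."
| rex "P(?<PF>\[\d+\]\[\d+\])"
| chart count over PF by servername

chart produces one row per PF value with one count column per server, whereas appendcols just glues rows together by position, which is why your PF values drift out of alignment.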
I created an index in "manager-apps\_cluster\local\indexes.conf" on the cluster manager in a clustered environment. The index now appears in "peer-apps\_cluster\local\indexes.conf" on the cluster indexers, but I cannot select that index from other servers (the deployment server or the search heads' web UI). Please tell me how to create an index in a cluster environment.
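For what it's worth, a sketch of the usual flow (the index name is a placeholder): define the index on the manager and push it to the peers with a bundle apply.

[my_new_index]
homePath   = $SPLUNK_DB/my_new_index/db
coldPath   = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
repFactor  = auto

splunk apply cluster-bundle

Note that search heads and other non-peer instances build their index lists from their own configuration, so the index typically also needs to be declared on those instances before it shows up in their UI, even though no data is stored there.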
Hi, the jmx receiver may be worth a look. It would involve configuring Tomcat to create a JMX endpoint, then configuring the OTel jmx receiver to connect to that endpoint to gather metrics. https://docs.splunk.com/observability/en/gdi/opentelemetry/components/jmx-receiver.html
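A rough sketch of what the receiver block might look like (the jar path, port, and interval are assumptions; see the docs above for the supported target_system values):

receivers:
  jmx:
    jar_path: /opt/opentelemetry-jmx-metrics.jar
    endpoint: localhost:9012
    target_system: tomcat
    collection_interval: 10s

On the Tomcat side this assumes the JVM was started with the com.sun.management.jmxremote.* system properties so that a JMX endpoint is actually listening on that port.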
Hi, I'm working on a Splunk operations team.
Version: Enterprise 7.3.4, CentOS 7
Environment: 3 SH, 10 IDX cluster
Search Factor: 3
Replication Factor: 3

Why do tsidx files have different sizes within the same index? My situation: the tsidx files on one specific indexer are 2-3x larger than on the rest of the indexers, so that indexer's storage sometimes goes above 97% and indexing stops. After a Splunk instance restart, its storage remains at 86-87% for a few days (the rest of the indexers always remain at 86%). I checked the rawdata file sizes, but those are only affected by event count. Not all indexes are affected; 10 of 100 indexes are. I can't find the cause. Please help. If you need additional information, please let me know. Thank you.
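Not an answer, but a sketch of a diagnostic search that might narrow it down (replace the index name; dbinspect reports per-bucket sizes for each peer):

| dbinspect index=<affected_index>
| stats sum(sizeOnDiskMB) as totalMB count as buckets by splunk_server, state
| sort - totalMB

Comparing bucket counts and sizes per peer can show whether the extra space comes from many more buckets on that peer (uneven distribution or replication) or from a similar number of much larger buckets.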
The + quantifier is greedy, meaning it will match as many characters as possible, so you'll get everything from NameofTeam until the end of the data. To avoid that, use the non-greedy quantifier +? or, even better, change the pattern to match only up to the next quotation mark:

index=test | rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Team>.+?)\"},"

index=test | rex field=_raw "Key\": \"Owner\", \"ValueString\": \"(?<Team>[^\"]+)\"},"
@vishwa Re: mvstats, did you know that Splunk natively supports min/max/avg/sum on mvfields?
Hello, it looks like the Microsoft Graph Security add-on applies all tags to every event type, so the events are not correctly CIM-mapped. Has anyone filtered events based on the alert data and mapped them to the correct data models? For example, I need to sort the events and map them to the data models below:
Malware
IDS
Endpoint
Alert
Thx
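A sketch of the usual pattern for this kind of re-mapping; the eventtype name and the sourcetype/category values are assumptions, so adjust them to whatever the Graph Security events actually carry:

# eventtypes.conf
[msgraph_security_malware]
search = sourcetype=GraphSecurityAlert category=Malware

# tags.conf
[eventtype=msgraph_security_malware]
malware = enabled
attack = enabled

Scoping each tag set to a narrowly searched eventtype, instead of one broad eventtype covering the whole sourcetype, is what keeps the Malware tags off the IDS and Endpoint events.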
Yes, I opened a support case with Splunk and, as advised by support, I upgraded Splunk and it worked. Thanks!
Also, I can see the logs below in metrics.log:

/opt/splunkforwarder/var/log/splunk# grep -Ri blocked metrics.log*
metrics.log:02-20-2024 02:18:21.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=105, smallest_size=35
metrics.log:02-20-2024 02:27:30.654 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=107, smallest_size=36
metrics.log:02-20-2024 02:28:31.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=107, smallest_size=40
metrics.log:02-20-2024 03:01:03.654 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=120, largest_size=125, smallest_size=41
metrics.log:02-20-2024 03:13:15.656 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=105, smallest_size=32
metrics.log:02-20-2024 03:21:23.654 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=107, smallest_size=36
metrics.log:02-20-2024 03:27:29.653 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=121, largest_size=123, smallest_size=38
metrics.log:02-20-2024 03:31:33.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=105, smallest_size=35
metrics.log:02-20-2024 03:57:59.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=105, smallest_size=35
metrics.log.1:02-19-2024 21:45:53.652 +1100 INFO Metrics - group=knowledgebundle_replication, name=blocked_search_metrics, app=none, user=none, elapsed_ms=18446744073709551615
metrics.log.1:02-19-2024 22:07:14.652 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=106, smallest_size=33
metrics.log.1:02-19-2024 22:27:34.653 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=100, largest_size=112, smallest_size=35
metrics.log.1:02-19-2024 22:56:02.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=111, smallest_size=32
metrics.log.1:02-19-2024 22:57:03.653 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=120, largest_size=125, smallest_size=42
metrics.log.1:02-19-2024 23:18:24.654 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=101, largest_size=106, smallest_size=33
metrics.log.1:02-20-2024 00:08:13.652 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=111, smallest_size=37
metrics.log.1:02-20-2024 00:21:26.652 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=111, smallest_size=37
metrics.log.1:02-20-2024 00:44:49.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=100, largest_size=105, smallest_size=39
metrics.log.1:02-20-2024 00:49:54.655 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=119, largest_size=129, smallest_size=40
metrics.log.1:02-20-2024 01:25:29.654 +1100 INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=121, largest_size=131, smallest_size=37
metrics.log.1:02-20-2024 01:27:31.654 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=102, largest_size=111, smallest_size=29
metrics.log.1:02-20-2024 01:33:37.653 +1100 INFO Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=101, largest_size=111, smallest_size=33
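In case it helps with triage, a sketch of a search over the internal index that charts these blocked-queue events (the host filter is a placeholder; this assumes the forwarder's _internal logs reach your indexers):

index=_internal host=<forwarder_host> source=*metrics.log* group=queue blocked=true
| timechart span=10m count by name

Charting which queue blocks, and when, helps show whether the back pressure is downstream (network or indexers) or local to parsing on the forwarder.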