All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Please don't post the same question twice; please delete one of them.
For anyone else - the below search eventually worked the way I wanted, although perhaps there is a more efficient way to do the same thing!

| tstats max(_indextime) as indextime WHERE earliest=-7d latest=now() index=* BY sourcetype index _time span=1h
```Look back over a 7 day window, and get the typical number of hours between indextimes, as well as the number of hours seen```
| sort 0 + index sourcetype indextime
| streamstats window=2 range(indextime) as range_indextime by sourcetype index
| eval range_indextime=range_indextime/60/60
| stats max(indextime) as last_indextime dc(indextime) as hour_count_over_5_days avg(range_indextime) as range_based_spacing by sourcetype index
| eval now=now()
| eval average_hour_spacing=120/hour_count_over_5_days
| eval hours_since_last_seen=if(isnotnull(hours_since_last_seen),hours_since_last_seen,abs((now-last_indextime)/60/60))
```Compare the time since we last saw indexes, and determine if it is likely late or not.```
| eval is_late=case(((range_based_spacing<=1 AND hours_since_last_seen>=1.5 AND average_hour_spacing<=1) OR (range_based_spacing<=6 AND hours_since_last_seen>=8 AND average_hour_spacing<=6) OR (range_based_spacing<=12 AND hours_since_last_seen>=15 AND average_hour_spacing<=12) OR (range_based_spacing<=24 AND hours_since_last_seen>=36) OR isnull(last_indextime)) AND hour_count_over_5_days>1,"yes",(hours_since_last_seen>24 AND hour_count_over_5_days<=1),"maybe",1=1,"no")
| eval last_indextime=strftime(last_indextime,"%Y-%m-%dT%H:%M")
| fields - now
Hello everyone, I am trying to send syslog data to my Edge Processor for the first time, and it seems it is not as simple as Splunk suggests. I am sending the data to port 514 TCP, which is listening, and the Edge Processor service is up and seems to be working. A tcpdump shows that something is arriving on port 514; here is an example of the output:

root@siacemsself01:/splunk-edge/etc# tcpdump -i any dst port 514 -Ans0
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
12:00:33.644148 ens32 In IP 10.100.11.46.34344 > 10.100.11.237.514: Flags [.], ack 791814934, win 502, options [nop,nop,TS val 441690529 ecr 2755011762], length 0
E..43.@.@... d.. d...(...^../2#......S..... .S...6$.

But in the instance section nothing appears as inbound data. I also found this in the edge.log file:

2024/02/20 11:40:33 workload exit: collector failed to start in idle mode, stuck in closing/closed state
{"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"teleport/plugin.go:100","message":"starting plugin","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"teleport/plugin.go:179","message":"starting collector in idle mode","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"logging/redactor.go:55","message":"startup package settings","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","settings":{}}
{"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"teleport/plugin.go:198","message":"waiting new connector to start","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"config/conf_map_factory.go:127","message":"settings is empty. returning nop configuration map","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"WARN","time":"2024-02-20T11:40:49.752Z","location":"logging/redactor.go:50","message":"unable to clone map","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","error":"json: unsupported type: map[interface {}]interface {}"}
{"level":"INFO","time":"2024-02-20T11:40:49.753Z","location":"service@v0.92.0/telemetry.go:86","message":"Setting up own telemetry...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.753Z","location":"service@v0.92.0/telemetry.go:203","message":"Serving Prometheus metrics","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","address":"localhost:8888","level":"Basic"}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:151","message":"Starting otelcol-acies...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","Version":"92e64ca1","NumCPU":2}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"extensions/extensions.go:34","message":"Starting extensions...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:177","message":"Everything is ready. Begin running and processing data.","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"ERROR","time":"2024-02-20T11:40:49.754Z","location":"otelcol@v0.92.0/collector.go:255","message":"Asynchronous error received, terminating process","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","error":"listen tcp 127.0.0.1:8888: bind: address already in use","callstack":"go.opentelemetry.io/collector/otelcol.(*Collector).Run\n\tgo.opentelemetry.io/collector/otelcol@v0.92.0/collector.go:255\ncd.splunkdev.com/data-availability/acies/teleport.(*Plugin).startCollector.func1\n\tcd.splunkdev.com/data-availability/acies/teleport/plugin.go:193"}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:191","message":"Starting shutdown...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"extensions/extensions.go:59","message":"Stopping extensions...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:205","message":"Shutdown complete.","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"ERROR","time":"2024-02-20T11:40:49.754Z","location":"teleport/plugin.go:194","message":"failing to startup","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"ERROR","time":"2024-02-20T11:40:49.852Z","location":"teleport/plugin.go:227","message":"collector failed to start in idle mode, stuck in closing/closed state","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}

Any idea about what is happening?
Thank you for the information. It is very helpful!  
Please have a look at https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Configurethepeerindexes
inputs.conf is configured on the machine from which the data is forwarded, so it could be on a UF, HF, Indexer, or even on a Search Head if the logs are being forwarded from there. A sourcetype can also be applied in the global ([default]) section, which will be used if individual stanzas do not specify one. Please have a look at https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Wheretofindtheconfigurationfiles for more detailed information, and also here for an understanding of how data processing works: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590781#M103485 The source is the name of the file, stream, or other input from which a particular event originates. The sourcetype determines how Splunk software processes the incoming data stream into individual events according to the nature of the data. In short, /var/log/apache.log is a source, and how that source file should be parsed is defined by the sourcetype access_combined.
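As an illustrative sketch of what that looks like in practice (the monitor paths, index, and sourcetype names below are placeholders, not taken from this thread), a per-stanza sourcetype in inputs.conf on the forwarding machine might be:

# Hypothetical example - adjust paths, index, and sourcetype to your environment
[monitor:///var/log/apache/access.log]
sourcetype = access_combined
index = web

# A stanza without its own sourcetype falls back to any global/[default] setting
[monitor:///var/log/myapp/events.xml]
index = app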
Hi @damo66a Did you figure out this issue? I also have an issue where PowerShell scripts don't seem to be triggered after a while (after working for days and weeks). Restarting the splunk service helps, but after some time it stops again. I can't find any error messages either. Regards
Do you mean something like this? | inputlookup abc.csv.gz | where Hostname="$field1$"
Hi Team, I got a requirement from the Active Directory team to get the Event ID together with the Event Source. If you have any idea how to get these details, please post them. Thank you!!!
DropDown 1 - 3 static options. DropDown 2 needs to display the products of those servers: ServerA, ServerB, ServerC. DropDown 2 uses a query, and I need to bring server A, B, or C into my token. Query:

| inputlookup abc.csv.gz | where Hostname="ServerA"

<input type="dropdown" token="field1" searchWhenChanged="false">
  <label>License Server</label>
  <choice value="a">A</choice>
  <choice value="b">B</choice>
  <choice value="c">C</choice>
  <default>a</default>
  <change>
    <condition value="a">
      <unset token="c-details"></unset>
      <unset token="b-details"></unset>
      <set token="a-details"></set>
    </condition>
    <condition value="b">
      <unset token="a-details"></unset>
      <unset token="c-details"></unset>
      <set token="b-details"></set>
    </condition>
    <condition value="c">
      <unset token="a-details"></unset>
      <unset token="c-details"></unset>
      <set token="c-details"></set>
    </condition>
  </change>
</input>
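One possible sketch of a cascading second dropdown driven by the first token (this assumes the first dropdown's choice values are set to the actual hostnames, e.g. value="ServerA", and that the lookup has a Product column - both are assumptions, not confirmed by the thread):

<input type="dropdown" token="field2">
  <label>Product</label>
  <search>
    <!-- Hypothetical: Product column name is a guess; $field1$ must hold the hostname -->
    <query>| inputlookup abc.csv.gz | where Hostname="$field1$" | dedup Product | table Product</query>
  </search>
  <fieldForLabel>Product</fieldForLabel>
  <fieldForValue>Product</fieldForValue>
</input>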
| rex "\w+\.(?<domaine_test>[\.\w]+)"
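A quick way to sanity-check that rex against a sample value (the hostname here is just illustrative; note field=Computer is specified because rex defaults to _raw):

| makeresults
| eval Computer="MySrv.MyDomain.MySubDom1.com"
| rex field=Computer "\w+\.(?<domaine_test>[\.\w]+)"
| table Computer domaine_test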
Can you share the solution?
Thanks for a clearer description of your use case. Please try this:

| eventstats values(Hostname) as hosts by vulnerability
| eval patch=if(isnotnull(mvfind(hosts, dev)), "Yes", "No")
Hello, I have a multi-site cluster at version 9.0.1, with several Indexers, SHs, and HFs/UFs. The Monitoring Console is configured on the Cluster Manager, and "Forwarder Monitoring" is enabled, which allows me to see the status of the forwarders. What is missing is the possibility of selecting HFs in the Resource Usage section of the Monitoring Console; they are not available. How can I get them to appear in Resource Usage in the Monitoring Console? Thank you, Andrea
Hi all, we are currently facing an issue with our Splunk SOAR installation. Every time we open the playbook editor, it shows the errors in the screenshot below, and all the dropdown and search fields stop working (e.g. we're unable to choose apps or datatypes for the input). We have also tried to reinstall it (both v6.1.1 and v6.2.0). The service is running on a VM with Red Hat Enterprise Linux release 8.9. Do you have any suggestions on how we can solve this problem? Thanks for your help. Best regards
Hello, I would like to make a query in which I can see how long my equipment has been inactive and when it was inactive, preferably in a timechart. I would like to define inactive in two ways. One is when x, y, and z each keep the same value +/-50 for 10 seconds or more (in these events, 1000 counts as equal to anything between 950 and 1050 for the sake of inactivity). The second way is when there has been no new event from a piece of equipment for more than 10 seconds. Any help would be very much appreciated. Below are some sample events and how long the equipment is active/inactive:

12:00:10 x=1000 y=500 z=300 equipmentID=1
12:00:15 x=1000 y=500 z=300 equipmentID=1
12:00:20 x=1025 y=525 z=275 equipmentID=1
12:00:25 x=1000 y=500 z=300 equipmentID=1 (20 seconds of inactivity)
12:00:30 x=1600 y=850 z=60 equipmentID=1
12:00:35 x=1600 y=850 z=60 equipmentID=1 (15 seconds of activity)
12:03:00 x=1650 y=950 z=300 equipmentID=1 (135 seconds of inactivity)
12:03:05 x=1850 y=500 z=650 equipmentID=1
12:03:10 x=2500 y=950 z=800 equipmentID=1
12:03:15 x=2500 y=950 z=400 equipmentID=1
12:03:20 x=2500 y=950 z=150 equipmentID=1 (15 seconds of activity)
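Not a full answer, but a rough sketch of the second definition (gap between events) using streamstats; the index name is a placeholder and the approach is an untested assumption, not a confirmed solution:

index=my_equipment_index
| sort 0 equipmentID _time
| streamstats current=f last(_time) as prev_time by equipmentID
| eval gap=_time-prev_time
| eval inactive=if(gap>10, "yes", "no")
| timechart span=10s max(gap) as max_gap by equipmentID

The first definition (x, y, and z stable within +/-50) could be approached similarly, e.g. streamstats range(x) range(y) range(z) over a time window per equipmentID and flagging rows where all three ranges are <=100.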
Hi all, I'm trying to extract a part of a field. The field is named Computer and looks like MySrv.MyDomain.MySubDom1.com. MySubDom1 may or may not exist. I would like to extract everything after MySrv. I tried with:

index=MyIndex host=MySrv | rex field=_raw "(?<domaine_test>(\.\w+))"

The result creates a new field, domaine_test, but it stores only the first part, "MyDomain", and not the rest of the field. How can I do this? For example: Computer = "MySrv.MyDomain.MySubDom1.com" should give domaine_test = "MyDomain.MySubDom1.com"
"I will have a table composed of Hostname, Dev (hostname of the development machine associated with the machine in the Hostname field), vulnerability (vulnerability associated with the machine in Hos... See more...
"I will have a table composed of Hostname, Dev (hostname of the development machine associated with the machine in the Hostname field), vulnerability (vulnerability associated with the machine in Hostname). The Dev field is only used to see if the machine in Hostname has a machine in development associated with it. I should verify that in my table there is not that machine (in this case in the hostname field) associated with the same vulnerability." HOSTNAME DEV VULNERABILITà PAPERINO pippo APACHE In this case, my machine "paperino" has a vulnerability "apache", and it also has a development machine associated with it. Therefore, I should verify that for the machine "Pippo" there isn't the same vulnerability HOSTNAME DEV VULNERABILITà PIPPO - APACHE If this row were present in my search, then in the row of the table above, I should write "YES" in my new field that I will create. because pippo have same vulnerability (apache )
Hello all, I am confused about which machines are meant to have my inputs.conf files configured.
1. I am currently operating under the assumption that inputs.conf files are primarily for the indexer. Is this correct?
2. If I update an inputs.conf file, do I need to push the updated file through my deployment server so that the inputs.conf files tied to the applications on the Splunk Universal Forwarders reflect the same changes made on the manager?
a. I have raw XML data populating, and I wish to fix this so that it is easier to read. Currently there is no sourcetype in my inputs.conf. I believe applying an appropriate sourcetype in inputs.conf is the first step to fixing this problem.
b. There are multiple stanzas in inputs.conf. Do I need to apply a sourcetype to each of the stanzas that have to do with sending XML logs, or is there a way to apply this change on a global scale?
Z. Will someone please explain the difference between source and sourcetype? I have read the documentation on the matter and am still uncertain in my understanding.
Thanks for the help in advance!
Try something like this | timechart span=1d sum(SuccessCount) as SuccessCount sum(FailedCount) as FailedCount