All Posts

I'm using a modified search from splunksearches.com to get the events from the past two days and return the difference, for all of the indexes and sourcetypes (where they exist) in the testlookup. While it works, the index and sourcetype do not line up with the results. I found that map handles this SPL a little differently than a normal search; the location of the stats command had to be moved to return the same results. My question: is there a way to modify the SPL so the index/sourcetype lines up with the results? I'm pretty sure I'll eventually get it, but I've already spent enough time on this. Thanks.

testlookup has the columns index and sourcetype.

```
| inputlookup testlookup
| eval index1=index
| eval sourcetype1=if(isnull(sourcetype),"","sourcetype="+sourcetype)
| appendpipe
    [| map search="search index=$index1$ earliest=-48h latest=-24h
        | bin _time span=1d
        | eval window=\"Yesterday\"
        | stats count by _time window
        | append
            [| search index=$index1$ earliest=-24h
            | eval window=\"Today\"
            | bin _time span=1d
            | stats count by _time window
            | eval _time=(_time-(60*60*24))]
        | timechart span=1d sum(count) by window
        | eval difference = abs(Yesterday - Today)"]
| table index1 sourcetype1 Yesterday Today difference
```

| index1 | sourcetype1 | Yesterday | Today | difference |
|--------|-------------|-----------|-------|------------|
| test1  | st_test1    | 10        | 20    | 10         |
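One way to get each result row to carry its own index and sourcetype (a rough sketch, not tested against this lookup): drop the appendpipe, run map directly over the lookup rows, and re-attach the lookup fields inside the mapped search with eval, since map substitutes $field$ tokens from the row that spawned each subsearch:

```
| inputlookup testlookup
| eval index1=index
| eval sourcetype1=if(isnull(sourcetype),"",sourcetype)
| map maxsearches=100 search="search index=$index1$ earliest=-48h latest=-24h
    | bin _time span=1d
    | eval window=\"Yesterday\"
    | stats count by _time window
    | append
        [| search index=$index1$ earliest=-24h
        | eval window=\"Today\"
        | bin _time span=1d
        | stats count by _time window
        | eval _time=_time-86400]
    | timechart span=1d sum(count) by window
    | eval difference=abs(Yesterday-Today)
    | eval index1=\"$index1$\", sourcetype1=\"$sourcetype1$\""
| table index1 sourcetype1 Yesterday Today difference
```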
Hi all, we have been facing some errors on the Splunk indexers, where it says something like the below:

```
Failed processing http input, token name=<HECtoken>, channel=n/a, source_IP=, reply=9, events_processed=62, http_input_body_size=47326, parsing_err="Server is busy"
```

I found in some discussions that increasing queue sizes may help. We are indexing ~400GB per day, so it makes sense to increase the queue sizes, as the default values might not be good enough in this case. However, the Splunk docs don't have a detailed explanation of which queues can be set in server.conf and what proportions we need to consider. Can someone help with understanding this?
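For what it's worth, pipeline queue sizes are set per queue in server.conf on the indexers. A minimal sketch with the standard pipeline queue stanza names (the sizes below are illustrative assumptions, not sizing advice for ~400GB/day):

```
# server.conf on each indexer; restart Splunk after changing
[queue=parsingQueue]
maxSize = 10MB

[queue=aggQueue]
maxSize = 10MB

[queue=typingQueue]
maxSize = 10MB

[queue=indexQueue]
maxSize = 50MB
```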
You may be able to adjust the props.conf settings to change how events are ingested.  Can you share raw events?
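For example, event breaking and timestamp recognition are usually tuned with a sourcetype stanza along these lines (illustrative settings only; the stanza name and time format are assumptions until the raw events are shared):

```
# props.conf
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```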
@ITWhisperer Yes, correct, but I have products for each Hostname which need to be shown in the dropdown.

Hostname A = Product A, Product B, Product C, etc.
Hostname B = Product X, Product Y, Product Z, etc.

So, depending on the Hostname, the products need to be populated in the dropdown.
```
index=my_index source="/var/log/nginx/access.log"
    [| makeresults
    | addinfo
    | bin info_min_time as earliest span=15m
    | bin info_max_time as latest span=15m
    | table earliest latest]
| bin _time span=15m
| stats avg(request_time) as Average_Request_Time by _time
| streamstats count as weight
| eval alert=if(Average_Request_Time>1,weight,0)
| stats sum(alert) as alert
| where alert==1
```
Thanks again @ITWhisperer. Is there any way to restrict the query to the previous two time bins? The cron scheduler doesn't fire exactly on the hour, so I'm getting 3 bins, as you said. Thinking of running at 1:05pm: if that could get the 12:30-12:45 and 12:45-1:00 bins, I think that would work well.
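One possible approach (a sketch, reusing my_index and request_time from the earlier search): snap the time window to 15-minute boundaries with relative-time modifiers instead of deriving it from addinfo. At 1:05pm, earliest=-30m@15m resolves to 12:30 and latest=@15m to 1:00, which yields exactly the two most recent complete bins:

```
index=my_index source="/var/log/nginx/access.log" earliest=-30m@15m latest=@15m
| bin _time span=15m
| stats avg(request_time) as Average_Request_Time by _time
```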
Please don't post the same question twice.  Please delete one of them.
For anyone else: the search below eventually worked the way I wanted, although perhaps there is a more efficient way to do the same thing!

````
| tstats max(_indextime) as indextime WHERE earliest=-7d latest=now() index=* BY sourcetype index _time span=1h
```Look back over a 7 day window, and get the typical number of hours between indextimes, as well as the number of hours seen```
| sort 0 + index sourcetype indextime
| streamstats window=2 range(indextime) as range_indextime by sourcetype index
| eval range_indextime=range_indextime/60/60
| stats max(indextime) as last_indextime dc(indextime) as hour_count_over_5_days avg(range_indextime) as range_based_spacing by sourcetype index
| eval now=now()
| eval average_hour_spacing=120/hour_count_over_5_days
| eval hours_since_last_seen=if(isnotnull(hours_since_last_seen),hours_since_last_seen,abs((now-last_indextime)/60/60))
```Compare the time since we last saw indexes, and determine if it is likely late or not.```
| eval is_late=case(
    ((range_based_spacing<=1 AND hours_since_last_seen>=1.5 AND average_hour_spacing<=1)
     OR (range_based_spacing<=6 AND hours_since_last_seen>=8 AND average_hour_spacing<=6)
     OR (range_based_spacing<=12 AND hours_since_last_seen>=15 AND average_hour_spacing<=12)
     OR (range_based_spacing<=24 AND hours_since_last_seen>=36)
     OR isnull(last_indextime)) AND hour_count_over_5_days>1, "yes",
    (hours_since_last_seen>24 AND hour_count_over_5_days<=1), "maybe",
    1=1, "no")
| eval last_indextime=strftime(last_indextime,"%Y-%m-%dT%H:%M")
| fields - now
````
Hello everyone, I am trying to send syslog data to my Edge Processor. It is my first time, and it seems it is not as simple as Splunk suggests. I am sending the data to port 514 TCP, which is listening, and the Edge Processor service is up and seems to be working. With a tcpdump it appears that something reaches port 514; here is an example of the output:

```
root@siacemsself01:/splunk-edge/etc# tcpdump -i any dst port 514 -Ans0
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
12:00:33.644148 ens32 In IP 10.100.11.46.34344 > 10.100.11.237.514: Flags [.], ack 791814934, win 502, options [nop,nop,TS val 441690529 ecr 2755011762], length 0
E..43.@.@... d.. d...(...^../2#......S..... .S...6$.
```

But nothing appears as inbound data in the instance section. I also found this in the edge.log file:

```
2024/02/20 11:40:33 workload exit: collector failed to start in idle mode, stuck in closing/closed state
{"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"teleport/plugin.go:100","message":"starting plugin","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"teleport/plugin.go:179","message":"starting collector in idle mode","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"logging/redactor.go:55","message":"startup package settings","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","settings":{}}
{"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"teleport/plugin.go:198","message":"waiting new connector to start","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"config/conf_map_factory.go:127","message":"settings is empty. returning nop configuration map","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"WARN","time":"2024-02-20T11:40:49.752Z","location":"logging/redactor.go:50","message":"unable to clone map","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","error":"json: unsupported type: map[interface {}]interface {}"}
{"level":"INFO","time":"2024-02-20T11:40:49.753Z","location":"service@v0.92.0/telemetry.go:86","message":"Setting up own telemetry...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.753Z","location":"service@v0.92.0/telemetry.go:203","message":"Serving Prometheus metrics","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","address":"localhost:8888","level":"Basic"}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:151","message":"Starting otelcol-acies...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","Version":"92e64ca1","NumCPU":2}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"extensions/extensions.go:34","message":"Starting extensions...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:177","message":"Everything is ready. Begin running and processing data.","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"ERROR","time":"2024-02-20T11:40:49.754Z","location":"otelcol@v0.92.0/collector.go:255","message":"Asynchronous error received, terminating process","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","error":"listen tcp 127.0.0.1:8888: bind: address already in use","callstack":"go.opentelemetry.io/collector/otelcol.(*Collector).Run\n\tgo.opentelemetry.io/collector/otelcol@v0.92.0/collector.go:255\ncd.splunkdev.com/data-availability/acies/teleport.(*Plugin).startCollector.func1\n\tcd.splunkdev.com/data-availability/acies/teleport/plugin.go:193"}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:191","message":"Starting shutdown...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"extensions/extensions.go:59","message":"Stopping extensions...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:205","message":"Shutdown complete.","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"ERROR","time":"2024-02-20T11:40:49.754Z","location":"teleport/plugin.go:194","message":"failing to startup","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
{"level":"ERROR","time":"2024-02-20T11:40:49.852Z","location":"teleport/plugin.go:227","message":"collector failed to start in idle mode, stuck in closing/closed state","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
```

Any idea about what is happening?
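The fatal line here is "listen tcp 127.0.0.1:8888: bind: address already in use": the collector's internal Prometheus telemetry endpoint cannot start because something else on the host already holds port 8888, so the collector shuts down before it processes any syslog data. A quick way to see what owns the port (standard Linux tools, nothing Edge Processor-specific; a leftover Edge Processor instance is a common suspect):

```
# Show the process currently listening on 8888
ss -ltnp | grep ':8888'
# or
lsof -iTCP:8888 -sTCP:LISTEN
```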
Thank you for the information. It is very helpful!  
Please have a look at https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Configurethepeerindexes
inputs.conf is configured on the machine from which the data is forwarded, so it could be on a UF, HF, or indexer, or even on a search head if its logs are being forwarded. A sourcetype can be applied in the general ([default]) section, which will be used if individual stanzas don't specify one.

Please have a look at https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Wheretofindtheconfigurationfiles for more detailed information, and also here for an understanding of the data processing: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590781#M103485

The source is the name of the file, stream, or other input from which a particular event originates. The sourcetype determines how Splunk software processes the incoming data stream into individual events according to the nature of the data. In short, /var/log/apache.log is a source, and how that source file should be parsed is defined by the sourcetype access_combined.
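To make that concrete, a minimal monitor stanza might look like this (the index name is just an assumption for illustration):

```
# inputs.conf on the forwarder
[monitor:///var/log/apache.log]
sourcetype = access_combined
index = web
```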
Hi @damo66a, did you figure out this issue? I also have issues where PowerShell scripts don't seem to be triggered after a while (after working for days and weeks). Restarting the Splunk service helps, but after some time it stops again. I can't find any error messages either. Regards
Do you mean something like this?

```
| inputlookup abc.csv.gz
| where Hostname="$field1$"
```
Hi Team, I got a requirement from the Active Directory team to get the Event ID along with the Event Source. If you have any idea how to get these details, please post it. Thank you!
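As a starting point, and assuming the Windows events are already indexed with the standard Splunk Add-on for Microsoft Windows field extractions (EventCode for the Event ID, SourceName for the Event Source; the index name is a placeholder), something like this might work:

```
index=wineventlog
| stats count by EventCode, SourceName
| sort - count
```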
DropDown 1 has 3 static options. DropDown 2 needs to display the products of those servers:

ServerA
ServerB
ServerC

DropDown 2 uses a query, so I need to bring server A, B, or C into my token. Query:

```
| inputlookup abc.csv.gz
| where Hostname="ServerA"
```

```
<input type="dropdown" token="field1" searchWhenChanged="false">
  <label>License Server</label>
  <choice value="a">A</choice>
  <choice value="b">B</choice>
  <choice value="c">C</choice>
  <default>a</default>
  <change>
    <condition value="a">
      <unset token="c-details"></unset>
      <unset token="b-details"></unset>
      <set token="a-details"></set>
    </condition>
    <condition value="b">
      <unset token="a-details"></unset>
      <unset token="c-details"></unset>
      <set token="b-details"></set>
    </condition>
    <condition value="c">
      <unset token="a-details"></unset>
      <unset token="b-details"></unset>
      <set token="c-details"></set>
    </condition>
  </change>
</input>
```
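One possible way to drive the second dropdown from the first (a sketch: it assumes the first dropdown's choice values are the actual hostnames in the lookup, and that the lookup has Hostname and Product columns):

```
<input type="dropdown" token="product" searchWhenChanged="true">
  <label>Product</label>
  <fieldForLabel>Product</fieldForLabel>
  <fieldForValue>Product</fieldForValue>
  <search>
    <query>| inputlookup abc.csv.gz | where Hostname="$field1$" | stats count by Product</query>
  </search>
</input>
```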
| rex "\w+\.(?<domaine_test>[\.\w]+)"
Can you share the solution?
Thanks for a clearer description of your use case. Please try this:

```
| eventstats values(Hostname) as hosts by vulnerability
| eval patch=if(isnotnull(mvfind(hosts, dev)), "Yes", "No")
```
Hello, I have a multi-site cluster at version 9.0.1, with several indexers, SHs, and HFs/UFs. The Monitoring Console is configured on the Cluster Manager, and "Forwarder Monitoring" is enabled, which allows me to see the status of the forwarders. What is missing is the possibility to select the HFs in the Resource Usage section of the Monitoring Console; they are not available there. How can I get them to appear in Resource Usage in the Monitoring Console?

Thank you, Andrea