All Topics

We are gathering logs from various devices that contain security, performance, and availability-related information. These logs are all being sent to Splunk. We utilize both Splunk core and the ES App. Since we have to pay separately for both core and the ES App based on ingestion, we are exploring options to minimize costs. Is there a mechanism available for selecting which logs can be sent to the ES App for processing? If such an option exists, we would only need to send security-specific logs to the ES App, significantly reducing our Splunk ES App licensing costs. Splunk Enterprise Security
In my search I have a field (ResourceId) that contains various cloud resource values. One of these values is an InstanceId. A subsearch returns a list of "active" instances. What I ultimately need to do is filter out those InstanceIds in the ResourceId field that do NOT match the InstanceIds returned by the subsearch (the active instances), while keeping all other values in the ResourceId field.

Sample ResourceId values:
i-987654321abcdefg (active; WAS returned by the subsearch)
i-123abcde456abcde (inactive; was NOT returned by the subsearch)
bucket-name
sg-12423adssvd

Intended output:
i-987654321abcdefg
bucket-name
sg-12423adssvd

Search (in progress):

    index=main ResourceId=*
    | join InstanceId type=inner [search index=other type=instance earliest=-2h]
    | eval InstanceId=if(in(ResourceId, InstanceId), InstanceId, "NULL")
    | table InstanceId
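One possible approach, as a minimal sketch (it assumes the index=other events carry the active instance IDs in a field named InstanceId, and that instance IDs always start with "i-"): left-join the active list onto ResourceId, then keep every value that either is not an instance ID or was matched as active.

    index=main ResourceId=*
    | join type=left ResourceId
        [ search index=other type=instance earliest=-2h
          | rename InstanceId as ResourceId
          | eval active="yes"
          | fields ResourceId active ]
    | where NOT match(ResourceId, "^i-") OR active="yes"
    | table ResourceId

With the sample data this keeps i-987654321abcdefg, bucket-name and sg-12423adssvd and drops the inactive i-123abcde456abcde.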
Hello, I'm currently developing a Splunk app and am having trouble bundling saved searches to appear in the Search & Reporting app. My intention is to include a list of searches in my app package (in savedsearches.conf or elsewhere) that will appear in the Reports tab of S&R. I've done some digging but haven't found a solution that works. Is this possible? I'm developing in Splunk Enterprise 9.1.3.
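In case it helps, here is a minimal sketch of the two pieces that usually make packaged reports visible outside the app. The stanza name and search string are placeholders, and whether the reports show up in the Reports tab of Search & Reporting can also depend on the listing filter (e.g. "All" vs "This App's"):

    default/savedsearches.conf

    [Example bundled report]
    search = index=_internal log_level=ERROR | stats count by component
    dispatch.earliest_time = -24h@h
    dispatch.latest_time = now
    is_visible = 1

    metadata/default.meta

    [savedsearches]
    export = system
    access = read : [ * ], write : [ admin ]

The export = system line is what shares the saved searches globally so other apps, including Search & Reporting, can list them.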
I am trying to look at CPU and memory statistics on my indexers and search heads, but the index only ever goes back 15 days, almost to the hour, and I need to look at a specific date almost a month ago. Any ideas on why this could be and how I can get around it?
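If the statistics come from the _introspection index (an assumption on my part), its retention defaults to roughly two weeks, which would line up with the 15-day cutoff. A sketch to confirm how far back the data actually goes, plus an indexes.conf change on the instances that write that index to keep it longer going forward; data that has already been frozen cannot be brought back:

    | tstats min(_time) as earliest where index=_introspection by splunk_server
    | eval earliest=strftime(earliest, "%F %T")

    indexes.conf

    [_introspection]
    # keep ~60 days instead of the default ~14 (value in seconds)
    frozenTimePeriodInSecs = 5184000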
Hi. I have a single field for the date and time of an event, e.g. 2024-02-19T11:16:58.930104Z. I would like to have two fields, Date and Time, as well as one more calculated field I can use to find records not changed in the last 2 days (48 hours), whichever works better for the search. I tried

    | eval Date = strftime(policy_refresh_at, "%b-%d-%Y") | eval Time = strftime(policy_refresh_at, "%H:%M")

or

    | eval Date=substr(policy_refresh,10,1)

The results come back empty in both cases, so there is nothing to calculate on. Please advise. Thank you.
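A likely reason the evals come back empty is that strftime() expects epoch time while policy_refresh_at holds a string, so it needs a strptime() first. A minimal sketch; the time format string is my assumption based on the sample value and may need adjusting to the real data:

    | eval refresh_epoch = strptime(policy_refresh_at, "%Y-%m-%dT%H:%M:%S.%6N%Z")
    | eval Date = strftime(refresh_epoch, "%b-%d-%Y")
    | eval Time = strftime(refresh_epoch, "%H:%M")
    | eval age_hours = round((now() - refresh_epoch) / 3600, 1)
    | where age_hours > 48

The age_hours field gives you the "not changed in the last 48 hours" filter directly from the epoch value.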
I am using | fields _raw to show the entire content of the source file as a single event. It works for most of my log files under 100 KB. For the occasional larger file, the search breaks the results into multiple events and misses out details. How can I fix it? Or is there another way to return the file contents? I know users can click Show Source in the event actions, but my search queries are part of a dashboard drilldown on file names.
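One likely cause (an assumption, since it depends on how the data is onboarded) is the per-event limits applied at index time: TRUNCATE caps an event at 10,000 bytes by default, and MAX_EVENTS caps line-merged events at 256 lines. A props.conf sketch for the sourcetype on the parsing tier; the sourcetype name is a placeholder and the change only affects data indexed afterwards:

    [my_whole_file_sourcetype]
    # 0 disables the per-event byte cap (default is 10000)
    TRUNCATE = 0
    # only relevant if SHOULD_LINEMERGE = true is in use (default cap is 256 lines)
    MAX_EVENTS = 100000

Very large single events can be slow to search and render, so it may be safer to raise these to a sane ceiling rather than disabling them outright.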
I need help writing a search query where the result from one query is passed on to a second query.

1. We import users from the Active Directory group into the Okta group, and the event eventType="group.user_membership.add" captures this JSON event. The following query gets me the group name and user name:

    index="indexName" eventType="group.user_membership.add"
    | spath "target{}.displayName"
    | rename target{}.displayName as grpID
    | eval groupName=mvindex(grpID, 1)
    | rename "target{}.alternateId" AS "targetId"
    | rename "target{}.type" AS "targetType"
    | eval target_user=mvindex(targetId, mvfind(targetType, "User"))
    | table target_user groupName

2. After the user is added to the Okta group, I want to find that user's authentications during a time range. I can separately find user authentications using eventType="user.authentication.sso", but this event doesn't have a group name:

    index="indexName" eventType="user.authentication.sso" target_user
    | stats count by date

How do I pass the user from the first query to the second query? I cannot use a subsearch since the main search eventType is not the same as the subsearch's. Basically, I want to create a report of authentications by group name/user name for the selected time range. Any help is appreciated.
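One way around the subsearch limitation is to search both event types in a single query and let eventstats copy the group name onto the authentication events per user. A minimal sketch; it assumes the authenticating user in the SSO events can be normalized into the same target_user field (in Okta SSO events the user is often in actor.alternateId, hence the coalesce), so the field paths may need adjusting:

    index="indexName" (eventType="group.user_membership.add" OR eventType="user.authentication.sso")
    | spath
    | rename "target{}.displayName" as grpID, "target{}.alternateId" as targetId, "target{}.type" as targetType
    | eval groupName=if(eventType="group.user_membership.add", mvindex(grpID, 1), null())
    | eval target_user=coalesce(mvindex(targetId, mvfind(targetType, "User")), 'actor.alternateId')
    | eventstats values(groupName) as groupName by target_user
    | where eventType="user.authentication.sso"
    | bin _time span=1d
    | stats count by groupName, target_user, _time

The eventstats step attaches every group a user was added to (within the search window) to that user's SSO events, which then feed the final stats by group/user/day.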
We have a search where one of the fields from the base search is passed to a REST API using the map command.

    <Base Search>
    | stats count min(_time) as firstTime max(_time) as lastTime values(user) as user by user, src_ip, activity, riskLevel
    | map maxsearches=100 search="| rest splunk_server=local /services/App/.../ ioc="$src_ip$"

But after this search, only the results returned by the REST API are shown. How can I include some of the fields from the original search, e.g. user and activity, so that they can later be used in a table? I tried adding the field using eval right before the REST call, but that doesn't seem to be working:

    eval activity=\"$activity$\" | rest

I also tried using multireport, but only the first search is considered:

    | multireport [ table user, src_ip, activity, riskLevel] [| map maxsearches=100 search="| rest splunk_server=local /services/App/.../ ioc="$src_ip$"]

Is there a way to achieve this? The API call itself returns a set of fields which I am extracting using spath, but I also want to keep some of the original ones for added context.

Thanks,
~Abhi
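A pattern that may work is to re-attach the outer row's values inside the map search, after the rest call; rest is a generating command and has to come first in that search, which could be why putting the eval before it didn't help. A sketch (the endpoint path is elided in the original post, so it stays a placeholder):

    <Base Search>
    | stats count min(_time) as firstTime max(_time) as lastTime values(user) as user by user, src_ip, activity, riskLevel
    | map maxsearches=100 search="| rest splunk_server=local /services/App/.../ ioc=\"$src_ip$\"
        | eval user=\"$user$\", activity=\"$activity$\", src_ip=\"$src_ip$\", riskLevel=\"$riskLevel$\""

Each mapped result row then carries the originating user, activity, src_ip and riskLevel alongside whatever the REST endpoint returned.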
Hello to everyone! I have a Windows server with a Splunk UF installed that consumes MS Exchange logs. These logs are stored in CSV format. The Splunk UF settings look like this:

props.conf

    [exch_file_httpproxy-mapi]
    ANNOTATE_PUNCT = false
    BREAK_ONLY_BEFORE_DATE = true
    INDEXED_EXTRACTIONS = csv
    initCrcLength = 2735
    HEADER_FIELD_LINE_NUMBER = 1
    MAX_TIMESTAMP_LOOKAHEAD = 24
    SHOULD_LINEMERGE = false
    TIMESTAMP_FIELDS = DateTime
    TRANSFORMS-no_column_headers = no_column_headers

transforms.conf

    [no_column_headers]
    REGEX = ^#.*
    DEST_KEY = queue
    FORMAT = nullQueue

Thanks to the data quality report on the indexer layer, I found out that this sourcetype has some timestamp issues. I investigated the problem by running a search on the search layer and found surprising event breaking. You can see an example in the attachment. The _raw data is OK and does not contain unexpected newline characters. What is wrong with my settings?
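Two things may be worth checking, offered as assumptions since I can't see the attachment. First, with INDEXED_EXTRACTIONS the structured parsing happens on the UF itself, and data the UF has already parsed is generally not re-parsed downstream, so the TRANSFORMS-based nullQueue routing may never be applied for this sourcetype; if the "#" comment lines are being indexed, they carry no DateTime value, which alone could explain the timestamp warnings. Second, timestamp recognition tends to be more reliable with an explicit TIME_FORMAT for the DateTime column. A props.conf sketch for the UF; the format string assumes the DateTime values look like 2024-02-19T11:16:58.930Z and should be adjusted to the real data:

    [exch_file_httpproxy-mapi]
    INDEXED_EXTRACTIONS = csv
    HEADER_FIELD_LINE_NUMBER = 1
    TIMESTAMP_FIELDS = DateTime
    # assumption: ISO 8601 with milliseconds and a trailing Z
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%Z
    MAX_TIMESTAMP_LOOKAHEAD = 24
    SHOULD_LINEMERGE = false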
Hello everyone, Unfortunately, from the license master server I cannot see anything in the dashboards on the license usage page. I have also tried the query below:

    index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*

But nothing, no results found. Could you please help me? Thanks in advance.
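A couple of checks that might narrow it down, as a sketch (run them on the license master, or on a search head that has it as a search peer). If the first returns nothing while the second returns data, license_usage.log is not reaching a searchable _internal index, which often points at forwarding or retention configuration on the license master itself:

    index=_internal source=*license_usage.log*
    | stats count by splunk_server, type

    | tstats count where index=_internal by splunk_server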
Hi, I am trying to deploy a new index to my indexer cluster via the Cluster Master and have followed the usual documentation on how to deploy via the master-apps folder. I have done this before and it has worked with no problem, but this time I have no idea why it is not working. When I make the change to indexes.conf and run the command "splunk validate cluster-bundle", it gives me no errors and then brings me back to my CLI, so I would presume it has validated it. Then I run the command "splunk show cluster-bundle-status" to check the bundle IDs, and the active bundle and the latest bundle still have the same IDs. It's as if Splunk is not recognising that a change has been made to the bundle and therefore cannot deploy it down to the indexers. I ran the command "splunk apply cluster-bundle" and it gave me the error below. However, when I checked splunkd.log on the CM and the indexers, there was no indication of a validation error, or any error for that matter. Is there anything that I am missing here? I just can't work out why it is not recognising that a change has been made and updating the bundle IDs to be pushed down. Thanks
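For what it's worth, here is a quick checklist I would run on the cluster manager, as a sketch (the _cluster app directory is only an example; the point is to confirm the edited indexes.conf really sits under etc/master-apps, since edits made elsewhere will not change the bundle checksum):

    ls -l $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
    $SPLUNK_HOME/bin/splunk validate cluster-bundle --check-restart
    $SPLUNK_HOME/bin/splunk show cluster-bundle-status
    $SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes

If the file timestamp shows your edit but the bundle checksum still never changes, that usually means the manager is reading a different copy of the file than the one being edited.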
I'm using a modified search from splunksearches.com to get the events from the past two days and return the difference, for all of the indexes and sourcetypes (if present) in the testlookup. While it works, the index and sourcetype do not line up with the results. Map, I found, handles this SPL a little differently than a normal search; the location of the stats command had to be moved to return the same results. My question is: is there a way to modify the SPL so the index/sourcetype lines up with the results? I'm pretty sure I'll eventually get it, but I've already spent enough time on this. Thanks.

testlookup: has the columns index and sourcetype

    | inputlookup testlookup
    | eval index1=index
    | eval sourcetype1=if(isnull(sourcetype),"","sourcetype="+sourcetype)
    | appendpipe [| map search="search index=$index1$ earliest=-48h latest=-24h | bin _time span=1d | eval window=\"Yesterday\" | stats count by _time window | append [| search index=$index1$ earliest=-24h | eval window=\"Today\" | bin _time span=1d | stats count by _time window | eval _time=(_time-(60*60*24))] | timechart span=1d sum(count) by window | eval difference = abs(Yesterday - Today)"]
    | table index1 sourcetype1 Yesterday Today difference

Current output (the index1/sourcetype1 values land on a different row than the counts):

    index1   sourcetype1   Yesterday   Today   difference
    test1    st_test1
                           10          20      10
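One way to make the identifiers line up is to drop the appendpipe and instead re-attach index1/sourcetype1 inside the mapped search itself, so every result row carries its own identity. A sketch under the same assumptions as the original (quoting inside map is fiddly and may need tweaking):

    | inputlookup testlookup
    | eval index1=index
    | eval stfilter=if(isnull(sourcetype), "", "sourcetype=" + sourcetype)
    | map maxsearches=50 search="search index=$index1$ $stfilter$ earliest=-48h latest=-24h
        | bin _time span=1d | eval window=\"Yesterday\" | stats count by _time window
        | append [ search index=$index1$ $stfilter$ earliest=-24h | eval window=\"Today\" | bin _time span=1d | stats count by _time window | eval _time=(_time-(60*60*24)) ]
        | timechart span=1d sum(count) by window
        | eval difference = abs(Yesterday - Today)
        | eval index1=\"$index1$\", sourcetype1=\"$stfilter$\""
    | table index1 sourcetype1 Yesterday Today difference

The final eval inside the map search is what stamps each row with the index and sourcetype it came from.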
Hi all, We have been facing some errors with our Splunk indexers, where it says something like the below:

```
Failed processing http input, token name=<HECtoken>, channel=n/a, source_IP=, reply=9, events_processed=62, http_input_body_size=47326, parsing_err="Server is busy"
```

I found in some discussions that increasing queue sizes may help. We are indexing ~400 GB per day, and it makes sense to increase the queue sizes, as the default values might not be good enough in this case. However, the Splunk docs don't have a detailed explanation of which queues can be set in server.conf and what proportions we need to consider. Can someone help me understand this?
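In case it helps, queue sizes are set in server.conf per queue name, e.g. parsingQueue, aggQueue, typingQueue and indexQueue. The values below are purely illustrative, not recommendations: "Server is busy" (reply=9) usually means a downstream queue is blocked, so it is worth confirming in the Monitoring Console which queue is actually filling before raising sizes, and HEC throughput can also be influenced by dedicatedIoThreads under the [http] stanza in inputs.conf.

    # server.conf on the indexers -- illustrative sizes only
    [queue=parsingQueue]
    maxSize = 10MB

    [queue=aggQueue]
    maxSize = 10MB

    [queue=typingQueue]
    maxSize = 10MB

    [queue=indexQueue]
    maxSize = 50MB

Bigger queues only buy buffering for bursts; if the indexing tier is persistently saturated, the bottleneck (often storage IOPS or CPU) has to be addressed directly.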
Hello everyone, I am trying to send syslog data to my Edge Processor. It is my first time doing this, and it seems it is not as simple as Splunk suggests. I am sending the data to port 514 TCP, which is listening; the Edge Processor service is up and seems to be working. With a tcpdump it looks like something is arriving on port 514; here is an example of the output:

    root@siacemsself01:/splunk-edge/etc# tcpdump -i any dst port 514 -Ans0
    tcpdump: data link type LINUX_SLL2
    tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
    listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
    12:00:33.644148 ens32 In IP 10.100.11.46.34344 > 10.100.11.237.514: Flags [.], ack 791814934, win 502, options [nop,nop,TS val 441690529 ecr 2755011762], length 0
    E..43.@.@... d.. d...(...^../2#......S..... .S...6$.

But in the instance section nothing appears as inbound data. I also found this in the edge.log file:

    2024/02/20 11:40:33 workload exit: collector failed to start in idle mode, stuck in closing/closed state
    {"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"teleport/plugin.go:100","message":"starting plugin","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
    {"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"teleport/plugin.go:179","message":"starting collector in idle mode","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
    {"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"logging/redactor.go:55","message":"startup package settings","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","settings":{}}
    {"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"teleport/plugin.go:198","message":"waiting new connector to start","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
    {"level":"INFO","time":"2024-02-20T11:40:49.752Z","location":"config/conf_map_factory.go:127","message":"settings is empty. returning nop configuration map","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
    {"level":"WARN","time":"2024-02-20T11:40:49.752Z","location":"logging/redactor.go:50","message":"unable to clone map","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","error":"json: unsupported type: map[interface {}]interface {}"}
    {"level":"INFO","time":"2024-02-20T11:40:49.753Z","location":"service@v0.92.0/telemetry.go:86","message":"Setting up own telemetry...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
    {"level":"INFO","time":"2024-02-20T11:40:49.753Z","location":"service@v0.92.0/telemetry.go:203","message":"Serving Prometheus metrics","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","address":"localhost:8888","level":"Basic"}
    {"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:151","message":"Starting otelcol-acies...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","Version":"92e64ca1","NumCPU":2}
    {"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"extensions/extensions.go:34","message":"Starting extensions...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
    {"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:177","message":"Everything is ready. Begin running and processing data.","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
    {"level":"ERROR","time":"2024-02-20T11:40:49.754Z","location":"otelcol@v0.92.0/collector.go:255","message":"Asynchronous error received, terminating process","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0","error":"listen tcp 127.0.0.1:8888: bind: address already in use","callstack":"go.opentelemetry.io/collector/otelcol.(*Collector).Run\n\tgo.opentelemetry.io/collector/otelcol@v0.92.0/collector.go:255\ncd.splunkdev.com/data-availability/acies/teleport.(*Plugin).startCollector.func1\n\tcd.splunkdev.com/data-availability/acies/teleport/plugin.go:193"}
    {"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:191","message":"Starting shutdown...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
    {"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"extensions/extensions.go:59","message":"Stopping extensions...","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
    {"level":"INFO","time":"2024-02-20T11:40:49.754Z","location":"service@v0.92.0/service.go:205","message":"Shutdown complete.","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
    {"level":"ERROR","time":"2024-02-20T11:40:49.754Z","location":"teleport/plugin.go:194","message":"failing to startup","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}
    {"level":"ERROR","time":"2024-02-20T11:40:49.852Z","location":"teleport/plugin.go:227","message":"collector failed to start in idle mode, stuck in closing/closed state","service":"edge-processor","hostname":"siacemsself01","commit":"92e64ca1","version":"1.0.0"}

The ERROR line shows "listen tcp 127.0.0.1:8888: bind: address already in use", so the collector fails to start. Any idea what is happening?
Hi Team, I got a requirement from our Active Directory team to get the Event ID along with the Event Source. If you have any idea how to get these details, please post them. Thank you!!!
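If the Windows event logs are already onboarded via the Splunk Add-on for Microsoft Windows (an assumption, and the index/sourcetype names below are placeholders for your environment), a simple starting point could be:

    index=wineventlog (sourcetype=XmlWinEventLog OR sourcetype=WinEventLog)
    | stats count by EventCode, SourceName
    | sort - count

EventCode holds the Event ID and SourceName the Event Source in the add-on's field extractions.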
DropDown 1 has 3 static options. DropDown 2 needs to display the products of those servers: ServerA, ServerB, ServerC. DropDown 2 uses a query; how do I bring ServerA, B, or C into my token?

Query:

    | inputlookup abc.csv.gz | search Hostname="ServerA"

    <input type="dropdown" token="field1" searchWhenChanged="false">
      <label>License Server</label>
      <choice value="a">A</choice>
      <choice value="b">B</choice>
      <choice value="c">C</choice>
      <default>a</default>
      <change>
        <condition value="a">
          <unset token="c-details"></unset>
          <unset token="b-details"></unset>
          <set token="a-details"></set>
        </condition>
        <condition value="b">
          <unset token="a-details"></unset>
          <unset token="c-details"></unset>
          <set token="b-details"></set>
        </condition>
        <condition value="c">
          <unset token="a-details"></unset>
          <unset token="b-details"></unset>
          <set token="c-details"></set>
        </condition>
      </change>
    </input>
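A possible way to wire this up is to make the first dropdown's choice values the actual hostnames and drive the second dropdown with a token-based search against the lookup. A sketch; the Product column in abc.csv.gz is my assumption for whichever field holds the products:

    <input type="dropdown" token="field1" searchWhenChanged="true">
      <label>License Server</label>
      <choice value="ServerA">A</choice>
      <choice value="ServerB">B</choice>
      <choice value="ServerC">C</choice>
      <default>ServerA</default>
    </input>
    <input type="dropdown" token="product">
      <label>Product</label>
      <search>
        <query>| inputlookup abc.csv.gz | search Hostname="$field1$" | stats count by Product</query>
      </search>
      <fieldForLabel>Product</fieldForLabel>
      <fieldForValue>Product</fieldForValue>
    </input>

Keeping the hostname itself as the choice value means the $field1$ token can be dropped straight into the second dropdown's search without a separate mapping step.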
Hello, I have a multi-site cluster at version 9.0.1, with several indexers, SHs, and HF/UFs. The Monitoring Console is configured on the Cluster Manager, and "Forwarder Monitoring" is enabled, which allows me to see the status of the forwarders. What is missing is the ability to select the HFs in the Resource Usage section of the Monitoring Console; they are not available there. How can I get them to appear in Resource Usage in the Monitoring Console? Thank you, Andrea
Hi all, we are currently facing an issue with our Splunk SOAR installation. Every time we open the playbook editor, it shows the errors in the screenshot below, and all the dropdown and search fields stop working (e.g. we're unable to choose apps or datatypes for the input). We have also tried reinstalling it (both v6.1.1 and v6.2.0). The service is running on a VM with Red Hat Enterprise Linux release 8.9. Do you have any suggestions on how we can solve this problem? Thanks for your help. Best regards
Hello, I would like to make a query in which I can see how long my equipment has been inactive and when it was inactive, preferably in a timechart. I would like to define inactive in two ways. One is when x, y, and z each keep the same value (+/-50) for 10 seconds or more; for the purposes of inactivity, 1000 counts as equal to anything between 950 and 1050. The second way is when there has been no new event from a piece of equipment for more than 10 seconds. Any help would be very much appreciated; a sketch of a possible approach follows the sample events. Below are some sample events and how long the equipment is active/inactive:

    12:00:10 x=1000 y=500 z=300 equipmentID=1
    12:00:15 x=1000 y=500 z=300 equipmentID=1
    12:00:20 x=1025 y=525 z=275 equipmentID=1
    12:00:25 x=1000 y=500 z=300 equipmentID=1 (20 seconds of inactivity)
    12:00:30 x=1600 y=850 z=60 equipmentID=1
    12:00:35 x=1600 y=850 z=60 equipmentID=1 (15 seconds of activity)
    12:03:00 x=1650 y=950 z=300 equipmentID=1 (135 seconds of inactivity)
    12:03:05 x=1850 y=500 z=650 equipmentID=1
    12:03:10 x=2500 y=950 z=800 equipmentID=1
    12:03:15 x=2500 y=950 z=400 equipmentID=1
    12:03:20 x=2500 y=950 z=150 equipmentID=1 (15 seconds of activity)
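A possible starting point, as a sketch (index and sourcetype are placeholders): streamstats compares each event with the previous one per equipmentID, an inter-event gap is treated as inactive either when it exceeds 10 seconds or when x, y and z all stayed within +/-50, and timechart then sums the inactive seconds. Strictly enforcing the "unchanged for 10 seconds or more" rule across runs of events would need an extra pass, so treat this as an approximation:

    index=your_index sourcetype=your_sourcetype
    | sort 0 equipmentID _time
    | streamstats current=f window=1 last(x) as prev_x last(y) as prev_y last(z) as prev_z last(_time) as prev_time by equipmentID
    | eval gap = _time - prev_time
    | eval unchanged = if(abs(x - prev_x) <= 50 AND abs(y - prev_y) <= 50 AND abs(z - prev_z) <= 50, 1, 0)
    | eval inactive_secs = case(isnull(prev_time), 0, gap > 10, gap, unchanged = 1, gap, true(), 0)
    | timechart span=1m sum(inactive_secs) as seconds_inactive by equipmentID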
Hi all, I'm trying to extract a part of a field. The field is named Computer and looks like MySrv.MyDomain.MySubDom1.com; MySubDom1 may or may not exist. I would like to extract everything after MySrv. I tried:

    index=MyIndex host=MySrv | rex field=_raw "(?<domaine_test>(\.\w+))"

The result creates a new field, domaine_test, but it stores only the first part, "MyDomain", and not the rest of the field. How can I do this? For example:

    Computer = "MySrv.MyDomain.MySubDom1.com"
    Result: domaine_test = "MyDomain.MySubDom1.com"
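A minimal sketch that may do it: run rex against the Computer field rather than _raw, skip everything up to the first dot, and capture the remainder, which works whether or not the sub-domain is present:

    index=MyIndex host=MySrv
    | rex field=Computer "^[^.]+\.(?<domaine_test>.+)$"

With Computer="MySrv.MyDomain.MySubDom1.com" this gives domaine_test="MyDomain.MySubDom1.com", and with "MySrv.MyDomain.com" it gives "MyDomain.com".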