All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Does the "Windows Event Log(Multiline)"  data source in UBA support event logs in native language(non English). For example Norwegian? If it is not supported how can we add this data to UBA?  
Hi, I have an issue in my indexer cluster. A month ago I noticed that my two indexers hold different numbers of buckets, even though the search factor and replication factor are fine. What is the reason for this difference?
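One quick way to see the per-indexer bucket distribution is dbinspect; a minimal sketch (run from the search head, with index=* as a placeholder for the indexes you care about):

| dbinspect index=*
| stats dc(bucketId) AS bucket_count BY splunk_server

Note that with replication, raw bucket counts can legitimately differ between peers while SF/RF are still met, so compare the distribution per index before assuming something is wrong.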
Hello, I'm trying to set up Splunk Mobile on my Apple Watch to see my custom dashboard. First I registered on my Splunk instance the iPhone that I'll pair with the Apple Watch. That part works, because I can see all the dashboards selected in Splunk Cloud Gateway. As a second step I installed the Splunk Mobile app on the Apple Watch, but here the issue begins: the connection between the iPhone and the Apple Watch doesn't seem to work: "Your iPhone is not registered to a Splunk Instance". Any suggestion? Thanks and best regards
We want to filter events before indexing, based on a field value match. For example, given the single event below: if both of the following conditions match, we need to index the whole event; otherwise we drop the whole event.

WAFAction = unknown
WAFFlags = 0

Please advise how to achieve this. Sample event (JSON format, with timestamp):

{
  BotScore: 98
  BotScoreSrc: Machine Learning
  CacheCacheStatus: unknown
  CacheResponseBytes: 1877
  CacheResponseStatus: 200
  CacheTieredFill: false
  ClientASN: 701
  ClientCountry: us
  ClientDeviceType: desktop
  ClientIP: 196.142.18.94
  ClientIPClass: noRecord
  ClientMTLSAuthCertFingerprint:
  ClientMTLSAuthStatus: unknown
  ClientRequestBytes: 3912
  ClientRequestMethod: POST
  ClientRequestPath: /common/endpoint/
  ClientRequestProtocol: HTTP/2
  ClientRequestScheme: https
  ClientRequestSource: eyeball
  ClientRequestURI: /common/endpoint/
  ClientRequestUserAgent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36
  ClientSSLCipher: ECDHE-ECDSA-AES128-GCM-SHA256
  ClientSSLProtocol: TLSv1.2
  ClientSrcPort: 50738
  ClientTCPRTTMs: 14
  ClientXRequestedWith: XMLHttpRequest
  EdgeCFConnectingO2O: false
  EdgeColoCode: EWR
  EdgeColoID: 11
  EdgeEndTimestamp: 2021-06-24T01:33:21Z
  EdgePathingOp: wl
  EdgePathingSrc: macro
  EdgePathingStatus: nr
  EdgeRateLimitAction:
  EdgeRateLimitID: 0
  EdgeRequestHost: api.xyz.com
  EdgeResponseBodyBytes: 71
  EdgeResponseBytes: 814
  EdgeResponseCompressionRatio: 0
  EdgeResponseContentType: application/json
  EdgeResponseStatus: 200
  EdgeServerIP: 62.15.62.15
  EdgeStartTimestamp: 2021-06-24T01:33:21Z
  EdgeTimeToFirstByteMs: 160
  FirewallMatchesActions: []
  FirewallMatchesRuleIDs: []
  FirewallMatchesSources: []
  OriginDNSResponseTimeMs: 0
  OriginIP: 44.12.238.17
  OriginRequestHeaderSendDurationMs: 0
  OriginResponseBytes: 0
  OriginResponseDurationMs: 148
  OriginResponseHTTPExpires:
  OriginResponseHTTPLastModified:
  OriginResponseHeaderReceiveDurationMs: 90
  OriginResponseStatus: 200
  OriginResponseTime: 148000000
  OriginSSLProtocol: TLSv1.2
  OriginTCPHandshakeDurationMs: 18
  OriginTLSHandshakeDurationMs: 40
  ParentRayID: 00
  RayID: 6642351fccb80ca5
  SecurityLevel: med
  SmartRouteColoID: 0
  UpperTierColoID: 0
  WAFAction: unknown
  WAFFlags: 0
  WAFMatchedVar:
  WAFProfile: unknown
  WAFRuleID:
  WAFRuleMessage:
  WorkerCPUTime: 0
  WorkerStatus: unknown
  WorkerSubrequest: false
  WorkerSubrequestCount: 0
  ZoneID: 134451718
  ZoneName: yy.xxxxx.com
}

host = idx1.server001.net.net
source = s3://cloudflare/logs/20210624/20210624T013257Z_20210624T013327Z_0c91c265.log.gz
sourcetype = _json
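A common way to do this kind of pre-index filtering is the nullQueue pattern on the heavy forwarder or indexer: send everything for the sourcetype to the nullQueue first, then route events matching the keep-condition back to the indexQueue. A minimal sketch follows; the regex assumes the raw JSON has "WAFAction" and "WAFFlags" adjacent and in that order, so adjust it to match your actual raw events.

# props.conf -- transforms run in the order listed
[_json]
TRANSFORMS-waffilter = waf_drop_all, waf_keep_match

# transforms.conf
[waf_drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[waf_keep_match]
REGEX = "WAFAction"\s*:\s*"unknown"\s*,\s*"WAFFlags"\s*:\s*0
DEST_KEY = queue
FORMAT = indexQueue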
Hi everyone, I have a question regarding the incremental backup snapshot in the Events Service. If we want the Events Service snapshot to always run incrementally (the backup runs daily), does that mean we can't delete the index file backups under the indices folder? What if we want to keep only the last 7 days of backup data? Is there any best practice on Events Service snapshot housekeeping or snapshot retention settings? Thanks for your reply! Regards, Yan
I am trying to find CPU utilization by process. The overall CPU utilization value comes in at 100% or below, which is absolutely fine, but for processes it exceeds 100%; I understand this happens because multiple cores are configured. Is there any way I can fetch it as a value less than or equal to 100? I also thought of dividing the values by the number of cores (e.g. 890/9, 626/7), but with "case" or "if" this would need many statements (process utilization can go up to 4200). Is there an easy way to do this? Could I achieve it by integrating a Python script in an alert? I know this can be done, but can someone help me with the process (I'm unaware of how to integrate custom commands)? The process query I am running:

index=perf_process object=Process instance!=_Total instance!=Idle
| fields _time host counter instance Value
| search counter="% Processor Time"
| stats avg(Value) AS avg BY instance host _time
| stats sum(avg) BY _time host

@splunk
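Rather than a long case(), one option is to derive the core count per host from the data itself and divide once. A sketch, assuming you also collect the Processor object so the distinct instance count equals the number of cores:

index=perf_process object=Process counter="% Processor Time" instance!=_Total instance!=Idle
| stats avg(Value) AS avg BY instance host _time
| stats sum(avg) AS proc_pct BY _time host
| join type=left host [
    search index=perf_process object=Processor counter="% Processor Time" instance!=_Total
    | stats dc(instance) AS core_count BY host ]
| eval normalized_pct = round(proc_pct / core_count, 2)

This keeps everything in SPL, so no custom Python command is needed.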
I have a static lookup file which has 2 columns, for example: Name, Type. Please note this static lookup has no reference to a date timestamp. Now I am trying to frame this as a table which appends the date as another column. I would like to use the resulting table to compare against another result set, to confirm that there was an event logged for each type on a daily basis, and report only the missing ones. Example:

Name | Type | Date
AAA  | BBB  | 22/06/2021
AAA  | BBB  | 23/06/2021
AAA  | BBB  | 24/06/2021
CCC  | DDD  | 22/06/2021
CCC  | DDD  | 23/06/2021
CCC  | DDD  | 24/06/2021
EEE  | FFF  | 22/06/2021
EEE  | FFF  | 23/06/2021
EEE  | FFF  | 24/06/2021
GGG  | HHH  | 22/06/2021
GGG  | HHH  | 23/06/2021
GGG  | HHH  | 24/06/2021

The query I have been trying to use is:

| inputlookup test.csv
| fields Name, Type
| appendcols [| gentimes start=-3 | eval Dates=strftime(starttime,"%Y%m%d") | table Dates]

I have been trying multiple commands but with no luck: appendcols just pairs columns row by row, so since my date table has 3 rows, it only fills the dates into the first 3 rows. Thanks for any help in achieving this!
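What you describe is a cross join (every lookup row repeated for every date), which appendcols does not do. One sketch that builds it with mvrange and mvexpand, assuming the lookup columns are Name and Type and you want the last 3 days:

| inputlookup test.csv
| eval offset=mvrange(0,3)
| mvexpand offset
| eval Date=strftime(relative_time(now(), "-".offset."d@d"), "%d/%m/%Y")
| fields Name Type Date

Each lookup row gets the multivalue offsets 0..2, mvexpand fans it out to one row per day, and the eval turns each offset into a date.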
We have our Splunk instance in a private AWS VPC. We also have a separate HEC cluster in a private AWS VPC (logical separation). I need to collect AWS Security Hub events into my HEC, but reading through the notes, it indicates that I need to use Project Trumpet, which in turn uses Kinesis Firehose. Kinesis Firehose requires a public IP, which is something we don't want to do (we have been using older methods for pulling GuardDuty in, for example). Is there a way to use Trumpet and collect the same logs without exposing the HEC to the internet? I know this can be segregated by way of security groups etc., but I am trying to find a solution so that I don't have to do this.
Universal Forwarder installed on a Windows server using all default settings. Where can I find the stanza that has the types of events it is logging, so that I can validate it received the events?
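For reference, btool will print the merged inputs configuration and which file each stanza comes from; the path below assumes the default install location:

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool inputs list --debug

On a default Windows install the event log inputs look like stanzas of this shape, typically under etc\system\local or an app's local folder:

[WinEventLog://Security]
disabled = 0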
Does anybody know a good way to filter out AWS CloudTrail read-only events? This is what I have on my HF, and I'm jumping through hoops to get it onto the IDM for Splunk Cloud.

transforms.conf:

[cloudtrail_read_only]
REGEX = "^Describe|Get|List\p{Lu}|LookupEvents"
DEST_KEY = queue
FORMAT = nullQueue

and this in props.conf:

[aws:cloudtrail]
# Strip out readOnly AWS events (i.e. Describe*, List*)
TRANSFORMS-cloudtrail_read_only = cloudtrail_read_only

It doesn't seem to be filtering. Thoughts?
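Two things stand out in that transform: transforms.conf REGEX values are not quoted (the quotes become literal characters to match), and the regex runs against the raw event, so ^ anchors at the start of the whole JSON event rather than at the eventName value. Also, without grouping, the alternation means "Get" can match anywhere in the event. A hedged rewrite, assuming raw CloudTrail JSON with an eventName key (tighten the prefixes as needed):

[cloudtrail_read_only]
REGEX = "eventName"\s*:\s*"(?:Describe|Get|List|LookupEvents)
DEST_KEY = queue
FORMAT = nullQueue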
Our company's IT/Ops team manages a Splunk Cloud server, and they have set up various custom apps for our different services; one such app has all the monitors and other configuration necessary for a specific API's logs to be included in Splunk Cloud.

In the past, after installing SplunkUniversalForwarder, we have been able to rename a computer (an EC2 instance running Windows Server), set the C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf file to use the computer's name as the default hostname, and restart the Splunk service; the custom app folder would then automatically be deployed to C:\Program Files\SplunkUniversalForwarder\etc\apps, and all the API logs would show up just fine in Splunk Cloud.

We do not want to rename the computers anymore, though. If I set inputs.conf with a default hostname that is different from the computer's name and then restart the Splunk service, it will not deploy the custom app folder, and the API's logs will not be accessible in Splunk Cloud. The hostname is confirmed to be working, though, because Splunk logs (from sourcetype "splunkd") do show up in Splunk Cloud with the host name set in the inputs.conf file.

I could manually add monitors to the inputs.conf file, but then I guess our IT/Ops won't be able to administer changes via the app. So, is it possible to download that custom app without renaming the computers?
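If the app is pushed by a deployment server whose server class matches on client name, one hedged option is to keep the OS hostname and instead set a clientName in deploymentclient.conf, since the deployment server can match on that value independently of the indexed host field. A sketch (the name api-server-01 is hypothetical and must match whatever the server class whitelist expects):

# etc\system\local\deploymentclient.conf on the forwarder
[deployment-client]
clientName = api-server-01

# etc\system\local\inputs.conf -- the indexed host field can be set separately
[default]
host = api-server-01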
I have the following data:

......
2021-06-18 21:05:45.037 +02:00 [Information] ChuteAndStatus=[20202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020]"
2021-06-18 21:05:45.037 +02:00 [Information] ChuteAndStatus=[10202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020]"
2021-06-18 21:05:45.037 +02:00 [Information] ChuteAndStatus=[00202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020]"
.....

I need to extract the "First_Status" and "Second_Status" of a chute from this field in the log data; every 2 characters of the value belong to one item. Example: in the first character pair "20", the 2 is the First_Status and means OK, and the 0 is the Second_Status and means NOT OK, for Item_1. (Total items = 128/2 = 64.) Finally I want to take the raw data, convert it to First_Status and Second_Status, and link them to a fixed item (Item_1...Item_64):

_time                   | Items  | First_Status | Second_Status
2021-06-18 21:05:45.037 | Item_1 | Ok           | Ok
2021-06-18 21:05:46.037 | Item_1 | Not Ok       | Not Ok
2021-06-18 21:05:47.037 | Item_2 | Ok           | Ok
2021-06-18 21:05:49.037 | Item_n | ....         | .....
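One way to split a fixed-width string like this in SPL is mvrange plus mvexpand: generate indexes 0..63, expand to one row per item, then take that item's two characters with substr. A sketch, assuming the character "2" means Ok and anything else means Not Ok in both positions (adjust the mapping to your real encoding):

index=your_index "ChuteAndStatus"
| rex "ChuteAndStatus=\[(?<status_raw>\d+)\]"
| eval item_idx=mvrange(0,64)
| mvexpand item_idx
| eval pair=substr(status_raw, item_idx*2+1, 2)
| eval Items="Item_".tostring(item_idx+1)
| eval First_Status=if(substr(pair,1,1)="2","Ok","Not Ok")
| eval Second_Status=if(substr(pair,2,1)="2","Ok","Not Ok")
| table _time Items First_Status Second_Status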
I have a panel and I want to change its color. The panel is called Application Name, and I want it to be purple. How do I do that? In other words, I want "sad-api" to be purple, not white.
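If this is a Simple XML single-value panel, one hedged approach is to force a one-color range so the value always renders purple; the hex code and the elided search are illustrative:

<single>
  <title>Application Name</title>
  <search>...</search>
  <option name="colorMode">block</option>
  <option name="useColors">1</option>
  <option name="rangeValues">[0]</option>
  <option name="rangeColors">["0x7b56db","0x7b56db"]</option>
</single>

Using the same color on both sides of the single range boundary makes every value purple regardless of its magnitude.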
Hi, we are using self-signed certs for the connection between HF and UF, receiving data on the HF on port 9997. Now we need to start using 3rd-party certificates, but we will not be able to place the new cert on all UFs in one go. Is there a way for us to have multiple server.pem and cacert.pem files on the HF for ingestion, until we are able to update the certs on all our UFs? Thanks!
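A listening port can present only one server certificate, but the CA bundle the HF trusts can contain several CAs concatenated, which is one hedged way to accept both old and new UF certs during the transition (file names and paths below are illustrative):

# Build a combined CA bundle: old self-signed CA + new 3rd-party CA
cat old_cacert.pem thirdparty_ca.pem > $SPLUNK_HOME/etc/auth/ca_bundle.pem

# server.conf on the HF
[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/auth/ca_bundle.pem

# inputs.conf on the HF
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/new_server.pem

Note this matters mainly when requireClientCert is enabled; otherwise the UF side only needs to trust whichever server certificate the HF presents.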
Hi there, I am just wondering: is Splunk> currently the only Splunk provider out there? Are there any other companies that integrate the software?
I have Splunk Enterprise installed in Docker on port 8000 as follows:

docker run -it --name=splunk -p 8000:8000 -p 8088:8088 -v splunk_etc:/opt/splunk/etc -v splunk_var:/opt/splunk/var -e SPLUNK_START_ARGS=--accept-license -e SPLUNK_PASSWORD=<password> splunk/splunk:latest start

I am trying to install a universal forwarder to forward log files to the Splunk instance. I used the command from this link: https://docs.splunk.com/Documentation/Forwarder/8.2.0/Forwarder/DeployandrunauniversalforwarderinsideaDockercontainer

docker run -d -p 9997:9997 -e SPLUNK_START_ARGS='--accept-license' -e SPLUNK_PASSWORD='<password>' --name uf splunk/universalforwarder:latest

and I get an error. Does a HEC need to be set up for file forwarding?
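For what it's worth, a UF forwards over Splunk-to-Splunk on port 9997, so HEC is not involved. A hedged sketch of getting the two containers talking, assuming the docker-splunk images and their SPLUNK_STANDALONE_URL convention:

# Put both containers on one Docker network so the UF can resolve the indexer by name
docker network create splunknet
docker run -d --network splunknet --name splunk -p 8000:8000 \
  -e SPLUNK_START_ARGS=--accept-license -e SPLUNK_PASSWORD=<password> \
  splunk/splunk:latest start
docker run -d --network splunknet --name uf \
  -e SPLUNK_START_ARGS='--accept-license' -e SPLUNK_PASSWORD='<password>' \
  -e SPLUNK_STANDALONE_URL=splunk \
  splunk/universalforwarder:latest

You may still need to enable receiving on port 9997 on the Splunk Enterprise side (Settings > Forwarding and receiving).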
I have a timechart with columns A and B, and I would like to add a third column C, where C = A/B. My timechart is created by:

index=... | timechart span=10m count(_raw) AS A | appendcols [ search index=... | timechart span=10m count(_raw) AS B ]
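Since appendcols lines the two result sets up row by row, an eval after it can compute the ratio; a small sketch, guarding against divide-by-zero:

index=... | timechart span=10m count AS A
| appendcols [ search index=... | timechart span=10m count AS B ]
| eval C = if(B > 0, round(A / B, 2), null())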
We have logs coming in on UDP port 514 and want to exclude from indexing any events whose "action" field equals "accept". We have tried inserting the following into inputs.conf, but it does not work:

blacklist = action = "accept"

Please assist.
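The inputs.conf blacklist setting only filters file paths for monitor inputs; it does not inspect event content on a network input. The usual alternative is a nullQueue transform at parse time; a sketch, assuming your syslog sourcetype and that the raw text contains action="accept":

# props.conf
[your_syslog_sourcetype]
TRANSFORMS-drop_accept = drop_accept

# transforms.conf
[drop_accept]
REGEX = action\s*=\s*"?accept"?
DEST_KEY = queue
FORMAT = nullQueue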
Hi, I am working on a search that looks for instances of "string1", but only those that are not followed by an instance of "string2" within X minutes. The search runs once every 24 hours and should produce a total count of the instances found. I am trying to search with the bin command:

index=x "string1" NOT "string2" | bin _time span=5min

The problem is that this looks 5 minutes into the past as well as 5 minutes into the future. So if "string2" is present within -5 min of "string1", that instance does not get counted, and it should be; it should only be excluded if "string2" comes after "string1". Many thanks
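One direction-aware sketch sorts the events newest-first so streamstats can carry the timestamp of the nearest following "string2" onto each "string1" event; the 300-second window and the literal search strings are assumptions:

index=x ("string1" OR "string2")
| eval marker=if(searchmatch("string2"), "s2", "s1")
| sort 0 - _time
| streamstats current=f last(eval(if(marker="s2", _time, null()))) AS next_s2_time
| where marker="s1" AND (isnull(next_s2_time) OR next_s2_time - _time > 300)
| stats count AS unmatched_string1

Because the stream runs from newest to oldest, the last non-null s2 timestamp seen so far is exactly the next "string2" after the current event, so only "string2" events that come later in time can suppress a "string1".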
I am trying to do a stats count where a 2XX HTTP response counts as a success and any non-2XX counts as a failure. I am using the snippet below, but it doesn't work. Am I missing something?

stats count(eval(httpresponse="2*")) AS TRANSACTIONS_SUCCESS, count(eval(httpresponse!="2*")) AS TRANSACTIONS_FAILURE BY service
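Inside eval, = is an exact comparison and does not expand wildcards, so "2*" only matches the literal string 2*. A sketch using match() with a regex anchor instead (the field name httpresponse is taken from the question):

| stats count(eval(match(httpresponse, "^2"))) AS TRANSACTIONS_SUCCESS,
        count(eval(NOT match(httpresponse, "^2"))) AS TRANSACTIONS_FAILURE BY service

Note that events where httpresponse is null fall into neither bucket, which may or may not be what you want.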