All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello. I have a search across different files. In my table I have a field EDZ_000. In another CSV file, I have a list of values from EDZ_000 to EDZ_XXX. I need to match these fields to get another column. How do I do that? Unfortunately, I can't show the source code.
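A minimal sketch of the usual approach, assuming the CSV has been uploaded as a lookup table file and exposed through a hypothetical lookup definition named edz_list with columns EDZ_id and extra_column, and that the event field is called EDZ_field (swap in your real names): the lookup command matches each event's value against the CSV and pulls back the extra column.

index=myindex
| lookup edz_list EDZ_id AS EDZ_field OUTPUT extra_column

The CSV must first be uploaded under Settings > Lookups and given a lookup definition so the lookup command can reference it by name.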
Greetings experts. Big picture: I am using a Bash script and curl to download REST API/JSON from an AWS instance. The beginning of each download is unstructured, followed by the structured JSON. There are four different values of "logType"; two of them (deviceConnectivityUpdate and deviceStateEvent) are shown in the five-event example below.

_raw example

{"total":{"value":5,"relation":"eq"},"max_score":1,"hits":[{"_index":"index_44444444-4444-4444-4444-444444444444","_id":"zbIdu4gBwP_vIV4KexH0","_score":1,"_source":{"version":1,"logType":"deviceConnectivityUpdate","deviceSerialNumber":"4931390007","userName":"gary.whitlocks22","cloudTimestampUTC":"2023-06-14T18:14:11Z","isDeviceOffline":false}},{"_index":"index_44444444-4444-4444-4444-444444444444","_id":"z7Ieu4gBwP_vIV4KARGG","_score":1,"_source":{"version":1,"logType":"deviceConnectivityUpdate","deviceSerialNumber":"4931390007","userName":"gary.whitlocks22","cloudTimestampUTC":"2023-06-14T18:14:45Z","isDeviceOffline":true}},{"_index":"index_44444444-4444-4444-4444-444444444444","_id":"0LIeu4gBwP_vIV4KHxHn","_score":1,"_source":{"version":1,"logType":"deviceStateEvent","deviceSerialNumber":"4931490086","userName":"NSSS","cloudTimestampUTC":"2023-06-14T18:14:53Z","deviceTimestampUTC":"2023-06-14T18:14:55Z","batteryPercent":49,"isCheckIn":false,"isAntiSurveillanceViolation":false,"isLowBatteryViolation":false,"isCellularViolation":false,"isDseDelayed":false,"bleMacAddress":"7d:8e:1a:be:92:5a","cellIpv4Address":"0.0.0.0","cellIpv6Address":"::"}},{"_index":"index_44444444-4444-4444-4444-444444444444","_id":"zrIdu4gBwP_vIV4KsxFQ","_score":1,"_source":{"version":1,"logType":"deviceConnectivityUpdate","deviceSerialNumber":"4931390006","userName":"PennyAndroid","cloudTimestampUTC":"2023-06-14T18:14:25Z","isDeviceOffline":true}},{"_index":"index_44444444-4444-4444-4444-444444444444","_id":"0bIeu4gBwP_vIV4KhBGr","_score":1,"_source":{"version":1,"logType":"deviceConnectivityUpdate","deviceSerialNumber":"4931390006","userName":"PennyAndroid","cloudTimestampUTC":"2023-06-14T18:15:19Z","isDeviceOffline":false}}]}

JSON

{
  "total": { "value": 5, "relation": "eq" },
  "max_score": 1,
  "hits": [
    {
      "_index": "index_44444444-4444-4444-4444-444444444444",
      "_id": "zbIdu4gBwP_vIV4KexH0",
      "_score": 1,
      "_source": {
        "version": 1,
        "logType": "deviceConnectivityUpdate",
        "deviceSerialNumber": "4931390007",
        "userName": "gary.whitlocks22",
        "cloudTimestampUTC": "2023-06-14T18:14:11Z",
        "isDeviceOffline": false
      }
    },
    {
      "_index": "index_44444444-4444-4444-4444-444444444444",
      "_id": "z7Ieu4gBwP_vIV4KARGG",
      "_score": 1,
      "_source": {
        "version": 1,
        "logType": "deviceConnectivityUpdate",
        "deviceSerialNumber": "4931390007",
        "userName": "gary.whitlocks22",
        "cloudTimestampUTC": "2023-06-14T18:14:45Z",
        "isDeviceOffline": true
      }
    },
    {
      "_index": "index_44444444-4444-4444-4444-444444444444",
      "_id": "0LIeu4gBwP_vIV4KHxHn",
      "_score": 1,
      "_source": {
        "version": 1,
        "logType": "deviceStateEvent",
        "deviceSerialNumber": "4931490086",
        "userName": "NSSS",
        "cloudTimestampUTC": "2023-06-14T18:14:53Z",
        "deviceTimestampUTC": "2023-06-14T18:14:55Z",
        "batteryPercent": 49,
        "isCheckIn": false,
        "isAntiSurveillanceViolation": false,
        "isLowBatteryViolation": false,
        "isCellularViolation": false,
        "isDseDelayed": false,
        "bleMacAddress": "7d:8e:1a:be:92:5a",
        "cellIpv4Address": "0.0.0.0",
        "cellIpv6Address": "::"
      }
    },
    {
      "_index": "index_44444444-4444-4444-4444-444444444444",
      "_id": "zrIdu4gBwP_vIV4KsxFQ",
      "_score": 1,
      "_source": {
        "version": 1,
        "logType": "deviceConnectivityUpdate",
        "deviceSerialNumber": "4931390006",
        "userName": "PennyAndroid",
        "cloudTimestampUTC": "2023-06-14T18:14:25Z",
        "isDeviceOffline": true
      }
    },
    {
      "_index": "index_44444444-4444-4444-4444-444444444444",
      "_id": "0bIeu4gBwP_vIV4KhBGr",
      "_score": 1,
      "_source": {
        "version": 1,
        "logType": "deviceConnectivityUpdate",
        "deviceSerialNumber": "4931390006",
        "userName": "PennyAndroid",
        "cloudTimestampUTC": "2023-06-14T18:15:19Z",
        "isDeviceOffline": false
      }
    }
  ]
}

CSV

version,logType,deviceSerialNumber,userName,cloudTimestampUTC,isDeviceOffline,deviceTimestampUTC,batteryPercent,isCheckIn,isAntiSurveillanceViolation,isLowBatteryViolation,isCellularViolation,isDseDelayed,bleMacAddress,cellIpv4Address,cellIpv6Address
1,deviceConnectivityUpdate,4931390007,gary.whitlocks22,6/14/2023 18:14,FALSE,,,,,,,,,,
1,deviceConnectivityUpdate,4931390007,gary.whitlocks22,6/14/2023 18:14,TRUE,,,,,,,,,,
1,deviceStateEvent,4931490086,NSSS,6/14/2023 18:14,,6/14/2023 18:14,49,FALSE,FALSE,FALSE,FALSE,FALSE,7d:8e:1a:be:92:5a,0.0.0.0,::
1,deviceConnectivityUpdate,4931390006,PennyAndroid,6/14/2023 18:14,TRUE,,,,,,,,,,
1,deviceConnectivityUpdate,4931390006,PennyAndroid,6/14/2023 18:15,FALSE,,,,,,,,,,

Indexed using sourcetype _json. Looking at hits{}._source.logType, Splunk reports 4 deviceConnectivityUpdate events and 1 deviceStateEvent, which agrees with the data. However, when I run

stats count by "hits{}._source.logType", "hits{}._source.userName"

I get 5x the count of events (see attachment). Help please, what am I missing or doing wrong?
I wish to remove unneeded text from Windows event logs before they are indexed. Specifically, Windows event 4624 contains a dozen or so lines of text at the end that I don't want. These are sent to my clustered indexers directly from universal forwarders. I have a transform set up on the indexers to extract the fields I'm interested in:

[events-win-security-4624]
REGEX = EventCode=(4624)\n.*\nComputerName=(.*)?\n.*\n.*\n.*\n.*\n.*?\n.*\n.*\n\nSubject:\n.*\n*.*\n*.*\n*.*\n\nLogon Information:\n*.*\n*.*\n*.*\n*.*\n\n\n*.*\n\nNew Logon:\n\s*Security ID:\s*(.*)?\n\s*Account Name:\s*(.*)?\n\s*Account Domain:\s*(.*)?\n.*\n.*\n.*\n.*\n.*\n\n.*\n.*\n.*\n\nNetwork Information:\n\s*Workstation Name:\s*(.*)?\n\s*Source Network Address:\s*(.*)?\n\s*Source Port:\s*(.*)?\n\nDetailed Authentication Information:\n.*Logon Process:\s*(.*)?
DEST_KEY = _raw
FORMAT = EventCode=$1 ComputerName=$2 SecurityID=$3 AccountName=$4 AccountDomain=$5 WorkstationName=$6 SourceNetworkAddress=$7 SourcePort=$8 LogonProcess=$9

(Please pardon the lengthy regex; all it's doing is capturing the relevant fields.)

And the corresponding bit from props.conf, also on the indexers:

[WinEventLog:Security]
TRANSFORMS-winsec_event4624 = events-win-security-4624

Based on my understanding, the transform should extract the specified fields from the raw event, put these into the format specified by the FORMAT line, send it on to be indexed, and ignore the rest. However, this is not happening. When I search my index for "EventCode=4624" I see the full event text, along with the extraneous text. It doesn't appear that the transform is doing anything. I've tested my regex against multiple events using regex101.com and everything looks correct. I've since set up a few other regex transforms to drop specific events from a different source (by sending them to nullQueue) and these are all working as expected, so I know that Splunk is able to see the conf files. I've reviewed the specifications for transforms.conf and props.conf but wasn't able to find what I'm doing wrong. Am I going about this the correct way? I'm relatively new to Splunk so I'm sure I'm overlooking something simple. Thank you for your help!
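One thing worth ruling out (a hedged suggestion, not a confirmed diagnosis): with recent versions of the Splunk Add-on for Windows, security events often arrive with sourcetype WinEventLog and source WinEventLog:Security, in which case a props stanza keyed on the sourcetype WinEventLog:Security never fires even though the transform itself is fine. A source-based stanza covers that case:

# props.conf on the indexers -- a sketch; confirm the real sourcetype first with:
#   | metadata type=sourcetypes index=<your_index>
[source::WinEventLog:Security]
TRANSFORMS-winsec_event4624 = events-win-security-4624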
We're in the process of migrating our data-collection paradigm from:

[In-store devices] -> [Google Analytics (GA)] -> [Splunk] (data has been stored in Index A)

to:

[In-store devices] -> [Splunk] (data being stored in Index B)

At the end of June, we are cutting off our GA connection and moving directly to Splunk. Unfortunately, a few of the fields have the same name in each index, but most have a different name in the newer index. The issue is that we have a dashboard displaying year-over-year data for these devices that we need to maintain and continue using going forward.

So my question to all you brilliant minds out there is this: how do I modify the dashboard to continue using the existing data from previous years, but begin using the new data/index at a particular date, say June 25, 2023? For the end user, the dashboard should continue to look as it always has, but behind the scenes it will be using both indexes.

I'm not very strong on SPL and am getting confused about whether this is a case for a union, or a join, or something else. Any and all help from you is *greatly* appreciated! Thanks!

P.S. We have NOT migrated to Dashboard Studio yet and are still running 8.2.4 for the time being. I'm not opposed to using DS for this, however.
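A minimal sketch of the usual pattern, with hypothetical index and field names (index_a/index_b, old_device/new_device): search both indexes in one base search, gate each index on the cutover date, and normalize the differing field names with coalesce. No join is needed, because the events never have to be correlated row-to-row, only unioned and renamed:

index=index_a OR index=index_b
| eval cutover=strptime("2023-06-25", "%Y-%m-%d")
| where (index="index_a" AND _time < cutover) OR (index="index_b" AND _time >= cutover)
| eval device=coalesce(new_device, old_device)
| timechart count by device

Each dashboard panel keeps its existing visualization; only the base search changes.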
I am trying to return data for a pie chart with a specified range of values. How would I go about this?

| stats count(Score < 5) as "0-4.9" count(Score >=5 AND Score <=8) as "5-8"
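A minimal sketch of two ways this is usually written (Score and the bucket labels come from the post; the index name is hypothetical): stats can count conditionally, but the comparison has to be wrapped in eval():

index=myindex
| stats count(eval(Score < 5)) as "0-4.9" count(eval(Score >= 5 AND Score <= 8)) as "5-8"

For a pie chart, one row per category usually charts more naturally than one column per category:

index=myindex
| eval range=case(Score < 5, "0-4.9", Score >= 5 AND Score <= 8, "5-8", true(), "other")
| stats count by range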
I am trying to use a Splunk query similar to this:

index="myIndex" appname="myapp" msg.result.message="*TradingSymbol(s):*"
| rex "(?<=TradingSymbol\(s\): )[\w-]+(?:, [\w-]+)*,"
| stats count BY TradingSymbol(s), Elapsed

I want the results in a table as Date, PortfolioSymbol(s), ElapsedTime. When I try to run it, I get the error:

Error in 'rex' command: The regex '(?<=TradingSymbol\(s\): )[\w-]+(?:, [\w-]+)*,' does not extract anything. It should specify at least one named group. Format: (?<name>...).

When I try the same pattern in regexr.com against the output below, (?<=TradingSymbol\(s\): )[\w-]+(?:, [\w-]+)*, is able to highlight 2AC5, 3DE2, 5CE3, 4FA4, 1BM5, TEST-2AB6, TEST-2BA9:

RefreshAsyncjronized  End, TradingSymbol(s): 2AC5, 3DE2, 5CE3, 4FA4, 1BM5, TEST-2AB6,  TEST-2BA9, ElapsedTime: 12.3762658

Please help, thanks.
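The error message is literal: rex only creates fields from named capture groups, and a lookbehind-only pattern matches but captures nothing. A hedged sketch (field names are assumptions; rex reads _raw unless field= is given) that mirrors your pattern, keeps the trailing comma so the match stops before "ElapsedTime", and names the groups:

| rex "TradingSymbol\(s\):\s+(?<TradingSymbols>[\w-]+(?:,\s*[\w-]+)*),"
| rex "ElapsedTime:\s+(?<ElapsedTime>[\d.]+)"
| table _time TradingSymbols ElapsedTime

Note that stats count BY TradingSymbol(s) would also fail, since parentheses are not legal in a bare field name; group by the renamed extraction instead. The \s* between symbols also tolerates the double space before TEST-2BA9, which ", " alone would not.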
Hello, I have a few Linux devices located in the DMZ. My 3 Splunk servers (search head, indexer, deployment server) are inside my network. I've installed the forwarders on my DMZ servers and ran the set deploy-poll command (myserver.com:8089) to make sure they point at the deployment server. There is no communication from the DMZ servers to the deployment server.

I ran 2 tests:
1. Connecting from one DMZ server to the deployment server. Result: connection good.
2. Connecting from the deployment server to the DMZ server. Result: connection refused.

Are there other options to get the DMZ servers to connect? Thanks
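For what it's worth (hedged, based on how deployment clients normally behave): the client always initiates the phone-home over the management port, so test 2 failing is expected and harmless; only DMZ -> deployment server on 8089 (plus DMZ -> indexers on your receiving port, typically 9997) needs to be open. A quick sketch of what to verify on a DMZ forwarder, with myserver.com:8089 taken from the post:

# Is the management port reachable from the DMZ host?
curl -k https://myserver.com:8089

# Does the client config match what set deploy-poll wrote?
$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug

# Any phone-home activity or errors on the client side?
grep -i deploymentclient $SPLUNK_HOME/var/log/splunk/splunkd.log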
Hi all. I am very new to Splunk so please be gentle here. I have the following JSON payload being written to our Splunk index:

{
  "status": "open",
  "description": "some information here",
  "severity": "unknown",
  "ingestion_source": "source type here"
}

What I want is a tile per ingestion_source that turns red if a new payload hasn't been received in the last 5 minutes. I know how to write the query; I am just struggling with how to make the dashboard do what I described. Any help is much appreciated.
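A minimal sketch of the search side, assuming a hypothetical index name: compute the age of the newest event per source, then derive a status value the dashboard can color on:

index=myindex
| stats latest(_time) as last_seen by ingestion_source
| eval minutes_since=round((now() - last_seen) / 60, 1)
| eval range=if(minutes_since > 5, "severe", "low")

On the dashboard side, one common pattern is a single-value panel per source (or a trellis split by ingestion_source) with color thresholds on minutes_since; the rangemap command can do the same value-to-color-class mapping if you prefer it over eval.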
When I navigate to Configure > Content > Content Management, I'm greeted with an error message stating "Could not load analytic stories". Anyone have any thoughts/ideas on what could be causing this and how to correct it?
Hi @Splunkers, I created a panel that produces output based on multiselect fields; the panel and the input draw on different sources/indexes. The issue: the multiselect has only the values "A", "B", "C", and "All", and the panel works correctly for individual selections. Selecting "A"+"B"+"C" should match what "All" returns, but "All" currently matches "A", "B", "C", ... "Y", "Z", because I set All="*" in the input settings. How can I set the token value so that "All" filters to only "A" OR "B" OR "C" rather than "*"?
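A hedged Simple XML sketch (the token, field, and choice values are assumptions): give the All choice the explicit value list instead of *, and let valuePrefix/valueSuffix/delimiter quote each value; the escaped quotes inside the All choice make it expand to three quoted values rather than one string:

<input type="multiselect" token="sel">
  <label>Source</label>
  <choice value="A">A</choice>
  <choice value="B">B</choice>
  <choice value="C">C</choice>
  <choice value="A&quot;,&quot;B&quot;,&quot;C">All</choice>
  <prefix>myfield IN (</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>,</delimiter>
</input>

With All selected, $sel$ expands to myfield IN ("A","B","C"); with A and B selected, to myfield IN ("A","B").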
Hello, I'm not sure how to achieve this. I need to create an alert for when a field (user) value has > 500 events while another field (eventType) is filtered on a specific value, i.e.:

User: John
EventType: Blocked

I can't figure it out. Here's what I have so far:

| stats count by user, eventType
| eventstats sum(count) as count by eventType
| stats values(eventType) as Blocked, values(user) as user
| table user, Blocked
| where count>500

Thanks for any help on this, Tom
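A minimal sketch of a simpler shape for the alert search (the eventType value is taken from the post; the index name is hypothetical): filter to the eventType first, then count per user:

index=myindex eventType="Blocked"
| stats count by user
| where count > 500

Saved as an alert that triggers when the number of results is greater than 0, each returned row is a user over the threshold.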
Hi, do you use AppDynamics to monitor Kubernetes cluster components such as the API server, etcd, scheduler, and controller manager? How useful are the cluster agent metrics?
Hi all, I'm having a strange issue: splunk add oneshot suddenly stopped working. After I deleted the events from index "test", I tried to re-read a file using

splunk add oneshot -source filename.csv -index test -sourcetype mysourcetype
-> Oneshot 'filename.csv' added

However, the events no longer appear; a search with "All Time" scope always returns 0 results. I cannot find any problems in the _internal log for the forwarder. Regards
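A hedged pair of checks, nothing here is a confirmed diagnosis (index and file names taken from the post): confirm whether the events exist at all independent of the time picker, and whether splunkd logged anything about the file:

| eventcount summarize=false index=test

index=_internal source=*splunkd.log* "filename.csv"

If eventcount shows a nonzero count while a normal search shows nothing, timestamp parsing or leftover effects of the earlier deletion are more likely culprits than the oneshot command itself.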
Dear All, I have a CSV lookup with a column named column1 with the below values:

MicroBest
GoDear
Bear

And I have some logs in which field1 contains values like

MicroBest Infrastructure
No-GoDear-To-World
Lives The Life

I want to write a query that matches the first two events, since the lookup column value is a substring of the field value. Also, I don't want to use *lookup_value*, as that would also match something like GoDearSamsung-Phone.
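A minimal sketch, assuming the lookup file is saved as mylookup.csv and that "substring" really means "whole token inside the value" (which is what keeps GoDearSamsung-Phone from matching GoDear): split field1 into tokens on spaces and hyphens, expand, and look each token up:

index=myindex
| eval token=split(replace(field1, "-", " "), " ")
| mvexpand token
| lookup mylookup.csv column1 AS token OUTPUT column1 AS matched
| where isnotnull(matched)
| stats values(matched) as matched by field1

MicroBest Infrastructure and No-GoDear-To-World match on their MicroBest / GoDear tokens, while GoDearSamsung-Phone does not, because its token GoDearSamsung is not an exact lookup value.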
Using the link "Update Synthetic API Monitoring Job API" (appdynamics.com), I try to PUT mybaseuri/v1/synthetic/api/schedule/<id> to fetch the synthetic job and, if the job is enabled, disable it, but the API is not working at all. Could you please suggest how to get the synthetic job ID by name and then, if the job is enabled, disable it? Please send me the REST APIs for these operations. Also, is the format mybaseuri/v1/synthetic/api/schedule/<id>, where mybaseuri = https://mydomain.xxxx.appdynamics.com/controller/alerting/rest, correct or incorrect? Please let me know.
Hello, I've completed the following:

1. Installed the Linux forwarder.
2. Assigned ownership and permissions to the splunk user and group.
3. Started Splunk and accepted the license.
4. Entered admin credentials.
5. Typed "./splunk enable boot-start -systemd-managed 1 -user splunk -group splunk".
6. Typed "./splunk set deploy-poll mysplunkservertest.com:8089".
7. Verified the deploymentclient.conf file and the URI.
8. Restarted Splunk.

Now, how do I test the connection from the forwarder to the deployment server? Thanks
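A short sketch of the usual checks (the deployment server hostname comes from the post; paths assume a default install):

# On the forwarder: confirm the target the CLI wrote
$SPLUNK_HOME/bin/splunk show deploy-poll

# On the forwarder: look for phone-home / DeploymentClient activity
grep -i deploymentclient $SPLUNK_HOME/var/log/splunk/splunkd.log

# On the deployment server: phone-home requests show up in splunkd_access
index=_internal sourcetype=splunkd_access phonehome

Once the client phones home successfully, it should also appear under Forwarder Management on the deployment server.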
Dear Splunk Experts,

We plan to use your product in a regulated environment and have a question on the (heavy) forwarder. In such an area, installation of a product requires proving the absence of retroactive effects on the base system.

1) Your product offers remote access to the base system. This offers great convenience, but thereby potentially modifies the base system, violating the above requirement. Is there a reliable means to prevent a forwarder from offering this feature?

2) Can you give upper limits for memory and CPU resource usage? Again, this is required for a tool that aims to be suitable for installation in the regulated environment we find ourselves in.

3) Do you keep service records for products with a given version, so that one could take credit from showing successful use of the product in a significant number of cases? This typically includes track records on known issues.

Thanks in advance for your effort, best regards, Frank
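On (1) and (2), a hedged configuration sketch; the values are illustrative, not recommendations, and hard CPU/memory ceilings are usually enforced at the OS level, since Splunk itself caps throughput rather than memory:

# server.conf -- disable the forwarder's REST/management listener
[httpServer]
disableDefaultPort = true

# limits.conf -- cap forwarder network throughput in KB/s
[thruput]
maxKBps = 256

# systemd drop-in for the splunk unit -- OS-enforced resource ceilings
[Service]
CPUQuota=50%
MemoryMax=1G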
We have an 8-node indexer multi-site cluster with a total of 2 sites (each site with 4 indexers).

Site 1: SITE1IDX01, SITE1IDX02, SITE1IDX03, SITE1IDX04
Site 2: SITE2IDX01, SITE2IDX02, SITE2IDX03, SITE2IDX04

Replication settings:

site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2

Whenever there are connectivity issues/glitches between the two sites, we start to notice failures like the one below (between Site 1 and Site 2 indexers):

WARN BucketReplicator - Failed to replicate Streaming bucket to guid=**** host=***** s2sport=**** Read timed out after 60 sec

These failures have a cascading effect: connectivity errors between servers on Site 1 and Site 2 affect replication of hot streaming buckets between the two sites, which puts pressure on the Splunk replication queue; replication is tied to the indexing pipeline, so the indexing queues fill up, and since the indexing queues are supposed to process incoming data, once full they stall the ingestion traffic.

We need help on these points:

1. We assume such errors will still occur even if the cluster is put into maintenance mode. How do we then support network-related maintenance activities for a Splunk multi-site cluster?
2. How can we make the multi-site cluster more resilient, so it does not end up blocking indexing queues due to underlying network glitches between the indexers on the two sites?
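A hedged sketch of the settings usually reviewed for cross-site replication timeouts (server.conf on the peers; the 60-second read timeout in the warning corresponds to defaults in this family, and the values below are illustrative, not recommendations):

[clustering]
cxn_timeout = 120
send_timeout = 120
rcv_timeout = 120
heartbeat_timeout = 120

For planned network maintenance, maintenance mode on the manager node (splunk enable maintenance-mode) suppresses bucket fix-up activity but, as you suspect, does not stop the streaming replication of hot buckets itself, so longer timeouts and scheduling maintenance during low-ingest windows are the usual complements.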
Hi All, I have created a bar chart using stats count by ID. I have a field "status" in my data. Is it possible to add this status along with ID on the axis of the bar chart? I want my label field to be ID(status). ID and status are both varying data. Is it possible to do this in Splunk?
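A minimal sketch (the index name is hypothetical): build the combined label with eval before the stats, so the label becomes the group-by field on the axis:

index=myindex
| eval label=ID . "(" . status . ")"
| stats count by label

If an ID can carry several statuses and you want one bar per ID with its latest status, aggregate first:

index=myindex
| stats count latest(status) as status by ID
| eval label=ID . "(" . status . ")"
| fields label count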
Hi, here is a sample format:

Audit Log ID | KMA ID | KMA Name | Class | Retention Term | Operation | Condition | Severity | Audit Log Entry ID | Created Date | Entity ID | Entity Network Address | Message Values | Solution
B7086B1B57C6714E00000000009D00B1 | B7086B1B57C6714E | CA4SICLKMA1 | Audit Log Management Operations | Short Term | List Audit Logs | Success | Success | 000109000000 | 2023-06-05 09:35:11 | AKH25wj | 30.XXX.XXX.XX | | No recommended action
B7086B1B57C6714E00000000009D00AF | B7086B1B57C6714E | CA4SICLKMA1 | Audit Log Management Operations | Short Term | List Audit Logs | Success | Success | 000109000000 | 2023-06-05 09:35:06 | AKH25wj | 30.XXX.XXX.XX | | No recommended action
B7086B1B57C6714E00000000009D00AA | B7086B1B57C6714E | CA4SICLKMA1 | Audit Log Management Operations | Short Term | List Audit Logs | Success | Success | 000109000000 | 2023-06-05 09:34:17 | AKH25wj | 30.XXX.XXX.XX | | No recommended action