All Topics



Hello, We are encountering an issue after a data migration. The migration was needed to improve disk performance: we moved all the Splunk data from disk1 to disk2 on a single Splunk indexer instance belonging to a multi-site Splunk indexer cluster. The procedure was:
1. With Splunk running, rsync the data from disk1 to disk2
2. Once rsync finished, stop Splunk
3. Put the cluster in maintenance mode
4. Run rsync again to copy the remaining delta from disk1 to disk2
5. Remove disk1 and point Splunk to disk2
6. Restart Splunk
Once we restarted Splunk, some buckets were marked as DISABLED. This happened because when we stopped Splunk at step 2, the hot buckets rolled to warm (on disk1). During the rsync at step 4, those freshly rolled warm buckets from disk1 were copied to disk2, where hot buckets with the same ID were still present. That conflict is why the buckets were marked as DISABLED.
So now the DISABLED buckets could have more data (but not all the data) than the non-disabled ones, and the non-disabled ones have already been replicated within the cluster. Do you think there is a way to recover those DISABLED buckets so that they become searchable again? Based on these references:
https://community.splunk.com/t5/Deployment-Architecture/What-is-the-naming-convention-behind-the-db-buckets/m-p/9983
https://docs.splunk.com/Documentation/Splunk/latest/Indexer/HowSplunkstoresindexes#Bucket_naming_conventions
it seems the solution, if I understood correctly, could be to move the data (with the Splunk instance not running) from, for example, DISABLED-db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694 to db_1631215114_1631070671_herechangethebucketID_3C08D28D-299A-448E-BD23-C0E9B071E694 (a rough illustration of such a rename follows the log excerpt below). If so: how many digits are allowed for the bucket ID? Does anyone have experience doing this? Once done, will the new buckets be replicated within the cluster?
Here is what I find in the internal logs checking for one of the affected bucket: Query:   index=_internal *1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694 source!="/opt/splunk/var/log/splunk/splunkd_ui_access.log" source!="/opt/splunk/var/log/splunk/remote_searches.log" | sort -_time   Result:   09-09-2021 14:18:41.758 +0200 INFO HotBucketRoller - finished moving hot to warm bid=_internal~448~3C08D28D-299A-448E-BD23-C0E9B071E694 idx=_internal from=hot_v1_448 to=db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694 size=10475446272 caller=size_exceeded _maxHotBucketSize=10737418240 (10240MB,10GB), bucketSize=10878386176 (10374MB,10GB) 09-09-2021 14:18:41.767 +0200 INFO S2SFileReceiver - event=rename bid=_internal~448~3C08D28D-299A-448E-BD23-C0E9B071E694 from=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/448_3C08D28D-299A-448E-BD23-C0E9B071E694 to=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694 09-09-2021 14:18:41.795 +0200 INFO S2SFileReceiver - event=rename bid=_internal~448~3C08D28D-299A-448E-BD23-C0E9B071E694 from=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/448_3C08D28D-299A-448E-BD23-C0E9B071E694 to=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694 09-09-2021 14:18:41.817 +0200 INFO S2SFileReceiver - event=rename bid=_internal~448~3C08D28D-299A-448E-BD23-C0E9B071E694 from=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/448_3C08D28D-299A-448E-BD23-C0E9B071E694 to=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694 09-09-2021 15:53:19.476 +0200 INFO DatabaseDirectoryManager - Dealing with the conflict bucket="/products/data/xxxxxxxxx/splunk/db/_internaldb/db/db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694"... 09-09-2021 15:53:19.477 +0200 ERROR DatabaseDirectoryManager - Detecting bucket ID conflicts: idx=_internal, bid=_internal~448~3C08D28D-299A-448E-BD23-C0E9B071E694, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/hot_v1_448, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694. Please check this disabled bucket for manual removal.\nDetecting bucket ID conflicts: idx=_internal, bid=_internal~595~E17D5544-7169-4D32-B7C0-3FD972956D4B, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/595_E17D5544-7169-4D32-B7C0-3FD972956D4B, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1628215818_1627992904_595_E17D5544-7169-4D32-B7C0-3FD972956D4B. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1628215818_1627992904_595_E17D5544-7169-4D32-B7C0-3FD972956D4B. Please check this disabled bucket for manual removal.\nDetecting bucket ID conflicts: idx=_internal, bid=_internal~591~12531CC6-0C79-473A-859E-9ADF617941A2, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/591_12531CC6-0C79-473A-859E-9ADF617941A2, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1628647804_1628215848_591_12531CC6-0C79-473A-859E-9ADF617941A2. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1628647804_1628215848_591_12531CC6-0C79-473A-859E-9ADF617941A2. 
Please check this disabled bucket for manual removal.\nDetecting bucket ID conflicts: idx=_internal, bid=_internal~606~1D0FBF00-A5FF-4767-A044-F3C6F01BAD84, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/606_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1630204023_1629772040_606_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1630204023_1629772040_606_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84. Please check this disabled bucket for manual removal.\nDetecting bucket ID conflicts: idx=_internal, bid=_internal~603~1D0FBF00-A5FF-4767-A044-F3C6F01BAD84, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/603_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631172918_1631063432_603_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1631172918_1631063432_603_1D0FBF00-A5FF-4767-A044-F3C6F01BAD84. Please check this disabled bucket for manual removal.\nDetecting bucket ID conflicts: idx=_internal, bid=_internal~436~2E5A3717-4C0C-487C-87D3-A7127B3DB42D, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/436_2E5A3717-4C0C-487C-87D3-A7127B3DB42D, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631196626_1631073242_436_2E5A3717-4C0C-487C-87D3-A7127B3DB42D. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1631196626_1631073242_436_2E5A3717-4C0C-487C-87D3-A7127B3DB42D. Please check this disabled bucket for manual removal.\nDetecting bucket ID conflicts: idx=_internal, bid=_internal~589~12531CC6-0C79-473A-859E-9ADF617941A2, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/589_12531CC6-0C79-473A-859E-9ADF617941A2, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631199124_1630935298_589_12531CC6-0C79-473A-859E-9ADF617941A2. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1631199124_1630935298_589_12531CC6-0C79-473A-859E-9ADF617941A2. Please check this disabled bucket for manual removal.\nDetecting bucket ID conflicts: idx=_internal, bid=_internal~594~E17D5544-7169-4D32-B7C0-3FD972956D4B, path1=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/594_E17D5544-7169-4D32-B7C0-3FD972956D4B, path2=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/rb_1631215283_1630935291_594_E17D5544-7169-4D32-B7C0-3FD972956D4B. Temporally resolved by disabling the bucket: path=/products/data/xxxxxxxxx/splunk/db/_internaldb/db/DISABLED-rb_1631215283_1630935291_594_E17D5544-7169-4D32-B7C0-3FD972956D4B. Please check this disabled bucket for manual removal.\n   Thanks a lot, Edoardo
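For illustration, here is the kind of rename described in the question above, using the first conflicting bucket from the post. The new local bucket ID (10448) is purely hypothetical; it would have to be a value not already used by any bucket in that index. This is only a sketch of the naming convention, not a verified recovery procedure, and the rename would be done with Splunk stopped, as noted in the question.
Before (disabled copy left by the conflict):
DISABLED-db_1631215114_1631070671_448_3C08D28D-299A-448E-BD23-C0E9B071E694
After (hypothetical rename with a new, unused local bucket ID):
db_1631215114_1631070671_10448_3C08D28D-299A-448E-BD23-C0E9B071E694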
We are dealing with an issue where some of our users have a very short timeout in Splunk. We are working with Splunk to come up with better timeouts, but in the meantime we need a way to stop dashboard panels from loading when a user times out, to prevent them from seeing partial results. I have tried the normal tokens that are used to hide a panel until a search is done, but these "stopped" searches report as done (they just have partial results). I have seen old solutions that used "finalized", but that has been deprecated (I've tried cancelled, fail and error as well, with no luck). Does anyone have any idea how else I can stop these panels from loading when a search is "stopped" because of a timeout?
Hello, I am having trouble writing a props.conf configuration for the following CSV file (please see the screenshot below for sample data), which has no header. The five columns shown in the screenshot are all values. The value of the first column is also included below for better visibility. Any help will be highly appreciated. Thank you so much.
Screenshot
Value of First Column
sma2aa_L_0__20210906-194605_16305.html@^@^2020-09-10@^@^04:51:43@^@^sma2aa@^@^insert into "nTABLE_MIGRATION_INFO_current"( "user_name"
[csv]
SHOULD_LINEMERGE=FALSE
TIME_PREFIX=?
TIME_FORMAT=?
TIMESTAMP_FIELDS=?
HEADER_FIELD_LINE_NUMBER=?
INDEXED_EXTRACTIONS=csv
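As a very rough, untested starting point, here is a search-time sketch built on assumptions that cannot be confirmed from the post alone: that the literal sequence @^@^ separates the five columns, that columns two and three hold the date and time, and that the sourcetype and field names (file_name, db_user, sql_text, and so on) are placeholders to rename.
[my_headerless_csv]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^[^@]+@\^@\^
TIME_FORMAT = %Y-%m-%d@^@^%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
EXTRACT-columns = ^(?<file_name>.*?)@\^@\^(?<event_date>[^@]*)@\^@\^(?<event_time>[^@]*)@\^@\^(?<db_user>[^@]*)@\^@\^(?<sql_text>.*)$
If strptime does not accept the literal delimiter characters inside TIME_FORMAT, the timestamp handling would need a different TIME_PREFIX/TIME_FORMAT split. Index-time structured extraction (INDEXED_EXTRACTIONS=csv with FIELD_NAMES) is another option, but whether it can express a multi-character delimiter has not been verified here.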
Hello everyone, we are trying to get Azure Information Protection data into Splunk. Specifically, we need insights into which users use Azure Information Protection to classify files. I don't have any experience with Azure and Microsoft Cloud, and from my research I have found some add-ons, but I don't know whether they are useful or not:
Splunk add-on for Microsoft Graph API: this seems to import only security alerts related to (among other things) AIP.
Splunk add-on for Microsoft Cloud Services: it allows pulling data directly from Azure assets, but I don't know whether AIP data is in scope.
Microsoft Azure add-on for Splunk: it allows collection of a broad range of information, but the same doubt as in the previous point applies.
Can you help me clear my mind on this variety of add-ons and see whether one of them (or maybe another one that I forgot to mention) is suitable for our needs? Thank you so much.
Can a Splunk Enterprise Server Work as a Forwarder to Splunk Cloud?    
Are the forwarders in Splunk Enterprise the same ones in ES? I ask because the MC reports missing forwarders in both, and the numbers are not the same! Please shed some light on this. My understanding is that the forwarders working with Splunk Enterprise are the same ones working for ES? Thank you for your help in advance.
Hello, I'm trying to implement APM Serverless for AWS Lambda in a Python function. The function is deployed via a container image, so the extension is built as a layer in the Dockerfile. First, in case someone else is trying to auto-instrument this process: the extension must be unzipped into /opt/extensions/, not into /opt/ as suggested in the docs; otherwise, the Lambda won't see the extension. However, when executing the function, I get the following error, without any more information:
AWS_Execution_Env is not supported, running lambda with default arguments. EXTENSION Name: appdynamics-extension-script State: Started Events: [] Start RequestId End RequestId Error: exit code 0 Extension.Crash
Any idea what could be causing this crash? Thanks!
What should the "Data Collection Interval" under the Forwarder Monitoring setup in the MC be set to, and why? What is the best setting in a large environment? We have Splunk Enterprise + ES. Should this setting match the MC on ES?
Hi, I have a field called STATUS with two possible values, "SUCCESS" or "WARNING", but the percentages don't seem to work well. I'd appreciate suggestions.
index=my_index SERVICE_CODE="ABCD"
| fields STATUS, SERVICE_CODE
| timechart span=1d sum(eval(if(STATUS="SUCCESS",1,0))) as SUCCESS, sum(eval(if(STATUS="WARNING",1,0))) as FAILED, count as total
| eval percentage=round((SUCCESS/total)*100,2)
| fillnull value=0
| fields percentage
| appendpipe [stats count | where count=0]
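For comparison, a minimal sketch of one possible restructuring: keep _time next to the computed percentage so the daily values line up, and drop the trailing fields/appendpipe steps. The index, field names, and span are taken from the post; the restructuring itself is only a suggestion.
index=my_index SERVICE_CODE="ABCD"
| timechart span=1d sum(eval(if(STATUS="SUCCESS",1,0))) as SUCCESS count as total
| eval percentage=round((SUCCESS/total)*100,2)
| fillnull value=0 percentage
| table _time SUCCESS total percentage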
I need help with deploying apps or TAs using a deployment server in a Linux environment, please. I greatly appreciate your help. I have tried working from my notes, but they are not good enough for the job. Thanks.
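At a high level, three pieces are involved, sketched below with placeholder names (nothing here is taken from the post): the app or TA is placed under $SPLUNK_HOME/etc/deployment-apps/ on the deployment server, serverclass.conf maps clients to that app, and each forwarder points at the deployment server via deploymentclient.conf.
# serverclass.conf on the deployment server (placeholder class, host pattern, and app name)
[serverClass:linux_forwarders]
whitelist.0 = linuxhost*
[serverClass:linux_forwarders:app:my_custom_ta]
restartSplunkd = true
# deploymentclient.conf on each forwarder (placeholder host and management port)
[target-broker:deploymentServer]
targetUri = deployserver.example.com:8089
After editing serverclass.conf, running "splunk reload deploy-server" on the deployment server makes it re-read the server classes without a restart.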
I used to clear all missing forwarders in Splunk Enterprise using the MC "Rebuild" option, but it is not working anymore. Any helpful advice is much appreciated.
Hi, I reset the data for a host in TrackMe because we were having issues with alerts. The host is no longer showing in TrackMe (monitored_state=ALL), but I can see it under the Data Host Monitoring collection. In the collection output, "data_host_st_summary", "data_index" and "data_sourcetype" are all blank fields. How do I go about populating these fields with the correct data again? This is a quiet host with few events; it is not being detected when I run the short/long-term tracker reports, and I cannot find a way to manually add the missing data. Thanks.
base search
| fields _time host pdfpath status
| stats values(pdfpath) as pdfpath values(host) as host by _time status
| table _time host status pdfpath
Example log:
_time host status pdfpath
2021-09-08 08:00:00.359 hostA processing /20210907/xxxx_live.3.21.cv.1866.13428730.1
2021-09-08 08:00:00.458 hostB processing /20180821/xxxx_live.1.18.cr.403.19409265.0
2021-09-08 08:00:00.462 hostB processing /20180821/xxxx_live.1.18.cr.403.19409265.0
2021-09-08 08:00:00.473 hostA finished /20210907/xxxx_live.3.21.cv.1866.13428730.1
2021-09-08 08:00:00.477 hostC processing /tmp/HL_end_state379145533207037128.pdf
2021-09-08 08:00:00.500 hostC finished /tmp/HL_end_state379145533207037128.pdf
I am looking for a way to trigger an alert when a host does not finish processing a pdfpath. Using the example above, hostB is having trouble processing its pdfpath, as there is no correlating "finished" status as there is for hostA and hostC. The output could be just the earliest and latest time the file was processed and a count of the attempts to process the file. It would also be cool to have a new status called "failed" to easily count the number of failures. Output example:
earliest_time latest_time host number_of_attempts pdfpath status
2021-09-08 08:00:00.458 2021-09-08 08:00:00.462 hostB 2 /20180821/xxxx_live.1.18.cr.403.19409265.0 failed
I'm looking for suggestions on how I could do this. Thank you.
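A minimal sketch of one way to approach it, using only the fields shown in the example; the "failed" label and the assumption that any pdfpath with no "finished" event counts as failed are additions, not taken from the post:
base search
| stats earliest(_time) as earliest_time latest(_time) as latest_time count as number_of_attempts max(eval(if(status="finished",1,0))) as has_finished by host pdfpath
| where has_finished=0
| eval status="failed"
| fieldformat earliest_time=strftime(earliest_time,"%Y-%m-%d %H:%M:%S.%3N")
| fieldformat latest_time=strftime(latest_time,"%Y-%m-%d %H:%M:%S.%3N")
| table earliest_time latest_time host number_of_attempts pdfpath status
Scheduling this over a window long enough to contain both the processing and finished events, and alerting when the result count is greater than zero, would turn it into the alert described.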
Hi, below is a simple example of what I am trying to do. I am trying to remove the duplicate out of the process name. I have the code for that, but I only want to run it if service_type = agent-based. Ideally: if service_type = agent-based, then run the evals below. However, with my current approach I lose the != agent-based events, which I don't want to run the eval on but do want to keep. So how do I say: if agent-based, run these two evals on that specific data, and then keep the rest of the != agent-based events untouched?
| eval temp=split($Process_Name$," ")
| eval Process_Name=mvindex(temp,0)
Thanks in advance.
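A minimal sketch of a conditional version, assuming the field is literally named Process_Name and that service_type exists on the events (the $...$ wrapping in the post looks like a dashboard token, so it is dropped here):
| eval Process_Name=if(service_type="agent-based", mvindex(split(Process_Name," "),0), Process_Name)
Events where service_type is anything other than agent-based fall through to the else branch and keep their original Process_Name.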
Hello, I'm trying to add the occurrences of a certain value to my base search count. The value is "detatched"; it is written in an event when a certain license has been used. This detatched license has a lifespan of 14 days; afterwards it is no longer active and I no longer need to add it to my base search. So basically it's like this:
index=indexa=* licensecount=* productid=5000 earliest=-30d@d latest=now()
| eval flag="basecount"
| append [search index=indexa =* productid=5000 subject="*detatched*" earliest=-45d@d latest=-31d@d | eval flag="addcount"]
| stats count(eval(flag="basecount")) as basecount count(eval(flag="addcount")) as addcount
| eval totalcount = basecount+addcount
| timechart span=1d count(totalcount)
I know this query is partially silly, but it shows what I'm trying to accomplish. Example: today I have a licence count of 5 for product 5000; 14 days ago I had a count of 1, so today it should show me 6. Tomorrow, that count of 1 shouldn't be added anymore, because it's more than 14 days old and no longer active. Ideally this should be shown in a timechart. I hope someone can make sense of this. I much appreciate any help or feedback, because maybe it's not possible to do this in Splunk. Thanks a lot, guys.
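A rough, untested sketch of one possible direction, under assumptions that may not match the data: the base and detatched events live in the same index, a detatched event should contribute to the daily total for the 14 days after it appears, and a rolling window over the daily timechart rows is an acceptable way to model that lifespan.
index=indexa productid=5000 (licensecount=* OR subject="*detatched*") earliest=-45d@d latest=now()
| eval kind=if(isnotnull(subject) AND like(subject,"%detatched%"),"detatched","base")
| timechart span=1d sum(eval(if(kind="base",1,0))) as basecount sum(eval(if(kind="detatched",1,0))) as detatchedcount
| streamstats window=14 sum(detatchedcount) as active_detatched
| eval totalcount=basecount+active_detatched
| table _time totalcount
Because timechart emits one row per day, a window of 14 rows approximates the 14-day lifespan; whether the current day should be included, and whether the base count should also be limited to the last 30 days, would need adjusting to the exact definition.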
If I have a correlationId, how can I write a query to find out how many times a particular client/method/API is being called?
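A minimal sketch, assuming the events carry fields literally named correlationId, client, method, and api, and that the index name is a placeholder (all of these are assumptions, not taken from the post):
index=my_app_index correlationId=*
| stats dc(correlationId) as distinct_calls count as total_events by client method api
| sort - distinct_calls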
Good morning everyone, I am trying to ingest a log that does not roll over to a new file; it only rolls when the service that writes the log is restarted. We have done some testing using crcSalt, and so far that has not helped us continually monitor the file as it is written. Any advice would be appreciated.
inputs.conf
[monitor://E:\Tomcat 9.0\logs\tomcat9-stdout.*.log]
sourcetype = test
index = test
blacklist = \.(gz|bz2|z|zip)$
disabled = false
CRCSALT = <SOURCE>
props.conf
[test]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = true
CHECK_FOR_HEADER = false
CHARSET = AUTO
EXTRACT-SessionID = (?<=SessionID:)(?P<SessionID>.+)
EXTRACT-Result = (?<=VerificationResult:)(?P<Result>.+)
EXTRACT-UserName = (?<=User:)(?P<UserName>.+)
EXTRACT-Response = (?<=Account Response:)(?P<Response>.+)
EXTRACT-Second_Response = (?<=Verification_test:)(?P<Second_Response>.+)
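For reference, a stripped-down monitor stanza sketch. Two hedged notes: the spelling documented for inputs.conf is crcSalt (mixed case), and whether the all-caps CRCSALT above is being honored is unclear; initCrcLength is only an optional knob sometimes used when files begin with long identical headers, and the value below is an assumption rather than a recommendation.
[monitor://E:\Tomcat 9.0\logs\tomcat9-stdout.*.log]
sourcetype = test
index = test
disabled = false
# documented spelling of the setting
crcSalt = <SOURCE>
# optional: widen the initial CRC window when files share a long identical header (assumed value)
initCrcLength = 1024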
Hi. I have a data model that consists of two root event datasets. Both accelerated using simple SPL. First dataset I can access using the following   | tstats summariesonly=t count FROM datamodel=model_name where nodename=dataset_1 by dataset_1.FieldName   But for the 2nd root event dataset, same format doesn't work. For that, I get events only by referencing the dataset along with the datamodel.   | tstats summariesonly=t count FROM datamodel=model_name.dataset_2 by dataset_2.FieldName   e.g., the following will not work.   | tstats summariesonly=t count FROM datamodel=model_name where nodename=dataset_2 by dataset_2.FieldName     I am trying to understand what causes splunk search to work differently on these datasets when both are at the same level? Thanks, ~ Abhi
Currently, NetApp storage is configured with Splunk; now we want to configure it again, from Splunk to the Checkmk monitoring tool. Will that be possible?
Hey Splunkers, how do I create a new field in Splunk? If I have a Windows security log with a "User" field, I want to call it and use it as "Account". I tried with eval but didn't succeed. Thanks.
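A minimal sketch, assuming the extracted field is literally named User: either copy it into a new field with eval, or rename it for the rest of the search.
... | eval Account=User
... | rename User as Account
The eval keeps both fields, while rename replaces User with Account for the remainder of the search. To make the mapping permanent as a knowledge object, the same idea can be saved as a field alias (FIELDALIAS in props.conf) or a calculated field.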