All Topics


It's driving me crazy. Splunk Enterprise 8.2.6, search head cluster with 3 members.

[role_test]
cumulativeRTSrchJobsQuota = 0
cumulativeSrchJobsQuota = 0
grantableRoles = test
importRoles = user
srchIndexesAllowed = *;_*
srchMaxTime = 8640000

A new "test" role, importing capabilities from the "user" role, with a new user assigned to the "test" role. There is no way to query the internal indexes. Any suggestions? Thanks.
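One thing worth checking (a sketch, assuming the role lives in authorize.conf and is pushed identically to all three SHC members by the deployer): srchIndexesAllowed only makes the internal indexes searchable when they are named explicitly in the search, so either add them to the default list or spell out index=_internal in the search itself.

[role_test]
importRoles = user
srchIndexesAllowed = *;_*
srchIndexesDefault = main;_internal

index=_internal sourcetype=splunkd earliest=-15m

If the explicit index=_internal search still returns nothing, the more likely culprit is the role definition not being identical on every cluster member.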
I set up an app folder, "TA-Whatever", on my search head (clustered with indexers and HECs) using the app builder. I dropped a Python script into the default folder inside TA-Whatever that generates a JSON file, which gets written into the parent TA-Whatever app folder. When going to the Add Data > Files & Directories menu, I went through the server file system drop-downs, selected the JSON file, and on the second page where you select the sourcetype it sees the JSON data. After I select my index, sourcetype and all that, finish, restart Splunk, and search the index, I see nothing; no data is there.

I have rebuilt the app several times as the splunk user and as root, chmod'ed everything to 777, rebuilt sourcetypes and indexes, added a props.conf, tried with no entry in props.conf, added an inputs.conf to the local app folder, tried with no inputs.conf in the TA-Whatever local folder, and also added a monitor to the global inputs.conf in etc/system/local. Still no dice.

Here is the inputs.conf I tried in both the global and the local app directory:

[monitor://tmp/felt.json]
disabled = 0
index = googlepyscript
sourcetype = googlepy

I also tried:

[monitor:///tmp/felt.json]
disabled = 0
index = googlepyscript
sourcetype = googlepy

No matter what I do, the index is not getting data. I have tried it with the built-in _json sourcetype and with one I created myself, and no data goes to the index after I finish the wizard. Any input is welcome at this point, as I have been going at it for several days. Thanks!
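A minimal sketch of the monitor stanza I would expect to work, assuming the file really lives at /tmp/felt.json on the search head itself and the googlepyscript index exists on the indexers (the three-slash form is the correct one, since the path after monitor:// must be absolute):

[monitor:///tmp/felt.json]
disabled = 0
index = googlepyscript
sourcetype = googlepy

Two quick checks from the host that has the file are usually worth running before rebuilding anything:

$SPLUNK_HOME/bin/splunk list inputstatus
index=_internal source=*splunkd.log* (component=TailReader OR component=WatchedFile) felt.json

The first shows whether the tailing processor has picked the file up at all; the second shows whether the file was read but the events were dropped elsewhere, for example because the target index does not exist on the indexers.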
I'm trying to create a search that shows a daily message count (both inbound and outbound) and the average for each direction. Although it doesn't give me any errors, when the table gets created, the results show as zero (I know this is inaccurate, as I pulled a message trace from O365 to confirm).

index=vs_email sampledesign@victoriasecret.com
| eval direction=case(RecipientAddress="sampledesign@victoriasecret.com", "inbound", RecipientAddress!="sampledesign@victoriasecret.com", "outbound")
| dedup MessageId
| bin _time span=1d
| eventstats count(direction="inbound") as inbound_count
| eventstats count(direction="outbound") as outbound_count
| dedup _time
| eventstats avg(inbound_count) as average_inbound_count
| eventstats avg(outbound_count) as average_outbound_count
| table inbound_count outbound_count average_inbound_count average_outbound_count

All of the results are showing as zero. Any help would be much appreciated. Thanks!
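A sketch of how this could be restructured, assuming RecipientAddress and MessageId are extracted as shown. count(direction="inbound") does not do a conditional count (the argument is treated as a field name that never exists, which is likely why everything comes back zero); the usual idiom is count(eval(...)), and stats ... by _time gives one row per day directly:

index=vs_email sampledesign@victoriasecret.com
| eval direction=if(RecipientAddress="sampledesign@victoriasecret.com", "inbound", "outbound")
| dedup MessageId
| bin _time span=1d
| stats count(eval(direction="inbound")) as inbound_count count(eval(direction="outbound")) as outbound_count by _time
| eventstats avg(inbound_count) as average_inbound_count avg(outbound_count) as average_outbound_count
| table _time inbound_count outbound_count average_inbound_count average_outbound_count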
Hi,

Can someone let me know which file and variable in the "Splunk Add-on for AWS" for S3 limits the ingestion of files to one hour? I didn't find any variable in the inputs.conf file that limits the ingestion of files to one hour. We need to index older files from the S3 bucket, but the "Splunk Add-on for AWS" only indexes the last hour.

This is the inputs.conf file:

[aws_s3://cloud-logs]
aws_account = abc
aws_s3_region = us-east-1
bucket_name = f-logs
character_set = auto
ct_blacklist = ^$
host_name = s3.us-east-1.amazonaws.com
index = cloud
initial_scan_datetime = 2022-01-14T15:59:18Z
max_items = 100000
max_retries = 3
polling_interval = 300
private_endpoint_enabled = 0
recursion_depth = -1
sourcetype = cloud:json
disabled = 0

Regards,
Edgard Patino
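A hedged pointer rather than a confirmed answer: in the generic S3 input, initial_scan_datetime sets the cut-off for how far back object modification times are considered, and the input also checkpoints keys it has already evaluated, so simply editing the date on an existing input often does not trigger a backfill. A sketch of the usual workaround is to create a new input stanza with an earlier initial_scan_datetime, for example:

[aws_s3://cloud-logs-backfill]
aws_account = abc
aws_s3_region = us-east-1
bucket_name = f-logs
character_set = auto
host_name = s3.us-east-1.amazonaws.com
index = cloud
initial_scan_datetime = 2021-01-01T00:00:00Z
polling_interval = 300
sourcetype = cloud:json
disabled = 0

The exact behavior is version-dependent, so the add-on's own documentation for the generic S3 input is worth double-checking before backfilling a large bucket.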
I am using scheduled alerts, and I notice that not all alerts are getting fired and I am not receiving emails for all the events. I get emails for around 45-50% of the alerts; for the rest I don't get an email. Any reason for this weird behaviour?
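A sketch of where to start looking, assuming access to the _internal index: the scheduler logs every saved-search run, including the ones it skips or defers when concurrency limits are hit, which is a common cause of "missing" alerts.

index=_internal sourcetype=scheduler savedsearch_name="<your alert name>"
| stats count by status, reason

A status of skipped or deferred (with the reason shown) points at scheduler concurrency limits or too many searches landing on the same cron minute. If every run shows success, the next place to look is the email action itself, for example index=_internal source=*python.log* sendemail ERROR.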
Good afternoon,

I wanted to reach out to the community for some assistance/clarification on the best approach to moving a search head with 2 index peers to a new Windows server. We currently have one search head acting as the license master, with two index peers connected to it. The search head server is being decommissioned, so I will need to move the search head and license master to the new server. Is there a recommended best approach to this?

I found the following information:

How to migrate
When you migrate on *nix systems, you can extract the tar file you downloaded directly over the copied files on the new system, or use your package manager to upgrade using the downloaded package. On Windows systems, the installer updates the Splunk files automatically.
1. Stop Splunk Enterprise services on the host from which you want to migrate.
2. Copy the entire contents of the $SPLUNK_HOME directory from the old host to the new host. Copying this directory also copies the mongo subdirectory.
3. Install Splunk Enterprise on the new host.
4. Verify that the index configuration (indexes.conf) file's volume, sizing, and path settings are still valid on the new host.
5. Start Splunk Enterprise on the new instance.
6. Log into Splunk Enterprise with your existing credentials.
7. After you log in, confirm that your data is intact by searching it.

Source: https://docs.splunk.com/Documentation/Splunk/7.2.3/Installation/MigrateaSplunkinstance

But this seems a little too simple. I am unable to keep the same server name and IP address that the search head has now, so that got me thinking: this may be simple, but I just wanted to ensure I am not missing a critical step.

Thank you,
Dan
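One step the copy-and-restore procedure doesn't cover when the hostname/IP changes: the two indexer peers point at the old search head for licensing, so their license client configuration has to be updated to the new server. A sketch, assuming it goes in each peer's $SPLUNK_HOME/etc/system/local/server.conf (the attribute is master_uri on 8.x and earlier, manager_uri on newer releases):

[license]
master_uri = https://<new-search-head-fqdn>:8089

It is also worth re-checking the distributed search peer list on the new search head after the move, and repointing anything else (forwarders, users, saved links) that still references the old server name.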
I have a saved report with three output fields that I want to add to a Column chart in Dashboard Studio. Two of the three fields contain static values (license limit and optimal utilization threshold) that I want to add as overlays to the third, utilized SVC, field. I can't seem to get the JSON correct. This is as close as I have come. How can I add two fields as overlays in a Column chart? Image attached.

{
  "type": "splunk.column",
  "dataSources": {
    "primary": "ds_search_1"
  },
  "title": "SVC License Usage (today)",
  "options": {
    "yAxisAbbreviation": "off",
    "y2AxisAbbreviation": "off",
    "showRoundedY2AxisLabels": false,
    "legendTruncation": "ellipsisMiddle",
    "showY2MajorGridLines": true,
    "xAxisTitleVisibility": "hide",
    "yAxisTitleText": "SVC Usage",
    "overlayFields": ["optimal utilization threshold", "license limit"],
    "columnGrouping": "overlay"
  },
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}
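A hedged suggestion rather than a confirmed fix: overlayFields has to list field names exactly as they appear in the search results, and names containing spaces are easy to mismatch. One way to take that variable out of the equation is to rename the fields in the report's SPL and reference the renamed fields, for example:

| rename "optimal utilization threshold" as optimal_threshold, "license limit" as license_limit

"overlayFields": ["optimal_threshold", "license_limit"]

It is also worth confirming that the saved report actually returns all three fields in its final table, since overlay fields that are absent from the results simply do not render.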
I received an error stating "This saved search cannot perform summary indexing because it has a malformed search" while I was setting up summary indexing through the UI. The SPL in my saved search included a lookup and a subsearch to dynamically set the earliest and latest values for the main search. From what I found researching the error, the issue is related to passing the earliest and latest values back to the main search. It took me a while to solve this, so I thought I'd post it here to help anyone else seeing this error.
Hello,

I have created a modular input using the splunk-app-example as a starting point. It extends the Script class, and I modified the get_scheme function to add arguments, for example:

key_secret = Argument("key_secret")
key_secret.title = "Key Secret"
key_secret.data_type = Argument.data_type_string
key_secret.required_on_create = True

This code saves key_secret as a plain string, which is clearly insecure. While investigating, I found the storage_passwords endpoint, and I added the following to the stream_events method:

if key_secret != "":
    secrets = self.service.storage_passwords
    storage_passwords = self.service.storage_passwords
    storage_password = storage_passwords.create(key_secret, key_id, tenant)
    input_item.update({"key_secret": ""})
else:
    key_secret = next(secret for secret in secrets if (secret.realm == tenant and secret.username == key_id)).clear_password

This is not working: I cannot modify the input definition, and the secret is being stored both in storage_passwords and in inputs.conf. Is there any way in code to delete the password from inputs.conf, or what is the correct way to manage this? Thanks!
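A sketch of the pattern other add-ons typically use, assuming splunklib is available and self.service is the session-authenticated Service handed to the modular input. The cleartext value is not removed from inputs.conf for you; after storing the secret you overwrite the input's own parameter with a placeholder through the REST inputs endpoint (input_item is just a local dict, so updating it has no effect on disk). The names below (realm/username choice, the MASK placeholder, the stanza kind) are illustrative assumptions, not the SDK's prescribed layout:

import splunklib.client as client  # bundled with the add-on, as in splunk-app-example

MASK = "<encrypted>"

def store_and_mask_secret(service, input_kind, input_name, realm, username, secret):
    # Replace any existing credential for this realm/username, then store the new one.
    for sp in service.storage_passwords:
        if sp.realm == realm and sp.username == username:
            service.storage_passwords.delete(username, realm)
            break
    service.storage_passwords.create(secret, username, realm)

    # Overwrite the cleartext parameter in inputs.conf with a placeholder
    # so the real value no longer lives on disk.
    item = service.inputs[input_name, input_kind]
    item.update(key_secret=MASK)

def load_secret(service, realm, username):
    # Read the secret back on every run instead of trusting inputs.conf.
    for sp in service.storage_passwords:
        if sp.realm == realm and sp.username == username:
            return sp.clear_password
    return None

In stream_events, when the configured key_secret is anything other than the placeholder, call store_and_mask_secret and then use the value; otherwise call load_secret. This is a sketch of the approach, not the add-on builder's exact implementation.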
Hi Splunkers,

I have developed a custom add-on on one server, where it works fine. When I exported it to another server, it gives the error below. I have gone through many articles on the community and found it is somehow related to the passwords.conf file: since the add-on was built on a different server, the password in it was encrypted with that server's splunk.secret and KV store. Please help me work out how to resolve the issue on this other server and update passwords.conf, or bind it to the new server.

ERROR PersistentScript [23354 PersistentScriptIo] - From {/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk-add-on-for-nimsoft/bin/splunk_add_on_for_nimsoft_rh_settings.py persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):

Thanks in advance,
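A hedged explanation of why this usually happens: values in passwords.conf are encrypted with the splunk.secret of the instance that wrote them, so a copied file cannot be decrypted on a server with a different splunk.secret, and the add-on's get_password call fails. The usual remedy (a sketch, assuming the credential lives under the add-on's local directory) is to delete the copied credential stanzas and re-enter the account in the add-on's setup page on the new server, so they are re-encrypted locally:

# On the new server, remove the stanza(s) copied from the old one, e.g. in
# /opt/splunk/etc/apps/splunk-add-on-for-nimsoft/local/passwords.conf
[credential::<username>:]
password = $7$...   <- encrypted with the old server's splunk.secret

Then restart Splunk and re-configure the credentials through the add-on UI. Packaging the add-on without its local/passwords.conf (and re-entering secrets on each server) avoids the problem in the first place.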
Hi,

I have the fields below, and I need to display the count of each field value:

|eval TotalApps=if(match('Type'="NTB"),"1","0")
|eval InProgress=if(Type= "NTB" AND isnull(date),"1","0")
|eval Submitted=if(Type= "NTB" AND isnotnull(date),"1","0")
|eval Apps_Submitted=if(match('Myinfo_Used',"1"),'REASON_CD',"0")
|stats count by Apps_Submitted

I am getting results like:

COPS   1
CMS   2
FCO   3

But the requirement is |stats sum(TotalApps) as TotalApps sum(InProgress) as InProgress sum(Submitted) as Submitted, along with the Apps_Submitted count of each field value. For example:

TotalApps    10
InProgress   5
Submitted   5
AppsSubmitted  5
COPS       1
CMS         2
FCO          3
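A sketch of one way to get that single vertical listing, assuming the reason codes are the ones shown in the example (COPS, CMS, FCO) and that the eval flags should be numeric so they can be summed. count(eval(...)) does the conditional counting, and transpose turns the one summary row into a metric/count column pair:

<base search>
| eval TotalApps=if(Type="NTB", 1, 0)
| eval InProgress=if(Type="NTB" AND isnull(date), 1, 0)
| eval Submitted=if(Type="NTB" AND isnotnull(date), 1, 0)
| eval Apps_Submitted=if(Myinfo_Used="1", REASON_CD, null())
| stats sum(TotalApps) as TotalApps sum(InProgress) as InProgress sum(Submitted) as Submitted
        count(Apps_Submitted) as AppsSubmitted
        count(eval(Apps_Submitted="COPS")) as COPS
        count(eval(Apps_Submitted="CMS")) as CMS
        count(eval(Apps_Submitted="FCO")) as FCO
| transpose column_name=Metric
| rename "row 1" as Count

If the reason codes are not known up front, a separate | stats count by Apps_Submitted combined with the summary row (for example via append) is the more general route.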
How can I create a dashboard for my entities, like the image below, in the IT Essentials Work app? I downloaded the manuals named:

- Entity Integrations Manual
- Overview of Splunk IT Essentials Work

but I could not find my answer. Thank you for your help.
Hi,

I have a problem finding answers about the failure of a universal forwarder to re-ingest an XML file.

02-08-2023 11:11:40.348 +0000 INFO WatchedFile [10392 tailreader0] - Checksum for seekptr didn't match, will re-read entire file='ps_Z00000ldpowf9tXp9iZcoMZgvijew.log'.

This is an XML file. It is created as a small file. Eventually, an application re-writes this file under a temporary name before renaming it back to this same name. This can be seconds after it is created, or after many minutes or even hours. My problem is that this event suggests the forwarder knows the file has changed, but the new content of the file is not ingested. It will be ingested as expected if I manually modify the top of the file later. At that point, I see:

02-08-2023 16:21:51.439 +0000 INFO WatchedFile [10392 tailreader0] - Checksum for seekptr didn't match, will re-read entire file='ps_Z00000ldpowf9tXp9iZcoMZgvijew.log'.
02-08-2023 16:21:51.439 +0000 INFO WatchedFile [10392 tailreader0] - Will begin reading at offset=0 for file='ps_Z00000ldpowf9tXp9iZcoMZgvijew.log'.

And the new version of the file is finally available. This is a universal forwarder on a Linux server. The new version of the XML file is 2233 bytes long, so the length of the file does not seem to be a problem. A transform exists on the indexers to load the content as one event; this works fine. I do not believe my problem is related to initCrcLength, as the forwarder did notice the file had changed. I blacklist the name of the temporary file. Switching "multiline_event_extra_waittime" to true or false does not help.

The ingestion and re-ingestion work fine most of the time. Maybe one in every 20 files does not get re-ingested as expected, and it is usually the ones that are re-written a few seconds after they are created.

My question is the following: why is the file sometimes not re-indexed if the forwarder says it will do it? I can see that there can be a timing/race condition at play, but the logs do not show anything other than the INFO records. Would changing the debugging level help? What other parameter in the input could help if this is a timing problem? I failed to find a solution online because pretty much all conversations related to this INFO message are about stopping file re-ingestion, so I have not been successful in finding my needle. Any advice is welcome. Thanks.
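A sketch of settings worth experimenting with, not a confirmed fix, assuming the monitor stanza on the forwarder can be adjusted (the path below is a placeholder). Raising the debug level for the tailing components is also the most direct way to see what happens between the "will re-read" message and the missing read:

[monitor:///path/to/ps_*.log]
initCrcLength = 1024
time_before_close = 10
multiline_event_extra_waittime = true

# in $SPLUNK_HOME/etc/log-local.cfg on the forwarder, to get more than INFO:
category.TailReader=DEBUG
category.WatchedFile=DEBUG
category.TailingProcessor=DEBUG

time_before_close keeps the file open a little longer after EOF, which can matter when a file is renamed over moments after being read; the log-local.cfg categories shown are the tailing-related ones and revert to INFO once removed.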
Within Splunk SOAR (Phantom), is there a way to have a prompt message pop up for the user running the playbook, as opposed to having to go up to notifications and then click the prompt notification to open it?

The idea is that when certain playbooks are launched from workbooks, the prompt will supply the user with a form that affects the actions/outcomes of the playbook. While having the prompt pop up directly isn't necessary for this, it would be a quality-of-life improvement and help with the confusion of new users. I could have sworn I saw this type of feature during a SOAR demo by Splunk, but I can't find it documented anywhere, if it exists.
I've configured a HEC to receive events from a Telegraf emitter, which provides metrics in the form:

{"time":1676415410,"event":"metric","host":"VaultNonProd-us-east-2a","index":"vault-metrics","fields":{"_value":0.022299762544379577,"cluster":"vault_nonprod","datacenter":"us-east-2","metric_name":"vault.raft.replication.heartbeat.NonProd-us-east-2b-d992bf60.stddev","metric_type":"timing","role":"vault-server"}}

All of the fields come across from the HF to our indexers except the one we're most interested in, the _value field. Searching around, I found https://docs.splunk.com/Documentation/DSP/1.3.1/Connection/IndexEvent which, in part, states that "Entries that are not included in fields include: any key that starts with underscore (such as _time)".

Is it possible to include an underscore-starting field in the forwarded event? Thanks
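A hedged observation, assuming vault-metrics is a metrics index and the payload is being sent in the HEC metrics format shown ("event":"metric" with metric_name and _value under fields): in that format _value is not an ordinary indexed field at all, it is the numeric measurement of the metric, so it will never show up alongside the other fields in an event search. It is queried with the metrics commands instead, for example:

| mstats avg(_value) WHERE index=vault-metrics AND metric_name="vault.raft.replication.heartbeat.*" span=1m BY datacenter

If the data is actually landing in an event index, then the underscore prefix does get dropped as the DSP doc describes, and renaming the field on the Telegraf side (e.g. to value) before it reaches HEC is the simpler workaround. Which of the two applies depends on how the vault-metrics index is defined.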
I am trying to create a query to get the sum of multiple fields by another field.

index="*****"
| stats sum(field_A) as A by field_C, sum(field_B) as B by field_C
| table field_C, field_A, field_B

This query is giving an error.
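A sketch of the working form, assuming field_A, field_B and field_C are already extracted: stats takes all of its aggregations before a single BY clause, and the table then has to reference the renamed fields:

index="*****"
| stats sum(field_A) as A sum(field_B) as B by field_C
| table field_C, A, B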
I am using Splunk to search old log files, and _time is different from the time in the log. Would this make sense, or do I have to parse the log to set _time to the log time? Thanks.
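A sketch of the usual fix, assuming the timestamps are present in the raw events and the sourcetype can be configured on whichever instance parses the data (the stanza name and TIME_FORMAT below are placeholders for your own): explicit timestamp settings in props.conf stop Splunk from falling back to the file modification time or index time, and MAX_DAYS_AGO matters for genuinely old logs, since timestamps further back than it allows are not accepted.

[my_old_logs]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30
MAX_DAYS_AGO = 3650

Note that these settings only affect data indexed after the change; events already indexed keep the _time they were given.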
How do I verify the forwarder is sending data to the indexer? What search do I need to perform, other than checking Forwarder Management?
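A couple of searches to run on the indexer or search head (a sketch; <forwarder_host> is a placeholder for the forwarder's host name):

index=_internal host=<forwarder_host> source=*metrics.log*
| stats latest(_time) as last_internal_event

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen, latest(version) as forwarder_version by hostname

The first confirms the forwarder's own internal logs are arriving, which means the output pipeline works end to end; the second lists every forwarder currently opening connections to the indexer. If both look healthy but your actual data is missing, the problem is usually the inputs on the forwarder rather than the forwarding itself.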
Hello, I'm a new Splunk Compliance Manager and I need some assistance. How do I check Splunk compliance, and how do I better manage licensing?

Thanks,
Rodney
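A sketch of a starting point for the licensing side, assuming access to the _internal index on the license manager: daily ingest against the licensed quota is what Splunk enforces, and the Monitoring Console's License Usage dashboards show the same data. The raw search version, for a quick daily-volume view:

index=_internal source=*license_usage.log* type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) as daily_ingest_GB

Comparing daily_ingest_GB to the license size (Settings > Licensing) shows how close the environment runs to its quota, and splitting the same search by idx or pool shows which indexes or pools drive the volume.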
Hi all,

Splunk newbie with what I hope is a simple question. I have a UF installed on my Windows file server, and it is set to monitor a directory; see below:

[WinEventLog://Security]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest

[WinEventLog://System]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest

[monitor://D:\documents\Confidential]
disabled = false

The intent is for it to report access/modifications/deletions to files in that directory, but I am not getting any file monitoring activity returned to my Splunk server when I perform a simple query for the Windows host. I do get all the system and security events, though. Any ideas on why I'm not getting the file monitoring activity? Thanks!
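A hedged explanation of the likely gap: a monitor:// stanza indexes the contents of the files in that path, it does not generate access/modify/delete audit events. File access auditing comes from Windows itself: object-access auditing has to be enabled in the audit policy and a SACL set on the D:\documents\Confidential folder, at which point the relevant events (for example 4656, 4658, 4660 and 4663) land in the Security event log that the existing [WinEventLog://Security] stanza already forwards. A sketch of narrowing that input to the file-access events, assuming the audit policy and SACL are in place:

[WinEventLog://Security]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
whitelist = 4656,4658,4660,4663

The monitor:// stanza can stay if the goal is also to index the documents' contents, but on its own it will never produce the access-audit activity being looked for.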