All Topics

Hi, I am setting up a search head/indexer setup. I have port 9997 listening on the indexer, and I configured the search head to forward to the indexer (since the files are being sent to the search head). I can see the SYN packets being sent from the search head to the indexer, but that's about it; I am not sure what the indexer is doing with them, and it's not sending any error back. I captured a tcpdump on the indexer, plus a tcpdump and logs from the search head. Let me know what I need to do to fix this. Thank you in advance.
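For reference, a minimal forwarding setup usually looks like the sketch below (hostnames are placeholders, not values from the post). SYN packets with no SYN-ACK in return also commonly point to a firewall between the two hosts.

# outputs.conf on the search head (hostname is a placeholder)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997

# inputs.conf on the indexer, enabling the listening port
[splunktcp://9997]
disabled = 0

After changing either file, restart the Splunk instance and check splunkd.log on both sides for connection errors.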
When going to CMC -> Forwarders -> Forwarders: deployment, I see that we have 19k+ forwarders, which is completely inaccurate; we have more like 900. It shows 18k+ as missing, and the list has instances decommissioned years ago. I thought I could fix this by telling it to rebuild the forwarder assets via the button under CMC -> Forwarders -> Forwarder monitor setup, but when I click on it, it processes for about a minute and then nothing changes. The description makes me think it is supposed to clear out the sim_forwarder_assets.csv lookup and rebuild it using only data it sees within the time frame I selected (24 hours). If I open up the lookup, all the entries it had previously are still there. Am I misunderstanding how this works, or is something broken?
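As a hedged sketch (the timestamp field name below is an assumption; check your copy of the lookup first), you can inspect and, if needed, manually trim the lookup that the rebuild button is supposed to regenerate:

| inputlookup sim_forwarder_assets.csv | stats count

| inputlookup sim_forwarder_assets.csv
| where last_connected >= relative_time(now(), "-24h")
| outputlookup sim_forwarder_assets.csv

The second search keeps only entries seen in the last 24 hours; verify the actual field names in your lookup before overwriting it.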
I am sure it's fine, but this TA seems a little off (the logo and the 'Built By'). Given who Wiz are, what they do, and their recent high-profile work disrupting some bad guys, I am keen for others' views on this. "Built by product-integrations product-integrations" is strange, and the logo seems pixelated. Our team has recently had some "luck" in getting things vetted that really shouldn't have been (and yes, we reported it), so simply saying "it passed App Vetting" isn't enough for us.
My original time format in the search is:

eventID: d7d2d438-cc61-4e74-9e9a-3fd8ae96388d
eventName: StartInstances
eventSource: ec2.amazonaws.com
eventTime: 2024-01-30T05:00:27Z
eventType: AwsApiCall

I am not able to convert it using the strptime function:

eval dt_year_epoc = strptime(eventTime, "%Y-%m-%dThh:mm:ssZ")
eval dt_day = strftime(dt_year_epoc, "%d")

Nothing comes up in dt_day.
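The format string has to use % conversion directives for every time component; a literal hh:mm:ss will not match anything. A corrected sketch:

| eval dt_year_epoc = strptime(eventTime, "%Y-%m-%dT%H:%M:%SZ")
| eval dt_day = strftime(dt_year_epoc, "%d")

%H, %M, and %S are the hour, minute, and second directives; anything outside a % directive is matched literally, which is why the T and Z are fine as-is.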
How can I take the eventName, instanceId, and eventTime in a pivot table from the search below?

index=aws_cloudtrail sourcetype="aws:cloudtrail" (eventName="StartInstances" OR eventName="StopInstances" OR eventName="StartDBInstance" OR eventName="StopDBInstance" OR eventName="StartDBCluster" OR eventName="StopDBCluster") AND (userIdentity.type="AssumedRole" AND userIdentity.sessionContext.sessionIssuer.userName="*sched*")
| spath "requestParameters.instancesSet.items{}.instanceId"
| search "requestParameters.instancesSet.items{}.instanceId"="i-0486ba14134c4355b"
| spath "responseElements.instancesSet.items{}.instanceId"
| spath "recipientAccountId"

Events:

awsRegion: us-east-1
eventCategory: Management
eventID: 3a80a688-fa82-4950-b823-69ffc3283862
eventName: StartInstances
eventSource: ec2.amazonaws.com
eventTime: 2024-01-30T11:00:38Z
eventType: AwsApiCall
eventVersion: 1.09
managementEvent: true
readOnly: false
recipientAccountId: XXXXXXXXXXX
requestID: b404437a-ee56-4531-842e-1b10c01f01d3
requestParameters: { instancesSet: { items: [ { instanceId: i-0486ba14134c4355b } ] } }
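A hedged sketch of flattening those three fields into a table, building on the search above (the output= rename just makes the multivalue spath field easier to reference):

index=aws_cloudtrail sourcetype="aws:cloudtrail" (eventName="StartInstances" OR eventName="StopInstances" OR eventName="StartDBInstance" OR eventName="StopDBInstance" OR eventName="StartDBCluster" OR eventName="StopDBCluster")
| spath output=instanceId path="requestParameters.instancesSet.items{}.instanceId"
| table eventTime, eventName, instanceId

From there, a stats variant such as | stats latest(eventTime) by eventName, instanceId gives a pivot-style rollup.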
Under Splunk DB Connect, we have data inputs created which periodically pull data from our SQL Server database and put it into our indexes. The SQL queries used have a rising column which acts as a checkpoint column for these inputs. Things were working fine until the KV Store went down on the Splunk server. To bring it up, we followed these steps, which worked:

1. Stop the search head that has the stale KV Store member.
2. Run the command splunk clean kvstore --local.
3. Restart the search head.
4. Run the command splunk show kvstore-status to verify synchronization.

But after doing this, Splunk DB Connect stopped writing any new data to the indexes. The DB Connect logs are filled with errors that look like this for all data inputs:

loading checkpoint for input title=CDA_Eportal_JobUserMapping
loading checkpoint by checkpointKey=63863acecd230000be007648 from KV Store for input title=CDA_Eportal_JobUserMapping
error loading checkpoint for input title=CDA_Eportal_JobUserMapping
java.lang.RuntimeException: Can't find checkpoint for DB Input CDA_Eportal_JobUserMapping

I am unable to even edit the data inputs to manually add checkpoints, as they fail while saving. Is there any way to fix all checkpoints, or clear all of them so that data gets written to the indexes again? What should I do to fix this issue?
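A hedged way to confirm whether the checkpoint collection survived the clean (credentials are placeholders; the relevant collection name is whichever one appears in DB Connect's list, not a value from the post):

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/config

If splunk clean kvstore --local wiped the checkpoint data, the collection will exist but be empty, and the old _key values referenced in the logs (like 63863acecd230000be007648) no longer resolve; re-creating each input, or re-saving it with a fresh starting checkpoint value, is then the usual path forward.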
I need to backfill some missing data into a summary index. However, some data is already present in the same index, so I only want to backfill the remaining events; the data that is already present should not be injected again. I am currently using the fill_summary_index.py script, but during testing it seems to inject duplicate data, indicating that the deduplication function is not working correctly in this script. Please help me by providing a proper script to address this issue.
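For what it's worth, the script does take a dedup flag in the versions I have seen; a hedged invocation (saved-search name, time range, and credentials are placeholders):

splunk cmd python fill_summary_index.py -app search -name "my_summary_search" -et -30d@d -lt @d -j 8 -dedup true -auth admin:changeme

With -dedup true the script is supposed to skip time spans that already have summary events, so it is worth confirming the flag is actually being passed in the test runs that produced duplicates.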
Hey folks, does anyone know of a straightforward way to get a count of the number of times each playbook is used as a subplaybook? I know you're able to click into the playbook and look to see where it's being used, but I was hoping to do so at a large scale without having to click into every single playbook. I've got some changes coming that will require a fair number of playbooks to be updated and was hoping to use the count to help determine where to prioritize our effort.
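One hedged approach, assuming classic playbooks checked out in the on-disk local source repo (the path varies by install, and the pattern only catches generated phantom.playbook() call sites):

# count subplaybook-call lines per playbook file; path is an assumption
grep -rc "phantom.playbook(" /opt/phantom/scm/git/local/*.py | sort -t: -k2 -nr

That counts how many subplaybook calls each playbook makes; grepping for a specific playbook's name across all files inverts it, giving how often that playbook is called as a subplaybook.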
Ever since upgrading to version 2.1.4 of the Cofense Triage Add-On, we get hundreds of these errors in our _internal logs:

01-30-2024 13:20:42.094 -0500 ERROR ScriptRunner [1503161 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py execute': /opt/splunk/etc/apps/TA-cofense-triage-add-on-for-splunk/bin/ta_cofense_triage_add_on_for_splunk/aob_py3/solnlib/utils.py:153: UserWarning: _get_all_passwords is deprecated, please use get_all_passwords_in_realm instead.

It does not affect log ingestion, but we would like help figuring out how to suppress the errors.
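If the goal is just to keep them out of _internal, one hedged option is to raise the ScriptRunner log level in $SPLUNK_HOME/etc/log-local.cfg (create the file if it does not exist), at the cost of hiding other ScriptRunner errors too:

# $SPLUNK_HOME/etc/log-local.cfg
category.ScriptRunner=FATAL

A restart is required for it to take effect; the cleaner long-term fix is an add-on update that stops calling the deprecated solnlib function.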
How can I get the list of indexes/sourcetypes that haven't been used in Splunk for the last 90 days? Can anyone suggest a query to find each index/sourcetype that is not used in any search or knowledge object?
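A hedged starting point based on the audit log (the regex is approximate and only catches explicit index= terms in completed searches):

index=_audit action=search info=completed earliest=-90d
| rex field=search max_match=20 "index\s*=\s*\"?(?<searched_index>[\w\*-]+)"
| stats count by searched_index

Comparing that list against | eventcount summarize=false index=* shows which indexes receive data but never appear in searches; knowledge-object usage (macros, lookups, saved searches) would need a separate pass over their definitions.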
I am working on creating an alert from Splunk. In my search I am creating a variable using eval, but it is not used in the result table; I would like to use it in the email subject and body.

index=applications sourcetype=aws:cloudwatchlogs ((Job="*prod-job1*") OR (Job="*prod-job2*"))
| eval emailTime=strftime(now(),"%m/%d/%Y")
| stats latest(_time) as latest(s3Partition) as s3Partition latest(field1) as field1 latest(field2) as field2 latest(emailTime) as emailTime by table_name
| search field2="*" emailTime=*
| eval diff=(field2-field1)
| eval evt_time=strftime(_time, "%Y-%m-%d")
| eval partition_date=substr(s3Partition, len("event_creation_time=")+1, len("yyyy-mm-dd"))
| where isnotnull(table_name) and isnotnull(emailTime) and ( evt_time == partition_date )
| table table_name, field1, field2, diff
| sort raw_table_name
| rename table_name AS "Table Name" field1 AS "Field1 count" field2 AS "Field2 count" diff as "Count Difference"

I tried using $result.partition_date$ and $result.emailTime$ in the subject and body, but the values are not getting substituted; they appear empty in both places. Is it possible to use these values in the email without using them in the table for the alert? Thank you.
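$result.fieldname$ only resolves fields that are present in the first row of the final result set, so fields dropped by the closing table command are not available to the email action. A hedged fix is simply to keep them in the final output:

...
| table table_name, field1, field2, diff, emailTime, partition_date
| rename table_name AS "Table Name" field1 AS "Field1 count" field2 AS "Field2 count" diff AS "Count Difference"

With emailTime and partition_date surviving to the final results, $result.emailTime$ and $result.partition_date$ should populate in the subject and body; if the extra columns are unwanted in the emailed table, they can be dropped in the email's inline-table formatting rather than in the search.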
Our current SOAR servers, a fresh install on AWS EC2s, return 500 errors each night. Upon investigation, it looks like there's this error in the logs:

File "/opt/soar/usr/python39/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: connection to server on socket "/tmp//.s.PGSQL.6432" failed: No such file or directory
    Is the server running locally and accepting connections on that socket?

On a healthy server, that file is present. On a 500-error server, it's missing. Is there an explanation of why it might be going missing? The issue is temporarily resolved by stopping and starting Phantom again. I think it might be related to PostgreSQL or pgbouncer.
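A hedged pair of checks to run the next time it happens, before restarting (paths taken from the traceback above):

ls -l /tmp/.s.PGSQL.6432                # the unix socket the connection fails on
ps aux | grep -E "pgbouncer|postgres"   # is the process still up?

If the process is up but the socket file is gone, something is cleaning /tmp; systemd-tmpfiles aging or a nightly tmp-cleaner cron removing idle socket files would match the "500s each night, fixed by restart" pattern.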
I have a Splunk distributed system with 3 indexers, 3 search heads, a manager, and 2 heavy forwarders. I am attempting to deploy the DB Connect application to the HF and the SHC. The SHC has 3 member nodes, with the deployer on the manager node. Ideally, this would all be done with ansible; sadly, the deployer gets in the way. I can deploy to the HF with ansible, but the deployer keeps removing the DB Connect app on the SHC.

That said, to deploy I install the app on the manager node, install the drivers, copy it to the .../shcluster/apps directory, and run the splunk apply shcluster-bundle command. I've done this both manually and using ansible. When I run the apply, the deployer does not put the entire app on the search heads; it only puts the default and metadata directories in the splunk_app_db_connect directory on the search heads. When I go into manage apps in the GUI, I see the app installed, but it is not visible.

I would prefer not to use the GUI for management and to perform all management tasks via the CLI and ansible. The code is stored in a version control system, which gives not only control over the deployments but also tracks who did what, when, why, and how. So I guess there are multiple questions:

1. Why is the deployer not pushing the entire application to the search heads?
2. How can I disable the deployer and just use ansible?
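For reference, the standard deployer push looks like this (target URI and credentials are placeholders), and it can be wrapped in an ansible task like any other command:

splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme

The deployer only pushes what is staged under $SPLUNK_HOME/etc/shcluster/apps, so if only default and metadata arrive on the members, it is worth diffing the staged copy of splunk_app_db_connect against the full install to see what was actually copied into the staging directory.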
While creating a HEC token through PuTTY using the command below, we are getting an error like "Couldn't request server info: Couldn't complete HTTP request: Connection refused". Please provide any info or a solution.

/opt/splunk/bin/splunk http-event-collector create Appd_Splunk -uri http://<IP>:8089 -description "This is a Appd token" -disabled 1 -index toll_alrt
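The management port speaks HTTPS by default, so http:// in the -uri is one likely culprit. A hedged corrected invocation (same values as above, only the scheme changed):

/opt/splunk/bin/splunk http-event-collector create Appd_Splunk -uri https://<IP>:8089 -description "This is a Appd token" -disabled 1 -index toll_alrt

If it still fails with connection refused, confirm splunkd is actually listening on 8089 on that host (for example, netstat -tlnp | grep 8089) and that the IP is reachable from where the command runs.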
Panel_1:

<set token="V20">$result.value_20$</set>
<set token="V40">$result.value_40$</set>
<set token="V0">$result.value_0$</set>
<set token="V100">$result.value_100$</set>

Panel_2:

<format type="color" field="&gt;6hrs-&lt;8hrs">
  <colorPalette type="expression">case(value&lt;="$V20$", "#006400", value&gt;"$V20$" AND value&lt;="$V40$", "#ffb200", value&gt;"$V40$", "#800000")</colorPalette>
</format>
<format type="color" field="&gt;8hrs-&lt;10hrs">
  <colorPalette type="expression">case(value&lt;="$V20$", "#006400", value&gt;"$V20$" AND value&lt;="$V40$", "#ffb200", value&gt;"$V40$", "#800000")</colorPalette>
</format>
<format type="color" field="&gt;10hrs-&lt;12hrs">
  <colorPalette type="expression">case(value&lt;="$V20$", "#006400", value&gt;"$V20$" AND value&lt;="$V40$", "#ffb200", value&gt;"$V40$", "#800000")</colorPalette>
</format>
<format type="color" field="&gt;12hrs-&lt;14hrs">
  <colorPalette type="expression">case(value&lt;="$V20$", "#006400", value&gt;"$V20$" AND value&lt;="$V40$", "#ffb200", value&gt;"$V40$", "#800000")</colorPalette>
</format>
<format type="color" field="&gt;14hrs-&lt;16hrs">
  <colorPalette type="expression">case(value&lt;="$V20$", "#006400", value&gt;"$V20$" AND value&lt;="$V40$", "#ffb200", value&gt;"$V40$", "#800000")</colorPalette>
</format>

The above is my scenario: I create the tokens from the Panel_1 result and pass them into the colorPalette expression to highlight the cells dynamically, but I can't achieve the desired output. How can I get this to work?
I want to download the trial version of Splunk Enterprise and managed to register. Whenever I try to log in to Splunk.com, it keeps showing a 403 error. I tried with both Chrome and Firefox, same error; both browsers are the latest version. When I click on Login, it redirects to the following URL and shows the 403 error:

https://www.splunk.com/saml/login?module=nav&redirecturl=https://www.splunk.com/

Environment: Windows 11 (updated with the latest MS patches) on a home network. I have already tried the following:

- Rebooted the laptop and router
- Cleared the browser caches
- Added www.splunk.com to the trusted zone
- Disabled Windows Firewall
- Disabled AV

Anything else I should be checking?
Hello, I have events in this format:

<servername> <metricname> <epochtime> <metricvalue>

These events come from HEC to a heavy forwarder and are then forwarded to indexers. I would like Splunk to recognize <epochtime> as the event timestamp. <servername> and <metricname> are alphanumeric words with no whitespace inside, while <metricvalue> is numerical. <epochtime> is a 10-digit integer epoch time.

I've set up the props.conf file on the heavy forwarder as follows:

[sourcetypename]
TIME_FORMAT = %s

But events are not indexed with the correct timestamp. I also tried adding this property:

TIME_PREFIX = \S+\s\S+\s

But no luck. Can you help me understand what I am doing wrong?

EDIT: Log example:

mywebserver123 SOME_METRIC 1706569460 5
myotherwebserver456.domain.com ANY_OTHER_NAME 1706569582 3
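A hedged props.conf sketch for this layout (anchoring the prefix to the start of the event and capping the lookahead at the 10-digit epoch):

[sourcetypename]
TIME_PREFIX = ^\S+\s+\S+\s+
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 11

One caveat worth checking first: if the data arrives on HEC's /services/collector/event endpoint, the timestamp comes from the event envelope (the time field, or receipt time) and props-based timestamp extraction never runs on the event body; the /services/collector/raw endpoint, or setting time in the HEC payload, is what goes through normal parsing.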
I am trying to install the credential package on a Splunk universal forwarder and need help with a few queries, as below. When I download the package from the Splunk Cloud platform (Apps -> Universal Forwarder -> Download UF credentials), the package is downloaded to my local machine, but I am unable to locate it afterwards. Please assist me with where I can find the downloaded credential package.
Hi, would you mind helping with this? I have been working for days to figure out how I can pass a lookup file subsearch as a "like" condition in the main search, something like these two examples:

1) main search | where like(onerowevent, "%".[search [| inputlookup blabla.csv | <whatever_condition_to_make_onecompare_field> | table onecompare ]]."%")

2) main search | eval onerowevent=if(like(onerowevent, "%".[search [| inputlookup blabla.csv | <whatever_condition_to_make_onecompare_field> | table onecompare ]]."%"), onerowevent, "")
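A hedged sketch of the usual workaround: have the subsearch return a search field containing wildcarded terms, which the outer search expands into an implicit OR at the base-search level (lookup and field names as in the post; adjust onerowevent to the real field):

main search terms
    [ | inputlookup blabla.csv
      | <whatever_condition_to_make_onecompare_field>
      | eval search = "onerowevent=*" . onecompare . "*"
      | fields search ]
| ...

Subsearch results cannot be spliced into where like() directly; returning a search field (or using the format command) is the supported way to inject subsearch output into the outer query.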
Hi Splunkers,

I don't need the value in the first line of the search, but I do need that value later in the search to filter, so I tried this way to skip the value:

dmz type IN (if($machine$=="DMZ",true,$machine$))

Will that work? Thanks in advance!