All Posts

Under Splunk DB Connect, we have data inputs created which periodically pull data from our SQL Server database and write it into our indexes. The SQL queries use a rising column which acts as a checkpoint column for these inputs. Things were working fine until the KV Store went down on the Splunk server. To bring it back up, we followed these steps, which worked:
1. Stop the search head that has the stale KV Store member.
2. Run the command splunk clean kvstore --local.
3. Restart the search head.
4. Run the command splunk show kvstore-status to verify synchronization.
But after doing this, Splunk DB Connect stopped writing any new data to the indexes. The DB Connect logs are filled with errors that look like this for all data inputs:
loading checkpoint for input title=CDA_Eportal_JobUserMapping
loading checkpoint by checkpointKey=63863acecd230000be007648 from KV Store for input title=CDA_Eportal_JobUserMapping
error loading checkpoint for input title=CDA_Eportal_JobUserMapping
java.lang.RuntimeException: Can't find checkpoint for DB Input CDA_Eportal_JobUserMapping
I am unable to even edit the data inputs to manually add checkpoints, as they fail while saving. Is there any way to fix all the checkpoints, or clear all of them, so that data gets written to the indexes again? What should I do to fix this issue?
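Since splunk clean kvstore --local wipes the collections on that member, the DB Connect checkpoint collection was most likely emptied along with everything else. One way to confirm what is still present in the KV Store for the app is to list its collections; a minimal SPL sketch, assuming the default app name splunk_app_db_connect and that it runs on the search head hosting the KV Store (the exact checkpoint collection name varies by DB Connect version, so treat it as something to look for rather than a given):

| rest /servicesNS/nobody/splunk_app_db_connect/storage/collections/config splunk_server=local
| table title eai:acl.app

If the checkpoint collection really is gone, one way forward is typically to recreate the inputs or reset their checkpoint values to a safe rising-column value, accepting some re-ingestion, rather than trying to repair the missing records by hand.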
Hi. Any fields you want to be reported in the email have to be available in the search results.
What does the ***** represent?
I need to backfill some missing data into the summary index. However, some data is already present in the same index, so I only want to backfill the remaining events; the data that is already present should not be indexed again. I am currently using the fill_summary_index.py script, but during testing it seems to index duplicate data, indicating that the deduplication function in this script is not working correctly. Please help me by providing a proper script to address this issue.
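Rather than a different script, one way to avoid re-indexing what is already there is to find the time buckets that have no summary events and feed only those gaps to fill_summary_index.py via its -et/-lt options. A minimal SPL sketch, assuming the summary events carry the scheduled search name in the source field (the default for summary indexing) and using placeholder index and search names:

index=my_summary_index source="My Scheduled Summary Search" earliest=-30d@h latest=@h
| timechart span=1h count
| where count=0

Each remaining row is an hourly bucket with no summary data; backfilling just those ranges leaves the existing events untouched.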
Hey folks, does anyone know of a straightforward way to get a count of the number of times each playbook is used as a subplaybook? I know you're able to click into the playbook and look to see where it's being used, but I was hoping to do so at a large scale without having to click into every single playbook. I've got some changes coming that will require a fair number of playbooks to be updated and was hoping to use the count to help determine where to prioritize our effort.
Ever since upgrading to version 2.1.4 of the Cofense Triage Add-On, we get hundreds of these errors in our _internal logs:
01-30-2024 13:20:42.094 -0500 ERROR ScriptRunner [1503161 TcpChannelThread] - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py execute': /opt/splunk/etc/apps/TA-cofense-triage-add-on-for-splunk/bin/ta_cofense_triage_add_on_for_splunk/aob_py3/solnlib/utils.py:153: UserWarning: _get_all_passwords is deprecated, please use get_all_passwords_in_realm instead.
It does not affect log ingestion, but we would like help figuring out how to suppress the errors.
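The warning itself comes from the add-on's bundled solnlib code, so short of a fix from the developer, one commonly used way to keep the noise out of _internal is to route matching splunkd events to the null queue at parsing time. A minimal props.conf/transforms.conf sketch, assuming it is deployed to the indexers (or to a heavy forwarder if one parses the data first); the stanza name drop_cofense_deprecation is a placeholder:

# props.conf
[splunkd]
TRANSFORMS-drop_cofense_deprecation = drop_cofense_deprecation

# transforms.conf
[drop_cofense_deprecation]
REGEX = _get_all_passwords is deprecated
DEST_KEY = queue
FORMAT = nullQueue

This discards the matching events entirely rather than fixing the deprecation, so keep the REGEX as narrow as possible.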
How do I get the list of indexes/sources that aren't being used in Splunk for the last 90 days? Can anyone suggest a query to find indexes/sourcetypes that aren't used in any knowledge object?
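A minimal sketch covering the "no data in the last 90 days" half of the question, assuming it runs from a search head that can see all indexes (checking whether an index or sourcetype is referenced by knowledge objects is a separate exercise against the object definitions):

| tstats count where index=* earliest=-90d latest=now by index
| append
    [| eventcount summarize=false index=*
    | dedup index
    | table index
    | eval count=0]
| stats sum(count) as events_last_90d by index
| where events_last_90d=0

The eventcount subsearch lists every index, even empty ones, while tstats counts what actually arrived in the last 90 days; anything left with a zero sum received no events in that window. A similar tstats split by index, sourcetype covers the sourcetype side.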
I am working on creating an alert in Splunk. In my search I create a variable using eval, but it is not used in the result table. However, I would like to use it in the email subject and body.

index=applications sourcetype=aws:cloudwatchlogs ((Job="*prod-job1*") OR (Job="*prod-job2*"))
| eval emailTime=strftime(now(),"%m/%d/%Y")
| stats latest(_time) as _time latest(s3Partition) as s3Partition latest(field1) as field1 latest(field2) as field2 latest(emailTime) as emailTime by table_name
| search field2="*" emailTime=*
| eval diff=(field2-field1)
| eval evt_time=strftime(_time, "%Y-%m-%d")
| eval partition_date=substr(s3Partition, len("event_creation_time=")+1, len("yyyy-mm-dd"))
| where isnotnull(table_name) and isnotnull(emailTime) and ( evt_time == partition_date )
| table table_name, field1, field2, diff
| sort raw_table_name
| rename table_name AS "Table Name" field1 AS "Field1 count" field2 AS "Field2 count" diff as "Count Difference"

I tried using $result.partition_date$ and $result.emailTime$ in the subject and body, but the values are not substituted; they appear empty in both places. Is it possible to use these values in the email without using them in the table for the alert? Thank you
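As the reply above notes, $result.<field>$ tokens only resolve for fields that are present in the final search results, and they take their value from the first result row. A minimal sketch of the change, simply extending the table command at the end of the search above so the two tokens have something to read:

| table table_name, field1, field2, diff, emailTime, partition_date
| sort raw_table_name
| rename table_name AS "Table Name" field1 AS "Field1 count" field2 AS "Field2 count" diff as "Count Difference"

The extra columns will show up in the emailed results table; there is no way for $result.*$ to read a field that was dropped before the final results.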
Thanks for this! This worked for me in 9.1.2. Definitely nicer than my thought of grepping for the index name recursively through /opt/splunk/etc/apps/search and /opt/splunk/etc/users.
This allows me to create a timechart, but the time picker isn't connecting to it. So if I ask for a 90-day timechart, I get all records for the last year instead of just the last 90 days' worth of data. Is there a fix for that, @burwell?
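Hard to say without the underlying search, but if it is built on a generating command that ignores the time range picker (rest, inputlookup and the like), a common workaround is to filter on the picker's boundaries yourself with addinfo. A minimal sketch to append before the timechart; the span is a placeholder:

| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| timechart span=1d count

addinfo adds the search's info_min_time/info_max_time boundaries to every result, so the where clause trims the rows to whatever the picker was set to.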
Same OS? SELinux turned on, or some other company agent on there? These are the usual culprits for this kind of fun error.
This is a warm standby, and the primary and warm standby show the same behaviour. Additionally, we have some standalone servers that also show it, so I don't think it's specific to a certain architecture. I tried opening a support case, but whenever I submit a ticket I just get a blank page and it doesn't go through. I've reached out to a company contact to see if I can escalate the issue. Thanks for looking!
@catherinelam I have not seen this before, but it does look Postgres-ey. Is this a single instance or a hot/warm standby pair? If the latter, are you sure the Postgres traffic (port 5432) is allowed between them, and have you confirmed the sync is working? The files are definitely Postgres files, but I am not sure what action creates them or why they would be deleted during runtime and then "go missing". I hope you have raised a support case for this too?
Our current SOAR servers, a fresh install on AWS EC2 instances, return 500 errors each night. Upon investigation, it looks like there's this error in the logs:
File "/opt/soar/usr/python39/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: connection to server on socket "/tmp//.s.PGSQL.6432" failed: No such file or directory
    Is the server running locally and accepting connections on that socket?
On a healthy server, that socket file is present. On a 500-error server, it's missing. Is there an explanation of why it might be going missing? The issue is temporarily resolved by stopping and starting phantom again. I think it might be related to PostgreSQL or pgbouncer.
--- IMPORTANT EDIT ---
After I accepted this solution, user @PickleRick suggested a much better one, so I am reporting it here for future use:
The /event endpoint gives you more flexibility than /raw, so I'd advise using /event anyway. But in order for the HEC input _not_ to skip timestamp recognition (which it does by default - it either takes the timestamp from the field pushed with (not in!) an event or assigns the current timestamp), you must add the ?auto_extract_timestamp=true parameter to the URL, like https://your_indexer:8088/services/collector/event?auto_extract_timestamp=true
Below is my original answer:
Hi @gcusello
I tried this too but no luck. Eventually I solved my problem by changing the HEC endpoint. I was sending data to the "/services/collector/event" endpoint. I changed to "/services/collector/raw" and the time was indexed correctly with only the TIME_FORMAT property. Thank you for your help anyway!
Thank you for your reply, I will do the debugging based on your input and will keep you posted.
Hi @tommasoscarpa1, if the epoch timestamp is always preceded by some characters but no digits, you could try:
TIME_PREFIX = [a-zA-Z]\s+
This makes sure that Splunk uses the epoch timestamp. Ciao. Giuseppe
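For context, a minimal props.conf sketch combining that TIME_PREFIX with explicit epoch parsing; the sourcetype name is a placeholder and TIME_FORMAT = %s assumes the timestamp is epoch seconds (10 digits):

# props.conf (placeholder sourcetype name)
[my_epoch_sourcetype]
TIME_PREFIX = [a-zA-Z]\s+
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10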
| makeresults count=5
| fields - _time
| streamstats count as row
| eval database1=mvindex(split("ABCDE",""),row - 1)
| fields - row
| appendcols
    [| makeresults count=7
    | streamstats count as row
    | eval database2=mvindex(split("ABCEFGH",""),row - 1)
    | fields - row]
| eval result=database2
Hey @AKG11, there is unfortunately no content pack or other extension available that contains additional graphical elements such as the type of arrows you are describing. In my experience, the best way to design a Glass Table is to use a third-party graphic design tool (it can be as simple as PowerPoint) to create the desired background image, leaving space for the health score single value visualisations you will add later, and then upload and use that image as your Glass Table background. Alternatively, you can create just the arrows in an external tool and bring them in individually as images, but I suspect that won't quite meet your requirements here. Feel free to reference the attached sample design, which was built this way (including some custom arrows, but with most other visual elements brought in as the Glass Table's background image). Let me know if this helps, avd
I have a Splunk distributed deployment with 3 indexers, 3 search heads, a manager node, and 2 heavy forwarders. I am attempting to deploy the DB Connect application to the HF and the SHC. The SHC has 3 member nodes, with the deployer on the manager node. Ideally, this would all be done with Ansible; sadly, the deployer gets in the way. I can deploy to the HF with Ansible, but the deployer keeps removing the DB Connect app on the SHC. That said, to deploy I install the app on the manager node, install the drivers, copy it to the ...shcluster/apps directory, and run splunk apply shcluster-bundle. I've done this both manually and with Ansible. When I run the apply, the deployer does not put the entire app on the search heads; it only puts the default and metadata directories into the splunk_app_db_connect directory on the search heads. When I go into Manage Apps in the GUI, I see the app installed, but it is not visible. I would prefer not to use the GUI for management and to perform all management tasks via the CLI and Ansible. The code is stored in a version control system, which not only gives control over the deployments but also tracks who did what, when, why, and how. So I guess there are multiple questions: Why is the deployer not pushing the entire application to the search heads? How can I disable the deployer and just use Ansible?