Hi, there is no HMAC or similar method in Splunk to ensure that logs haven't been tampered with. Of course you should use TLS as the transport method, but that only ensures the stream is intact, not that the events are exactly what was originally written to disk. If you need this kind of functionality, you should e.g. use HEC to send those events directly from your logger to Splunk without writing them to disk on the source side. r. Ismo
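A minimal sketch of the HEC approach Ismo describes: POSTing each event straight to the HTTP Event Collector so it never touches disk on the source side. The endpoint URL, token, sourcetype, and index below are placeholders, not values from the thread:

```python
import json
import urllib.request

def build_hec_payload(event, sourcetype="myapp:json", index="main"):
    """Wrap an event in the JSON envelope HEC's /services/collector endpoint expects."""
    return json.dumps({"event": event, "sourcetype": sourcetype, "index": index})

def send_to_hec(hec_url, token, event):
    """POST a single event to a Splunk HEC endpoint (URL and token are placeholders)."""
    req = urllib.request.Request(
        hec_url,  # e.g. "https://splunk.example.com:8088/services/collector"
        data=build_hec_payload(event).encode("utf-8"),
        headers={
            "Authorization": f"Splunk {token}",  # hypothetical HEC token
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Note this only avoids the tamper window on the source host; it still does not give you cryptographic integrity of the events once indexed.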
Hi, the preferred method is to set up a syslog server (rsyslog or syslog-ng), or use SC4S, to receive logs from syslog sources, and then send those logs on with a UF (or, in the SC4S case, via HEC) to your cloud instance. r. Ismo
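A minimal rsyslog sketch of the first hop described above: receive syslog on UDP/TCP 514 and write one file per sending host, which a Universal Forwarder then monitors. Paths and ports are illustrative, not from the thread:

```
# /etc/rsyslog.d/10-splunk.conf -- illustrative paths and ports
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")

# One file per sending host; a UF [monitor://] stanza picks these up
template(name="PerHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
action(type="omfile" dynaFile="PerHostFile")
```

Writing per-host files keeps the host field recoverable on the Splunk side and decouples syslog reception from forwarder restarts.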
Thank you in advance for your help, community. I performed the integration of Cisco DNA Center to Splunk:
- Created my "cisco_dna" index on my Heavy Forwarder
- Installed the Cisco DNA Center Add-on on my Heavy Forwarder (https://splunkbase.splunk.com/app/6668)
- Added the account in the add-on (username, password, host)
- Activated all the inputs: cisco:dnac:clienthealth, cisco:dnac:devicehealth, cisco:dnac:compliance, cisco:dnac:issue, cisco:dnac:networkhealth, cisco:dnac:securityadvisory
- Created my "cisco_dna" index on my Splunk Cloud instance
- Installed the Cisco DNA Center App (https://splunkbase.splunk.com/app/6669)
Done, and I started receiving logs in Splunk from Cisco DNA Center. But when validating the dashboards in the app and reviewing the search results, I noticed that the field values are duplicated. Even if I apply a dedup to any of the fields, the result is "only one duplicated value". This affects me when I have to take a value to perform an operation or build a chart. Does anyone know what causes this problem and how I could solve it?
Don't ingest syslog directly into Splunk.  Use a dedicated syslog server.  See https://www.splunk.com/en_us/blog/tips-and-tricks/using-syslog-ng-with-splunk.html and https://kinneygroup.com/blog/splunk-syslog/  
Hey @Richfez or @Splunkerninja, I've successfully ingested the Snowflake LOGIN_HISTORY and SESSIONS tables, but I'm running into roadblock after roadblock with the ACCESS_HISTORY and QUERY_HISTORY table ingestions. https://docs.snowflake.com/en/sql-reference/account-usage/access_history https://docs.snowflake.com/en/sql-reference/account-usage/query_history These tables have a QUERY_ID field that looks like this: a0fda135-d678-4184-942b-c3411ae8d1ce And a QUERY_START_TIME (TIMESTAMP_LTZ) field that looks like this: 2022-01-25 16:17:47.388 +0000
The checkpoint value system for a rising ingestion in the Splunk DB Connect app doesn't play nicely with either of these fields, and I've tried creating temporary fields and tables to bypass this issue, but to no avail. For QUERY_ID, I tried removing the hyphens, replacing the letters a,b,c,d,e,f with the digits 1,2,3,4,5,6, and storing the result in a different field called QUERY_ID_NUMERIC. When trying that out as the checkpoint value, the checkpoint never gets updated, so it just ingests the same data over and over again. Similarly for QUERY_START_TIME, I've tried casting the TIMESTAMP_LTZ to TIMESTAMP_NTZ and saving that as a new field QUERY_START_TIME_NTZ; that ingests the data, but the checkpoint value isn't updating either. I was wondering if anyone has experienced this issue when ingesting these two data sources and has found any workarounds that resolved it! Thank you very much!
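Not a confirmed fix, but one avenue worth sketching: a rising input needs a column whose native ordering is strictly increasing, and Snowflake's DATE_PART can expose the timestamp as a plain epoch number for the checkpoint to compare. The query below is hypothetical (the `?` is DB Connect's checkpoint placeholder; the QUERY_START_TIME column name is taken from the post, and the alias is invented):

```sql
-- Hypothetical rising-input query: compare an epoch-milliseconds number
-- instead of a TIMESTAMP_LTZ as the checkpoint column
SELECT q.*,
       DATE_PART(EPOCH_MILLISECOND, q.QUERY_START_TIME) AS QUERY_START_EPOCH_MS
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY q
WHERE DATE_PART(EPOCH_MILLISECOND, q.QUERY_START_TIME) > ?
ORDER BY QUERY_START_EPOCH_MS ASC
```

A UUID like QUERY_ID is unlikely to ever work as a rising column, even transformed to digits, because insertion order and sort order are unrelated.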
No. There's no such functionality within Splunk itself but you could implement something like this as modular or scripted input but you'd have to write such input yourself.
Figured it out,  fapolicy was the issue. 
Hi @rickymckenzie10, for your requirement I would suggest going with option 2 for the upgrade. Let me explain the difference as I understand it. In most respects both work the same, but there are some differences. With maintenance mode, you are telling the cluster manager that some activity will happen on the indexers: stopping Splunk on an indexer, rebooting the server, upgrading Splunk. With maintenance mode enabled, bucket replication will not happen in the entire cluster; once maintenance mode is disabled, the bucket fix-up tasks will complete. With the rolling upgrade command, the manager node understands that this is a cluster upgrade; running the rolling upgrade command also enables maintenance mode and tries to minimize the impact on searches.
Enabling maintenance mode simply tells the Cluster Manager to not bother doing bucket fix-ups.  Nothing happens on the indexers themselves. The upgrade-init command starts a rolling restart of the indexers after setting maintenance mode.
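For context, a sketch of the documented rolling-upgrade flow these two commands belong to (the package-upgrade step depends on your install method, and ordering of peers is up to you):

```
# On the cluster manager: mark the cluster as under upgrade
splunk upgrade-init cluster-peers

# On each indexer, one at a time:
splunk offline           # gracefully take the peer out of the cluster
# ...upgrade the Splunk package on this host...
splunk start             # peer rejoins the cluster

# Back on the cluster manager, once every peer is done:
splunk upgrade-finalize cluster-peers
```

By contrast, a bare `splunk enable maintenance-mode` / `splunk disable maintenance-mode` pair only suppresses bucket fix-ups and tracks no upgrade state.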
I'm not sure what that statement means.  props apply only to the sourcetype, source, or host listed in the stanza name.  It may be necessary to replicate a stanza to cover all scenarios.
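A minimal props.conf sketch of what "replicate a stanza to cover all scenarios" looks like, using hypothetical names: the same setting written once per match route, since Splunk applies a props stanza only when its name (sourcetype, `source::`, or `host::`) matches the event:

```
# props.conf -- sourcetype, path, and transform names are illustrative
[my_sourcetype]
TRANSFORMS-null_debug = drop_debug_events

[source::/var/log/myapp/*.log]
TRANSFORMS-null_debug = drop_debug_events
```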
This gets me pretty close to what I need. I modified it slightly to get to the data I need:

| makeresults format=csv data="Day,Percent
2024-11-01,100
2024-11-02,99.6
2024-11-03,94.2
2024-11-04, 79.9
2024-11-30, 49.9
2024-12-01,22.1
2024-12-02,19.0"
| eval _time=strptime(Day, "%F")
| foreach 50 80 100
    [ eval REMAINING = 100 - <<FIELD>>
    | eval REMEDIATION_<<FIELD>> = if(Percent <= REMAINING, 1, null())]
| stats earliest_time(_time) as Start earliest_time(REMEDIATION_*) as r_*

I'll need to figure out a way to get the 100% field to show up after the stats command, but I know I can do that in a brute-force manner if necessary. I haven't seen foreach before, so thank you for such a concise, relevant example.
Hi, we installed the AbuseIPDB app in our Splunk Cloud instance. I created a workflow action called jodi_abuse_ipdb using the documentation provided in the app:

Label: Check $ip$ with AbuseIPDB
Apply only to: ip
Search string: | makeresults | abuseipdbcheck ip=$ip$

I'd like to be able to use this for a report, but I haven't figured out how to trigger this workflow action to provide results. I've done Google searches and I've tried a number of things. I am hoping someone in the community might be able to help. Thank you! Jodi
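Workflow actions fire from the event and field drop-down menus in search results, so a scheduled report can't invoke one directly. One way to run the same check inside a report is to call the custom search command per result with map. A sketch, assuming a hypothetical index and a src_ip field (the abuseipdbcheck syntax is taken from the workflow action above):

```
index=firewall action=blocked
| stats count by src_ip
| map maxsearches=50 search="| makeresults | abuseipdbcheck ip=$src_ip$"
```

map substitutes each row's src_ip into the subsearch, so every IP in the result set gets checked; cap maxsearches to keep API usage sane.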
@richgalloway It works, but only if I pass the source; if I pass the sourcetype, the rule is not effective in props.conf. Thank you.
Sure: https://community.splunk.com/t5/Getting-Data-In/Rolling-upgrade-vs-Maintenance-mode-commands-on-cluster-manager/td-p/705861 
I've read the documentation on these commands, executed both in a dev environment, and observed the behavior. My interpretation is that the commands are the same. Does someone want to take a stab at explaining them better from your own perspective? Please don't point me to any Splunk docs; I've read them already and still can't see the best use case for each. I want to read your opinion! What is the main difference between these two commands?

splunk enable maintenance-mode
splunk upgrade-init cluster-peers

Here is the scene: I will be upgrading a Splunk cluster, the cluster manager and its peers (cluster manager + indexers). I don't want to initiate a bucket fix-up on each indexer (10 peers * 10 TB on each peer). Which one best fits/serves my use case above?
Hi @rickymckenzie10 , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
I am not sure why you won't show us what you do have - perhaps we might be able to see what is wrong - what you are sharing with us at the moment is not moving things forward.
Yeah, no reference to server_pkcs1.pem in server.conf. I already renamed the file, and the finding is gone. Just watching/waiting now to make sure there are no issues. Thanks!
It's important to note that I wrote a similar line of code for another panel and got no error, see below:

index = index name sourcetype = sourcetype name (field names) earliest=$StartTime$ latest=$FinishTime$