All Posts

Hi, thank you for your reply. In the error ticket, none of the fields mapped to Jira custom fields (e.g. "customfield_10211": "$result.client$") are included. In regular tickets the customer values, time, etc. always come through, but in error tickets none of the values mapped to custom fields arrive, even though they are confirmed to exist in Splunk. If there is a problem with even one custom field value, does that prevent the other values from being retrieved as well? (For example, there are two Jira custom fields, client and reason. The reason field is a single-line type, so it has a length limit. If that limit is exceeded, it seems I cannot retrieve values for the client field either, not just the reason field.)
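If the single-line length limit turns out to be the cause, one possible workaround (a sketch only, using the field names from the post and a hypothetical 255-character cap) is to shorten the value in the alert search before the Jira action reads it:

<your alert search>
| eval reason=substr(reason, 1, 255)
| table client reason

The 255-character cap is an assumption; check your Jira field configuration for the real limit.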
Sure, thanks for your reply @marnall. >> Yes, this is a security recommendation added recently. May I know if you or anybody has some more details about this security recommendation, please? Thanks.
Sure @PickleRick, just asked on #docs and waiting with fingers crossed.
This is probably an INDEXED_EXTRACTIONS issue; see these, which should help:
https://community.splunk.com/t5/Splunk-Search/Why-is-my-search-on-JSON-data-producing-duplicate-results-for/m-p/520686
https://community.splunk.com/t5/Getting-Data-In/Bug-Why-are-there-duplicate-values-with-INDEXED-EXTRACTION/m-p/676784
https://community.splunk.com/t5/Getting-Data-In/Why-would-INDEXED-EXTRACTIONS-JSON-in-props-conf-be-resulting-in/m-p/317327
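For reference, the duplicates usually come from the data being extracted twice: once at index time by INDEXED_EXTRACTIONS = json and again at search time. A hedged props.conf sketch of the common fix, assuming you can deploy props.conf to the search tier (the sourcetype name your_json_sourcetype is a placeholder):

# props.conf for the same sourcetype on the search head tier
# (INDEXED_EXTRACTIONS = json stays in the props.conf on the forwarder/HF side)
[your_json_sourcetype]
KV_MODE = none
AUTO_KV_JSON = false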
foreach is immensely powerful and, combined with good field naming conventions, lets you write concise (if slightly more obtuse) SPL. Here it's used with numbers, but you typically use it with fields and wildcards, and then a good naming strategy becomes important because it allows you to handle unknown field names.
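A small sketch of the wildcard pattern described above; the bytes_in/bytes_out field names are invented for the example:

| makeresults
| eval bytes_in=1024, bytes_out=4096
| foreach bytes_* [ eval <<FIELD>>_kb = <<FIELD>> / 1024 ]

The same search keeps working no matter how many bytes_* fields show up, which is where the naming convention pays off.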
Hi, as you have SCP in use, you have one additional option: you could use Splunk Edge Processor to get the syslog feed in. Of course you need an LB in front of those endpoints to get HA. But probably the easiest way is to use SC4S, as @gcusello said. You could run it on Docker or even k8s if you are familiar with them. r. Ismo
Hi, there is no HMAC or similar method in Splunk to ensure that logs haven't been tampered with. Of course you should use TLS as the transport method, but that only ensures the stream is intact, not that the original events are exactly what was originally written to disk. If you need this kind of functionality you should use e.g. HEC to send those events directly from your logger to Splunk without writing them to disk on the source side. r. Ismo
Hi, the preferred method is to set up a syslog server (rsyslog or syslog-ng), or use SC4S, to receive logs from syslog sources and then forward them on: from a plain syslog server via a UF, or in the SC4S case via HEC to your cloud instance. r. Ismo
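If you go the syslog-server-plus-UF route, a hedged inputs.conf sketch for the UF on that server; the path, index, and sourcetype below are placeholders, assuming rsyslog/syslog-ng writes one directory per sending host:

# inputs.conf on the universal forwarder
[monitor:///var/log/remote/.../*.log]
# 4th path segment (/var/log/remote/<host>/...) becomes the host field
host_segment = 4
index = network_syslog
sourcetype = syslog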
Thank you in advance for your help, community. I performed the integration of Cisco DNA to Splunk:
- Created my "cisco_dna" index on my Heavy Forwarder
- Installed the Cisco DNA Center Add-on on my Heavy Forwarder (https://splunkbase.splunk.com/app/6668)
- Added the account in the add-on (username, password, host)
- Activated all the inputs: cisco:dnac:clienthealth, cisco:dnac:devicehealth, cisco:dnac:compliance, cisco:dnac:issue, cisco:dnac:networkhealth, cisco:dnac:securityadvisory
- Created my "cisco_dna" index on my Splunk Cloud instance as well
- Installed the Cisco DNA Center App (https://splunkbase.splunk.com/app/6669)
With that done, I started receiving logs in Splunk from Cisco DNA. But when validating the dashboards in the app and reviewing the search results, I noticed that the values of the fields are duplicated. Even if I apply a dedup to any of the fields, the result is "only one duplicate value". This affects me when I have to take a value to perform an operation or build a chart. Does anyone know what this problem is due to and how I could solve it?
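If the duplicated values turn out to be the usual double-extraction problem (fields extracted at index time by the add-on and again at search time), a search-time stopgap is to collapse the repeated multivalue fields. A hedged SPL sketch using the index from the post (the sourcetype filter is just an example):

index=cisco_dna sourcetype=cisco:dnac:devicehealth
| foreach * [ eval <<FIELD>> = mvdedup('<<FIELD>>') ]

The longer-term fix is usually in props.conf, as in the INDEXED_EXTRACTIONS reply earlier in this feed.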
Don't ingest syslog directly into Splunk.  Use a dedicated syslog server.  See https://www.splunk.com/en_us/blog/tips-and-tricks/using-syslog-ng-with-splunk.html and https://kinneygroup.com/blog/splunk-syslog/  
Hey @Richfez or @Splunkerninja, I've successfully ingested the Snowflake LOGIN_HISTORY and SESSIONS tables, but I'm running into roadblock after roadblock with the ACCESS_HISTORY and QUERY_HISTORY table ingestions.
https://docs.snowflake.com/en/sql-reference/account-usage/access_history
https://docs.snowflake.com/en/sql-reference/account-usage/query_history
These tables have a QUERY_ID field that looks like this: a0fda135-d678-4184-942b-c3411ae8d1ce
And a QUERY_START_TIME (TIMESTAMP_LTZ) field that looks like this: 2022-01-25 16:17:47.388 +0000
The checkpoint value system for a rising-column ingestion in the Splunk DB Connect app doesn't play nicely with either of these fields, and I've tried creating temporary fields and tables to bypass the issue, but to no avail. For QUERY_ID, I tried removing the hyphens, replacing the letters a,b,c,d,e,f with the numeric values 1,2,3,4,5,6, and storing the result in a different field called QUERY_ID_NUMERIC. When trying that out as the checkpoint value, the checkpoint never gets updated, so it just ingests the same data over and over again. Similarly for QUERY_START_TIME, I've tried casting the TIMESTAMP_LTZ to TIMESTAMP_NTZ and saving that as a new field QUERY_START_TIME_NTZ; that ingests the data, but the checkpoint value isn't updating either. I was wondering if anyone has experienced this issue when ingesting these two data sources and whether they've found any workarounds that resolved it. Thank you very much!
No. There's no such functionality within Splunk itself, but you could implement something like this as a modular or scripted input; you'd have to write that input yourself.
Figured it out,  fapolicy was the issue. 
Hi @rickymckenzie10, for your requirement I would suggest going with option 2 for the upgrade. Let me explain the difference as I understand it. In most respects both work the same, but there are some differences. With maintenance mode you are telling the cluster manager that some activity will happen on the indexers; it could be stopping Splunk on an indexer, rebooting the server, or upgrading Splunk. With maintenance mode enabled, bucket replication will not happen anywhere in the cluster; once maintenance mode is disabled, the bucket fixup tasks complete. With the rolling upgrade command, the manager node understands that this is an upgrade of the cluster; running the rolling upgrade command also enables maintenance mode and tries to minimize the impact to searches.
Enabling maintenance mode simply tells the Cluster Manager to not bother doing bucket fix-ups.  Nothing happens on the indexers themselves. The upgrade-init command starts a rolling restart of the indexers after setting maintenance mode.
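For context, a hedged sketch of the two command sequences on the cluster manager (verify against the upgrade docs for your version, since the exact steps vary):

# option 1: maintenance mode around a manual peer-by-peer upgrade
splunk enable maintenance-mode
# stop, upgrade, and restart each indexer here
splunk disable maintenance-mode

# option 2: searchable rolling upgrade (sets maintenance mode for you)
splunk upgrade-init cluster-peers
# take each peer offline, upgrade it, and bring it back, one at a time
splunk upgrade-finalize cluster-peers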
I'm not sure what that statement means.  props apply only to the sourcetype, source, or host listed in the stanza name.  It may be necessary to replicate a stanza to cover all scenarios.
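To make the stanza scoping concrete, a hedged props.conf sketch; my_sourcetype, /var/log/myapp.log, and myhost are placeholders, and the SEDCMD is just an arbitrary example setting:

# props.conf
# applies only to events with sourcetype=my_sourcetype
[my_sourcetype]
SEDCMD-mask = s/password=\S+/password=####/g

# applies only to events read from this exact source path
[source::/var/log/myapp.log]
SEDCMD-mask = s/password=\S+/password=####/g

# applies only to events from this host
[host::myhost]
SEDCMD-mask = s/password=\S+/password=####/g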
This gets me pretty close to what I need. I modified it slightly to get to the data I need:

| makeresults format=csv data="Day,Percent
2024-11-01,100
2024-11-02,99.6
2024-11-03,94.2
2024-11-04, 79.9
2024-11-30, 49.9
2024-12-01,22.1
2024-12-02,19.0"
| eval _time=strptime(Day, "%F")
| foreach 50 80 100
    [ eval REMAINING = 100 - <<FIELD>>
    | eval REMEDIATION_<<FIELD>> = if(Percent <= REMAINING, 1, null())]
| stats earliest_time(_time) as Start earliest_time(REMEDIATION_*) as r_*

I'll need to figure out a way to get the 100% field to show up after the stats command, but I know I can do that in a brute force manner if necessary. I hadn't seen foreach before, so thank you for such a concise, relevant example.
Hi, we installed the #AbuseIPDB app in our Splunk Cloud instance. I created a workflow action called jodi_abuse_ipdb using the documentation provided in the app:
Label: Check $ip$ with AbuseIPDB
Apply only to: ip
Search string: |makeresults|abuseipdbcheck ip=$ip$
I'd like to be able to use this for a report, but I haven't figured out how to trigger this workflow action to provide results. I've done Google searches and I've tried a number of things. I am hoping someone in the community might be able to help. Thank you! Jodi
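Workflow actions only run when someone clicks them on an event in search results, so for a report you would call the command directly in SPL instead. A hedged sketch that reuses only the abuseipdbcheck command shown in the post; the base search and the src_ip field are placeholders, and map is subject to maxsearches/subsearch limits:

index=firewall action=blocked
| stats count by src_ip
| map maxsearches=50 search="| makeresults | abuseipdbcheck ip=$src_ip$"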
@richgalloway It works ... but only if I use source in the props.conf stanza; if I use sourcetype, the rule does not take effect. Thank you.
Sure: https://community.splunk.com/t5/Getting-Data-In/Rolling-upgrade-vs-Maintenance-mode-commands-on-cluster-manager/td-p/705861