All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I have a use case where I need to get an index name from a field in one index/sourcetype and then use that value to search the contents of that index, but I am not getting any results. Here is what I did:

index=meta_info sourcetype=meta:info
| search group_name=admingr AND spIndex_name=admin_audit
| eval getIndex=spIndex_name
| search index=getIndex

Any help will be highly appreciated, thank you!
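A minimal SPL sketch of one possible approach, assuming a single matching value of spIndex_name: a subsearch can hand the value back to the outer search as an index=<value> term (index= inside a later | search does not resolve field values, which is why the original attempt returns nothing). The field, index, and group names below come from the question; everything else is an assumption.

    [ search index=meta_info sourcetype=meta:info group_name=admingr spIndex_name=admin_audit
      | head 1
      | rename spIndex_name AS index
      | return index ]

The subsearch runs first and its result (index="admin_audit") becomes the outer search string; any further filtering can be appended after the closing bracket.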
Hi all, our regex is unable to extract the host from the logs, can you please help with the correct regex? This regex works when checked in regex101, so I'm not sure why it fails to extract:

[hostextract]
REGEX = ^.*\w+\s+\d+\s+(?:\d+:){2}\d+\s+(?P<test>\w+)\s+
SOURCE_KEY = _raw
DEST_KEY = MetaData:Host
FORMAT = host::$1

e.g. log format:

May 1 08:35:30 10.98.6.249 May 1 08:35:30 host_abc
Apr 10 08:45:20 10.98.6.249 Apr 10 08:45:20 host_def
May 1 08:35:30 10.98.6.249 May 1 08:35:30 host_ghi
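For reference, a transform like this only takes effect if props.conf wires it in with an index-time TRANSFORMS- setting on the instance that first parses the data (heavy forwarder or indexer), and it only applies to events indexed after the change. A minimal props.conf sketch, with the sourcetype stanza name as a placeholder:

    # props.conf on the parsing tier (HF/indexer); restart Splunk after editing
    [your_sourcetype]
    TRANSFORMS-hostextract = hostextract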
Can anyone help in resolving this issue? I noticed eStreamer stopped sending syslog logs via a HF to our Splunk SH (on-prem), and I tried to run this check. The Splunk version is 9.2.1.

/opt/splunk/etc/apps/TA-eStreamer/bin/splencore.sh status

Below is the message I got:

bash-4.2$ /opt/splunk/etc/apps/TA-eStreamer/bin/splencore.sh status
Traceback (most recent call last):
  File "./estreamer/preflight.py", line 33, in <module>
    import estreamer.crossprocesslogging
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/__init__.py", line 27, in <module>
    from estreamer.connection import Connection
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/connection.py", line 23, in <module>
    import ssl
  File "/opt/splunk/lib/python3.7/ssl.py", line 98, in <module>
    import _ssl # if we can't import it, let the error propagate
ImportError: libssl.so.1.0.0: cannot open shared object file: No such file or directory

I upgraded the add-on from 5.1.9 to 5.2.9 and still got the same message. Is there a fix, or should I open a support case? Your suggestions are welcome.
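One way to narrow this down is to check whether the OpenSSL 1.0.x library the import is asking for exists anywhere on the host, and what Splunk's own Python _ssl module actually links against. This is a diagnostic sketch only; the paths are assumptions based on the traceback above.

    # is libssl.so.1.0.0 present anywhere under the Splunk install or system lib dirs?
    find /opt/splunk /usr/lib64 /usr/lib -name 'libssl.so*' 2>/dev/null
    # which shared libraries does Splunk's Python _ssl extension expect?
    ldd /opt/splunk/lib/python3.7/lib-dynload/_ssl*.so

If libssl.so.1.0.0 is missing everywhere, the script is likely being launched against a Python/OpenSSL combination the OS no longer provides.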
Hi folks, our field parsing/extraction has broken across all sourcetypes (nginx, log4j, aws:elb, fix, and custom formats as well). The most recent infra event we had was an increase of file storage over a month ago. If our error were related to a single sourcetype I would assume I have to review my props.conf file for the associated app and sourcetype, but in this case it appears something more systemic is occurring. As someone with limited knowledge of Splunk admin, where can I look to narrow my search to the root cause? I'm trying to RTFM, and am familiar with the "general" log structure, but I'm not sure exactly what I'm looking for (an error/exception on restart directly calling out a props.conf file? An index-related exception? idk). Would btool help me confirm whether my props.conf files are loading correctly? Is there something that would indicate a failure of log parsing?

Splunk Enterprise single-instance 9.2.0.1 on an 8-core, 32 GB instance.

Cheers //A
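btool can show the merged configuration Splunk is actually running with and which file each setting comes from, which is usually the quickest way to confirm whether a props.conf is loading at all. A minimal sketch (the sourcetype name is a placeholder):

    # effective props for one sourcetype, with the contributing file for each line
    $SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug
    # report obvious configuration-file problems (invalid stanzas or attribute names)
    $SPLUNK_HOME/bin/splunk btool check

splunkd.log (or index=_internal) is the other place to look for config-load or parse-time errors around the time the extractions stopped working.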
Hi all, I have created a dashboard, and in that dashboard I added a text filter. I need to add a placeholder to that text filter like below. Thanks in advance!
I have multiple dashboards that I have cloned to make changes. What is the best method to rename the existing dashboards out to "old" and rename the new dashboards in?

1) Rename the current to old
2) Rename the new to current

Or do I have to do the below?

1) Clone the current to old
2) Delete current
3) Clone the new into current
4) Delete the new
Hi all, new here, so go easy!

I have a dashboard with many panels, a time picker and so on. I'm hoping that I can use multiple time buckets for event counts in one table, for example span=1h AND span=1m. At this stage I'm not bothered about events that might take longer than each span.

The result I'm looking for:

              Average count   Max count
Per hour      {results}       {results}
Per minute    {results}       {results}

I can get this to work with two separate tables, but combining them into one is proving to be a challenge, if it is possible at all.

For example, for per hour I have:

| .... main search ...
| bucket _time span=1h as perHour
| stats count as "eventsPerHour" by perHour
| stats avg(eventsPerHour) as avEventsPerHour max(eventsPerHour) as maxEventsPerHour
| eval avEventsPerHour=round(avEventsPerHour,2)
| rename avEventsPerHour as "Average count per hour" maxEventsPerHour as "Maximum count in one hour"

Which gives:

Average count per hour   Maximum count in one hour
{results}                {results}

Any pointers in the right direction much appreciated.
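One way to get both rows into a single table is to compute each span separately and append the per-minute results to the per-hour results. A sketch assuming the same base search in both legs (field and label names are illustrative); note that append runs its second leg as a subsearch, so the usual subsearch limits apply:

    ... main search ...
    | bin _time span=1h
    | stats count AS perBucket BY _time
    | stats avg(perBucket) AS "Average count" max(perBucket) AS "Max count"
    | eval Period="Per hour"
    | append
        [ search ... main search ...
          | bin _time span=1m
          | stats count AS perBucket BY _time
          | stats avg(perBucket) AS "Average count" max(perBucket) AS "Max count"
          | eval Period="Per minute" ]
    | eval "Average count"=round('Average count', 2)
    | table Period "Average count" "Max count"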
Hi, I run Splunk 9.0.8 and, after an issue with our storage (LUN full), I had to run a full scan of the disk and successfully repaired the filesystem. Splunk starts now, but there is a persistent error with the KV store. The status is the following:

show kvstore-status

This member:
  backupRestoreStatus : Ready
  disabled : 0
  guid : E59FAAA8-D66A-498E-8DDF-0F5C29866F95
  port : 8191
  standalone : 1
  status : failed
  storageEngine : wiredTiger

Any suggestion on how I could get back to an operational status? Many thanks. Jose
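If the KV store files themselves were damaged by the filesystem repair, one commonly used last resort is to recreate the local KV store. This is a sketch only, and it deletes the local KV store data (lookup collections), so take a backup first or be sure the collections can be rebuilt:

    $SPLUNK_HOME/bin/splunk stop
    # recreate the local KV store files from scratch
    $SPLUNK_HOME/bin/splunk clean kvstore --local
    $SPLUNK_HOME/bin/splunk start
    $SPLUNK_HOME/bin/splunk show kvstore-status

Checking mongod.log under $SPLUNK_HOME/var/log/splunk first may show why the KV store is failing to start.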
Is there any way to search for events which have any special characters? Thanks in advance for any help.
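A minimal SPL sketch, assuming "special characters" means anything outside printable ASCII; the index name is a placeholder and the character class should be adjusted to your own definition (e.g. specific punctuation only):

    index=your_index
    | regex _raw="[^\x20-\x7E\r\n\t]"

The regex command keeps only events whose _raw contains at least one character outside the listed range.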
Hi there, I wanted to download the Embedded Dashboards For Splunk (EDFS) app, but I can only visit the GitHub repo. Is the app missing? Or is the code/config in GitHub the 'app'?

https://splunkbase.splunk.com/app/4377

In GitHub itself I don't find anything to download, only config and code files.

Best regards
Fabian
I am trying to set up a webhook action to send an IP from a search to Akamai. I need help with writing the webhook.
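For reference, a sketch of wiring the built-in webhook alert action to a saved search in savedsearches.conf; the stanza name, search, and URL are placeholders. Note that the built-in webhook POSTs a fixed JSON payload describing the alert and its first result row, so if Akamai expects a specific request body you may need a custom alert action or an intermediary service instead:

    # savedsearches.conf -- scheduled alert firing the built-in webhook action
    [Send blocked IP to Akamai]
    search = index=your_index your_conditions | table src_ip
    enableSched = 1
    cron_schedule = */15 * * * *
    alert_type = number of events
    alert_comparator = greater than
    alert_threshold = 0
    action.webhook = 1
    action.webhook.param.url = https://your.akamai.endpoint/receiver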
Hi all, I have a query that can calculate HTTP calls, success responses and error responses. I need an addition to the query to get how many requests are without a response, i.e. calls - success_responses - error_responses = null_responses. Any good ideas about this? Thanks in advance!
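Assuming the existing query already produces numeric fields called calls, success_responses and error_responses (the names mirror the question and are otherwise assumptions), a sketch of the extra step:

    ... existing query ...
    | fillnull value=0 success_responses error_responses
    | eval null_responses = calls - success_responses - error_responses

fillnull guards against the subtraction returning nothing when one of the counters is absent for a given row.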
We plan to have a multi-site clustering setup across HQ and DR, so the question is: can I configure the indexers located at DR with a shorter retention policy than the indexers located at HQ?
I have the following environment: 1 HF -> 1 indexer -> 1 SH, running 9.1. How do I onboard the AD controller data into my HF? I am using the Add-on for Active Directory. Any ldap commands? Any recommendations? Is this the right tool?
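If the add-on in question is SA-ldapsearch (the Splunk Supporting Add-on for Active Directory), it queries AD over LDAP at search time rather than indexing controller logs. A sketch of a basic query, assuming a domain stanza named "default" has already been configured in the add-on:

    | ldapsearch domain=default search="(objectClass=user)" attrs="sAMAccountName,distinguishedName"
    | table sAMAccountName distinguishedName

For event data from the domain controllers themselves (Security / Directory Service logs), the usual route is a Universal Forwarder on the controllers plus the Splunk Add-on for Microsoft Windows, rather than pulling through the HF.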
Hello community! I want to extract data from 2 different logs like below:

Log 1: 2024-04-28 06:38:51 INFO Start auth for accountId=1, ip=192.168.1.1
Log 2: 2024-04-28 06:38:27 INFO Collect response for accountId=1, was: response=FINISH

For example, for accountId=1 I have 10 logs with "Start auth", i.e. 10 attempts to start auth. In the second log, for the same accountId, I have 1 or more logs with FINISH. I want to make a table like:

accountId   Start auth   FINISH
1           10           1

Could you help me with this? Thank you.
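A sketch of one way to build that table, assuming both log types live in the same index/sourcetype and accountId is extracted automatically from the key=value pairs (the index and sourcetype names are placeholders):

    index=your_index sourcetype=your_sourcetype ("Start auth" OR "response=FINISH")
    | eval start_auth = if(searchmatch("Start auth"), 1, 0)
    | eval finished   = if(searchmatch("response=FINISH"), 1, 0)
    | stats sum(start_auth) AS "Start auth" sum(finished) AS "FINISH" BY accountId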
Environment: Distributed Splunk Enterprise (indexer cluster)
Version: 9.0.5
Issue: After setting journalCompression to zstd in indexes.conf, we noticed that the setting is applied for warm but not for frozen buckets. The setting was applied months ago. In the following example, we can see that files timestamped from today are zst in warm and gzip in frozen. I did not find any related information in the indexes.conf documentation. Is this expected behavior, or am I missing some setting in my configuration?

Evidence:

## WARM BUCKETS
[splunk@indexer (PROD) ~]$ ls -latr /var/lib/splunk/warm/<index_name>
[...]
drwx--x---. 3 splunk splunk 4096 Apr 30 11:19 db_1714450734_1714041906_2521_1B4FA1BE-AA81-459F-B38A-1FB23A018EDB
[splunk@indexer (PROD) ~]$ ls -latr /var/lib/splunk/warm/<index_name>/db_1714450734_1714041906_2521_1B4FA1BE-AA81-459F-B38A-1FB23A018EDB/rawdata/
[...]
-rw-------. 1 splunk splunk 113295494 Apr 30 11:19 journal.zst

## FROZEN BUCKETS
[splunk@indexer (PROD) ~]$ ls -latr /var/lib/splunk/frozen/<index_name>
[...]
drwx------. 3 splunk splunk 29 Apr 30 11:20 rb_1709121660_1709115460_2204_3BF8DDF1-9874-4848-9DB4-880DA5EBA00F
[splunk@indexer (PROD) ~]$ ls -latr /var/lib/splunk/frozen/<index_name>/rb_1709121660_1709115460_2204_3BF8DDF1-9874-4848-9DB4-880DA5EBA00F/rawdata/
[...]
-rw-------. 1 splunk splunk 2342045 Feb 28 19:08 journal.gz
In a dashboard I am using 2 searches, and in each search I am using the geostats command to build a map and show results on the map. Can I point these 2 searches at 1 map? In other words, rather than using geostats in each panel's search, I want it to be common for every panel so that I don't have to write it in every panel search.
Dear community members, I am running Splunk Enterprise edition on my local Windows system and Splunk Web is up and running. I have created a Lambda function with a CloudWatch Logs trigger, so that on every invocation it should send the CloudWatch logs to Splunk. But during invocation I am getting a connection refused error. Please find the error below. Can someone help me understand?

ERROR Invoke Error
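A connection refused error usually means the Lambda cannot reach the endpoint at all, and a Splunk instance on a local Windows machine is generally not reachable from AWS without a public address or tunnel. As a first check, a sketch of a direct HTTP Event Collector test from any host that should have network access to the Splunk machine (hostname, port and token are placeholders; HEC defaults to port 8088 and must be enabled):

    curl -k "https://your-splunk-host:8088/services/collector/event" \
      -H "Authorization: Splunk your-hec-token" \
      -d '{"event": "hec connectivity test"}'

If this fails from outside the local network, the Lambda will fail the same way regardless of the function code.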
My table looks like the following:

status   count   percent
200      895     95.927117
404      14      1.500536
304      12      1.286174
303      12      1.286174

I have been trying to update table_cell_highlighting.js in the dashboard examples app so that it only highlights the percent cell for status=200. Please point me in the right direction, thanks.
Hi, has anyone installed the "Add-on for Cloudflare data" app? I'm just after some documentation on how it is supposed to work and the setup process.