All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Which buckets get frozen is decided by the bucket's age. A bucket's age is the time of the most recent event in the bucket. However, there can be buckets that even contain events from the future (these go to a so-called quarantine bucket), so it's a bit more complicated and depends on the data characteristics of the indexes involved. dbinspect is the command to check for info on your buckets.
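For example, a search along these lines (the index name is a placeholder) shows each bucket's state and the event time range it covers, which is what drives the freezing decision:

```
| dbinspect index=your_index
| table bucketId, state, startEpoch, endEpoch, sizeOnDiskMB
| convert ctime(startEpoch) ctime(endEpoch)
```

A bucket whose endEpoch is far in the future is a hint that quarantined, future-dated events are involved.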
Here is the full env_file:

SC4S_SOURCE_SYSLOG_PORTS=514,32514,32601,41514,41601,42514,42601
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://<>:3001/services/collector/event
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=<>
SC4S_DEST_SPLUNK_HEC_DEFAULT_INDEX=<>
# SC4S_DEST_SPLUNK_HEC_DEFAULT_MODE=GLOBAL
# SC4S_DEST_SPLUNK_HEC_DEFAULT_FORMAT=json
# SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
SC4S_DEST_SPLUNK_HEC_TLS_CA_FILE=/etc/ssl/certs/ca-certificates.crt
SC4S_DEST_SPLUNK_HEC_TLS_CLIENT_CERT=/etc/syslog-ng/tls/splunk.crt
SC4S_DEST_SPLUNK_HEC_TLS_CLIENT_KEY=/etc/syslog-ng/tls/splunk.key
SC4S_SOURCE_TLS_ENABLE=yes
SC4S_SOURCE_TLS_KEY=/etc/syslog-ng/tls/server.key
SC4S_SOURCE_TLS_CERT=/etc/syslog-ng/tls/server.pem
SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_ENABLE=yes
SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_DIR=/var/lib/sc4s/disk-buffer
SC4S_HEALTH_CHECK_ENABLE=true
SC4S_LISTEN_INTERNAL_HEALTH_PORT=9129
SC4S_ETC=/etc/syslog-ng
SC4S_LISTEN_CHECKPOINT_SPLUNK_NOISE_CONTROL_SECONDS=30
SC4S_LISTEN_STATUS_PORT=9129
SC4S_LISTEN_DEFAULT_TCP_PORT=41514
SC4S_LISTEN_DEFAULT_UDP_PORT=42514
SC4S_LISTEN_DEFAULT_TLS_PORT=7514
SC4S_LISTEN_DEFAULT_RFC5426_PORT=41601
SC4S_LISTEN_DEFAULT_RFC6587_PORT=42601
SC4S_LISTEN_DEFAULT_RFC5425_PORT=7425
SC4S_HEALTH_CHECK_ENABLE=true
SC4S_DEST_SPLUNK_HEC_BATCH_SIZE=1
SC4S_DEST_SPLUNK_HEC_RETRY_LIMIT=1
SC4S_DEST_SPLUNK_HEC_RETRY_INTERVAL=5
SOURCE_ALL_SET=DEFAULT_TCP,DEFAULT_UDP
# SC4S_SEND_METRICS_TERMINAL=no
SC4S_DEBUG=false
SC4S_LOG_LEVEL=false
SC4S_DEFAULT_TIMEZONE=Europe/Berlin
PYTHONPATH=/var/lib/python-venv/lib/python3.12/site-packages:/etc/syslog-ng/python:/etc/syslog-ng/pylib
# Tuning settings
SC4S_DEST_SPLUNK_HEC_TIME_REOPEN=30
SC4S_DEST_SPLUNK_HEC_BATCH_LINES=100
SC4S_DEST_SPLUNK_HEC_BATCH_TIMEOUT=5000
SC4S_DEST_SPLUNK_HEC_KEEPALIVE=yes
SC4S_DEST_SPLUNK_HEC_WORKERS=8
SC4S_DEST_SPLUNK_HEC_DISKBUFF_ENABLE=yes
SC4S_DEST_SPLUNK_HEC_DISKBUFF_RELIABLE=yes
SC4S_DEST_SPLUNK_HEC_DISKBUFF_MEMBUFLENGTH=10000
SC4S_DEST_SPLUNK_HEC_DISKBUFF_DISKBUFSIZE=200000000
Thank you! I just tested it now and it works like a charm. I was worried that the extra search you used would consume processing power, but all is OK, as the job inspector said: "This search has completed and has returned 1 results by scanning 0 events in 0.014 seconds". Apologies for the late reply, I've been swamped.
@PickleRick What I understand is this: when data collection for the A index is stopped and only collection for the _internal index is maintained, A/colddb (filled while data was being collected from both indexes) was 30MB, while _internaldb/db held 10MB, since only the _internal index is still collecting data. Since A/colddb is the oldest data on the cold volume, will it be reduced from 30MB to 20MB? By this logic, wouldn't all data stored in A/colddb eventually be frozen?
Depends on what you mean by "capacity". Splunk doesn't have "capacity" as such. It doesn't have its own limit above which it won't index events; it will simply freeze old data if it exceeds some limits. (OK, technically it will stop ingesting if the underlying storage becomes full and OS-level writes can no longer be done, but that's a different story.) Buckets from A/colddb might get frozen at some point because, while the A index does not exceed its own thresholds, the overall size of the buckets on the volume is too big and A's buckets are the oldest.
Hi @Skv, if the lack of connectivity is a temporary condition, having a Heavy Forwarder on premises will give you a sufficient cache to store logs until the connection is resumed. Ciao. Giuseppe
We are unable to access our Splunk web server after an OS upgrade to RHEL 8.10. The Splunk service is up and running fine, but the UI is not available. Can someone please help us fix this issue? We have checked that port 8000 is listening fine:

Trying 10.xxx.xxx.xx...
Connected to 10.xxx.xxx.xx.
Escape character is '^]'.
^Z
Connection closed by foreign host.

tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN

We don't have anything in the internal logs to dig into. The error message we get on the page is:

The connection has timed out
The server at 10.xxx.xxx.xx is taking too long to respond. The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer's network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the web.
For Tenable SC with a custom server SSL certificate: after identifying the source of the issue, if it's SSL-related, modify the following file to disable SSL verification:

/opt/splunk/etc/apps/TA-tenable/bin/tenable_consts.py

verify_ssl_for_sc_api_key = False

It worked for me.
Splunk cannot and does not detect whether data has already been indexed. As @gcusello said, it will attempt to avoid re-ingesting data, but that's not perfect. It's up to the app doing the ingestion to prevent reading the same data twice. In DB Connect, for example, a "rising column" is defined to identify unique records. Your app could do something similar, using case ID and Closed Time, perhaps.
@PickleRick  So shouldn’t the A/colddb capacity become 0 at some point?
homePath = volume:cold/_internaldb/db

Are you sure you want your hot data on the cold volume? Anyway, you have small buckets, which is relatively rare for a Splunk installation, and it skews your observations. There is no guarantee that the limits will be enforced precisely. It works like this: every now and then (I don't remember the exact interval; you can find it in server.conf) the housekeeping thread wakes up and checks the indexes.

- If a hot bucket triggers the criteria (bucket size, inactivity time and so on), it is rolled to warm.
- If the warm buckets for an index trigger the criteria (number of warm buckets per index), the oldest bucket for that index (in terms of the most recent event in the bucket) is rolled to cold.
- If the hot/warm volume exceeds its size, the oldest bucket on the whole volume is rolled to cold.
- If the cold buckets for an index trigger the criteria (retention time, data size), the oldest bucket is rolled to frozen.
- If the cold volume exceeds its size, the oldest bucket on the whole volume is rolled to frozen.

That's how it's supposed to work.
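As a sketch, the limits involved in those steps map to indexes.conf settings along these lines (the index name, volume name, paths and values here are made up for illustration, not taken from your configuration):

```
# indexes.conf (illustrative values only)
[volume:cold]
path = /data/cold
maxVolumeDataSizeMB = 500000     # volume-wide limit; oldest bucket on the volume freezes past this

[A]
homePath = volume:hot/A/db
coldPath = volume:cold/A/colddb
thawedPath = $SPLUNK_DB/A/thaweddb
maxWarmDBCount = 300             # warm buckets per index before the oldest rolls to cold
frozenTimePeriodInSecs = 7776000 # 90-day retention before a cold bucket freezes
maxTotalDataSizeMB = 100000      # per-index size limit
```

Note that the per-index limits and the volume-wide limits are evaluated independently, which is why an index that is itself under its thresholds can still lose buckets to freezing.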
Thanks @bowesmana for the suggestions. Can you please let me know how I can use the lookup with the below search? The query below gives me the results for all the columns if there is a record in the Splunk logs with JOBNAME values A1 or A2. If there is no record for jobname A1 or A2, no record is fetched and the output is blank.

index=main source=xyz (TERM(A1) OR TERM(A2)) ("- ENDED" OR "- STARTED")
| rex field=TEXT "((A1-) |(A2-) )(?<Func>[^\-]+)"
| eval Function=trim(Func), DAT = strftime(relative_time(_time, "+0h"), "%d/%m/%Y")
| rename DAT as Date_of_reception
| eval {Function}_TIME=_time
| stats values(Date_of_reception) as Date_of_reception values(*_TIME) as *_TIME by JOBNAME
| table JOBNAME, Description, Date_of_reception, STARTED_TIME, ENDED_TIME
| sort -STARTED_TIME

We want the output as below even when there is no record in the Splunk logs with column1 values A1 or A2. When there is no record, the fields Date_of_reception, STARTED_TIME and ENDED_TIME should be blank.

File.csv:

JOBNAME, Description
A1, Job A1
A2, Job A2

Desired output:

JOBNAME, Description, Date_of_reception, STARTED_TIME, ENDED_TIME
A1, Job A1, 05/02/2025, 12:54:31, 12:54:40
A2, Job A2, , ,

Date_of_reception, STARTED_TIME and ENDED_TIME are blank for A2 because there are no logs in Splunk for jobname A2. Can you please help to update the query to get the desired output?
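One common pattern for this kind of requirement (a sketch only, assuming File.csv is configured as a lookup and that Description only exists in the lookup) is to append the lookup rows to the search results and let stats merge them by JOBNAME, so jobs with no events still appear with blank time fields:

```
index=main source=xyz (TERM(A1) OR TERM(A2)) ("- ENDED" OR "- STARTED")
| rex field=TEXT "((A1-) |(A2-) )(?<Func>[^\-]+)"
| eval Function=trim(Func), Date_of_reception=strftime(_time, "%d/%m/%Y")
| eval {Function}_TIME=_time
| append [| inputlookup File.csv | table JOBNAME, Description]
| stats values(Description) as Description values(Date_of_reception) as Date_of_reception values(*_TIME) as *_TIME by JOBNAME
| table JOBNAME, Description, Date_of_reception, STARTED_TIME, ENDED_TIME
| sort -STARTED_TIME
```

The appended lookup rows carry only JOBNAME and Description, so after the stats a jobname with no matching events ends up with empty Date_of_reception, STARTED_TIME and ENDED_TIME, matching the desired output.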
Hi @Cheng2Ready
You can use REST for that, like in this example:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.snow_incident=1
| table title, disabled, action.snow_incident.param.assignment_group, action.snow_incident.param.contact_type

The fields related to the ServiceNow alert actions follow the pattern action.snow_event* or action.snow_incident*
Are you sure you're not talking about the first 256 bytes of a monitored file? (Of course the header length is configurable.) The only duplication detection I recall is connected with useACK, and even then it indexes an event twice but emits a warning, AFAIR.
Did you enable inputs in the TA? Is data from the TA being indexed?
I have installed a UF on one of the servers and installed the Unix add-on, then restarted the UF services. However, the entities didn't populate on the ITSI page. Could someone please help with this?
In a Red Hat OpenShift on-premises cluster I need to collect logs, metrics, and traces for the cluster. When there is no internet connection from the on-premises environment, how can I do this?
@livehybrid @gcusello My requirement is that I have to send events via a webhook alert action, so we need to allow the sender IP (in my case, Splunk Cloud) at the receiving end of the webhook. What IP do we need to whitelist, and where do we get that IP from?
Splunk does not work like a database in this respect. So, it depends on how Splunk has been set up to detect "duplicates" of this nature. This is normally done with searches in reports or alerts or dashboards. These will normally depend on your data. What searches do you already have set up? What does your data look like? How is it being ingested into Splunk? What criteria do you want to use to determine that an event represents a duplicate? Please provide as much detail as you can (without giving away sensitive information).
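As one illustration of the search-based approach, a duplicate-finding search often looks something like this (the index, sourcetype and field names here are placeholders; substitute whatever uniquely identifies a record in your data):

```
index=your_index sourcetype=your_sourcetype
| stats count earliest(_time) as first_seen latest(_time) as last_seen by case_id, closed_time
| where count > 1
```

Any combination of case_id and closed_time appearing more than once would then be a candidate duplicate, which you could surface in a report or alert.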
Hi @NevilleRadcliff , let us know if we can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors