All Posts

Hello, we have been unable to receive logs from forwarders since 29 January. I checked splunkd.log and found this error: ERROR TcpOutputFd [110883 TcpOutEloop] - Connection to host=<ip>:port failed. What should I do?
Refer to the tables in my original post. I'm doing a count of events per span using tstats, i.e. how many events there were from 00:00-04:00, 04:00-08:00, etc. But Splunk chooses that starting point of 00:00, and sometimes it's a very poor choice, so I would like to be able to adjust it so the spans become 01:00-05:00, 05:00-09:00, etc. The methods I've found in the forum do not seem to work with tstats. As shown in my second table, the _time labels are adjusted but the values are not recalculated.
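For illustration only (this is a sketch, not from the original thread; the index name and offsets are placeholders): one approach is to run tstats at a finer granularity, then re-bin with an offset using bin's aligntime option and re-aggregate:
| tstats count where index=main by _time span=1h
| bin _time span=4h aligntime=@d+1h
| stats sum(count) as count by _time
This shifts the 4-hour buckets to start at 01:00, 05:00, etc., and recomputes the counts, at the cost of the underlying tstats query running at 1-hour granularity.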
I want to support an adaptive response action in Splunk Enterprise Security, but when I put a value there I'm getting an error even though the field is not empty. What did I do wrong?
Does it work with tstats?
Ah, I do see it now, thanks. I was assuming all data would be included in one of the "Workload" (e.g. "Exchange") or "app" data values, but the sourcetype "o365:reporting:messagetrace" does not have "Workload" or "app" values, and I was excluding the message trace events with search parameters like Workload="*". Appreciate the help!
This looks like JSON, so you should ingest it as such. Alternatively, you could use spath to extract the fields, or look at the JSON eval functions.
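As a sketch only, using the sample string from the related question (with the missing comma after the bday value added so it parses as valid JSON):
| makeresults
| eval _raw="{\"code\":\"1234\",\"bday\":\"15-02-06T07:02:01.731+00:00\",\"name\":\"Alex\",\"role\":\"student\",\"age\":\"16\"}"
| spath input=_raw path=role output=role
| table role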
I have a string like this: {"code":"1234","bday":"15-02-06T07:02:01.731+00:00" "name":"Alex", "role":"student","age":"16"}, and I want to extract role from this string. Can anyone suggest a way to do this in Splunk?
Which buckets get frozen is decided by the bucket's age. A bucket's age is the time of the most recent event in the bucket. But there can be buckets which even have events from the future (they go to a so-called quarantine bucket). So it's a bit more complicated and depends on the data characteristics of the indexes involved. dbinspect is the command to check for info on your buckets.
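For example, a quick way to see bucket ages per index with dbinspect (the index name is a placeholder):
| dbinspect index=main
| eval oldest_event=strftime(startEpoch, "%F %T"), newest_event=strftime(endEpoch, "%F %T")
| table bucketId state oldest_event newest_event sizeOnDiskMB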
Here is the full env_file:
SC4S_SOURCE_SYSLOG_PORTS=514,32514,32601,41514,41601,42514,42601
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://<>:3001/services/collector/event
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=<>
SC4S_DEST_SPLUNK_HEC_DEFAULT_INDEX=<>
# SC4S_DEST_SPLUNK_HEC_DEFAULT_MODE=GLOBAL
# SC4S_DEST_SPLUNK_HEC_DEFAULT_FORMAT=json
# SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
SC4S_DEST_SPLUNK_HEC_TLS_CA_FILE=/etc/ssl/certs/ca-certificates.crt
SC4S_DEST_SPLUNK_HEC_TLS_CLIENT_CERT=/etc/syslog-ng/tls/splunk.crt
SC4S_DEST_SPLUNK_HEC_TLS_CLIENT_KEY=/etc/syslog-ng/tls/splunk.key
SC4S_SOURCE_TLS_ENABLE=yes
SC4S_SOURCE_TLS_KEY=/etc/syslog-ng/tls/server.key
SC4S_SOURCE_TLS_CERT=/etc/syslog-ng/tls/server.pem
SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_ENABLE=yes
SC4S_DEST_SPLUNK_HEC_DEFAULT_DISKBUFF_DIR=/var/lib/sc4s/disk-buffer
SC4S_HEALTH_CHECK_ENABLE=true
SC4S_LISTEN_INTERNAL_HEALTH_PORT=9129
SC4S_ETC=/etc/syslog-ng
SC4S_LISTEN_CHECKPOINT_SPLUNK_NOISE_CONTROL_SECONDS=30
SC4S_LISTEN_STATUS_PORT=9129
SC4S_LISTEN_DEFAULT_TCP_PORT=41514
SC4S_LISTEN_DEFAULT_UDP_PORT=42514
SC4S_LISTEN_DEFAULT_TLS_PORT=7514
SC4S_LISTEN_DEFAULT_RFC5426_PORT=41601
SC4S_LISTEN_DEFAULT_RFC6587_PORT=42601
SC4S_LISTEN_DEFAULT_RFC5425_PORT=7425
SC4S_HEALTH_CHECK_ENABLE=true
SC4S_DEST_SPLUNK_HEC_BATCH_SIZE=1
SC4S_DEST_SPLUNK_HEC_RETRY_LIMIT=1
SC4S_DEST_SPLUNK_HEC_RETRY_INTERVAL=5
SOURCE_ALL_SET=DEFAULT_TCP,DEFAULT_UDP
# SC4S_SEND_METRICS_TERMINAL=no
SC4S_DEBUG=false
SC4S_LOG_LEVEL=false
SC4S_DEFAULT_TIMEZONE=Europe/Berlin
PYTHONPATH=/var/lib/python-venv/lib/python3.12/site-packages:/etc/syslog-ng/python:/etc/syslog-ng/pylib
# Tuning settings
SC4S_DEST_SPLUNK_HEC_TIME_REOPEN=30
SC4S_DEST_SPLUNK_HEC_BATCH_LINES=100
SC4S_DEST_SPLUNK_HEC_BATCH_TIMEOUT=5000
SC4S_DEST_SPLUNK_HEC_KEEPALIVE=yes
SC4S_DEST_SPLUNK_HEC_WORKERS=8
SC4S_DEST_SPLUNK_HEC_DISKBUFF_ENABLE=yes
SC4S_DEST_SPLUNK_HEC_DISKBUFF_RELIABLE=yes
SC4S_DEST_SPLUNK_HEC_DISKBUFF_MEMBUFLENGTH=10000
SC4S_DEST_SPLUNK_HEC_DISKBUFF_DISKBUFSIZE=200000000
Thank you! I just tested it now, and it works like a charm. I was worried that the extra search you used would consume processing power, but all is OK; as the job inspector said: "This search has completed and has returned 1 results by scanning 0 events in 0.014 seconds". Apologies for the late reply, I've been swamped.
@PickleRick What I understand is this: data collection for the A index is stopped and only collection for the _internal index continues. A/colddb, which was filled while data was being collected for both indexes, was 30MB, and _internaldb/db was 10MB with only _internal data still being collected. Since A/colddb is the oldest in the cold volume, does it get reduced from 30MB to 20MB? By that logic, wouldn't all the data stored in A/colddb eventually be frozen?
Depends on what you mean by "capacity". Splunk doesn't have "capacity" as such; it doesn't have its own limit above which it won't index events. It will simply freeze old data if it exceeds some limits. (OK, technically it will stop ingesting if the underlying storage becomes full and OS-level writes can no longer be done, but that's a different story.) Buckets from A/colddb might get frozen at some point because, while the A index does not exceed its thresholds, the overall size of buckets on the volume is too big and A's buckets are the oldest.
Hi @Skv, if the lack of connectivity is a temporary condition, having a Heavy Forwarder on premise will give you a sufficient cache to store logs until the connection is resumed. Ciao. Giuseppe
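As a rough illustration only (not from the original answer; the group name, endpoint, and values are placeholders, not a recommendation), the buffering behaviour on the Heavy Forwarder is influenced by outputs.conf settings such as maxQueueSize and useACK:
# outputs.conf on the on-prem Heavy Forwarder (illustrative values)
[tcpout]
defaultGroup = cloud_indexers
useACK = true

[tcpout:cloud_indexers]
server = <cloud-endpoint>:9997
# output queue that buffers events while the destination is unreachable
maxQueueSize = 500MB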
We are unable to access our Splunk web server after an OS upgrade to RHEL 8.10. Our Splunk service is up and running fine, but the UI is not available. Can someone please help us fix this issue? We have checked that port 8000 is listening fine:
Trying 10.xxx.xxx.xx...
Connected to 10.xxx.xxx.xx.
Escape character is '^]'.
^Z
Connection closed by foreign host.
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN
We don't have anything in the internal logs to dig into. We are getting the following error message on the page:
The connection has timed out. The server at 10.xxx.xxx.xx is taking too long to respond. The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer's network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the web.
For Tenable SC with a custom server SSL certificate: after identifying the source of the issue, if it is SSL-related, modify the following file to disable SSL verification:
/opt/splunk/etc/apps/TA-tenable/bin/tenable_consts.py
verify_ssl_for_sc_api_key = False
It worked for me.
Splunk cannot and does not detect if data has already been indexed. As @gcusello said, it will attempt to avoid re-ingesting data, but that's not perfect. It's up to the app doing the ingestion to prevent reading the same data twice. In DB Connect, for example, a "rising column" is defined to identify unique records. Your app could do something similar, using case ID and Closed Time, perhaps.
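Purely as an illustration (the index, sourcetype, and field names such as case_id and closed_time are hypothetical), re-ingested records could be spotted at search time with something like:
index=<your_index> sourcetype=<your_sourcetype>
| stats count by case_id closed_time
| where count > 1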
@PickleRick  So shouldn’t the A/colddb capacity become 0 at some point?
homePath = volume:cold/_internaldb/db
Are you sure you want your hot data on the cold volume? Anyway, you have small buckets, which is relatively rare for a Splunk installation, and it skews your observations. There is no guarantee that the limits will be enforced precisely. Anyway, it works like this: every now and then (I don't remember the exact interval; you can find it in server.conf) the housekeeping thread wakes up and checks the indexes.
- If a hot bucket triggers criteria (bucket size, inactivity time and so on), it is rolled to warm.
- If warm buckets for an index trigger criteria (number of warm buckets per index), the oldest bucket for that index (in terms of the most recent event in the bucket) is rolled to cold.
- If the hot/warm volume exceeds its size, the oldest bucket for the whole volume is rolled to cold.
- If cold buckets for an index trigger criteria (retention time, data size), the oldest bucket is rolled to frozen.
- If the cold volume exceeds its size, the oldest bucket for the whole volume is rolled to frozen.
That's how it's supposed to work.
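For context, a hedged sketch of the indexes.conf settings these rules read from (names, paths, and sizes are placeholders, not the poster's actual configuration):
# indexes.conf (illustrative only)
[volume:cold]
path = /data/cold
# volume-wide cap; exceeding it freezes the oldest bucket on the volume
maxVolumeDataSizeMB = 500000

[A]
homePath = $SPLUNK_DB/A/db
coldPath = volume:cold/A/colddb
thawedPath = $SPLUNK_DB/A/thaweddb
# per-index limits that can also trigger freezing
frozenTimePeriodInSecs = 7776000
maxTotalDataSizeMB = 100000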
Thanks @bowesmana for the suggestions. Can you please let me know how I can use the lookup with the search below?

The query below gives me the results of all the columns if there is a record in the Splunk logs with JOBNAME values A1 and A2. If there is no record for jobname A1 or A2, no record is fetched and the output is blank.

index = main source=xyz (TERM(A1) OR TERM(A2)) ("- ENDED" OR "- STARTED")
| rex field=TEXT "((A1-) |(A2-) )(?<Func>[^\-]+)"
| eval Function=trim(Func), DAT = strftime(relative_time(_time, "+0h"), "%d/%m/%Y")
| rename DAT as Date_of_reception
| eval {Function}_TIME=_time
| stats values(Date_of_reception) as Date_of_reception values(*_TIME) as *_TIME by JOBNAME
| table JOBNAME, Description, Date_of_reception, STARTED_TIME, ENDED_TIME
| sort -STARTED_TIME

We want the output as below even when there is no record in the Splunk logs with JOBNAME values A1 and A2. When there is no record, the fields Date_of_reception, STARTED_TIME, and ENDED_TIME should be blank.

File.csv:
JOBNAME, Description
A1, Job A1
A2, Job A2

Desired output:
JOBNAME, Description, Date_of_reception, STARTED_TIME, ENDED_TIME
A1, Job A1, 05/02/2025, 12:54:31, 12:54:40
A2, Job A2, , ,

Date_of_reception, STARTED_TIME, and ENDED_TIME are blank for A2 because there are no logs in Splunk for jobname A2. Can you please help update the query to get the desired output?
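One possible shape for this, as a hedged sketch rather than a verified answer (it assumes File.csv exists as a lookup file with JOBNAME and Description columns), is to append the lookup rows and let stats merge them by JOBNAME so that jobs with no events still produce a row:
index=main source=xyz (TERM(A1) OR TERM(A2)) ("- ENDED" OR "- STARTED")
| rex field=TEXT "((A1-) |(A2-) )(?<Func>[^\-]+)"
| eval Function=trim(Func), Date_of_reception=strftime(_time, "%d/%m/%Y")
| eval {Function}_TIME=_time
| append [| inputlookup File.csv | fields JOBNAME Description]
| stats values(Description) as Description values(Date_of_reception) as Date_of_reception values(*_TIME) as *_TIME by JOBNAME
| table JOBNAME Description Date_of_reception STARTED_TIME ENDED_TIME
A2 then comes only from the appended lookup row, so its Date_of_reception and time fields stay blank.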
Hi @Cheng2Ready You can use REST for that, like in this example:
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.snow_incident=1
| table title, disabled, action.snow_incident.param.assignment_group, action.snow_incident.param.contact_type
The fields related to the ServiceNow alert actions follow the pattern action.snow_event* or action.snow_incident*.