All Topics

Hello, I'm collecting CloudTrail logs with the Splunk Add-on for AWS installed on a Splunk heavy forwarder. The following warning keeps appearing under the aws:cloudtrail:log source type in the _internal index: " ~ level=WARNING pid=3386853 tid=Thread-7090 logger=urllib3.connectionpool pos=connectionpool.py:_put_conn:308 | Connection pool is full, discarding connection: bucket.vpce-abc1234.s3.ap-northeast-2.vpce.amazonaws.com. Connection pool size: 10" Should the connection pool size used by the Splunk Add-on for AWS be increased, and if so, how can I increase it? I would like to know the cause of and the solution for this warning. Thank you.
Not getting data from a universal forwarder (Ubuntu). 1) Installed Splunk UF version 9.2.0 and the credentials package from Splunk Cloud, since it should be reporting to Splunk Cloud. 2) There are no error logs in splunkd.log, and no metrics events appear in the _internal index in Splunk Cloud. 3) Port 9997 connectivity is working fine. The only logs received in splunkd.log are:

02-16-2024 15:53:30.843 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/search_messages.log'.
02-16-2024 15:53:30.852 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log'.
02-16-2024 15:53:30.859 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/btool.log'.
02-16-2024 15:53:30.876 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/mergebuckets.log'.
02-16-2024 15:53:30.885 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/wlm_monitor.log'.
02-16-2024 15:53:30.891 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/license_usage_summary.log'.
02-16-2024 15:53:30.898 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/searchhistory.log'.
02-16-2024 15:53:30.907 +0000 INFO WatchedFile [156345 tailreader0] - Will begin reading at offset=2859 for file='/opt/splunkforwarder/var/log/watchdog/watchdog.log'.
02-16-2024 15:53:31.112 +0000 INFO AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Connected to idx=1.2.3.4:9997:2, pset=0, reuse=0. autoBatch=1
02-16-2024 15:53:31.112 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.4:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=1
02-16-2024 15:54:00.446 +0000 INFO ScheduledViewsReaper [156309 DispatchReaper] - Scheduled views reaper run complete. Reaped count=0 scheduled views
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_threads=2.
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_jobs=5.
02-16-2024 15:54:00.447 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.4:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=21
02-16-2024 15:54:03.379 +0000 INFO TailReader [156345 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
02-16-2024 15:54:05.447 +0000 INFO BackgroundJobRestarter [156309 DispatchReaper] - inspect_count=0, restart_count=0
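A quick check while troubleshooting: confirm what outputs configuration the credentials package actually deployed on the forwarder. A sketch (the group name and host below are placeholders, not real values):

# show the effective outputs configuration on the forwarder
/opt/splunkforwarder/bin/splunk btool outputs list --debug

# it should contain something shaped like:
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs1.example.splunkcloud.com:9997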
So we have a query:

(index="it_ops") source="bank_sys" message.content.country IN ("CANADA","USA","UK","FRANCE","SPAIN","IRELAND") message.content.code <= 399 | stats max(message.timestamp) as maxtime by message.content.country

This returns a two-column result with country and maxtime. However, when there is no hit for a country, that country is omitted. I tried fillnull, but it only adds columns, not rows. How do we set a default maxtime for the countries that are not found?
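One common pattern here is to append one default row per country and then take the max, so real values win (a sketch; the default maxtime of 0 assumes message.timestamp is numeric, so pick a different sentinel if it isn't):

(index="it_ops") source="bank_sys" message.content.country IN ("CANADA","USA","UK","FRANCE","SPAIN","IRELAND") message.content.code <= 399
| stats max(message.timestamp) as maxtime by message.content.country
| append
    [| makeresults
     | eval country=split("CANADA,USA,UK,FRANCE,SPAIN,IRELAND", ",")
     | mvexpand country
     | eval maxtime=0
     | fields - _time
     | rename country as "message.content.country"]
| stats max(maxtime) as maxtime by message.content.country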
Hello everyone, quick question: I need to forward data from a HF to an indexer cluster. Right now I'm using the S2S tcpout function, with useACK, default load balancing, and maxQueueSize. I'm studying the possibility of using httpout instead of tcpout, due to traffic filtering. The documentation seems a bit light on httpout: is it possible to use indexer load balancing, ACK, and the maxQueueSize function with it? Thanks for your help! Jonas
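For context, an [httpout] stanza in outputs.conf is shaped roughly like this (a sketch; the token and host are placeholders):

[httpout]
httpEventCollectorToken = 00000000-0000-0000-0000-000000000000
uri = https://idx1.example.com:8088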
Hello Splunk Community, I'm currently facing an issue with integrating Group-IB threat intelligence feeds into my Splunk environment and could really use some assistance. Here's a brief overview of the problem:

1. Inconsistent sourcetype ingestion: After integrating the Group-IB threat intel feeds and installing the corresponding app on my search head, I've noticed inconsistent behavior in sourcetype ingestion. Sometimes only one sourcetype is ingested, other times five or seven. This variability is puzzling, and I'm not sure what's causing it.

2. Ingestion interruption: Additionally, after a few days of seemingly normal ingestion, the ingestion process stopped abruptly. Investigating further, I found the following message in the logs: *Health Check msg="A script exited abnormally with exit status 1" input="opt/splunk/etc/apps/gib_tia/bin/gib_tia.py" stanza = "xxx"* This message indicates that the intelligence downloads for a specific sourcetype have failed on the host.

This issue is critical for our security operations, and I'm struggling to identify and resolve the root cause. If anyone has encountered similar challenges, or has insights into troubleshooting such issues with threat intel feed integrations, I would greatly appreciate your assistance. Thanks in advance.
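Since the stderr of a failing input script usually lands in splunkd.log, a search along these lines can help surface the underlying Python error behind the exit status 1 (a sketch):

index=_internal sourcetype=splunkd log_level=ERROR "gib_tia"
| sort - _time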
We want an alert to run every day (Monday-Sunday) on a 30-minute interval, with one exception: it should not run on Wednesday and Friday from 5AM to 8AM. It should still run during all other hours on Wednesday and Friday (apart from 5AM to 8AM). A single cron expression cannot express that, so we want to handle it in the alert logic instead.
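One way to keep a single every-30-minutes cron and handle the exception inside the search itself is to gate the results on the current day and hour, so the alert fires on nothing during the excluded window (a sketch; wday and hr are just scratch field names, and <your base search> is a placeholder for the alert's search):

<your base search>
| eval wday=strftime(now(), "%A"), hr=tonumber(strftime(now(), "%H"))
| where NOT ((wday="Wednesday" OR wday="Friday") AND hr>=5 AND hr<8)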
Hello, we are trying to set up a Power BI integration with Splunk. We have Power BI installed on a Windows machine, and we also installed the ODBC driver to connect to Splunk. As part of the configuration we added credentials (the same ones with which we connect to our Splunk Cloud instance) and the URL in the Power BI Get Data options, but we are getting the error below (screenshot not included). Steps: Power BI Desktop -> Get Data -> Other -> ODBC -> and when OK is clicked, the above-mentioned error is displayed. Can you please suggest how I can fix this? Thank you.
Hello, I want to monitor the health of DB Connect app inputs and connections, and I noticed the health monitor is not working: I'm getting the message "search populated no results". When I tried to investigate the issue, I found that index=_internal is empty, which I guess is related. Can you please help me figure out why the index is empty and the health monitor is not working?
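To confirm whether anything is arriving in _internal at all, and from which hosts, a quick sketch:

| tstats count where index=_internal by host

No results over a recent time range would point at a forwarding or indexing problem rather than at the DB Connect health dashboards themselves.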
Installing Splunk 9.2.0.1 on Windows Server 2019 ends prematurely. I get the issue whether I install the .msi from cmd with /passive or install it from the GUI. I have seen the issue resolved on earlier Windows Server versions by creating a dummy string in regedit, but that does not work on Server 2019. I have a log file, but it is too big to be inserted in my post here.
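For anyone reproducing this, a verbose install log can be generated from cmd like so (a sketch; the .msi filename is a placeholder for the actual download):

msiexec /i splunk-9.2.0.1-x64-release.msi /l*v splunk_install.log /passive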
Splunkd is not running on Linux after the EC2 instance is stopped and started. I tried all of these commands: ./splunk start --debug and /opt/splunk/bin/splunk status, and checked /opt/splunk/var/log/splunk/splunkd.log, but I can't find the solution. Please share the solution, with the Linux commands as well.
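If splunkd is simply not configured to start at boot, enabling boot-start is the usual first step (a sketch; run as root, and the -user value is an assumption about which account owns /opt/splunk):

# make splunkd come back after the instance is stopped/started
/opt/splunk/bin/splunk enable boot-start -user splunk
# start it now and verify
/opt/splunk/bin/splunk start
/opt/splunk/bin/splunk status
# check the most recent startup messages for errors
tail -n 50 /opt/splunk/var/log/splunk/splunkd.log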
For the past couple of weeks, at least once per day one of our indexers goes into internal-logs-only mode, and the reason it states is that the license is expired. It's a bogus message, since the license is definitely not expired and not even close to exceeded, and restarting the Splunk service on the indexer always clears the error. Unfortunately, the Splunk logs don't provide much more that would indicate anything I can investigate. Has anyone run into something similar, or know where I can look to troubleshoot this further? It's making my life pretty tough, because I constantly have to restart indexers due to this error.
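To catch what the indexer logs at the moment it flips, a search over _internal on the affected indexer around that time may help (a sketch):

index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR) license
| sort - _time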
We are planning to migrate a server that plays multiple roles (DS, HEC, proxy, SC4S, syslog, etc.) to multiple servers, by splitting the roles where possible: e.g., server A plays the DS role, server B takes care of the HEC services, and so on. What would be the easiest approach to achieve this? It seems like a lot of work. Would it be recommended to do this in the first place? What criteria should we keep in mind while doing this migration?
I have syslog events being written to a HF locally via syslog-ng; these events are then consumed via the file monitor, and the IP address in the log file name is extracted as host. I now want to run an INGEST_EVAL on the IP address and use a lookup to change the host. If I run the command from search, I get the required result:

index=... | eval host=json_extract(lookup("lookup.csv",json_object("host",host),json_array("host_value")),"host_value")

This replaces host with "host_value". I have this working on an AIO instance with the config below. Now adding it to the HF tier:

/opt/splunk/etc/apps/myapp/lookups/lookup.csv (the lookup has global access and export = system):

host,host_value
1.2.3.4, myhostname

props.conf:

[mysourcetype]
TRANSFORMS-host_override = host_override

transforms.conf:

[host_override]
INGEST_EVAL = host=json_extract(lookup("lookup.csv",json_object("host",host),json_array("host_value")),"host_value")

When applied on the HF (restarted), I see some of the hostnames changed to "localhost" while the others remain unchanged (though that may be because the config isn't working, or because that data comes from another HF where the test config is not applied). Any ideas? Thanks.
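One quick sanity check on the HF is to confirm the stanzas are actually picked up, and from which files, with btool:

/opt/splunk/bin/splunk btool props list mysourcetype --debug
/opt/splunk/bin/splunk btool transforms list host_override --debug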
Hi, I want to know if there are any resources available to get a notification, or some other way to know when a new Splunk Enterprise version is released. This could be through mail, an RSS feed, or something similar. I already know that this one exists: https://www.splunk.com/page/release_rss but it is not up to date. Thanks, Zarge
query:

|tstats count where index=new_index host=new-host source=https://itcsr.welcome.com/logs* by PREFIX(status:) _time
|rename status: as Total_Status
|where isnotnull(Total_Status)
|eval SuccessCount=if(Total_Status="0", count, Success), FailedCount=if(Total_Status!="0", count, Failed)

OUTPUT:

Total_Status  _time             count  FailedCount  SuccessCount
0             2022-01-12 13:30  100                 100
0             2022-01-12 13:00  200                 200
0             2022-01-13 11:30  110                 110
500           2022-01-13 11:00  2      2
500           2022-01-11 10:30  4      4
500           2022-01-11 10:00  8      8

But I want the output as shown in the table below:

_time       SuccessCount  FailedCount
2022-01-13  110           2
2022-01-12  300           0
2022-01-11  0             12
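A sketch of one way to get there: default the counts to 0 instead of the undefined Success/Failed fields, bucket by day, and sum (the span=1d day bucketing is an assumption about the desired grouping):

|tstats count where index=new_index host=new-host source=https://itcsr.welcome.com/logs* by PREFIX(status:) _time span=1d
|rename status: as Total_Status
|where isnotnull(Total_Status)
|eval SuccessCount=if(Total_Status="0", count, 0), FailedCount=if(Total_Status!="0", count, 0)
|stats sum(SuccessCount) as SuccessCount, sum(FailedCount) as FailedCount by _time
|eval _time=strftime(_time, "%Y-%m-%d")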
I am trying to understand why I get a different total count of events for the following searches:

1. index=some_specific_index (returns a total of 7,601,134 events)
2. | tstats count where index=some_specific_index (returns 7,593,248)

I have the same date and time range set when I run each query. I do understand why tstats and stats have different values.
Hi, I want a search query to fetch PCF application instances and their event messages, such as start, stop, and crash, along with the reason. Can anyone help me with the query to fetch this? Thanks, Abhigyan.
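As a starting point, assuming the data comes in via the Splunk Firehose Nozzle, a sketch like this might surface crash/start/stop messages (the index name, sourcetype, cf_app_name field, and the search strings are all assumptions; check what your environment actually ingests):

index=pcf sourcetype="cf:logmessage" ("CRASHED" OR "Starting app" OR "Stopping app")
| table _time cf_app_name msg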
CHECK_METHOD = modtime is not working as expected due to a regression in 9.x: a wrong calculation leads to unexpected re-reading of a file. Until the next patch, use the following workaround for inputs with CHECK_METHOD = modtime. In inputs.conf, set the following for the impacted stanza:

time_before_close = 0
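For example, an affected stanza would end up looking like this (the monitor path is a placeholder):

[monitor:///var/log/myapp/app.log]
CHECK_METHOD = modtime
time_before_close = 0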
Created 2 dropdowns in a dashboard: 1. Country, 2. Applications (getting data from a .csv file). In the Applications dropdown I see the individual applications, but I also need an "All" option in the dropdown. How can I do it?

<input type="radio" token="country">
  <label>Country</label>
  <choice value="india">India</choice>
  <choice value="australian">Australian</choice>
  <default>india</default>
  <initialValue>india</initialValue>
  <change>
    <condition label="India">
      <set token="sorc">callsource</set>
    </condition>
    <condition label="Australian">
      <set token="sorc">callsource2</set>
    </condition>
  </change>
</input>
<input type="dropdown" token="application" searchWhenChanged="false">
  <label>Application</label>
  <fieldForLabel>application_Succ</fieldForLabel>
  <fieldForValue>application_Fail</fieldForValue>
  <search>
    <query>|inputlookup application_lists.csv |search country=$country$ |sort country application_Succ |fields application_Succ application_Fail</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>
</fieldset>
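A static "All" choice with a wildcard value is the usual approach (a sketch; it assumes $application$ is used downstream as a field filter where * can match everything):

<input type="dropdown" token="application" searchWhenChanged="false">
  <label>Application</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>application_Succ</fieldForLabel>
  <fieldForValue>application_Fail</fieldForValue>
  <search>
    <query>|inputlookup application_lists.csv |search country=$country$ |sort country application_Succ |fields application_Succ application_Fail</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>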