I am trying to create a DB input with the data we have in the database in mind. It has fields like PKEY, STARTTIME, ENDTIME, etc. If I use PKEY or STARTTIME as the rising column, I am bound to miss some rows, and neither PKEY nor STARTTIME is unique. So I am trying to use CONCAT(PKEY, STARTTIME):

SELECT BTACHTASK, ASOF, PKEY, STARTTIME, ENDTIME, CONCAT(PKEY, STARTTIME) AS combination
FROM CORE_MCA.SPLUNK_BATCH_STATES_VW
WHERE CONCAT(PKEY, STARTTIME) > ?
ORDER BY CONCAT(PKEY, STARTTIME) ASC

I am using a rising input, and the checkpoint should be the combination column, but I am not getting any results for the rising column. I am getting the error java.sql.SQLException: Missing IN or OUT parameter at index:: 1. What am I doing wrong here? Also, sometimes the normal query also gives this error, but after refreshing and selecting the connection again I get the required data.
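For comparison, here is a minimal sketch of the same rising query with the timestamp rendered as a fixed-width string, so the concatenated checkpoint compares consistently as text. This assumes an Oracle source (the "Missing IN or OUT parameter" message is an Oracle JDBC error, but the TO_CHAR format below is an assumption, not something confirmed by the post):

SELECT BTACHTASK, ASOF, PKEY, STARTTIME, ENDTIME,
       -- zero-padded fixed-width timestamp keeps string comparison monotonic (assumed Oracle TO_CHAR)
       CONCAT(PKEY, TO_CHAR(STARTTIME, 'YYYYMMDDHH24MISS')) AS combination
FROM CORE_MCA.SPLUNK_BATCH_STATES_VW
WHERE CONCAT(PKEY, TO_CHAR(STARTTIME, 'YYYYMMDDHH24MISS')) > ?
ORDER BY combination ASC

The "Missing IN or OUT parameter at index:: 1" error generally means the ? placeholder was never bound, which in DB Connect usually points at the checkpoint value for the rising column not being set yet.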
I have a timechart that shows the last 30d, and with the timechart I also have a trendline showing the sma7. The problem is that on the timechart the trendline doesn't show anything for days 1-6, which I understand is because there is no data from the previous days for the sma7 to calculate. I thought the solution could be to change my search to the last 37d and then only timechart days 7-37 (if that makes sense), but I can't seem to figure out how to implement that, or whether that is even a possible solution. Existing search:

index=palo eventtype=user_logon earliest=-37d@d
| bin span=1d _time
| timechart count(eval(like(user_auth, "%-Compliant"))) as compliant count as total
| eval compliant=round(((compliant/total)*100),2)
| trendline sma7(compliant) as compliant7sma
| eval compliant7sma=round(compliant7sma,2)
| table _time, compliant, compliant7sma
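One possible approach (a sketch, not tested against this data): keep the 37-day search window so the sma7 has history, then drop the first seven days after the trendline has been computed, with a where clause on _time:

index=palo eventtype=user_logon earliest=-37d@d
| timechart span=1d count(eval(like(user_auth, "%-Compliant"))) as compliant count as total
| eval compliant=round(((compliant/total)*100),2)
| trendline sma7(compliant) as compliant7sma
| eval compliant7sma=round(compliant7sma,2)
| where _time >= relative_time(now(), "-30d@d")
| table _time, compliant, compliant7sma

The separate bin command is omitted here because timechart span=1d already buckets by day; the key point is that the filtering happens after trendline, so the first visible day already has a full seven days behind it.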
I want to identify when an index's _indextime stops advancing for a given amount of time, with a tolerance that scales with how frequently the index normally receives data. For example:

1. Index A indexes once every 6 hours and populates the past 6 hours of events. In this circumstance I would want to know if it hasn't indexed for 8 hours or more. The tolerance is therefore relatively small (around 30% extra).
2. Index B indexes every second. In this circumstance I may forgive it not indexing for a few seconds, but I'd definitely want to know if it hasn't indexed in 10 minutes. The tolerance is therefore relatively large.

I don't think _time is right to use, as that would retrospectively backfill the indexes and I'm thinking it'd give false results. I feel that either the _internal index or tstats has the answer, but I've not yet come close.
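A hedged starting point with tstats (index filter and field names are placeholders): measure each index's current silence using _indextime rather than _time, so backfilled events don't mask a stalled feed:

| tstats max(_indextime) as last_indexed where index=* by index
| eval silent_for = now() - last_indexed
| eval silent_for_h = round(silent_for/3600, 2)

From there, the per-index tolerance could come from a lookup of expected cadences, or be derived by measuring the typical gap between consecutive _indextime values for each index and alerting when silent_for exceeds some multiple of that typical gap.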
Hi everyone! I need to capture an endpoint that is requested by the method PATCH. Has anyone found a way to do this? In the detection rules I could only find GET, POST, DELETE, PUT.
I am working on building an SRE dashboard, similar to https://www.appdynamics.com/blog/product/software-reliability-metrics/. Can anyone help me build a monthly error budget burn chart? Thank you.
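For reference, the underlying calculation is the same wherever it is charted: with an SLO of, say, 99.9%, the monthly error budget is (1 - 0.999) x total requests, and the burn chart plots cumulative errors against that budget. A hedged Splunk sketch of that math (index, sourcetype, status field, and SLO are all assumptions, not taken from the post):

index=web sourcetype=access_combined earliest=@mon
| timechart span=1d count as total count(eval(status>=500)) as errors
| streamstats sum(errors) as cum_errors sum(total) as cum_total
| eval budget_used_pct = round(cum_errors / (cum_total * (1 - 0.999)) * 100, 2)
| table _time budget_used_pct

Plotted as a line, budget_used_pct crossing 100 means the month's error budget is exhausted.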
"I have an issue with creating a field named 'Path' which should be populated with 'YES' or 'NO' based on the following information: I have fields like 'Hostname', 'dev', and 'vulnerability'. I need... See more...
"I have an issue with creating a field named 'Path' which should be populated with 'YES' or 'NO' based on the following information: I have fields like 'Hostname', 'dev', and 'vulnerability'. I need to take the values in 'dev' and 'vulnerability' and check if there are other rows with the same 'hostname' and 'vulnerability'. If there is a match, I write 'NO' in the 'Path' field; otherwise, I write 'YES'." Hostname  dev vulnerabilita patch A B apache SI A B sql NO B 0 apache NO B 0 python NO C A apache SI
Hello I'm collecting cloudtrail logs by installing Splunk add on AWS in the Splunk heavy forwarder. The following logs are occurring in the aws:cloudtrail:log source type in the _internal index. "... See more...
Hello I'm collecting cloudtrail logs by installing Splunk add on AWS in the Splunk heavy forwarder. The following logs are occurring in the aws:cloudtrail:log source type in the _internal index. " ~ level=WARNING pid=3386853 tid=Thread-7090 logger=urllib3.connectionpool pos=connectionpool.py:_put_conn:308 | Connection pool is full, discarding connection: bucket.vpce-abc1234.s3.ap-northeast-2.vpce.amazonaws.com. Connection pool size: 10" Should Splunk add on AWS increase the Connection pool size? How can I increase the Connection pool size? Curiously, I would like to know the solution for this log. Thank you.
Not getting data from a universal forwarder (Ubuntu).

1) Installed Splunk UF version 9.2.0 and the credentials package from Splunk Cloud, as it should be reporting to Splunk Cloud.
2) There are no error logs in splunkd.log and no metrics logs in the _internal index in Splunk Cloud.
3) Port connectivity on 9997 is working fine.

The only logs received in splunkd.log are:

02-16-2024 15:53:30.843 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/search_messages.log'.
02-16-2024 15:53:30.852 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log'.
02-16-2024 15:53:30.859 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/btool.log'.
02-16-2024 15:53:30.876 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/mergebuckets.log'.
02-16-2024 15:53:30.885 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/wlm_monitor.log'.
02-16-2024 15:53:30.891 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/license_usage_summary.log'.
02-16-2024 15:53:30.898 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/searchhistory.log'.
02-16-2024 15:53:30.907 +0000 INFO WatchedFile [156345 tailreader0] - Will begin reading at offset=2859 for file='/opt/splunkforwarder/var/log/watchdog/watchdog.log'.
02-16-2024 15:53:31.112 +0000 INFO AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Connected to idx=1.2.3.4:9997:2, pset=0, reuse=0. autoBatch=1
02-16-2024 15:53:31.112 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.4:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=1
02-16-2024 15:54:00.446 +0000 INFO ScheduledViewsReaper [156309 DispatchReaper] - Scheduled views reaper run complete. Reaped count=0 scheduled views
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_threads=2.
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_jobs=5.
02-16-2024 15:54:00.447 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.4:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=21
02-16-2024 15:54:03.379 +0000 INFO TailReader [156345 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'
02-16-2024 15:54:05.447 +0000 INFO BackgroundJobRestarter [156309 DispatchReaper] - inspect_count=0, restart_count=0
So we have a query:

(index="it_ops") source="bank_sys" message.content.country IN ("CANADA","USA","UK","FRANCE","SPAIN","IRELAND") message.content.code <= 399
| stats max(message.timestamp) as maxtime by message.content.country

This returns a two-column result with country and maxtime. However, when there is no hit for a country, that country is omitted. I tried fillnull, but it only adds columns, not rows. How do we set a default maxtime for countries that are not found?
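One common pattern (a sketch; the sentinel value 0 is an assumption): append one placeholder row per country, then take the max so real results win over the defaults:

(index="it_ops") source="bank_sys" message.content.country IN ("CANADA","USA","UK","FRANCE","SPAIN","IRELAND") message.content.code <= 399
| stats max(message.timestamp) as maxtime by message.content.country
| append
    [| makeresults
     | eval country=split("CANADA,USA,UK,FRANCE,SPAIN,IRELAND", ",")
     | mvexpand country
     | rename country as "message.content.country"
     | eval maxtime=0]
| stats max(maxtime) as maxtime by "message.content.country"

Countries with no hits then show maxtime=0 (or whatever sentinel is chosen) instead of disappearing from the table.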
Hello everyone,

Quick question: I need to forward data from a HF to an indexer cluster. Right now I'm using the S2S tcpout function, with useACK, default load balancing, and maxQueueSize. I am studying the possibility of using httpout instead of tcpout, due to traffic filtering.

The documentation seems a bit light about httpout. Is it possible to use indexer load balancing, ack, and the maxQueueSize function with it? Thanks for your help!

Jonas
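For reference, the basic httpout stanza in outputs.conf looks like this (hostname and token are placeholders; whether it honors the same ack and queue semantics as tcpout is exactly the open question here, so treat this as a sketch):

[httpout]
httpEventCollectorToken = <HEC token configured on the indexers>
uri = https://idx1.example.com:8088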
Hello Splunk Community,

I'm currently facing an issue with integrating Group-IB threat intelligence feeds into my Splunk environment and could really use some assistance. Here's a brief overview of the problem:

1. Inconsistent Sourcetype Ingestion: Upon integrating the Group-IB threat intel feeds and installing the corresponding app on my Search Head, I've noticed inconsistent behavior in terms of sourcetype ingestion. Sometimes only one sourcetype is ingested, while other times it's five or seven. This variability is puzzling, and I'm not sure what's causing it.

2. Ingestion Interruption: Additionally, after a few days of seemingly normal ingestion, I observed that the ingestion process stopped abruptly. Upon investigating further, I found the following message in the logs:

*Health Check msg="A script exited abnormally with exit status 1" input="opt/splunk/etc/apps/gib_tia/bin/gib_tia.py" stanza = "xxx"*

This message indicates that the intelligence downloads of a specific sourcetype have failed on the host. This issue is critical for our security operations, and I'm struggling to identify and resolve the root cause. If anyone has encountered similar challenges or has insights into troubleshooting such issues with threat intel feed integrations, I would greatly appreciate your assistance.

Thanks in advance,
We want an alert to run every day (Monday-Sunday) on a 30-minute interval, with one exception: it should not run on Wednesday and Friday from 5AM to 8AM. It should still run during all other hours on Wednesday and Friday. A single cron expression cannot achieve that, so we want to handle it in the alert logic instead.
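A sketch of that suppression inside the search itself, so the alert can stay on a plain */30 cron (day names and hour boundaries as described above):

... base search ...
| eval dow=strftime(now(), "%A"), hr=tonumber(strftime(now(), "%H"))
| where NOT ((dow="Wednesday" OR dow="Friday") AND hr>=5 AND hr<8)

During the excluded window the where clause empties the result set, so a trigger condition of "number of results > 0" never fires, while every other scheduled run behaves normally.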
Hello,

We are trying to set up Power BI integration with Splunk. We have Power BI installed on a Windows machine, and we also installed the ODBC driver to connect to Splunk. As part of the configuration we added credentials (the same ones with which we connect to our Splunk Cloud instance) and the URL in the Power BI Get Data options, but we are getting the below error:

Steps: Power BI Desktop -> Get Data -> Other -> ODBC -> and when OK is clicked, the above-mentioned error is displayed. Can you please suggest how I can fix this? Thank you.
Hello,

I want to monitor the health of the DB Connect app inputs and connections, and I noticed that the health monitor is not working. I'm getting the message "search populated no results". When I tried to investigate the issue, I found out that index=_internal is empty, which I guess is related. Can you please help me figure out why the index is empty and the health monitor is not working?
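A quick check worth running (a sketch; assumes your role has search access to _internal): confirm whether anything at all has arrived in _internal recently, and from which hosts:

| tstats count where index=_internal earliest=-1h by host, sourcetype

If this returns nothing, the problem is upstream of DB Connect — forwarding of internal logs, index retention, or role-based index restrictions — rather than the health monitor dashboards themselves.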
Installing Splunk 9.2.0.1 on Windows Server 2019 ends prematurely. I get the issue whether I install the .msi from cmd with /passive or install it through the GUI. I have seen the issue resolved on earlier Windows Server versions by creating a dummy string in regedit, but that does not work on Server 2019. I have a log file, but it is too big to be inserted in my post here. splunk log
splunkd is not running on Linux after the EC2 instance was stopped and started. I tried all of these commands:

./splunk start --debug
/opt/splunk/bin/splunk status
/opt/splunk/var/log/splunk/splunkd.log

But I can't find the solution. Please share the solution, with the Linux commands as well.
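A hedged sketch of the usual first steps for this symptom (paths assume a default /opt/splunk install; run as root — not a confirmed fix for this instance):

# enable boot-start so splunkd comes back automatically after an instance stop/start
/opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk

# then inspect the most recent startup messages for the actual failure reason
tail -n 100 /opt/splunk/var/log/splunk/splunkd.log

A common cause after a stop/start is stale pid/lock files or changed file ownership, so verifying that /opt/splunk is still owned by the user splunkd runs as is also worth doing.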
Series 'Accreditation Mission - Observability SE I (Partner) is in progress. I passed the test, but they said I had to do an additional demo and get a score of 80 or higher. There is insufficient i... See more...
Series 'Accreditation Mission - Observability SE I (Partner) is in progress. I passed the test, but they said I had to do an additional demo and get a score of 80 or higher. There is insufficient information about the demo progress. 1. Should it be conducted in English? 2. Should I proceed according to the splunk show?    
For the past couple of weeks, at least once per day one of our indexers has gone into internal-logs-only mode, and the reason it states is that the license is expired. It's a bogus message, since the license definitely is not expired, is not even close to being exceeded, and restarting the Splunk service on the indexer always clears the error. Unfortunately, the Splunk logs don't provide much more that would indicate anything I can investigate. Has anyone run into something similar, or know where I can look to troubleshoot this further? It's making my life pretty tough, because I have to constantly restart indexers due to this error.
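One place to start digging (a sketch; the terms below are the usual license-related ones, not confirmed for this case): pull license-related warnings and errors from _internal for the affected indexer around the time it flips to internal-logs-only:

index=_internal sourcetype=splunkd host=<affected_indexer> (log_level=WARN OR log_level=ERROR) (license OR LMTracker)

If the indexer is a license peer, intermittent failures to reach the license manager can also trigger enforcement messages, so correlating these events with connectivity errors to the license manager host may narrow it down.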
We are planning to migrate a server that plays multiple roles (DS, HEC, proxy, SC4S, syslog, etc.) to multiple servers, by splitting the roles: e.g., server A plays the DS role, server B takes care of the HEC services, and so on. What would be the easiest approach to achieve this? It seems like a lot of work. Would it be recommended to do so in the first place? What criteria should we have in mind while doing this migration?
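For the DS piece specifically, the cutover is largely a matter of repointing the clients; a sketch of the deploymentclient.conf change pushed to the forwarders (hostname is a placeholder):

[target-broker:deploymentServer]
targetUri = new-ds.example.com:8089

HEC and syslog/SC4S cutovers are similarly mostly DNS or load-balancer repointing once the new servers carry the same configuration, which is one argument for fronting each role with its own DNS alias before starting the migration.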
I have syslog events being written locally to a HF via syslog-ng. These events are then consumed via a file monitor input, and the IP address in the log name is extracted as host. I now want to run an INGEST_EVAL on the IP address and use a lookup to change the host.

If I run the command from search, I get the required result:

index=... | eval host=json_extract(lookup("lookup.csv",json_object("host",host),json_array("host_value")),"host_value")

This replaces host with "host_value". I have this working on an AIO instance with the config below. Now adding it to the HF tier:

/opt/splunk/etc/apps/myapp/lookups/lookup.csv (the lookup has global access and export = system)

host,host_value
1.2.3.4,myhostname

props.conf:

[mysourcetype]
TRANSFORMS-host_override = host_override

transforms.conf:

[host_override]
INGEST_EVAL = host=json_extract(lookup("lookup.csv",json_object("host",host),json_array("host_value")),"host_value")

When applied on the HF (restarted), I see some of the hostnames are changed to "localhost" while the others remain unchanged (but this may be due to the config not working, OR the data coming from another HF where the test config is not applied).

Any ideas? Thanks.
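One hedged guess worth testing: when the lookup has no match, json_extract() returns null, so INGEST_EVAL sets host to null and Splunk falls back to a default value, which could explain the "localhost" results. Wrapping the expression in coalesce() preserves the original host on a miss (a sketch, using the stanza name from the post):

[host_override]
# keep the original host when the lookup does not return a match
INGEST_EVAL = host=coalesce(json_extract(lookup("lookup.csv",json_object("host",host),json_array("host_value")),"host_value"), host)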