All Topics

I'm trying to optimize my alerts because I'm having issues. Where I work, it takes a while (1 to 3 days) to solve the underlying problem once an alert is triggered, so the same alert keeps firing during that window. I can't use Throttle because my alerts do not depend on a single host or event. For example:

index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db
| dedup 1 host state_desc
| streamstats values(state_desc) as State by host
| eval Estado=case(State!="ONLINE", "Critico", State="ONLINE", "Safe")
| table Estado host State _time
| where Estado="Critico"

When the status of a host changes to critical, the alert triggers. I can't throttle it, because during the time span in which the alert is silenced another host may go critical and that trigger would be suppressed along with everything else. My idea is to build logic that compares the results of the last triggered alert with the current results: if the host and status are the same, nothing fires; if the host and status differ from the previous trigger, the alert should fire. I thought about querying wherever the last alert's results are stored, but I don't know how to search for that information. Does anyone have an idea? Any comment is greatly appreciated.
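One possible direction (a rough, untested sketch rather than a confirmed solution): persist the current set of critical hosts to a CSV lookup on every run, and only alert on host/state combinations that differ from what the previous run stored. The lookup name last_critical_state.csv is made up, and the file needs to exist (even empty, with host and Estado header columns) before the first run:

index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db
| dedup 1 host state_desc
| eval Estado=if(state_desc!="ONLINE", "Critico", "Safe")
| where Estado="Critico"
| lookup last_critical_state.csv host OUTPUT Estado AS prev_Estado
| outputlookup last_critical_state.csv
| where isnull(prev_Estado) OR Estado!=prev_Estado

Because outputlookup rewrites the lookup before the final where, hosts that are still critical remain recorded (and do not re-alert), while hosts that newly turn critical, or change state, pass the filter and trigger the alert.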
I would like to know the duration of the voucher for taking the Splunk Power User exam, as I am unable to find the expiration date anywhere. Thank you, kind regards.  
Hello all, I have the following case: Splunk is accessible on https://dh2.mydomain.com/sendemail931 with "enable_spotlight_search = true" in web-features.conf. If I search for anything and a result/match is shown, clicking it gives me "The requested URL was not found on this server.", because the root_endpoint is being removed from the URL. Splunk is behind a reverse proxy (httpd) and an application load balancer. Upon clicking the result, I'm redirected to https://dh2.mydomain.com/manager/launcher/admin/alert_actions/email?action=edit, but it should be https://dh2.mydomain.com/sendemail931/en-US/manager/launcher/admin/alert_actions/email?action=edit. I'm pretty sure the redirect is happening internally, because I cannot see any relevant logs on the Apache side. I've tried adding the following to web.conf, but the result is the same:

tools.proxy.base = https://dh2.mydomain.com/sendemail931/
tools.proxy.on = true

This is the only case where root_endpoint is not preserved. I've tried to reverse-engineer why this could happen and found that the request is handled by common.min.js, I guess somewhere here: {title:(0,r._)("Alert actions"),id:"/servicesNS/nobody/search/data/ui/manager/alert_actions",description:(0,r._)("Review and manage available alert actions"),url:"/manager/".concat(e,"/alert_actions"),keywords:[]}  + here: {var o=m.default.getSettingById(r);if(void 0===o)return;return(0,b.default)(o.title,o.url,O.length),n=o.url,void(window.location.href=n)
Hello everyone! I'm not sure how to correctly name this, but I will try to explain what I want to achieve. In our infrastructure we have plenty of Windows Server instances with the Universal Forwarder installed. All servers are divided into groups according to the application they host. For example, Splunk servers have group 'spl', remote desktop session servers have group 'rdsh', etc. Each server has an environment variable holding this group value. By design, the access policy to logs is built on these groups: one group, one index. Because of this, each UF input stanza has the option "index = <group>". Under this scheme, the introspection logs of all UF agents land in the 'spl' (Splunk) group/index. And here the nuisance starts: sometimes UF agents report errors that require action on the host itself, for example restarting the agent manually. I see these errors because I have access to the 'spl' index, but I don't have access to all the Windows machines, so I have to notify the machine owner manually. So, the question is: how can I create a sort of tag or field on the UF that lets me separate the Splunk UF logs by these groups? Maybe I can use our environment variable to achieve it? I only need this field at search time, to create alerts that notify the machine owners instead of me.
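A sketch of one way this is sometimes handled (untested here, and it assumes your deployment tooling can template the value per host, since the group lives in an environment variable): add an indexed field to everything a given UF sends by putting a _meta key in its inputs.conf, for example in a small app pushed per group:

# inputs.conf on a host belonging to the 'rdsh' group (app and value are illustrative)
[default]
_meta = group::rdsh

At search time you could then filter on group::rdsh (or expose the indexed field normally via a fields.conf entry on the search head) and route alerts to the right machine owners. The per-group templating and the app layout are assumptions on my part.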
I am using the same index for both a stats distinct count and a timechart distinct count, but the results from timechart are always higher. Does anyone know the reason behind this and how to resolve it? I have also tried bucketing with a span and plotting the timechart, but the results still did not match the stats distinct count. Please help.
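For what it's worth, a per-bucket distinct count is expected to disagree with an overall distinct count, because the same value is counted once in every bucket it appears in, so the per-bucket numbers cannot simply be compared or summed. A quick illustration (the index and field names are placeholders):

index=your_index | stats dc(clientip) AS total_distinct
index=your_index | timechart span=1d dc(clientip) AS daily_distinct

A clientip that shows up on three different days contributes 1 to total_distinct, but 1 to each of the three daily_distinct values.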
There is a user, let's say ABC, and I want to check why his AD account is locked.
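A possible starting point, assuming the Windows Security event log is being collected (the index, sourcetype, and extracted field names below are assumptions and depend on your Windows add-on configuration): Windows records event ID 4740, "A user account was locked out", and its Caller Computer Name value usually points at the machine where the bad password attempts originated:

index=wineventlog EventCode=4740 user=ABC
| table _time, user, host, Caller_Computer_Name

From there you can pivot to failed-logon events (EventCode 4625) on that source machine to see what keeps trying the wrong password.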
Dear Splunk Dev team, one more simple typo issue. A fresh install of Splunk 9.4.0 (last week's version 9.3.2 also had this issue, but I thought I'd wait for the next version before posting) shows the warning message: "Error in 'lookup' command: Could not construct lookup 'test_lenlookup, data'. See search.log for more details." (On older Splunk versions I remember this search.log, but nowadays neither search.log nor searches.log is available.) https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/WhatSplunklogsaboutitself According to "What Splunk logs about itself", the message should read "See searches.log for more details." A bigger issue: neither search.log nor searches.log is available. None of these searches return anything (the docs say the Splunk search logs are located in sub-folders under $SPLUNK_HOME/var/run/splunk/dispatch/):

index=_* source="*search.log" OR index=_* source="*searches.log" OR index=_* source="C:\Program Files\Splunk\var\run\splunk\dispatch*"

I will post this to Splunk Slack as well, thanks. If any post helped you in any way, please consider adding a karma point, thanks.
For simplicity, assume I have the following saved as a report (testReport): index=testindex host=testhost earliest=-90m latest=now I need to create two bar graphs in the same chart comparing two dates. For starters, I need to be able to run the above with a time range I specify, overriding the time range in the report: | savedsearch "testReport" earliest="12/08/2024:00:00:00" latest="12/08/2024:23:59:00" I have seen a few similar questions here, but I don't think any of them has a working solution.
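One way to get two specific days side by side in a single chart, sketched against the underlying search rather than the saved report (the second date and the series labels are just examples; append is subject to the usual subsearch result limits):

index=testindex host=testhost earliest="12/08/2024:00:00:00" latest="12/08/2024:23:59:00"
| eval series="Dec 08"
| append [ search index=testindex host=testhost earliest="12/09/2024:00:00:00" latest="12/09/2024:23:59:00" | eval series="Dec 09" ]
| eval hour=strftime(_time, "%H")
| chart count over hour by series

Mapping both days onto hour-of-day gives them a common x-axis, so the bar chart renders two bars per hour, one per date.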
https://docs.splunk.com/Documentation/Splunk/9.4.0/ReleaseNotes/MeetSplunk#What.27s_New_in_9.4

Why the new Splunk TcpOutput persistent queue?

- Scheduled loss of connectivity for an extended period, with the need to resume data transmission once the connection is back up. Assuming there is enough storage, the tcpout output queue can persist all events to disk instead of buying an expensive (unsupported) third-party subscription to persist data to SQS/S3.
- If there are two tcpout destinations and one is down for an extended period, and the down destination has a large enough PQ to persist data, the second destination is not blocked. The second destination blocks only once the PQ of the down destination is full.
- No need to pay for third-party SQS and S3 puts.
- A third-party / external S3 persistent queue introduces permanent additional latency (due to the detour to the external SQS/S3 queue), and there is a chance of losing events (events ending up in the SQS/S3 DLQ).
- Third-party / external SQS/S3 persistent queuing requires batching events, which adds further latency in order to reduce SQS/S3 put costs.
- Unwanted additional network bandwidth is consumed by uploading all data to SQS/S3 and then downloading it again, and third parties impose upload payload size limits.
- Monitored corporate laptops are off network, not connected to the internet, or not connected to VPN for extended periods of time. Laptops might later get switched off, but events should be persisted and forwarded as and when the laptop connects to the network.
- Sensitive data stays persisted within the network.
- On-demand persistent queuing on the forwarding tier when indexer clustering is down.
- On-demand persistent queuing on the forwarding tier when indexer clustering indexing is slow due to high system load.
- On-demand persistent queuing on the forwarding tier when indexer clustering is in a rolling restart.
- On-demand persistent queuing on the forwarding tier during an indexer clustering upgrade.
- No need to use the decade-old S2S protocol version as suggested by some third-party vendors (you all know enableOldS2SProtocol=true in outputs.conf).

How to enable? Just set persistentQueueSize as per outputs.conf:

[tcpout:splunk-group1]
persistentQueueSize=1TB

[tcpout:splunk-group2]
persistentQueueSize=2TB

Note: Sizing guide coming soon.
Hi, I'm using the journald input in the universal forwarder to collect logs from journald: https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/CollecteventsfromJournalD. The data arrives at my indexer as expected. One of the fields that I send with the logs is the TRANSPORT field, and when I search the logs I can see that the TRANSPORT event metadata is present as expected. I would like to set the sourcetype dynamically based on the value of the TRANSPORT field. Here are the props.conf and transforms.conf that I'm trying to use.

props.conf:
[default]
TRANSFORMS-change_sourcetype = set_new_sourcetype

transforms.conf:
[set_new_sourcetype]
REGEX = TRANSPORT=([^\s]+)
FORMAT = sourcetype::test
DEST_KEY = MetaData:Sourcetype

Unfortunately, the above seems to have no impact on the logs. I think the problem lies in the REGEX: when I change it to REGEX = .* , all of the events get the sourcetype set to test as expected. Why can't I use the TRANSPORT field in the REGEX?
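A guess at what may be going on (not verified against this exact setup): index-time transforms run against _raw by default, and with the journald input TRANSPORT is delivered as indexed metadata rather than as text inside _raw, which would explain why REGEX = .* matches but TRANSPORT=... never does. If that is the case, pointing the transform's SOURCE_KEY at the metadata might help, along these lines:

[set_new_sourcetype]
SOURCE_KEY = _meta
REGEX = TRANSPORT::(\S+)
FORMAT = sourcetype::test
DEST_KEY = MetaData:Sourcetype

Both the SOURCE_KEY value and the key::value form of the metadata are assumptions here, as is the question of where the transform has to live (indexer vs. heavy forwarder), so treat this as a direction to test rather than a confirmed fix.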
Hello, in case you can't use an SSL certificate, you may modify the Cybereason Python script.
Hello everybody, I am facing some challenges with a custom log file containing bits of XML surrounded by some sort of headers. The file looks something like this:

[1][DATA]BEGIN --- - 06:03:09[012]
<xml>
<tag1>value</tag1>
<nestedTag>
<tag2>another value</tag2>
</nestedTag>
</xml>
[1][DATA]END --- - 06:03:09[012]
[1][DATA]BEGIN --- - 07:03:09[123]
<xml>
<tag1>some stuff</tag1>
<nestedTag>
<tag2>other stuff</tag2>
</nestedTag>
</xml>
[1][DATA]END --- - 07:03:09[123]
[1][DATA]BEGIN --- - 08:03:09[456]
<xml>
<tag1>some more data</tag1>
<nestedTag>
<tag2>fooband a bit more</tag2>
</nestedTag>
</xml>
[1][DATA]END --- - 08:03:09[456]

It is worth noting that the XML parts can be very large. I would like to take advantage of Splunk's automatic XML parsing, as it is not realistic to do it manually in this case, but the square-bracket lines around each XML block seem to prevent the XML parser from doing its job and I get no field extraction. So, what I would like to do is:
- Convert the "data begin" line with the square brackets, before each XML block, into an XML-formatted line, so that I can use it for the time of the event (the date itself is encoded in the filename...) and let Splunk parse the rest of the XML data automatically.
- Strip out the lines with the "data end" bit after each block of XML. These are not useful, as they carry the same time as the "data begin" line.
- Aggregate the XML lines of the same block into one event.

What I have tried with props.conf and transforms.conf:

props.conf:
[my_sourcetype]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
KV_MODE = xml
LINE_BREAKER = \]([\r\n]+)\[1\]\[DATA\]BEGIN
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
pulldown_type = true
TRANSFORMS-full=my_transform # only with transforms.conf v1
TRANSFORMS-begin=begin # only with transforms.conf v2
TRANSFORMS-end=end # only with transforms.conf v2

transforms.conf (version 1):
[my_transform]
REGEX = (?m)\[1\]\[DATA\]BEGIN --- - (\d{2}:\d{2}:\d{2}).*([\r\n]+)([^\[]*)\[1\]\[DATA\]END.*$[\r\n]*
FORMAT = <time>$1</time>$2$3
WRITE_META = true
DEST_KEY = _raw

transforms.conf (version 2):
[begin]
REGEX = (?m)^\[1\]\[DATA\]BEGIN --- - (\d{2}:\d{2}:\d{2}).*$
FORMAT = <time>$1</time>
WRITE_META = true
DEST_KEY = _raw

[end]
REGEX = (?m)^\[1\]\[DATA\]END.*$
DEST_KEY = queue
FORMAT = nullQueue

With the various combinations listed here, I got all sorts of results:
- well separated events, but with the square-bracket lines left over
- one big block with all events aggregated together and no override of the square-bracket lines
- one event with the begin square-bracket line, truncated at 10k characters
- 4 events with one "time" XML tag but nothing else...

Could anybody help me out with this use case? Many thanks, Alex
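A possibly simpler angle that might be worth testing (an untested sketch: it relies on SEDCMD to strip the header and footer lines at parse time instead of rewriting them into XML, and it assumes SEDCMD is applied after timestamp extraction, which is my recollection of the pipeline order):

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\[1\]\[DATA\]BEGIN
TIME_PREFIX = \[1\]\[DATA\]BEGIN --- - 
TIME_FORMAT = %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 40
SEDCMD-strip_headers = s/\[1\]\[DATA\](BEGIN|END)[^\r\n]*//g
KV_MODE = xml

The idea: break events at each BEGIN line, read the time from that line, then delete both the BEGIN and END lines so the remaining _raw is essentially pure XML and KV_MODE = xml can extract fields at search time. Since the date lives only in the filename, Splunk may fall back to a date found in the source file name when the event carries only a time, but that behaviour is worth verifying against your data before relying on it.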
(This is the first in a series of two blog posts.) Splunk Enterprise Security is a fantastic tool that offers robust features to give you great insights into your protected infrastructure, helping you strike a nice balance between visibility and reducing alert fatigue. In my role at Splunk, I often design and test non-production environments for Splunk Enterprise Security. Lately, I've been diving into various correlation searches (ES7) and detections (ES8). During this journey, I've found myself wondering, "Why isn't this finding being generated?" While troubleshooting, I came across a few scenarios that I think are worth sharing.

Now, my situation is pretty straightforward since I have control over the data my instance receives, so it's easy for me to pinpoint when findings should have popped up. That might not be the case in a real production environment, which is why having a solid and regular plan for testing detections in your environment is so important. And don't forget, Splunk has a wealth of resources available to help you tackle these challenges! Just a heads-up: I'll be sticking with Enterprise Security 8 terminology for the rest of this post. A comprehensive glossary can be found here; however, here is a little table with the closest equivalents in Enterprise Security 7 terminology to keep things clear.

Reason #1: The search is not valid (any longer?).

Occam's principle suggests that the simplest explanation is often the most accurate. While this may seem straightforward, there have been instances where I anticipated a finding, only to discover that my initial search was misguided. Let's illustrate this with an example. In this scenario, I have established a rule that has succeeded 25 times before; however, in the last 24 hours there are no items in the queue, even though I'm certain some matching events happened. So, what's the deal with these events not showing up?

To get to the bottom of this, let's take a closer look at the detection search; just click on the detection name to move forward. If you are familiar with Splunk Enterprise Security, you know this particular search is rather simple. In practice you would want more restrictive queries to avoid performance overhead, but its simplicity will help us illustrate the idea. Now, let's copy that detection and run it directly against our data. We could go to the Search & Reporting app, but there is no need, as there is a Search tab on the Enterprise Security top menu.

No results in the past 24 hours. Not even in verbose mode. Interesting. So I removed one of the search constraints, in this case the tag, and ran the search again. We have results; hence, the problem must be with the tags. To prove it, I removed the table command, executed my search again, and checked the available tags. As expected, the privileged tag was not there. The privileged tag had been removed from some users, affecting the detection. The solution is to either edit the search and remove the tag constraint, or restore the tag on the data. I went with editing the detection, and after a while I got new hits on it. In this case, the actual root cause was a miscommunication between the Splunk administrator and the Splunk Enterprise Security detection engineer. Something that would never happen in your organization, right?
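If you want to run the same kind of check in your own environment, here is a rough sketch (the index, sourcetype, and field names are placeholders, and "privileged" is simply the tag from this example): run the detection's base search without the tag constraint and look at which tags the matching events actually carry.

index=your_index sourcetype=your_sourcetype user=*
| stats values(tag) AS tags, count BY user

If the tag your detection filters on does not appear in the tags column for the users you expect, the tag constraint is what is suppressing the findings.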
Now, if you don't feel as comfortable with SPL, or even if you are an expert but would love some assistance, remember you can leverage the guided mode. As an aside, this seems like a perfect opportunity to introduce detection versioning.

Reason #2: Wrong time range.

This is a subcategory of the previous reason, but one that deserves to be mentioned on its own. The time range of the search defined on a detection is shown in the time range section. Let's analyze the example below: the search runs every 60 minutes, capturing all events that occurred in the window from 72 minutes before the minute of detection time to 12 minutes before execution time. As a result, if the detection is scheduled to run at 10:20 PM but the event happened at 10:18 PM, it won't show up until the next iteration, at 11:20 PM.

Now, usually you won't be messing with timing and will keep the latest time as close to real time as possible. Honestly, I have seen more instances of events duplicated because of time adjustments: if the cron schedule is set to every 10 minutes and each run includes events from the past 60 minutes, you may end up with duplicated results. Indeed, over-correcting that second scenario is how I ended up creating some uncovered time windows. By the way, pay close attention to snapping. I will come back with some more reasons I have found in an upcoming post.
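To make the arithmetic concrete, the schedule described above corresponds roughly to the following scheduling fields (expressed in savedsearches.conf terms; the values come from the example, not from a real detection):

cron_schedule = 20 * * * *
dispatch.earliest_time = -72m@m
dispatch.latest_time = -12m@m

The 10:20 PM run therefore covers 9:08 PM to 10:08 PM, which is why an event from 10:18 PM only appears in the 11:20 PM run.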
Hi all, I've installed the Splunk Add-on for Unix and Linux in both Splunk Enterprise and my forwarder, which is running 9.3.2. However, I keep running into the error below:

12-19-2024 15:54:30.303 +0000 ERROR ExecProcessor [1376795 ExecProcessor] - message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/vmstat_metric.sh" /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/hardware.sh: line 62: /opt/splunkforwarder/var/run/splunk/tmp/unix_hardware_error_tmpfile: No such file or directory

The above is from splunkd.log after I stopped and restarted SplunkForwarder.service. I am very new to Splunk and do not possess any certifications. My company has tasked me with learning and configuring Splunk, and I am enjoying it, except that I am unable to get this data sent to my indexer so that I can see it in Search and Reporting. These are the steps taken so far:
- Installed the Splunk Add-on for Unix and Linux on my Enterprise UI machine
- Installed the Splunk Add-on for Unix and Linux on my UF
- As the local directory was not created at /opt/splunkforwarder/etc/apps/Splunk_TA_nix/, I created it, copied inputs.conf from default to local, and then enabled the scripts I wanted
- Made sure the splunk user was the owner and had the privileges needed on the local directory
- Stopped the Splunk service and restarted it
- Ran: cat /opt/splunkforwarder/var/log/splunk/splunkd.log | grep ERROR

Almost every error is "unix_hardware_error_tmpfile: No such file or directory". If I create the tmpfile, it disappears and is not recreated. I'm sure there are other things I didn't mention because I honestly don't remember; I have been trying to figure this out since yesterday and am not getting anywhere. PLEASE HELP!
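One thing that may be worth checking (an assumption based purely on the error text, not a confirmed fix): hardware.sh writes its temp file under $SPLUNK_HOME/var/run/splunk/tmp, and the error reads as if that directory, rather than the file itself, is missing on the forwarder. Assuming the forwarder runs as the splunk user, something like:

sudo mkdir -p /opt/splunkforwarder/var/run/splunk/tmp
sudo chown -R splunk:splunk /opt/splunkforwarder/var/run/splunk/tmp
sudo systemctl restart SplunkForwarder.service

If the directory keeps disappearing after restarts, it is worth checking whether a cleanup job is removing it and whether the splunk user can actually write there.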
I am trying to set up a synthetic browser test that makes use of variables. I can't seem to find information about the usage of variables in browser tests other than this. So far I have tried to:
- Access a global variable as a URL in a "Go to URL" step -> which leads to "https:// is not a valid URL", although I can see the variable in the "Variables" tab
- Access global/predefined variables in "Execute JavaScript" steps -> undefined
- Set a variable via "Save return value from JavaScript" to a variable var and try to reuse it, both in "Assert text present" with {{custom.$var}} -> undefined, and in "Execute JavaScript" with custom.var -> undefined
- Assign var/const in an "Execute JavaScript" step and reference it in consecutive "Execute JavaScript" steps -> undefined
- Access built-in variables in "Execute JavaScript" steps (except those that are only available in API tests) -> undefined

Which raises the following questions for me:
- What is the idiomatic way to use variables in synthetic browser tests? So far it seems to me that they can only be used to fill a field, as that is the only action mentioned here, and no other action I tried seems to support variables, which would quite honestly be really disappointing.
- Am I overlooking any documentation?
- Which kinds of actions support the use of variables created by other steps?

Thank you
Is there a way to capture the Thread Name for Business Transactions in Java? I see RequestGUID and URL are captured when looking through the UI. Thanks.
Hi everyone, I need to send a hard-coded message to users just before every daylight saving change of the year, saying "Daylight savings is scheduled tomorrow, please be alerted", and I don't want to use any index for that, just the hard-coded message. Is it possible to create an alert based on this requirement?
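One index-free way to approach this (a sketch; it assumes the search head's time zone is the one whose daylight saving change you care about): build the alert on makeresults and fire only when tomorrow's UTC offset differs from today's:

| makeresults
| eval offset_today=strftime(now(), "%z"), offset_tomorrow=strftime(relative_time(now(), "+1d"), "%z")
| where offset_today!=offset_tomorrow
| eval message="Daylight savings is scheduled tomorrow, please be alerted"

Schedule it once a day and set the alert to trigger when the number of results is greater than zero; the hard-coded message field can then be included in the notification.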
I have an app server running a custom application that is, unfortunately, a bit buggy. The bug causes its service to spike in CPU usage and degrade performance. There's a fix in the works, but because I can manually resolve it by restarting the service, it is lower on the priority list. I currently use Splunk to send me an alert when CPU usage reaches 80% or more; this lets me get in there and do the restart before performance degrades. It looks like Splunk used to have a simple feature to run a script in the UF's /bin/ directory, which would have made this pretty simple, but it is deprecated and I assume doesn't work at all. Now, however, we're supposed to create a custom alert action to reinvent this behavior. Following the basic directions here, I've found that I don't have the ability to create a new alert action: Create alert actions - Splunk Documentation. I can "Browse More" and view the existing ones, but there's no ability to create anything new. Is there some sort of prerequisite before this can be done? It does not appear to be mentioned in the documentation if that's the case. Alternatively, does Splunk still trigger scripts even though the feature is deprecated? All of the above can be learned, but it seems like a lot of overhead to have one specific server run net stop [service] && net start [service].
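For what it's worth, custom alert actions are not created through the UI at all; they are defined in an app's configuration, which may be why there is no create button. A minimal, untested skeleton (the app name, stanza name, and script name here are made up):

# $SPLUNK_HOME/etc/apps/restart_service_alert/default/alert_actions.conf
[restart_service]
is_custom = 1
label = Restart buggy service
payload_format = json
alert.execute.cmd = restart_service.py

The referenced script lives in the app's bin directory. One caveat worth keeping in mind: alert actions (and the older scripted alerts) execute on the search head, not on the universal forwarder, so actually restarting a Windows service on the app server still needs some remote-execution mechanism (WinRM, a scheduled task watching for a flag file, etc.) driven by that script.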
Hi Team, we have recently installed the OCI add-on on a Splunk heavy forwarder to collect OCI logs from Oracle Cloud instances. After installing and configuring the OCI inputs, we are getting the errors below. Can you please help us with the resolution?

12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" oci._vendor.requests.exceptions.SSLError: (MaxRetryError("OCIConnectionPool(host='cell-1.streaming.XX.XX.XX.oci.oraclecloud.com', port=443): Max retries exceeded with url: /20180418/streams/XX.XX.XX/groupCursors (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))"), 'Request Endpoint: POST https://XX.XX.XX/groupCursors See https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_troubleshooting.htm for help troubleshooting this error, or contact support and provide this full error message.')
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" During handling of the above exception, another exception occurred:
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" Traceback (most recent call last):
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/multiprocess/pool.py", line 121, in worker
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" result = (True, func(*args, **kwds))
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 102, in get_messages
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" cursor = get_cursor_by_group(global_stream_clients[i], stream_id, stream_id, opt_partition)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 59, in get_cursor_by_group
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = sc.create_group_cursor(sid, cursor_details)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/streaming/stream_client.py", line 505, in create_group_cursor
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" api_reference_link=api_reference_link)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/retry/retry.py", line 308, in make_retrying_call
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = func_ref(*func_args, **func_kwargs)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/base_client.py", line 485, in call_api
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = self.request(request, allow_control_chars, operation_name, api_reference_link)
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/base_client.py", line 606, in request
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" raise exceptions.RequestException(e)
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" """
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" The above exception was the direct cause of the following exception:
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" Traceback (most recent call last):
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 510, in stream_events
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" get_response = r.get()
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/multiprocess/pool.py", line 657, in get
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" raise self._value
I have been trying to monitor an SQLite database, and have been having nothing but problems. I managed to find some stanzas that apparently worked for other people, notably this one: https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitor-SQLite-database-file-with-Splunk-DB-Connect/m-p/294331 I am actually able to see the driver in the installed drivers tab, and I can see my stanza among the possible connections when trying to test a query. I used exactly what was in that previous question and it didn't work, and I have tried several other changes; I currently have this:

db_connection_types.conf:
[sqlite]
displayName = SQLite
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = org.sqlite.JDBC
jdbcUrlFormat = jdbc:sqlite:<database>
ui_default_catalog = main
database = main
port = 443

db_connections.conf:
[incidents]
connection_type = sqlite
database = /opt/tece/pb_data/data.db
host = localhost
identity = owner
jdbcUrlFormat = jdbc:sqlite:<database>
jdbcUseSSL = 0

I am getting this error now, and I also see this in the logs:

2024-12-19 14:38:59.018 +0000 Trace-Id= [dw-36 - GET /api/inputs] INFO c.s.d.s.dbinput.task.DbInputCheckpointFileManager - action=init_checkpoint_file_manager working_directory=/opt/splunk/var/lib/splunk/modinputs/server/splunk_app_db_connect
2024-12-19 14:39:15.807 +0000 Trace-Id=6dac40b0-1bcc-4410-bc28-53d743136056 [dw-40 - GET /api/connections/incidents/status] WARN com.splunk.dbx.message.MessageEnum - action=initialize_resource_bundle_files error=Can't find bundle for base name Messages, locale en_US

I have tried two separate SQLite drivers: the most up-to-date one, and the one specifically for the version of SQLite that the database uses. Anyone have any ideas?