All Posts

Thank you for the detailed explanation; I truly appreciate it.
Happens in Splunk Enterprise v9.4.0 for Windows too.
I suspect the OTel auto instrumentation for PHP is not very mature as of today. I experimented with it and ran into similar challenges. When I manually instrumented PHP, it did work and I can see traces that way.
Hi Everyone, I need to send a hard-coded message to the users just before every daylight saving change of the year saying "Daylight savings is scheduled tomorrow, please be alerted", and I don't want to use any index for that, just the hard-coded message. Is it possible to create an alert based on this requirement?
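One possible approach (just a sketch, assuming the goal is a scheduled notification rather than a data-driven alert) is a scheduled search built on makeresults, which reads no index at all:

| makeresults
| eval message="Daylight savings is scheduled tomorrow, please be alerted"

Saved as an alert, this could be scheduled with a cron expression for the day before each DST change, with the trigger condition "number of results is greater than 0" and an email (or other) alert action that carries the message field.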
Pickle Rick, Thanks for the link - I did come across that early in my troubleshooting, wondering if I had inherited a multi-lingual setup like yours. However, in my case it looks like my Splunk instance is actually missing the underlying Windows components that allow it to recognize these "Objects". This was confirmed when a command such as (Get-Counter -ListSet *).Counter | Select-String "\\Processor*" returned the "Processor Information" work-around I had been using but not Processor itself, nor would (Get-Counter -ListSet *).Counter return any of the Objects Splunk reported as missing when I checked Data Inputs. Working with a tech on this at the moment - this is certainly not something I've encountered before.
Let's think about this from 2 perspectives: sending logs and ingesting logs. Splunk Enterprise and Splunk Cloud are where logs are ingested, so you can send logs there using any method you prefer. There are countless ways to send logs; some examples include the Splunk universal forwarder, the OpenTelemetry collector, and fluentd. With the OTel collector, you choose which receiver to use to collect logs, such as the filelog or otlp receivers. The OTel collector uses exporters to send those logs to a logging backend like Splunk Enterprise/Cloud. Splunk Observability Cloud ingests metrics and traces, and it uses an integration called Log Observer Connect to read logs from Splunk Cloud/Enterprise and display and correlate them with metrics and traces, so you can see all 3 signals in one place.

In the OTel yaml you shared, that is your pipeline configuration, where you're telling an OTel collector how to receive, process, and export your telemetry. For example, in your "logs" pipeline, you're receiving logs from the fluentforward and otlp receivers, you're processing those logs with the memory_limiter, batch, and resourcedetection processors, and then you're exporting log data to the splunk_hec and splunk_hec/profiling endpoints. The splunk_hec exporter represents an HTTP Event Collector endpoint on Splunk Cloud/Enterprise, and the splunk_hec/profiling exporter represents a special Observability Cloud endpoint dedicated to code profiling data (not typical logs, but still technically logs).
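To make the shape of that configuration concrete, here is a stripped-down illustration of how a logs pipeline section of an OTel collector yaml generally looks (the component names mirror the ones mentioned above, not your exact file):

service:
  pipelines:
    logs:
      receivers: [fluentforward, otlp]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [splunk_hec, splunk_hec/profiling]

Each receiver, processor, and exporter listed in the pipeline also has its own definition in the top-level receivers:, processors:, and exporters: sections of the same file.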
I have an app server running a custom application that is, unfortunately, a bit buggy.  This bug causes its service to spike in CPU usage and degrade performance.  There's a fix in the works, but because I can manually resolve it by restarting the service it is lower on the priority list. I currently use Splunk to send me an alert when CPU usage gets to 80% or more - this lets me get in there to do the reset before performance degrades. It looks like Splunk used to have a simple feature to run a script in the UF's /bin/ directory, which would have made this pretty simple - but it is deprecated and I assume doesn't work at all.  Now, however, we're supposed to create a custom alert action to reinvent this functionality. Following the basic directions here, I've come to find I don't have the ability to create a new Alert Action:  Create alert actions - Splunk Documentation I can "Browse More" and view the existing ones, but there's no ability to create anything new.  Is there some sort of prerequisite before this can be done?  It does not appear to be mentioned in the documentation if that's the case. Alternatively, does Splunk still trigger scripts even though the feature is deprecated?  The above needs to be learned, but it seems like a lot of overhead to have one specific server run net stop [service] && net start [service].
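For reference, custom alert actions are packaged in an app rather than created through the UI, which may be why there is no "create" button in the Alert actions manager. A minimal, hypothetical sketch (the stanza and file names below are made up for illustration) looks roughly like this:

# default/alert_actions.conf in a small app on the search head
[restart_app_service]
is_custom = 1
label = Restart app service
description = Restart the buggy service when the CPU alert fires
payload_format = json

# bin/restart_app_service.py (named after the stanza) would read the alert
# payload from stdin and perform the restart; note that alert actions run on
# the search head, so reaching the remote app server would still need some
# remote-execution mechanism.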
@isoutamo, Thank you for your attention to my problem. I saw this post, and I also saw the resolution—create the user 'system'. But my case is a little bit different because the errors have no information about which user is absent - only empty quotes with nothing inside.
From where you are, you could simply do something like this:

| filldown Threshold
Hi Team, we have recently installed the OCI add-on on a Splunk heavy forwarder to collect OCI logs from Oracle Cloud instances. After installing and configuring the OCI inputs, we are getting the errors below. Can you please help us with a resolution?

12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" oci._vendor.requests.exceptions.SSLError: (MaxRetryError("OCIConnectionPool(host='cell-1.streaming.XX.XX.XX.oci.oraclecloud.com', port=443): Max retries exceeded with url: /20180418/streams/XX.XX.XX/groupCursors (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))"), 'Request Endpoint: POST https://XX.XX.XX/groupCursors See https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_troubleshooting.htm for help troubleshooting this error, or contact support and provide this full error message.')
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" During handling of the above exception, another exception occurred:
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" Traceback (most recent call last):
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/multiprocess/pool.py", line 121, in worker
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" result = (True, func(*args, **kwds))
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 102, in get_messages
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" cursor = get_cursor_by_group(global_stream_clients[i], stream_id, stream_id, opt_partition)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 59, in get_cursor_by_group
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = sc.create_group_cursor(sid, cursor_details)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/streaming/stream_client.py", line 505, in create_group_cursor
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" api_reference_link=api_reference_link)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/retry/retry.py", line 308, in make_retrying_call
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = func_ref(*func_args, **func_kwargs)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/base_client.py", line 485, in call_api
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = self.request(request, allow_control_chars, operation_name, api_reference_link)
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/base_client.py", line 606, in request
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" raise exceptions.RequestException(e)
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" """
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" The above exception was the direct cause of the following exception:
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" Traceback (most recent call last):
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 510, in stream_events
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" get_response = r.get()
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/multiprocess/pool.py", line 657, in get
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" raise self._value
I have been trying to monitor a SQLite database, and have been having nothing but problems. I managed to find some stanzas that apparently worked for other people, notably this one: https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitor-SQLite-database-file-with-Splunk-DB-Connect/m-p/294331

I am actually able to see the driver in the installed drivers tab, and I can see my stanza within the possible connections when trying to test a query. I used exactly what was in that previous question and that didn't work, and I tried several other changes; I currently have this:

db_connection_types.conf:

[sqlite]
displayName = SQLite
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = org.sqlite.JDBC
jdbcUrlFormat = jdbc:sqlite:<database>
ui_default_catalog = main
database = main
port = 443

db_connections.conf:

[incidents]
connection_type = sqlite
database = /opt/tece/pb_data/data.db
host = localhost
identity = owner
jdbcUrlFormat = jdbc:sqlite:<database>
jdbcUseSSL = 0

I am getting an error now, and I also see this in the logs:

2024-12-19 14:38:59.018 +0000 Trace-Id= [dw-36 - GET /api/inputs] INFO c.s.d.s.dbinput.task.DbInputCheckpointFileManager - action=init_checkpoint_file_manager working_directory=/opt/splunk/var/lib/splunk/modinputs/server/splunk_app_db_connect
2024-12-19 14:39:15.807 +0000 Trace-Id=6dac40b0-1bcc-4410-bc28-53d743136056 [dw-40 - GET /api/connections/incidents/status] WARN com.splunk.dbx.message.MessageEnum - action=initialize_resource_bundle_files error=Can't find bundle for base name Messages, locale en_US

I have tried 2 separate SQLite drivers, the most up-to-date one and the one specifically for the version of SQLite that my database uses. Anyone have any ideas?
Hi @all, Thank you for all your hints, but my issue is that I must find, for each title1, the title4 where value is max. With this solution I find the max value for each title1, but not the title4 where value is max and the corresponding value for each title1. Do you have any other hint? Ciao. Giuseppe
Same here (Masterschool student), so if I understand correctly, I can run the Enterprise version on macOS and a forwarder on the Kali VM, so I can practise?
Hi, I have the below query where I'm calculating the total prod server count in the first dataset, and in the second dataset I'm plotting a timechart for the server count. What I want to display is a line chart with the total prod server count shown as a threshold line and the server count as the other line.

index=data sourcetype="server"
| rex field=_raw "server=\"(?<EVENT_CODE>[^\"]*)"
| search [ | inputlookup prodata_eventcode.csv | fields EVENT_Code ]
| stats dc(host_name) as server_prod_count
| rename
| append [ | search index=appdata source=appdata_value
    | rex field=value "\|(?<Item>[^\|]+)?\|(?<EVENT_CODE>[^\|]+)|(?<PROD_Count>[^\|]+)?"
    | dedup DATE,EVENT_CODE
    | timechart span=1d sum(PROD_Count) as SERVER_COUNT]
| table _time,local_PROD_COUNT,snow_prod_count
| rename DYNA_PROD_COUNT as SERVER_COUNT,snow_prod_count as Threshold

The question is how I can get the threshold value in all the rows so that I can plot threshold vs server count in the line graph. Below is the snapshot.
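One way to carry a single computed value onto every row of a timechart (a simplified sketch using placeholder field names from the query above, not a drop-in replacement) is to append the threshold with appendcols and then fill it down - the same idea as the filldown suggestion in this thread:

index=appdata source=appdata_value
| timechart span=1d sum(PROD_Count) as SERVER_COUNT
| appendcols [ search index=data sourcetype="server"
    | stats dc(host_name) as Threshold ]
| filldown Threshold

appendcols places the single Threshold value on the first row only, and filldown copies it onto the remaining rows, so both SERVER_COUNT and Threshold can be plotted as lines on the same chart.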
Hi @tmcbride17 , let me know if I can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hi @t_splunk_d , you can use the stats command:

index=your_index sourcetype IN (sourcetype1, sourcetype2)
| eval FileName=coalesce(file_name, FileName)
| stats earliest(_time) AS FileCopyLocation latest(_time) AS TargetLocation BY FileName
| eval FileCopyLocation=strftime(FileCopyLocation,"%Y-%m-%d %H:%M:%S"), TargetLocation=strftime(TargetLocation,"%Y-%m-%d %H:%M:%S")
| fillnull value="Pending" TargetLocation
| table FileName FileCopyLocation TargetLocation

Ciao. Giuseppe
Thanks for the quick response! 
I am trying to track file transfers from one location to another.

Flow: Files are copied to the File copy location -> Target Location. Both File copy location and Target location logs are in the same index, but each has its own sourcetype. The File copy location has a log entry for each file, but the Target location has logs that contain multiple file names.

Log format of file copy location:

2024-12-18 17:02:50, file_name="XYZ.csv", file copy success
2024-12-18 17:02:58, file_name="ABC.zip", file copy success
2024-12-18 17:03:38, file_name="123.docx", file copy success
2024-12-18 18:06:19, file_name="143.docx", file copy success

Log format of Target Location:

2024-12-18 17:30:10 <FileTransfer status="success">
    <FileName>XYZ.csv</FileName>
    <FileName>ABC.zip</FileName>
    <FileName>123.docx</FileName>
</FileTransfer>

Desired result:

File Name    FileCopyLocation       Target Location
XYZ.csv      2024-12-18 17:02:50    2024-12-18 17:30:10
ABC.zip      2024-12-18 17:02:58    2024-12-18 17:30:10
123.docx     2024-12-18 17:03:38    2024-12-18 17:30:10
143.docx     2024-12-18 18:06:19    Pending

I want to avoid join.
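Since each Target Location event carries several file names, one extra step (a sketch using placeholder index and sourcetype names, on top of the stats-based answer above) would be to break those events into one row per file before grouping by FileName:

index=your_index sourcetype=target_sourcetype
| rex max_match=0 "<FileName>(?<FileName>[^<]+)</FileName>"
| mvexpand FileName

max_match=0 captures every FileName value in an event into a multivalue field, and mvexpand splits them into separate rows, after which the coalesce/stats approach can line them up against the file-copy events.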
Thanks, @bowesmana .

Q - "When you say fuzzy, do you mean it should match based on similarity using something like Levenshtein distance? Do you want 123 main street / 123 maine street / 123 cain street all to match?"
A - No. I know about Levenshtein; however, the similarity would have to disregard (not the correct word) the street numbers in counting/calculating. 123 main street and 124 main street would never be a match. 123 main street and 123 main street apt 2 would be a match. It is assumed, and probably incorrectly, that the property owner of 123 main street apt 4 and 123 main street apt 6 is the same for the building. Of course condos knock this idea out.

Q - "What size is your lookup - you may well be hitting the default limits defined (25MB)"
A - csv: 1 million records - 448,500 bytes // kvstore: 3 million records - 2,743.66 MB

Q - "What are you currently doing to be 'fuzzy' so your matches currently work or are you really looking for exact matches somewhere in your data?"
A - I stripped off any non-numeric characters at the beginning of the address on the lookup and use that field for the "as" in my lookup command with my kvstore:

| lookup my_kvstore addr as mod_addr output owner

Q - "Is your KV store currently being updated - and is it replicated?"
A - No replication. The data would be refreshed yearly, or possibly every quarter.

Q - "Also, if you are just looking at some exact match somewhere, then the KV store may benefit from using accelerated fields - that can speed up lookups against the KV store (if that's the way you're doing it) significantly."
A - Using the above code, the addr would be the accelerated field, correct?

Thanks again for your help and God bless. Genesius
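On the accelerated-fields question: acceleration is configured on the collection rather than in the lookup command. A minimal, hypothetical sketch (assuming the collection behind the lookup is named my_kvstore and the matched field is addr) would be something like this in collections.conf:

[my_kvstore]
accelerated_fields.addr_accel = {"addr": 1}

With that in place, exact-match lookups on addr can use the index instead of scanning the whole collection.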
I recommend using a HF only if necessary.  In addition to the factors listed previously, HFs add a layer of complexity, are something else to manage, and introduce another point of failure.

A distinct advantage of HFs in a Splunk Cloud environment is better control over how your data is parsed.  It's much easier to manage the apps on a HF than it is to do so in Splunk Cloud - even with the Victoria experience. Of course, you should have at least 2 HFs for redundancy.