All Posts

Hello everybody, I am facing some challenges with a custom log file containing bits of XML surrounded by some sort of header lines. The file looks something like this:

[1][DATA]BEGIN --- - 06:03:09[012]
<xml>
<tag1>value</tag1>
<nestedTag>
<tag2>another value</tag2>
</nestedTag>
</xml>
[1][DATA]END --- - 06:03:09[012]
[1][DATA]BEGIN --- - 07:03:09[123]
<xml>
<tag1>some stuff</tag1>
<nestedTag>
<tag2>other stuff</tag2>
</nestedTag>
</xml>
[1][DATA]END --- - 07:03:09[123]
[1][DATA]BEGIN --- - 08:03:09[456]
<xml>
<tag1>some more data</tag1>
<nestedTag>
<tag2>fooband a bit more</tag2>
</nestedTag>
</xml>
[1][DATA]END --- - 08:03:09[456]

It is worth noting that the XML parts can be very large. I would like to take advantage of Splunk's automatic XML parsing, as it is not realistic to do it manually in this case, but the square-bracket lines around each XML block seem to prevent the XML parser from doing its job and I get no field extraction. So, what I would like to do is:

1. Convert the "data begin" line with the square brackets, before each XML block, into an XML-formatted line, so that I can use it for the time of the event (the date itself is encoded in the filename...) and let Splunk parse the rest of the XML data automatically
2. Strip out the lines with the "data end" bit after each block of XML; these are not useful as they carry the same time as the "data begin" line
3. Aggregate the XML lines of the same block into one event

What I have tried with props.conf and transforms.conf:

props.conf:

[my_sourcetype]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
KV_MODE = xml
LINE_BREAKER = \]([\r\n]+)\[1\]\[DATA\]BEGIN
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
pulldown_type = true
TRANSFORMS-full=my_transform # only with transforms.conf v1
TRANSFORMS-begin=begin # only with transforms.conf v2
TRANSFORMS-end=end # only with transforms.conf v2

transforms.conf (version 1):

[my_transform]
REGEX = (?m)\[1\]\[DATA\]BEGIN --- - (\d{2}:\d{2}:\d{2}).*([\r\n]+)([^\[]*)\[1\]\[DATA\]END.*$[\r\n]*
FORMAT = <time>$1</time>$2$3
WRITE_META = true
DEST_KEY = _raw

transforms.conf (version 2):

[begin]
REGEX = (?m)^\[1\]\[DATA\]BEGIN --- - (\d{2}:\d{2}:\d{2}).*$
FORMAT = <time>$1</time>
WRITE_META = true
DEST_KEY = _raw
[end]
REGEX = (?m)^\[1\]\[DATA\]END.*$
DEST_KEY = queue
FORMAT = nullQueue

With the various combinations listed here, I got all sorts of results:

- well separated events, but with the square-bracket lines left over
- one big block with all events aggregated together and no override of the square-bracket lines
- one event with the begin square-bracket line, truncated at 10k characters
- 4 events with one "time" XML tag but nothing else

Could anybody help me out with this use case? Many thanks, Alex
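For what it's worth, the LINE_BREAKER regex above can be sanity-checked outside Splunk. Here is a minimal Python sketch of Splunk's line-breaking semantics (the sample is shortened to two blocks, and this only simulates the event breaking, not the transforms):

```python
import re

sample = (
    "[1][DATA]BEGIN --- - 06:03:09[012]\n"
    "<xml><tag1>value</tag1></xml>\n"
    "[1][DATA]END --- - 06:03:09[012]\n"
    "[1][DATA]BEGIN --- - 07:03:09[123]\n"
    "<xml><tag1>some stuff</tag1></xml>\n"
    "[1][DATA]END --- - 07:03:09[123]\n"
)

# LINE_BREAKER semantics: Splunk breaks the stream wherever the regex matches
# and discards only the text captured by group 1 (here, the newline run).
breaker = re.compile(r"\]([\r\n]+)\[1\]\[DATA\]BEGIN")

events, pos = [], 0
for m in breaker.finditer(sample):
    events.append(sample[pos:m.start(1)])
    pos = m.end(1)
events.append(sample[pos:])

print(len(events))                # 2
print(events[1].splitlines()[0])  # [1][DATA]BEGIN --- - 07:03:09[123]
```

With this regex, each event keeps its own BEGIN header but the END line of the previous block stays attached to the previous event, which matches the "square brackets left over" symptom described above.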
Hi @devsru, You can use makeresults for that:

| makeresults | eval msg="Daylight savings is scheduled tomorrow, please be alerted " | fields - _time

Create a cron-scheduled alert based on this SPL, triggering when the result count is greater than 0, and configure the 'Send Email' alert action.
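The alert described here is just a scheduled saved search under the hood. A minimal savedsearches.conf sketch (the stanza name, schedule, and recipient are illustrative, not from the thread; "0 9 * * 6" fires every Saturday at 09:00, so the search itself would still need a date condition to return a row only on the day before a DST change):

```ini
[dst_reminder]
search = | makeresults | eval msg="Daylight savings is scheduled tomorrow, please be alerted" | fields - _time
enableSched = 1
cron_schedule = 0 9 * * 6
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = user@example.com
```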
| makeresults count=365
| streamstats count
| eval DayOfYear=strftime(round(relative_time(now(), "-0y@y"))+((count-1)*86400),"%Y-%m-%d")
| eval FirstOfMonth=strftime(strptime(DayOfYear, "%Y-%m-%d"),"%Y-%m-01")
| eval Sunday=strftime(relative_time(strptime(FirstOfMonth, "%Y-%m-%d"),"+2w@w0"), "%Y-%m-%d")
| eval Match=if((Sunday=DayOfYear AND (strftime(round(relative_time(now(), "-0y@y"))+((count-1)*86400),"%m")=="03" OR strftime(round(relative_time(now(), "-0y@y"))+((count-1)*86400),"%m")=="11") ),"TRUE","FALSE")
| table _time DayOfYear FirstOfMonth Sunday Match
| search Match=TRUE

This search will find the second Sunday of March and November for the current year. You actually need to identify whether today is the day before, in order to trigger an alert which you can configure to send an email. There might be easier methods to identify the DST change, but my research has not found one yet this morning. Also, this assumes the DST change is for the Americas; other parts of the globe may not share the same DST days.
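The date arithmetic above can be cross-checked outside Splunk. One caveat: in the US, DST actually ends on the first Sunday of November, not the second, so the November leg of the search above would land a week late. A small Python sketch of the "n-th Sunday" computation:

```python
import datetime

def nth_sunday(year: int, month: int, n: int) -> datetime.date:
    """Return the n-th Sunday of the given month."""
    first = datetime.date(year, month, 1)
    # weekday(): Monday=0 ... Sunday=6, so this is the offset to the first Sunday
    first_sunday = first + datetime.timedelta(days=(6 - first.weekday()) % 7)
    return first_sunday + datetime.timedelta(weeks=n - 1)

# US DST starts the second Sunday of March and ends the first Sunday of November
print(nth_sunday(2024, 3, 2))   # 2024-03-10
print(nth_sunday(2024, 11, 1))  # 2024-11-03
```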
The transaction command is costly, and it has limitations with wider timeframes and larger datasets.
Right! If you have only one indexer
@ITWhisperer  Can you please help?
Hi All,

I've installed the Splunk Add-on for Unix and Linux on both Splunk Enterprise and my forwarder, which is running 9.3.2. However, I keep running into the error below:

12-19-2024 15:54:30.303 +0000 ERROR ExecProcessor [1376795 ExecProcessor] - message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/vmstat_metric.sh" /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/hardware.sh: line 62: /opt/splunkforwarder/var/run/splunk/tmp/unix_hardware_error_tmpfile: No such file or directory

The above is coming from splunkd.log after I have stopped and restarted the SplunkForwarder.service. I am very new to Splunk and do not possess any certifications. My company has tasked me with learning and configuring Splunk, and I am enjoying it except that I am unable to get this data sent to my indexer so that I can see it in Search and Reporting.

These are the steps taken so far:

- Installed the Splunk Add-on for Unix and Linux on my Enterprise UI machine
- Installed the Splunk Add-on for Unix and Linux on my UF
- As the local directory was not created at "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/", I created it, copied inputs.conf from default to local, and then enabled the scripts I wanted
- Made sure the splunk user was the owner and had the privileges needed on the local directory
- Stopped the Splunk service and restarted it
- Ran: cat /opt/splunkforwarder/var/log/splunk/splunkd.log | grep ERROR

Almost every error is "unix_hardware_error_tmpfile: No such file or directory". If I create the tmpfile, it disappears and is not recreated. I'm sure there are other things I didn't mention, because I honestly don't remember; I have been trying to figure this issue out since yesterday and am not getting anywhere. PLEASE HELP!
I am trying to set up a synthetic browser test that makes use of variables. I can't seem to find information about the usage of variables in Browser Tests other than this. So far I have tried to:

- Access a global variable as a URL in a "go to URL" step -> this leads to an "https:// is not a valid URL" error, although I can see the variable in the "variables" tab
- Access global/predefined variables in "execute Javascript" steps -> undefined
- Set a variable via "Save return value from javascript" to variable var and try to reuse it in both "assert Text present" with {{custom.$var}} -> undefined, and "execute Javascript" with custom.var -> undefined
- Assign var/const in an "execute Javascript" step and reference it in consecutive "execute Javascript" steps -> undefined
- Access built-in variables in "execute Javascript" steps (except those that are only available in API tests) -> undefined

Which raises the following questions for me:

- What is the idiomatic way to use variables in synthetic Browser Tests? So far it seems to me that they can only be used to fill a field, as that is the only action mentioned here, and no other action I tried seems to support variables, which would quite honestly be really disappointing.
- Am I overlooking any documentation?
- Which kinds of actions support the use of variables created by other steps?

Thank you
Thank you for the help. I always get null for TargetLocation in stats, and thus it shows "Pending". I notice that latest(TargetLocation) has multiple values and null is the latest. Is there a way to eliminate the nulls so that the latest time can be displayed?
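One possible approach, sketched with field and grouping names assumed from the thread: stats functions skip true nulls but not empty-string values, so converting empties to null before the stats lets latest() ignore them:

```
... | eval TargetLocation=if(TargetLocation=="", null(), TargetLocation)
    | stats latest(_time) as latest_time, latest(TargetLocation) as TargetLocation by id
```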
Is there a way to capture the Thread Name for Business Transactions in Java? I see RequestGUID and URL are captured when looking through the UI. Thanks.
Thank you for the detailed explanation; I truly appreciate it.
Happens in Splunk Enterprise v9.4.0 for Windows too.
I suspect the OTel auto-instrumentation for PHP is not very mature as of today. I experimented with it and ran into similar challenges. When I manually instrumented PHP, it did work, and I can see traces that way.
Hi Everyone, I need to send a hard-coded message to the users just before every daylight saving change of the year, saying "Daylight savings is scheduled tomorrow, please be alerted", and I don't want to use any index for that, just the hard-coded message. Is it possible to create an alert based on this requirement?
Pickle Rick, Thanks for the link - I did come across that early in my troubleshooting, wondering if I had inherited a multi-lingual setup like yours. However, in my case it looks like my Splunk instance is actually missing the underlying Windows components that allow it to recognize these "Objects". This was confirmed when using commands such as (Get-Counter -ListSet *).Counter | Select-String "\\Processor*", which would return the "Processor Information" work-around I had but not Processor itself; nor would (Get-Counter -ListSet *).Counter return any of the Objects Splunk reported as missing when I checked Data Inputs. Working with a tech on this at the moment - this is certainly not something I've encountered before.
Let's think about this from 2 perspectives: sending logs and ingesting logs.

Splunk Enterprise and Splunk Cloud are where logs are ingested, so you can send logs there using any method you prefer. There are countless ways to send logs; some examples include the Splunk universal forwarder, the OpenTelemetry collector, and fluentd. With the OTel collector, you choose which receiver to use to collect logs, such as the filelog or otlp receivers. The OTel collector then uses exporters to send those logs to a logging backend like Splunk Enterprise/Cloud.

Splunk Observability Cloud ingests metrics and traces, and it uses an integration called Log Observer Connect to read logs from Splunk Cloud/Enterprise and display and correlate them with metrics and traces, so you can see all 3 signals in one place.

In the OTel yaml you shared, that is your pipeline configuration, where you're telling an OTel collector how to receive, process, and export your telemetry. For example, in your "logs" pipeline, you're receiving logs from the fluentforward and otlp receivers, you're processing those logs with the memory_limiter, batch, and resourcedetection processors, and then you're exporting log data to the splunk_hec and splunk_hec/profiling endpoints. The splunk_hec exporter represents an HTTP Event Collector endpoint on Splunk Cloud/Enterprise, and the splunk_hec/profiling exporter represents a special Observability Cloud endpoint dedicated to code profiling data (not typical logs, but still technically logs).
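To make the pipeline shape concrete, here is a minimal collector config sketch of a logs pipeline like the one described (endpoints, ports, and the token variable are placeholders, not taken from the thread):

```yaml
receivers:
  fluentforward:
    endpoint: 127.0.0.1:8006
  otlp:
    protocols:
      grpc:

processors:
  memory_limiter:
    check_interval: 2s
    limit_mib: 512
  batch:
  resourcedetection:
    detectors: [system]

exporters:
  splunk_hec:
    token: "${SPLUNK_HEC_TOKEN}"
    endpoint: "https://splunk.example.com:8088/services/collector"

service:
  pipelines:
    logs:
      receivers: [fluentforward, otlp]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [splunk_hec]
```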
I have an app server running a custom application that is, unfortunately, a bit buggy. This bug causes its service to spike in CPU usage and degrade performance. There's a fix in the works, but because I can manually resolve it by restarting the service, it is lower on the priority list.

I currently use Splunk to send me an alert when CPU usage reaches 80% or more; this lets me get in there to do the reset before performance degrades.

It looks like Splunk used to have a simple feature to run a script in the UF's /bin/ directory, which would have made this pretty simple, but it is deprecated and I assume doesn't work at all. Now, however, we're supposed to create a custom alert action to reinvent that functionality. Following the basic directions here, I've come to find I don't have the ability to create a new alert action: Create alert actions - Splunk Documentation. I can "Browse More" and view the existing ones, but there's no ability to create anything new. Is there some sort of prerequisite before this can be done? It does not appear to be mentioned in the documentation if that's the case.

Alternatively, does Splunk still trigger scripts even though the feature is deprecated? The above needs to be learned, but it seems like a lot of overhead to have one specific server run net stop [service] && net start [service].
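For context, a custom alert action is packaged as an app rather than created purely in the UI, which may be why no "create" button appears: it is an alert_actions.conf stanza plus a script in the app's bin directory. A minimal sketch (app, stanza, and script names are illustrative):

```ini
# $SPLUNK_HOME/etc/apps/restart_service_app/default/alert_actions.conf
[restart_service]
is_custom = 1
label = Restart Service
description = Restarts a Windows service when the alert fires
payload_format = json

# Splunk typically runs the matching script from the app's bin directory
# (e.g. bin/restart_service.py), which reads the alert payload from stdin
# and could then invoke: net stop <service> && net start <service>
```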
@isoutamo, Thank you for your attention to my problem. I saw this post, and I also saw the resolution: create the user 'system'. But my case is a little bit different, because the errors contain no information about which user is absent; there are only empty quotes with nothing inside.
From where you are, you could simply do something like this:

| filldown Threshold
Hi Team,

We have recently installed the OCI add-on on a Splunk heavy forwarder to collect OCI logs from Oracle Cloud instances. After installing and configuring the OCI inputs, we are getting the errors below. Can you please help us with a resolution?

12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" oci._vendor.requests.exceptions.SSLError: (MaxRetryError("OCIConnectionPool(host='cell-1.streaming.XX.XX.XX.oci.oraclecloud.com', port=443): Max retries exceeded with url: /20180418/streams/XX.XX.XX/groupCursors (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))"), 'Request Endpoint: POST https://XX.XX.XX/groupCursors See https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_troubleshooting.htm for help troubleshooting this error, or contact support and provide this full error message.')
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" During handling of the above exception, another exception occurred:
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" Traceback (most recent call last):
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/multiprocess/pool.py", line 121, in worker
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" result = (True, func(*args, **kwds))
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 102, in get_messages
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" cursor = get_cursor_by_group(global_stream_clients[i], stream_id, stream_id, opt_partition)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 59, in get_cursor_by_group
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = sc.create_group_cursor(sid, cursor_details)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/streaming/stream_client.py", line 505, in create_group_cursor
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" api_reference_link=api_reference_link)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/retry/retry.py", line 308, in make_retrying_call
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = func_ref(*func_args, **func_kwargs)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/base_client.py", line 485, in call_api
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = self.request(request, allow_control_chars, operation_name, api_reference_link)
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/base_client.py", line 606, in request
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" raise exceptions.RequestException(e)
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" """
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" The above exception was the direct cause of the following exception:
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" Traceback (most recent call last):
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 510, in stream_events
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" get_response = r.get()
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/multiprocess/pool.py", line 657, in get
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" raise self._value