All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello! I have been trying to get some logs into a metric index and I'm wondering if they can be improved with better field extraction. This is what the logs look like:

t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s4 type=COUNTER value=12
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s6 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s7 type=COUNTER value=2
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3 type=COUNTER value=104
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9:10 type=COUNTER value=8
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s4 type=COUNTER value=104
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=140
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=3
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=1
t=1713291900 path="/data/p3/p4" stat=s20 type=COUNTER value=585
t=1713291900 path="/data/p3/p4" stat=s21 type=COUNTER value=585
t=1713291900 path="/data/p3/p4" stat=s22 type=TIMEELAPSED value=5497.12
t=1713291900 path="/data/p3/p5" stat=s23 type=COUNTER value=585
t=1713291900 path="/data/p1/p5" stat=s24 type=COUNTER value=585
t=1713291900 path="/data/p1/p5" stat=s25 type=TIMEELAPSED value=5497.12
t=1713291900 path="/data/p1/p5/p6" stat=s26 type=COUNTER value=253
t=1713291900 path="/data/p1/p5/p6" stat=s27 type=GAUGE value=1

t is the epoch time. path is the path of a URL: it is in double quotes, always starts with /data/, and can have anywhere between 2 and 7 (maybe more) subpaths. stat is either a single stat (like s20) OR a colon-delimited string of between 3 and 6 stat names. type is either COUNTER, TIMEELAPSED, or GAUGE. value is the metric.
Right now I've been able to get a metric index set up that:
- Assigns t as the timestamp and ignores t as a dimension or metric
- Makes value the metric
- Makes path, stat, and type dimensions

This is my transforms.conf:

[metrics_field_extraction]
REGEX = ([a-zA-Z0-9_\.]+)=\"?([a-zA-Z0-9_\.\/:-]+)

[metric-schema:cm_log2metrics_keyvalue]
METRIC-SCHEMA-MEASURES = value
METRIC-SCHEMA-WHITELIST-DIMS = stat,path,type
METRIC-SCHEMA-BLACKLIST-DIMS = t

And props.conf (it's basically log2metrics_keyvalue; we need the cm_ prefix to match our license):

[cm_log2metrics_keyvalue]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
METRIC-SCHEMA-TRANSFORMS = metric-schema:cm_log2metrics_keyvalue
TRANSFORMS-EXTRACT = metrics_field_extraction
NO_BINARY_CHECK = true
category = Log to Metrics
description = '<key>=<value>' formatted data. Log-to-metrics processing converts the keys with numeric values into metric data points.
disabled = false
pulldown_type = 1

path and stat are extracted exactly as they appear in the logs. However, I'm wondering if it's possible to get each part of the path and stat fields into its own dimension, so instead of:

_time                    path       stat      value  type
4/22/24 2:20:00.000 PM   /p1/p2/p3  s1:s2:s3  500    COUNTER

it would be:

_time                    path1  path2  path3  stat1  stat2  stat3  value  type
4/22/24 2:20:00.000 PM   p1     p2     p3     s1     s2     s3     500    COUNTER

My thinking was that we'd be able to get really granular stats and interesting graphs. Thanks in advance!
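For what it's worth, the splitting described above can be prototyped outside Splunk first. Below is a minimal Python sketch, not the ingest pipeline itself: the numbered names path1/stat1 follow the example table, and the parsing regex is my own approximation of the one in transforms.conf.

```python
import re

def split_dimensions(event: str) -> dict:
    """Split the path and stat fields of one log line into numbered dimensions."""
    # Crude key=value extraction, tolerating optional double quotes around values.
    fields = dict(re.findall(r'([A-Za-z]+)="?([^"\s]+)"?', event))
    out = {"type": fields["type"], "value": float(fields["value"])}
    # /data/p1/p2 -> path1=p1, path2=p2 (dropping the fixed /data prefix)
    for i, part in enumerate(fields["path"].split("/")[2:], start=1):
        out[f"path{i}"] = part
    # s1:s2:s3 -> stat1=s1, stat2=s2, stat3=s3
    for i, part in enumerate(fields["stat"].split(":"), start=1):
        out[f"stat{i}"] = part
    return out

print(split_dimensions('t=1713291900 path="/data/p1/p2" stat=s1:s2:s3 type=COUNTER value=104'))
```

Because the number of subpaths and stat segments varies per event, any real implementation would produce a variable set of dimensions, which is worth keeping in mind for the metric schema.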
I'm having issues getting parsing to work with a custom OTel config. The `log.file.path` should be in one of these two formats:

1. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template-96bfdf8866-9jz7m/app-api-starter-project-template.log
2. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template.log

One with and one without the pod name. We are doing it this way so that we only index one application log file in a set of directories, rather than picking up a ton of Kubernetes logs that we will never review but still have to store. At the bottom is the full OTel config.

We are noticing that regardless of which file path format (1 or 2) an event has, it keeps going to the default option, and the `catchall` attribute in Splunk holds the value of log.file.path, which is always in the 1st format above (e.g. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template-96bfdf8866-9jz7m/app-api-starter-project-template.log).

- id: catchall
  type: move
  from: attributes["log.file.path"]
  to: attributes["catchall"]

Why is it not going to the route `parse-deep-filepath`, considering the regex should match?
We want to be able to pull out the `application name`, the `pod name`, and the `namespace`, which are all reflected in the full `log.file.path`:

receivers:
  filelog/mule-logs-volume:
    include:
      - /splunk-otel/*/app*.log
      - /splunk-otel/*/*/app*.log
    start_at: beginning
    include_file_path: true
    include_file_name: true
    resource:
      com.splunk.sourcetype: mule-logs
      k8s.cluster.name: {{ k8s_cluster_instance_name }}
      deployment.environment: {{ aws_environment_name }}
      splunk_server: {{ splunk_host }}
    operators:
      - type: router
        id: get-format
        routes:
          - output: parse-deep-filepath
            expr: 'log.file.path matches "^/splunk-otel/[^/]+/[^/]+/app-[^/]+[.]log$"'
          - output: parse-shallow-filepath
            expr: 'log.file.path matches "^/splunk-otel/[^/]+/app-[^/]+[.]log$"'
          - output: nil-filepath
            expr: 'log.file.path matches "^<nil>$"'
        default: catchall
      # Extract metadata from file path
      - id: parse-deep-filepath
        type: regex_parser
        regex: '^/splunk-otel/(?P<namespace>[^/]+)/(?P<pod_name>[^/]+)/(?P<application>[^/]+)[.]log$'
        parse_from: attributes["log.file.path"]
      - id: parse-shallow-filepath
        type: regex_parser
        regex: '^/splunk-otel/(?P<namespace>[^/]+)/(?P<application>[^/]+)[.]log$'
        parse_from: attributes["log.file.path"]
      - id: nil-filepath
        type: move
        from: attributes["log.file.path"]
        to: attributes["nil_filepath"]
      - id: catchall
        type: move
        from: attributes["log.file.path"]
        to: attributes["catchall"]

exporters:
  splunk_hec/logs:
    # Splunk HTTP Event Collector token.
    token: "{{ splunk_token }}"
    # URL to a Splunk instance to send data to.
    endpoint: "{{ splunk_full_endpoint }}"
    # Optional Splunk source: https://docs.splunk.com/Splexicon:Source
    source: "output"
    # Splunk index, optional name of the Splunk index targeted.
    index: "{{ splunk_index_name }}"
    # Maximum HTTP connections to use simultaneously when sending data. Defaults to 100.
    #max_connections: 20
    # Whether to disable gzip compression over HTTP. Defaults to false.
    disable_compression: false
    # HTTP timeout when sending data. Defaults to 10s.
    timeout: 900s
    tls:
      # Whether to skip checking the certificate of the HEC endpoint when sending data over HTTPS. Defaults to false.
      # For this demo, we use a self-signed certificate on the Splunk docker instance, so this flag is set to true.
      insecure_skip_verify: true

processors:
  batch:

extensions:
  health_check:
    endpoint: 0.0.0.0:8080
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679
  file_storage/checkpoint:
    directory: /output/
    timeout: 10s
    compaction:
      on_start: true
      directory: /output/
      max_transaction_size: 65_536

service:
  extensions: [pprof, zpages, health_check, file_storage/checkpoint]
  pipelines:
    logs:
      receivers: [filelog/mule-logs-volume]
      processors: [batch]
      exporters: [splunk_hec/logs]
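As a sanity check outside the collector, the two route regexes can be exercised directly against the sample paths from the post. The Python sketch below (Python supports the same (?P<name>…) group syntax used in the regex_parser operators) shows that the deep pattern does match the pod-name path, which suggests the regexes themselves are fine and the problem lies in how the router expression references the attribute, though that is worth verifying against the collector's operator documentation.

```python
import re

# The two regex_parser patterns from the config above.
deep = re.compile(r'^/splunk-otel/(?P<namespace>[^/]+)/(?P<pod_name>[^/]+)/(?P<application>[^/]+)[.]log$')
shallow = re.compile(r'^/splunk-otel/(?P<namespace>[^/]+)/(?P<application>[^/]+)[.]log$')

# The two sample paths from the question.
with_pod = "/splunk-otel/app-api-starter-project-template/app-api-starter-project-template-96bfdf8866-9jz7m/app-api-starter-project-template.log"
without_pod = "/splunk-otel/app-api-starter-project-template/app-api-starter-project-template.log"

m = deep.match(with_pod)
print(m.group("namespace"), m.group("pod_name"))  # the deep pattern matches and captures as intended
assert shallow.match(without_pod) and not shallow.match(with_pod)
```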
OK, now it's a bit better described, but:

1. You still haven't shown us a sample of the actual events.
2. Not everything in Splunk can be done (reasonably and effectively) with a single search. Maybe you could bend over backwards and compose some monster using subsearches and map, but it would definitely not be a good solution: performance would be bad, and you might still hit subsearch limits and get wrong results.

It sounds like something that should be done by means of a repeated scheduled search storing intermediate state in a lookup. You might try to search through "all time" and build a huge list of everything that happened in your index only to choose the two most recent changes, but that would consume a lot of memory and is not really a good solution.
@deepakc's was a so-called "run-anywhere" example: a sequence of commands that can be run on its own, without any additional data that you need to search for, meant to show a mechanism. It starts with a makeresults command, which creates an "empty" result. The example was not meant to be run as part of your search; rather, you should do something similar with your own data and your own field names.
What do you mean by "there is no overlapping"? A 4728 or 4729 event will have an Account Name field. Splunk applies transform classes from left to right and applies all of them (if they match). So your event will first be tested against the first transform: if the event is 4728 or 4729, the index will be overwritten to index1, but then Splunk will immediately apply the second transform, which will, for the *.adm accounts, overwrite the index to index2. At least that's how it should work if the regexes are OK (I didn't check that).
Be aware that subsearches have limitations, and it can be nasty if you hit a limit, because the search will be finalized silently: you won't know something's not right. Also, the | dedup host | table host part is quite suboptimal. In general, be wary when using the dedup command (you have it in the outer search as well); it might behave differently than you'd expect.
Hello @marnall, I already tested both regexes in regex101 and there is no overlap; this is why I do not understand why it's not working.
You could hit the REST endpoint for approvals (https://docs.splunk.com/Documentation/SOARonprem/6.2.1/PlatformAPI/RESTApproval). Unfortunately, the docs do not include the POST requests for actually approving the task, so you'll have to do an approval in the web interface and then log the POST request using your browser dev tools. Then you can use that POST request to approve tasks without having to log into SOAR. You will need to provide authentication credentials or a token, though.
Hi Deepak,

I am a bit confused about the time conversion. The field name is event.Properties.duration. How do I execute this in the command? I tried the below, but I'm sure I am missing something:

index=testing | "event.Properties.duration"="*"
| makeresults
| eval millis_sec = 5000
| eval seconds = millis_sec/1000
| table millis_sec, seconds
An important thing to keep in mind with this configuration is that each transform is applied to the events in turn, so the first transform can change the destination index, but then the second transform can change it again. If events are going to index2 but should be going to index1, it indicates that the regex for the rewrite_index_adm transform is matching events that should go to index1. Check your regexes and make sure that the regex for rewrite_ad_group_management applies ONLY to logs with EventCode 4728 or 4729, while the regex for rewrite_index_adm applies ONLY to EventCodes 4624, 4634, and 4625 for admin users.
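The apply-all, left-to-right behavior can be simulated to build intuition: every matching transform overwrites the index set by the previous one, so the last matching transform wins. A Python sketch of the ordering semantics follows (the transform names match the discussion, but the regexes and event strings are made-up stand-ins, not Splunk internals):

```python
import re

# Each transform: (name, regex it must match, index it writes).
transforms = [
    ("rewrite_ad_group_management", r"EventCode=(4728|4729)", "index1"),
    ("rewrite_index_adm", r"Account Name:\s+\S+\.adm\b", "index2"),
]

def route(event: str, default: str = "main") -> str:
    index = default
    for _name, pattern, target in transforms:  # applied in order, all that match
        if re.search(pattern, event):
            index = target  # a later match overwrites an earlier one
    return index

print(route("EventCode=4728 Account Name: jsmith.adm"))  # -> index2: both matched, last wins
```

This is why a 4728/4729 event for a *.adm account ends up in index2 unless the second regex is tightened to exclude those EventCodes.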
Can you split your query into a set of smaller queries that index those rows into a summary index?
This error would indicate an authentication problem. You should double-check your SMTP settings to ensure that they contain authentication settings for a valid account that can send email through your email provider.
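For reference, here is a minimal sketch of an authenticated SMTP submission in Python. The host, port, and credentials are placeholders; this mirrors the settings an SMTP client needs (server, TLS, username, password) rather than Splunk's own sendemail implementation, and the failing login step is typically where a misconfigured account shows up.

```python
import smtplib
from email.message import EmailMessage

def build_alert(sender: str, recipient: str, body: str) -> EmailMessage:
    """Assemble a simple alert email."""
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, "Splunk alert"
    msg.set_content(body)
    return msg

def send_alert(msg: EmailMessage, host: str, port: int, user: str, password: str) -> None:
    """Authenticated submission: STARTTLS, then login before sending."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        smtp.login(user, password)  # this is the step that fails if credentials are invalid
        smtp.send_message(msg)

msg = build_alert("splunk@example.com", "oncall@example.com", "test body")
print(msg["Subject"])
```

The hostname example.com addresses are purely illustrative; substitute your provider's SMTP server and a valid sending account.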
I am having the same issue. I have also checked the Release Notes you linked. I already have those items configured:

$ bin/splunk btool web list | grep mgmtHostPort
mgmtHostPort = 0.0.0.0:8089
$ bin/splunk btool server list | grep disableDefaultPort
disableDefaultPort = false

But still, I don't see splunkd listening on port 8089:

$ sudo lsof -i tcp -P | grep 8089

(I get nothing.) The Universal Forwarder is v9.2.1 on Red Hat Enterprise Linux 8.9.
I have a current Splunk install in my production environment, all running Red Hat Linux. I have a single server with Splunk Enterprise installed on it, as well as SplunkForwarder. I have 100+ other servers with SplunkForwarder installed, all pushing logs to the Splunk Enterprise server. All servers had v9.1.2 of the forwarder installed, and the Enterprise server was also on this version.

I recently updated the Splunk Enterprise server, as well as the Splunk Forwarders on all servers, to version 9.2.0.1 successfully, with one exception: the forwarder installed on my Splunk Enterprise server (named "splunkenter1") fails. It displays the error listed below, where it says that the splunkforwarder package conflicts with the splunk install.

I have another Splunk Enterprise install (using the same set-up) in another environment, and I did not run into this issue; that upgrade worked fine. I've tried Googling the issue but haven't found much. Anyone have any ideas on what could be causing this, or has anyone seen this before?

[root@splunkenter1 ~]# dnf update splunkforwarder
Last metadata expiration check: 0:01:36 ago on Mon 22 Apr 2024 04:47:07 PM UTC.
Dependencies resolved.
========================================================================================================
 Package            Architecture   Version                    Repository      Size
========================================================================================================
Upgrading:
 splunkforwarder    x86_64         9.2.0.1-d8ae995bf219       splunk-repo     44 M

Transaction Summary
========================================================================================================
Upgrade  1 Package

Total download size: 44 M
Is this ok [y/N]: y
Downloading Packages:
splunkforwarder-9.2.0.1-d8ae995bf219.x86_64.rpm                          41 MB/s | 44 MB     00:01
--------------------------------------------------------------------------------------------------------
Total                                                                    41 MB/s | 44 MB     00:01
Running transaction check
Transaction check succeeded.
Running transaction test
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Transaction test error:
  file /usr/lib/.build-id/03/f57acc2883000e6b54bf75c7e67d1a07446919 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/06/a82be30cc16ea5bea39f78f8056447e18beb15 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/1a/b0b8e873c6d668dcd3361470954d12004926cd from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/1e/8edb02a946c645cd20558aa8a6b420792f5541 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/35/e87a7fb154de7d5226e5a0a28c80ffd0c1be48 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/3a/3aac493bff5bb22e02b8726142dd67443dd03c from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/42/abc0f2a26bfb13b563104e87287312420c707e from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/44/6a270f1de8d26f47bf9ff9ae778e1fd3332403 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/64/b2324ff715d30c8a91dee6a980d63c291648d8 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/65/274a42201dd21f83996ba7c8bd0ba0dc3894c8 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/6d/dd008477651e7c8febce4699a739aaf188b0ae from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/88/cbe6deabd44a4766207eebf7c5e74f7ed53120 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/8a/6ee8699fb74fb883874a1123d91acf0b0d98a6 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/94/ea2865a21761f062a2db312845c535d5429bfc from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/95/d5fe61c313d8a5616f8a45f6c7d05151283ab6 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/96/b9463c40fc6541345a4b87634e8517281f8d4d from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/99/93008fdae763af21c831956de21501bb09e197 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/9b/2a882e45910da32603baf28a13b1630987184e from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/9f/b5fd366b32867d537caa84d4b2b521f5c21083 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/a0/1ae9032915dce67a58e8696c3c9fe195193d77 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/a1/616e140409dc54f0db2bf02ed7e114f07490af from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/b6/6dd3d33542916fff507849621dac5f763a98a2 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/b6/fd3c259a4c6e552d9b067f39e66c03cc134895 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/b7/e3d0b70694caa826df19d93b7341de0decdad3 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/bc/f1c9c6878bb887ef6869012b79c97546983b83 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/c8/d218675e02086588c28882c28b3533069d505c from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/d0/be01f291a5b978e02dcdd0069b82ce8a764dbf from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/d3/7dcf7bcf859ed048625d20139782517947e6e0 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/d7/30a0409850e89f806f3798ca99b378c335b7a5 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/dc/259ac038741ecbd76f6052a9fa403bc5ab5ab3 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/de/294f4dd1fa80d590074161566f06b39b9230fb from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/e0/0ee3712cdbd590286c2b8da49724fdaf6dee15 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/e6/7f07efdda1fcfe82b6ceb170412f22e03d2ab5 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/ec/dc3eeaba4750e657f5910fa2adb21365533f27 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/ee/6addfc324fb4bf57058df3adf7ea55dff4953f from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/f1/0b5a5bc3bcb996183924bd6029efba8290c71a from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/f2/c0dd88030fc9e343f6d9104a5015938cfe3503 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/f3/61ef732e036606eef3d78bb13f6d6165bcd927 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/f4/c1fc01304f2796efaabefd2a6350ba67cc9edc from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/f9/3cf5828d46fbdd6e82b2d18a4a5c650b84c185 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/fa/a370a95319b4a8ce1bd239652457843a09c15e from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/fd/201b0799acb29720c90a6129be08800ce4b7e5 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
Try something like this (after finding all events):

| rex field=_raw "Restart transaction item: (?<Step>.*?) \(WorkId:"
| rex field=_raw "Error restart workflow item: (?<Success>.*?) \(WorkId:"
| rex field=_raw "Restart Pending event from command, (?<Failure>.*?) Workid"
| eval Step=coalesce(Step,coalesce(Success, Failure))
| stats count(eval(if(Step==Success,1,null()))) as Success count(eval(if(Step==Failure,1,null()))) as Failure by Step
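To make the eval/stats pattern above concrete, it can be paraphrased in Python: coalesce picks the first non-null of the three extracted fields, and the conditional counts tally whether the Step value came from the Success or the Failure extraction. The field names follow the SPL; the sample events here are made up for illustration.

```python
def coalesce(*values):
    """Return the first non-None value, like SPL's coalesce()."""
    for v in values:
        if v is not None:
            return v
    return None

# Simulated extractions: each event yields at most one of the three fields.
events = [
    {"Step": None, "Success": "Validation", "Failure": None},
    {"Step": None, "Success": None, "Failure": "Validation"},
    {"Step": None, "Success": "Creation", "Failure": None},
]

counts = {}
for e in events:
    step = coalesce(e["Step"], e["Success"], e["Failure"])
    bucket = counts.setdefault(step, {"Success": 0, "Failure": 0})
    if step == e["Success"]:       # value came from the Success extraction
        bucket["Success"] += 1
    elif step == e["Failure"]:     # value came from the Failure extraction
        bucket["Failure"] += 1

print(counts)  # {'Validation': {'Success': 1, 'Failure': 1}, 'Creation': {'Success': 1, 'Failure': 0}}
```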
@ITWhisperer, Query 1 has a field extracted as Step. This Step field contains values such as "Validation", "Creation", and "Compliance Portal Report", each with a count. The Success counts for the same values need to be pulled using the second query, and the Failure counts need to be extracted using the third query. The output should be something like this (combining all 3 queries):

Step                        Success (Count)   Failure (Count)
Validation                  3                 2
Creation                    2                 2
Compliance Report Portal    2                 2

So kindly help with the query.
Hi @kiran_panchavat, is renaming "app/local" to "app/local.OLD" on the DS enough? Thanks.
@jaibalaraman If the time is in milliseconds, microseconds, or nanoseconds, you must convert it into seconds. You can use the pow function to compute the divisor.

To convert from milliseconds to seconds, divide the number by 1000 (10^3).
To convert from microseconds to seconds, divide the number by 10^6.
To convert from nanoseconds to seconds, divide the number by 10^9.

Date and Time functions - Splunk Documentation

*** If the above solution helps, an upvote is appreciated. ***
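Those divisions look like this in a small Python sketch (the unit keys ms/us/ns are my own shorthand, not field names from the data):

```python
def to_seconds(value: float, unit: str) -> float:
    """Convert a duration to seconds by dividing by the right power of ten."""
    divisors = {"ms": 10**3, "us": 10**6, "ns": 10**9}
    return value / divisors[unit]

print(to_seconds(5000, "ms"))  # 5.0
```

In SPL the equivalent would be a simple eval division, e.g. dividing the millisecond field by 1000.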
@splunkreal The props.conf file will end up in $SPLUNK_HOME/etc/apps on the heavy forwarder if you create it on the deployment server at $SPLUNK_HOME/etc/deployment-apps and push it to the heavy forwarders via the deployment server. Please remove any manually created copy of the file on the heavy forwarder before pushing from the deployment server; otherwise, the pushed copy will not automatically appear in /etc/apps on the heavy forwarder when the deployment server is reloaded. The deployed app will supersede it.

https://docs.splunk.com/Documentation/Splunk/9.2.1/Updating/Createdeploymentapps
How to edit a configuration file - Splunk Documentation
Hi @Muhammad Husnain.Ashfaq, Thank you for coming back to the community and sharing the solution!