All Topics

Hi all, I have a question about using RelayState with SAML when using Azure AD B2C as the IdP. We successfully integrated Splunk as the SP with AD B2C as the IdP using SAML and custom policies. Now we want to redirect users to another URL after successful authentication, and the only way forward I could find was via the RelayState parameter. Below are all the combinations I tried for the Single Sign On (SSO) URL:

https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?SAML_Request=<base64_SAML_Auth_Request>&RelayState=https%3A%2F%2FredirectWebsite.com

https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https%3A%2F%2FredirectWebsite.com

https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https://redirectWebsite.com

I keep getting the error "error while parsing relaystate. failed to decode relaystate." Any advice on how to embed the RelayState in the SSO URL?
Good morning. I am receiving events from Windows on a collector with Splunk Edge Processor, and it is sending them correctly to the tenant but not to the correct index. According to the data, the events go through the pipeline but are sent to main instead of the intended index. This is the SPL2 of the pipeline:

/* A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination". */
$pipeline = | from $source
| eval index = if(isnull(index), "usa_windows", index)
| into $destination;
Hello, I have been receiving events without any formatting applied, even though I have installed the add-on on the HF and in Splunk Cloud.
I would like some help creating a report that shows the difference in seconds between my event timestamp and the Splunk landing timestamp. The query below gives me the difference between _indextime and _time, but I would also like the difference in seconds between GenerationTime (e.g. 2024-04-23 12:49:52) and _indextime.

index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference = _indextime - _time
| table Node EventNumber GenerationTime Index_Time, _time, secondsDifference
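A minimal sketch of one way to add the second delta, assuming GenerationTime always matches the %Y-%m-%d %H:%M:%S format shown above (adjust the strptime format string if your timestamps differ):

index=splunk_index sourcetype=splunk_sourcetype
| eval genEpoch = strptime(GenerationTime, "%Y-%m-%d %H:%M:%S")
| eval secondsDifference = _indextime - _time
| eval generationLagSeconds = _indextime - genEpoch
| convert ctime(_indextime) as Index_Time
| table Node EventNumber GenerationTime Index_Time _time secondsDifference generationLagSeconds

Since that format string carries no timezone hint, strptime assumes the search head's timezone when parsing GenerationTime; it is worth validating the result against a few known events.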
Hi, one bucket is stuck in the "fixup task pending" state with the error below. I tried restarting Splunk and doing a re-sync and roll, but it is not working. Can anyone suggest a possible solution or steps to troubleshoot the issue?

Missing enough suitable candidates to create replicated copy in order to meet replication policy. Missing={ site3:1 }
Hi All, I have created a dashboard for JSON data. There are two sets of data in the same index: one is Info.metadata{} and the other is Info.runtime_data{}, arriving as different events. Both kinds of events share one common field, "Info.Title". How can I combine these two events?
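One common pattern for stitching events together on a shared field is stats; a minimal sketch, where your_index is a placeholder for the real index name from the dashboard:

index=your_index
| stats values(Info.metadata{}) as metadata values(Info.runtime_data{}) as runtime_data by Info.Title

Each Info.Title then becomes one row carrying the values from both event types, which is usually easier to panel than a join.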
Hello, I want to fetch a value from the inputs.conf file (/Splunk/etc/apps/$app/local), i.e.:

[stanza-name]
value-name = value

How can I retrieve this value and use it inside a Python lookup script (stored in /Splunk/etc/apps/$app/bin)? Thanks.
Hello everyone, please help me with fetching events from a Windows Event Collector. I installed the Universal Forwarder on a Windows Server 2022 machine where all events from other computers are collected. I am trying to fetch all forwarded events from this Windows Server 2022 machine to my Splunk indexer via the forwarder, but the agent only sends the events intermittently, not in real time. I can't see any errors in the SplunkForwarder logs or on the Splunk indexer. I am also using Splunk_TA_Windows to fetch the events.
Hi Team, I am looking for an option to monitor the page load performance of a Salesforce Community Cloud application (built using Lightning Web Components) that runs in authenticated mode. We want to capture network timings, resource loading, and transaction times, to name a few. Is this possible with AppDynamics? If so, please point me to the relevant documentation. Thanks.
I'm currently working on optimizing our Splunk deployment and would like to gather some insights on the performance metrics of Splunk forwarders.

Transfer time for data transmission: I'm interested in understanding the typical time it takes for a Splunk forwarder to send a significant volume of data, say 10 GB, to the indexer. Are there any benchmarks or best practices for estimating this transfer time? Are there any factors or configurations that can significantly affect it?

Expected EPS (events per second): I'm also curious about the achievable EPS rates with Splunk forwarders. What EPS rates do organizations typically achieve in real-world scenarios? Are there any strategies or optimizations that can help improve EPS rates while maintaining stability and reliability?

Any insights, experiences, or recommendations regarding these performance metrics would be greatly appreciated. Thank you!
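For measuring rather than estimating, one starting point is the forwarder's own metrics.log in the _internal index; a rough sketch, assuming the default per_index_thruput metrics group and its standard kb and eps fields:

index=_internal source=*metrics.log* group=per_index_thruput
| timechart span=1m sum(kb) as kb_per_min avg(eps) as avg_eps by series

Real-world numbers vary widely with event size, pipeline load, and the forwarder's maxKBps throughput limit in limits.conf, so measured baselines from a search like this are usually more useful than generic benchmarks.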
Hi dear Malaysian Splunkers, as part of the SplunkTrust tasks, I have created a Splunk User Group for Kuala Lumpur, Malaysia: https://usergroups.splunk.com/kuala-lumpur-splunk-user-group/ Please join, and let's discuss monthly about Splunk and getting more value from the data. See you there. Thanks.

Best Regards
Sekar
Hello, I have this search for a tabular format:

index="webbff" "SUCCESS: REQUEST"
| table _time verificationId code BROWSER BROWSER_VERSION OS OS_VERSION USER_AGENT status
| rename verificationId as "Verification ID", code as "HRC"
| sort -_time

The issue is the BROWSER column: even when a user accesses our app via Edge, it still shows Chrome. I found one difference between the two logs: the one from Edge contains "Edg" in the user agent.

Edge logs:

metadata={BROWSER=Chrome, LOCALE=, OS=Windows, USER_AGENT=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/124.0.0.0 Safari/xxx.xx Edg/124.0.0.0, BROWSER_VERSION=124, LONGITUDE=, OS_VERSION=10, IP_ADDRESS=, APP_VERSION=, LATITUDE=})

Chrome logs:

metadata={BROWSER=Chrome, LOCALE=, OS=Mac OS X, USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/124.0.0.0 Safari/xxx.xx, BROWSER_VERSION=124, LONGITUDE=, OS_VERSION=10, IP_ADDRESS=, APP_VERSION=, LATITUDE=})

My question is: how do I create a conditional for BROWSER, like "if USER_AGENT contains Edg then Edge, else BROWSER"?
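Since the Edge events carry the Edg/ token in USER_AGENT, an eval with like() can override the column; a minimal sketch using only the field names from the post:

index="webbff" "SUCCESS: REQUEST"
| eval BROWSER = if(like(USER_AGENT, "%Edg/%"), "Edge", BROWSER)
| table _time verificationId code BROWSER BROWSER_VERSION OS OS_VERSION USER_AGENT status
| rename verificationId as "Verification ID", code as "HRC"
| sort -_time

The % wildcards let like() match Edg/ anywhere in the string; match(USER_AGENT, "Edg/") is an equivalent regex-based alternative.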
Hey guys, with data retention set, is there a way to whitelist a specific container to prevent it from being deleted?
I apologize if the following question is a bit basic, but I'm confused by the results. When I append the following clause to the "search" line, it returns a shortened list of results (from 47 to 3):

AND ("a" in ("a"))

Original code:

index=main_service ABC_DATASET Arguments.email="my_email@company_X.com"
| rename device_model as hardware, device_build as builds, device_train as trains, ABC_DATASET.Check_For_Feature_Availability as Check_Feature_Availability
| search (Check_Feature_Availability=false) AND ("a" in ("a"))
| table builds, trains, Check_Feature_Availability

I was expecting to see the same number of results. Am I wrong about my expectations, or am I missing something here? TIA
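For reference, both documented shapes of IN in SPL put a field on the left, so a literal-on-literal test like "a" in ("a") isn't guaranteed to behave as an always-true condition; the search command may instead treat those tokens as extra filter terms, which would explain the drop from 47 to 3 results. A sketch of the documented forms, reusing the field from the post:

| search Check_Feature_Availability IN ("false")

| where in(Check_Feature_Availability, "false")

The first uses the search command's field IN (value-list) operator; the second uses the eval in() function, which is valid inside where and eval.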
Could someone help me derive a solution for the case below?

Background: We have an app in which we set all our saved searches as durable ones, because we don't want to miss any runs. If a scheduled search fails at its scheduled time due to infrastructure or resource issues, it is covered in the next run. I am trying to capture the last status even after the durable logic is applied.

Let's say I have four events. The first two runs of alert ABC (scheduled_time=12345 and scheduled_time=12346) failed. The third schedule, at 12347, covers those two as well as itself, and all are successful.

EVENT 1: savedsearch_name = ABC; status = skipped; scheduled_time = 12345
EVENT 2: savedsearch_name = ABC; status = skipped; scheduled_time = 12346
EVENT 3: savedsearch_name = ABC; status = success; durable_cursor = 12345; scheduled_time = 12347
EVENT 4: savedsearch_name = ABC; status = success; scheduled_time = 12347

So if I first take a query like this:

.. | stats last(status) by savedsearch_name scheduled_time

I get output like this:

savedsearch_name  last(status)  scheduled_time
ABC               skipped       12345
ABC               skipped       12346
ABC               success       12347

I need to write logic that takes:

A. Jobs whose last status is not success — here, ABC 12345 and ABC 12346.

B. Events where durable_cursor != scheduled_time, i.e. runs that covered a missed duration — here, EVENT 3.

C. Then derive the result: take each failed saved search's name and the scheduled time at which it failed, and check whether that failed time falls between the durable_cursor and scheduled_time of a later run of the same job with status=success:

.. TAKE FAILED SAVEDSEARCH NAME TIME as FAILEDTIME
| where durable_cursor!=scheduled_time
| eval Flag=if(FAILEDTIME>=durable_cursor OR FAILEDTIME<=scheduled_time, "COVERED", "NOT COVERED")

How far I got, and where I am stuck: I split this into two reports.

First report: take all jobs whose last status is not success, table the output with fields SAVEDSEARCH NAME, SCHEDULEDTIME as FAILEDTIME, and LAST(STATUS) as FAILEDSTATUS, then save the result to a lookup. This runs over the last one-hour window.

Second report: refer to the lookup, take the failed saved search names from it, search only those events in the Splunk internal indexes where durable_cursor!=scheduled_time, and check whether each failed time falls between durable_cursor and the next scheduled_time with status=success.

This works fine if a saved search has one failed run in the window, but not for multiple. Say job A itself runs four times in an hour and all but the first fail; I cannot cover that case, because retrieving values from the lookup as a multivalue field does not match exactly. Here is the question I posted about that: https://community.splunk.com/t5/Splunk-Search/How-to-retrieve-value-from-lookup-for-multivalue-field/m-p/684637#M233699

If anybody has alternate or better thoughts on this, please throw some light on it.
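One alternative that stays in a single search and avoids the lookup round-trip: sort each saved search's runs newest-first and let streamstats carry the nearest later successful run's coverage window back onto the failed rows. A rough sketch, assuming the field names above come from the scheduler logs in _internal (this is untested against real durable-search data, and the durable_cursor semantics should be verified):

index=_internal sourcetype=scheduler savedsearch_name=*
| stats last(status) as status last(durable_cursor) as durable_cursor by savedsearch_name scheduled_time
| sort 0 savedsearch_name -scheduled_time
| eval succ_cursor = if(status=="success", durable_cursor, null()), succ_time = if(status=="success", scheduled_time, null())
| streamstats current=f last(succ_cursor) as covering_cursor last(succ_time) as covering_time by savedsearch_name
| where status!="success"
| eval Flag = if(covering_cursor <= scheduled_time AND scheduled_time <= covering_time, "COVERED", "NOT COVERED")

Because streamstats sees every run of the same saved search, multiple failures per hour fall out of the same pass, which is the case the lookup approach struggles with.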
Hello, I have static data of about 200,000 rows (and potentially growing) that needs to be moved to a summary index daily.

1) Is it possible to move the data from dbxquery to a summary index and rewrite the data daily, so that no old data with an earlier _time remains after the rewrite?

2) Is it possible to use a summary index without _time, and make it behave like dbxquery?

The reason I am doing this is that I want to do data manipulation (split, etc.) and move the result to a "placeholder" other than a CSV or dbxquery, so I can correlate it with another index. For example:

| dbxquery query="SELECT * from Table_Test"

The scheduled report for the summary index will add something like this:

summaryindex spool=t uselb=t addtime=t index="summary" file="test_file" name="test" marker="hostname=\"https://test.com/\",report=\"test\""

Please advise. Thank you for your help.
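A summary index always stamps events with _time, so it cannot be fully timeless like dbxquery; but pinning every row of each daily load to the same snapshot time and filtering to the newest snapshot gets close. A minimal sketch, reusing the dbxquery and index names from the post:

| dbxquery query="SELECT * from Table_Test"
| eval _time = relative_time(now(), "@d")
| collect index=summary marker="report=\"test\""

A consumer can then read only today's snapshot with something like:

index=summary report=test earliest=@d

Old snapshots still exist until retention ages them out; they are excluded by the time filter rather than physically rewritten, since indexed events cannot be updated in place.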
Hey, I installed a Splunk Enterprise free trial on an Ubuntu server, and this is my first time using Splunk, so I am following a video. I am having trouble locating the "local event logs" option while adding data to Splunk from a Universal Forwarder on a Windows server. I want to capture event logs from the Windows server and see them in Splunk. Please help me out as soon as possible. Thank you.
Hello! I have been trying to get some logs into a metric index and I'm wondering if they can be improved with better field extraction. This is what the logs look like:

t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s4 type=COUNTER value=12
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s6 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s7 type=COUNTER value=2
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3 type=COUNTER value=104
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9:10 type=COUNTER value=8
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s4 type=COUNTER value=104
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=140
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=3
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=1
t=1713291900 path="/data/p3/p4" stat=s20 type=COUNTER value=585
t=1713291900 path="/data/p3/p4" stat=s21 type=COUNTER value=585
t=1713291900 path="/data/p3/p4" stat=s22 type=TIMEELAPSED value=5497.12
t=1713291900 path="/data/p3/p5" stat=s23 type=COUNTER value=585
t=1713291900 path="/data/p1/p5" stat=s24 type=COUNTER value=585
t=1713291900 path="/data/p1/p5" stat=s25 type=TIMEELAPSED value=5497.12
t=1713291900 path="/data/p1/p5/p6" stat=s26 type=COUNTER value=253
t=1713291900 path="/data/p1/p5/p6" stat=s27 type=GAUGE value=1

t is the epoch time. path is the path of a URL; it is in double quotes, always starts with /data/, and can have anywhere between 2 and 7 (maybe more) subpaths. stat is either a single stat (like s20) or a colon-delimited string of between 3 and 6 stat names. type is either COUNTER, TIMEELAPSED, or GAUGE. value is the metric.

Right now I've been able to set up a metric index that:

- Assigns t as the timestamp and ignores t as a dimension or metric
- Makes value the metric
- Makes path, stat, and type dimensions

This is my transforms.conf:

[metrics_field_extraction]
REGEX = ([a-zA-Z0-9_\.]+)=\"?([a-zA-Z0-9_\.\/:-]+)

[metric-schema:cm_log2metrics_keyvalue]
METRIC-SCHEMA-MEASURES = value
METRIC-SCHEMA-WHITELIST-DIMS = stat,path,type
METRIC-SCHEMA-BLACKLIST-DIMS = t

And props.conf (it's basically log2metrics_keyvalue; we need the cm_ prefix to match our license):

[cm_log2metrics_keyvalue]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
METRIC-SCHEMA-TRANSFORMS = metric-schema:cm_log2metrics_keyvalue
TRANSFORMS-EXTRACT = metrics_field_extraction
NO_BINARY_CHECK = true
category = Log to Metrics
description = '<key>=<value>' formatted data. Log-to-metrics processing converts the keys with numeric values into metric data points.
disabled = false
pulldown_type = 1

path and stat are extracted exactly as they appear in the logs. However, I'm wondering if it's possible to get each part of the path and stat fields into its own dimension, so instead of:

_time                    path       stat      value  type
4/22/24 2:20:00.000 PM   /p1/p2/p3  s1:s2:s3  500    COUNTER

it would be:

_time                    path1  path2  path3  stat1  stat2  stat3  value  type
4/22/24 2:20:00.000 PM   p1     p2     p3     s1     s2     s3     500    COUNTER

My thinking is that we'd be able to get really granular stats and interesting graphs. Thanks in advance!
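At search time, split() plus mvindex can fan those delimited values out into numbered fields, which at least shows the shape being aimed for; a minimal sketch over the raw (pre-metrics) events, where your_log_index is a placeholder and the sourcetype is taken from the post:

index=your_log_index sourcetype=cm_log2metrics_keyvalue
| eval path_parts = split(ltrim(path, "/"), "/"), stat_parts = split(stat, ":")
| eval path1 = mvindex(path_parts, 0), path2 = mvindex(path_parts, 1), path3 = mvindex(path_parts, 2)
| eval stat1 = mvindex(stat_parts, 0), stat2 = mvindex(stat_parts, 1), stat3 = mvindex(stat_parts, 2)
| table _time path1 path2 path3 stat1 stat2 stat3 value type

Doing the same at index time, so that path1, stat1, etc. become real metric dimensions, would need the splitting to happen before the metric-schema transform (for example via additional index-time transforms); the variable number of parts is the tricky half of that problem.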
I'm having issues getting parsing working using a custom otel config. The log.file.path should be in one of these two formats:

1. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template-96bfdf8866-9jz7m/app-api-starter-project-template.log
2. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template.log

One with and one without the pod name. We are doing it this way so that we only index one application log file in a set of directories, rather than picking up a ton of Kubernetes logs that we will never review but would still have to store. The full otel config is at the bottom.

We are noticing that regardless of the file path (1 or 2 above), it keeps taking the default route, and the catchall attribute in Splunk holds the value of log.file.path, which is always in the first format above (e.g. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template-96bfdf8866-9jz7m/app-api-starter-project-template.log).

- id: catchall
  type: move
  from: attributes["log.file.path"]
  to: attributes["catchall"]

Why is it not taking the parse-deep-filepath route, considering the regex should match? We want to be able to pull out the application name, the pod name, and the namespace, which are all reflected in the full log.file.path.

receivers:
  filelog/mule-logs-volume:
    include:
      - /splunk-otel/*/app*.log
      - /splunk-otel/*/*/app*.log
    start_at: beginning
    include_file_path: true
    include_file_name: true
    resource:
      com.splunk.sourcetype: mule-logs
      k8s.cluster.name: {{ k8s_cluster_instance_name }}
      deployment.environment: {{ aws_environment_name }}
      splunk_server: {{ splunk_host }}
    operators:
      - type: router
        id: get-format
        routes:
          - output: parse-deep-filepath
            expr: 'log.file.path matches "^/splunk-otel/[^/]+/[^/]+/app-[^/]+[.]log$"'
          - output: parse-shallow-filepath
            expr: 'log.file.path matches "^/splunk-otel/[^/]+/app-[^/]+[.]log$"'
          - output: nil-filepath
            expr: 'log.file.path matches "^<nil>$"'
        default: catchall
      # Extract metadata from file path
      - id: parse-deep-filepath
        type: regex_parser
        regex: '^/splunk-otel/(?P<namespace>[^/]+)/(?P<pod_name>[^/]+)/(?P<application>[^/]+)[.]log$'
        parse_from: attributes["log.file.path"]
      - id: parse-shallow-filepath
        type: regex_parser
        regex: '^/splunk-otel/(?P<namespace>[^/]+)/(?P<application>[^/]+)[.]log$'
        parse_from: attributes["log.file.path"]
      - id: nil-filepath
        type: move
        from: attributes["log.file.path"]
        to: attributes["nil_filepath"]
      - id: catchall
        type: move
        from: attributes["log.file.path"]
        to: attributes["catchall"]

exporters:
  splunk_hec/logs:
    # Splunk HTTP Event Collector token.
    token: "{{ splunk_token }}"
    # URL to a Splunk instance to send data to.
    endpoint: "{{ splunk_full_endpoint }}"
    # Optional Splunk source: https://docs.splunk.com/Splexicon:Source
    source: "output"
    # Splunk index, optional name of the Splunk index targeted.
    index: "{{ splunk_index_name }}"
    # Maximum HTTP connections to use simultaneously when sending data. Defaults to 100.
    #max_connections: 20
    # Whether to disable gzip compression over HTTP. Defaults to false.
    disable_compression: false
    # HTTP timeout when sending data. Defaults to 10s.
    timeout: 900s
    tls:
      # Whether to skip checking the certificate of the HEC endpoint when sending data over HTTPS. Defaults to false.
      # For this demo, we use a self-signed certificate on the Splunk docker instance, so this flag is set to true.
      insecure_skip_verify: true

processors:
  batch:

extensions:
  health_check:
    endpoint: 0.0.0.0:8080
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679
  file_storage/checkpoint:
    directory: /output/
    timeout: 10s
    compaction:
      on_start: true
      directory: /output/
      max_transaction_size: 65_536

service:
  extensions: [pprof, zpages, health_check, file_storage/checkpoint]
  pipelines:
    logs:
      receivers: [filelog/mule-logs-volume]
      processors: [batch]
      exporters: [splunk_hec/logs]
I have a current Splunk install in my production environment, all running Red Hat Linux. I have a single server with Splunk Enterprise installed on it, as well as SplunkForwarder, and 100+ other servers with SplunkForwarder installed, all pushing logs to the Splunk Enterprise server. All servers had v9.1.2 of the forwarder installed, and the Enterprise server was also on this version.

I recently updated the Splunk Enterprise server, as well as the Splunk forwarders on all servers, to version 9.2.0.1 successfully, with one exception: the forwarder installed on my Splunk Enterprise server (named "splunkenter1") fails. It displays the error listed below, which says the splunkforwarder package conflicts with the splunk install.

I have another Splunk Enterprise install (using the same setup) in another environment, and I did not run into this issue; that upgrade worked without problems. I've tried Googling the issue but haven't found much. Does anyone have ideas on what could be causing this, or has anyone seen this before?

[root@splunkenter1 ~]# dnf update splunkforwarder
Last metadata expiration check: 0:01:36 ago on Mon 22 Apr 2024 04:47:07 PM UTC.
Dependencies resolved.
========================================================================================================
 Package            Architecture   Version                  Repository    Size
========================================================================================================
Upgrading:
 splunkforwarder    x86_64         9.2.0.1-d8ae995bf219     splunk-repo   44 M

Transaction Summary
========================================================================================================
Upgrade  1 Package

Total download size: 44 M
Is this ok [y/N]: y
Downloading Packages:
splunkforwarder-9.2.0.1-d8ae995bf219.x86_64.rpm                          41 MB/s | 44 MB     00:01
--------------------------------------------------------------------------------------------------------
Total                                                                    41 MB/s | 44 MB     00:01
Running transaction check
Transaction check succeeded.
Running transaction test
The downloaded packages were saved in cache until the next successful transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Transaction test error:
  file /usr/lib/.build-id/03/f57acc2883000e6b54bf75c7e67d1a07446919 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/06/a82be30cc16ea5bea39f78f8056447e18beb15 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/1a/b0b8e873c6d668dcd3361470954d12004926cd from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/1e/8edb02a946c645cd20558aa8a6b420792f5541 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/35/e87a7fb154de7d5226e5a0a28c80ffd0c1be48 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/3a/3aac493bff5bb22e02b8726142dd67443dd03c from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/42/abc0f2a26bfb13b563104e87287312420c707e from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/44/6a270f1de8d26f47bf9ff9ae778e1fd3332403 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/64/b2324ff715d30c8a91dee6a980d63c291648d8 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/65/274a42201dd21f83996ba7c8bd0ba0dc3894c8 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/6d/dd008477651e7c8febce4699a739aaf188b0ae from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/88/cbe6deabd44a4766207eebf7c5e74f7ed53120 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/8a/6ee8699fb74fb883874a1123d91acf0b0d98a6 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/94/ea2865a21761f062a2db312845c535d5429bfc from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/95/d5fe61c313d8a5616f8a45f6c7d05151283ab6 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/96/b9463c40fc6541345a4b87634e8517281f8d4d from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/99/93008fdae763af21c831956de21501bb09e197 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/9b/2a882e45910da32603baf28a13b1630987184e from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/9f/b5fd366b32867d537caa84d4b2b521f5c21083 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/a0/1ae9032915dce67a58e8696c3c9fe195193d77 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/a1/616e140409dc54f0db2bf02ed7e114f07490af from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/b6/6dd3d33542916fff507849621dac5f763a98a2 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/b6/fd3c259a4c6e552d9b067f39e66c03cc134895 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/b7/e3d0b70694caa826df19d93b7341de0decdad3 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/bc/f1c9c6878bb887ef6869012b79c97546983b83 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/c8/d218675e02086588c28882c28b3533069d505c from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/d0/be01f291a5b978e02dcdd0069b82ce8a764dbf from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/d3/7dcf7bcf859ed048625d20139782517947e6e0 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/d7/30a0409850e89f806f3798ca99b378c335b7a5 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/dc/259ac038741ecbd76f6052a9fa403bc5ab5ab3 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/de/294f4dd1fa80d590074161566f06b39b9230fb from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/e0/0ee3712cdbd590286c2b8da49724fdaf6dee15 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/e6/7f07efdda1fcfe82b6ceb170412f22e03d2ab5 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/ec/dc3eeaba4750e657f5910fa2adb21365533f27 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/ee/6addfc324fb4bf57058df3adf7ea55dff4953f from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/f1/0b5a5bc3bcb996183924bd6029efba8290c71a from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/f2/c0dd88030fc9e343f6d9104a5015938cfe3503 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/f3/61ef732e036606eef3d78bb13f6d6165bcd927 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/f4/c1fc01304f2796efaabefd2a6350ba67cc9edc from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/f9/3cf5828d46fbdd6e82b2d18a4a5c650b84c185 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/fa/a370a95319b4a8ce1bd239652457843a09c15e from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64
  file /usr/lib/.build-id/fd/201b0799acb29720c90a6129be08800ce4b7e5 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64