All Topics

I'm experimenting with ETW logging of Microsoft IIS, where the IIS log ends up as XML in a Windows event log. But I have problems getting Splunk to use the correct timestamp field: Splunk uses the TimeCreated property for the event time (_time), not the date and time properties that indicate when IIS actually served the page. An example:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-IIS-Logging' Guid='{7e8ad27f-b271-4ea2-a783-a47bde29143b}'/><EventID>6200</EventID><Version>0</Version><Level>4</Level><Task>0</Task><Opcode>0</Opcode><Keywords>0x8000000000000000</Keywords><TimeCreated SystemTime='2024-04-04T12:23:43.811459900Z'/><EventRecordID>11148</EventRecordID><Correlation/><Execution ProcessID='1892' ThreadID='3044'/><Channel>Microsoft-IIS-Logging/Logs</Channel><Computer>sw2iisxft</Computer><Security UserID='S-1-5-18'/></System><EventData><Data Name='EnabledFieldsFlags'>2149961727</Data><Data Name='date'>2024-04-04</Data><Data Name='time'>12:23:37</Data><Data Name='cs-username'>ER\4dy</Data><Data Name='s-sitename'>W3SVC5</Data><Data Name='s-computername'>sw2if</Data><Data Name='s-ip'>192.168.32.86</Data><Data Name='cs-method'>GET</Data><Data Name='cs-uri-stem'>/</Data><Data Name='cs-uri-query'>blockid=2&amp;roleid=8&amp;logid=21</Data><Data Name='sc-status'>200</Data><Data Name='sc-win32-status'>0</Data><Data Name='sc-bytes'>39600</Data><Data Name='cs-bytes'>984</Data><Data Name='time-taken'>37</Data><Data Name='s-port'>443</Data><Data Name='csUser-Agent'>Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/123.0.0.0+Safari/537.36+Edg/123.0.0.0</Data><Data Name='csCookie'>-</Data><Data Name='csReferer'>https://tsidologg/?blockid=2&amp;roleid=8</Data><Data Name='cs-version'>-</Data><Data Name='cs-host'>-</Data><Data Name='sc-substatus'>0</Data><Data Name='CustomFields'>X-Forwarded-For - Content-Type - https on host tsidologg</Data></EventData></Event>

I've tried every combination in props.conf that I can think of. This should work, but doesn't:

TIME_PREFIX = <Data Name='date'>
MAX_TIMESTAMP_LOOKAHEAD = 100
TIME_FORMAT =<Data Name='date'>%Y-%m-%d</Data><Data Name='time'>%H:%M:%S</Data>
TZ = UTC

Any ideas?
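A minimal props.conf sketch along those lines (the sourcetype name is a placeholder, and this assumes the stanza is applied on the first full parsing tier for this data): TIME_FORMAT accepts literal text between the conversion specifiers, so the XML sitting between the date and time values can be carried inside it, and the TIME_PREFIX text should not be repeated there. Note that events collected by the native Windows event log input often get their timestamp from the input itself, in which case props-based extraction may not take effect:

# props.conf -- sourcetype name is a placeholder
[your:iis:etw:sourcetype]
TIME_PREFIX = <Data Name='date'>
TIME_FORMAT = %Y-%m-%d</Data><Data Name='time'>%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 60
TZ = UTC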
I have a situation with ingestion latency that I am trying to fix. The heavy forwarder is set to Central Standard Time in the front end under Preferences. Does the front-end setting have any bearing on the props.conf?
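For reference, the time-zone preference under a user's web UI settings only changes how times are rendered for that user; index-time parsing on the heavy forwarder follows props.conf. A minimal sketch (the sourcetype name is a placeholder):

# props.conf on the heavy forwarder (first full parsing tier)
[your:sourcetype]
TZ = America/Chicago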
I am trying to instrument a Xamarin Forms app with the following configuration:

var config = AppDynamics.Agent.AgentConfiguration.Create("<EUM_APP_KEY>");
config.CollectorURL = "http://<IP_Address>:7001";
config.CrashReportingEnabled = true;
config.ScreenshotsEnabled = true;
config.LoggingLevel = AppDynamics.Agent.LoggingLevel.All;
AppDynamics.Agent.Instrumentation.EnableAggregateExceptionHandling = true;
AppDynamics.Agent.Instrumentation.InitWithConfiguration(config);

In the documentation it is stated that: "The Xamarin Agent does not support automatic instrumentation for network requests made with any library. You will need to manually instrument HTTP network requests regardless of what library is used." (Mobile RUM Supported Environments, appdynamics.com). Yet in these links, Customize the Xamarin Instrumentation (appdynamics.com) and Instrument Xamarin Applications (appdynamics.com), I see that I can use the AppDynamics.Agent.AutoInstrument.Fody package to automatically instrument the network requests. I tried to follow the approach of automatic instrumentation using both the AppDynamics.Agent and AppDynamics.Agent.AutoInstrument.Fody packages as per the docs, but I get the below error when building the project:

MSBUILD : error : Fody: An unhandled exception occurred:
MSBUILD : error : Exception:
MSBUILD : error : Failed to execute weaver /Users/username/.nuget/packages/appdynamics.agent.autoinstrument.fody/2023.12.0/build/../weaver/AppDynamics.Agent.AutoInstrument.Fody.dll
MSBUILD : error : Type:
MSBUILD : error : System.Exception
MSBUILD : error : StackTrace:
MSBUILD : error : at InnerWeaver.ExecuteWeavers () [0x0015a] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:222
MSBUILD : error : at InnerWeaver.Execute () [0x000fe] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:112
MSBUILD : error : Source:
MSBUILD : error : FodyIsolated
MSBUILD : error : TargetSite:
MSBUILD : error : Void ExecuteWeavers()
MSBUILD : error : Sequence contains more than one matching element
MSBUILD : error : Type:
MSBUILD : error : System.InvalidOperationException
MSBUILD : error : StackTrace:
MSBUILD : error : at System.Linq.Enumerable.Single[TSource] (System.Collections.Generic.IEnumerable`1[T] source, System.Func`2[T,TResult] predicate) [0x00045] in /Users/builder/jenkins/workspace/build-package-osx-mono/2020-02/external/bockbuild/builds/mono-x64/external/corefx/src/System.Linq/src/System/Linq/Single.cs:71
MSBUILD : error : at AppDynamics.Agent.AutoInstrument.Fody.ImportedReferences.LoadTypes () [0x000f1] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ImportedReferences.cs:117
MSBUILD : error : at AppDynamics.Agent.AutoInstrument.Fody.ImportedReferences..ctor (ModuleWeaver weaver) [0x0000d] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ImportedReferences.cs:90
MSBUILD : error : at ModuleWeaver.Execute () [0x00000] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ModuleWeaver.cs:17
MSBUILD : error : at InnerWeaver.ExecuteWeavers () [0x000b7] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:204
MSBUILD : error : Source:
MSBUILD : error : System.Core
MSBUILD : error : TargetSite:
MSBUILD : error : Mono.Cecil.MethodDefinition Single[MethodDefinition](System.Collections.Generic.IEnumerable`1[Mono.Cecil.MethodDefinition], System.Func`2[Mono.Cecil.MethodDefinition,System.Boolean])

Any help please? Thank you
Hi, by chance I discovered that a power user with admin rights disabled the Sysmon agent and the Splunk forwarder on his computer to gain some extra CPU for his daily tasks. To react quicker to this type of "issue" in the future, I would like some alerts in Splunk informing me whenever any host, initially domain-joined workstations and servers, might have stopped reporting events. Has someone implemented something like this already? Any good article to follow? I plan to create a lookup table using ldapsearch and then an alert detecting which hosts from that table are not present in a basic host listing on the sysmon index, for example. Any other better or simpler approach? Many thanks
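A minimal SPL sketch of that comparison (the lookup name, the sysmon index, and the one-hour threshold are assumptions to adjust):

| inputlookup domain_hosts.csv
| fields host
| join type=left host
    [| tstats latest(_time) as last_seen where index=sysmon by host]
| eval last_seen=coalesce(last_seen, 0)
| where last_seen < relative_time(now(), "-1h@h")
| eval last_seen=if(last_seen=0, "never", strftime(last_seen, "%F %T"))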
Using the new  appdynamics-gradle-plugin 24.1.0 makes the build fail with the following exception: Execution failed for task ':ControlCenterApp:expandReleaseArtProfileWildcards'. > A failure occurred while executing com.android.build.gradle.internal.tasks.ExpandArtProfileWildcardsTask$ExpandArtProfileWildcardsWorkAction > invalid END header (bad central directory offset) The relevant stacktrace part: ... Caused by: java.util.zip.ZipException: invalid END header (bad central directory offset) at com.android.tools.profgen.ArchiveClassFileResourceProvider.getClassFileResources(ClassFileResource.kt:62) at com.android.tools.profgen.ProfgenUtilsKt.expandWildcards(ProfgenUtils.kt:60) at com.android.build.gradle.internal.tasks.ExpandArtProfileWildcardsTask$ExpandArtProfileWildcardsWorkAction.run(ExpandArtProfileWildcardsTask.kt:120) at com.android.build.gradle.internal.profile.ProfileAwareWorkAction.execute(ProfileAwareWorkAction.kt:74) at org.gradle.workers.internal.DefaultWorkerServer.execute(DefaultWorkerServer.java:63) at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:66) at org.gradle.workers.internal.NoIsolationWorkerFactory$1$1.create(NoIsolationWorkerFactory.java:62) at org.gradle.internal.classloader.ClassLoaderUtils.executeInClassloader(ClassLoaderUtils.java:100) at org.gradle.workers.internal.NoIsolationWorkerFactory$1.lambda$execute$0(NoIsolationWorkerFactory.java:62) at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:44) at org.gradle.workers.internal.AbstractWorker$1.call(AbstractWorker.java:41) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204) at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:199) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66) at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157) at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59) at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53) at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73) at org.gradle.workers.internal.AbstractWorker.executeWrappedInBuildOperation(AbstractWorker.java:41) at org.gradle.workers.internal.NoIsolationWorkerFactory$1.execute(NoIsolationWorkerFactory.java:59) at org.gradle.workers.internal.DefaultWorkerExecutor.lambda$submitWork$0(DefaultWorkerExecutor.java:170) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runExecution(DefaultConditionalExecutionQueue.java:187) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.access$700(DefaultConditionalExecutionQueue.java:120) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner$1.run(DefaultConditionalExecutionQueue.java:162) at org.gradle.internal.Factories$1.create(Factories.java:31) at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:264) at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:128) at 
org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:133) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.runBatch(DefaultConditionalExecutionQueue.java:157) at org.gradle.internal.work.DefaultConditionalExecutionQueue$ExecutionRunner.run(DefaultConditionalExecutionQueue.java:126) ... 2 more

Environment: com.android.tools.build:gradle:8.3.1

Any ideas what is happening here? It kind of sounds like there is something wrong with the library packaging itself, but that's just a guess; in fact I could open the renamed .jar with a zip tool myself. Cheers, Jerg
Hello, I am trying to install the Splunk Operator using OperatorHub in an OpenShift cluster through the web console, but it is failing with the error below:

installing: waiting for deployment splunk-operator-controller-manager to become ready: deployment "splunk-operator-controller-manager" not available: Deployment does not have minimum availability.

I also tried to follow the steps in this document, "How to deploy Splunk in an OpenShift environment" by Eduardo Patrocinio (Medium), but hit the following in the CLI as well:

oc apply -f https://raw.githubusercontent.com/patrocinio/openshift/main/splunk/standalone-standalone.yaml
error: resource mapping not found for name: "standalone" namespace: "splunk-operator" from "https://raw.githubusercontent.com/patrocinio/openshift/main/splunk/standalone-standalone.yaml": no matches for kind "Standalone" in version "enterprise.splunk.com/v1alpha2"
ensure CRDs are installed first

My requirement is to create an instance of Splunk in a namespace in OCP. Kindly help with this.
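The CLI error suggests the example manifest targets a CRD version (enterprise.splunk.com/v1alpha2) that current operator releases no longer serve. A minimal Standalone sketch, assuming a recent Splunk Operator 2.x install that serves v4 (verify which versions the CRD actually serves with: oc get crd standalones.enterprise.splunk.com -o yaml):

apiVersion: enterprise.splunk.com/v4
kind: Standalone
metadata:
  name: s1
  namespace: splunk-operator
  finalizers:
  - enterprise.splunk.com/delete-pvc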
Hello! So here is a doozy. We have blacklists in place using regex, in particular this one:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
source= XmlWinEventLog:Microsoft-Windows-Sysmon/Operational
disabled = false
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = $XmlRegex= (C:\\Program Files\\Preton\\PretonSaver\\PretonService.exe)

along with some other blacklists in sequence as blacklist2, blacklist3, etc. Now, on my own and some colleagues' local machines, where I have tested the inputs.conf file to see whether we are blacklisting correctly, it all works: no logs come into Splunk Cloud. BUT when this is deployed via our deployment server to the estate, the blacklist doesn't work. Does anybody know where I can troubleshoot this from? I've run btool and the confs come out fine. We are local UF agents going to a DS and then to Splunk Cloud.
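One thing worth comparing against the deployed copy is the blacklist value itself; if I recall the inputs.conf spec correctly, the regex after $XmlRegex is written with = and a quoted pattern, and $XmlRegex only matches when the events are rendered as XML. A hedged sketch to check with btool on an affected endpoint:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
renderXml = true
blacklist1 = $XmlRegex="C:\\Program Files\\Preton\\PretonSaver\\PretonService\.exe"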
Hi Splunkers, let me provide a bit of background. We are ingesting logs into Splunk from our DLP service provider's API via HEC. The script that runs the API reads the last 24 hours of events, gets the results in JSON format, and sends them to Splunk Cloud using HEC. The problem I have is that on every pull some of the events get duplicated, e.g. the same event_id appears again because the event changes its status daily and the script picks that up. So reporting is a nightmare. I know I can deduplicate at search time, but is there an easier way for Splunk to remove those duplicates and keep a single record per event_id?
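If the duplicates are the same event_id reappearing with an updated status, one search-time approach is to collapse to the newest record per id; a minimal sketch (the index and sourcetype are placeholders):

index=dlp sourcetype=dlp:events
| dedup event_id sortby -_time

For reporting, | stats latest(*) as * by event_id gives the same one-row-per-id shape while keeping the most recent status.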
Hi everyone, I was wondering how I would go about a stats count for two fields where one field depends on the other. For example, we have a field called service version and another called service operation. Operations aren't necessarily exclusive to a version, but I was wondering whether it is possible to do a stats count that would display something like: Service 1 - Operation 1: *how many* - Operation 2: *how many*, and so on. Is something like this possible?
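A minimal sketch of both shapes (the field names service_version and service_operation are assumptions):

... | stats count by service_version service_operation

... | chart count over service_version by service_operation

The first gives one row per version/operation pair; the second gives one row per version with a column per operation, which is closer to the "Service 1 - Operation 1: n - Operation 2: n" layout.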
Hello everyone, I've encountered an issue where certain customers appear to have duplicate ELB access logs. During a routine check, I noticed instances of identical events being logged with the exact same timestamp, which shouldn't normally occur. I'm using the Splunk_TA_aws app for ingesting logs, specifying each S3 bucket and the corresponding ELB log prefix as inputs. My search pattern is index=customer-index-{customer_name} sourcetype="aws:elb:accesslogs", aimed at isolating the data per customer. Upon reviewing the original logs directly within the S3 buckets, I confirmed that the duplicates are not present at the source; they only appear once ingested into Splunk. This leads me to wonder if there might be a configuration or processing step within Splunk or the AWS Add-on that could be causing these duplicates. Has anyone experienced a similar issue or could offer insights into potential causes or solutions? Any advice or troubleshooting tips would be greatly appreciated. Here we can see the same timestamp on the logs; if I add | dedup _raw, the number of events goes down from 12,710 to 6,535. Thank you in advance for your assistance.
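One way to see whether the duplicates come from overlapping inputs is to group them and count how many distinct sources contributed each raw event; a sketch (the index pattern is a placeholder):

index=customer-index-* sourcetype="aws:elb:accesslogs"
| stats count dc(source) as distinct_sources values(source) as sources by _raw
| where count > 1

If distinct_sources is greater than 1, two inputs (or two bucket/prefix definitions) are reading the same objects; if it is 1, the same input is re-reading them.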
Hi all, I was wondering if there's a way to create a search I can add to a dashboard that presents the peak day, and what the volume was, over a 30-day period. Essentially, when loading the dashboard I was hoping it could show whichever day the peak occurred on and not be replaced until a larger peak occurs, assuming that's even possible. I may have worded this poorly, so feel free to ask questions about what I'm trying to achieve.
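A minimal sketch that reduces the last 30 days to the single peak day (the base search is a placeholder):

index=your_index earliest=-30d@d latest=@d
| timechart span=1d count
| sort - count
| head 1
| eval peak_day=strftime(_time, "%F")
| table peak_day count

Because the search re-evaluates the full 30-day window on every dashboard load, the panel keeps showing the same peak day until a larger daily total appears inside that window (the peak only changes when a day beats it or the old peak ages out of the window).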
My apologies, I was using "eventTimestamp" instead of "@timestamp" in my rex command. I just realized this and it's working now. However, I do not need the date in the last column, only the time. Please help with how to do that. Details below.

Query:

index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
|fields message
|rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
|rex field=_raw "sourceSystem=(?<Source>[^,]*)"
|rex field=_raw "entityType=(?<Entity>\w+)"
|rex field=_raw "\"@timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"  --> Please help here
|sort Time desc
|dedup Entity
|table Source, BusDate, Entity, Time

Raw data:

{"@timestamp":"2024-04-04T02:25:59.366Z","level":"INFO","message":"Snapshot event published: SnapshotEvent(version=SnapshotVersion(sourceSystem=dbI-LDN, entityType=ACCOUNT, subType=, date=2024-04-03, version=1, snapshotSize=326718, uuid=8739e273-cedc-482b-b696-48357efc8704, eventTimestamp=2024-04-04T02:24:52.762129638), status=CREATED)","thread":"snapshot-checker-3","loggerName":"com.db.sdda.dc.kafka.snapshot.writer.InternalEventSender"}

I need only the time (e.g., 02:25:59 AM/PM) in the last column.
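To keep only the clock time in 12-hour format, one option is to extend the capture to seconds and re-render it with strptime/strftime; a sketch built on the existing rex:

|rex field=_raw "\"@timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})"
|eval Time=strftime(strptime(Time, "%Y-%m-%dT%H:%M:%S"), "%I:%M:%S %p")

If the sort and dedup should stay chronological, run the eval after those steps (or sort on a separate epoch field), since a value like "02:25:59 AM" no longer sorts by date.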
Good morning, I have some alerts that I have set up that are not triggering. They are Defender events. If I run the query as a normal search, I get the results that the alert misses. However, for some reason the alerts are not triggered: the email is not sent, and they do not appear in the Triggered Alerts section. This is my alert, and this is one of the events for which it should have triggered and did not. I also tried disabling throttling in case that was suppressing it. I also checked whether the search had been skipped, but it was not. Any idea?
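A quick way to confirm what the scheduler actually did with this alert (the saved search name is a placeholder):

index=_internal sourcetype=scheduler savedsearch_name="Your Alert Name"
| stats count by status result_count

If the runs show status=success with result_count=0, the scheduled run itself is returning nothing (often a time-range or permissions difference from the ad-hoc search) rather than the trigger condition failing.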
Good morning, I updated Splunk Enterprise from version 9.1.1 to 9.2.1, but the web interface no longer shows the connected machines. What can I verify? Thanks. OS: CentOS 7
I am planning to run a basic Splunk session for my team. Could you point me to any cheat sheet available online that I can download easily?
Query used:

index=* namespace="dk1017-j" sourcetype="kube:container:kafka-clickhouse-snapshot-writer" message="*Snapshot event published*" AND message="*dbI-LDN*" AND message="*2024-04-03*" AND message="*"
|fields message
|rex field=_raw "\s+date=(?<BusDate>\d{4}-\d{2}-\d{2})"
|rex field=_raw "sourceSystem=(?<Source>[^,]*)"
|rex field=_raw "entityType=(?<Entity>\w+)"
|rex field=_raw "\"timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"  -- this is not working
|sort Time desc
|dedup Entity
|table Source, BusDate, Entity, Time

This is how the raw data looks. I would like to extract only the time; please also suggest how I can convert it to AM/PM. Kindly provide a solution.
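In the raw JSON the key is "@timestamp" rather than "timestamp", so the rex needs the leading @ (as noted in the follow-up post above); a sketch of the corrected line:

|rex field=_raw "\"@timestamp\":\"(?<Time>\d{4}-\d{2}-\d{2}[T]\d{2}:\d{2})"

The 12-hour AM/PM conversion is sketched under that follow-up post.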
There are a few questions regarding Interactions in Dashboard Studio. This is the current code I have:

{
    "drilldown": "true",
    "visualizations": {
        "viz_ar3dmyq9": {
            "type": "splunk.line",
            "dataSources": {
                "primary": "ds_jjp4wUrz"
            },
            "eventHandlers": [
                {
                    "type": "drilldown.setToken",
                    "options": {
                        "tokens": [
                            {
                                "token": "click_time",
                                "key": "row._time.value"
                            }
                        ]
                    }
                },
                {
                    "type": "drilldown.linkToSearch",
                    "options": {
                        "query": "SomeSearchQuery",
                        "earliest": "$click_time$",
                        "latest": "$global_time.latest$",
                        "type": "custom",
                        "newTab": true
                    }
                }
            ],
            "options": {
                "seriesColorsByField": {},
                "dataValuesDisplay": "minmax",
                "y2AxisTitleText": ""
            }
        },
        "viz_leeY0Yzv": {
            "type": "splunk.markdown",
            "options": {
                "markdown": "**Value of Clicked row : $click_time$**",
                "backgroundColor": "#ffffff",
                "fontFamily": "Times New Roman",
                "fontSize": "extraLarge"
            }
        }
    },
    "dataSources": {
        "ds_uHpCvdwq": {
            "type": "ds.search",
            "options": {
                "enableSmartSources": true,
                "query": "someCustomQuery",
                "refresh": "10s",
                "refreshType": "delay"
            },
            "name": "CustomQuery"
        },
        "ds_jjp4wUrz": {
            "type": "ds.search",
            "options": {
                "query": "AnotherCustomQuery",
                "refresh": "10s",
                "refreshType": "delay"
            },
            "name": "CustomQuery2"
        }
    },
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    }
                }
            }
        }
    },
    "inputs": {
        "input_global_trp": {
            "type": "input.timerange",
            "options": {
                "token": "global_time",
                "defaultValue": "-60m@m,now"
            },
            "title": "Global Time Range"
        }
    },
    "layout": {
        "type": "absolute",
        "options": {
            "width": 1440,
            "height": 960,
            "display": "auto"
        },
        "structure": [
            {
                "item": "viz_ar3dmyq9",
                "type": "block",
                "position": {
                    "x": 0,
                    "y": 0,
                    "w": 1440,
                    "h": 330
                }
            },
            {
                "item": "viz_leeY0Yzv",
                "type": "block",
                "position": {
                    "x": 310,
                    "y": 780,
                    "w": 940,
                    "h": 30
                }
            }
        ],
        "globalInputs": [
            "input_global_trp"
        ]
    },
    "description": "Tracking objects",
    "title": "Info"
}

The line chart maps the data from 'AnotherCustomQuery' (ds_jjp4wUrz), which is a query returning a chart that gets the max object_time for the 3 objects at one-minute intervals:

.... | chart span=1m max(object_time) OVER _time by object

producing a three-line chart, each line representing one object. I can get the time the user clicked on, tokenized in "click_time", but a few puzzles remain:
1. When the user clicks on an object line, how do I get the line's object name and pass it along to my search query?
2. For the "drilldown.linkToSearch", how can I add 1 minute to the click_time and use that as the latest? Is it possible that I don't need that range and can search at a specific time instead?
3. When I open the dashboard, the first click search says "invalid earliest time", but subsequent clicks on the chart work fine. Is there a particular reason why this is happening?
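On question 1, drilldown.setToken can set more than one token from the same click; a sketch where the clicked series name is captured alongside the time (the "name" key is an assumption to verify against the Dashboard Studio event-handler documentation):

{
    "type": "drilldown.setToken",
    "options": {
        "tokens": [
            { "token": "click_time", "key": "row._time.value" },
            { "token": "click_object", "key": "name" }
        ]
    }
}

On question 3, $click_time$ has no value until the first click is made, so the first linkToSearch run goes out with an empty earliest and is rejected; giving the token an initial value, or not relying on it until it has been set, avoids the error.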
Hello    We received an alert from the Upgrade Readiness App about this app not being compatible with Python 3. This app appears to be an internal Splunk app. Does anyone know anything about it? ... See more...
Hello    We received an alert from the Upgrade Readiness App about this app not being compatible with Python 3. This app appears to be an internal Splunk app. Does anyone know anything about it? Splunk Cloud version: 9.1.2312.103   Thank you!  
Hi All, I have an issue where the Data Model Wrangler app can no longer see most tags where all other apps can. The Data Model Wrangler app only sees 30 tags (None of the CIM tags) where other apps see 830 tags. The Data Model Wrangler app used to be able to see all tags and all panels were displaying correctly. Now we get no results for the Field Quality Data score due to tag visibility. The permissions on the tags are everyone read and globally shared. Has anyone experienced anything similar where one app cannot see tags that all other apps can see? Thanks
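One way to compare what each app context can actually see is the tags configuration endpoint; a sketch (the app directory name is a placeholder, and this assumes the rest command is permitted in your role):

| rest splunk_server=local /servicesNS/nobody/data_model_wrangler/configs/conf-tags count=0
| stats count

Swapping the app in the path (for example search) and comparing the counts shows whether the difference lies in object sharing/permissions for that app context rather than in the tags themselves.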