Hi, I have a simple dropdown with 3 options: All, AA, and BB. When I select AA or BB I get correct results; however, when I select "All" it says "No search results returned". Not sure where I am going wrong. Can anyone help me solve this issue?

"input_iUKfLZBh": {
    "options": {
        "items": [
            { "label": "AA", "value": "AA" },
            { "label": "BB", "value": "BB" },
            { "label": "All", "value": "*" }
        ],
        "token": "Config_type",
        "defaultValue": "AA"
    },
    "title": "Select Error Type",
    "type": "input.dropdown"
}
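A sketch of one common cause, in case it applies here: a value of * only acts as a wildcard when the token is used in a search-style filter. If the panel's query wraps the token in a quoted comparison (e.g. | where Config_type="$Config_type$"), the * is treated as a literal asterisk and matches nothing. Assuming the index, sourcetype, and field names below are placeholders:

```
index=my_index sourcetype=my_sourcetype Config_type=$Config_type$
| stats count by Config_type
```

With this form, selecting "All" expands to Config_type=*, which matches every event that carries the field.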
If you’re a Splunk Certified practitioner, then you will be excited by this validation – all that hard work is paying off. According to a recent survey of Splunk users from our community, those who had earned Splunk Certification earned 31% more than their uncertified peers. And, if you’re a business leader who is invested in the success of your organization, you are likely among the 86% who feel they’re in a stronger competitive position because of their teams’ Splunk-certified skills.

Let’s Roll the Video

These are just a few of the powerful metrics that point to the career-impacting value of getting Splunk Certified. If you want the 60-second, fast-paced pitch about the way Splunk Certifications are impacting careers, check out our latest video.

And, There’s More…

You can also learn more about how proficiency in Splunk supports resilience, career growth, and gaining a competitive edge by checking out the 2023 Splunk Career Impact Report.

And, Don’t Forget about Vegas

Register today to schedule and take any exam onsite at .conf24 in Las Vegas. The Splunk Certification testing center will be up and running from Tuesday, June 11 to Friday, June 14, 2024. Get all the details here!

Happy Learning!

- Callie Skokos on behalf of the Splunk Education Crew
I'm trying to remove some Windows events from being ingested ... example below. The regex I've tried in both Ingest Actions and the old method works both at regex101 and in my SPL:

index=win* EventCode=4103 Message=*Files\\SplunkUniversalForwarder*
| regex "EventCode=4103(.|\r|\n)+\s+Files.SplunkUniversalForwarder.bin.splunk-powershell.ps1"

Yet, when I configure an ingest action ruleset, nothing gets removed.

[_rule:ruleset_WinEventLogSecurity:filter:regex:ft7j3fkn]
INGEST_EVAL = queue=if(match(_raw, "EventCode=4103(.|\\r|\\n)+\\s+Files.SplunkUniversalForwarder.bin.splunk-powershell.ps1"), "nullQueue", queue)
STOP_PROCESSING_IF = queue == "nullQueue"

Same goes for trying to do it "the old way":

[drop_4103_splunkpowershell]
DEST_KEY = queue
REGEX = EventCode=4103(.|\r|\n)+\s+Files.SplunkUniversalForwarder.bin.splunk-powershell.ps1
FORMAT = nullQueue

Sample event:

04/04/2024 07:02:28 PM
LogName=Microsoft-Windows-PowerShell/Operational
EventCode=4103
EventType=4
ComputerName=redacted
User=NOT_TRANSLATED
Sid=S-1-5-18
SidType=0
SourceName=Microsoft-Windows-PowerShell
Type=Information
RecordNumber=1258288151
Keywords=None
TaskCategory=Executing Pipeline
OpCode=To be used when operation is just executing a method
Message=CommandInvocation(Start-Sleep): "Start-Sleep"
ParameterBinding(Start-Sleep): name="Milliseconds"; value="200"
Context:
        Severity = Informational
        Host Name = ConsoleHost
        Host Version = 5.1.17763.5576
        Host ID = 222d8490-3c1f-486d-94ed-47f91e59da32
        Host Application = powershell.exe -command $input |C:\Program` Files\SplunkUniversalForwarder\bin\splunk-powershell.ps1 C:\Program` Files\SplunkUniversalForwarder e20c0be00a8583fe
        Engine Version = 5.1.17763.5576
        Runspace ID = 87084a50-365f-409b-aed6-d666c6c6b2b
        Pipeline ID = 1
        Command Name = Start-Sleep
        Command Type = Cmdlet
        Script Name = .......
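One thing worth checking (an observation, not a confirmed diagnosis): the ruleset name suggests it is attached to the Security event log sourcetype, while the sample event comes from the Microsoft-Windows-PowerShell/Operational channel. Parse-time filters only fire on the sourcetype stanza they are bound to, and they must run on the first full Splunk instance (indexer or heavy forwarder) the data passes through. A sketch of the classic props/transforms pairing; the sourcetype name below is an assumption to verify against what these events actually arrive as:

```
# props.conf (on the indexer or heavy forwarder that first parses the data)
# Stanza name is an assumption -- confirm the real sourcetype of these events
[WinEventLog:Microsoft-Windows-PowerShell/Operational]
TRANSFORMS-drop_4103 = drop_4103_splunkpowershell

# transforms.conf
[drop_4103_splunkpowershell]
REGEX = EventCode=4103(.|\r|\n)+\s+Files.SplunkUniversalForwarder.bin.splunk-powershell.ps1
DEST_KEY = queue
FORMAT = nullQueue
```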
Hi, the requirement is that the user makes a dynamic selection (time range from the time picker, environment from the env dropdown, and a few more) and clicks a submit button. As soon as he clicks submit, a CSV file should be generated per the user's input selection, and later the user should be able to reference that CSV in a dashboard panel to create different visualisations. Is that possible in Splunk?
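One approach that may meet this (a sketch, not a turnkey solution): have the submitted search end in outputlookup, which writes a CSV lookup reflecting the user's selections; other panels can then read it back with inputlookup. All index, token, and file names below are placeholders:

```
index=$env$ earliest=$time_tok.earliest$ latest=$time_tok.latest$
| stats count by host
| outputlookup user_selection_results.csv
```

A second panel could then start from | inputlookup user_selection_results.csv to build other visualisations. Note the CSV is rewritten on every submit, so earlier selections are overwritten unless you vary the file name.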
Hello Team, we are in the process of setting up DB monitoring using AppDynamics DB. We get the attached error while accessing the Activity, Query, Session, etc. tabs. 1) How and where do we enable the Events Service (controller or DB Collector)? 2) Will there be any performance impact on the existing setup if we enable the Events Service? Thanks
Hello all, I noticed that the timestamp in the activity log is in UTC, and also, while using the timer app and adding "$now()" to the event name, that timestamp is in UTC as well. It is not the time zone I defined in the user settings nor in the administration/company settings. Is there a way to change the time zone from UTC to a different one?
Hello, can I get a regex that matches this: permission=Permission12345? I have tried to come up with one but it's not working. Thanks in advance.
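If the goal is to extract that key/value pair in Splunk, a rex along these lines should match, assuming the value is always the literal word Permission followed by digits:

```
... your base search ...
| rex field=_raw "permission=(?<permission>Permission\d+)"
```

If the value can be any non-space token rather than Permission plus digits, (?<permission>\S+) is the looser alternative.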
I'm trying to deploy a cluster agent in my Kubernetes cluster to monitor the infrastructure using the kubectl CLI. I've followed the steps and executed these commands:

kubectl create -f cluster-agent-operator.yaml
kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>
kubectl create -f cluster-agent.yaml

However, the cluster agent pod is stuck in the "CrashLoopBackOff" state. The logs indicate an issue with the account access key:

[ERROR]: 2024-04-03 18:29:45 - main.go:183 - Account accessKey is not specified
[ERROR]: 2024-04-03 18:29:45 - main.go:184 - Please provide account accessKey before starting cluster-agent. Exiting...

I've verified that the cluster-agent-secret contains the controller-key with the correct access key value. What could be causing this issue despite providing the access key in the secret? Are there any additional configuration steps I might be missing?

Reference: https://docs.appdynamics.com/appd/22.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/install-the-cluster-agent/install-the-cluster-agent-with-the-kubernetes-cli
Hello, I have this data here:

2024-04-03 13:57:54 10.237.8.167 GET / "><script>alert('struts_sa_surl_xss.nasl-1712152675')</script> 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 2 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET / - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 0 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET / - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET / - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET /Default.aspx - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 0 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET /home.jsf autoScroll=0%2c275%29%3b%2f%2f--%3e%3c%2fscript%3e%3cscript%3ealert%28%27myfaces_tomahawk_autoscroll_xss.nasl%27 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET /admin/statistics/ConfigureStatistics - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 2 10.236.125.4

It is not line breaking properly as expected for our IIS logs. This is what I currently have for our sourcetype stanza on the indexer:

[iis]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
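For what it's worth, the stanza itself looks consistent with the sample data, so one thing to verify (an assumption, not a diagnosis) is that these props live on the first parsing tier the data crosses (indexer or heavy forwarder) and that the incoming data really arrives with sourcetype iis rather than being renamed along the way. If a universal forwarder sits in front, adding an event breaker there can also help with chunk boundaries:

```
# props.conf on the universal forwarder (hedged: this affects how the UF
# chunks data for load balancing; it does not replace indexer-side breaking)
[iis]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
```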
Hi,

Assuming a sample of data from this example:

| makeresults count=5
| eval f1=random()%2
| eval f2=random()%2
| eval f3=random()%2
| eval f4=random()%2
| eval H=round(((random() % 102)/(102)) * (104 - 100) + 100)

H    f1  f2  f3  f4
100  1   0   0   1
100  1   1   0   1
101  1   1   0   0
102  1   1   1   0

I want to build a chart which contains the distinct count of H for each of f1, f2, f3, f4 where the flag is 1:

f1  f2  f3  f4
3   3   1   1

Can someone help?
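One way to get that single-row result is a stats call with inline evals, counting H only where each flag equals 1:

```
| makeresults count=5
| eval f1=random()%2, f2=random()%2, f3=random()%2, f4=random()%2
| eval H=round(((random() % 102)/(102)) * (104 - 100) + 100)
| stats dc(eval(if(f1==1, H, null()))) AS f1
        dc(eval(if(f2==1, H, null()))) AS f2
        dc(eval(if(f3==1, H, null()))) AS f3
        dc(eval(if(f4==1, H, null()))) AS f4
```

Each dc(eval(...)) returns null for rows where the flag is 0, and dc ignores nulls, so the output is the distinct count of H per flag.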
Hi, I am trying to collect metrics from various sources with the OTel Collector and send them to our Splunk Enterprise instance via a HEC. Collecting and sending the metrics via OTel seems to work quite fine and I was quickly able to see metrics in my Splunk index. However, what I am completely missing are the labels of those Prometheus metrics in Splunk. Here is an example of some of the metrics I scrape:

# HELP jmx_exporter_build_info A metric with a constant '1' value labeled with the version of the JMX exporter.
# TYPE jmx_exporter_build_info gauge
jmx_exporter_build_info{version="0.20.0",name="jmx_prometheus_javaagent",} 1.0
# HELP jvm_info VM version info
# TYPE jvm_info gauge
jvm_info{runtime="OpenJDK Runtime Environment",vendor="AdoptOpenJDK",version="11.0.8+10",} 1.0
# HELP jmx_config_reload_failure_total Number of times configuration have failed to be reloaded.
# TYPE jmx_config_reload_failure_total counter
jmx_config_reload_failure_total 0.0
# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
# TYPE jvm_gc_collection_seconds summary
jvm_gc_collection_seconds_count{gc="G1 Young Generation",} 883.0
jvm_gc_collection_seconds_sum{gc="G1 Young Generation",} 133.293
jvm_gc_collection_seconds_count{gc="G1 Old Generation",} 0.0
jvm_gc_collection_seconds_sum{gc="G1 Old Generation",} 0.0
# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
# TYPE jvm_memory_pool_allocated_bytes_total counter
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 6.76448896E8
jvm_memory_pool_allocated_bytes_total{pool="G1 Old Gen",} 1.345992784E10
jvm_memory_pool_allocated_bytes_total{pool="G1 Eden Space",} 9.062406160384E12
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 3.38238592E8
jvm_memory_pool_allocated_bytes_total{pool="G1 Survivor Space",} 1.6919822336E10
jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 1.41419488E8
jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 1.141665096E9
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 3544448.0

I do see the values in Splunk, but especially for the last metric "jvm_memory_pool_allocated_bytes_total" the label of which pool is lost in Splunk. Is this intentional, or am I missing something? The getting started page for metrics also has no information on where those labels are stored and how I could query based on them (https://docs.splunk.com/Documentation/Splunk/latest/Metrics/GetStarted).

tia,
Jörg
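In Splunk metric indexes, Prometheus-style labels are normally carried as dimensions on each metric data point, so they do not appear as separate fields but become available when you group by them. A sketch, assuming the data landed in a metrics index named otel_metrics (a placeholder) and the labels survived the pipeline:

```
| mstats sum(jvm_memory_pool_allocated_bytes_total) WHERE index=otel_metrics BY pool
```

To see which dimensions actually exist on the ingested metrics (and thus whether the labels were dropped before indexing or are merely not displayed), something like | mcatalog values(_dims) WHERE index=otel_metrics can help.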
Hi Guys, in my scenario I want to compare two column values. If they match, that's fine; if the values differ, I want to display both field values in some colour in the Splunk dashboard.

Field1  Field2
28      28
100     99
33      56
18      18
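A common pattern (sketched with your field names; the status values and colours are up to you) is to compute a comparison flag in SPL and then colour the table on that flag using the dashboard's built-in column formatting:

```
... your base search ...
| eval status=if(Field1 == Field2, "match", "mismatch")
```

In the table's column formatting options you can then map "mismatch" to red and "match" to green, either on the status column or on Field1/Field2 themselves.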
Is there a Splunk query I can use to list when a CD drive is accessed and written to, and the users associated with those actions?
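There is no single built-in query for this; it depends on what auditing is enabled on the endpoints. Assuming Windows Security auditing with the Removable Storage audit policy turned on (a prerequisite, not a given), object-access activity typically appears as EventCode 4663, and a sketch like this could surface it (the index, sourcetype, and field names are assumptions that vary by add-on version):

```
index=wineventlog EventCode=4663
| stats count BY user, Object_Name, Accesses
| sort - count
```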
Is there a query I can add to my Splunk dashboard that will list accounts inactive for over 35 days?
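One sketch, assuming "inactive" means no successful Windows logon (EventCode 4624) in the last 35 days and that the relevant logs sit in an index named wineventlog (a placeholder):

```
index=wineventlog EventCode=4624 earliest=-90d
| stats latest(_time) AS last_logon BY user
| where last_logon < relative_time(now(), "-35d")
| eval last_logon=strftime(last_logon, "%Y-%m-%d %H:%M:%S")
```

Caveat: this only sees accounts that logged in at least once within the search window; for a complete picture of dormant directory accounts, querying the directory itself (e.g. via an LDAP add-on) is usually more reliable.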
Hello Splunkers, my Splunk instance is configured with default SAML authentication. Now I want to add users from an external domain to access a list of Splunk dashboards. How can I do that? I searched the community and found that we can use en-US/account/login?loginType=splunk after changing enable_insecure_login = False in web.conf. I'm a little worried about the consequences after I change the above setting. Is there any way to provide access to external users without any security concerns? Thank you in advance!
I'm looking to export Services from Splunk ITSI; however, there is no direct export feature in the GUI (at least within the Services page). Is there any other way to export ITSI services?
From what I understand about Splunk, it works on the raw data and does not parse it; it marks and "segments" areas of the data in the tsidx file. Also, from what I understand about HF vs. UF, unlike the universal forwarder, the heavy forwarder does part of the indexing itself. So what exactly does it index? Does it segment the raw data into the tsidx file and send both to the indexer?
I'm experimenting with doing ETW logging of Microsoft IIS, where the IIS log ends up as XML in a windows eventlog. But I have problems getting Splunk to use the correct timestamp field, Splunk uses the TimeCreated property for eventtime (_time), and not the date and time properties that indicate when IIS served the actual webpage. An example: <Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-IIS-Logging' Guid='{7e8ad27f-b271-4ea2-a783-a47bde29143b}'/><EventID>6200</EventID><Version>0</Version><Level>4</Level><Task>0</Task><Opcode>0</Opcode><Keywords>0x8000000000000000</Keywords><TimeCreated SystemTime='2024-04-04T12:23:43.811459900Z'/><EventRecordID>11148</EventRecordID><Correlation/><Execution ProcessID='1892' ThreadID='3044'/><Channel>Microsoft-IIS-Logging/Logs</Channel><Computer>sw2iisxft</Computer><Security UserID='S-1-5-18'/></System><EventData><Data Name='EnabledFieldsFlags'>2149961727</Data><Data Name='date'>2024-04-04</Data><Data Name='time'>12:23:37</Data><Data Name='cs-username'>ER\4dy</Data><Data Name='s-sitename'>W3SVC5</Data><Data Name='s-computername'>sw2if</Data><Data Name='s-ip'>192.168.32.86</Data><Data Name='cs-method'>GET</Data><Data Name='cs-uri-stem'>/</Data><Data Name='cs-uri-query'>blockid=2&amp;roleid=8&amp;logid=21</Data><Data Name='sc-status'>200</Data><Data Name='sc-win32-status'>0</Data><Data Name='sc-bytes'>39600</Data><Data Name='cs-bytes'>984</Data><Data Name='time-taken'>37</Data><Data Name='s-port'>443</Data><Data Name='csUser-Agent'>Mozilla/5.0+(Windows+NT+10.0;+Win64;+x64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/123.0.0.0+Safari/537.36+Edg/123.0.0.0</Data><Data Name='csCookie'>-</Data><Data Name='csReferer'>https://tsidologg/?blockid=2&amp;roleid=8</Data><Data Name='cs-version'>-</Data><Data Name='cs-host'>-</Data><Data Name='sc-substatus'>0</Data><Data Name='CustomFields'>X-Forwarded-For - Content-Type - https on host tsidologg</Data></EventData></Event> I've 
tried every combination in props.conf that I can think of. This should work, but doesn't:

TIME_PREFIX = <Data Name='date'>
MAX_TIMESTAMP_LOOKAHEAD = 100
TIME_FORMAT =<Data Name='date'>%Y-%m-%d</Data><Data Name='time'>%H:%M:%S</Data>
TZ = UTC

Any ideas?
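One detail that stands out (an observation, not a verified fix): TIME_PREFIX already consumes the literal <Data Name='date'>, so repeating that same literal at the start of TIME_FORMAT means the strptime pattern can never match at the position where Splunk begins reading the timestamp. A sketch of the adjusted stanza; the sourcetype name is an assumption:

```
# props.conf -- sketch; strptime allows literal text, so the XML between
# the date and time values is matched verbatim inside TIME_FORMAT
[XmlWinEventLog:Microsoft-IIS-Logging/Logs]
TIME_PREFIX = <Data Name='date'>
TIME_FORMAT = %Y-%m-%d</Data><Data Name='time'>%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 60
TZ = UTC
```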
I have a situation with ingestion latency that I am trying to fix. The heavy forwarder is set to Central Standard Time in the front end under Preferences. Does the front-end setting have any bearing on props.conf?
I am trying to instrument a Xamarin Forms app with the following configuration:

var config = AppDynamics.Agent.AgentConfiguration.Create("<EUM_APP_KEY>");
config.CollectorURL = "http://<IP_Address>:7001";
config.CrashReportingEnabled = true;
config.ScreenshotsEnabled = true;
config.LoggingLevel = AppDynamics.Agent.LoggingLevel.All;
AppDynamics.Agent.Instrumentation.EnableAggregateExceptionHandling = true;
AppDynamics.Agent.Instrumentation.InitWithConfiguration(config);

In the documentation, it is stated that: "The Xamarin Agent does not support automatic instrumentation for network requests made with any library. You will need to manually instrument HTTP network requests regardless of what library is used." (Mobile RUM Supported Environments, appdynamics.com) Yet, in the links Customize the Xamarin Instrumentation (appdynamics.com) and Instrument Xamarin Applications (appdynamics.com) I see that I can use the AppDynamics.Agent.AutoInstrument.Fody package to automatically instrument the network requests.
I tried to follow the approach of automatic instrumentation using both the AppDynamics.Agent and AppDynamics.Agent.AutoInstrument.Fody packages as per the docs, but I get the below error when building the project:

MSBUILD : error : Fody: An unhandled exception occurred:
MSBUILD : error : Exception:
MSBUILD : error : Failed to execute weaver /Users/username/.nuget/packages/appdynamics.agent.autoinstrument.fody/2023.12.0/build/../weaver/AppDynamics.Agent.AutoInstrument.Fody.dll
MSBUILD : error : Type:
MSBUILD : error : System.Exception
MSBUILD : error : StackTrace:
MSBUILD : error : at InnerWeaver.ExecuteWeavers () [0x0015a] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:222
MSBUILD : error : at InnerWeaver.Execute () [0x000fe] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:112
MSBUILD : error : Source:
MSBUILD : error : FodyIsolated
MSBUILD : error : TargetSite:
MSBUILD : error : Void ExecuteWeavers()
MSBUILD : error : Sequence contains more than one matching element
MSBUILD : error : Type:
MSBUILD : error : System.InvalidOperationException
MSBUILD : error : StackTrace:
MSBUILD : error : at System.Linq.Enumerable.Single[TSource] (System.Collections.Generic.IEnumerable`1[T] source, System.Func`2[T,TResult] predicate) [0x00045] in /Users/builder/jenkins/workspace/build-package-osx-mono/2020-02/external/bockbuild/builds/mono-x64/external/corefx/src/System.Linq/src/System/Linq/Single.cs:71
MSBUILD : error : at AppDynamics.Agent.AutoInstrument.Fody.ImportedReferences.LoadTypes () [0x000f1] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ImportedReferences.cs:117
MSBUILD : error : at AppDynamics.Agent.AutoInstrument.Fody.ImportedReferences..ctor (ModuleWeaver weaver) [0x0000d] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ImportedReferences.cs:90
MSBUILD : error : at ModuleWeaver.Execute () [0x00000] in /opt/buildAgent/work/2ab48d427bca2ab0/sdk/AppDynamics.Agent.AutoInstrument.Fody/ModuleWeaver.cs:17
MSBUILD : error : at InnerWeaver.ExecuteWeavers () [0x000b7] in C:\projects\fody\FodyIsolated\InnerWeaver.cs:204
MSBUILD : error : Source:
MSBUILD : error : System.Core
MSBUILD : error : TargetSite:
MSBUILD : error : Mono.Cecil.MethodDefinition Single[MethodDefinition](System.Collections.Generic.IEnumerable`1[Mono.Cecil.MethodDefinition], System.Func`2[Mono.Cecil.MethodDefinition,System.Boolean])

Any help please? Thank you