All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I could not find a clear answer. We have a setup where we run an IIS server on a Windows virtual machine. On the IIS server we run a PHP webshop that makes calls to different databases and external services. Does your Observability system work out of the box on the PHP webshop, or is this not supported? The reason for the question is that some monitoring solutions, such as AppDynamics and New Relic, do not support that setup. The question is mainly to know whether we should start moving the setup to a different tech stack or whether we can wait a little.
Hello Splunkers!! Below is a sample event in which I want to mask the UserID and Password fields. There are no selected or interesting fields available, so I want to mask them in the raw event directly. Please suggest one solution from the UI using the rex command in sed mode, and a second solution using props.conf & transforms.conf on the backend.

Sample log:

<?xml version="1.0" encoding="UTF-8"?> <HostMessage><![CDATA[<?xml version="1.0" encoding="UTF-8" standalone="no"?><UserMasterRequest><MessageID>25255620</MessageID><MessageCreated>2024-04-05T07:00:55Z</MessageCreated><OpCode>CHANGEPWD</OpCode><UserId>pnkof123</UserId><Password>Summer123</Password><PasswordExpiry>2024-06-09</PasswordExpiry></UserMasterRequest>]]><original_header><IfcLogHostMessage xsi:schemaLocation="http://vanderlande.com/FM/Gtw/GtwLogging/V1/0/0 GtwLogging_V1.0.0.xsd"> <MessageId>25255620</MessageId> <MessageTimeStamp>2024-04-05T05:00:55Z</MessageTimeStamp> <SenderFmInstanceName>CMP_GTW</SenderFmInstanceName> <ReceiverFmInstanceName>FM_BPI</ReceiverFmInstanceName>   </IfcLogHostMessage></original_header></HostMessage>
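One possible sketch for both routes, assuming the XML tags are exactly `<UserId>` and `<Password>` as in the sample (the index and sourcetype names below are placeholders). Search-time masking from the UI with rex in sed mode:

```spl
index=your_index sourcetype=your_sourcetype
| rex mode=sed field=_raw "s/<UserId>[^<]+<\/UserId>/<UserId>########<\/UserId>/g"
| rex mode=sed field=_raw "s/<Password>[^<]+<\/Password>/<Password>########<\/Password>/g"
```

And index-time masking on the backend, which in a simple case needs only a SEDCMD in props.conf (no transforms.conf required) on the indexer or heavy forwarder that first parses the data:

```
# props.conf -- a sketch; stanza name must match your actual sourcetype
[your_sourcetype]
SEDCMD-mask_userid   = s/<UserId>[^<]+<\/UserId>/<UserId>########<\/UserId>/g
SEDCMD-mask_password = s/<Password>[^<]+<\/Password>/<Password>########<\/Password>/g
```

Note that search-time masking only changes what is displayed; index-time masking changes what is written to disk and cannot be undone for already-indexed data.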
Data Summary is not showing any hosts at all, even though I already added a UDP input on port 514 with the IP address.
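For comparison, a minimal UDP input sketch (index and sourcetype are placeholders). The input must live on the instance that actually receives the traffic, port 514 usually requires elevated privileges or a port redirect, and Data Summary only shows hosts after events are actually indexed:

```
# inputs.conf -- a sketch, not a known-good config for your environment
[udp://514]
connection_host = ip
sourcetype = syslog
index = main
```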
Hi guys, in my scenario I need to show error details per correlationId. There is a field tracePoint="EXCEPTION" and a message field containing PRD(ERROR):. In some cases we get an exception first and the transaction succeeds afterwards; in that case I want to ignore the transaction in my query. But it is not ignoring the successful correlationIds in my result:

index="mulesoft" applicationName="s-concur-api" environment=PRD (tracePoint="EXCEPTION" AND message!="*(SUCCESS)*")
| transaction correlationId
| rename timestamp as Timestamp correlationId as CorrelationId tracePoint as TracePoint content.ErrorType as Error content.errorType as errorType content.errorMsg as ErrorMsg content.ErrorMsg as errorMsg
| eval ErrorType=if(isnull(Error),"Unknown",Error)
| dedup CorrelationId
| eval errorType=coalesce(Error,errorType)
| eval Errormsg=coalesce(ErrorMsg,errorMsg)
| table CorrelationId, Timestamp, applicationName, locationInfo.fileName, locationInfo.lineInFile, errorType, message, Errormsg
| sort -Timestamp
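One way to drop correlationIds that eventually succeeded is to flag success per correlationId before filtering, instead of filtering at the event level (the event-level filter only removes the success events themselves, not the matching exceptions). A sketch, assuming success events contain "(SUCCESS)" in the message field:

```spl
index="mulesoft" applicationName="s-concur-api" environment=PRD
| eventstats max(eval(if(like(message,"%(SUCCESS)%"),1,0))) as hasSuccess by correlationId
| where tracePoint="EXCEPTION" AND hasSuccess=0
```

The rest of the original pipeline (rename, coalesce, table, sort) can then be appended unchanged after the `where`.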
Hi, I have a simple dropdown with 3 options: All, AA and BB. When I select AA/BB I get the correct results; however, when I select "All" it says "No search results returned". Not sure what I am doing wrong; can anyone help me solve this issue?

"input_iUKfLZBh": {
    "options": {
        "items": [
            { "label": "AA", "value": "AA" },
            { "label": "BB", "value": "BB" },
            { "label": "All", "value": "*" }
        ],
        "token": "Config_type",
        "defaultValue": "AA"
    },
    "title": "Select Error Type",
    "type": "input.dropdown"
}
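A common cause of this symptom is how the token is consumed: a `*` token value expands as a wildcard in a search-style clause, but is treated as a literal star inside quoted comparisons in `where`/`eval`. A sketch, with `error_type` standing in for whatever field the dashboard actually filters on:

```spl
index=your_index error_type=$Config_type$

index=your_index | where error_type="$Config_type$"
```

The first form returns everything when the token is `*`; the second returns nothing, because no event has the literal value "*". If the base search uses the second pattern, switching to a `search error_type=$Config_type$` style clause is one possible fix.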
I'm trying to prevent some Windows events from being ingested; example below. The regex I've tried, in both Ingest Actions and the old method, works both at regex101 and in my SPL:

index=win* EventCode=4103 Message=*Files\\SplunkUniversalForwarder* | regex "EventCode=4103(.|\r|\n)+\s+Files.SplunkUniversalForwarder.bin.splunk-powershell.ps1"

Yet when I configure an Ingest Actions ruleset, nothing gets removed.

[_rule:ruleset_WinEventLogSecurity:filter:regex:ft7j3fkn]
INGEST_EVAL = queue=if(match(_raw, "EventCode=4103(.|\\r|\\n)+\\s+Files.SplunkUniversalForwarder.bin.splunk-powershell.ps1"), "nullQueue", queue)
STOP_PROCESSING_IF = queue == "nullQueue"

The same goes for trying to do it "the old way":

[drop_4103_splunkpowershell]
DEST_KEY = queue
REGEX = EventCode=4103(.|\r|\n)+\s+Files.SplunkUniversalForwarder.bin.splunk-powershell.ps1
FORMAT = nullQueue

04/04/2024 07:02:28 PM
LogName=Microsoft-Windows-PowerShell/Operational
EventCode=4103
EventType=4
ComputerName=redacted
User=NOT_TRANSLATED
Sid=S-1-5-18
SidType=0
SourceName=Microsoft-Windows-PowerShell
Type=Information
RecordNumber=1258288151
Keywords=None
TaskCategory=Executing Pipeline
OpCode=To be used when operation is just executing a method
Message=CommandInvocation(Start-Sleep): "Start-Sleep"
ParameterBinding(Start-Sleep): name="Milliseconds"; value="200"
Context:
        Severity = Informational
        Host Name = ConsoleHost
        Host Version = 5.1.17763.5576
        Host ID = 222d8490-3c1f-486d-94ed-47f91e59da32
        Host Application = powershell.exe -command $input |C:\Program` Files\SplunkUniversalForwarder\bin\splunk-powershell.ps1 C:\Program` Files\SplunkUniversalForwarder e20c0be00a8583fe
        Engine Version = 5.1.17763.5576
        Runspace ID = 87084a50-365f-409b-aed6-d666c6c6b2b
        Pipeline ID = 1
        Command Name = Start-Sleep
        Command Type = Cmdlet
        Script Name = .......
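For "the old way", two things worth checking: the transforms stanza must be referenced from a matching props.conf stanza on the first full Splunk instance that parses these events (indexer or heavy forwarder, not a universal forwarder), and in transforms `.` does not match newlines unless the `(?s)` flag is set, which also avoids the slow `(.|\r|\n)+` alternation. A sketch; the props stanza name is an assumption about your actual sourcetype:

```
# props.conf (on the indexer or HF that first parses these events)
[WinEventLog:Microsoft-Windows-PowerShell/Operational]
TRANSFORMS-drop_splunkps = drop_4103_splunkpowershell

# transforms.conf -- (?s) makes "." span newlines
[drop_4103_splunkpowershell]
REGEX = (?s)EventCode=4103.+Files.SplunkUniversalForwarder.bin.splunk-powershell\.ps1
DEST_KEY = queue
FORMAT = nullQueue
```

A restart of the parsing instance is required after the change, and already-indexed events are unaffected.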
Hi, the requirement is that the user makes a dynamic selection (time range from the time picker, environment from the env dropdown, and a few more) and clicks a submit button. As soon as he clicks submit, a CSV file should be generated from the user's input selection, and later the user should be able to reference that CSV in dashboard panels to create different visualisations. Is that possible in Splunk?
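One common pattern for this kind of requirement (a sketch; the token names, the stats clause, and the lookup filename are all placeholders) is to have the submit-driven search write its results out with `outputlookup`, and let other panels read the snapshot back with `inputlookup`:

```spl
index=$env_tok$ earliest=$time_tok.earliest$ latest=$time_tok.latest$
| stats count by host
| outputlookup user_selection_snapshot.csv
```

Other panels can then start from `| inputlookup user_selection_snapshot.csv` and build different visualisations from the same snapshot. Note the lookup file is shared server-side, so concurrent users of the dashboard would overwrite each other unless the filename is made user-specific.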
Hello team, we are in the process of setting up DB monitoring using AppDynamics Database Monitoring. We get the attached error while accessing the Activity, Query, Session, etc. tabs. 1) How and where do we enable the Events Service (controller or DB Collector)? 2) Will there be any performance impact on the existing setup if we enable the Events Service? Thanks
Hello all, I noticed that the timestamp in the activity log is in UTC, and also that while using the timer app and adding "$now()" to the event name, the timestamp is also UTC. It is not the time zone I defined in the user settings, nor the one in the administration/company settings. Is there a way to change the time zone from UTC to a different one?
Hello, can I get a regex that matches this: permission=Permission12345? I have tried to come up with one but it's not working. Thanks in advance.
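Assuming the value is always the literal word "Permission" followed by digits, a sketch that extracts it into a field:

```spl
| rex field=_raw "permission=(?<permission>Permission\d+)"
```

If the value can be something other than "Permission" plus digits, a looser pattern such as `permission=(?<permission>\S+)` captures everything up to the next whitespace instead.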
I'm trying to deploy a cluster agent in my Kubernetes cluster to monitor the infrastructure using the kubectl CLI. I've followed the steps and executed these commands:

kubectl create -f cluster-agent-operator.yaml
kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>
kubectl create -f cluster-agent.yaml

However, the cluster agent pod is stuck in the "CrashLoopBackOff" state. The logs indicate an issue with the account access key:

[ERROR]: 2024-04-03 18:29:45 - main.go:183 - Account accessKey is not specified
[ERROR]: 2024-04-03 18:29:45 - main.go:184 - Please provide account accessKey before starting cluster-agent. Exiting...

I've verified that the cluster-agent-secret contains the controller-key with the correct access key value. What could be causing this issue despite providing the access key in the secret? Are there any additional configuration steps I might be missing?

Reference: https://docs.appdynamics.com/appd/22.x/latest/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/install-the-cluster-agent/install-the-cluster-agent-with-the-kubernetes-cli
Hello, I have this data here:

2024-04-03 13:57:54 10.237.8.167 GET / "><script>alert('struts_sa_surl_xss.nasl-1712152675')</script> 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 2 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET / - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 0 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET / - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET / - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 200 0 0 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET /Default.aspx - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 0 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET /home.jsf autoScroll=0%2c275%29%3b%2f%2f--%3e%3c%2fscript%3e%3cscript%3ealert%28%27myfaces_tomahawk_autoscroll_xss.nasl%27 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 1 10.236.125.4
2024-04-03 13:57:55 10.237.8.167 GET /admin/statistics/ConfigureStatistics - 443 - 10.237.123.253 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+5.1;+Trident/4.0) - 404 0 2 2 10.236.125.4

It is not line breaking properly as expected for our IIS logs. This is what I currently have for our sourcetype stanza on the indexer:

[iis]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
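The stanza itself looks plausible, so the usual suspects are placement and naming: these settings only take effect on the first full Splunk instance that parses the data (indexer or heavy forwarder, never a universal forwarder unless INDEXED_EXTRACTIONS is involved), they require a restart of that instance, and the stanza name must match the events' actual sourcetype. An equivalent sketch that uses a lookahead, so the timestamp is explicitly left with the following event rather than consumed by the breaker:

```
# props.conf on the parsing tier -- a sketch, assuming sourcetype "iis"
[iis]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
```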
Hi,

Assuming a sample of data from this example:

| makeresults count=5
| eval f1=random()%2
| eval f2=random()%2
| eval f3=random()%2
| eval f4=random()%2
| eval H=round(((random() % 102)/(102)) * (104 - 100) + 100)

H f1 f2 f3 f4
100 1 0 0 1
100 1 1 0 1
101 1 1 0 0
102 1 1 1 0

I want to build a chart which contains the distinct count of H for each of f1, f2, f3, f4 equal to 1:

f1 f2 f3 f4
3 3 1 1

Can someone help?
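One way to get that single-row result is `dc()` over an eval that nulls out rows where the flag is 0, so each column counts distinct H values only among events with that flag set. A sketch against the same makeresults sample:

```spl
| makeresults count=5
| eval f1=random()%2, f2=random()%2, f3=random()%2, f4=random()%2
| eval H=round(((random() % 102)/(102)) * (104 - 100) + 100)
| stats dc(eval(if(f1=1,H,null()))) as f1
        dc(eval(if(f2=1,H,null()))) as f2
        dc(eval(if(f3=1,H,null()))) as f3
        dc(eval(if(f4=1,H,null()))) as f4
```

On the four-row sample shown above this yields f1=3, f2=3, f3=1, f4=1, matching the expected output.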
Hi, I am trying to collect metrics from various sources with the OTel Collector and send them to our Splunk Enterprise instance via a HEC. Collecting and sending the metrics via OTel seems to work quite fine, and I was quickly able to see metrics in my Splunk index. However, what I am completely missing are the labels of those Prometheus metrics in Splunk. Here is an example of some of the metrics I scrape:

# HELP jmx_exporter_build_info A metric with a constant '1' value labeled with the version of the JMX exporter.
# TYPE jmx_exporter_build_info gauge
jmx_exporter_build_info{version="0.20.0",name="jmx_prometheus_javaagent",} 1.0
# HELP jvm_info VM version info
# TYPE jvm_info gauge
jvm_info{runtime="OpenJDK Runtime Environment",vendor="AdoptOpenJDK",version="11.0.8+10",} 1.0
# HELP jmx_config_reload_failure_total Number of times configuration have failed to be reloaded.
# TYPE jmx_config_reload_failure_total counter
jmx_config_reload_failure_total 0.0
# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
# TYPE jvm_gc_collection_seconds summary
jvm_gc_collection_seconds_count{gc="G1 Young Generation",} 883.0
jvm_gc_collection_seconds_sum{gc="G1 Young Generation",} 133.293
jvm_gc_collection_seconds_count{gc="G1 Old Generation",} 0.0
jvm_gc_collection_seconds_sum{gc="G1 Old Generation",} 0.0
# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
# TYPE jvm_memory_pool_allocated_bytes_total counter
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 6.76448896E8
jvm_memory_pool_allocated_bytes_total{pool="G1 Old Gen",} 1.345992784E10
jvm_memory_pool_allocated_bytes_total{pool="G1 Eden Space",} 9.062406160384E12
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 3.38238592E8
jvm_memory_pool_allocated_bytes_total{pool="G1 Survivor Space",} 1.6919822336E10
jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 1.41419488E8
jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 1.141665096E9
jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 3544448.0

I do see the values in Splunk, but especially for the last metric, jvm_memory_pool_allocated_bytes_total, the label of which pool is lost in Splunk. Is this intentional, or am I missing something? The getting-started page for metrics also has no information on where those labels are stored and how I could query based on them (https://docs.splunk.com/Documentation/Splunk/latest/Metrics/GetStarted)

tia,

Jörg
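For what it's worth, when Prometheus labels do survive the pipeline into a Splunk metrics index they show up as dimensions, which can be listed and grouped by; a sketch, with the index name as a placeholder:

```spl
| mcatalog values(_dims) WHERE index=your_metrics_index

| mstats sum(jvm_memory_pool_allocated_bytes_total) WHERE index=your_metrics_index BY pool
```

If `pool` does not appear in the `mcatalog` output at all, the label is being dropped before indexing (e.g. by the exporter/translation settings in the Collector pipeline), not hidden by the query layer.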
Hi guys, in my scenario I want to compare two column values. If they match, that's fine; if the values differ, I want to display both field values in some colour on the Splunk dashboard.

Field1 Field2
28 28
100 99
33 56
18 18
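A sketch of the search-side half: compute a match flag with eval, then drive the colour from that flag.

```spl
| eval match=if(Field1=Field2, "same", "different")
| table Field1 Field2 match
```

The `match` column itself can be coloured through the table's built-in colour formatting. Colouring `Field1`/`Field2` cells based on a *different* column is not something the standard table formatting does on its own; that generally needs an expression-based colour rule or dashboard JavaScript, depending on whether you use Simple XML or Dashboard Studio.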
Is there a Splunk query I can use to list when a CD drive is accessed and written to, and the users associated with those actions?
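There is no single built-in query for this; it depends on Windows auditing being configured first. Assuming removable-storage object-access auditing is enabled on the endpoints (Security Event ID 4663) and the Security log is ingested with the Splunk Add-on for Microsoft Windows, a sketch (field names vary by add-on version and event format):

```spl
index=wineventlog EventCode=4663
| stats count by user, Object_Name, Accesses
```

Write attempts would show up under the `Accesses` values (e.g. WriteData); filtering `Object_Name` to the optical-drive letter narrows it to the CD drive.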
Is there a query I can add to my Splunk dashboard that will list accounts inactive for over 35 days?
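If "accounts" means Splunk users, one sketch is to work from the audit index; note it can only see users who have logged in at least once within the index's retention period, so never-used accounts need a separate check:

```spl
index=_audit action=login_attempt info=succeeded
| stats latest(_time) as last_login by user
| eval days_inactive=round((now()-last_login)/86400)
| where days_inactive > 35
```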
Hello Splunkers, my Splunk instance is configured with default SAML authentication. Now I want to add users from an external domain with access to a list of Splunk dashboards. How can I do that? I searched in the community and found that we can use en-US/account/login?loginType=splunk after changing enable_insecure_login = False in web.conf. I'm a little worried about the consequences of changing the above setting. Is there any way to provide access to external users without any security concerns? Thank you in advance!
I'm looking to export Services from Splunk ITSI; however, there is no direct export feature in the GUI (at least within the Services page). Is there any other way to export ITSI services?
From what I understand about Splunk, it works on the raw data and does not parse it; it marks and "segments" areas of the data in the tsidx file. Also, from what I understand about HF vs. UF, unlike the universal forwarder the heavy forwarder does part of the indexing itself. So what exactly does it index? Does it segment the raw data into the tsidx file and send both to the indexer?