All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I need help extracting the time gaps in a multi-value field represented as Date. My data output looks like this:

index=myindex
| stats values(_time) as _time values(recs) as recs count by Token
| eval Date=strftime(_time, "%F %H:%M:%S.%2q")
| where count > 1
| table Token Date

Token     Date
363311    2024-06-25 17:20:08.26
          2024-06-25 17:23:51.12
231321    2024-06-25 18:10:58.86
          2024-06-25 18:11:28.12
          2024-06-25 18:12:19.38
          2024-06-25 18:13:21.90
827341    2024-06-25 15:17:18.06
          2024-06-25 15:37:47.93
          2024-06-25 15:41:03.21

I would like to display the differences between timestamps in a new column called "time_gaps", listing the gap in seconds between each timestamp and the previous one. Some Tokens have only 2 timestamps, so there should be only 1 value in the time_gaps field; a Token with 4 timestamps should have three values: the differences between the 1st and 2nd, 2nd and 3rd, and 3rd and 4th. I tried streamstats, but it seems I may be doing something wrong. Any clean and effective SPL would be appreciated. Thanks
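One possible approach (an untested sketch; it assumes each raw event carries one timestamp, so the gaps can be computed with streamstats before collapsing to multivalue fields):

```spl
index=myindex
| sort 0 Token _time
| streamstats current=f last(_time) as prev_time by Token
| eval gap=_time - prev_time
| eval Date=strftime(_time, "%F %H:%M:%S.%2q")
| eventstats count by Token
| where count > 1
| stats list(Date) as Date list(gap) as time_gaps by Token
```

Since the first event of each Token has no prev_time, its gap is null and list() skips it, leaving n-1 gap values for n timestamps, each aligned with the Date it precedes.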
I came across this post about upgrading Splunk Enterprise: https://community.splunk.com/t5/Installation/What-do-I-validate-after-I-upgrade-Splunk-Enterprise-to-confirm/m-p/479261 I need details about what to validate after an Enterprise Security (ES) upgrade. I already have this from the Splunk docs, but I am looking for something as detailed as the post above, for ES: https://docs.splunk.com/Documentation/ES/7.3.1/Install/Upgradetonewerversion#Step_5._Validate_the_upgrade
Hello, I'm a beginner in Splunk. Currently I'm working with Splunk Enterprise. I want to retrieve microservice dependencies and export this information. How can I do that?
I am getting a permission denied error in the Splunk forwarder logs:

ERROR DC:DeploymentClient - Failed to save manifest file to disk at='/opt/splunkforwarder/var/run/serverclass.xml': Permission denied

serverclass.xml has read/write permission for the splunk user on the server where the UF is installed. Can anyone help?
Please guide me on onboarding Cloudflare with Splunk in a distributed architecture, along with information on which instance (HF, indexer, search head, management instance) to install the add-on, and on which instance to create the custom index.
I have a lookup that holds all apps installed on our deployment server. I need a query that returns all apps in the lookup that have no app events in the last 90 days. Thank you for any assistance.
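One common pattern for "in the lookup but absent from the index" (an untested sketch; the lookup name deployment_apps.csv, the index name, and an `app` field shared by both are assumptions — adjust to your actual names):

```spl
index=your_app_index earliest=-90d
| stats count by app
| inputlookup append=true deployment_apps.csv
| eval count=coalesce(count, 0)
| stats sum(count) as events by app
| where events=0
| table app
```

Rows appended from the lookup arrive with no count, so after coalesce they contribute 0; any app whose total stays 0 had no events in the window.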
Hello. I am working with OpenTelemetry metrics. I have a metrics-type index, and the payload I receive is formatted like this:

Example payload: {"deployment.environment":"entorno-pruebas","k8s.cluster.name":"splunk-otel","k8s.namespace.name":"default","k8s.node.name":"minikube","k8s.pod.name":"my-otel-demo-emailservice-fc5bc4c5f-jxzqz","k8s.pod.uid":"5fe1ada8-8baa-4960-b873-381b475b2b26","metric_type":"Gauge","os.type":"linux","metric_name:k8s.pod.filesystem.usage":491520}

I need a search that retrieves the various values of the k8s.pod.name field. I'm trying different variations of the search, but I can't get it to work:

| mstats avg(_value) as VAL WHERE index=otel_k8s_metrics metric_name="metric_name:k8s.pod.filesystem.usage*"
| spath input=_raw path=k8s.pod.name output=k8s.pod.name
| stats values(k8s.pod.name) as k8s.pod.name
| table k8s.pod.name

Does anyone have any idea why it doesn't work? Do metrics-type indexes support spath? I appreciate any ideas. BR, JAR
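In a metrics index the JSON keys are stored as dimensions rather than as _raw, so spath has nothing to parse; grouping by the dimension in the mstats itself may be what is needed. A sketch, assuming the dimension is named exactly k8s.pod.name:

```spl
| mstats avg(_value) as VAL
    WHERE index=otel_k8s_metrics metric_name="metric_name:k8s.pod.filesystem.usage"
    BY k8s.pod.name
| stats values(k8s.pod.name) as pods
```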
Hello, I need some help with adjusting an alert for detecting a password spray attack using Auth0 logs in Splunk. I want to not just catch the password spray itself but also get alerted when there's a successful login from the same source right after the spray attempt. Currently, I have the following query, which detects password spray attempts by identifying IPs with more than 10 unique failed login attempts within a 5-minute window:

index=auth0 data.type IN ("fu", "fp")
| bucket span=5m _time
| stats dc(data.user_name) as unique_accounts values(data.user_name) as tried_accounts values(data.client_name) as clientName values(data.type) as failure_reason by data.ip
| where unique_accounts > 10

Is there a way to adjust this query to also detect and alert on successful logins (data.type = "s") from the same IPs that performed the spray attack? I am looking to create an alert that fires on a successful login following the spray, so we can respond accordingly. Reference: Log Event Type Codes (auth0.com). Thank you
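One way to extend the search (an untested sketch; it keeps the same 5-minute buckets, so "right after" here means "within the same window" — a streamstats pass would be needed for strict ordering across windows):

```spl
index=auth0 data.type IN ("fu", "fp", "s")
| bucket span=5m _time
| stats dc(eval(if('data.type'!="s", 'data.user_name', null()))) as unique_accounts
        values(eval(if('data.type'="s", 'data.user_name', null()))) as success_accounts
        count(eval('data.type'="s")) as success_count
        by data.ip _time
| where unique_accounts > 10 AND success_count > 0
```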
We created dynamic dropdowns for service code and time range. We have many service code values, one of which is "null". When we select a particular time range, "null" shows up in the dropdown even if no such logs are present in that range; we want an option to appear in the dropdown only if matching logs exist during the selected time. Below is the XML code used:

<form version="1.1" theme="light">
  <label>Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="ServiceCode">
      <label>ServCode</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>ServiceCode</fieldForLabel>
      <fieldForValue>ServiceCode</fieldForValue>
      <search>
        <query>index=app-index
| rename "resource.attributes.servicecode" as ServiceCode
| stats count by ServiceCode
| fields ServiceCode</query>
        <earliest>timepicker.earliest</earliest>
        <latest>timepicker.latest</latest>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Incoming Count</title>
        <search>
          <query>index=app-index source=application.logs AND resource.attributes.servicecode="$ServiceCode$"
| table Income Rej_app ATM DMM Reject Rej_log Rej_app</query>
          <earliest>timepicker.earliest</earliest>
          <latest>timepicker.latest</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
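One thing worth checking (a sketch, not a tested fix): in SimpleXML the timepicker token is normally referenced as $timepicker.earliest$ / $timepicker.latest$; without the dollar signs, the populating search is not actually bound to the picker, which would explain options appearing regardless of the selected range. The dropdown's search block would then look like:

```xml
<search>
  <query>index=app-index
| rename "resource.attributes.servicecode" as ServiceCode
| stats count by ServiceCode
| fields ServiceCode</query>
  <earliest>$timepicker.earliest$</earliest>
  <latest>$timepicker.latest$</latest>
</search>
```

The panel's search would take the same $...$ treatment for its earliest/latest tags.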
Hello, I have the following question: I would like to set up a multisite cluster with the following structure:

Site 01:
  Node01: Index A
Site 02:
  Node02: Index B, Index C
  Node03: Index D

Search head: only via Node02 and Node03.
Replication: Index A and B on Node01, Node02, and Node03; Index C and D only on Node02 and Node03. Only the replicated data should be exchanged between Site 01 and Site 02 (no distributed search).

Is this possible, and what would the configs look like (server.conf, indexes.conf, etc.)? Have a great day, and thank you very much.
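For the multisite part, the usual starting point looks roughly like this (illustrative values only; hostnames and keys are placeholders). Note one caveat: index clustering does not normally allow pinning individual indexes to specific peers — bucket placement is governed by the site replication/search factors — so the exact per-index layout described above may not be achievable as stated:

```ini
# server.conf on the cluster manager (illustrative)
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2
pass4SymmKey = <key>

# server.conf on a peer in Site 02 (illustrative)
[general]
site = site2

[clustering]
mode = slave
master_uri = https://<cluster-manager>:8089
pass4SymmKey = <key>
```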
AppDynamics provides a robust set of APIs for retrieving various metrics, including CPU and memory usage, from monitored applications and servers, but I was confused to get a response in percentage terms together with other, unrelated metrics. Are there specific APIs that return CPU, memory, disk available capacity, etc.? The result should be similar to what the controller shows under Volumes, as shared in the screenshot. Thanks
Hello, I have an index of events, where events belong to a transaction (transaction_id). I am interested in transactions which contain exactly two events with specific eventtypes (type1 and type2). Once I have all the transaction_ids which contain those two eventtypes, I want to join back to the events to get those complete transactions, including all their events; there might be events with other eventtypes as well which need to be retrieved. This is what I tried:

index="data"
| stats values(eventtype) as eventtype by transaction_id
| search eventtype="TYPE1" AND eventtype="TYPE2"
| table transaction_id
| join type=inner transaction_id [search index="data"]
| table *

But this query returns only a fraction of the available matching transactions. I have read some other posts with all kinds of approaches; is it really so hard in Splunk to get such tasks done?
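A join-free variant (an untested sketch): join subsearches are subject to row and time limits, which is a common reason for getting only a fraction of the matches. Keeping all events and filtering whole transactions with eventstats avoids that:

```spl
index="data"
| eventstats values(eventtype) as tx_types by transaction_id
| search tx_types="TYPE1" tx_types="TYPE2"
| fields - tx_types
```

This returns every event of each qualifying transaction in one pass. If "exactly two" matters (as opposed to "at least these two eventtypes"), add a per-transaction count of matching events and filter on it as well.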
Removing FQDN from field values. Hi all, can anyone help me with framing the SPL query for the requirement below? I have a field named Host which contains multiple values; some of them include an FQDN suffix, in various formats, at the end of the hostname. E.g. Host = (value1.corp.abc.com, value2.abc.com, value3.corp.abc, value4.xyz.com, value5.klm.corp, value6.internal, value7.compute.internal, etc.). I need to get the Host values as (value1, value2, value3, value4, value5, value6, value7) in my result by removing all forms of FQDN suffix. Please can you help? Thanks in advance.
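A sketch (it assumes Host can be multivalue; mvmap requires Splunk 8.0 or later):

```spl
| eval Host=mvmap(Host, mvindex(split(Host, "."), 0))
```

This keeps only the part before the first dot, which covers every format listed since each value begins with the short hostname. For a single-valued field, a rex such as `| rex field=Host mode=sed "s/\..*$//"` would do the same.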
Hi Team, I am connecting Anypoint Studio with Splunk using HEC. The logs are forwarding, but some of them are missing in Splunk even though they are present in the Anypoint Studio logs. How do I troubleshoot the issue? Thanks, Karthi
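A first troubleshooting step (a sketch; component names can vary by Splunk version) is to look for HEC parse or rejection errors in Splunk's own internal logs:

```spl
index=_internal sourcetype=splunkd component=HttpInputDataHandler
| stats count by log_level message
```

It is also worth checking whether the sender received non-200 HTTP responses for the missing batches, and whether the missing events were indexed under an unexpected time range due to timestamp parsing.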
I am trying to write a Splunk query. I have asset inventory data with hostname and IP address (multivalued); one hostname can have multiple IP addresses. I also have indexed data in Splunk with a field (Reporting_Host in the sample below) that is a mix of hostnames and IP addresses of some random assets. Now I need to compare the asset inventory data with the indexed data, and the output should be the hostname and IP address entries that are not present in the indexed data.

Sample data:

index=asset_inventory | table hostname IPaddress

hostname  IPaddress
abc       0.0.0.0
abc       2.2.2.2
abc       3.3.3.3
def       1.1.1.1
xyz       4.5.6.7

Indexed data:

index=indexed_data | stats count by Reporting_Host

Reporting_Host
3.3.3.3
def

Expected output:

Host_not_present
xyz

Can someone help with a Splunk query to get the desired output?
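One possible shape for this (an untested sketch; it flags a host only when neither its hostname nor any of its IPs appears in the indexed data, matching the sample where abc and def are excluded):

```spl
index=asset_inventory
| eval key=mvappend(hostname, IPaddress)
| mvexpand key
| eval src="inv"
| append
    [ search index=indexed_data
      | stats count by Reporting_Host
      | eval key=Reporting_Host, src="idx"
      | fields key src ]
| eventstats values(src) as srcs by key
| where src="inv"
| eval present=if(mvcount(srcs) > 1, 1, 0)
| stats max(present) as any_present by hostname
| where any_present=0
| rename hostname as Host_not_present
| table Host_not_present
```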
Hi, I use collect to create a summary of VPN login and logout events. This worked fine, but last week 24 hours of logout events went missing, while the summary of login events was created. I checked the search without the collect command and it gives the correct output, and I tried it with a test index and it worked too. But when I run the search for the missing timeframe, nothing appears in the destination index. Do you have any advice on what else I could check? Thanks
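One place worth checking (a sketch; the savedsearch name is a placeholder for yours) is whether the scheduled run for that window actually executed and returned rows:

```spl
index=_internal sourcetype=scheduler savedsearch_name="<your summary search>"
| table _time status result_count run_time
```

If status shows skipped runs, or result_count is 0 for the missing window, the gap happened upstream of collect (e.g. a skipped schedule or late-arriving source data) rather than in the summary indexing itself.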
Hi, I am trying to build a visualization so that when I select multiple categories I see multiple lines in the chart, and when a single value is selected I get a single line; but I am unable to create the chart when passing multiple values for the categories:

| rex field=message "MessageTemplate\\\":\\\"(?<msgType>[^\\\"]+)"
| spath
| search SAP OR TMD
| timechart count by SAP or TMD

I am expecting a result like the query below, which works fine for a single category since timechart is simply given a count:

| rex field=message "MessageTemplate\\\":\\\"(?<msgType>[^\\\"]+)"
| spath
| search SAP
| timechart count

But the first query gives an error. Thanks in advance, Ashima
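The `by` clause of timechart takes a single field name, not a list of values, so one sketch (assuming both categories are captured into the msgType field extracted by the rex) is:

```spl
| rex field=message "MessageTemplate\\\":\\\"(?<msgType>[^\\\"]+)"
| search msgType IN ("SAP", "TMD")
| timechart count by msgType
```

With a multiselect dashboard input, the IN list can be replaced by the input's token; timechart then draws one line per selected msgType value and a single line when only one is selected.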
Hi, I have a cluster (3 indexers) with data, and I want to copy one index, "logs_Test", to a single-instance install for testing. Can I copy it from the back end on all 3 indexers and bring the copies together? I feel this won't work. Can I export it from the search head to a new index and then move that? Any ideas would be great. Thanks in advance, Robbie
We are using a clustered indexer environment and want to use NAS as our cold storage. I mapped the NAS to a local folder in Linux so it is accessible by Splunk, and I can see the mapped folders on the local Linux device. But when I change the configuration on the cluster master to use this NAS for cold data and push it out, Splunk sometimes hangs and stops, and even after a restart it does not work. Has anyone tried NAS as cold storage? If you could share your fstab and indexes.conf, that would be great!
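For reference, a minimal shape of the two files (illustrative values only; the NFS server, export path, mount options, and index name are assumptions — an unreachable or hard-stalled NFS mount is one plausible cause of splunkd hanging at startup):

```ini
# /etc/fstab (illustrative NFS mount)
nas.example.com:/export/splunk_cold  /mnt/splunk_cold  nfs  rw,hard,nofail,timeo=600  0 0

# indexes.conf pushed from the cluster master (illustrative)
[logs_test]
homePath   = $SPLUNK_DB/logs_test/db
coldPath   = /mnt/splunk_cold/logs_test/colddb
thawedPath = $SPLUNK_DB/logs_test/thaweddb
```

Verifying that the splunk user can write to the mount (and that it is mounted before splunkd starts) is worth checking before pushing the bundle.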