All Posts


Hi @ITWhisperer, no. My ask is, for example: in servicecode we have the values 2031, 1345, 2345, null, 5643. When I select a time range of 24 hours we have data for all of the above codes, so they all show up in the ServiceCode dropdown. But when I select the last 15 minutes as the time range there are no logs for "null", yet it still shows up in the dropdown. We don't want to see the null option if no such logs are present.
The following macro formats the time to a standard UTC timezone:

[utc]
definition = eval time_offset=strftime(_time,"%:::z") | convert num(time_offset) | eval time_offset=if(time_offset<=0, "+" . -time_offset, tostring(-time_offset)), time_utc=relative_time(_time,time_offset . "h") | convert timeformat="%F %T UTC" ctime(time_utc) | convert `timeformat` ctime(_time) AS time_local

The following macro sets the time to the timezone of your choice:

[tz(1)]
definition = eval utc_offset=strftime(_time,"%:::z") | convert num(utc_offset) | eval tz_offset = $tz$ - utc_offset, tz_offset = if(tz_offset>=0,"+".tz_offset,tz_offset), utc_offset = if(utc_offset<=0,"+".-utc_offset,tostring(-utc_offset)) | eval time_tz=relative_time(_time, tz_offset . "h"), utc_time=relative_time(_time,utc_offset . "h") | convert timeformat="%F %T UTC" ctime(utc_time) | convert timeformat="%F %T UTC$tz$" ctime(time_tz) | convert `timeformat` ctime(_time) AS my_time | fields - tz_offset utc_offset* | rename time_tz AS "time:$tz$"
args = tz

[timeformat]
definition = timeformat="%F %T UTC%:::z %Z"
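A minimal usage sketch, assuming the stanzas above are saved in a shared macros.conf; the index and sourcetype here are placeholders:

index=_internal sourcetype=splunkd
| head 5
| `utc`
| table _time time_local time_utc

and, for a fixed offset such as UTC+2:

index=_internal sourcetype=splunkd
| head 5
| `tz(2)`
| table my_time utc_time "time:2"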
Please guide me on onboarding Cloudflare with Splunk for a distributed architecture, along with information on which instance (HF, indexers, search heads, management instances) to install the add-on, and on which instance to create the custom index.
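A minimal sketch of the index side only, assuming the custom index is named cloudflare (a placeholder). In a typical distributed layout the indexes.conf goes to the indexers (pushed from the cluster manager if they are clustered), the add-on's API/HEC inputs run on a heavy forwarder, and the add-on is also installed on the search heads for its knowledge objects:

# indexes.conf, deployed to the indexers
[cloudflare]
homePath   = $SPLUNK_DB/cloudflare/db
coldPath   = $SPLUNK_DB/cloudflare/colddb
thawedPath = $SPLUNK_DB/cloudflare/thaweddb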
Hi @Roger_FB, first of all, this question isn't really one for the Community; you should engage a Splunk Architect or Splunk PS. Anyway, let me understand: you have one indexer on Site1 and two on Site2; indexes on Site2 must be replicated only to the indexers in Site2, while indexes on Site1 must also be replicated to Site2. I'm not sure it's possible to have indexes that are not replicated across both sites. Ciao. Giuseppe
Hi @nspaitsec, on the license manager and the clustered indexers.
Assuming your app name in the events is app and the app name in the lookup is also app, you could do something like this (over the past 90 days):

| stats count by app
| append [| inputlookup lookup]
| stats count by app
| where count == 1
Perhaps a better title would be: "Find an error in one system and then find errors close in time in a 2nd system". In my case, both search strings include the word 'Error' and the values are text that indicates what the errors are about.

Two searches:

index=first_index sourcetype=first_source error 500
| rex field=_raw "string(?<REF_VAL>\d+)"
| table _time REF_VAL

Output:
_time                        REF_VAL
2024-06-2024 10:48:04.003    Avalue

index=second_index sourcetype=second_source error somestring
| rex field=_raw "ERROR - (?<ERR_MTHD>\S+)"
| table _time ERR_MTHD

Output:
_time                        ERR_MTHD
2024-06-24 10:48:51.174      Method1text
2024-06-24 10:48:51:158      Method2text

Output that I would like:
EVENT_TIME                   REFERENCE_VAL   RELATED_TIME               RELATED_VAL
2024-06-2024 10:48:04.003    Avalue          2024-06-24 10:48:51.174    Method1text
2024-06-2024 10:48:04.003    Avalue          2024-06-24 10:48:51:158    Method2text
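A minimal sketch of one way to combine the two, assuming a single reference event from the first search and a 60-second proximity window (both assumptions are placeholders for your own constraints):

index=first_index sourcetype=first_source error 500
| rex field=_raw "string(?<REF_VAL>\d+)"
| eval EVENT_TIME=_time
| append
    [ search index=second_index sourcetype=second_source error somestring
      | rex field=_raw "ERROR - (?<ERR_MTHD>\S+)"
      | eval RELATED_TIME=_time ]
| eventstats max(EVENT_TIME) as EVENT_TIME values(REF_VAL) as REFERENCE_VAL
| where isnotnull(ERR_MTHD) AND abs(RELATED_TIME - EVENT_TIME) <= 60
| eval RELATED_VAL=ERR_MTHD
| fieldformat EVENT_TIME=strftime(EVENT_TIME, "%F %T.%3N")
| fieldformat RELATED_TIME=strftime(RELATED_TIME, "%F %T.%3N")
| table EVENT_TIME REFERENCE_VAL RELATED_TIME RELATED_VAL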
I have a lookup that has saved all apps installed on our deployment server. I need a query that finds all apps in the lookup that have no app event in the last 90 days. Thank you for any assistance.
Do you mean that resource.attributes.servicecode has a string value of "null" in some of your events and you want to exclude that from your list of available options in your dropdown?

<query>
index=app-index
| rename "resource.attributes.servicecode" as ServiceCode
| stats count by ServiceCode
| where ServiceCode != "null"
| fields ServiceCode
</query>
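For the dropdown to follow the selected time range as well, a sketch of the input definition (reusing the token names from the posted dashboard, with the same where clause assumption as above) could look roughly like this:

<input type="dropdown" token="ServiceCode">
  <label>ServCode</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>ServiceCode</fieldForLabel>
  <fieldForValue>ServiceCode</fieldForValue>
  <search>
    <query>index=app-index | rename "resource.attributes.servicecode" as ServiceCode | stats count by ServiceCode | where ServiceCode != "null" | fields ServiceCode</query>
    <earliest>$timepicker.earliest$</earliest>
    <latest>$timepicker.latest$</latest>
  </search>
</input>

Note the $...$ around the timepicker tokens; without them the populating search is not tied to the time picker at all, which is one reason values can keep appearing regardless of the selected range.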
Hello. I am working with OpenTelemetry metrics. I have a metrics-type index, and the format of the payload I receive is like this:

Example payload:
{"deployment.environment":"entorno-pruebas","k8s.cluster.name":"splunk-otel","k8s.namespace.name":"default","k8s.node.name":"minikube","k8s.pod.name":"my-otel-demo-emailservice-fc5bc4c5f-jxzqz","k8s.pod.uid":"5fe1ada8-8baa-4960-b873-381b475b2b26","metric_type":"Gauge","os.type":"linux","metric_name:k8s.pod.filesystem.usage":491520}

I need a search that retrieves the various values of the k8s.pod.name field. I'm trying different variations of the search, but I can't get it to work:

| mstats avg(_value) as VAL WHERE index=otel_k8s_metrics metric_name="metric_name:k8s.pod.filesystem.usage*"
| spath input=_raw path=k8s.pod.name output=k8s.pod.name
| stats values(k8s.pod.name) as k8s.pod.name
| table k8s.pod.name

Does anyone have any idea why it doesn't work? Do metrics-type indexes support spath?

I appreciate any ideas.
BR, JAR
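A sketch under the assumption that k8s.pod.name is stored as a dimension on the metric data points: in a metrics index there is no _raw for spath to parse, and dimensions are grouped with BY in mstats instead. The metric_name value below is an assumption; the exact metric and dimension names can be confirmed with mcatalog:

| mstats avg(_value) as VAL WHERE index=otel_k8s_metrics metric_name="k8s.pod.filesystem.usage" BY k8s.pod.name
| stats values(k8s.pod.name) as k8s.pod.name

| mcatalog values(metric_name) values(_dims) WHERE index=otel_k8s_metrics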
Hello, I need some help with adjusting an alert for detecting a password spray attack using Auth0 logs in Splunk. What I'm looking for is to not just catch the password spray itself but also get alerted when there's a successful login from the same source right after the spray attempt. Currently, I have the following query that detects password spray attempts by identifying IPs with more than 10 unique failed login attempts within a 5-minute window:

index=auth0 (data.type IN ("fu", "fp"))
| bucket span=5m _time
| stats dc(data.user_name) AS unique_accounts values(data.user_name) as tried_accounts values(data.client_name) as clientName values(data.type) as failure_reason by data.ip
| where unique_accounts > 10

Is there a way to adjust this query to also detect and alert on successful logins (data.type = "s") from the same IPs that performed the spray attack? I am looking to create an alert that indicates a successful login following the spray, so we can respond accordingly. Log Event Type Codes (auth0.com) Thank you
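A minimal sketch of one way to extend this, assuming it is enough to look for the success within the same 5-minute bucket as the spray (a tighter "right after" correlation would need something like streamstats or a second correlation search):

index=auth0 data.type IN ("fu", "fp", "s")
| bucket span=5m _time
| stats dc(eval(if('data.type' IN ("fu","fp"), 'data.user_name', null()))) as unique_failed_accounts
        values(eval(if('data.type' IN ("fu","fp"), 'data.user_name', null()))) as tried_accounts
        values(eval(if('data.type'="s", 'data.user_name', null()))) as successful_accounts
        count(eval('data.type'="s")) as successful_logins
        by _time, data.ip
| where unique_failed_accounts > 10 AND successful_logins > 0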
We created a dynamic dropdown for service code and time range. We have many service code values, of which "null" is one. When we select a particular time range, even if no logs with the null value are present, it still shows up in the dropdown. We want to see options in the dropdown only if such logs are present during that time. Below is the XML code used:

<form version="1.1" theme="light">
  <label>Dashboard</label>
  <fieldset submitButton="false">
    <input type="time" token="timepicker">
      <label>TimeRange</label>
      <default>
        <earliest>-15m@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="ServiceCode">
      <label>ServCode</label>
      <choice value="*">All</choice>
      <default>*</default>
      <fieldForLabel>ServiceCode</fieldForLabel>
      <fieldForValue>ServiceCode</fieldForValue>
      <search>
        <query>index=app-index | rename "resource.attributes.servicecode" as ServiceCode | stats count by ServiceCode | fields ServiceCode</query>
        <earliest>timepicker.earliest</earliest>
        <latest>timepicker.latest</latest>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Incoming Count</title>
        <search>
          <query>index=app-index source=application.logs AND resource.attributes.servicecode="$ServiceCode$" | table Income Rej_app ATM DMM Reject Rej_log Rej_app</query>
          <earliest>timepicker.earliest</earliest>
          <latest>timepicker.latest</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentageRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
Not so far, unfortunately
Hello, I have the following question: I would like to set up a multisite cluster with the following structure:

Site 01:
Node01: Index A

Site 02:
Node02: Index B, Index C
Node03: Index D

SearchHead: only via Node02 and Node03

Replication:
Index A and B on Node01, Node02, Node03
Index C and D only on Node02 and Node03

Only the replications should be exchanged between Site 01 and Site 02 (no distributed search).

Is this possible, and what do the configs look like (server.conf, indexes.conf etc.)?

Have a great day and thank you very much.
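A rough sketch of the baseline multisite settings only (hostnames, the shared key and the factors are placeholders, and the stanza values assume a recent Splunk version that uses mode = manager / manager_uri). Note that this does not give per-index site placement: in standard indexer clustering the site replication/search factors apply cluster-wide, which is why the per-site layout above may not be achievable exactly as described.

# server.conf on the cluster manager
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2
pass4SymmKey = <your key>

# server.conf on each peer (a Site 02 node shown)
[general]
site = site2

[clustering]
mode = peer
manager_uri = https://<cluster-manager>:8089
pass4SymmKey = <your key>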
AppDynamics provides a robust set of APIs that allow you to retrieve various metrics, including CPU and memory usage, from monitored applications and servers, but I was confused to get a response in terms of percentages and other unrelated metrics. Are there any specific APIs where I would get the info regarding CPU, memory, disk available capacity, etc.? It should be similar to what we get in the Controller with regard to Volumes, as shared in the screenshot. Thanks
Hello, on which node did they advise setting this parameter? On the License Manager and/or any other node? Thank you.
Hello, I have an index with events, where each event belongs to a transaction (transaction_id). I am interested in transactions which contain two events with specific eventtypes (type1 and type2). Once I have all the transaction_ids which contain those two eventtypes, I want to join back to the events to get those complete transactions, including all their events. There might be events with other eventtypes as well which need to be retrieved. This is what I tried:

index="data"
| stats values(eventtype) as eventtype by transaction_id
| search eventtype="TYPE1" AND eventtype="TYPE2"
| table transaction_id
| join type=inner transaction_id [search index="data"]
| table *

But this query returns only a fraction of the available matching transactions. I read some other posts with all kinds of approaches; is it really so hard in Splunk to get such tasks done?
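A sketch of a join-free alternative, assuming TYPE1/TYPE2 are the eventtype values from the post. The subsearch behind join is silently capped (row and runtime limits), which is a common reason for getting only a fraction of the matching transactions; eventstats keeps everything in one pass, at the cost of its own memory limits:

index="data"
| eventstats values(eventtype) as tx_eventtypes by transaction_id
| search tx_eventtypes="TYPE1" AND tx_eventtypes="TYPE2"
| fields - tx_eventtypes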
Is your Splunk instance connected to the internet?
Removing FQDN from field values

Hi all, can anyone help me with framing the SPL query for the below requirement? I have a field named Host which contains multiple values, some of which include an FQDN in various formats at the end of the hostname, e.g.:

Host (value1.corp.abc.com, value2.abc.com, value3.corp.abc, value4.xyz.com, value5.klm.corp, value6.internal, value7.compute.internal, etc...)

From this, I need to get the Host values as (value1, value2, value3, value4, value5, value6, value7) in my results by removing all types of FQDN suffixes. Please can you help. Thanks in advance.
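A minimal sketch, assuming the short hostname is always everything before the first dot:

| eval Host=mvindex(split(Host, "."), 0)

or, equivalently, with a sed-style replacement:

| rex field=Host mode=sed "s/\..*$//"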
Hi Team, I am connecting Anypoint Studio with Splunk using HEC. The logs are being forwarded, but some of them are missing in Splunk even though they are present in the Anypoint Studio logs. How do I troubleshoot the issue? Thanks, Karthi
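A starting point on the Splunk side, assuming you can search the _internal index: splunkd logs HEC parsing and rejection errors under the HttpInputDataHandler component, so a search like this (the stats split is just one way to summarise it) often reveals dropped or malformed payloads:

index=_internal sourcetype=splunkd component=HttpInputDataHandler (ERROR OR WARN)
| stats count by log_level component

It is also worth checking the HEC responses on the Anypoint side; anything other than HTTP 200 with code 0 means the event was not accepted.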