All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello - I am a new Splunk user and learning as I go. My current task is to break down errors/exceptions in a chart, grouped by error code, in a table or list. My current query only returns null values:

index=(index name) host=(hostname) | timechart count by error
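A "NULL" series from timechart ... by error usually means the error field is not extracted from the events at all. A minimal sketch, assuming the error code appears in the raw text and needs a rex extraction first (the index, host, and rex pattern below are placeholders, not from the original post):

```spl
index=my_index host=my_host
| rex "(?i)error[ :=]+(?<error>\S+)"
| search error=*
| timechart count by error
```

The `| search error=*` drops events where nothing was extracted; checking the fields sidebar in Verbose mode first will show whether an error field already exists under a different name.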
I am trying to get multiple values from XML as shown below. I have tried xpath and spath and both show nothing. I am looking for ResponseCode, SimpleResponseCode and nResponseCode. Here is the sample XML for reference:

| makeresults
| eval _raw="<?xml version=\"1.0\" encoding=\"utf-8\"?> <soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soapenv:Body> <ns3:LogResponse xmlns:ns2=\"http://randomurl.com/sample1\" xmlns:ns3=\"http://randomurl.com/sample2\"> <ResponseCode>OK</ResponseCode> <State>Simple</State> <Transactions> <TransactionName>CHANGED</TransactionName> </Transactions> <Transactions> <TransactionData>CHANGE_SIMPLE</TransactionData> </Transactions> <ServerTime>1649691711637</ServerTime> <SimpleResponseCode>OK</SimpleResponseCode> <nResponseCode> <nResponseCode>OK</nResponseCode> </nResponseCode> <USELESS>VALUES</USELESS> <MORE_USELESS>false</MORE_USELESS> </ns3:LogResponse> </soapenv:Body> </soapenv:Envelope>"
| xpath outfield=
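One common reason spath "shows nothing" on XML like this is that spath keeps the namespace prefixes in the path, so path=ResponseCode matches nothing. A sketch using the full namespaced paths from the sample event above (worth testing, since namespace handling varies with how the event was indexed):

```spl
... | spath path=soapenv:Envelope.soapenv:Body.ns3:LogResponse.ResponseCode output=ResponseCode
| spath path=soapenv:Envelope.soapenv:Body.ns3:LogResponse.SimpleResponseCode output=SimpleResponseCode
| spath path=soapenv:Envelope.soapenv:Body.ns3:LogResponse.nResponseCode.nResponseCode output=nResponseCode
```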
My sample events are like this:

event 1
My name is Ethan [host="asw.pbrfinance.sdo.dgr.com"]
My address is 46e 91 st [host="asw.pbrfinance.sdo.dgr.com"]
my city is Atlanta [host="asw.pbrfinance.sdo.dgr.com"]

event 2
My name is Thomas [host="asw..sdo.dgr.cowq234wdwaf.mhh.com"]
My address is 996e 97 st [host="asw..sdo.dgr.cowq234wdwaf.mhh.com"]
my city is Atlanta [host="asw..sdo.dgr.cowq234wdwaf.mhh.com"]

I want the host name to appear in the output only once, not multiple times. Is there any way to do this in props.conf? Please help me with a proper regex for this.

Expected output:

event 1
My name is Ethan [host="asw.pbrfinance.sdo.dgr.com"]
My address is 46e 91 st
my city is Atlanta
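One option is an index-time SEDCMD in props.conf where parsing happens (heavy forwarder or indexer). This is a sketch, not a tested rule: the sourcetype name is a placeholder, the (?s) inline flag is needed so the dot crosses the newlines inside the multiline event, and the backreference \1 only collapses one repeated tag per match, so it must be verified against real events with several lines:

```ini
[my_sourcetype]
SEDCMD-dedupe_host = s/(?s)(\[host="[^"]+"\])(.*?)\s*\1/\1\2/g
```

If the sed approach proves fragile for events with many repeats, doing it at search time with rex mode=sed on a copy of _raw is an alternative that is easier to iterate on.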
I'm new to ES. I have taken the ES Admin course, so I probably shouldn't have to ask for help, but I'm pulling my hair out. I have a Linux host running sshd, no firewall. This host has the universal forwarder sending events to the index cluster. I have another Linux host running a brute force attack against it. Search in Splunk clearly shows the failed attempts, thousands of them. In ES, I have enabled the "Brute Force Access Behavior Detected" correlation search and added an Adaptive Response Action to create a notable. However, even though there are thousands of matching events, I never get a notable created. The SA-AccessProtection app is installed. Any ideas on how to troubleshoot this, or what might be wrong, greatly appreciated.
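The "Brute Force Access Behavior Detected" correlation search runs against the Authentication data model, not raw events, so a common cause is the failed logins not being CIM-mapped (tagged authentication, with action=failure). A first check, as a sketch:

```spl
| tstats summariesonly=true count from datamodel=Authentication where Authentication.action="failure" by Authentication.src, Authentication.dest
```

If this returns nothing but the same search with summariesonly=false does return rows, the data model acceleration is disabled or lagging; if both return nothing, the sshd sourcetype is not mapped to the Authentication data model.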
Hi, a data source is being indexed one hour in the future (probably since the TZ shift - the twice-a-year clock change in France, this time +0100). We were on GMT+1, now we're on GMT+2. I don't know where the problem is:
- checked the server NTP => ok, GMT+2 updated
- checked the data source file => ok
- tried to reproduce in a dev env on a single instance: issue not reproduced!
- this is the only data source with the issue

My prod env is distributed (SHC, indexer cluster and multiple forwarders); the data is a JSONL file. I'm so lost!! Thank you for your help!! Ema

props.conf on the indexer cluster:

[mysourcetype]
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_PREFIX = "dte":"
TIME_FORMAT = %d/%m/%Y %H:%M:%S
TRUNCATE = 0
MAX_DAYS_AGO = 4000
category = Structured
disabled = false
pulldown_type = true

data sample:

{"idj":"3108824152","dce":"IDN","fce":"IDN2","ace":"176","dte":"08/04/2022 14:44:31","org":"GN","dmc":"2","idu":"211151","csu":"00082827","lsu":"CROSS BDOHRIJ GHBGD14 ","ctx":"Identifiant:PN-003042021007790-ARD-PPM-70732201#Procédure de référence:CIAHTDT CENTRAL DE CNJAEN-2021-007790#Type personne:Physique#Qualité personne:Mise en cause#Nom:XXX#Prénom:yyy#Lieu de naissance:CAEN#Date de naissance:05/01/1991#","idd":"PN-0030428541021007790-ARD-PPM-7074532201","ise":"N","cts":[{"idj":"3108824152","nom":"XXX","pre":"yyy","jne":"5","mne":"1","ane":"1981","lne":"CAEN","cot":"","not":"","qot":"","nuo":"","ctt":"","gtt":"","qtt":"","ntt":""}]}

This data is indexed at 08/04/2022 15:44:31 for 08/04/2022 14:44:31!
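Since TIME_FORMAT above has no timezone token, Splunk has to guess the offset per parsing host, which is exactly the kind of thing that breaks at a DST change. One sketch worth testing is pinning the timezone explicitly in the same sourcetype stanza, on whichever tier actually parses (heavy forwarder if there is one, otherwise the indexers):

```ini
[mysourcetype]
TIME_PREFIX = "dte":"
TIME_FORMAT = %d/%m/%Y %H:%M:%S
TZ = Europe/Paris
```

TZ = Europe/Paris follows DST automatically, so it should stay correct at both yearly shifts. If a heavy forwarder sits in front of the indexers, props there win and would explain why the dev mono-instance did not reproduce the issue.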
Right now I have a lot of macros to help with reports, dashboards and knowledge objects in general. We do not really use tags/eventtypes. Each business has multiple macros that need to be managed based on how our items are logged (this is the root cause, but it won't change easily). I am wondering, from a performance standpoint, is there a way I could more easily get the events I need through a tag/eventtype or some other way?

For example, I need to get a list of all functions that get called. For that we need an overall macro, something to exclude some carryovers/one-time jobs, and other items we don't care about. We are implementing more and it's becoming a huge mess. I was also thinking of using the macros to create a weekly lookup that can then be used in dashboards/reports to make things more efficient. Just looking for ideas as to what might be a better/cleaner way to do things.

Edit: I get that macros are not a performance issue and will just run whatever SPL is in there. I was more wondering whether this is generally the most efficient way, or whether I could benefit from using something different here.
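The weekly-lookup idea can be sketched as a scheduled saved search; the macro and field names below are placeholders standing in for whatever the business macros resolve to:

```spl
`all_business_functions`
| stats count AS calls, latest(_time) AS last_called by function_name
| outputlookup called_functions.csv
```

Dashboards and reports then read | inputlookup called_functions.csv instead of re-running the full event search, which trades freshness (at most a week stale) for speed. For the filtering layer itself, an eventtype per business plus tags gives the same reuse as macros but is easier to combine (tag=functions) across businesses.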
Hey Team, I have millions of records to search. The record structure is given below. My requirement is to get the total length of aValues across all records. For example, if the aValues lengths for two records are 10 and 12, I should display 22.

{
  resp: {
    meta: {
      bValues: [
        { aValues: [ ] }
      ]
    }
  }
}

Below is the Splunk query I tried, but it is not working for millions of records, only for a small set like 10 records:

index=myIndex
| spath path=resp.meta.bValues{} output=BValues
| stats count by BValues
| spath input=BValues path=aValues{} output=aValues
| mvexpand aValues
| spath input=aValues
| spath input=BValues
| fields - BValues count aValues*
| stats count
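mvexpand is usually what breaks at scale (it has per-event memory limits and multiplies the result set). A sketch that avoids it entirely by counting the multivalue field per event and summing, assuming the structure shown above:

```spl
index=myIndex
| spath path=resp.meta.bValues{}.aValues{} output=aValues
| eval aValues_len=mvcount(aValues)
| stats sum(aValues_len) AS total_aValues_length
```

mvcount returns the number of elements spath extracted for each event, so no rows are expanded and stats streams over the full data set.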
I have a Linux server falsely showing as down on Splunk Web.  I have tried restarting the Linux server and restarting the Splunk forwarder on the Linux server but the issue still remains.
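"Down" in Splunk Web generally means no recent events from that host, so two quick checks help separate a forwarder problem from a display problem. A sketch (the host name is a placeholder):

```spl
| metadata type=hosts index=*
| search host="my-linux-host"
| eval lastSeen=strftime(recentTime, "%F %T")
```

If lastSeen is current, the data is arriving and the "down" status is stale on the monitoring side; if not, searching index=_internal host="my-linux-host" earliest=-1h shows whether the forwarder itself is still phoning home even though the monitored inputs are not.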
Hello, I've been trying a few different ways, with no luck, to represent some server counts that happen on Thursday, Friday, Saturday, Sunday and (sometimes) Monday. Unfortunately, it seems I can't do this count "per week", as we need to count per "scan time", which starts Thursday and ends on the latest Monday. I started looking into my possible options and think I have half an idea of how to accomplish it, but better ideas would be awesome as well. Is it possible to do a sum based on "grouped days" (Thu+Fri+Sat+Sun+Mon, or dayofweek 4,5,6,0,1)? The main thing I can't get past is how to differentiate the "grouped days". We like to evaluate based on the "current week" of the year, but our "grouped days" persist across multiple "current weeks" of the year (this is the variable 'weekofyear'). Essentially, I need to count by weekofyear, where the output would look like:

Department   Week of Year (technically, our "scan cycle")   Server Count (Server_Responses)
Dept.A       10 (combined across Thu, Fri, Sat, Sun, Mon)   100 (e.g. 3 Thu, 90 Fri, 3 Sat, 3 Sun, 1 Mon)
Dept.B       10                                             200
Dept.A       11                                             105 (e.g. 10 Thu, 80 Fri, 10 Sat, 3 Sun, 2 Mon)
Dept.B       11                                             203

I haven't really gotten any further than evaluating date commands. Other than that, I just have a line chart of day of week over the counts... It's not very pretty.
index=blah sourcetype=blah search blah
```what I have been looking at so far...```
| rename server_id as "Server_Responses"
```at this point I was just looking at the possibilities to count by an aggregated "day of week in number" or by "dayofweek(short|full)", and really all possibilities```
| eval dayofweekshort=strftime(_time,"%a")
| timechart count(ping.status) as pingstats, dc("Server_Responses") by Department span=1w@1w
```start evaluating possible days, weeks, months, current weeks, etc```
| eval dayofweekshort=strftime(_time,"%a")
| eval dayofweekfull=strftime(_time,"%A")
| eval dayofweekasnumber=strftime(_time,"%w")
| eval dayofmonth=strftime(_time,"%d")
| eval weekofmonth=floor(dayofmonth/7)+1
| eval weekofyear=strftime(_time,"%U")
| fields - day
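One way to define a Thursday-anchored "scan cycle" is to snap each event's time back to the most recent Thursday with relative_time(_time, "@w4") and group on that, instead of the calendar weekofyear. A sketch, reusing the placeholder index/field names from the query above:

```spl
index=blah sourcetype=blah
| eval dow=tonumber(strftime(_time, "%w"))
| where dow=4 OR dow=5 OR dow=6 OR dow=0 OR dow=1
| eval scan_cycle=strftime(relative_time(_time, "@w4"), "%Y-%U")
| stats dc(server_id) AS Server_Responses by Department, scan_cycle
```

Because Monday events snap back to the Thursday four days earlier, a Thu-Fri-Sat-Sun-Mon run lands in one scan_cycle bucket even when it straddles a calendar week boundary; the where clause drops Tue/Wed events that would otherwise join the previous cycle.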
Hi All, I need help with a Splunk query for the below scenario:

Query 1: index=abc | table src, dest_name, severity, action

If it finds a dest_name for any high or critical severity, it should look for a matching computerdnsname in index xyz, and where it matches, display the result.

Query 2: index=xyz
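This shape (use results of one search to filter another) is the classic subsearch pattern: the inner search returns the high/critical dest_name values, renamed to the field the outer index uses. A sketch, assuming the field and severity values as described above:

```spl
index=xyz
    [ search index=abc severity IN ("high", "critical")
      | dedup dest_name
      | rename dest_name AS computerdnsname
      | fields computerdnsname ]
| table _time, computerdnsname
```

The rename inside the brackets makes the subsearch expand to computerdnsname="..." OR computerdnsname="..." against index=xyz. Subsearches cap at 10,000 results by default; for larger match lists, a lookup-based join scales better.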
Hello! I can't manage to get Splunk to extract the following timestamp at import: 2015-12-01 00:00:00+00. Could you help me find the format string required for proper extraction? Thanks!
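In Splunk's strptime variables, %z covers numeric UTC offsets, so a props.conf sketch for that sourcetype (the stanza name is a placeholder) would be:

```ini
[my_sourcetype]
TIME_FORMAT = %Y-%m-%d %H:%M:%S%z
MAX_TIMESTAMP_LOOKAHEAD = 25
```

One caveat to verify: the sample offset is the short two-digit form (+00 rather than +0000), and if %z does not accept it on your version, setting TZ = UTC and dropping %z from TIME_FORMAT achieves the same result for data that is always +00.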
Hi,

I am trying to get an App Key from the Controller. I tried step 3 in the wizard by selecting Install the iOS Agent and chose the option Auto-generate a new Mobile App Group. After loading, it generates the code snippet in the controller, but the snippet doesn't contain an app key.

FYI, I am using the sample app for running and exploring AppDynamics. To use the sample app I need an app key to put in the AppDelegate, but the one generated in the controller comes out empty. Is there any process or steps involved to generate one?

ADEumAgentConfiguration *config = [[ADEumAgentConfiguration alloc] initWithAppKey:@" "];

^ Post edited by @Ryan.Paredez for formatting and searchability
Good morning! I updated my index cluster/SHC to 8.2.6 yesterday, and everything went fairly well, except for the "Health of Splunk Deployment" screen (top right of the screen, the (!) next to my username). On different instances it is either completely broken or only working(ish). In either case, I cannot scroll down to see the rest of it. I have restarted the instances, changed browsers, etc. It was definitely working before.
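While the UI panel is broken, the same health-report data can be pulled over REST, which at least confirms whether the backend is healthy and isolates the problem to the front end. A sketch:

```spl
| rest /services/server/health/splunkd/details splunk_server=local
```

If this returns the full feature tree, the health report itself is fine and the issue is in the 8.2.6 web UI rendering (worth clearing the browser cache and checking splunkd_ui_access.log for errors).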
I have 2 sourcetypes, WinHostMon and WinEventLog, from the Splunk Add-on for Microsoft Windows. After doing the Asset and Identity configuration in Splunk ES, the lookup file is fine and I can see the results with the search command | inputlookup test_assets2.csv, and the Asset Lookup information is also displayed in the ES > Security Domains > Identity > Asset Center dashboard. But there is a problem: enrichment fields such as dest_asset, dest_asset_id, ... only appear for the WinHostMon sourcetype. Can someone help me please? Thank you very much!
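ES only enriches events where the relevant CIM field (dest, src, ...) is extracted at search time and its value matches an asset entry, so a likely cause is that the WinEventLog events either lack a dest field or carry a value (short name vs FQDN vs IP) that is not in the lookup. A sketch of a direct check using ES's identity-management macro (field names assumed from the post):

```spl
index=wineventlog
| head 100
| `get_asset(dest)`
| table dest, dest_asset_id, dest_is_expected
```

If dest is empty here, the problem is field extraction on that sourcetype; if dest is populated but the asset columns are empty, the lookup keys need to cover that form of the host name.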
Hi Teams, I am a newbie to Splunk. I have log messages like this:

10/04/2022 10:12:31.000
START RequestId: 46618528-6242-4eee-97b2-270e875bac1e Version: 165
END RequestId: 46618528-6242-4eee-97b2-270e875bac1e
REPORT RequestId: 46618528-6242-4eee-97b2-270e875bac1e Duration: 68.98 ms Billed Duration: 69 ms Memory Size: 256 MB Max Memory Used: 170 MB
START RequestId: 9a8f3f1e-aa03-40d9-a064-bb10a47a92eb Version: 163
END RequestId: 9a8f3f1e-aa03-40d9-a064-bb10a47a92eb
REPORT RequestId: 9a8f3f1e-aa03-40d9-a064-bb10a47a92eb Duration: 3.76 ms Billed Duration: 4 ms Memory Size: 256 MB Max Memory Used: 184 MB

I want to get the Max Memory Used value as a percentage (Max Memory Used / Memory Size) for each message and create a time chart to show this value. Can anyone help me with this?
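The two numbers can be pulled out of the REPORT lines with a rex extraction and divided in an eval. A sketch, assuming the field layout shown in the sample above (the index is a placeholder):

```spl
index=my_lambda_logs "REPORT RequestId"
| rex "Memory Size: (?<mem_size>\d+) MB\s+Max Memory Used: (?<mem_used>\d+) MB"
| eval mem_pct=round(mem_used / mem_size * 100, 1)
| timechart avg(mem_pct) AS avg_memory_used_pct
```

For the two sample events this yields 66.4% and 71.9% respectively (170/256 and 184/256); timechart then averages them per time bucket.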
timechart [stats count | eval range="$timeRange$" | eval search=case(range=="-6h", "span=30m ", range=="-1d", "span=1h ", range=="-3d", "span=2h ", range=="-7d", "span=4h ")] stopped working after upgrading Splunk from 8.0.6 to 8.2.5.
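One variant worth trying, sketched under the assumption that the subsearch argument-expansion behavior changed between those releases (the $timeRange$ token is from the original dashboard): generate the span string with makeresults and return only the search field, so the subsearch expands to exactly the span=... text:

```spl
... | timechart
    [ makeresults
      | eval range="$timeRange$"
      | eval search=case(range=="-6h", "span=30m", range=="-1d", "span=1h", range=="-3d", "span=2h", range=="-7d", "span=4h")
      | fields search ] count
```

If that still fails, comparing the expanded search string in the Job Inspector on 8.0.6 vs 8.2.5 will show what the subsearch is actually returning on each version.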
Hi, I know this is probably an easy one, but I'm new and need some help. I have a field called "Account Name":

Account Name
Alan Test Account
Debbie Production Account
John Dev Account
Ed Test Account

I would like to create a new field called Environment that matches Test, Production or Dev:

Account Name                 Environment
Alan Test Account            Test
Debbie Production Account    Production
John Dev Account             Dev
Ed Test Account              Test
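An eval with case() and match() covers this; note the single quotes around 'Account Name', which eval requires for field names containing spaces. A sketch:

```spl
... | eval Environment=case(
    match('Account Name', "Test"), "Test",
    match('Account Name', "Production"), "Production",
    match('Account Name', "Dev"), "Dev")
```

Accounts matching none of the three patterns get a null Environment; adding a final pair like true(), "Other" would give them a default value instead.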
Hello, We're running into an issue with a UF sending data to a new metrics index under an app deployed by our deployment server. None of the perfmon inputs are sending data into our new index, and we're not seeing any errors. We also have the Splunk TA Windows Base app deployed to these same servers, and if we test adjusting the inputs.conf stanzas in the TA app to send perfmon metrics to our new index, it works fine. Below are the input stanzas from both apps:

Custom app:

## Process
[perfmon://Process]
counters = % Processor Time; % User Time- Private
disabled = 0
instances = *
interval = 60
mode = single
object = Process
useEnglishOnly=true
index = custom_metrics

TA Windows Base originally:

## Process
[perfmon://Process]
counters = % Processor Time; % User Time; % Privileged Time; Virtual Bytes Peak; Virtual Bytes; Page Faults/sec; Working Set Peak; Working Set; Page File Bytes Peak; Page File Bytes; Private Bytes; Thread Count; Priority Base; Elapsed Time; ID Process; Creating Process ID; Pool Paged Bytes; Pool Nonpaged Bytes; Handle Count; IO Read Operations/sec; IO Write Operations/sec; IO Data Operations/sec; IO Other Operations/sec; IO Read Bytes/sec; IO Write Bytes/sec; IO Data Bytes/sec; IO Other Bytes/sec; Working Set - Private
disabled = 0
instances = *
interval = 10
mode = single
object = Process
useEnglishOnly=true
index = ava_cs_metrics

TA Windows Base changed to point to our new index (which writes to the index fine):

## Process
[perfmon://Process]
counters = % Processor Time; % User Time; % Privileged Time; Virtual Bytes Peak; Virtual Bytes; Page Faults/sec; Working Set Peak; Working Set; Page File Bytes Peak; Page File Bytes; Private Bytes; Thread Count; Priority Base; Elapsed Time; ID Process; Creating Process ID; Pool Paged Bytes; Pool Nonpaged Bytes; Handle Count; IO Read Operations/sec; IO Write Operations/sec; IO Data Operations/sec; IO Other Operations/sec; IO Read Bytes/sec; IO Write Bytes/sec; IO Data Bytes/sec; IO Other Bytes/sec; Working Set - Private
disabled = 0
instances = *
interval = 10
mode = single
object = Process
useEnglishOnly=true
index = custom_metrics
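One thing that stands out in the stanzas above: both apps define the same [perfmon://Process] stanza, and identically named input stanzas across apps are merged by app precedence rather than run independently, so the TA's settings may simply be winning over the custom app's. As a sketch, two checks: first, confirm whether anything at all is landing in the new index:

```spl
| mcatalog values(metric_name) AS metrics WHERE index=custom_metrics BY host
```

Second, on the UF, splunk btool inputs list perfmon://Process --debug prints the merged stanza with the app each setting comes from, which will show directly whether index = custom_metrics survived the merge. Renaming the custom app's stanza (or consolidating the counters into one app) avoids the conflict.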
How do I access and submit Splunk Observability Cloud cases to the Splunk Support Portal? For existing customers who used the Splunk Observability Cloud (SignalFx) Support site, what’s changed about the support process now that we’ve moved to the Splunk Support Portal?
Hi All, I want to pull AD logs into Splunk Cloud. I see some sources about the Splunk Add-on for Microsoft Windows 6.0.0 and above, which pulls AD logs, and another add-on that does the same thing. I am confused. Can you point me in the right direction?

Thanks in advance.