All Topics

Hi Team, I am installing the Splunk universal forwarder using Ansible. When I try to start Splunk and accept the license, I get the error below:

```
fatal: [Server-a]: FAILED! => {"changed": true, "cmd": ["/opt/splunkforwarder/bin/splunk", "start", "--accept-license", "--answer-yes", "--no-prompt"], "delta": "0:00:00.130544", "end": "2022-04-16 17:17:20.807732", "msg": "non-zero return code", "rc": 1, "start": "2022-04-16 17:17:20.677188", "stderr": "\n-- Migration information is being logged to '/opt/splunkforwarder/var/log/splunk/migration.log.2022-04-16.17-17-20' --\nERROR while running renew-certs migration.", "stderr_lines": ["", "-- Migration information is being logged to '/opt/splunkforwarder/var/log/splunk/migration.log.2022-04-16.17-17-20' --", "ERROR while running renew-certs migration."], "stdout": "\nThis appears to be an upgrade of Splunk.\n--------------------------------------------------------------------------------)\n\nSplunk has detected an older version of Splunk installed on this machine. To\nfinish upgrading to the new version, Splunk's installer will automatically\nupdate and alter your current configuration files. Deprecated configuration\nfiles will be renamed with a .deprecated extension.\n\nYou can choose to preview the changes that will be made to your configuration\nfiles before proceeding with the migration and upgrade:\n\nIf you want to migrate and upgrade without previewing the changes that will be\nmade to your existing configuration files, choose 'y'.\nIf you want to see what changes will be made before you proceed with the\nupgrade, choose 'n'.\n\n\nPerform migration and upgrade without previewing configuration changes? [y/n] y\n\nMigrating to:\nVERSION=8.2.4\nBUILD=87e2dda940d1\nPRODUCT=splunk\nPLATFORM=Linux-x86_64\n\n\nERROR: In order to migrate, Splunkd must not be running.", "stdout_lines": ["", "This appears to be an upgrade of Splunk.", "--------------------------------------------------------------------------------)", "", "Splunk has detected an older version of Splunk installed on this machine. To", "finish upgrading to the new version, Splunk's installer will automatically", "update and alter your current configuration files. Deprecated configuration", "files will be renamed with a .deprecated extension.", "", "You can choose to preview the changes that will be made to your configuration", "files before proceeding with the migration and upgrade:", "", "If you want to migrate and upgrade without previewing the changes that will be", "made to your existing configuration files, choose 'y'.", "If you want to see what changes will be made before you proceed with the", "upgrade, choose 'n'.", "", "", "Perform migration and upgrade without previewing configuration changes? [y/n] y", "", "Migrating to:", "VERSION=8.2.4", "BUILD=87e2dda940d1", "PRODUCT=splunk", "PLATFORM=Linux-x86_64", "", "", "ERROR: In order to migrate, Splunkd must not be running."]}
```

This error does not happen every time. The first time I run the playbook on a host it succeeds; if I run it a second time on the same host, it shows this error. Can someone help me understand this?

The command I am using:

```
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes
```

Thanks in advance, Poojitha
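The stdout ends with "ERROR: In order to migrate, Splunkd must not be running", which suggests the second run finds splunkd already up (started by the first run) and the migration check aborts. A minimal idempotency sketch using the paths from the post (in Ansible the same guard can be expressed with a status check or a `creates:` argument rather than raw shell):

```
# Only start if splunkd is not already running ("splunk status" exits
# non-zero when splunkd is stopped).
/opt/splunkforwarder/bin/splunk status >/dev/null 2>&1 || \
  /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt
```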
Hi, is there any way to set up an alert for server reboots in AppDynamics? ^ Post edited by @Ryan.Paredez for a searchable title. Please make sure the titles of your posts are questions.
Hello, I have a dashboard with two different time filters. The first time filter is used to filter on _time. The second time filter should be used to filter the results on a different field X. In the dashboard URL I see form2.date2.earliest=<VALUE> & form2.date2.latest=<OPTIONAL_VALUE>. I would like to use a where clause or something similar to filter my results based on that date2 input. What is the best way to do this in Splunk? I hope the question is clear and understandable without a code snippet.
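One pattern that works if field X holds epoch seconds (a sketch; `myindex`, `date1`, and `X` are placeholders, and it assumes the second picker emits relative time strings like `-7d@d`): convert the `date2` tokens to epoch with `relative_time()`, then filter with `where`. If the picker emits absolute times, the tokens arrive as epoch values and can be compared directly instead.

```
index=myindex earliest=$date1.earliest$ latest=$date1.latest$
| eval lower=relative_time(now(), "$date2.earliest$")
| eval upper=relative_time(now(), "$date2.latest$")
| where X >= lower AND X <= upper
```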
Hi, I've been trying to use the output from a lookup as input to another lookup. In the first lookup I have the names of the files to search. I have a query with field names in a column like this:

field1
name1
name2

Then I search field1 in a lookup that has a column with file names:

| lookup wheretosearch.csv field1 OUTPUTNEW lookup_name

My lookup wheretosearch.csv looks like this:

field1, lookup_name
name1, name1_lookup.csv
name2, name2_lookup.csv

Then, I need that lookup_name field to search in a lookup for each row:

| lookup lookup_name ....

But this does not work, because `lookup` expects a literal lookup name, not the value of a field. How can I do this?
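`lookup` needs a literal lookup name, but `map` can substitute a field's value into a templated subsearch, one run per row. A sketch under the assumptions in the post (a `wheretosearch.csv` with `field1` and `lookup_name` columns); note that `map` replaces the pipeline with the subsearch results and runs a separate search per row, so it does not scale to large row counts:

```
| inputlookup wheretosearch.csv
| map maxsearches=100 search="| inputlookup $lookup_name$"
```

If the per-name CSVs are small, merging them into a single lookup with an extra column naming the source is usually the cleaner long-term fix.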
Hi, I am using streamstats to calculate a rank based on a cumulative count per day per category. On some days, a particular category may not appear; on those days, I want to carry over the count for that category from the previous day. I tried all the arguments of streamstats and other commands with no success. Can someone help me with this, please? I am pasting the code of a similar case, ranking on the cumulative points for each football match:

```
index=index
| stats sum(TotalPoints) AS Points BY match, "Sold To"
| fillnull value=0
| rename "Sold To" AS Owner
| sort match
| streamstats sum(Points) AS Total BY Owner
| sort match Total
| streamstats count AS Rank BY match
| xyseries match Owner Rank
```

Output:

match | Owner1 | Owner2 | Owner3
1 | 2 (Rank 2, Total: 10) | 1 (Rank 1, Total: 15) | (Total: 0)
2 | 1 (Rank 1, Total: 25) | 3 (Rank 3, Total: 18) | 2 (Rank 2, Total: 20)
3 | (Total: 0) | 2 (Rank 2, Total: 23) | 1 (Rank 1, Total: 30)

Expected output:

match | Owner1 | Owner2 | Owner3
1 | 2 (Rank 2, Total: 10) | 1 (Rank 1, Total: 15) | 3 (Rank 3, Total: 0)
2 | 1 (Rank 1, Total: 25) | 3 (Rank 3, Total: 18) | 2 (Rank 2, Total: 20)
3 | 2 (Rank 3, Total: 25) | 3 (Rank 2, Total: 23) | 1 (Rank 1, Total: 30)
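One approach (a sketch built on the posted query): force every match/Owner combination to exist before the cumulative streamstats, so a category that is absent on a given match still carries its running total. `chart ... over ... by ...` emits the full matrix, `fillnull` zeroes the gaps, and `untable` turns it back into rows:

```
index=index
| stats sum(TotalPoints) AS Points BY match, "Sold To"
| rename "Sold To" AS Owner
| chart sum(Points) over match by Owner
| fillnull value=0
| untable match Owner Points
| sort 0 match
| streamstats sum(Points) AS Total BY Owner
| sort 0 match -Total
| streamstats count AS Rank BY match
| xyseries match Owner Rank
```

The `-Total` sort makes Rank 1 the highest running total, matching the expected output; flip it if you want the opposite.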
Don't show a result where the src_ip is X and dest_ip is Y.

index=test host=test source=test conn_state=sf
| eval src_ip=x and
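Assuming the goal is to drop events where both fields match at once, the usual form is a `NOT` in the base search (substitute real addresses for X and Y), a sketch:

```
index=test host=test source=test conn_state=sf NOT (src_ip="X" AND dest_ip="Y")
```

`| where NOT (src_ip="X" AND dest_ip="Y")` does the same thing after the initial search, at the cost of filtering later in the pipeline.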
Is there a way to do the following?   <row depends="$resultCount$"<=3>   I have a few panels I want to show dynamically based on the results from the above search.
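`depends` only checks whether a token is set; it cannot evaluate `<=`. The usual workaround (a SimpleXML sketch; the token name and search are placeholders) is to set or unset a token from the search's `<done>` handler and hang the row on that token:

```
<search>
  <query>index=... | stats count</query>
  <done>
    <condition match="'job.resultCount' &lt;= 3">
      <set token="show_row">true</set>
    </condition>
    <condition>
      <unset token="show_row"></unset>
    </condition>
  </done>
</search>

<row depends="$show_row$">
  <!-- panels shown only when the search returns 3 or fewer results -->
</row>
```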
Hi Everyone, I am struggling a lot to create a dashboard that shows SLAs for alerts received on the Incident Review dashboard. Basically I need two things only: 1. SLA from alert received until assigned (from status New to status In Progress). 2. SLA from alert pending to closure (from status Pending to status Closed). I am facing many issues with empty fields in alert urgency and creation time. I have spent a week creating the query below:

```
| tstats `summariesonly` earliest(_time) as incident_creation_time from datamodel=Incident_Management.Notable_Events_Meta by source,Notable_Events_Meta.rule_id
| `drop_dm_object_name("Notable_Events_Meta")`
| `get_correlations`
| join type=outer rule_id [| from inputlookup:incident_review_lookup | eval _time=time | stats earliest(_time) as review_time by rule_id, owner, user, status, urgency]
| rename user as reviewer
| lookup update=true user_realnames_lookup user as "reviewer" OUTPUTNEW realname as "reviewer_realname"
| eval reviewer_realname=if(isnull(reviewer_realname),reviewer,reviewer_realname), nullstatus=if(isnull(status),"true","false"), temp_status=if(isnull(status),-1,status)
| lookup update=true reviewstatuses_lookup _key as temp_status OUTPUT status,label as status_label,description as status_description,default as status_default,end as status_end
| eval incident_duration_minutes=round(((review_time-incident_creation_time)/60),0)
| eval sla=case(urgency="critical" AND incident_duration_minutes>15, "breached", urgency="high" AND incident_duration_minutes>15, "breached", urgency="medium" AND incident_duration_minutes>45, "breached", urgency="low" AND incident_duration_minutes>70, "breached", isnull(review_time), "incident not assigned", 1=1, "not breached")
| convert timeformat="%F %T" ctime(review_time) AS review_time, ctime(incident_creation_time) AS incident_creation_time
| fields rule_id, source, urgency, reviewer_realname, incident_creation_time, review_time, incident_duration_minutes, sla, status_label
| table rule_id, source, urgency, reviewer_realname, incident_creation_time, review_time, incident_duration_minutes, sla, status_label
```

But a lot of things are still missing. Could you please help me create a small dashboard with the requirements below? 1. SLA from alert received until assigned (from status New to status In Progress). 2. SLA from alert pending to closure (from status Pending to status Closed). Many thanks in advance.
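A sketch of just the two SLA measurements, computed from the status-transition times in `incident_review_lookup` (the lookup and field names are taken from the posted query and from default ES statuses; adjust `New`/`In Progress`/`Pending`/`Closed` to your environment's labels):

```
| inputlookup incident_review_lookup
| eval _time=time
| lookup reviewstatuses_lookup _key as status OUTPUT label as status_label
| stats min(eval(if(status_label="New", _time, null()))) as new_time
        min(eval(if(status_label="In Progress", _time, null()))) as inprogress_time
        min(eval(if(status_label="Pending", _time, null()))) as pending_time
        max(eval(if(status_label="Closed", _time, null()))) as closed_time
        by rule_id
| eval sla_assign_minutes=round((inprogress_time - new_time) / 60, 0)
| eval sla_close_minutes=round((closed_time - pending_time) / 60, 0)
| table rule_id sla_assign_minutes sla_close_minutes
```

Each dashboard panel can then be a variation of this search, for example averages or breach counts by urgency.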
Hello, I'm using an app which is listed as a visualization. With it, I can go into a dashboard and create a panel. My question: since I don't have to copy and paste the XML code for the panel myself, does that mean that if the app updates, any related panels automatically update as well? If not, do I need to recreate them, as I would for a panel I created manually? Thanks
Do any of you use (or know of) any scripts that look at Splunk configuration and point out errors, or otherwise provide a framework for some sanity checking? This is a fairly open question, and I'd also love any ideas for what kind of things you'd like to see in such a script.
Hi. When we use table in a search, rather than going to the Events tab it goes to the Statistics tab automatically. I would like a timechart macro to go straight to the Visualization tab in much the same manner. Is there any way to do this? I don't want to make a dashboard; I just want the macro to show a quick and dirty timechart. Bonus if some options can be put into the search to specify the visualization type and such. Here is the search I'm trying. It works fine, but I would like my users to see a visualization immediately rather than clicking on the Visualization tab.

```
index=dct_* "body.host.name"=somehost "body.system.cpu.total.norm.pct"="*"
| fields body.host.name body.system.cpu.total.norm.pct
| eval cpuPercent = 'body.system.cpu.total.norm.pct'*100
| timechart cont=FALSE avg(body.system.cpu.total.norm.pct)
```
Hello - I am a new Splunk user and learning as I go. My current task is to break down errors/exceptions in a chart, grouped by error code, in error tables or lists. My current query only returns NULL values:

```
index=(index name) host=(hostname)
| timechart count by error
```
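`timechart count by error` returns NULL when no field called `error` is extracted at search time, which matches the symptom. A sketch that extracts the field first (the `rex` pattern is a placeholder; adjust it to however the error codes appear in your events):

```
index=myindex host=myhost
| rex field=_raw "(?i)error[ =:]+(?<error>\w+)"
| timechart usenull=f count by error
```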
I am trying to get multiple values from XML, as shown below. I have tried xpath and spath and both show nothing. I am looking for ResponseCode, SimpleResponseCode and nResponseCode. Here is the sample XML for reference:

```
| makeresults
| eval _raw="<?xml version=\"1.0\" encoding=\"utf-8\"?>
<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">
<soapenv:Body>
<ns3:LogResponse xmlns:ns2=\"http://randomurl.com/sample1\" xmlns:ns3=\"http://randomurl.com/sample2\">
<ResponseCode>OK</ResponseCode>
<State>Simple</State>
<Transactions> <TransactionName>CHANGED</TransactionName> </Transactions>
<Transactions> <TransactionData>CHANGE_SIMPLE</TransactionData> </Transactions>
<ServerTime>1649691711637</ServerTime>
<SimpleResponseCode>OK</SimpleResponseCode>
<nResponseCode> <nResponseCode>OK</nResponseCode> </nResponseCode>
<USELESS>VALUES</USELESS>
<MORE_USELESS>false</MORE_USELESS>
</ns3:LogResponse>
</soapenv:Body>
</soapenv:Envelope>"
| xpath outfield=
```
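With namespace-prefixed XML, `spath` needs the fully prefixed path (for example `soapenv:Envelope.soapenv:Body.ns3:LogResponse.ResponseCode`), which is easy to get wrong; plain `rex` against the raw event is the pragmatic fallback. A sketch that pulls the three values from the sample (append it to the `makeresults` snippet above):

```
| rex field=_raw "<ResponseCode>(?<ResponseCode>[^<]+)</ResponseCode>"
| rex field=_raw "<SimpleResponseCode>(?<SimpleResponseCode>[^<]+)</SimpleResponseCode>"
| rex field=_raw "<nResponseCode>\s*<nResponseCode>(?<nResponseCode>[^<]+)</nResponseCode>"
| table ResponseCode SimpleResponseCode nResponseCode
```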
My sample events are like this:

event 1
My name is Ethan [host="asw.pbrfinance.sdo.dgr.com"]
My address is 46e 91 st [host="asw.pbrfinance.sdo.dgr.com"]
my city is Atlanta [host="asw.pbrfinance.sdo.dgr.com"]

event 2
My name is Thomas [host="asw..sdo.dgr.cowq234wdwaf.mhh.com"]
My address is 996e 97 st [host="asw..sdo.dgr.cowq234wdwaf.mhh.com"]
my city is Atlanta [host="asw..sdo.dgr.cowq234wdwaf.mhh.com"]

I want to limit the host name in the output to a single entry instead of repeating it on every line. Is there any way to do this in props.conf? Please help me with a proper regex for this.

Expected output:

event 1
My name is Ethan [host="asw.pbrfinance.sdo.dgr.com"]
My address is 46e 91 st
my city is Atlanta
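Index-time rewrites of `_raw` are done with `SEDCMD` in props.conf. A sketch that strips the `[host="..."]` tag from every line after the first (assumption-heavy: it relies on the tag sitting at the end of each line and on sed's `.` not crossing newlines, so test it against real events before deploying):

```
[your_sourcetype]
# Remove the [host="..."] tag from all lines except the first one.
SEDCMD-dedupe_host = s/\n(.*)\s*\[host="[^"]+"\]/\n\1/g
```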
I'm new to ES. I have taken the ES Admin course, so I probably shouldn't have to ask for help, but I'm pulling my hair out. I have a Linux host running sshd, no firewall. This host has the universal forwarder sending events to the index cluster. I have another Linux host running a brute force attack against it. A search in Splunk clearly shows the failed attempts, thousands of them. In ES, I have enabled the "Brute Force Access Behavior Detected" correlation search and added an Adaptive Response action to create a notable. However, even though there are thousands of matching events, a notable is never created. The SA_AccessProtection app is installed. Any ideas on how to troubleshoot this, or what might be wrong, are greatly appreciated.
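Two things worth checking, since the shipped correlation search reads the Authentication data model with `summariesonly=true` rather than raw events. A sketch of the verification search:

```
| tstats summariesonly=true count from datamodel=Authentication
    where Authentication.action="failure"
    by _time span=1h, Authentication.src, Authentication.app
```

If this returns nothing but the same search with `summariesonly=false` returns rows, data model acceleration has not caught up; if both return nothing, the sshd failures are probably not CIM-mapped (missing authentication tags), so the correlation search never sees them.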
Hi, one data source is being indexed one hour in the future (probably since the TZ shift: the twice-a-year hour change in France, this time +0100). We were on GMT+1, now we're on GMT+2. I don't know where the problem is:

- checked the server NTP => OK, GMT+2, updated
- checked the data source file => OK
- tried to reproduce in a dev env on a mono-instance: issue not reproduced!
- this is the only data source with the issue

My prod env is distributed (SHC, indexer cluster and multiple forwarders); the data is a JSONL file. I'm so lost! Thank you for your help! Ema

On the indexer cluster:

```
[mysourcetype]
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_PREFIX = "dte":"
TIME_FORMAT = %d/%m/%Y %H:%M:%S
TRUNCATE = 0
MAX_DAYS_AGO = 4000
category = Structured
disabled = false
pulldown_type = true
```

Data sample:

```
{"idj":"3108824152","dce":"IDN","fce":"IDN2","ace":"176","dte":"08/04/2022 14:44:31","org":"GN","dmc":"2","idu":"211151","csu":"00082827","lsu":"CROSS BDOHRIJ GHBGD14 ","ctx":"Identifiant:PN-003042021007790-ARD-PPM-70732201#Procédure de référence:CIAHTDT CENTRAL DE CNJAEN-2021-007790#Type personne:Physique#Qualité personne:Mise en cause#Nom:XXX#Prénom:yyy#Lieu de naissance:CAEN#Date de naissance:05/01/1991#","idd":"PN-0030428541021007790-ARD-PPM-7074532201","ise":"N","cts":[{"idj":"3108824152","nom":"XXX","pre":"yyy","jne":"5","mne":"1","ane":"1981","lne":"CAEN","cot":"","not":"","qot":"","nuo":"","ctt":"","gtt":"","qtt":"","ntt":""}]}
```

This data is indexed at 08/04/2022 15:44:31 although the event time is 08/04/2022 14:44:31!
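The `dte` timestamp carries no UTC offset, so splunkd applies a default timezone to it, and that default will not follow the French DST change, which would explain a clean one-hour shift. A sketch of the usual fix, assuming the events are written in French local time: pin the timezone in the same stanza on the tier that parses this source (the indexers here, since a universal forwarder does not parse):

```
[mysourcetype]
TZ = Europe/Paris
```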
Right now I have a lot of macros to help with reports, dashboards and knowledge items in general. We do not really use tags/eventtypes. Right now, though, each business has multiple macros that need to be managed based on how our items are logged (this is the root cause, but it won't change easily). I am wondering, from a performance standpoint, whether there is a way I could more easily get the events I need through a tag/eventtype or some other means. For example, I need to get a list of all functions that get called. For that we need an overall macro, something to exclude some carryovers/one-time jobs, and other items we don't care about. We are implementing more, and it's becoming a huge mess. I was thinking of using the macros to create a weekly lookup that can then be used in dashboards/reports, to try to make things more efficient as well. Just looking for ideas as to what might be a better/cleaner way to do things. Edit: I get that macros are not a performance issue in themselves and just run whatever SPL is in them. I was more wondering whether this is generally the most efficient way, or whether I could benefit from doing something different here.
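If the main goal is to stop re-running the expensive macro chain in every dashboard, a scheduled search that materializes the result into a lookup is a common pattern. A sketch (the macro and file names are placeholders for your own):

```
`business_functions_base` `exclude_onetime_jobs`
| stats count latest(_time) as last_seen by function
| outputlookup functions_called.csv
```

Dashboards and reports then start from `| inputlookup functions_called.csv`, which is effectively free compared to the original search.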
Hey Team, I have millions of records to search. The record structure is given below. My requirement is to get the total length of aValues across all records; for example, if the aValues lengths for two records are 10 and 12, I should display 22.

```
{ resp: { meta: { bValues: [ { aValues: [ ] } ] } } }
```

Below is the Splunk query I tried, but it does not work for millions of records; it only works for a small set, like 10 records:

```
index=myIndex
| spath path=resp.meta.bValues{} output=BValues
| stats count by BValues
| spath input=BValues path=aValues{} output=aValues
| mvexpand aValues
| spath input=aValues
| spath input=BValues
| fields - BValues count aValues*
| stats count
```
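The `mvexpand` plus repeated `spath` is what falls over at volume; counting the multivalue entries per event and summing avoids the expansion entirely. A sketch against the posted structure:

```
index=myIndex
| spath path=resp.meta.bValues{}.aValues{} output=aValues
| eval aCount=coalesce(mvcount(aValues), 0)
| stats sum(aCount) as total_aValues
```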
I have a Linux server falsely showing as down in Splunk Web. I have tried restarting the Linux server and restarting the Splunk forwarder on it, but the issue remains.
Hello, I've been trying a few different ways, with no luck, to represent some server counts that I see happening on Thursday, Friday, Saturday, Sunday, and sometimes Monday. Unfortunately, it seems I can't do this count "per week", because we need to count per "scan time", which starts on a Thursday and ends on the following Monday. I started looking into my possible options and think I have half an idea of how to accomplish it, but better ideas would be awesome as well. Is it possible to do a sum based on "grouped days" (Thurs+Fri+Sat+Sun+Mon, i.e. dayofweek 4,5,6,0,1)? The main thing I can't get past is how to differentiate the "grouped days". We like to evaluate based on the "current week" of the year, but our "grouped days" persist across two "current weeks" of the year (variable 'weekofyear'). Essentially, I need a count per scan cycle, with output like:

Department | Week of Year (technically, this is our "scan cycle") | Server Count (Server_Responses)
Dept.A | 10 (combined across Thurs, Fri, Sat, Sun, Mon...) | 100 (i.e., we saw 3 Thurs, 90 Fri, 3 Sat, 3 Sun, 1 Mon...)
Dept.B | 10 | 200
Dept.A | 11 | 105 (i.e., we saw 10 Thurs, 80 Fri, 10 Sat, 3 Sun, 2 Mon...)
Dept.B | 11 | 203

I haven't really gotten any further than evaluating date commands to weigh my options. Other than that, I just have a line chart of day of week over the counts; it's not very pretty.

```
index=blah sourcetype=blah search blah
```what I have been looking at so far...```
| rename server_id as "Server_Responses"
```at this point I was just looking at the possibilities to count by an aggregated "day of week in number" or by "dayofweek(short|full)", and really all possibilities```
| eval dayofweekshort=strftime(_time,"%a")
| timechart count(ping.status) as pingstats, dc("Server_Responses") by Department span=1w@1w
```start evaluating possible days, weeks, months, current weeks, etc.```
| eval dayofweekshort=strftime(_time,"%a")
| eval dayofweekfull=strftime(_time,"%A")
| eval dayofweekasnumber=strftime(_time,"%w")
| eval dayofmonth=strftime(_time,"%d")
| eval weekofmonth=floor(dayofmonth/7)+1
| eval weekofyear=strftime(_time,"%U")
| fields - day
```
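One way to build a Thursday-to-Monday scan cycle (a sketch; `server_id`, `Department`, and the index are taken from the post): snap each event's time backward to the most recent Thursday with `@w4`, which makes Thursday the cycle boundary, after filtering out Tuesday and Wednesday:

```
index=blah sourcetype=blah
| eval dow=tonumber(strftime(_time, "%w"))
| where dow!=2 AND dow!=3
| eval scan_cycle=strftime(relative_time(_time, "@w4"), "%Y-%m-%d")
| stats dc(server_id) AS Server_Count by Department, scan_cycle
```

Because `@w4` snaps backward, the Friday through Monday events land in the same bucket as the Thursday that started the cycle, so the week-of-year rollover stops mattering.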