Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I use the code below to display an <h1> title at the bottom of 2 single panels. I am trying to do the same thing, but I need an <h1> title for each single panel, with the 2 single panels side by side:

<row>
  <panel>
    <html>
      <h1><center>HARDWARE and COMPLIANCE</center></h1>
    </html>
  </panel>
</row>
<row>
  <panel>
    <single></single>
  </panel>
  <panel>
    <single></single>
  </panel>
</row>

How can I do this, please?
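A minimal Simple XML sketch of one way to do this (untested; the per-panel titles "HARDWARE" and "COMPLIANCE" are assumptions, and the <single> search content is elided): put an <html> block with its own <h1> inside each panel, next to that panel's <single>, so both titled panels share one row:

```xml
<row>
  <panel>
    <html>
      <h1><center>HARDWARE</center></h1>
    </html>
    <single></single>
  </panel>
  <panel>
    <html>
      <h1><center>COMPLIANCE</center></h1>
    </html>
    <single></single>
  </panel>
</row>
```

Because the <html> element lives inside the panel, its title moves with the panel when the row is rearranged.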
Hi all, I am running a Splunk 7.3.0 distributed / clustered environment, and I have noticed that the DMC is reporting that disk usage on my indexers is high, i.e. around the 85% mark; however, Windows Server 2016 says it is around 65%, as per the attached screenshot. It's mainly the F: drive as far as I can tell. Disk usage was high previously, but I then implemented retention policies on the indexes, which cleared out a large amount of data. Is it possible the DMC is caching an old value and not updating? I have restarted the cluster master node, and I believe we have rebooted the indexer cluster since then. Any info would be great, thanks.
This issue happens when the incoming throughput for hot buckets is faster than splunk-optimize can merge tsidx files to keep the count below 100 (hardcoded). If the number of tsidx files per hot bucket reaches 100 or more, the indexer applies an indexing pause to allow splunk-optimize to catch up.
How can I identify whether a Splunk installation was done by the root user or a non-root user on an Ubuntu Linux machine? Can someone specify the command for this?
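A minimal sketch, assuming the default Linux install path /opt/splunk (set SPLUNK_HOME if yours differs): the user that owns $SPLUNK_HOME is normally the user that performed the installation.

```shell
# Print the owning user of the Splunk install directory.
splunk_install_user() {
  # GNU coreutils stat: %U is the owner's user name.
  stat -c '%U' "$1"
}

# Prints e.g. "root" for a root install, "splunk" for a non-root one.
splunk_install_user "${SPLUNK_HOME:-/opt/splunk}" 2>/dev/null \
  || echo "directory not found - set SPLUNK_HOME"
```

Note that `ps -o user= -C splunkd` shows the run-as user of the running process instead, which can differ from the user who installed Splunk.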
@chrisyounger  Hi Chris, it would be great to add an option to set the default stroke colour used for nodes when no color field is given. Currently it's hard-coded to #777. It's easy enough to change the code, but a config setting on the UI Format page would be nice.
Hi, I am trying to build SPL to measure how long it takes to restart Splunk. A bit of context: we sometimes do rolling restarts through the Cluster Master, so I am trying to determine how long a rolling restart takes. So far, from research, I can find the "Splunk starting" log among the splunkd events, but that only tells me when one instance of Splunk starts; I can't find logs from when Splunk is shutting down.
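A hedged sketch of one approach: since the shutdown itself may not leave a usable log, infer the outage per host from the gap in that host's own _internal events around the restart. The host filter and the 60-second gap threshold below are assumptions; tune both to your environment.

```
index=_internal sourcetype=splunkd host=my_indexer*
| sort 0 _time
| streamstats current=f window=1 last(_time) as prev_time by host
| eval gap_sec = _time - prev_time
| where gap_sec > 60
| eval down_from=strftime(prev_time, "%F %T"), up_at=strftime(_time, "%F %T")
| table host down_from up_at gap_sec
```

For a rolling restart, the spread between the earliest down_from and the latest up_at across hosts approximates the total restart window.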
I know you can search the internal logs for a list of all DB Connect jobs and when they ran historically, which gets you how long each query ran, the number of events, and the error count. Is there any way to get which index, source, and sourcetype each job was written to, without having to check the DB Connect inputs? For example, I'd like to know where it was written and whether anything has changed since it ran.  index=_internal sourcetype=dbx* status=* input_name
I'm trying to read an array field from a database query using dbxquery, and got the error "failed to load column with type FLOAT_ARRAY_1800". The search string I used: | dbxquery timeout=0 connection=[server] query="select [array_field] from [table] where ID='1234'"
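One workaround worth trying, since dbxquery cannot map the array type: cast the array to text on the SQL side so Splunk only sees a string column. The exact function is database-specific; for example, on PostgreSQL (an assumption about your database), with the same [server]/[table] placeholders as above:

```
| dbxquery timeout=0 connection=[server] query="select array_to_string(array_field, ',') as array_field from [table] where ID='1234'"
```

The comma-joined string can then be split back into a multivalue field in SPL with `| eval array_field=split(array_field, ",")`.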
Hello Splunkers, we have an index whose effective retention appears to vary between the applications reporting to it. For example, for application 1 we can see logs from August, while for application 2 we can see logs only from September; both applications report to the same index. I know we can set the retention policy per index. Is there any setting we are missing here?
Hi, I have over 150 alerts to which I have to add new lines of code, as in the example below. I am updating each alert manually and it is getting tedious. Is there a way to update all the alerts in bulk? I also want to add an additional alert action (send a Webex Teams notification) alongside the existing send-email action. Can anyone please suggest a way to do it? E.g.: old alert search: |makeresults | eval message="Hi How are you" — new alert search: |makeresults | eval message="Hi How are you" | eval message2="this is message2" | eval message3="this is message3"
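A hedged sketch for on-prem installs: each alert lives as a `search =` line in its app's local/savedsearches.conf, so a scripted edit can append the new clauses in bulk. Work on a copy, keep a backup, review the diff, then reload Splunk. The demo file below is a stand-in for your real conf file, and the extra alert action would similarly be added as `action.<name> = 1` lines whose exact names depend on the Webex Teams add-on.

```shell
# Demo stand-in for an app's local/savedsearches.conf.
cat > savedsearches.conf.demo <<'EOF'
[My Alert 1]
search = |makeresults | eval message="Hi How are you"
EOF

# Append the new eval clauses wherever the old search string appears
# (& in the replacement re-inserts the matched text).
sed -i 's#eval message="Hi How are you"#&| eval message2="this is message2" | eval message3="this is message3"#' savedsearches.conf.demo

cat savedsearches.conf.demo
```

The same loop-free sed call covers all 150 stanzas in one file; repeat per app directory if the alerts are spread across apps.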
Hi! I'm new to Splunk and I am currently trying to chart a series of events over a time period. I have managed to use the rex command to extract the values of interest to me. Now I would like each individual "power" value to appear as its own line with its own legend entry, similar to how a timechart would look. How can I do that? Also, I have another 5 events in this index that are not in the same format as the other logs; is it possible to also use rex to extract them into another field, and finally combine both extracted fields into one big happy timechart/chart? (I expected something like the chart in the screenshot above.) I appreciate your time, and thanks in advance for the help!
Hi everyone, I have a requirement. I have a usage dashboard where I show each dashboard name with its count. Below is the search query for it:

index="_internal" EventLogFiles | eval DashboardName=if(like(uri, "%EventLogFiles%"), "EventLogFiles", "Unknown Dashboards") | stats count by DashboardName | append [search index="_internal" InformaticaExtract | eval DashboardName=if(like(uri, "%InformaticaExtract%"), "InformaticaExtract", "Unknown Dashboards") | stats count by DashboardName] | sort DashboardName

I am getting a result like this:

DashboardName        Count
EventLogFiles        500
InformaticaExtract   345

Now my requirement is this: I need to create a lookup file (Dashboard.csv) consisting of dashboard names such as EventLogFiles and InformaticaExtract. I want the lookup combined with my search query to show the counts: the DashboardName should come from the lookup file, and the count should come from my search query. Can someone guide me on that? Thanks in advance.
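A hedged sketch of one way to drive the names from the lookup: start from Dashboard.csv, join in counts from the _internal search, and default any dashboard with no matching events to zero. This assumes the CSV has a DashboardName column; the rest reuses the search from the post.

```
| inputlookup Dashboard.csv
| join type=left DashboardName
    [ search index="_internal" (EventLogFiles OR InformaticaExtract)
      | eval DashboardName=case(like(uri,"%EventLogFiles%"),"EventLogFiles",
                                like(uri,"%InformaticaExtract%"),"InformaticaExtract")
      | stats count by DashboardName ]
| fillnull value=0 count
| sort DashboardName
```

Adding a new dashboard then only requires a new row in Dashboard.csv and a matching case() branch.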
On the servers that contain the logs I have installed the universal forwarder, and I have configured inputs with the path where the logs are and outputs with the IP and port where these logs should be sent. I have installed the credentials package on the heavy forwarder, I have opened port 9997, and I see communication between the servers and the heavy forwarder. I have doubts about the process of forwarding logs from the heavy forwarder to Splunk Cloud: if the heavy forwarder points directly to Splunk Cloud, should I put https://xxxxxxxx.splunkcloud.com in the "host" field? And which port, 443 or 9997? https://docs.splunk.com/Documentation/SplunkCloud/8.0.2006/Forwarding/Deployaheavyforwarder splunk add forward-server <host>:<port> -auth <username>:<password>  I have already created the index with the same name that I defined in the inputs file on the log source server. I don't see logs coming in; what else do I need to review?
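For reference, forwarding to Splunk Cloud uses the Splunk-to-Splunk port 9997 with a bare hostname, not an https:// URL (the https URL on 443 is for the web and REST interfaces). A hedged sketch of what the credentials app effectively sets in outputs.conf — the inputs.* hostname pattern is an assumption, so check the server list inside the credentials app you downloaded:

```
[tcpout:splunkcloud]
server = inputs.xxxxxxxx.splunkcloud.com:9997
```

If the credentials app is installed and enabled on the heavy forwarder, this stanza should already exist; `splunk btool outputs list --debug` shows which file is supplying it.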
I have a timechart panel in a dashboard showing all the e-mails sent this month, whether from an Alert or a Report. I want to be able to click on any of those stacked values and have another tab bring me those results. I know the loadjob command with the sid should help me with this, but I don't know how to use tokens for it. Here is the panel; can anybody help? index=_internal source="D:\\Example\\Splunk\\var\\log\\splunk\\python.log" sourcetype=splunk_python TERM(email) | eval ReportNamingConvention=if(match(subject, "Name\sSIEM\sReport:\sFirewall"), 1, 0) | where ReportNamingConvention==1 | eval subject=substr(subject, 30) | timechart useother=false count as Count by subject | rename subject as "Email Subject"
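A hedged Simple XML sketch of the token plumbing: capture the panel's job sid in a <done> handler when the search finishes, then have the drilldown open a new tab that loads that job and filters to the clicked series. In a stacked chart, $click.name2$ holds the clicked series name; the panel query is elided below, and the relative search URL is an assumption about your app.

```xml
<chart>
  <search>
    <query>index=_internal ... | timechart useother=false count as Count by subject</query>
    <done>
      <set token="email_sid">$job.sid$</set>
    </done>
  </search>
  <drilldown>
    <link target="_blank">search?q=%7C%20loadjob%20$email_sid$%20%7C%20search%20"$click.name2$"</link>
  </drilldown>
</chart>
```

The %7C/%20 sequences are the URL-encoded pipe and space, so the new tab runs `| loadjob <sid> | search "<clicked subject>"` against the already-finished job.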
I'm working on a project for work where I want to see employee entry data for specific groups. We have a lookup file with everyone's cost center that I use to see everyone's entries into an office, as well as what team they're in. Now I want more granular data by showing only one cost center rather than all of them. Here's my current search, which I can't get to work:   index="myindex" EVDESCR="Access Granted" READERDESC="yes*" |lookup user_lookup.csv user_employee_number as EMPLOYEE_ID |search user_esc_cost_center="specific group" |timechart span=1d dc(EMPLOYEE_ID) by FIRSTNAME   I keep getting 0 results, but I'm not sure how else to approach this. I'm fairly new to Splunk and am basically self-teaching with a little help from our other teams.
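A hedged variant worth trying: name the output fields explicitly on the lookup, so it is clear which fields should come back, and check that EMPLOYEE_ID's format (e.g. leading zeros, case) exactly matches user_employee_number in the CSV — a silent mismatch there yields 0 results after the `search` filter. Field names below are taken from the original search:

```
index="myindex" EVDESCR="Access Granted" READERDESC="yes*"
| lookup user_lookup.csv user_employee_number AS EMPLOYEE_ID OUTPUT user_esc_cost_center FIRSTNAME
| search user_esc_cost_center="specific group"
| timechart span=1d dc(EMPLOYEE_ID) by FIRSTNAME
```

To debug, temporarily replace the last two lines with `| stats count by user_esc_cost_center` to see whether the lookup is matching at all and what the cost-center values actually look like.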
I want to generate results for the month of September. I am currently using the following query; however, if I set the date range it doesn't return results for September — we still see the data for October.    index=XYZ source=XYZ (SMF30JBN=F*DC03D OR SMF30JBN=M*DC03D) (SMF30STP=1 OR (SMF30STP=4 AND SMF30STM=DOWNS020)) SMF30JNM=JOB* earliest=@d-48h latest=@d+6h | eval ACTUAL_START = case(SMF30STP=1,DATETIME) | eval ACTUAL_END = case(SMF30STP=4,DATETIME) | stats values(ACTUAL_START) as ACTUAL_START values(ACTUAL_END) as ACTUAL_END by SMF30JNM SMF30JBN | rename SMF30JBN as JOBNAME | eval CYCLE = relative_time(now(),"@d-720m") | eval WEEKDAY=strftime(CYCLE,"%A") | eval CYCLE = strftime(CYCLE, "%Y-%m-%d %H:%M:%S.%2N") | eval MONTH = substr(CYCLE,6,2) | eval DAY = substr(CYCLE,9,2) | eval YEAR = substr(CYCLE,1,4) | eval DATE = substr(CYCLE,1,10) | lookup workloadinfo.csv JOBNAME output WEEK_START WEEK_END WEEK_RT SAT_START SAT_END SAT_RT SYS STATES | eval WEEK_START = case(WEEK_START="SLA0600",relative_time(now(),"@d-1080m"),WEEK_START="SLA0700",relative_time(now(),"@d-1020m"),WEEK_START="SLA1300",relative_time(now(),"@d-660m"),WEEK_START="SLA1400",relative_time(now(),"@d-600m"),WEEK_START="SLA1430",relative_time(now(),"@d-570m"),WEEK_START="SLA1600",relative_time(now(),"@d-480m"),WEEK_START="SLA1700",relative_time(now(),"@d-420m"),WEEK_START="SLA1730",relative_time(now(),"@d-390m"),WEEK_START="SLA1800",relative_time(now(),"@d-360m"),WEEK_START="SLA1830",relative_time(now(),"@d-330m"),WEEK_START="SLA1900",relative_time(now(),"@d-300m"),WEEK_START="SLA1930",relative_time(now(),"@d-270m"),WEEK_START="SLA2000",relative_time(now(),"@d-240m"),WEEK_START="SLA2100",relative_time(now(),"@d-180m"),WEEK_START="SLA2200",relative_time(now(),"@d-120m")) | eval WEEK_END = 
case(WEEK_END="SLA0600",relative_time(now(),"@d-1080m"),WEEK_END="SLA0700",relative_time(now(),"@d-1020m"),WEEK_END="SLA1300",relative_time(now(),"@d-660m"),WEEK_END="SLA1400",relative_time(now(),"@d-600m"),WEEK_END="SLA1430",relative_time(now(),"@d-570m"),WEEK_END="SLA1600",relative_time(now(),"@d-480m"),WEEK_END="SLA1700",relative_time(now(),"@d-420m"),WEEK_END="SLA1730",relative_time(now(),"@d-390m"),WEEK_END="SLA1800",relative_time(now(),"@d-360m"),WEEK_END="SLA1830",relative_time(now(),"@d-330m"),WEEK_END="SLA1900",relative_time(now(),"@d-300m"),WEEK_END="SLA1930",relative_time(now(),"@d-270m"),WEEK_END="SLA2000",relative_time(now(),"@d-240m"),WEEK_END="SLA2100",relative_time(now(),"@d-180m"),WEEK_END="SLA2200",relative_time(now(),"@d-120m")) | eval SAT_START = case(SAT_START="SLA0600",relative_time(now(),"@d-1080m"),SAT_START="SLA0700",relative_time(now(),"@d-1020m"),SAT_START="SLA1300",relative_time(now(),"@d-660m"),SAT_START="SLA1400",relative_time(now(),"@d-600m"),SAT_START="SLA1430",relative_time(now(),"@d-570m"),SAT_START="SLA1600",relative_time(now(),"@d-480m"),SAT_START="SLA1700",relative_time(now(),"@d-420m"),SAT_START="SLA1730",relative_time(now(),"@d-390m"),SAT_START="SLA1800",relative_time(now(),"@d-360m"),SAT_START="SLA1830",relative_time(now(),"@d-330m"),SAT_START="SLA1900",relative_time(now(),"@d-300m"),SAT_START="SLA1930",relative_time(now(),"@d-270m"),SAT_START="SLA2000",relative_time(now(),"@d-240m"),SAT_START="SLA2100",relative_time(now(),"@d-180m"),SAT_START="SLA2200",relative_time(now(),"@d-120m")) | eval SAT_END = 
case(SAT_END="SLA0600",relative_time(now(),"@d-1080m"),SAT_END="SLA0700",relative_time(now(),"@d-1020m"),SAT_END="SLA1300",relative_time(now(),"@d-660m"),SAT_END="SLA1400",relative_time(now(),"@d-600m"),SAT_END="SLA1430",relative_time(now(),"@d-570m"),SAT_END="SLA1600",relative_time(now(),"@d-480m"),SAT_END="SLA1700",relative_time(now(),"@d-420m"),SAT_END="SLA1730",relative_time(now(),"@d-390m"),SAT_END="SLA1800",relative_time(now(),"@d-360m"),SAT_END="SLA1830",relative_time(now(),"@d-330m"),SAT_END="SLA1900",relative_time(now(),"@d-300m"),SAT_END="SLA1930",relative_time(now(),"@d-270m"),SAT_END="SLA2000",relative_time(now(),"@d-240m"),SAT_END="SLA2100",relative_time(now(),"@d-180m"),SAT_END="SLA2200",relative_time(now(),"@d-120m")) | eval EXP_START = if(WEEKDAY="Saturday" OR WEEKDAY="Sunday",SAT_START,WEEK_START) | eval EXP_END = if(WEEKDAY="Saturday" OR WEEKDAY="Sunday",SAT_END,WEEK_END) | eval RUNTIME = if(WEEKDAY="Saturday" OR WEEKDAY="Sunday",SAT_RT,WEEK_RT) | eval ACTUAL_START = strptime(ACTUAL_START, "%Y-%m-%d %H:%M:%S.%2N") | eval ACTUAL_END = strptime(ACTUAL_END, "%Y-%m-%d %H:%M:%S.%2N") | eval STARTC = case(ACTUAL_START < EXP_START AND ACTUAL_END > EXP_START, EXP_START,ACTUAL_START > EXP_START,ACTUAL_START,(ACTUAL_START < EXP_START AND ACTUAL_END < EXP_START),null(),(ACTUAL_START > EXP_END AND ACTUAL_END > EXP_END),null()) | eval ENDC = case(ACTUAL_END > EXP_END AND ACTUAL_START < EXP_END, EXP_END, ACTUAL_END < EXP_END,ACTUAL_END,(ACTUAL_START < EXP_START AND ACTUAL_END < EXP_START),null(),(ACTUAL_START > EXP_END AND ACTUAL_END > EXP_END),null()) | eval DURATION =(ENDC-STARTC)/60 | eval ACTUAL_START = strftime(ACTUAL_START, "%Y-%m-%d %H:%M:%S.%2N") | eval ACTUAL_END = strftime(ACTUAL_END, "%Y-%m-%d %H:%M:%S.%2N") | eval EXP_START = strftime(EXP_START, "%Y-%m-%d %H:%M:%S.%2N") | eval EXP_END = strftime(EXP_END, "%Y-%m-%d %H:%M:%S.%2N") | eval STARTC = strftime(STARTC, "%Y-%m-%d %H:%M:%S.%2N") | eval ENDC = strftime(ENDC, "%Y-%m-%d %H:%M:%S.%2N") | eval 
DURATION =if(DURATION < 0,0,DURATION) | eval DURATION = round(DURATION,2) | stats values(ACTUAL_START) as ACTUALSTART values(ACTUAL_END) as ACTUALEND values(EXP_START) as EXPSTART values(EXP_END) as EXPEND latest(STARTC) as CALCSTART latest(ENDC) as CALCEND sum(DURATION) as AVAILABILITY values(RUNTIME) as EXPRUNTIME values(WEEKDAY) as WEEKDAY values(MONTH) as MONTH values(DATE) as DATE values(SYS) as TYPE values(STATES) as CONTRACTOR values(YEAR) as YEAR values(DAY) as DAY by JOBNAME | eval DOWNTIME = round(abs(AVAILABILITY - EXPRUNTIME),2) | eval SLA_PERC = round(((AVAILABILITY / EXPRUNTIME) * 100),2) | eval AVAILABILITY = if(SLA_PERC > 100, ((AVAILABILITY)-(EXPRUNTIME)), AVAILABILITY) | eval SLA_PERC = if(SLA_PERC > 100, ((SLA_PERC)-100), SLA_PERC) | eval WORKLOAD = substr(JOBNAME, 1, 3) | fields *
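One likely culprit in the query above: the inline earliest=@d-48h latest=@d+6h modifiers (and the relative_time(now(), ...) calls) always anchor to the current day, so they override whatever range the time picker sets. A hedged sketch of the base search pinned to the previous calendar month instead — run during October, this selects September:

```
index=XYZ source=XYZ (SMF30JBN=F*DC03D OR SMF30JBN=M*DC03D) (SMF30STP=1 OR (SMF30STP=4 AND SMF30STM=DOWNS020)) SMF30JNM=JOB* earliest=-1mon@mon latest=@mon
```

Alternatively, drop the inline modifiers entirely so the time picker applies; the now()-based SLA windows later in the query would still need reworking (e.g. against _time) to be historical.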
Hello! I have a token whose content is this:  $support_group_token$=support_group="Service Desk"   Is there any way to remove the quotes from the token? I tried to remove the double quotes using single quotes, but the replace didn't work: |eval my_variable = IF(replace('support_group="Service Desk","\"","")="support_group=Service Desk",1,0) |table my_variable Has anyone experienced the same problem? Basically I want to get the result below, but the problem is that my token doesn't have the \" around the Service Desk name.
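A hedged sketch of the usual fix: wrap the token in double quotes so eval treats it as a string, then strip the embedded quotes with replace(). (In eval, single quotes denote a field name, not a string, which is one reason the earlier attempt failed.)

```
| makeresults
| eval my_variable = replace("$support_group_token$", "\"", "")
| table my_variable
```

With the token value from the post, my_variable becomes support_group=Service Desk. In Simple XML, the same cleanup can be done once at token-set time with an <eval token="..."> element instead of in every search.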
Example Splunk dashboard, Panel 1:

EMPID  NAME     TASK  COMMENTS
1      SANDEEP  A     Task A is completed
2      xyz      B     B is pending due to so and so reason.
3      yyy      C     _______________________

I want to build the dashboard so that the user can give COMMENTS for a selected row in Panel 1. The next time the user runs the dashboard, they should see the comments already given for each row. In the example, rows 1 and 2 already have comments; 3 is a new row, and the user should be able to add a comment for it by clicking on the row. Is there any way to do this?
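A hedged sketch of the common pattern for this: persist comments in a lookup keyed by EMPID, enrich Panel 1 with `| lookup comments_lookup.csv EMPID OUTPUT COMMENTS`, and have a drilldown set tokens that feed a "save comment" search like the one below. The lookup name and token names are assumptions; a KV store collection works the same way and handles concurrent edits better than a CSV.

```
| makeresults
| eval EMPID="$selected_empid$", COMMENTS="$new_comment$"
| fields EMPID COMMENTS
| inputlookup append=true comments_lookup.csv
| dedup EMPID
| outputlookup comments_lookup.csv
```

Because the new row is first in the pipeline, `dedup EMPID` keeps the latest comment per employee before the lookup is rewritten.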
Hi, I am new to Splunk. I wanted to know how to add a new service to an already created ITSI Splunk dashboard. I need to add a new monitoring service to the dashboard. Kindly help me with the steps.
Can I get a regular expression to extract TSK KUBHEKA v2.0.70 from the extract below? 2020-10-13 17:24:15 [bp-[xxxxxxxxx]-completeMachineRun-2053693] HitService [INFO] Created typed run Run: id=2053695, xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx name=AO TSK KUBHEKA v2.0.70 (verificationservice_VerificationFinalization) {size:0, status:READY_TO_PROCESS, rootRun:2c863fbe-7896-4e98-8f7f-7c79f930ab86, data:}
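A hedged rex sketch, assuming the target always follows "name=AO " and always ends in a vN.N... version number (the run_name capture-group name is arbitrary):

```
| rex field=_raw "name=AO\s(?<run_name>.+?\sv\d+(?:\.\d+)+)"
```

Against the sample event, the lazy .+? stops at the first " v<digits>" sequence, so run_name becomes "TSK KUBHEKA v2.0.70". If the "AO " prefix varies, anchor on "name=" and the trailing " (" instead, e.g. `name=\S+\s(?<run_name>.+?)\s\(`.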