All Topics

Hi, I have an input token in my dashboard for register number called $tok_reg_num$. Customers can enter a specific number or leave it at the default of "*".

Here's the issue: in one of the dashboard searches I can use the default of "*" (e.g. index=blah sourcetype=blahblah register_number=*), but in a secondary panel, due to the different log type, I have to filter the register number with a where/LIKE clause, so "*" won't work and I need to change it to a "%".

Non-working: | where customer="foo" AND like(Register,"*")   <-- the dashboard default for $tok_reg_num$
I want it to be this: | where customer="foo" AND like(Register,"%")   <-- change the $tok_reg_num$ to %

I have exhausted my meager Splunk token experience trying to get this to work. I can't figure out whether I can examine and change the token in the search, or whether I need to do that on the dashboard. Someone give me a nudge in the right direction, please.
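A minimal sketch of one way to handle this in the search itself, assuming the token is substituted as a plain string (index, sourcetype, and field names are taken from the question): wrap the token in replace() so any * becomes %:

    index=blah sourcetype=blahblah
    | where customer="foo" AND like(Register, replace("$tok_reg_num$", "\*", "%"))

Alternatively, a <change> handler on the dashboard input could set a second token via <eval> using the same replace(), so the panel references a pre-converted token.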
index=app_pc "Last Executed SQL" "Tablespace"
| rex field=_raw "<SERVICE_NAME>(?<SERVICE_NAME>.*)</SERVICE_NAME>"
| rex field=_raw "<HOST>(?<HOST>.*)</HOST>"
| rex field=_raw "<host>(?<host>.*)</host>"
| rex field=_raw "<CONNECT_DATA>(?<CONNECT_DATA>.*)</CONNECT_DATA>"
| rex field=_raw "<source>(?<source>.*)</source>"
| rex field=_raw "<index>(?<index>.*)</index>"
| rex field=_raw "<sql>(?<sql>.*)</sql>"
| rex field=_raw "<table>(?<table>.*)</table>"
| eval hour=strftime(_time,"%H")
| eval minute=strftime(_time,"%M")
| table _time, SERVICE_NAME, HOST, host, CONNECT_DATA, source, index, sql, table

I know this is not correct, but I am trying to extract, in Splunk, the index and table names for a table running out of space ("unable to extend index PCR.PC0000009BU5 by 8192 in tablespace PCR"), plus the table name from the SQL. The SQL looks like this:

SELECT COUNT(*) FROM (SELECT /* ISNULL:pcx_availablevolexcesses_ext.EffectiveDate:, ISNULL:pcx_availablevolexcesses_ext.ExpirationDate:; */ 1 as countCol FROM pcx_availablevolexcesses_ext pcx_availablevolexcesses_ext INNER JOIN pc_policyperiod policyperiod_0 ON policyperiod_0.ID=pcx_availablevolexcesses_ext.BranchID WHERE pcx_availablevolexcesses_ext.BranchID = ? AND ((((pcx_availablevolexcesses_ext.ExpirationDate IS NULL) OR (pcx_availablevolexcesses_ext.EffectiveDate IS NULL AND pcx_availablevolexcesses_ext.ExpirationDate <> ? AND pcx_availablevolexcesses_ext.ExpirationDate IS NOT NULL) OR (pcx_availablevolexcesses_ext.ExpirationDate <> pcx_availablevolexcesses_ext.EffectiveDate)))) AND policyperiod_0.Retired = 0 AND policyperiod_0.TemporaryBranch = '0') countTable
Hi everyone, I have produced a search which formats events into a table with a couple of columns. The data and column names use Cyrillic words, and in the GUI they look just fine. However, when I try to export the table as CSV (via the "Export To" option), the data and column names are encoded incorrectly and are not readable. Is there a setting I can change to fix this problem?

I've searched the other topics here in Communities, but didn't find an answer, e.g.:
https://community.splunk.com/t5/Splunk-Search/Why-are-special-characters-replaced-with-UTF-8-after-exporting/m-p/448248
https://community.splunk.com/t5/Getting-Data-In/Korean-character-is-broken-when-I-export-the-query-results-in/m-p/180307

Any help is appreciated, thanks!
Hello, I need to create a report that is identical to the interesting-fields pop-up window: Top 10 Values | Count | %. Is there any way to create a report directly from this pop-up, or to see the search that is performed when looking at it? Thank you for your help, Tom
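A minimal sketch of a search that reproduces that pop-up, assuming placeholder index, sourcetype, and field names; the top command returns exactly the count and percent columns shown there:

    index=your_index sourcetype=your_sourcetype
    | top limit=10 your_field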
Hi, can anybody help me please? I have _json indexed events in Splunk:

19.08.21 08:26:27,746 { name: S8.ManuelFail, value: false }
19.08.21 08:26:25,746 { name: S8.Pruefprogramm_PG, value: S3_450 }
19.08.21 08:26:27,746 { name: S8.ManuelFail, value: true }
19.08.21 08:27:25,746 { name: S8.Pruefprogramm_PG, value: S3_450 }
19.08.21 08:28:25,746 { name: S8.Pruefprogramm_PG, value: S3_600 }
19.08.21 08:29:25,746 { name: S8.Pruefprogramm_PG, value: S3_600 }

In the dashboard I choose a specific time interval with the time picker element, e.g. last 24 hours. I would like to have a percentage for each value of name: S8.Pruefprogramm_PG, showing where name: S8.ManuelFail has value: true. That means all name: S8.ManuelFail value: true events against all name: S8.ManuelFail value: true/false events. Is it even possible? E.g. in this case I would like to have a table output:

S8.Pruefprogramm_PG    S8.ManuelFail [%]
S3_450                 50
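A minimal sketch of one possible approach, assuming each S8.ManuelFail event should be attributed to the most recent preceding S8.Pruefprogramm_PG value (the index name is a placeholder; field names come from the events above):

    index=your_index name IN ("S8.Pruefprogramm_PG", "S8.ManuelFail")
    | sort 0 _time
    | eval program=if(name="S8.Pruefprogramm_PG", value, null())
    | filldown program
    | where name="S8.ManuelFail"
    | stats count(eval(value="true")) as fails, count as total by program
    | eval "S8.ManuelFail [%]"=round(100*fails/total, 1)
    | rename program as "S8.Pruefprogramm_PG"
    | table "S8.Pruefprogramm_PG", "S8.ManuelFail [%]"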
Hi Experts, I have a job log file that gets ingested into Splunk with the naming convention "trace_08_19_2021_06_36_03_*********.txt"; after "trace", the format of the file name is _MM_DD_YYYY_HH_Min_Sec.****. The requirement is to extract the date and time from the file name and show the number of logs generated per hour. I read a blog which mentioned that Splunk extracts the timestamp from the filename, but that's not happening in this case. Kindly help. Thanks in advance for any assistance. Regards, Karthikeyan.SV
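A minimal search-time sketch that parses the timestamp out of the source path and counts log files per hour (the index name is a placeholder; the trace_ prefix and date layout come from the question):

    index=your_index source="*trace_*"
    | rex field=source "trace_(?<file_ts>\d{2}_\d{2}_\d{4}_\d{2}_\d{2}_\d{2})_"
    | eval _time=strptime(file_ts, "%m_%d_%Y_%H_%M_%S")
    | timechart span=1h dc(source) as log_files

dc(source) counts distinct files per hour; use plain count instead if you want events per hour.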
Hi Experts, I have a requirement in which a table is ingested into Splunk. The table has a field named Time showing a timestamp as "YYYY-MM-DD HH:MM:SS:millisec". Data ingestion is happening without any issues. When I tried to show the count of events on a particular day:

1. With the stats command, the count matches the source, but on days when no event happens in the source system the count is not shown as 0 in Splunk; the day is just ignored.
2. With the timechart command, Splunk takes the ingestion timestamp of the event and not the timestamp in the event, so the count doesn't match. Here, though, when no data gets ingested, the count does show as 0.

Please help me with this issue: the count should be calculated from the Time field, and if there is no event for a day, that day should be displayed as 0. I'm trying to show the data in a bar chart. Any help is much appreciated. Thanks. Regards, Karthikeyan.SV
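A minimal sketch, assuming Time is a string in that format: override _time from Time at search time, then let timechart fill empty days with 0 (the index name is a placeholder; if your version's strptime doesn't accept %3N for milliseconds, strip them off first with replace()):

    index=your_index
    | eval _time=strptime(Time, "%Y-%m-%d %H:%M:%S:%3N")
    | timechart span=1d count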
Hello Everyone! I am struggling to find out how to keep the format/visualization options for a chart in a Splunk dashboard when opening the search with the "Open in Search" option. I have a dashboard where all panels have a 2-axis visualization (column & line) with many customizations (type of scale, min and max values, etc.); however, when opening the search using the "Open in Search" option, that customization is lost. Do you know if there's any way to preserve it? Many thanks in advance! Best
Hi, I am using JavaScript to change the column value color in Splunk:

var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
    canRender: function (cell) {
        return _('source').contains(cell.field);
    },

I want this with a wildcard, so the column can have any value before "source", like this:

var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
    canRender: function (cell) {
        return _('*source').contains(cell.field);
    },
Let's assume you have a multi-site indexer cluster with 2 sites, 3 indexers each, and the following RF/SF:

site_replication_factor = origin:2, total:4
site_search_factor = origin:2, total:4

So for each indexer getting data in, there is one site-local replication copy and two remote-site copies. When one indexer becomes unavailable, replication switches to the 2 remaining indexers on that site and everything still works as before.

But what happens in the case of 2 unavailable indexers on one site, or a complete site failure (the site without the master node)? As far as I understand, events should still be received by the remaining indexers (due to indexer discovery and load balancing of the forwarders). The replication factor is not met (because 2 copies on the failed site are no longer possible at this time), but local indexing and local-site replication will still happen without any interruption or manual steps (as long as the master is still available and is not restarted), and searching should be fully possible, as there is at least one searchable copy of each bucket on the remaining indexers. The master node knows that there are buckets with pending replication tasks (RF/SF not met because not enough target indexers are available), but everything keeps working, and when the indexers/site come back this is fixed automatically; after some time the replication factor and search factor are met again. This is called "disaster recovery" in the Splunk documentation and is what anyone would expect from "high availability", imho.

Is this explanation and my understanding correct in theory, and is it what one can expect? Or are there any doubts, or am I wrong in some details? Are there any real-world practical experiences showing that this fully works, or that there are problems/errors or exceptions in some cases?
My CSV source data file contains the timestamp below. How can I convert this timestamp into a TIME_FORMAT specification in props.conf?

18-AUG-21 11.40.00.027 PM
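A minimal props.conf sketch matching that pattern (the stanza name is a placeholder for your sourcetype):

    [your_csv_sourcetype]
    TIME_FORMAT = %d-%b-%y %I.%M.%S.%3N %p
    MAX_TIMESTAMP_LOOKAHEAD = 26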
Hi, I'm an intern, so new to Splunk etc. I am trying to do data analysis for garbage collection (GC) and so far have created the table above from raw GC logs. Pause_type: 0 stands for a full GC collection (Pause Full), 1 stands for a minor GC collection (Pause Young), and the rest of the columns are literally differences between old and new values. GCRuntime is the time GC ran for, in ms. I was wondering if anyone has done data analysis on GC; I am hoping to use the ML Toolkit, but I don't know where to go to identify correlations now. Any help would be appreciated!
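As a starting point before reaching for the ML Toolkit, a sketch that summarizes runtime by pause type (the index name is a placeholder; Pause_type and GCRuntime come from the question). From there, MLTK's fit command (e.g. fit LinearRegression) can model GCRuntime against the delta columns:

    index=gc_logs
    | stats count, avg(GCRuntime) as avg_runtime, max(GCRuntime) as max_runtime by Pause_type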
Hi Team, my customer uses IBM Maximo as their ticketing tool; it's integrated with Netcool Omnibus for incidents. Now that Splunk has come into the infrastructure, we are looking to send Splunk alerts to Maximo as INCIDENTs. Please guide us, or if someone has achieved this, please show the path for how we can start. Thanks, Pawak023
Hi, I have a log server with a universal forwarder and some Linux servers, and I set a cron job to make those Linux servers upload their /var/log/secure and /var/log/messages to the log server every 10 minutes; the universal forwarder monitors them. But every time the Linux servers upload their log files to the log server, the universal forwarder indexes not just the changed part but the entire files, which wastes a lot of license. Here is my inputs.conf:

[monitor://D:\Log\Linux\*\messages]
sourcetype = message
index = os
host_segment = 3

How can I fix it?
Hey, I'm trying to create a search that lists users that have, for example, more than 90 days between their last 2 logons. I have tried getting the last logon time with this:

index="index" sourcetype="wineventlog:security" EventCode=4624
| stats max(_time) by user

But that doesn't really work for me. Not sure how I proceed from here, however.
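A minimal sketch building on that search: rank each user's logons by recency, keep the two most recent, and compare them (index, sourcetype, and EventCode come from the question; the 90-day threshold is expressed in seconds):

    index="index" sourcetype="wineventlog:security" EventCode=4624
    | sort 0 user, -_time
    | streamstats count as recency by user
    | where recency<=2
    | stats max(_time) as last_logon, min(_time) as previous_logon, count as logons by user
    | where logons=2 AND last_logon-previous_logon > 90*86400
    | eval gap_days=round((last_logon-previous_logon)/86400, 1)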
The memory utilization health report is not showing the correct memory usage percentage when compared with the Unix box's memory utilization for the same server node. This is causing false alarms in the system. How can I correct the AppDynamics memory utilization health report?
We have counts of different fields and need to get all of that data on the x-axis; for that we are using appendcols more than three times. Our database contains huge data, and running the search command more than once is creating a problem. We would like to group the count data; can I please know how? Below is the query we are using:

index="main" sourcetype="SF1"
| stats count(CASS_RESULT) as CASS by CASS_RESULT
| appendcols [search index="main" sourcetype="SF4" | stats count(DIALOGUE_RESULT) as DIALOGUE by DIALOGUE_RESULT]
| appendcols [search index="main" sourcetype="SF2" | stats count(TPOS_RESULT) as TPOS by TPOS_RESULT]
| appendcols [search index="main" sourcetype="SF3" | stats count(PCO_RESULT) as PCO by PCO_RESULT]
| appendcols [search index="main" sourcetype="SF5" | stats count(VAS_RESULT) as VAS by VAS_RESULT]
| table CASS_RESULT CASS DIALOGUE TPOS PCO VAS
| transpose header_field=CASS_RESULT
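A sketch of one way to collapse this into a single pass over the data, assuming the *_RESULT fields hold comparable values (index, sourcetype, and field names are taken from the query above):

    index="main" sourcetype IN ("SF1", "SF2", "SF3", "SF4", "SF5")
    | eval result=coalesce(CASS_RESULT, DIALOGUE_RESULT, TPOS_RESULT, PCO_RESULT, VAS_RESULT)
    | chart count over result by sourcetype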
index=Myindex sourcetype=mine mysearch
| eval Result=if(Apple="1","Bad","Good")
| stats count by Result

The search above gives me the correct count of events where Apple="1", e.g.:

Result    Count
Bad       5
Good      12392

How do I express the stats as a single value in percentage, i.e. Bad/Good? How do I alert if the percentage > .02%?
Details of the issue: please tell me how to pass the time of a search result in a "link to search" on a dashboard. For events that include a time in the search results, such as in the attached sample dashboard, when searching via the "link to search" function I would like to pass the time of the event as a parameter. Currently, earliest/latest are specified in the SPL, as shown in the sample. However, in this case the time range picker when executing the search does not show the earliest/latest specified by the parameters; it is set to the dashboard's time range picker instead. In the actual search the earliest/latest specified in the SPL take priority, so the result is as expected, but since the range of the time range picker differs from the actual search results, users are confused.

Question 1: How can I pass the earliest/latest specified in SPL to the time range picker when executing the search?

Question 2: In this sample, $click.value$ is used because the time field is on the far left. Please tell me how to specify the time field when passing it from an arbitrary column. For reference, if you specify "$row._time$", unlike $click.value$, it is not UNIX time; the displayed time is set as the parameter.
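For question 1, a sketch of one possible approach in Simple XML, assuming a drilldown that links to the search page and passes earliest/latest as URL parameters (URL parameters set the time range picker; the token names, the query placeholder, and the one-hour window here are all hypothetical):

    <drilldown>
      <eval token="tok_earliest">$click.value$</eval>
      <eval token="tok_latest">$click.value$+3600</eval>
      <link target="_blank">search?q=$your_encoded_query$&amp;earliest=$tok_earliest$&amp;latest=$tok_latest$</link>
    </drilldown>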
A heavy forwarder is installed to acquire Azure AD logs. The add-on used is the Splunk Add-on for Microsoft Office 365. I have a question about that add-on's troubleshooting documentation: my understanding is that duplicate log capture may occur. Under what conditions do duplicate logs occur?