Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi Team, Can you help me with a Splunk query that gives me a visualization of scheduled searches spiking at the top of the hour? Thanks, Sharada
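A minimal sketch of one common approach, assuming you have access to the _internal index (the scheduler sourcetype records scheduled-search dispatches); bucketing by minute of the hour makes a top-of-the-hour pile-up obvious in a column chart:

index=_internal sourcetype=scheduler
| eval minute_of_hour=strftime(_time, "%M")
| stats count AS scheduled_searches BY minute_of_hour
| sort 0 minute_of_hour

Rendered as a column chart, a tall bar at minute 00 is the spike you describe.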
We are using the latest version of Splunk Cloud. I have configured an HTTP Event Collector (HEC) token under "Settings" in the UI. It is also worth noting that we are using SSO for user authentication. I have attempted (numerous times) and failed to get a connection using curl (leveraging the information contained in this article: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/HTTPEventCollectortokenmanagement). All I get is "Failed to connect...". I am wondering if there is something unique I need to do for Splunk Cloud, something unique with SSO, or if I can even send directly to Splunk Cloud (does the connection have to originate from a forwarder, for example). Any advice is appreciated.
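One thing worth checking, hedged since the exact curl command isn't shown: Splunk Cloud's HEC endpoint is not the stack's web hostname but a dedicated http-inputs host, per the docs page linked above. A sketch, with <your-stack> and the token as placeholders:

curl "https://http-inputs-<your-stack>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello world", "sourcetype": "manual"}'

SSO shouldn't come into it: HEC authenticates with the token, not a user login, and a forwarder is not required.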
I have logs of this form:

[2021-08-19T13:59:05.607] [INFO] collect - [4a2b9170-0130-11ec-95b3-17c017e0ec5d] {"uid":967,"ec":"login","em":"Successful authentication with username: [test123] other data here..."
[2021-08-19T13:59:05.607] [INFO] collect - [4a2b9170-0130-11ec-95b3-17c017e0ec5d] {"uid":967,"ec":"login","em":"Successful authentication with username: [test123] other data here..."

I would like to run a query that shows all the cases where "username: [specific user]" shows up twice within 1 second. So the two lines above would be a hit, because test123 appeared in two similar events less than 1 ms apart. I have gotten this far:

source="my.log"
| rex field=_raw "Successful authentication with username: \[(?<username>.*)] "
| streamstats count time_window=1s by username
| where count > 1

But this doesn't take the value of username into account and returns all cases of "Successful authentication..." that happen to be within the same second. (Again, I want that *only* if the username field is the same.)

Thanks! - Henrik
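A hedged guess at the culprit: the greedy .* in the rex can run past the intended closing bracket whenever another ] appears later in the event, so different users end up with mangled, overlapping username values and the BY grouping breaks down. A tighter character class is worth trying:

source="my.log"
| rex field=_raw "Successful authentication with username: \[(?<username>[^\]]+)\]"
| streamstats time_window=1s count BY username
| where count > 1

(streamstats with time_window also assumes the events arrive in time order, which a plain search gives you in descending order.)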
Hello, Please let me know how I would break the events and write TIME_PREFIX and TIME_FORMAT in my props.conf file for this sample source data:

TIME_PREFIX=
TIME_FORMAT=
LINE_BREAKER=
BREAK_ONLY_BEFORE=

The sample data has 5 events. I marked the text in RED to indicate the beginning of each event and the time in GREEN. Thank you so much, greatly appreciated!

---------------------------Sample Data Starts-------------------
TCC     A TCU00002I 22.59.00 MFE REPORT LAST 5.0 MINUTES                                                     2021-06-14 00:00:09.420
TCC     A Server            TSID  I PKTS  O PKTS |Server            TSID  I PKTS  O PKTS                     2021-06-14 00:00:09.421
TCC     A VP2SMTBAPPICE10   VQME     607     623 |VP2SMTBAPPICE11   VQMF   629   661 _                       2021-06-14 00:00:09.422
TCC     A VP2SMTBAPPICE12   VQMG     603     605 |LAPKSC            UZ77     6     6                         2021-06-14 00:00:09.423
TCC     A VP2SMTBAPPICCE2   VPQJ     586     595 |VP2SMTBAPPICCE4   VPQK   600   618                         2021-06-14 00:00:09.424
TCC     A VP2SMTBAPPICCE5   VPQM       7       7 |VP2SMTBAPPICCE6   VPQN    11    11                         2021-06-14 00:00:09.425
TCC     A VP2SMTBAPPICCE7   VPQO      15      15 |VP2SMTBAPPCLS02   VXBK    13    13 _                       2021-06-14 00:00:09.426
TCC     A VP2SMTBAPPCLS03   VXBL      20      20 |VP2SMTBAPPCLS04   VXBM    11    11                         2021-06-14 00:00:09.427
TCC     A VP2SMEMAPPICCE1   VXBA     520     528 |VP2SMEMAPPICCE2   VXBB   548   560                         2021-06-14 00:00:09.428
TCC     A VP2SMEMAPPICCE3   VXBC     523     530 |VP2SMEMAPPICCE5   VXBE    28    28                         2021-06-14 00:00:09.429
TCC     A VP2SMEMAPPICCE6   VXBF      40      40 |VP2SMEMAPPICCE8   VXBH    25    28 _                       2021-06-14 00:00:09.430
TCC     A VD2SMEMAPPCLS02   VXBO      35      35 |VD2SMEMAPPCLS03   VXBP    49    49                         2021-06-14 00:00:09.431
TCC     A VD2SMEMAPPCLS04   VXBQ      40      40 |VP2SMEMAPPICE10   VQMB   526   537                         2021-06-14 00:00:09.432
TCC     A VP2SMEMAPPICE11   VQMC     602     609 |VP2SMEMAPPICE12   VQMD   486   486                         2021-06-14 00:00:09.433
TCC     A VP2SMTBAPPICE13   VQMH     565     572 |VP2SMEMAPPICCE4   VXBD   591   597 _                       2021-06-14 00:00:09.434
TCC     A VP2SMTBAPPCLS01   VXBJ      12      12 |VP2SMTBAPPICCE1   VPQI   565   580                         2021-06-14 00:00:09.435
TCC     A VP2SMTBAPPICCE4   VPQL     551     561 |VP2SMEMAPPICCE7   VXBG    40    40                         2021-06-14 00:00:09.436
TCC     A VD2SMEMAPPCLS01   VXBN      42      42 |VP2SMEMAPPICCE9   VQMA   528   535                         2021-06-14 00:00:09.437
TCC     A VP2SMTBAPPICCE8   VPQP       2       2 |                                                           2021-06-14 00:00:09.438
TCC     A                                                                                                    2021-06-14 00:00:09.439
TCC     A PID POOL PIDS IN USE: 1312 OUT OF 3001                                                             2021-06-14 00:00:09.440
TCC     A END OF MFE REPORT+
TCC     A CVZB0001I 22.59.00 LAST FALLBACK COPY OF CP KEYPOINTS ON SYMBOLIC                                  2021-06-14 00:00:09.442
TCC     A MODULE: 010A DEVICE: 710A+                                                                         2021-06-14 00:00:09.443
TCC     A TCPF0001I 22.59.00 TCP KEYPOINTED+                                                                 2021-06-14 00:00:09.444
TCC     A OCC10000I 22.59.02 RMT HOST-A CCMOD DSBL ERSS AT+                                                  2021-06-14 00:00:11.445
TCC     A OCC10013I 22.59.02 *MEH1PRD* COMMAND CODE(S) DISABLED BY RMT HOST+                                 2021-06-14 00:00:11.446
TCC     A COMMAND CODE DISPLAY                                                                               2021-06-14 00:00:11.447
------------------------Sample Data Ends---------------------------
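A hedged starting point, not a definitive answer: the five events each open with a message ID of the form letters-digits-I followed by hh.mm.ss (TCU00002I, CVZB0001I, TCPF0001I, OCC10000I, OCC10013I), and the full timestamp sits at the end of each physical line, so something like this in props.conf may get close (the stanza name is hypothetical; adjust to your sourcetype):

[mainframe:mfe]
SHOULD_LINEMERGE = true
# New event whenever a line carries a message ID and hh.mm.ss after "TCC A"
BREAK_ONLY_BEFORE = TCC\s+A\s+[A-Z]+\d+I\s+\d{2}\.\d{2}\.\d{2}
# Zero-width match positioned right before the trailing timestamp
TIME_PREFIX = (?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30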
Need help getting the DHCP logs in Splunk tagged and parsed correctly. The data is in the index xyz.

1. The IPv6 DHCP data is being tagged correctly, with sourcetype=dhcp. The IPv4 DHCP data is being tagged with sourcetype=xyz:bind:query. Can we get that corrected to dhcp? I believe all of the DHCP servers also provide DNS, and all of those log entries appear to have the correct sourcetype xyz:bind:query.

2. The DHCP request type is not being parsed in index=xyz sourcetype=dhcp. I'd like this to be stored in a field; it could be named type, action, or whatever you think is appropriate. Sample values are: DHCP_GrantLease, DHCP_RenewLease, DHCP_RebindLease.
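For the second item, a minimal search-time extraction sketch that could go in props.conf on the search head — the field name and pattern are assumptions based only on the three sample values listed:

[dhcp]
# Hypothetical extraction: captures DHCP_GrantLease, DHCP_RenewLease, DHCP_RebindLease, etc.
EXTRACT-dhcp_action = (?<action>DHCP_\w+)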
Hi All, We have an install of Splunk on Red Hat 8 with SELinux set to enforcing. All of the services start, but the Splunk web page does not work while SELinux is enforcing. If I simply turn off SELinux and reboot, everything works great. My question is: which SELinux modules need to be turned off specifically, or do I have to do an SELinux chcon (change context) on certain files, and set them to what? If anyone has had to do this and can help, I would appreciate it. Thanks
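Rather than turning modules off, one common workflow is to let the audit log tell you exactly what is being denied and generate a local policy module from it — a sketch, assuming the audit and policycoreutils tooling is installed (the module name splunkweb is arbitrary):

# Reproduce the failure, then build a policy module from the recent AVC denials
ausearch -m avc -ts recent | audit2allow -M splunkweb
semodule -i splunkweb.pp

# If Splunk Web listens on a port SELinux doesn't expect, label it (8000 is Splunk's default)
semanage port -a -t http_port_t -p tcp 8000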
Hi All, Can someone please help? Our subsearch has more than 50,000 results and we need to append those to our main search. Since a Splunk subsearch maxes out at 50,000, what's the best way to handle this: increase the limit in limits.conf, or is there a better way to optimize the query itself so it can work with more than 50,000 results? Thanks, Dave
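Raising limits.conf works but tends to just move the wall. Where the appended search shares fields with the main one, a common restructuring is to drop append entirely and run both legs as one search, which avoids the subsearch cap altogether — a sketch with hypothetical index/sourcetype/field names:

(index=main_idx sourcetype=main_st) OR (index=other_idx sourcetype=other_st)
| stats values(field_a) AS field_a, count BY join_key

multisearch is another option when both legs are purely streaming searches.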
Hi, I currently have the below search to find the 99th percentile of response time:

index=test sourcetype=test
| eval response_time=round(response_time/1000,2)
| timechart span=1mon perc99(response_time) AS "99%"

I need to find the average response time within the 99th percentile, and the single worst response within the 99th percentile. Any help would be greatly appreciated. Thanks
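A sketch of one way to do it, assuming "within the 99th percentile" means the events at or below each month's perc99 cutoff: bin by month first, compute the per-month cutoff with eventstats, filter, then aggregate:

index=test sourcetype=test
| eval response_time=round(response_time/1000,2)
| bin _time span=1mon
| eventstats perc99(response_time) AS p99 BY _time
| where response_time <= p99
| stats avg(response_time) AS "Avg within 99%", max(response_time) AS "Worst within 99%" BY _time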
When ingesting CSV files we get this warning and error in _internal:

ERROR TailReader [5588 tailreader0] - error from read call from <file name>
WARN FileClassifierManager [5588 tailreader0] - The file <file name> is invalid. Reason: cannot_open.

It happens when a new file placed in the directory has the same first 70 or so characters as an existing file. Shortening the file name made the ingestion work just fine. Is there a limit for the comparison?
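Possibly related, though hedged since your symptom points at the file name: the tailing processor identifies files by a CRC of the first 256 bytes of content (not the name), so CSVs that share a long identical header can collide and be treated as already seen. Two inputs.conf knobs address that (the monitor path below is hypothetical):

[monitor:///path/to/csvdir]
# Hash more of the file head than the default 256 bytes
initCrcLength = 1024
# Or mix the full path into the CRC so identical-looking files stay distinct
crcSalt = <SOURCE>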
I've read all the suggestions on importing bash history logs and tried variations of fschange, followTail and ignoreOlderThan. For user logs this works just fine:

[monitor:///home/*/.bash_history]
disabled = false
sourcetype = bash_history
index = linux
followTail = 1
ignoreOlderThan = 1d

For root logs, nothing works unless I monitor the whole file, and that has no value to me since Splunk forwards the full log file each time a change occurs. So if the history size is 1000, then 1000 events are sent to Splunk if I run a single "who" command. Any suggestions?
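For what it's worth, a hedged sketch of the equivalent root stanza — the usual catch is that a forwarder running as a non-root user cannot read /root/.bash_history at all, which would explain "nothing works":

[monitor:///root/.bash_history]
disabled = false
sourcetype = bash_history
index = linux
followTail = 1
ignoreOlderThan = 1d

The full-file re-send is a separate issue: bash typically rewrites the whole history file on exit (unless histappend is set), which defeats tail-based tracking regardless of these settings.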
Hi, I have two Linux virtual machines and I am trying to use the Splunk forwarder to send from one Linux box to the other. I am getting the "waiting for results" problem. How can I fix this? Thanks a lot
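Hard to diagnose from this alone, but a first-pass checklist, assuming a Universal Forwarder in the default install paths:

# On the forwarder: is it configured to send anywhere?
/opt/splunkforwarder/bin/splunk list forward-server

# If not, point it at the indexer (9997 is the conventional receiving port)
/opt/splunkforwarder/bin/splunk add forward-server <indexer-ip>:9997

# On the indexer: make sure receiving is enabled
/opt/splunk/bin/splunk enable listen 9997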
Hi, I have an input token in my dashboard for register number called $tok_reg_num$. Customers can put in a specific number or leave it as the default of "*".

Here's the issue: in one of the dashboard searches I can use the default of "*" (e.g. index=blah sourcetype=blahblah register_number=*), but in a secondary panel I have to use a where with a LIKE clause, due to the different log type, to filter the register number, so * won't work and I need to change it to a %.

Non-working:
| where customer="foo" AND like(Register,"*")  <-- the dashboard default for $tok_reg_num$

I want it to be this:
| where customer="foo" AND like(Register,"%")  <-- change the $tok_reg_num$ to %

I have exhausted my meager Splunk token experience trying to get this to work. I can't figure out whether I can examine and change it in the search, or whether I need to do that in the dashboard. Someone give me a nudge in the right direction, please.
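One nudge, as a sketch: translate the token's wildcard inside the secondary panel's search with eval's replace(), so * (and only *) becomes the SQL-style %:

| eval reg_pat=replace("$tok_reg_num$", "\*", "%")
| where customer="foo" AND like(Register, reg_pat)

The same translation can also live in the dashboard itself via a <change>/<eval> block on the input, if you prefer to keep the searches clean.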
index=app_pc "Last Executed SQL" "Tablespace"
| rex field=_raw "<SERVICE_NAME>(?<SERVICE_NAME>.*)</SERVICE_NAME>"
| rex field=_raw "<HOST>(?<HOST>.*)</HOST>"
| rex field=_raw "<host>(?<host>.*)</host>"
| rex field=_raw "<CONNECT_DATA>(?<CONNECT_DATA>.*)</CONNECT_DATA>"
| rex field=_raw "<source>(?<source>.*)</source>"
| rex field=_raw "<index>(?<index>.*)</index>"
| rex field=_raw "<sql>(?<sql>.*)</sql>"
| rex field=_raw "<table>(?<table_name>.*)</table>"
| eval hour=strftime(_time,"%H")
| eval minute=strftime(_time,"%M")
| table _time, SERVICE_NAME, HOST, host, CONNECT_DATA, source, index, sql, table_name

I know this is not correct, but I am trying to extract, in Splunk, the index and table names from errors like "unable to extend index PCR.PC0000009BU5 by 8192 in tablespace PCR", plus the table name from the SQL. Example SQL:

SELECT COUNT(*) FROM (SELECT /* ISNULL:pcx_availablevolexcesses_ext.EffectiveDate:, ISNULL:pcx_availablevolexcesses_ext.ExpirationDate:; */ 1 as countCol FROM pcx_availablevolexcesses_ext pcx_availablevolexcesses_ext INNER JOIN pc_policyperiod policyperiod_0 ON policyperiod_0.ID=pcx_availablevolexcesses_ext.BranchID WHERE pcx_availablevolexcesses_ext.BranchID = ? AND ((((pcx_availablevolexcesses_ext.ExpirationDate IS NULL) OR (pcx_availablevolexcesses_ext.EffectiveDate IS NULL AND pcx_availablevolexcesses_ext.ExpirationDate <> ? AND pcx_availablevolexcesses_ext.ExpirationDate IS NOT NULL) OR (pcx_availablevolexcesses_ext.ExpirationDate <> pcx_availablevolexcesses_ext.EffectiveDate)))) AND policyperiod_0.Retired = 0 AND policyperiod_0.TemporaryBranch = '0') countTable
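A sketch targeting just the two things named: the Oracle error text quoted above has a fixed shape, so a direct rex on it is simpler than tag-by-tag extraction, and a second rex can pull the FROM targets out of the SQL. The field names here are my own, and the FROM pattern is naive (it only catches simple identifiers after FROM):

index=app_pc "unable to extend index"
| rex "unable to extend index (?<ora_index>\S+) by \d+ in tablespace (?<tablespace>\S+)"
| rex max_match=0 "FROM\s+(?<table_name>[A-Za-z_][A-Za-z0-9_\.]*)"
| table _time, ora_index, tablespace, table_name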
Hi everyone, I have produced a search which formats events in a table with a couple of columns. The data and column names use Cyrillic words, and in the GUI these look just fine. However, when I try to export the table as CSV (via the "Export To" option), the data and column names are encoded incorrectly and are not readable. Is there a setting I can change to fix this problem? I've searched the other topics here in Communities, but didn't find an answer, e.g.: https://community.splunk.com/t5/Splunk-Search/Why-are-special-characters-replaced-with-UTF-8-after-exporting/m-p/448248 https://community.splunk.com/t5/Getting-Data-In/Korean-character-is-broken-when-I-export-the-query-results-in/m-p/180307 Any help is appreciated, Thanks!
Hello, I need to create a report that is identical to the interesting-fields pop-up window: Top 10 Values | Count | %. Is there any way to create a report directly from this pop-up, or to see the search that is performed when looking at it? Thank you for your help, Tom
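That pop-up is essentially what the top command produces. A sketch with placeholder index/field names:

index=your_index sourcetype=your_sourcetype
| top limit=10 your_field

top returns the value, its count, and a percent column, matching the pop-up layout.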
Hi, can anybody help me please? I have _json indexed events in Splunk:

19.08.21 08:26:27,746 { name: S8.ManuelFail        value: false }
19.08.21 08:26:25,746 { name: S8.Pruefprogramm_PG  value: S3_450 }
19.08.21 08:26:27,746 { name: S8.ManuelFail        value: true }
19.08.21 08:27:25,746 { name: S8.Pruefprogramm_PG  value: S3_450 }
19.08.21 08:28:25,746 { name: S8.Pruefprogramm_PG  value: S3_600 }
19.08.21 08:29:25,746 { name: S8.Pruefprogramm_PG  value: S3_600 }

In the dashboard I choose a specific time interval with the time picker element, e.g. last 24 hours. I would like to have a % number for each name: S8.Pruefprogramm_PG and its value, showing where name: S8.ManuelFail has value: true. That is, all name: S8.ManuelFail value: true as a percentage of all name: S8.ManuelFail value: true/false. Is that even possible? E.g. in this case I would like a table output like:

S8.Pruefprogramm_PG    S8.ManuelFail [%]
S3_450                 50
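It is possible, with one assumption made explicit: that each S8.ManuelFail reading belongs to the most recent S8.Pruefprogramm_PG value seen before it in time. Under that assumption, a fill-down pattern works (the index name is a placeholder):

index=your_index name IN ("S8.Pruefprogramm_PG", "S8.ManuelFail")
| sort 0 _time
| eval program=if(name=="S8.Pruefprogramm_PG", value, null())
| filldown program
| where name=="S8.ManuelFail"
| stats count AS total, count(eval(value="true")) AS fails BY program
| eval 'S8.ManuelFail [%]'=round(fails*100/total, 1)
| rename program AS "S8.Pruefprogramm_PG"
| table "S8.Pruefprogramm_PG", "S8.ManuelFail [%]"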
Hi Experts, I have a job log file that gets ingested into Splunk with the naming convention "trace_08_19_2021_06_36_03_*********.txt". After "trace", the format of the file name is _MM_DD_YYYY_HH_Min_Sec.****. The requirement is to extract the date and time from the file name and show the number of logs generated per hour. I read a blog which mentioned that Splunk extracts the timestamp from the filename, but that's not happening in this case. Kindly help. Thanks in advance for any assistance. Regards, Karthikeyan.SV
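Filename-based timestamping at index time needs custom datetime configuration, but at search time you can pull it out of the source field directly — a sketch, with the index as a placeholder:

index=your_index source="*trace_*"
| rex field=source "trace_(?<mon>\d{2})_(?<day>\d{2})_(?<yr>\d{4})_(?<hh>\d{2})_(?<mm>\d{2})_(?<ss>\d{2})"
| eval file_time=strptime(mon."/".day."/".yr." ".hh.":".mm.":".ss, "%m/%d/%Y %H:%M:%S")
| eval file_hour=strftime(file_time, "%Y-%m-%d %H:00")
| stats count BY file_hour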
Hi Experts, I have a requirement in which a table is ingested into Splunk. The table has a field named Time showing a timestamp as "YYYY-MM-DD HH:MM:SS:milisec". Data ingestion is happening without any issues. When I tried to show the count of events on a particular day:

1. With the stats command, the count matches the source, but on days when no event happens in the source system the count does not show as 0 in Splunk; the day is just ignored.
2. With the timechart command, Splunk takes the ingestion timestamp of the event rather than the timestamp in the event, and the count doesn't match. However, here, when no data is ingested, the count does show as 0.

Please help me with this issue: the count should be calculated from the field Time, and if there is no event for a day, that day should be displayed as 0. I'm trying to show the data in a bar chart. Any help is much appreciated. Thanks. Regards, Karthikeyan.SV
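A sketch that combines the two behaviours: overwrite _time from the Time field, then let timechart do the day bucketing, since timechart emits zero-count buckets for gaps. The time format assumes a colon before the milliseconds, as written:

index=your_index
| eval _time=strptime(Time, "%Y-%m-%d %H:%M:%S:%3N")
| timechart span=1d count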
Hello Everyone! I am struggling to find out how to keep the format/visualization options for a chart in a Splunk dashboard when opening the search with the "Open in Search" option. I have a dashboard where all panels have a 2-axis visualization (column & line) with a lot of customization (type of scale, min and max values, etc.); however, when opening the search using the "Open in Search" option, that customization is lost. Do you know if there's any way to preserve it? Many thanks in advance! Best
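Some context on why this happens, offered as a general explanation rather than a fix: the customization is not part of the search at all; it lives as <option> elements on the panel in the dashboard's Simple XML, so "Open in Search" carries over only the SPL. A 2-axis panel's XML looks roughly like this (option values are illustrative):

<chart>
  <search><query>index=... | timechart ...</query></search>
  <option name="charting.chart">column</option>
  <option name="charting.chart.overlayFields">"line_series"</option>
  <option name="charting.axisY2.enabled">1</option>
</chart>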
Hi, I am using JavaScript to change the column value color in Splunk:

var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
    canRender: function (cell) {
        return _('source').contains(cell.field);
    },

I want this with a wildcard, so the column can have anything before "source":

var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
    canRender: function (cell) {
        return _('*source').contains(cell.field);
    },
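Underscore's contains() is a membership check, not a pattern match, so it has no notion of wildcards. A sketch of the usual workaround, testing the field name against a regular expression instead (here: any field whose name ends in "source"):

var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
    canRender: function (cell) {
        // Matches "source", "my_source", "eventsource", ...
        return /source$/.test(cell.field);
    }
});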