All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, I have a table below to which I want to add gridlines. It is controlled by the CSS below. Can anybody suggest how?

table td > a, table td > a:hover, header { color: #fff }

/* [47] */
.table-chrome, .table-chrome .sorts a, .table-chrome .sorts a:hover {
  color: white !important;
  border: 2px solid black;
}

/* [48] */
.table-chrome > thead > tr > th {
  background-image: none;
  background-color: transparent !important;
  border: 2px solid black;
  font-size: medium;
  text-align: -webkit-auto;
}

/* background colour for tables: Status volume & alert breaches */
/* [49] */
.table-chrome.table-striped > tbody > tr.odd > td,
.table-chrome.table-striped > tbody > tr.even > td {
  /* background-color: #393a4b; */
  padding-top: 1px;
  border-color: transparent;
  text-align: auto;
}

/* [50] */
tr.shared-resultstable-resultstablerow.odd,
tr.shared-resultstable-resultstablerow.even {
  background: none;
}
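A minimal sketch for the gridlines, assuming the same .table-chrome markup targeted above. Note that the [49] rule sets border-color: transparent on the body cells, which is what hides the grid, so the cell borders need to be made visible and collapsed:

.table-chrome { border-collapse: collapse; }
.table-chrome.table-striped > tbody > tr > td {
  border: 1px solid black; /* overrides the transparent border-color set in rule [49] */
}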
I want to create a tile visualization which takes my search and then gives me the % of non-200 results from the "Response" field. Has anybody done this before?
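A minimal SPL sketch of one way to do this, assuming "Response" is already extracted and holds string values like "200" (adjust the comparison if it is numeric). Append it to the base search and render pct_non_200 as a single-value (tile) visualization:

| stats count(eval(Response!="200")) as non_200, count as total
| eval pct_non_200 = round(non_200 / total * 100, 2)
| fields pct_non_200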
So this is what my data looks like. I need to check whether the last column's value is within the range of the last 75 days, in other words that the date is later than 75 days ago. How can I proceed?
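Since the actual column name and date format were not shown, this is only a sketch with placeholders (last_col and the %Y-%m-%d format are assumptions to replace with the real ones):

| eval last_col_epoch = strptime(last_col, "%Y-%m-%d")
| where last_col_epoch >= relative_time(now(), "-75d")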
Hi everybody, I have a couple of questions regarding the compatibility between Splunk Enterprise Server and Universal Forwarders. This is mainly about the Mac OS X clients, which, as everyone knows, always ship with the latest OS X when ordered. Our servers are still on Enterprise release 8.0.9, but we can't install that version on the Mac OS X 11 clients. According to Splunk, Universal Forwarder 8.0.9 is not in the compatibility list for OS X 11 clients (only OS X 10.13, 10.14, 10.15). That means you have to install the 8.2.x version; from then on, OS X 11 appears as compatible. This means the Universal Forwarder would run a higher version than the server. However, based on the compatibility list from Splunk (https://docs.splunk.com/Documentation/Forwarder/8.0.9/Forwarder/Compatibilitybetweenforwardersandindexers), compatibility would be given. Can we install Universal Forwarder 8.0.9 on OS X 11 clients without having problems? Or can we install Universal Forwarder 8.2.1 on the OS X 11 clients without having problems with our Enterprise servers, which are still on version 8.0.9? Many thanks for your hints.
Hi, I am trying to list the advantages of macro usage in Splunk. As far as I know, the main benefit is that if the name of the index or of the sourcetype changes, we just have to change the macro. But are there other benefits of using a macro? For example, is a macro faster? Thanks
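For reference, a minimal example of the kind of macro being discussed (the macro name, index, and sourcetype are made up for illustration):

# macros.conf
[my_web_data]
definition = index=web sourcetype=access_combined

# used in a search as:
# `my_web_data` | stats count by status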
Hi, I am trying to set up the latest version of the Splunk Forwarder for the first time on a Linux server. However, after executing the command below, I am getting errors. Please suggest what could be the issue here?

/splunk/splunkforwarder/bin> ./splunk start --accept-license
This appears to be your first time running this version of Splunk.
Splunk software must create an administrator account during startup. Otherwise, you cannot log in.
Create credentials for the administrator account. Characters do not appear on the screen when you type in credentials.
Please enter an administrator username: administrator
Password must contain at least:
* 8 total printable ASCII character(s).
Please enter a new password:
Please confirm new password:
Splunk> Like an F-18, bro.
Checking prerequisites...
Checking mgmt port [8089]: open
Creating: /splunk/splunkforwarder/var/lib/splunk
Creating: /splunk/splunkforwarder/var/run/splunk
Creating: /splunk/splunkforwarder/var/run/splunk/appserver/i18n
Creating: /splunk/splunkforwarder/var/run/splunk/appserver/modules/static/css
Creating: /splunk/splunkforwarder/var/run/splunk/upload
Creating: /splunk/splunkforwarder/var/spool/splunk
Creating: /splunk/splunkforwarder/var/spool/dirmoncache
Creating: /splunk/splunkforwarder/var/lib/splunk/authDb
Creating: /splunk/splunkforwarder/var/lib/splunk/hashDb
ERROR: pid 27238 terminated with signal 11
SSL certificate generation failed.

Referring to this link: https://splunkcommunities.force.com/customers/apex/ArticleDetailPage?URLName=Splunk-Won-t-Start-ERROR-SSL-Certificate-Generation-Failed
The resolution given there is: the main reason for this issue/error is an application on the operating system named CylancePROTECT; it won't let Splunk create the certificates, so Splunk won't be able to start. The customer will be able to start up Splunk after disabling Cylance.

Is there a way around this so that we don't have to disable the security software?
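One workaround that may be worth testing (an assumption, not a confirmed fix): pre-generate the server certificate outside of Splunk so its own generation step never runs, then point server.conf at the file. Paths and the CN below are illustrative:

# generate a self-signed cert + key with openssl
openssl req -x509 -newkey rsa:2048 -nodes -days 1095 \
  -subj "/CN=SplunkServerDefaultCert" \
  -keyout /splunk/splunkforwarder/etc/auth/key.pem \
  -out /splunk/splunkforwarder/etc/auth/cert.pem

# Splunk expects the certificate and key concatenated in one PEM file
cat /splunk/splunkforwarder/etc/auth/cert.pem /splunk/splunkforwarder/etc/auth/key.pem > /splunk/splunkforwarder/etc/auth/server.pem

# etc/system/local/server.conf
[sslConfig]
serverCert = /splunk/splunkforwarder/etc/auth/server.pem

A cleaner long-term fix is usually to ask the security team to whitelist $SPLUNK_HOME in Cylance rather than disabling it.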
Hi All, I'm struggling with the syslog configuration to forward events while maintaining the original source IP. The rsyslog daemon collects the data into a file, and after parsing I need to forward it to a third-party syslog receiver. On my HF I have the following configuration:

inputs.conf
[monitor:///opt/syslog/udp_514/udp_switch.log]
disabled = 0
sourcetype = syslog

outputs.conf
[syslog:forward_syslog]
server = 172.18.0.32:514

props.conf
[source::/opt/syslog/udp_514/udp_switch.log]
TRANSFORMS-t1 = to_syslog,to_null

transforms.conf
[to_syslog]
REGEX = <regex filter>
DEST_KEY = _SYSLOG_ROUTING
FORMAT = forward_syslog

[to_null]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = nullQueue

This configuration works fine, but unfortunately the source IP is changed. The log in udp_switch.log reads "Sep 8 11:30:52 10.10.10.5 TEST5,007251000106157", yet in the third-party syslog the IP "10.10.10.5" is replaced with the heavy forwarder's. Is it possible to maintain the original IP, and how? Many thanks
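One approach that may be worth testing (an assumption based on how Splunk's syslog output rewrites the host header, not a confirmed fix): let rsyslog itself relay the matching events straight to the third-party receiver, bypassing the HF for that leg, so the original syslog header survives. The filter condition is illustrative:

# rsyslog.conf sketch: relay the switch's events directly over UDP,
# keeping the original syslog header (and thus the 10.10.10.5 host) intact
if $fromhost-ip == '10.10.10.5' then @172.18.0.32:514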
Hi, Splunk logs are truncated to 10,000 characters. Please let me know whether TRUNCATE = 20000 needs to be changed in the Splunk (indexer) installation or in the forwarder installation. Regards, Madhusri R
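TRUNCATE is a parsing-time props.conf setting, so it belongs where the events are parsed (the indexers, or a heavy forwarder if one sits in front of them); a universal forwarder ignores it. A sketch, with a placeholder sourcetype:

# props.conf on the indexers / heavy forwarder
[your_sourcetype]
TRUNCATE = 20000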
How can I convert alphanumeric values to numeric values when the length of the values changes each time? Can someone suggest?

ClusterCPUUsed=31684 ClusterCPUHALimit=383880 HARamGBLimit=589 ClusterMemUsedGB=201
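If the goal is to pull the numeric part out of each key=value pair regardless of its length, a sketch like this may work (assuming the pairs sit in _raw as shown; repeat per field):

| rex field=_raw "ClusterCPUUsed=(?<ClusterCPUUsed>\d+)"
| rex field=_raw "ClusterMemUsedGB=(?<ClusterMemUsedGB>\d+)"
| eval ClusterCPUUsed = tonumber(ClusterCPUUsed), ClusterMemUsedGB = tonumber(ClusterMemUsedGB)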
I have successfully trained a LogisticRegression model using TF-IDF data (3K events in a month) as input, with probabilities=true. In the fit process it shows the probabilities of everything correctly; I can even do a ROC curve analysis. The problem comes when I use the model: I run a new search, TF-IDF the data, and right after run "| apply logistic_model probabilities=true" on new data (say, the last 24 hours). The behaviour is that it only shows the probabilities for the first event (sometimes two or three, but not all, if I apply the model to "old data"), and the others appear blank, though the predicted field appears correctly. Now, if I do a search applying only the TFIDF_model, without the apply logistic_model, then run "| loadjob 123ABC" to load only the previously calculated TF-IDF data, and then apply the model to the loaded job, the probabilities appear magically. I am almost sure this is a bug, but I want to know if there is some workaround? Thanks
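For clarity, the workaround described above in SPL form (123ABC is the poster's placeholder for the job SID of the TF-IDF search):

| loadjob 123ABC
| apply logistic_model probabilities=true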
Hi All, I want to move the paginator of my dashboard panel to the top. The dashboard is a Simple XML dashboard. Is there any way to do this by editing the dashboard directly, without adding CSS? If not, please suggest the CSS method as well. Thanks in advance!
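As far as I know, Simple XML has no option for the paginator position, so CSS is the usual route. A sketch, assuming the table element has id "tbl1" and that your Splunk version uses the .splunk-paginator class (verify both with the browser inspector):

<panel depends="$alwaysHideCSS$">
  <html>
    <style>
      /* anchor the paginator to the top-right of the panel instead of the bottom */
      #tbl1 { position: relative; }
      #tbl1 .splunk-paginator { position: absolute; top: 5px; right: 10px; }
    </style>
  </html>
</panel>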
Hi everyone, I'm probably hitting an issue with the extraction of my Fortinet data. I have installed the following apps:

Fortinet FortiGate App for Splunk (SplunkAppForFortinet) 1.5.1
Fortinet FortiGate Add-on for Splunk (Splunk_TA_fortinet_fortigate) 1.6.2

Does anyone know the difference between the fields action and ftnt_action? I'm getting different results there: in the action field I have, for example, "blocked", but in ftnt_action I have "detected" and also "dropped". This is a bit confusing while I'm trying to get only blocked attacks. Could someone please help me?
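To see how the two fields relate in your own data, a quick comparison search may help (index and sourcetype are placeholders for wherever the FortiGate data lands):

index=your_fortigate_index sourcetype=fortigate_utm
| stats count by action ftnt_action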
I need to find the rows from the first inputlookup collection that have matching field values in the second inputlookup collection. For example:

collectionA:
field1, field2, field3
X, 1, 3
X, 2, 4
Y, 4, 1
Z, 1, 2
B, 3, 3
B, 1, 1

collectionB:
fieldX
X
Y
B

The expected result (the row containing 'Z' is excluded, as it has no entry in collectionB):

field1, field2, field3
X, 1, 3
X, 2, 4
Y, 4, 1
B, 1, 1

The query is like: | inputlookup collectionA | search field1 IN ('X','Y','Z'....). How can I feed the values 'X','Y','Z'.... for field1 from collectionB, given that this list can be of any length? I tried the following but it didn't work: | inputlookup collectionA | search field1 IN (| inputlookup collectionB |fields fieldX). (In reality collectionB can have more than one column, but I want to match values only on fieldX.)
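The standard pattern for this is a subsearch whose output field is renamed to the field being filtered; the subsearch then expands to field1="X" OR field1="Y" OR ... automatically, however long the list is. A sketch using the collection names from the example:

| inputlookup collectionA
| search [| inputlookup collectionB | fields fieldX | rename fieldX as field1]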
I am getting events from an Australian timezone, but the time on my laptop is IST. When I try to calculate events from the beginning of today (i.e. 12:00 am) to now with a span of 3h, my stats show the timechart starting from 10 pm yesterday. I don't understand the mistake. Above the results it shows events from 8 Sep 2021 00:00:00 to 8 Sep 2021 13:53:12 (now), but I don't know why the chart starts from 10 pm yesterday. I am calculating the avg of the results of a particular field.
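For what it's worth, "today" and the timechart buckets snap to the time zone configured on the searching user's Splunk account (Settings > Your account > Time zone), not the laptop clock, so a mismatch between that setting and the data's Australian zone would produce exactly this kind of shifted first bucket. A sketch of the search shape in question (index and field names are placeholders):

index=your_index earliest=@d latest=now
| timechart span=3h avg(your_field) as avg_value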
Hi, I want to add a text box in a dashboard panel, and the manually entered value of that textbox should be added to a new column in an already existing table. I understand that this can be done with a lookup to save the values, but I am not sure how to go ahead with it. This is the format of the table I have, with sample data (the original data I have is confidential):

EMAIL              NAME         IP           ID (new column)
nish123@gmail.com  Nishanth     10.10.10.0
abc098@gmail.com   ABC          224.0.0.0
amit187@gmail.com  Amit Sharma  63.125.0.0

I want to add a text box to this panel whose value is written into the ID column based on the unique value of EMAIL, and I want to save this table with the new values of ID. How can this be done? Any help would be appreciated, thanks.
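One common approach (a sketch only; the lookup file, token names, and fields are illustrative): keep the EMAIL-to-ID mapping in a lookup, upsert into it from the textbox tokens, and enrich the table with a lookup command.

# search run when the user submits the tokens $email_tok$ and $id_tok$
| makeresults
| eval EMAIL="$email_tok$", ID="$id_tok$"
| inputlookup append=t user_ids.csv
| dedup EMAIL
| table EMAIL ID
| outputlookup user_ids.csv

# the table panel then adds the column with:
# ... | lookup user_ids.csv EMAIL OUTPUT ID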
1) What will you do when there is a delay in the indexer?
2) How long is the delay period? (Is there any maximum time cap, or will you wait for the complete delay to be cleared in the indexer?)
3) Will you send any notifications regarding the indexer delay? If yes:
i) What information can you include in that notification (like a tentative time for the next alert schedule)?
ii) If there is a continuous delay and you missed 2-5 time intervals, can you send a mail for each time period, or a single mail with all the information?
4) If there is a 2-hour delay in the indexer, do you check for the missed intervals after the delay is cleared, or only from the current time period? (For example, RunFrequency is 5 mins and there is a delay from 10 AM that is cleared at 11 AM. Do you scan from 10 AM or from 11 AM?)
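For the detection side of these questions, indexing delay is commonly measured by comparing _indextime with _time; a sketch (the index is a placeholder):

index=your_index earliest=-15m
| eval lag_sec = _indextime - _time
| stats max(lag_sec) as max_lag_sec avg(lag_sec) as avg_lag_sec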
Hello. Splunk version - 8.2.2. Splunk DB Connect version - 3.6.0. After updating Splunk Enterprise from 8.0.2 to 8.2.2, I have noticed a timezone problem with Oracle database sources. My timezone is Europe/Kiev (GMT+3). I set the timezone in the DB connection settings, and I have also tried setting it in the Java settings (-Duser.timezone=Europe/Kiev), but each time I get the same result. So, I created a DB connection with this SELECT:

SELECT CAST(EXTENDED_TIMESTAMP AS TIMESTAMP) EXTENDED_TIMESTAMP,
       AUDIT_TYPE, STATEMENT_TYPE, RETURNCODE
FROM SYS.DBA_COMMON_AUDIT_TRAIL
WHERE EXTENDED_TIMESTAMP > ?
ORDER BY EXTENDED_TIMESTAMP

Rising column - EXTENDED_TIMESTAMP. Time column - EXTENDED_TIMESTAMP.
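One thing that may be worth testing (an assumption about the cause: EXTENDED_TIMESTAMP is a TIMESTAMP WITH TIME ZONE column, and the plain CAST discards the zone before DB Connect sees the value): convert to the local zone explicitly before casting.

SELECT CAST(EXTENDED_TIMESTAMP AT TIME ZONE 'Europe/Kiev' AS TIMESTAMP) EXTENDED_TIMESTAMP,
       AUDIT_TYPE, STATEMENT_TYPE, RETURNCODE
FROM SYS.DBA_COMMON_AUDIT_TRAIL
WHERE EXTENDED_TIMESTAMP > ?
ORDER BY EXTENDED_TIMESTAMP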
Will multiple UFs on a single machine be supported by Splunk?
I don't know how to disable this message for my dashboard properly. I have added this to a CSS style block within my dashboard:

.splunk-choice-input-message { visibility: hidden }

This suppresses all of the text beneath all of the inputs in the dashboard, which is fine insofar as it's better than causing concern with the "Duplicate values causing conflict" message. So if anyone knows a way to manipulate the CSS to suppress only that message, I'd be grateful. I know that I don't have to allow duplicates in the dropdown, and that in general the warning is probably a good hint, but sometimes it just makes sense (or so I think) to ignore it. I [mis]use the dropdown to show a relative timeline of events in the order they appear in a Splunk event (which is comprised of many lines of text). The user can drill down to the specific log line within a Splunk event that they want to examine in detail. I also realise that I could solve this with a condition on change and split the selected $value$ so that I get the id part without breaking the built-in idea that duplicates are bad. I just don't think I should have to do extra calculations to achieve such a simple idea. The data below is my justification, and it works really well for the use case, which actually has hundreds of lines in the combo box.

| makeresults
| eval events="1147,Event A [1]
1066,Event B [2..3]
1147,Event A [4]
1156,Event C [5..8]
1147,Event A [9]
1073,Event D [10..14]
1050,Event E [15..20]
1073,Event D [21..40]
1156,Event C [41..44]
1050,Event E [45..46]
1147,Event A [47]
1090,Event F [48]
1678,Event G [49]
1090,Event F [50]
1180,Event H [51]
1127,Event I [52]
1097,Event J [53]
1127,Event I [54..55]
1180,Event H [56]
1068,Event K [57..60]
1138,Event L [61..63]
1122,Event M [64]"
| rex max_match=0 field=events "(?<event>.*)\n"
| mvexpand event
| rex field=event "(?<id>.*),(?<event>.*)"
| table id event

It's true that the label in this case will be unique and the id will be duplicated. Yet it's the label that holds the human-readable useful data, and the id just refers to a lookup key.
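Plain CSS cannot match on the message text itself, so the realistic scope is per input rather than per message: hide the message only under the dropdown in question instead of dashboard-wide. A sketch, assuming the dropdown input has id "input_timeline" in the XML (the id is a placeholder):

#input_timeline .splunk-choice-input-message { visibility: hidden }

This keeps warnings visible for every other input while silencing the duplicate-values notice on the timeline dropdown.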