All Topics



I have the following dashboard:

<form>
  <label>RHN Status sandbox</label>
  <row>
    <panel>
      <title>Test</title>
      <html>
        <div>
          <img src="http://collab.ubs.net/adventskalender/IconBrowser/com.ubs.a00.icons.ui/src/web/icons/32x32/0239_OK6.png"/>
        </div>
      </html>
      <input type="checkbox" token="show_details">
        <label></label>
        <choice value="1">Details</choice>
      </input>
    </panel>
  </row>
</form>

The input checkbox always renders above the img in the div. How can I place the input below the img?
Hi,

Our Splunk universal forwarder is running under a non-root service account, which is defined in our central LDAP.

We upgraded the universal forwarder from 7.3.7 to 8.0.5, and now the forwarder can no longer monitor our file.

The file to be monitored is set up as follows:

# ls -al /tmp/ldap-group-test
-rw-r-----. 1 root ldapgroup 100 Sep 1 09:27 /tmp/ldap-group-test

Our Splunk service account user is a member of the group ldapgroup:

# id ldapsplunk
uid=100007(ldapsplunk) gid=100008(ldapsplunkgroup) groups=100008(ldapsplunkgroup),100009(ldapgroup)

With universal forwarder 8.0.5 we get a permission denied. If we use a scripted input (inputs.conf) to display the user and group context of the currently running forwarder session:

bin/display_groups.sh
#!/bin/bash
hostname=$(hostname)
time=$(date +%s)
id=$(id)
rc=$?
echo "${time} - ${hostname} - ${id} - ${rc}"

we get as output:

1598967093 - splunkhost.bla.fasel.de - uid=100007(ldapsplunk) gid=100008(ldapsplunkgroup) groups=100008(ldapsplunkgroup) context=system_u:system_r:unconfined_service_t:s0 - 0

As you can see, the membership of ldapgroup is missing here. It seems that during the start process of the universal forwarder, the permissions for LDAP groups aren't passed correctly.

Has anyone else noticed this? Are there any workarounds other than downgrading to version 7.3?

Thanks, Lorenz
What is the character limit for a field value in Splunk? If we use longer values, would they get truncated or skipped when used in stats? Here myfield's value contains 1000+ characters:

| eval myfield = "aksjlfhasdfjasdfj/afkhsfkjas39@#$%^&*()HDKLJLO8849889slfkssfdfsdfsfsdfs........."

Any advice?
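For context, two separate limits can come into play here; the sketch below shows where each would be raised, assuming a hypothetical sourcetype name. Per-event truncation at index time is controlled by TRUNCATE in props.conf (default 10000 characters), and automatic search-time key/value extraction stops scanning the raw event after maxchars in limits.conf. The exact values below are illustrative, not recommendations.

```
# props.conf -- hypothetical sourcetype; raise the per-event
# truncation limit if events legitimately exceed 10000 characters
[my_sourcetype]
TRUNCATE = 20000

# limits.conf -- automatic KV field extraction stops scanning _raw
# after this many characters
[kv]
maxchars = 20480
```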
I have configured local/inputs.conf for TCP data input over SSL as suggested in the documentation, but after restarting Splunk it is not working. I am using version 8.x. Please suggest how to securely send TCP data from my application to the Splunk server using an SSL certificate.
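For reference, a minimal SSL TCP input in inputs.conf typically pairs a [tcp-ssl:<port>] stanza with an [SSL] stanza; the port, certificate path, and sourcetype below are placeholders, not values from the question.

```
# inputs.conf -- minimal sketch; port, cert path and sourcetype
# are hypothetical placeholders
[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/myServerCert.pem
sslPassword = <certificate_password>
requireClientCert = false

[tcp-ssl:5140]
sourcetype = my_app_data
```

After editing, a Splunk restart is needed, and splunkd.log is the usual place to check for SSL handshake errors if the sender still cannot connect.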
Hello,

When fetching episodes from ITSI via REST (https://hostname:8089/servicesNS/fsspl06/itsi/event_management_interface/notable_event_group?filter={"status":"1","severity":{"$gte":"3"}}), a list of several episodes with status "New" is returned. However, in the ITSI GUI, a search in the Episode Review tab for all new episodes over all time returns no results. How is this possible? Any clues on how to debug this?

Thanks
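One thing worth checking while debugging is that the filter JSON is being URL-encoded the same way by both callers; a literal `{` or `$` on a query string can be interpreted differently by different clients. A small Python sketch (hostname and owner copied from the question, purely illustrative) builds the properly encoded request URL:

```python
import json
from urllib.parse import urlencode

# Base endpoint from the question (hostname/owner are placeholders).
base = ("https://hostname:8089/servicesNS/fsspl06/itsi/"
        "event_management_interface/notable_event_group")

# The filter must be compact JSON, then URL-encoded as a whole.
flt = {"status": "1", "severity": {"$gte": "3"}}
qs = urlencode({"filter": json.dumps(flt, separators=(",", ":"))})
url = f"{base}?{qs}"
print(url)
```

Comparing the episodes returned by this call against the time range and filters the Episode Review tab applies (it filters on its own time picker, not "all time" by default) is a reasonable first step.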
Hi,

I see a lot of DateParserVerbose warnings in splunkd.log on my indexers. The warnings go as follows:

WARN DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous event. Time parsed (Sun Aug 2 22:06:30 2020) is too far away from the previous event's time (Tue Nov 5 15:47:34 2019) to be accepted. If this is a correct time, MAX_DIFF_SECS_AGO (3600) or MAX_DIFF_SECS_HENCE (604800) may be overly restrictive

DateParserVerbose - Accepted time format has changed (some random text), possibly indicating a problem in extracting timestamps.

@isoutamo
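The settings named in the warning live in props.conf per sourcetype. A hedged sketch of the kind of stanza that addresses both messages (the sourcetype name and TIME_FORMAT below are assumptions; the format must be adapted to the actual events):

```
# props.conf -- hypothetical sourcetype; pin the timestamp format so
# "Accepted time format has changed" stops firing, and relax the
# sanity checks named in the warning if the old timestamps are real
[my_sourcetype]
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_DIFF_SECS_AGO = 86400
MAX_DAYS_AGO = 2000
```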
In a notepad editor the field offsets and sizes are known; how can I extract fields based on offset? The log pattern is not consistent and may contain spaces instead of data, which breaks the extraction. I tried using a delimiter, but when two adjacent fields are missing only one pipe gets added, which disturbs the extraction.

Example:
line1: value1 value2 value3 value4 value5
line2: value1                 value4 value5
line3: value1 value2                 value5

Using | as a delimiter produces data like this:
line1: value1|value2|value3|value4|value5
line2: value1|value4|value5
line3: value1|value2|value5

Kindly suggest how to index or extract the fields properly so that the data appears like:
line1: value1 value2 value3 value4 value5
line2: value1 na     na     value4 value5
line3: value1 value2 na     na     value5
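Since the offsets and sizes are known, the fixed-width slicing can be sketched outside Splunk first; the (start, length) pairs below are hypothetical examples, not taken from the actual log:

```python
# Fixed-offset extraction sketch: five fields of 8 characters each
# (assumed layout); a blank slice becomes "na".
FIELDS = [(0, 8), (8, 8), (16, 8), (24, 8), (32, 8)]

def extract(line):
    padded = line.ljust(40)  # guard against short lines
    out = []
    for start, length in FIELDS:
        value = padded[start:start + length].strip()
        out.append(value if value else "na")
    return out

print(extract("value1  value2  value3  value4  value5"))
```

In Splunk the same idea maps to a search-time rex with fixed-width capture groups (e.g. `(?<f1>.{8})(?<f2>.{8})...`), which tolerates missing values where a delimiter-based split cannot.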
I'm using Splunk for the first time, and I have an SQL query giving the following output:

2020-08-31 00:17:34.608, EMPTY_DATE="2020-12-03 00:00:00.0", ANTAL="2", SUM(AMOUNT)="2533"

The "SUM(AMOUNT)" is not saved under a name/alias (which I should have done in retrospect), and now I don't know how to get the data out. I've tried the following (but I suspect Splunk gets confused by a name which is also a function):

| table ANTAL "SUM(AMOUNT)"

Is there a way to get the number out without going back and adding a name/alias to the SQL?
Hi Team,

We are planning to upgrade from Splunk Enterprise v7.2.9.1 to Splunk Enterprise v8.0.x in the next few months. We are also planning to upgrade the Splunk ES app. With that, can you share with us which are the latest and most stable Splunk Enterprise 8.x and Splunk ES 6.x versions?

Splunk Enterprise: 8.0.3? 8.0.4? 8.0.5?
Splunk ES app: 6.2.0? 6.1.1? 6.1.0? 6.0.2? 6.0.1? 6.0.0?

Regards,
Kevin
I have a lookup which is backed by a KV store. The lookup contains thousands of rows. We want to delete rows from this lookup which are older than 7 days.

To give an idea, this is what the lookup looks like, except that it contains thousands of rows:

my_date      name    age
Sep-1-2020   Rupert  20
Aug-31-2020  Sam     30
Aug-30-2020  Tony    25
Aug-29-2020  Prince  27
Aug-28-2020  Tom     24
Aug-27-2020  Sean    28
Aug-26-2020  Roger   21

We keep appending new data to this lookup on a daily basis, but we don't care about data older than 7 days and want to remove those rows. How do I remove the rows efficiently from this KV store based lookup? The classic way would be to search for rows that are newer than 7 days and output the results to the same lookup, something like:

| inputlookup my_lookup | where strptime(my_date, "%b-%d-%Y") > relative_time(now(), "-7d") | outputlookup my_lookup

But given that the lookup contains tons of rows, I would like to do this efficiently and remove just the old rows instead of rewriting the entire result set. This should be possible with a KV store, but I am unable to figure out how.
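The KV store REST API accepts a DELETE on the collection's data endpoint with a `query` parameter, which removes only the matching records server-side instead of rewriting the whole collection. A sketch below; the app name, collection name, and the assumption that the date is stored as an epoch-seconds field `my_date_epoch` are all hypothetical (a string like "Sep-1-2020" would not compare correctly with `$lt`):

```python
import json
import time
from urllib.parse import quote

# Assumed names: app "my_app", collection "my_lookup_collection",
# epoch-seconds field "my_date_epoch".
cutoff = time.time() - 7 * 86400
query = json.dumps({"my_date_epoch": {"$lt": cutoff}})
url = ("https://localhost:8089/servicesNS/nobody/my_app/"
       "storage/collections/data/my_lookup_collection"
       f"?query={quote(query)}")
# An HTTP DELETE to this URL (with an auth token) removes only the
# matching records, e.g.:
#   requests.delete(url, headers={"Authorization": "Bearer <token>"},
#                   verify=False)
print(url)
```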
Hi,

We have configured syslog-ng to send data to the indexers. Sometimes the syslog file is getting updated but the data is not reaching the indexer. For the same index we are getting data from other files; only one file is not getting indexed. We have to restart the syslog Splunk process to get it indexing again.

We already have the settings below configured:

[thruput]
maxKBps = 0

[inputproc]
max_fd = 1000

[tcpout:targetgrpmaster]
autoLBFrequency = 30
indexerDiscovery = masterindexdiscovery
forceTimebasedAutoLB = True
maxQueueSize = 128MB

[general]
parallelIngestionPipelines = 6

[queue]
maxSize = 256MB

[queue=parsingQueue]
maxSize = 128MB
I'm struggling with converting a timestamp into a date. In my data, EMPTY_DATE looks like this:

2020-08-27 00:00:00.0

I have tried the following:

| convert timeformat="%m/%d/%Y" ctime(EMPTY_DATE) AS date

...and this:

| eval date=strftime(EMPTY_DATE, "%m/%d/%Y")

...and this:

| eval time=strptime(EMPTY_DATE,"%Y%m%dT%H:%M:%S.%Q") | convert timeformat="%d%m%Y" ctime(time) as date

All of the above return empty columns. I don't know if it's because Splunk doesn't recognize my timestamp or something else?
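The likely issue is that the strptime format string must mirror the raw value exactly: the value contains hyphens, spaces and colons, while the attempted format "%Y%m%dT%H:%M:%S.%Q" does not. The matching layout can be verified outside Splunk with Python's strptime, which uses the same directives for this part of the format:

```python
from datetime import datetime

# Timestamp from the question; note the format mirrors it exactly
# (Python's %f accepts the single trailing subsecond digit).
raw = "2020-08-27 00:00:00.0"
parsed = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S.%f")
print(parsed.strftime("%m/%d/%Y"))  # -> 08/27/2020
```

In SPL the equivalent approach is strptime(EMPTY_DATE, ...) with a format matching the string (Splunk uses its own subsecond directives), followed by strftime to render the date.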
Good day everyone,

Is there a way to query the status/availability of a server and visualize it? We cannot rely on logs from the server itself, because once it's down it won't be sending any data, so whatever I do has to happen on the Splunk side. Thank you in advance.
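One common pattern is an external probe: a small script run from the Splunk side (for example as a scripted input) that checks whether the target answers on a TCP port, so the check keeps producing events even when the server itself is silent. A minimal sketch, assuming the service exposes a known TCP port:

```python
import socket

def is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: print a status line a scripted input could emit to Splunk.
# print(f"host=myserver status={'up' if is_reachable('myserver', 443) else 'down'}")
```

The emitted up/down events can then be charted over time; an absence of "up" events for a host is itself the alert condition.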
Hi,

I have an input field whose values are week numbers. Based on the weeknum selected, how do I pass the earliest and latest dates into my drilldown?

Here is my input field:

<input type="dropdown" token="weeknum" searchWhenChanged="true">

And here is the drilldown section from one of the dashboard panels, where the time range gets passed to another page (sre_module_summary) via the tokens selectedearliest and selectedlatest. How do I get the values for these tokens based on the weeknum selected from the input panel?

<drilldown target="_blank">
  <eval token="Module">$click.value$</eval>
  <eval token="HostType">$HostType$</eval>
  <link>
    <![CDATA[/app/sre/sre_module_summary?form.Module=$Module$&host=$HostType$&form.timerange.earliest=$selectedearliest$&form.timerange.latest=$selectedlatest$]]>
  </link>
</drilldown>

Could someone please help?
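The underlying computation is week number to epoch bounds; in Simple XML this would typically be done in a <change> handler on the dropdown that sets the two tokens via <eval>. The arithmetic itself can be sketched in Python (the year 2020 here is an assumption, since the dropdown in the question carries only a week number):

```python
from datetime import datetime, timedelta

def week_bounds(year, weeknum):
    """Epoch-second bounds of an ISO week: Monday to the next Monday."""
    start = datetime.fromisocalendar(year, weeknum, 1)  # Monday
    end = start + timedelta(days=7)                     # next Monday
    return int(start.timestamp()), int(end.timestamp())

earliest, latest = week_bounds(2020, 36)
print(earliest, latest)
```

The resulting pair is what $selectedearliest$ and $selectedlatest$ would carry into form.timerange.earliest and form.timerange.latest.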
Hi, I want to dynamically import data from an Excel sheet to create dashboards in Splunk. The first tab contains a summary of all the data in the other tabs; the rest of the tabs contain detailed data. How can I do it?
When saving a new connection in DB Connect running on Windows and trying to connect to a MSSQL Server database, I'm receiving the following error:

"There was an error processing your request. It has been logged (ID 86d1511df962721b8)."

Of course, the ID changes every time I receive this message. Has anyone had any success in resolving this issue?
Hi Splunkers,

We defined a 35-day retention for real-time indexes in Splunk. I see that the retention is not being applied strictly: some of the old events are not getting deleted. We have around 200 days of data for some indexes, and for some indexes only January data is present. I cross-checked event time against indexed time; these are old events that were not deleted by the retention policy. Now I have two concerns:

1) How do we enforce a strict 35-day retention?
2) Any idea why the January data is still present and not getting deleted even though buckets are rolling, and how it can be deleted via retention?

Below are my settings for one index:

[rt]
# 80 GB a day / 14 days in warm / 35 day retention
homePath = volume:hot/rt/db
coldPath = volume:cold/rt/colddb
thawedPath = $SPLUNK_DB/rt/thaweddb
homePath.maxDataSizeMB = 500000
coldPath.maxDataSizeMB = 1000000
maxWarmDBCount = 300
frozenTimePeriodInSecs = 3024000
maxDataSize = auto_high_volume
maxTotalDataSizeMB = 150000

Regards, Shivanand
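One detail that often explains lingering old events: a bucket is only frozen once its newest event is older than frozenTimePeriodInSecs, so a single bucket spanning many months can keep January events alive well past 35 days. The configured value itself checks out against the stated 35-day target, as a quick sanity calculation shows:

```python
# Sanity-check the retention arithmetic in the stanza above.
days = 35
frozen_secs = days * 86400
print(frozen_secs)  # -> 3024000, matching frozenTimePeriodInSecs
```

Retention is also driven by size: when an index exceeds maxTotalDataSizeMB, the oldest buckets are frozen regardless of age, so whichever of the two limits is hit first wins.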
There is a field "Message" which contains values like "Error 1 , profileid = a, jsessionid=b". I want my search query to ignore profileid and jsessionid and display the error count. For the messages below, the query should report a count of 2 for Error 1 and 1 for Error 2:

Message 1: Error 1 , profileid = a, jsessionid=b
Message 2: Error 1, profileid = c, jsessionid=d
Message 3: Error 2 , profileid = e, jsessionid=f

Please help.
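The grouping logic can be sketched outside Splunk: extract only the leading "Error N" token and count by it, ignoring the variable tail. A Python sketch using the message texts from the question:

```python
import re
from collections import Counter

# Messages copied from the question.
messages = [
    "Error 1 , profileid = a, jsessionid=b",
    "Error 1, profileid = c, jsessionid=d",
    "Error 2 , profileid = e, jsessionid=f",
]

# Keep only the leading "Error <number>" token, then count.
counts = Counter(
    m.group(1)
    for msg in messages
    if (m := re.match(r"(Error \d+)", msg))
)
print(counts)  # -> Counter({'Error 1': 2, 'Error 2': 1})
```

The SPL analogue of this approach is a rex that captures the error token into a new field, followed by stats count by that field.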
I am considering implementing the following add-on:

Splunk Add-on for Symantec Endpoint Protection
https://splunkbase.splunk.com/app/2772/#/details

If Symantec Endpoint Protection's syslog output contains double-byte (Unicode) characters such as Japanese, does this add-on support it? Our Splunk version is 8.

Thank you