All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I've seen the warnings about having UFs send to HFs, which then relay to an index cluster. The objection seems to center on uneven distribution of events from the HFs to the indexers. However, doesn't setting these outputs.conf variables on the HFs fix those issues: indexAndForward=false and autoLBFrequency=30? Basically, have the UFs load balance to a pool of HFs, and the pool of HFs load balance to the index cluster. If you do this, will you still have problems?
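A minimal sketch of the HF-side outputs.conf described above (hostnames and group name are placeholders, not from the original post):

```
# outputs.conf on each heavy forwarder
[tcpout]
defaultGroup = primary_indexers
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
autoLBFrequency = 30
```

With autoLBFrequency the HF rotates targets every 30 seconds, but a single long-lived stream only switches at safe event boundaries; forceTimebasedAutoLB (or EVENT_BREAKER on the UF side) is sometimes added to even out very large streams.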
On a Kubernetes environment, the Fluentd Splunk plugin is installed and sends the standard-output application logs to a Heavy Forwarder via HEC. The standard-output application logs are not structured, and I'm not able to apply line merging to them. My inputs.conf is:

[http://k8s_hec]
disabled = 0
index = em_events
source = em_metrics
token = aaaaaaaa-bbbb-cccc-dddd-fffffffffff

Fluentd defines many sourcetypes, and all custom application sourcetypes end with "app"; for example:

kube:container:goofy-app
kube:container:donald-duck-app

So I defined these two configurations in props.conf on my HF, but I'm still not able to merge events:

[kube:container:*-app]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
MAX_TIMESTAMP_LOOKAHEAD=30
disabled=false
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%3N
TIME_PREFIX=^
MAX_EVENTS=1024

[source::k8s_hec]
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
NO_BINARY_CHECK=true
CHARSET=UTF-8
MAX_TIMESTAMP_LOOKAHEAD=30
disabled=false
TIME_FORMAT=%Y-%m-%d %H:%M:%S.%3N
TIME_PREFIX=^
MAX_EVENTS=1024

Can someone help me?
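Two things likely matter here. First, props.conf sourcetype stanzas don't accept wildcards (only source:: and host:: stanzas match patterns), so [kube:container:*-app] never applies. Second, events posted to HEC's /services/collector/event endpoint arrive pre-parsed, so line-breaking and line-merging props are ignored; only the /services/collector/raw endpoint runs them. A sketch, assuming Fluentd can be pointed at the raw endpoint and the timestamp layout guessed below:

```
# props.conf on the HF; applies only if Fluentd posts to
# /services/collector/raw (the /event endpoint skips line breaking).
# One stanza per exact sourcetype name, e.g.:
[kube:container:goofy-app]
SHOULD_LINEMERGE = false
# break only before a timestamp; the date pattern is an assumption
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Using SHOULD_LINEMERGE=false with a timestamp-anchored LINE_BREAKER is generally preferred over line merging for performance.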
Hello, I have a sourcetype that receives thousands of events each minute, so it is very big. I have a use case where I need to search for a specific event in this sourcetype during certain windows at night (22:30-22:40, 01:30-01:40, 03:00-03:10). I have to find all the hosts that log the specific event 3 times per night within these time periods, and I need to check the last 7 days (the result I need at the end is the count of nights with 3 occurrences of the event in the last week). As a first step, I am trying to reduce the number of events for this search by searching only within these time frames. I tried to eval new fields with the value of the hour and filter on that field, but this is not great because Splunk has to read all the events and then filter them. What is the best and most efficient way to restrict my search to only the events in the time periods above? Thanks
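A sketch of one way to do the whole thing in a single pass (index, sourcetype, and search terms are placeholders; note that this groups by calendar day, so the 22:30 window and the 01:30 window of the same night land on different days, which may need adjusting):

```
index=my_index sourcetype=my_sourcetype "specific event" earliest=-7d@d
| eval hm = tonumber(strftime(_time, "%H%M"))
| where (hm >= 2230 AND hm <= 2240) OR (hm >= 130 AND hm <= 140) OR (hm >= 300 AND hm <= 310)
| bin _time span=1d
| stats count by host _time
| where count >= 3
| stats count as nights_with_event by host
```

To cut the events scanned rather than filter after the fact, the default date_hour/date_minute fields can be put in the base search instead of a later where clause, e.g. `(date_hour=22 date_minute>=30 date_minute<=40) OR ...`, with the caveat that date_* fields reflect the event's own timestamp and time zone, not the search head's.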
Greetings, Quoting from https://docs.splunk.com/Documentation/Splunk/7.2.6/SearchReference/Commontimeformatvariables: "Refer to the list of tz database time zones for all permissible time zone values." My question: given a search expression such as strptime(SLA." ".timeZone, "%H:%M %Z"), does Splunk have a built-in time zone database that might require periodic updates (for instance, when a locale changes its daylight saving dates), or does Splunk simply use the database baked into a lower layer of the stack?
Hi, I have a very simple search but it returns wrong results. The result is incoherent, because the total number of OK and KO events should not exceed 100. Can you help me, please?
I have a CSV file whose first row contains the header fields and whose remaining rows contain values, as below:

name,application,targeturl,type
ABC,Desktop,google.com,chrome
XYZ,IOS,facebook.com,App
GHI,Andriod,twitter.com,App
KLM,Desktop,gmail.com,firefox

I have added props.conf as below:

[pp_appeaser]
CHARSET=UTF-8
INDEXED_EXTRACTIONS=csv
HEADER_FIELD_ACCEPTABLE_SPECIAL_CHARACTERS=_
KV_MODE=none
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=false
category=Structured
description=Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled=false
pulldown_type=true

In search, the header fields are extracted as fields, but the header row is also indexed as values. I have also tried the CHECK_FOR_HEADER and HEADER_FIELD_LINE_NUMBER=1 settings, with the same results. Can you please suggest how to resolve this issue so the header names are not indexed as values?
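A common cause of the header row showing up as an event: when a universal forwarder monitors the file, INDEXED_EXTRACTIONS runs on the forwarder, so the props.conf stanza must be deployed on the forwarder itself, not only on the indexer. A sketch, assuming a UF does the monitoring and the inputs.conf stanza already sets sourcetype=pp_appeaser:

```
# props.conf on the universal forwarder that monitors the CSV
[pp_appeaser]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
SHOULD_LINEMERGE = false
KV_MODE = none
```

If the props lived only on the indexer, the UF would ship the header line as an ordinary event, which matches the symptom described.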
I am trying to add a time picker to my dashboard and it is not working. Do I need certain permissions to do this? I have the following listed on my panels: there does not appear to be an Edit Search option, only a View Report option.
Hi Everyone, I am new to Splunk Cloud app development. I received some warning messages after AppInspect from the Splunk Cloud team, mostly related to file access in Python code. After searching the documentation and Google, I came up with an approach, but I'm not sure it works. Could anyone tell me whether the following avoids the warning message?

try:
    from splunk.clilib.bundle_paths import make_splunkhome_path
except ImportError:
    from splunk.appserver.mrsparkle.lib.util import make_splunkhome_path

_file = make_splunkhome_path(["var", "log", "splunk", log_file_name])
with open(_file, 'w') as filehandler:
    filehandler.write("content need to write to file")

FYI, the warning messages referred to "Method used to write/manipulate/remove to/from files outside of the app dir". Thanks. John
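For anyone testing this pattern outside a Splunk environment: the real make_splunkhome_path comes from splunk.clilib.bundle_paths, but its effect can be illustrated with a plain stand-in that joins path components under $SPLUNK_HOME. Everything below is a sketch; the default path and file name are assumptions, not Splunk internals.

```python
import os

def make_splunkhome_path_fallback(parts):
    """Illustrative stand-in for Splunk's make_splunkhome_path:
    joins path components under $SPLUNK_HOME (the /opt/splunk
    default here is an assumption for illustration only)."""
    home = os.environ.get("SPLUNK_HOME", "/opt/splunk")
    return os.path.join(home, *parts)

# Same shape as the snippet in the question, with a placeholder file name
log_path = make_splunkhome_path_fallback(["var", "log", "splunk", "my_app.log"])
```

Writing under $SPLUNK_HOME/var/log/splunk (rather than an absolute path outside the app or Splunk tree) is the usual way to satisfy that AppInspect check, though the Splunk Cloud team's review is authoritative.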
Hello, I have Splunk 8.0.2. My Splunk instance is hosted in AWS and has 2 volumes (1 is the root volume). What would be an appropriate alert query to trigger when a disk is 80% full? Thanks
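One sketch using Splunk's own REST endpoint for mounted partitions (the capacity and free fields are reported in MB; the 80 threshold below is the value from the question):

```
| rest /services/server/status/partitions-space
| eval pct_used = round((capacity - free) / capacity * 100, 2)
| where pct_used > 80
| table splunk_server mount_point pct_used
```

Saved as an alert with "trigger when number of results > 0", this fires whenever any monitored partition crosses 80%; an OS-level monitor (e.g. CloudWatch) is a reasonable alternative for volumes Splunk does not report on.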
Hello Everyone. This is my first post in the forum, please be gentle. I've spent an inordinate amount of time trying to get this to work. I have a requirement to take an FQDN passed as a URL parameter to a dashboard and convert it to a short hostname for use as a token in the searches of several panels. I have tried just about every combination of <init>, <eval>, and <set>, to no avail, to get it to populate at page load. I have even tried, unsuccessfully, to attack this in the search itself by using something similar to this:

index="syslog_main" source="/var/log/messages"
| eval short_name=replace(fqdn,"^([^\.]+).+","\1")
| where like(host, "%short_name%")

Can someone suggest a new strategy? At this point I'm probably just making it too complicated. TIA
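A minimal Simple XML sketch of the token approach (it assumes the dashboard is opened as ...?form.fqdn=myhost.example.com so the fqdn token exists at load time; the index/source in the panel search are taken from the attempt above):

```
<form>
  <init>
    <!-- token substitution is textual, so the token is quoted inside eval -->
    <eval token="short_name">mvindex(split("$fqdn$", "."), 0)</eval>
  </init>
  <row>
    <panel>
      <table>
        <search>
          <query>index="syslog_main" source="/var/log/messages" host="$short_name$"</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>
```

As a side note, the search-only attempt fails because "%short_name%" in like() is a literal string, not a field reference; filtering on the derived token avoids that entirely.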
My company is currently using Splunk to collect all Office 365 logs. We are currently having issues with Teams. I can see most data, but when I go to the Teams call overview I'm unable to see any logs. The sourcetype is m365:teams:callRecord. Should we be able to see this? I'm not getting any logs from this sourcetype. Any help would be appreciated.
Hi Splunkers, I am currently trying to create a gauge visualization, but the issue is that my daily number of events is showing up as 0. This is my query:

host=* COMMAND="PWD"
| bucket _time span=day
| stats count by _time
| outlier
| stats max(count) as mx
| eval y1=mx/4
| eval y2=y1*2
| eval y3=y1*3
| eval y4=mx
| gauge count 0 y1 y2 y3 y4

[Screenshot: gauge with dynamic values]

As you can see, the gauge is pegged at zero. The needle should represent the total number of events for today. Any suggestions? Thank you, Marco
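The likely bug: after `stats max(count) as mx`, the field count no longer exists, so `gauge count ...` renders 0. A sketch that computes today's count for the needle and keeps the dynamic ranges (field names follow the query above; the -30d baseline window is an assumption):

```
host=* COMMAND="PWD" earliest=@d
| stats count as today
| appendcols
    [ search host=* COMMAND="PWD" earliest=-30d@d latest=@d
      | bucket _time span=day
      | stats count by _time
      | outlier
      | stats max(count) as mx ]
| eval y1=mx/4, y2=y1*2, y3=y1*3, y4=mx
| gauge today 0 y1 y2 y3 y4
```

The subsearch supplies the historical maximum for the range bands, while the outer search supplies today's count as the gauge value.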
I have a .log file that looks similar to the example below, and I've tried multiple props.conf configurations to get the best results. I'm reaching out to more experienced users who might have a better way of organizing the data. The first 13 lines are single lines of information that I extract with a regex, but after that the format changes to tab-delimited, with multiple events per line. See the example below; all inputs are arbitrary but resemble the actual log.

LOG FILE: "the actual path"  (I used a regex to extract these header lines)
TEST DATE: 2019/10/27
TEST START: 10:32:25 AM
OPERATOR ID: xxxxxxxx

(tab-delimited portion)
*****           *****            *****        *****
time            seq              host         information
10:30:20 pm     (can be blank)   computer1    PASS
10:30:22 pm     Seq 1            computer1    Fail

My main objective is to index everything as fields, but I also want to use that "TEST START" value and the last "time" value to create reports about test-station usage. Thanks in advance for the help.
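One hedged starting point: keep every line a single event and extract the tab-delimited rows at search time, leaving the header lines to the existing regexes. The sourcetype name and field regexes below are assumptions about the layout shown above:

```
# props.conf sketch for the log layout above
[test_station_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# search-time extraction for the tab-delimited rows only;
# header lines like "TEST DATE: ..." simply won't match
EXTRACT-row = ^(?<row_time>\d{1,2}:\d{2}:\d{2} [apAP][mM])\t+(?<seq>[^\t]*)\t+(?<station_host>[^\t]+)\t+(?<information>.+)$
```

Reporting on station usage could then compare the extracted TEST START against max(row_time) per file, e.g. with stats by source.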
Hi everybody, I created a dashboard with three checkboxes. Each of them amends the search behind a single-value panel. When I tick a checkbox, nothing happens; I have to refresh the whole webpage to trigger the search behind the panel with the new settings. Is there a way to trigger the panel refresh by ticking/unticking a checkbox?
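A minimal Simple XML sketch, assuming the panel search references the checkbox token: searchWhenChanged="true" on each input is usually all that is needed; if the panel still won't re-run, a <change> handler that re-sets a token the search depends on forces it. Token and label names below are placeholders:

```
<input type="checkbox" token="tok_option" searchWhenChanged="true">
  <label>Options</label>
  <choice value="A">Option A</choice>
  <change>
    <!-- re-setting a token that the panel search uses forces a re-run -->
    <set token="tok_trigger">$tok_option$</set>
  </change>
</input>
```

The panel's <query> must actually contain $tok_option$ (or $tok_trigger$); a search with no token reference has nothing to react to.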
@gcusello Please help me with the questions below: 1. How do I upgrade syslog-ng from an older version to a newer one? 2. How do I install syslog-ng on a Linux server? Please help with all the steps.
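A hedged sketch of the usual package-manager route (exact package and repository names vary by distribution and by the syslog-ng version you need; check your distro's docs before running):

```
# RHEL/CentOS-style hosts
sudo yum install syslog-ng

# Debian/Ubuntu-style hosts
sudo apt-get update
sudo apt-get install syslog-ng

# verify the version before and after an upgrade
syslog-ng --version
```

An upgrade is typically the same install command once the newer package is available in the configured repository, after backing up /etc/syslog-ng/syslog-ng.conf.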
I ran the query below:

index=s sourcetype=S_1
| search Gene="dow" OR Gene="x" OR Gene="ari" OR Gene="lia" OR Gene="SX" OR Gene=z
| append [search index=s sourcetype="S_2" | fillnull | eval Gene="rage"]
| append [search index=s sourcetype=S_3 | fillnull | eval Gene="ork"]
| append [search index=s sourcetype=S_4 | fillnull | eval Gene="tat"]
| append [search index=s sourcetype=S_5 | fillnull | eval Gene="bas"]
| append [search index=s sourcetype=S_6 | fillnull | eval Gene="bas1"]
| append [search index=s sourcetype=S_7 | fillnull value="" | eval Gene="App" | fields *]
| rename Gene as General
| stats count by General, "Report"
| eventstats sum(*) as sum_* by General
| foreach * [eval "Status %"=round((count/sum_count)*100,2)]
| rename count as Count
| fields - sum_count
| chart values("Status %") over "Report" by General
| sort "Report" desc

I expect the first result below, but I get the second, where "bas1" and "App" are grouped together as "OTHER", and it happens after I use the chart command. Can anyone help me out?
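The "OTHER" bucket is almost certainly the chart command's default series limit (10 split-by values), which folds the remainder into OTHER. A sketch of the final line, with the rest of the query unchanged:

```
| chart values("Status %") over "Report" by General limit=0 useother=f
```

limit=0 removes the series cap and useother=f suppresses the OTHER column even when a limit applies.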
Hello, as you can see below, I call a savedsearch in my dashboard and I link my table panel with a drilldown:

<form stylesheet="format.css">
  <label>Logon and reboot</label>...<fieldset submitButton="true" autoRun="true">
    <input type="dropdown" token="tok_filtersite" searchWhenChanged="true">
      <label>Site</label>
      <choice value="*">*</choice>
      <initialValue>*</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <title></title>
      <table>
        <title></title>
        <search>
          <query>| loadjob savedsearch="admin:TUTU_sh:Event - LogonReboot" | search Site=$tok_filtersite|s$</query>
          <earliest>-30d@d</earliest>
          <latest>now</latest>
        </search>
        <drilldown>
          <link target="_blank">/app/TUTU_sh/event_monitoring__last_reboot_and_last_logon_details?Site=$tok_filtersite|s$</link>
        </drilldown>
      </table>
    </panel>
  </row>
</form>

The search in my drilldown is the same as the one in the savedsearch, except that it has new fields in the stats command and different token filters. I have two problems with my drilldown: 1) I need to improve performance, because the search covers the last 30 days. 2) There is a small gap between the events returned by the savedsearch and the results returned by the drilldown. I need the drilldown to perform well and to cover the same perimeter of events as the savedsearch. Can anybody advise me, please?
<form> <label>Event monitoring - Last reboot and last logon details</label> <fieldset submitButton="true"> <input type="text" token="tok_filterhost" searchWhenChanged="true"> <label>Hostname</label> <default>*</default> <initialValue>*</initialValue> </input> <input type="text" token="tok_reboot" searchWhenChanged="true"> <label>Days without reboot</label> <default>=*</default> <initialValue>*</initialValue> </input> <input type="text" token="tok_logon" searchWhenChanged="true"> <label>Days without logon</label> <default>=*</default> <initialValue>*</initialValue> </input> <input type="text" token="tok_filtermodel" searchWhenChanged="true"> <label>Model.</label> <default>*</default> <initialValue>*</initialValue> </input> <input type="text" token="tok_filterbuilding" searchWhenChanged="true"> <label>Building.</label> <default>*</default> <initialValue>*</initialValue> </input> <input type="text" token="tok_filteros" searchWhenChanged="true"> <label>OS.</label> <default>*</default> <initialValue>*</initialValue> </input> </fieldset> <row> <panel> <table> <search> <query> [| inputlookup host.csv | table host] `LastLogonBoot` | fields host SystemTime EventCode | eval host=upper(host) | eval SystemTime=strptime(SystemTime, "'%Y-%m-%dT%H:%M:%S.%9Q%Z'") | stats latest(SystemTime) as SystemTime by host EventCode | xyseries host EventCode SystemTime | rename "6005" as LastLogon "6006" as LastReboot | eval NbDaysLogon=round((now() - LastLogon)/(3600*24), 0) | eval NbDaysReboot=round((now() - LastReboot )/(3600*24), 0) | eval LastLogon=strftime(LastLogon, "%y-%m-%d %H:%M") | eval LastReboot=strftime(LastReboot, "%y-%m-%d %H:%M") | search NbDaysLogon$tok_logon$ | search NbDaysReboot$tok_reboot$ | lookup lookup_patch "Computer" as host output FileName | lookup fo_all HOSTNAME as host output SITE COUNTRY TOWN ROOM BUILDING_CODE DESCRIPTION_MODEL MANUFACTURER_NAME OS | search SITE=$Site$ | search NbDaysReboot &gt;= 15 AND NbDaysLogon &gt;= 15 | stats last(LastReboot) as "Last 
reboot date", last(NbDaysReboot) as "Days without reboot", last(LastLogon) as "Last logon date", last(NbDaysLogon) as "Days without logon", last(MANUFACTURER_NAME) as Manufacturer, last(DESCRIPTION_MODEL) as Model, last(OS) as OS, last(FileName) as "Patch level", last(COUNTRY) as Country, last(TOWN) as Town, last(SITE) as Site, last(BUILDING_CODE) as Building, last(ROOM) as Room by host | rename host as Hostname | search Building=$tok_filterbuilding$ | search Model=$tok_filtermodel$ | search Hostname=$tok_filterhost$ | search OS=$tok_filteros$ | sort -"Days without logon" -"Days without reboot"</query> <earliest>-30d@d</earliest> <latest>now</latest> </search> </table> </panel> </row> </form>      
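One hedged approach to both problems at once: extend the scheduled savedsearch so its result table already contains every field both dashboards need, then have the drilldown dashboard read the same frozen artifact via loadjob instead of re-searching 30 days of raw events. Both views then share one result set, which removes the perimeter gap as well as the runtime. A sketch of the drilldown panel's search (savedsearch and token names follow the dashboards above; the assumption is that the savedsearch now emits these columns):

```
<search>
  <query>| loadjob savedsearch="admin:TUTU_sh:Event - LogonReboot"
| search SITE=$Site$ host=$tok_filterhost$ DESCRIPTION_MODEL=$tok_filtermodel$ OS=$tok_filteros$</query>
</search>
```

The remaining stats/rename/sort steps of the drilldown can run on top of the loadjob output, which is cheap since it operates on the cached rows rather than raw events.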
How do I integrate Splunk with third-party applications without using add-ons?
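Without add-ons, the usual integration points are the HTTP Event Collector (for sending data in) and the REST API (for running searches). A sketch of building an HEC request in plain Python; the URL, token, and index below are placeholders, and the request is only constructed here, not sent:

```python
import json
import urllib.request

def build_hec_request(hec_url, token, event, index="main"):
    """Build (but do not send) an HTTP Event Collector request.
    hec_url is typically https://<host>:8088/services/collector/event."""
    payload = json.dumps({"event": event, "index": index}).encode("utf-8")
    return urllib.request.Request(
        url=hec_url,
        data=payload,
        headers={
            "Authorization": "Splunk " + token,  # HEC token auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

# placeholder endpoint and token for illustration
req = build_hec_request(
    "https://splunk.example.com:8088/services/collector/event",
    "00000000-0000-0000-0000-000000000000",
    {"message": "hello from a third-party app"},
)
```

Sending it is then `urllib.request.urlopen(req)` against a reachable HEC endpoint; searches can similarly be driven through the REST API at /services/search/jobs with the same standard-library tooling.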
Hi Everyone, I need to create a dashboard showing the locations from which users access Splunk Web. The issue is that in my _internal web-access logs, every log has the same IP address, 127.0.0.1. How do I change this configuration so I can tell from which location each user is accessing Splunk Web? Thanks in advance.
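If a load balancer or reverse proxy sits in front of Splunk Web, 127.0.0.1 usually means the proxy's loopback connection is being logged instead of the real client. A hedged sketch, applicable only when such a proxy fronts Splunk Web and forwards X-Forwarded-For:

```
# web.conf on the search head
[settings]
tools.proxy.on = true
```

Once real client IPs appear in the logs, something like `index=_internal sourcetype=splunk_web_access | iplocation clientip | stats count by clientip, City, Country` can drive the location dashboard.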
Hi all, is it possible to join 2 search results like the following?

Set 1: _time, field1, field2, field3 (common field)

Set 2: _time, fieldA (multivalue, contains start/end time), fieldB, field3 (common field)

Then join on the common field3, with the condition: fieldA (start) < _time (Set 1) < fieldA (end).

Thanks a lot. Regards, /stwong
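One sketch using join (index names are placeholders; this assumes fieldA is a multivalue field whose first value is the start epoch and second value is the end epoch, and note that join keeps only the first matching Set 2 row per field3 value, so multiple ranges per key need a different approach, e.g. stats over both sets):

```
index=idx_set1
| rename _time as event_time
| join type=inner field3
    [ search index=idx_set2
      | eval start = mvindex(fieldA, 0), end = mvindex(fieldA, 1)
      | fields field3 start end fieldB ]
| where event_time > start AND event_time < end
| table event_time field1 field2 field3 fieldB
```

If the start/end values are strings rather than epochs, a strptime() on them before the where clause is needed so the comparison is numeric.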