All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have configured the connection between the heavy forwarder and the indexer, and I created a custom index on the indexer. When I configure HEC on the heavy forwarder, I expected to be able to select the index that was created on the indexer. However, I cannot select the custom index from the heavy forwarder. Are there any suggestions on properly forwarding HEC logs from the heavy forwarder to the indexer? Thank you.
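A hedged sketch of one common workaround: the HEC index dropdown on the heavy forwarder is populated from the indexes the forwarder itself knows about, so defining the custom index locally on the HF (in addition to the real definition on the indexer) makes it selectable, while the data is still forwarded on by outputs.conf. The index name below is a placeholder:

```
# indexes.conf on the heavy forwarder (e.g. $SPLUNK_HOME/etc/apps/<your_app>/local/indexes.conf)
# Minimal definition so the index name exists locally and appears in the HEC UI;
# the events themselves are still forwarded to the indexer.
[my_custom_index]
homePath   = $SPLUNK_DB/my_custom_index/db
coldPath   = $SPLUNK_DB/my_custom_index/colddb
thawedPath = $SPLUNK_DB/my_custom_index/thaweddb
```

With indexAndForward left disabled in outputs.conf (the usual setup when everything is forwarded), the local definition mainly serves to make the index name visible to the HEC token configuration.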
Is there any possible way to sort the parallel coordinates visualization? My search ends with .... | table count product test, where count is an integer (number). In the parallel coordinates visualization, the number comes first, followed by the alphabetic fields:

3    Alexa    Ball

Is it possible to reverse the visualization format to something like:

Alexa    Ball    3
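One hedged suggestion, assuming the parallel coordinates visualization lays its axes out in the order of the table columns: simply reorder the fields in the final table command so the numeric field comes last.

```
... | table product test count
```

If the visualization still places the numeric axis first regardless of column order, converting the field with eval tostring() is another thing to try, though that changes how the axis values are sorted.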
Hi All, I want to display some additional fields in Incident Review, and I added them by following this method: Configure -> Incident Management -> Incident Review Settings, then under Incident Review - Event Attributes add the new fields, after which they should appear on the Incident Review page. However, after I save my settings, the fields are not displayed on the Incident Review page. Any assistance will be appreciated.
Hi all, sorry for asking a very basic question; I'm quite new to the Splunk world. I have a pie chart created by the following code:

```
<panel>
  <chart>
    <search>
      <query>
        index=test_index
        | search splunk_id="$splunk_id$"
        | table campaign_data.All.total_passed campaign_data.All.total_failed campaign_data.All.total_not_run
        | rename campaign_data.All.total_passed as "Passed" campaign_data.All.total_failed as "Failed" campaign_data.All.total_not_run as "Not Run"
        | eval name="No of Tests"
        | transpose 0 header_field=name
      </query>
    </search>
    <option name="charting.chart">pie</option>
    <option name="charting.drilldown">none</option>
    <option name="charting.fieldColors">{"Failed": 0xFF0000, "Not Run": 0x808080, "Passed":0x009900, "NULL":0xC4C4C0}</option>
    <option name="refresh.display">progressbar</option>
  </chart>
</panel>
```

When I hover over a slice of the resulting pie chart, it shows 3 rows of data. What I want is that in the 3rd row, the percentage is cut off at 2 decimal places. Also, can we change the label "No of tests%" to "Percentage" but keep the 2nd row's data value as it is? Is that possible? Thanks!
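If nothing else, the rounding part can be handled in SPL rather than in the chart itself. A minimal, self-contained sketch with made-up numbers (the tooltip rows and their labels are generated by the charting library, so renaming "No of Tests%" from the search alone may not be possible):

```
| makeresults count=3
| streamstats count as row
| eval Result=case(row=1, "Passed", row=2, "Failed", row=3, "Not Run")
| eval count=case(row=1, 45, row=2, 3, row=3, 2)
| eventstats sum(count) as total
| eval Percentage=round((count / total) * 100, 2)
| table Result count Percentage
```

Applied to the real data, the same round(x, 2) pattern caps a computed percentage at two decimal places; whether the built-in tooltip percentage honours it is a separate question.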
I would like to get the number of hosts per index over the last 7 days. The query below gives me the right format but not the correct numbers:

| tstats dc(host) where index=* by _time index | timechart span=1d dc(host) by index

Any idea? Thanks!

            Index A  Index B  Index C  Index D  Index E  Index F  Index G  Index H  Index I  Index J
2022-10-05  0        0        0        0        0        0        0        0        0        0
2022-10-06  0        0        0        0        0        0        0        0        0        0
2022-10-07  0        0        0        0        0        0        0        0        0        0
2022-10-08  0        0        0        0        0        0        0        0        0        0
2022-10-09  0        0        0        0        0        0        0        0        0        0
2022-10-10  0        0        0        0        0        0        0        0        0        0
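A hedged guess at the zeros: when tstats is piped into timechart without prestats mode, the dc(host) values don't carry through and timechart has nothing to re-aggregate. A sketch of the usual pattern, with the last-7-days range set in the time picker:

```
| tstats prestats=t dc(host) where index=* by _time span=1d index
| timechart span=1d dc(host) by index
```

An alternative without timechart is:

```
| tstats dc(host) as hosts where index=* by _time span=1d index
| xyseries _time index hosts
```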
I am trying to track down why my Windows universal forwarder is not forwarding to the Splunk server/index. I can't seem to see anything, for example over the past 24 hours, and I'm not sure why. Here is the inputs.conf in use:

##
## SPDX-FileCopyrightText: 2021 Splunk, Inc. <sales@splunk.com>
## SPDX-License-Identifier: LicenseRef-Splunk-8-2021
## DO NOT EDIT THIS FILE!
## Please make all changes to files in $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local.
## To make changes, copy the section/stanza you want to change from $SPLUNK_HOME/etc/apps/Splunk_TA_windows/default
## into ../local and edit there.
##

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true

[WinEventLog://System]
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml=true

###### Host monitoring ######
[WinHostMon://Computer]
interval = 600
disabled = 0
index = hostmonitoring
type = Computer

[WinHostMon://Process]
interval = 600
disabled = 0
index = hostmonitoring
type = Process

[WinHostMon://Processor]
interval = 600
disabled = 0
index = hostmonitoring
type = Processor

[WinHostMon://NetworkAdapter]
interval = 600
disabled = 0
index = hostmonitoring
type = NetworkAdapter

[WinHostMon://Service]
interval = 600
disabled = 0
index = hostmonitoring
type = Service

[WinHostMon://Disk]
interval = 600
disabled = 0
index = hostmonitoring
type = Disk

###### Splunk 5.0+ Performance Counters ######
## CPU
[perfmon://CPU]
counters = % Processor Time; % User Time; % Privileged Time; Interrupts/sec; % DPC Time; % Interrupt Time; DPCs Queued/sec; DPC Rate; % Idle Time; % C1 Time; % C2 Time; % C3 Time; C1 Transitions/sec; C2 Transitions/sec; C3 Transitions/sec
disabled = 0
index = perfmoncpu
instances = *
mode = multikv
object = Processor
useEnglishOnly=true

## Logical Disk
[perfmon://LogicalDisk]
counters = % Free Space; Free Megabytes; Current Disk Queue Length; % Disk Time; Avg. Disk Queue Length; % Disk Read Time; Avg. Disk Read Queue Length; % Disk Write Time; Avg. Disk Write Queue Length; Avg. Disk sec/Transfer; Avg. Disk sec/Read; Avg. Disk sec/Write; Disk Transfers/sec; Disk Reads/sec; Disk Writes/sec; Disk Bytes/sec; Disk Read Bytes/sec; Disk Write Bytes/sec; Avg. Disk Bytes/Transfer; Avg. Disk Bytes/Read; Avg. Disk Bytes/Write; % Idle Time; Split IO/Sec
disabled = 0
index = perfmonlogicaldisk
instances = *
interval = 60
mode = multikv
object = LogicalDisk
useEnglishOnly=true

## Physical Disk
[perfmon://PhysicalDisk]
counters = Current Disk Queue Length; % Disk Time; Avg. Disk Queue Length; % Disk Read Time; Avg. Disk Read Queue Length; % Disk Write Time; Avg. Disk Write Queue Length; Avg. Disk sec/Transfer; Avg. Disk sec/Read; Avg. Disk sec/Write; Disk Transfers/sec; Disk Reads/sec; Disk Writes/sec; Disk Bytes/sec; Disk Read Bytes/sec; Disk Write Bytes/sec; Avg. Disk Bytes/Transfer; Avg. Disk Bytes/Read; Avg. Disk Bytes/Write; % Idle Time; Split IO/Sec
disabled = 0
index = perfmonphysicaldisk
instances = *
interval = 60
mode = multikv
object = PhysicalDisk
useEnglishOnly=true

## Memory
[perfmon://Memory]
counters = Page Faults/sec; Available Bytes; Committed Bytes; Commit Limit; Write Copies/sec; Transition Faults/sec; Cache Faults/sec; Demand Zero Faults/sec; Pages/sec; Pages Input/sec; Page Reads/sec; Pages Output/sec; Pool Paged Bytes; Pool Nonpaged Bytes; Page Writes/sec; Pool Paged Allocs; Pool Nonpaged Allocs; Free System Page Table Entries; Cache Bytes; Cache Bytes Peak; Pool Paged Resident Bytes; System Code Total Bytes; System Code Resident Bytes; System Driver Total Bytes; System Driver Resident Bytes; System Cache Resident Bytes; % Committed Bytes In Use; Available KBytes; Available MBytes; Transition Pages RePurposed/sec; Free & Zero Page List Bytes; Modified Page List Bytes; Standby Cache Reserve Bytes; Standby Cache Normal Priority Bytes; Standby Cache Core Bytes; Long-Term Average Standby Cache Lifetime (s)
disabled = 0
index = perfmonmemory
interval = 60
mode = multikv
object = Memory
useEnglishOnly=true

## Network
[perfmon://Network]
counters = Bytes Total/sec; Packets/sec; Packets Received/sec; Packets Sent/sec; Current Bandwidth; Bytes Received/sec; Packets Received Unicast/sec; Packets Received Non-Unicast/sec; Packets Received Discarded; Packets Received Errors; Packets Received Unknown; Bytes Sent/sec; Packets Sent Unicast/sec; Packets Sent Non-Unicast/sec; Packets Outbound Discarded; Packets Outbound Errors; Output Queue Length; Offloaded Connections; TCP Active RSC Connections; TCP RSC Coalesced Packets/sec; TCP RSC Exceptions/sec; TCP RSC Average Packet Size
disabled = 0
index = perfmonnetwork
instances = *
interval = 60
mode = multikv
object = Network Interface
useEnglishOnly=true

## Process
[perfmon://Process]
counters = % Processor Time; % User Time; % Privileged Time; Virtual Bytes Peak; Virtual Bytes; Page Faults/sec; Working Set Peak; Working Set; Page File Bytes Peak; Page File Bytes; Private Bytes; Thread Count; Priority Base; Elapsed Time; ID Process; Creating Process ID; Pool Paged Bytes; Pool Nonpaged Bytes; Handle Count; IO Read Operations/sec; IO Write Operations/sec; IO Data Operations/sec; IO Other Operations/sec; IO Read Bytes/sec; IO Write Bytes/sec; IO Data Bytes/sec; IO Other Bytes/sec; Working Set - Private
disabled = 0
index = perfmonprocess
instances = *
interval = 60
mode = multikv
object = Process
useEnglishOnly=true

## ProcessInformation
[perfmon://ProcessorInformation]
counters = % Processor Time; Processor Frequency
disabled = 0
index = perfmonprocessinfo
instances = *
interval = 60
mode = multikv
object = Processor Information
useEnglishOnly=true

## System
[perfmon://System]
counters = File Read Operations/sec; File Write Operations/sec; File Control Operations/sec; File Read Bytes/sec; File Write Bytes/sec; File Control Bytes/sec; Context Switches/sec; System Calls/sec; File Data Operations/sec; System Up Time; Processor Queue Length; Processes; Threads; Alignment Fixups/sec; Exception Dispatches/sec; Floating Emulations/sec; % Registry Quota In Use
disabled = 0
index = perfmonsystem
instances = *
interval = 60
mode = multikv
object = System
useEnglishOnly=true
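Nothing in the inputs above stands out as disabled, so one hedged place to look next is the output side and the forwarder's own internal logs. A minimal outputs.conf sketch (the indexer hostname and port are placeholder values, not taken from the original post):

```
# outputs.conf on the universal forwarder -- server value is an example
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunk-indexer.example.com:9997
```

From the search head, a search like index=_internal host=<forwarder_name> over the last 24 hours shows whether the forwarder is phoning home at all; on the forwarder itself, $SPLUNK_HOME\var\log\splunk\splunkd.log usually records connection problems such as blocked queues or unreachable output destinations. Also worth checking: the custom indexes referenced above (wineventlog, hostmonitoring, perfmon*) must exist on the indexer, otherwise the indexer will log errors about events arriving for a missing index.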
I have set up different alerts. I would like to set up a report that would give me stats for each alert. Example:

Alert     Count
Alert1    25
Alert2    3
Alert3    128
Alert4    18

Is there a way to do this?
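Assuming the alerts run as scheduled saved searches, one hedged sketch is to count runs from the scheduler's own logs (field names below are the ones the scheduler sourcetype normally exposes):

```
index=_internal sourcetype=scheduler status=success alert_actions=*
| stats count as Count by savedsearch_name
| rename savedsearch_name as Alert
| sort - Count
```

The alert_actions=* filter is meant to keep only runs that fired an alert action; depending on how the alerts are configured, filtering on result_count>0 instead may be a closer match for "times the alert actually triggered".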
I'm trying to convert a field with multiple results into a multivalue field. I'm querying a host lookup table that has several hostnames. I'd like to create a single multivalue field containing all the hostnames returned by the inputlookup command, separated by commas. I'm using the makemv command to do this, but it returns each host as a separate result instead of a single result with all the hosts separated by commas. Any suggestions? Here's my query:

| inputlookup host_table fields hostname | makemv delim="," hostname | table hostname

Thanks in advance.
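makemv splits one event's field into multiple values, which may be why each hostname still comes back as its own row here. A hedged sketch of the usual pattern for collapsing many rows into one multivalue (or comma-joined) field:

```
| inputlookup host_table
| fields hostname
| stats values(hostname) as hostname
| eval hostname_csv=mvjoin(hostname, ",")
```

stats values() produces a single result with hostname as a multivalue field, and mvjoin turns it into one comma-separated string if that is the preferred output.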
I have a few checkboxes where my panels are displayed when I select them and hidden when I unselect them; up to here I am good. But my requirement is: under Mainframe I have 2 checkboxes, Source and Destination; under Services I have 6 checkboxes, service1, service2, ...; under Items I have 4 checkboxes, item1, item2, ....

So here, service1, service2, service3, item1, item2 belong to Source, and service4, service5, service6, item3, item4 belong to Destination.

My panel should display only when I select the Source, service1, and item1 checkboxes. A sketch of the layout:

Mainframe      Services     Items
source         service1     item1
destination    service2     item2
               service3     item3
               service4     item4
               service5
               service6

How can I do that?
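One hedged way to approach this in Simple XML is to give each checkbox group its own token and make the panel depend on all of them, so it only renders when every required token is set. A rough sketch (token names and panel contents are placeholders, not taken from the original dashboard):

```
<input type="checkbox" token="tok_source">
  <label>Mainframe</label>
  <choice value="source">Source</choice>
</input>
<input type="checkbox" token="tok_service1">
  <label>Services</label>
  <choice value="service1">service1</choice>
</input>
<input type="checkbox" token="tok_item1">
  <label>Items</label>
  <choice value="item1">item1</choice>
</input>

<panel depends="$tok_source$ $tok_service1$ $tok_item1$">
  <title>Source / service1 / item1</title>
  <!-- chart or table goes here -->
</panel>
```

A checkbox token is only defined while its box is ticked, so the depends attribute hides the panel unless Source, service1, and item1 are all selected. Mapping which services/items belong to Source versus Destination then becomes a matter of which tokens each panel depends on.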
Hello, I realize this is a rather specific request, so I'll keep it short and simple to see if anyone has had previous experience or any creative resolutions to this issue. I have successfully configured an AWS IAM role and user within a dedicated account in our AWS environment, where CloudTrail logs are sent and kept in cold storage in the form of an S3 bucket. I have also successfully configured an incremental S3 input, which I've tested as working, but currently the volume of CloudTrail data from our AWS accounts exceeds what we are licensed for in Splunk. I'm hoping there's some way, within the Log Prefix field, to choose which accounts/directory paths to monitor within the dedicated S3 bucket, so I only monitor the accounts I want without ingesting data from all the other accounts. I'm sure this can be done in the form of an SQS queue on the AWS side of things, but before going that far I'm wondering what can be done given the access and configurations I've already made and obtained. Thanks in advance!
Hello, I was wondering if anyone could help me with this simple problem. I'm trying to graph the total number of good calls and bad calls, as well as their fail-rate percentages, on a chart. So far I've been able to chart the counts of good calls and bad calls according to the respective 'channel' they were on, but the Fail_Rate percentage field that I've tried to define doesn't seem to be working out. I've tried a few different ways of plotting the Fail_Rate, but at this point I'm questioning whether or not I've defined the field correctly:

source="C:\\Call_logs" termcodeID=1 OR termcodeID=34 OR termcodeID=7 OR termcodeID=9 OR termcodeID=21 OR termcodeID=27 OR termcodeID=30 OR termcodeID=32 OR termcodeID=34 ChanID!=0
| eval Good=if(termcodeID=1,"Good", "Bad")
| eventstats count(termcodeID) as totalcalls
| eval Fail_Rate=sum((Bad/totalcalls)*100,1)
| chart count over ChanID by Good
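A hedged rework of the Fail_Rate calculation: sum() is a stats/eventstats aggregation rather than an eval function, and the field Bad is never actually created by the eval above, so eventstats can carry the per-channel counts instead. This keeps the original field names but is only a sketch:

```
source="C:\\Call_logs" ChanID!=0 termcodeID IN (1, 7, 9, 21, 27, 30, 32, 34)
| eval Good=if(termcodeID=1, "Good", "Bad")
| eventstats count as totalcalls, count(eval(Good="Bad")) as badcalls by ChanID
| eval Fail_Rate=round((badcalls / totalcalls) * 100, 1)
| stats count(eval(Good="Good")) as Good_calls, count(eval(Good="Bad")) as Bad_calls, latest(Fail_Rate) as Fail_Rate by ChanID
```

Showing both raw counts and a percentage on the same panel usually also needs a chart overlay / secondary axis, since the scales differ.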
Hello, I wonder if someone could help me out with a query. I'm trying to compare a value against different points in time, for multiple sources. For some reason, appendcols seems to be adding columns without matching on any field, so if the values come back in a different order (not all data sources report at the same times), the report mixes results from different data sources and they don't match. Basically, I need this query to join/group on the actual_data_source field:

index=bi sourcetype=dbx_bi source=automation earliest=-1h@h latest=@h
| bin _time span=1h
| stats sum(error_percentage) as last_hour_percentage by _time,actual_data_source
| appendcols [search index=bi sourcetype=dbx_bi source=automation earliest=-169h@h latest=-168h@h
    | bin _time as last_week span=1h
    | stats sum(error_percentage) as last_week_percentage by last_week,actual_data_source ]
| appendcols [search index=bi sourcetype=dbx_bi source=automation earliest=-673h@h latest=-672h@h
    | bin _time as last_month span=1h
    | stats sum(error_percentage) as last_month_percentage by last_month,actual_data_source ]
| eval last_hour=strftime(_time,"%Y-%m-%d %H:%M (%:::z %Z)"), last_week=strftime(last_week,"%Y-%m-%d %H:%M (%:::z %Z)"), last_month=strftime(last_month,"%Y-%m-%d %H:%M (%:::z %Z)"), change=(last_hour_percentage-last_week_percentage)
| search last_hour_percentage>10
| table actual_data_source, last_hour, last_hour_percentage, last_week, last_week_percentage, last_month, last_month_percentage

Thanks!
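A hedged alternative that ties each sub-period to the matching actual_data_source instead of relying on row order (join has its own subsearch limits, so this is a sketch of the idea rather than a drop-in replacement):

```
index=bi sourcetype=dbx_bi source=automation earliest=-1h@h latest=@h
| stats sum(error_percentage) as last_hour_percentage by actual_data_source
| join type=left actual_data_source
    [ search index=bi sourcetype=dbx_bi source=automation earliest=-169h@h latest=-168h@h
      | stats sum(error_percentage) as last_week_percentage by actual_data_source ]
| join type=left actual_data_source
    [ search index=bi sourcetype=dbx_bi source=automation earliest=-673h@h latest=-672h@h
      | stats sum(error_percentage) as last_month_percentage by actual_data_source ]
| eval change=(last_hour_percentage - last_week_percentage)
| where last_hour_percentage > 10
| table actual_data_source, last_hour_percentage, last_week_percentage, last_month_percentage, change
```

The formatted per-period timestamps from the original (last_hour, last_week, last_month) are dropped here for brevity; they could be carried through with first(_time) inside each stats.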
Hi, we got a message from Splunk that our universal forwarder certificate package will be expiring soon. When trying to update the package following their instructions for installing the credentials package (which works on a new/clean install), it returns that we need to use the update argument:

App "100_XXXX_splunkcloud" already exists; use the "update" argument to install anyway

This is the syntax used (following Splunk documentation) that returns the message:

.\splunk install app ../etc/apps/splunkclouduf.spl -auth xxx:xxxxxxx

What is the syntax we should use to force the update? I have tried every which way that I can think of and nothing works. Thanks!
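For reference, the CLI's install app command accepts an update flag; a hedged sketch of the syntax (credentials are placeholders):

```
.\splunk install app ../etc/apps/splunkclouduf.spl -update 1 -auth admin:changeme
```

This is offered as a sketch rather than a verified fix for the Splunk Cloud credentials package specifically; if the flag is rejected, removing the existing 100_XXXX_splunkcloud app and then installing the new .spl cleanly is another commonly used path.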
Hello, I am trying to come up with something that will automatically enrich events with country information based on the src_ip field in the events. I understand that the iplocation command can do this at search time. Is there any way we can get this done automatically using props.conf? I was expecting to find a lookup file that we could leverage to achieve this, but I cannot find one. Cheers.
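Assuming a CSV mapping IP ranges to countries existed (the file name, sourcetype, and field names below are placeholders), an automatic lookup in props.conf/transforms.conf is the usual mechanism for this kind of enrichment:

```
# transforms.conf
[ip_to_country]
filename   = ip_to_country.csv
match_type = CIDR(ip_range)

# props.conf
[your:sourcetype]
LOOKUP-src_country = ip_to_country ip_range AS src_ip OUTPUT country AS src_country
```

This is only a sketch of the automatic-lookup mechanism; it does not replace iplocation's built-in GeoIP database, so the CSV (or a KV store collection) with the IP-to-country mapping would still have to be built or sourced separately.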
Hi, I have the following event as an example:

Properties: {
  Path: /v1.0/locations/branches
  QueryString: ?branchNumbers=5318&
  RequestPath: /v1.0/locations/branches
  StatusCode: 404
  TraceId: 3f39adaf-ae24-44f4-b5cb-f8c49be023a0
}

I am trying to query this using the below search:

index=myIndex "Properties.RequestPath"!="*/v1*/events/*" "Properties.RequestPath"!="*_status*" "Properties.StatusCode">399 "Properties.TraceId"!="" | dedup "Properties.TraceId" | table "Properties.RequestPath" "Properties.TraceId "Properties.StatusCode" "Properties.QueryString"

The above query returns the RequestPath and the TraceId just fine, but StatusCode and QueryString are all blank, and when I check the tab it says NULL. Can anyone please help?
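One hedged observation rather than a confirmed fix: in the table command above, the quote after Properties.TraceId appears unbalanced ("Properties.TraceId rather than "Properties.TraceId"), which would merge the remaining column names into one malformed field name and could explain why StatusCode and QueryString come back empty. The same query with balanced quoting, otherwise unchanged:

```
index=myIndex "Properties.RequestPath"!="*/v1*/events/*" "Properties.RequestPath"!="*_status*" "Properties.StatusCode">399 "Properties.TraceId"!=""
| dedup "Properties.TraceId"
| table "Properties.RequestPath" "Properties.TraceId" "Properties.StatusCode" "Properties.QueryString"
```

If the fields are still empty after that, checking whether Properties.StatusCode is actually extracted (for example with | fieldsummary, or in the fields sidebar) would be the next step.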
Hello Friends,

Basesearch | table workflowname runid count status

When it's searched, the results are as below:

workflowname   runid  count  status
Workflowname1  123    5      Completed
Workflowname2  456    7      Paused
Workflowname1  789    8      Completed
Workflowname3  1011   4      Running
Workflowname1  1013   4      Running
Workflowname2  432    8      Completed

I have configured an alert to trigger when the number of results is greater than 0, which means all of the above results are part of the email alert notification. When I use the suppress (throttle) option with workflowname as the field name, only one result is received per email alert notification.

Example of how the email is received now:

Email received for Workflowname1
workflowname   runid  count  status
Workflowname1  123    5      Completed

Email received for Workflowname2
workflowname   runid  count  status
Workflowname2  456    7      Paused

Can someone help out here? A separate email alert should be triggered per unique workflowname, containing all the results for that workflowname. Expected outcome:

One email for Workflowname1
workflowname   runid  count  status
Workflowname1  123    5      Completed
Workflowname1  789    8      Completed
Workflowname1  1013   4      Running

Another email for Workflowname2
workflowname   runid  count  status
Workflowname2  456    7      Paused
Workflowname2  432    8      Completed

A separate email for Workflowname3
workflowname   runid  count  status
Workflowname3  1011   4      Running

Looking forward to hearing how to achieve the above result. Thanks for the support.
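One hedged approach, since "trigger for each result" sends one email per row: collapse the base search to one row per workflow first, so each triggered result (and each throttled email) carries all of that workflow's runs:

```
Basesearch
| stats list(runid) as runid, list(count) as count, list(status) as status by workflowname
```

With the alert set to trigger for each result and throttled on workflowname, each email would then correspond to one workflowname, with its runid/count/status lists shown as multivalue columns. This is a sketch of the idea rather than the only way to do it.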
The health check on my monitoring console keeps "loading". I pressed F12 and the browser console shows a few 404 errors; if I click on one of the 404 errors I get an XML document. Any idea what is happening? The health check worked before without any issue.
Hello All, I have been searching for a "how to" but have not had much luck. I have this search, which I run in real time and also test with a fixed time range (like 15 min, etc.):

sourcetype=linux_secure eventtype="ssh_open" OR eventtype="ssh_close"
| eval Date=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval UserAction=case(eventtype="ssh_open","On",eventtype="ssh_close","Off",1==1,UserAction)
| stats last(UserAction) by Date,host,user,UserAction
| sort - Date

This search gives me a user, a host, and an "On" if the user logs on and an "Off" if the user logs off. I would like to not show the "Off" condition when the user logs off, i.e. make that user's "On" line in the search results go away (disappear). How might I do this? Thanks for a great source of info, eholz1
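A hedged sketch of one way to keep only users whose most recent event is a logon, so a later logoff makes the row disappear (it drops the per-event Date granularity of the original and keeps one row per host/user):

```
sourcetype=linux_secure eventtype="ssh_open" OR eventtype="ssh_close"
| stats latest(eventtype) as last_event, latest(_time) as last_time by host, user
| where last_event="ssh_open"
| eval Date=strftime(last_time, "%Y-%m-%d %H:%M:%S")
| table Date host user
| sort - Date
```

Whether this fits depends on the goal: it shows who is currently logged on, rather than a history of logons and logoffs.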
In Splunk, each user role is allocated a threshold memory limit. Once we exceed the limit (by running many or large search queries), we can end up with the error "Waiting for queued job to start". Is there a way to check the memory usage of my user profile (and/or of another specific user) in Splunk? I would like to check the usage details, as it would help me optimise my searches and, obviously, avoid the error. I tried to find this in the logs of the `_internal` index, but was unable to find the exact information. Could anyone please help with this?
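One hedged place to look is the `_introspection` index rather than `_internal`: the resource-usage data there records per-search-process memory, attributed to the user who ran the search (assuming introspection data is being collected for the relevant hosts). A sketch:

```
index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| stats max(data.mem_used) as peak_mem_mb by data.search_props.user, data.search_props.sid
| stats sum(peak_mem_mb) as total_peak_mem_mb, count as searches by data.search_props.user
```

The field names (data.mem_used, data.search_props.user, data.search_props.sid) come from the resource-usage introspection data. Note that the "Waiting for queued job to start" message is usually tied to a role's concurrent-search quota (srchJobsQuota and related settings) rather than memory, so checking the role's quotas alongside this may be just as useful.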
Hello, I'm trying to retrieve all the host-sourcetype combinations that are not captured by any data model. I have a perimeter with all the assets to verify, and I want to check whether they fit some data model or not. I can't wrap my head around it, unfortunately. Does anyone have any ideas? Thank you.
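A hedged sketch of one way to frame it: list all host/sourcetype pairs from the raw indexes, then subtract the pairs that appear in the data models of interest. The data model names below are examples; each data model in scope would need its own subsearch, and subsearch result limits apply:

```
| tstats count where index=* by host, sourcetype
| search NOT
    [| tstats count from datamodel=Authentication by host, sourcetype
     | fields host, sourcetype ]
| search NOT
    [| tstats count from datamodel=Web by host, sourcetype
     | fields host, sourcetype ]
| fields host, sourcetype
```

What remains after the NOT filters are the host/sourcetype combinations not seen in any of the listed data models; constraining the first tstats to the perimeter's indexes or hosts would narrow it to the assets being verified.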