All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


  I would like to onboard data from an Oracle 19c database to Splunk. Is Oracle 19c a compatible/supported version for use with Splunk DB Connect?
Hi, we are facing issues listing the available Splunk indexes in Splunk Cloud using splunklib.client.connect from the splunklib library. Below is the code snippet used:

service = splunklib.client.connect(host=host, token=session_token)
ind_list = service.indexes.list()

This code works perfectly fine on Splunk Enterprise and on previous Splunk Cloud versions, but we noticed the issue on the latest versions (8.2.2202 and above). When this code executes on 8.2.2202 and above, we see the following debug output, which clearly shows a timeout error while fetching the indexes. We see this timeout because Splunk is not returning the index names, since it is failing to connect to Splunk Cloud:

ERROR 2022-10-03 05:39:22,296 CiscoCloudSecurity : API: fetch_indexes, Exception : [Errno 110] Connection timed out
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/cisco-cloud-security/bin/fetch_indexes.py", line 26, in handle
    ind_list = service.indexes.list()
  File "/opt/splunk/etc/apps/cisco-cloud-security/bin/splunklib/client.py", line 1479, in list
    return list(self.iter(count=count, **kwargs))
  File "/opt/splunk/etc/apps/cisco-cloud-security/bin/splunklib/client.py", line 1438, in iter
    response = self.get(count=pagesize or count, offset=offset, **kwargs)
  File "/opt/splunk/etc/apps/cisco-cloud-security/bin/splunklib/client.py", line 1668, in get
    return super(Collection, self).get(name, owner, app, sharing, **query)
  File "/opt/splunk/etc/apps/cisco-cloud-security/bin/splunklib/client.py", line 766, in get
    **query)
  File "/opt/splunk/etc/apps/cisco-cloud-security/bin/splunklib/binding.py", line 290, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/opt/splunk/etc/apps/cisco-cloud-security/bin/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/opt/splunk/etc/apps/cisco-cloud-security/bin/splunklib/binding.py", line 686, in get
    response = self.http.get(path, all_headers, **query)
  File "/opt/splunk/etc/apps/cisco-cloud-security/bin/splunklib/binding.py", line 1199, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/opt/splunk/etc/apps/cisco-cloud-security/bin/splunklib/binding.py", line 1259, in request
    response = self.handler(url, message, **kwargs)
  File "/opt/splunk/etc/apps/cisco-cloud-security/bin/splunklib/binding.py", line 1399, in request
    connection.request(method, path, body, head)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1281, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1327, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1276, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/opt/splunk/lib/python3.7/http/client.py", line 1036, in _send_output
    self.send(msg)
  File "/opt/splunk/lib/python3.7/http/client.py", line 976, in send
    self.connect()
  File "/opt/splunk/lib/python3.7/http/client.py", line 1443, in connect
    super().connect()
  File "/opt/splunk/lib/python3.7/http/client.py", line 948, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/opt/splunk/lib/python3.7/socket.py", line 728, in create_connection
    raise err
  File "/opt/splunk/lib/python3.7/socket.py", line 716, in create_connection
    sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out

Please let us know how we can resolve this issue on Cloud. Thanks.
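A timeout on the management port (8089) usually means the endpoint is unreachable from the host running the script (on Splunk Cloud, REST API access over 8089 typically has to be allow-listed), rather than a bug in the splunklib code itself. As a first isolation step, here is a minimal reachability check, a sketch assuming you substitute your actual Splunk Cloud stack hostname for the hypothetical one shown:

```python
import socket

def can_connect(host: str, port: int = 8089, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, refused connections, and timeouts alike.
        return False

# Hypothetical stack name; replace with your real Splunk Cloud host.
# reachable = can_connect("example.splunkcloud.com", 8089)
```

If this returns False from the same machine where the script runs, the fix lies in network/allow-list configuration, not in the splunklib call.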
Hello, I've set up a field choice called "ASSETtoken" where the user can select "value1", "value2", or "all". In a Single Value element, I would like to do something like count all rows in the column "columnX" in CSVValue1, CSVValue2, or both, depending on the selection. Any tip is welcome (I started with Splunk 2-3 days ago). Thanks, all!
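One possible approach, sketched under the assumption that CSVValue1/CSVValue2 are lookup files and that the ASSETtoken input emits "value1", "value2", or "all":

```
| inputlookup CSVValue1
| eval src="value1"
| append [| inputlookup CSVValue2 | eval src="value2"]
| where src="$ASSETtoken$" OR "$ASSETtoken$"="all"
| where isnotnull(columnX)
| stats count as Total
```

The `where` clause keeps rows from the selected source, or everything when "all" is selected, and the final `stats count` drives the Single Value element.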
Hi Team, I am trying to compare IP addresses, but I am unable to find any logic that can do so with the query below:

index=index_name sourcetype="sourcetype2" (POSTEDID!=SYSTEM AND VERIFIERID!=SYSTEM)
| rename ENTRYID as Maker_User
| rename POSTEDID as Checker_User
| rename VERIFIERID as Verifier_User
| stats values(Maker_User) as maker, values(ENTRYTIME) as maker_time, values(Checker_User) as checker, values(POSTEDTIME) as checker_time, values(Verifier_User) as verifier, values(VERIFIERTIME) as verifier_time, values(AMOUNT) as amount by TRANSACTIONID
| eval USER_ID = lower(mvappend(maker, checker, verifier))
| mvexpand USER_ID
| join USER_ID type=outer [ search index=index_name sourcetype="sourcetype1" | eval USERID=lower(USERID) | stats values(IP) as dev_ip by USERID]
| where isnotnull(verifier) AND amount>100000

The results I get with this are as below:

TRANSACTIONID  Maker  Maker Time  Checker  Checker Time  Verifier  Verifier Time  Amount   USERID  IP
001            A      10:00       A        10:03         B         10:05          200000   A       IP of A
001            A      10:00       A        10:03         B         10:05          200000   A       IP of A
001            A      10:00       A        10:03         B         10:05          200000   B       IP of B

I want logic that compares the IP address of A with the IP address of B, to check that the two IP addresses are not the same. Any assistance would be appreciated.
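One way to compare the IPs, sketched as a follow-on to the query in the question (field names assumed from its output): rather than inspecting rows pairwise, re-aggregate by transaction and count distinct IPs:

```
| stats values(USER_ID) as users, values(dev_ip) as ips, dc(dev_ip) as distinct_ips by TRANSACTIONID
| where distinct_ips = 1
```

`distinct_ips = 1` flags transactions where maker and verifier used the same IP; flip to `distinct_ips > 1` to keep only transactions where the IPs differ.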
I have 2 types of error messages that I want to display along with their counts. One error ends with "." and the other also ends with "." but contains a redundant string surrounded by "<>" which I don't need. Is there a way to accommodate both of these in the same regex? Currently I am using the regex below with only the "." condition, and it seems it is not working for messages containing "<".

Message 1:
stack_trace : com.abc.xyz.package.ExceptionName: Missing A.

Message 2:
stack_trace : com.abc.xyz.package.ExceptionName: Missing B <abcd> com.

Query:
BASE_SEARCH | rex field=_raw "Exception: (?<ExceptionText>[^\.]+)" | stats count as Count by ExceptionText

Expected output:
Missing A 3
Missing B 4

Actual output:
Missing A 3
Missing B <abcd> com 4
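One regex that appears to cover both messages is to stop the capture at either "." or "<" and trim trailing whitespace. A quick check of that pattern in Python (the two messages are taken from the question; the `\w*` is an assumption so the pattern matches "ExceptionName:" as well as a bare "Exception:"):

```python
import re
from typing import Optional

# Capture everything after "Exception...: " up to the first "." or "<",
# then strip trailing whitespace left before the "<".
PATTERN = re.compile(r"Exception\w*: (?P<ExceptionText>[^.<]+)")

def extract(message: str) -> Optional[str]:
    m = PATTERN.search(message)
    return m.group("ExceptionText").strip() if m else None

print(extract("stack_trace : com.abc.xyz.package.ExceptionName: Missing A."))             # Missing A
print(extract("stack_trace : com.abc.xyz.package.ExceptionName: Missing B <abcd> com."))  # Missing B
```

The SPL equivalent would be along the lines of `| rex field=_raw "Exception\w*: (?<ExceptionText>[^.<]+)" | eval ExceptionText=trim(ExceptionText) | stats count as Count by ExceptionText`.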
I am trying to explore the free trial edition of Splunk Observability. However, I am not able to integrate my AWS EKS cluster; it says: "Contact your administrator".
Hi team, I created a query with the rex and stats commands, and it is working fine. Now I need to add another column that evaluates the error details and displays the status as 'Ignore' or 'Follow-up'. The query looks like:

index=dev_master source="testing source" | rex field=_raw "Error desc : (?<Err>[^\"<]+)" | stats count by Err

The result looks like below:

Err                                    count
server timeout, try after sometime     5
Web service error                      8
Address element not found              2

Now I want to enhance the above query to get output like below:

Err                                    count    Action
server timeout, try after sometime     5        Ignore
Web service error                      8        Follow-up
Address element not found              2        Ignore

Can anyone help me with this? Thanks in advance.
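A sketch of one way to add the column with `eval case()`; the keyword-to-action mapping below is an assumption and would need to match the real classification rules:

```
index=dev_master source="testing source"
| rex field=_raw "Error desc : (?<Err>[^\"<]+)"
| stats count by Err
| eval Action = case(
    match(Err, "timeout") OR match(Err, "not found"), "Ignore",
    true(), "Follow-up")
```

The final `true()` branch acts as the default, so anything not matched by an earlier condition is labeled "Follow-up".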
Hello guys, we're periodically encountering gaps in the logs from our proxy into Splunk, so when the logs come back, the use cases detect nothing for that earlier period. How have other companies fixed this? What is the best way to handle it, with minimal resources, once the logs are back? Do we need to manually change the start and end dates (of the log gaps) every time it happens and run the use cases again, or is there a more practical solution? Thank you!
Hello community, I am new here and have a simple question about a chart that is not working as expected. Currently I have the following chart, which gives me the disk usage in KBytes. It works perfectly:

sourcetype=app:my_app AND mount_usage_kb | timechart max(mount_usage_kb) as "Mount size in KB"

I tried to eval a new variable to get the values in MBytes, but it does not work; the chart is empty and no values are shown (even in the table):

sourcetype=app:my_app AND mount_usage_kb | eval mount_usage_mb=(mount_usage_kb/1024) | timechart max(mount_usage_mb) as "Used storage MB"

Any clue on what I am doing wrong? Thanks a lot
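A common cause of this symptom is that the extracted field is being treated as a string (for example, it carries a unit suffix or thousands separator), so the division yields null and timechart has nothing to plot. One hedged fix is to coerce the field to a number explicitly:

```
sourcetype=app:my_app AND mount_usage_kb
| eval mount_usage_mb = tonumber(mount_usage_kb) / 1024
| timechart max(mount_usage_mb) as "Used storage MB"
```

If `tonumber()` still returns null, inspecting a raw value of mount_usage_kb in a table will usually reveal the stray characters that need stripping first.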
Hi Team, is it possible to temporarily disable the ticket integration and mail notification integration for all alerts during a maintenance window? I found the below path in my Splunk account:

Settings -> Alert Actions -> ServiceNow integration -> Status (Enable/Disable)

Will this help to temporarily disable the integration? Please advise. Thank you.
I have configured the connection between the heavy forwarder and the indexer, and I created a custom index on the indexer. When I configure HEC on the heavy forwarder, I expect to be able to select the index created on the indexer. However, I cannot select the custom index from the heavy forwarder. Are there any suggestions on properly forwarding HEC logs from the heavy forwarder to the indexer? Thank you.
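The index dropdown on the heavy forwarder only lists indexes defined locally on that forwarder, even though events are ultimately stored on the indexer. One common workaround (the app and token names below are hypothetical) is to define the index name on the heavy forwarder as well, or to set it directly in the HEC input stanza:

```
# $SPLUNK_HOME/etc/apps/my_hec_app/local/inputs.conf on the heavy forwarder
[http://my_hec_input]
token = <your-generated-token>
index = my_custom_index
disabled = 0
```

The index still has to exist on the indexer; defining it in indexes.conf on the forwarder only makes it selectable in the UI.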
Is there any possible way to sort the parallel coordinates visualization?

....| table count product test

Here, count is an integer (number). In the parallel coordinates visualization, the number comes first, followed by the alphabetic fields:

3    Alexa    Ball

Is it possible to reverse the visualization format to something like:

Alexa    Ball    3
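The axis order in the parallel coordinates visualization generally follows the field order of the search results, so one thing to try (a sketch; exact behavior may depend on the visualization version) is simply listing `count` last in the table command:

```
.... | table product test count
```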
Hi All, I want to display some additional fields, and I have added them by following the below method: Configure -> Incident Management -> Incident Review Settings; under Incident Review - Event Attributes, add the new fields, after which they should display on the Incident Review page. However, after I save my settings, the fields are not displayed on the Incident Review page. Any assistance will be appreciated.
Hi all, sorry for asking a very basic question; I'm quite new to the Splunk world. I have a pie chart created by the following code:
```
<panel>
  <chart>
    <search>
      <query>
        index=test_index
        | search splunk_id="$splunk_id$"
        | table campaign_data.All.total_passed campaign_data.All.total_failed campaign_data.All.total_not_run
        | rename campaign_data.All.total_passed as "Passed" campaign_data.All.total_failed as "Failed" campaign_data.All.total_not_run as "Not Run"
        | eval name="No of Tests"
        | transpose 0 header_field=name
      </query>
    </search>
    <option name="charting.chart">pie</option>
    <option name="charting.drilldown">none</option>
    <option name="charting.fieldColors">{"Failed": 0xFF0000, "Not Run": 0x808080, "Passed":0x009900, "NULL":0xC4C4C0}</option>
    <option name="refresh.display">progressbar</option>
  </chart>
</panel>
```
When I hover over a slice of the resulting pie chart, it shows 3 rows of data. What I want is for the percentage in the 3rd row to be cut off at 2 decimal points. Also, can we change the label "No of Tests%" to "Percentage" while keeping the 2nd row's data value as it is? Is that possible? Thanks!
I would like to get the number of hosts per index over the last 7 days. The query below gives me the right format but not the correct numbers:

| tstats dc(host) where index=* by _time index | timechart span=1d dc(host) by index

Any idea? Thanks!

             Index A  Index B  Index C  Index D  Index E  Index F  Index G  Index H  Index I  Index J
2022-10-05   0        0        0        0        0        0        0        0        0        0
2022-10-06   0        0        0        0        0        0        0        0        0        0
2022-10-07   0        0        0        0        0        0        0        0        0        0
2022-10-08   0        0        0        0        0        0        0        0        0        0
2022-10-09   0        0        0        0        0        0        0        0        0        0
2022-10-10   0        0        0        0        0        0        0        0        0        0
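The zeros are likely because the raw `host` field no longer exists after tstats, so the outer `timechart dc(host)` has nothing to count. A sketch that computes the daily distinct count directly in tstats and then pivots it (field names assumed):

```
| tstats dc(host) as host_count where index=* earliest=-7d by _time span=1d index
| xyseries _time index host_count
```

Putting `span=1d` in the tstats by-clause buckets the distinct count per day per index, and `xyseries` reshapes the result into the same one-column-per-index layout that timechart would produce.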
I am trying to track down why my Windows universal forwarder is not forwarding to the Splunk server/index. I can't see anything for, for example, the past 24 hours, and I'm not sure why.

##
## SPDX-FileCopyrightText: 2021 Splunk, Inc. <sales@splunk.com>
## SPDX-License-Identifier: LicenseRef-Splunk-8-2021
## DO NOT EDIT THIS FILE!
## Please make all changes to files in $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local.
## To make changes, copy the section/stanza you want to change from $SPLUNK_HOME/etc/apps/Splunk_TA_windows/default
## into ../local and edit there.
##

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = true

[WinEventLog://System]
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = true

###### Host monitoring ######
[WinHostMon://Computer]
interval = 600
disabled = 0
index = hostmonitoring
type = Computer

[WinHostMon://Process]
interval = 600
disabled = 0
index = hostmonitoring
type = Process

[WinHostMon://Processor]
interval = 600
disabled = 0
index = hostmonitoring
type = Processor

[WinHostMon://NetworkAdapter]
interval = 600
disabled = 0
index = hostmonitoring
type = NetworkAdapter

[WinHostMon://Service]
interval = 600
disabled = 0
index = hostmonitoring
type = Service

[WinHostMon://Disk]
interval = 600
disabled = 0
index = hostmonitoring
type = Disk

###### Splunk 5.0+ Performance Counters ######
## CPU
[perfmon://CPU]
counters = % Processor Time; % User Time; % Privileged Time; Interrupts/sec; % DPC Time; % Interrupt Time; DPCs Queued/sec; DPC Rate; % Idle Time; % C1 Time; % C2 Time; % C3 Time; C1 Transitions/sec; C2 Transitions/sec; C3 Transitions/sec
disabled = 0
index = perfmoncpu
instances = *
mode = multikv
object = Processor
useEnglishOnly = true

## Logical Disk
[perfmon://LogicalDisk]
counters = % Free Space; Free Megabytes; Current Disk Queue Length; % Disk Time; Avg. Disk Queue Length; % Disk Read Time; Avg. Disk Read Queue Length; % Disk Write Time; Avg. Disk Write Queue Length; Avg. Disk sec/Transfer; Avg. Disk sec/Read; Avg. Disk sec/Write; Disk Transfers/sec; Disk Reads/sec; Disk Writes/sec; Disk Bytes/sec; Disk Read Bytes/sec; Disk Write Bytes/sec; Avg. Disk Bytes/Transfer; Avg. Disk Bytes/Read; Avg. Disk Bytes/Write; % Idle Time; Split IO/Sec
disabled = 0
index = perfmonlogicaldisk
instances = *
interval = 60
mode = multikv
object = LogicalDisk
useEnglishOnly = true

## Physical Disk
[perfmon://PhysicalDisk]
counters = Current Disk Queue Length; % Disk Time; Avg. Disk Queue Length; % Disk Read Time; Avg. Disk Read Queue Length; % Disk Write Time; Avg. Disk Write Queue Length; Avg. Disk sec/Transfer; Avg. Disk sec/Read; Avg. Disk sec/Write; Disk Transfers/sec; Disk Reads/sec; Disk Writes/sec; Disk Bytes/sec; Disk Read Bytes/sec; Disk Write Bytes/sec; Avg. Disk Bytes/Transfer; Avg. Disk Bytes/Read; Avg. Disk Bytes/Write; % Idle Time; Split IO/Sec
disabled = 0
index = perfmonphysicaldisk
instances = *
interval = 60
mode = multikv
object = PhysicalDisk
useEnglishOnly = true

## Memory
[perfmon://Memory]
counters = Page Faults/sec; Available Bytes; Committed Bytes; Commit Limit; Write Copies/sec; Transition Faults/sec; Cache Faults/sec; Demand Zero Faults/sec; Pages/sec; Pages Input/sec; Page Reads/sec; Pages Output/sec; Pool Paged Bytes; Pool Nonpaged Bytes; Page Writes/sec; Pool Paged Allocs; Pool Nonpaged Allocs; Free System Page Table Entries; Cache Bytes; Cache Bytes Peak; Pool Paged Resident Bytes; System Code Total Bytes; System Code Resident Bytes; System Driver Total Bytes; System Driver Resident Bytes; System Cache Resident Bytes; % Committed Bytes In Use; Available KBytes; Available MBytes; Transition Pages RePurposed/sec; Free & Zero Page List Bytes; Modified Page List Bytes; Standby Cache Reserve Bytes; Standby Cache Normal Priority Bytes; Standby Cache Core Bytes; Long-Term Average Standby Cache Lifetime (s)
disabled = 0
index = perfmonmemory
interval = 60
mode = multikv
object = Memory
useEnglishOnly = true

## Network
[perfmon://Network]
counters = Bytes Total/sec; Packets/sec; Packets Received/sec; Packets Sent/sec; Current Bandwidth; Bytes Received/sec; Packets Received Unicast/sec; Packets Received Non-Unicast/sec; Packets Received Discarded; Packets Received Errors; Packets Received Unknown; Bytes Sent/sec; Packets Sent Unicast/sec; Packets Sent Non-Unicast/sec; Packets Outbound Discarded; Packets Outbound Errors; Output Queue Length; Offloaded Connections; TCP Active RSC Connections; TCP RSC Coalesced Packets/sec; TCP RSC Exceptions/sec; TCP RSC Average Packet Size
disabled = 0
index = perfmonnetwork
instances = *
interval = 60
mode = multikv
object = Network Interface
useEnglishOnly = true

## Process
[perfmon://Process]
counters = % Processor Time; % User Time; % Privileged Time; Virtual Bytes Peak; Virtual Bytes; Page Faults/sec; Working Set Peak; Working Set; Page File Bytes Peak; Page File Bytes; Private Bytes; Thread Count; Priority Base; Elapsed Time; ID Process; Creating Process ID; Pool Paged Bytes; Pool Nonpaged Bytes; Handle Count; IO Read Operations/sec; IO Write Operations/sec; IO Data Operations/sec; IO Other Operations/sec; IO Read Bytes/sec; IO Write Bytes/sec; IO Data Bytes/sec; IO Other Bytes/sec; Working Set - Private
disabled = 0
index = perfmonprocess
instances = *
interval = 60
mode = multikv
object = Process
useEnglishOnly = true

## ProcessInformation
[perfmon://ProcessorInformation]
counters = % Processor Time; Processor Frequency
disabled = 0
index = perfmonprocessinfo
instances = *
interval = 60
mode = multikv
object = Processor Information
useEnglishOnly = true

## System
[perfmon://System]
counters = File Read Operations/sec; File Write Operations/sec; File Control Operations/sec; File Read Bytes/sec; File Write Bytes/sec; File Control Bytes/sec; Context Switches/sec; System Calls/sec; File Data Operations/sec; System Up Time; Processor Queue Length; Processes; Threads; Alignment Fixups/sec; Exception Dispatches/sec; Floating Emulations/sec; % Registry Quota In Use
disabled = 0
index = perfmonsystem
instances = *
interval = 60
mode = multikv
object = System
useEnglishOnly = true
I have set up different alerts. I would like to set up a report that gives me statistics for each alert. Example:

Alert    Count
Alert1   25
Alert2   3
Alert3   128
Alert4   18

Is there a way to do this?
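One commonly used approach (an assumption about what your environment logs, so treat this as a sketch) is to count triggered alerts from the internal scheduler logs:

```
index=_internal sourcetype=scheduler alert_actions=*
| stats count as Count by savedsearch_name
| rename savedsearch_name as Alert
```

Filtering on `alert_actions=*` is intended to keep only scheduled-search runs that actually fired an alert action, rather than every scheduled run.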
I'm trying to convert a field with multiple results into a multivalue field. I'm querying a host lookup table that has several hostnames, and I'd like to create a single multivalue field containing all the hostnames returned by the inputlookup command, separated by commas. I'm using the makemv command to do this, but it returns each host as a separate result instead of a single result with all the hosts separated by commas. Any suggestions? Here's my query:

| inputlookup host_table fields hostname
| makemv delim="," hostname
| table hostname

Thanks in advance.
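`makemv` goes the other direction: it splits a single value into a multivalue field within each row. To collapse many rows into one comma-separated value, a sketch along these lines should work:

```
| inputlookup host_table
| stats values(hostname) as hostname
| eval hostname = mvjoin(hostname, ",")
| table hostname
```

`stats values()` gathers all hostnames into one multivalue field in a single result row, and `mvjoin` flattens it into one comma-separated string; drop the `eval` if you want to keep it as a true multivalue field instead.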
I have a few checkboxes that display panels when selected and hide them when unselected; up to here I am good. But my requirement is: under Mainframe I have 2 checkboxes, source and destination; under Services I have 6 checkboxes, service1, service2, ...; and under Items I have 4 checkboxes, item1, item2, .... Here, service1, service2, service3, item1, and item2 belong to source, while service4, service5, service6, item3, and item4 belong to destination:

Mainframe     Services    Items
source        service1    item1
destination   service2    item2
              service3    item3
              service4    item4
              service5
              service6

So my panel should display only when I select the source, service1, and item1 checkboxes. How can I do that?
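In Simple XML, a panel can be gated on several tokens at once with the `depends` attribute, which hides the panel until every listed token is set. A sketch, under the assumption that each checkbox sets/unsets its own token (the token names here are hypothetical):

```
<!-- Shown only when the source, service1, and item1 tokens are all set -->
<panel depends="$source_tok$,$service1_tok$,$item1_tok$">
  <html><p>Panel content here</p></html>
</panel>
```

If the checkboxes currently share one multivalue token, each would need its own token (or a `<change>` handler that sets per-choice tokens) for this pattern to apply.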
Hello, I realize this is a rather specific request, so I'll keep it short and simple to see if anyone has had previous experience or any creative resolutions to this issue. I have successfully configured an AWS IAM role and user within a dedicated account in our AWS environment, where CloudTrail logs are sent and kept in cold storage in an S3 bucket. I have also successfully configured an incremental S3 input, which I've tested as working, but currently the volume of CloudTrail data from our AWS accounts exceeds what we are licensed for in Splunk. I'm hoping there's some way within the Log Prefix field to choose which accounts/directory paths to monitor within the dedicated S3 bucket, so I can monitor only the accounts I want without ingesting data from all the others. I'm sure this can be done with an SQS queue on the AWS side, but before going that far, I'm wondering what can be done given the access and configurations I've already made. Thanks in advance!