Hello everyone, I am facing an issue with the alerts triggered by the "Set Default PowerShell Execution Policy To Unrestricted or Bypass" (Correlation Search) rule in Splunk, as many alerts are being generated unexpectedly. After reviewing the details, I added the command `| stats count BY process_name` to analyze the data more precisely. After executing this, the result was 389 processes within 24 hours. However, it seems there might be false positives and I’m unable to determine if this alert is normal or if there’s a misconfiguration. I would appreciate any help in identifying whether these alerts are expected or if there is an issue with the configuration or the rule itself. Any assistance or advice would be greatly appreciated. Thank you in advance.  
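For illustration, a rough SPL sketch of one way to triage these alerts: break the matching command lines down by process, user, and parent process to see whether a legitimate automation account is responsible for most of the volume. The index and field names below (index=windows, CommandLine, user, parent_process_name) are placeholders and should be swapped for whatever the correlation search actually uses:

index=windows (CommandLine="*-ExecutionPolicy Bypass*" OR CommandLine="*-ep bypass*" OR CommandLine="*Unrestricted*")
| stats count dc(host) as hosts values(user) as users values(parent_process_name) as parent_processes by process_name
| sort - count

If one or two service accounts or deployment tools dominate, that usually points at expected behaviour that can be filtered or suppressed in the correlation search rather than a misconfiguration.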
Where can we see the actual score of any Splunk exam? From the Splunk website we can only get the certification, and from Pearson VUE we can only see a report that says congratulations, you passed, without mentioning any actual score.
We have a 5-node Splunk forwarder cluster to handle the throughput of multiple servers in our datacenter. Currently our upgrade method keeps the Deployment server mutable: we just run config changes via Chef and update it in place. The 5 forwarder nodes, however, are treated as fully replaceable with Terraform and Chef. Everything is working, but I notice the Deployment server holds onto forwarders after Terraform destroys the old one, and the new one phones home on a new IP (currently on DHCP) but with the same hostname as the destroyed forwarder. Would replacing the forwarders with the same static IP and hostname resolve that, or would there still be duplicate entries?
Deployment server: Oracle Linux 8.10, Splunk Enterprise 8.2.9
Forwarders: Oracle Linux 8.10, splunkforwarder 8.2.9
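One thing worth checking is whether each replacement forwarder reports to the deployment server under a stable identity. A minimal deploymentclient.conf sketch, where the clientName value and the target URI are placeholders for your environment:

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on each forwarder
[deployment-client]
# stable, human-chosen identity that survives a Terraform rebuild
clientName = dc-fwd-01

[target-broker:deploymentServer]
targetUri = deployserver.example.com:8089

With a consistent clientName the rebuilt forwarder should check in as the same logical client regardless of its IP, and the stale entry for the destroyed instance typically drops out of the Forwarder Management view once it stops phoning home.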
Good morning! In the scenario presented below, I cannot associate the items in a table within the DbrfMaterial field: EngineeringCode, ItemDescription, ItemQty, SolutionCode. I used the index below!

index=analise Task.TaskStatus="Concluído" Task.DbrfMaterial{}.SolutionCode="410 TROCA DO MOD/PLACA/PECA" State IN ("*") CustomerName IN ("*") ItemCode IN ("*") | mvexpand Task.DbrfMaterial{}.EngineeringCode | search Task.DbrfMaterial{}.EngineeringCode="*" | stats count by Task.DbrfMaterial{}.EngineeringCode | rename count as Quantity | head 20 | table Task.DbrfMaterial{}.EngineeringCode Quantity | sort -Quantity | appendcols [ search index=brazilcalldata Task.TaskStatus="Concluído" Task.DbrfMaterial.SolutionCode="410 TROCA DO MOD/PLACA/PECA" CustomerName IN ("*") State IN ("*") Task.DbrfMaterial.EngineeringCode="*" ItemCode = "*" | stats count, sum(Task.DbrfMaterial.ItemQty) as TotalItemQty by Task.DbrfMaterial.EngineeringCode Task.DbrfMaterial.ItemDescription | rename Task.DbrfMaterial.EngineeringCode as Item, Task.DbrfMaterial.ItemDescription as Descricao, TotalItemQty as "Qtde Itens" | table Item Descricao "Qtde Itens" count | sort - "Qtde Itens" ] | eval TotalQuantity = Quantity + 'Qtde Itens' | search Task.DbrfMaterial{}.EngineeringCode!="" | table Task.DbrfMaterial{}.EngineeringCode Quantity "Qtde Itens" TotalQuantity
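For illustration, a rough sketch of one common SPL pattern for keeping the nested Task.DbrfMaterial{} fields associated per item: zip the multivalue fields together before expanding, so EngineeringCode, ItemDescription, ItemQty, and SolutionCode from the same array position stay on the same row. This assumes the {} arrays are index-aligned within each event; index and field names are taken from the search above:

index=analise Task.TaskStatus="Concluído"
| eval zipped=mvzip(mvzip(mvzip('Task.DbrfMaterial{}.EngineeringCode', 'Task.DbrfMaterial{}.ItemDescription', "|"), 'Task.DbrfMaterial{}.ItemQty', "|"), 'Task.DbrfMaterial{}.SolutionCode', "|")
| mvexpand zipped
| eval EngineeringCode=mvindex(split(zipped, "|"), 0), ItemDescription=mvindex(split(zipped, "|"), 1), ItemQty=mvindex(split(zipped, "|"), 2), SolutionCode=mvindex(split(zipped, "|"), 3)
| search SolutionCode="410 TROCA DO MOD/PLACA/PECA"
| stats count sum(ItemQty) as TotalItemQty by EngineeringCode ItemDescription

Expanding only one of the {} fields, as in the original search, loses the per-item association, which is why the quantities and descriptions cannot be tied back together afterwards.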
We’ve been buzzing with excitement about the recent validation of Splunk Education! The 2024 Splunk Career Impact Report reveals how mastering Splunk gives users and customers a serious competitive advantage. Have you checked it out yet?   While a picture paints a thousand words, an infographic backs it up with data and insights.   No time to dive into the full report? No problem! Explore the key stats and survey results in the 2024 Career Impact Survey Infographic. (Get a quick preview below!) All of us in Splunk Education are dedicated to empowering our learners and are always seeking new ways to support your growth and success. Congratulations to all of you who are on your career-boosting journey with Splunk. Cheers to a new year filled with opportunities! --  Callie Skokos, on behalf of the entire Splunk Education Crew
distinct results in splunk and how to show all data in selected fields vs the 100+ results  
We are creating an installation of one indexer, one search head, and one universal forwarder with syslog, and I wonder what the minimal OS requirements are, such as disabling transparent huge pages on the indexer, raising file descriptor limits, etc. We are speaking about a bare-minimum installation.
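As a rough starting point, the usual minimum OS tuning looks like the sketch below. The numbers and the "splunk" user name are assumptions and should be verified against the Splunk Enterprise system requirements docs for your version; if splunkd runs under systemd, the LimitNOFILE/LimitNPROC settings in the unit file take precedence over limits.conf:

# /etc/security/limits.conf (or a drop-in file) for the account running Splunk
splunk  soft  nofile  64000
splunk  hard  nofile  64000
splunk  soft  nproc   16000
splunk  hard  nproc   16000

# Disable transparent huge pages (re-apply at boot, e.g. via a small systemd unit)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag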
We are about to create new VMs with the Ubuntu OS. Which version of Ubuntu is supported and recommended? 
Splunk installation in a secure facility.  I see the following blocked attempts to phone-home in our logs and infosec is unhappy.  How do I prevent Splunk from phoning home every 15 seconds? TCP_DENIED/403 3836 CONNECT beam.scs.splunk.com:443 - HIER_NONE/- text/html TCP_DENIED/403 3906 CONNECT quickdraw.splunk.com:443 - HIER_NONE/- text/html Splunk Enterprise Version:9.3.1 Build:0b8d769cb912
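For context, quickdraw.splunk.com is the endpoint Splunk Enterprise contacts for usage instrumentation, so the first place to look is Settings > Instrumentation in Splunk Web, where usage sharing can be switched off. A rough on-disk equivalent is sketched below; the stanza and setting names are an assumption and should be checked against telemetry.conf.spec for your 9.3.1 build before use:

# telemetry.conf (for example in etc/apps/splunk_instrumentation/local/)
[general]
sendAnonymizedUsage = false
sendSupportUsage = false
sendLicenseUsage = false
sendAnonymizedWebAnalytics = false

beam.scs.splunk.com appears to be related to Splunk Assist (the cloud-connected experience in the Monitoring Console), which has its own opt-out separate from instrumentation.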
We need to connect a FortiWeb Cloud with a Splunk Heavy Forwarder. It is over the internet, so SSL must be used. We are receiving the test event correctly using TCP (without SSL), but it is not being decrypted with SSL. Reviewing the documentation, we do not understand how to configure the ssl-tcp input, or what certificates should be configured in FortiWeb. We have seen some solutions centered on SSL between Splunk components, but none of them explain what certificates should be configured on the source. Does anyone know how to make this work, with FortiWeb or any other third-party input?
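For reference, a minimal sketch of an ssl-tcp input on the heavy forwarder; the port, paths, index, and sourcetype are placeholders. The key point is that the [SSL] stanza holds the server certificate the HF presents to FortiWeb, so on the FortiWeb side you only need to trust the CA that signed that certificate (and supply a client certificate only if requireClientCert is turned on):

# inputs.conf on the heavy forwarder
[tcp-ssl://6514]
sourcetype = fortiweb
index = fortiweb

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/hf-server-cert.pem   # server cert + private key (+ CA chain) in one PEM
sslPassword = <private key password>
requireClientCert = false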
My requirement is simple, I have created a Certificate monitoring script and passing the log file through a splunk dashboard. I want splunk to only check the latest log file and not store any historical data in search events. Below is the sample log file output - (It is a "|" separated log file output)     ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_hcm.jks|Expired|2020-10-18 WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_hcm.jks|Expiring Soon|2025-06-14 INFO|appu2.de.com|rootca13|/applications/hs_cert/cert/live/h_core.jks|Valid|2026-10-18 ALERT|appu2.de.com|rootca12|/applications/hs_cert/cert/live/h_core.jks|Expired|2020-10-18 WARNING|appu2.de.com|key|/applications/hs_cert/cert/live/h_core.jks|Expiring Soon|2025-03-22 ALERT|appu2.de.com|key|/applications/hs_cert/cert/live/h_mq.p12|Expired|2025-01-03       I am looking for 2 points here: 1. How do I handle only latest log file content (no history) in "inputs.conf" - what changes to be done? 2. Below is the sample SPL query, kindly check and suggest if any changes.   index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log | rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)" | multikv forceheader=1 | table Severity Hostname CertIssuer FilePath Status ExpiryDate   @ITWhisperer - Kindly help
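On point 1, events that have already been indexed cannot really be "un-stored" through inputs.conf, so the usual approach is to keep indexing normally and have the dashboard search return only the newest result per certificate. A sketch, reusing the fields from the query above (the multikv line is not needed for a pipe-delimited file without a header row):

index=test_event source=/applications/hs_cert/cert/log/cert_monitor.log
| rex field=_raw "(?<Severity>[^\|]+)\|(?<Hostname>[^\|]+)\|(?<CertIssuer>[^\|]+)\|(?<FilePath>[^\|]+)\|(?<Status>[^\|]+)\|(?<ExpiryDate>[^\|]+)"
| dedup Hostname CertIssuer FilePath
| table Severity Hostname CertIssuer FilePath Status ExpiryDate

Because search results come back newest first, dedup keeps only the most recent event for each certificate, which effectively hides the historical entries without changing what is stored.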
Hi, I have a pretty long search I want to be able to utilize as a saved search, so that others can benefit from one shared search and we can mutually edit it if need be. There is a part in the search utilizing a structure   search index=ix2 eventStatus="Successful" | localize timeafter=0m timebefore=1m | map search="search index=ix1 starttimeu=$starttime$ endtimeu=$endtime$ ( [ search index=ix2 eventStatus="Successful" | return 1000 eventID ] ) | stats values(client) values(port) values(target) by eventID   This is a simplified extraction of what I am really doing, but the search works fine when run as a plain direct search from the GUI. If I save it and try using it with   |savedsearch "my-savedsearch"   I get the error: Error in 'savedsearch' command: Encountered the following error while building a search for saved search 'my-savedsearch': Error while replacing variable name='starttime'. Could not find variable in the argument map. It looks like the $starttime$ and $endtime$ cause trouble, but what can I do to work around this? I want to have this in a saved search to avoid operating with a long search in the browser all the time. Also, it is essential to use the localize - map construction, because otherwise I am not able to run this search for long time windows, and I would really like to be able to do that. There was a ticket by @neerajs_81 about pretty much the same issue, but there were no details about the saved search and, above all, there seemed not to be a solution.
I installed the keycloak extension but I don't know how to configure it. Can you help me?
Hi, I am trying to implement a dashboard in splunk that presents data basing on Jenkins events. I use Splunk App for Jenkins and Splunk Jenkins plugin to send the events data. Idea of the dashboard is to display data about running active checks for Pull Requests in associated GitHub repository. Checks are designed in Jenkins in a way to have a trigger job which calls downstream jobs. In a dashboard, I'd like to present basic info about pull request and results of test coming from downstream jobs. Unfortunately, event for trigger job does not provide info about its downstream jobs. I collect it in such a way: index="jenkins_statistics" event_tag=job_event build_url="job/test_trigger/10316*" | eventstats latest(type) as latest_type by build_url, host | where latest_type="started" | eval full_build_url=https://+host+"/"+build_url | eval started=tostring(round((now() - strptime(job_started_at, "%Y-%m-%dT%H:%M:%S.%N"))/60,0)) + " mins ago" | append [ search index="jenkins_statistics" event_tag=job_event upstream="job/test_trigger/10316*" trigger_by="*test_trigger*"] where subsearch is appended to provide information about downstream jobs. I checked https://plugins.jenkins.io/splunk-devops/#plugin-content-i-am-using-upstreamdownstream-jobs-how-can-i-consolidate-the-test-results-to-root-trigger-job but this does not fit to my case, as tests are represented by downstream jobs and I'd like to have an actual data of them to display only failed ones in the dashboard. I had a plan to create custom python command (https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/createcustomsearchcmd/) which will: parse data from downstream jobs events create new fields in trigger job event basing on above finally, return only trigger job event Having that, I could be able to have all the interesting data in one event per Pull Request and format the table at the end. Unfortunately, it does not work as I wanted. It makes Splunk hanging even for single Pull Request case (1 trigger event + 20 downstream events). This python script iterates over the events twice (firstly to process downstream jobs and secondly to find trigger and add new fields there and return that). I am afraid that it is not a best approach. Example script based on https://github.com/splunk/splunk-app-examples/blob/master/custom_search_commands/python/customsearchcommands_app/bin/filter.py is presented below:   #!/usr/bin/env python # coding=utf-8 # # Copyright 2011-2015 Splunk, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"): you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import sys sys.path.append(os.path.join('opt', 'splunk', 'etc', 'apps', 'splunk_app_jenkins', 'bin', 'libs')) # splunklib from splunklib.searchcommands import dispatch, EventingCommand, Configuration, Option @Configuration() class ProcessTestsCommand(EventingCommand): """ Filters and updates records on the events stream. 
##Example :code:`index="*" | processtests """ @staticmethod def get_win(tests_list, pr_num): for test in tests_list: if test.get('job_name') == 'build-win' and test.get('upstream') == f"job/test_trigger/{pr_num}": return test.get('job_result') if test.get('job_result') else '' @staticmethod def get_ubuntu(tests_list, pr_num): for test in tests_list: if test.get('job_name') == 'build-ubuntu' and test.get('upstream') == f"job/test_trigger/{pr_num}": return test.get('job_result') if test.get('job_result') else '' @staticmethod def get_failed_tests(tests_list, pr_num): failed_tests = [] failed_string = '' for test in tests_list: if test.get('upstream') == f"job/test_trigger/{pr_num}" and test.get('job_result') != 'SUCCESS': failed_tests.append(test) for failed_test in failed_tests: name = failed_test.get('job_name').split('/')[-1] status = failed_test.get('job_result') failed_string += f"{name} {status}\n" return failed_string def transform(self, records): tests = [] for record in records: if record.get('job_name') != 'test_trigger': tests.append(record) for record in records: if record.get('job_name') == 'test_trigger': pr_num = record.get('build_number') build_win = self.get_win(tests, pr_num) build_ubuntu = self.get_ubuntu(tests, pr_num) failed_tests = self.get_failed_tests(tests, pr_num) record['win'] = build_win record['ubuntu'] = build_ubuntu record['failed_tests'] = failed_tests yield record return dispatch(ProcessTestsCommand, sys.argv, sys.stdin, sys.stdout, __name__)   Goal is to have all the data (from trigger and downstream jobs) in a single event representing trigger job. Do you have any ideas, what could be the better way to achieve that? I also thought about dropping this subsearch and collecting the data of downstream jobs via GitHub or Jenkins API inside Python script, but this is not preferred (API may return some malformed data, I could be limited by api hitting limits). Appreciate any help.
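For what it is worth, a pure-SPL sketch that rolls the downstream results up onto one row per trigger build without a custom command. The field names (job_name, job_result, upstream, build_url) and job names (test_trigger, build-win, build-ubuntu) are taken from the searches and script above, and it assumes the upstream field on downstream events matches the trigger's build_url:

index="jenkins_statistics" event_tag=job_event (build_url="job/test_trigger/10316*" OR upstream="job/test_trigger/10316*")
| eval pr_build=coalesce(upstream, build_url)
| eval failed_test=if(job_name!="test_trigger" AND job_result!="SUCCESS", job_name.": ".job_result, null())
| stats values(eval(if(job_name=="build-win", job_result, null()))) as win
        values(eval(if(job_name=="build-ubuntu", job_result, null()))) as ubuntu
        values(failed_test) as failed_tests
        by pr_build

Letting stats do the consolidation keeps all the work in the search pipeline, which generally scales better than iterating over the record set twice inside a custom Python command.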
With the assistance of this forum, I managed to combine the events of two sourcetypes and run stats to correlate the fields on a single shared field between the two sourcetypes. The problem is, when running stats, it creates a table with mostly blank spots and only a few rows with all columns filled. The search is meant to look at switch logs and pull connection data including the MAC, the IP, the switch name, and the port ID. Secondly, the search pulls from a sourcetype containing all devices that have been active on the network, where it pulls the hostname and MAC for that device. I then use stats to match those results up on the shared MAC address field, the only difference between them being that the mac field from one of the sourcetypes is in lowercase vs upper. The end goal is to have a table showing me a device's name, its IP, its MAC, and which switch and port it connected to. As it is, the search does appear to work; however, because of how it's written, the resulting table is filled with blank spots where the events from each source don't have the fields from the other source. How can I change things so it only shows rows where it has an entry for each column? Right now, based on other posts I've seen on this forum, I'm considering whether I may be able to use eval and create fields like src_mac-{index} or something like that, maybe with the inclusion of the coalesce command. Is this the right course of action, or is there a better way? The only other consideration is speed; unfortunately, there's a very good chance I may end up searching millions of events. I'm trying to find ways to restrict the search, but even if I manage to, it's still going to be a lot. I'm not trying to get an instant search, but if I can get it to complete in less than thirty seconds as opposed to 3+ minutes, that would help. Thank you     (index="routerswitch" action_type IN(Failed_Attempts, Passed_Attempts) src_mac=* SwitchName=switch1 Port_Id=GigabitEthernet1/0/21 earliest=-30d) OR (index=connections source="/var/devices.log" src_ip=172.* earliest=-30d src_mac=*) | fields src_mac dhcp_host_name src_ip IP_Address SwitchName Port_Id | eval src_mac=upper(src_mac) | stats values(dhcp_host_name) as hostname values(src_ip) as IP values(IP_Address) as net_IP values(SwitchName) as switch values(Port_Id) as portID by src_mac
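Given the stats-by-src_mac approach above, one simple option is to drop the incomplete rows after the stats. A small sketch reusing the field names from that search; it only keeps MAC addresses that appeared in both sourcetypes:

... | stats values(dhcp_host_name) as hostname values(src_ip) as IP values(IP_Address) as net_IP values(SwitchName) as switch values(Port_Id) as portID by src_mac
| where isnotnull(hostname) AND isnotnull(switch) AND isnotnull(portID)

Because values() leaves a field null when that sourcetype contributed no events for the MAC, filtering on isnotnull() after the stats removes the half-empty rows without changing how the correlation itself works.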
After upgrade from 9.3.1 to 9.4.0 in windows platform, there is a warning shows 41 files that did not match. After validate files, it shows the following query. How to solve the warning? C:\Program Files\Splunk\bin>splunk.exe validate files Validating installed files against hashes from 'C:\Program Files\Splunk\splunk-9.4.0-6b4ebe426ca6-windows-x64-manifest' Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-console-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-datetime-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-debug-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-errorhandling-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-file-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-file-l1-2-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-file-l2-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-handle-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-heap-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-interlocked-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-libraryloader-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-localization-l1-2-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-memory-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-namedpipe-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-processenvironment-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-processthreads-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-processthreads-l1-1-1.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-profile-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-rtlsupport-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-string-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-synch-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-synch-l1-2-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-sysinfo-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-timezone-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-core-util-l1-1-0.dll': The system cannot find the file specified. 
Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-conio-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-convert-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-environment-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-filesystem-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-heap-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-locale-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-math-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-multibyte-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-private-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-process-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-runtime-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-stdio-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-string-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-time-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/api-ms-win-crt-utility-l1-1-0.dll': The system cannot find the file specified. Could not open 'C:\Program Files\Splunk\bin/ucrtbase.dll': The system cannot find the file specified.  
Hi! I was wondering if anybody knows if there's a way to hide the "populating..." text under this drillthrough on my dashboard? 
Hi All, I am rather hoping someone can assist me in creating a search that can be used for an alert to detect when a connection to MQ fails to re-connect, for a system I am supporting. I am new to Splunk and although I have found posts related to this topic, I have so far not been able to adapt them for my particular scenario. I was hopeful the search below would suffice, but then realised it only works as I wanted if the MQ connection actually drops; otherwise the count evaluates to 0 and I end up with false alerts.   index="sepa_instant" source="D:\\Apps\\Instant_Sepa_01\\log\\ContinuityRequester*" | transaction startswith="connection_down_error raised" maxspan=4m | search "-INFOS- {3} QM reconnected" | stats count | where count="0" Any assistance provided would be very much appreciated.
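As a rough sketch of an alternative that avoids transaction: classify each event as "down" or "reconnected", keep only the latest one, and alert when the most recent message is a drop that is older than the reconnect window. It assumes the two literal strings from the search above are the only relevant message types and that "no reconnect within 4 minutes" is the alert condition:

index="sepa_instant" source="D:\\Apps\\Instant_Sepa_01\\log\\ContinuityRequester*"
    ("connection_down_error raised" OR "QM reconnected")
| eval msg_type=if(searchmatch("QM reconnected"), "reconnected", "down")
| stats latest(msg_type) as last_msg latest(_time) as last_time
| where last_msg=="down" AND last_time < relative_time(now(), "-4m")

Scheduled as an alert that triggers when the number of results is greater than zero, this stays quiet both when the connection never dropped and when it dropped but reconnected in time.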
Apologies if this is in the wrong place.  I'm using the Splunk REST API to connect and run search requests through a Python script. I sadly don't have access to the SDK, so I have to use the REST API. The issue I'm running into is that after the initial authentication and login, I get back the session key to use for subsequent API calls. The subsequent API calls have a chance of running into a 401 error more often than not, and my current working solution is to use a while loop to keep sending the request until it works. The code looks like the below. I set a delay so an API call only happens every few seconds, but I can't figure out why it will usually fail and then randomly choose to work.

import time
import requests

done = False
while not done:
    # the REST parameter name is lowercase 'search'; the query string should start with a generating command such as "search"
    r = requests.post(host + '/services/search/jobs/', headers={'Authorization': 'Splunk %s' % Session_key}, data={'search': query}, verify=False)
    if r.status_code == 201:  # 201 Created means the search job was accepted (was: r.status+code ==201)
        done = True
    else:
        time.sleep(2)  # brief back-off before retrying
Hello Splunkers    Have any of you worked with log files of Cisco equipment: - AP 9130 - WiFi Controller 9840   I am interested in how to add more information to log files. And also: perhaps someone can share a use case for creating dashboards for this equipment.   Thanks in advance for your answers.