All Topics

Are you feeling it? All the career-boosting benefits of up-skilling with Splunk? It's not just a feeling, it's a fact according to insights from the 2023 Splunk Career Impact Survey. All your hard work taking Splunk Education courses and getting Splunk Certified is helping to weather the tough economy and increase career resilience.

Mastering Splunk
The report was produced from a survey of 749 Splunk practitioners across the community that asked questions about their earning power, promotability, and proficiency. The survey results confirmed the potential benefits to the employee and employer alike, highlighting that mastering Splunk is one of the best ways to fortify both enterprise and career resilience.

Future-Proofing Your Career
According to the survey results, very proficient practitioners of Splunk are 2.7 times more likely to get promoted, and those with Splunk certifications plus higher levels of Splunk proficiency reported earning approximately 131% more than their less-proficient peers. Over 86% believe their company is in a stronger competitive position because of Splunk. With your Splunk skills, you're on the way to future-proofing your career!

Executive Perspective
Eric Fusilero, the VP of Global Enablement and Education at Splunk, recently shared his excitement about the results and his perspective on what this means in an industry struggling to fill IT and cybersecurity roles.

"[Splunk] is incredibly powerful – and yet we all know that no matter how amazing any piece of software or new technology is, it is really only as powerful as the people who use it. It makes me feel good to know that Splunk Education is a critical piece when it comes to harnessing the true power of Splunk and the impact Splunk Training and Certification has on the careers of those who use it and thrive with it."

Don't Let Up on the Gas
Fortify your career resilience by digging even deeper into Cloud, Security, and Observability and validating that knowledge with industry-recognized certification badges. Keep going with everything Splunk Education has to offer.

Happy learning.
Callie Skokos, on behalf of the entire Splunk Education Crew
Learn how to deploy the Cisco AppDynamics Kubernetes® and App Service Monitoring solution for Cisco Cloud Observability using Helm charts and the Amazon EKS Blueprints for Terraform module.

Cisco AppDynamics Sales Engineer Ed Barberis has published an extensive post on the AppDynamics Blog outlining how you can deploy the AppDynamics Kubernetes and App Service Monitoring solution for Cisco Cloud Observability using Helm charts and the Amazon EKS Blueprints for Terraform module. This AppDynamics add-on for Amazon EKS Blueprints standardizes deployment with Terraform, giving you a repeatable process for observing your cloud-native applications and infrastructure.

What technical information is included in the blog? In it, you can find:
- A description of the Cisco Cloud Observability product
- A brief overview of Amazon's EKS Blueprints for Terraform project
- A list of the prerequisites and deployment tools you'll need
- A link to the AppDynamics Add-on for Amazon EKS Blueprints on GitHub that you can clone
- Step-by-step instructions, from generating and downloading the Kubernetes Operators and Collectors files through deployment of the AppDynamics add-on, then viewing the observability data from the EKS cluster, and finally removing the AppDynamics add-on when it is no longer needed

Read the complete post on the Cisco AppDynamics Blog.
I have been testing out SmartStore in a test environment. I cannot find the setting that controls how quickly data ingested into Splunk is replicated to my S3 bucket. What I want is for any ingested data to be replicated to my S3 bucket as quickly as possible; I am looking for as close to 0 minutes of data loss as I can get. Data only seems to replicate when the Splunk server is restarted. I have tested this by setting up another Splunk server pointing at the same S3 bucket as my original, and when searching it seems to have only picked up older data.

max_cache_size only controls the size of the local cache, which is not what I'm after.
hotlist_recency_secs controls how long before hot data can be evicted from the cache, not how long before it is replicated to S3.
frozenTimePeriodInSecs, maxGlobalDataSizeMB, and maxGlobalRawDataSizeMB control freezing behavior, which is not what I'm looking for.

What setting do I need to configure? Am I missing something in the Splunk conf files, or permissions to set in AWS for S3? Thank you for the help in advance!
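For context, a minimal sketch of the kind of SmartStore indexes.conf setup described above; the volume name, bucket name, endpoint, and index name are placeholder assumptions, not taken from the post:

# indexes.conf -- illustrative sketch only; names and paths are placeholders
[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:remote_store/$_index_name

As I understand SmartStore, bucket copies are uploaded to the remote store when they roll from hot to warm, which is why settings that influence hot-bucket rolling (for example maxHotIdleSecs or maxHotSpanSecs) sometimes come up in this context; treat that as a pointer to verify against the docs rather than a definitive answer.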
Hi, I am new to Splunk, and I am doing some testing with the Blue Prism Data Gateway and Splunk. How can I get the Splunk URL and API token?
Hello, I'm looking for assistance with a webmail-only report. I ran a query and only got ActiveSync output; my customer is only interested in OWA, not ActiveSync, as a report for their users. The query which produced only ActiveSync results:

index="iis_logs_exchxxx" sourcetype="iis" s_port="443" c_ip!="10.*" c_ip!="127.0.0.1" c_ip!="::1" cs_method!="HEAD" cs_username="*@domain.com"
| iplocation c_ip
| eval alert_time=_time
| convert ctime(alert_time) timeformat="%m/%d/%Y %H:%M:%S %Z"
| table alert_time, cs_username, cs_User_Agent, c_ip, City, Region, Country
| stats values(c_ip) by alert_time, cs_username, cs_User_Agent, City, Region, Country
| rename cs_username AS "Username", values(c_ip) AS "IP addresses", cs_User_Agent AS "Device Type", alert_time AS "Date/Time"
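As an illustration only, not a confirmed fix: one common way to narrow Exchange IIS logs to OWA traffic is to filter on the request path, assuming the cs_uri_stem field is extracted from these IIS logs:

index="iis_logs_exchxxx" sourcetype="iis" s_port="443" c_ip!="10.*" c_ip!="127.0.0.1" c_ip!="::1" cs_method!="HEAD" cs_username="*@domain.com" cs_uri_stem="/owa/*"
| iplocation c_ip
| ...

The rest of the pipeline would stay the same; the cs_uri_stem filter is the only assumed change.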
Hi, I am running Splunk Stream to collect DNS data from domain controllers. On some of the busy DCs, Splunk_TA_stream is generating lots of the following errors:

ERROR [9412] (SplunkSenderModularInput.cpp:435) stream.SplunkSenderModularInput - Event queue overflow; dropping 10001 events

Looking at the Splunk Stream Admin - Network Metrics dashboard, these seem to occur at the same time the Active Network Flows appear to be hitting a limit. I would like to increase the number of network flows allowed in an attempt to stop the event queue overflows. Looking at the documentation, I can see two configurations that seem relevant:

maxTcpSessionCount = <integer>
* Defines maximum number of concurrent TCP/UDP flows per processing thread.
processingThreads = <integer>
* Defines number of threads to use for processing network traffic.

Questions:
1) What are the defaults for maxTcpSessionCount and processingThreads?
2) Which parameter would it be better to increase?

Also, are these the correct parameters to be tuning for the errors I am getting? If not, what should I look at?
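For reference, a sketch of where such settings are typically placed in streamfwd.conf on the forwarder running Splunk_TA_stream; the values shown are placeholders for illustration, not recommendations or defaults:

# streamfwd.conf -- illustrative sketch only; values are placeholders
[streamfwd]
processingThreads = 4
maxTcpSessionCount = 100000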
Hello, I am managing Splunk roles. I want to adjust the capabilities assigned to roles, but unfortunately for a few of them I could not find out exactly what they do. Searching did not give me results, or the results were not satisfying. If you have an extract with all capabilities and their descriptions, please advise me what exactly the following capabilities do (screenshot attached).
Can anyone please share some scenario-based interview questions for a Splunk admin role?
I am working on adding some dropdowns to an existing Dashboard Studio dashboard. I have the queries working with no issues by referencing the dropdowns' token names wrapped in $...$. What I am working on now is updating a widget's title with the token's label, as that is the human-readable text, not the value used to drive the queries. Showing the value of the token's selection with $tok_aToken$ works, but how do I show the token's label? I have tried $tok_aToken_label$ and $tok_aToken.label$, and after hours of searching I have been unable to find a solution.
Hi, I have three values and I want to display them in a single value panel like the image below, which is from Tableau; I want to replicate the same in Splunk. Can it be done? If not, can we represent two values (GPA and website) in a single value visualization and Grade in the legend? Otherwise, please suggest another representation I can go with which displays all three values.
I used the query index="botsv2" Amber and found a capture_hostname of "matar".

Which email address seems to be linked to "matar"?

And who does that person send the attachment in the "feed" email to?

This is from https://github.com/splunk/botsv2
I would like to compare total throughput for two dates 60 days apart (say, current and -60d). The query in the CMC that generates the throughput is:

index=_internal (host=`sim_indexer_url` OR host=`sim_si_url`) sourcetype=splunkd group=per_Index_thruput series!=_*
| timechart minspan=30s per_second(kb) as kb by series

I need the series information, but it could be binned into one whole day.
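As a sketch only, keeping the same base search: one way to bucket the throughput into whole days per series, so that two days 60 days apart can be lined up and compared:

index=_internal (host=`sim_indexer_url` OR host=`sim_si_url`) sourcetype=splunkd group=per_Index_thruput series!=_*
| timechart span=1d sum(kb) as total_kb by series

Note this sums raw kb per day rather than reporting a per-second rate; whether that is the right notion of "total throughput" here is an assumption to verify.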
I'm having trouble using any action with an IPv6 value; any action of any app where I try to use an IPv6 address returns this error:

Nov 29, 09:25:17 : 'add_element_1' on asset 'akamai original': 1 action failed. (1) For Parameter: {"context":{"artifact_id":0,"guid":"857e066c-de68-4109-a58b-ee1e515b01dd","parent_action_run":[]},"elements":"2804:1b3:ac03:a6dd:d941:1714:85bb:8b4","networklistid":"7168_ORIGINALBLACKLIST"} Message: "Parameter 'elements' failed validation"

Nov 29, 09:25:17 : 'add_element_1' on asset 'akamai original' completed with status: 'failed'. Action Info: Size : 336 bytes : [{"app_name":"Akamai WAF","asset_name":"akamai original","param":{"context": {"guid": "857e066c-de68-4109-a58b-ee1e515b01dd", "artifact_id": 0, "parent_action_run": []}, "elements": "2804:1b3:ac03:a6dd:d941:1714:85bb:8b4", "networklistid": "7168_ORIGINALBLACKLIST"},"status":"failed","message":"Parameter 'elements' failed validation"}]

I always receive the message "Parameter 'elements' failed validation"; in this case it is an app that adds an IP to an Akamai network list. If anyone is successfully using IPv6, I would be glad if you could share how. Thanks.
Hi Team, I came across an issue where I have the below sample logs in a file:

15:30:31.396|Info|Response ErrorMessage: ||
15:30:36.610|Info|Logging Rest Client Request...||
15:30:36.610|Info|Request Uri: https://abc-domain/api/xy/Identify||
15:30:36.694|Info|Logging Rest Client Response...||
15:30:36.694|Info|Response Status Code: 401||
15:30:36.710|Info|Response Status Description: Unauthorized||
15:30:36.741|Info|Response Content: ||
15:30:36.741|Info|Response ErrorMessage: ||
15:30:36.762|Info|Logging Rest Client Request...||

I am using Splunk forwarder version splunkforwarder-8.2.4-87e2dda940d1-x64-release with the below props.conf settings:

[xyz:mnl]
LB_CHUNK_BREAKER = ([\r\n]+)

On the Splunk portal I am not getting one line as one event; instead I am getting multiple lines as a single event, as shown below.
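For illustration only, not a confirmed fix: this is the kind of per-line event-breaking stanza that is often applied on the parsing tier (indexer or heavy forwarder) for logs like these. The sourcetype name is taken from the post; the time settings are assumptions based on the sample timestamps:

# props.conf on the parsing tier -- illustrative sketch only
[xyz:mnl]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %H:%M:%S.%3N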
We have a situation where the application sends its logs in syslog format, but we don't have a syslog server to receive them. Instead, can we make the UF (installed on the same app server) receive those syslog events and forward them to Splunk Cloud? Note: we don't have a physical location for the logs on the app server to monitor with the UF.
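For reference, a universal forwarder can be given a network input via inputs.conf; a minimal sketch, where the port, sourcetype, and index are placeholder assumptions:

# inputs.conf on the universal forwarder -- placeholder values
[udp://514]
sourcetype = syslog
index = main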
Hi, I want to display results only for users who have both the IDs AR9 and AD. Below is sample data; I have about 10k results being generated with multiple values, but I need to display only those users who have both AR9 and AD.

USER    ID
John    AD
John    AY9
Riya    AD
Toby    AR9
Nathan  AD
Nathan  AR9
Sam     AD
Sam     AR9

Thanks!
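As a sketch only (the field names USER and ID are assumed from the sample data above), one way a "has both values" filter is often expressed in SPL:

... | stats values(ID) as ids by USER
| where isnotnull(mvfind(ids, "^AD$")) AND isnotnull(mvfind(ids, "^AR9$"))

mvfind returns NULL when the multivalue field has no match, so only users whose collected IDs include both AD and AR9 survive the where clause.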
Hi All,

I am having a very weird issue where I cannot see a report in the Splunk UI. When I search using Filter: All, I can see the report, but when I set the filter to 0, I get 'no searches, reports, and alerts found'. This couldn't be a case of visibility, as that configuration isn't set in the conf file. The specs set in the conf file are attached below for reference. I have also attached the metadata file, as there is no access control information set for this specific saved search. There are 6 more saved searches which I can see when I filter using Report, but not this specific one.

No clue why the report is not found when filtering.

Thanks in advance.

Pravin
\"message\": \"Invalid Application ID\", \"messages\": null, \"error_response\": null, Need to extract the above message field without dropping other log messages. Like Nodrop option 
Hello, I have this query:

index="report" Computer_Name="*"
| chart dc(Category__Names_of_Patches) as totalNumberOfPatches by Computer_Name
| eval exposure_level = case(
    totalNumberOfPatches >= 3 AND totalNumberOfPatches <= 6, "Low Exposure",
    totalNumberOfPatches >= 7 AND totalNumberOfPatches <= 10, "Medium Exposure",
    totalNumberOfPatches >= 11, "High Exposure",
    totalNumberOfPatches == 2, "Compliant",
    totalNumberOfPatches == 1, "<not reported>",
    1=1, "other")
| stats count(Computer_Name) as totalNumberOfPatches by exposure_level
| eval category=exposure_level

It looks like I've lost the _time field along the way, so when I try to run timechart I get no results.
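As a sketch only, reusing the field names from the query above: one way _time is commonly preserved for a timechart is to bucket it before aggregating and then chart over time at the end (the 1-day span is an assumption):

index="report" Computer_Name="*"
| bin _time span=1d
| stats dc(Category__Names_of_Patches) as totalNumberOfPatches by _time Computer_Name
| eval exposure_level = case(
    totalNumberOfPatches >= 3 AND totalNumberOfPatches <= 6, "Low Exposure",
    totalNumberOfPatches >= 7 AND totalNumberOfPatches <= 10, "Medium Exposure",
    totalNumberOfPatches >= 11, "High Exposure",
    totalNumberOfPatches == 2, "Compliant",
    totalNumberOfPatches == 1, "<not reported>",
    1=1, "other")
| timechart span=1d count by exposure_level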
Hi, I have a dashboard in Splunk and I have a question about the query. I have a row of fields and a column, and I want to apply a specific color when a specific field is true. How do I do that? The line in the dashboard for the specific column looks like this:

<format type="color" field="nameOfColumn">
  <colorPalette></colorPalette>
</format>
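For illustration only, a Simple XML color format that maps specific cell values to colors typically looks like the sketch below; the column name and color codes are assumptions, not taken from the dashboard in question:

<format type="color" field="nameOfColumn">
  <colorPalette type="map">{"true": #53A051, "false": #DC4E41}</colorPalette>
</format>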