Greetings! We are trying to generate a table from the output of a Splunk query. We want to pipe (|) this into our query but do not know how to do this. Can someone assist?

This is the output after we ran our Splunk query:

Feb 13 20:36:21 hostname1 sshd[100607]: pam_unix(sshd:session): session opened for user user123 by (uid=0)
Feb 13 20:36:23 hostname2 sshd[100608]: pam_unix(sshd:session): session opened for user user345 by (uid=0)

We want to capture the table in this form:

Time               Hosts       Users
Feb 13 20:36:21    hostname1   user123
Feb 13 20:36:23    hostname2   user345

And so on. How do we do this? Thank you in advance!
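A minimal sketch of how this could be done, assuming the username is not already an extracted field and that _time and host are populated by the sourcetype (the index and sourcetype names here are placeholders):

index=your_index sourcetype=your_sshd_sourcetype "session opened for user"
| rex "session opened for user (?<Users>\S+)\s+by"
| table _time host Users
| rename _time as Time, host as Hosts

The rex command pulls the username out of the raw message; table and rename then shape the columns into the layout shown above.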
I need some help updating the mmdb file for the iplocation command. I've read the other forum questions regarding this, as well as the docs, and I am a bit confused.

I initially uploaded the new mmdb file from MaxMind, GeoLite2-City.mmdb. I uploaded it through the GeoIP panel under the lookups tab. It uploads, but I can't seem to find the file afterwards. I am looking on the specific server that I uploaded the file to; we have a clustered environment, but that one specific server I uploaded it to should have it. I ran locate and find commands, but could not locate it. We still have the original under $SPLUNK_HOME/share/dbip-city-lite.mmdb. Even though the dropbox for the mmdb file showed a successful upload, I can not find it anywhere. I don't see any trace of the upload through splunkd, or through /export/opt/splunk/var/run/splunk/upload/, or through any find or locate command.

I wanted to update the file path to include both databases, and I know I needed to change limits.conf and update it to include both paths. But the question is: how do I change limits.conf so that it replicates? We don't have any app named TA-geoisp or anything similar, and that's what these forums and docs reference.

Somewhere I saw that I could update the search app's limits.conf and just push that from the shcluster directory, as that will push a bundle change out to all search heads in the cluster. Since the search app is the default app, we could just use that app to point to the mmdb files. But we don't have the search app located under our $SPLUNK_HOME/etc/shcluster/apps/. We don't seem to have the search app under our cluster manager/deployer shcluster directory. I think I might be missing something.

I would basically just like to update limits.conf to point to the new directory path of both of the mmdb files. I'd like to just edit limits.conf to look like:

[iplocation]
MMDBPaths = /path/to/your/GeoIP2-City.mmdb,/path/to/your/dbip-city-lite.mmdb

The question I'm trying to ask here is: when I upload the file through the GUI, where does the file end up? And if I wanted to push these changes manually to all search heads and indexers from the deployer and deployment server, how do I go about replicating the folder that holds the mmdb as well as the limits.conf that holds the paths to the files?

Thank you for any assistance.
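A minimal sketch of a deployer-pushed app that bundles the database alongside the limits.conf change; the app name TA-geoip-update and the paths are illustrative rather than an official package, and the [iplocation] stanza is carried over from the question rather than verified:

$SPLUNK_HOME/etc/shcluster/apps/TA-geoip-update/
    default/limits.conf
    data/GeoLite2-City.mmdb

default/limits.conf:

[iplocation]
MMDBPaths = /opt/splunk/etc/apps/TA-geoip-update/data/GeoLite2-City.mmdb,/opt/splunk/share/dbip-city-lite.mmdb

Running splunk apply shcluster-bundle on the deployer would then distribute both the mmdb file and the limits.conf to every search head cluster member. Indexers would need the same app delivered separately (for example from the cluster manager's manager-apps directory), since the deployer only serves the search head cluster.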
I am relatively new to the Splunk coding space, so bear with me regarding my inquiry. Currently I am trying to create a table where each row has the _time, host, and a unique field extracted from the entry:

_Time   Host             Field-Type   Field-Value
00:00   Unique_Host_1    F_Type_1     F_Type_1_Value
00:00   Unique_Host_1    F_Type_2     F_Type_2_Value
00:00   Unique_Host_1    F_Type_3     F_Type_3_Value
00:00   Unique_Host_2    F_Type_1     F_Type_1_Value
00:00   Unique_Host_2    F_Type_2     F_Type_2_Value
00:00   Unique_Host_2    F_Type_3     F_Type_3_Value
...

The data given for each server:

Field-Type=F_Type_1,.....,Section=F_Type_1_Value
Field-Type=F_Type_2,.....,Section=F_Type_2_Value
Field-Type=F_Type_3,.....,Section=F_Type_3_Value

I have created 3 field extractions for the F_Type values:

(.|\n)*?\bF_Type_1.*?\b Section=(?<F_Type_1_Value>-?\d+)

This is what I have done so far for the table:

index="nothing" source-type="nothing" | first( F_Type_1) by host

I am not sure this is the best approach, and I can also refine the field extraction if needed. Generally, my thought process follows: Source | Obtain first entries for all the hosts | Extract field values | Create table. But I am currently hitting a road block in the syntax to create rows for each of the unique Field-Types and their values.
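A hedged sketch of an alternative that extracts the type/value pairs in one pass instead of one extraction per type, so the rows per Field-Type fall out of mvexpand (the index, sourcetype, and regex are placeholders to adjust to the real data):

index="nothing" sourcetype="nothing"
| dedup host
| rex max_match=0 "Field-Type=(?<ftype>\w+)[^\r\n]*?Section=(?<fvalue>-?\d+)"
| eval pair=mvzip(ftype, fvalue, "|")
| mvexpand pair
| eval Field_Type=mvindex(split(pair, "|"), 0), Field_Value=mvindex(split(pair, "|"), 1)
| table _time host Field_Type Field_Value

dedup host keeps the most recent event per host (the "first entries" step), rex with max_match=0 collects every type/value pair into multivalue fields, and mvzip/mvexpand turn each pair into its own row.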
Hello! I am trying to send data to Splunk using UDP. I tried to set it up using the documentation and watched a few videos on how to do it, but I can't get it right. I have the data coming into my HF from network devices, and it should then be sent to my indexers. After going through the setup I get this error message:

"Search peer splunk_indexer_02 has the following message: Received event for unconfigured/disabled/deleted index=<index> with source="source::udp:514" host="host::xx.xx.xx.xx" sourcetype="sourcetype::<sourcetype>. So far received events from 2 missing index(es)."

I created a new index during the setup but there is no data to search.
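For what it's worth, that message usually means the index named in the UDP input exists on the heavy forwarder but was never created on the indexers themselves. A minimal sketch, using a hypothetical index name network_syslog:

On the heavy forwarder, inputs.conf:

[udp://514]
index = network_syslog
sourcetype = syslog
connection_host = ip

On each indexer (or pushed from the cluster manager), indexes.conf:

[network_syslog]
homePath = $SPLUNK_DB/network_syslog/db
coldPath = $SPLUNK_DB/network_syslog/colddb
thawedPath = $SPLUNK_DB/network_syslog/thaweddb

Creating the index only on the HF (or only in the UI of a search head) is not enough; the indexers that actually store the data each need the indexes.conf stanza.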
Hi, I am working my way through some of the Splunk courses. I am currently on "Working with Time". In one of the videos the following command is used to find all results within the past day, rounding down: "| eval yesterday = relative_time(now(),"1d@h")". However, when I attempt this command myself, it simply prints the "yesterday" value, and it uses the time range specified in my time picker, not the one in the actual command. I was under the impression that any time specified within a command would automatically override the time picker. Was I mistaken in this? Or am I perhaps using the command incorrectly? Any help would be greatly appreciated.
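A small sketch to illustrate the distinction, with _internal used as a stand-in index: relative_time() only calculates a timestamp and stores it in a new field; it never filters events. The time picker, or explicit earliest/latest terms in the search itself, are what actually restrict which events come back, and earliest/latest in the search do override the picker:

index=_internal earliest=-1d@h latest=now
| eval yesterday = relative_time(now(), "-1d@h")
| eval yesterday_readable = strftime(yesterday, "%F %T")
| table _time yesterday_readable

Note the minus sign in "-1d@h", which is the usual way to express one day in the past.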
My company is transitioning from an on-premise MFA setup within ADFS to the Azure MFA setup.  What's the best approach to getting those MFA events into Splunk?  Does the Splunk Addon for Microsoft Azure (splunkbase 3757) meet that goal?  
Been struggling for a while on this one. On-prem Splunk Enterprise, v9.1.2, running on CentOS 7.9.

Just trying to find a consistent way to upload log files through HTTP Event Collector (HEC) tokens. I found the whole RAW vs JSON thing confusing at first and thought the only way to specify/override values like host, sourcetype, etc. was to package up my log file in the JSON format. Discovered today that you can specify those values in the RAW URL, like so:

https://mysplunkinstance.com:8088/services/collector/raw?host=myserver&sourcetype=linux_server

which was encouraging. It seemed to work, and I think I've gotten further ahead. I now have this, effectively, as my curl command running in a bash script:

curl -k https://mysplunkinstance.com:8088/services/collector/raw?host=myserver&sourcetype=linux_server -H "Authorization: Splunk <hec_token>" -H "Content-type: plain/text" -X 'POST' -d "@${file}"

Happy to report that I now see the log data. However, it only seems happy if it's a single-line log. When I give it a log file with more lines, it just jumbles it all together. I thought it would honour the configuration rules we have programmed for sourcetype=linux_secure (from community add-ons and our own updates), but it doesn't. Loading the same file through Settings -> Add Data has no problem properly line-breaking per the configuration. I'm guessing there is something I am missing in how one is meant to send RAW log files through HEC?
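A hedged sketch of the same call with two likely culprits addressed: in bash an unquoted & backgrounds the command and silently drops the sourcetype parameter, and curl's -d strips newlines from the file, while --data-binary preserves them so the indexer's line-breaking rules have something to work with (hostname, token, and sourcetype are as in the question):

curl -k "https://mysplunkinstance.com:8088/services/collector/raw?host=myserver&sourcetype=linux_secure" \
  -H "Authorization: Splunk <hec_token>" \
  -H "Content-Type: text/plain" \
  -X POST \
  --data-binary "@${file}"

If the props for linux_secure (LINE_BREAKER, SHOULD_LINEMERGE, time settings, and so on) live on the indexers or the HEC-receiving instance, the raw endpoint should then apply them per event, the same way Add Data does.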
Anyone know how, and what path, to query on a Splunk Cloud instance to pull the existing SAML configuration details and certificate? I can view the information by browsing to Settings -> Authentication method -> SAML -> SAML configuration. I want to be able to export that information, if it is captured in a file, as a backup prior to migrating to a different authentication method. Thanks in advance.
Splunk is pleased to announce the general availability of Splunk Enterprise 9.2, our latest product innovation to help you drive digital resilience.

Highlights of the latest release include improvements to existing platform functionality, such as:

Deployment server scalability
Significant enhancements to Deployment Server make it more resilient and highly available. Deployment Server clusters make it possible to coordinate functionality across multiple deployment servers.

Dashboard Studio
Dashboard Studio now offers new drill down actions, enhanced visualizations, a bigger and better code editor, and a Classic to Studio dashboard conversion report!

Federated Search for Splunk - Lookup command improvements for standard mode federated search
When you use the lookup command in standard mode federated searches, you can set local=true in the search to force the lookup portion of the search (and all following commands) to be processed on the search head of your local Splunk platform deployment.

But wait, there's more! For a complete list of all that's new in Splunk Enterprise 9.2, check out the release notes.

Upgrading from an earlier version of Splunk Enterprise? No worries, we got you - read Splunk Docs for a guide on how to upgrade.

As always, happy Splunking!
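A small illustrative sketch of the local=true behavior described above; the federated index and lookup names here are placeholders, not taken from the release notes:

index=federated:remote_web status=404
| lookup local=true http_status_codes status OUTPUT description
| stats count by status, description

With local=true, the lookup and every command after it run on the local search head instead of on the remote deployment, which is useful when the lookup file only exists locally.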
Splunk Lantern is Splunk's customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that's possible with data sources and data types in Splunk.

This month we're featuring our annual rundown of the Lantern articles that are getting the most views, as well as sharing some interesting site metrics with you from our past financial year. We've also published new use cases, product tips, and more! If you want to jump straight to our new articles, scroll to the bottom to find them.

Splunk Lantern's Top Articles

Splunk has just ended its financial year, so here on Team Lantern we've been looking at our yearly metrics to see how much we've grown. And our growth has been amazing! Over the past financial year, Lantern has seen nearly a million unique page views - 975,940, which compared to last year's 613K represents a 59% increase. We've welcomed 314k new users to Lantern, a 75% increase year-on-year. And we have grown our passionate base of returning users to 310k, a figure that's nearly doubled from last year's 161k. We're deeply proud of how we've grown to serve so many of you with articles that help you get more value from your Splunk implementation.

While we offer hundreds of articles in dozens of areas of interest, here are the pages that came out on top with the most page views over the past year in each of our categories. We hope that you can be inspired by the same Lantern articles that inspired so many Splunk users over the past year!

Security

Most popular use cases published in FY24
Assessing and expanding MITRE ATT&CK coverage in Splunk Enterprise Security
Protecting Operational Technology (OT) environments
Detecting consumer bank account takeovers

Most popular use cases of all time
Implementing risk-based alerting in Splunk Enterprise Security
Using threat intelligence in Splunk Enterprise Security
Assessing and expanding MITRE ATT&CK coverage in Splunk Enterprise Security

Most popular product tips published in FY24
Using Threat Intelligence Management
Configuring Windows security audit policies for Enterprise Security visibility
Sending events from the Splunk platform to SOAR

Most popular product tips of all time
Using the Splunk Enterprise Security assets and identities framework
Onboarding data to Splunk Enterprise Security
Configuring Windows security audit policies for Enterprise Security visibility

Platform

Most popular use cases published in FY24
Detecting malicious activities with Sigma rules
Monitoring major Cloud Service Providers (CSPs)
Building a data-driven law enforcement strategy

Most popular use cases of all time
Detecting a ransomware attack
Monitoring for network traffic volume outliers
Investigating a ransomware attack

Most popular product tips published in FY24
Replacing null values by using the fillnull and filldown commands
Using ingest actions in Splunk Enterprise
Working with multivalue fields

Most popular product tips of all time
Writing better queries in Splunk Search Processing Language
Replacing null values by using the fillnull and filldown commands
Using ingest actions in Splunk Enterprise

Observability

Most popular use cases published in FY24
Managing the lifecycle of an alert: from detection to remediation
Identifying DNS reliability and latency issues
Monitoring availability and performance in non-public applications

Most popular use cases of all time
Managing the lifecycle of an alert: from detection to remediation
Monitoring Kubernetes pods
Monitoring API transactions

Most popular product tips published in FY24
Getting started with the Microsoft Teams Add-on for Splunk
Collecting Mac OS log files
Getting Docker log data Into Splunk Cloud Platform with OpenTelemetry

Most popular product tips of all time
Getting started with Microsoft Azure Event Hub
Getting started with the Microsoft Teams Add-on for Splunk
Installing Splunk Connect For Syslog (SC4S) on a Windows network

Huge thanks is due to all of our contributors who share their helpful knowledge through our articles. If you're a Splunker who could write an article for us that might make it into our most popular lists next year, then drop us a comment below!

This Month's New Articles

Here's the complete list of everything that's new on Lantern, published over the month of January:

Splunk 9.1.3 FAQ
Using Admin Config Service (ACS) in Splunk Cloud Platform FedRAMP environments
Migrating to Mission Control
Converting complex data into metrics with Edge Processor
Using Dashboard Studio inputs in the canvas
Using the events viewer visualization in Dashboard Studio
Showing and hiding Dashboard Studio elements based on data availability
Converting a Classic dashboard to Dashboard Studio
Using the Link to Search and Link to Reports interactions in Dashboard Studio
Configuring the trellis layout in Dashboard Studio

We hope you've found this update helpful. Thanks for reading!

Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
I have a distributed environment with 2 independent search heads.  I run the same search on both, and one shows a field that the other does not.  I can't figure out why.  I can't find any data models that mention the index or sourcetype I'm searching.  Is there a way to show me if a data model is being used in my search? The logs are coming from an IBM i-series system using syslog through sc4s.
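A couple of hedged sketches that may help narrow this down; the field name is a placeholder. In practice a field appearing on only one search head more often comes from a search-time extraction, calculated field, or automatic lookup defined on that head than from a data model, so comparing the props configuration on each search head is a reasonable first step:

| rest /services/data/props/extractions splunk_server=local
| search value="*the_missing_field*"
| table stanza attribute value eai:acl.app

And to confirm what each search head actually returns for the field:

index=your_index sourcetype=your_sourcetype
| fieldsummary
| search field="the_missing_field"

Running both on the two search heads and comparing the output should show which knowledge object, and which app, supplies the extra field.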
Hello all, I have an app where, to perform an action, I can't pass the required parameter as a list, only as a string. This is a bit of an issue because I am using a data value from action results as the parameter to insert, for example: "my_App_action:action_result.data.*.device_id". As far as I understand, the action_result.data collection is always an array, so I cannot use this action-result parameter directly as the parameter for my action. The only workaround I found is to add a code block that takes the datapath parameter as input and outputs value_name[0]. Is there a better workaround for this?
In previous versions of Splunk (at least up to 9.1.0), we could re-arrange the Apps menu by dragging the apps up or down in the Launcher app.  Now that Launcher seems to have been rebuilt with Dashboard Studio that capability is no longer present.  Is there a new way for users to re-arrange their Apps menu?
Hello, I'm looking to change our indexing architecture. We have dozens of AWS accounts. We use the Splunk AWS app to ingest the data from an SQS queue. Currently, we have a single SQS-based input for each individual AWS account that grabs all the data and applies the index and a catch-all sourcetype named aws:logbucket. From there, we route the data to a more specific sourcetype based on the type of data: aws:logbucket is changed to aws:cloudwatch:vpcflowlogs, aws:cloudtrail, aws:config, etc.

This has worked well enough for us, but I now have a new requirement. For each of these AWS accounts, I want a separate index for the specific AWS service by AWS account, e.g. awsaccount1-vpcflow, awsaccount1-cloudtrail, awsaccount2-vpcflow, etc. We use SmartStore (S2), so storing aws:cloudtrail with aws:cloudwatch:vpcflow hurts the performance of aws:cloudtrail searches. Searching for aws:cloudtrail data requires us to write all the aws:cloudwatch:vpcflow data back to disk as well; this has accounted for 120x more buckets written to disk for aws:cloudtrail since it's stored with VPC Flow. Splitting these indexes to be more specific will have huge performance improvements for my Splunk environment.

I would like to use a lookup table to match the source of the SQS-based S3 input and set the index and sourcetype. I am unable to do this using regex and FORMAT, since the bucket names and index names are not a 1-1 match, e.g. for s3://acc1/cloudtrail/... I would like a lookup table that routes it to index account1 and sourcetype aws:cloudtrail, and for s3://acc2/config/... I would like it routed to index account2 and sourcetype aws:config.

After that long summary... how do I technically implement this, and how will a lookup with ~300-400 different rows affect performance?

Thank you, Nate
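For what it's worth, a hedged sketch of one approach: index-time routing in props/transforms cannot consult a lookup file, but the lookup can be used offline to generate one transforms stanza per account/service, with the result pushed to the indexers or heavy forwarders. All names and paths below are illustrative:

props.conf:

[aws:logbucket]
TRANSFORMS-route_by_source = route_acc1_cloudtrail, route_acc2_config

transforms.conf:

[route_acc1_cloudtrail]
SOURCE_KEY = MetaData:Source
REGEX = ^source::s3://acc1/cloudtrail/
DEST_KEY = _MetaData:Index
FORMAT = account1-cloudtrail

[route_acc2_config]
SOURCE_KEY = MetaData:Source
REGEX = ^source::s3://acc2/config/
DEST_KEY = _MetaData:Index
FORMAT = account2-config

Sourcetype rewriting works the same way with DEST_KEY = MetaData:Sourcetype and FORMAT = sourcetype::aws:cloudtrail. A few hundred generated stanzas like this should stay manageable at index time, since only the transforms attached to the incoming sourcetype are evaluated for each event.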
I am getting an error when installing the PHP agent on a RHEL server.

PHP version id: 7.4
PHP extensions directory: /usr/lib64/php/modules
PHP ini directory: /etc/
PHP thread safety: NTS
Controller Host: https://xxxxxxxx.saas.appdynamics.com/controller/
Controller Port: 8090
Application Name: WebApp
Tier Name: DemoWebTier
Node Name: DemoNode
Account Name: xxxxxxxx
Access Key: xxxxxxxx
SSL Enabled: true
HTTP Proxy Host:
HTTP Proxy Port:
HTTP Proxy User:
HTTP Proxy Password File:
TLS Version: TLSv1.2

[Error] Agent installation does not contain PHP extension for PHP 7.4

I was installing the agent using the shell script method. Please let me know if someone has faced a similar issue and how we can fix it. Thanks
Hello,    How do I obtain an NFR license (or the like)? We have integrations with Splunk but no way to test/evaluate them. The previous parties that handled this are no longer with the company and we don't have much information. 
Hi, I would like some help on how to extract the next 5 lines after a keyword, where the extraction also includes the full line the keyword is part of. Example below, where the keyword is 'ethernet':

**********************************************
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Down Not configured
reth1 Up 1
reth2 Up 1
reth3 Up 1
reth4 Down Not configured
reth5 Down Not configured
reth6 Down Not configured
reth7 Down Not configured
reth8 Down Not configured
reth9 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
*****************************************

An example value of the field would then be:

Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Down Not configured
reth1 Up 1
reth2 Up 1
reth3 Up 1

Thanks. If it can be generic enough so that I can use it for other rex searches with similar data, that would be great.
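A hedged sketch of a generic rex for this, assuming the raw event contains literal newlines; swap the keyword and adjust the {0,5} repetition to reuse it on similar data (the field name is a placeholder):

... your base search ...
| rex "(?<keyword_block>[^\r\n]*ethernet[^\r\n]*(?:[\r\n]+[^\r\n]*){0,5})"
| table keyword_block

The first [^\r\n]*...[^\r\n]* part captures the whole line containing the keyword, and (?:[\r\n]+[^\r\n]*){0,5} appends up to the next five lines.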
index=myindex source="/var/log/nginx/access.log"
| eval status_group=case(status!=200, "fail", status=200, "success")
| stats count by status_group
| eventstats sum(count) as total
| eval percent=round(count*100/total,2)
| where status_group="fail"

Looking at nginx access logs for a web application. This query tells me the number of failures (non-200), the total number of calls (all messages in the log), and the percentage of failures vs the total, as follows:

status_group   count   percent   total
fail           20976   2.00      1046605

What I'd like to do next is timechart these every 30 minutes to see what percentage of failures I get in 30-minute windows, but the only attempt where I got close calculated it as a percentage of the total calls in the whole log, skewing the result completely. Basically I want a row like the one above for every 30 minutes of my search period. Feel free to rewrite the entire query, as I cobbled this together anyway.
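A hedged sketch of one way to get the failure percentage per 30-minute window, computing both counts inside each bucket so the percentage is relative to that window's own total (index and source are taken from the question):

index=myindex source="/var/log/nginx/access.log"
| eval is_fail=if(status!=200, 1, 0)
| timechart span=30m sum(is_fail) as fail, count as total
| eval percent=round(fail*100/total, 2)
| table _time fail total percent

Because timechart buckets the events first, fail and total are per-window figures, so percent is no longer skewed by the overall log volume.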
I have a "cost" for two different indexes that I want to calculate in one and the same SPL. As the "price" is different depending on index, I can't just use a "by" clause in my count/sum, as I don't know how to apply the separate costs that way. Let's say...

idxCheap costs $10 per event.
idxExpensive costs $20 per event.

I've written this SPL that works, although the "cost" data ends up in a unique column for each index. The count is still in the same column.

index=idxCheap OR index=idxExpensive
| stats count by index
| eval idxCheapCost = case(index="idxCheap", count*10)
| eval idxExpensiveCost = case(index="idxExpensive", count*20)

The results look like this:

count   idxCheapCost   idxExpensiveCost   index
44892   448920                            idxCheap
155                    3100               idxExpensive

Any pointers on how to most efficiently and dynamically achieve this?
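A hedged sketch that keeps the cost in a single column by deriving the per-event price first, so adding a third index is just one more case() branch (prices as stated in the question):

index=idxCheap OR index=idxExpensive
| stats count by index
| eval unit_cost=case(index="idxCheap", 10, index="idxExpensive", 20)
| eval cost=count*unit_cost
| table index count unit_cost cost

If the price list keeps growing, a small lookup file with index and unit_cost columns, applied with something like | lookup price_lookup index OUTPUT unit_cost, would avoid hard-coding the case() entirely (price_lookup is a hypothetical lookup name).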
I have used Splunk to threat hunt many times and have aspirations to build a distributed Splunk instance in the future. I decided to start learning the installation, configuration, and deployment process of Splunk by building a standalone instance. I get to a point where I think I have completed all the steps necessary to have a functioning Splunk setup (connections are established on 8089 and 9997) and my web page is good. As soon as my apps are pushed to my client, Splunk starts throwing an error stating indexers and queues are full. It also appears I am getting no logs from my applications. Any help is greatly appreciated.
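For what it's worth, a hedged sketch of a search that often helps with "queues are full" symptoms on a standalone instance; it reads Splunk's own metrics.log and shows how full each pipeline queue is over time (no assumptions beyond the _internal index being searchable):

index=_internal source=*metrics.log group=queue
| eval fill_pct=round(current_size_kb*100/max_size_kb, 2)
| timechart span=5m max(fill_pct) by name

If queues start backing up right after apps are pushed to the client, the new configuration is the usual suspect, for example an outputs.conf that forwards to an indexer the standalone box cannot reach, so the indexing pipeline blocks behind it.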