All Topics

Has anyone run across the following issue before? I am trying to implement the Splunk SOAR app, but we are not able to select the "create server" button. We have referred to the online documentation, but we do not see the option to replace each instance of phantom with splunk_app_soar.
Dear Splunkers, I have a dashboard with 2 pie charts and one statistical table. I have set a drilldown on both pie charts so that when I select a slice on either pie chart, the data in the statistical table refreshes accordingly. The problem is that when I click on any slice of the 1st pie chart, the data in the table loads correctly; but when I then click on any slice of the 2nd pie chart, the data in the table includes both the previous token value and the current one. I need the previous token value to be reset when I click on the 2nd pie chart. Could someone please help?
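A minimal Simple XML sketch of one way to do this (the token names tok_pie1 and tok_pie2 are hypothetical): have each pie's drilldown set its own token and explicitly unset the other's, so the table's search only ever sees the most recent selection:

<chart id="pie1">
  <search>...</search>
  <drilldown>
    <set token="tok_pie1">$click.value$</set>
    <unset token="tok_pie2"></unset>
  </drilldown>
</chart>
<chart id="pie2">
  <search>...</search>
  <drilldown>
    <set token="tok_pie2">$click.value$</set>
    <unset token="tok_pie1"></unset>
  </drilldown>
</chart>

If the table only ever filters on a single value at a time, an even simpler option is to have both pies set the same token name, so each click overwrites the previous one.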
Hello from the Splunk Assist Team,

Splunk Assist is a cloud-connected service for Splunk® Enterprise that puts your telemetry data to work! It inspects Splunk deployments and makes cloud-powered recommendations to help bridge security gaps, optimize cost, and improve performance in your environment. With Assist, you can: 1) detect critical issues, 2) receive in-product recommendations to fix detected issues, and 3) receive continuous improvements (no need to keep up with version upgrades and security patches). We will bring new capabilities to Assist regularly and would love to answer any questions you might have about the Assist app.

Before you search through previous conversations looking for assistance, we want to provide you with some basic information and quick resources:
- Want a high-level overview of what Splunk Assist is without reading the docs? See Splunk Assist: Cloud-Powered Insights Just for You, at Your Fingertips, or view the .conf22 video presentation + demo.
- Want to get into the nitty-gritty details? Start with the Splunk Assist documentation.
- Want to request more features? Add your ideas and vote on other ideas at the Assist Ideas Portal.

Please reply to this thread with any questions or to get extra help!
Hi Splunk Community,

I'm trying to get syslogs from my router to show up in the Splunk "NetFilter IPtables" app. It would be great to get this view showing results where it currently says "no results found".

I have configured the router to send syslog to my Mac's IP address on port 514, and I set up a data input in Splunk Enterprise on port 514 to listen for Syslog (as opposed to "linux_messages_syslog"). That seems to transmit, because if I run "sudo tcpdump -lns 0 -w - udp and port 514 | strings" from my CLI, I get a result like this every few seconds:

x?bu? B?̫,? ?,?? ~??<132> 2022-07-25T12:13:03.696064-04:00 L4 FIREWALL[9488]: nflog_log_fw(), action=DROP reason=POLICY-INPUT-GEN-DISCARD hook=INPUT mark=136314880 IN=br2 OUT= MAC=00:00:00:00:00:00:8484:26:2b:9b:ea:bd:src=167.94.138.139 DST=xx.yy.zzz.www LEN=44 TOS=0x00 PREC=0x00 TTL=40 ID=2539 PROTO=TCP SPT=19169 DPT=8901 SEQ=3022021036 ACK=0 WINDOW=1024 RES=0x00 SYN URGP=0 OPT (MSS=1460 )

And this seems to work to some extent in Splunk, because I'm also picking up the messages in "Splunk Search and Reporting" and in CIM.

This app seems to depend on:
- TA_Netfilter (I unpacked and copied TA_Netfilter into "/Applications/Splunk/etc/apps/TA_netfilter/". It seems to be installed.)
- Splunk Enterprise 6.2+ (I'm running 9.0.0.1)
- Splunk Common Information Model 4.3+ (I have installed 5.0.1 on Splunk Enterprise; I did not change the configuration in http://localhost:8000/en-US/app/search/cim_setup)

I suspect I'm missing something simple. Could you please suggest some reasons why NetFilter IPtables might not be working properly, to help me troubleshoot?

Thanks, Rob
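For reference, a minimal inputs.conf sketch for the UDP input. The sourcetype value here is an assumption: check TA_netfilter's default/props.conf for the exact sourcetype its field extractions key on, and make sure the input assigns it.

# inputs.conf (the sourcetype name is an assumption; verify against the TA)
[udp://514]
sourcetype = netfilter
# store the sending host's IP as the host field
connection_host = ip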
Hello, we have a heavy forwarder that occasionally receives an event that exceeds the bounds of the Splunk indexers. When this happens, the heavy forwarder freezes and stops sending data to the indexers. Is there a setting to tell the heavy forwarder to discard that event from the queue and keep going? Our only workaround at this time is to restart the heavy forwarder. Thank you.

This is our limits.conf:

[kv]
limit = 150
indexed_kv_limit = 0
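One possible direction (a sketch, not a confirmed fix for the freeze): props.conf on the heavy forwarder can cap event size at parse time, so an oversized event is truncated rather than sitting in the queue whole. The sourcetype stanza name is a placeholder for whichever sourcetype produces the oversized events.

# props.conf on the heavy forwarder ("my_sourcetype" is a placeholder)
[my_sourcetype]
# maximum raw line length in bytes; longer lines are truncated
TRUNCATE = 100000
# maximum number of lines merged into a single event
MAX_EVENTS = 1000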
The rex command I'm using:

(?:\w+\s\:\s)(?<command>[^\;]+)?\;\s(?<Datainput>[^\s]+)\s\;\s(?<Extra>[^\s]+)

Data:

1) command : TTY-unknown ; data ; ... ;
2) command : Random blurb ; TTY-unknown ; data ; ... ;
3) command : Some other blurb ; TTY-unknown ; data ; ... ;
4) command : TTY-unknown ; data ; ... ;
5) command : TTY-unknown ; data ; ... ;

When I rex this data, what I get is:

    command             TTY       datainput
1   unknown             data      ...
2   Random blurb        unknown   data
3   Some other blurb    unknown   data
4   unknown             data      ...
5   unknown             data      ...

What I want is output that doesn't skip the data in rows 2 and 3, so it would look like the table below. How do I accomplish this?

    command             TTY       datainput
1                       unknown   data
2   Random blurb        unknown   data
3   Some other blurb    unknown   data
4                       unknown   data
5                       unknown   data
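One possible fix (a sketch, assuming the TTY value always begins with the literal prefix "TTY-"): make the leading blurb capture optional and anchor the match on that prefix, so rows without a blurb leave command empty instead of capturing "TTY-unknown" into it:

| rex "\w+\s:\s(?:(?<command>[^;]+?)\s;\s)?TTY-(?<TTY>[^\s;]+)\s;\s(?<Datainput>[^\s;]+)(?:\s;\s(?<Extra>[^\s;]+))?"

Here the lazy [^;]+? plus the required " ; TTY-" that follows it means the command group only matches when a blurb actually precedes the TTY field.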
I have a very large Oracle database table that is being used as a log sink for an application, with high transaction throughput on the table. I would like to get the data in this table (not data about this table) into Splunk in as close to real time as possible. Unfortunately, I do not have access to the source code, so I cannot add a log sink that writes directly to Splunk. I realize I could read the data and move it in batches, but I'm wondering if there are any less intensive options, such as transaction log replication. What is the best practice for moving this type of data?
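The usual Splunk-side pattern is Splunk DB Connect with a "rising column" input, which polls on a schedule but keeps a checkpoint so each poll only reads new rows rather than rescanning the table. A sketch of the query shape (table and column names are hypothetical; the rising column should be a monotonically increasing value such as a sequence-backed primary key or an insert timestamp):

SELECT * FROM app_log_table
WHERE LOG_ID > ?          -- DB Connect substitutes the last checkpointed value
ORDER BY LOG_ID ASC

With a short polling interval this approaches near-real-time without the cost of full-table batch reads.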
Are there any controls to limit the size of a user search? The use case is Splunk Cloud: limiting a search if it downloads, for example, more than 10TB from SmartStore to the cache.
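For context, role-based limits in authorize.conf can bound how much a user's searches consume. A sketch follows; note these cap search time windows and artifact disk usage, not SmartStore cache downloads specifically, and in Splunk Cloud they are managed through the UI or Splunk Support rather than direct .conf edits:

# authorize.conf (the role name is a placeholder)
[role_limited_search]
# widest time window, in seconds, a search by this role may cover (1 day)
srchTimeWin = 86400
# MB of search artifacts each user of this role may keep on disk
srchDiskQuota = 1000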
What is the list of credentials that are acceptable for Just-in-Time entry? Or is there a way to add to that list when creating our own apps? Looking through the documentation for the metadata, I'm not seeing anything.
I converted an old Classic dashboard over to Dashboard Studio for experimentation and implementation, but one of the core features of the dashboard was a map with markers that listed the Name, Lat, and Lon of a location. After the conversion to Studio using the marker option, it no longer lets me show the Name of the location and only provides the Lat and Lon. Does anyone have a suggestion on what needs to be changed post-conversion? The code is:

"visualizations": {
  "viz_map_1": {
    "type": "splunk.maps",
    "options": {
      "center": [20, 30],
      "zoom": 1,
      "layers": [
        {
          "seriesColors": ["#00FF00", "#00FF00"],
          "type": "marker",
          "additionalTooltipFields": ["location"],
          "latitude": "> primary | seriesByName('Lat')",
          "longitude": "> primary | seriesByName('Lon')"
        }
      ]
    }
  }
}
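A sketch of a possible fix: marker tooltips can only show fields that the primary data source returns, so if the search exposes the location name as a field (assumed here to be called "Name"), adding it to additionalTooltipFields should surface it in the marker tooltip:

"additionalTooltipFields": ["Name", "location"]

If the tooltip still shows only Lat/Lon, check that the converted search actually still returns the Name column alongside Lat and Lon.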
Hey all, I have a summary table that shows the values below. Each error log, and each log in the 'Total logs' column (which counts both error logs and successful logs), has a unique timestamp.

Process   Error logs   Total logs
A         5            10
B         6            15
C         7            9

I want to find the total execution time of the error logs and of the total logs for each process, by adding up the execution times of the error/successful logs under each process. I am hoping to get a summary table like the one shown below.

Process   Error logs   Total logs   Total execution time
A         5            10           2 minutes
B         6            15           50 seconds
C         7            9            4 minutes

Any help would be much appreciated. Thanks!
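A sketch of one way to compute this with stats, assuming each raw event carries a numeric execution-time field and something that marks errors (the names exec_time and log_level below are hypothetical; adjust to your data):

index=my_app_logs
| eval is_error=if(match(log_level, "ERROR"), 1, 0)
| stats sum(is_error) AS "Error logs", count AS "Total logs", sum(exec_time) AS total_exec_secs BY Process
| eval "Total execution time"=tostring(total_exec_secs, "duration")
| fields - total_exec_secs

tostring(X, "duration") renders the summed seconds as HH:MM:SS; a second stats split by error/success would give the error-only total as well.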
Hello, I am trying to create a dashboard input based on a lookup table. I have a simple lookup with a monitor name and the list of all components it may apply to:

$ cat Itron_INS_monitors.csv
"Monitor_Name",Component
"AMM::DB::Unscheduled Jobs",DB
"APP:::Tibco::ERROR: Accept() failed: too many open files",TIBCO
"App::All::DB Connection Pool Exhausted","FWU GMR MPC MT NEM ODS THIRDPARTY TMB RMACA CAAS HCM NEC DMS DLCA * FPS SSNAGENT SSNAGENTFORWARDER TRAPROUTER AMMWSROUTE AMMJMSROUTE ODSJMSROUTE HCMWSROUTE MPCWSROUTE SENSORIQWSROUTE ODSWSROUTE AMMMULTISPEAK REG SAM PM SENSORIQ TBR ACTIVEMONITOR ZCU"

For some reason, mvexpand does not work. It is not memory, because my csv file has just ~100 lines. Please help! Thank you.
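A likely cause: mvexpand only operates on multivalue fields, and Component here is a single string that happens to contain spaces. Splitting it into a multivalue field first with makemv should let mvexpand work (a sketch):

| inputlookup Itron_INS_monitors.csv
| makemv delim=" " Component
| mvexpand Component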
I have 2 indexes, one with server events and one with server temperature readings. The server events come in when generated, and the temperature readings come in every 15 minutes. How do I create a summary index so that I can see all the events for each server in order? The goal is to use that summary index with MLTK and predict failures based on event sequence and temperature readings.

In SQL, I could do:

CREATE TABLE mydb.mytable AS
SELECT (fields) FROM table1.a
LEFT JOIN table2.b ON (primary_key)
ORDER BY (timestamp);

How can I achieve this in Splunk?
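In Splunk, the usual equivalent is to search both indexes together and write the combined, time-ordered results into a summary index with collect. A sketch (index and field names are hypothetical; this would normally run as a scheduled saved search over a recent window):

index=server_events OR index=server_temps
| fields _time, host, sourcetype, event_type, temperature
| sort 0 _time
| collect index=server_summary

Because both result sets share _time and host, MLTK can then read index=server_summary and see each server's events and readings interleaved in order, without an explicit join.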
First, I'm new to Splunk, learning as I go. Oh, BTW, I'm the Splunk 'person' now in my org, trying to figure out how to get MS Azure Gov into Splunk securely. Yay!

In preparation for this, I decided to look into the MS Security Graph app and where it should be installed. Today I learned that "Splunk recommends installing Splunk-supported add-ons across your entire Splunk platform deployment, then enabling and configuring inputs only where they are required" and "you can install any add-on to all tiers of your Splunk platform architecture – search tier, indexer tier, forwarder tier – without any negative impact." I got those from the Splunk doc site: Where to install Splunk add-ons - Splunk Documentation

Our architecture looks ad hoc more than anything. We have apps on one search head but not on others; we have apps on 1 or 2 indexers but not the rest. And that's just the apps from Splunkbase. So now I have the task of creating a spreadsheet of which instances have which apps, so that I can streamline and make it consistent across the board.

Question 1: Is it truly best to install an app on all instances, across all tiers? For example, does a forensic investigator tool that really only interacts with the Splunk portal (a search head) need to be on the forwarders and the indexers?

Question 2: Is there a way to export the list of apps installed on a Splunk instance (one approach is sketched after this list)? This is so I can make an easy spreadsheet of which server has which app and then start ensuring that each app is spread across the board.

Question 3a: Do I really need all of the MS add-ons: Microsoft Graph Security API add-on for Splunk, Microsoft Sysmon Add-on, Splunk Add-on for Microsoft Windows, Splunk Add-on for PowerShell, TA-microsoft-sysmon_inputs?

Question 3b: I don't really see others that I would have thought would be good, like Splunk Add-on for Microsoft Security (by Splunk), Splunk Add-on for Microsoft Office 365 (by Splunk), and others. Would it be beneficial to have those?

Question 4: Does anyone have experience, or do's and don'ts, with the Microsoft Graph Security API add-on for Splunk? I have been told this is the app to install and configure to ensure Azure Gov data is brought into Splunk securely.
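For Question 2, one approach (a sketch; splunk_server=* assumes the indexers are search peers of the search head you run this from, and standalone forwarders would need the CLI instead):

| rest /services/apps/local splunk_server=*
| table splunk_server, title, version, disabled

On instances you cannot reach over REST, "$SPLUNK_HOME/bin/splunk display app" lists the locally installed apps.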
We have configured Splunk SAML authentication using Azure as our IdP, but when I attempt to log in to the site, Splunk times out on its request to authenticate. When I check the Azure logs, all authentication from the IdP is successful and prompts for MFA; it fails after the token is passed back to Splunk to render the page.
If I want to create the pooling based upon the indexes instead of the indexer, is that possible?
Hi community, I have a problem with the Fidelis EDR add-on setup. I did not understand the error described below, which appears after filling in all the required fields.
Hi all, I'm looking to trigger an alert if our DHCP server loses connection with its partner DHCP server for more than 30 minutes. When the server loses connectivity, we get "EventCode=20255" in the logs. This happens fairly often due to patching, but the server is always back within 30 minutes, so we shouldn't get an alert in that case. When the connection is re-established, we get "EventCode=20254". The question is: how would I trigger an alert if more than 30 minutes elapse before "EventCode=20254"?
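One pattern (a sketch; the index name is a placeholder): run the search on a schedule, look back over a window longer than any expected outage, and alert on hosts whose most recent partner event is still the disconnect and is older than 30 minutes:

index=wineventlog (EventCode=20255 OR EventCode=20254) earliest=-24h
| stats latest(_time) AS last_seen, latest(EventCode) AS last_code BY host
| where tonumber(last_code)=20255 AND last_seen < relative_time(now(), "-30m")

A host only matches once a 20255 has gone 30 minutes without a following 20254, so patching reboots that recover quickly never fire the alert.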
Hi, how do I center the text in a column using Dashboard Studio?
In the File Monitoring extension (source: GitHub), the metric named "last modified time" returns the time in epoch format. How do we convert that epoch time to a normal, human-readable time format?
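If you are looking at the value in Splunk, strftime can render an epoch as readable text. A sketch (the field name last_modified_time is hypothetical, and if the metric is in epoch milliseconds rather than seconds, divide by 1000 first):

| eval last_modified_readable=strftime(last_modified_time, "%Y-%m-%d %H:%M:%S")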