All Topics


Hey all, I have a summary table that shows these values. Each error log and each log in the 'Total logs' column (which contains both error logs and successful logs) has a unique timestamp.

Process   Error logs   Total logs
A         5            10
B         6            15
C         7            9

I want to find the total execution time for the error logs and for the total logs for each process, by adding up the execution times of the error/successful logs under each process. I am hoping to get a summary table like the one shown below.

Process   Error logs   Total logs   Total execution time
A         5            10           2 minutes
B         6            15           50 seconds
C         7            9            4 minutes

Any help would be much appreciated. Thanks!
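A hedged SPL sketch of one way to do this, assuming the underlying events carry a process field, a status field distinguishing error from successful logs, and a numeric exec_time field in seconds; all of these names are assumptions, not taken from the question:

```
index=your_index
| stats count(eval(status="error")) AS "Error logs",
        count AS "Total logs",
        sum(exec_time) AS total_secs
  BY process
| eval "Total execution time" = tostring(total_secs, "duration")
| fields - total_secs
```

tostring(x, "duration") renders a number of seconds as a readable HH:MM:SS duration. If each log only has a timestamp rather than a duration, the per-log execution time would first have to be derived (e.g. as the gap between paired start/end events).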
Hello, I am trying to create a dashboard input based on a lookup table. I have a simple lookup with a monitor name and a list of all components it may apply to:

$ cat Itron_INS_monitors.csv
"Monitor_Name",Component
"AMM::DB::Unscheduled Jobs",DB
"APP:::Tibco::ERROR: Accept() failed: too many open files",TIBCO
"App::All::DB Connection Pool Exhausted","FWU GMR MPC MT NEM ODS THIRDPARTY TMB RMACA CAAS HCM NEC DMS DLCA * FPS SSNAGENT SSNAGENTFORWARDER TRAPROUTER AMMWSROUTE AMMJMSROUTE ODSJMSROUTE HCMWSROUTE MPCWSROUTE SENSORIQWSROUTE ODSWSROUTE AMMMULTISPEAK REG SAM PM SENSORIQ TBR ACTIVEMONITOR ZCU"

For some reason, mvexpand does not work. It is not a memory issue, because my csv file has only ~100 lines. Please help! Thank you
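mvexpand only operates on multivalue fields, and a space-delimited string like the Component column is still a single value until it is split. A sketch of the usual fix, assuming the goal is one row per component:

```
| inputlookup Itron_INS_monitors.csv
| makemv delim=" " Component
| mvexpand Component
```

makemv turns the space-delimited string into a true multivalue field, after which mvexpand can fan it out into separate rows.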
I have 2 indexes, one with server events and one with server temperature readings. The server events come in when generated and the temperature readings come in every 15 mins. How do I create a summary index so that I can see all the events for each server in order? The goal is to use that summary index for MLTK and predict failures based on event sequence and temperature readings. In SQL, I could do:

CREATE TABLE mydb.mytable AS
SELECT (fields)
FROM table1.a
LEFT JOIN table2.b ON (primary_key)
ORDER BY (timestamp);

How do I achieve this in Splunk?
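In Splunk a join usually isn't needed to interleave two sources in time order: both indexes can be searched together and the combined events written to a summary index with collect. A hedged sketch, with hypothetical index names and assuming host is the common key:

```
index=server_events OR index=server_temps
| fields _time host sourcetype
| collect index=server_summary
```

This would normally run as a scheduled search over a fixed window so events are collected incrementally. Searching index=server_summary afterwards (e.g. with | sort 0 host _time) then returns all events per server in order, ready to feed MLTK.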
First, I'm new to splunk, learning as I go. Oh, BTW, I'm the splunk 'person' now in my org. Trying to figure out how to get MS Azure Gov into splunk securely. Yay!

In preparation for this, I have decided to look into the MS Security Graph app and where it should be installed. Today I learned that "Splunk recommends installing Splunk-supported add-ons across your entire Splunk platform deployment, then enabling and configuring inputs only where they are required" and "you can install any add-on to all tiers of your Splunk platform architecture – search tier, indexer tier, forwarder tier – without any negative impact." I got those from the splunk doc site: Where to install Splunk add-ons - Splunk Documentation

Our architecture looks ad-hoc more than anything. We have apps on one search head but not on others, and apps on 1 or 2 indexers but not the rest. That's just the apps from splunkbase. So now I have the task of creating a spreadsheet of which instances have which apps, so that I may streamline and make it consistent across the board.

Question 1: Is it truly best to install an app on all instances, across all tiers? For example, a forensic investigator tool that really only interacts with the splunk portal (a search head): does it really need to be on the forwarders and the indexers?

Question 2: Is there a way to export the list of apps installed on a splunk instance? This is so I can make an easy spreadsheet of which server has which app and then start the task of ensuring each app is spread across the board.

Question 3a: Do I really need all of the MS add-ons: Microsoft Graph Security API add-on for Splunk, Microsoft Sysmon Add-on, Splunk Add-on for Microsoft Windows, Splunk Add-on for PowerShell, TA-microsoft-sysmon_inputs?

Question 3b: I don't see others that I would have thought would be good, like Splunk Add-on for Microsoft Security (by Splunk), Splunk Add-on for Microsoft Office 365 (by Splunk), and others. Would it be beneficial to have those?
Question 4: Anyone have experience, do's and don'ts for the Microsoft Graph Security API add-on for Splunk? I have been told this is the app to install and configure to ensure Azure Gov data is brought into splunk securely.
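For Question 2, one way to list installed apps is the REST endpoint, run from a search head whose search peers include the instances of interest (the splunk_server values are whatever your peers are named):

```
| rest /services/apps/local splunk_server=*
| table splunk_server title version disabled
```

The result can be exported to CSV from the search UI, which gives the per-server app inventory for the spreadsheet directly.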
We have configured Splunk SAML authentication using Azure as our IdP, but when I attempt to log in to the site, Splunk times out on its request to authenticate. When I check the Azure logs, all authentication from the IdP is successful and prompts for MFA; it fails after the token is passed back to Splunk to render the page.
Is it possible to create the pooling based upon indexes instead of indexers?
Hi community, I have a problem with the setup of the Fidelis EDR add-on. After filling in all the requirements, I do not understand the error described below.
Hi all, I'm looking to trigger an alert if our DHCP server loses connection with its partner DHCP server for more than 30 minutes. When the server loses connectivity we get "EventCode=20255" in the logs. This happens fairly often due to patching, but the server is always back within 30 minutes, so we shouldn't get an alert in that case. When the connection is re-established we get "EventCode=20254". The question is: how would I trigger an alert if more than 30 minutes elapses before "EventCode=20254"?
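One common pattern (a sketch; the index name is an assumption) is to pair each 20255 with the next 20254 on the same host, and alert on pairs that never closed within 30 minutes:

```
index=dhcp (EventCode=20255 OR EventCode=20254)
| transaction host startswith="EventCode=20255" endswith="EventCode=20254" maxspan=30m keepevicted=true
| where closed_txn=0
| search EventCode=20255
```

keepevicted=true keeps transactions that never saw their endswith event, and closed_txn=0 selects exactly those. Scheduled every few minutes over, say, the last hour, this returns only disconnects where no 20254 arrived within 30 minutes.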
Hi, how do I center the text in a column, using Dashboard Studio?
In the File monitoring extension (source: GitHub), the metric named "last modified time" returns time in epoch format. How do we convert that epoch time to a normal time format?
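If that metric is being ingested into Splunk, the conversion can be done at search time with strftime. A sketch; the field name last_modified_time is an assumption, and if the value is epoch milliseconds rather than seconds it must be divided by 1000 first:

```
| eval last_modified_readable = strftime(last_modified_time, "%Y-%m-%d %H:%M:%S")
```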
Hi, I need some help extracting regular expressions. I have a set of unstructured logs. Part of the log is shown below:

"RequestUTCDateTime":"2022-07-25T11:19:29.0106873Z"}

How would one extract 2022-07-25T11:19:29.0106873Z and assign it to the field RequestUTCDateTime? This should be done whenever "RequestUTCDateTime" is encountered in the raw log. Please help me. Thank you, Ranjitha N
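A search-time extraction along these lines should work whenever that key appears in the raw event (shown as an inline rex; the same regex could also go into a props.conf EXTRACT):

```
| rex field=_raw "\"RequestUTCDateTime\":\"(?<RequestUTCDateTime>[^\"]+)\""
```

This captures everything between the quotes after "RequestUTCDateTime":, i.e. 2022-07-25T11:19:29.0106873Z in the sample above.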
Hi, I want to feed the result of one search into a second one. Currently I extract the result into a csv file and use that csv file as a lookup in another search, as below (damtest2.csv is the result of my first search). How can I proceed so as to avoid passing through a lookup? Regards
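Since the actual searches aren't shown, here is only the general shape: if the first search returns a modest number of rows, it can feed the second one directly as a subsearch instead of going through a csv lookup (subsearches are subject to limits, by default around 10,000 results and a runtime cap):

```
<second search> [ search <first search> | fields keyfield ]
```

The subsearch's output is rewritten into search terms for the outer search, so only the fields you want to match on should be kept with | fields.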
How do I collect IBM Guardium data into Splunk?
We are getting NetApp data in the main index, as it only supports default syslog ports. How can I create a props.conf to filter it, e.g. with <host::*netapp*>, and route it to its own index? What should props.conf and transforms.conf look like for this requirement?
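A sketch of the usual host-based index routing, applied on the indexers (or a heavy forwarder); the target index must already exist, and the stanza and transform names here are illustrative:

```
# props.conf
[host::*netapp*]
TRANSFORMS-netapp_route = route_netapp

# transforms.conf
[route_netapp]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = netapp
```

REGEX = . matches every event from the matching hosts, and DEST_KEY = _MetaData:Index with FORMAT set to the index name overrides the destination index at parse time.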
Hi all, I have a problem installing the Splunk universal forwarder on a Windows 2012 R2 server. I follow the installation via the wizard, however the installation fails without returning error messages.

I have attempted to install the following versions without success:
9.0.0
8.2.7
7.2.0

Below are the errors present in the log file C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd-utility:

07-25-2022 11:46:10.287 +0200 INFO ServerConfig - Found no hostname options in server.conf. Will attempt to use default for now.
07-25-2022 11:46:10.287 +0200 INFO ServerConfig - Host name option is "".
07-25-2022 11:46:10.318 +0200 WARN UserManagerPro - Can't find [distributedSearch] stanza in distsearch.conf, using default authtoken HTTP timeouts
07-25-2022 11:46:11.522 +0200 ERROR LimitsHandler - Configuration from app=SplunkUniversalForwarder does not support reload: limits.conf/[thruput]/maxKBps
07-25-2022 11:46:11.522 +0200 ERROR ApplicationUpdater - Error reloading SplunkUniversalForwarder: handler for limits (access_endpoints /server/status/limits/general): Bad Request
07-25-2022 11:46:11.522 +0200 ERROR ApplicationUpdater - Error reloading SplunkUniversalForwarder: handler for server (http_post /replication/configuration/whitelist-reload): Application does not exist: Not Found
07-25-2022 11:46:11.522 +0200 ERROR ApplicationUpdater - Error reloading SplunkUniversalForwarder: handler for web (http_post /server/control/restart_webui_polite): Application does not exist: Not Found
07-25-2022 11:46:11.522 +0200 WARN LocalAppsAdminHandler - User 'splunk-system-user' triggered the 'enable' action on app 'SplunkUniversalForwarder', and the following objects required a restart: default-mode, limits, server, web

Thank you in advance for the support. Regards, Fabio.
Hi everyone, the customer shared one large JSON-formatted file; there are more than 1000 records. The customer wants it as a lookup. My thought process says I should use the KV store approach, but how can I upload a large amount of data into the KV store?
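One hedged approach, assuming the records can first be read into a search (e.g. by indexing the file once into a scratch index): outputlookup into a KV-store-backed lookup definition handles thousands of rows without trouble. The index, field and lookup names below are placeholders:

```
index=scratch sourcetype=_json
| spath
| table field1 field2 field3
| outputlookup my_kvstore_lookup
```

The lookup definition my_kvstore_lookup must already point at a KV store collection (collections.conf plus a transforms.conf lookup definition). Alternatively, the KV store REST endpoint storage/collections/data/<collection>/batch_save accepts JSON arrays of documents directly.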
Hi, I've created this rather complicated piece of SPL. To make it a bit more understandable I added some comment lines. In the screenshot you can see the SPL syntax highlighting stops working correctly from line #20. The strange thing is that when I remove line 20 altogether it works fine, and there are more comment lines further on. Whatever comment I put in that position causes this behaviour. With the line removed, it works fine; the comment ```test comment``` breaks the thing again. It's just a minor cosmetic thing, but I'd like to know what's happening here and why. We're using Splunk Enterprise 8.1.10.1 at my site. Any thoughts appreciated!
Hi All, I have logs like below and want to create a table out of them.

log1:
"connector": {
"state": "RUNNING",
},
"tasks": [
{
"id": 0,
"state": "RUNNING",
}
],
"type": "sink"
}
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
connect-ABC ABC.sinkevents 0 15087148 15087148 0 connector-consumer-ABC /10.231.95.96 connector-consumer-ABC.sinkevents-0

log2:
"connector": {
"state": "RUNNING",
},
"tasks": [
{
"id": 0,
"state": "FAILED",
}
],
"type": "sink"
}
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
connect-XYZ XYZ.cardtransactionauthorizationalertsent 0 27775 27780 5 connector-consumer-XYZ /10.231.95.97 connector-consumer-XYZ.Cardtransactionauthorizationalertsent-0
connect-XYZ XYZ.cardtransactionauthorizationalertsent 1 27740 27747 7 connector-consumer-XYZ /10.231.95.97 connector-consumer-XYZ.Cardtransactionauthorizationalertsent-0
connect-XYZ XYZ.cardtransactionauthorizationalertsent 2 27836 27836 0 connector-consumer-XYZ /10.231.95.97 connector-consumer-XYZ.Cardtransactionauthorizationalertsent-0

I created this query, which gives the table below:

.... | rex field=_raw "CLIENT\-ID\s+(?P<Group>[^\s]+)\s(?P<Topic>[^\s]+)\s(?P<Partition>[^\s]+)\s+(?P<Current_Offset>[^\s]+)\s+(?P<Log_End_Offset>[^\s]+)\s+(?P<Lag>[^\s]+)\s+(?P<Consumer_ID>[^\s]+)\s{0,20}(?P<Host>[^\s]+)\s+(?P<Client_ID>[^\s]+)"
| table Group,Topic,Partition,Lag,Consumer_ID

Group         Topic                                       Partition   Lag   Consumer_ID
connect-ABC   ABC.sinkevents                              0           0     connector-consumer-ABC
connect-XYZ   XYZ.cardtransactionauthorizationalertsent   0           5     connector-consumer-XYZ

Here I am missing the last 2 lines of log2.
I want to modify the query so that it produces the table in the manner below:

Group         Topic                                       Partition   Lag   Consumer_ID
connect-ABC   ABC.sinkevents                              0           0     connector-consumer-ABC
connect-XYZ   XYZ.cardtransactionauthorizationalertsent   0           5     connector-consumer-XYZ
connect-XYZ   XYZ.cardtransactionauthorizationalertsent   1           7     connector-consumer-XYZ
connect-XYZ   XYZ.cardtransactionauthorizationalertsent   2           0     connector-consumer-XYZ

Please help me modify the query to get my desired output. Your kind help on this is highly appreciated. Thank you!
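By default rex keeps only the first match per event; with max_match=0 it keeps all matches as multivalue fields, which can then be zipped together and expanded into one row per log line. A sketch along those lines (the regex is simplified from the one in the question and may need adjusting to the real data):

```
... | rex max_match=0 field=_raw "(?P<Group>connect-[^\s]+)\s(?P<Topic>[^\s]+)\s(?P<Partition>\d+)\s+\d+\s+\d+\s+(?P<Lag>\d+)\s+(?P<Consumer_ID>[^\s]+)"
| eval zipped = mvzip(mvzip(mvzip(mvzip(Group, Topic, "|"), Partition, "|"), Lag, "|"), Consumer_ID, "|")
| mvexpand zipped
| eval tmp = split(zipped, "|")
| eval Group = mvindex(tmp, 0), Topic = mvindex(tmp, 1), Partition = mvindex(tmp, 2), Lag = mvindex(tmp, 3), Consumer_ID = mvindex(tmp, 4)
| table Group Topic Partition Lag Consumer_ID
```

The mvzip/mvexpand/split dance keeps the values from the same line together, so each partition row of log2 becomes its own table row.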
Hello Splunkers! I am new to Splunk. I am using Splunk Enterprise in an AWS environment and want to fetch logs from a few tables in SQL Server; for that I have installed Splunk DB Connect. My question is what I need to put in the following: Configurations > Settings > JRE Installation Path (JAVA_HOME). If we are using Splunk Enterprise in the AWS environment, should we use the JRE path of our local machine (i.e., the laptop we are working on), or should we use the JRE path of the AWS environment?
I only want to know, for the field methodName, all the values that occurred (e.g. methodName=XYZ). I do not want the timestamps for each occurrence. So I want a table:

ABC
DEF
...
XYZ
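If the goal is just the distinct values, dedup (or stats) drops the per-occurrence timestamps; the index name here is a placeholder:

```
index=your_index methodName=*
| dedup methodName
| table methodName
| sort methodName
```

An equivalent form is | stats count BY methodName | fields methodName, which also scales better on large result sets.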