All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I am using imported CSV data to search throughout Splunk. The CSV file defines a column TIME that contains only the year and month, in the format YYYY-MM. I am attempting to convert that field into a UTC UNIX timestamp using the strptime() function, but have not had any success. Here are the extracted fields from a basic search: [screenshot omitted]

These are the searches I used when attempting to use the strptime() function. None of them worked.

index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | eval TIME=strptime(TIME, "%Y-%m")

index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | eval TIME=TIME."-00:00:00:00", TIME=strptime(TIME, "%Y-%m-%d:%H:%M:%S")

index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | eval my_time=strptime('TIME', "%Y-%m")

index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | eval my_time=strptime(YEAR.MONTH, "%Y-%m")

I also tried the convert command, and that didn't work either. Neither of the examples below worked.

index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | convert timeformat="%Y-%m" mktime(TIME) AS NEW_TIME

index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA | eval TIME=TIME."-00:00:00:00" | convert timeformat="%Y-%m-%d:%H:%M:%S" mktime(TIME) AS NEW_TIME

Any advice is appreciated, thank you.
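One pattern that often resolves this, sketched here on the assumption that the blocker is "%Y-%m" giving strptime() no day-of-month to anchor on: append a fixed day to the string before parsing (index, source, and field names taken from the question above).

index="financial_data" source="consumer_confidence_index.csv" LOCATION=USA
| eval epoch_time=strptime(TIME."-01", "%Y-%m-%d")
| table TIME, epoch_time

Note that strptime() interprets the string in the searching user's configured time zone, so for a strict UTC epoch you may need to set that user's time zone to UTC or apply an offset.
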
Hello, I am facing a disk space issue in my Splunk instance, so I decided to delete the unwanted data, as it is a test environment. While running the following command

index=malware | delete

I am getting the following error:

Search not executed: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch. user=admin., concurrency_category="historical", concurrency_context="user_instance-wide", current_concurrency=0, concurrency_limit=5000

I can also see many other errors in my Splunk instance: [screenshot omitted]

Please help me solve these issues.
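Two hedged observations rather than a confirmed fix: the search is refused because the dispatch partition has dropped below the minimum-free-space threshold, and | delete only masks events from search results anyway; it does not free disk. A sketch of the usual recovery steps (paths are examples; clean eventdata is irreversible and only reasonable here because this is a test environment):

# Move out old search artifacts to free up the dispatch directory
# (destination must exist; -1d@d moves jobs older than one day)
$SPLUNK_HOME/bin/splunk clean-dispatch /tmp/old-dispatch-jobs/ -1d@d

# To actually reclaim disk from an index, stop Splunk and wipe its data
$SPLUNK_HOME/bin/splunk stop
$SPLUNK_HOME/bin/splunk clean eventdata -index malware
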
Hi Team, I'm looking for a query to compare Splunk ingestion volume between the current date and a week ago, i.e. compare today's ingestion volume with the exact same day a week ago and get the % difference. Please let me know if there are any queries available, preferably via REST services. Thanks
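One common approach, sketched without REST (the finished search can be run through the search/jobs REST endpoint like any other): sum license usage for today and for the same day last week from the internal license logs, then compute the difference. This assumes the _internal index from the license manager is searchable; the b field is bytes ingested in type=Usage events.

index=_internal source=*license_usage.log* type=Usage earliest=@d latest=now
| stats sum(b) AS today_bytes
| appendcols
    [ search index=_internal source=*license_usage.log* type=Usage earliest=-7d@d latest=-6d@d
      | stats sum(b) AS week_ago_bytes ]
| eval pct_diff=round((today_bytes - week_ago_bytes) / week_ago_bytes * 100, 2)
| table today_bytes, week_ago_bytes, pct_diff

For a like-for-like partial-day comparison, latest=-7d instead of latest=-6d@d would mirror today's elapsed hours.
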
Hi Team, after upgrading the SSL certificate we are not able to connect to the deployment server from the UF. We are getting the below error logs:

DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
INFO DC:PhonehomeThread - Attempted handshake 1050 times. Will try to re-subscribe to handshake reply
INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
WARN TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group primary_indexers has been blocked for 12600 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
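err=not_connected generally means the TLS session to the management port never gets established. A hedged first diagnostic, assuming the deployment server listens on 8089 and using ds.example.com as a placeholder hostname: test the handshake from the forwarder host and see which side rejects the new certificate.

# Run from the universal forwarder host; inspect the presented chain
openssl s_client -connect ds.example.com:8089 -showcerts </dev/null

# Watch the forwarder for the specific TLS error while it retries
tail -f $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -iE "ssl|certificate|handshake"

If the chain changed, the CA settings in server.conf (sslRootCAPath, and any sslVerifyServerCert=true) on both sides usually need to point at the new CA as well.
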
I'm in a RHEL8, Splunk 8.2.6 distributed environment with a single KV store on each server. Can the Memory Mapped (MMAP) storage engine and the WiredTiger storage engine exist in the same Splunk environment but on different single-instance servers? I would like to do the migration one server at a time over multiple days.
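For reference, a sketch of the per-server migration itself, assuming Splunk Enterprise 8.1 or later where the WiredTiger migration command exists; it runs on one instance at a time, which is what makes a staggered, multi-day rollout possible.

# Take a KV store backup first
$SPLUNK_HOME/bin/splunk backup kvstore

# Migrate this instance's KV store to WiredTiger (splunkd restarts)
$SPLUNK_HOME/bin/splunk migrate kvstore-storage-engine --target-engine wiredTiger
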
Hello, we have a rather noisy agent that is logging about 19GB of data daily. How can I filter the following from inputs.conf?

Process Information:
  Process ID: 0x1450
  Process Name: C:\Program Files\Rapid7\Insight Agent\components\insight_agent\3.1.5.14\ir_agent.exe

Thanks,

Garry
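inputs.conf itself cannot filter on event content; content-based discards normally happen at parse time via a nullQueue transform on the indexer or heavy forwarder. A sketch, with the sourcetype name wineventlog used purely as a placeholder for whatever sourcetype these events arrive under:

# props.conf
[wineventlog]
TRANSFORMS-drop_ir_agent = drop_ir_agent

# transforms.conf
[drop_ir_agent]
REGEX = ir_agent\.exe
DEST_KEY = queue
FORMAT = nullQueue

If these are Windows event log inputs specifically, inputs.conf does support blacklist entries that match on event fields (for example a blacklist keyed on EventCode with a Message regex), which may be simpler than the transform.
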
Hello Team @SPL, I was working on some development activity and got stuck at some point. We have a scenario where I need to check which user has done transactions with more than 3 different vendors on a single day, and transfer the output into tabular format. When I perform the distinct count, I get the count of users who have done transactions with 3 vendors on the same day:

| stats dc(Vendor) AS dc_vendor values(Vendor) AS Vendor BY UserID

I need to have the output detailed as in Table 2.

Table 1:

Date        UserID    Vendor          Transactions
10/5/2021   user 1    SAAS(User 1)    $$$$$
10/5/2021   user 2    PAAS(User 1)    $$$$$
10/7/2021   user 3    IAAS            $$$$$
10/8/2021   user 4    AAA             $$$$$
10/9/2021   user 5    CCCC            $$$$$
10/10/2021  user 6    FFFF            $$$$$
10/5/2021   user 7    XXXX (User 1)   $$$$$
10/6/2021   user 8    ZZZZ            $$$$$
10/8/2021   user 9    EEE             $$$$$
10/9/2021   user 10   QQQQ            $$$$$

Output Table 2:

Date        UserID    Vendor          Transactions
10/5/2021   user 1    SAAS(User 1)    $$$$$
                      AAS(User 1)     $$$$$
                      XXXX (User 1)   $$$$$
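A sketch of one way to get there, assuming fields named UserID, Vendor, and Transactions as in the sample: compute the per-day distinct vendor count per user with eventstats, which keeps the raw rows intact, then filter on the threshold.

| bin _time span=1d
| eval Date=strftime(_time, "%m/%d/%Y")
| eventstats dc(Vendor) AS dc_vendor BY Date, UserID
| where dc_vendor >= 3
| table Date, UserID, Vendor, Transactions
| sort Date, UserID

The sample output keeps user 1 with exactly 3 vendors, so the comparison is written as >= 3; change it to > 3 if "more than 3" is meant literally.
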
Not strictly a Splunk question, more a VMware vCenter one, but I'm hoping somebody has solved this before me!

We're working to get the logs from vCenter into Splunk using syslog, Kiwi, and the Splunk Add-on for vCenter Logs. We've figured out all the components: configured vCenter correctly, used rsyslog.conf to set Kiwi up to use native messages without adding a date and time stamp, and were just about to start the app to fetch the Kiwi logs when we found we could not control the severity level in rsyslog. We referred to the help cited at https://www.rsyslog.com/doc/v8-stable/configuration/modules/imfile.html, but this refers to the directive $InputFileSeverity as being legacy.

Regardless of what we set the parameter $InputFileSeverity to, it ignores us and sends everything right up to Debug (level 7). As that more than doubles the log size for no material benefit, I'd like to tell vCenter not to bother. What is the correct syntax of the stanza in rsyslog.conf to set the severity level to level 6 / Info or lower? We tried:

$InputFileSeverity 6
$InputFileSeverity Info
$InputFileSeverity Info,Warning
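Two hedged observations. First, $InputFileSeverity (and the Severity parameter that replaces it) is not a filter: it is the severity rsyslog assigns to every line read from a flat file, since file contents carry no syslog priority of their own, which would explain why changing it never reduces volume. Second, the modern input() equivalent looks like the sketch below; actually dropping the debug-level chatter has to happen either in vCenter's own logging level or in a later rsyslog rule matching on message content (file path and tag are placeholders):

module(load="imfile")
input(type="imfile"
      File="/var/log/vmware/vpxd.log"
      Tag="vcenter:"
      Severity="info"
      Facility="local6")

# Example content-based discard for lines vCenter marks as verbose
if $msg contains "verbose" then stop
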
Hi All, I have an SPL query that runs on an index and sourcetype with millions of jobnames. I want my SPL to read through a list of jobnames from a different query and use it as a subsearch, OR: I have created a lookup CSV for this list of 16,000 jobnames and want to run my search against it. How do I do that?

Main SPL that runs on millions of jobnames:

earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
| table jobname, prediction_status, predicted_end_time

Below is an input lookup, freq_used_jobs_bmp_3months.csv, which is a simple two-column file:

jobName, freq_count

I tried to join the main query with this input file; I want to operate and write SPL queries on this list of jobNames only.

earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
| table jobname, prediction_status, predicted_end_time
| lookup freq_used_jobs_bmp_3months.csv jobName output freq_count
| table jobname, freq_count

The above query fails with this error:

[na_prod_secure-ist-indexer-1_iapp724.randolph.ms.com-23000] Streamed search execute failed because: Error in 'lookup' command: Could not construct lookup 'freq_used_jobs_bmp_3months.csv, jobName, output, freq_count'. See search.log for more details

I removed any null rows in the file and still get the same error. The other option is to somehow combine the main query with a subsearch instead of a lookup file.

Main query:

earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
| table jobname, prediction_status, predicted_end_time

Subsearch that will list the smaller number of jobNames used in the last 3 months:

earliest=-90d index="log-13120-prod-c" sourcetype="autosys_service_secondary:app" OR "autosys_service_primary:app" "request:JobSearch" installation="P*" NOT "*%*"
| stats count as freq_count by jobName

Now how do I join the above two?
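A sketch of the subsearch route, hedged because the two datasets disagree on field case (jobname in the events, jobName in the CSV), which on its own can break both lookups and subsearch matching. The inputlookup subsearch expands into an OR of jobname=... terms, so only the listed jobs are searched at all:

earliest=-7d index=log-13120-nonprod-c laas_appId=qbmp.prediction* "jobPredictionAnalysis" prediction lastEndDelta
    [ | inputlookup freq_used_jobs_bmp_3months.csv
      | fields jobName
      | rename jobName AS jobname ]
| table jobname, prediction_status, predicted_end_time

Two caveats: 16,000 values may exceed the default subsearch result limit (around 10,000), in which case the lookup route is the fallback, written with an explicit field mapping, for example | lookup freq_used_jobs_bmp_3months.csv jobName AS jobname OUTPUT freq_count | where isnotnull(freq_count); and "Could not construct lookup" often just means the CSV has not been uploaded and shared as a lookup table file visible to the searching app.
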
Working with this query, I'm hoping to get only the results where one field's values are greater than the other's.

index="index*"
| eval MonthNumber=strftime(_time,"%m")
| chart eval(round(avg(durationMs), 0)) AS avg_durationMs by properties.url, MonthNumber
| rename 04 AS "Apr", 05 AS "May"

I want to get only the results where the Apr value is greater than the May value by 10.
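A minimal sketch: once the chart has one column per month, a where clause compares them row by row. Quoting the numeric column names in rename matters here, since 04 is not a valid bare field name.

index="index*"
| eval MonthNumber=strftime(_time, "%m")
| chart eval(round(avg(durationMs), 0)) AS avg_durationMs by properties.url, MonthNumber
| rename "04" AS Apr, "05" AS May
| where Apr > May + 10
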
The Splunk documentation for setting up Predictive Analytics alerting has you create a correlation search PER service that Predictive Analytics is set up on. Has anyone tried to create a single correlation search to handle all their Predictive Analytics alerting, and is there any documentation (Jeff Wiedemann's alerting blueprint didn't cover that) that would give someone ideas on the best approach?

Below is the relevant Splunk documentation:

"You click the bell icon in the Worst Case Service Health Score panel to open the correlation search modal. You want ITSI to generate a notable event in Episode Review the next time the Middleware service's health is expected to degrade."

https://docs.splunk.com/Documentation/ITSI/4.12.1/SI/UseCase#:~:text=ITSI%20Predictive%20Analytics%20use%20case%20With%20ITSI%20Predictive,that%20you%20can%20use%20to%20make%20business%20decisions.
Hello,

I need some help. I am new to Splunk and have run into an issue. I want a table that will display Computer Name, Physical Address, Device Type, IP Address, and what version of Office they have (2013 or 365). The data is under one index but 3 different source types.

Index=Desktops
Sourcetype 1 = AssetInfo - it has lots of fields, but the 3 I care about are PysAddress, DevType, ComputerName
Sourcetype 2 = Network - it has many fields, but I only want two of them: IPAddress, Computer
Sourcetype 3 = Software - it has 3 fields and I care about all 3: Computer, SoftwareName, SoftwareVersion

I want to pull info from all 3 source types and make one table; the common field is computer name. The first issue is that in sourcetype 1 the field is called ComputerName, while in the other 2 sourcetypes it is Computer. I know I could run a rename command on sourcetype 1 if I had to. I have tried the OR Boolean, the multisearch command, the union command, and join, but I can never seem to get it to work right: the table gets created, but it pulls the IP onto one line and then puts the software on a separate line; they are never on the same line. The next issue is that I need to filter on software that contains Microsoft Office 2013 or Office 365.

Any ideas would be welcomed.
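A sketch of the usual pattern for this, assuming the field names above: normalize the computer-name field with coalesce, then collapse everything onto one row per computer with stats values(). The Office filter runs after the merge so that matching computers keep their asset and network fields.

index=Desktops sourcetype=AssetInfo OR sourcetype=Network OR sourcetype=Software
| eval Computer=coalesce(Computer, ComputerName)
| stats values(PysAddress) AS PhysicalAddress, values(DevType) AS DeviceType,
        values(IPAddress) AS IPAddress, values(SoftwareName) AS SoftwareName,
        values(SoftwareVersion) AS SoftwareVersion BY Computer
| search SoftwareName="*Office 2013*" OR SoftwareName="*Office 365*"
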
Hi, I'm not sure if I understand maxVolumeDataSizeMB correctly. Let's say I have a volume stanza like this in an indexer cluster with 4 peers:

[volume:volume_name]
path = /foo/bar
maxVolumeDataSizeMB = 5242880

How does Splunk handle the total volume capacity? Does every peer get its own 5TB in this volume before buckets roll, or does maxVolumeDataSizeMB count against the total volume size across all peers, e.g. 5TB/4?

Thanks in advance!
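As a hedged note: indexes.conf settings like this are applied by each indexer independently, so each peer enforces its own limit locally rather than sharing a pooled quota. One way to verify how it behaves in practice is to check per-peer disk consumption from the search head, sketched below:

| dbinspect index=*
| stats sum(sizeOnDiskMB) AS size_on_disk_mb BY splunk_server
| eval size_on_disk_gb=round(size_on_disk_mb/1024, 1)
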
Hi all,

I'm in the process of setting up performance reporting for services provided for a client. The logic in question is very simple: average response time of the service and call volume over a predetermined time period. I'm having trouble including services with 0 calls during the time period in question. It is important to the end user that they also see which services are not being called. I've coded up two solutions, and neither gives me what I need. I've also looked at https://community.splunk.com/t5/Splunk-Search/Include-zero-count-items-from-lookup/m-p/177260, but I'm having trouble understanding how it applies to my solution outside of the outputlookup.

The format of the desired table is: Service - Average Response - Volume.

Solution 1 - base search - works, but does not include 0 values for obvious reasons:

index=********************************************
| lookup services.csv service AS liveDataField OUTPUT service
| stats avg(ResponseTime) AS AverageResponseTime, count AS Volume BY service
| eval AverageResponseTime=round((1000*AverageResponseTime), 2)
| fillnull value=0 AverageResponseTime

My first thought was to add in a lookup table with all the services and build off of that:

index=********************************************
| inputlookup append=t services.csv
| lookup services.csv service AS liveDataField OUTPUT service
| stats avg(ResponseTime) AS AverageResponseTime, count AS Volume BY service
| eval AverageResponseTime=round((1000*AverageResponseTime), 2)
| fillnull value=0 AverageResponseTime

This gives me what I need in terms of zero values, but instead of returning a count of 0, it returns Volume=1 for each service with zero hits. It also erroneously increments each service's volume by one. I suspect this is due to the inputlookup append=t. I've also tried doing an eval count=0 initially.

TL;DR - adding in a lookup to address zero-count items caused each service's volume to be incremented by 1. Is there a quick fix for this? Or perhaps a better way of doing it altogether?

Cheers
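One sketch of a fix, keeping the author's field names: run the stats over the real events first, append the full service list afterwards, and dedupe with a second stats, so the appended rows can no longer be counted as events. Appended rows contribute Volume=0 and lose the max() wherever real data exists.

index=********************************************
| lookup services.csv service AS liveDataField OUTPUT service
| stats avg(ResponseTime) AS AverageResponseTime, count AS Volume BY service
| append
    [ | inputlookup services.csv
      | fields service
      | eval Volume=0 ]
| stats max(AverageResponseTime) AS AverageResponseTime, max(Volume) AS Volume BY service
| eval AverageResponseTime=round(1000*AverageResponseTime, 2)
| fillnull value=0 AverageResponseTime
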
Hello folks,

Been busting my head here, trying to pull data from multiple sourcetypes, which I thought would run like:

index=test sourcetype=A OR sourcetype=B
| search host=*
| where <appname>="value"
| table Host, IPAddress, Appname

host is a field in both sourcetypes, and the IP-related info is in B. I'm just trying to pull out each host, its IP address, and the app in question. What I get is a really long host list (so that's good) with a few IPs and a few apps, looking a bit like this:

Host  | IPAddress | Appname
host1 | IP        |
host2 | ip        |
host3 |           | appname
host4 |           | appname

...and so on and so forth. It seems like any row that shows an IP address refuses to show an appname, and vice versa?? Still acts the same. I pulled each part separately, so I know the data is good.
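What's described is the classic symptom of fields living in different events: sourcetype A events carry Appname, sourcetype B events carry IPAddress, and a table of raw events can't merge them onto one row. A sketch of the usual fix, correlating on the shared host field (field names as given in the question):

index=test (sourcetype=A OR sourcetype=B)
| stats values(IPAddress) AS IPAddress, values(Appname) AS Appname BY host
| search Appname="value"
| table host, IPAddress, Appname
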
For the latest version, version 5.2.4, I have vulnerability data coming in from Tenable.SC. How can I filter the results based on the scan name? I cannot seem to figure it out. I remember that in previous versions we could leverage scan_result_info.name, but not in this latest version. Any thoughts are appreciated. Thanks
When a user is added, I need the time to be recorded and displayed in a field called user_added. I created the field name and can see it in the CSV table, but there is no time per entry.

<query>
| inputlookup USB.csv
| eval user_added = strftime(_time, "%Y-%d-%m %H:%M:%S")
| append
    [ | makeresults
      | eval user="$user_tok$", description="$description_tok$", revisit="$revisit_tok$", Action="$dropdown_tok$" ]
| fields - _time
| table user_added, user, category, department, description, revisit, status
| lookup lookup_ims user as user OUTPUT category department
| outputlookup USB.csv
</query>
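A hedged sketch of why no time appears and one way to fix it: rows coming from inputlookup have no _time field, so strftime(_time, ...) yields null for every existing row, and the newly appended makeresults row never sets user_added at all. Stamping the new row at creation time and leaving existing rows untouched (tokens and lookup names as in the question):

| inputlookup USB.csv
| append
    [ | makeresults
      | eval user="$user_tok$", description="$description_tok$", revisit="$revisit_tok$", Action="$dropdown_tok$",
             user_added=strftime(now(), "%Y-%m-%d %H:%M:%S") ]
| lookup lookup_ims user AS user OUTPUT category department
| fields - _time
| table user_added, user, category, department, description, revisit, status
| outputlookup USB.csv
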
Not sure if this is a limitation of the Phantom prompt block or if someone has figured this out already.

I am using a prompt block to allow a user to build up a config file that will eventually be sent to Splunk to create a saved search. The questions allow the user to select specific values for fields to generate the metadata necessary for the Splunk saved search (Splunk query, time fields, eval fields, etc.). The response type for the question is a list of choices. There are two choices:

1. The existing field value (which comes from the config file that was pulled via a prior action call)
2. CHANGE (which would be selected when the value needs to be changed)

When using the response type 'list', is there a way to have #1 set as the default response? That way you would only have to select CHANGE from the drop-down, rather than having to select the existing field's value every time it doesn't need to be changed.
I installed Splunk on an EC2 instance. I saw the login page on port 8000, but when I tried to log in I got a server error. I am not sure what I should look at.
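A hedged first step when the web UI throws a server error: the underlying cause usually lands in splunkd.log or web_service.log on the instance itself. For example (paths assume a default install under /opt/splunk):

# Watch for errors while reproducing the failed login
tail -f /opt/splunk/var/log/splunk/splunkd.log
tail -f /opt/splunk/var/log/splunk/web_service.log
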
Hello all, I want to know how I can get Controller UI access. Thanks in advance.