All Topics


Is there a way to export each raw source file? An example of my search criteria:

index="con1_batch" source="*/PB00E5*/log/*.log"

Top 10 Values                                                        Count   %
/con7/var/batch/PB00E533/log/PB00E533.BatchEdbc.20210718031834.log  29,154  1.92%
/con7/var/batch/PB00E517/log/PB00E517.BatchEdbc.20210718031918.log  28,679  1.889%
/con7/var/batch/PB00E587/log/PB00E587.BatchEdbc.20210718031918.log  28,667  1.888%
/con7/var/batch/PB00E551/log/PB00E551.BatchEdbc.20210718031936.log  28,643  1.887%
/con7/var/batch/PB00E583/log/PB00E583.BatchEdbc.20210718031849.log  28,512  1.878%
/con7/var/batch/PB00E530/log/PB00E530.BatchEdbc.20210718031841.log  28,433  1.873%
/con7/var/batch/PB00E590/log/PB00E590.BatchEdbc.20210718032104.log  28,330  1.866%
/con7/var/batch/PB00E548/log/PB00E548.BatchEdbc.20210718031953.log  28,157  1.855%
/con7/var/batch/PB00E550/log/PB00E550.BatchEdbc.20210718031907.log  28,114  1.852%
/con7/var/batch/PB00E584/log/PB00E584.BatchEdbc.20210718031838.log  28,061  1.848%
...

There are 100+ source files. Can I download or export all the individual source files?
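For reference, one hedged route: write source and _raw for the matching events to a single CSV with outputcsv, then split it by source outside Splunk (the output name batch_raw_export is a placeholder; the file lands in $SPLUNK_HOME/var/run/splunk/csv on the search head). For a one-off, the CLI can also emit raw event text with splunk search ... -output rawdata.

index="con1_batch" source="*/PB00E5*/log/*.log"
| fields source _raw
| outputcsv batch_raw_export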
Hi all, I have been using Splunk for about 2 days, so I am VERY new. I'm trying to get a utilization number for endpoint analytics. What I would like is a query that can tell me the Kbps (formula below) per user each 5 minutes for whatever timespan dropdown I pick. I have tried numerous ways to do it and have had no luck. Is there a way I can get a table that will show me the count (or maybe dc) of users each 5 minutes and the avg Kbps per user?

The query below gives me a timechart by src_ip, but it doesn't look right to me, so I'd like a way to verify (hence the table):

index="myIndex" source="tcp:xxxx"
| eval Kbps=(((cs_bytes*8)/1000)/300)
| timechart span=5m avg(Kbps) by src_ip

I've tried this to get the table, but it doesn't work; it gives me zero matches:

index="myIndex" source="tcp:xxxx"
| bin _time span=5m
| dedup src_ip as SrcIP
| streamstats sum(cs_bytes) as Bytes by SrcIP
| eval Kbps=(((Bytes*8)/1000)/300)
| table SrcIP Bytes Kbps

Any guidance is much appreciated!
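For reference, a minimal sketch of one way to get both numbers per 5-minute bucket, assuming cs_bytes is the per-event byte count (note that dedup does not accept an "as" clause, which may be why the second query fails):

index="myIndex" source="tcp:xxxx"
| bin _time span=5m
| stats sum(cs_bytes) as Bytes dc(src_ip) as Users by _time
| eval Kbps=round(((Bytes*8)/1000)/300, 2)
| eval AvgKbpsPerUser=round(Kbps/Users, 2)
| table _time Users Bytes Kbps AvgKbpsPerUser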
Looking for a process, if possible, to send cloud server log data to on-premise Splunk indexers. I've searched but found nothing solid.

Splunk cloud logs -> internet -> on-premise Splunk indexers
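One common pattern (a sketch under assumed hostnames and ports, not a verified design for your environment) is to run a forwarder alongside the cloud servers and point its outputs.conf at the on-premise indexers over TLS, with the firewall path opened inbound to the indexers' receiving port:

# outputs.conf on the cloud-side forwarder (hostnames/ports are placeholders)
[tcpout]
defaultGroup = onprem_indexers

[tcpout:onprem_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useSSL = true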
Here's my query, and I want to calculate the difference in count(_raw) between each month. It would be a running column, so next month it would include August, then September, and so on. Can anyone please provide a solution?
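For reference, a generic sketch of a running month-over-month difference (the base search is a placeholder, since the original query was not captured above):

index=your_index
| timechart span=1mon count as monthly_count
| delta monthly_count as change_from_previous_month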
Dear Splunkers, an answer on how to find which HEC token is causing authentication failures (num_of_auth_failures=1) in the _introspection logs would be very much appreciated. I'm using the query below to find the errors, but how do I pinpoint which token is causing the issue?

index=_introspection component=TERM(HttpEventCollector) "data.series"=TERM(http_event_collector) (data.num_of_auth_failures=1 OR data.num_of_requests_to_disabled_token=1 OR data.num_of_requests_to_incorrect_url=1)

Thanks in advance.
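The aggregate http_event_collector series may not name the offending token, so one hedged approach is to correlate with splunkd's own HEC error messages, where the token name usually appears in the log line (component name and message format vary by version, so treat the rex below as an assumption to validate against your data):

index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR
| rex "token name=(?<token_name>[^,]+)"
| stats count by token_name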
How do I write a query to populate the total users per day, the total unique users per day, and the max user logins in a minute for that day?

My query:

Index =ABC
| timechart span=1m count as total_user dc(data) as unique_users
| timechart max(unique_users) as max_count_per_min
| appendcols [search index =abc | bin span=1d _time | stats count as total_user dc(data) as unique_users by time]
| table _time total_users unique_users max_count_per_min

I need a simpler query that fulfills all the requirements.
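A sketch of one single-pass alternative, assuming data holds the user identifier: attach the per-minute distinct count to each event with eventstats, then aggregate by day:

index=ABC
| bin _time span=1m
| eventstats dc(data) as users_this_minute by _time
| bin _time span=1d
| stats count as total_users dc(data) as unique_users max(users_this_minute) as max_count_per_min by _time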
We are trying to set up a new cluster and move from a Splunk single site to multisite. Could someone help with all the ports that need to be allowed between the two sites for this to function properly?
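As a rough starting point (the numbers below are common defaults and conventions; the replication and receiving ports are whatever you configured, so verify against your server.conf and outputs.conf): the cluster manager and all peers talk over the management port, 8089 by default, which must be open between sites in both directions; peers replicate buckets to each other over each peer's configured replication port; and forwarders need the indexers' receiving port, conventionally 9997.

# server.conf on each indexer peer; this port must be reachable
# from the peers at the other site for cross-site replication
[replication_port://9887]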
Hi All, I recently installed the Splunk Upgrade Readiness App on a Linux box; however, when I try to access it, I get the following error:

Error reading progress for user: XXXX on host:

These are the logs:

2021-08-02 14:06:45,437 INFO 140057984116544 - Handling a request
2021-08-02 14:06:45,437 INFO 140057984116544 - Executing function, name=get_read_progress
2021-08-02 14:06:45,513 ERROR 140057984116544 - Error reading progress for user:

I tried running /bin/splunk createssl server-cert -d . -n server, as mentioned in one of the answers, but it didn't work.

Can you please suggest how we can resolve this issue?
I was able to set indexes dynamically in inputs.conf based on the source path folder name; however, it doesn't seem to work in Splunk Cloud. I have tried uploading an app with the props and transforms, and I also tried using a HWF. Hoping someone out there might be able to help.

This is basically what my conf files look like:

props.conf:
[source::\\fileshare\\folder\\...]
TRANSFORMS=send_to_index_by_source

transforms.conf:
[send_to_index_by_source]
SOURCE_KEY=_MetaData:Source
REGEX=\\\wfileshare\\\wfolder\\(\w+)
DEST_KEY=_MetaData:Index
FORMAT=$1

inputs.conf:
[monitor://\\fileshare\folder\...\test15.txt]
disabled=false
recursive=true
sourcetype=test15
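For comparison, a hedged sketch of the usual shape of this kind of route; assumptions to verify: SOURCE_KEY for source metadata is MetaData:Source (no leading underscore), each destination index must already exist, backslashes in the regex are doubled for the conf file, and because these are index-time settings they must live where parsing happens, i.e. on the HWF or in an app on the cloud stack rather than on a UF:

# props.conf
[source::\\\\fileshare\\folder\\...]
TRANSFORMS-route_by_source = send_to_index_by_source

# transforms.conf
[send_to_index_by_source]
SOURCE_KEY = MetaData:Source
REGEX = \\\\fileshare\\folder\\(\w+)\\
DEST_KEY = _MetaData:Index
FORMAT = $1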
Hello, I'm trying to utilize the TA-QualysCloudPlatform app but am running into a couple of issues.

1) When I open the Splunk GUI and click Data Inputs > Qualys Cloud Platform > Add New, where does that data input get stored? I don't see it in either the default or the local inputs.conf of the app, so it must be stored elsewhere.

2) When I get the app set up and attempt an API call, the internal Qualys log shows "No Credentials Found, Cannot Continue". This is after setting up the app, entering the credentials in the GUI, saving, and restarting Splunk. Any ideas?

Thanks for your help.
Hi, my objective is to read the Cluster Shared Volume (CSV) information from a Hyper-V cluster. I initially achieved it by executing a PS1 using the PowerShell handler [powershell], but the user running the UF must have access rights in the cluster, so I wanted to do it differently. After some analysis I decided to move the execution of the PS1 to a scheduled task and simply configure Splunk to read the resulting log file of each execution.

The following stanza is the original one running the PS1 directly, and it works OK:

#CSV PerfMon Data
[powershell://CSVPerfMetrics]
script = . "$SplunkHome\etc\apps\Splunk_TA_microsoft-hyperv\bin\GetCSVDiskInformation.ps1"
interval = 300
source = microsoft:hyperv:powershell:csvperfmetrics.ps1
sourcetype = microsoft:hyperv:perf:csv
index = hyper-v
disabled = 0

Then I created the scheduled task, which creates a different log file on each execution, and I tried the stanza below without success:

[monitor://C:\Scritps\CSV\Outputs\*.log]
sourcetype = microsoft:hyperv:perf:csv
queue = indexQueue
index = hyper-v
disabled = 0

The PowerShell script:

$csvdata = Get-ClusterSharedVolume -Cluster XXXXXXXXXX
$csvs = @()
foreach ($csvitem in ($csvdata)) {
    $csv = [PSCustomObject]@{
        VolumeName = $csvitem.Name;
        ID = $csvitem.Id;
        TotalSpaceKB = ($csvitem.SharedVolumeInfo.Partition.Size);
        FreeSpaceKB = ($csvitem.SharedVolumeInfo.Partition.FreeSpace);
        PercentFree = $csvitem.SharedVolumeInfo.Partition.PercentFree;
    }
    $csvs += $csv
}
$timestamp = Get-Date -Format FileDateTime
$csvs | Add-Content -Path "C:\Scritps\CSV\Outputs\CSVRead_$timestamp.log"

The content of the log files looks like the following:

@{VolumeName=Cluster Vol1; ID=4e902f6b-3380-499b-af29-8ff35c02e80d; TotalSpaceKB=2198752718848; FreeSpaceKB=1135195062272; PercentFree=51.62905}
@{VolumeName=Cluster Vol2; ID=002b604e-1671-4054-bed2-c1f8a068b40d; TotalSpaceKB=2198752718848; FreeSpaceKB=789340848128; PercentFree=35.89948}
@{VolumeName=Cluster Vol3; ID=14ce695f-f586-4d65-9e3b-6ab85524fd91; TotalSpaceKB=2198752718848; FreeSpaceKB=1692997120000; PercentFree=76.99807}
@{VolumeName=Cluster Vol4 LinuxRKE; ID=c49e9b9f-d5c1-409c-8ef0-cf8613f81571; TotalSpaceKB=805283295232; FreeSpaceKB=536175865856; PercentFree=66.58227}
@{VolumeName=Cluster Vol5 LinuxSecLAB; ID=f0f7e5e7-1290-4cd4-af66-3c75ffffc1ad; TotalSpaceKB=805283295232; FreeSpaceKB=774000726016; PercentFree=96.11533}

Any suggestion of what could be happening? Any log I could check? So far I haven't found any error message in splunkd.log or any other log file, but the indexing simply does not work. Many thanks.
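Two hedged things to look at (assumptions, not a confirmed diagnosis): Add-Content on a PSCustomObject writes the @{...} object rendering shown above, and the reused microsoft:hyperv:perf:csv sourcetype may expect the original handler's format, so structured CSV output is usually friendlier to index; also, splunk list monitor on the forwarder will confirm whether the files are being picked up at all. A sketch of the CSV variant:

# emit a CSV file (header + one row per volume) instead of the @{...} rendering
$timestamp = Get-Date -Format FileDateTime
$csvs | Export-Csv -Path "C:\Scritps\CSV\Outputs\CSVRead_$timestamp.csv" -NoTypeInformation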
Hi. Wondering if someone has found a similar solution? I don't have admin rights, so I don't have the ability to load js/css files to the Splunk server; it all has to be within local XML code.

Here's a simple breakdown of what I'd like my table on the dashboard to do:

query | table A, B, C, D, E

When column A has the string value "problem", I want to highlight the background of column B in red; otherwise column B keeps the default background color. And I need to hide column A from the end user.

I found many posts (including the 2 below), but after a couple of days, no luck in finding the correct logic:

https://community.splunk.com/t5/Dashboards-Visualizations/How-do-you-highlight-a-table-cell-based-on-a-field-of-the-search/m-p/455645
https://docs.splunk.com/Documentation/Splunk/latest/Viz/TableFormatsXML#Color_palette_types_and_options

Thanks in advance!
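For reference, the XML-only syntax from that second link looks like the sketch below. Caveat: an expression color palette only sees the cell's own value, so coloring column B based on column A without JS generally means copying the flag into B's value first (for example via eval in the search) before formatting:

<format type="color" field="B">
  <colorPalette type="expression">if(value == "problem", "#DC4E41", "#FFFFFF")</colorPalette>
</format>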
Hello, I have implemented an indexer cluster with 16 nodes and attached the license after the cluster had been up and running for approx 20 days. After applying the 1 TB/day license, 15 nodes connected successfully with the license manager, but one is having issues and generating the following alert:

[idx001] Streamed search execute failed because: Error in 'litsearch' command: Your Splunk license expired or you have exceeded your license limit too many times

I have tried some restarts and re-connecting this node with the license manager, but the message persists. Also, this is the only indexer missing in the license manager, even though IP/DNS connectivity works.

Is there any way to force a re-connection from this indexer?
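If it helps, the peer's pointer to the license manager can be re-applied from the CLI on the problem indexer (command name as of Splunk 8.x; the URI below is a placeholder for your license manager):

splunk edit licenser-localslave -master_uri https://license-manager.example.com:8089
splunk restart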
Hi Splunkers, while splunking on bucket statistics, I've noticed that despite the default of 90 days for maxHotSpanSecs, I find many buckets (which are not quarantine buckets) with a timespan (difference between latest and earliest) of around 600 or even 800 days. On the other hand, I can also easily find buckets with a timespan of 2 days. Does anyone know why this could happen? Is it possible to clearly determine the rules by which Splunk assigns a timespan to hot buckets?

To pull my statistics I use the search below:

index=_internal component=IndexWriter bucket="hot_*" idx=*
| eval bucket_timespan=latest-earliest
| eval bucket_timespan=(bucket_timespan/60/60/24)
| where bucket_timespan > 90 OR bucket_timespan < 3
Hello, we've installed a Linux UF on a supported distribution. We've done this on multiple machines but are facing an issue on a single server. The forwarder fails to start with the following error:

./splunk start
Starting Splunk...
Splunk> All batbelt. No tights.
Checking prerequisites...
Checking mgmt port [8089]: open
Checking conf files for problems...
Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-8.2.0-e053ef3c985f-linux-2.6-x86_64-manifest'
All installed files intact
Done
All preliminary checks passed.
Starting Splunk server daemon (splunkd)...
ERROR: pid 90702 terminated with signal 4
Done
[ OK ]

I tried installing it as a daemon as well. It doesn't report any error, but the service fails to start. When starting directly from the bin folder, I get the above error. Any helpful pointers here, please? Thanks.

Regards,
I would like to get the event count for a particular time period on each day over a given date range (which I will select from the search time-range dropdown). The time period is between 14:31 and 15:01 each day. I am using the query below. So, two questions: a) can this query be optimized for better performance, and b) this query gives me statistics, but the graph in the visualization displays all times and does not adhere to my chosen window of 14:31 to 15:01.

index=applogs_01 AND sourcetype=app_pmt
| eval Date=strftime(_time, "%m/%d/%Y")
| where (_time >= strptime(Date." "."14:59","%m/%d/%Y %H:%M") AND _time<=strptime(Date." "."15:01","%m/%d/%Y %H:%M"))
| bin span=1s _time
| stats count by _time
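One hedged optimization: filter on the default date_hour/date_minute fields in the base search instead of computing strptime per event; the caveat is that date_* fields come from the event's raw timestamp and are not timezone-adjusted, so verify they line up with your expectations:

index=applogs_01 sourcetype=app_pmt ((date_hour=14 date_minute>=31) OR (date_hour=15 date_minute<=1))
| bin span=1s _time
| stats count by _time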
Hello, I have a problem. An education in my education.splunk account has expired. Is there any way to reset this? Thank you.
  BElow query shows expected statistics table in Splunk 8.2, but shows only events in Splunk 6.2.   YOUR_SEARCH | fields A_real.S*.A* | rename A_real.* as * |eval dummy=null() | foreach S* [ eval... See more...
The query below shows the expected statistics table in Splunk 8.2, but shows only events in Splunk 6.2:

YOUR_SEARCH
| fields A_real.S*.A*
| rename A_real.* as *
| eval dummy=null()
| foreach S* [ eval dummy= if(isnull(dummy),"<<FIELD>>".":".'<<FIELD>>',dummy."|"."<<FIELD>>".":".'<<FIELD>>') ]
| eval dummy=split(dummy,"|")
| stats count by dummy
| fields - count
| eval f1= mvindex(split(dummy,"."),0), I1= mvindex(split(dummy,"."),1), Id=mvindex(split(I1,":"),0), {f1}=mvindex(split(I1,":"),1)
| fields - dummy I1 f1
| stats values(*) as * by Id
| lookup YOUR_LOOKUP Id
| where isnotnull(Timestamp)
| fields - Timestamp

Please check.
Hello Splunkers,

Usernames in my environment are shown as:

user=Company\username@AD#

where the # is a number, and some users are shown as:

user=Company\username$@AD#

The username has many variations: only numbers, only letters, or a combination of both. I want to extract only the username without the other characters.

Thanks ^_^
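A minimal rex sketch, assuming the value always has the Company\username@AD# shape shown above (rex needs four backslashes to match the one literal backslash, and the optional \$? absorbs the trailing-dollar variant; the index name is a placeholder):

index=your_index user=*
| rex field=user "\\\\(?<username>[^@$]+)\$?@"
| table user username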
Hi Everyone, my business user has a requirement: he needs to run a Python script from the Splunk web portal. The script is located on a Windows server (heavy forwarder). Due to InfoSec policy, users are not allowed to RDP into the Windows server, so we need to find a way to call the Python script from the front end.

Script background: We get logs from InfoSec containing URL dumps from the application gateway, SMTP gateway, and many other sources. The Python script reads the URLs from the dump and creates individual log files for each application (SMTP gateway, proxy server, web application gateway, load balancer, etc.). The individual log files are then indexed through the Splunk forwarder. The script needs to run every five hours as a scheduled job.

Challenge: The terminal server the user logs in to is already reporting to a different Splunk instance handled by the InfoSec team, so I cannot configure the Splunk universal forwarder on the terminal server to do the required job.

It would be helpful if you could share some ideas on how to achieve this. Let me know if you need any additional details.

Thank you,
Ashwath Kumar R
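One hedged option that avoids the terminal server entirely: since the script already lives on the heavy forwarder, run it there as a scripted input, which both schedules it (interval is in seconds, so 18000 is five hours) and can index whatever it prints; the app, script, sourcetype, and index names below are placeholders:

# inputs.conf on the heavy forwarder
[script://$SPLUNK_HOME/etc/apps/infosec_urls/bin/split_url_dumps.py]
interval = 18000
sourcetype = infosec:urldump
index = infosec
disabled = 0

The Splunk service account on the HF still needs filesystem rights to the dump location, and a separate file monitor can keep picking up the per-application log files the script writes.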