All Topics

Let's say we have a bunch of frozen bucket files (db_<newest_time>_<oldest_time>_<localid>) on the filesystem. How do we find out which indexes these frozen buckets belong to? I looked into the files; some are text files, but they don't seem to have strings or fields that could tell which index they came from.
Hi, I want to create an alert that triggers whenever any user creates a new alert or report in our environment. Could you please help me with a suitable query for this?
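
A hedged starting point, not a definitive detection: one common pattern is to list every saved search over REST on a schedule and compare it against a baseline lookup of titles already seen. The lookup name known_saved_searches.csv below is hypothetical and would have to be created first (for example by running the first two lines once with | outputlookup).

  | rest /servicesNS/-/-/saved/searches splunk_server=local
  | fields title, eai:acl.owner, eai:acl.app, updated
  | search NOT [| inputlookup known_saved_searches.csv | fields title]
  | outputlookup append=true known_saved_searches.csv

Scheduled hourly or daily, any rows this returns are saved searches (alerts or reports) that were not in the baseline, and they are appended to the baseline so each one only alerts once.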
Hello, I have an index that looks like this:

Server   Month      Number of connexions
-----------------------------------------
A        January    10
B        January    12
C        January    7
A        February   5
B        February
C        February   0

Let's say I sum the number of connexions by month; is there a way to raise an alert if a value is missing (here, Server B in February)?
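
A minimal sketch of one way to alert on a missing server, assuming the events carry fields named Server and Month and that the number of expected servers is known (hard-coded as 3 here; the index name is a placeholder):

  index=your_connexion_index
  | stats dc(Server) AS servers_reporting, values(Server) AS servers_seen BY Month
  | where servers_reporting < 3

An alternative is to keep the full list of expected servers in a lookup and diff it against servers_seen, which avoids hard-coding the count.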
Hi, how can I delete the data in an index every week? In Splunk Answers and the documentation it is mentioned that we cannot completely clear index data, since it depends on two attributes, maxTotalDataSizeMB and frozenTimePeriodInSecs, and I am confused. I am using this index's data in a dashboard and basically I don't want to show the old data (P.S. I'm dealing with millions of user records, so sorting/deduping is not an option). Every week new data comes into the index, and when that happens I have to clear the old data already in the index. Irrespective of whether the index has reached its maximum default size, its frozen time period, or its maxHotSpanSecs value, how can I set the index attributes so that the index is emptied every week? Thanks!
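
A minimal indexes.conf sketch of the usual retention-based approach, assuming data older than seven days can simply be discarded (with no coldToFrozenDir or coldToFrozenScript configured, frozen buckets are deleted). The stanza name is a placeholder, and events only disappear when the whole bucket containing them rolls to frozen, so some data slightly older than a week can linger until its bucket ages out.

  # indexes.conf on the indexer(s)
  [my_weekly_index]
  # freeze (and, with no frozen archive configured, delete) buckets whose newest event is older than 7 days
  frozenTimePeriodInSecs = 604800
  # keep hot buckets short-lived so they roll to warm and become eligible for freezing
  maxHotSpanSecs = 86400

If the dashboard is the only concern, an even simpler option is to leave retention alone and restrict the dashboard searches to earliest=-7d@d.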
Hello, I am getting the error below when trying to search data on the Splunk search head:

ERROR SearchScheduler - The maximum number of concurrent historical searches for this user based on their role quota has been reached

I tried to change the parameters as below:

scheduler max_searches_per_cpu from 25 to 50
search max_searches_per_cpu from 6 to 10

But that too does not seem to work, and it even generates high load. Could you please suggest how I can fix this issue? I am using Splunk Enterprise 8.0.5 (I know it's out of date, but I still need help figuring out the issue so I can plan an upgrade).
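
The message refers to the per-role search quota rather than the instance-wide limits.conf settings, so here is a hedged sketch of the settings usually involved (the role name is an example, and raising quotas on an already loaded search head may just move the bottleneck):

  # authorize.conf (or Settings > Roles > <role> in the UI)
  [role_power_user]
  # maximum concurrent historical searches a single user with this role may run
  srchJobsQuota = 10
  # maximum concurrent historical searches for all users of this role combined
  cumulativeSrchJobsQuota = 50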
Hi all, I have a set of data and I used stats max() to get the maximum task number for every group, but the maximum number is not right for most of the groups. I used the following query:

| stats max(task_no) as "Latest Task" by group

Can anyone tell me if there is anything I should add to get the correct maximum value?
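
One common cause, offered as a guess rather than a diagnosis: if task_no is extracted as a string, max() compares values lexicographically, so "9" beats "100". A minimal sketch of converting it to a number first:

  | eval task_no = tonumber(task_no)
  | stats max(task_no) AS "Latest Task" BY group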
Hello all, just wanted to ask: is there any way to pull a CSV file from a blob container using a SAS URL and load it into a lookup table / KV store without it being indexed?
I want to capture the timestamp below using TIME_PREFIX's regex: 20220207T111737.014+0800. There is no guarantee that the timestamp will always be in that format, so can someone please help me with a generic regex?
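
A minimal props.conf sketch for this exact layout, assuming the timestamp sits at the very start of the event (the sourcetype name is a placeholder); if the format really can vary, TIME_FORMAT may need to be dropped so Splunk falls back to automatic timestamp recognition:

  # props.conf
  [my_sourcetype]
  # timestamp like 20220207T111737.014+0800 at the start of the event
  TIME_PREFIX = ^
  TIME_FORMAT = %Y%m%dT%H%M%S.%3N%z
  MAX_TIMESTAMP_LOOKAHEAD = 24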
I need to use Dashboard Studio to ensure the PDF looks well-formatted. I am new to Studio, but was able to do everything except add a report. I need the ability to append a report as the last panel, so that if the report is lengthy, it continues across multiple pages. When I add the report, it attempts to render, but it just makes the page incredibly long, not paginated as it should be. My goal is to have several graphic panels and a lengthy report that will be exported every night to a 3-4 page PDF so the recipient can review it.
Does anyone know if there is a schema I can load into my IDE so that when I modify a dashboard JSON definition I can detect errors and invalid key-value pairs?
Code is easier to explain: I wanted a bunch of new categories and I found eval especially useful - here is an obfuscated example:

  index=my_index CONNECTED source="/var/log/vmware/my_log.log"
  | eval vdi_pool=case(
      match(name,"1A-VDI\d{3}"), "pool1",
      match(name,"1B-VDI\d{3}"), "pool2",
      match(name,"2A-VDI\d{3}"), "pool3",
      match(name,"2B-VDI\d{3}"), "pool4",
      match(name,"3A-VDI\d{3}"), "pool5",
      match(name,"3B-VDI\d{3}"), "pool6",
      1=1, "unclassified"
    )
  | timechart span=1h count by vdi_pool

This made the subsequent queries super easy. Irritatingly, within the dashboard, if I add a new value I need to update all of the queries - this vexes me greatly. I have noticed the entire definition can be downloaded as a JSON doc, so I'm tempted to start templating this in Python, but that does not seem sane. Ideally I'd like to create blocks of repeatable logic I can assemble together to show different scenarios. Has anyone done anything similar to achieve this kind of capability - but more "splunkonic"?
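
One more "splunkonic" option, sketched as a suggestion rather than a definitive answer: move the name-to-pool mapping into a lookup so that adding a pool means editing the lookup once instead of every query. The lookup definition vdi_pool_lookup and its columns (name, vdi_pool) are hypothetical, and the definition would need match_type = WILDCARD(name) in transforms.conf so rows like name=1A-VDI*, vdi_pool=pool1 match.

  index=my_index CONNECTED source="/var/log/vmware/my_log.log"
  | lookup vdi_pool_lookup name OUTPUT vdi_pool
  | fillnull value="unclassified" vdi_pool
  | timechart span=1h count by vdi_pool

Combined with a base search plus post-process searches inside the dashboard, this keeps the shared logic in one place without templating the JSON in Python.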
Howdy Splunkers! WARNING: NEWBIE. I was after a preset drop-down, and I'm after something more "splunkonic" in Dashboard Studio. Classic dashboards come with the convenience of the relative time presets and so on: really useful. Dashboard Studio is quite nice, and after some head scratching I wanted feature parity. So I want to either completely substitute the queryParameters dictionary with a bunch of drop-downs, or create an object like this:

  "queryParameters": {
    "earliest": "$query.earliest_time$",
    "latest": "$query.latest_time$"
  }

If I were using Python I'd create a class or just insert a dict - how is everyone else managing? Currently I have an earliest_time input and a latest_time input drop-down and substitute the values of the keys above:

  "queryParameters": {
    "earliest": "$earliest_time$",
    "latest": "$latest_time$"
  }
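
A minimal sketch of the built-in time range input in Dashboard Studio, which provides the relative-time presets drop-down without separate earliest/latest inputs; the input name, data source name, and default value here are assumptions, not taken from your dashboard:

  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "title": "Time Range",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      }
    }
  },
  "dataSources": {
    "ds_example": {
      "type": "ds.search",
      "options": {
        "query": "index=my_index | timechart count",
        "queryParameters": {
          "earliest": "$global_time.earliest$",
          "latest": "$global_time.latest$"
        }
      }
    }
  }

A time range input with token global_time exposes both $global_time.earliest$ and $global_time.latest$, so one drop-down replaces the pair of inputs.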
Hi all, if my understanding is correct, data rolls from hot to warm after 90 days. I checked the time in indexes.conf and it is set to 90 days. My concerns:

1. For certain indexes I can only see 56 days of data, not 90 days.
2. A device in one index last reported on the 30th of April; now if I search with a time frame of All Time I get no matches / no data from that device.

Can anyone guide me on why there is this deviation in how data rolls from hot to warm?
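
A hedged way to see what is actually happening per bucket, rather than a definitive explanation: dbinspect shows each bucket's state and time span, so you can check whether the missing days were frozen (deleted or archived by retention) rather than merely rolled from hot to warm. The index name below is a placeholder.

  | dbinspect index=your_index
  | eval oldest=strftime(startEpoch, "%Y-%m-%d"), newest=strftime(endEpoch, "%Y-%m-%d")
  | table bucketId, state, oldest, newest, eventCount
  | sort startEpoch

If the oldest remaining bucket starts roughly 56 days ago, retention settings (frozenTimePeriodInSecs or maxTotalDataSizeMB) are a more likely cause than the hot-to-warm roll, which by itself does not remove data from search.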
Hello, I'm trying to install DB Connect on my server. After a reboot I tried to start it, but I get this message:

Missing or malformed messages.conf stanza for MOD_INPUT:INIT_FAILURE__server_the app "splunk_app_db_connect"_Introspecting scheme=server: script running failed (exited with code 1).

What do I need to do? The server is up and running. Thanks.
Hello, I have a running Splunk server and I want to add a MySQL add-on to it. I have added the Splunk Add-on for MySQL and I can see it in the Apps list (Splunk Add-on for MySQL / Splunk_TA_mysql, version 3.0.0, visible, enabled, sharing Global) and also on the home page, but I can't configure it. Also, when I press "MySQL" on the home page I get this error:

Oops. Page not found! Click here to return to Splunk homepage.

I have tried rebooting the server - it didn't help. What am I missing?
Hello, I have set up a cluster behind an AWS NLB. About a month back, the UI suddenly started taking too long to load. I tried tweaking the web.conf cache size, but it didn't help. The page loads, and even Splunk's default common.js loads with a 200, which seems to rule out the firewall, and infrastructure usage on the search heads is in single digits (no users, no load). Any ideas, folks?

(Attached: network trace)
Hi all, I am currently configuring Splunk Enterprise Security for alerts and I have a doubt about the implementation of this solution. I created an alert for failed logins from Windows devices. If this alert is triggered, the team manually runs some queries to collect more details, such as the pattern of that user account over the past 30 days, or the servers the account has logged in to over the past 30 days, to establish a baseline or to investigate whether there is any anomaly in the usage of that account. My doubt is whether there is any way to automate this process: if the above alert triggers, can the follow-up queries the team currently runs manually be automated, with the results shown somewhere in the alert itself, in tabular or graphical format, where the team can go and review them? Could you please suggest a solution for this? Any input on how to set up Enterprise Security for SOC detection and workflow is much appreciated.
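
One hedged option: in the correlation search's notable event settings you can attach drill-down searches that run over a chosen time window when an analyst opens the notable, so the 30-day baseline query becomes one click instead of a manual step. Below is a sketch of such a baseline search; the index, sourcetype, and field names (user, dest) assume the Splunk Add-on for Microsoft Windows / CIM mappings, and $user$ assumes the user field is passed from the notable as a token.

  index=wineventlog sourcetype="WinEventLog:Security" (EventCode=4624 OR EventCode=4625) user="$user$" earliest=-30d@d
  | stats count(eval(EventCode=4624)) AS successful_logins, count(eval(EventCode=4625)) AS failed_logins, dc(dest) AS distinct_servers, values(dest) AS servers BY user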
Hi, I know this is a hot topic and there are answers everywhere, but I couldn't figure out by myself what to do. Suddenly the join stopped working and my search is not performing as expected anymore. Nobody from infra gave me a reasonable explanation for that, so I have to figure out a different way.

Original search:

  index=aws-prd-01 application.name=domestic-batch context=BATCH action=SEND_EMAIL (status=STARTED OR status="NOT RUN")
  | rename status as initialStatus
  | fields jobId initialStatus
  | join type=left jobId
      [search index=aws-prd-01 application.name=domestic-batch context=BATCH action=SEND_EMAIL (status=COMPLETED OR status=FAILED)
      | rename status as finalStatus
      | fields jobId finalStatus]
  | table jobId initialStatus finalStatus
  | sort -timestamp

Original result:

  jobId  initialStatus  finalStatus
  01     STARTED        COMPLETED
  02     STARTED        FAILED

First search with no changes:

  index=aws-prd-01 application.name=domestic-batch context=BATCH action=SEND_EMAIL (status=STARTED OR status="NOT RUN")
  | table jobId, status

Result:

  jobId  status
  01     STARTED
  02     STARTED

Second search with no changes:

  index=aws-prd-01 application.name=domestic-batch context=BATCH action=SEND_EMAIL (status=COMPLETED OR status=FAILED)
  | table jobId, status

Result:

  jobId  status
  01     COMPLETED
  02     FAILED

Thanks a lot.
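
A hedged alternative that avoids join entirely (subsearch row/time limits are a frequent reason joins "suddenly" stop returning results): pull both status groups in one search and collapse them with stats. This sketch assumes each jobId has at most one initial and one final status event.

  index=aws-prd-01 application.name=domestic-batch context=BATCH action=SEND_EMAIL
      (status=STARTED OR status="NOT RUN" OR status=COMPLETED OR status=FAILED)
  | eval initialStatus=if(status="STARTED" OR status="NOT RUN", status, null())
  | eval finalStatus=if(status="COMPLETED" OR status="FAILED", status, null())
  | stats values(initialStatus) AS initialStatus, values(finalStatus) AS finalStatus, max(_time) AS latest BY jobId
  | sort - latest
  | table jobId initialStatus finalStatus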
Hello, I have a field uptime in seconds, e.g. 1231456. Can someone help me with the eval expression to convert this to HH:MM:SS as a new field in a table? Thanks in advance.
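
A minimal sketch, assuming the field is literally named uptime and holds whole seconds. tostring(..., "duration") is the usual one-liner; note that for values over a day (1231456 seconds is about 14.25 days) it renders a day component, e.g. 14+06:04:16, whereas the printf variant keeps rolling the hours up past 24.

  | eval uptime_hms = tostring(uptime, "duration")
  | eval uptime_hms_alt = printf("%02d:%02d:%02d", floor(uptime/3600), floor((uptime%3600)/60), uptime%60)
  | table uptime, uptime_hms, uptime_hms_alt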
Splunk DB Connect 3.9 won't accept the Java runtime "jdk-18_linux-x64_bin.tar.gz". When DB Connect prompts me to input the JRE installation path in Configuration > Settings, it throws this error: "Need Oracle Corporation JRE version 1.8 or OpenSDK 1.8". I can't seem to find the link that will take me to download this JRE 1.8 version. Can someone point me in the right direction to get the right JRE version downloaded? Thanks in anticipation.