All Topics

I am looking to see if anyone knows how to do this, or if it is even possible. I am trying to have Splunk read the Active Directory groups that hold the subnets for our different locations; we use those groups to restrict which people can see which subnets. It is a pain to keep the subnets updated in both Splunk and the AD groups, since two different teams maintain them. Is there a way to get rid of the hard-coded subnets I am searching for in Splunk and instead have the searches resolve against the AD groups we have set up? That would eliminate the need to update both the Splunk dashboards and the AD groups.
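A possible starting point, assuming the Splunk Supporting Add-on for LDAP (SA-ldapsearch) is installed and your subnet groups follow a naming convention (the "Subnet-*" filter, attribute names, and lookup name below are all assumptions): schedule a search that pulls the groups out of AD and writes them to a lookup, then have the dashboards reference the lookup instead of hard-coded subnets.

| ldapsearch domain=default search="(&(objectClass=group)(cn=Subnet-*))" attrs="cn,description"
| rename cn as ad_group, description as subnet
| table ad_group, subnet
| outputlookup ad_subnets.csv

Dashboards could then do something like | lookup ad_subnets.csv subnet as src_subnet OUTPUT ad_group, so only AD needs updating and the scheduled search keeps Splunk in sync.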
All, I have 2 separate queries working from AWS Description data that we collect on a regular basis. One of our portfolio leads has asked for a weekly report (every Monday) that includes the following information for all AWS EC2 instances in a stopped state:

- account_name - just a lookup field matching our account numbers to a human-readable name
- EC2 instance ID
- who owns the instance (a tag applied to the instance)
- the date the instance was stopped - this is in the "reason" field from the AWS Description data
- how much storage is attached to the stopped instance (a total amount)

Right now I have 2 separate queries that together return all of the data I need, but I need to find a way to merge the two sets of data into one report.

Search #1: gets the stopped instance IDs, who owns them (an applied tag), the account, the region, the instance Name (an applied tag), and the reason each was stopped.

index=awsdescription source=*ec2_instances state=stopped
| dedup id
| rename tags.Name as Name
| rename tags.Owner as Owner
| rename id as instance_id
| table account_name, region, Name, instance_id, Owner, reason

Search #2: uses the above search as a subsearch, pulling out just the instance IDs, then bounces that list off a different source (*ec2_volumes) to grab the list of volumes associated with the stopped instances. The results are then aggregated (stats sum) to get the total amount of storage attached to each stopped instance.

index=awsdescription* source=*ec2_volumes
    [search index=awsdes* source=*ec2_instances state=stopped
    | dedup id
    | rename id as attach_data.instance_id
    | fields attach_data.instance_id]
| rename attach_data.instance_id as instance_id
| dedup id
| stats sum(size) by instance_id

The two searches combined give me all of the data I need, but it is in 2 separate reports. From here, I have to download the results of each, throw them into a spreadsheet, and merge the two sets of data (using vlookup on the instance_id) into a single report before I send it off to the customer. Is there a way to combine these two searches? If so, I would love some guidance. Thanks in advance.
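One hedged way to replace the spreadsheet vlookup is to make Search #2 the subsearch of a left join keyed on instance_id (a sketch built from the two searches above; it assumes the EC2 volume "size" field is in GiB as in the describe-volumes output, and it inherits the usual join subsearch row limits):

index=awsdescription source=*ec2_instances state=stopped
| dedup id
| rename tags.Name as Name, tags.Owner as Owner, id as instance_id
| table account_name, region, Name, instance_id, Owner, reason
| join type=left instance_id
    [ search index=awsdescription* source=*ec2_volumes
    | dedup id
    | rename attach_data.instance_id as instance_id
    | stats sum(size) as total_storage_GiB by instance_id ]
| fillnull value=0 total_storage_GiB

The fillnull keeps stopped instances with no attached volumes in the report with a 0 instead of a blank.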
Hey all, I have the Splunk Add-on for Unix and Linux deployed to about ~70 servers. All was working fine (and has been for years!) up until yesterday. I'm receiving data into my os index (which is where those logs are stored), but searching on anything beyond index, host, and sourcetype does not work. For example, over the last 7 days I can run a search like:

index=os sourcetype=df host="server1" OR host="server2" | stats max(PercentUsedSpace) as PercentUsed by host,filesystem | sort - PercentUsed | where PercentUsed >=75

It pulls data from 7 days ago up until yesterday. Searching from yesterday to now gives me no data. If I search index=os host="server1" OR host="server2", I'm receiving logs as normal, and the other sources and sourcetypes are there.

So I guess my question is: what happened to my "PercentUsedSpace"? It doesn't show in the interesting fields panel, and searching on it returns nothing. My search for index=os source=df host="server1" OR host="server2" shows my logs, but I can't refine it down further.

Edit: What is interesting is that every now and then I receive events that are just header rows, something along the lines of "CPU pctUser pctNice pctSystem pctIowait pctIdle", "Name rxPackets_PS txPackets_PS rxKB_PS txKB_PS", and "memTotalMB memFreeMB memUsedMB memFreePct memUsedPct pgPageOut swapUsedPct pgSwapOut cSwitches interrupts forks processes threads loadAvg1mi waitThreads interrupts_PS pgPageIn_PS pgPageOut_PS". So it seems that instead of parsing each field, Splunk is indexing the header line as a log event. Please assist!
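A quick diagnostic sketch (not a fix): the df field extractions are search-time props shipped with the add-on, so it's worth confirming the props still resolve on the search head and eyeballing whether yesterday's raw events changed shape (the header rows you describe suggest the add-on's scripted input output format changed, e.g. after an add-on update).

$SPLUNK_HOME/bin/splunk btool props list df --debug

index=os sourcetype=df earliest=-24h
| head 5
| table _time, punct, _raw

If the new events' punct/_raw layout differs from the older ones, the extraction regexes no longer matching would explain the missing PercentUsedSpace.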
I would like to build a report, from my data, of employees who have completed four different certification courses. For example: Employee 1 completed 3 courses, Employee 2 completed 2 courses, Employee 3 completed 1 course, etc., along with each employee's name and completion dates. Kindly suggest how to write a query for this situation. Is it OK to create a CSV file with one set of results and compare against it, or is there another way?
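You likely don't need an intermediate CSV if each completion is an event; a stats pass can count distinct courses per employee in one search. A hedged sketch (index, sourcetype, and field names below are hypothetical placeholders for your data):

index=training sourcetype=course_completions
| stats dc(course_name) as courses_completed,
        values(course_name) as courses,
        max(completion_date) as latest_completion
        by employee_name
| where courses_completed >= 4

Drop the final where clause to see the full per-employee breakdown (3 courses, 2 courses, and so on).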
Hi All, We have recently upgraded our Splunk environment from 7.x to 8.x and we want to compare Splunk performance before and after the upgrade. One of the parameters we want to track is the time taken by the cluster to complete its fixup tasks. Can you please guide me on whether there is any way to monitor the time taken by fixup tasks and by a rolling restart to complete? Thanks & Regards, Rahul Bhatia
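One hedged avenue: the cluster manager exposes pending fixup tasks over REST, so a scheduled search run on the manager could snapshot how long tasks sit in the queue (the level argument and the initial.timestamp field name here are from memory; verify against the endpoint's actual output on your version):

| rest /services/cluster/master/fixup level=generation
| eval fixup_age_sec = now() - 'initial.timestamp'
| stats count as pending_fixups, max(fixup_age_sec) as oldest_fixup_sec

For after-the-fact comparison across the upgrade, the cluster manager's own logging may also help, e.g. index=_internal sourcetype=splunkd component=CMMaster, bracketing when fixup activity started and ended around a rolling restart.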
We've got the Splunk App for Infrastructure inputs for Windows metrics deployed to our universal forwarders. Metrics are all working fine except in one situation: if any performance counter value is 0, then Splunk isn't recording the value. For example, if a disk free space counter reaches 0, it'll drop off all charts. Likewise, application pool request queues don't record a 0 value when there's nothing in them. I've got plenty of custom metrics I've done (faking the statsd protocol) that can write 0 just fine; it seems to be only the perfmon metrics from SAI that have the issue. Has anyone else encountered this and know what the fix is?
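If the SAI-deployed stanzas behave like standard perfmon inputs (an assumption worth verifying against the inputs.conf spec for your version), standard perfmon:// inputs drop zero-value samples by default and have a showZeroValue toggle. A sketch of what the stanza might look like with it enabled (counter names and index are placeholders):

[perfmon://LogicalDisk]
counters = % Free Space; Free Megabytes
instances = *
interval = 60
showZeroValue = 1
index = em_metrics

Since SAI manages its inputs through the deployed add-on, the override would need to live in a local/inputs.conf layered over the SAI input app rather than in its default files.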
Hello, I'm using Splunk App for Web Analytics version 2.3.0 and it's working well. I started with one site to monitor and I have the data I need, but when I added another log to monitor, the site name it used was the folder name. I ran index=iislogs | dedup site | table site and I can see the two sites, but I want to change the name of one of them. How can I change the site name? I tried to change it under "Configure website" but it's not updated. Thank you, Dov
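While you chase down why "Configure website" isn't sticking, a hedged workaround is to rename at search time with a small mapping lookup you maintain yourself (site_names.csv with columns site and display_name is a hypothetical lookup, not part of the app):

index=iislogs
| lookup site_names.csv site OUTPUT display_name
| eval site=coalesce(display_name, site)

coalesce keeps the original folder-derived name for any site you haven't mapped yet.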
Hi all, I have two indexes with the following fields:

index=sofware

sw            version    author
software_1    1.0        Mark
software_2    1.1        Holly
software_3    1.2        Tom
software_4    1.3        Gorge

index=downloads

timestamp              sw
2021-11-23 00:00:00    software_1
2021-11-22 00:00:00    software_1
2021-11-21 00:00:00    software_4
2021-11-20 00:00:00    software_1
2021-11-19 00:00:00    software_3
2021-11-18 00:00:00    software_1

I need to create a report with the number of downloads for each software, something like this:

sw            version    author    #downloads
software_1    1.0        Mark      4
software_2    1.1        Holly     0
software_3    1.2        Tom       1
software_4    1.3        Gorge     1

I tried using a left join but couldn't find any good solution. Thanks for helping.
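A join-free sketch that should produce the table above: count downloads per sw, append the software catalog, and collapse the two result sets with a second stats (index and field names taken from the post as-is):

index=downloads
| stats count as downloads by sw
| append
    [ search index=sofware
    | table sw, version, author ]
| stats values(version) as version, values(author) as author, sum(downloads) as downloads by sw
| fillnull value=0 downloads

Because software_2 still arrives via the appended catalog rows, it survives with downloads=0 after fillnull, which a plain inner join would have dropped.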
Hello Community. I am trying to solve a problem and I can't see a solution. I hope you can help me! I am working with a metrics index. My final goal is to get the average of two metrics, but with two different filters based on a dimension from that metrics index, and then a final calculation from those calculated fields, something like this:

| mstats avg(metric1) as result1 avg(metric2) as result2 where index=my_metric_index AND filter_field=filter_list_1
| mstats avg(metric1) as result3 avg(metric2) as result4 where index=my_metric_index AND filter_field=filter_list_2
| eval Final_Result_1=result3-result1, Final_Result_2=result4-result2

I also created a search (which I intend to use as a subsearch inside the previous search) to get both lists, filter_list_1 and filter_list_2, something like this:

|mcatalog values(values1) as values1 values(values2) as values2 where index=my_metric_index AND filter1 AND filter2 AND filter3 BY values1, values2 {...some modification stuff here...} | table filter_list_1, filter_list_2

Both filter_list_1 and filter_list_2 can be returned as a column list or a multivalue field (created with the join command from the column list). The challenge here is how to pass these filter lists from a subsearch to the main (or preceding) search to use as a filter in the mstats command.

The best I've got was to have the subsearch send back one of the filter lists, named after the field I need to filter on in the main search; the subsearch formatted the field list (automatically, I don't know how it did that) as a bunch of OR statements over all the values, which worked with the mstats command. But I could only do this with one mstats command, not both. I don't know if I am explaining myself well. How can I achieve my "final and complicated" goal? Something like this:

| mstats avg(metric1) as result1 avg(metric2) as result2 where index=my_metric_index AND filter_field=filter_list_1
| mstats avg(metric1) as result3 avg(metric2) as result4 where index=my_metric_index AND filter_field=filter_list_2 [|mcatalog values(values1) as values1 values(values2) as values2 where index=my_metric_index AND filter1 AND filter2 AND filter3 BY values1, values2 {...some modification stuff here...} | table filter_list_1, filter_list_2]
| eval Final_Result_1=result3-result1, Final_Result_2=result4-result2

Any help will be very appreciated. Thanks in advance for your help. Regards, Carlos M
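One hedged way to get both mstats results side by side is appendcols, giving each mstats its own subsearch (the OR expansion you saw happens automatically whenever a subsearch returns rows whose field is named like the filter field). A sketch, assuming your version supports subsearches in the mstats where clause (your one-mstats success suggests it does) and that filter1/filter2 stand in for whatever distinguishes the two lists:

| mstats avg(metric1) as result1 avg(metric2) as result2
    where index=my_metric_index AND
    [ | mcatalog values(filter_field) as filter_field where index=my_metric_index AND filter1
      | mvexpand filter_field
      | fields filter_field ]
| appendcols
    [ | mstats avg(metric1) as result3 avg(metric2) as result4
        where index=my_metric_index AND
        [ | mcatalog values(filter_field) as filter_field where index=my_metric_index AND filter2
          | mvexpand filter_field
          | fields filter_field ] ]
| eval Final_Result_1=result3-result1, Final_Result_2=result4-result2

The mvexpand turns the multivalue mcatalog result into one row per value, which is what makes the subsearch render as (filter_field=a OR filter_field=b OR ...).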
What license is required for a single-instance Splunk Enterprise deployment that involves zero data indexing? Scenario: a customer has some static data to be displayed in dashboards, where the data values may or may not change (and the dashboards perform some logical operations on the available static data to show value-added information/visualizations). Assume the data will be stored in a lookup file or in a database and read using Splunk DB Connect, with no other data indexing planned.
Hi, I want to take the ag-grid JS functions that are served from a CDN and use them in my dashboard XML. I copied all the functions into a new JS file and tried adding the JS file to the XML through <script src=>, but it's not reading the JS file. I have tried calling a single function from the console, but it's not reading the data. Is copying the code from the CDN into a JS file and loading it through the XML the right approach? If not, please suggest a better approach.
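In Simple XML, custom JS is normally attached via the script attribute on the root element rather than a <script src=> tag, with the file placed in the app's appserver/static directory. A minimal sketch (the app and file names are placeholders):

File location: $SPLUNK_HOME/etc/apps/my_app/appserver/static/ag_grid_bundle.js

<dashboard script="ag_grid_bundle.js">
  <label>My ag-grid dashboard</label>
  ...
</dashboard>

After copying the file in, static assets are typically cached, so a Splunk restart (or the /_bump endpoint) plus a hard browser refresh is usually needed before the script loads.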
I have a lookup, | inputlookup citizen_data, which has the fields ID, Name, State. I have another sourcetype, index=bayseian sourcetype=herc, that has the fields citizen_ID, mobile, email. My target is to enrich the citizen_data lookup with additional columns so that when I run | inputlookup citizen_data I see ID, Name, State, Mobile, Email.

NOTE: the ID field in the lookup is the same as the citizen_ID field in the sourcetype, and I want appendcols to link the rows properly by the matching ID values. But when I execute my query with appendcols, it does append the new columns/fields to the lookup, but it doesn't link or match them on the common field, i.e. the ID. It just appends the new columns positionally, so the rows contain incorrect data. Any suggestion how to append properly by the common field (ID/citizen_ID)?
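That behaviour is expected: appendcols pastes columns row-by-row with no key matching. For key-based enrichment you want a join (or lookup) on the shared ID. A hedged sketch that rewrites the lookup in place (latest() is an assumption about which mobile/email you want when a citizen has several events; join's subsearch limits apply for very large data):

| inputlookup citizen_data
| join type=left ID
    [ search index=bayseian sourcetype=herc
    | rename citizen_ID as ID
    | stats latest(mobile) as Mobile, latest(email) as Email by ID ]
| outputlookup citizen_data

type=left keeps citizens that have no matching events, just with empty Mobile/Email columns.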
What would be the best regex to match all three of these different values?

-ec-1
-ec-01
-ec01
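A sketch that covers all three forms by making the second hyphen optional and letting \d+ absorb the optional leading zero (field=host is an assumption about where the value lives):

| rex field=host "(?<ec_id>-ec-?\d+)"

This matches -ec-1, -ec-01, and -ec01 alike; tighten \d+ to \d{1,2} if the suffix is always one or two digits.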
Suppose I have query A and query B, and both return success and failure counts. I want the successes from both A and B in one bar, with a different colour per query, and the failures in a separate bar, again coloured per query.
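A hedged sketch: tag each result set with its query name, append them, and chart count over the outcome split by query (search_A_here, search_B_here, and the status field are placeholders for your actual searches and outcome field):

search_A_here
| eval query="A"
| append
    [ search search_B_here
    | eval query="B" ]
| chart count over status by query

Rendered as a stacked column chart, this gives one "success" bar and one "failure" bar, each divided into differently coloured A and B segments.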
Can anyone help with a cron expression? The query should run every 15 minutes from 8:15am to 6pm, Monday to Friday.
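A window that starts at :15 and ends exactly on the hour can't be expressed in a single cron expression; a sketch of the closest single expression and an exact three-part alternative (in Splunk each scheduled search takes one cron, so the exact version means three cloned searches):

# Closest single expression - also fires at 08:00 and at 18:15-18:45
*/15 8-18 * * 1-5

# Exact coverage, split across three schedules
15,30,45 8 * * 1-5    # 08:15, 08:30, 08:45
*/15 9-17 * * 1-5     # 09:00 through 17:45, every 15 minutes
0 18 * * 1-5          # 18:00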
Hello Splunkers, I have two panels and one text box filter in my Splunk dashboard. I want to hide and show the panels depending on the text box value. For example: if the text box is empty, then only panel A should be visible; if the text box has some value, then only panel B should be visible. The default is an empty text box, so initially only panel A should be visible.
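A hedged Simple XML sketch using a change handler to set visibility tokens and depends on the panels; the exact condition match syntax can be finicky across versions, so treat the len(...) expression as a starting point to adjust:

<form>
  <init>
    <set token="show_a">true</set>
  </init>
  <fieldset>
    <input type="text" token="filter" searchWhenChanged="true">
      <label>Filter</label>
      <change>
        <condition match="len(trim(&quot;$value$&quot;))=0">
          <set token="show_a">true</set>
          <unset token="show_b"></unset>
        </condition>
        <condition>
          <unset token="show_a"></unset>
          <set token="show_b">true</set>
        </condition>
      </change>
    </input>
  </fieldset>
  <row>
    <panel depends="$show_a$">...panel A...</panel>
    <panel depends="$show_b$">...panel B...</panel>
  </row>
</form>

The init block covers the default state, since the change handler only fires once the user edits the text box.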
Hello, we are forwarding logs from a host via a universal forwarder. As the universal forwarder is not able to filter events, we went for adjusting transforms.conf and props.conf. After editing those files we did indeed only ingest the expected and desired logs, according to the regex in transforms.conf. However, the indexed volume stayed the same. So I tried to send all events to the nullQueue and check the indexed volume again. For some reason, even with zero events, the query for indexed volume still reports a very high value. Here are the snippets from the relevant files and queries:

1. Search query for getting indexed volume:

index="_internal" source="*metrics.log" per_index_thruput series=<my index>
| eval GB=kb/(1024*1024)
| timechart span=2min partial=f sum(GB) by series

2. A rather boring one, the search to check the event count:

index=<my index> | stats count

3. Stanza in transforms.conf (to kill all events for testing):

[<my transformation>]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

4. Stanza in props.conf for the sourcetype:

[<my sourcetype>]
TRANSFORMS-setnull = <my transformation>

I also tried with TRANSFORMS-set; no idea what the difference between the two is, but that doesn't work either. So the nullQueue is working, as I have no events in the index, however the query for indexing volume is off the charts. Any help would be appreciated. Thanks, Mike
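For cross-checking, license_usage.log is often a cleaner measure of what was actually indexed than the metrics.log thruput series, since it reports bytes charged per index after all queue-time filtering. A hedged sketch (the b field is raw bytes):

index=_internal source=*license_usage.log type=Usage idx="<my index>"
| timechart span=1h sum(eval(b/1024/1024/1024)) as GB

If this shows near zero while the per_index_thruput query stays high, the discrepancy is in what metrics.log is measuring rather than in your nullQueue setup.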
Hi Team, we need to integrate Splunk with an S3 bucket in our client's AWS account. However, the client is concerned about granting us the ListBuckets permission. Is it possible for the AWS Add-on to work without the ListBuckets permission?
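One avenue worth verifying against the add-on's permission documentation: the generic S3 input discovers keys by listing, but the SQS-based S3 input learns object keys from S3 event notifications, so it may get by with object-level reads plus SQS access and no list permission. A hedged IAM sketch (bucket, queue, and account number are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::client-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage",
        "sqs:GetQueueAttributes",
        "sqs:GetQueueUrl"
      ],
      "Resource": "arn:aws:sqs:*:111122223333:client-queue"
    }
  ]
}

kms:Decrypt would additionally be needed if the bucket objects are KMS-encrypted.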
Hi there, I am trying to implement a use case where I have an API that keeps sending partial results (around 50-100 at a time) until all the results from the API are done. I have implemented a GeneratingCommand for it, and it returns correct results. However, I have to wait quite some time, because Splunk returns results only once all the results from the API have been collected.

The use case I want: I do not wish to wait for all results; I want the partial results to appear in Splunk as soon as they are returned from the API.

I have tried:
1) adding limits.conf
2) using chunked=True
3) editing maxresultrows and maxresults
4) using flush() on results
5) converting to a streaming command and using the above steps

But nothing seems to work. Please help; any help would be really appreciated.
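For comparison, a minimal sketch of a generating command that yields events as each API page arrives, rather than accumulating them (the endpoint, paging scheme, and field names are hypothetical); note that even with per-record yields, early rendering also depends on the search running in a preview-capable mode (Verbose/Smart in the UI) and on the chunk limits in limits.conf:

#!/usr/bin/env python
# generate_api_results.py - a sketch: emit each API page as it arrives
import sys
import time
import requests  # assumption: bundled with, or vendored into, the app

from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option

@Configuration()
class ApiResultsCommand(GeneratingCommand):
    url = Option(require=True)

    def generate(self):
        page = 0
        while True:
            # Hypothetical paged API; adapt the paging scheme to your endpoint.
            resp = requests.get(self.url, params={"page": page}, timeout=30)
            resp.raise_for_status()
            results = resp.json().get("results", [])
            if not results:
                break
            for item in results:
                # Yielding per record lets the SDK hand chunks to Splunk
                # without waiting for the whole crawl to finish.
                # Assumes each item is a flat dict of strings/numbers.
                yield {"_time": time.time(), "_raw": str(item), **item}
            page += 1

dispatch(ApiResultsCommand, sys.argv, sys.stdin, sys.stdout, __name__)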
Hi everyone, I have two URLs which I want to capture in one regex group; the dest port (443) should land in a separate group. Here are two examples:

my.url.is.here:443
http://myurl.de/tasks/search/home?

When I use the regex "(?<url>[^\s:]+):?", the first example is fine, but the second only captures "http" because the match stops at the ":". Can someone help fix my regex? Thanks.
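A sketch that anchors the optional port to the end of the value, so a ":" in the middle of a URL (as in "http://") no longer terminates the url group (field=dest is an assumption about where the value lives):

| rex field=dest "^(?<url>\S+?)(?::(?<dest_port>\d+))?$"

The lazy \S+? combined with the anchored optional port means "my.url.is.here:443" splits into url=my.url.is.here and dest_port=443, while "http://myurl.de/tasks/search/home?" is consumed entirely by the url group with dest_port left unset.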