Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi All, registration for this year's .conf22 is open, and I see a registration fee when signing up with a personal account. Is .conf22 registration free for Splunk partner companies?
Hello, I use the appendcols command to aggregate the results of several searches into one table, and I have two issues with the three fields highlighted in yellow.

Issue 1: if I don't use the piece of code below, the field "Tea" is not displayed (same thing for INC and OUT):

    | appendpipe [ stats count as _events | where _events = 0 | eval "Tea" = 0 ]

Issue 2: the appendpipe command puts "0" only on the first line, not on the others. Here is the search:

    | appendcols [ search index=titi earliest=@d+7h latest=@d+19h
        | bin span=1h _time
        | eval time = strftime(_time, "%H:%M")
        | stats dc(Tea) as Tea by time
        | rename time as Heure
        | appendpipe [ stats count as _events | where _events = 0 | eval Tea = 0 ]]
    | appendcols [ search index=tutu earliest=@d+7h latest=@d+19h
        | bin span=1h _time
        | eval time = strftime(_time, "%H:%M")
        | stats dc(s) as "OUT" by time
        | rename time as Heure
        | appendpipe [ stats count as _events | where _events = 0 | eval "OUT" = 0 ]]

What is wrong, please? Something else is strange: when the result is 0 it is usually displayed, but sometimes I get an empty field instead of 0 (highlighted in yellow). Can anybody suggest how to display the result in every case where the value is 0?
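A minimal sketch of one common fix, assuming the goal is simply to show 0 in every empty cell: rather than patching each subsearch with appendpipe, run fillnull once after the final appendcols so every null cell in the named columns becomes 0 (index names titi/tutu and the Tea/OUT fields are taken from the post):

    index=titi earliest=@d+7h latest=@d+19h
    | bin span=1h _time
    | eval Heure = strftime(_time, "%H:%M")
    | stats dc(Tea) as Tea by Heure
    | appendcols [ search index=tutu earliest=@d+7h latest=@d+19h
        | bin span=1h _time
        | eval Heure = strftime(_time, "%H:%M")
        | stats dc(s) as OUT by Heure ]
    | fillnull value=0 Tea OUT

fillnull fills every row where a named field is missing, so if the two subsearches return different numbers of rows, the shorter column gets padded with 0 rather than left blank.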
The table below is for one user; likewise, I have to pull the details for many users who visited multiple URLs at different timestamps. I am trying to calculate the total duration between each URL/E and URL/J, i.e. whenever the user visits URL/E and traverses through to URL/J, calculate the total duration of that span. I tried using the transaction command, but it only calculates the duration between the last URL/E event and URL/J.

USER_ID    TIMESTAMP    URL
CD_125     05:30:36     URL/E
CD_125     05:30:38     URL/F
CD_125     05:30:39     URL/H
CD_125     05:30:41     URL/J
CD_125     05:30:43     URL/E
CD_125     05:30:44     URL/I
CD_125     05:30:45     URL/J

What I am looking for here is the duration of each URL/E to URL/J span. The output I am expecting is this:

User_ID    Duration    URL
CD_125     5           URL/E URL/F URL/H URL/J
CD_125     2           URL/E URL/I URL/J

I would appreciate it if someone could guide and help me with the query. Thanks!
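A minimal sketch, assuming URL and USER_ID are extracted fields (the index name is a placeholder): transaction with eval-based startswith/endswith opens a new transaction at each URL/E and closes it at the next URL/J per user, so each span gets its own duration instead of one per user:

    index=web_activity
    | transaction USER_ID startswith=eval(URL=="URL/E") endswith=eval(URL=="URL/J")
    | table USER_ID duration URL

transaction creates the duration field automatically (last event time minus first); if spans can be long, setting maxspan or maxpause is worth considering so unrelated visits don't get glued together.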
I need some help checking my send-email configuration; I still have not received the alert email in my mailbox. The alert is definitely triggering, as I can see it in the "Triggered Alerts" section. When I fill in the email settings and save them, then open the page again, the username and password are gone.
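Two hedged pointers: the password field commonly appears blank when the settings page is reopened because the credential is stored encrypted, so that by itself doesn't mean it was lost; and the sendemail script logs its SMTP failures to the internal index. A search along these lines (the exact log file name can vary by version) usually surfaces the real error:

    index=_internal source=*python.log* sendemail ERROR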
When trying to enable aws_description_tasks, I'm finding in the logs that it errors out with 'Connection reset by peer', which suggests a firewall issue in my network. I can ask the networking team to allow traffic to this endpoint, but I'm unsure what the URL is. Is there any way to find the endpoint or URL that aws_description_tasks uses to grab metadata from AWS? I'm not sure if it's simply 169.254.169.254, but I would like to know how to find the endpoints used by the different AWS inputs.
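One hedged way to dig for it, assuming the add-on writes its input logs to the internal index (the source pattern below is a guess; check the actual file names under $SPLUNK_HOME/var/log/splunk): connection errors there often include the host the input was trying to reach, and the description input talks to the regional AWS service APIs (typically ec2.<region>.amazonaws.com-style endpoints) rather than the 169.254.169.254 instance-metadata address.

    index=_internal source=*aws*description* (ERROR OR "Connection reset")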
Hi Guys, I was tasked with building a configuration register and defining the processes for Splunk for my organization. Could someone help me with an example? Thank you.
Hello Folks, I have the query below on one of my dashboard panels. I pass the IN_BUSINESSDATE field value from the dashboard (a form input) with a default of % and a prefix and suffix of %, so in case the user does not provide a value, the query receives IN_BUSINESSDATE as %%% (that's OK).

    index=dockerlogs
    | search app_name = ABCD AND logEvent="Delivered"
    | spath input=businessKey path=businessDate output=businessDate
    | spath input=businessKey output=sourceSystem path=sourceSystem
    | eval businessDate=substr(businessDate,1,10)
    | where like(businessDate, "$IN_BUSINESSDATE$")
    | stats count by businessDate, sourceSystem

Now I would like the stats at the end of the query to change as below when IN_BUSINESSDATE is not provided (meaning the value is %%%):

    | stats count by sourceSystem

How can I achieve this? Thank you!
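A minimal sketch of one way, assuming the token is substituted as a literal string before the search runs: collapse businessDate to a single constant whenever the token still holds its %%% default, so the final stats effectively groups by sourceSystem alone without needing a second query:

    index=dockerlogs app_name=ABCD logEvent="Delivered"
    | spath input=businessKey path=businessDate output=businessDate
    | spath input=businessKey path=sourceSystem output=sourceSystem
    | eval businessDate = substr(businessDate, 1, 10)
    | where like(businessDate, "$IN_BUSINESSDATE$")
    | eval businessDate = if("$IN_BUSINESSDATE$" == "%%%", "ALL", businessDate)
    | stats count by businessDate, sourceSystem

When a real date is supplied the output is unchanged; with the default, every row lands in the single "ALL" bucket.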
I am looking for a way to check that multiple conditions match and, if they are met, output a specific word such as "true". Example:

    my_cool_search_here
    | eval condition_met=if(user=* AND DoW IN (Mon,Wed) AND HoD IN (01,02,03) AND hostname IN ("hostname.hostdomain","hostname.hostdomain"), "true")

I don't know if that makes sense, but essentially I want to check whether "user" has ANY value, and then whether the fields "DoW", "HoD", and "hostname" have specific values out of a possible range; if all of that matches, set the value of "condition_met" to "true". I know I can do this for a single field/value, but how would I accomplish this for multiple different conditions? Thanks!
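A minimal sketch of a working version, with three caveats about eval syntax: user=* is search syntax, so inside eval the "has any value" test becomes isnotnull(user); eval's IN operator wants quoted string values (on older versions that lack IN in eval, the in() function or chained ORs do the same job); and if() needs an explicit else value:

    my_cool_search_here
    | eval condition_met = if(isnotnull(user)
            AND DoW IN ("Mon", "Wed")
            AND HoD IN ("01", "02", "03")
            AND hostname IN ("hostname.hostdomain", "hostname.hostdomain"),
        "true", "false")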
Hi All, I'm having an issue where report acceleration is not working for non-admin roles. The report accelerates correctly when run as the admin user, and 'Using summaries for search' appears in the job inspector. When the same report runs as other users, it will not load over certain time ranges and does not show the 'Using summaries for search' confirmation in the job inspector. Things I have tried for the role in question:
- Confirmed the schedule_search and accelerate_search capabilities are enabled
- Confirmed the user has write access to the report
- Confirmed the report is in a shared app which the user has access to
- Tried various other capabilities and inheritance from the power user role
Over 26 million events are being matched; is there a chance this role is hitting a limit which prevents the accelerated search functionality? Let me know if you need any more information.
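One hedged check: summaries are only used when the role's effective search matches what was accelerated, so a role-level search filter (srchFilter) on the non-admin role can silently disqualify it even with the capabilities in place. The REST endpoint below lists the acceleration summaries and their status, which helps confirm the summary exists and is complete before chasing role limits:

    | rest /services/admin/summarization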
Hello all, thanks in advance for your assistance. I have a 6-node indexer cluster with a search factor of 6 and a replication factor of 3. My ingest is 130 GB/day of proxy data.

My hardware: 7 TB hot/warm SSD and 112 TB cold 10k spindle per indexer.

I have a requirement for 30 days hot/warm and 1,065 days cold = 1,095 days (3 years) total.

The calculations I have found say: (Daily Avg. Ingest x Compression Rate x Num Days Retention) / # of Indexers. So:

Hot/Warm: (130 GB x 0.6 compression x 30 days) / 6 = 390 GB per indexer
Cold: (130 GB x 0.6 compression x 1,065 days) / 6 = 13,845 GB per indexer
Total: (130 GB x 0.6 compression x 1,095 days) / 6 = 14,235 GB per indexer

Given that, I think my indexes.conf would have:

    [idx_proxy]
    homePath = volume:primary/idx_proxy/db
    coldPath = volume:secondary/idx_proxy/colddb
    thawedPath = $SPLUNK_DB/idx_proxy/thaweddb
    maxTotalDataSizeMB = 14576640
    maxDataSize = auto_high_volume
    homePath.maxDataSizeMB = 399360
    frozenTimePeriodInSecs = 94608000

The question: with the replication factor in play, I don't think this answer is complete. Can anyone help with an updated formula/calculation? This is the current storage:

    [splunk@splunkidx1 ~]$ du -sh /splunkData/*/idx_proxy/
    8.0T /splunkData/cold/idx_proxy/
    947G /splunkData/hot/idx_proxy/
    [splunk@splunkidx2 ~]$ du -sh /splunkData/*/idx_proxy/
    8.0T /splunkData/cold/idx_proxy/
    826G /splunkData/hot/idx_proxy/
    [splunk@splunkidx3 ~]$ du -sh /splunkData/*/idx_proxy/
    7.9T /splunkData/cold/idx_proxy/
    955G /splunkData/hot/idx_proxy/
    [splunk@splunkidx4 ~]$ du -sh /splunkData/*/idx_proxy/
    7.8T /splunkData/cold/idx_proxy/
    952G /splunkData/hot/idx_proxy/
    [splunk@splunkidx5 ~]$ du -sh /splunkData/*/idx_proxy/
    8.0T /splunkData/cold/idx_proxy/
    936G /splunkData/hot/idx_proxy/
    [splunk@splunkidx6 ~]$ du -sh /splunkData/*/idx_proxy/
    7.8T /splunkData/cold/idx_proxy/
    911G /splunkData/hot/idx_proxy/
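A hedged sketch of the cluster-aware version, using the commonly cited approximations that compressed rawdata is roughly 15% of raw ingest and index (tsidx) files roughly 35%. One note: Splunk requires the search factor to be less than or equal to the replication factor, so the SF=6/RF=3 pairing above can't both hold as written; the arithmetic below uses RF=3 with SF=2 purely for illustration, so substitute the values from your actual server.conf:

    Per-indexer total ~= Daily Ingest x (0.15 x RF + 0.35 x SF) x Retention Days / Num Indexers
                      ~= 130 GB x (0.15 x 3 + 0.35 x 2) x 1,095 / 6
                      ~= 130 GB x 1.15 x 1,095 / 6
                      ~= 27,284 GB per indexer

Since maxTotalDataSizeMB is enforced per indexer against everything that indexer holds, replicated copies included, the 14,235 GB figure from the single-copy formula would undersize it, which lines up with the ~8.9 TB per indexer already on disk.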
How can I include several unique IP addresses in the search command with src=, or can I use src IN (ip, ip, ip)?
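Both forms work in the base search; a minimal sketch, with the index and addresses as placeholders (IN is available in the search command in reasonably recent Splunk versions):

    index=your_index src IN ("10.0.0.1", "10.0.0.2", "10.0.0.3")

The equivalent long form is index=your_index (src="10.0.0.1" OR src="10.0.0.2" OR src="10.0.0.3").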
Hi there! I want to add columns to a table that I copied from the docs about timewrap: columns holding the average for each field (accessories, sports, strategy, etc.) across the timewrapped columns. Basically, a column for the average of ACCESSORIES_S1, ACCESSORIES_S0, etc., then a column for the average of SPORTS_S1, SPORTS_S0, etc., and a column for the average of STRATEGY_S1, STRATEGY_S0, etc. Eventually I want to use these averages as an alert trigger, firing when the counts on these fields surpass the average. Long story short, I have an arbitrary number of fields with a count on each, and I want to alert when a count exceeds its average without setting up one alert per field, because I don't know ahead of time what the fields will be and the field names can change. @mattymo, your multipart article on timewrap and Cyclical Statistical Forecasts and Anomalies has helped me so much; can you please help me with this application of timewrap? Thank you!
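A minimal sketch, assuming the wrapped columns follow the CATEGORY_sN naming shown in the table: untable turns the wide timewrap output back into rows, a rex splits each series name into its category and week offset, and eventstats computes the per-category average that the current week (s0) is then compared against:

    <your timechart ... | timewrap 1week>
    | untable _time series count
    | rex field=series "(?<category>.+)_s(?<week>\d+)$"
    | eventstats avg(count) as baseline by _time, category
    | where week = "0" AND count > baseline

Because the category is parsed out of the series name at search time, new fields are picked up automatically, which keeps this down to a single alert.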
I'm using the Webtools Add-on to make a GET request for each row based on a keyword field, but I need to combine the curl results with the initial data. I.e., the initial data returns _time, keyword, hostname, etc., the curl request returns curl_message, curl_status, etc., and I want my final table to be _time, keyword, hostname, curl_message, curl_status. Right now I'm able to get the curl response using map search="uri=.../$keyword$", but that only returns the curl output. I do see that you can use datafield without a map command to do multiple curl requests (| curl uri=... datafield=keyword), which would give my desired output, but that makes the URI look like uri=.../?keyword; I need it to look like uri=.../keyword instead. Any help/tips would be appreciated.
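A hedged sketch of the map route, keeping the curl syntax from the post: since map drops the outer fields, one workaround is to re-attach them inside the mapped search with an eval that reads the $...$ tokens (the URL is a placeholder; the escaped quotes matter because map substitutes the tokens as literal text):

    <base search>
    | table _time keyword hostname
    | map maxsearches=1000 search="| curl uri=\"https://example.com/api/$keyword$\"
        | eval keyword=\"$keyword$\", hostname=\"$hostname$\", orig_time=\"$_time$\""
    | table orig_time keyword hostname curl_message curl_status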
Hey, in a dashboard I need a panel that gives the user the option to download EVERY field of a specific index. Now, this index has over 100 fields. Can I use the events panel on the dashboard to show all the fields (admittedly a small view, given the volume of fields) so the user can export the results from that panel? Many thanks, Patrick
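A hedged alternative, assuming export to CSV is the end goal: an events panel exports raw events, whereas a statistics table built with table * exports every extracted field as its own column (the index name is a placeholder):

    index=your_index
    | fields *
    | table *

The panel itself will be wide and slow to render with 100+ columns, but the Export button on the panel hands back the full field set.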
Good afternoon. We're working with a customer that would like to ingest data from AINS FOIAXpress (https://www.ains.com/foiaxpress/). We could not locate a TA for this on Splunkbase, but we were wondering if anyone here knows of customers who have built this TA in the past. Any insight or direction would be greatly appreciated!
We have an outside scanning agency that is constantly running nmap-like scans of our perimeter, and it is generating a lot of log data on the perimeter Cisco firewalls. We know the IPs the scanning comes from; is there a way to tell the forwarders NOT to forward the firewall log data for those IPs? For example, if any TCP/IP log data is seen from 1.2.3.4, don't forward it, but treat data from any other IP address normally and forward it. Thanks for any insights on this. Our Splunk SMEs are looking at Cribl for this, but reading this thread makes me believe there are configuration settings that might address it. V/R Bob M.
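A minimal sketch of Splunk's built-in route-and-filter approach, assuming the events pass through a parsing tier (a heavy forwarder or the indexers; universal forwarders don't parse, so the filter lives at the first parsing layer). The sourcetype name is a placeholder; the regex keys on the scanner's source IP and routes matching events to the nullQueue, which discards them before indexing:

    # props.conf
    [cisco:asa]
    TRANSFORMS-drop_scanners = drop_scanner_ips

    # transforms.conf
    [drop_scanner_ips]
    REGEX = (?:^|[^\d.])1\.2\.3\.4(?:[^\d.]|$)
    DEST_KEY = queue
    FORMAT = nullQueue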
I have two logs. The first statement gets logged when a pod dies; the second gets logged when my app gets notified. Sometimes the pod dies and my app doesn't get notified, and I want to write an alert for exactly that case.

Log 1 (when a pod dies):

    index=log1 "Forced deletion of orphaned Pod"
    | rex "podnamespace/(?<machineName>(.*?))\s"

Log 2 (when my app gets notified):

    index=conversation "*Clearing DMC pod" sourcetype="cui-orchestration-log" podname=<podNameWhichDied>

I tried several options, but I am unable to refer to the field 'machineName' created by rex in Log 1 inside the Log 2 search, even though machineName holds the right pod name:

    index=log1 "Forced deletion of orphaned Pod"
    | rex "podnamespace/(?<machineName>(.*?))\s"
    | stats count as podsCrashedCount by machineName
    | appendcols [search index=log2 "App is deleting pod" podname=$machineName | stats dc(podname) as deletedInApp]
    | where podsCrashedCount!=deletedInApp
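A hedged sketch of a different shape, reusing the index names and strings from the attempted search: a subsearch can't see per-row fields from the outer search, so instead pull both logs in one search, normalize the pod name onto a single field, and count each side per pod; pods with a crash but no notification fall out of the where clause:

    (index=log1 "Forced deletion of orphaned Pod") OR (index=log2 "App is deleting pod")
    | rex "podnamespace/(?<machineName>.*?)\s"
    | eval machineName = coalesce(machineName, podname)
    | stats sum(eval(if(searchmatch("Forced deletion of orphaned Pod"), 1, 0))) as crashed,
            sum(eval(if(searchmatch("App is deleting pod"), 1, 0))) as notified
            by machineName
    | where crashed > 0 AND notified = 0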
I want an alert to be triggered and sent by email if a particular panel in the dashboard has count=0. How should we achieve that? Please help.
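A minimal sketch, assuming the panel's underlying search produces a count: save that search separately as an alert, reduce it to the zero check, and set the trigger condition to "number of results greater than 0" so the alert fires exactly when the panel would show zero:

    <panel base search>
    | stats count
    | where count = 0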
Hello guys, I made a dashboard where users see their tasks to do. I would like to add a button or checkbox on each event so the user can check it off when the task is done, with the result kept saved. How can I do that? Regards.
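A hedged sketch of the usual pattern, with all names hypothetical: persist the "done" state in a lookup (a CSV or KV Store collection) via a drilldown-triggered search, then join that lookup back into the panel query so checked tasks stay marked across sessions. The drilldown search might look like:

    | makeresults
    | eval task_id = "$row.task_id$", done = "true", marked_at = now()
    | outputlookup append=true tasks_done.csv

The panel search would then end with something like | lookup tasks_done.csv task_id OUTPUT done to display the saved state. A true inline checkbox widget needs custom JavaScript in Simple XML, but the lookup is what makes the state survive.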
Hello All, I have JSON data that is sometimes nested and sometimes not. Whenever it is a nested array, I get a {} in the field name; when it's not, there is no {}. I'm trying to alias both to a common field name, i.e. write a single alias that maps the field whether or not the {} is present. Any leads on how I can do this? (Either remove the {} before the fields are extracted at search time, or alias to a new name in props.conf.) E.g., items{}.description one time and items.description the other time, renamed to items.description at search time without using the rename command, OR remove {} before fields are extracted on the search head. P.S. I don't want to do index-time field extraction. #fieldaliasing #json
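A minimal sketch of the search-time alias, with the sourcetype name as a placeholder: in props.conf on the search head, alias only the array-style name, since flat events already carry items.description; quoting both names lets the {} and the dots pass through intact (quoted field names in FIELDALIAS need a reasonably recent Splunk version):

    [your:sourcetype]
    FIELDALIAS-flatten_items = "items{}.description" AS "items.description"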