Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi everyone, My goal here is as follows: based on which option the user chooses from the first dropdown menu, a corresponding second dropdown menu should appear, and the panels and graphics below will have queries based on the value passed from that second dropdown. In other words, if I choose TI40 from the first dropdown menu, then only the dropdown associated with it - the one that has depends="$show_ti40$" - should appear, and the value chosen from it will drive the panels and graphics. But this isn't really working: when I choose TI40, the dropdown menu for it does not appear. It only works as expected for ZV60. What am I doing wrong? Or is there a better way to do it?
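A common pattern for this kind of cascading input (a sketch only - the token and input names below are guesses, since the original XML isn't shown) is to set and unset the depends tokens explicitly in a <change> block on the first dropdown, so that switching machines both reveals the matching second dropdown and hides the other one:

```xml
<input type="dropdown" token="machine">
  <label>Machine</label>
  <choice value="TI40">TI40</choice>
  <choice value="ZV60">ZV60</choice>
  <change>
    <!-- Explicitly set the token for the chosen machine and unset the other,
         so only the matching second dropdown is displayed. -->
    <condition value="TI40">
      <set token="show_ti40">true</set>
      <unset token="show_zv60"></unset>
    </condition>
    <condition value="ZV60">
      <set token="show_zv60">true</set>
      <unset token="show_ti40"></unset>
    </condition>
  </change>
</input>
<input type="dropdown" token="ti40_choice" depends="$show_ti40$">
  <label>TI40 options</label>
  <!-- choices here drive $ti40_choice$ in the panels below -->
</input>
```

A frequent cause of "only one branch works" is that a token set earlier is never unset, so the depends condition for the other dropdown stays satisfied (or stays unsatisfied) after switching selections.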
Afternoon Splunk Community, I'm currently in charge of helping sunset an old subsidiary of ours and putting their infrastructure out to pasture. As part of the decommission process I need to back up indexed data from their Splunk instance for long-term retention, so that we can restore the data should we ever need to view it for any reason. The Splunk indexing tier consists of six total indexers across two sites: three indexers in site 1 and the other three in site 2. The cluster master has a replication factor of "origin:2 total:3", which means that each site should contain two copies of each bucket that originated within the site, and a single copy of each bucket that did not originate within the site. In an ideal world I think this would mean that I should only need to back up the data volume of a single indexer in the cluster to have a copy of all indexed data. However, I've read that when taking a backup of a clustered indexer there is no guarantee that a single indexer contains all data within the cluster, even with replication enabled. I have several questions:
1. What is the best practice for archiving indexed data for potential restoration into another Splunk instance at a later date? Each of my indexers' "/var/lib/splunk" directories symlinks to a unique AWS EBS volume. Should I retain the AWS EBS volumes from all six of my indexers, from one indexer in each of my two sites, or can I retain a single EBS volume from one indexer and discard the rest? My thought process behind this method is that if I ever needed to restore the archived data for searching, I could simply set up a new Splunk indexer, attach the archived EBS volume, and point a search head at the new indexer in order to search the data.
2. Is it a better approach to simply stop splunkd on my indexer, create an archive of /var/lib/splunk using an archive utility, and then restore that archive to /var/lib/splunk on a new indexer at a later date if we need the data for any reason? As a last resort, I could run a search against each of my indexes and export the data in a human-readable format (.csv, XLS, etc.).
3. I've already checked all of my knowledge artifacts (apps, configuration files, etc.) into Git for record keeping. However, are there any other portions of my Splunk deployment that I should consider backing up? For example, should I retain a copy of my cluster master's data volume for any reason?
4. If I do need to restore archived data, can I restore it to a standalone indexer, or do I need to restore it to an indexer cluster? This one seems rhetorical to me, but I figured I should ask nonetheless.
Resources: Splunk - Back up Indexed Data; Splunk Community - How to backup/restore Splunk db to new system
Hello! I've had a few successful installs of ES, but this newest install only has one domain under "Security Domains" (Identity). Security Intelligence is missing menus as well. Most of the Data Models appear to be populating correctly and are accelerated. I'm getting good menus in InfoSec. I've tried to reinstall it twice. I've also looked for any TA nav menus that may be overriding, and I don't see anything. Any thoughts on where to continue troubleshooting?
My Splunk event is: firstName="Tom" lastName="Jerry" middleName="TJ" dob="1/1/2023" dept="mice" status="202" dept="house" In the above event, the field dept is repeated (with values mice and house). I would like to find all the field names which are duplicated within a single event. I tried dedup and other approaches suggested by Google, but was not able to get a result. Can you please help me with this? Thanks in advance.
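One way to surface repeated field names is to re-extract every key from _raw with max_match and count the occurrences. A sketch against a copy of the sample event (the rex pattern assumes simple key="value" pairs, as in the example):

```
| makeresults
| eval _raw="firstName=\"Tom\" lastName=\"Jerry\" middleName=\"TJ\" dob=\"1/1/2023\" dept=\"mice\" status=\"202\" dept=\"house\""
| rex max_match=0 field=_raw "(?<keyname>\w+)=\""
| mvexpand keyname
| stats count by keyname
| where count > 1
```

This should leave only dept, with a count of 2. When running over multiple real events, add an event identifier (e.g. _time plus _cd) to the by clause so duplicates are counted per event rather than across events.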
I am attempting to set up encryption between a Splunk Universal Forwarder (version 9.0.3) and a Splunk Heavy Forwarder (version 9.0.3). I followed the instructions found on this Splunk Community post exactly (with the exception of changing the IPs, of course, to reflect my environment), but am getting the errors below in the splunkd.log file on the Universal Forwarder (the Heavy Forwarder is listening for connections on 9997). Splunk Community post: https://community.splunk.com/t5/Security/How-do-I-set-up-SSL-forwarding-with-new-self-signed-certificates/td-p/57046 Please advise.
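For reference, the forwarder side of that setup boils down to an outputs.conf along these lines (a sketch - the server IP, paths, and password are placeholders, and some attribute names changed across versions, e.g. sslCertPath became clientCert in 9.x, so check the outputs.conf spec for your version). A CA file that doesn't match the one that signed the Heavy Forwarder's server certificate, or a wrong sslPassword, is a frequent cause of handshake errors in splunkd.log:

```
# outputs.conf on the Universal Forwarder (values are placeholders)
[tcpout]
defaultGroup = ssl_group

[tcpout:ssl_group]
server = 10.1.2.3:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/myClientCert.pem
sslPassword = <private key password>
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/myCACert.pem
sslVerifyServerCert = false
```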
Hi All, I started working in Splunk just a few months ago and am new to it. Can anyone help me with some ideas, please? I have a lookup file (containing around 8500 rows; columns: Host, status, category).
1. To find the non-reporting hosts present in my lookup file (the query below takes a long time to execute):
| inputlookup lookupfile | where category="categoryname" AND status="Active" | fields Host | search NOT [tstats count where index="indexname" by Host | fields - count] | stats count
2. Among the non-reporting hosts, I have to find the list of hosts that stopped sending logs in the past 24 hours. I am executing the query below with a 24-hour time range, but I am getting incorrect results.
| inputlookup lookupfile | where category="categoryname" AND status="Active" | fields Host | search NOT [tstats count where index="indexname" by Host | fields - count] | search [tstats count where (index="indexname" earliest=-6mon@mon latest=now) by Host | fields - count] | stats count
Please help me correct my queries.
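One way to answer both questions in a single pass (a sketch - index and lookup names are taken from the question) is to pull each host's last-seen time with one tstats call and join it to the lookup, instead of filtering with NOT subsearches:

```
| inputlookup lookupfile where category="categoryname" AND status="Active"
| fields Host
| join type=left Host
    [| tstats latest(_time) as lastSeen where index="indexname" earliest=-6mon@mon latest=now by Host]
| eval state=case(isnull(lastSeen), "never reported",
                  lastSeen < relative_time(now(), "-24h"), "stopped in last 24h+",
                  true(), "reporting")
| stats count by state
```

Hosts with no events in 6 months get a null lastSeen ("never reported"); hosts whose latest event is older than 24 hours fall into the "stopped" bucket. At 8500 rows the join stays well under the subsearch limits.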
Is it possible to find the storage (logs) used by applications/services in a particular index for a particular time range, or something similar? For example: Query: ((index="digconn-timeser-prod") (kubernetes.container_name="*conn-server*")) | ((index="digconn-timeser-qa") (kubernetes.container_name="*conn-server*")) Desired result: conn-server-latency 1105 GB last 5 days conn-server-lag 1505 GB last 5 days This would help identify logging issues on the apps/services side over a period of time.
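Splunk doesn't store a per-event size, but len(_raw) gives a reasonable approximation of raw log volume per service. A sketch using the index and field names from the question:

```
(index="digconn-timeser-prod" OR index="digconn-timeser-qa") kubernetes.container_name="*conn-server*" earliest=-5d
| eval bytes=len(_raw)
| stats sum(bytes) as total_bytes by index, kubernetes.container_name
| eval total_GB=round(total_bytes/1024/1024/1024, 2)
| fields index, kubernetes.container_name, total_GB
```

For licensing-oriented numbers, the same breakdown is also available more cheaply from index=_internal source=*license_usage.log, split by source/sourcetype/host rather than by an indexed field.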
Hello, I would like to request guidance on how to create a correlation search based on data provided by SANS Threat Intelligence from https://isc.sans.edu/block.txt The malicious IPs in "block.txt" are updated regularly. How can my correlation search track that change in (near) real time? What queries should I use? Note: the SANS threat intel feed has already been enabled.
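Since the feed is enabled, ES's threat intelligence framework should already be re-downloading block.txt on its own schedule and loading it into the threat collections, so the correlation search only needs to match events against the collection rather than fetch anything itself. A sketch (the index, sourcetype, and field names here are placeholders for your network data; ip_intel is the ES IP threat lookup):

```
index=network_traffic sourcetype=firewall
| lookup ip_intel ip as dest_ip OUTPUT threat_key
| where isnotnull(threat_key)
| stats count by src_ip, dest_ip, threat_key
```

Scheduled at a short interval (e.g. every 5-15 minutes over the last interval's events), this gets close to real time without the cost of a true real-time search; the feed's own update cadence is handled by the intel download job, not the search.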
Hello, I am trying to obtain IPs from hostnames. I am using inputlookup to get the list of hostnames from a CSV file. The problem is that "lookup dnslookup" is only displaying IPs for certain hosts (there are no missing fields). My queries appear correct, as the table command provides all the information I need, except that I get a lot of nulls instead of IPs. What am I doing wrong? xxxxx xxxxxxx | lookup dnslookup clienthost as Hostname OUTPUT clientip as ComputerIP | table Hostname ComputerIP
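Two things worth checking: stray whitespace in the CSV values, and short names that only resolve as fully qualified names (or vice versa). A sketch that trims the hostname and retries with an appended domain - the domain suffix here is an assumption, substitute your own:

```
| inputlookup hostnames.csv
| eval Hostname=trim(Hostname)
| lookup dnslookup clienthost as Hostname OUTPUT clientip as ComputerIP
| eval fqdn=if(isnull(ComputerIP), Hostname.".example.com", null())
| lookup dnslookup clienthost as fqdn OUTPUT clientip as ComputerIP2
| eval ComputerIP=coalesce(ComputerIP, ComputerIP2)
| table Hostname ComputerIP
```

If the retry doesn't help, test one failing name with nslookup from the search head itself - dnslookup resolves using the search head's DNS configuration, so names that only resolve on some internal DNS server will come back null.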
"total size of all databases that reside on this volume to the maximum size specified, in MB" >> What is meant by all databases? Does it sum up the file size of all tsidx files? Or does it include everything in an index/db folder, which would include bloomfilter, .lex, and .data files and the rawdata folder? https://docs.splunk.com/Documentation/Splunk/latest/admin/Indexesconf maxVolumeDataSizeMB = <positive integer> * If set, this setting limits the total size of all databases that reside on this volume to the maximum size specified, in MB. Note that it will act only on those indexes which reference this volume, not on the total size of the path set in the 'path' setting of this volume. * If the size is exceeded, splunkd removes buckets with the oldest value of latest time (for a given bucket) across all indexes in the volume, until the volume is below the maximum size. This is the trim operation. This can cause buckets to be chilled [moved to cold] directly from a hot DB, if those buckets happen to have the least value of latest-time (LT) across all indexes in the volume.
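As I understand it, "databases" here means whole buckets: the accounting covers everything under each bucket directory of the indexes that reference the volume - the rawdata journal, tsidx files, bloomfilter, and .lex/.data files alike - which is consistent with trimming working by removing entire buckets. For context, a typical pairing of the setting with an index looks like this (paths and sizes are example values only):

```
# indexes.conf (example values)
[volume:primary]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 500000

[main]
homePath = volume:primary/defaultdb/db
coldPath = volume:primary/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
```

Note that thawedPath cannot reference a volume, which is one reason the docs stress that the limit applies to the indexes referencing the volume, not to everything under the volume's filesystem path.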
The replace() function produces an empty string if the string to be replaced starts with a "+" character. This search with replace() works: | makeresults | eval message = "This is mark1 replacement mark2", ph2="different" | rex field=message "mark1 (?<ph1>[^\s]*) mark2" | eval message2 = replace(message, ph1, ph2) | table message, message2, ph1, ph2 This one produces an empty message2: | makeresults | eval message = "This is mark1 +replacement mark2", ph2="different" | rex field=message "mark1 (?<ph1>[^\s]*) mark2" | eval message2 = replace(message, ph1, ph2) | table message, message2, ph1, ph2
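The cause is that the second argument to replace() is interpreted as a regular expression, and "+replacement" is an invalid pattern (a leading + has nothing to repeat), so the eval yields null. Rather than escaping every regex metacharacter in ph1 (doable, but backslash-doubling in eval strings is easy to get wrong), one workaround is to let the same rex capture the text around the placeholder and splice the new value in with string concatenation - a sketch:

```
| makeresults
| eval message = "This is mark1 +replacement mark2", ph2="different"
| rex field=message "(?<pre>.*mark1 )(?<ph1>[^\s]*)(?<post> mark2.*)"
| eval message2 = pre . ph2 . post
| table message, message2, ph1, ph2
```

Here message2 comes out as "This is mark1 different mark2" regardless of what characters ph1 contains, since no regex is applied to the captured value.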
Good day, I have a use case, explained below. Index A has Reporting_Host (a mix of IP addresses, hostnames, and FQDNs) and index CMDB has data from the CMDB (so it contains hostname, FQDN, IP address, server owner information, etc.). My requirement is to map the Reporting_Host data from index A against the CMDB data and display server owner information along with hostname, IP, etc. The issue here is that index CMDB has the data in multiple fields: Hostname (contains the server name), CI_Name (contains the FQDN), and IP_address (the IP address). How do I match Reporting_Host field values against these 3 fields in CMDB and display the output? I tried using join, but I am only able to compare against one field in the CMDB data, not all 3. Sample query below: index=A sourcetype=syslog_stats | stats min(_time) as old, max(_time) as new by Reporting_Host | stats min(old) as oldest, max(new) as newest by Reporting_Host | eval diff = tostring((newest - oldest), "duration") | where newest < now() - (86400 * 2) | eval stopped= (now()-newest) | eval stopped_for = round(stopped/86400, 0) | convert ctime(oldest) | convert ctime(newest) | join Reporting_Host [ search index=CMDB | rename HostName as Reporting_Host ] | fields oldest newest diff stopped_for Reporting_Host Server_Owner I created a field alias for CI_Name, IP_address, and Hostname named HostName, but it's not working.
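One way to match against all three CMDB fields at once (a sketch, using the field names from the question) is to fan each CMDB record out to one row per identifier before the join, so that whichever form Reporting_Host takes - short name, FQDN, or IP - can match:

```
index=A sourcetype=syslog_stats
| stats min(_time) as oldest, max(_time) as newest by Reporting_Host
| where newest < now() - (86400 * 2)
| join type=left Reporting_Host
    [ search index=CMDB
      | eval Reporting_Host=mvappend(Hostname, CI_Name, IP_address)
      | mvexpand Reporting_Host
      | fields Reporting_Host Server_Owner ]
| fields Reporting_Host oldest newest Server_Owner
```

mvappend collects the three identifiers into one multivalue field and mvexpand turns each CMDB record into three joinable rows. Be aware of the join subsearch row limit (50,000 by default); for a large CMDB, exporting it to a lookup and using the lookup command scales better.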
Hello, I am currently trying to figure out how to combine the three searches below, with different conditions, into one query/alert: if the abc reminder count is <1, trigger an alert; if the xyz reminder count is <5, trigger an alert; if the 123 reminder count is <22, trigger an alert. Here is my query so far: index="xyz" sourcetype=xyz ("abc reminder") OR ("xyz reminder") OR ("123 reminder") earliest=-24h | eval JobName=case( searchmatch("abc reminder"), "ABC reminder", searchmatch("xyz reminder"), "XYZ reminder", searchmatch("123 reminder"), "123 reminder") | stats count as ABCJobCount by JobName | where ABCJobCount<1 | stats count as XYZJobCount by JobName | where XYZJobCount<1 | stats count as 123JobCount by JobName | where 123JobCount<1 | eval NetcoolTitle = JobName + " did not complete in last 24 hours"
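One way to fold the three thresholds into a single alert (a sketch, reusing the names from the question) is to compute one count per job and compare it against a per-job threshold. The subtlety is that a job with zero events produces no row at all from stats, so the sketch seeds one dummy row per job and subtracts it back out:

```
index="xyz" sourcetype=xyz ("abc reminder" OR "xyz reminder" OR "123 reminder") earliest=-24h
| eval JobName=case(searchmatch("abc reminder"), "ABC reminder",
                    searchmatch("xyz reminder"), "XYZ reminder",
                    searchmatch("123 reminder"), "123 reminder")
| append
    [| makeresults count=3
     | streamstats count as n
     | eval JobName=case(n=1, "ABC reminder", n=2, "XYZ reminder", n=3, "123 reminder")
     | fields JobName]
| stats count by JobName
| eval count = count - 1
| eval threshold=case(JobName="ABC reminder", 1, JobName="XYZ reminder", 5, JobName="123 reminder", 22)
| where count < threshold
| eval NetcoolTitle = JobName . " did not complete in last 24 hours"
```

With the alert condition set to "number of results > 0", any job below its threshold - including one that never ran - produces a row and fires the alert.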
I have a Splunk query, below, which contains a lot of backslashes: index="ABC" os="Win" FileName="*\\Programs\\Startup\\*" | rex field=FileName "Users\\\(?<username>[^\\\]+)." Now, I know that when I try to add this to savedsearches.conf it won't work as expected, since Splunk breaks the line when it sees a backslash. Any suggestions on how to add it to savedsearches.conf?
Hi Experts, I'm trying to validate whether a user is a new user or an existing user using a summary index. The userLogin field is a combination of username, userId, and a uniqueId associated with each of the user's logins. I just want the username and userId from the userLogin field, to maintain a single record for each user and to find the count of userLogin within a specific dateTime interval (i.e. the past week). Here's the query I've written, but it isn't working as expected. Any suggestions would be highly welcome. Thanks in advance. index=user_login_details | rex field=userLogin "(?<userName>\s+\d{5}).*" | eval time=strftime(_time,"%Y-%m-%dT%H:%M:%S") | stats count, earliest(time) as FirstTime by userName | join type=left userName [search index=user_login_details sourcetype=existing_login_users latest=-7d | eval Time=strptime(FirstTime ,"%Y-%m-%dT%H:%M:%S") | stats count as ExistingUser by Time userName ] | fillnull ExistingUser value=0 | search ExistingUser=0 | fields - ExistingUser | collect index=user_login_details sourcetype=existing_login_users
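An alternative shape that avoids the self-join (a sketch only - the rex pattern is a placeholder since the userLogin format isn't shown, and known_users.csv is a hypothetical tracker lookup you would create) is to keep a lookup of users already seen and compare each week's logins against it:

```
index=user_login_details earliest=-7d
| rex field=userLogin "^(?<userName>[^_]+)_(?<userId>[^_]+)_"
| stats count as logins, min(_time) as FirstTime by userName, userId
| lookup known_users.csv userName OUTPUT userName as seenBefore
| eval userType=if(isnull(seenBefore), "new", "existing")
| table userName, userId, logins, FirstTime, userType
```

A second scheduled search (or an extra pipeline of `| where userType="new" | fields userName userId | outputlookup append=t known_users.csv`) then adds the new users to the tracker, so each user is classified as new exactly once.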
How do I compare today's 9a-10a stats with yesterday's 9a-10a, side by side? Is it possible in one query? index=foo <query> | stats avg(responsetime) for today and yesterday, count by uri
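One way (a sketch - index, field, and series names are taken from the question) is to run the two hour-long windows separately and chart them against each other, using snap-to-day time modifiers to pin each window to 9-10am:

```
index=foo earliest=-1d@d+9h latest=-1d@d+10h
| stats avg(responsetime) as avg_rt by uri
| eval period="yesterday"
| append
    [ search index=foo earliest=@d+9h latest=@d+10h
      | stats avg(responsetime) as avg_rt by uri
      | eval period="today" ]
| chart values(avg_rt) over uri by period
```

`@d+9h` means "today at 09:00" and `-1d@d+9h` "yesterday at 09:00". For rolling day-over-day comparisons rather than a fixed hour, the timewrap command is also worth a look.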
Hi Splunkers, I have my multiselect field CSS code as below. The 2nd part works: 350px applies to the entire area width. But the 1st part doesn't seem to work as expected; the input area width is not 300px or 340px. #KeyWordID div[data-component="splunk-core:/splunkjs/mvc/components/MultiSelect"] { width: 300px; !important; max-width: 340px; !important; } #KeyWordID { width: 350px; } Thanks in advance, Kevin
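The likely culprit is the semicolon placed before !important: in `width: 300px; !important;` the declaration ends at the first semicolon, leaving `!important;` as an invalid fragment that the browser discards along with the following declaration's parse state. The corrected form keeps !important inside each declaration (selector reproduced from the question):

```css
#KeyWordID div[data-component="splunk-core:/splunkjs/mvc/components/MultiSelect"] {
  /* '!important' goes before the semicolon, inside the declaration */
  width: 300px !important;
  max-width: 340px !important;
}

#KeyWordID {
  width: 350px;
}
```

If the width still doesn't take, inspect the rendered element in the browser dev tools - the multiselect widget nests several divs, and the width sometimes has to be applied to an inner element rather than the data-component wrapper.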
I see an error while trying to upgrade the Events Service via the Enterprise Console. The Enterprise Console itself was upgraded successfully.
I have a dashboard with a statistics table, and I want to add color to the font alone in the table. There is no condition to be applied; I have to color the font for all rows in the table. How do I do it?
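Since the coloring is unconditional, one way is a CSS override scoped to the table's panel, applied via the dashboard's stylesheet or a hidden HTML panel. A sketch - the panel id and color value below are placeholders to adjust for your dashboard:

```css
/* Color every body cell of the table inside the panel with id="myTablePanel" */
#myTablePanel table tbody td {
  color: #d93f3c !important;
}
```

In Simple XML, give the panel `id="myTablePanel"` and reference the CSS file with `stylesheet="my_styles.css"` on the `<dashboard>`/`<form>` element (the file lives in the app's appserver/static directory).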
We are trying to find out why a server error appears on the search head, even though we don't see any errors in the logs and found no high CPU usage. We are running v7.3.5.