All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, is it possible to calculate the storage that part of a log is taking? I have a log file that contains a message whose storage I want to calculate, and after getting the numbers, is it possible to exclude that message from the index? Thanks

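A minimal sketch of the measuring side, assuming the events are already indexed and the message can be isolated with a search term (the index, sourcetype, and search string below are placeholders):

index=my_index sourcetype=my_sourcetype "text that identifies the message"
| eval raw_bytes=len(_raw)
| stats sum(raw_bytes) as total_bytes, count as events
| eval total_MB=round(total_bytes/1024/1024, 2)

len(_raw) approximates the ingested (license) volume rather than the compressed on-disk size. Dropping the message before it is indexed would typically be a separate props.conf/transforms.conf rule that sends matching events to nullQueue on the indexer or a heavy forwarder.
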
Hi, I want to fetch HTTP error 500 from the logs using the search bar. I have set the index, sourcetype, and source in the query. What should I add to it to retrieve only the logs with an HTTP 500 response? I have tried "status=500", which is not working.

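status=500 only works if a field named status is actually extracted from these events, so the field list is worth checking first. A sketch under the assumption that these are web-access-style logs where the status code sits right after the HTTP version (index, sourcetype, and source are placeholders; the rex pattern is a guess to adapt to your format):

index=my_index sourcetype=my_sourcetype source=my_source "500"
| rex field=_raw "HTTP/\d\.\d\"\s+(?<status>\d{3})"
| search status=500

If the events are JSON or key=value, the code is usually auto-extracted under some other name (status_code, httpStatus, and so on), and filtering on that field directly is simpler than adding a rex.
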
Hello Splunkers, I used a search to extract the MITRE fields, and the extraction returns the rule name and the technique_id. The problem is that each rule has both a technique_id and a sub-technique_id; after matching the sub-technique_id to its parent technique_id, the same rule name appears twice with the same technique_id. I want to remove the duplicate rule names, but if I use dedup on the rule name alone, rules that legitimately have other technique_ids get removed as well. I have attached a screenshot for reference. The query I used to get the results is:

| rest /services/configs/conf-analyticstories
| where annotations!=""
| spath input=annotations path=mitre_attack{} output=mitre_attack
| eval rule_name=ltrim(title,"savedsearch://")
| fields rule_name, mitre_attack
| join rule_name
    [| rest /services/configs/conf-analyticstories
     | where searches!=""
     | eval rule_name=searches
     | table title, rule_name
     | eval rule_name=trim(rule_name,"[")
     | eval rule_name=trim(rule_name,"]")
     | eval rule_name=split(rule_name,",")
     | mvexpand rule_name
     | eval rule_name=trim(rule_name," ")
     | eval rule_name=trim(rule_name,"\"")]
| append
    [| rest services/configs/conf-savedsearches
     | eval rule_name=title
     | search action.correlationsearch.annotations="*"
     | spath input=action.correlationsearch.annotations path=mitre_attack{} output=mitre_attack
     | fields rule_name, mitre_attack]
| eval technique_name=if(match(mitre_attack,"^T\d\d\d"), null(), mitre_attack)
| lookup mitre_tt_lookup technique_name OUTPUT technique_id as tmp_id0
| eval tmp_id1=if(match(mitre_attack,"^T\d\d\d"), mitre_attack, null())
| eval technique_id=coalesce(tmp_id0, tmp_id1)
| where NOT isnull(technique_id)
| table rule_name, technique_id
| inputlookup mitre_user_rule_technique_lookup append=true
| inputlookup mitre_app_rule_technique_lookup append=true
| makemv tokenizer="([^\n\s]+)" technique_id
| mvexpand technique_id
| dedup rule_name, technique_id
| join rule_name
    [| rest services/configs/conf-savedsearches
     | eval rule_name=title
     | eval stage=if(disabled == 1, "Disabled", "Enabled")
     | table rule_name, stage]
| eval subtechnique_id=if(match(technique_id,"\."), technique_id, null())
| eval technique_id=if(match(technique_id,"\."), replace(technique_id,"\.\d+",""), technique_id)
| search stage=Enabled
| table rule_name, technique_id

Thanks in advance.

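The duplicate rows appear because the sub-technique is collapsed into its parent technique only after the dedup has already run. A small sketch of the idea, reordering the tail of the search above so the suffix is stripped first and the dedup then runs on both fields (field names taken from the search):

| eval subtechnique_id=if(match(technique_id,"\."), technique_id, null())
| eval technique_id=replace(technique_id, "\.\d+$", "")
| dedup rule_name, technique_id
| search stage=Enabled
| table rule_name, technique_id

That way a rule keeps one row per distinct parent technique, while rules with several different technique_ids are not collapsed into one.
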
Hello, we are seeing the issue "O365 Splunk add-on data comes in after a delay of 1 day".
Which version of Splunk are you currently using? Answer: 8.2
Which app/add-on are you using to send the O365 data? Answer: Microsoft Graph Security Add-on for Splunk (TA-microsoft-graph-security-add-on-for-splunk) 1.2.1
Thanks, Lalit

Is there a reason the minimum number of nodes for indexer clustering needs to be 3? If three units are needed because of the role parity plays in RAID theory, I don't think that role is necessary here because the cluster master is already handling it. Therefore, I think 2 nodes should also be able to form a cluster, but I wonder why 3 nodes always come up as the default in most examples. Is there any other reason?

index=my_index [search is here] | outputcsv mycsv.csv
After saving the search results into the mycsv.csv file, can I access the file via the search head?
| inputlookup mycsv.csv -- is not working

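outputcsv writes the file to $SPLUNK_HOME/var/run/splunk/csv on the search head rather than to a lookups directory, so inputlookup cannot see it; the matching read command is inputcsv. A sketch using the file name from the question:

| inputcsv mycsv.csv

If the goal is a file that behaves like a lookup, writing it with outputlookup instead makes it visible to both inputlookup and the lookup command:

index=my_index [search is here] | outputlookup mycsv.csv
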
I want any log file (local, or remote via a Universal Forwarder) with the filename "xyz.log" to have a sourcetype of XYZ and get indexed in my xyz index (not the main index). What do I need to put in props.conf? Do I also need to configure transforms.conf? I'm using Splunk Enterprise v8 on Windows.
Current props.conf:
[source::...\\xyz.log]
sourcetype = XYZ

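One common way to sketch this (not verified against your deployment): keep the sourcetype override in props.conf and add an index-routing transform, both on the instance that parses the data (the indexer or a heavy forwarder); a Universal Forwarder can instead set the index directly in its inputs.conf monitor stanza. Stanza and index names come from the question; the transform name set_xyz_index and the monitor path are made up for the example.

# props.conf (indexer or heavy forwarder)
[source::...\\xyz.log]
sourcetype = XYZ
TRANSFORMS-route_xyz = set_xyz_index

# transforms.conf (same instance)
[set_xyz_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = xyz

# inputs.conf (simpler alternative, on the Universal Forwarder itself)
[monitor://C:\logs\xyz.log]
sourcetype = XYZ
index = xyz
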
I'm creating a query where I want to get an ID from a log on one side (first search), and in the second search I only want to bring back the results that have the IDs from the first search. Then I want to calculate the difference between them. Something like:

index=anything source=anything route1 Payload OK
| rex field=_raw "\:[0-9]{4} \- (?<IDROUTE1>[0-9a-f{8}) \-"
| stats count(_raw) as CROUTE1
| table IDROUTE1
| appendcols
    [search index=anything source=anything route2 Payload OK
     | rex field=_raw "\:[0-9]{4} \- (<IDROUTE2>[0-9a-f{8}) \-"
     | stats count(_raw) as CROUTE2
     | table IDROUTE2]
| where IDROUTE1=IDROUTE2
| eval TOTAL=CROUTE1-CROUTE2
| table TOTAL

What is not working, I guess, is counting the events using where. The searches, when run separately, give me the correct results. The Events tab shows the correct number of events, but Statistics shows 0.

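Two things stand out in the pasted search: after stats count(_raw) followed by table, the ID fields no longer exist, so "where IDROUTE1=IDROUTE2" has nothing to compare (which is why Statistics shows 0), and the second rex pattern is missing the ? and the closing ]. A sketch of doing the comparison in a single pass (index, source, and search terms taken from the question; the corrected rex is an assumption about the intended pattern):

index=anything source=anything ("route1" OR "route2") "Payload OK"
| rex field=_raw ":[0-9]{4} \- (?<ID>[0-9a-f]{8}) \-"
| eval route=if(searchmatch("route1"), "route1", "route2")
| chart count over ID by route
| where route1 > 0 AND route2 > 0
| eval TOTAL=route1-route2
| table ID, route1, route2, TOTAL
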
Hey Splunk developers,

Quick question for you: would it be considered against best practice, or an anti-pattern, to add your own code to the props.conf / transforms.conf of a Splunk-supported technical add-on in order to add support for your own custom sourcetypes? Specifically, the Splunk Add-on for Microsoft Cloud Services.

My use case is this: I'm using a "custom" event delivery mechanism to send Azure Diagnostic Logs generated by the resources in my Azure tenant to Splunk over the HTTP Event Collector (HEC). The mechanism consists of an Azure Event Hub that receives diagnostic events from my resources in Azure. Downstream, an Azure Function App monitors the upstream event hub namespace for incoming events, parses those events, constructs a sourcetype based on the data in the parsed event, and then wraps the event payload in an HTTP request and sends it to Splunk. The code for the Azure Function App can be found here.

The general sourcetyping logic for the function application is as follows. Functions that collect diagnostic log data attempt to construct a sourcetype based on the resourceId of the event; the logic lives in the getSourceType function in the ./helpers/splunk.js file. The steps are:
- A regular expression extracts two groups after the text /PROVIDERS (example: /PROVIDERS/MICROSOFT.RESOURCES/DEPLOYMENTS/).
- Periods (.) and forward slashes (/) are replaced with colons (:).
- The event category is appended.
For example, an event with a resourceId of /SUBSCRIPTIONS/subscription ID/RESOURCEGROUPS/group/PROVIDERS/MICROSOFT.RESOURCES/DEPLOYMENTS/FAILURE-ANOMALIES-ALERT-RULE-DEPLOYMENT-12345678 will have a sourcetype of azure:resources:deployments:administrative. If a sourcetype cannot be constructed from the event, the default sourcetype entered at setup is used.

Here's my dilemma: the team that administers our corporate Splunk instance would like to use as many TAs as possible to ensure CIM compliance of onboarded data with as little work as possible, which is completely understandable. For certain event sources I've therefore hard-coded the sourcetype logic in my function app to match several of the pre-defined sourcetypes found in the TA linked above ("azure:monitor:aad" and "mscs:azure:security:alert", for example). However, we're now at the point where I'm trying to onboard data that does not have a pre-defined sourcetype within the TAs we're using; as an example, Azure Front Door firewall logs will be sourcetyped as "azure:network:frontdoors:frontdoorwebapplicationfirewalllog". To work around this, I've suggested to our content development team that we start adding logic to the props.conf and transforms.conf of the TA listed above to match the sourcetypes my function app sends to Splunk, and I've essentially been told no, as this isn't best practice. Is my suggestion truly against best practice, or an anti-pattern? In my mind the only extra overhead is merging our custom props.conf / transforms.conf with the base TA whenever an update for the TA is released.

What's my best path forward here?

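One alternative that often comes up (a sketch, not an official recommendation): rather than editing the supported TA in place, put the custom sourcetypes in a small companion add-on deployed alongside it, so the base TA can be upgraded untouched. The add-on name TA-azure-custom is invented for the example, and KV_MODE = json assumes the HEC payload is JSON:

# TA-azure-custom/default/props.conf  (hypothetical companion add-on)
[azure:network:frontdoors:frontdoorwebapplicationfirewalllog]
KV_MODE = json
# CIM tags, field aliases, and eval extractions for the relevant data model
# would go here, exactly as they would otherwise have gone into the supported TA
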
Hi everyone, I recently took over a project from someone who is no longer with my employer. He made several scheduled searches that write to an index, and it was working great. However, last month, out of nowhere, it just stopped working; supposedly no changes were made. The other searches are working, it's just this one. The search runs just fine and gets the expected results, but the results aren't being written to the index. I actually found another post on here from someone who looked to have the same problem, but it wasn't successfully answered. Another post suggested that a forwarder might be a solution. Does that seem right? I'd rather avoid that solution as I don't want to install apps in this environment, but if necessary I will get permission. I just want to make sure it's a probable solution before doing so.

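Assuming the search writes to the index via summary indexing or an explicit collect command, two checks are often worth running before changing the architecture (the saved search and index names below are placeholders):

index=_internal sourcetype=scheduler savedsearch_name="My Scheduled Search"
| table _time, status, result_count, alert_actions, reason

This shows whether the scheduler actually ran the search, whether runs were skipped, and whether the summary/collect action fired. Running the saved search by hand with an explicit collect on the end (for example, ... | collect index=my_summary_index) then confirms the destination index still accepts writes; a forwarder shouldn't be required just to get scheduled results into a local index.
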
Hi everyone, my goal here is as follows: based on which option the user chooses from the first dropdown menu, a corresponding second dropdown menu should appear, and the panels and graphics below will run queries based on the value passed from that second dropdown. In other words, if I choose TI40 from the first dropdown menu, then only the dropdown associated with it (the one that has depends="$show_ti40$") should appear, and the value chosen from it should drive the panels and graphics. But this isn't really working. When I choose TI40, its dropdown menu does not appear; it only works as expected for ZV60. What am I doing wrong? Or is there a better way to do this?

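Without the XML it's hard to say exactly, but a common cause is that the $show_ti40$ token is never set (or the other token never unset) when the first dropdown changes. A minimal sketch of the usual Simple XML pattern, with token and value names guessed from the description:

<input type="dropdown" token="system" searchWhenChanged="true">
  <label>System</label>
  <choice value="TI40">TI40</choice>
  <choice value="ZV60">ZV60</choice>
  <change>
    <condition value="TI40">
      <set token="show_ti40">true</set>
      <unset token="show_zv60"></unset>
    </condition>
    <condition value="ZV60">
      <set token="show_zv60">true</set>
      <unset token="show_ti40"></unset>
    </condition>
  </change>
</input>
<input type="dropdown" token="ti40_value" depends="$show_ti40$">...</input>
<input type="dropdown" token="zv60_value" depends="$show_zv60$">...</input>
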
Afternoon Splunk Community,

I'm currently in charge of helping sunset an old subsidiary of ours and putting their infrastructure out to pasture. As part of the decommission process I need to back up indexed data from their Splunk instance for long-term retention, so the data can be restored should we ever need to view it again. The Splunk indexing tier consists of six indexers across two sites: three indexers in site 1 and three in site 2. The cluster master has a replication factor of "origin:2 total:3", which means each site should hold two copies of each bucket that originated within it and a single copy of each bucket that did not. In an ideal world I think this would mean I only need to back up the data volume of a single indexer in the cluster to have a copy of all indexed data. However, I've read that when backing up a clustered indexer there is no guarantee that a single indexer contains all data within the cluster, even with replication enabled. I have several questions:

1. What is the best practice for archiving indexed data for potential restoration into another Splunk instance at a later date? Each of my indexers' "/var/lib/splunk" directory symlinks to a unique AWS EBS volume. Should I retain the AWS EBS volume from one indexer in each of my two sites, or can I retain a single EBS volume from one indexer and discard the rest? My thought process is that if I ever needed to restore the archived data for searching, I could set up a new Splunk indexer, attach the archived EBS volume, and point a search head at the new indexer to search the data.
2. Is a better approach to simply stop splunkd on an indexer, create an archive of /var/lib/splunk with an archive utility, and restore that archive to /var/lib/splunk on a new indexer later if we ever need the data? As a last resort, I could run a search against each of my indexes and export the data in a human-readable format (.csv, .xls, etc.).
3. I've already checked all of my knowledge artifacts (apps, configuration files, etc.) into Git for record keeping. Are there any other parts of my Splunk deployment I should consider backing up? For example, should I retain a copy of my cluster master's data volume for any reason?
4. If I do need to restore archived data, can I restore it to a standalone indexer, or do I need to restore it to an indexer cluster? This one seems rhetorical to me, but I figured I should ask nonetheless.

Resources:
Splunk - Back up Indexed Data
Splunk Community - How to backup/restore Splunk db to new system

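One sanity check before deciding which volumes to keep: dbinspect, run from a search head attached to the cluster, lists the buckets each peer exposes to search, so you can see whether any single indexer really covers every bucket (a sketch; dbinspect reports searchable copies, so treat it as a check rather than proof):

| dbinspect index=*
| eventstats dc(bucketId) as total_buckets
| stats dc(bucketId) as buckets_visible, max(total_buckets) as total_buckets by splunk_server
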
Hello! I've had a few successful installs of ES, but this newest install only has one domain under "Security Domains" (Identity). Security Intelligence is missing menus as well. Most of the data models appear to be populating correctly and are accelerated. I'm getting good menus in InfoSec. I've tried reinstalling ES twice. I've also looked for any TA nav menus that may be overriding, and I don't see anything. Any thoughts as to where to continue troubleshooting?

My Splunk event is:
firstName="Tom" lastName="Jerry" middleName="TJ" dob="1/1/2023" dept="mice" status="202" dept="house"
In the above event, the field dept is repeated (with the values mice and house). I would like to find all the field names which are duplicated within a single event. I tried dedup and other approaches suggested by Google, but I'm not able to get the result. Can you please help me with this? Thanks in advance.

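Because Splunk keeps only one extraction (or one multivalue field) per field name, finding the duplicates usually means re-extracting the key names from _raw. A sketch of one approach, assuming the key="value" format shown above (your_base_search is a placeholder):

your_base_search
| rex max_match=0 field=_raw "(?<field_name>\w+)=\""
| eval total_keys=mvcount(field_name), unique_keys=mvcount(mvdedup(field_name))
| where total_keys > unique_keys
| mvexpand field_name
| stats count by _raw, field_name
| where count > 1
| stats values(field_name) as duplicated_fields by _raw

The first where keeps only events containing at least one repeated key; the rest lists which field names repeat within each event.
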
I am attempting to set up encryption between a Splunk Universal Forwarder (version 9.0.3) and a Splunk Heavy Forwarder (version 9.0.3). I followed the instructions found in the Splunk Community post below exactly (with the exception of changing the IPs, of course, to reflect my environment), but I am getting the errors below in the splunkd.log file on the Universal Forwarder (the heavy forwarder is listening for connections on 9997).
Splunk Community post URL: https://community.splunk.com/t5/Security/How-do-I-set-up-SSL-forwarding-with-new-self-signed-certificates/td-p/57046
Please advise.

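Without the exact log lines it's hard to pin down, but the linked thread predates 9.x and some SSL setting names have changed since, so comparing against the current spec names often helps. A rough 9.x-style sketch (certificate paths, passwords, and the address are placeholders; every value is an assumption to verify against outputs.conf.spec and inputs.conf.spec):

# outputs.conf on the Universal Forwarder
[tcpout:ssl_group]
server = <heavy forwarder IP>:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem
sslPassword = <private key password>
sslVerifyServerCert = false

# inputs.conf on the Heavy Forwarder
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslPassword = <private key password>
requireClientCert = false

Posting the actual splunkd.log errors from the Universal Forwarder would make it much easier to narrow down where the handshake fails.
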
Hi all, I started working in Splunk just a few months ago and am new to it. Can anyone help me with some ideas, please? I have a lookup file (around 8,500 rows; columns: Host, status, category).
1. To find the non-reporting hosts present in my lookup file (the search below takes a long time to execute):

| inputlookup lookupfile
| where category="categoryname" AND status="Active"
| fields Host
| search NOT [tstats count where index="indexname" by Host | fields - count]
| stats count

2. Among the non-reporting hosts, I have to find the list of hosts that stopped sending logs in the past 24 hours. I am executing the search below with a 24-hour time range, but I am getting an incorrect result.

| inputlookup lookupfile
| where category="categoryname" AND status="Active"
| fields Host
| search NOT [tstats count where index="indexname" by Host | fields - count]
| search [tstats count where (index="indexname" earliest=-6mon@mon latest=now) by Host | fields - count]
| stats count

Please help me correct my search.

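Two things worth checking: tstats groups by the indexed field host (lower case), so "by Host" may return nothing and make every lookup entry look non-reporting, and pulling the reporting hosts once with their latest event time avoids the double subsearch. A sketch combining both questions (lookup, category, and index names are taken from the question; the 30-day comparison window is an assumption):

| inputlookup lookupfile
| where category="categoryname" AND status="Active"
| fields Host
| eval host=lower(Host)
| join type=left host
    [| tstats latest(_time) as last_seen where index="indexname" earliest=-30d by host
     | eval host=lower(host)]
| eval state=case(isnull(last_seen), "never reported in window",
                  last_seen < relative_time(now(), "-24h"), "stopped in last 24 hours",
                  true(), "reporting")
| stats count by state
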
Is it possible to find the storage (logs) used by applications/services in a particular index for a particular time range, or something similar? For example:
Query: ((index="digconn-timeser-prod") (kubernetes.container_name="*conn-server*")) OR ((index="digconn-timeser-qa") (kubernetes.container_name="*conn-server*"))
This would help identify logging issues from the apps/services side over a period of time.
Desired result:
conn-server-latency 1105 GB last 5 days
conn-server-lag 1505 GB last 5 days

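Indexed events don't carry a per-event size field, but the raw event length is a reasonable proxy for ingested volume. A sketch over the index and container terms from the question, run over the desired time range (for example, the last 5 days):

(index="digconn-timeser-prod" OR index="digconn-timeser-qa") kubernetes.container_name="*conn-server*"
| eval raw_bytes=len(_raw)
| stats sum(raw_bytes) as total_bytes by kubernetes.container_name
| eval total_GB=round(total_bytes/1024/1024/1024, 2)
| sort - total_GB

len(_raw) reflects ingested (license-style) volume rather than compressed on-disk storage; for per-index license usage, the license_usage.log data in the _internal index is the usual source.
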
Hello, I would like to request guidance on how to create a correlation search based on the data provided by the SANS threat intelligence feed at https://isc.sans.edu/block.txt. The malicious IPs in "block.txt" are updated regularly; how can my correlation search track that change in near real time, and what queries should I use?
Note: the SANS threat intel download has already been enabled.

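Assuming Enterprise Security's threat intelligence framework is doing the download, the parsed IPs land in the ip_intel collection and are refreshed on the download interval, so a correlation search only needs to match events against that lookup rather than track block.txt itself. A rough sketch (the data model, lookup, and field names are assumptions to verify in your ES instance):

| tstats summariesonly=true count from datamodel=Network_Traffic.All_Traffic
    by All_Traffic.src, All_Traffic.dest
| rename All_Traffic.* as *
| lookup ip_intel ip as dest OUTPUT threat_key
| where isnotnull(threat_key)
| table src, dest, threat_key, count

Also worth noting: ES ships threat-matching ("threat gen") searches and a "Threat Activity Detected" correlation search that already perform this matching against enabled intel sources, so enabling and tuning those may be enough.
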
Hello, I am trying to obtain IPs from hostnames. I am using inputlookup to get the list of hostnames from a CSV file. The problem is that "lookup dnslookup" only returns IPs for certain hosts (there are no missing fields). My queries seem correct, as the table command provides all the information I need, except that I get a lot of nulls instead of IPs. What am I doing wrong?
xxxxx xxxxxxx | lookup dnslookup clienthost as Hostname OUTPUT clientip as ComputerIP | table Hostname ComputerIP

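dnslookup is an external (scripted) lookup that performs a live DNS query from the search head, so nulls usually mean the name as written simply doesn't resolve from there: short names without a domain suffix, stale entries in the CSV, or a resolver that doesn't know the zone. A small sketch for testing whether appending the domain helps (hostnames.csv and example.com are placeholders for your lookup file and DNS suffix):

| inputlookup hostnames.csv
| eval fqdn=if(match(Hostname, "\."), Hostname, Hostname.".example.com")
| lookup dnslookup clienthost as fqdn OUTPUT clientip as ComputerIP
| table Hostname, fqdn, ComputerIP
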
"total size of all databases that reside on this volume to the maximum size specified, in MB" >> What is meant by all databases? does it sum up the file size of all tsidx file? OR Does it include e... See more...
"total size of all databases that reside on this volume to the maximum size specified, in MB" >> What is meant by all databases? does it sum up the file size of all tsidx file? OR Does it include everything in an index/db folder, which would include bloomfilter, .lex & .data files and rawdata folder    https://docs.splunk.com/Documentation/Splunk/latest/admin/Indexesconf maxVolumeDataSizeMB = <positive integer> * If set, this setting limits the total size of all databases that reside on this volume to the maximum size specified, in MB. Note that this it will act only on those indexes which reference this volume, not on the total size of the path set in the 'path' setting of this volume. * If the size is exceeded, splunkd removes buckets with the oldest value of latest time (for a given bucket) across all indexes in the volume, until the volume is below the maximum size. This is the trim operation. This can cause buckets to be chilled [moved to cold] directly from a hot DB, if those buckets happen to have the least value of latest-time (LT) across all indexes in the volume.