All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a search that takes quite some time to run. Using Python to run the search via the Splunk API, it returns the error: "role-based disk usage quota of search artifacts for user "test" has been reached (used 192MB, quota=100MB)". How can I increase the disk quota?
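For anyone hitting the same message: that quota is set per role by `srchDiskQuota` in authorize.conf. A minimal sketch, assuming the affected user's role is the built-in `user` role (adjust the stanza name for your role):

```ini
# authorize.conf (e.g. in etc/system/local or a dedicated app)
# srchDiskQuota caps, in MB, the disk space a user's search
# artifacts may occupy; the default is 100.
[role_user]
srchDiskQuota = 500
```

You can also free space immediately by deleting old jobs under Activity > Jobs, since the quota counts existing search artifacts in the dispatch directory.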
Apologies, but I am new to Splunk and am looking for a little bit of guidance/help. I am having an issue with one of our apps. For whatever reason, when I try going to Advanced Search/Macros from within the Network app, the URL changes to /data/macros and I get a 404 error. However, if I manually change it to /admin/macros, I can get in just fine and view/edit macros. It is only happening under one specific app; other apps are able to use /data/macros just fine. I think it might be permission related, but I am unsure where to check or what to look for. It is similar to this older thread, but I'm not sure where to start looking: https://community.splunk.com/t5/Knowledge-Management/Editing-macro-is-giving-a-404-error/m-p/433712/highlight/true
Hi, I just want to know, from AD account activity, who deleted a user account?
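A sketch of one common approach, assuming Windows Security logs are being collected (index and field names depend on your Windows add-on and inputs, so treat them as placeholders): Event ID 4726 is "A user account was deleted", and the subject fields identify who did it.

```spl
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4726
| table _time, user, src_user, ComputerName
```

Here `user` is typically the deleted account and `src_user` the account that performed the deletion, but verify against your extracted fields.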
I created a search to list servers and the last time a Windows log was reported. The command I am using is:  | tstats latest(_time) as lastseen where index=windows by host | convert ctime(lastseen)  I am trying to compare that "last seen" value to the current time, and if it is more than 24 hours old I would like to alert. Any thoughts on how to identify and alert on that? Ultimately I would love to add this to a dashboard.
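One way to sketch this, building directly on the tstats search above: filter to hosts whose last event is older than 24 hours, then save the search as an alert that triggers when the result count is greater than zero.

```spl
| tstats latest(_time) as lastseen where index=windows by host
| where lastseen < relative_time(now(), "-24h")
| eval lastseen = strftime(lastseen, "%Y-%m-%d %H:%M:%S")
```

Scheduled hourly with a "number of results > 0" trigger, this alerts only on silent hosts; the same search can back a dashboard panel unchanged.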
I need to get the top 10 event counts by source for each sourcetype in Splunk. Example: I have 3 sourcetypes, each sending data from different sources: sourcetype A - a,b,c,d,e; sourcetype B - a,b,c,d,e; sourcetype C - a,b,c,d,e. Now I need to display the top 10 event counts by source for each sourcetype.
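A sketch of one way to get a per-sourcetype top 10 (the index and sourcetype names are placeholders): count by the pair, sort within each sourcetype, then rank with streamstats and keep the first 10 of each group.

```spl
index=my_index sourcetype IN ("A", "B", "C")
| stats count by sourcetype, source
| sort 0 sourcetype, -count
| streamstats count as rank by sourcetype
| where rank <= 10
```

The `by sourcetype` on streamstats restarts the rank counter for each sourcetype, which is what makes the top 10 per group rather than overall.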
I have the below errors showing on the search head, and I've been looking for the cause with no luck. Unable to initialize modular input "es_identity" defined in the app "SplunkEnterpriseSecuritySuite": Introspecting scheme=es_identity_export: script running failed (exited with code 1) Post-Install Configuration gives Error: Fetch failed:admin/ess_configured/ssl I'm new to Splunk, thank you.
I have a CSV file with the delimiter "|", like the sample below, for a new ingestion. I wanted to use the standard sourcetype csv, but standard:csv has "," as its delimiter. Can we add more than one delimiter in the FIELD_DELIMITER attribute? If so, how can I do that? Or, for the sample below, do we need to define another sourcetype with its own delimiter? I tried FIELD_DELIMITER=",||" but it is not working. name|age|class john|9|first bob|10|second Could you please help?
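For reference, FIELD_DELIMITER takes a single delimiter character, so a list like ",||" won't work; the usual approach is a dedicated sourcetype per delimiter. A minimal props.conf sketch for the pipe-delimited sample above (the sourcetype name is a placeholder):

```ini
# props.conf on the ingesting instance (forwarder/indexer)
[my_pipe_csv]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = |
HEADER_FIELD_DELIMITER = |
```

Comma-delimited files would keep using the stock csv sourcetype; only the pipe-delimited input gets assigned `my_pipe_csv` in inputs.conf.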
Hi there, I have managed to enter other fields in a chart title, but I fail to get the selected time period visible via a dynamic variable in the title of a chart. So, I can do this chart title:  Time chart of Users and Journal notes ($care_unit_tok$) which, after the Care Unit selection = ASIH Botkyrka, results in this title:  Antal användare och journalanteckningar (| where (care_unit = "ASIH Botkyrka")) [Swedish: "Number of users and journal notes"]. But I fail to do the same with time_tok, e.g. like this:  Time chart of Users and Journal notes ($time_tok$) Even though I have made a time selection, which I can see in the filter as Jul 2023, I only get this text over the chart, without any time period displayed:  Time chart of Users and Journal notes Thanks for any ideas on how to solve this.
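One thing worth checking: a time input token has no value of its own; it exposes `.earliest` and `.latest` sub-tokens, which is consistent with the bare `$time_tok$` rendering as empty. A Simple XML sketch, assuming the input's token is named `time_tok`:

```xml
<panel>
  <!-- .earliest/.latest carry the selected range; the bare token does not -->
  <title>Time chart of Users and Journal notes ($time_tok.earliest$ - $time_tok.latest$)</title>
</panel>
```

Note the raw values may be epoch timestamps or relative strings like `-24h@h`, so you may want an `<eval>` token or a strftime conversion to make the title human-readable.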
We had a problem that certain fields weren't searchable.  index=foo bar=* did not show any result even though interesting fields and the events themselves showed that the fields were extracted as expected. Searching specific values also didn't help. Some fields did return results, but only if you added wildcards:   index=foo bar="*value"   It was possible to find the events when using:   | where isnotnull(bar)    OR    | eval check = if(bar="value",1,0) | where check = 1   but any other method did not return results. After some research in the community I found posts with similar problems. Some suggested that there could be hidden fields present. Most of the time the suggestion was to enable indexed extractions with fields.conf and add a wildcard there:   [MyField] INDEXED_VALUE = *<VALUE>   While trying out whether this option would work for me, I compared existing fields.conf files in the environment and noticed that all of the affected missing field values are part of the fields.conf of another app. By default fields.conf is global, if I'm not mistaken, hence the fields defined in that fields.conf as indexed extractions affected other sourcetypes; but since the fields in those other apps did not include the necessary parameters to successfully extract at index time, this error occurred.  Setting export to none in the metadata of the app that holds the fields.conf file solved my problem. local/default.meta:   [fields] export = none   As seen in the fields.conf docs: * Do not create indexed fields with names that collide with names of fields that are extracted at search time. I was aware of this rule regarding creating new fields myself, but forgot to check whether any of the installed apps already do so.  Hope this helps anyone that may be in a similar situation!
TLDR: If you have trouble with extracted fields and can't search them even though they are shown in the events and interesting fields, check your environment for a fields.conf and whether the affected fields are defined as indexed extractions. If so, limit the fields.conf with metadata to the original app.
The Slack channel mentioned here, https://hub.docker.com/r/splunk/splunk, is private; I'd like to join it.
Did blacklist/whitelist get replaced by denylist/allowlist in Splunk 9? In some blogs I read that Splunk 9 replaced blacklist with denylist. Or is blacklist still usable? In the changelogs of Splunk 9 I didn't find any evidence for the change, but the Splexicon and some blogs say otherwise. https://docs.splunk.com/Splexicon:Denylist https://www.splunk.com/en_us/blog/leadership/biased-language-has-no-place-in-tech-a-follow-up.html?locale=en_us Thanks for any explanation.
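For what it's worth, the renaming so far appears to be a documentation/terminology change rather than a removal of the settings: as far as I can tell, monitor stanzas on 9.x still accept the legacy attribute names, e.g.:

```ini
# inputs.conf — legacy attribute name, still accepted on 9.x
# (verify against the inputs.conf.spec shipped with your version)
[monitor:///var/log]
blacklist = \.gz$
```

Checking `$SPLUNK_HOME/etc/system/README/inputs.conf.spec` on your own installation is the authoritative way to confirm which names your version supports.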
The app installs, the readme is there, and the in-browser CyberChef.html works just fine. However, when I try to execute SPL using cyberchef like the example from the documentation:   | makeresults count=3 | streamstats count | eval data=random()/random() | cyberchef infield='data' outfield='convertedData' operation="ToBase64" | table data convertedData   I get an error output from Splunk:   Error in 'cyberchef' command: CyberChef chef.bake is not a function   I'm trying to get this running on a single test instance in a VM, and all file permissions look right. Am I missing something, or is this a symptom of me trying to test the app using the "free" version of Splunk (I seem to remember Enterprise being needed)? Any suggestions and/or feedback appreciated!
Hello, I have deployed an app to a distributed Search Head Cluster. This app contains only a props.conf file in the default directory. In the props.conf there is a calculated field writing a vendor product to a specific sourcetype. In the Search app this field is not populated, I believe because the sharing permission is set to app rather than to global. How can I change the permission from app to global? Initially I created this app on the Search Head deployer in the directory shcluster/apps by adding a directory TEST; there I created a directory default, and in it a file props.conf. Then I deployed it to the search heads. I see the app on all the search heads; it just has the wrong sharing permission. Thanks, Alex
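A sketch of the usual fix: sharing is controlled by the app's metadata, so adding a default.meta that exports everything system-wide, then redeploying the bundle, should make the calculated field visible in the Search app.

```ini
# shcluster/apps/TEST/metadata/default.meta on the deployer
# [] applies to all knowledge objects in the app;
# export = system shares them globally
[]
export = system
```

After adding the file, push again with `splunk apply shcluster-bundle` from the deployer so the metadata reaches the cluster members.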
Hi Splunkers. I've been trying for weeks to do the following: I have a search that outputs a table with MITRE techniques, as shown below: Query index=notable search_name="Endpoint - KTH*" | fields tactic_mitre, technique_mitre, risk_mitre, src_user, user, Computer, dest, search_name | stats count by technique_mitre |eval codes_tech=technique_mitre |makemv delim=", " codes_tech |mvexpand codes_tech |rename count as carry |stats sum(carry) as Total by codes_tech Result And I have another query that brings me a table with the IDs and names of the MITRE techniques and tactics: Query | inputlookup mitre_lookup | foreach TA00* [| lookup mitre_tt_lookup technique_id as <<FIELD>> OUTPUT technique_name as <<FIELD>>_technique_name | eval <<FIELD>>_technique_name=mvindex(<<FIELD>>_technique_name, 0) | eval <<FIELD>>=<<FIELD>>_technique_name . " - " . <<FIELD>>] | fields TA0043,TA0001, TA0002, TA0003, TA0004, TA0005, TA0006, TA0007, TA0008, TA0009, TA0011, TA0010, TA0040, TA0042 | rename TA0043 as "Reconnaissance", TA0042 as "Resource Development", TA0001 as "Initial Access", TA0002 as "Execution", TA0003 as "Persistence", TA0004 as "Privilege Escalation", TA0005 as "Defense Evasion", TA0006 as "Credential Access", TA0007 as "Discovery", TA0008 as "Lateral Movement", TA0009 as "Collection", TA0011 as "Command and Control", TA0010 as "Exfiltration", TA0040 as "Impact" Result   I would like to search within the MITRE table for the codes_tech from the first query: if Total is greater than 0, put the Total there; otherwise leave the other IDs at 0. Please, I really need your help.
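One way to sketch the merge, assuming `mitre_tt_lookup` has a `technique_id` column (as the foreach query suggests): append the full technique list with Total=0 to the notable counts, then collapse so techniques with hits keep their count and the rest stay at 0.

```spl
index=notable search_name="Endpoint - KTH*"
| stats count as Total by technique_mitre
| rename technique_mitre as codes_tech
| append
    [| inputlookup mitre_tt_lookup
     | fields technique_id
     | rename technique_id as codes_tech
     | eval Total=0]
| stats max(Total) as Total by codes_tech
```

The `max(Total)` keeps the real count where one exists (it is always >= 0) and the placeholder 0 everywhere else; the result can then feed the matrix-style rename query.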
Hi, I am trying to create a schema in Analytics using Postman, but I am getting the below error:   "Error provisioning account on any cluster"   I am following the steps as per the docs:   POST http://analytics.api.example.com/events/schema/{schemaName} HTTP/1.1 X-Events-API-AccountName:<global_account_name> X-Events-API-Key:<api_key> Content-Type: application/vnd.appd.events+json;v=2 Accept: application/vnd.appd.events+json;v=2 { "schema" : { "account": "integer", "amount": "float", "product": "string" } } And we are using the Analytics API URL below, as our controller is in the North America region: https://analytics.api.appdynamics.com
I have a "Severity Level" field in both index A and index B. Their structure is like:     ==index A=== Severity Level 1 2 3 4 ===index B=== Severity Level critical high medium low      Now I want to combine the two indexes in a search and display the Severity Level using      | timechart count by "Severity Level"     where the combined "Severity Level" values only contain 1,2,3,4. So, I need an eval = case() to map them. My syntax for that is      eval "Severity Level" = case('Severity Level' == "critical", 1 ,'Severity Level' == "high", 2, 'Severity Level' == "medium", 3, 'Severity Level' == "low", 4, 'Severity Level' == 1, 1, 'Severity Level' == 2, 2, 'Severity Level' == 3, 3, 'Severity Level' == 4, 4, 1=1, null)      With this, the result is incorrect, i.e., it only shows counts on 4. I think the problem is in the single and double quotes, but I am not sure which is which. It is a bit urgent, so I need help. Thanks.
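For reference, inside eval single quotes refer to a field name and double quotes to a string literal, and extracted field values are strings, so the numeric values from index A are safer compared as "1".."4". A sketch of the mapping with those points applied (note the default branch uses the `true()` and `null()` functions; a bare `null` is read as a field name):

```spl
| eval "Severity Level" = case(
    'Severity Level'=="critical", 1,
    'Severity Level'=="high",     2,
    'Severity Level'=="medium",   3,
    'Severity Level'=="low",      4,
    'Severity Level' IN ("1","2","3","4"), tonumber('Severity Level'),
    true(), null())
| timechart count by "Severity Level"
```

The `IN (...)` branch folds the four numeric-string cases from index A into one comparison instead of listing them separately.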
I have Snort shoving JSON logs to Splunk, and I see everything just dandy when I do the following in search:     sourcetype="snort3:alert:json"     I see many events! However, I now have to create alerts based on suspicious activities that I already search for in Splunk. Is there some way to load pre-set common alerts into Splunk? Do I have to go through the task of making each alert for Splunk (e.g., create an alert for bad RDP logins, lengthy ICMP probes, bad FTP logins, etc.)? I guess what I'm asking is: is there a way to make my Splunk populate alert rules that are predefined, or cover zero days?
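One relevant detail: alerts in Splunk are just saved searches, so a set of "preset" alerts can be shipped in an app's savedsearches.conf and loaded in one go. A sketch of a single packaged alert (the stanza name, schedule, and the `msg` field are placeholders, not real Snort field names):

```ini
# savedsearches.conf — one alert; duplicate the stanza per detection
[Snort - Suspicious RDP activity]
search = sourcetype="snort3:alert:json" msg="*RDP*"
cron_schedule = */15 * * * *
is_scheduled = 1
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
alert.track = 1
```

Content libraries such as the Splunk Security Essentials app also ship prebuilt detection searches that may cover some of these cases out of the box.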
Hello Splunk Community,  I'm trying to extract fields from CloudWatch events, like 1) region 2) arn 3) startTime 4) endTime 5) eventTypeCode 6) latestDescription. The regex works fine in regex101; however, it's not extracting all field values in Splunk. For example: | rex field=_raw "region":\s(?P<_region>"\w+-\w+-\d)" The above rex extracts only the us-east-1 region, whereas I have multiple regions in the data. Please help to extract the fields I mentioned.  Sample event: 2020-02-10T17:42:41.088Z 775ab4c6-ccc3-600b-9c84-124320628f00 {"records": [{"value": {"successfulSetoflog": [{"awsAccountId": "123456789123", "event": {"arn": "arn:aws:health:us-east-........................................................
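Two things likely at play here: rex captures only the first match unless you pass `max_match=0`, and field names starting with an underscore (like `_region`) are hidden by default. A sketch with both fixed and the quotes escaped inside the SPL string (assuming the JSON keys shown in the sample):

```spl
| rex field=_raw max_match=0 "\"region\":\s*\"(?<region>[\w-]+)\""
| rex field=_raw "\"arn\":\s*\"(?<arn>[^\"]+)\""
| rex field=_raw "\"eventTypeCode\":\s*\"(?<eventTypeCode>[^\"]+)\""
```

Since the raw event is JSON, `| spath` may extract all of these more reliably than regex, at the cost of the nested path names.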
Hello Community, I have tried searching, but I've not found an answer to my specific needs, or I don't know how to word my question. I work in a company that manufactures servers. Each server manufactured creates a logfile with a unique name. The log file is a txt file that has identifying lines like "Serial Number: FMXXXXXXX", "Station: 1", "Start Time: 12:00:00", etc. I am trying to configure Splunk to search all these log files based on serial number (to start with) and eventually create a searchable dashboard where I can look up log files based on serial numbers. I'm obviously new to Splunk, and have watched a lot of tutorials, but most tutorials focus on searching one big log file, or several log files. So far, I have set up the Splunk UI and pointed it to a directory containing my log files. Under "data summary" my sources are over 100k and sourcetypes are over 14k. Any help would be appreciated. Kevin
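A sketch of a starting point, assuming the files have been assigned one common sourcetype (the 14k auto-guessed sourcetypes suggest setting an explicit `sourcetype` on the monitor input first; `server_logs` below is a placeholder name): extract the serial number from the identifying line, then group by it.

```spl
sourcetype=server_logs
| rex "Serial Number:\s*(?<serial_number>\S+)"
| rex "Station:\s*(?<station>\d+)"
| stats latest(_time) as last_log, values(source) as log_files by serial_number
```

Once this works, the same rex patterns can be made permanent as EXTRACT rules in props.conf, and a dashboard text input can feed `serial_number=$serial_tok$` into the search.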
A couple of weeks ago I took a screenshot of the "Save As Alert" window. Alert type Real-time was available. Today, my colleagues discovered that we can no longer do so, the only alert type is "Scheduled". Our admin role has the schedule_rtsearch capability set, but we are not able to find the "rtsearch" capability mentioned in the documentation as also required. Has the latest Splunk Cloud version upgrade to 9.0.2305 removed that capability? Or am I missing something?