All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Howdy, I'm building out some alerting in Splunk ES and created a new correlation search. That is all working, but I'm unable to pass my eval as a value into the email alert. What I have:

| eval alert_message=range.":".sourcetype." log source has not checked in ".'Communicated Minutes Ago'." minutes. On index=".index.". Latest Event:".'Latest Event'
| table alert_message

Just running the search works; the table is there and looks correct. I've tried variations of $alert_message$ with and without quotes, but alert_message never gets passed to the email alert. I haven't tried to generate a notable, but I'm guessing I'll have the same issue. I feel like I'm missing something easy here...
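One detail that often explains this (a hedged suggestion based on standard Splunk alert-action token syntax, not on anything confirmed in the post): in an email alert action, fields from the search results are referenced with the $result.<fieldname>$ token form, not bare $<fieldname>$, and only the first result row is substituted. A run-anywhere sketch with an invented message:

```spl
| makeresults
| eval alert_message="example_source log source has not checked in 42 minutes"
| table alert_message
```

With a search like this saved as an alert, the email action's Message field would reference the value as $result.alert_message$ rather than $alert_message$.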
So I have been trying to use if statements, but I don't seem to be getting the if statement correct:

index=kafka-np sourcetype="KCON" connName="CCNGBU_*" ERROR=ERROR OR ERROR=WARN Action="restart failed" OR Action="disconnected" OR Action="Task threw an uncaught an unrecoverable exception"
| eval if(Action="restart failed", "restart failed", "OK", Action="disconnected","disconnected","OK", Action="Task threw an uncaught an unrecoverable exception", "ok")
| table Action host connName

I've tried several different formats for the if, but it keeps telling me the if statements are wrong. What am I not seeing here?
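For what it's worth, a sketch of how that conditional is usually written (hedged: the field names come from the post, but the message strings are guesses at the intent). eval always needs a destination field, if() takes exactly three arguments, and a multi-branch test is normally expressed with case():

```spl
index=kafka-np sourcetype="KCON" connName="CCNGBU_*"
    (ERROR=ERROR OR ERROR=WARN)
    (Action="restart failed" OR Action="disconnected" OR Action="Task threw an uncaught an unrecoverable exception")
| eval StatusMsg=case(
    Action=="restart failed", "restart failed",
    Action=="disconnected", "disconnected",
    Action=="Task threw an uncaught an unrecoverable exception", "unrecoverable exception",
    true(), "OK")
| table Action StatusMsg host connName
```

Note that the OR groups in the base search probably need parentheses too; without them the implicit ANDs bind tighter than the ORs.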
Hi all, I am trying to parse raw data with JSON elements into proper JSON format in Splunk. I have tried multiple props.conf configurations but failed to parse it as per the expected output. Below I have attached the data coming in as a single event in Splunk, and the expected data we want to see. Can someone please correct my props.conf?

Events on Splunk with default sourcetype

{"messageType":"DATA_MESSAGE","owner":"381491847064","logGroup":"tableau-cluster","logStream":"SentinelOne Agent Logs","subscriptionFilters":["splunk"],"logEvents":[{"id":"38791169637844522680841662226148491272212438883591651328","timestamp":1739456206172,"message":"[2025-02-13 15:16:41.413885] [110775] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24388.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24388.tmp: No such file or directory\n[2025-02-13 15:16:42.213970] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24390.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24390.tmp: No such file or directory\n[2025-02-13 15:16:42.214870] [110830] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24389.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24389.tmp: No such file or directory\n[2025-02-13 15:16:42.218488] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24391.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24391.tmp: No
such file or directory\n[2025-02-13 15:16:43.815051] [110827] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24392.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24392.tmp: No such file or directory\n[2025-02-13 15:16:44.617525] [110773] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24394.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24394.tmp: No such file or directory\n[2025-02-13 15:16:45.413954] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24393.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24393.tmp: No such file or directory"},{"id":"38791169749325947928296247310685546917181598051987750913","timestamp":1739456211171,"message":"[2025-02-13 15:16:47.014642] [110770] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24395.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24395.tmp: No such file or directory\n[2025-02-13 15:16:47.813934] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24396.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24396.tmp: No such file or directory\n[2025-02-13 15:16:47.814459] [110828] [warning] DV process create: Couldn't fetch grandparent process of process 26395 from the data 
model\n[2025-02-13 15:16:47.815399] [110828] [warning] DV process create: Couldn't fetch grandparent process of process 26396 from the data model\n[2025-02-13 15:16:47.816855] [110827] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24397.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24397.tmp: No such file or directory\n[2025-02-13 15:16:48.616944] [110825] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream

Expected Output with fields extraction

{ "messageType": "DATA_MESSAGE", "owner": "381491847064", "logGroup": "tableau-cluster", "logStream": "SentinelOne Agent Logs", "subscriptionFilters": ["splunk"], "logEvents": [ { "id": "38791169637844522680841662226148491272212438883591651328", "timestamp": 1739456206172, "message": "[2025-02-13 15:16:41.413885] [110775] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24388.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24388.tmp: No such file or directory\n[2025-02-13 15:16:42.213970] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24390.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24390.tmp: No such file or directory\n[2025-02-13 15:16:42.214870] [110830] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24389.tmp: stat failed on path: 
/app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24389.tmp: No such file or directory\n[2025-02-13 15:16:42.218488] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24391.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24391.tmp: No such file or directory\n[2025-02-13 15:16:43.815051] [110827] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24392.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24392.tmp: No such file or directory\n[2025-02-13 15:16:44.617525] [110773] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24394.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24394.tmp: No such file or directory\n[2025-02-13 15:16:45.413954] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24393.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24393.tmp: No such file or directory" } ] }   Props.conf   [json_splunk_logs] # Define the source type for the logs sourcetype = json_splunk_logs # Time configuration - Parse the timestamp in your message TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N TIME_PREFIX = \["message"\] \[ # Specify how to break events in the multiline message SHOULD_LINEMERGE = false LINE_BREAKER = ([\r\n]+) # Event timestamp extraction DATETIME_CONFIG = NONE # JSON parsing - This tells Splunk to extract fields from JSON automatically 
KV_MODE = json
# The timestamp is embedded in the message, so the following configuration is necessary for time extraction.
EXTRACT_TIMESTAMP = \["messageType":"DATA_MESSAGE","owner":"\d+","logGroup":"\w+","logStream":"\w+","subscriptionFilters":\[\\"splunk\\"\],\s"timestamp":(\d+),".*?
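A hedged sketch of a props.conf that might get closer to the expected result (the LINE_BREAKER pattern and TRUNCATE value are assumptions about this data, not tested settings). Two things stand out in the posted config: EXTRACT_TIMESTAMP is not a recognized props.conf attribute, and "sourcetype =" inside a sourcetype stanza is redundant. Since each record is a single-line JSON object with an epoch-millisecond "timestamp" field, the usual pattern is search-time JSON extraction plus a timestamp pulled from that field:

```ini
[json_splunk_logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\{"messageType"
TRUNCATE = 500000
TIME_PREFIX = "timestamp":
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
KV_MODE = json
```

One caveat: props.conf will not re-indent the raw event. The pretty-printed "expected output" is roughly how the event viewer renders an event once it parses as valid JSON; the fields themselves come from KV_MODE = json at search time.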
Can someone please tell me where I can obtain a Splunk Enterprise trial license?
| stats values(X_*) as * by _time

This line removes all the other fields. You would need to add more fields to this stats command if you want more fields to be kept.
When I upgraded ES to 8.0.2, the "Short ID" button went missing from the Additional Fields section, and I also can't search by the case ID instead of time.
Hello Splunk colleagues! I'm trying to create a new correlation search that generates a notable event and uses a field I generate for the title. The title field in the notable indicates I can use variable substitution, and I've verified that the field is being created for every event the correlation search generates. The field is called my_rule_title.

In the notable event I am putting in $my_rule_title$, and when the notable is generated, the rule title on Incident Review literally says "$my_rule_title$" and not the contents of the field my_rule_title. What am I doing wrong, such that the rule title in Incident Review doesn't display the value of my_rule_title? The other variable substitutions I'm doing in the correlation search, for $description$ and $urgency$, are working as expected; just not the title.
Hello, I have the below SPL where I am looking to fetch the user accounts that have not logged in for 30 days or more, but I am not seeing any results. Can someone please help me check whether this query is correct?

index=windows sourcetype=* EventCode=4624
| stats latest(_time) as lastLogon by Account_Name
| eval days_since_last_logon = round((now() - lastLogon) / 86400, 0)
| where days_since_last_logon > 30
| table Account_Name, days_since_last_logon

Thanks in advance
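A hedged observation on why this kind of query often returns nothing: the stats can only see accounts that logged in inside the search time range, so if the search runs over, say, the last 24 hours, every Account_Name it finds has a days_since_last_logon near 0 and the where clause filters them all out. One common workaround (the 90-day window is an illustrative assumption) is to search a window longer than the inactivity threshold, so accounts whose most recent logon is older than 30 days are still present in the results:

```spl
index=windows sourcetype=* EventCode=4624 earliest=-90d
| stats latest(_time) as lastLogon by Account_Name
| eval days_since_last_logon = round((now() - lastLogon) / 86400, 0)
| where days_since_last_logon >= 30
| table Account_Name, days_since_last_logon
```

Accounts that never logged in during the whole window still won't appear at all; catching those typically requires comparing against a lookup of known accounts.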
Appreciate the link; I'll have to dig into that. I was hoping to get a working example here, though, that I could use and customize on my own.
Hi, thanks for your reply! I need to do a partial match on LKUP_DSN. Could you please help? Thanks, Ravikumar
Hi, in the end I used this, however it was not clear to me why I did not need to reference the newly created X_ fields; I could go straight to the mr_ names:

source="trace_Marketing_Bench_31032016_17_cff762901d1eff01766119738a9218e2*.jsonl" host="TEST2" index="murex_logs" sourcetype="Market_Risk_DT" "**strategy**" 920e1021406277a9
| spath resourceSpans{}.scopeSpans{}.spans{}.attributes{} output=attributes
| mvexpand attributes
| spath input=attributes
| eval X_{key}=coalesce('value.doubleValue', 'value.stringValue')
| stats values(X_*) as * by _time
| stats sum(mr_batch_load_cpu_time) as batch_load_cpu_time sum(mr_batch_load_time) as batch_load_time sum(mr_batch_compute_time) as mr_batch_compute_time sum(mr_batch_compute_cpu_time) as mr_batch_compute_cpu_time by mr_strategy

This created the table I was looking for. What I don't understand is that at this point the only place the new X_ fields appear is in "| stats values(X_*) as * by _time", and after that we are back to the original field names. I don't get that.
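On why the X_ prefix disappears: in stats, "values(X_*) as *" is a wildcard rename, so whatever the * matched in X_* becomes the output field name, meaning X_mr_strategy comes out as mr_strategy. A run-anywhere sketch (field names invented for illustration):

```spl
| makeresults
| eval X_mr_strategy="delta_ladder", X_mr_batch_load_time=12
| stats values(X_*) as * by _time
```

After the stats, the fields are mr_strategy and mr_batch_load_time; the X_ prefix only ever existed to group the dynamically created fields for the wildcard, which is why the later stats can reference the mr_ names directly.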
This is a feature that is required. I tried several steps and nothing worked. Was anyone able to achieve this?
The StatusMsg field is being created on the fly, but it has to come from *somewhere*. The OP has a list of possible messages, but there is no indication of when each is used. <<some expression>> refers to a Boolean check that decides when to set StatusMsg to a specific string. The expression probably will need to test the values of other fields (perhaps Host and/or ConnName). You know your data better than I do, so I can't be more detailed than that.
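To make the <<some expression>> placeholder concrete, an entirely invented illustration of a Boolean check that tests other fields (the conditions and message strings here are made up; only the field names come from the thread):

```spl
| eval StatusMsg=case(
    Action=="disconnected" AND like(connName, "CCNGBU_%"), "CCNGBU connector disconnected",
    Action=="restart failed" AND like(host, "lx%"), "Connector restart failed on a Linux host",
    true(), "OK")
```

Each pair in case() is one Boolean check followed by the string to assign when it is true; the true() branch is the fallback.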
Search for all your users, then extract the CN using the rex. If you are trying to tighten your search criteria, here is the spec for searches: https://datatracker.ietf.org/doc/html/rfc2254
StatusMsg is the field (the on-the-fly field) that I want to be populated by the message, so I'm not certain what you mean by <<some expression>>. That was why I thought maybe this would be an if-then type of query: if StatusMsg="some value", then put that in the table along with the other data; if not, then go to the next status message. So I would want:

Action                               Host           ConnName
"Task threw an uncaught..."          lx.......      CCNBU----

So should this be an if-then search?
If this helps, this is how we currently list the members of a specific AD group:

ldapsearch search="(&(objectClass=user)(memberOf=CN=Schema Admins,OU=groups,DC=domain,DC=xxx))"
Setup currently is LM, indexer, SH, and DS all on the same host. I'm currently using Splunk Enterprise version 9.4. I get about 10 messages a second logged in splunkd.log with the following errors:

ERROR BTree [1001653 IndexerTPoolWorker-3] - 0th child has invalid offset: indexsize=67942584 recordsize=166182200, (Internal)
ERROR BTreeCP [1001653 IndexerTPoolWorker-3] - addUpdate CheckValidException caught: BTree::Exception: Validation failed in checkpoint

I have noticed the btree_index.dat and btree_records.dat in /opt/splunk/data/fishbucket/splunk_private_db are re-created every few seconds. From what I can tell, after they get to a certain point, those files are copied into the corrupt directory and are deleted. It then starts all over. I have tried to shut down Splunk and copy snapshot files over, but when I restart Splunk they are overwritten and we start the whole loop of files getting created and then copied to corrupt.

I tried a repair on the data files with the following command:

splunk cmd btprobe -d /opt/splunk/data/fishbucket/splunk_private_db -r

which returned the following output:

no root in /opt/splunk/data/fishbucket/splunk_private_db/btree_index.dat with non-empty recordFile /opt/splunk/data/fishbucket/splunk_private_db/btree_records.dat
recovered key: 0xd3e9c1eb89bdbf3e | sptr=1207
Exception thrown: BTree::Exception: called debug on btree that isn't open!

It is totally possible there is some corruption somewhere. We did have a filesystem issue a while back; I had to do an fsck and there were a few files that I removed. As far as data goes, I can't seem to find where the problem might be. In Splunk search I appear to have incomplete data in the _internal index, and the Licensing and Data Quality views are empty with no data.

Do I have some corrupt data somewhere which is causing problems with my btree index data? How would I go about finding the cause of this problem?
You are almost there - assuming your field is _raw:

| rex "(?<groups>(?<=CN=)[^,]+)"
Hi everyone! After upgrading to version 3.8.1, I got a bunch of errors. In the Security Content view I get the following:

app/Splunk_Security_Essentials/security_content 404 (Not Found)
web-client-content-script.js:2 Uncaught (in promise) Error: Access to storage is not allowed from this context.

On the Data Inventory page I get the following:

Error! Received the following error: Description Message Error occurred while grabbing data_inventory_products

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_Security_Essentials/bin/generateShowcaseInfo.py", line 473, in handle
    if row['stage'] in ['all-done', 'step-review', 'step-eventsize', 'step-volume', 'manualnodata']:
KeyError: 'stage'

There are also a couple of apps: splunk_essentials_8_2 and splunk_essentials_9_0 are both enabled. Does anyone know how to fix this? Thanks!
Example table output would be something like:

User1  Schema Admins
User2  Schema Admins
User1  Enterprise Admins
User3  Domain Admins
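Pulling the thread together, a hedged sketch of how such a user/group table might be produced with the ldapsearch command from earlier in the thread (the attrs list and the group filter are assumptions for illustration, not a verified configuration):

```spl
| ldapsearch search="(objectClass=user)" attrs="sAMAccountName,memberOf"
| mvexpand memberOf
| rex field=memberOf "CN=(?<group>[^,]+)"
| search group IN ("Schema Admins", "Enterprise Admins", "Domain Admins")
| table sAMAccountName group
```

mvexpand turns each user's multivalued memberOf into one row per group, so the same user appears once per group membership, matching the example table above.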