All Posts


I can't imagine anything other than that the regex doesn't match - all else looks fine. AND - I think the data you provided was munged by the editor! Can you repaste that sample event, and be SURE to use the </> code button?
| eval Filename=mvindex(split(INTERVAL_FILE,"\\"),-1)
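A sketch of where that eval might sit in the search from the question, so the new Filename column lands between INTERVAL_FILE and STATUS in the table (index, source, and field names are copied from the question and untested against the real data):

index=xyz sourcetype="automation:csv" source="D:\\Intradiem_automation\\ACD_FILETRACKER.csv"
| rex field=_raw "^(?P<ACD>\w+\.\d+),(?P<ATTEMPTS>[^,]+),(?P<FAIL_REASON>[^,]*),(?P<INTERVAL_FILE>[^,]+),(?P<STATUS>\w+),(?P<START>[^,]+),(?P<FINISH>[^,]+),(?P<INGEST_TIME>.+)"
| eval Filename=mvindex(split(INTERVAL_FILE,"\\"),-1)
| table ACD, ATTEMPTS, FAIL_REASON, INTERVAL_FILE, Filename, STATUS, START, FINISH
| dedup INTERVAL_FILE
| sort -START

split() breaks the Windows path on backslashes and mvindex(..., -1) keeps the last segment, e.g. "020624.0500".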
Oh lovely, the "once per day" does wonders for simplifying the problem's edges. So there are a few different ways to handle this. Let's go through some options.

I think our base search will be something like

(index="a" sourcetype="x" "Generating Event gz File for*") OR (index="b" sourcetype="y" "File Processed*")

I'm giving you the search piece by piece, with the idea that you'll paste each piece in and see what the results are (perhaps with something like | table * after it), so you understand what it's doing before you add the next piece. (Note some are "add this to the end" and others are "replace the last one with this one", so just be aware.)

Anyway, that's what many of us call a 'data salad'. Splunk handles messy stuff just fine. Toss it all in the salad, then later we'll add croutons and dressing. That should give you all the data - both sides of it.

Now, from here you could do something as simple as counting the results. Add this to the end:

| stats count

If all is well you will have an answer of 2. If the process is broken you may get 1, and if it hasn't run yet today you'll get 0. This could be used as is, but I feel it's rather plain and the alert will be sort of dumb and uninteresting and without context.

The dumb way to make it interesting is to eval the count at the end so it says words. Add this to the end:

| eval status = case(count==2, "Everything processed correctly.", count==1, "Danger Will Robinson, it didn't process right!", true(), "I don't know what's going on, nothing came in today at all!")

Now when you run it, you'll get some words that would possibly be useful in the alert! But this is still just kind of "not using the information we have available".

So, replace the entire | stats ... through the end with this new stats + stuff (i.e. keep the base search at the top):

| eval generated = if(searchmatch("Generating Event gz File for"), 1, 0)
| eval processed = if(searchmatch("File Processed"), 1, 0)
| stats sum(generated) AS generated, sum(processed) AS processed BY filename
| eval status = case(generated == 1 AND processed == 1, "Received and Processed " . filename, generated == 1 AND processed == 0, "NOT PROCESSED " . filename, true(), "Nothing reported at all")

What that does is, before we stats, we create some fields (generated and processed) with a 0 or 1 in them (i.e. false or true). We sum those by filename (just in case!) with the stats, then create a "status" field that's got some information plus the filename.

It should work? I mean, I don't have your data, but at least it generates no errors. Feel free to break it down - start by adding the two evals to see that THEY work right, then add the stats to see if it counts right, etc.

Let me know what else this might need to do! We could include a time so that you could run historical reports... there's all sorts of other things you could do with it.
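For reference, the whole last approach assembled in one place (the index names, sourcetypes, search strings, and the filename field are assumptions carried over from the thread, so treat this as a sketch rather than a drop-in alert):

(index="a" sourcetype="x" "Generating Event gz File for*") OR (index="b" sourcetype="y" "File Processed*")
| eval generated = if(searchmatch("Generating Event gz File for"), 1, 0)
| eval processed = if(searchmatch("File Processed"), 1, 0)
| stats sum(generated) AS generated, sum(processed) AS processed BY filename
| eval status = case(generated == 1 AND processed == 1, "Received and Processed " . filename, generated == 1 AND processed == 0, "NOT PROCESSED " . filename, true(), "Nothing reported at all")

Scheduled once a day over the last 24 hours, an alert could then trigger on any row where status starts with "NOT PROCESSED" or "Nothing reported".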
@ITWhisperer  Seems it's working partially. I can see that only data with cl1 is getting replaced. I also have data with cl3 which needs to be replaced by ACD85. cl1=ACD55 cl3=ACD85. Am I missing anything here?
"searchable" and "archived" are Splunk Cloud terms, but this question is in the Splunk Enterprise forum.  Please confirm which is in use. In Splunk Cloud, one sets the Searchable Days value in the U... See more...
"searchable" and "archived" are Splunk Cloud terms, but this question is in the Splunk Enterprise forum.  Please confirm which is in use. In Splunk Cloud, one sets the Searchable Days value in the UI or via ACS.  For one year of searching, set the value to 365.  Make sure the maximum size of the index is sufficient to hold the expected volume of data for that time.  Set the archive period by enabling DDAA and entering 730 as the archive time (365 days as searchable plus 365 days archived). In Splunk Enterprise, data is searchable until it is frozen.  There is no archive status unless you implement a coldToFrozenScript or coldToFrozenDir to move the data to a separate location for safe-keeping. These settings are in indexes.conf. For more information, see: https://docs.splunk.com/Documentation/SplunkCloud/9.0.2303/Admin/ManageIndexes https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Indexesconf https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Setting_data_retention_rules_in_Splunk_Cloud_Platform  
Hi, yes, that's exactly what I did and that fixed the issue in my case :). Thanks! Ema
thanks
Also, I have another doubt. I have written the below query to get a specific output:

index=xyz sourcetype="automation:csv" source="D:\\Intradiem_automation\\ACD_FILETRACKER.csv"
| rex field=_raw "^(?P<ACD>\w+\.\d+),(?P<ATTEMPTS>[^,]+),(?P<FAIL_REASON>[^,]*),(?P<INTERVAL_FILE>[^,]+),(?P<STATUS>\w+),(?P<START>[^,]+),(?P<FINISH>[^,]+),(?P<INGEST_TIME>.+)"
| eval field_in_hhmmss=tostring(INGEST_TIME, "duration")
| rename field_in_hhmmss AS INGESTION_TIME_HH-MM-SS
| search ACD="*" ATTEMPTS="*" FAIL_REASON="*" INTERVAL_FILE="*" STATUS="*" START="*" FINISH="*" INGESTION_TIME_HH-MM-SS="*"
| table ACD, ATTEMPTS, FAIL_REASON, INTERVAL_FILE, INTERVAL_FILE1, STATUS, START, FINISH, INGESTION_TIME_HH-MM-SS
| dedup INTERVAL_FILE
| sort -START

I would like to extract the filename "020624.0500" from the INTERVAL_FILE column and create another column named "Filename" beside the INTERVAL_FILE column and before the STATUS column. Please help.

ACD | ATTEMPTS | FAIL_REASON | INTERVAL_FILE | STATUS | START | FINISH | INGESTION_TIME_HH-MM-SS
acd.55 | 1 | NULL | C:\totalview\ftp\switches\customer1\55\020624.0500 | PASS | 2024-02-06 11:32:30.057 +00:00 | 2024-02-06 11:32:52.274 +00:00 | 00:00:22
acd.55 | 1 | NULL | C:\totalview\ftp\switches\customer1\55\020624.0530 | PASS | 2024-02-06 12:02:30.028 +00:00 | 2024-02-06 12:02:54.151 +00:00 | 00:00:24
acd.85 | 1 | NULL | C:\totalview\ftp\switches\customer1\85\020624.0500 | PASS | 2024-02-06 11:31:30.021 +00:00 | 2024-02-06 11:31:40.788 +00:00 | 00:00:10
@ITWhisperer  Great!! It worked. Thank you for the quick solution.
| eval filelocation=if(like(filelocation,"%\cl1%"),"ACD55","ACD85")
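If more than two patterns ever need handling (as in the cl3 follow-up), a case()-based variant of the same idea, which leaves anything unmatched untouched, might look like this (the path patterns are guesses based on the sample events):

| eval filelocation=case(like(filelocation, "%\\cl1\\%"), "ACD55", like(filelocation, "%\\cl3\\%"), "ACD85", true(), filelocation)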
Hi team, I have the same issue and I made sure all keys are correct.
I am looking for a specific query where I can alter the row values after the final output and create a new column with the new value. For example, I have written the below query:

index=csv sourcetype="miscprocess:csv" source="D:\\automation\\miscprocess\\output_acd.csv"
| rex field=_raw "\"(?<filename>\d*\.\d*)\"\,\"(?<filesize>\d*\.\d*)\"\,\"(?<filelocation>\S*)\""
| search filename="*" filesize="*" filelocation IN ("*cl3*", "*cl1*")
| table filename, filesize, filelocation

Which gives me the following output:

filename | filesize | filelocation
012624.1230 | 13253.10546875 | E:\totalview\ftp\acd\cl1\backup_modified\012624.1230
012624.1230 | 2236.3291015625 | E:\totalview\ftp\acd\cl3\backup\012624.1230
012624.1200 | 13338.828125 | E:\totalview\ftp\acd\cl1\backup_modified\012624.1200
012624.1200 | 2172.1640625 | E:\totalview\ftp\acd\cl3\backup\012624.1200
012624.1130 | 13292.32421875 | E:\totalview\ftp\acd\cl1\backup_modified\012624.1130
012624.1130 | 2231.9658203125 | E:\totalview\ftp\acd\cl3\backup\012624.1130
012624.1100 | 13438.65234375 | E:\totalview\ftp\acd\cl1\backup_modified\012624.1100

BUT, I would like the row values in the filelocation column to be replaced by "ACD55" where the file location is cl1 and by "ACD85" where the file location is cl3. So the desired output should be:

filename | filesize | filelocation
012624.1230 | 13253.10546875 | ACD55
012624.1230 | 2236.3291015625 | ACD85
012624.1200 | 13338.828125 | ACD55
012624.1200 | 2172.1640625 | ACD85
012624.1130 | 13292.32421875 | ACD55
012624.1130 | 2231.9658203125 | ACD85
012624.1100 | 13438.65234375 | ACD55

The raw events look like this:

"020424.0100","1164.953125","E:\totalview\ftp\acd\cl3\backup\020424.0100"
"020624.0130","1754.49609375","E:\totalview\ftp\acd\cl1\backup_modified\020624.0130"

Please suggest.
What events are you dealing with? Please share an anonymised sample selection. What do your expected results look like? What have you tried so far?
All these questions can be answered in the Cloud Monitoring Console, and you should start there instead of trying to write your own bespoke SPL. License > Storage overview is a great place to start. There are also DDAS and DDAA searches there.

Specifically for archive data, please review the docs: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/DataArchiver

Steps to review the overall size and growth of your archived indexes (useful to understand how much of your entitlement you are consuming, and to predict usage and expenses for your archived data):
1. From Splunk Web, go to Settings > Indexes.
2. From the Indexes page, click on a value in the Archive Retention column.
I'm trying to understand the below query in order to implement it. What would be the expected result? Any ideas about this query?

https://lantern.splunk.com/Splunk_Platform/UCE/Security/Threat_Hunting/Protecting_a_Salesforce_cloud...

ROWS_PROCESSED>0 EVENT_TYPE=API OR EVENT_TYPE=BulkAPI OR EVENT_TYPE=RestAPI
| lookup lookup_sfdc_usernames USER_ID
| bucket _time span=1d
| stats sum(ROWS_PROCESSED) AS rows BY _time Username
| stats count AS num_data_samples max(eval(if(_time >= relative_time(maxtime, "-1d@d"), 'rows', null))) AS rows avg(eval(if(_time<relative_time(maxtime,"-1d@d"),'rows',null))) AS avg stdev(eval(if(_time<relative_time(maxtime,"-1d@d"),'rows',null))) AS stdev BY Username
| eval lowerBound=(avg-stdev*2), upperBound=(avg+stdev*2)
| where 'rows' > upperBound AND num_data_samples >= 7
Yes, thank you... we have integrated an S3 bucket and we are onboarding the logs from there.
Basically - how do we get the below information? This is critical for us to know on a daily basis as a service provider.
1. How much data is ingested into Splunk on a daily basis. (We have a query for that.)
2. How much data is being stored on active searchable storage on a daily basis (assuming the ingested data should be reflected in the active searchable storage).
3. How much data has been moved from searchable active (Online Storage) to Active Archive (Offline Storage) on a daily basis.
4. How much data has been purged/deleted from Active Archive (Offline Storage) daily.
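For the first item in that list, a commonly used sketch of a daily ingest search based on the internal license usage log (it needs access to index=_internal on the license manager, and the figures are an approximation of indexed volume rather than exact billing numbers):

index=_internal source=*license_usage.log type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_ingest_GB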
Horizontal Scan: an external scan against a group of IPs for a single port. Vertical Scan: an external scan of a single IP against multiple ports.
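As a loose illustration only, one way those two patterns are often told apart in firewall traffic (the index, field names, and thresholds here are all assumptions, not a ready-made detection):

index=firewall action=blocked
| stats dc(dest_ip) AS unique_hosts, dc(dest_port) AS unique_ports BY src_ip
| eval scan_type=case(unique_hosts>=20 AND unique_ports<=3, "horizontal", unique_ports>=20 AND unique_hosts<=3, "vertical", true(), "other")
| where scan_type!="other"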
Sorry for not being more descriptive. Both searches have different indexes. I want to alert when search1 AND search2 each return results greater than zero.

How long is the time period involved? - Only one time in a day.
How often will you have this alert scheduled (different from the first question!)? - The first and second searches can be done at the same time, because within a few seconds of the file being received, the file will be processed.
Is it a 1 to 1 relationship between "create" events and "processing" events? - Yes.
What's the maximum time difference between those two events? - Maximum 1 hr 1 minute.
Does it matter more if a file gets created but not processed, or does that situation matter less, or is this actually the only thing that matters? - Yes, it's critical if a file is received (search1) and not processed (search2).
Do you already have the filename being extracted as a field in these two events? - Yes, I have.
How often do you expect the pair of messages (daily? hourly? hundreds per second?) - Daily, once.
Hi,

So sorry. I thought I had updated and resolved this message. As I was trying to get logged in (it took a while!), you sent the other update. That was not the fix for me. While I had a case open for a while with Splunk, I came across this fix:

On the forwarder, in /opt/splunkforwarder/etc/system/local/server.conf, add this stanza:

[httpServer]
mgmtMode = tcp

Regards.