All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I could get the results when I run the command. My observation about the lookup file between the SH and ES: on the SH, the .csv extension is missing; once it is added, the search runs. I'm also trying to understand the query below so I can implement it. Firstly, the description provided in the use case is not clear to me. I got this use case from the Splunk SF content search. Does anyone have an idea about this query? https://lantern.splunk.com/Splunk_Platform/UCE/Security/Threat_Hunting/Protecting_a_Salesforce_cloud_deployment/Spike_in_exported_records_from_Salesforce_cloud

ROWS_PROCESSED>0 EVENT_TYPE=API OR EVENT_TYPE=BulkAPI OR EVENT_TYPE=RestAPI
| lookup lookup_sfdc_usernames USER_ID
| bucket _time span=1d
| stats sum(ROWS_PROCESSED) AS rows BY _time Username
| stats count AS num_data_samples
        max(eval(if(_time >= relative_time(maxtime, "-1d@d"), 'rows', null))) AS rows
        avg(eval(if(_time < relative_time(maxtime, "-1d@d"), 'rows', null))) AS avg
        stdev(eval(if(_time < relative_time(maxtime, "-1d@d"), 'rows', null))) AS stdev
        BY Username
| eval lowerBound=(avg-stdev*2), upperBound=(avg+stdev*2)
| where 'rows' > upperBound AND num_data_samples >= 7
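For what it's worth, my reading of that use case (not an authoritative explanation) is a standard statistical outlier test: sum the exported rows per user per day, compute the mean and standard deviation over the historical days, and flag the most recent day if it exceeds mean + 2*stdev. Note the pasted query references `maxtime` without defining it; it is presumably set earlier in the full Lantern search. A minimal sketch of the same pattern with hypothetical field names, deriving `maxtime` explicitly:

```
index=my_index sourcetype=my_sourcetype
| bucket _time span=1d
| stats sum(bytes) AS daily_total BY _time user
| eventstats max(_time) AS maxtime BY user
| stats count AS num_data_samples
        max(eval(if(_time >= relative_time(maxtime, "-1d@d"), daily_total, null()))) AS latest
        avg(eval(if(_time < relative_time(maxtime, "-1d@d"), daily_total, null()))) AS avg
        stdev(eval(if(_time < relative_time(maxtime, "-1d@d"), daily_total, null()))) AS stdev
        BY user
| eval upperBound = avg + stdev*2
| where latest > upperBound AND num_data_samples >= 7
```

The `num_data_samples >= 7` guard simply ensures there is at least a week of baseline before calling anything a spike.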
For the last one, it should be 1+01:02:10 to signify 1 day + 1 hour, 2 minutes and 10 seconds, but since you haven't shown your complete search, it is difficult to know why you are missing the "1+"
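As a quick illustration you can paste into Search (90130 seconds is 1 day, 1 hour, 2 minutes and 10 seconds, so the "duration" format should render it with the "1+" day prefix):

```
| makeresults
| eval diff_seconds = 90130
| eval duration = tostring(diff_seconds, "duration")
```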
This is an old thread so you may be more likely to get responses from a new question. Have you tried untarring the file rather than using the splunk install command? 7-zip can be used to extract the .spl file to %SPLUNK_HOME%\etc\apps
| eval newfield= mvjoin(mvappend(LocationName,EventName,ErrorCode,summary),",")
Hi @man03359 , if you want the values of the fields separated by comma, you should use eval in this way: | eval newfield=LocationName.",".EventName.",".ErrorCode.",".summary Ciao. Giuseppe
I have aws:cloudwatch:metrics configured to get the custom metrics. Is there any way to collect all the AWS CloudWatch log groups directly rather than specifying them one by one? When a new log group is created we have to reconfigure the input, and there is a chance of forgetting to add the new log group.
Hi, I have an output like this:

Location  EventName              ErrorCode         Summary
server1   Mssql.LogBackupFailed  BackupAgentError  Failed backup....
server2   Mssql.LogBackupFailed  BackupAgentError  Failed backup....

Now I am trying to combine the values of Location, EventName, ErrorCode and Summary into one field called "newfield", separated by a comma "," or ";". I am trying this command:

| eval newfield= mvappend(LocationName,EventName,ErrorCode,summary)

but the output it gives is:

server1 Mssql.LogBackupFailed BackupAgentError Failed backup....

The output I am expecting is:

server1,Mssql.LogBackupFailed,BackupAgentError,Failed backup
Hi @SplunkingKnight, not sure what I am doing wrong here. I am setting it in local on my app, but I am getting this on startup, and the colors are still very bright. I am using this to apply it; any help would be great.
Yes, I use ansible to push the app to the deployer.    Then from within ansible I run the splunk apply shcluster.  
Hi @att35, I usually use the different names and coalesce solution in a calculated field. Ciao. Giuseppe
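To make that concrete, a sketch of the calculated-field approach in props.conf (stanza and field names are hypothetical, not from the thread):

```
# props.conf
[my_sourcetype]
# Each extraction writes to its own field; the calculated field
# merges them into a single "exception" field at search time.
EVAL-exception = coalesce(exception1, exception2)
```

`coalesce` returns the first non-null argument, so whichever extraction matched wins.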
that worked for 2 results but not for the last one   
We have application data coming from Apache Tomcat servers and have a regex in place to extract the exception name. But some Tomcats send data in a slightly different format, and the extraction doesn't work for them. I have an updated regex ready for these formats, but I want to keep the field name the same, i.e. exception. How do I manage multiple extractions against the same sourcetype while keeping the field name the same? If I add these regexes in transforms, would they conflict with each other? Or should I create separate fields, such as exception1 and exception2, and then use coalesce to merge them into a single field?
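For what it's worth, multiple transforms can coexist against the same sourcetype as long as each regex captures into its own field name; they only conflict if they write the same field. A hypothetical sketch (the regexes are invented for illustration, not taken from the actual Tomcat data):

```
# transforms.conf
[extract_exception_format1]
REGEX = Exception:\s+(?<exception1>\S+)

[extract_exception_format2]
REGEX = exception=(?<exception2>\S+)

# props.conf
[tomcat:app]
REPORT-exceptions = extract_exception_format1, extract_exception_format2
# Merge whichever variant matched into a single field
EVAL-exception = coalesce(exception1, exception2)
```

Only one of the two capture groups will match a given event, so the coalesce yields a single clean `exception` value.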
Again - Splunk won't find something that's not there, because how could it? You need a list of what you expect, then a list of what you have, and you compare the two lists. If Splunk doesn't have something, it can't tell you what it is. See the link I pointed you to. The question is how you compile that list. You're saying that you have specific sourcetypes associated with indexes, so you should have some table. Upload this table to Splunk as a lookup and use the lookup to compare with your search results.
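A common shape for that comparison, assuming the expected list has been uploaded as a lookup named expected_sources.csv with columns index and sourcetype (names are hypothetical):

```
| tstats count WHERE index=* BY index sourcetype
| append [| inputlookup expected_sources.csv | eval count=0]
| stats sum(count) AS events BY index sourcetype
| where events = 0
```

Expected pairs that actually received data get a nonzero sum; rows surviving the final `where` are the expected index/sourcetype pairs with no events.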
@pacifikn Were you able to find a solution for this requirement? I have a similar requirement; if you found a solution, kindly help.
Have you checked if any firewalls are blocking connections to Splunk Cloud?  What does splunkd.log say? Please confirm the user that installed the UF.  Windows does not have 'root'.
Splunk stores that information in the "fishbucket" at /opt/splunkforwarder/var/lib/splunk/fishbucket/splunk_private_db.  That database cannot be changed or moved, but you should be able to backup and restore it.
We have nearly 700 indexes configured in Splunk and more than 1000 sourcetypes associated with them. I need to find out which indexes and sourcetypes are not used in any saved search, dashboard, macro, ad-hoc search, or alert. I looked into the audit index for the last 90 days but didn't get an accurate result. I need a Splunk query to produce a report showing the unused indexes and sourcetypes.
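There is no single built-in report for this, but one rough approach is to compare the indexes that hold data against the index names that appear in searches logged in _audit. A sketch under those assumptions (the rex is a heuristic and will miss indexes reached via macros, eventtypes, or role-based default indexes, which would need to be checked separately):

```
| tstats count WHERE index=* BY index
| search NOT
    [ search index=_audit action=search info=granted earliest=-90d search=*
      | rex field=search max_match=0 "index\s*=\s*\"?(?<index>[A-Za-z0-9_\-]+)"
      | mvexpand index
      | dedup index
      | fields index ]
```

The outer tstats lists indexes that actually contain events; the subsearch collects every literal index name referenced in audited searches over 90 days, so what remains are data-bearing indexes nobody searched.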
Hello everyone! One of the source types contains messages with no timestamp:

<172>hostname: -Traceback: 0x138fc51 0x13928fa 0x1399b28 0x1327c33 0x3ba6c07dff 0x7fba45b0339d

To resolve this problem, I created a transform rule that successfully eliminates this "junk" before indexing:

[wlc_syslog_rt0]
REGEX = ^<\d+>.*?:\s-Traceback:\s+
DEST_KEY = queue
FORMAT = nullQueue

But I still get messages indicating that timestamp extraction failed:

01-31-2024 15:08:17.539 +0300 WARN DateParserVerbose [17276 merging_0] - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (20) characters of event. Defaulting to timestamp of previous event (Wed Jan 31 15:08:05 2024). Context: source=udp:1100|host=172.22.0.11|wlc_syslog|\r\n 566 similar messages suppressed. First occurred at: Wed Jan 31 15:03:13 2024

All events from this sourcetype look like this:

<172>hostname: *spamApTask0: Jan 31 12:58:47.692: %LWAPP-4-SIG_INFO1: [PA]spam_lrad.c:56582 Signature information; AP 00:57:d2:86:c0:30, alarm ON, standard sig Auth flood, track per-Macprecedence 5, hits 300, slot 0, channel 1, most offending MAC 54:14:f3:c8:a1:b3

Before asking, I tried to find events without a timestamp using the regex and cluster commands but didn't find anything. So: is it normal behavior that Splunk reports the missing timestamp before the event moves to the nullQueue, or did I do something wrong?
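If it helps: this ordering is expected as far as I understand the ingestion pipeline. Timestamp extraction happens in the aggregation stage (note the "merging_0" thread in the warning) before the typing stage applies TRANSFORMS-based nullQueue routing, so the Traceback events still trigger the warning even though they are dropped afterwards. One way to quiet the warnings for the events being kept is to pin the timestamp location in props.conf; a sketch with illustrative values matching the sample event above (the stanza name assumes the sourcetype is wlc_syslog, and the TIME_PREFIX regex is an assumption, not tested against real data):

```
# props.conf
[wlc_syslog]
# Skip past "<172>hostname: *spamApTask0: " to reach "Jan 31 12:58:47.692"
TIME_PREFIX = \*\w+:\s
TIME_FORMAT = %b %d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```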
Hi @davidwaugh, as @ITWhisperer said, it isn't always best practice to have an asterisk at the beginning and the end of a field value, but for the index field it isn't a grave sin. I'm curious to understand why you have so many indexes: indexes aren't database tables. In Splunk you usually use different indexes only when you need different retention periods or different access grants, so why do you have so many? With many indexes you gain no advantage and create many management problems. So I suggest redesigning your data structure to use fewer indexes; you can differentiate data flows using sourcetype and other fields. Ciao. Giuseppe
It is not clear what you are trying to achieve when _time is from the previous day. Also, note that you could consider using | eval time_difference=tostring(now() - _time, "duration")