All Posts

Hi @Twagner79, as @richgalloway and @isoutamo also said, there's no benefit to using an intermediate forwarder as a concentrator; it's better to send logs directly to the indexers. The only application of this solution that I know of (and have used) is when you have forwarders in a restricted network and you don't want to open many firewall routes between all the forwarders and the indexers, but that shouldn't be your case. In addition, adding an extra layer doesn't reduce latency but increases it, and at the same time it doesn't improve integrity, which is already guaranteed by the forwarders' local cache. Ciao. Giuseppe
Hi @curiouspuppet, if none of the downloadable versions works for you, you can request an older version only from Splunk Support. Ciao. Giuseppe
Hi @jjohn149, please try something like this:

index=osp source=xxx EVENT_TYPE=xxx EVENT_SUBTYPE=xxx PLNF=* REN=INT OKELS=""
| eval example=case(
    HTSZ="R" AND NOT EHUH="FIERY", "example 3",
    HTSZ="R", "example 2",
    true(), "example 1")
| eval DATE = strftime(strptime(BADAT, "%Y%m%d"), "%Y-%m-%d")
| stats
    count(eval(example="example 1")) AS example1_count
    count(eval(example="example 2")) AS example2_count
    count(eval(example="example 3")) AS example3_count
    BY FNHB FNPO DATE
| stats
    sum(example1_count) AS "example 1"
    sum(example2_count) AS "example 2"
    sum(example3_count) AS "example 3"
    BY DATE

Ciao. Giuseppe
1. You should not use names beginning with an underscore for your indexes. A leading underscore denotes Splunk's internal indexes, as with _metrics - that's Splunk's internal metrics index. 2. Retention period is one thing, but if you exceed the index size limit, the oldest bucket will get rolled to frozen (by default it will be deleted). As firewall logs (assuming you're logging network sessions) are typically very "noisy", that's what I'd suspect. If you have an all-in-one setup, the easiest way to check the index size is to go to Settings -> Indexes.
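A minimal indexes.conf sketch showing both controls in play here - the index name, paths, and values are placeholders, not settings taken from the thread:

[firewall]
homePath   = $SPLUNK_DB/firewall/db
coldPath   = $SPLUNK_DB/firewall/colddb
thawedPath = $SPLUNK_DB/firewall/thaweddb
# Retention: buckets older than ~90 days roll to frozen (deleted by default)
frozenTimePeriodInSecs = 7776000
# Size cap: if the index exceeds ~500 GB, the oldest bucket rolls to frozen
# even if it has not yet reached the retention period
maxTotalDataSizeMB = 512000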
Then @KendallW's answer should work with a minor change to outputs.conf. You should just use the default group, put all those indexers in it, and leave any index definitions out of it.
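A minimal outputs.conf sketch of that setup, assuming placeholder hostnames and the default receiving port 9997:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# All indexers in the one default group; the forwarder auto load-balances
# across them, and no index-based routing is defined
server = indexer1:9997, indexer2:9997, indexer3:9997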
Hi @JJE, one additional question: did you receive logs until the 31st of July, and did they stop on the 1st of August? If so, the issue is that you're receiving logs from your firewalls with a European date format (dd/mm/yyyy) and you didn't declare the date format. In this case Splunk tries to recognize the timestamp, and it did so until the 31st of July using the standard American format (mm/dd/yyyy), so 01/08/2024 is read as the 8th of January. Force the time format in props.conf for that sourcetype: TIME_FORMAT = %d/%m/%Y %H:%M:%S If that doesn't solve it, could you share a sample of your logs and your props.conf? The indexes.conf isn't relevant to the time format. Just for your information: an index in Splunk is only a container for logs; it holds no information about the logs themselves. In fact, you can store different logs in the same index: an index isn't a database table where you have to define every data field. Ciao. Giuseppe
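For context, a sketch of a fuller props.conf stanza around that setting - the sourcetype name and the TIME_PREFIX/MAX_TIMESTAMP_LOOKAHEAD values are assumptions to adapt to the real log layout:

[your_firewall_sourcetype]
# Parse dd/mm/yyyy explicitly, so 01/08/2024 is read as the 1st of August
TIME_FORMAT = %d/%m/%Y %H:%M:%S
# Timestamp starts at the beginning of the event (adjust if prefixed)
TIME_PREFIX = ^
# dd/mm/yyyy HH:MM:SS is 19 characters; don't scan beyond it
MAX_TIMESTAMP_LOOKAHEAD = 19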
Hi there, It should work with IP addresses. If your data is going through an HF before reaching an indexer, then the config should be applied on the HF. Let me know if it works for you! Cheers, David
Then, how do I change the field name from attach_filename{} to file_name?
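One common way to do that at search time, sketched on the assumption that the field is already extracted (the quotes are needed because of the {} characters):

index="botsv2" sourcetype="stream:smtp" attach_filename{}="*"
| rename "attach_filename{}" AS file_name

To make the rename automatic for every search, a FIELDALIAS in props.conf can do the same thing, as sketched further down the thread.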
OK. Apart from the fact that you're routing to servers (which - if these are clustered indexers - should replicate the buckets), not redirecting to indexes (an indexer is not the same as an index), let me point out two things: 1) You should not use the main index. It comes configured by default so that something exists in the environment, but you should instead create properly configured indexes according to your needs. 2) Do you _need_ to split the data into indexes? (The two main reasons for splitting data into indexes are access rights and retention periods.) That's not the same as using two different sourcetypes for two different kinds of data (which you should definitely do if the data formats do indeed differ).
So, you already have attach_filename{} extracted by Splunk. No need for extra work. Is this correct? To answer your question about the two searches: when you add an additional filter, you SHOULD expect the result to change. It is obvious that not all events have the attach_filename{} field populated. If you do index="botsv2" sourcetype="stream:smtp" attach_filename{}="*" you only select those events with this field. Without attach_filename{}="*", you pick up every event, including those that do not have attach_filename{}.
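A quick way to verify this from the data itself - a sketch using the field name from the thread (the single quotes let eval reference a field with {} in its name):

index="botsv2" sourcetype="stream:smtp"
| stats count AS total_events count(eval(isnotnull('attach_filename{}'))) AS events_with_attachments

The gap between the two counts is exactly the set of events the filtered search drops.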
OK. We need more info. "our log collector picks up an udp packets from the iDRACs." - does that mean that you can receive logs (how?) from the iDRACs but they don't get ingested into Splunk? Or does it mean that you get some other UDP packets (on different ports?), not your desired syslogs? How did you verify it?
Hello, I am new to Splunk. I wish to use the sign-in information from Azure AD/Entra ID. Is there a way to get these logs (sign-in logs) in real time? Or perhaps even the syslog for sign-in activity? I have been through Microsoft Log Analytics Workspace; it suggests the latency for this is 20 seconds to 3 minutes. Is there a way to reduce this? Is there documentation confirming the latency limits?
You're right. I am trying to extract fields from JSON data. I used botsv2 data, with the "stream:smtp" sourcetype. This is my _raw data (from the search index="botsv2" sourcetype="stream:smtp"):

{"endtime":"2017-08-31T22:56:56.070751Z","timestamp":"2017-08-31T22:56:56.070751Z","ack_packets_in":0,"ack_packets_out":0,"bytes":72,"bytes_in":0,"bytes_out":72,"capture_hostname":"matar","client_rtt":0,"client_rtt_packets":0,"client_rtt_sum":0,"data_packets_in":0,"data_packets_out":1,"dest_ip":"172.31.38.181","dest_mac":"06:6A:51:FA:0A:B0","dest_port":25,"duplicate_packets_in":0,"duplicate_packets_out":0,"flow_id":"b6b9eb1b-e8e1-4cec-ab3c-f7223adc490a","greeting":"ip-172-31-38-181.us-west-2.compute.internal ESMTP Postfix (Ubuntu)","missing_packets_in":0,"missing_packets_out":0,"network_interface":"eth0","packets_in":0,"packets_out":1,"protocol_stack":"ip:tcp:smtp","reply_time":0,"request_ack_time":0,"request_time":0,"response_ack_time":24624,"response_code":220,"response_time":0,"sender_server":"ip-172-31-38-181.us-west-2.compute.internal","server_agent":"ESMTP Postfix (Ubuntu)","server_response":"220 ip-172-31-38-181.us-west-2.compute.internal ESMTP Postfix (Ubuntu)","server_rtt":0,"server_rtt_packets":0,"server_rtt_sum":0,"src_ip":"104.47.34.68","src_mac":"06:E3:CC:18:AA:33","src_port":37952,"time_taken":0,"transport":"tcp"}

I have one more question: why are the results of the search index="botsv2" sourcetype="stream:smtp" different from the results of index="botsv2" sourcetype="stream:smtp" attach_filename{}="*"? The field I want to extract exists in the results of the search with attach_filename{}="*":

{"endtime":"2017-08-30T15:08:00.075698Z","timestamp":"2017-08-30T15:07:59.774655Z","ack_packets_in":0,"ack_packets_out":31,"attach_disposition":["attachment"],"attach_filename":["Saccharomyces_cerevisiae_patent.docx"],"attach_size":[142540],"attach_size_decoded":[104162],"attach_transfer_encoding":["base64"],"attach_type":["application/vnd.openxmlformats-officedocument.wordprocessingml.document"],"bytes":155976,"bytes_in":155939,"bytes_out":37,"capture_hostname":"matar","client_rtt":0,"client_rtt_packets":0,"client_rtt_sum":0,"content":["DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;\r\n d=jacobsmythe111.onmicrosoft.com; s=selector1-froth-ly;\r\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version;
My first reaction is: regex is the wrong solution.  This looks like part of a JSON document.  Treating structured data as a text string is just calling for trouble down the road.  Can you share raw events? (Anonymize as needed.) Or, if this is a developer's joke, and you only have this string in a field, let's call it field1, you can still use Splunk's JSON capability to extract data.  It's much more robust.  Something like this:

| eval field1 = "{" . field1 . "}"
| spath input=field1

Your mock data will give:

attach_filename{}                      field1
image.png                              {"attach_filename":["image.png","GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent"]}
GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent
image.png                              {"attach_filename":["image.png","Office2016_Patcher_For_OSX.torrent"]}
Office2016_Patcher_For_OSX.torrent
image.png                              {"attach_filename":["image.png"]}
Saccharomyces_cerevisiae_patent.docx   {"attach_filename":["Saccharomyces_cerevisiae_patent.docx"]}

Here is an emulation you can play with and compare with real data, if your developers really play such a joke:

| makeresults
| fields - _*
| eval field1 = split("\"attach_filename\":[\"image.png\",\"GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent\"] \"attach_filename\":[\"image.png\",\"Office2016_Patcher_For_OSX.torrent\"] \"attach_filename\":[\"image.png\"] \"attach_filename\":[\"Saccharomyces_cerevisiae_patent.docx\"]", " ")
| mvexpand field1
``` data emulation ```
We are able to send iDRAC syslog to Splunk successfully with firmware version 3.xx, but with firmware version 5.xx we aren't successful. Any chance it is related to the firmware and something I need to configure? Both configurations are the same, and our log collector picks up UDP packets from the iDRACs.
Does anyone know why the eventtype

[wineventlog_index_windows]
definition = index=wineventlog OR index=main

doesn't return anything? Am I doing something wrong in the eventtypes.conf file, or should I declare it somewhere else as well? Thank you very much
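One thing worth checking here, going by the eventtypes.conf spec: event types are defined with a search attribute, while definition is the attribute macros use in macros.conf. A minimal sketch of the stanza as eventtypes.conf expects it:

[wineventlog_index_windows]
# Event types use "search", not "definition"
search = index=wineventlog OR index=main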
Hi @scelikok, Thanks. I checked the URL; it is an error page, and I can't locate the requestid in the Splunk internal logs. Regarding set_permissions.sh, yes, I have run it as root on my Splunk instance. Just to make it clear: I don't need to run a similar script on the Windows server where I deployed the Splunk_TA_Stream app, correct? Thanks again.
I would like to automatically extract fields using props.conf. When there is a pattern like the one below, what I want to extract is each file name. attach_filename:[""] contains one or two file names. How can I extract all the file names?

"attach_filename":["image.png","GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent"]
"attach_filename":["image.png","Office2016_Patcher_For_OSX.torrent"]
"attach_filename":["image.png"]
"attach_filename":["Saccharomyces_cerevisiae_patent.docx"]

The extracted field should be stored as file_name:

file_name: image.png, Saccharomyces_cerevisiae_patent.docx, GoT.S7E2.BOTS.BOTS.BOTS.mkv.torrent, Office2016_Patcher_For_OSX.torrent
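Since the events turn out to be JSON (see the answers above), a props.conf sketch that avoids regex entirely; the sourcetype name comes from the thread, but whether KV_MODE is already set for it in your environment is an assumption to verify:

[stream:smtp]
# Extract all JSON fields at search time; attach_filename{} becomes a
# multivalue field holding every file name in the array
KV_MODE = json
# Alias the awkward name to file_name (quotes needed for the {} characters)
FIELDALIAS-file_name = "attach_filename{}" AS file_name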
I have one forwarder and three indexer servers. Each indexer server holds the indexes index=card, index=bank, and index=error.
I can't answer your specific question other than to ask if you have inputs or tokens, which are discussed by other users who have similar questions. However, there is a relatively new app https://splunkbase.splunk.com/app/7171 which may be of interest, as it will give you more control over the output.