
All Posts

We are using Splunk Ingest Actions to copy data to an S3 bucket. After reviewing various articles and conducting some tests, I've successfully forwarded data to the S3 bucket, where it's currently being stored under the sourcetype name. However, there is a requirement to store these logs under the hostname instead of the sourcetype, for improved visibility and operational efficiency. Although there isn't a direct way to accomplish this through the Ingest Actions GUI, I believe it can be achieved using props and transforms. Can someone assist me with this?
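A hedged sketch of the props/transforms idea, since the S3 prefix follows the sourcetype: an index-time transform can copy the host metadata into the sourcetype key, so the partitioning effectively keys on host. The stanza name [your:sourcetype] is a placeholder, and note the side effect: this genuinely rewrites the sourcetype, so local indexing and everything else downstream will see the host value as the sourcetype. Verify the interaction with your Ingest Actions ruleset in a test environment first.

    # props.conf -- [your:sourcetype] is a placeholder for the incoming sourcetype
    [your:sourcetype]
    TRANSFORMS-partition_by_host = set_sourcetype_to_host

    # transforms.conf
    [set_sourcetype_to_host]
    # Read the host from event metadata and write it into the sourcetype key
    SOURCE_KEY = MetaData:Host
    REGEX = ^host::(.*)$
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::$1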
I have the same problem, but only with ADFS audit logs. Maybe something needs to be changed on the Windows server directly instead of touching Splunk here?
Hello Splunk Community,
After configuring SSL, when I execute the following command:

    openssl s_client -showcerts -connect host:port

I am encountering the following error:

    803BEC33F07F0000:error:0A000126:SSL routines:ssl3_read_n:unexpected eof while reading:../ssl/record/rec_layer_s3.c:317:

Could anyone help me understand why I am seeing this error and assist me in resolving it? Thank you in advance for your help. Best regards,
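A hedged diagnostic step (host and 8089 below are placeholders): "unexpected eof while reading" usually means the server closed the connection mid-handshake, often a TLS version or cipher mismatch, or a port that is not actually speaking TLS. Forcing a TLS version and sending SNI can narrow it down:

    # Force TLS 1.2 and send SNI; adjust host and port to your environment
    openssl s_client -connect host:8089 -servername host -tls1_2 -showcerts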
We have deployed Splunk Enterprise on Huawei Cloud. After running a security baseline check, we have discovered several risk items targeting MongoDB:
- Use a Secure TLS Version
- Disable Listening on the Unix Socket
- Set the Background Startup Mode
- Disable the HTTP Status Interface
- Configure bind_ip
- Disable Internal Command Test
- Do Not Omit Server Name Verification
- Enable the Log Appending Mode
- Restrict the Permission on the Home Directory of MongoDB
- Restrict the Permission on the Bin Directory of MongoDB
- Check the FIPS Mode Option
I have looked for related documentation but cannot find any. I am wondering if I should create a mongodb.conf for it. Thanks
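A hedged note, assuming these findings refer to the mongod process backing Splunk's KV store: Splunk launches and configures that mongod itself, so a hand-written mongodb.conf would not be read; hardening normally goes through server.conf instead. For example, the TLS version item may map to the sslVersions setting. The sketch below is illustrative, not a complete hardening guide:

    # server.conf -- illustrative sketch
    [sslConfig]
    # Restrict TLS to 1.2; the KV store inherits Splunk's SSL configuration
    sslVersions = tls1.2

    [kvstore]
    # KV store port (8191 is the default)
    port = 8191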
Hi, I have an app that is used for all the configurations that we have in Splunk Cloud. Quite a lot of users on our instance are admins (for good reasons that I don't want to get into). Because not all of those users are really "developer enthusiasts", they sometimes make configuration changes through the GUI, for example disabling a search in the GUI instead of properly in the app (with a pipeline etc.) when they don't need it anymore. To try to make this impossible I changed the default.meta to:

    []
    access = read : [ * ], write : [ ]
    export = system

But this doesn't seem to work and people can still disable saved searches (and many other things). Is there any way to disable write entirely for any content in the app?
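A hedged explanation of why this may not bite: any role holding the admin_all_objects capability bypasses object ACLs entirely, so write : [ ] in default.meta cannot stop stock admins; also, GUI edits land in local.meta, which takes precedence over default.meta. A sketch that locks one object type down to a hypothetical deploy-only role (content_owner is a made-up name):

    # metadata/default.meta -- sketch; "content_owner" is a hypothetical role
    [savedsearches]
    access = read : [ * ], write : [ content_owner ]
    export = system

Even then, anyone with admin_all_objects can still write; the robust fix is usually a custom role without that capability.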
I am trying to use the Simple SNMP Getter add-on, but the response looks like the picture. Please advise. This is the input:
This seems confusing, as Splunk hasn't attempted the MongoDB upgrade yet; I would expect it to fail after the upgrade if this were the case. Edit: I ran HWiNFO on the box, and it shows AVX, AVX2 and AVX-512 as supported, so I don't think this is the issue.
That's incorrect; it's a Server 2022 box.
Hi folks, I am trying to schedule a dashboard to send an email with its details periodically. I am able to do it, but the output is not very legible: the attachment, in either .pdf or .png format, is blurry when zoomed in even slightly to read things like table data and graph points. The dashboard contains multiple panels: tables, bar graphs, pie charts, and other kinds of graphs as well. Does increasing the font size of the data help? I don't want to alter the dashboard and make a mess unless it will surely solve the matter. I am not very good at dashboarding and am new to Splunk as well. Please advise.
Sure. Thank you.
Hi @Miguel3393,
at first, don't use the search command after the main search, because it makes your search slower!
Then, you already calculated the difference in seconds in the field diffTime; you only have to add this field to the table command.
Also, I'm not sure that your solution to express the duration in minutes and seconds is correct; you should use:

    | eval Duracion=tostring(diffTime,"duration")

In other words, please try this (note that diffTime is divided by 1000, since ConnectTime and DisconnectTime are epoch milliseconds and tostring(...,"duration") expects seconds):

    index="cdr" ("Call.TermParty.TrunkGroup.TrunkGroupId"="2811" OR "Call.TermParty.TrunkGroup.TrunkGroupId"="2810") "Call.ConnectTime"=* "Call.DisconnectTime"=*
    | lookup Pais Call.RoutingInfo.DestAddr OUTPUT Countrie
    | eval Disctime=strftime('Call.DisconnectTime'/1000,"%m/%d/%Y %H:%M:%S %Q")
    | eval Conntime=strftime('Call.ConnectTime'/1000,"%m/%d/%Y %H:%M:%S %Q")
    | eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')/1000
    | eval Duracion=tostring(diffTime,"duration")
    | table Countrie Duracion diffTime

Ciao.
Giuseppe
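A quick hedged check of the duration formatting, runnable anywhere:

    | makeresults
    | eval diffTime=3725
    | eval Duracion=tostring(diffTime,"duration")
    ``` Duracion becomes "01:02:05", i.e. 1 hour, 2 minutes, 5 seconds ```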
Hi @anu1,
the dashboard is very easy, but it requires preparation that depends on the number of data sources that you want to display in this dashboard. In a few words, you should:
- analyze your data sources and define the conditions for LOGIN, LOGOUT and LOGFAIL; e.g., for Windows, login is EventCode=4624, logout is EventCode=4634 and logfail is EventCode=4625,
- create an eventtype for each condition, assigning a tag (LOGIN, LOGOUT or LOGFAIL) to each eventtype (see the conf sketch just after this post),
- create some field aliases so the fields to display share the same names (e.g. UserName, IP_Source, Hostname, etc.),
- create a dashboard running a search like the following:

    tag=$tag$ host=$host$ UserName=$user$
    | table _time tag HostName UserName IP_Source

The three tags in the main search come from three inputs. Let me know if you need help creating the dashboard; it's very easy.
Ciao.
Giuseppe
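A minimal hedged sketch of the eventtype/tag plumbing for the Windows login case; the stanza names are placeholders:

    # eventtypes.conf
    [windows_login]
    search = sourcetype=WinEventLog EventCode=4624

    # tags.conf
    [eventtype=windows_login]
    LOGIN = enabled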
@MikeMakai Please run `tcpdump` to verify if the expected logs are being received. If the expected output is observed, we can proceed to check from the Splunk side. If this reply helps you, Karma would be appreciated.
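For example, assuming syslog arrives on UDP 514 (adjust interface and port to your input):

    # Listen on all interfaces for syslog traffic, without name resolution
    tcpdump -i any -n udp port 514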
@MikeMakai Could you share your `inputs.conf` file? Are you sending data directly from the FMC to Splunk, or is there an intermediate forwarder between your FMC and Splunk?
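For reference, a hedged sketch of what a direct-syslog stanza typically looks like; the port, sourcetype and index are placeholders and yours may differ:

    # inputs.conf -- illustrative only
    [udp://514]
    sourcetype = cisco:ftd:syslog
    index = network
    # Keep the device's own timestamp instead of prepending arrival time
    no_appending_timestamp = true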
Please do not use screenshots to illustrate text data; use a text table or a text box. But even the two index search screenshots are inconsistent, meaning there is no common dest_ip. You cannot expect all fields to be populated when there is no matching field value. This is basic mathematics. Like @bowesmana says, find a small number of events that you know have matching dest_ip in both indices, manually calculate what the output should be, then run the proposed searches on this small dataset. Here is a mock dataset loosely based on your screenshots but WITH matching dest_ip.

Row 1, from index=*firewall*:

    src_zone=trusted, src_ip=10.80.110.8, dest_zone=untrusted, dest_ip=152.88.1.76, transport=UDP, dest_port=53, app=dns_base, rule=whatever1, action=blocked, session_end_reason=policy_deny, packets_out=1, packets_in=0, src_translated_ip=whateverNAT, dvc_name=don'tmatter

Row 2, from index=*corelight*:

    dest_ip=152.88.1.76, server_name=whatever2, ssl_cipher=idon'tcare, ssl_version=TLSv3

Because your two searches operate on different indices, @gcusello's search can also be expressed with append (as opposed to OR) without much penalty. Like this:

    index="*firewall*" sourcetype=*traffic* src_ip=10.0.0.0/8
    | append [search index=*corelight* sourcetype=*corelight* server_name=*microsoft.com*]
    | fields src_zone, src_ip, dest_zone, dest_ip, server_name, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
    | stats values(*) AS * BY dest_ip
    | rename src_zone AS From, src_ip AS Source, dest_zone AS To, dest_ip AS Destination, server_name AS SNI, transport AS Protocol, dest_port AS Port, app AS "Application", rule AS "Rule", action AS "Action", session_end_reason AS "End Reason", packets_out AS "Packets Out", packets_in AS "Packets In", src_translated_ip AS "Egress IP", dvc_name AS "DC"

Using the mock dataset, the output is:

    Destination=152.88.1.76, Action=blocked, Application=dns_base, DC=don'tmatter, Egress IP=whateverNAT, End Reason=policy_deny, From=trusted, Packets In=0, Packets Out=1, Port=53, Protocol=UDP, Rule=whatever1, SNI=whatever2, Source=10.80.110.8, To=untrusted

This is a full emulation for you to play with and compare with real data:

    | makeresults format=csv data="src_zone, src_ip, dest_zone, dest_ip, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
    trusted, 10.80.110.8, untrusted, 152.88.1.76, UDP, 53, dns_base, whatever1, blocked, policy_deny, 1, 0, whateverNAT, don'tmatter"
    | table src_zone, src_ip, dest_zone, dest_ip, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
    | eval index="*firewall*"
    ``` the above emulates index="*firewall*" sourcetype=*traffic* src_ip=10.0.0.0/8 ```
    | append [| makeresults format=csv data="server_name, dest_ip, ssl_version, ssl_cipher
    whatever2, 152.88.1.76, TLSv3, idon'tcare"
    | eval index="*corelight*"
    ``` the above emulates index=*corelight* sourcetype=*corelight* server_name=*microsoft.com* ```]
    | fields src_zone, src_ip, dest_zone, dest_ip, server_name, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
    | stats values(*) AS * BY dest_ip
    | rename src_zone AS From, src_ip AS Source, dest_zone AS To, dest_ip AS Destination, server_name AS SNI, transport AS Protocol, dest_port AS Port, app AS "Application", rule AS "Rule", action AS "Action", session_end_reason AS "End Reason", packets_out AS "Packets Out", packets_in AS "Packets In", src_translated_ip AS "Egress IP", dvc_name AS "DC"
Please share the search so far and some sample data then we might be able to help you with the search query.
Hey team, I have a requirement: I have to create a Splunk dashboard to report the number of logins and logouts. The inputs for the dashboard should be: a time picker, Customer, and Host Name dropdowns. The data can be identified either via probe data or via Splunk metrics. The output should be shown as a time graph of the number of logins and logouts, broken down by time, host, and customer, and the query should use the input tokens. I have tried multiple queries but am not getting the exact data. Can you help me with the query? Thanks.
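A hedged starting point, assuming login/logout events are tagged as described in @gcusello's reply above and that $host_tok$ and $customer_tok$ are the dropdown tokens (all names here are placeholders):

    index=your_index (tag=LOGIN OR tag=LOGOUT) host=$host_tok$ customer=$customer_tok$
    | timechart count by tag

The time picker applies automatically when the panel's search is bound to the dashboard's shared time token.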
@richgalloway The perfect solution, exactly what I was looking for. Thank you
Thanks Kiran
I am sending syslog data to Splunk from Cisco FMC. I am using UCAPL compliance and therefore cannot use eStreamer. The data is being ingested into Splunk, and the dashboard is showing some basic events, like connection events, volume file events, and malware events. When I try to learn more about these events, it doesn't drill down into more info. For example, when I click on the 14 Malware Events and choose Open in Search, it just shows the number of events; there is no information regarding these events. When I click on Inspect, it shows command.tstats at 13 and command.tstats.execute_output at 1. It doesn't provide further clarity regarding the malware events. When I view the Malware Files dashboard on the FMC, it shows no data for malware threats. So based on the FMC, it seems that the data in the Splunk dashboard is incorrect, or at least interprets malware events differently from the FMC dashboard.
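Since Inspect shows command.tstats, the panel is counting accelerated data-model results rather than raw events. A hedged way to see what is behind the count, assuming the panel uses the CIM Malware data model:

    | tstats summariesonly=false count from datamodel=Malware.Malware_Attacks by Malware_Attacks.signature, Malware_Attacks.dest

If this returns rows but Open in Search shows only a count, the drilldown and panel searches disagree; if it returns nothing, the syslog sourcetype is probably not CIM-mapped to the Malware data model.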