All Posts



Thanks for the response, @gcusello. This is the result I get with what you mention. Regards.
Hi all, Was wondering if there was a way to manually grab the threat intelligence updates for Splunk ES (we are on 7.3.1). Specifically, the intelligence download of "mitre_attack" (a threatlist download). Our Splunk environment is on-prem and air-gapped, so there is not really any way to create an external connection to the internet. Any ideas or advice would be appreciated.
Have you tried it with Splunk's openssl or the OS's openssl? You could/should try it with:
splunk cmd openssl s_client -showcerts -connect host:port
Please validate your data. Based on your screenshots, it seems that when error code 404 occurs, the field message.incomingRequest.lob does not exist in these events.
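If it helps to confirm that, here is a minimal verification sketch, assuming the same index, sourcetype, and field names used in this thread; it counts how many of the 404 events actually carry the lob field:
index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound" "message.backendCalls{}.responseCode"=404
| eval lob_present=if(isnull('message.incomingRequest.lob'), "missing", "present")
| stats count by lob_present
If most of the 404 events land in the "missing" bucket, that would explain the NULL column in your chart.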
There is still no response for the 404 status code; it is only coming for the query below:
index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound" | chart count by "message.backendCalls{}.responseCode", "message.incomingRequest.lob"
Add message.incomingRequest.lob=* to your base search to filter for events that contain the field message.incomingRequest.lob:
index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound" "message.incomingRequest.lob"=* | chart count by "message.backendCalls{}.responseCode", "message.incomingRequest.lob"
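Alternatively, if you would rather keep those events in the chart instead of filtering them out, a sketch using coalesce could label the missing field explicitly (the "unknown" label here is just an assumption, pick whatever reads best):
index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound"
| eval lob=coalesce('message.incomingRequest.lob', "unknown")
| chart count by "message.backendCalls{}.responseCode", lob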
Hello hello! There may be a simpler way to get this working, but my first thought is to use something like this:
Mysearch | eval Guest=if(sid=22, "BOT", "Others") | convert timeformat="%Y-%m-%d" ctime(_time) AS date | stats count by date, Guest | eventstats sum(count) as total by date | eval percentage=round((count/total)*100, 0) | eval count=count." (".percentage."%)" | xyseries Guest date count
Edit: Yep, here is a version that's a little shorter:
Mysearch | eval Guest=if(sid=22, "BOT", "Others") | bin _time span=1d | stats count by _time Guest | eventstats sum(count) as total by _time | eval percentage=round((count/total)*100, 0), count=count." (".percentage."%)" | xyseries Guest _time count
Thanks. Is there any way we can re-adjust the query so that only the correct lob values come through? There are 404 status codes which should come for the URL shared below. When I try with message.backendCalls{}.endPoint it shows exactly where the 404 is coming from, but I want the result on the basis of LOB.
Please try:
index=<yourindex> sid=* | eval Guest=if(sid=22,"BOT","Others") | bin _time span=1d | eventstats count as totalevents by _time | eventstats count as guest_count by Guest _time | eval percentage=round((guest_count/totalevents)*100,2) | eval final_field = guest_count. "(" .percentage. " %)" | eval time=strftime(_time, "%Y-%m-%d") | chart values(final_field) over Guest by time
You have events where the field message.incomingRequest.lob does not exist but the field message.backendCalls{}.responseCode does exist. That's why the "NULL" value is set.
  index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound" | chart count by "message.backendCalls{}.responseCode"... See more...
  index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound" | chart count by "message.backendCalls{}.responseCode", "message.incomingRequest.lob" Issue is there is no response for value NULL  Under field "message.incomingRequest.lob" but its giving NULL in above shared result, Any idea? or any instruction for debugging so that we can find the root cause. Let me know if more details is needed.  
Hi, I am using a search:
Mysearch |eval Guest=if(sid=22,BOT,Others) | convert timeformat="%Y-%m-%d" ctime(_time) AS date |chart count over Guest by date
And the result is like below:
Guest     2024-12-18    2024-12-19
BOT       10            20
Others    90            80
Now I want to display the percentage of activity by Guest over date, maybe something like below:
Guest     2024-12-18    2024-12-19
BOT       10 (10%)      20 (20%)
Others    90 (90%)      80 (80%)
Could someone possibly help here? Many thanks
Hi @hcelep , let us know if we can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @anu1 , let us know if we can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi All, I have designed a Splunk query:
| inputlookup Expiry_details_list.csv | lookup SupportTeamEmails.csv Application_name OUTPUT Owner_Email_address Ops_Leads_Email_address Escalation_Contacts_Email_address | eval Expiry_Date = strptime(Expiry_date, "%m/%d/%Y") | eval Current_Time = now() | eval Expiry_Date_Timestamp = strftime(Expiry_Date, "%Y/%m/%d %H:%M:%S") | eval Days_until_expiry = round((Expiry_Date - Current_Time) / 86400, 0) | eval alert_type = case( Days_until_expiry <= 7, "Owner", Days_until_expiry <= 15, "Support", Days_until_expiry < 1, "Expired", Days_until_expiry > 15, "Others", true(), "None") | search alert_type != "None" | eval email_list = case( alert_type == "Owner", Escalation_Contacts_Email_address, alert_type == "Support", Ops_Leads_Email_address, alert_type == "Expired", mvappend(Owner_Email_address, Ops_Leads_Email_address, Escalation_Contacts_Email_address), true(), "None") | eval email_list = split(mvjoin(email_list, ","), ",") | eval cc_email_list = case( alert_type == "Owner", Owner_Email_address, alert_type == "Support", Owner_Email_address, true(), "None") | eval cc_email_list = split(mvjoin(cc_email_list, ","), ",") | dedup Application_name Environment email_list | eval email_recipient = mvdedup(email_list) | eval email_recipient = mvjoin(email_recipient, ",") | eval email_cc = mvdedup(cc_email_list) | eval email_cc = mvjoin(email_cc, ",") | table Application_name, Environment, Type, Sub_Type, Expiry_Date_Timestamp, Days_until_expiry, email_recipient, email_cc | fields - alert_type, Owner_Email_address, Ops_Leads_Email_address, Escalation_Contacts_Email_address
Now this is returning the output provided in the attached file. What I am expecting is that email_list should return only Escalation_Contacts_Email_address, and cc_email_list should merge Owner_Email_address and Ops_Leads_Email_address separated by a comma, when alert_type == "Expired". How do I get this with a Splunk query?
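Not a full answer, but a hedged sketch of how the relevant evals could be reordered, keeping your existing field names: case() returns the first clause that matches, so the "Expired" test has to come before the broader day thresholds, and the cc list for expired entries can be built with mvappend.
| eval alert_type = case(
    Days_until_expiry < 1, "Expired",
    Days_until_expiry <= 7, "Owner",
    Days_until_expiry <= 15, "Support",
    Days_until_expiry > 15, "Others",
    true(), "None")
| eval email_list = case(
    alert_type == "Expired", Escalation_Contacts_Email_address,
    alert_type == "Owner", Escalation_Contacts_Email_address,
    alert_type == "Support", Ops_Leads_Email_address,
    true(), "None")
| eval cc_email_list = case(
    alert_type == "Expired", mvappend(Owner_Email_address, Ops_Leads_Email_address),
    alert_type == "Owner", Owner_Email_address,
    alert_type == "Support", Owner_Email_address,
    true(), "None")
With that ordering, the later mvdedup/mvjoin steps in your search should then produce the single comma-separated cc string you describe for expired entries.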
We are utilizing Splunk Ingest actions to copy data to an S3 bucket. After reviewing various articles and conducting some tests, I've successfully forwarded data to the S3 bucket, where it's currently being stored with the Sourcetype name. However, there's a requirement to store these logs using the hostname instead of the Sourcetype for improved visibility and operational efficiency. Although there isn't a direct method to accomplish this through the Ingest actions GUI, I believe it can be achieved using props and transforms. Can someone assist me with this?
I have the same problem, only on ADFS audit logs. Maybe something to change on the Windows server directly instead of touching Splunk here?
Hello Splunk Community, After configuring SSL, when I execute the following command:
openssl s_client -showcerts -connect host:port
I am encountering the following error:
803BEC33F07F0000:error:0A000126:SSL routines:ssl3_read_n:unexpected eof while reading:../ssl/record/rec_layer_s3.c:317:
Could anyone help me understand why I am seeing this error and assist me in resolving it? Thank you in advance for your help. Best regards,
We have deployed Splunk Enterprise on Huawei Cloud. After conducting baseline checking, we have discovered several risk items targeting mongodb, as follows:
Rule: Use a Secure TLS Version
Rule: Disable Listening on the Unix Socket
Rule: Set the Background Startup Mode
Rule: Disable the HTTP Status Interface
Rule: Configure bind_ip
Rule: Disable Internal Command Test
Rule: Do Not Omit Server Name Verification
Rule: Enable the Log Appending Mode
Rule: Restrict the Permission on the Home Directory of MongoDB
Rule: Restrict the Permission on the Bin Directory of MongoDB
Rule: Check the FIPS Mode Option
I have checked for related documentation but cannot find any. I am wondering if I should create a mongodb.conf for it. Thanks
Hi, I have an app that is used for all the configurations that we have in Splunk Cloud. Quite a lot of users on our instance are admin (for good reasons that I don't want to get into). Now, because not all of those users are really "developer enthusiasts", they tend to sometimes make configuration changes through the GUI. For example, they disable a search in the GUI instead of nicely in the app (with a pipeline etc.) when they don't need it anymore. To try to make this impossible I changed the default.meta to:
[]
access = read : [ * ], write : []
export = system
But this doesn't seem to work and people can still disable savedsearches (and many other things). Is there any way to disable write entirely for any content in the app?