All Posts


Hello hello! There may be a simpler way to get this working, but my first thought is to use something like this:

Mysearch
| eval Guest=if(sid=22, "BOT", "Others")
| convert timeformat="%Y-%m-%d" ctime(_time) AS date
| stats count by date, Guest
| eventstats sum(count) as total by date
| eval percentage=round((count/total)*100, 0)
| eval count=count." (".percentage."%)"
| xyseries Guest date count

Edit: Yep, here is a version that's a little shorter:

Mysearch
| eval Guest=if(sid=22, "BOT", "Others")
| bin _time span=1d
| stats count by _time Guest
| eventstats sum(count) as total by _time
| eval percentage=round((count/total)*100, 0), count=count." (".percentage."%)"
| xyseries Guest _time count
Thanks. Is there any way we can re-adjust the query so that only correct lob values come through? There are 404 status codes which should come up for the URL shared below. When I try with message.backendCalls{}.endPoint it shows exactly where the 404 is coming from, but I want the result on the basis of LOB.
Please try:

index=<yourindex> sid=*
| eval Guest=if(sid=22,"BOT","Others")
| bin _time span=1d
| eventstats count as totalevents by _time
| eventstats count as guest_count by _time Guest
| eval percentage=round((guest_count/totalevents)*100,2)
| eval final_field = guest_count."(".percentage." %)"
| eval time=strftime(_time, "%Y-%m-%d")
| chart values(final_field) over Guest by time
You have events where the field message.incomingRequest.lob does not exist but the field message.backendCalls{}.responseCode does exist. That's why the "NULL" value is set.
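If the aim is simply to keep those events out of the chart, a minimal sketch (assuming you just want to drop events with no lob value rather than re-label them) would be to filter before charting:

index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound"
| where isnotnull('message.incomingRequest.lob')
| chart count by "message.backendCalls{}.responseCode", "message.incomingRequest.lob"

Alternatively, fillnull could be used on message.incomingRequest.lob to put those events under an explicit label such as "unknown" instead of NULL, if you prefer to keep them visible.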
  index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound" | chart count by "message.backendCalls{}.responseCode"... See more...
  index="uhcportals-prod-logs" sourcetype=kubernetes container_name="myuhc-sso" logger="com.uhg.myuhc.log.SplunkLog" message.ssoType="Inbound" | chart count by "message.backendCalls{}.responseCode", "message.incomingRequest.lob" Issue is there is no response for value NULL  Under field "message.incomingRequest.lob" but its giving NULL in above shared result, Any idea? or any instruction for debugging so that we can find the root cause. Let me know if more details is needed.  
Hi, I am using a search:

Mysearch
| eval Guest=if(sid=22,"BOT","Others")
| convert timeformat="%Y-%m-%d" ctime(_time) AS date
| chart count over Guest by date

And the result looks like this:

Guest     2024-12-18    2024-12-19
BOT       10            20
Others    90            80

Now I want to display the percentage of activity by Guest over date, maybe something like this:

Guest     2024-12-18    2024-12-19
BOT       10 (10%)      20 (20%)
Others    90 (90%)      80 (80%)

Could someone possibly help here? Many thanks
Hi @hcelep , let us know if we can help you more, or, please, accept one answer for the benefit of the other members of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi @anu1 , let us know if we can help you more, or, please, accept one answer for the benefit of the other members of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Hi All, I have designed a Splunk query:

| inputlookup Expiry_details_list.csv
| lookup SupportTeamEmails.csv Application_name OUTPUT Owner_Email_address Ops_Leads_Email_address Escalation_Contacts_Email_address
| eval Expiry_Date = strptime(Expiry_date, "%m/%d/%Y")
| eval Current_Time = now()
| eval Expiry_Date_Timestamp = strftime(Expiry_Date, "%Y/%m/%d %H:%M:%S")
| eval Days_until_expiry = round((Expiry_Date - Current_Time) / 86400, 0)
| eval alert_type = case(
    Days_until_expiry <= 7, "Owner",
    Days_until_expiry <= 15, "Support",
    Days_until_expiry < 1, "Expired",
    Days_until_expiry > 15, "Others",
    true(), "None")
| search alert_type != "None"
| eval email_list = case(
    alert_type == "Owner", Escalation_Contacts_Email_address,
    alert_type == "Support", Ops_Leads_Email_address,
    alert_type == "Expired", mvappend(Owner_Email_address, Ops_Leads_Email_address, Escalation_Contacts_Email_address),
    true(), "None")
| eval email_list = split(mvjoin(email_list, ","), ",")
| eval cc_email_list = case(
    alert_type == "Owner", Owner_Email_address,
    alert_type == "Support", Owner_Email_address,
    true(), "None")
| eval cc_email_list = split(mvjoin(cc_email_list, ","), ",")
| dedup Application_name Environment email_list
| eval email_recipient = mvdedup(email_list)
| eval email_recipient = mvjoin(email_recipient, ",")
| eval email_cc = mvdedup(cc_email_list)
| eval email_cc = mvjoin(email_cc, ",")
| table Application_name, Environment, Type, Sub_Type, Expiry_Date_Timestamp, Days_until_expiry, email_recipient, email_cc
| fields - alert_type, Owner_Email_address, Ops_Leads_Email_address, Escalation_Contacts_Email_address

This returns the output shown in the attached file. What I am expecting is that when alert_type == "Expired", email_list should return only Escalation_Contacts_Email_address, and cc_email_list should merge the email addresses of Owner_Email_address and Ops_Leads_Email_address, separated by a comma. How do I get this using a Splunk query?
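For reference, here is a minimal sketch of one way the case() statements could be adjusted (assuming the lookup fields are populated as named; note also that case() evaluates its conditions in order, so the Days_until_expiry < 1 test has to come before the <= 7 test for "Expired" to ever match):

| eval alert_type = case(
    Days_until_expiry < 1, "Expired",
    Days_until_expiry <= 7, "Owner",
    Days_until_expiry <= 15, "Support",
    true(), "Others")
| eval email_list = case(
    alert_type == "Owner", Escalation_Contacts_Email_address,
    alert_type == "Support", Ops_Leads_Email_address,
    alert_type == "Expired", Escalation_Contacts_Email_address,
    true(), "None")
| eval cc_email_list = case(
    alert_type == "Owner", Owner_Email_address,
    alert_type == "Support", Owner_Email_address,
    alert_type == "Expired", mvjoin(mvappend(Owner_Email_address, Ops_Leads_Email_address), ","),
    true(), "None")

The rest of the pipeline (split, mvdedup, mvjoin) could stay as it is, since for the "Expired" rows cc_email_list would already be a comma-separated string of the Owner and Ops Leads addresses.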
We are utilizing Splunk Ingest actions to copy data to an S3 bucket. After reviewing various articles and conducting some tests, I've successfully forwarded data to the S3 bucket, where it's currently being stored with the Sourcetype name. However, there's a requirement to store these logs using the hostname instead of the Sourcetype for improved visibility and operational efficiency. Although there isn't a direct method to accomplish this through the Ingest actions GUI, I believe it can be achieved using props and transforms. Can someone assist me with this?
I have the same problem, but only on ADFS audit logs. Maybe there is something to change on the Windows server directly instead of touching Splunk here?
Hello SplunkCommunity, After configuring SSL, when I execute the following command:

openssl s_client -showcerts -connect host:port

I am encountering the following error:

803BEC33F07F0000:error:0A000126:SSL routines:ssl3_read_n:unexpected eof while reading:../ssl/record/rec_layer_s3.c:317:

Could anyone help me understand why I am seeing this error and assist me in resolving it? Thank you in advance for your help. Best regards,
We have deployed Splunk Enterprise on Huawei Cloud. After conducting baseline checking, we have discovered several risk items targeting mongodb, with the following rules:

Rule: Use a Secure TLS Version
Rule: Disable Listening on the Unix Socket
Rule: Set the Background Startup Mode
Rule: Disable the HTTP Status Interface
Rule: Configure bind_ip
Rule: Disable Internal Command Test
Rule: Do Not Omit Server Name Verification
Rule: Enable the Log Appending Mode
Rule: Restrict the Permission on the Home Directory of MongoDB
Rule: Restrict the Permission on the Bin Directory of MongoDB
Rule: Check the FIPS Mode Option

I have checked for any related documentation but cannot find any. I am wondering if I should create a mongodb.conf for it. Thanks
Hi, I have an app that is used for all the configurations that we have in Splunk Cloud. Quite a lot of users on our instance are admin (for good reasons that I don't want to get into). Now, because not all of those users are really "developer enthusiasts", they tend to sometimes make configuration changes through the GUI, for example disabling a search in the GUI instead of nicely in the app (with the pipeline etc.) when they don't need it anymore. To try to make this impossible I changed the default.meta to:

[]
access = read : [ * ], write : []
export = system

But this doesn't seem to work and people can still disable savedsearches (and many other things). Is there any way to disable write entirely for any content in the app?
I am trying to use the Simple SNMP Getter add-on, but the response is like in the attached picture; please advise. This is the input:
This seems confusing, as Splunk hasn't attempted to do the mongodb upgrade yet; I would expect it to fail after the upgrade if this were the case? Edit: I ran HWinfo on the box, and it's showing AVX, AVX2 and AVX-512 supported, so I don't think this is the issue.
That's incorrect, it's a Server 2022 box.
Hi folks, I am trying to schedule a dashboard to send a mail with its details periodically. I am able to do it, but the output is not that legible, i.e. the attachment, in either .pdf or .png format, is blurry when zoomed in even slightly to view readable things like table data, graph points and so on. The dashboard contains multiple panels like a table, bar graph, pie chart and different kinds of graphs as well. Does increasing the font size of the data help? I don't want to alter the dashboard and make a mess unless it will surely solve the matter. I am not very good at dashboarding and new to Splunk as well. Please advise.
Sure. Thank you.
Hi @Miguel3393 , at first, don't use the search command after the main search, because it makes your search slower! Then, you have already calculated the difference in seconds in the field diffTime; you only have to add this field to the table command. Also, I'm not sure that your solution to express the duration in minutes and seconds is correct; you should use:

| eval Duracion=tostring(diffTime,"duration")

In other words, please try this:

index="cdr" ("Call.TermParty.TrunkGroup.TrunkGroupId"="2811" OR "Call.TermParty.TrunkGroup.TrunkGroupId"="2810") "Call.ConnectTime"=* "Call.DisconnectTime"=*
| lookup Pais Call.RoutingInfo.DestAddr OUTPUT Countrie
| eval Disctime=strftime('Call.DisconnectTime'/1000,"%m/%d/%Y %H:%M:%S %Q")
| eval Conntime=strftime('Call.ConnectTime'/1000, "%m/%d/%Y %H:%M:%S%Q")
| eval diffTime=('Call.DisconnectTime'-'Call.ConnectTime')
| eval Duracion=tostring(diffTime,"duration")
| table Countrie Duracion diffTime

Ciao. Giuseppe