All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have the following JSON object which contains certificate expiration dates:

{
        "certificate-one.crt": 2022-11-11T16:00:00.000Z,
        "certificate-two.crt": 2022-11-11T16:00:00.000Z
}

I want to convert it to the following table:

certificate name       | expiration date
-----------------------|---------------------------
certificate-one.crt    | 2022-11-11T16:00:00.000Z
certificate-two.crt    | 2022-11-11T16:00:00.000Z
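A minimal SPL sketch of one way to do this, assuming the JSON arrives in _raw and the certificate names all end in .crt (the makeresults line just recreates the sample object; the rex patterns and output field names are illustrative):

| makeresults
| eval _raw="{\"certificate-one.crt\": 2022-11-11T16:00:00.000Z, \"certificate-two.crt\": 2022-11-11T16:00:00.000Z}"
| rex field=_raw max_match=0 "\"(?<cert_name>[^\"]+\.crt)\":\s*\"?(?<expiration>[^\",}]+)"
| eval pair=mvzip(cert_name, expiration, "|")
| mvexpand pair
| rex field=pair "^(?<certificate_name>[^|]+)\|(?<expiration_date>.+)$"
| table certificate_name expiration_date

The mvzip/mvexpand pair keeps each certificate name aligned with its own expiration date when the rex returns multivalue results.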
I am getting the following message when I switch to the Visualization tab after a search: "Your search isn't generating any statistic or visualization results. Here are some possible ways to get results." This is coming from a universal forwarder installed on a Windows server. I was trying to graph the network interface stats. I know I am missing something to allow me to do this.  
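The Visualization tab needs a statistics-producing search (stats, timechart, chart, and so on), not raw events. A sketch assuming the Windows data is coming in through the Splunk Add-on for Windows perfmon input, so the index, sourcetype, and counter names below are assumptions to adjust for your environment:

index=perfmon sourcetype="Perfmon:Network Interface" counter="Bytes Total/sec"
| timechart span=5m avg(Value) AS avg_bytes_per_sec BY instance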
I have an Adaptive Response Action (execute_flow in the pic below)  that requires certain identity data about the subject of the notable (mobile phone number).   Not all users have a mobile number set in ES identity. Currently,  I throw a failure event in Python for this condition.   Is it possible to return a warning status instead?  
Good afternoon Splunk ninjas, I need your assistance in designing a regex that will help me extract the values inside the [] brackets. My sample log line:

2022-09-23T13:20:25.765+01:00 [29] WARN Core.ErrorResponse - {} - Error message being sent to user with Http Status code: BadRequest: {"Details":[{"Code":50,"FieldName":"myfield","Message":"Please supply the value of my field","Detail":null}],"Message":"Sorry, we're unable to process your request. Please check your details and try again.","UserMessage":null,"Code":1,"Explanation":null,"Resolution":null,"Category":2}

I'm interested in filtering on the values inside Details: Code, FieldName, Message and Detail. Many thanks for your help!
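A sketch of one spath-based approach, assuming the JSON payload always follows the "Http Status code: <word>:" text as in the sample (the index/sourcetype on the first line are placeholders): pull the JSON out with rex, then let spath walk the Details array.

index=your_index sourcetype=your_sourcetype
| rex field=_raw "Http Status code:\s+\w+:\s+(?<json_payload>\{.+\})"
| spath input=json_payload path=Details{} output=detail
| mvexpand detail
| spath input=detail
| table Code FieldName Message Detail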
Hi. I'm trying to get only failed login attempts. While I could find the correct field, it's not accurate enough, because there might be a successful login later in the same session. The only way I can think of to bypass this is to use an "if" condition, but I don't know how to use "if" in SPL. Here are the searches I currently use:

index=application sourcetype=globalscape cs_method="*user*" sc_status=530 - provides all failed logins.
index=application sourcetype=globalscape cs_method="*pass*" sc_status=230 - provides all successful logins.

Thank you for assisting!
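One way to avoid an explicit if() on single events is to pull both status codes in one search and compare counts per client with stats. This is a sketch only; c_ip is an assumption for whatever field ties the USER and PASS commands of a session together in your globalscape data:

index=application sourcetype=globalscape ((cs_method="*user*" sc_status=530) OR (cs_method="*pass*" sc_status=230))
| eval outcome=if(sc_status==530, "failed", "succeeded")
| stats count(eval(outcome=="failed")) AS failed_logins count(eval(outcome=="succeeded")) AS successful_logins BY c_ip
| where failed_logins > 0 AND successful_logins == 0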
So I was trying to install a forwarder on the DC and I ran into this issue. Here is the link to the log file, since I can't figure out how to attach it here: https://drive.google.com/file/d/1j73adahjOwc52lE6Oxxi6lBzAFHpErK1/view?usp=sharing
<form>
  <fieldset submitButton="false">
    <input type="time" token="tok_time">
      <label>Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input id="Reset" type="link" token="resetTokens" searchWhenChanged="true">
      <label></label>
      <choice value="Reset">Reset</choice>
      <change>
        <condition value="Reset">
          <unset token="tok_Time"></unset>
          <unset token="form.tok_Time"></unset>
          <set token="resetTokens">yes</set>
          <set token="form.resetTokens">yes</set>
        </condition>
      </change>
    </input>
    <html depends="$alwaysHideCSSOverride$">
      <style>
        div[id^="Reset"] button{
          width: 180px !important;
          background: rgb(192,192,192,1);
          padding: 10px;
          border-radius: 10px;
          color: Blue !important;
        }
      </style>
    </html>
  </fieldset>
</form>
Clara-fication: Customizing SimpleXML Dashboards With Inline CSS | Splunk
 
We need a way for our custom add-on to include additional information from an alert in the cim_modactions log it writes when a failure happens. The custom add-on's purpose is to create tickets in a remote system with fields from the alert results. Therefore, in the case of a failure to create a ticket in the remote system, it would be really helpful to know the details of the alert results that failed to be sent. We can then alert on cim_modactions in the case of action_status=failure and respond by resending that alert. (Ideally we would modify the add-on to be resilient and retry the send; however, we also need to know about these failures, because in the case of an outage on the remote side we would still need to know what had failed to be sent.) Ideally we would include the entire contents of the alert result in the cim_modactions index. As nearly as we can tell, the "signature" field is often filled with contextual information. Replacing that value may be an option for us if we can find a sensible way to do so.

I go into some more detail and specificity below. The cim_modactions index is useful in determining whether a specific action has been successful or not in our client's environment. We send the output of our Splunk alerts to an external ticketing system through an add-on we built using the Splunk Add-on Builder | Splunkbase. For the sake of this question, let's call the application we built the "ticketing system TA" and the corresponding sourcetype in cim_modactions "modular_alerts:ticketing_system". If we search using index=cim_modactions sourcetype="modular_alerts:ticketing_system", we return all cim_modactions events about the ticketing system. We can see which alerts failed to be created in the remote system if we search on:

index=cim_modactions sourcetype="modular_alerts:ticketing_system" action_status=failure

We get results like:

2022-10-01 09:25:29,179 ERROR pid=1894149 tid=MainThread file=cim_actions.py:message:431 | sendmodaction - worker="search_head_fqdn" signature="HTTPSConnectionPool(host='ticketing_system_fqdn', port=443): Max retries exceeded with url: /Ticketing/system/path/to/login (Caused by ProxyError('Cannot connect to proxy.', ConnectionResetError(104, 'Connection reset by peer')))" action_name="ticketing_system" search_name="Bad things might be happening" sid="scheduler__nobody_ZHNsYV91c2VfY2FzZXM__RMD5e17ae2c72132ca0f_at_1664615700_985" rid="14" app="app where search lives" user="nobody" digest_mode="0" action_mode="saved" action_status="failure"
host = search_head_hostname
source = /opt/splunk/var/log/splunk/ticketing_system_ta_modalert.log
sourcetype = modular_alerts:ticketing_system

Notice that we get a helpful error about the reason for the failure, the search it happened during, and the timestamp. Unfortunately, this does not get us down to which alert or alerts failed to be sent. In each of our searches we have a field which identifies which remote application is logging; let's call it client_application_id. If we could include that number, like client_application_id=#####, that would be a help. Even more helpful would be to include alert_result_text="<complete text of the payload being sent across to the remote system at the time of the failure>".

We also noticed that if signature contains anything that looks like an assignment, then that assignment becomes a field. For example, in a few cases we actually do see client_application_id=#####, but these are few and not in the case of failures. In those cases there is also signature="client_application_id=#####".

Any direction on solving this specific question, or even a suggestion of an alternate approach, would be much appreciated.
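On the alerting side, a small SPL sketch of pulling the id back out of signature in the cases where it does appear there (the rex is illustrative and assumes the client_application_id=##### format described above):

index=cim_modactions sourcetype="modular_alerts:ticketing_system" action_status=failure
| rex field=signature "client_application_id=(?<client_application_id>\S+)"
| table _time search_name sid rid client_application_id signature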
Dear Splunk community, I'm new to Splunk, so excuse my incompetence... What I'm trying to do is enriching my web access log with app name and team name from a csv lookup file. The CSV file "ingress_map.csv" looks like this:   ingress,app,team https://mycompany.com/abc,foo-bar,a-team https://app.mycompany.com,good-app,b-team https://app.mycompany.com/abc,better-app,c-team https://app.mycompany.com/abc/xyz,best-app,d-team     The url field of my web access log will seldom match exactly one of the ingresses, is it possible to have a lookup that finds the best matching ingress and adds the fields app and team to the log line? Or is there a better way of solving this problem?   Regards Terje Gravvold
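A lookup definition can do prefix matching if it is configured with match_type = WILDCARD, but picking the longest ("best") matching ingress is easier to show in pure SPL. A rough sketch, assuming the events carry the request URL in a field called url in the same form as the ingress column, and that the lookup is small enough to join against every event:

index=your_index sourcetype=your_access_log
| eval join_key=1
| join type=left max=0 join_key
    [| inputlookup ingress_map.csv
     | eval join_key=1 ]
| where like(url, ingress . "%")
| eval match_len=len(ingress)
| eventstats max(match_len) AS best_len BY url
| where match_len == best_len
| fields - join_key match_len best_len

Note that this sketch drops events whose url matches none of the ingresses; if those need to stay in the results, they have to be appended back or handled separately.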
I have a stacked bar chart with time on the X-axis and Success and Failure counts stacked on the Y-axis. When I click on the success count, it needs to display a table with the success transaction details, and the same for the failure count. As of now I am passing the earliest and latest time from the bar chart with the below condition:

<eval token="e">$click.value$</eval>
<eval token="le">relative_time($click.value$, "+60m")</eval>

I have two panels, Show_Success and Show_failure. Can someone help me set a token whose value depends on whether the success or failure segment was clicked, so the corresponding panel is shown?
How do I convert a Windows lastLogonTimestamp from this format: 07:17.45 PM, Fri 09/30/2022 to this format: 09/30/2022 19:17:45? Thank you
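A sketch with strptime/strftime; the format string assumes the source value really is hours:minutes.seconds on a 12-hour clock, as in the example (the makeresults line just supplies a sample value):

| makeresults
| eval lastLogonTimestamp="07:17.45 PM, Fri 09/30/2022"
| eval epoch=strptime(lastLogonTimestamp, "%I:%M.%S %p, %a %m/%d/%Y")
| eval lastLogon=strftime(epoch, "%m/%d/%Y %H:%M:%S")
| table lastLogonTimestamp lastLogon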
index=aws sourcetype="aws:metadata" InstanceId=i-*
| spath Tags{}.key.Name output=Hostname
| mvexpand Hostname
| fieldsummary
| search field = Hostname

The above search gives me the count of the value instead of the value itself. What am I missing? Tags & AmiLaunchIndex are at the same level, right? Splunk extracts "Tags{}.Key"=Name and AmiLaunchIndex in the INTERESTING FIELDS. I really want to learn spath. I know how to do this with regex. I read the documentation, but it doesn't make sense to me.

| spath Tags{5}.Key output=HN

gives me values at the Key level but not the Name values.
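AWS metadata usually stores tags as an array of objects with Key and Value fields, so "Name" is a value of Key rather than a path segment under it. A sketch of that assumption in spath terms (the Key/Value field names are taken from the interesting-fields note above and may need adjusting):

index=aws sourcetype="aws:metadata" InstanceId=i-*
| spath path=Tags{} output=tag
| mvexpand tag
| spath input=tag
| where Key=="Name"
| rename Value AS Hostname
| table InstanceId Hostname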
I would like to detect successful authentication after a brute force attempt. It would be nice to see multiple status code 400s and then 200s, all from the same IP. That way, I do not have to do multiple searches for every IP. I used the below query but was unsuccessful. Please help if you can.

index=[index name] sourcetype=[sourcetypename] httpmethod=* status code=*
| eventstats count(eval('action'=="success")) AS success, count(eval('action'=="failure")) AS failure BY src_ip
| where total_success>=1 AND total_failure>=15
| stats count by src_ip

In between I even added | strcat success . failure but could not get results. Kindly assist. Thank you.
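A sketch of the stats approach with the field names aligned (the where clause has to reference the same names the counts were given). Deriving success/failure from a numeric HTTP status field is an assumption here; substitute your own action or status_code field as needed:

index=your_index sourcetype=your_sourcetype status=*
| eval status_num=tonumber(status)
| eval action=case(status_num>=200 AND status_num<300, "success", status_num>=400 AND status_num<500, "failure")
| stats count(eval(action=="success")) AS success count(eval(action=="failure")) AS failure BY src_ip
| where success>=1 AND failure>=15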
Dear community, I am new to Splunk DB and I am trying to understand a few things.

Context: I am trying to use Splunk DB as an interface for my data stored in Hudi or HDFS or Cassandra. I want Splunk DB to be the interface that can query this data and return it to a Splunk environment.

I have a few questions:
- I read that it is recommended to install Splunk DB on a heavy forwarder. If we only have access to the search head, is it possible to install it on search heads?
- In terms of indexing, is it required to use Splunk indexing, or can I rely on the indexing of the other database?
- Overall, my use cases will use Splunk DB just as an interface.

Thanks a lot
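For the interface-only use case, the usual DB Connect pattern is a dbxquery search: the query runs against the remote database at search time and the results come back as search results, so nothing has to be indexed in Splunk unless you choose to. A sketch where the connection name and SQL are placeholders (whether a connection can be defined for a given store depends on having a suitable JDBC driver):

| dbxquery connection="my_remote_connection" query="SELECT host, event_time, status FROM events LIMIT 100"
| table host event_time status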
I am trying to extract a field from the "textPayload" value, which is the log message and has "status" as a key. I want to extract "status" as a field and use it in searches for creating alerts. Here is the regex I generated, which works in regex101:

\\"status\\":\\"(?<status>[^\"]+)

Here is our sample log:
================================================================================
{"insertId":"l9ple6wfkvbdfasfdsfdwyoo","labels":{"compute.googleapis.com/resource_name":"gke-default-node-poo-4e912bb9-vrl1","k8s-pod/app":"some-service,"k8s-pod/environment":"dev","k8s-pod/part-of":"some-service","k8s-pod/pod-template-hash":"79cb686fcf","k8s-pod/security_istio_io/tlsMode":"istio","k8s-pod/service_istio_io/canonical-name":"some-service","k8s-pod/service_istio_io/canonical-revision":"v1","k8s-pod/stage":"dev","k8s-pod/version":"v1"},"logName":"projects/abc-dev/logs/stdout","receiveTimestamp":"2022-09-30T15:00:05.2690572Z","resource":{"labels":{"cluster_name":"-gke-dev","container_name":"some-service-v1","location":"us-east4","namespace_name":"dev","pod_name":"some-service-v1-79cb686fcf-x2frb","project_id":"gke-dev"},"type":"k8s_container"},"severity":"INFO","textPayload":"2022-09-30 15:00:00.952 INFO 1 --- [nio-8080-exec-8] c.a.a.a.controller.BrokerController : {\"classification\" "NORMAL\",\"action\" "ALERT\",\"host\" "asome-service-v1-79cb686fcf-x2frb\",\"ipAddr\" "10.143.104.169\",\"status\" "SUCCESS\",\"time\" "2022-09-30T15:00:00.952Z\",\"msg\" "getToken - Start\"}","timestamp":"2022-09-30T15:00:00.95264915Z"}
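Getting the escaped backslash-quote sequences through SPL's own string escaping is fiddly, so one sketch that sidesteps it is to anchor on the word status and skip whatever punctuation follows it. This assumes textPayload is already available as a field (e.g. via automatic JSON extraction or an spath call) and that the status values are plain words such as SUCCESS; the index and sourcetype are placeholders:

index=your_index sourcetype=your_sourcetype
| spath textPayload
| rex field=textPayload "status\W+(?<status>\w+)"
| stats count BY status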
Hello!

I'm relatively new to Splunk, but I've worked with databases over the years, so I felt like approaching this wasn't too bad.

The problem: in our situation, we have hosts that exist under our own index for an application. However, sometimes those hosts go down or stop reporting logs. That's a separate issue, but it's something we want to detect so we can give the user/client insight into which hosts are up and which ones are down.

So here's what I have so far (I attempted a code sample here but it wasn't working):

| union
    [ search index=unique_index host IN ($hosts$) source="<applicationPath>/http_logs/access_log.log"
    | dedup host
    | stats count by host
    | rename host AS hostsFound
    | fields hostsFound]
    [ makeresults
    | eval hosts=split("$hosts$", ",")]
| eventstats values(hosts) as AllHosts
| stats count(hostsFound) as Match dc(AllHosts) as MaxMatch values(hostsFound) as HostsFound values(AllHosts) as AllHosts
| search Match < MaxMatch
| mvexpand AllHosts
| where !(AllHosts in (HostsFound))
| rename AllHosts as HostsMissing
| eval hosts=mvappend(HostsFound,HostsMissing)
| fields hosts,HostsMissing
| mvexpand hosts
| eval count = if(hosts in (HostsMissing), 0, 1)
| table hosts, count
| dedup hosts

"$hosts$" is a local variable we have on the dashboard for this query, so when a list of hosts (or just one host) is selected, it populates there and the query runs.

This is a bit of a combination of what I've read on these forums and what I came up with. The initial query in the union gets the results we have out there for hosts that report back; it's just a Tomcat access log. The other side of the union is all of the hosts we pass in. In our example we have 7 that report and one that does not, so a total of 8. In my experience this query will work if ONE of the hosts doesn't report, as explained above; however, if all of the hosts report back then it won't return any results.

So, a few questions:
1. What can I do to make it return all results if all hosts return data AND if only a few or none of them return data?
2. Can this query be improved, and how?

I'm still learning how this system works, but any insight would be fantastic.

Thank you!
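One sketch that returns a row for every selected host whether or not it reported, assuming the $hosts$ token expands to a comma-separated list as in the union above: seed a zero-count row for each host, append the real counts, and take the maximum per host. Depending on how the token is formatted you may also need to strip quotes or whitespace from the split values.

index=unique_index host IN ($hosts$) source="<applicationPath>/http_logs/access_log.log"
| stats count BY host
| append
    [| makeresults
     | eval host=split("$hosts$", ",")
     | mvexpand host
     | eval count=0
     | fields host count ]
| stats max(count) AS events BY host
| eval count=if(events > 0, 1, 0)
| table host count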
ERROR HttpListener [97417 TcpChannelThread] - Exception while processing request from x.x.x.x:63596 for /en-US/splunkd/__raw/services/search/shelper?output_mode=json&snippet=true&snippetEmbedJS=false&namespace=search&search=search%20i&useTypeahead=true&showCommandHelp=true&showCommandHistory=true&showFieldInfo=false&_=1664562934323: std::bad_alloc

Any help, please?
Hi,

I got an error message after upgrading Splunk Enterprise from version 8.1 to version 8.2.7. All my Splunk dashboards show a warning with the message: "cannot expand lookup field 'hostname' due to a reference cycle in the lookup configuration".

Can you tell me how to fix this issue?

Thank you