All Topics

Hi folks, I have a real-time search that looks at failed Windows logins, producing a "single value" timechart visualization with a sparkline and trend value next to it: index=windows EventCode=4625 | timechart span=1h count Instead of having it snap to the hour, I would like it to show the values without snapping, effectively grouping by a sliding 60-minute window ending at the current minute. Is there a way to group like that in real-time searches?
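A minimal sketch of one possible approach (hedged, not necessarily how this poster's dashboard is built): count per one-minute bucket, then let streamstats sum the last 60 buckets, so each point is a trailing 60-minute total rather than an hour snapped to the clock.

index=windows EventCode=4625
| timechart span=1m count
| streamstats window=60 sum(count) AS failed_logins_last_60m
| fields _time failed_logins_last_60m

Because each timechart row is a one-minute bucket, window=60 approximates the trailing hour without needing a real-time window.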
Hello! I'm implementing a custom circuit breaker in my Spring Gateway application, which returns the following event/log: 1/20/23 1:20:28.487 PM   [app=teste-gateway, traceId=traceid, spanId=spanId, INFO CircuitBreakerCustomConfig : Circuit breaker circuitBreakerName=/test-circuit-breaker-gateway-route onStateTransition=State transition from CLOSED to OPEN fromState=CLOSED toState=OPEN stateName=CLOSED_TO_OPEN host =  host-ip source = http:source-test sourcetype = sourcetype_teste-gateway

There are about 120 hosts running this application. If a request to a route starts receiving many errors, the circuit opens and the host logs the event above; eventually, as the route stabilizes, it transitions to HALF-OPEN and then CLOSED. Each route is represented by the circuitBreakerName field. So, for example, on a given instance/host the route /test-circuit-breaker-gateway-route will not log the event above until the circuit opens due to many errors. Then it logs an event just like the one above with stateName=CLOSED_TO_OPEN. Eventually it changes to stateName=OPEN_TO_HALF-OPEN, and from there it can go to stateName=HALF-OPEN_TO_OPEN (if the errors continue) or stateName=HALF-OPEN_TO_CLOSED (if the errors stop). And depending on how requests are balanced between hosts, I can have hosts where the circuit is currently HALF-OPEN while, at the same moment, other hosts have the circuit OPEN for the same route.

I would like to keep track of the routes whose last (most recent) status is an OPEN circuit. It could be something like a line chart where the Y-axis is the number of hosts with a currently OPEN circuit and the X-axis is time. So I imagine I'll have to check the last status of the log above for every host and look for OPEN, but I'm not really sure how to do this. Is it possible? How can I do it? Thank you for any help in advance, and sorry if I wrote anything wrong; English is not my first language.
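A minimal sketch of one way to count hosts whose most recent transition left the circuit OPEN (hedged: it assumes the fromState/toState fields shown in the event are extracted, and it produces the current count per route rather than a full line chart over time):

sourcetype=sourcetype_teste-gateway circuitBreakerName="/test-circuit-breaker-gateway-route"
| stats latest(toState) AS last_state BY host circuitBreakerName
| where last_state="OPEN"
| stats dc(host) AS hosts_with_open_circuit BY circuitBreakerName

To get the chart over time, one common pattern is to schedule a search like this and collect its result into a summary index, since "last known state per host at each point in time" is awkward to compute in a single pass.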
Hi Splunkers, I'm trying to integrate the GCP Chronicle app with Splunk and perform Chronicle-related activities. Could someone please help me with this? Thanks
I'm creating a ServiceNow dashboard in Splunk, and there is a particular column called "dv_priority" that I'd like to assign a color code to. There are five values for the dv_priority field: "1 - Critical", "2 - High", "3 - Moderate", "4 - Low", or "5 - Informational". I'd like to color code these values, for example "1 - Critical" (Red), "2 - High" (Orange), "3 - Moderate" (Yellow), "4 - Low" (Purple) and "5 - Informational" (Green). What would be the best approach for doing this with the query below?

index=servicenow sourcetype=* NOT dv_state IN("Closed", "Resolved", "Cancelled")
| eval dv_number = if(isnull(dv_number), task_effective_number, dv_number)
| eval dv_number = if((isnull(dv_number) OR len('dv_number') == 0), DV_NUMBER, dv_number)
| eval number = if((isnull(number) OR len('number') == 0), dv_number, number)
| eval number = if((isnull(number) OR len('number') == 0), NUMBER, number)
| eval number = if((isnull(number) OR len('number') == 0), "Error", number)
| eval number = if(number!=dv_number, dv_number, number)
| eval dv_u_subcategory = if((isnull(dv_u_subcategory) OR len('dv_u_subcetegory') == 0), DV_U_SUBCATEGORY, dv_u_subcategory)
| eval dv_u_category = if((isnull(dv_u_category) OR len('dv_u_category')==0), DV_U_CATEGORY, dv_u_category)
| eval dv_business_service = if(((isnull(dv_business_service) OR len('dv_u_business_service')==0) AND dv_category="MDR Analytics"), "Detect", dv_business_service)
| eval dv_business_service = if(((isnull(dv_business_service) OR len('dv_u_business_service')==0) AND dv_category="MDR Engineering"), "Engineering", dv_business_service)
| eval dv_business_service = if((isnull(dv_business_service) OR len('dv_u_business_service')==0), DV_BUSINESS_SERVICE, dv_business_service)
| eval dv_business_service = if(((isnull(dv_business_service) OR len('dv_business_service')==0) AND dv_u_category="Notable" AND dv_u_subcategory="Security"), "Detect", dv_business_service)
| eval dv_business_service = if((isnull(dv_business_service) OR len('dv_u_business_service')==0), "Error", dv_business_service)
| eval dv_business_service = if(dv_u_category="Infrastructure", "Engineering", dv_business_service)
| eval state = if((isnull(state) OR len('state')==0), STATE, state)
| eval dv_state = if((isnull(dv_state) AND state=1), "New", dv_state)
| eval dv_state = if((isnull(dv_state) AND state=3), "Closed", dv_state)
| eval dv_state = if((isnull(dv_state) AND state=6), "Resolved", dv_state)
| eval dv_state = if((isnull(dv_state) AND state=11), "On-Hold", dv_state)
| eval dv_state = if((isnull(dv_state) AND state=18), "In Progress - Customer", dv_state)
| eval dv_state = if((isnull(dv_state) AND state=7), "Cancelled", dv_state)
| eval dv_state = if((isnull(dv_state) AND state=10), "In Progress - dw", dv_state)
| eval dv_state = if((isnull(dv_state) OR len('dv_state')==0), DV_STATE, dv_state)
| eval dv_state = if((isnull(dv_state) OR len('dv_state')==0), "Error", dv_state)
| eval dv_state = if(dv_state="Error" AND (isnotnull(closed_at) OR len('closed_at') == 0), "Resolved", dv_state)
| eval dv_short_description = if((isnull(dv_short_description) OR len('dv_short_description') == 0), short_description, dv_short_description)
| eval dv_short_description = if((isnull(dv_short_description) OR len('dv_short_description') == 0), case, dv_short_description)
| eval dv_short_description = if((isnull(dv_short_description) OR len('dv_short_description') == 0), DV_SHORT_DESCRIPTION, dv_short_description)
| eval dv_category = if(dv_business_service="Detect", "MDR Analytics", dv_category)
| eval closed_at = if((isnull(closed_at) OR len('closed_at')==0), CLOSED_AT, closed_at)
| eval u_mttn = if((isnull(u_mttn) OR len('u_mttn')==0), U_MTTN, u_mttn)
| eval u_mttca_2 = if((isnull(u_mttca_2) OR len('u_mttca_2')==0), U_MTTCA_2, u_mttca_2)
| eval u_mttcv = if((isnull(u_mttcv) OR len('u_mttcv')==0), U_MTTCV, u_mttcv)
| eval u_mttdi = if((isnull(u_mttdi) OR len('u_mttdi')==0), U_MTTDI, u_mttdi)
| eval u_mttrv = if((isnull(u_mttrv) OR len('u_mttrv')==0), U_MTTRV, u_mttrv)
| eval u_mttc = if((isnull(u_mttc) OR len('u_mttc')==0), U_MTTC, u_mttc)
| table _time, number, dv_state, dv_priority, dv_u_category, dv_short_description, dv_assigned_to, dv_assignment_group, opened_at
| where dv_assignment_group="Security"
| sort - _time
| sort - dv_state
| dedup number
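Color coding like this is usually handled in the dashboard's presentation layer rather than in SPL. A minimal sketch for a Simple XML table panel (hedged: the hex values are only placeholders for red/orange/yellow/purple/green, and the snippet would sit inside the <table> element of the panel running the query above):

<format type="color" field="dv_priority">
  <colorPalette type="map">{"1 - Critical": #DC4E41, "2 - High": #F1813F, "3 - Moderate": #F8BE34, "4 - Low": #A870EF, "5 - Informational": #53A051}</colorPalette>
</format>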
Hi Splunkers, I have the following token handlers: if the input "Gucid_token_with3handlers" is a 2-digit number, it is returned as the token skillexpressionLength; if it is not a 2-digit number, it is returned as the token Gucid_token. That works fine. Now I want to add a third handler: if the first 3 characters are VQ_, the value should be returned as the token Gucid_token_VQ; otherwise, Gucid_token_VQ should be set to the value *. But it looks like handler 3 is conflicting with handler 1, so my question is how to define the third token handler so it works together with the first and second handlers without conflict. The following is the input token definition with the 3 token handlers. <input type="text" token="Gucid_token_with3handlers" searchWhenChanged="true"> <label>Gucid/UUID/SID</label> <change> <eval token="Gucid_token"> if(match(value, "^[0-9][0-9]?$"),"", value)</eval> <eval token="skillexpressionLength">if(match(value, "^[0-9][0-9]?$"),value, 0)</eval> <eval token="Gucid_token_VQ">if(substr(value,1,3)=="VQ_"),value, *)</eval> </change> <default></default> </input> Then I used the following search clause with $Gucid_token_VQ$: | search VQ = $Gucid_token_VQ$ The search panel then shows: "Search is waiting for input." Thanks in advance. Kevin
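A minimal sketch of how the third <eval> might be written (hedged: this assumes the intent is simply "pass the value through when it starts with VQ_, otherwise match everything"; it balances the parentheses and quotes the *, since an eval expression that fails to parse leaves $Gucid_token_VQ$ unset, which is what makes the panel wait for input):

<change>
  <eval token="Gucid_token">if(match(value, "^[0-9][0-9]?$"), "", value)</eval>
  <eval token="skillexpressionLength">if(match(value, "^[0-9][0-9]?$"), value, 0)</eval>
  <eval token="Gucid_token_VQ">if(substr(value, 1, 3) == "VQ_", value, "*")</eval>
</change>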
Hi, I would like to understand how the URL monitoring extension works. 1. Does it require a browser, or does it use cURL to test the URL? 2. What is the maximum number of URLs that can be monitored with the extension? 3. Does it monitor the URLs sequentially or in parallel? For example, say I need to monitor 100 URLs: will the extension check each URL and then move on to the next one, or will it check all the URLs in parallel? Regards, Bhaskar
Hey all, wondering if I can get some input on this. I have data coming in as JSON. The fields follow this naming convention: objects.Server::34385.fields.friendlyname = Server123 objects.Server::88634.fields.friendlyname = Server444 What I'm trying to do is rename the fields so that the ::<number> after the Server part is omitted. The end result needs to look like this: objects.Server.fields.friendlyname = Server123 objects.Server.fields.friendlyname = Server444 It's worth mentioning that there are around 10k servers, so I can't list them out one by one.
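A minimal sketch of one way to collapse the wildcarded field names at search time (hedged: server_friendlyname is just a name chosen here for the combined multivalue field, and the wildcard assumes the field names really do follow the objects.Server::<id>.fields.friendlyname pattern):

... base search ...
| foreach objects.Server::*.fields.friendlyname
    [ eval server_friendlyname = mvappend(server_friendlyname, '<<FIELD>>') ]
| table server_friendlyname

foreach visits every field matching the wildcard, so the 10k server IDs never need to be listed explicitly.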
Hi there! In the Splunk Add-on for AppDynamics release notes, it is mentioned: "Now Splunk Cloud supported! Updated using Add-On-Builder 4.0 to set python version preference to python3 and pass cloud vetting (no changes in functionality)." Now this message and the app itself are a little old: July 23rd, 2021. However, when I try to install this add-on in Splunk Cloud, it fails AppInspect validation. There are 3 failed checks: check_python_sdk_version: Detected an outdated version of the Splunk SDK for Python (1.6.6). Please upgrade to version 1.6.16 or later. File: bin/splunk_ta_appdynamics/aob_py3/solnlib/packages/splunklib/binding.py check_simplexml_standards_version: Change the version attribute in the root node of your Simple XML dashboard default/data/ui/views/home.xml to `<version=1.1>`. Earlier dashboard versions introduce security vulnerabilities into your apps and are not permitted in Splunk Cloud. File: default/data/ui/views/home.xml check_for_addon_builder_version: The Add-on Builder version used to create this app is below the minimum required version of 4.1.0. Please re-generate your add-on using at least Add-on Builder 4.1.0. File: default/addon_builder.conf Line Number: 4 Any chance a new version will be made available with recent support for Splunk Cloud? We'd love to offload this add-on from our on-prem Heavy Forwarders to Splunk Cloud. Thank you!
Hi, I am trying to get two columns from the same table onto a line graph. Each column is an independent value, so the graph should show two lines; I do not want to consolidate the two columns together. This is the search SPL I am using to pull the data:
------graph 1-------
| mstats avg(_value) prestats=true WHERE metric_name="cpu.system" AND "index"="em_metrics" AND "host"="ABC" AND `sai_metrics_indexes` span=10s
| timechart avg(_value) AS Avg span=10s
| fields - _span*
------graph 2-------
| mstats avg(_value) prestats=true WHERE metric_name="memory.used" AND "index"="em_metrics" AND "host"="ABC" AND `sai_metrics_indexes` span=10s
| timechart avg(_value) AS Avg span=10s
| fields - _span*
As you can see, almost everything is the same apart from the metric_name. I am trying to get both metrics onto one graph. I tried to combine both metric_name values by adding another AND statement, but it won't work. Thanks in advance!
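A minimal sketch of one way to put both metrics on a single chart (hedged: it assumes metric_name IN (...) is accepted in the mstats WHERE clause on your version, and it splits the timechart by metric_name so each metric becomes its own line rather than being merged):

| mstats avg(_value) prestats=true WHERE metric_name IN ("cpu.system", "memory.used") AND "index"="em_metrics" AND "host"="ABC" AND `sai_metrics_indexes` span=10s
| timechart avg(_value) AS Avg span=10s BY metric_name
| fields - _span*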
Hello, I have a list of cities (in a .csv) from around the world and want to put them on a cluster map of the world with a count of how many times each city occurs in the list. There is no iplocation, lat, lon, etc. data. How can I map the cities out? Thanks
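A minimal sketch of one possible approach (hedged: cities.csv and city_coordinates.csv are hypothetical lookup names, and it assumes a lookup mapping each city name to a latitude/longitude can be obtained or built, since Splunk has no built-in city geocoder):

| inputlookup cities.csv
| stats count BY city
| lookup city_coordinates.csv city OUTPUT lat lon
| geostats latfield=lat longfield=lon sum(count) BY city

geostats is what feeds the cluster map visualization; the lookup just supplies the coordinates that the raw city list is missing.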
I have a Spring Boot application deployed in AKS. I am using the AppDynamics Docker image to inject the agent into the JVM. The setup works fine in lower environments. Even in the production environment, everything worked fine for 1-2 days. Then I configured the Java transaction detection rules to include a custom header in the transaction name, making them similar to what we have in the non-production environments. It worked for a few hours, then it stopped capturing any business transactions and transaction snapshots. I can still see the traffic via Service Endpoints, where the number of calls and the average response time are captured. I looked at the events, and I can see events like: New Business Transaction Discovered /xxx.yyy (but no data for this in the Business Transactions tab); errors like Application Server Exception; and Agent Data Diagnostics - OVERFLOW_BT. Currently, the production environment has less traffic than the non-production environment, and apart from this application we only have a few Browser and Mobile apps, but no other Java/.NET applications. The application has not been restarted since I configured the business transaction naming. I tried setting "find-entry-points" to true, but could not make much sense of its logs.
Hello Splunkers, Using a props.conf file, is it possible to combine multiple [<spec>] stanzas? I would like to set up a specific LINE_BREAKER, but only for a source on a specific host... Is something like [host::<whatever>] + [source::<whatever>] possible? Thanks a lot, GaetanVP
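For context, a props.conf stanza header can only match on one of host::, source::, or a sourcetype, so the two cannot be combined in a single stanza. A minimal sketch of the usual workaround, assuming the source path alone can be made specific enough (the path and the date-based break pattern below are placeholders):

# props.conf (on the parsing tier)
[source::/var/log/myapp/special-*.log]
# break events at line boundaries that are followed by a YYYY-MM-DD timestamp
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}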
Hello everyone, I have a question about Dashboard Studio (JSON format). I made this dashboard and I want a specific color for the "diff" field: when diff is < 0, red; when diff is > 0, green. My SPL request is: `easyVista` source="incidents_jour" | dedup "N° d'Incident" | rename "Statut de l'incident" as statut | eval STATUT=case( match(statut,"Résolu"),"fermé", match(statut,"Clôturé"),"fermé", match(statut,"Annulé"),"fermé", match(statut,"Archivé"),"fermé", match(statut,"A prendre en compte"),"ouvert", match(statut,"Suspendu"),"ouvert", match(statut,"En cours"),"ouvert", match(statut,"Escaladé"),"ouvert") | timechart count by STATUT usenull=f | eval diff=fermé-ouvert Can you help me please? Thanks
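A minimal sketch of the kind of value-based coloring Dashboard Studio supports for a table column (hedged: this applies to a splunk.table visualization showing diff, the hex codes are placeholders, and a line in a timechart can only be given a single static series color, so red/green-by-value generally needs the diff rendered as a table or single value):

"options": {
    "columnFormat": {
        "diff": {
            "rowColors": "> table | seriesByName(\"diff\") | rangeValue(diffColorConfig)"
        }
    }
},
"context": {
    "diffColorConfig": [
        {"to": 0, "value": "#D41F1F"},
        {"from": 0, "value": "#118832"}
    ]
}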
I have some error logs like the ones below:     TYPE=ERROR, DATE_TIME=2022-12-31 03:30:27,281, CLASS_NAME=myClass, METHOD_NAME=myMethod, message=unknown error while fetching recordsjavax.net.ssl.SSLHandshakeException: General SSLEngine problem TYPE=ERROR, DATE_TIME=2023-01-19 00:38:09,013, CLASS_NAME=myClass, METHOD_NAME=myMethod, message=unknown error while fetching recordsjava.lang.IllegalStateException: could not create the default ssl context     I need to get the message field from these logs and sort them by _time. I am trying the queries below. Query 1:     index="myIndes" host=myHost source="/my/app/location/app.log" TYPE=ERROR | extract pairdelim=",|" kvdelim="=" | table message | sort _time     Query 2:     index="myIndes" host=myHost source="/my/app/location/app.log" TYPE=ERROR | rex "^(?:(?<message>[^,]*),){7}" | table message | sort _time     But for both of these queries, instead of getting the entire message field, I get just the first word:     -------- message -------- unknown     Please help with how to achieve this.
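A minimal sketch of an alternative extraction (hedged: it assumes message is always the last key=value pair on the line, so everything after "message=" can be captured in one group, and it sorts before the table so _time is still available to sort on):

index="myIndes" host=myHost source="/my/app/location/app.log" TYPE=ERROR
| rex "message=(?<message>.+)$"
| sort 0 _time
| table _time message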
Hi, I need to show error messages for one particular service. The challenge is that, for example, I need to show error messages including the "withdrawal failed" error, but if that error happened because of "insufficient balance" then "withdrawal failed" should not be listed. Other error messages should be listed regardless. e.g. query: index=abc sourcetype=payment log_level=Error | table message. The result would be like: withdrawal failed deposit failed Error 404 customer not found But I want something like: index=abc sourcetype=payment log_level=Error | search message="*Withdrawal failed*" | join type=inner requestId [search index=abc | search message="*insufficient balance*"] --> if this part is true, then it should not list the "withdrawal failed" error. The results should be like: deposit failed Error 404 customer not found Please help me with the search. Thanks in advance.
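A minimal sketch of one join-free way to do this (hedged: it assumes requestId, message, and log_level are extracted fields and that the "insufficient balance" event shares the same requestId as the failed withdrawal):

index=abc sourcetype=payment (log_level=Error OR message="*insufficient balance*")
| eventstats sum(eval(if(match(message, "(?i)insufficient balance"), 1, 0))) AS insufficient_count BY requestId
| where log_level="Error" AND NOT (match(message, "(?i)withdrawal failed") AND insufficient_count > 0)
| table message

The eventstats marks every requestId that has an "insufficient balance" event, and the where clause then drops only the withdrawal-failed messages belonging to those requests while keeping all other errors.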
Hi, I have suggested that some of my new teammates open a Splunk user account (for the free Splunk trainings and to log in to the Splunk community as well), but when my teammates try to register for a Splunk user account with a Gmail ID, they get the error: "Due to US export compliance requirements, Splunk has suspended your access." (My teammates have been facing this issue for more than a month, including just now.) Any ideas/suggestions, please?
Hi, I am building a modular input which grabs data from an API on a regular basis. I'd like to let the app user configure the polling interval so that they can manage it against rate limits, but I am hitting some really weird behaviour. If I specify a value for interval in inputs.conf, but do not include interval in inputs.conf.spec, the modular input runs on schedule. Of course, this means that the interval is not configurable by the user on the data input page. However, if I include interval in inputs.conf.spec so that the user can configure it, the modular input runs once and never again. This is reflected in the splunkd logs too - when the interval is included in inputs.conf.spec, the scheduler just shows 'run once'. If the interval is not in the spec file, the scheduler schedules the input properly. I've tested this a few times and managed to repeatedly recreate the behaviour - and I also created a barebones input just to make sure that it wasn't something I had done in developing my input. The same problem occurred. Does anyone know what is going on here? Is it possible to let app users configure the interval parameter? Cheers, Matt
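For reference, a minimal sketch of the kind of spec entry being described (hedged: my_api_input and api_base_url are placeholder names, and this only illustrates the configuration in question, not a fix for the scheduling behaviour):

# README/inputs.conf.spec
[my_api_input://<name>]
api_base_url = <value>
* Base URL of the API to poll.
interval = <value>
* How often to run the input, in seconds or as a cron expression.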
Hello, I have a custom time range picker, which I defined in the local times.conf of my app (/opt/splunkdev2/splunk/etc/apps/FRUN/local/times.conf):
...
[EVENT_20220819]
header_label = UPG
label = 2208
earliest_time = 20220819
latest_time = 20230217
order = 501
...
It works kind of okay, but my label is not displayed when I pick the range in the dashboard; it gets translated to the dates instead. (Screenshots omitted: after choosing the entry from the picker, the dashboard's time picker shows the raw date range rather than the label.) What am I doing wrong?

The second question would be whether there is a way to set the default of the time picker based on a dynamic fetch from times.conf. The point is that I update times.conf daily at OS level with a crontab, then refresh the conf-times entity so the new time ranges appear in the picker. This works fine, but as new ranges come in I would also like to change the default for the picker in my dashboard (e.g. setting it to the latest time range event, the last software upgrade for instance).

And the last question would be whether there is any way to make these time ranges visible only in one particular dashboard. I have set them at the app level, but the app has many dashboards and I do not need them there; it confuses the users. Or at least group them in a separate category like Presets2 (e.g. SoftwareUpgrades).
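A minimal sketch of the same stanza with time values in a form the times.conf spec documents as accepted (hedged: the epoch values below correspond to 2022-08-19 and 2023-02-17 at 00:00 UTC and are only meant to illustrate the format; a bare YYYYMMDD string is not a documented earliest_time/latest_time value, which may be why the picker cannot map the selection back to its label):

[EVENT_20220819]
header_label = UPG
label = 2208
earliest_time = 1660867200
latest_time = 1676592000
order = 501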
We are using the DB Connect add-on to get data from an Oracle DB. The data stopped coming in when searching, but the inputs still work when executed manually in the add-on. How can we monitor the status and find out why the data stopped?
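A minimal sketch of one place to start looking (hedged: it assumes DB Connect's own log files are being indexed into _internal, and the source pattern can differ between DB Connect versions):

index=_internal source=*splunk_app_db_connect* (ERROR OR WARN)
| stats count BY source
| sort - count

Errors around the input's rising column, checkpoints, or JDBC connectivity usually show up here even when a manual execution in the add-on UI still succeeds.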
Is it possible with ITSI to establish a uni-directional ticketing integration with ServiceNow? The plugin works as a bidirectional connection; I need ServiceNow to pull events from Splunk uni-directionally. Do you have a solution?