All Topics



Fairly new to Splunk, so I may not have the correct terms for everything. I'm currently working in a distributed Splunk Enterprise environment with Windows and Linux hosts. These hosts send logs via universal forwarders (UFs) to the clustered indexers. There is also a heavy forwarder (HF) that receives logs from apps and AWS. My issue is that the logs coming from my UFs are not being parsed into field name-value pairs. The Windows/Linux hosts, indexers, and search heads all have the Splunk_TA_nix and Splunk_TA_windows add-ons installed. I almost feel like my indexers are not parsing the incoming data. Log data is getting into Splunk and I can see my events; however, it is all in a format similar to this (very crude, I know):

<data><data><data>1039<data><data><data>time<data><data>program<data>splunk<data>

I would like it in field name-value pairs. At some point I was receiving logs in this format, but I no longer am. What could be causing this?

time: 10:39
program: splunk
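A first diagnostic step (a sketch; the host name is a placeholder) is to confirm which sourcetype the UF data is actually arriving under, since the search-time extractions in Splunk_TA_nix and Splunk_TA_windows only apply to the sourcetypes they are scoped to:

```spl
index=* host=my-uf-host earliest=-1h
| stats count by index, sourcetype, source
```

If the events are landing under an unexpected sourcetype, the add-ons' extractions will not fire even though the data is indexed; that is consistent with events that display raw but have no field name-value pairs.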
Hi, I'm logged in as the root user. How do I log in to an individual user account in Linux? Below is the error message:

Please login with individual user account, direct root account login sessions will be recorded and audited.

Bye
The value of the CommandLine field is getting truncated. I am using an index search:

index=* source="process" | table host CommandLine

The value is truncated in the CommandLine column of the table results, e.g.:

Input field: CommandLine= "-propertyfile=D:/projects/Testing/properties/perf "-Dtest_jvm_id=002 col 1" -Dbootstrap.folder=D:/projects/Testing/properties"

After the search, the result is: CommandLine= "-propertyfile=D:/projects/Testing/properties/perf "

I need to remove the double quotes from the field, like this: "-propertyfile=D:/projects/Testing/properties/perf -Dtest_jvm_id=002 col 1 -Dbootstrap.folder=D:/projects/Testing/properties"
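The truncation is consistent with automatic key-value extraction stopping at the first embedded closing quote. One sketch (the extracted field name is my own, and the regex assumes the raw event contains a CommandLine="..." pair) re-extracts the full value from _raw and strips the embedded quotes:

```spl
index=* source="process"
| rex field=_raw "CommandLine=\"(?<CommandLine_full>.*)\""
| eval CommandLine_full=replace(CommandLine_full, "\"", "")
| table host CommandLine_full
```

The greedy `.*` captures through the last double quote on the line, which is what recovers the nested-quote portion; if other quoted fields follow CommandLine in the event, the regex would need tightening.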
We have ingested logs from our application into Splunk. These logs include two keys, stageType and correlationId, along with other keys.

I have to find a list of correlation ids that are returned for one stageType and not for the other stageType. I realise Splunk queries cannot be written like SQL. I am not very conversant with Splunk; I normally just get by using simpler queries. Hence I'm hoping someone can help me with a query that gives me the list, so I can do further analysis to find the reason for the differences, which should not normally exist. Is it possible to do this in Splunk? Can someone help me with the query?

index=grp-applications sourcetype="kafka:status" stageType IN ("STAGEA", "STAGEB" )  env=qa | dedup env, correlationId, stageType | stats count by env, correlationId, stageType

Thank you
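One sketch, building on the search above (it assumes each correlationId should normally appear under both stages): count the distinct stageTypes per correlationId and keep the ids seen under only one of them.

```spl
index=grp-applications sourcetype="kafka:status" env=qa stageType IN ("STAGEA", "STAGEB")
| stats dc(stageType) AS stage_count values(stageType) AS stages by correlationId
| where stage_count < 2
```

The `stages` column then shows which stageType each mismatched correlationId did appear under, which helps the follow-up analysis.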
I have an application that has some instances/hosts. Because of changes in throughput or instability, new instances/hosts can be started and old ones terminated. Many different events/logs are registered.

When a new instance/host is started, it logs the following event:

1/20/23 6:00:01.256 PM
[app=gateway-example-app, traceId=, spanId=, INFO 1 [ main] gateway.GatewayApplicationKt : Started GatewayApplicationKt in 21.081 seconds (JVM running for 48.641)
host = ip-example-of-ip-01
source = http:source-example
sourcetype = example-sourcetype

When an instance is terminated, it logs the following:

1/20/23 3:53:42.778 PM
CoreServiceImpl INFO: JVM is shutting down
host = ip-example-of-ip-02
source = http:source-example
sourcetype = example-sourcetype

Is there a way of getting a list of hosts that have the initialization log but don't have the termination log? In other words, a list of currently active hosts? Thank you for any help in advance. And sorry if I wrote anything wrong; English is not my first language.
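A sketch of one approach (the search terms are taken from the sample events above; the index name is a placeholder): classify each lifecycle event as a start or a stop, then keep the hosts whose most recent lifecycle event is a start.

```spl
index=example sourcetype=example-sourcetype ("Started GatewayApplicationKt" OR "JVM is shutting down")
| eval lifecycle=if(searchmatch("JVM is shutting down"), "stop", "start")
| stats latest(lifecycle) AS last_event by host
| where last_event="start"
```

Using `latest()` rather than just "started but never stopped" handles hosts that are recycled (stopped and later started again under the same name).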
I have two indexes holding the status of batch jobs that run in our system daily.

Source 1: contains JobName, StartTime, EndTime, Status. The job status can be Active, Completed, or Failed. The source (the log name itself) is the job name here. A new event is created each time the Status changes in the same source. This source contains up-to-date information for all jobs except those that are bypassed.

Source 2: a DB source containing these fields: JobName, BypassFlag, AvgRunTime. This source contains the AvgRunTime of all jobs and a BypassFlag, which tells whether a job was bypassed today.

Need: for each job, I am trying to get the JobName (Source1 and Source2), StartTime (Source1), EndTime (Source1), JobStatus (Source1), whether the job was bypassed today (Source2), and the AvgRunTime of the job.

Query I tried using an outer join: I tried using both indexes in the same query and also joins, but with the outer join I am getting results only from the first index, even though I am doing an outer join. Please help.

(index="index2" sourcetype=db) | table JobName,BypassFlag,AvgRunTime | join type=outer JobName [search index="index1" host=tnsm123* | stats latest(JobName), latest(Status), latest(StartTime),latest(EndTime) by source | table JobName, Status, StartTime, EndTime by source] | table, StartTime, EndTime, Status, AvgRunTime, BypassFlag
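A common join-free sketch for this shape of problem (it assumes JobName is available as a field in both sources; for Source 1 you may first need to derive it from `source`): search both indexes at once and merge per job with stats.

```spl
(index="index1" host=tnsm123*) OR (index="index2" sourcetype=db)
| stats latest(Status) AS Status latest(StartTime) AS StartTime latest(EndTime) AS EndTime
        latest(BypassFlag) AS BypassFlag latest(AvgRunTime) AS AvgRunTime by JobName
```

Because there is no subsearch, this avoids the row and time limits that make `join` silently drop results, which is one plausible reason the outer join only returned rows from the first index.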
Hello, We have migrated from an app called Mirth to Splunk. With Mirth we used a tool called Interface Explorer for HL7 to view messages more cleanly. Is there a tool for Splunk to view messages in a cleaner format? Thank you. Jean
I am trying to build a dashboard panel that shows different colors based on three different counters. The single-value panel should show green if the count of timeouts is < 5, the count of errors is < 10, and the response time is < 500 ms for an API request; it should be yellow if the count of timeouts is < 10, the count of errors is < 20, and the response time is < 1000 ms; for any other values it should be red.
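One sketch (the field names `timeouts`, `errors`, and `resp_ms` are assumptions; the base search is whatever produces those counters): collapse the three thresholds into a single severity level with `case()`, then let the single-value panel color by that level.

```spl
... base search producing timeouts, errors, resp_ms ...
| eval level=case(timeouts<5 AND errors<10 AND resp_ms<500, 0,
                  timeouts<10 AND errors<20 AND resp_ms<1000, 1,
                  true(), 2)
| fields level
```

In the panel's color formatting (or with `rangemap`), map 0 to green, 1 to yellow, and 2 to red; the panel then only ever has one value to color, which keeps the three-condition logic in SPL.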
These two pieces of SPL return two different-looking tables.

index=servicenow sourcetype=incident number=INC5181781 | spath opened_at | spath resolved_at | table number, opened_at, resolved_at, number, _time

provides different results vs.

index=servicenow sourcetype=incident number=INC5181781 | table number, opened_at, resolved_at, number, _time

In the one with spath, the table has more values for opened_at and resolved_at. The same number of events is discovered, but the table makes it look like one event is "missing" dimensions. Even if I run these two searches and compare the "Selected Fields" section on the left-hand side, the one with spath has more events that have the values.

In the props.conf file the source has the line INDEXED_EXTRACTIONS = json. This may also be impacting my ability to search. It seems like I will not get complete results unless I do something like

sourcetype=incident | spath number | spath category | search number=INC5181781 category=Closed

I assume something is not configured as I expect it to be, and I am unsure where else to check.
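One common cause with INDEXED_EXTRACTIONS = json is that the search head also runs search-time JSON auto-extraction, and the two extraction passes can disagree, producing exactly this "some events have the field, some don't" symptom. A sketch of the usually recommended search-head-side configuration (the stanza name assumes the sourcetype is `incident`; verify where your props.conf stanzas actually live before changing anything):

```
# props.conf on the search head (sketch)
[incident]
KV_MODE = none
AUTO_KV_JSON = false
```

With index-time extractions enabled on the ingesting instance and search-time JSON extraction disabled on the search head, there is a single source of truth for the fields, which should also make `number=INC5181781 category=Closed` filter reliably without explicit `spath` calls.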
Hello All,

I just upgraded our Tenable App for Splunk to 6.0.3 to match our Tenable Add-On for Splunk, which has been at 6.0.3. After the upgrade I started to get an error when I launched the Tenable App for Splunk: "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details."

Is this a bug in the current version, or something I need to change on our end? I'm not sure what's going on, but when I open the developer console the only error I get is this:

TypeError: Cannot read properties of undefined (reading 'get') at setupMultiInput (eval at <anonymous> (dashboard_1.1.js:277:107834), <anonymous>:14:42) at eval (eval at <anonymous> (dashboard_1.1.js:277:107834), <anonymous>:33:13) at Object.execCb (eval at module.exports (common.js:1:1), <anonymous>:1658:33) at Module.check (eval at module.exports (common.js:1:1), <anonymous>:869:55) at Module.enable (eval at module.exports (common.js:1:1), <anonymous>:1151:22) at Module.init (eval at module.exports (common.js:1:1), <anonymous>:782:26) at eval (eval at module.exports (common.js:1:1), <anonymous>:1424:36

Thanks for your time!
Hi folks, I have a real-time search that looks at failed Windows logins, producing a single-value timechart visualization with a sparkline and a trend value next to it.

index=windows EventCode=4625 | timechart span=1h count

Instead of having it snap to the hour, I would like it to show the values without snapping, effectively grouping by the current minute - 60m for each. Is there a way to group like that in real-time searches?
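One way to approximate a non-snapped hourly count (a sketch; whether it fits a real-time window depends on the window settings) is to bucket at a fine granularity and compute a rolling 60-minute sum with `streamstats`:

```spl
index=windows EventCode=4625
| timechart span=1m count
| streamstats time_window=60m sum(count) AS rolling_hourly_count
| fields _time rolling_hourly_count
```

Each point is then "logins in the 60 minutes ending at this minute" rather than a count snapped to clock hours; `time_window` requires events in time order, which the timechart output already provides.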
Hello! I'm implementing a custom circuit breaker in my Spring gateway application, which produces the following event/log:

1/20/23 1:20:28.487 PM
[app=teste-gateway, traceId=traceid, spanId=spanId, INFO CircuitBreakerCustomConfig : Circuit breaker circuitBreakerName=/test-circuit-breaker-gateway-route onStateTransition=State transition from CLOSED to OPEN fromState=CLOSED toState=OPEN stateName=CLOSED_TO_OPEN
host = host-ip
source = http:source-test
sourcetype = sourcetype_teste-gateway

There are about 120 hosts running this application. If requests to a route start receiving many errors, the circuit opens and the host receives the above event; eventually, as the route stabilizes, it changes to HALF-OPEN and then CLOSED. Each route is represented by the circuitBreakerName field.

So, for example, on an instance/host, the route /test-circuit-breaker-gateway-route will not produce the above event/log until it opens the circuit due to many errors. Then it logs an event like the one above with stateName=CLOSED_TO_OPEN. Eventually, it changes to stateName=OPEN_TO_HALF-OPEN, and then either stateName=HALF-OPEN_TO_OPEN (if the errors continue) or stateName=HALF-OPEN_TO_CLOSED (if the errors stop). Depending on how the requests are balanced between hosts, I can have hosts where the circuit is currently HALF-OPEN while, at the same moment, other hosts have the circuit OPEN for the same route.

I would like to keep track of the routes whose last (most recent) status is an OPEN circuit. It could be something like a line chart where the Y-axis is the number of hosts with a currently OPEN circuit, and the X-axis is time. So I imagine I'll have to check the last status of the above log for every host and search for OPEN, but I'm not really sure how to do this. Is it possible? How can I do this?

Thank you for any help in advance. And sorry if I wrote anything wrong; English is not my first language.
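A sketch of the "last status per host and route" step (it assumes the toState and circuitBreakerName key=value pairs in the log are extracted as fields, and the index name is a placeholder): take the latest transition per host and route, keep the ones whose last state is OPEN, and count hosts per route.

```spl
index=example sourcetype=sourcetype_teste-gateway onStateTransition=*
| stats latest(toState) AS last_state by host, circuitBreakerName
| where last_state="OPEN"
| stats dc(host) AS open_hosts by circuitBreakerName
```

For the line chart over time, one hedged option is to schedule this search and `collect` its results into a summary index, then `timechart` the open_hosts values, since `stats latest(...)` on its own produces a snapshot rather than a time series.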
Hi Splunkers, I'm trying to integrate the GCP Chronicle app with Splunk and perform Chronicle-related activities. Could someone please help me with this? Thanks
I'm creating a ServiceNow dashboard in Splunk, and there is a particular column called "dv_priority" that I'd like to assign a color code to. There are five values assigned to the dv_priority field: "1 - Critical", "2 - High", "3 - Moderate", "4 - Low", or "5 - Informational". I'd like to color code these values, for example "1 - Critical" (red), "2 - High" (orange), "3 - Moderate" (yellow), "4 - Low" (purple), and "5 - Informational" (green). What would be the best approach, SPL-wise, to do this with the below query?

index=servicenow sourcetype=* NOT dv_state IN("Closed", "Resolved", "Cancelled") | eval dv_number = if(isnull(dv_number), task_effective_number, dv_number) | eval dv_number = if((isnull(dv_number) OR len('dv_number') == 0), DV_NUMBER, dv_number) | eval number = if((isnull(number) OR len('number') == 0), dv_number, number) | eval number = if((isnull(number) OR len('number') == 0), NUMBER, number) | eval number = if((isnull(number) OR len('number') == 0), "Error", number) | eval number = if(number!=dv_number, dv_number, number) | eval dv_u_subcategory = if((isnull(dv_u_subcategory) OR len('dv_u_subcetegory') == 0), DV_U_SUBCATEGORY, dv_u_subcategory) | eval dv_u_category = if((isnull(dv_u_category) OR len('dv_u_category')==0), DV_U_CATEGORY, dv_u_category) | eval dv_business_service = if(((isnull(dv_business_service) OR len('dv_u_business_service')==0) AND dv_category="MDR Analytics"), "Detect", dv_business_service) | eval dv_business_service = if(((isnull(dv_business_service) OR len('dv_u_business_service')==0) AND dv_category="MDR Engineering"), "Engineering", dv_business_service) | eval dv_business_service = if((isnull(dv_business_service) OR len('dv_u_business_service')==0), DV_BUSINESS_SERVICE, dv_business_service) | eval dv_business_service = if(((isnull(dv_business_service) OR len('dv_business_service')==0) AND dv_u_category="Notable" AND dv_u_subcategory="Security"), "Detect", dv_business_service) | eval
dv_business_service = if((isnull(dv_business_service) OR len('dv_u_business_service')==0), "Error", dv_business_service) | eval dv_business_service = if(dv_u_category="Infrastructure", "Engineering", dv_business_service) | eval state = if((isnull(state) OR len('state')==0), STATE, state) | eval dv_state = if((isnull(dv_state) AND state=1), "New", dv_state) | eval dv_state = if((isnull(dv_state) AND state=3), "Closed", dv_state) | eval dv_state = if((isnull(dv_state) AND state=6), "Resolved", dv_state) | eval dv_state = if((isnull(dv_state) AND state=11), "On-Hold", dv_state) | eval dv_state = if((isnull(dv_state) AND state=18), "In Progress - Customer", dv_state) | eval dv_state = if((isnull(dv_state) AND state=7), "Cancelled", dv_state) | eval dv_state = if((isnull(dv_state) AND state=10), "In Progress - dw", dv_state) | eval dv_state = if((isnull(dv_state) OR len('dv_state')==0), DV_STATE, dv_state) | eval dv_state = if((isnull(dv_state) OR len('dv_state')==0), "Error", dv_state) | eval dv_state = if(dv_state="Error" AND (isnotnull(closed_at) OR len('closed_at') == 0), "Resolved", dv_state) | eval dv_short_description = if((isnull(dv_short_description) OR len('dv_short_description') == 0), short_description, dv_short_description) | eval dv_short_description = if((isnull(dv_short_description) OR len('dv_short_description') == 0), case, dv_short_description) | eval dv_short_description = if((isnull(dv_short_description) OR len('dv_short_description') == 0), DV_SHORT_DESCRIPTION, dv_short_description) | eval dv_category = if(dv_business_service="Detect", "MDR Analytics", dv_category) | eval closed_at = if((isnull(closed_at) OR len('closed_at')==0), CLOSED_AT, closed_at) | eval u_mttn = if((isnull(u_mttn) OR len('u_mttn')==0), U_MTTN, u_mttn) | eval u_mttca_2 = if((isnull(u_mttca_2) OR len('u_mttca_2')==0), U_MTTCA_2, u_mttca_2) | eval u_mttcv = if((isnull(u_mttcv) OR len('u_mttcv')==0), U_MTTCV, u_mttcv) | eval u_mttdi = if((isnull(u_mttdi) OR len('u_mttdi')==0), 
U_MTTDI, u_mttdi) | eval u_mttrv = if((isnull(u_mttrv) OR len('u_mttrv')==0), U_MTTRV, u_mttrv) | eval u_mttc = if((isnull(u_mttc) OR len('u_mttc')==0), U_MTTC, u_mttc) | table _time, number, dv_state, dv_priority, dv_u_category, dv_short_description,dv_assigned_to,dv_assignment_group, opened_at | where dv_assignment_group="Security" | sort - _time | sort - dv_state | dedup number      
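Table cell coloring is typically done in the dashboard's Simple XML rather than in the SPL itself, so the query above can stay unchanged. A sketch of a map-type color palette on the table element (the hex values are illustrative choices for red/orange/yellow/purple/green, not prescribed ones):

```xml
<format type="color" field="dv_priority">
  <colorPalette type="map">{"1 - Critical":#DC4E41,"2 - High":#F1813F,"3 - Moderate":#F8BE34,"4 - Low":#9E61C8,"5 - Informational":#53A051}</colorPalette>
</format>
```

This goes inside the `<table>` element of the panel; each listed dv_priority value then renders with its mapped background color, and unmapped values stay uncolored.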
Hi Splunkers, I have the following token handlers. If the input "Gucid_token_with3handlers" is a 2-digit number, it is returned as the token skillexpressionLength; if it's not a 2-digit number, it is returned as the token Gucid_token. That works fine. Now I want to add handler 3: if the first 3 characters are VQ_, I want the value returned as the token Gucid_token_VQ; otherwise Gucid_token_VQ should get the value *. But it looks like handler 3 is conflicting with handler 1. So my question is: how do I define the 3rd token handler so it works together with the 1st and 2nd token handlers without conflict?

The following is the input token definition with the 3 token handlers:

<input type="text" token="Gucid_token_with3handlers" searchWhenChanged="true"> <label>Gucid/UUID/SID</label> <change> <eval token="Gucid_token"> if(match(value, "^[0-9][0-9]?$"),"", value)</eval> <eval token="skillexpressionLength">if(match(value, "^[0-9][0-9]?$"),value, 0)</eval> <eval token="Gucid_token_VQ">if(substr(value,1,3)=="VQ_"),value, *)</eval> </change> <default></default> </input>

Then I used the following search clause with $Gucid_token_VQ$:

| search VQ = $Gucid_token_VQ$

Then this search panel shows: "search is waiting for input."

Thanks in advance, Kevin
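Worth checking before anything else: the third `<eval>` as pasted has a misplaced parenthesis (the `if(` is closed immediately after the comparison) and an unquoted `*`, so the eval fails, the token is never set, and the panel waits for input. A sketch of a corrected third handler (token names kept from the original):

```xml
<eval token="Gucid_token_VQ">if(substr(value,1,3)=="VQ_", value, "*")</eval>
```

With the parenthesis moved and the fallback quoted, Gucid_token_VQ is always assigned, so it should no longer appear to conflict with handler 1.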
Hi, I would like to understand the workings of the URL monitoring extension.

1. Does it require a browser to work, or does it use cURL to test the URLs?
2. What is the maximum number of URLs that can be monitored using the extension?
3. Does it monitor the URLs sequentially or in parallel? For example, if I need to monitor 100 URLs, will the extension check each URL and then move on to the next one, or will it check all the URLs in parallel?

Regards, Bhaskar
Hey All, wondering if I can get some input on this. I have data coming in as JSON. The fields follow this naming convention:

objects.Server::34385.fields.friendlyname = Server123
objects.Server::88634.fields.friendlyname = Server444

What I'm trying to do is somehow rename the fields so I omit the ::<number> after the Server part. The end result needs to look like this:

objects.Server.fields.friendlyname = Server123
objects.Server.fields.friendlyname = Server444

It's worth mentioning that there are around 10k servers, so I can't list them out one by one.
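One search-time sketch (it assumes each event carries one such field; the output field name `server_friendlyname` is my own): use `foreach` with a wildcard so the 10k server IDs never have to be listed.

```spl
... base search ...
| foreach objects.Server::*.fields.friendlyname
    [ eval server_friendlyname=coalesce(server_friendlyname, '<<FIELD>>') ]
```

`<<FIELD>>` expands to each matching field name in turn, so the eval copies whichever ID-specific field is present into one stable field you can table and report on.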
Hi there!

In the Splunk Add-on for AppDynamics release notes, it is mentioned: "Now Splunk Cloud supported! Updated using Add-On-Builder 4.0 to set python version preference to python3 and pass cloud vetting (no changes in functionality)."

Now this message, and the app itself, are a little old: July 23rd, 2021. However, when I try to install this add-on in Splunk Cloud, it fails AppInspect validation. There are 3 failed checks:

check_python_sdk_version: Detected an outdated version of the Splunk SDK for Python (1.6.6). Please upgrade to version 1.6.16 or later. File: bin/splunk_ta_appdynamics/aob_py3/solnlib/packages/splunklib/binding.py

check_simplexml_standards_version: Change the version attribute in the root node of your Simple XML dashboard default/data/ui/views/home.xml to `<version=1.1>`. Earlier dashboard versions introduce security vulnerabilities into your apps and are not permitted in Splunk Cloud. File: default/data/ui/views/home.xml

check_for_addon_builder_version: The Add-on Builder version used to create this app is below the minimum required version of 4.1.0. Please re-generate your add-on using at least Add-on Builder 4.1.0. File: default/addon_builder.conf Line Number: 4

Any chance a new version will be made available with recent support for Splunk Cloud? We'd love to offload this add-on from our on-prem Heavy Forwarders to Splunk Cloud. Thank you!
Hi, I am trying to get two columns from the same table onto a line graph. Each column is an independent value, so the graph should show two lines; I do not want to consolidate the two columns together. This is the search SPL I am using to pull the data:

------ graph 1 -------
| mstats avg(_value) prestats=true WHERE metric_name="cpu.system" AND "index"="em_metrics" AND "host"="ABC" AND `sai_metrics_indexes` span=10s
| timechart avg(_value) AS Avg span=10s
| fields - _span*

------ graph 2 -------
| mstats avg(_value) prestats=true WHERE metric_name="memory.used" AND "index"="em_metrics" AND "host"="ABC" AND `sai_metrics_indexes` span=10s
| timechart avg(_value) AS Avg span=10s
| fields - _span*

As you can see, almost everything is the same besides the metric_name. I am trying to get both metrics' data onto one graph. I tried to combine both metric_names into one by adding another AND statement, but it won't work. Thanks in advance!
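A sketch that combines both metrics in one search (filters kept from the originals): select both metric names with `IN` (an `AND` of two metric_name equalities can never match, since a data point has only one metric_name) and split the chart by metric_name so each metric becomes its own line.

```spl
| mstats avg(_value) AS Avg WHERE metric_name IN ("cpu.system", "memory.used") AND "index"="em_metrics" AND "host"="ABC" AND `sai_metrics_indexes` span=10s BY metric_name
| timechart avg(Avg) AS Avg span=10s BY metric_name
```

The `BY metric_name` on both commands is what keeps the two series separate instead of averaging them together.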
Hello, I have a list of cities (in a .csv) from around the world and want to put them on a cluster map of the world, with a count of how many times each city occurs in the list. There is no iplocation, lat, lon, etc. data. How can I map the cities out? Thanks
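Since a cluster map needs coordinates, one sketch is to enrich the cities with a coordinates lookup and then aggregate with `geostats`. The `cities.csv` file name, the `city` field, and the `city_coords` lookup (with `lat`/`lon` output fields) are all hypothetical; you would need to source and define such a city-to-coordinates lookup yourself, since Splunk ships no built-in one.

```spl
| inputlookup cities.csv
| stats count by city
| lookup city_coords city OUTPUT lat, lon
| geostats latfield=lat longfield=lon sum(count)
```

Rows whose city fails to match the lookup will have no coordinates and will drop off the map, so it is worth checking for unmatched names (e.g. `| where isnull(lat)`) while building the lookup.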