All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Splunk noob here. How do I search for Windows Server versions (2008, 2012, etc.)? Can this be done?
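A possible starting point, assuming the Splunk Add-on for Microsoft Windows is installed with WinHostMon inputs enabled (the index, sourcetype, and field names below are assumptions about your setup, not a confirmed configuration):

    index=windows sourcetype=WinHostMon type=operatingSystem
    | stats latest(OS) AS os_version by host
    | search os_version="*2008*" OR os_version="*2012*"

This lists the most recently reported OS string per host and filters to the versions of interest.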
I am fairly new to Splunk, coming from a SQL database background, so I may still be trying to do things in a "relational" way. That said, I have two data sources: one holds test results (a list of individual results) and the other holds test suites (metadata for a set of tests, such as the number of tests and the minimum number required to pass). I want to compute the ratio of tests that passed and compare it with a passing threshold ratio. To do this I join the test results with the test summary data like this:

    index=test_results
    | where (!isnull(test_result))
    | join type=inner left=L right=R where L.test_summary_id=R.test_summary_id
        [search index=test_summaries]
    | stats values(L.project_short) AS project,
            count(eval(L.test_result=='PASS')) AS tests_passing,
            values(R.number_of_tests) AS number_of_tests,
            values(R.passing_threshold) AS pass_threshold
        by L.sw_release_id

The clause count(eval(L.test_result=='PASS')) AS tests_passing always evaluates to 0, but I expect it to be the number of tests with the value "PASS" for that sw_release_id. In other searches where I am not joining two tables, I can compute tests_passing correctly. Is there something about a join that prevents me from doing evaluations? Should I not use a join? Thanks...
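One likely culprit, offered as a sketch: in eval expressions, single quotes denote field names and double quotes denote string literals, so 'PASS' is read as a reference to a (nonexistent) field named PASS rather than the string "PASS". The field reference itself also needs single quotes because L.test_result contains a dot:

    | stats count(eval('L.test_result'=="PASS")) AS tests_passing by L.sw_release_id

With that quoting, the eval compares the joined field's value against the literal string, which should make the count nonzero regardless of the join.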
Hello, this is the first time I have posted here, but I have learned a lot from this website just by using Google search.

Situation: At work, the server admins asked if I could "silence" Splunk email alerts while they were doing maintenance, so that they do not get emails about errors during server maintenance. I was able to do this by creating a maintenance.log in the /var/log/ folder that Splunk monitors. If the admins write "start of maintenance...", then any alert that monitors these logs stops sending emails; when the admins write "end of maintenance...", Splunk knows it can resume sending emails since the maintenance period is complete. This was useful for silencing Apache access log alerts that occurred during maintenance, meaning the admins did not get alerts for anything the Apache access log wrote between the _time of "start of maintenance..." and the _time of "end of maintenance...".

Task: I have to show search results in a dashboard that exclude anything reported during a maintenance period. This means that any results between the _time of "start of maintenance..." and the _time of "end of maintenance..." should not be included. Moreover, maintenance may happen several times within the search window: for example, twice in one day, or, over a one-month search range, three "start of maintenance..." entries with three corresponding "end of maintenance..." entries.

Action: I have written SPL that gets all the results:

    earliest=-1d (host="Server-web" source="/var/log/httpd24/error_log") OR (host="Server-Web" index=bizapps source="/var/log/bizapps_maintenance.log")

I am not sure whether Splunk SPL can pull this off, but I am confident someone can help me out. If you need more info, let me know.
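A rough sketch of one way to do this with streamstats, building on the search from the question (it assumes the marker strings literally appear in the maintenance log events):

    earliest=-1d (host="Server-web" source="/var/log/httpd24/error_log") OR (host="Server-Web" index=bizapps source="/var/log/bizapps_maintenance.log")
    | sort 0 _time
    | eval maint_flag=case(searchmatch("start of maintenance"), 1,
                           searchmatch("end of maintenance"), 0,
                           true(), null())
    | streamstats last(maint_flag) AS in_maintenance
    | where coalesce(in_maintenance, 0)=0 AND source="/var/log/httpd24/error_log"

Sorting ascending by _time and carrying the last non-null flag forward marks every event between a "start" and an "end" marker; the final where keeps only error-log events that fall outside those windows. Because the flag simply follows the most recent marker, this handles any number of maintenance periods in the search range.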
How do I schedule an alert to run every 5 minutes between the hours of 9:30 and 16:00 Eastern Time Monday-Friday?
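A single cron expression cannot express a 9:30-16:00 window at 5-minute steps, so one common workaround, sketched here, is to schedule across the whole 9:00-16:59 span and guard the half hours inside the search itself (this assumes the Splunk server clock is set to Eastern Time; otherwise shift the hours accordingly):

    Cron expression:  */5 9-16 * * 1-5

    Appended to the alert search:
    | eval hm=tonumber(strftime(now(), "%H%M"))
    | where hm>=930 AND hm<=1600

The cron fires every 5 minutes from 9:00 to 16:55 on weekdays, and the guard makes the runs outside 9:30-16:00 return no results, so no alert triggers.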
I'm looking for a way to extract a value from the middle of a string. The value I want is everything after the first underscore and before the trailing dash. Example: from GET_tres_main.aspx_detail_showall-0 I want tres_main.aspx_detail_showall.
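A rex sketch, assuming the string is in _raw (swap in the actual field name if it is already extracted):

    | rex field=_raw "^[^_]+_(?<value>.+)-[^-]*$"

The leading [^_]+_ skips everything up to and including the first underscore, and the greedy .+ followed by -[^-]*$ stops at the last dash, yielding tres_main.aspx_detail_showall for the example above.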
Hi all, I need to get the value Windows 7 from the string below. I used something like OS[\n]+([^\n]+), but that captures everything from "Value" through "Windows 7". Could someone please help me capture only Windows 7?

    DeviceProperties: [
      {
        Name: OS
        Value: Windows 7
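A sketch that anchors on both the Name and Value lines rather than on "OS" alone:

    | rex "Name:\s*OS\s*[\r\n]+\s*Value:\s*(?<os>[^\r\n]+)"

Matching "Name: OS" first, then skipping the line break to the following "Value:" label, keeps the capture group limited to the value itself (Windows 7 here) instead of starting at "Value".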
Say I'm just trying to find whether anything in Splunk contains the number "12345678". Isn't there a way to run a simple search for that? Or if I'm looking for a specific user, is there a way to write a query like "jsmith@gmail.com"? Essentially I'm looking for anything associated with that username or with the number above.
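Bare quoted terms do work as a search on their own; the sketches below scan every index you can read, which can be slow over long time ranges, so narrow the time picker first:

    index=* "12345678"

    index=* "jsmith@gmail.com"

If the value is embedded in a larger token in the raw event (for example user=jsmith@gmail.com), wrapping it as TERM("jsmith@gmail.com") can make the lookup faster and more exact.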
I am trying to run a search where I want my data to be more than 12 months old. However, when I run this search it brings up data between 2 days old and 12 months old. Anyone got any ideas on where I am going wrong?

    | inputlookup append=T access_tracker where lastTime_user>=1659602543.000000
    | stats min(firstTime) as firstTime, values(second2lastTime) as second2lastTime, values(lastTime) as lastTime_vals, max(lastTime) as lastTime by user
    | eval "second2lastTime"=mvdedup(mvappend('second2lastTime',NULL,'lastTime_vals')),
        "second2lastTime"=if(mvcount('lastTime')=1 AND mvcount('second2lastTime')>1 AND 'second2lastTime'='lastTime',
            split(ltrim(replace("|".mvjoin('second2lastTime',"|"),"\|".'lastTime',""),"|"), "|"),
            'second2lastTime'),
        "second2lastTime"=max('second2lastTime'),
        inactiveDays=round((lastTime-second2lastTime)/86400,2),
        _time=lastTime
    | search inactiveDays>=12mo
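A sketch of the likely fix: inactiveDays is a plain number of days, so a relative-time token like 12mo is not meaningful in that comparison. Either compare numerically:

    | where inactiveDays>=365

or, keeping the cutoff relative to the current time, filter on an epoch boundary:

    | where lastTime<=relative_time(now(), "-12mon")

Both assume lastTime is an epoch timestamp, which the round()/86400 arithmetic in the question suggests it is.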
Hello, I have complex JSON events ingested as *.log files. I am having trouble extracting fields from these files/events. Any help on how to extract key-value pairs from these events would be highly appreciated. One sample event is given below. Thank you so much.

    2022-07-15 12:44:03 - {
      "type" : "TEST",
      "r/o" : false,
      "booting" : false,
      "version" : "6.2.7.TS",
      "user" : "DS",
      "domainUUID" : null,
      "access" : "NATIVE",
      "remote-address" : "localhost",
      "success" : true,
      "ops" : [{
          "address" : [
            { "subsystem" : "datasources" },
            { "data-source" : "mode_tp" }
          ],
          "address" : [
            { "cservice" : "management" },
            { "access" : "identity" }
          ],
          "DSdomain" : "TESTDomain"
        },
        {
          "address" : [
            { "cservice" : "management" },
        {
          "operation" : "add",
          "address" : [
            { "subsystem" : "finit" },
            { "bucket" : "TEST" },
            { "clocal" : "passivation" },
            { "store" : "file" }
          ],
          "passivation" : true,
          "purge" : false
        },
        {
          "operation" : "add",
          "address" : [
            { "subsystem" : "finit" },
            { "bucket" : "TEST" }
          ],
          "module" : "dshibernate"
        },
        {
          "operation" : "add",
          "address" : [
            { "subsystem" : "finit" },
            { "bucket" : "hibernate" },
            { "clocal" : "entity" }
          ]
        },
        {
          "operation" : "add",
          "address" : [
            { "subsystem" : "finit" },
            { "bucket" : "hibernate" },
            { "clocal" : "entity" },
            { "component" : "transaction" }
          ],
          "model" : "DSTEST"
        },
        {
          "operation" : "add",
          "address" : [
            { "subsystem" : "infit" },
            { "bucket" : "hibernate" },
            { "clocal" : "entity" },
            { "memory" : "object" }
          ],
          "size" : 210000
        },
        {
          "operation" : "add",
          "address" : [
            { "subsystem" : "DS" },
            { "workplace" : "default" },
            { "running-spin" : "default" }
          ],
          "Test-threads" : 45,
          "queue-length" : 60,
          "max-threads" : 70,
          "keepalive-time" : {
            "time" : 20,
            "unit" : "SECONDS"
          }
        },
        {
          "operation" : "add",
          "address" : [
            { "subsystem" : "DS" },
            { "workplace" : "default" },
            { "long-running-threads" : "default" }
          ],
          "Test-threads" : 45,
          "queue-length" : 70,
          "max-threads" : 70,
          "keepalive-time" : {
            "time" : 20,
            "unit" : "SECONDS"
          }
        },
      }] }
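Since each event is a timestamp prefix followed by a JSON document, one approach, sketched here, is to strip the prefix so Splunk's automatic JSON extraction can take over (the sourcetype name ds:json is an assumption):

    props.conf:
    [ds:json]
    SEDCMD-strip_ts = s/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} - //
    KV_MODE = json

Alternatively, entirely at search time:

    index=applogs sourcetype=ds:json
    | rex field=_raw "(?s)^[^{]*(?<json>\{.+\})\s*$"
    | spath input=json
    | table type, user, success, "ops{}.operation"

Both routes require the payload after the prefix to be valid JSON; if the events really contain the unbalanced braces visible in the sample above, the upstream logger may need fixing before any extraction will be reliable.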
Hi All, I have asked about this problem in the Splunk community before, so apologies for the duplicate query. However, I have been unable to get a solution that makes my results 100% correct, and the previous threads are old, so I am not sure my comments will get visibility there.

Resources in hand: I have a lookup table with many fields; I am concerned with two of them, index and host. I also have a custom index with many fields, of which I need to use orig_index and orig_host.

Requirement: I need to fetch, for each index, the list of hosts that are present in the lookup table but not in the custom index. I tried the following with a time range of last 24 hours:

    | inputlookup table.csv
    | fields index, host
    | search NOT [search index="xxx" | rename orig_* AS * | table index, host | format]

But when I cross-check the last reported date/time of a host value against the index, I find it reported within the last hour. I used this SPL for the cross-check:

    | tstats max(indextime) AS lastTime where index="dummy" AND host="10.10.10.10" BY host
    | convert ctime(lastTime)

Thus, I need your help to resolve the issue. Thank you.
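One hedged suggestion: a subsearch over raw events can silently truncate at the subsearch result limit, which would make hosts look "missing" even though they reported. Aggregating before the NOT shrinks the subsearch to one row per index/host pair, and the cross-check should reference _indextime explicitly:

    | inputlookup table.csv
    | fields index, host
    | search NOT [search index="xxx" earliest=-24h
        | rename orig_index AS index, orig_host AS host
        | stats count by index, host
        | fields index, host]

    | tstats max(_indextime) AS lastTime where index="dummy" AND host="10.10.10.10" BY host
    | convert ctime(lastTime)

Because stats deduplicates the pairs, the subsearch returns far fewer rows and is much less likely to hit the limit that causes false "missing host" results.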
Hi community, I have a table like the one below:

    Client   Error_code   Error Results
    abc      1003         2
    abc      1003         3
    abc      1013         1
    abc      1027         3
    abc      1027         5
    abc      1013         2
    abc      Total        16

I am trying to get distinct error codes in the table, combining the error results as well. I tried stats and dedup, but they didn't work.
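A sketch, assuming Error_code and "Error Results" are the actual field names and the Total row should be recomputed rather than carried through:

    ... | search Error_code!="Total"
    | stats sum("Error Results") AS "Error Results" by Client, Error_code
    | addcoltotals labelfield=Error_code label=Total

stats collapses each duplicate code into one row while summing its results (1003 becomes 5, 1013 becomes 3, 1027 becomes 8), and addcoltotals appends a fresh Total row.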
I have a query that returns multi-row, multi-column results. I want to take a specific 'cell' of the result and assign it to a token. I have done this with a single-row table, using <set token="foo">$result.TYPE1$</set>, but I can't get the syntax for a multi-row table. For example, given the table below, I want to tokenize the value chevy.

    VEHICLE   TYPE1    TYPE2
    MOTO      harley   honda
    CAR       chevy    oldsmobile
    TRUCK     fire     garbage

I thought $result.CAR.TYPE1$ would do it, but no. (To be fair, this problem has plagued me for some time, but I have finally gotten to the point of asking.) This is not a drilldown or click.value; it is a chart (results) from which I want to pull out a specific returned value.
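One workaround, as a sketch: $result.<field>$ only ever exposes the first row of a search's results, so there is no row-addressing syntax like $result.CAR.TYPE1$. A secondary search that filters the table down to the row you want makes that cell the first row (the index/sourcetype below are placeholders for your actual base search):

    <search>
      <query>index=main sourcetype=vehicles | where VEHICLE="CAR" | fields TYPE1</query>
      <done>
        <set token="foo">$result.TYPE1$</set>
      </done>
    </search>

The <done> handler fires once the search finishes, at which point $result.TYPE1$ holds chevy.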
Based on what I've studied, I should be able to show a new field named item with a search such as the one below:

    index=existing_index | eval item = "apple" | stats count by source | table source, item, count

I would expect output similar to the table below:

    source        item    count
    a/b/123.log   apple   5
    a/c/915.log   apple   6
    a/b/574.log   apple   1

Instead, this happens:

    source        item    count
    a/b/123.log           5
    a/c/915.log           6
    a/b/574.log           1

Why did I not get what I expected?
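The short explanation, with a sketch: stats is a transforming command, so only the by fields and the aggregations survive it; item is discarded before table ever sees it. Moving the eval after stats (or adding item to the by clause) keeps the column:

    index=existing_index
    | stats count by source
    | eval item="apple"
    | table source, item, count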
Hi All, we have events from different host values that refer to the same machine. Is there a search query to combine them into a single host field value? Please suggest.

    dallvcrfix1p                    1913
    dallvcrfix1p.ops.invesco.net    20
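A sketch that normalizes the host to its short name before counting (assuming the fully qualified and short names really do refer to the same machine):

    ... | eval host_short=lower(mvindex(split(host, "."), 0))
    | stats count by host_short

split() and mvindex() take the part before the first dot, so dallvcrfix1p and dallvcrfix1p.ops.invesco.net both roll up under dallvcrfix1p.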
Hi, on the latest versions of Dashboard Examples (v8.2.5) and Splunk (v9.0), I see a bug with the table cell highlighting JS. When you go to the Table Cell Highlighting page in the Dashboard Examples app, the cell highlighting works correctly. But if you refresh the page with Ctrl+R, the highlighting randomly disappears. I can reproduce the same bug with a custom dashboard. Do you see the same bug?
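This looks like a race between the JS and the table render. As an untested sketch, deferring the renderer registration until the dashboard is ready (via the simplexml/ready! loader) usually removes that kind of intermittent failure; the column name and table id below are assumptions:

    require([
        'splunkjs/mvc',
        'splunkjs/mvc/tableview',
        'splunkjs/mvc/simplexml/ready!'   // resolves only once the dashboard is fully parsed
    ], function (mvc, TableView) {
        var HighlightRenderer = TableView.BaseCellRenderer.extend({
            canRender: function (cell) {
                return cell.field === 'count';           // assumed column to highlight
            },
            render: function ($td, cell) {
                if (parseFloat(cell.value) > 100) {
                    $td.addClass('range-cell range-severe');
                }
                $td.text(cell.value);
            }
        });
        var table = mvc.Components.get('highlight_table');   // assumed table id from the XML
        if (table) {
            table.getVisualization(function (tableView) {
                tableView.addCellRenderer(new HighlightRenderer());
                tableView.render();
            });
        }
    });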
I'm trying to make a punchcard to visualize incoming issues per hour over the previous week. This is the result I get with the following code:

    | eval issues="Issue Priority"
    | stats count(issues) by date_hour date_wday

I really want to get more bins, like on the right side of my screenshot, so that I can assign values to colors, e.g. 0-10 = green, 11-70 = yellow, 71-150 = red. Is there something I need to include?
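A sketch that buckets the counts into the ranges from the question so a category field can drive the coloring (how the punchcard visualization consumes an extra column varies by app version, so treat this as a starting point):

    | stats count AS issues by date_hour, date_wday
    | eval severity=case(issues<=10, "green", issues<=70, "yellow", true(), "red")

case() evaluates its clauses in order, so each count lands in exactly one bucket.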
Hello, I have two searches with the same index but different host names. Is it possible to put the results of both searches behind a dropdown in a dashboard, so that a table shows the results for whichever host name is selected? The two searches have different field names. Can someone guide me on this please? Thanks in advance. Arshi.
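A Simple XML sketch: one dropdown sets the host token, and a <change> handler swaps in the field list for that host (the index, host values, and field names below are placeholders):

    <form>
      <fieldset>
        <input type="dropdown" token="host_tok">
          <label>Host</label>
          <choice value="hostA">Host A</choice>
          <choice value="hostB">Host B</choice>
          <change>
            <condition value="hostA">
              <set token="fields_tok">fieldA1 fieldA2</set>
            </condition>
            <condition value="hostB">
              <set token="fields_tok">fieldB1 fieldB2</set>
            </condition>
          </change>
        </input>
      </fieldset>
      <row>
        <panel>
          <table>
            <search>
              <query>index=main host=$host_tok$ | table $fields_tok$</query>
            </search>
          </table>
        </panel>
      </row>
    </form>

Because both the host filter and the displayed columns come from tokens, a single panel serves both searches despite their different field names.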
In the "Configure The OpenTelemetry Collector" documentation I see "Attribute Description": service.name string shoppingcart Logical name of the service; equivalent to your AppDynamics tier n... See more...
In the "Configure The OpenTelemetry Collector" documentation I see "Attribute Description": service.name string shoppingcart Logical name of the service; equivalent to your AppDynamics tier name..... service.namespace string Shop A namespace for the service.name; equivalent to your AppDynamics application name.... Is this correct? "service.namespace" = "application name" and "service.name" = "tier name". Or should it be "service.namespace" = "tier name" and "service.name" = "application name" and this is just a mistake documenting it?
I would like to create a dashboard that shows its output when a time is picked and that time matches the values of two fields in the indexed data itself. How can I make this?
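If the two fields are epoch timestamps, one sketch is to compare them against the boundaries of the selected time range using addinfo, which exposes the search window as info_min_time and info_max_time (start_time and end_time are assumed field names):

    index=mydata
    | addinfo
    | where start_time >= info_min_time AND end_time <= info_max_time

Only events whose two time fields fall inside the picked range survive the where, so the panel stays empty otherwise.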
Please help me, I have two problems.

The first problem is with sending alerts by email: when I analyze index=_internal "sendmail", it reports a bad-password error, but I am sure I am entering my password correctly.

The second problem is with access to Splunkbase via the Splunk portal: I can't access Splunkbase through the Splunk Enterprise portal to download applications (a bad password there too), although I can do it through the URL. Is there a workaround? Sending alert emails is very important. For app installation I can manage: I download from the site and then install. But I need a solution for the alert email problem.
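A couple of hedged checks that often narrow this down: first pull the exact sendmail error, then re-enter the SMTP password under Settings > Server settings > Email settings (stored email passwords are encrypted against the instance's splunk.secret, so they can silently break after a migration or restore even though the password you type is correct):

    index=_internal sourcetype=splunkd "sendmail" log_level=ERROR

The full error text usually distinguishes an SMTP authentication failure from a TLS/port mismatch, which need different fixes.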