All Topics

Hi. I have a question. Take the following lookup table as an example:

value | data | time
1111  | 2222 | 12312313 (epoch time)

In this situation, can a TTL be configured using the epoch time field? The epoch time is the time when the value was registered. I know there is a "Configure time-based lookup" option for lookup tables. Can I use this to configure a TTL? I would like entries in the lookup table to expire after about a month. Thanks.
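A time-based lookup controls which row matches for a given _time; it does not expire rows on its own. One common workaround (a sketch only — the lookup name my_lookup.csv is a placeholder, and the time field is assumed to hold epoch seconds as in the example above) is a scheduled search that rewrites the lookup, dropping rows older than 30 days:

```spl
| inputlookup my_lookup.csv
| where time > relative_time(now(), "-30d")
| outputlookup my_lookup.csv
```

Scheduling this once a day gives the lookup an effective one-month TTL.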
Hello all, I keep getting duplicate values in my multiselect dropdown. I made sure the field for label, the field for value, and the query in the search were correct. I'm not sure what else I need to check. My source is below; any advice is welcome.

<input type="multiselect" token="tok_space_name" searchWhenChanged="true">
  <label>Select Space</label>
  <choice value="*">All</choice>
  <default>*</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>space_name="</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>Select Space</fieldForLabel>
  <fieldForValue>space_name</fieldForValue>
  <search>
    <query>index="lll_soruceIdx.idx" | stats values(space_name) as space_name | mvexpand space_name</query>
    <earliest>$time_range.earliest$</earliest>
    <latest>$time_range.latest$</latest>
  </search>
</input>
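Two things stand out in the snippet above (observations, not a confirmed fix): fieldForLabel points at a field literally named "Select Space", which the populating search does not produce, and the stats/mvexpand combination can emit the same value more than once across events. A hedged sketch of a deduplicated populating query, keeping the same index and field names:

```spl
index="lll_soruceIdx.idx"
| stats count by space_name
| fields space_name
```

With this, both fieldForLabel and fieldForValue can be set to space_name, and each space appears exactly once.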
Here is the event, with escaped quotes exactly as it appears in the log:

{ \\\"person\\\":{\\\"name\\\":{\\\"firstName\\\":\\\"John\\\",\\\"lastName\\\":\\\"Doe\\\"},\\\"address\\\":{\\\"street\\\":\\\"100 Main Ave\\\",\\\"city\\\":\\\"Redwood City\\\",\\\"usState\\\":\\\"CA\\\",\\\"zipCode\\\":\\\"94061\\\",\\\"country\\\":\\\"United States\\\",\\\"phones\\\":[],\\\"emails\\\":[],\\\"addressLines\\\":[]},\\\"addresses\\\":[],\\\"phones\\\":[{\\\"phoneType\\\":\\\"Home\\\",\\\"phoneNumber\\\":\\\"6500000000\\\"}],\\\"email\\\":\\\"johndoe@gmail.com\\\",\\\"dateOfBirth\\\":\\\"1900/01/01\\\",\\\"nationalId\\\":\\\"100\\\",\\\"gender\\\":\\\"Male\\\"},\\\"credential\\\":{\\\"userName\\\":\\\"johndoe@gmail.com\\\",\\\"password\\\":\\\"Password\\\",\\\"securityQuestion\\\":\\\"Name of First Car?\\\",\\\"securityAnswer\\\":\\\"Volvo\\\"}\"" }

I need help extracting the email in a Splunk search query from the above JSON, which contains backslashes in the logs. I have already grabbed the name tag from a very big JSON log using spath, and I am calling that tag "nametagforthisjson" to simplify. I tried this:

| rex field=nametagforthisjson max_match=0 "\"email:\\\\\\\":\\\\\\\"(?<email>.*)\"(?=,)"
| table email

I see the email label printed but no value, so my regex is wrong. The email value johndoe@gmail.com belongs to the email tag, so the value runs up to the comma (,). I am using 7 backslashes (2 backslashes for one \ and 1 for "). Regex query version: https://regex101.com/r/8BevNW/1
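Rather than counting backslashes exactly, one simpler approach (a sketch — the exact number of literal backslashes on disk is hard to know, so this skips over any run of them) is to match the key, consume whatever mix of backslashes and quotes surrounds it, and stop the capture at the first backslash, quote, or comma:

```spl
| rex field=nametagforthisjson max_match=0 "email[\\\\\"]*:[\\\\\"]*(?<email>[^\\\\\",]+)"
| table email
```

In the SPL string, [\\\\\"]* compiles down to the regex character class [\\"]* (any run of backslashes or quotes), so the pattern works regardless of escaping depth. It also cannot accidentally match the "emails" key, because the "s" breaks the match before the colon.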
Hi, here are my searches:

index=foo <search criteria> | table user _time
index=bar <search criteria> | table user _time

The user field values are passed from the inner to the outer search:

index=foo [search index=bar <search criteria> | eval time1=_time | table user time1] <search criteria>
| eval time2=_time
| table user time1 time2

I want to create a table: user, time1, time2. Then I will compute a delta on the time difference. I am stuck trying to get the time carried over from the inner search to the outer search, and I'm not sure this is even possible. It's been a while, but I am pretty sure I have done this before. It seems that whenever I pass the new field time1, the outer search treats it as search criteria, which produces no results. Thanks!
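That behavior is expected: every field a subsearch returns becomes a literal search term (time1=<value>) in the outer search, and since time1 is not in the indexed data, nothing matches. A common alternative (a sketch, assuming user identifies the same entity in both indexes and keeping the <search criteria> placeholders) is to search both indexes at once and split the timestamps with stats:

```spl
(index=foo <search criteria>) OR (index=bar <search criteria>)
| eval time1=if(index=="bar", _time, null()), time2=if(index=="foo", _time, null())
| stats min(time1) as time1, min(time2) as time2 by user
| eval delta=time2-time1
```

This avoids the subsearch entirely, so no field leaks back into the outer search as criteria.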
How do you search for a hash value in Splunk? Do we need to use a specific index?
Is there any capability in the "Splunk App for Jenkins" where I can search my job_name based on the input_step parameter used in my Jenkins pipeline?
I have the following log example, and Splunk correctly pulls the first few (non-nested) fields as well as the first value pair of the nested fields. However, after that first field, Splunk does not seem to recognize the remaining fields.

{
  "sessionId": "kevin70",
  "service": "RAF",
  "request": {
    "vendorId": "Digital",
    "clientId: "1234567890d"
  },
  "response": {
    "vendorId": "Digital",
    "clientId": "1234567890d",
    "transactionStatus": "7000",
    "transactionMessage": "Success"
  },
  "elapsedTime": "513",
  "timestamp_begin": 2021-04-26T21:33:43.893Z,
  "level": "info",
  "message": "SUCCESS",
  "timestamp": "2021-04-26T21:33:44.406Z"
}

My props.conf looks like the following:

[json_v3]
BREAK_ONLY_BEFORE = ^{
LINE_BREAKER = ^{
KV_MODE = json
NO_BINARY_CHECK = true
TZ = America/Chicago
category = Structured
description = A variant of the JSON source type, with support for nonexistent timestamps
disabled = false
pulldown_type = true
BREAK_ONLY_BEFORE_DATE =

My inputs.conf looks like this:

[monitor:///home/myuser/json_test.log]
index = personalizedoffer
source = json_test.log
sourcetype = json_v3
host = myhost

The last value pair that Splunk recognized is request.vendorId. After that, no other fields are automatically generated. Additionally, I have attempted to use spath by piping it onto my simple search, which is:

index=personalizedoffer source="json_test.log"

I want the value pairs represented, including: request.clientId, response.vendorId, response.clientId, response.transactionStatus, response.transactionMessage, elapsedTime, timestamp_begin, level, message, timestamp. Any help is appreciated!
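Worth noting about the sample event above: it stops being valid JSON at exactly the point where extraction stops. The "clientId: key in the request block is missing its closing quote, and the timestamp_begin value is unquoted, so KV_MODE=json (and spath) will give up right after request.vendorId. If the producer can be fixed to emit valid JSON, a minimal props stanza sketch (a hedged variant of the stanza above, not a confirmed fix — note that LINE_BREAKER requires a capturing group, which ^{ lacks) would be:

```conf
[json_v3]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\{
KV_MODE = json
NO_BINARY_CHECK = true
TZ = America/Chicago
```

With valid JSON and event breaking handled by LINE_BREAKER alone, all of the nested response.* fields should be extracted automatically.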
I have been trying to use a drilldown to link a panel from one dashboard so that it opens another dashboard with a static option from a dropdown form pre-selected. The problem is that I can't manage to create a parameter that fits both the name and the value of that static option.

What I basically need is: when clicking the VNF's Down panel (first image below), it has to open the other dashboard (second image below) with a drilldown static option selected (in this case, VNF), if that is possible.

Thanks!
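A common pattern for this (a sketch — the app path my_app, dashboard name target_dashboard, and token name vnf_type are assumptions; substitute the target dashboard's real form token) is to pass only the value through the URL with a form.<token> parameter:

```xml
<drilldown>
  <link target="_blank">/app/my_app/target_dashboard?form.vnf_type=VNF</link>
</drilldown>
```

The label shown in the dropdown comes from the matching <choice> definition in the target dashboard, so you only need to pass the value; the target dashboard renders the corresponding name itself.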
I have learned that the default retention for logs is 6 years. So how do I view/use some of this data going back, say, 2-3 years?
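As long as the data is still within retention, it is searched like any other data; you just widen the time range, for example with relative time modifiers (a minimal sketch — the index name is a placeholder):

```spl
index=my_index earliest=-3y latest=-2y
```

One caveat: buckets older than the index's configured frozen cutoff may have been deleted or archived, in which case they are no longer searchable without restoring them.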
How do I assign multiple risk object fields and object types in the Risk Analysis response action? I know it's possible from search using appendpipe and sendalert, but we want this to be added from the response action. Example as below:

Risk Score - 20
Risk Object Field - user, ip, host
Risk Object Type - System, user
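For reference, a sketch of the search-side workaround mentioned above that fans one result out into one row per risk object (field names are from the example; the type-to-field mapping and score are illustrative, and "risk" is the ES risk alert action):

```spl
| eval ro=mvappend("user:".user, "system:".ip, "system:".host)
| mvexpand ro
| rex field=ro "(?<risk_object_type>[^:]+):(?<risk_object>.+)"
| eval risk_score=20
| fields - ro
| sendalert risk
```

Each original event becomes three risk events, one per object, each carrying its own risk_object_type.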
Hello, I am building a query to display a line graph of status (offline, online) over a period of 30 days. The query is currently so slow that it usually doesn't finish. I'm looking for assistance to see if I can do something differently to speed it up. Thanks!

Current query:

index=mydata sourcetype="mySourceType" (_raw=*offline* OR _raw=*online*)
| eval status=if(like(_raw, "%offline%"),"Offline","Online")
| timechart span=1d count by status
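The wildcard filters like _raw=*offline* are the likely culprit: a leading wildcard cannot use the index, so Splunk scans nearly every event. If "offline" and "online" appear as whole tokens in the events, bare quoted terms are resolved against the index instead (a sketch under that assumption):

```spl
index=mydata sourcetype="mySourceType" ("offline" OR "online")
| eval status=if(searchmatch("offline"), "Offline", "Online")
| timechart span=1d count by status
```

For a standing 30-day panel, scheduling this into a summary index or using report acceleration is a further option.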
Hi, has anyone ever tried to set up their data collectors so that they only display when values are present, or how did you build your analytics queries to show this? We have a search form with several different fields that can be used, i.e. date range, ID, payment method, etc. We want to be able to trend what performance is like when the different searches are used. The only way I have been able to do this is to have a bunch of different searches where I list out that each data collector is not null. Is this the best way to get this done? Has anyone had any luck with this?
Is there any API that we can use to get the status of a specific health rule? We'd like to use the health rule status during application deployments to validate that the application is still in good working condition.
Hi all, I'm having an issue identifying the correct blacklist regex to skip a few log files that are loading into Splunk. Below is my monitoring path as configured in the inputs.conf file:

[monitor:///project-abc/src/logs/*.log]

Log files are created daily, as below:

/project-abc/src/logs/interestrate_runner_2021-04-23_15:06:05.123456_3456789012345_7890.log
/project-abc/src/logs/contractrate_runner_2021-05-21_16:06:05.654321_2345345678901_7891.log
/project-abc/src/logs/savingscost_runner_2021-05-21_17:08:05.214356_2345345678901_7892.log
/project-abc/src/logs/interestrate_2021-04-23_15:06:05.123456_3456789012345_7890.log
/project-abc/src/logs/contractrate_2021-05-21_16:06:05.654321_2345345678901_7891.log
/project-abc/src/logs/savingscost_2021-05-21_17:08:05.214356_2345345678901_7892.log

I want to blacklist these files:

/project-abc/src/logs/interestrate_runner_2021-04-23_15:06:05.123456_3456789012345_7890.log
/project-abc/src/logs/contractrate_runner_2021-05-21_16:06:05.654321_2345345678901_7891.log
/project-abc/src/logs/savingscost_runner_2021-05-21_17:08:05.214356_2345345678901_7892.log

I have tried the regexes below, but none of them worked:

[monitor:///project-abc/src/logs/*.log]
blacklist = .*runner.*\.(log)$
blacklist = runner\.(log)$

Can someone please help? What is the correct regex to skip the log files whose names contain the string "runner"?
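blacklist is an unanchored regex matched against the full file path, so a plain substring is usually enough. Also note that when a stanza lists blacklist twice, one setting simply overwrites the other, so only one pattern ends up in effect. A sketch (assuming the forwarder is restarted after the change; already-indexed files will not be removed, only new reads stopped):

```conf
[monitor:///project-abc/src/logs/*.log]
blacklist = _runner_
```

Matching _runner_ with the surrounding underscores avoids accidentally excluding a future file that merely contains "runner" inside another word.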
Hello, I have a group of events like this (for one specific User Id):

2021-04-27 11:45:23  User Id: 123 Session Complete
2021-04-27 11:45:12  User Id: 123 Begin session  time: 1619538290
2021-04-27 11:44:56  User Id: 123 Begin session  time: 1619538290
2021-04-27 11:44:50  User Id: 123 Begin session  time: 1619538290
2021-04-27 11:42:25  User Id: 123 Begin session  time: 1619538145
2021-04-27 11:42:14  User Id: 123 Session Complete

In this example, I want to grab all of the events from 11:44:50 through 11:45:23, because they have the same time value and end with a "Session Complete". However, my current query also includes the event at 11:42:25. How can I rewrite it to exclude that entry and keep only the events from 11:44:50 up to the Session Complete message? My current query is below:

index=INDEX host=HOSTNAME sourcetype=SOURCETYPE
| rex field=_raw "User\sId:\s(?<user_id>\d+)\sBegin\ssession\s+time:\s(?<time_value>\d+)"
| rex field=_raw "User\sId:\s(?<user_id>\d+)\sSession\sComplete"
| where user_id<2000
| eval begin=if(match(_raw,"Begin"),_time,null())
| eval complete=if(match(_raw,"Complete"),_time,null())
| sort 0 user_id time_value -_time
| streamstats min(complete) as complete by time_value user_id
| stats min(begin) as begin by time_value user_id complete
| fieldformat complete=strftime(complete,"%Y-%m-%d %H:%M:%S")
| fieldformat begin=strftime(begin,"%Y-%m-%d %H:%M:%S")
| eval duration=tostring((complete-begin), "duration")
| where (complete-begin)>0
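One way to bound each group at a "Session Complete" event (a sketch building on the rex extractions from the query above; untested against edge cases such as a session with no Complete) is to number the sessions with streamstats in ascending time order, then keep only the Begin rows that share the last time_value seen in each session:

```spl
| sort 0 user_id _time
| streamstats count(eval(match(_raw,"Session Complete"))) as completes by user_id
| eval session=if(match(_raw,"Session Complete"), completes-1, completes)
| eventstats latest(time_value) as last_tv by user_id session
| where match(_raw,"Session Complete") OR time_value=last_tv
| stats min(_time) as begin, max(_time) as complete by user_id session
| eval duration=tostring(complete-begin, "duration")
```

Here each session runs up to and including its Complete event, and the eventstats/where pair drops Begin rows (like the 11:42:25 one) whose time_value differs from the final batch.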
I'm trying to run the predict command on an existing CSV file with _time and count in it. The CSV was exported from a query that gathered the count of an event with span=5m, then exported using the export button below the search bar:

_time,                           count
2021-03-24T00:00:00.000-0400,    85

Predict seems to need timechart to work properly, but I don't know how to get timechart to point to the already existing timestamps in the CSV. Query:

| inputlookup csv_name.csv
| predict count as prediction algorithm=LLP future_timespan=150 holdback=0

I've read that maybe strptime and/or timechart need to be used somewhere in the query, but I don't know how to apply them. The error we get is: External search command 'predict' returned error code 1.
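After inputlookup, the _time column is just a string in the exported ISO format, while predict needs numeric timestamps on a regular span. A sketch (the strptime format string matches the sample row above, and 5m matches the original span):

```spl
| inputlookup csv_name.csv
| eval _time=strptime(_time, "%Y-%m-%dT%H:%M:%S.%3N%z")
| timechart span=5m sum(count) as count
| predict count as prediction algorithm=LLP future_timespan=150 holdback=0
```

The strptime call converts the string back to epoch time, and timechart re-buckets the rows so predict sees the evenly spaced series it expects.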
How can I compare whether worldTime happened before helloTime by combining the two searches below?

index=search Type=Hello | stats first(Time) as helloTime by Account WorkOrder
index=search Type=World | stats first(Time) as worldTime by Account WorkOrder

Expected result columns:

Account | WorkOrder | list(Type) | list(Time) | worldBeforeHelloFlag
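One way to combine them in a single pass (a sketch, assuming Time sorts the way you expect; the eval-inside-stats form requires the "as" clause):

```spl
index=search (Type=Hello OR Type=World)
| stats first(eval(if(Type=="Hello", Time, null()))) as helloTime,
        first(eval(if(Type=="World", Time, null()))) as worldTime,
        list(Type) as Types, list(Time) as Times
        by Account WorkOrder
| eval worldBeforeHelloFlag=if(worldTime < helloTime, 1, 0)
```

Because both event types flow through one stats, the list() columns and the flag come out on the same row per Account/WorkOrder pair.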
Hey folks, I suspect this is more of a Jenkins problem than a Splunk problem, but I figured I'd ask here anyway. Our Jenkins instances have the "Splunk App for Jenkins" installed and set to send all console output from all jobs to Splunk with the "text::jenkins" sourcetype. Recently, though, I've noticed that there seem to be more events in Splunk than there should be. For example, if I search for a specific job's console output, like:

index=jenkins_console sourcetype="text:jenkins" earliest=-5d@d latest=now source=source_of_job_I_Want
| stats values(_raw) by _time

I see: [screenshot: Splunk query results]

But if I go directly to Jenkins and look at the console output for this job, I see: [screenshot: Jenkins console output]

This is an issue because we have an alert on the "Cannot contact X: java.lang.InterruptedException" string to help detect agent failures, but it's confusing when that alert fires and then we look at the console and don't see that message. Has anyone come across anything similar?
Where do I find documentation regarding how long Splunk retains audit logs? Can this be edited? Thank you.
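Audit events land in the internal _audit index, so retention is governed by that index's settings in indexes.conf, just like any other index. A sketch of lowering it (188697600 seconds is roughly 6 years, the shipped default for frozenTimePeriodInSecs; the exact value to use depends on your policy):

```conf
[_audit]
frozenTimePeriodInSecs = 31536000
```

With this, _audit buckets whose newest event is older than about one year are rolled to frozen (deleted, unless a coldToFrozenDir or script is configured). The setting is documented in the indexes.conf reference.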
Hi! I'm planning the upgrade from our current version of Splunk (v6.*) to 8.1.3 on Server 2016/2019 and to v7.3.9 on Server 2012.

Does anyone know whether a reboot of the computer is needed? Can I install the new version directly without uninstalling the previous one? Is it necessary to back up/restore any files to maintain the current configuration after the upgrade?