All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I am facing an issue where the collect command doubles events in the new index test_1:

| collect index=test_1 output_format=hec

If there are 100 events in the test index, then after running collect with output_format=hec there are 200 events in test_1. How can I resolve this event duplication?
Can someone help build the query for the following? I need to collect the configured path list (coldPath / homePath / thawedPath) by index.
Hello team, I have a FortiGate v7.2.0 connected to a FortiAP (FP221E-v7.2). After configuring Splunk as a syslog server and enabling all logs at information level, I can see logs for Traffic, UTM and VPN, but nothing on the Wireless and System pages; both are blank with "no data found". I checked the raw logs on the FortiGate side and didn't see any change in the values. Please help. Best regards.
I have successfully created a data model and created an output for Windows logs. We were able to see logs under the sample log in CEF format, but now we are unable to get logs on the forwarded machine, and we can't see them under the sample search for CEF either.
So I'm trying to create a metrics search using the following query:

index="test" identities="ident_*" src=10.11.40.0/22 OR src=10.11.48.0/22 OR src=10.11.56.0/22 OR src=10.11.64.0/22 OR src=10.11.72.0/22 OR src=10.120.40.0/22 OR src=10.120.48.0/22 OR src=10.120.56.0/22 OR src=10.120.64.0/22 OR src=10.15.8.0/22 OR src=10.15.40.0/22 OR src=10.15.48.0/22 OR src=10.15.56.0/22 OR src=10.15.72.0/22 OR src=10.15.76.0/22 OR src=10.15.80.0/22 | top src | outputlookup test-excludes-no-dedup.csv

I then take the CSV and use it here:

index="test" identities="ident_*" NOT [ inputlookup test-excludes-no-dedup.csv ] | top src

Is this the correct way to exclude the CIDR ranges contained within the lookup CSV? I get some results doing this, but it's almost 1 AM and I'm starting to question whether OR is correct. Maybe I should be using AND? I want to find all the src values that are not in those CIDR ranges in the CSV. Am I going about it correctly?
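On the OR-vs-AND question: excluding an address that falls in any of the ranges means negating a disjunction of membership tests, so OR inside the subsearch is correct and the NOT applies to the whole set. A minimal sketch of that logic in Python, using the standard ipaddress module and two of the ranges from the query:

```python
import ipaddress

# Two of the CIDR ranges from the search (10.11.40.0/22 covers 10.11.40.0-10.11.43.255)
excludes = [ipaddress.ip_network(c) for c in ("10.11.40.0/22", "10.11.48.0/22")]

def is_excluded(src: str) -> bool:
    """True if src falls inside ANY of the exclude ranges (an OR over the ranges)."""
    ip = ipaddress.ip_address(src)
    return any(ip in net for net in excludes)

# Keep only sources that match NONE of the ranges -- NOT (range1 OR range2 ...)
srcs = ["10.11.40.5", "10.11.47.9", "10.11.49.1"]
kept = [s for s in srcs if not is_excluded(s)]
```

By De Morgan's law, NOT (a OR b) equals (NOT a) AND (NOT b): ORing the ranges inside the exclude set and negating the whole thing is the same as requiring the source to be outside every range.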
Is there a way to monitor the status of all lookup files through a search query? I would specifically like to show all lookups that are unreadable and alert on them.
The scenario is: a lookup CSV has become unreadable; a lookup definition exists for it; the lookup was deleted and recreated; the existing definition was not changed. My question is: can a lookup be recreated and use the existing lookup definition?
Hi everyone, I want to upload a custom app to Splunk (like the picture). If I create an app, is it automatically uploaded to the app marketplace?
Hello Splunkers, I have a field called GPU which has values GPU0, GPU1, GPU2, GPU3, etc. Some hosts might have 7 values, some 4 and some 3. I want to compare the current GPU values with the previous event for that host: if there is a difference, show what the difference is; if they are the same, show "no difference". For example:

Current event: GPU0,GPU1,GPU2,GPU3,GPU4,GPU5,GPU6,GPU7
Previous event: GPU0,GPU2,GPU6,GPU7
I want to output the difference: GPU1,GPU3,GPU4,GPU5

Thanks in advance.
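The comparison itself is a set difference: GPUs present in the current event but missing from the previous one. A small Python sketch of that core logic (in SPL you would typically reach the previous event with streamstats, but the set arithmetic is the heart of it):

```python
def gpu_diff(current: str, previous: str) -> str:
    """Return GPUs in the current event that are absent from the previous one,
    or 'no difference' when the two lists contain the same GPUs."""
    cur = set(current.split(","))
    prev = set(previous.split(","))
    missing = sorted(cur - prev)  # in current but not in previous
    return ",".join(missing) if missing else "no difference"

result = gpu_diff("GPU0,GPU1,GPU2,GPU3,GPU4,GPU5,GPU6,GPU7", "GPU0,GPU2,GPU6,GPU7")
# result is "GPU1,GPU3,GPU4,GPU5", matching the example in the question
```

Note this only reports GPUs that appeared; if you also care about GPUs that vanished, compute prev - cur as well.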
I have a group of 'filters' in a dashboard that I can add to or remove from. A filter comprises 3 fields:

Dropdown containing field names
Selector indicating comparison type
Data containing value

I want a SINGLE panel where these 3 fields are displayed horizontally - simple. I'm looking for CSS to solve this particular problem: I want to be able to add a new filter set on a separate line below the first one, in the same panel, not on a new row. This is the single group, but I want to have other groups of inputs that are conditionally shown using token depends. Normally inputs flow horizontally across the screen, so the 4th input would end up to the right of the data_f_1 field below. How can I get the CSS to force any input with an id of f_* to break to a new line, so it's left aligned in the panel?

<row id="filter_selector_row_1" depends="$f_1$">
  <panel>
    <input id="f_1" depends="$f_1$" type="dropdown" token="field_f_1" searchWhenChanged="true">
      <label>Field</label>
      <fieldForLabel>label_name</fieldForLabel>
      <fieldForValue>field_name</fieldForValue>
      <search base="object_list">
        <query></query>
      </search>
      <change>
        <eval token="filter_1">$field_f_1$$selector_f_1$"*$data_f_1$*"</eval>
      </change>
    </input>
    <input depends="$f_1$" type="dropdown" token="selector_f_1" searchWhenChanged="true">
      <label>Selector</label>
      <choice value="=">Equals</choice>
      <choice value="!=">Not Equals</choice>
      <initialValue>=</initialValue>
      <change>
        <eval token="filter_1">$field_f_1$$selector_f_1$"*$data_f_1$*"</eval>
      </change>
    </input>
    <input depends="$f_1$" type="text" token="data_f_1" searchWhenChanged="true">
      <label>Data</label>
      <change>
        <set token="filter_1">$field_f_1$$selector_f_1$"*$data_f_1$*"</set>
      </change>
    </input>
  </panel>
</row>
I have ack enabled for a HEC input. I can successfully send data into Splunk with GUID #1. With the same curl but a different GUID #2, the data is not in Splunk, yet the response is Success and the ack ID request also returns true. I looked in splunkd.log and I see this:

08-09-2022 17:10:04.755 -0700 ERROR JsonLineBreaker [49865490 parsing] - JSON StreamId:0 had parsing error:Unexpected character while looking for value: 'c' - data_source="http:dev", data_host="host:8088", data_sourcetype="sourcetype_name"
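That JsonLineBreaker error suggests the payload sent with the second GUID is not valid JSON: the parser hit a bare character (the 'c' in the error) where a JSON value should start, so Splunk accepted the request but dropped the event at parse time. The failure mode is easy to reproduce with any JSON parser; the payload below is hypothetical, not your actual one:

```python
import json

good = '{"event": "connection ok"}'
bad = '{"event": connection ok}'  # unquoted value -- the parser trips on the bare 'c'

json.loads(good)  # parses fine

try:
    json.loads(bad)
    parse_error = False
except json.JSONDecodeError:
    parse_error = True  # same class of failure the JsonLineBreaker log reports
```

Worth comparing the exact body curl sends with GUID #2 against GUID #1 (unquoted strings, trailing characters, or shell-quoting damage are common culprits).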
Hi guys, I have a query that works and gives me a table such as the one below. What I want to do is exclude rows where the count of the Field1/Field2 combination is greater than 1. In other words, if the combination of svchost and services.exe is seen more than once (in this case twice), exclude it from the results. How could I do this? I tried but I can't get my head around this one. Thanks for your help in advance.

Field1     Field2           Field3
svchost    services.exe     c:\windows\system32
rdp.exe    cmd.exe          c:\windows\system32
svchost    services.exe     c:\windows\system32
wmic.exe   powershell.exe   c:\windows\system32
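The filtering rule is: count each (Field1, Field2) pair across all rows, then keep only rows whose pair occurs exactly once. In SPL this is typically an eventstats count by Field1, Field2 followed by a where clause; the logic itself, sketched in Python on the table above:

```python
from collections import Counter

rows = [
    ("svchost", "services.exe", r"c:\windows\system32"),
    ("rdp.exe", "cmd.exe", r"c:\windows\system32"),
    ("svchost", "services.exe", r"c:\windows\system32"),
    ("wmic.exe", "powershell.exe", r"c:\windows\system32"),
]

# Count occurrences of each (Field1, Field2) combination
pair_counts = Counter((f1, f2) for f1, f2, _ in rows)

# Keep only rows whose combination appears exactly once
kept = [r for r in rows if pair_counts[(r[0], r[1])] == 1]
# kept retains the rdp.exe/cmd.exe and wmic.exe/powershell.exe rows;
# both svchost/services.exe rows are dropped because the pair count is 2
```

The key point is that the count is computed over all rows first, then applied as a filter, which is exactly the two-pass shape eventstats + where gives you.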
Hello, I'm trying to  pull the latest values for every 4 hours in a day ie., latest values between the time 00:00:00 to 04:00:00, 04:00:00 to 08:00:00, 08:00:00 to 12:00:000, 12:00:00 to 16:00:00.... Below is the example of how the data looks like. TIA  
I am attempting to build a search that pulls back all logs that have a certain value in a multi-value field but do not have other values, with a few values I do not care about either way. To break it down:

The field "names" must have "bob".
The field "names" can have any or all of "tom", "dan", "harry", but is not required to have them.
The field "names" cannot have any other value.

I do not have a full list of the names and they can change over time, so it is not possible to make a list of the "names" I do not want. I need other values from the logs; I'm just filtering by the "names" field as an example. A few examples:

"bob" = returned in the search
"bob","tom" = returned in the search
"tom","dan" = not returned in the search
"bob","sam" = not returned in the search
"bob","harry","fred" = not returned in the search

I am having trouble figuring out what to use to exclude multi-value fields in this way.
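Stated as set logic, the rule is: keep an event when "bob" is in names AND names is a subset of the allowed set {bob, tom, dan, harry}. A Python sketch of that predicate, run against the examples from the question (in SPL this usually ends up as an mvcount comparison between all values and the values matching an allow-list):

```python
ALLOWED = {"bob", "tom", "dan", "harry"}

def keep(names: set[str]) -> bool:
    """Event passes when 'bob' is present and no name outside ALLOWED appears."""
    return "bob" in names and names <= ALLOWED  # <= is the subset test

cases = [
    ({"bob"}, True),
    ({"bob", "tom"}, True),
    ({"tom", "dan"}, False),           # no bob
    ({"bob", "sam"}, False),           # sam is not allowed
    ({"bob", "harry", "fred"}, False), # fred is not allowed
]
results = [keep(names) == expected for names, expected in cases]
```

The subset test is what sidesteps needing a deny-list: any value outside the small allow-list fails it, no matter what that value is.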
Hello Splunk community, I'm testing the Splunk Timeline - Custom Visualization plugin. I'd like to visualize distributed tracing in my microservices architecture: app name, request URL, response codes, etc. According to the documentation, I can specify only one resource field. How can I use more fields? I also found a Splunk .conf19 presentation, and the 19th slide looks promising. Unfortunately, the presentation doesn't provide implementation details. Could you point me in the right direction to achieve the same, please? Any help would be much appreciated. Thanks!
Hello, I'm trying to create a visualization that uses results from a KV store as a filter and then queries an index. Basically:

1) KV store collection -> for example Assets (hostname, ip, key_id, ...), used as an inputlookup. This is much faster and can be populated from multiple indexes more easily (it also avoids the 50k join limit).
2) Search an index over the last 7 days holding 200k+ results, filtered by the key_ids returned from the KV store (the KV store can be filtered much more granularly than the index we query later, since the index does not hold some of the fields we want to filter by).

The query executes and the KV store returns the key_ids that should be passed as a filter to the index search. What is the best way to filter based on two searches over big data sets (each data set is 50k+ events)? Currently I'm using this (filter example with * so it can match 1 or 50k+ key_ids):

index=test [| inputlookup kv_store_lookup where filter=* | fields key_id ]

This works well when the filter returns 10, 20 or 50 key_ids (results in a matter of seconds); when it's "*" with 10k+ key_ids it gets slow (10+ seconds). Is there a better way, or are my queries fine for a visualization search built from two searches, where the first returns the key_ids the second should use?
Client | Error | Error Results | Error Results Previous week | Percent of Total | PercentDifference
abc | 1003 | 2 | 0 | 12.5 | 0
abc | 1003 | 3 |   | 12.5 | 0
abc | 1013 | 1 | 2 | 342 | -50
abc | 1027 | 3 | 3 | 5 | 0
abc | 1027 | 5 | xyz | 43 | zyz
abc | 1013 | 2 | zyz | 432 | et
abc | Total | 16 | zyds | 423 | tert

My code is:

| bucket _time span=1w
| stats count as Result by LicenseKey, Error_Code
| eval Client=coalesce(Client,LicenseKey)
| eventstats sum(Result) as Total by Client
| eval PercentOfTotal = round((Result/Total)*100,3)
| sort - _time
| streamstats current=f latest(Result) as Result_Prev by LicenseKey
| eval PercentDifference = round(((Result/Result_Prev)-1)*100,2)
| fillnull value="0"
| append [ search index=abc sourcetype=yxx source= bff ErrorCode!=0
    | `DedupDHI`
    | lookup abc LicenseKey OUTPUT Client
    | eval Client=coalesce(Client,LicenseKey)
    | stats count as Result by Client
    | eval ErrorCode="Total", PercentOfTotal=100]
| lookup xyz_ErrorCodes ErrorCode OUTPUT Description
| lookup uyz LicenseKey OUTPUT Client
| eval Client=coalesce(Client,LicenseKey)
| eval Error=if(ErrorCode!="Total", ErrorCode+" ("+coalesce(Description,"Description Missing - Update xyz_ErrorCodes")+")", ErrorCode)
| fields Client, Error, Result, PercentOfTotal, PercentDifference, Error results previous week
| sort CustomerName, Error, PercentDifference

I still can't figure out the duplicate row issue: I need a single row for each error, combined with the total. Any suggestions, please?
I want to extract each package line as an individual result. I tried rex "Linux\ssystem\s\:\s+(?<packages>.+)", but that only extracts the first package line. I also tried rex "Linux\ssystem\s\:\s+(?<packages>(.+\w{1,3}\s\w{1,3}(\s+)?\d{1,2}\s\d{1,2}\:\d{1,2}\:\d{1,2}\s\d{4})", but got the same first line.

Here is the list of packages installed on the remote CentOS Linux system :
python-prettytable-0.7.2-3.el7|(none) Wed Jan 9 20:38:03 2019
gettext-0.19.8.1-3.el7|(none) Wed May 13 07:35:27 2020
cpp-4.8.5-44.el7|(none) Tue Feb 2 09:59:27 2021
kmod-20-28.el7|(none) Tue Feb 2 09:59:31 2021
glibc-2.17-324.el7_9|(none) Wed Mar 16 18:10:11 2022
diffutils-3.3-5.el7|(none) Tue Feb 2 09:59:00 2021
elfutils-default-yama-scope-0.176-5.el7|(none) Tue Feb 2 09:59:35 2021
glibc-2.17-324.el7_9|(none) Wed Mar 16 18:10:12 2022
numactl-libs-2.0.12-5.el7|(none) Tue Feb 2 09:59:02 2021
device-mapper-event-1.02.170-6.el7_9.3|7 Tue Feb 2 09:59:51 2021
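Two things are at play: .+ never crosses a newline, so the capture stops at the first line, and rex returns only the first match unless you add max_match=0. One way to sanity-check a multi-match pattern is to prototype it first; a Python sketch over a few of the sample lines (Python's re rather than Splunk's PCRE, so treat it as an approximation to pair with max_match=0 in SPL):

```python
import re

# A shortened copy of the event text, one package record per line
raw = """Here is the list of packages installed on the remote CentOS Linux system :
python-prettytable-0.7.2-3.el7|(none) Wed Jan 9 20:38:03 2019
gettext-0.19.8.1-3.el7|(none) Wed May 13 07:35:27 2020
cpp-4.8.5-44.el7|(none) Tue Feb 2 09:59:27 2021"""

# Each record: name|epoch, then a weekday/month/day timestamp; anchored per line
pattern = r"^(\S+\|\S+) (\w{3} \w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2} \d{4})$"
packages = re.findall(pattern, raw, flags=re.MULTILINE)
# packages holds one (name, timestamp) tuple per package line
```

The MULTILINE flag makes ^ and $ match at each line break, which is what lets one pattern yield a match per package line instead of stopping at the first.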
This is just a question for my learning. When SQL data sets are sent to Splunk via SQL scripts, do you use SQL syntax or Splunk's query language (SPL)? And can you format your rows and columns in the same manner? I'm crowd-sourcing to better build my report.
I am trying to write a search that compares the latest event with its previous event and shows the difference, if any, for each host. I'm trying to use earliest and latest, but earliest doesn't take the immediately preceding event. The following is the search I have tried, but I don't think it's right:

index=abc host=xyz
| stats latest(id) as id latest(SN) as SN latest(PN) as PN latest(_time) as time by host
| stats earliest(id) as eid earliest(SN) as eSN earliest(PN) as ePN earliest(_time) as etime by host

Thanks in advance, Splunkers.
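The underlying pattern is: sort each host's events by time, take the last two, and compare field by field. The two stats calls above can't do that, because the first one already collapses each host to a single row, so there is no "previous" event left for earliest to find (it grabs the oldest event in range instead). A Python sketch of the comparison, with hypothetical events (in SPL, streamstats current=f by host is the usual way to carry the previous event's fields alongside the latest one):

```python
from collections import defaultdict

# Hypothetical events: (host, epoch_time, {field: value})
events = [
    ("xyz", 100, {"SN": "A1", "PN": "P1"}),
    ("xyz", 200, {"SN": "A1", "PN": "P2"}),
    ("abc", 150, {"SN": "B1", "PN": "P9"}),
]

by_host = defaultdict(list)
for host, ts, fields in events:
    by_host[host].append((ts, fields))

diffs = {}
for host, rows in by_host.items():
    rows.sort()  # oldest -> newest
    if len(rows) < 2:
        diffs[host] = "no previous event"
        continue
    prev, latest = rows[-2][1], rows[-1][1]
    changed = {k for k in latest if latest[k] != prev.get(k)}
    diffs[host] = sorted(changed) if changed else "no difference"
```

Here host xyz reports PN as changed between its last two events, while host abc has only one event and so has nothing to compare against.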