All Topics

Hello, I have the query below. It reads data from the dc_nfast index, writes the results into the test index, and that test index is then used in a dashboard. We do not use the test index anywhere else. I want to refine the query so that it reads from and writes back into the same index. Can anyone advise on this?

`comment("Get latest NFAST data")`
index=dc_nfast source=MIDDLEWARE_NFAST_DETECTION NOT "Nfast Version" IN ("NA","[no results]")
    [ | tstats latest(_time) as latest earliest(_time) as earliest count where index=dc_nfast source=MIDDLEWARE_NFAST_DETECTION host=nfast* earliest=-30d by _time, source, host span=2m
      | stats values(count) as count values(earliest) as earliest values(latest) as latest by _time, source, host
      | eventstats median(count) as median p5(count) as p5 p95(count) as p95 by source, host
      | eval range=(p95-p5)
      | eval lower_count=(median-range*3), upper_count=(median+range*3)
      | where count>=lower_count AND count<=upper_count
      | eval earliest=earliest-1, latest=latest+1
      | dedup source, host sortby - latest
      | return 10 earliest latest host source ]
`comment("Get lifecyclestate, country and ipaddress from master asset lists")`
| lookup dc_app_dcmaster_master_asset_ul.csv hostname OUTPUT master_ipaddress as ul_master_ipaddress master_lifeCycleState as ul_master_lifeCycleState master_country as ul_master_country
| lookup dc_app_dcmaster_master_asset_win.csv hostname OUTPUT master_ipaddress as win_master_ipaddress master_lifeCycleState as win_master_lifeCycleState master_country as win_master_country
| lookup dc_app_unicorn_lookup.csv hostName as hostname OUTPUT locationCode
| eval ipaddress=coalesce(ul_master_ipaddress, win_master_ipaddress), lifeCycleState=coalesce(ul_master_lifeCycleState,win_master_lifeCycleState), country=coalesce(ul_master_country,win_master_country)
| fillnull value="UNKNOWN" ipaddress lifeCycleState country locationCode
`comment("Join on serial number data from HSMC CSV import")`
| rename "Serial Number" as HSMSerialNumber
| makemv HSMSerialNumber
`comment("Count the number of serial numbers per host")`
| eval HSM_count=mvcount(HSMSerialNumber)
| fillnull value=0 HSM_count
| mvexpand HSMSerialNumber
| join type=left HSMSerialNumber
    [ search `get_latest_source_in_range(test , /opt/splunk/etc/apps/dc_ta_nfast_inputs/git/master/HSMSplunkFeed/SPLUNK_HSM_Import_Listing.csv)` ]
| rename Country as HSM_country "HSM *" as HSM_* "HSM * *" as HSM_*_
| foreach HSM* [ fillnull value="No_HSM_data" <<FIELD>> ]
`comment("Add EIM service data - creating")`
| lookup app_eim_lookup_eim_basic_extract.csv hostname OUTPUT PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7 IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL
| fillnull value="UNKNOWN" IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7
| eval servicedata=mvzip(IT_SERVICE,SERVICE_OWNER,"##")
| eval servicedata=mvzip(servicedata,SERVICE_OWNER_EMAIL,"##")
| eval servicedata=mvzip(servicedata,PLADA_CRITICALITY,"##")
| eval servicedata=mvzip(servicedata,SERVICE_IT_ORG6,"##")
| eval servicedata=mvzip(servicedata,SERVICE_IT_ORG7,"##")
| eval servicedata=mvdedup(servicedata)
| fields - PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7 IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL
| eval servicedata=mvdedup(servicedata)
`comment("Now expand each servicedata into multiple lines where there are multiple values")`
| mvexpand servicedata
`comment("Recreate the service values")`
| rex field=servicedata "(?<IT_SERVICE>.*?)##(?<SERVICE_OWNER>.*?)##(?<SERVICE_OWNER_EMAIL>.*?)##(?<PLADA_CRITICALITY>.*?)##(?<SERVICE_IT_ORG6>.*?)##(?<SERVICE_IT_ORG7>.*)"
`comment("Run addinfo command to capture info_search_time field")`
| addinfo
`comment("Write out to index")`
| eval _time=now()
| table hostname Combined Nfast* ipaddress lifeCycleState HSM_count country locationCode IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7 HSMSerialNumber HSM*
| collect index=test source=stash

Thanks
What does 'litsearch' mean? What is 'litsearch-expansion'? Could anyone explain what exactly the term 'litsearch' means in Splunk's context?
Hello there. I finally managed to set up WMI-based event log monitoring and it seems to work. The problem is that it gives me way too many events. I want to pull just a subset of the events from the Application log. With an ordinary WinEventLog input I could set up a whitelist/blacklist to limit the processed events at the forwarder level. The same doesn't seem to work with the WMI:whatever type of input. Is there indeed no way to limit the ingested events? Do I have to do it further down the stream by selective routing on a HF?
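One option that is often suggested (a sketch only; stanza name and attribute values here are illustrative and should be checked against the wmi.conf spec for your version) is to replace the event_log_file stanza with a WQL-based one, so the filtering happens in the WQL WHERE clause rather than in Splunk:

wmi.conf

[WMI:AppErrors]
interval = 10
server = TARGETHOST
wql = SELECT * FROM Win32_NTLogEvent WHERE Logfile = 'Application' AND (EventCode = 1000 OR EventCode = 1001)
index = wineventlog
disabled = 0

Alternatively, filtering with props/transforms and the nullQueue on the first full Splunk instance the data passes through also works, at the cost of still pulling everything over WMI first.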
Hello Community, I have some trouble with the option "action.email" in a saved search. I want to create a report via the Splunk API and set the parameter "action.email" to "true" / "1" (because the default value is false). I tried it with the query below, but it's not working as expected. After executing it, the value is still the default (false); Splunk didn't change it to "true" or "1":

curl -k -u <splunk_username>:<splunk_password> https://<splunk_ip>:<splunk_mgmt-port>/servicesNS/<user>/<app>/saved/searches -d name=Test_Report -d action.email=1 --data-urlencode -d search="<splunk_query>"

In a second step I tried to edit the report directly in Splunk Web -> Search, Reports, and Alerts -> testReport -> Advanced Edit. But every time I save the report with the new parameter "action.email = 1", Splunk seems to reset this value back to "false". In my experience, Splunk only keeps the value "true" if I edit the savedsearches.conf file directly. Can you please help me with this problem? Thanks
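A sketch of a request that may have a better chance of sticking, under two assumptions that would need verifying: first, the stray -d between --data-urlencode and search= in the original command looks like it would confuse curl's argument parsing, so the search string may never be sent as intended; second, Splunk generally expects the enabled alert actions to be listed in the actions parameter as well, and usually wants a recipient for the email action (the recipient address below is just a placeholder):

curl -k -u <splunk_username>:<splunk_password> \
  https://<splunk_ip>:<splunk_mgmt-port>/servicesNS/<user>/<app>/saved/searches \
  -d name=Test_Report \
  -d actions=email \
  -d action.email=1 \
  -d action.email.to=someone@example.com \
  --data-urlencode search="<splunk_query>"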
Hello, I am looking to monitor and review F5 BIG-IP logs, but I see that both the F5 Security and F5 Networks apps are no longer supported by Splunk. Is there a newer app by Splunk that other people are using to monitor F5 logs, or are both of these apps still usable in Splunk 8.2? We currently collect the logs via syslog.
Dear Splunkers,

I am trying to forward a specific sourcetype (let's call it "mySourcetype") to a third-party software with a Heavy Forwarder. The Heavy Forwarder receives the data on a UDP port with a Splunk input:

[udp://xxxxx:514]
connection_host = dns
sourcetype = mySourcetype
_SYSLOG_ROUTING = mySyslogRouting

We added the _SYSLOG_ROUTING line to also send it to the third-party software, as described in outputs.conf:

[syslog:mySyslogRouting]
type = udp
priority = NO_PRI
syslogSourceType = sourcetype::mySourcetype
server = yyyyyy:3000

Unfortunately, the destination "yyyyyy:3000" receives all data from the HF, not just this sourcetype, i.e. all the other sourcetypes and the internal Splunk logs as well...

We tried to use props.conf and transforms.conf to route it:

props.conf
[mySourcetype]
TRANSFORMS-changehost = routeSourcetype

transforms.conf
[routeSourcetype]
DEST_KEY = _SYSLOG_ROUTING
REGEX = (.)
FORMAT = mySyslogRouting

The result is the same: the destination receives too much data.

Any idea how to limit the syslog forwarding to just one sourcetype?

Thanks

EDIT: I found an old post that seems to be talking about the same issue but with no answers... https://community.splunk.com/t5/Getting-Data-In/Route-syslog-from-heavy-forwarder-to-3rd-party-but-only/m-p/535722
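A sketch of the configuration combination that is commonly suggested here, with one assumption to verify: if the outputs.conf [syslog] stanza (or the syslog group itself) sets defaultGroup = mySyslogRouting, that would explain every event being sent to the destination regardless of the per-sourcetype transform. Removing any defaultGroup and relying only on the transform usually restricts the feed to the one sourcetype:

outputs.conf

[syslog]
# intentionally no defaultGroup, so nothing is syslog-routed by default

[syslog:mySyslogRouting]
type = udp
priority = NO_PRI
server = yyyyyy:3000

props.conf

[mySourcetype]
TRANSFORMS-syslogroute = routeSourcetype

transforms.conf

[routeSourcetype]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = mySyslogRouting

With this in place, the _SYSLOG_ROUTING line on the UDP input itself can be dropped; the transform is what tags mySourcetype for the syslog output.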
I am trying to figure out how to reroute a specific host to a different index. For example, search results for host=1234test currently show up in index=best_life. How would I change things so that data from host 1234test goes into a different index that already exists, e.g. index=other_index?
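A sketch of the standard index-time rerouting approach, assuming the events pass through a full Splunk instance (heavy forwarder or indexer) where props/transforms are applied and that other_index already exists; it only affects data indexed after the change, not events already in best_life. The transform name is just an illustration:

props.conf

[host::1234test]
TRANSFORMS-route_to_other = route_1234test_other_index

transforms.conf

[route_1234test_other_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = other_index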
Hi, I have an issue where I can see that something is consuming license ingestion for a specific sourcetype. Unfortunately, the host field is blank in index=_internal source="*license_usage.log*", although I do know the index. I cannot find which host is sending the data that was indexed today, potentially with event dates in the past. Events dated in the past have been the cause of this before; the only time I solved it previously was by enabling DEBUG on an HEC input, and I don't want to keep doing that. I want to do something like this: | tstats count where index=os AND sourcetype=ps groupby host | where < actual ingest time was yesterday but event time was days in the past; list event time and ingest time>   Is this possible? Thank you! Chris
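A sketch of one way to do this with a raw event search rather than tstats, using the _index_earliest/_index_latest time modifiers to constrain by index time and then comparing _indextime against _time (index, sourcetype and the one-day lag threshold are taken from the question and are adjustable):

index=os sourcetype=ps _index_earliest=-1d@d _index_latest=@d
| eval index_time=_indextime
| where (index_time - _time) > 86400
| eval lag_days=round((index_time - _time)/86400, 1)
| stats count min(_time) as oldest_event_time max(lag_days) as max_lag_days by host
| convert ctime(oldest_event_time)
| sort - count

This runs over everything indexed yesterday and keeps only events whose timestamp is more than a day older than the moment they were indexed, grouped by host.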
We were investigating some indexes that have a low RAW to Index Ratio and came across _audit, whose RAW to Index Ratio is 0.81:1. At first glance, _audit seemed a good candidate for learning how to find out whether an index has high cardinality and what we can do about it (such as tsidx reduction along with bloomfilters). First, it is not frequently searched, so it is safe to test tsidx reduction or bloomfilters on it; moreover, it is an index that everyone has in their Splunk installation, so we could benefit from common knowledge.

We came across the following numbers about cardinality by taking a sample of the data and using tstats and walklex:

  earliest: 18/09/2021 00:00
  latest: 22/09/2021 24:00
  number of events: 5721945
  keywords in lexicon: 6764698
  min number of events per keyword: 10
  keywords with min number of events: 176669
  percentage of keywords with min number of events: 2.61%

Just by looking at the numbers above it is hard to tell whether we are in front of an index whose data changes a lot or not. What is considered a high-cardinality index? It would be awesome to have some reference numbers, but I was not able to find them anywhere.

Q1: Do we have any reference numbers that, once compared against, would unequivocally tell us whether or not the bucket is a high-cardinality one? And should we expect the RAW to Index Ratio to drop below 1:1?

Then we went through and inspected the size of the tsidx files against the size of the buckets:

  indexer vm   bucketsize_bytes   tsidxsize_bytes   bucketsize_megabytes   tsidxsize_megabytes
  A            1317675655         933241652         1257                   890
  B            1321231309         892793498         1260                   851
  C            1464189620         1003103122        1396                   957
  D            1538519922         1045951037        1467                   997
  E            901792050          609003289         860                    581
  F            1417458990         929185810         1352                   886
  G            1591446741         1167724482        1518                   1114
  H            1497574135         1009380670        1428                   963
  totals       11049888422        7590383560

The results show that the tsidx files take around ~69% of the overall disk space needed to store the _audit index on the indexers.

Q2: Once again, is this a sign of high cardinality?

Q3: Lastly, is there any SEGMENTATION config that is commonly applied to the _audit index?
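For anyone wanting to reproduce the keyword numbers, a rough sketch of the kind of walklex search that can be used (the fields returned by walklex can vary by version, so treat the field names as illustrative rather than exact):

| walklex index=_audit type=term
| stats sum(count) as events_per_keyword by term
| stats count as total_keywords count(eval(events_per_keyword<=10)) as low_frequency_keywords
| eval pct_low_frequency=round(100*low_frequency_keywords/total_keywords, 2)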
I am using the Fundamentals 1 dataset to learn about lookups. I have created a CSV file with a column for productId (to match the productId field in the access_combined_wcookie sourcetype), a column for productCode (to match the Code field in the vendor_sales sourcetype), and a column for productPrice (that's the output field). I've created a lookup definition called productDetails which seems correct when I check it using inputlookup. What I want to do is create a timechart with two series: one with online sales, and the other with vendor sales. This is what I've tried:

index=main (sourcetype="vendor_sales" OR sourcetype="access_combined_wcookie")
| lookup productDetails productId AS productId
| lookup productDetails productCode AS Code
| timechart count(productPrice) by sourcetype

The first lookup matches the productId field in access_combined_wcookie, and the second lookup matches the Code field in vendor_sales. The result I'm getting is a set of counts for access_combined_wcookie, and all 0's for vendor_sales. If I switch the order of the lookups, then I get counts for vendor_sales but 0's for access_combined_wcookie. Basically, whichever lookup is second is ignored. I found a lot of forum posts about joining multiple CSV lookups, but I couldn't apply them to my problem. Any help would be greatly appreciated. Thanks.
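A sketch of one likely explanation and fix: by default the lookup command overwrites its output fields, so whichever lookup runs second writes empty productPrice values over the matches made by the first. Using OUTPUT on the first lookup and OUTPUTNEW on the second (so it only fills events that do not yet have a price) is worth trying; field and lookup names below are taken from the question:

index=main (sourcetype="vendor_sales" OR sourcetype="access_combined_wcookie")
| lookup productDetails productId OUTPUT productPrice
| lookup productDetails productCode AS Code OUTPUTNEW productPrice
| timechart count(productPrice) by sourcetype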
Currently our DB Connect version is 3.1.3 and we are upgrading to Splunk version 8. Just wanted to check whether we need to upgrade DB Connect as well.
Hello Fellow Splunkers, Is there a way to get a list of input apps on a UF host that were not distributed by the DS it is connected to (i.e., locally created inputs)? If so, how? And can we then disable them using Splunk, the DS, or a master app that disables all non-DS-deployed apps? Please suggest how. Regards, Samuel
Hi, community members,   I am trying to write a query that looks like this:

| dbxquery query="select VULNERABILITY_LIFECYCLE, SOURCE, CLOSURE_FY, CLOSURE_QUARTER, CLOSURE_DATE from table [...]"
| eval MONTH=strftime(strptime(CLOSURE_DATE,"%Y-%m-%d %H:%M:%S"),"%m")
| eval SURSA = if(SOURCE!="QUALYS-P","Confirmed", "Potential")
| chart count over MONTH by SURSA

My problem is that I want this chart to represent a financial year, not a calendar year. How can I do this (also, without skipping months)?   Thank you for your support, Daniela
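A sketch of one way to reorder the months, under the assumption of an April-to-March financial year (adjust the offset if yours starts in a different month): compute each month's position within the financial year and prefix it to the label, so the chart's lexical ordering matches the financial ordering.

| dbxquery query="select VULNERABILITY_LIFECYCLE, SOURCE, CLOSURE_FY, CLOSURE_QUARTER, CLOSURE_DATE from table [...]"
| eval month_num=tonumber(strftime(strptime(CLOSURE_DATE,"%Y-%m-%d %H:%M:%S"),"%m"))
| eval fy_position=if(month_num>=4, month_num-3, month_num+9)
| eval MONTH=printf("%02d-%s", fy_position, strftime(strptime(CLOSURE_DATE,"%Y-%m-%d %H:%M:%S"),"%b"))
| eval SURSA=if(SOURCE!="QUALYS-P","Confirmed","Potential")
| chart count over MONTH by SURSA

Months with no closures will still be missing from the chart; appending a zero-count dummy row per month (for example with append and makeresults) is one way to keep them, if needed.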
Hi, I have a dashboard in which I work with tokens to select specific customer data. By selecting the correct customer, the drop-down menu pushes a customer number (formatted like 30001, 30002, ... 30999) to all searches within the dashboard. This works flawlessly. Now I created a new search within that same dashboard that only uses the trailing number (1, 2, ... 999). If I test this in a normal search by pushing the customer id via an eval into the cust_id field, it works. But the same search with tokens in the dashboard returns the error "waiting for input". I can't debug it either, because there is no sid.

Working search:

index=x sourcetype=x
| eval regid=30001
| eval cust_id=$regid$
| rex field=cust_id mode=sed "s/(^30*)//1"
| where port==$cust_id$
| top limit=1 port

NOT working dashboard search:

index=x sourcetype=x
| eval cust_id=$regid$
| rex field=cust_id mode=sed "s/(^30*)//1"
| where port==$cust_id$
| top limit=1 port
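A sketch of a likely cause, which would need checking against the actual dashboard XML: in a Simple XML dashboard, anything of the form $name$ is treated as a token, so $cust_id$ in the panel search is read as an undefined token rather than the cust_id field created by eval, and the panel waits for input that never arrives. Referencing the field without dollar signs avoids that:

index=x sourcetype=x
| eval cust_id="$regid$"
| rex field=cust_id mode=sed "s/^30*//"
| where port==tonumber(cust_id)
| top limit=1 port

The tonumber() is there only in case port is numeric while the sed-trimmed cust_id comes out as a string; drop it if both are already strings.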
Hi, is it possible to get the OTP dynamically in a synthetic job's Selenium script in AppDynamics? Regards, Madhu
Hi, I have a key-value pair called duration in my application log that shows the duration of each job. When I take the maximum duration per day it produces false positives, because it is natural for the duration to be high at some point; what is not normal is when it stays high. e.g.

normal condition:
00:01:00.000 WARNING duration[0.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[0.01]

abnormal condition:
00:01:00.000 WARNING duration[0.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[50.01]
00:01:00.000 WARNING duration[90.01]
00:01:00.000 WARNING duration[100.01]
00:01:00.000 WARNING duration[0.01]

1 - How can I detect the abnormal condition with Splunk, ideally with minimum false positives on huge data volumes?
2 - Which visualization or chart is most suitable to show this abnormal condition daily? This is a huge log file and it is difficult to show all data for each day on a single chart. Any ideas?

Thanks,
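A sketch of one approach that targets "sustained high" rather than single spikes: bucket the data, average the duration per bucket, and flag buckets whose average sits well above a rolling baseline (the index, sourcetype, spans and thresholds here are placeholders to adapt):

index=my_app sourcetype=my_app_log "duration["
| rex "duration\[(?<duration>[\d.]+)\]"
| bin _time span=5m
| stats avg(duration) as avg_duration max(duration) as max_duration by _time
| streamstats window=24 current=false avg(avg_duration) as baseline stdev(avg_duration) as baseline_stdev
| eval upper_bound=baseline + 3*baseline_stdev
| eval abnormal=if(avg_duration > upper_bound, 1, 0)

Because a single outlier barely moves a 5-minute average, isolated spikes stay below the bound, while a run of high durations pushes the average over it. For visualization, a timechart of avg_duration with upper_bound as an overlay, or simply a daily timechart of sum(abnormal), keeps the volume manageable.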
Hi Team, I want to know if it is possible to pass data present in a format block of one playbook to another playbook being called. So it's like: PB1 ---> format block ---> PB2 called ---> PB2 performs functions on the previous format block's data. I know that as a workaround this can be done by adding an artifact and using it in the sub-playbook, but I would prefer to do it without that. Kindly let me know if any further info is required. Thanks in advance! Regards, Shaquib
Hi, we see an inconsistency between the pctIdle value captured on the actual Linux machine and the one reported by our Splunk_TA_nix app. In the Splunk_TA_nix app the pctIdle values are different from what we see on the Linux machine itself when checking with the sar command; the values do not match. We are on version 5.2.0 of the Splunk_TA_nix app. How do we fix this issue? Any suggestions, please?
Hello all, I haven't used rex many times. I have a URL like this: https://ab-abcd.in.xyz.com/abcd_xyz/job/example_name/ . Here ab-abcd.in.xyz.com is the server name, abcd_xyz is the project name, and example_name is the task name. I have to extract these using regex. I have tried this query:

| rex field=URL "https:\//(?<server_name>\w*)/\(?<project_name>\w*)\/job\/(?<task_name>\w*)\/" | table server_name project_name task_name

I know that this query is wrong, but I'm confused about how to correct it. Can anyone help me correct this query?
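A sketch of a corrected rex, using the details given in the question (the main change is matching "anything but a slash" for each segment, since \w does not match the dots and hyphens in the host name; it assumes the URL really starts with https://):

| rex field=URL "https://(?<server_name>[^/]+)/(?<project_name>[^/]+)/job/(?<task_name>[^/]+)/"
| table server_name project_name task_name

Against https://ab-abcd.in.xyz.com/abcd_xyz/job/example_name/ this yields server_name=ab-abcd.in.xyz.com, project_name=abcd_xyz and task_name=example_name.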
I have an API which has a number of endpoints, e.g. /health, /version, /specification and so on. I have a query which extracts the response time from the logs and creates stats as duration. Here is the query:

index=*my_index* namespace="my_namespace" "Logged Request"
| rex field=log "OK:(?<duration>\d+)"
| stats avg(duration) as Response_Time by log

Here's an example of a generated log:

2021-08-23 20:36:15.627 [INFO ] [-19069] a.a.ActorSystemImpl - Logged Request:HttpMethod(POST):http://host/manager/resource_identifier/ids/getOrCreate/bulk?streaming=true&dscid=52wu9o-b5bf6995-38c5-4472-90d7-2f3edb780276:200 OK:2622

The query I've shared extracts the duration from every log line. I need to find a way to extract the name of the endpoint and calculate the mean duration per endpoint. Can someone help me with this? I am really new to Splunk.
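A sketch that adds a second rex to pull the URL path out of the same log field and then averages per path; it assumes the URL always appears as http://host/... in the log field (as in the example) and that the path, minus any query string or trailing :status, is a good enough identifier for the endpoint:

index=*my_index* namespace="my_namespace" "Logged Request"
| rex field=log "OK:(?<duration>\d+)"
| rex field=log "http://[^/]+(?<endpoint>/[^?:\s]+)"
| stats avg(duration) as Response_Time by endpoint

If the paths embed variable IDs (as /manager/resource_identifier/ids/getOrCreate/bulk might), a follow-up replace() or a coarser capture can be used to collapse them into one endpoint name.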