All Topics

Hello, we are using inputs.conf and props.conf to ingest a flat CSV file. The issue we are having is that Splunk appends -2 to the sourcetype name even though the name is unique. Example: searching sourcetype=sourcetypename returns events with sourcetype=sourcetypename-2.

#inputs.conf
[monitor://C:\Import\sample.csv]
index = test
sourcetype = sourcetypename

#props.conf
[sourcetypename]
FIELD_DELIMITER = ,
CHECK_FOR_HEADER = true
HEADER_MODE = firstline

Any help would be appreciated!
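A note that may explain the suffix: with CHECK_FOR_HEADER = true, Splunk uses its learned-sourcetype mechanism and clones the sourcetype per detected header layout, which is what produces names like sourcetypename-2. A minimal sketch of a structured-input alternative; the attribute values are assumptions to adapt, and for monitored files INDEXED_EXTRACTIONS has to live on the instance doing the ingestion:

#props.conf
[sourcetypename]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1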
Hi there, I need to download a dashboard as a PNG file using the CLI; is there any way to do this? Splunk version: 8.2. Dashboard type: Dashboard Studio. Thank you in advance for your help.
Hi, I am asking whether it's possible to ingest log files where one log line contains a full date and time and the following lines contain only a time, until the next entry with a date. If we ignore the date entirely by using a custom time format consisting only of %H:%M:%S, Splunk combines the creation date of the file with the time pulled from the individual event. While that works without issues for files covering less than 24 hours, it fails for files containing more than 24 hours of data:

### Job STARTED at 2021/09/21 00:30:00
[INFO ] 00:30:01 This is a test message
[WARN ] 01:15:01 This is a warning message
### Job STARTED at 2021/09/22 06:10:00
[INFO ] 06:10:01 This is a test message
[WARN ] 07:11:00 This is a warning message

Regards
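I am not aware of an index-time setting that carries a date forward across events, but timestamps like these can be repaired at search time. A minimal sketch, assuming placeholder index/sourcetype names and that sorting oldest-first reproduces file order; note it would still mis-date lines that cross midnight inside a single job block:

index=test sourcetype=joblog
| reverse
| rex "^### Job STARTED at (?<job_date>\d{4}/\d{2}/\d{2})"
| rex "^\[\w+\s*\]\s+(?<evt_time>\d{2}:\d{2}:\d{2})"
| filldown job_date
| eval fixed=strptime(job_date." ".evt_time, "%Y/%m/%d %H:%M:%S")
| eval _time=coalesce(fixed, _time)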
Here is a log example:

http://host/manager/resource_identifier/ids/getOrCreate/bulk?dscid=LuSxrA-1c42bb5b-f862-4861-892f-69320e1a59e7:200 Created:78

I need to extract the string after ids/ until the first ? or :, so the output would be getOrCreate/bulk. I am trying this:

rex field=log ":(?<url>ids\/[^?: ]*)"

What am I missing?
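Two things stand out in the attempt: the pattern anchors on a literal colon right before ids, but in the sample ids/ follows a slash, and the capture group includes the ids/ prefix that the output should drop. A sketch that anchors on ids/ itself and captures up to the first ?, colon, or whitespace:

... | rex field=log "ids/(?<url>[^?:\s]+)"

Against the sample event this yields url=getOrCreate/bulk.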
Please help with SPL to find the list of my Splunk server instances, forwarders, and indexers. I need the Splunk version, machine names, and IPs. Thanks a million in advance. Also, what is the best order to upgrade them all to Splunk 8.2.2?
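Two sketches for the inventory part. The first queries the REST server-info endpoint for the instances your search head can reach; the second uses the forwarder connection metrics in _internal, a common community approach (field availability varies by version):

| rest /services/server/info
| table splunk_server host_fqdn version server_roles

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(version) as version latest(sourceIp) as ip by hostname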
Hi Splunkers, we have a multisite architecture in our organization running Splunk version 7.3.6. This version reaches end of support after 22 October 2021. We are upgrading to version 8.1, but after the upgrade we are not able to log in to Splunk through the web. In the internal logs we are seeing this error message: "ERROR [614c8c71667f891f364ad0] config:140 - [HTTP 401] Client is not authenticated Traceback (most recent call last):". On our test environment we did not get this issue. For logging in to Splunk we have LDAP and IDM. All IDM roles are mapped and working fine with the existing version, and they also work on the test environment with version 8.1.
Our MPS team provisions IAM users for non-personalized access. All IAM access keys need to be rotated every 90 days, and MPS has created a rotation service to fulfil this requirement. This impacts Splunk because the Key ID under Configuration - Account will need to be updated every time a new key is created. Is there a solution where Splunk can automatically update the Key ID value once a new access key has been created?
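Whether this can be automated depends on where the add-on stores the credential. If it uses the standard Splunk credential store, the rotation service could push the new secret over the storage/passwords REST endpoint; a sketch with placeholder names throughout, and whether your AWS add-on actually reads from storage/passwords (and under which realm) is an assumption to verify:

curl -k -u admin:changeme https://splunk:8089/servicesNS/nobody/Splunk_TA_aws/storage/passwords/<realm>%3A<key_id>%3A -d password=<new_secret_key>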
Hi, I have some saved searches that return results but do not show any results when included in a dashboard as a panel:

<panel>
  <title>PAN - GlobalProtect session duration AVG</title>
  <table>
    <search ref="PAN - GLobalProtect session duration AVG last 7d"></search>
    <option name="drilldown">none</option>
  </table>
</panel>

But when I click the magnifying glass in the dashboard, it runs the query in a search box and shows the correct results. What am I missing here? I checked permissions on the saved search and everyone has read permission. Cheers
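One thing worth ruling out: when a panel references a saved report, the dashboard can use the report's stored time range (or a scheduled artifact) instead of dispatching the search the way the magnifier does. A sketch that overrides the time range on the referenced report; the -7d@d/now values are assumptions matching the report name:

<search ref="PAN - GLobalProtect session duration AVG last 7d">
  <earliest>-7d@d</earliest>
  <latest>now</latest>
</search>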
Hello, I have the query below. It gets data from the dc_nfast index, writes the results into the test index, and the test index is then used in a dashboard; we are not using the test index anywhere else. I want to refine the query so that it reads from and writes back to the same index. Can anyone advise on this?

`comment("Get latest NFAST data")`
index=dc_nfast source=MIDDLEWARE_NFAST_DETECTION NOT "Nfast Version" IN ("NA","[no results]")
    [ | tstats latest(_time) as latest earliest(_time) as earliest count where index=dc_nfast source=MIDDLEWARE_NFAST_DETECTION host=nfast* earliest=-30d by _time , source , host span=2m
    | stats values(count) as count values(earliest) as earliest values(latest) as latest by _time , source , host
    | eventstats median(count) as median p5(count) as p5 p95(count) as p95 by source , host
    | eval range=(p95-p5)
    | eval lower_count=(median-range*3) , upper_count=(median+range*3)
    | where count >=lower_count AND count<=upper_count
    | eval earliest = earliest - 1 , latest = latest + 1
    | dedup source , host sortby - latest
    | return 10 earliest latest host source ]
`comment("Get lifecyclestate, country and ipaddress from master asset lists")`
| lookup dc_app_dcmaster_master_asset_ul.csv hostname OUTPUT master_ipaddress as ul_master_ipaddress master_lifeCycleState as ul_master_lifeCycleState master_country as ul_master_country
| lookup dc_app_dcmaster_master_asset_win.csv hostname OUTPUT master_ipaddress as win_master_ipaddress master_lifeCycleState as win_master_lifeCycleState master_country as win_master_country
| lookup dc_app_unicorn_lookup.csv hostName as hostname OUTPUT locationCode
| eval ipaddress=coalesce(ul_master_ipaddress, win_master_ipaddress) , lifeCycleState=coalesce(ul_master_lifeCycleState,win_master_lifeCycleState) , country=coalesce(ul_master_country,win_master_country)
| fillnull value="UNKNOWN" ipaddress lifeCycleState country locationCode
`comment("Join on serial number data from HSMC CSV import")`
| rename "Serial Number" as HSMSerialNumber
| makemv HSMSerialNumber
`comment("Count the number of serial numbers per host")`
| eval HSM_count=mvcount(HSMSerialNumber)
| fillnull value=0 HSM_count
| mvexpand HSMSerialNumber
| join type=left HSMSerialNumber [ search `get_latest_source_in_range(test , /opt/splunk/etc/apps/dc_ta_nfast_inputs/git/master/HSMSplunkFeed/SPLUNK_HSM_Import_Listing.csv)` ]
| rename Country as HSM_country "HSM *" as HSM_* "HSM * *" as HSM_*_
| foreach HSM* [ fillnull value="No_HSM_data" <<FIELD>> ]
`comment("Add EIM service data - creating")`
| lookup app_eim_lookup_eim_basic_extract.csv hostname OUTPUT PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7 IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL
| fillnull value="UNKNOWN" IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7
| eval servicedata=mvzip(IT_SERVICE ,SERVICE_OWNER ,"##")
| eval servicedata=mvzip(servicedata,SERVICE_OWNER_EMAIL ,"##")
| eval servicedata=mvzip(servicedata,PLADA_CRITICALITY ,"##")
| eval servicedata=mvzip(servicedata,SERVICE_IT_ORG6 ,"##")
| eval servicedata=mvzip(servicedata,SERVICE_IT_ORG7 ,"##")
| eval servicedata=mvdedup(servicedata)
| fields - PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7 IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL
| eval servicedata=mvdedup(servicedata)
`comment("Now expand each servicedata into multiple lines where there are multiple values")`
| mvexpand servicedata
`comment("Recreate the service values")`
| rex field=servicedata "(?<IT_SERVICE>.*?)##(?<SERVICE_OWNER>.*?)##(?<SERVICE_OWNER_EMAIL>.*?)##(?<PLADA_CRITICALITY>.*?)##(?<SERVICE_IT_ORG6>.*?)##(?<SERVICE_IT_ORG7>.*)"
`comment("Run addinfo command to capture info_search_time field")`
| addinfo
`comment("Write out to index")`
| eval _time=now()
| table hostname Combined Nfast* ipaddress lifeCycleState HSM_count country locationCode IT_SERVICE SERVICE_OWNER SERVICE_OWNER_EMAIL PLADA_CRITICALITY SERVICE_IT_ORG6 SERVICE_IT_ORG7 HSMSerialNumber HSM*
| collect index=test source=stash

Thanks
What does 'litsearch' mean? What is 'litsearch expansion'? Could anyone explain what exactly the word 'litsearch' means in Splunk's context?
Hello there. I finally managed to set up WMI-based event log monitoring and it seems to work. The problem is that it's going to give me way too many events. I want to pull just a subset of the events from the Application log. With an ordinary WinEventLog input I could set up a whitelist/blacklist to limit the processed events at the forwarder level. The same doesn't seem to work with the WMI:whatever type of input. Is there indeed no way to limit the ingested events? Do I have to do it further down the stream by selective routing on a HF?
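One option that may help: wmi.conf also accepts a raw WQL query instead of event_log_file, so the filtering can be pushed into the query itself. A sketch, with the stanza name, server, and event codes as placeholder assumptions:

[WMI:AppEvents]
server = myhost
interval = 10
wql = SELECT * FROM Win32_NTLogEvent WHERE LogFile='Application' AND (EventCode=1000 OR EventCode=1001)
current_only = 0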
Hello Community, I have some trouble with the option "action.email" in a saved search. I want to create a report with the Splunk API and set the parameter "action.email" to "true" / "1" (because the default value is false). I tried it like the query below, but it's not working as expected. After executing, the value is always the default (false); Splunk doesn't change it to "true" or "1":

curl -k -u <splunk_username>:<splunk_password> https://<splunk_ip>:<splunk_mgmt-port>/servicesNS/<user>/<app>/saved/searches -d name=Test_Report -d action.email=1 --data-urlencode -d search="<splunk_query>"

In a second step I tried to edit the report directly in Splunk Web -> Search, Reports, and Alerts -> testReport -> Advanced Edit. But every time I save the report with the new parameter "action.email = 1", it looks like Splunk resets this value back to "false". From what I can see, Splunk only keeps the value "true" when I edit the savedsearches.conf file directly. Can you please help me with this problem? Thanks
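A sketch of the same call with two changes worth trying: --data-urlencode must take the search= argument directly (the stray -d makes curl encode the literal string "-d" instead), and an enabled email action generally needs its required parameters (such as a recipient), otherwise Splunk may drop the setting on save. The recipient address is a placeholder:

curl -k -u <splunk_username>:<splunk_password> https://<splunk_ip>:<splunk_mgmt-port>/servicesNS/<user>/<app>/saved/searches -d name=Test_Report -d actions=email -d action.email=1 -d action.email.to=someone@example.com --data-urlencode search="<splunk_query>"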
Hello, I am looking to monitor and review F5 BIG-IP logs, but I see both the F5 Security and F5 Networks apps are no longer supported by Splunk. Is there a newer app by Splunk that other people are using to monitor F5 logs, or are both of these apps usable in Splunk 8.2? We currently receive the logs via syslog for now.
Dear Splunkers,

I am trying to forward a specific sourcetype (let's call it "mySourcetype") to a third-party system with a heavy forwarder. The heavy forwarder receives the data on a UDP port with a Splunk input:

[udp://xxxxx:514]
connection_host = dns
sourcetype = mySourcetype
_SYSLOG_ROUTING = mySyslogRouting

We added the _SYSLOG_ROUTING line to also send it to the third-party system, as described in outputs.conf:

[syslog:mySyslogRouting]
type = udp
priority = NO_PRI
syslogSourceType = sourcetype::mySourcetype
server = yyyyyy:3000

Unfortunately, the destination "yyyyyy:3000" receives all data from the HF, not just this sourcetype — all the other sourcetypes and internal Splunk logs as well.

We also tried props.conf and transforms.conf to route it:

props.conf
[mySourcetype]
TRANSFORMS-changehost = routeSourcetype

transforms.conf
[routeSourcetype]
DEST_KEY = _SYSLOG_ROUTING
REGEX = (.)
FORMAT = mySyslogRouting

The result is the same: the destination receives too much data.

Any idea how to limit the syslog forwarding to just one sourcetype?

Thanks

EDIT: I found an old post that seems to be about the same issue but with no answers... https://community.splunk.com/t5/Getting-Data-In/Route-syslog-from-heavy-forwarder-to-3rd-party-but-only/m-p/535722
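Since the destination receives everything, it is worth checking whether another stanza (for example a defaultGroup under a global [syslog] stanza, possibly contributed by another app) routes all traffic there regardless of _SYSLOG_ROUTING. A sketch of how to inspect the effective outputs configuration and see which file each setting comes from:

$SPLUNK_HOME/bin/splunk btool outputs list syslog --debug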
I am trying to figure out how to reroute a specific host to a different index. For example, search results for host=1234test currently show up in index=best_life. How would I change things so that host=1234test lands in a different index that already exists (e.g. index=other_index)?
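A common approach is an index-time transform applied on the first full Splunk instance that parses the data (heavy forwarder or indexer). A minimal sketch, assuming the host value is exactly 1234test; note this only affects newly indexed events, not data already in best_life:

props.conf
[host::1234test]
TRANSFORMS-reroute = route_1234test

transforms.conf
[route_1234test]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = other_index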
Hi, I have an issue where I can see something is consuming license ingestion for a specific sourcetype. Unfortunately, the host is blank in index=_internal source="*license_usage.log*"; however, I do know the index. I cannot find which host is sending data that is indexed today but carries event dates in the past; hosts sending events dated in the past have been the cause of this before. The only time I solved it previously was by enabling DEBUG on an HEC input, and I don't want to keep doing that. I want to do something like this:

| tstats count where index=os AND sourcetype=ps groupby host | where <actual ingest time was yesterday but event time was days in the past, list event time and ingest time>

Is this possible? Thank you! Chris
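The lag between _indextime and _time can be computed directly at search time. A sketch over raw events (the _index_earliest/_index_latest bounds also work with tstats); the one-day threshold is an arbitrary assumption:

index=os sourcetype=ps _index_earliest=-24h@h _index_latest=now
| eval lag_days=round((_indextime-_time)/86400,1)
| where lag_days >= 1
| stats count min(_time) as oldest_event_time max(lag_days) as max_lag_days by host
| convert ctime(oldest_event_time)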
We were investigating some indexes that have a low raw-to-index ratio and came across _audit, whose raw-to-index ratio is 0.81:1. At first glance, _audit seemed a good candidate for learning how to find out whether an index has high cardinality and what we can do about it (like tsidx reduction along with bloom filters). First, it is not frequently searched, so it is safe for testing tsidx reduction or bloom filter changes; moreover, it is an index that everyone has in their Splunk installation, so we could benefit from common knowledge.

We came across the following numbers about cardinality by taking a sample of the data and using tstats and walklex:

earliest           latest             events    keywords in lexicon   min events per keyword   keywords with min events   % of keywords with min events
18/09/2021 00:00   22/09/2021 24:00   5721945   6764698               10                       176669                     2.61%

Just by looking at the table above, it is hard to tell whether this is an index whose data changes a lot. What is considered a high-cardinality index? It would be awesome to have some reference numbers, but I was not able to find them anywhere.

Q1: Do we have any reference numbers that, once compared against, would unequivocally tell us whether or not a bucket is a high-cardinality one? And should we expect the raw-to-index ratio to drop below 1:1?

Then we went through and inspected the size of the tsidx files against the size of the buckets:

indexer vm   bucketsize_bytes   tsidxsize_bytes   bucketsize_megabytes   tsidxsize_megabytes
A            1317675655         933241652         1257                   890
B            1321231309         892793498         1260                   851
C            1464189620         1003103122        1396                   957
D            1538519922         1045951037        1467                   997
E            901792050          609003289         860                    581
F            1417458990         929185810         1352                   886
G            1591446741         1167724482        1518                   1114
H            1497574135         1009380670        1428                   963
totals       11049888422        7590383560

The results show that the tsidx files take around ~69% of the overall disk space needed to store the _audit index on the indexers.

Q2: Once again, is this a sign of high cardinality?

Q3: Lastly, is there any SEGMENTATION config that is commonly applied to the _audit index?
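For reference, a sketch of the kind of walklex sampling behind numbers like the ones above (walklex is available from Splunk 8.0 onward; treat the term/count field names as assumptions to verify against your version):

| walklex index=_audit type=term
| stats sum(count) as total_postings dc(term) as distinct_keywords
| eval avg_postings_per_keyword=round(total_postings/distinct_keywords,2)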
I am using the Fundamentals 1 dataset to learn about lookups. I have created a CSV file with a column for productId (to match the productId field in the access_combined_wcookie sourcetype), a column for productCode (to match the Code field in the vendor_sales sourcetype), and a column for productPrice (that's the output field). I've created a lookup definition called productDetails, which looks correct when I check it using inputlookup.

What I want to do is create a timechart with two series: one with online sales and the other with vendor sales. This is what I've tried:

index=main (sourcetype="vendor_sales" OR sourcetype="access_combined_wcookie")
| lookup productDetails productId AS productId
| lookup productDetails productCode AS Code
| timechart count(productPrice) by sourcetype

The first lookup matches the productId field in access_combined_wcookie, and the second lookup matches the Code field in vendor_sales. The result I'm getting is a set of counts for access_combined_wcookie and all 0's for vendor_sales. If I switch the order of the lookups, I get counts for vendor_sales but 0's for access_combined_wcookie. Basically, whichever lookup is second is ignored. I found a lot of forum posts about joining multiple CSV lookups, but I couldn't apply them to my problem. Any help would be greatly appreciated. Thanks.
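One way to keep the two lookups from interfering with each other's output field is to write each match into its own field and then combine them. A sketch using the same names as above (price_online and price_vendor are introduced for illustration):

index=main (sourcetype="vendor_sales" OR sourcetype="access_combined_wcookie")
| lookup productDetails productId OUTPUT productPrice AS price_online
| lookup productDetails productCode AS Code OUTPUT productPrice AS price_vendor
| eval productPrice=coalesce(price_online, price_vendor)
| timechart count(productPrice) by sourcetype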
Currently our DB Connect version is 3.1.3 and we are upgrading to Splunk version 8. We just wanted to check whether we need to upgrade DB Connect as well.
Hello fellow Splunkers,

Is there a way to get a list of input apps on a UF host that were not distributed by the DS it is connected to (i.e. locally created inputs)? If yes, how? And can we disable them using Splunk or the DS, or with a master app that disables all non-DS-deployed apps? Please suggest how!

Regards, Samuel
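A sketch of one way to diff the two lists. On the deployment server, the clients endpoint reports what the DS believes it has deployed per client (the exact field layout varies by version, so treat it as an assumption to verify); everything actually installed on the UF can then be listed, and locally created apps disabled, from the CLI:

| rest /services/deployment/server/clients splunk_server=local
| table hostname applications*

$SPLUNK_HOME/bin/splunk display app
$SPLUNK_HOME/bin/splunk disable app <app_name>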