All Posts


Hi, this is quite an often-asked question. You can find answers from the community via Google, but in short: you cannot find that information in Splunk's audit logs. Here are a couple of links that explain the reason:
https://community.splunk.com/t5/Splunk-Search/Data-used-by-searches/m-p/687785#M234581
https://community.splunk.com/t5/Splunk-Search/How-to-find-which-indexes-are-used/m-p/674510
r. Ismo
Hello, I have logs in the following paths:
/abc-logs/hosta/mods/stdout.240513-070854
/abc-logs/hostb/mods/stdout.240513-070854
/abc-logs/hostc/mods/stdout.240513-070854
/abc-logs/hostd.a.clusters.abc.com/mods/stdout.240206-084344
/abc-logs/hoste/mods/stdout.240513-070854
When I try to monitor this path to get the logs into Splunk, I only get two files. When I checked the internal logs, I see the following errors:
05-16-2024 10:07:25.609 -0700 ERROR TailReader [1846912 tailreader0] - File will not be read, is too small to match seekptr checksum (file=/abc-logs/hosta/mods/stdout.240513-070854).  Last time we saw this initcrc, filename was different.  You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source.  Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.
A possible timestamp match (Fri Feb 13 15:31:30 2009) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: FileClassifier C:\abc-logs\hostd.a.clusters.abc.com\mods\stdout.240206-084344
I am using the props below:
[mods]
BREAK_ONLY_BEFORE_DATE = null
CHARSET = AUTO
CHECK_METHOD = entire_md5
DATETIME_CONFIG = CURRENT
LINE_BREAKER = ([\r\n]+)
MAX_DAYS_AGO = 2000
MAX_DAYS_HENCE = 365
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
crcSalt = <SOURCE>
initCrcLength = 1048576
I tried changing CHECK_METHOD to the other options, but it did not work. Thanks in advance.
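One thing worth noting: crcSalt and initCrcLength are inputs.conf settings rather than props.conf settings, so a minimal sketch of where they would usually live might look like this (the monitor stanza and wildcard path are assumptions based on the paths above):

# inputs.conf (sketch) -- crcSalt/initCrcLength take effect here, not in props.conf
[monitor:///abc-logs/*/mods/stdout.*]
sourcetype = mods
# salt the CRC with the full source path so look-alike file heads are tracked separately
crcSalt = <SOURCE>
# read up to 1 MB of the file head when computing the CRC (default is 256 bytes)
initCrcLength = 1048576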
Hi, that's almost mission impossible with a standard setup, as you can run queries without naming any indexes in them. You could also use eventtypes etc. to hide the real index names. If you start to index all your search logs from the SH side and look at the litesearch part, that could give you a more accurate index list. r. Ismo
It's still the same situation for supported languages. "Supported" means e.g. splunklib etc. integration support. Of course, you can use almost any language for things like scripted inputs.
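For illustration of that last point, a scripted-input stanza along these lines would run a Perl script; the app name, script path, and interval are placeholders (the script just needs to be executable on the host):

# inputs.conf (sketch) -- a scripted input can invoke any executable, Perl included
[script://$SPLUNK_HOME/etc/apps/my_app/bin/collect.pl]
interval = 300
sourcetype = my:perl:input
disabled = 0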
But what about Splunk Cloud? Does it also support Perl?
Hello @CSReviews, you can export as a CSV file; it's then easy to import. See "Upload with Splunk Web" in https://hurricanelabs.com/splunk-tutorials/ingesting-a-csv-file-into-splunk/
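If the export side is done from a search, one minimal sketch (the source search and file name are placeholders) writes a CSV under $SPLUNK_HOME/var/run/splunk/csv:

index=main sourcetype=mydata earliest=-24h
| table _time host message
| outputcsv mydata_export.csv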
Hello all, just wondering if anyone else has removed the index-time extractions for the Cisco DNA Center Add-on (6668). I don't like that it needlessly indexes fields and then resolves the duplicate-field issue by disabling KV_MODE. I was thinking of adding something like this to the app's props.conf, but I am still looking for better options:
INDEXED_EXTRACTIONS =
KV_MODE = JSON
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n\[\]\,]+\s*)([\{])
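To sanity-check whether fields still come through (and only once) after a change like this, one rough option is fieldsummary over a small sample; the index and sourcetype below are placeholders:

index=my_dnac_index sourcetype=cisco:dnac* earliest=-1h
| fieldsummary
| table field count distinct_count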
Hello @whitecat001, try this:
index=_audit action="search" search="*" NOT user="splunk-system-user" savedsearch_name="" NOT search="\'|history*" NOT search="\'typeahead*"
| rex "index=(?P<myIndex>\w+)\s+\w+="
| stats count by myIndex
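Note that the rex above only captures the first index= per event; if a search names several indexes, a max_match variant may work better (a sketch, not tested; quoted index names like index="foo" would need an extra pattern):

index=_audit action=search info=completed NOT user="splunk-system-user"
| rex max_match=0 "index\s*=\s*(?<myIndex>[\w*-]+)"
| mvexpand myIndex
| stats count by myIndex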
I was able to get this to give me one line for the non-critical pods' total missing count over time:
index=abc sourcetype=kubectl importance=non-critical
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup
| append [inputlookup pod_list where importance = non-critical | rename pod_name_lookup as pod_name_all]
| eventstats values(pod_name_all) as pod_name_all
| where sourcetype == "kubectl"
| timechart span=1h@h values(pod_name_lookup) as pod_name_lookup values(pod_name_all) as pod_name_all
| eval missing = mvappend(missing, mvmap(pod_name_all, if(pod_name_all IN (pod_name_lookup), null(), pod_name_all)))
| timechart span=1h@h count(missing) as non-critical-pods-missing
I am working towards the goal of getting another line for critical.
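For the critical line, one rough, untested sketch is to keep importance as a split-by field instead of filtering it up front (field and lookup names are taken from the search above; it reuses the same IN trick, and hours with no matching events at all will not produce a row):

index=abc sourcetype=kubectl
| lookup pod_list pod_name_lookup as pod_name OUTPUT pod_name_lookup importance
| bin _time span=1h@h
| stats values(pod_name_lookup) as seen by _time importance
| join type=left importance [| inputlookup pod_list | stats values(pod_name_lookup) as expected by importance]
| eval missing=mvmap(expected, if(expected IN (seen), null(), expected))
| eval missing_count=coalesce(mvcount(missing), 0)
| timechart span=1h@h sum(missing_count) by importance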
So I have the following setup and everything is good, but I want to do a kind of subsearch.
Sample event: User-ABCDEF assigned Role-'READ' on Project-1234 to GHIJKL
Current SPL:
index="xxxx" "role-'WRITE'" OR "role-'READ'"
| rex "User-(?<userid>[^,]*)"
| rex "(?<resource>\w+)$"
| eval userid=upper(userid)
| stats c as Count latest(_time) as _time by userid
I get output like this:
ABCDEF ASSIGNED ROLE-'READ' ON PROJECT-1234 TO GHIJKL
What I want is to search on just the GHIJKL after it is extracted; or should I just put it at the front so it only fetches that?
Hi, KVStore and CSV lookups are considered internal, correct? Based on your experience, which one is the fastest? KVStore? Thanks.
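One rough way to compare is to run the same trivial search against each backend and compare runDuration in the Job Inspector; the lookup names below are placeholders. Run each of these separately:

| inputlookup my_csv_lookup | stats count
| inputlookup my_kvstore_collection_lookup | stats count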
Maybe this is what you need. Note: as far as I know there are no fields that show the index used by a search, so you have to extract that from the SPL code, and index= can be all over the place in the code and also in macros, so it's tricky, but maybe this will work for you. This shows the count of searches by index_used:
| rest splunk_server=local /services/search/jobs
| fields author title, updated, search, runDuration, provenance, latestTime, owner eai:acl.app, diskUsage
| rename author AS user eai:acl.app AS app title AS search_code
| rex field=search_code "(?<index_used>index\s*=\s*[^ ]+|index\s+IN|search\s*=\s*index=|search\s*=\s*inputlookup\s+in|index\s*=_\*)"
| stats count(search_code) AS volume_of_searches_ran BY index_used
| sort - volume_of_searches_ran
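One caveat: /services/search/jobs only shows jobs still in the dispatch directory, so for volume over a longer window the same rex idea can be pointed at the audit index instead. A sketch (untested; same caveat that index= inside macros won't be caught):

index=_audit action=search info=completed earliest=-7d
| rex field=search max_match=0 "index\s*=\s*(?<index_used>[\w*-]+)"
| mvexpand index_used
| stats count AS searches sum(total_run_time) AS total_runtime_sec BY index_used
| sort - searches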
Hi @Pablo.Jaña, Does this help: https://docs.appdynamics.com/appd/onprem/23.x/23.11/en/extend-appdynamics/appdynamics-apis/create-central-identity-user-api
Thanks for the response. Can I get a query that shows how many searches are run per index?
The TA for Genesys cloud logs ingestion can be installed from: https://github.com/SplunkBAUG/CCA/blob/main/TA_genesys_cloud-1.0.14.spl
And the app for visualization is "Genesys Cloud Operational Analytics App".
Assuming you can still be dependent on column names, you should go back to pre-transpose and add the following:
| eval row=mvrange(0,3)
| mvexpand row
| eval column=mvindex(split("search_name,ID,Time",","),row)
| eval new_row=case(row=0,search_name,row=1,ID,row=2,Time)
| table column new_row
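To see what that does on a toy row (values purely illustrative):

| makeresults
| eval search_name="alert_a", ID="42", Time="2024-05-16 10:00"
| eval row=mvrange(0,3)
| mvexpand row
| eval column=mvindex(split("search_name,ID,Time",","),row)
| eval new_row=case(row=0,search_name,row=1,ID,row=2,Time)
| table column new_row

This yields three rows: search_name/alert_a, ID/42, and Time/2024-05-16 10:00, i.e. the transposed shape without using transpose.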
For running jobs, try this from the GUI; see the link below for the curl-based CLI command.
| rest splunk_server=local /services/search/jobs
| fields author title, updated, search, runDuration, provenance, latestTime, owner eai:acl.app, diskUsage
| rename author AS user eai:acl.app AS app title AS search_code
| eval diskUsage_MB = round(diskUsage/1024/1024,2)
| table user search_code, updated, search, runDuration, provenance, latestTime, owner, app diskUsage_MB
Here's the REST API reference and others: https://docs.splunk.com/Documentation/Splunk/9.2.1/RESTREF/RESTsearch#search.2Fjobs
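For the curl-based equivalent mentioned above, something along these lines should work (host, port, and credentials are placeholders):

curl -k -u admin:changeme "https://localhost:8089/services/search/jobs?output_mode=json"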
Please, what is the REST endpoint for searches that users are running?
I want a query that shows the total volume of index usage by Splunk searches, i.e., information about how much each index is used by searches.
What indexes are used the most in Splunk searches?